From b15eb1fc6c4843d894106adf3e445687a76fd98a Mon Sep 17 00:00:00 2001 From: Benjamin-eecs Date: Wed, 7 Dec 2022 02:35:03 +0800 Subject: [PATCH 01/24] chore: update README --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 3dc1155f..8d08504f 100644 --- a/README.md +++ b/README.md @@ -146,7 +146,7 @@ Check out section [Explicit Gradient (EG)](#explicit-gradient-eg) functional API We design a bilevel-optimization updating scheme, which can be easily extended to realize various differentiable optimization processes.
- +
As shown above, the scheme contains an outer level whose parameters $\phi$ can be learned end-to-end through the inner-level parameter solution $\theta^{\prime}(\phi)$ by using the best-response derivatives $\partial \theta^{\prime}(\phi) / \partial \phi$. @@ -436,7 +436,7 @@ If you find TorchOpt useful, please cite it in your publications. ## The Team -TorchOpt is a work by [Jie Ren](https://github.com/JieRen98), [Xidong Feng](https://github.com/waterhorse1), [Bo Liu](https://github.com/Benjamin-eecs), [Xuehai Pan](https://github.com/XuehaiPan), [Luo Mai](https://luomai.github.io), and [Yaodong Yang](https://www.yangyaodong.com). +TorchOpt is a work by [Jie Ren](https://github.com/JieRen98), [Xidong Feng](https://github.com/waterhorse1), [Bo Liu](https://benjamin-eecs.github.io/), [Xuehai Pan](https://github.com/XuehaiPan), [Luo Mai](https://luomai.github.io), and [Yaodong Yang](https://www.yangyaodong.com). ## License From 9f65b5e8af626478ec4e48d47d2c21ba78099074 Mon Sep 17 00:00:00 2001 From: Xuehai Pan Date: Mon, 12 Dec 2022 16:27:34 +0800 Subject: [PATCH 02/24] chore(.github): update issue templates --- .github/ISSUE_TEMPLATE/bug-report.yml | 35 ++++++++++++---------- .github/ISSUE_TEMPLATE/feature-request.yml | 10 +++---- .github/ISSUE_TEMPLATE/questions.yml | 3 +- 3 files changed, 24 insertions(+), 24 deletions(-) diff --git a/.github/ISSUE_TEMPLATE/bug-report.yml b/.github/ISSUE_TEMPLATE/bug-report.yml index 4b90bb84..6d381b28 100644 --- a/.github/ISSUE_TEMPLATE/bug-report.yml +++ b/.github/ISSUE_TEMPLATE/bug-report.yml @@ -25,10 +25,9 @@ body: - type: input id: version attributes: - label: | - What version of TorchOpt are you using? - value: | - python3 -m pip show torchopt + label: What version of TorchOpt are you using? + description: Run command `python3 -c 'print(__import__("torchopt").__version__)'` in your shell and paste the output here. + placeholder: E.g., 0.6.0 validations: required: true - type: textarea id: system-info attributes: label: System information - value: | + description: | Describe the characteristics of your environment: - Describe how the library was installed (pip, conda, source, ...) @@ -55,7 +54,7 @@ id: description attributes: label: Problem description - placeholder: | + description: >- Provide a short description, stating the expected behavior and what actually happens. Include relevant information like what version of TorchOpt you are using, what system you are on, and any useful commands / output. @@ -66,18 +65,18 @@ id: code attributes: label: Reproducible example code + description: >- + The code should be minimal, have minimal external dependencies, and isolate the functions + that cause breakage. Submit complete, self-contained snippets that can be run as-is to diagnose + the issue. value: | - - The Python snippets: ```python ``` - Run the snippets with the following commands: + Command lines: ```bash @@ -88,6 +87,12 @@ ```text ``` + + Steps to reproduce: + + 1. + 2. + 3. validations: required: true - type: textarea id: traceback attributes: label: Traceback + description: Put the Python traceback information here. placeholder: | - Put the Python traceback information here. - Traceback (most recent call last): File ... render: pytb - type: textarea id: expected attributes: label: Expected behavior - placeholder: | - Provide a clear and concise description of what you expected to happen. + description: Provide a clear and concise description of what you expected to happen. 
- type: textarea id: additional-context attributes: label: Additional context - placeholder: | + description: >- Add any other context about the problem here. Screenshots may also be helpful. If you know or suspect the reason for this bug, paste the code lines and suggest modifications. diff --git a/.github/ISSUE_TEMPLATE/feature-request.yml index 959ec909..ee76e770 100644 --- a/.github/ISSUE_TEMPLATE/feature-request.yml +++ b/.github/ISSUE_TEMPLATE/feature-request.yml @@ -19,6 +19,7 @@ body: id: motivation attributes: label: Motivation + description: Outline the motivation for the proposal. value: | +For zero-order differentiation, users need to define the forward pass calculation and the noise sampling procedure. TorchOpt provides a decorator that wraps the forward function to enable zero-order differentiation. + ```python # Customize the noise sampling function in ES -def sample(sample_shape): +def distribution(sample_shape): + # Generate a batch of noise samples + # NOTE: The distribution should be spherically symmetric with a constant variance of 1. ... - return sample_noise + return noise_batch + +# Distribution can also be an instance of `torch.distributions.Distribution`, e.g., `torch.distributions.Normal(...)` +distribution = torch.distributions.Normal(loc=0, scale=1) # Specify method and hyper-parameters of ES -@torchopt.diff.zero_order(sample, method) +@torchopt.diff.zero_order(distribution, method) def forward(params, batch, labels): - # forward process - return output + # Forward process + ... + return objective # the returned tensor should be a scalar tensor +``` + +#### OOP API + +TorchOpt also offers an OOP API: users inherit from the class `torchopt.nn.ZeroOrderGradientModule` to construct the network as an `nn.Module`, following the classical PyTorch style. +Users need to define the zero-order gradient forward procedure `forward()` and a noise sampling function `sample()`. + +```python +# Inherited from the class ZeroOrderGradientModule +# Optionally specify the `method` and/or `num_samples` and/or `sigma` used for sampling +class Net(ZeroOrderGradientModule, method=method, num_samples=num_samples, sigma=sigma): + def __init__(self, ...): + ... + + def forward(self, batch): + # Forward process + ... + return objective # the returned tensor should be a scalar tensor + + def sample(self, sample_shape=torch.Size()): + # Generate a batch of noise samples + # NOTE: The distribution should be spherically symmetric with a constant variance of 1. + ... + return noise_batch + +# Get model and data +net = Net(...) +data = ... + +# Forward pass +loss = net(data) +# Backward pass using zero-order differentiation +grads = torch.autograd.grad(loss, net.parameters()) ``` -------------------------------------------------------------------------------- diff --git a/tests/test_zero_order.py index 32d3ae3b..a2e2c1f7 100644 --- a/tests/test_zero_order.py +++ b/tests/test_zero_order.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
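As a quick sanity check of the functional API sketched above, the zero-order gradient estimate of a simple quadratic can be compared against its analytic gradient `2 * w`. This is a minimal, hypothetical example (not part of this patch series); it assumes only the `torchopt.diff.zero_order` signature shown in the README section, and the names `objective` and `w` are illustrative.

```python
import torch
import torchopt

# A standard normal distribution satisfies the symmetry / unit-variance requirement.
distribution = torch.distributions.Normal(loc=0, scale=1)


@torchopt.diff.zero_order(
    distribution=distribution, method='antithetic', argnums=0, num_samples=1000, sigma=0.01
)
def objective(params):
    # Hypothetical quadratic objective: the analytic gradient of (w ** 2).sum() is 2 * w
    (w,) = params
    return (w**2).sum()


w = torch.randn(8, requires_grad=True)
loss = objective((w,))
(grad_estimate,) = torch.autograd.grad(loss, w)  # zero-order estimate, no true backprop
print(torch.norm(grad_estimate - 2 * w))  # should be small relative to ||2 * w||, up to sampling noise
```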
@@ -16,6 +16,7 @@ import functorch import torch import torch.nn as nn +import torch.nn.functional as F import torch.types import helpers @@ -30,20 +31,17 @@ class FcNet(nn.Module): def __init__(self, dim, out): super().__init__() self.fc = nn.Linear(in_features=dim, out_features=out, bias=True) - nn.init.ones_(self.fc.weight) - nn.init.zeros_(self.fc.bias) def forward(self, x): return self.fc(x) @helpers.parametrize( - dtype=[torch.float64, torch.float32], lr=[1e-2, 1e-3], method=['naive', 'forward', 'antithetic'], sigma=[0.01, 0.1, 1], ) -def test_zero_order(dtype: torch.dtype, lr: float, method: str, sigma: float) -> None: +def test_zero_order(lr: float, method: str, sigma: float) -> None: helpers.seed_everything(42) input_size = 32 output_size = 1 @@ -59,21 +57,63 @@ def test_zero_order(dtype: torch.dtype, lr: float, method: str, sigma: float) -> y = torch.randn(input_size) * coef distribution = torch.distributions.Normal(loc=0, scale=1) - @torchopt.diff.zero_order.zero_order( + @torchopt.diff.zero_order( distribution=distribution, method=method, argnums=0, sigma=sigma, num_samples=num_samples ) def forward_process(params, fn, x, y): y_pred = fn(params, x) - loss = torch.mean((y - y_pred) ** 2) + loss = F.mse_loss(y_pred, y) return loss optimizer = torchopt.adam(lr=lr) - opt_state = optimizer.init(params) + opt_state = optimizer.init(params) # init optimizer for i in range(num_iterations): - opt_state = optimizer.init(params) # init optimizer loss = forward_process(params, fmodel, x, y) # compute loss grads = torch.autograd.grad(loss, params) # compute gradients updates, opt_state = optimizer.update(grads, opt_state) # get updates params = torchopt.apply_updates(params, updates) # update network parameters + + +@helpers.parametrize( + lr=[1e-2, 1e-3], + method=['naive', 'forward', 'antithetic'], + sigma=[0.01, 0.1, 1], +) +def test_zero_order_module(lr: float, method: str, sigma: float) -> None: + helpers.seed_everything(42) + input_size = 32 + output_size = 1 + batch_size = BATCH_SIZE + coef = 0.1 + num_iterations = NUM_UPDATES + num_samples = 500 + + class FcNetWithLoss( + torchopt.nn.ZeroOrderGradientModule, method=method, sigma=sigma, num_samples=num_samples + ): + def __init__(self, dim, out): + super().__init__() + self.net = FcNet(dim, out) + self.loss = nn.MSELoss() + self.distribution = torch.distributions.Normal(loc=0, scale=1) + + def forward(self, x, y): + return self.loss(self.net(x), y) + + def sample(self, sample_shape=torch.Size()): + return self.distribution.sample(sample_shape) + + x = torch.randn(batch_size, input_size) * coef + y = torch.randn(input_size) * coef + model_with_loss = FcNetWithLoss(input_size, output_size) + + optimizer = torchopt.Adam(model_with_loss.parameters(), lr=lr) + + for i in range(num_iterations): + loss = model_with_loss(x, y) # compute loss + + optimizer.zero_grad() + loss.backward() # compute gradients + optimizer.step() # update network parameters diff --git a/torchopt/diff/__init__.py b/torchopt/diff/__init__.py index 45674fcf..984841ed 100644 --- a/torchopt/diff/__init__.py +++ b/torchopt/diff/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -16,3 +16,4 @@ from torchopt.diff import implicit, zero_order from torchopt.diff.implicit import ImplicitMetaGradientModule +from torchopt.diff.zero_order import ZeroOrderGradientModule diff --git a/torchopt/diff/implicit/nn/__init__.py b/torchopt/diff/implicit/nn/__init__.py index 95a2ea85..5bc7aa8d 100644 --- a/torchopt/diff/implicit/nn/__init__.py +++ b/torchopt/diff/implicit/nn/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -14,9 +14,10 @@ # ============================================================================== """The base class for differentiable implicit meta-gradient models.""" -# Preload to resolve circular references -import torchopt.nn.module # pylint: disable=unused-import +import torchopt.nn.module # preload to resolve circular references from torchopt.diff.implicit.nn.module import ImplicitMetaGradientModule __all__ = ['ImplicitMetaGradientModule'] + +del torchopt diff --git a/torchopt/diff/zero_order/__init__.py b/torchopt/diff/zero_order/__init__.py index a76dcb9a..5b85d03d 100644 --- a/torchopt/diff/zero_order/__init__.py +++ b/torchopt/diff/zero_order/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -17,10 +17,12 @@ import sys as _sys from types import ModuleType as _ModuleType +from torchopt.diff.zero_order import nn from torchopt.diff.zero_order.decorator import zero_order +from torchopt.diff.zero_order.nn import ZeroOrderGradientModule -__all__ = ['zero_order'] +__all__ = ['zero_order', 'ZeroOrderGradientModule'] class _CallableModule(_ModuleType): # pylint: disable=too-few-public-methods diff --git a/torchopt/diff/zero_order/nn/__init__.py b/torchopt/diff/zero_order/nn/__init__.py new file mode 100644 index 00000000..1bf64efe --- /dev/null +++ b/torchopt/diff/zero_order/nn/__init__.py @@ -0,0 +1,23 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================== +"""The base class for zero-order gradient models.""" + +import torchopt.nn.module # preload to resolve circular references +from torchopt.diff.zero_order.nn.module import ZeroOrderGradientModule + + +__all__ = ['ZeroOrderGradientModule'] + +del torchopt diff --git a/torchopt/diff/zero_order/nn/module.py b/torchopt/diff/zero_order/nn/module.py new file mode 100644 index 00000000..c71048ad --- /dev/null +++ b/torchopt/diff/zero_order/nn/module.py @@ -0,0 +1,116 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================== +"""The base class for zero-order gradient models.""" + +# pylint: disable=redefined-builtin + +import abc +import functools +from typing import Dict, Optional, Sequence, Tuple, Type, Union + +import torch +import torch.nn as nn + +from torchopt import pytree +from torchopt.diff.implicit.nn.module import container_context +from torchopt.diff.zero_order.decorator import Method, Samplable, zero_order +from torchopt.typing import Numeric, TupleOfTensors +from torchopt.utils import extract_module_containers + + +__all__ = ['ZeroOrderGradientModule'] + + +def enable_zero_order_gradients( + cls: Type['ZeroOrderGradientModule'], + method: Method = 'naive', + num_samples: int = 1, + sigma: Numeric = 1.0, +) -> Type['ZeroOrderGradientModule']: + """Enable zero-order gradient estimation for the :func:`forward` method.""" + cls_forward = cls.forward + if getattr(cls_forward, '__zero_order_gradients_enabled__', False): + raise TypeError( + 'Zero-order gradient estimation is already enabled for the `forward` method.' + ) + + @functools.wraps(cls_forward) + def wrapped( # pylint: disable=too-many-locals + self: 'ZeroOrderGradientModule', *input, **kwargs + ) -> torch.Tensor: + """Do the forward pass calculation.""" + params_containers = extract_module_containers(self, with_buffers=False)[0] + + flat_params: TupleOfTensors + flat_params, params_containers_treespec = pytree.tree_flatten_as_tuple( + params_containers # type: ignore[arg-type] + ) + + @zero_order(self.sample, argnums=0, method=method, num_samples=num_samples, sigma=sigma) + def forward_fn( + __flat_params: TupleOfTensors, # pylint: disable=unused-argument + *input, + **kwargs, + ) -> torch.Tensor: + flat_grad_tracking_params = __flat_params + grad_tracking_params_containers: Tuple[ + Dict[str, Optional[torch.Tensor]], ... 
+ ] = pytree.tree_unflatten( # type: ignore[assignment] + params_containers_treespec, flat_grad_tracking_params + ) + + with container_context( + params_containers, + grad_tracking_params_containers, + ): + return cls_forward(self, *input, **kwargs) + + return forward_fn(flat_params, *input, **kwargs) + + wrapped.__zero_order_gradients_enabled__ = True # type: ignore[attr-defined] + cls.forward = wrapped # type: ignore[assignment] + return cls + + +class ZeroOrderGradientModule(nn.Module, Samplable): + """The base class for zero-order gradient models.""" + + def __init_subclass__( # pylint: disable=arguments-differ + cls, + method: Method = 'naive', + num_samples: int = 1, + sigma: Numeric = 1.0, + ) -> None: + """Validate and initialize the subclass.""" + super().__init_subclass__() + enable_zero_order_gradients( + cls, + method=method, + num_samples=num_samples, + sigma=sigma, + ) + + @abc.abstractmethod + def forward(self, *args, **kwargs) -> torch.Tensor: + """Do the forward pass of the model.""" + raise NotImplementedError + + @abc.abstractmethod + def sample( + self, sample_shape: torch.Size = torch.Size() # pylint: disable=unused-argument + ) -> Union[torch.Tensor, Sequence[Numeric]]: + # pylint: disable-next=line-too-long + """Generate a sample_shape shaped sample or sample_shape shaped batch of samples if the distribution parameters are batched.""" + raise NotImplementedError diff --git a/torchopt/nn/__init__.py b/torchopt/nn/__init__.py index 57a8e802..f206972c 100644 --- a/torchopt/nn/__init__.py +++ b/torchopt/nn/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -15,7 +15,8 @@ """Base class for neural network modules that hold meta-parameters and meta-modules.""" from torchopt.diff.implicit.nn.module import ImplicitMetaGradientModule # circular reference +from torchopt.diff.zero_order.nn.module import ZeroOrderGradientModule # circular reference from torchopt.nn.module import MetaGradientModule -__all__ = ['MetaGradientModule', 'ImplicitMetaGradientModule'] +__all__ = ['MetaGradientModule', 'ImplicitMetaGradientModule', 'ZeroOrderGradientModule'] diff --git a/tutorials/5_Implicit_Differentiation.ipynb b/tutorials/5_Implicit_Differentiation.ipynb index c2913101..31413fc8 100644 --- a/tutorials/5_Implicit_Differentiation.ipynb +++ b/tutorials/5_Implicit_Differentiation.ipynb @@ -48,20 +48,15 @@ ] }, { + "attachments": {}, "cell_type": "markdown", "id": "0cdaac49-4b94-4900-9bb5-a39057ac8b21", "metadata": {}, "source": [ "## 1. Functional API\n", "\n", - "The basic functional API is `torchopt.diff.implicit.custom_root`, which is used as the decorator for the forward process implicit gradient procedures. Users are required to implement the stationary conditions for the inner-loop process, which will be used as the input of custom_root decorator. We show the pseudo code in the following part." - ] - }, - { - "cell_type": "markdown", - "id": "c0b4400b-a491-4f07-926c-c421ac5a2069", - "metadata": {}, - "source": [ + "The basic functional API is `torchopt.diff.implicit.custom_root`, which is used as the decorator for the forward process implicit gradient procedures. Users are required to implement the stationary conditions for the inner-loop process, which will be used as the input of custom_root decorator. 
We show the pseudo code in the following part.\n", + "\n", "```python\n", "# Functional API for implicit gradient\n", "def stationary(params, meta_params, data):\n", @@ -334,6 +329,7 @@ ] }, { + "attachments": {}, "cell_type": "markdown", "id": "c92e67ea-b220-4a14-a1ea-4eb3c5f52b6b", "metadata": {}, diff --git a/tutorials/6_Zero_Order_Differentiation.ipynb b/tutorials/6_Zero_Order_Differentiation.ipynb index c8d1e551..d824ab61 100644 --- a/tutorials/6_Zero_Order_Differentiation.ipynb +++ b/tutorials/6_Zero_Order_Differentiation.ipynb @@ -58,49 +58,52 @@ "\n", "The basic functional API is `torchopt.diff.zero_order.zero_order`, which is used as the decorator for the forward process zero-order gradient procedures. Users are required to implement the noise sampling function, which will be used as the input of the `zero_order` decorator. Here we show the specific meaning for each parameter used in the decorator.\n", "\n", - "- `distribution` for noise sampling distribution\n", + "- `distribution` for the noise sampling distribution. The distribution $\lambda$ should be spherically symmetric and have a constant variance of $1$ for each element, i.e.:\n", + " - Spherically symmetric: $\mathbb{E}_{\boldsymbol{z} \sim \lambda} [ \boldsymbol{z} ] = \boldsymbol{0}$.\n", + " - Constant variance of $1$ for each element: $\mathbb{E}_{\boldsymbol{z} \sim \lambda} [ {\lvert \boldsymbol{z}_i \rvert}^2 ] = 1$.\n", "- `method` for the different kinds of algorithms; we support `'naive'` ([ES-RL](https://arxiv.org/abs/1703.03864)), `'forward'` ([Forward-FD](http://proceedings.mlr.press/v80/choromanski18a/choromanski18a.pdf)), and `'antithetic'` ([antithetic](https://d1wqtxts1xzle7.cloudfront.net/75609515/coredp2011_1web-with-cover-page-v2.pdf?Expires=1670215467&Signature=RfP~mQhhhI7aGknwXbRBgSggFrKuNTPYdyUSdMmfTxOa62QoOJAm-Xhr3F1PLyjUQc2JVxmKIKGGuyYvyfCTpB31dfmMtuVQxZMWVF-SfErTN05SliC93yjA1x1g2kjhn8bkBFdQqGl~1RQSKnhj88BakgSeDNzyCxwbD5VgR89BXRs4YIK5RBIKYtgLhoyz5jar7wHS3TJhRzs3WNeTIAjAmLqJ068oGFZ0Jr7maGquTe3w~8LEEIprJ6cyCMc6b1UUJkmwjNq0RLTVbxgFjfi4Z9kyxyJB9IOS1J25OOON4jfwh5JlXS7MVskuONUyHJim1TQ8OwCraKlBsQLPQw__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA)).\n", "- `argnums` specifies which parameter we want to trace the meta-gradient for.\n", - "- `sigma` is for precision.\n", "- `num_samples` specifies how many times we want to conduct the sampling.\n", + "- `sigma` is for precision. This is the scaling factor for the sampling distribution.\n", + "\n", + "We show the pseudo code in the following part.\n", "\n", - "We show the pseudo code in the following part." ] }, { - "cell_type": "markdown", - "id": "c0b4400b-a491-4f07-926c-c421ac5a2069", - "metadata": {}, - "source": [ "```python\n", "# Functional API for zero-order differentiation\n", "# 1. Customize the noise distribution via a distribution class\n", "class Distribution:\n", - " def sample(self, sample_shape = torch.Size()):\n", - " # sampling function for noise\n", + " def sample(self, sample_shape=torch.Size()):\n", + " # Sampling function for noise\n", + " # NOTE: The distribution should be spherically symmetric with a constant variance of 1.\n", + " ...\n", " return noise_batch\n", "\n", "distribution = Distribution()\n", "\n", "# 2. 
Customize the noise distribution via a sampling function\n", - "def distribution(sample_shape = torch.Size()):\n", - " # sampling function for noise\n", + "def distribution(sample_shape=torch.Size()):\n", + " # Sampling function for noise\n", + " # NOTE: The distribution should be spherically symmetric with a constant variance of 1.\n", + " ...\n", " return noise_batch\n", "\n", "# 3. Distribution can also be an instance of `torch.distributions.Distribution`, e.g., `torch.distributions.Normal(...)`\n", "distribution = torch.distributions.Normal(loc=0, scale=1)\n", "\n", "# Decorator that wraps the function\n", - "@torchopt.diff.zero_order(distribution=distribution, method='naive', argnums=0, sigma=0.01, num_samples=100)\n", + "@torchopt.diff.zero_order(distribution=distribution, method='naive', argnums=0, num_samples=100, sigma=0.01)\n", "def forward(params, data):\n", " # Forward optimization process for params\n", - " return output\n", + " ...\n", + " return objective # the returned tensor should be a scalar tensor\n", "\n", "# Define params and get data\n", "params, data = ..., ...\n", - "loss = forward(params, data)\n", "\n", - "meta_grads = torch.autograd.grad(loss, params)\n", + "# Forward pass\n", + "loss = forward(params, data)\n", + "# Backward pass using zero-order differentiation\n", + "grads = torch.autograd.grad(loss, params)\n", "```" ] }, @@ -122,57 +125,56 @@ "name": "stdout", "output_type": "stream", "text": [ - "001: tensor(0.0269, grad_fn=)\n", - "002: tensor(0.0246, grad_fn=)\n", - "003: tensor(0.0225, grad_fn=)\n", - "004: tensor(0.0205, grad_fn=)\n", - "005: tensor(0.0187, grad_fn=)\n", - "006: tensor(0.0171, grad_fn=)\n", - "007: tensor(0.0156, grad_fn=)\n", - "008: tensor(0.0144, grad_fn=)\n", - "009: tensor(0.0134, grad_fn=)\n", - "010: tensor(0.0128, grad_fn=)\n", - "011: tensor(0.0122, grad_fn=)\n", - "012: tensor(0.0118, grad_fn=)\n", + "001: tensor(0.0265, grad_fn=)\n", + "002: tensor(0.0241, grad_fn=)\n", + "003: tensor(0.0222, grad_fn=)\n", + "004: tensor(0.0202, grad_fn=)\n", + "005: tensor(0.0185, grad_fn=)\n", + "006: tensor(0.0170, grad_fn=)\n", + "007: tensor(0.0158, grad_fn=)\n", + "008: tensor(0.0147, grad_fn=)\n", + "009: tensor(0.0139, grad_fn=)\n", + "010: tensor(0.0132, grad_fn=)\n", + "011: tensor(0.0126, grad_fn=)\n", + "012: tensor(0.0122, grad_fn=)\n", "013: tensor(0.0120, grad_fn=)\n", - "014: tensor(0.0117, grad_fn=)\n", + "014: tensor(0.0118, grad_fn=)\n", "015: tensor(0.0117, grad_fn=)\n", - "016: tensor(0.0118, grad_fn=)\n", - "017: tensor(0.0121, grad_fn=)\n", - "018: tensor(0.0117, grad_fn=)\n", - "019: tensor(0.0118, grad_fn=)\n", - "020: tensor(0.0118, grad_fn=)\n", - "021: tensor(0.0115, grad_fn=)\n", - "022: tensor(0.0117, grad_fn=)\n", - "023: tensor(0.0117, grad_fn=)\n", - "024: tensor(0.0116, grad_fn=)\n", - "025: tensor(0.0113, grad_fn=)\n" + "016: tensor(0.0117, grad_fn=)\n", + "017: tensor(0.0117, grad_fn=)\n", + "018: tensor(0.0118, grad_fn=)\n", + "019: tensor(0.0119, grad_fn=)\n", + "020: tensor(0.0120, grad_fn=)\n", + "021: tensor(0.0120, grad_fn=)\n", + "022: tensor(0.0121, grad_fn=)\n", + "023: tensor(0.0122, grad_fn=)\n", + "024: tensor(0.0122, grad_fn=)\n", + "025: tensor(0.0122, grad_fn=)\n" ] } ], "source": [ "torch.random.manual_seed(0)\n", "\n", - "fmodel, params = functorch.make_functional(torch.nn.Linear(32, 1))\n", + "fmodel, params = functorch.make_functional(nn.Linear(32, 1))\n", "x = torch.randn(64, 32) * 0.1\n", - "y = torch.randn(64) * 0.1\n", + "y = torch.randn(64, 1) * 0.1\n", "distribution = 
torch.distributions.Normal(loc=0, scale=1)\n", "\n", "\n", - "@torchopt.diff.zero_order.zero_order(\n", - " distribution=distribution, method='forward', argnums=0, sigma=0.01, num_samples=1000\n", + "@torchopt.diff.zero_order(\n", + " distribution=distribution, method='forward', argnums=0, num_samples=1000, sigma=0.01\n", ")\n", "def forward_process(params, fn, x, y):\n", " y_pred = fn(params, x)\n", - " loss = torch.mean((y - y_pred) ** 2)\n", + " loss = F.mse_loss(y_pred, y)\n", " return loss\n", "\n", "\n", "optimizer = torchopt.adam(lr=0.01)\n", - "opt_state = optimizer.init(params)\n", + "opt_state = optimizer.init(params) # init optimizer\n", "\n", "for i in range(25):\n", - " opt_state = optimizer.init(params) # init optimizer\n", " loss = forward_process(params, fmodel, x, y) # compute loss\n", "\n", " grads = torch.autograd.grad(loss, params) # compute gradients\n", @@ -181,6 +183,136 @@ "\n", " print(f'{i + 1:03d}: {loss!r}')" ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "db723f6b", + "metadata": {}, + "source": [ + "## 2. OOP API\n", + "\n", + "The basic OOP API is the class `ZeroOrderGradientModule`. We construct the network as an `nn.Module` following the classical PyTorch style. Users need to define the zero-order gradient forward procedure `forward()` and a noise sampling function `sample()`. Here we show the specific meaning for each parameter used in the class.\n", + "\n", + "- `method` for the different kinds of algorithms; we support `'naive'` ([ES-RL](https://arxiv.org/abs/1703.03864)), `'forward'` ([Forward-FD](http://proceedings.mlr.press/v80/choromanski18a/choromanski18a.pdf)), and `'antithetic'` ([antithetic](https://d1wqtxts1xzle7.cloudfront.net/75609515/coredp2011_1web-with-cover-page-v2.pdf?Expires=1670215467&Signature=RfP~mQhhhI7aGknwXbRBgSggFrKuNTPYdyUSdMmfTxOa62QoOJAm-Xhr3F1PLyjUQc2JVxmKIKGGuyYvyfCTpB31dfmMtuVQxZMWVF-SfErTN05SliC93yjA1x1g2kjhn8bkBFdQqGl~1RQSKnhj88BakgSeDNzyCxwbD5VgR89BXRs4YIK5RBIKYtgLhoyz5jar7wHS3TJhRzs3WNeTIAjAmLqJ068oGFZ0Jr7maGquTe3w~8LEEIprJ6cyCMc6b1UUJkmwjNq0RLTVbxgFjfi4Z9kyxyJB9IOS1J25OOON4jfwh5JlXS7MVskuONUyHJim1TQ8OwCraKlBsQLPQw__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA)).\n", + "- `num_samples` specifies how many times we want to conduct the sampling.\n", + "- `sigma` is for precision. 
This is the scaling factor for the sampling distribution.\n", + "\n", + "We show the pseudo code in the following part.\n", + "\n", + "```python\n", + "from torchopt.nn import ZeroOrderGradientModule\n", + "\n", + "# Inherited from the class ZeroOrderGradientModule\n", + "# Optionally specify the `method` and/or `num_samples` and/or `sigma` used for sampling\n", + "class Net(ZeroOrderGradientModule, method='naive', num_samples=100, sigma=0.01):\n", + " def __init__(self, ...):\n", + " ...\n", + "\n", + " def forward(self, batch):\n", + " # Forward process\n", + " ...\n", + " return objective # the returned tensor should be a scalar tensor\n", + "\n", + " def sample(self, sample_shape=torch.Size()):\n", + " # Generate a batch of noise samples\n", + " # NOTE: The distribution should be spherically symmetric with a constant variance of 1.\n", + " ...\n", + " return noise_batch\n", + "\n", + "# Get model and data\n", + "net = Net(...)\n", + "data = ...\n", + "\n", + "# Forward pass\n", + "loss = net(data)\n", + "# Backward pass using zero-order differentiation\n", + "grads = torch.autograd.grad(loss, net.parameters())\n", + "```" ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "b53524f5", + "metadata": {}, + "source": [ + "Here we reimplement the functional API example above with the OOP API." ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "ecc5730c", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "001: tensor(0.0201, grad_fn=)\n", + "002: tensor(0.0181, grad_fn=)\n", + "003: tensor(0.0167, grad_fn=)\n", + "004: tensor(0.0153, grad_fn=)\n", + "005: tensor(0.0142, grad_fn=)\n", + "006: tensor(0.0133, grad_fn=)\n", + "007: tensor(0.0125, grad_fn=)\n", + "008: tensor(0.0119, grad_fn=)\n", + "009: tensor(0.0116, grad_fn=)\n", + "010: tensor(0.0114, grad_fn=)\n", + "011: tensor(0.0112, grad_fn=)\n", + "012: tensor(0.0112, grad_fn=)\n", + "013: tensor(0.0113, grad_fn=)\n", + "014: tensor(0.0116, grad_fn=)\n", + "015: tensor(0.0118, grad_fn=)\n", + "016: tensor(0.0121, grad_fn=)\n", + "017: tensor(0.0123, grad_fn=)\n", + "018: tensor(0.0125, grad_fn=)\n", + "019: tensor(0.0127, grad_fn=)\n", + "020: tensor(0.0127, grad_fn=)\n", + "021: tensor(0.0125, grad_fn=)\n", + "022: tensor(0.0123, grad_fn=)\n", + "023: tensor(0.0120, grad_fn=)\n", + "024: tensor(0.0118, grad_fn=)\n", + "025: tensor(0.0117, grad_fn=)\n" ] + } + ], + "source": [ + "torch.random.manual_seed(0)\n", + "\n", + "\n", + "class Net(torchopt.nn.ZeroOrderGradientModule, method='forward', num_samples=100, sigma=0.01):\n", + " def __init__(self, dim):\n", + " super().__init__()\n", + " self.fc = nn.Linear(dim, 1)\n", + " self.distribution = torch.distributions.Normal(loc=0, scale=1)\n", + "\n", + " def forward(self, x, y):\n", + " y_pred = self.fc(x)\n", + " loss = F.mse_loss(y_pred, y)\n", + " return loss\n", + "\n", + " def sample(self, sample_shape=torch.Size()):\n", + " return self.distribution.sample(sample_shape)\n", + "\n", + "\n", + "x = torch.randn(64, 32) * 0.1\n", + "y = torch.randn(64, 1) * 0.1\n", + "net = Net(dim=32)\n", + "\n", + "\n", + "optimizer = torchopt.Adam(net.parameters(), lr=0.01)\n", + "\n", + "for i in range(25):\n", + " loss = net(x, y) # compute loss\n", + "\n", + " optimizer.zero_grad()\n", + " loss.backward() # backward pass\n", + " optimizer.step() # update network parameters\n", + "\n", + " print(f'{i + 1:03d}: {loss!r}')" ] } ], "metadata": { From 997cf56d06f4d6a1c94eeb52ae17231103e9e485 Mon Sep 17 00:00:00 
2001 From: Xuehai Pan Date: Sun, 15 Jan 2023 14:49:57 +0800 Subject: [PATCH 10/24] fix(diff/implicit): fix memory leak of OOP APIs (#113) --- .pre-commit-config.yaml | 2 +- CHANGELOG.md | 2 +- docs/source/spelling_wordlist.txt | 1 + examples/FuncTorch/maml_omniglot_vmap.py | 4 +- .../distributed/few-shot/maml_omniglot.py | 4 +- .../few-shot/maml_omniglot_local_loader.py | 4 +- examples/few-shot/maml_omniglot.py | 4 +- examples/iMAML/imaml_omniglot.py | 27 +-- examples/iMAML/imaml_omniglot_functional.py | 4 +- torchopt/__init__.py | 3 +- torchopt/diff/implicit/decorator.py | 3 +- torchopt/diff/implicit/nn/module.py | 217 ++++++++---------- torchopt/diff/zero_order/nn/module.py | 31 +-- torchopt/nn/__init__.py | 9 +- torchopt/nn/module.py | 23 +- torchopt/nn/stateless.py | 86 +++++++ torchopt/optim/meta/base.py | 12 +- torchopt/typing.py | 9 +- torchopt/utils.py | 13 +- 19 files changed, 253 insertions(+), 205 deletions(-) create mode 100644 torchopt/nn/stateless.py diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index ab7be5fe..2bbbb67a 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -54,7 +54,7 @@ repos: ^setup.py$ ) - repo: https://github.com/pycqa/pydocstyle - rev: 6.2.2 + rev: 6.2.3 hooks: - id: pydocstyle additional_dependencies: ['.[toml]'] diff --git a/CHANGELOG.md b/CHANGELOG.md index 341cb07b..ea62b695 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -21,7 +21,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Fixed - +- Fix memory leak in implicit MAML omniglot few-shot classification example with OOP APIs by [@XuehaiPan](https://github.com/XuehaiPan) in [#113](https://github.com/metaopt/torchopt/pull/113). ### Removed diff --git a/docs/source/spelling_wordlist.txt b/docs/source/spelling_wordlist.txt index abf398d6..bd646f0c 100644 --- a/docs/source/spelling_wordlist.txt +++ b/docs/source/spelling_wordlist.txt @@ -145,3 +145,4 @@ jvp ATen samplable conj +reparameterize diff --git a/examples/FuncTorch/maml_omniglot_vmap.py b/examples/FuncTorch/maml_omniglot_vmap.py index 41c17db8..0933b44d 100644 --- a/examples/FuncTorch/maml_omniglot_vmap.py +++ b/examples/FuncTorch/maml_omniglot_vmap.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -196,7 +196,6 @@ def train(db, net, device, meta_opt, epoch, log): qry_accs = 100.0 * torch.mean(torch.stack(qry_accs)).item() i = epoch + float(batch_idx) / n_train_iter iter_time = time.time() - start_time - torch.cuda.empty_cache() if batch_idx % 4 == 0: print( @@ -249,7 +248,6 @@ def test(db, net, device, epoch, log): qry_losses = torch.mean(torch.stack(qry_losses)).item() qry_accs = 100.0 * torch.mean(torch.stack(qry_accs)).item() - torch.cuda.empty_cache() print(f'[Epoch {epoch+1:.2f}] Test Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f}') log.append( diff --git a/examples/distributed/few-shot/maml_omniglot.py b/examples/distributed/few-shot/maml_omniglot.py index 2150c153..867caf43 100644 --- a/examples/distributed/few-shot/maml_omniglot.py +++ b/examples/distributed/few-shot/maml_omniglot.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -229,7 +229,6 @@ def train(db: OmniglotNShot, net: nn.Module, meta_opt: optim.Adam, epoch: int, l qry_acc = 100.0 * qry_acc i = epoch + float(batch_idx) / n_train_iter iter_time = time.time() - start_time - torch.cuda.empty_cache() print( f'[Epoch {i:.2f}] Train Loss: {qry_loss:.2f} | Acc: {qry_acc:.2f} | Time: {iter_time:.2f}' @@ -272,7 +271,6 @@ def test(db, net, epoch, log): qry_losses = np.mean(qry_losses) qry_accs = 100.0 * np.mean(qry_accs) - torch.cuda.empty_cache() print(f'[Epoch {epoch+1:.2f}] Test Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f}') log.append( diff --git a/examples/distributed/few-shot/maml_omniglot_local_loader.py b/examples/distributed/few-shot/maml_omniglot_local_loader.py index 9205d104..7f042854 100644 --- a/examples/distributed/few-shot/maml_omniglot_local_loader.py +++ b/examples/distributed/few-shot/maml_omniglot_local_loader.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -272,7 +272,6 @@ def train(net: nn.Module, meta_opt: optim.Adam, epoch: int, log: list): qry_acc = 100.0 * qry_acc i = epoch + float(batch_idx) / n_train_iter iter_time = time.time() - start_time - torch.cuda.empty_cache() print( f'[Epoch {i:.2f}] Train Loss: {qry_loss:.2f} | Acc: {qry_acc:.2f} | Time: {iter_time:.2f}' @@ -316,7 +315,6 @@ def test(net, epoch, log): qry_losses = np.mean(qry_losses) qry_accs = 100.0 * np.mean(qry_accs) - torch.cuda.empty_cache() print(f'[Epoch {epoch+1:.2f}] Test Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f}') log.append( diff --git a/examples/few-shot/maml_omniglot.py b/examples/few-shot/maml_omniglot.py index eae45136..17172bdd 100644 --- a/examples/few-shot/maml_omniglot.py +++ b/examples/few-shot/maml_omniglot.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -176,7 +176,6 @@ def train(db, net, meta_opt, epoch, log): qry_accs = 100.0 * np.mean(qry_accs) i = epoch + float(batch_idx) / n_train_iter iter_time = time.time() - start_time - torch.cuda.empty_cache() print( f'[Epoch {i:.2f}] Train Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f} | Time: {iter_time:.2f}' @@ -237,7 +236,6 @@ def test(db, net, epoch, log): qry_losses = np.mean(qry_losses) qry_accs = 100.0 * np.mean(qry_accs) - torch.cuda.empty_cache() print(f'[Epoch {epoch+1:.2f}] Test Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f}') log.append( diff --git a/examples/iMAML/imaml_omniglot.py b/examples/iMAML/imaml_omniglot.py index f4fe29c1..09344900 100644 --- a/examples/iMAML/imaml_omniglot.py +++ b/examples/iMAML/imaml_omniglot.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -51,6 +51,13 @@ def __init__(self, meta_net, n_inner_iter, reg_param): self.net = torchopt.module_clone(meta_net, by='deepcopy', detach_buffers=True) self.n_inner_iter = n_inner_iter self.reg_param = reg_param + self.reset_parameters() + + def reset_parameters(self): + with torch.no_grad(): + for p1, p2 in zip(self.parameters(), self.meta_parameters()): + p1.data.copy_(p2.data) + p1.detach_().requires_grad_() def forward(self, x): return self.net(x) @@ -145,21 +152,16 @@ def main(): def train(db, net, meta_opt, epoch, log, args): n_train_iter = db.x_train.shape[0] // db.batchsz - # Given this module we've created, rip out the parameters and buffers - # and return a functional version of the module. `fnet` is stateless - # and can be called with `fnet(params, buffers, args, kwargs)` - # fnet, params, buffers = functorch.make_functional_with_buffers(net) + n_inner_iter = args.inner_steps + reg_param = args.reg_params + task_num = args.task_num + inner_nets = [InnerNet(net, n_inner_iter, reg_param) for _ in range(task_num)] for batch_idx in range(n_train_iter): start_time = time.time() # Sample a batch of support and query images and labels. x_spt, y_spt, x_qry, y_qry = db.next() - task_num = x_spt.size(0) - - n_inner_iter = args.inner_steps - reg_param = args.reg_params - qry_losses = [] qry_accs = [] meta_opt.zero_grad() @@ -169,7 +171,8 @@ def train(db, net, meta_opt, epoch, log, args): # gradient steps w.r.t. the model's parameters. # This adapts the model's meta-parameters to the task. - inner_net = InnerNet(net, n_inner_iter, reg_param) + inner_net = inner_nets[i] + inner_net.reset_parameters() optimal_inner_net = inner_net.solve(x_spt[i], y_spt[i]) # The final set of adapted parameters will induce some @@ -188,7 +191,6 @@ def train(db, net, meta_opt, epoch, log, args): qry_accs = 100.0 * np.mean(qry_accs) i = epoch + float(batch_idx) / n_train_iter iter_time = time.time() - start_time - torch.cuda.empty_cache() print( f'[Epoch {i:.2f}] Train Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f} | Time: {iter_time:.2f}' @@ -243,7 +245,6 @@ def test(db, net, epoch, log, args): qry_losses = np.mean(qry_losses) qry_accs = 100.0 * np.mean(qry_accs) - torch.cuda.empty_cache() print(f'[Epoch {epoch+1:.2f}] Test Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f}') log.append( diff --git a/examples/iMAML/imaml_omniglot_functional.py b/examples/iMAML/imaml_omniglot_functional.py index dc62b5b8..1c0a089a 100644 --- a/examples/iMAML/imaml_omniglot_functional.py +++ b/examples/iMAML/imaml_omniglot_functional.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -165,7 +165,6 @@ def train(db, model, meta_opt_and_state, epoch, log, args): qry_accs = 100.0 * np.mean(qry_accs) i = epoch + float(batch_idx) / n_train_iter iter_time = time.time() - start_time - torch.cuda.empty_cache() print( f'[Epoch {i:.2f}] Train Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f} | Time: {iter_time:.2f}' @@ -227,7 +226,6 @@ def test(db, model, epoch, log, args): qry_losses = np.mean(qry_losses) qry_accs = 100.0 * np.mean(qry_accs) - torch.cuda.empty_cache() print(f'[Epoch {epoch+1:.2f}] Test Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f}') log.append( diff --git a/torchopt/__init__.py b/torchopt/__init__.py index db78f217..38c00a79 100644 --- a/torchopt/__init__.py +++ b/torchopt/__init__.py @@ -32,7 +32,7 @@ from torchopt.clip import clip_grad_norm from torchopt.combine import chain from torchopt.hook import register_hook -from torchopt.optim import SGD, Adam, AdamW, Optimizer, RMSProp, RMSprop, meta +from torchopt.optim import SGD, Adam, AdamW, Optimizer, RMSProp, RMSprop from torchopt.optim.func import FuncOptimizer from torchopt.optim.meta import ( MetaAdam, @@ -56,7 +56,6 @@ __all__ = [ 'accelerated_op_available', - 'diff', 'adam', 'adamw', 'rmsprop', diff --git a/torchopt/diff/implicit/decorator.py b/torchopt/diff/implicit/decorator.py index 20f47477..64dd15dd 100644 --- a/torchopt/diff/implicit/decorator.py +++ b/torchopt/diff/implicit/decorator.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -289,6 +289,7 @@ def forward( # type: ignore[override] # pylint: disable=arguments-differ f'solver_fn should be a torch.Tensor or a tuple of torch.Tensor. ' f'Got {output}' ) + output = tuple(t.data for t in output) ( args_treespec, diff --git a/torchopt/diff/implicit/nn/module.py b/torchopt/diff/implicit/nn/module.py index af0d2cc4..cac9c395 100644 --- a/torchopt/diff/implicit/nn/module.py +++ b/torchopt/diff/implicit/nn/module.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -16,77 +16,89 @@ # pylint: disable=redefined-builtin -import contextlib +import abc import functools import itertools -from typing import Any, Callable, Dict, Generator, Iterable, Optional, Tuple, Type +from typing import Any, Iterable, Optional, Tuple, Type import functorch import torch -from torchopt import pytree from torchopt.diff.implicit.decorator import custom_root from torchopt.nn.module import MetaGradientModule -from torchopt.typing import LinearSolver, TensorTree, TupleOfTensors -from torchopt.utils import extract_module_containers +from torchopt.nn.stateless import reparameterize, swap_state +from torchopt.typing import LinearSolver, TupleOfTensors __all__ = ['ImplicitMetaGradientModule'] -def update_containers( - dst_containers: Iterable[Dict[str, Optional[torch.Tensor]]], - src_containers: Iterable[Dict[str, Optional[torch.Tensor]]], -) -> None: - """Update the tensor containers in ``dst_containers`` with the ones in ``src_containers``.""" - for src_container, dst_container in zip(src_containers, dst_containers): - dst_container.update(src_container) - - -@contextlib.contextmanager -def container_context( - orig_containers: Iterable[Dict[str, Optional[torch.Tensor]]], - args_containers: Iterable[Dict[str, Optional[torch.Tensor]]], -) -> Generator[None, None, None]: - # pylint: disable-next=line-too-long - """Return a context manager that temporarily updates the containers in ``orig_containers`` with the ones in ``args_containers``.""" - if not isinstance(orig_containers, (list, tuple)): - orig_containers = list(orig_containers) - orig_containers_backups = [container.copy() for container in orig_containers] - try: - update_containers(orig_containers, args_containers) - yield - finally: - update_containers(orig_containers, orig_containers_backups) +def _stateless_objective_fn( + __flat_params: TupleOfTensors, + __flat_meta_params: TupleOfTensors, + __params_names: Iterable[str], + __meta_params_names: Iterable[str], + self: 'ImplicitMetaGradientModule', + *input, + **kwargs, +) -> torch.Tensor: + with reparameterize( + self, + itertools.chain( + zip(__params_names, __flat_params), + zip(__meta_params_names, __flat_meta_params), + ), + ): + return self.objective(*input, **kwargs) + + +def _stateless_optimality_fn( + __flat_params: TupleOfTensors, + __flat_meta_params: TupleOfTensors, + __params_names: Iterable[str], + __meta_params_names: Iterable[str], + self: 'ImplicitMetaGradientModule', + *input, + **kwargs, +) -> TupleOfTensors: + with reparameterize( + self, + itertools.chain( + zip(__params_names, __flat_params), + zip(__meta_params_names, __flat_meta_params), + ), + ): + return self.optimality(*input, **kwargs) def make_optimality_from_objective( - objective: Callable[..., torch.Tensor] -) -> Callable[..., TupleOfTensors]: - """Make a function that computes the optimality function of the objective function.""" + cls: Type['ImplicitMetaGradientModule'], +) -> Type['ImplicitMetaGradientModule']: + """Derives the optimality function of the objective function.""" + if ( + getattr(cls, 'objective', ImplicitMetaGradientModule.objective) + is ImplicitMetaGradientModule.objective + ): + raise TypeError('The objective function is not defined.') def optimality(self: 'ImplicitMetaGradientModule', *input, **kwargs) -> TupleOfTensors: - params_containers = extract_module_containers(self, with_buffers=False)[0] - flat_params: TupleOfTensors - # pylint: disable-next=line-too-long - flat_params, params_containers_treespec = pytree.tree_flatten_as_tuple(params_containers) # type: 
ignore[arg-type] - - def objective_fn(__flat_params: TupleOfTensors, *input, **kwargs) -> torch.Tensor: - flat_grad_tracking_params = __flat_params - grad_tracking_params_containers: Tuple[ - Dict[str, Optional[torch.Tensor]], ... - ] = pytree.tree_unflatten( # type: ignore[assignment] - params_containers_treespec, flat_grad_tracking_params - ) - - with container_context(params_containers, grad_tracking_params_containers): - return objective(self, *input, **kwargs) - - objective_grad_fn = functorch.grad(objective_fn, argnums=0) - flat_grads = objective_grad_fn(flat_params, *input, **kwargs) + params_names, flat_params = tuple(zip(*self.named_parameters())) + meta_params_names, flat_meta_params = tuple(zip(*self.named_meta_parameters())) + + objective_grad_fn = functorch.grad(_stateless_objective_fn, argnums=0) + flat_grads = objective_grad_fn( + flat_params, + flat_meta_params, + params_names, + meta_params_names, + self, + *input, + **kwargs, + ) return flat_grads - return optimality + cls.optimality = optimality # type: ignore[assignment] + return cls def enable_implicit_gradients( @@ -102,72 +114,39 @@ def enable_implicit_gradients( else: solve_kwargs = {} - @functools.wraps(cls_solve) - def wrapped( # pylint: disable=too-many-locals - self: 'ImplicitMetaGradientModule', *input, **kwargs - ) -> Any: + @custom_root(_stateless_optimality_fn, argnums=1, has_aux=True, **solve_kwargs) + def stateless_solver_fn( + # pylint: disable=unused-argument + __flat_params: TupleOfTensors, + __flat_meta_params: TupleOfTensors, + __params_names: Iterable[str], + __meta_params_names: Iterable[str], + # pylint: enable=unused-argument + self: 'ImplicitMetaGradientModule', + *input, + **kwargs, + ) -> Tuple[TupleOfTensors, Any]: """Solve the optimization problem.""" - params_containers = extract_module_containers(self, with_buffers=False)[0] - meta_params_containers = [self._meta_parameters] # pylint: disable=protected-access - for meta_module in self.meta_children(): - meta_params_containers.extend( - extract_module_containers(meta_module, with_buffers=False)[0] - ) - meta_params_containers = tuple(meta_params_containers) + output = cls_solve(self, *input, **kwargs) + flat_optimal_params = tuple(p.detach_() for p in self.parameters()) + return flat_optimal_params, output - flat_params: TupleOfTensors - flat_meta_params: TupleOfTensors - flat_params, params_containers_treespec = pytree.tree_flatten_as_tuple( - params_containers # type: ignore[arg-type] - ) - flat_meta_params, meta_params_containers_treespec = pytree.tree_flatten_as_tuple( - meta_params_containers # type: ignore[arg-type] - ) - - def optimality_fn( - __flat_params: TupleOfTensors, - __flat_meta_params: TupleOfTensors, - *input, - **kwargs, - ) -> TupleOfTensors: - flat_grad_tracking_params = __flat_params - grad_tracking_params_containers: Tuple[ - Dict[str, Optional[torch.Tensor]], ... - ] = pytree.tree_unflatten( # type: ignore[assignment] - params_containers_treespec, flat_grad_tracking_params - ) - flat_grad_tracking_meta_params = __flat_meta_params - grad_tracking_meta_params_containers: Tuple[ - Dict[str, Optional[torch.Tensor]], ... 
- ] = pytree.tree_unflatten( # type: ignore[assignment] - meta_params_containers_treespec, flat_grad_tracking_meta_params - ) - - with container_context( - itertools.chain( - params_containers, - meta_params_containers, - ), - itertools.chain( - grad_tracking_params_containers, - grad_tracking_meta_params_containers, - ), - ): - return self.optimality(*input, **kwargs) - - @custom_root(optimality_fn, argnums=1, has_aux=True, **solve_kwargs) - def solver_fn( - __flat_params: TupleOfTensors, # pylint: disable=unused-argument - __flat_meta_params: TupleOfTensors, # pylint: disable=unused-argument + @functools.wraps(cls_solve) + def wrapped(self: 'ImplicitMetaGradientModule', *input, **kwargs) -> Any: + """Solve the optimization problem.""" + params_names, flat_params = tuple(zip(*self.named_parameters())) + meta_params_names, flat_meta_params = tuple(zip(*self.named_meta_parameters())) + + flat_optimal_params, output = stateless_solver_fn( + flat_params, + flat_meta_params, + params_names, + meta_params_names, + self, *input, **kwargs, - ) -> Tuple[TupleOfTensors, Any]: - output = cls_solve(self, *input, **kwargs) - flat_optimal_params: TupleOfTensors = tuple(pytree.tree_leaves(params_containers)) # type: ignore[arg-type] - return flat_optimal_params, output - - # pylint: disable-next=unused-variable - flat_optimal_params, output = solver_fn(flat_params, flat_meta_params, *input, **kwargs) + ) + swap_state(self, zip(params_names, flat_optimal_params)) return output wrapped.__implicit_gradients_enabled__ = True # type: ignore[attr-defined] @@ -211,10 +190,11 @@ def __init_subclass__(cls, linear_solve: Optional[LinearSolver] = None) -> None: if not callable(objective): raise TypeError('method objective() must be callable.') - cls.optimality = make_optimality_from_objective(objective) # type: ignore[assignment] + make_optimality_from_objective(cls) enable_implicit_gradients(cls) + @abc.abstractmethod def solve(self, *input, **kwargs) -> Any: """Solve the inner optimization problem. @@ -243,7 +223,7 @@ def solve(self, batch, labels): """ raise NotImplementedError # update parameters - def optimality(self, *input, **kwargs) -> TensorTree: + def optimality(self, *input, **kwargs) -> TupleOfTensors: r"""Compute the optimality residual. This method stands for the optimality residual to the optimal parameters after solving the @@ -280,8 +260,9 @@ def optimality(self, *input, **kwargs) -> TensorTree: :math:`\boldsymbol{\theta}` is the joint vector of the meta-parameters. Returns: - A tree of tensors, the optimality residual to the optimal parameters after solving the - inner optimization problem. + A tuple of tensors, the optimality residual to the optimal parameters after solving the + inner optimization problem. The returned tensors should correspond to the outputs of + `tuple(self.parameters())`. 
""" # pylint: disable=line-too-long raise NotImplementedError diff --git a/torchopt/diff/zero_order/nn/module.py b/torchopt/diff/zero_order/nn/module.py index c71048ad..9be7b16a 100644 --- a/torchopt/diff/zero_order/nn/module.py +++ b/torchopt/diff/zero_order/nn/module.py @@ -18,16 +18,14 @@ import abc import functools -from typing import Dict, Optional, Sequence, Tuple, Type, Union +from typing import Sequence, Type, Union import torch import torch.nn as nn -from torchopt import pytree -from torchopt.diff.implicit.nn.module import container_context from torchopt.diff.zero_order.decorator import Method, Samplable, zero_order +from torchopt.nn.stateless import reparameterize from torchopt.typing import Numeric, TupleOfTensors -from torchopt.utils import extract_module_containers __all__ = ['ZeroOrderGradientModule'] @@ -47,34 +45,17 @@ def enable_zero_order_gradients( ) @functools.wraps(cls_forward) - def wrapped( # pylint: disable=too-many-locals - self: 'ZeroOrderGradientModule', *input, **kwargs - ) -> torch.Tensor: + def wrapped(self: 'ZeroOrderGradientModule', *input, **kwargs) -> torch.Tensor: """Do the forward pass calculation.""" - params_containers = extract_module_containers(self, with_buffers=False)[0] - - flat_params: TupleOfTensors - flat_params, params_containers_treespec = pytree.tree_flatten_as_tuple( - params_containers # type: ignore[arg-type] - ) + params_names, flat_params = tuple(zip(*self.named_parameters())) @zero_order(self.sample, argnums=0, method=method, num_samples=num_samples, sigma=sigma) def forward_fn( - __flat_params: TupleOfTensors, # pylint: disable=unused-argument + __flat_params: TupleOfTensors, *input, **kwargs, ) -> torch.Tensor: - flat_grad_tracking_params = __flat_params - grad_tracking_params_containers: Tuple[ - Dict[str, Optional[torch.Tensor]], ... - ] = pytree.tree_unflatten( # type: ignore[assignment] - params_containers_treespec, flat_grad_tracking_params - ) - - with container_context( - params_containers, - grad_tracking_params_containers, - ): + with reparameterize(self, zip(params_names, __flat_params)): return cls_forward(self, *input, **kwargs) return forward_fn(flat_params, *input, **kwargs) diff --git a/torchopt/nn/__init__.py b/torchopt/nn/__init__.py index f206972c..e6dcc14f 100644 --- a/torchopt/nn/__init__.py +++ b/torchopt/nn/__init__.py @@ -17,6 +17,13 @@ from torchopt.diff.implicit.nn.module import ImplicitMetaGradientModule # circular reference from torchopt.diff.zero_order.nn.module import ZeroOrderGradientModule # circular reference from torchopt.nn.module import MetaGradientModule +from torchopt.nn.stateless import reparameterize, swap_state -__all__ = ['MetaGradientModule', 'ImplicitMetaGradientModule', 'ZeroOrderGradientModule'] +__all__ = [ + 'MetaGradientModule', + 'ImplicitMetaGradientModule', + 'ZeroOrderGradientModule', + 'reparameterize', + 'swap_state', +] diff --git a/torchopt/nn/module.py b/torchopt/nn/module.py index 9b2e691a..156b7b3f 100644 --- a/torchopt/nn/module.py +++ b/torchopt/nn/module.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -21,6 +21,7 @@ import torch.nn as nn from torchopt import pytree +from torchopt.typing import TensorContainer class MetaInputsContainer(NamedTuple): @@ -34,7 +35,7 @@ class MetaGradientModule(nn.Module): # pylint: disable=abstract-method """Base class for neural network modules that hold meta-parameters and meta-modules.""" _meta_inputs: MetaInputsContainer - _meta_parameters: Dict[str, Optional[torch.Tensor]] + _meta_parameters: TensorContainer _meta_modules: Dict[str, Optional[nn.Module]] def __new__(cls, *args, **kwargs) -> 'MetaGradientModule': @@ -49,7 +50,7 @@ def __new__(cls, *args, **kwargs) -> 'MetaGradientModule': meta_modules.update(meta_module.modules()) instance._meta_inputs = MetaInputsContainer(meta_parameters, meta_modules) - instance._meta_parameters: Dict[str, Optional[torch.Tensor]] = OrderedDict() # type: ignore[misc] + instance._meta_parameters: TensorContainer = OrderedDict() # type: ignore[misc] instance._meta_modules: Dict[str, Optional[nn.Module]] = OrderedDict() # type: ignore[misc] return instance @@ -199,9 +200,9 @@ def register_parameter(self, name: str, param: Optional[torch.Tensor]) -> None: if not isinstance(name, str): raise TypeError(f'parameter name should be a string. Got {torch.typename(name)}') if '.' in name: - raise KeyError("parameter name can't contain \".\"") + raise KeyError("parameter name can't contain '.'") if name == '': - raise KeyError("parameter name can't be empty string \"\"") + raise KeyError("parameter name can't be empty string ''") if hasattr(self, name) and name not in self._parameters: raise KeyError(f"attribute '{name}' already exists") @@ -246,9 +247,9 @@ def register_meta_parameter(self, name: str, param: Optional[torch.Tensor]) -> N if not isinstance(name, str): raise TypeError(f'meta-parameter name should be a string. Got {torch.typename(name)}') if '.' in name: - raise KeyError("meta-parameter name can't contain \".\"") + raise KeyError("meta-parameter name can't contain '.'") if name == '': - raise KeyError("meta-parameter name can't be empty string \"\"") + raise KeyError("meta-parameter name can't be empty string ''") if hasattr(self, name) and name not in self._meta_parameters: raise KeyError(f"attribute '{name}' already exists") @@ -285,9 +286,9 @@ def add_module(self, name: str, module: Optional[nn.Module]) -> None: if hasattr(self, name) and name not in self._modules: raise KeyError(f"attribute '{name}' already exists") if '.' in name: - raise KeyError(f"module name can't contain \".\", got: {name}") + raise KeyError(f"module name can't contain '.', got: '{name}'") if name == '': - raise KeyError("module name can't be empty string \"\"") + raise KeyError("module name can't be empty string ''") if module in self._meta_inputs.meta_modules: raise ValueError( f"cannot add module that is a meta-module to module '{name}'. " @@ -317,9 +318,9 @@ def add_meta_module(self, name: str, meta_module: Optional[nn.Module]) -> None: if hasattr(self, name) and name not in self._meta_modules: raise KeyError(f"attribute '{name}' already exists") if '.' 
in name: - raise KeyError(f"meta-module name can't contain \".\", got: {name}") + raise KeyError(f"meta-module name can't contain '.', got: '{name}'") if name == '': - raise KeyError("meta-module name can't be empty string \"\"") + raise KeyError("meta-module name can't be empty string ''") self._meta_modules[name] = meta_module diff --git a/torchopt/nn/stateless.py b/torchopt/nn/stateless.py new file mode 100644 index 00000000..d8537169 --- /dev/null +++ b/torchopt/nn/stateless.py @@ -0,0 +1,86 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================== +"""Utility functions for stateless module calls.""" + +import contextlib +from typing import Dict, Generator, Iterable, Tuple, Union + +import torch +import torch.nn as nn + + +__all__ = ['swap_state', 'reparameterize'] + + +MISSING: torch.Tensor = object() # type: ignore[assignment] + + +def swap_state( + module: nn.Module, + named_tensors: Union[Dict[str, torch.Tensor], Iterable[Tuple[str, torch.Tensor]]], + allow_missing: bool = False, +) -> Dict[str, torch.Tensor]: + """Swap the module parameters and/or buffers.""" + if not isinstance(named_tensors, dict): + named_tensors = dict(named_tensors) + + def recursive_setattr(mod: nn.Module, path: str, value: torch.Tensor) -> torch.Tensor: + """Set attribute recursively.""" + attr, dot, suffix = path.partition('.') + if dot: + return recursive_setattr(getattr(mod, attr), suffix, value) + + if allow_missing: + orig = getattr(mod, attr, MISSING) + else: + orig = getattr(mod, attr) + + # pylint: disable=protected-access + if value is MISSING: + delattr(mod, attr) + elif hasattr(mod, '_parameters') and attr in mod._parameters: + mod._parameters[attr] = value # type: ignore[assignment] + elif hasattr(mod, '_buffers') and attr in mod._buffers: + mod._buffers[attr] = value + elif hasattr(mod, '_meta_parameters') and attr in mod._meta_parameters: # type: ignore[operator] + mod._meta_parameters[attr] = value # type: ignore[operator,index] + else: + setattr(mod, attr, value) + # pylint: enable=protected-access + + return orig + + orig_named_tensors = { + name: recursive_setattr(module, name, tensor) for name, tensor in named_tensors.items() + } + return orig_named_tensors + + +@contextlib.contextmanager +def reparameterize( + module: nn.Module, + named_tensors: Union[Dict[str, torch.Tensor], Iterable[Tuple[str, torch.Tensor]]], + allow_missing: bool = False, +) -> Generator[nn.Module, None, None]: + """Reparameterize the module parameters and/or buffers.""" + if not isinstance(named_tensors, dict): + named_tensors = dict(named_tensors) + + orig_named_tensors = {} + try: + orig_named_tensors = swap_state(module, named_tensors, allow_missing=allow_missing) + yield module + finally: + swap_state(module, orig_named_tensors, allow_missing=allow_missing) diff --git a/torchopt/optim/meta/base.py b/torchopt/optim/meta/base.py index 13e55b15..8db4f0a7 100644 --- 
a/torchopt/optim/meta/base.py +++ b/torchopt/optim/meta/base.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -14,14 +14,14 @@ # ============================================================================== """The base class for differentiable meta-optimizers.""" -from typing import Dict, List, Optional, Sequence, Tuple +from typing import List, Sequence, Tuple import torch import torch.nn as nn from torchopt import pytree from torchopt.base import UninitializedState -from torchopt.typing import GradientTransformation, OptState, TupleOfTensors +from torchopt.typing import GradientTransformation, ModuleTensorContainers, OptState, TupleOfTensors from torchopt.update import apply_updates from torchopt.utils import extract_module_containers @@ -49,7 +49,7 @@ def __init__(self, module: nn.Module, impl: GradientTransformation) -> None: raise TypeError(f'{impl} (type: {type(impl).__name__}) is not a GradientTransformation') self.impl: GradientTransformation = impl - self.param_containers_groups: List[Tuple[Dict[str, Optional[torch.Tensor]], ...]] = [] + self.param_containers_groups: List[ModuleTensorContainers] = [] self.state_groups: List[OptState] = [] self.add_param_group(module) @@ -87,9 +87,7 @@ def step(self, loss: torch.Tensor) -> None: # pylint: disable=too-many-locals ) self.state_groups[i] = new_state flat_new_params = apply_updates(flat_params, updates, inplace=False) - new_params: Tuple[ - Dict[str, Optional[torch.Tensor]], ... - ] = pytree.tree_unflatten( # type: ignore[assignment] + new_params: ModuleTensorContainers = pytree.tree_unflatten( # type: ignore[assignment] container_treespec, flat_new_params ) for container, new_param in zip(param_container, new_params): diff --git a/torchopt/typing.py b/torchopt/typing.py index 28d794d3..938f583e 100644 --- a/torchopt/typing.py +++ b/torchopt/typing.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -14,7 +14,7 @@ # ============================================================================== """Typing utilities.""" -from typing import Callable, List, Optional, Sequence, Tuple, TypeVar, Union +from typing import Callable, Dict, List, Optional, Sequence, Tuple, TypeVar, Union from typing_extensions import TypeAlias # Python 3.10+ from typing_extensions import Protocol, runtime_checkable # Python 3.8+ @@ -59,6 +59,8 @@ 'SequenceOfOptionalTensors', 'OptionalTensorOrOptionalTensors', 'OptionalTensorTree', + 'TensorContainer', + 'ModuleTensorContainers', 'Future', 'LinearSolver', 'Device', @@ -90,6 +92,9 @@ OptionalTensorOrOptionalTensors = Union[OptionalTensor, SequenceOfOptionalTensors] OptionalTensorTree: TypeAlias = PyTreeTypeVar('OptionalTensorTree', OptionalTensor) # type: ignore[valid-type] +TensorContainer = Dict[str, Optional[Tensor]] +ModuleTensorContainers = Tuple[TensorContainer, ...] + # Parameters are arbitrary nests of `torch.Tensor`. Params: TypeAlias = TensorTree Updates: TypeAlias = Params # Gradient updates are of the same type as parameters. 
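For context, the `swap_state` / `reparameterize` helpers introduced in `torchopt/nn/stateless.py` above are what the refactored implicit- and zero-order-gradient modules now call in place of the old container machinery. A minimal usage sketch follows; it is illustrative only and not part of the patch, and the `nn.Linear` module, parameter names, and shapes here are assumptions:

```python
import torch
import torch.nn as nn

from torchopt.nn.stateless import reparameterize, swap_state

net = nn.Linear(4, 1)
zero_weight = torch.zeros_like(net.weight)

# Temporarily substitute parameters by their dotted names from
# `net.named_parameters()`; the original tensors are restored on exit.
with reparameterize(net, [('weight', zero_weight)]):
    out = net(torch.randn(2, 4))  # this forward pass sees the swapped weight

# `swap_state` swaps in place and returns the original tensors, so calling
# it again with the returned mapping undoes the swap.
orig = swap_state(net, {'weight': zero_weight})
swap_state(net, orig)
```

This is the same pattern the wrapped `solve()` and `forward()` methods use internally: run the stateless computation against a flat tuple of tensors, then write the results back with `swap_state`.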
diff --git a/torchopt/utils.py b/torchopt/utils.py index f60bc6d6..c00e6b4f 100644 --- a/torchopt/utils.py +++ b/torchopt/utils.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -36,7 +36,7 @@ import torch.nn as nn from torchopt import pytree -from torchopt.typing import Device, OptState, TensorTree +from torchopt.typing import Device, ModuleTensorContainers, OptState, TensorContainer, TensorTree if TYPE_CHECKING: @@ -287,14 +287,11 @@ def get_variable(t): def extract_module_containers( module: nn.Module, with_buffers: bool = True -) -> Tuple[ - Tuple[Dict[str, Optional[torch.Tensor]], ...], - Tuple[Dict[str, Optional[torch.Tensor]], ...], -]: +) -> Tuple[ModuleTensorContainers, ModuleTensorContainers]: """Extract the references to the containers of parameters and buffers from a module.""" if isinstance(module, nn.Module): - params: List[Dict[str, Optional[torch.Tensor]]] = [] - buffers: List[Dict[str, Optional[torch.Tensor]]] = [] + params: List[TensorContainer] = [] + buffers: List[TensorContainer] = [] memo: Set[nn.Module] = set() def update_container(container, items): From 87f7c1b4aa5ced6d6204d7698a0d687b16c8990f Mon Sep 17 00:00:00 2001 From: Xuehai Pan Date: Tue, 17 Jan 2023 17:11:21 +0800 Subject: [PATCH 11/24] perf(nn/stateless): cache intermediate submodules (#128) --- torchopt/nn/stateless.py | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/torchopt/nn/stateless.py b/torchopt/nn/stateless.py index d8537169..e0a9ecb8 100644 --- a/torchopt/nn/stateless.py +++ b/torchopt/nn/stateless.py @@ -36,11 +36,24 @@ def swap_state( if not isinstance(named_tensors, dict): named_tensors = dict(named_tensors) - def recursive_setattr(mod: nn.Module, path: str, value: torch.Tensor) -> torch.Tensor: + submodules = {'': module} + + def get_submodule(path: str) -> nn.Module: + """Get submodules recursively.""" + try: + return submodules[path] + except KeyError: + prefix, dot, attr = path.rpartition('.') + if dot: + submodule = submodules[path] = getattr(get_submodule(prefix), attr) + else: + submodule = submodules[path] = getattr(module, attr) + return submodule + + def recursive_setattr(path: str, value: torch.Tensor) -> torch.Tensor: """Set attribute recursively.""" - attr, dot, suffix = path.partition('.') - if dot: - return recursive_setattr(getattr(mod, attr), suffix, value) + prefix, _, attr = path.rpartition('.') + mod = get_submodule(prefix) if allow_missing: orig = getattr(mod, attr, MISSING) @@ -63,7 +76,7 @@ def recursive_setattr(mod: nn.Module, path: str, value: torch.Tensor) -> torch.T return orig orig_named_tensors = { - name: recursive_setattr(module, name, tensor) for name, tensor in named_tensors.items() + name: recursive_setattr(name, tensor) for name, tensor in named_tensors.items() } return orig_named_tensors From 48cfa88e2728247a99de96f868e9c6cd61917bec Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Tue, 24 Jan 2023 22:40:10 +0800 Subject: [PATCH 12/24] chore(pre-commit): [pre-commit.ci] autoupdate (#129) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Xuehai Pan --- .pre-commit-config.yaml | 7 ++++++- docs/conda-recipe.yaml | 2 +- torchopt/diff/implicit/nn/module.py | 1 - 3 files changed, 7 
insertions(+), 3 deletions(-) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 2bbbb67a..7ac1da2c 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -1,5 +1,10 @@ # See https://pre-commit.com for more information # See https://pre-commit.com/hooks.html for more hooks +ci: + skip: [pylint] + autofix_prs: true + autofix_commit_msg: 'fix: [pre-commit.ci] auto fixes [...]' + autoupdate_commit_msg: 'chore(pre-commit): [pre-commit.ci] autoupdate' repos: - repo: https://github.com/pre-commit/pre-commit-hooks rev: v4.4.0 @@ -54,7 +59,7 @@ repos: ^setup.py$ ) - repo: https://github.com/pycqa/pydocstyle - rev: 6.2.3 + rev: 6.3.0 hooks: - id: pydocstyle additional_dependencies: ['.[toml]'] diff --git a/docs/conda-recipe.yaml b/docs/conda-recipe.yaml index bd927022..07ed7863 100644 --- a/docs/conda-recipe.yaml +++ b/docs/conda-recipe.yaml @@ -63,7 +63,7 @@ dependencies: - sphinx-copybutton - sphinxcontrib-spelling - sphinxcontrib-bibtex - - sphinx-autodoc-typehints >= 1.19.2 + - sphinx-autodoc-typehints >= 1.19.2, != 1.23.4 - pyenchant - hunspell-en - myst-nb diff --git a/torchopt/diff/implicit/nn/module.py b/torchopt/diff/implicit/nn/module.py index cac9c395..4e2436f6 100644 --- a/torchopt/diff/implicit/nn/module.py +++ b/torchopt/diff/implicit/nn/module.py @@ -199,7 +199,6 @@ def solve(self, *input, **kwargs) -> Any: """Solve the inner optimization problem. .. warning:: - For gradient-based optimization methods, the parameter inputs should be explicitly specified in the :func:`torch.autograd.backward` function as argument ``inputs``. Otherwise, if not provided, the gradient is accumulated into all the leaf Tensors From e1113ba03b93621c87da202e41d316d8e898ff33 Mon Sep 17 00:00:00 2001 From: Xuehai Pan Date: Tue, 24 Jan 2023 23:28:24 +0800 Subject: [PATCH 13/24] feat(pre-commit): add `clang-format` hook (#130) --- .editorconfig | 12 +++++++++++- .pre-commit-config.yaml | 5 +++++ docs/source/bibtex.json | 10 +++++----- 3 files changed, 21 insertions(+), 6 deletions(-) diff --git a/.editorconfig b/.editorconfig index 96ef7342..3ae9f69a 100644 --- a/.editorconfig +++ b/.editorconfig @@ -14,7 +14,7 @@ insert_final_newline = true indent_size = 4 src_paths=torchopt,tests,examples -[*.{yaml,yml}] +[*.{yaml,yml,json}] indent_size = 2 [*.md] @@ -25,8 +25,18 @@ x-soft-wrap-text = true indent_size = 4 x-soft-wrap-text = true +[*.{bib,tex}] +indent_size = 2 + [Makefile] indent_style = tab +[*.sh] +indent_style = tab + +[*.bat] +end_of_line = crlf +indent_style = tab + [*.{cpp,h,cu,cuh}] indent_size = 2 diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 7ac1da2c..4d8231bc 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -23,6 +23,11 @@ repos: - id: detect-private-key - id: debug-statements - id: double-quote-string-fixer + - repo: https://github.com/pre-commit/mirrors-clang-format + rev: v15.0.7 + hooks: + - id: clang-format + stages: [commit, push, manual] - repo: https://github.com/PyCQA/isort rev: 5.11.4 hooks: diff --git a/docs/source/bibtex.json b/docs/source/bibtex.json index c2aa9165..7abea503 100644 --- a/docs/source/bibtex.json +++ b/docs/source/bibtex.json @@ -1,7 +1,7 @@ { - "cited": { - "examples/MAML": [ - "MAML", - ] - } + "cited": { + "examples/MAML": [ + "MAML", + ] + } } From 2c1102aadb687113557aa9c47843c7df5ee3857a Mon Sep 17 00:00:00 2001 From: Xuehai Pan Date: Wed, 25 Jan 2023 01:40:18 +0800 Subject: [PATCH 14/24] feat: add `clang-tidy` integration (#131) --- .github/workflows/lint.yml | 5 ++++ CPPLINT.cfg | 3 +++ 
Makefile | 28 +++++++++++++++++---- docs/source/spelling_wordlist.txt | 1 + include/adam_op/adam_op.h | 3 ++- include/adam_op/adam_op_impl_cpu.h | 1 + include/adam_op/adam_op_impl_cuda.cuh | 1 + include/common.h | 1 + include/utils.h | 1 + src/adam_op/adam_op.cpp | 2 +- src/adam_op/adam_op_impl_cpu.cpp | 35 +++++++++++---------------- 11 files changed, 53 insertions(+), 28 deletions(-) diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml index 92d6036f..3af88008 100644 --- a/.github/workflows/lint.yml +++ b/.github/workflows/lint.yml @@ -86,6 +86,11 @@ jobs: run: | make clang-format + - name: clang-tidy + run: | + sudo apt-get update && sudo apt-get install libomp-dev --yes + make clang-tidy + - name: addlicense run: | make addlicense diff --git a/CPPLINT.cfg b/CPPLINT.cfg index 41265bb6..dd346401 100644 --- a/CPPLINT.cfg +++ b/CPPLINT.cfg @@ -1 +1,4 @@ linelength=100 +filter=-readability/nolint +filter=-readability/braces +filter=-whitespace/newline diff --git a/Makefile b/Makefile index e29e0dc5..d539f3ee 100644 --- a/Makefile +++ b/Makefile @@ -5,7 +5,8 @@ PROJECT_PATH = $(PROJECT_NAME) SHELL = /bin/bash SOURCE_FOLDERS = $(PROJECT_PATH) examples include src tests docs PYTHON_FILES = $(shell find $(SOURCE_FOLDERS) -type f -name "*.py" -o -name "*.pyi") -CXX_FILES = $(shell find $(SOURCE_FOLDERS) -type f -name "*.h" -o -name "*.cpp" -o -name "*.cuh" -o -name "*.cu") +CXX_FILES = $(shell find $(SOURCE_FOLDERS) -type f -name "*.h" -o -name "*.cpp") +CUDA_FILES = $(shell find $(SOURCE_FOLDERS) -type f -name "*.cuh" -o -name "*.cu") COMMIT_HASH = $(shell git log -1 --format=%h) PATH := $(HOME)/go/bin:$(PATH) PYTHON ?= $(shell command -v python3 || command -v python) @@ -81,6 +82,9 @@ pytest-install: $(call check_pip_install,pytest-cov) $(call check_pip_install,pytest-xdist) +cmake-install: + command -v cmake || $(call check_pip_install,cmake) + cpplint-install: $(call check_pip_install,cpplint) @@ -129,11 +133,25 @@ pre-commit: pre-commit-install # C++ linters +cmake-configure: cmake-install + cmake -S . 
-B cmake-build-debug \ + -DCMAKE_BUILD_TYPE=Debug \ + -DCMAKE_EXPORT_COMPILE_COMMANDS=ON \ + -DPYTHON_EXECUTABLE="$(PYTHON)" + +cmake-build: cmake-configure + cmake --build cmake-build-debug --parallel + +cmake: cmake-build + cpplint: cpplint-install - $(PYTHON) -m cpplint $(CXX_FILES) + $(PYTHON) -m cpplint $(CXX_FILES) $(CUDA_FILES) clang-format: clang-format-install - $(CLANG_FORMAT) --style=file -i $(CXX_FILES) -n --Werror + $(CLANG_FORMAT) --style=file -i $(CXX_FILES) $(CUDA_FILES) -n --Werror + +clang-tidy: clang-tidy-install cmake-configure + clang-tidy -p=cmake-build-debug $(CXX_FILES) # Documentation @@ -156,12 +174,12 @@ clean-docs: # Utility functions -lint: flake8 py-format mypy pylint clang-format cpplint addlicense docstyle spelling +lint: flake8 py-format mypy pylint clang-format clang-tidy cpplint addlicense docstyle spelling format: py-format-install clang-format-install addlicense-install $(PYTHON) -m isort --project $(PROJECT_NAME) $(PYTHON_FILES) $(PYTHON) -m black $(PYTHON_FILES) tutorials - $(CLANG_FORMAT) -style=file -i $(CXX_FILES) + $(CLANG_FORMAT) -style=file -i $(CXX_FILES) $(CUDA_FILES) addlicense -c $(COPYRIGHT) -ignore tests/coverage.xml -l apache -y 2022-$(shell date +"%Y") $(SOURCE_FOLDERS) clean-py: diff --git a/docs/source/spelling_wordlist.txt b/docs/source/spelling_wordlist.txt index bd646f0c..92244376 100644 --- a/docs/source/spelling_wordlist.txt +++ b/docs/source/spelling_wordlist.txt @@ -146,3 +146,4 @@ ATen samplable conj reparameterize +rtype diff --git a/include/adam_op/adam_op.h b/include/adam_op/adam_op.h index 8b7ae2bf..76baea3f 100644 --- a/include/adam_op/adam_op.h +++ b/include/adam_op/adam_op.h @@ -14,6 +14,7 @@ // ============================================================================= #pragma once + #include #include @@ -69,7 +70,7 @@ TensorArray<2> adamBackwardUpdates(const torch::Tensor &dupdates, const pyfloat_t b2, const pyuint_t count); -void buildSubmodule(py::module &mod); // NOLINT +void buildSubmodule(py::module &mod); // NOLINT[runtime/references] } // namespace adam_op } // namespace torchopt diff --git a/include/adam_op/adam_op_impl_cpu.h b/include/adam_op/adam_op_impl_cpu.h index 3e8da376..20f12ae1 100644 --- a/include/adam_op/adam_op_impl_cpu.h +++ b/include/adam_op/adam_op_impl_cpu.h @@ -14,6 +14,7 @@ // ============================================================================= #pragma once + #include #include diff --git a/include/adam_op/adam_op_impl_cuda.cuh b/include/adam_op/adam_op_impl_cuda.cuh index a7ddb937..cdb3ae58 100644 --- a/include/adam_op/adam_op_impl_cuda.cuh +++ b/include/adam_op/adam_op_impl_cuda.cuh @@ -14,6 +14,7 @@ // ============================================================================= #pragma once + #include #include diff --git a/include/common.h b/include/common.h index 5353e48e..ac281eb9 100644 --- a/include/common.h +++ b/include/common.h @@ -14,6 +14,7 @@ // ============================================================================= #pragma once + #include #include diff --git a/include/utils.h b/include/utils.h index 714f98d4..d5cd2e00 100644 --- a/include/utils.h +++ b/include/utils.h @@ -14,6 +14,7 @@ // ============================================================================= #pragma once + #include #include diff --git a/src/adam_op/adam_op.cpp b/src/adam_op/adam_op.cpp index 18bb5d27..57b6ee0f 100644 --- a/src/adam_op/adam_op.cpp +++ b/src/adam_op/adam_op.cpp @@ -149,7 +149,7 @@ TensorArray<2> adamBackwardUpdates(const torch::Tensor &dupdates, } } -void 
buildSubmodule(py::module &mod) {  // NOLINT
+void buildSubmodule(py::module &mod) {  // NOLINT[runtime/references]
   py::module m = mod.def_submodule("adam_op", "Adam Ops");
   m.def("forward_",
         &adamForwardInplace,
diff --git a/src/adam_op/adam_op_impl_cpu.cpp b/src/adam_op/adam_op_impl_cpu.cpp
index cf734c4f..e242bedf 100644
--- a/src/adam_op/adam_op_impl_cpu.cpp
+++ b/src/adam_op/adam_op_impl_cpu.cpp
@@ -40,9 +40,8 @@ void adamForwardInplaceCPUKernel(const other_t b1,
                                  scalar_t *__restrict__ updates_ptr,
                                  scalar_t *__restrict__ mu_ptr,
                                  scalar_t *__restrict__ nu_ptr) {
-#pragma omp parallel for num_threads( \
-    std::min(n / MIN_NUMEL_USE_OMP, \
-             static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)  // NOLINT
+#pragma omp parallel for num_threads(std::min( \
+    n / MIN_NUMEL_USE_OMP, static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)
   for (size_t tid = 0; tid < n; ++tid) {
     const scalar_t updates = updates_ptr[tid];
     const scalar_t mu = mu_ptr[tid];
@@ -94,9 +93,8 @@ void adamForwardMuCPUKernel(const scalar_t *__restrict__ updates_ptr,
                             const other_t b1,
                             const size_t n,
                             scalar_t *__restrict__ mu_out_ptr) {
-#pragma omp parallel for num_threads( \
-    std::min(n / MIN_NUMEL_USE_OMP, \
-             static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)  // NOLINT
+#pragma omp parallel for num_threads(std::min( \
+    n / MIN_NUMEL_USE_OMP, static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)
   for (size_t tid = 0; tid < n; ++tid) {
     const scalar_t updates = updates_ptr[tid];
     const scalar_t mu = mu_ptr[tid];
@@ -128,9 +126,8 @@ void adamForwardNuCPUKernel(const scalar_t *__restrict__ updates_ptr,
                             const other_t b2,
                             const size_t n,
                             scalar_t *__restrict__ nu_out_ptr) {
-#pragma omp parallel for num_threads( \
-    std::min(n / MIN_NUMEL_USE_OMP, \
-             static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)  // NOLINT
+#pragma omp parallel for num_threads(std::min( \
+    n / MIN_NUMEL_USE_OMP, static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)
   for (size_t tid = 0; tid < n; ++tid) {
     const scalar_t updates = updates_ptr[tid];
     const scalar_t nu = nu_ptr[tid];
@@ -166,9 +163,8 @@ void adamForwardUpdatesCPUKernel(const scalar_t *__restrict__ new_mu_ptr,
                                  const other_t eps_root,
                                  const size_t n,
                                  scalar_t *__restrict__ updates_out_ptr) {
-#pragma omp parallel for num_threads( \
-    std::min(n / MIN_NUMEL_USE_OMP, \
-             static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)  // NOLINT
+#pragma omp parallel for num_threads(std::min( \
+    n / MIN_NUMEL_USE_OMP, static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)
   for (size_t tid = 0; tid < n; ++tid) {
     const scalar_t new_mu = new_mu_ptr[tid];
     const scalar_t new_nu = new_nu_ptr[tid];
@@ -212,9 +208,8 @@ void adamBackwardMuCPUKernel(const scalar_t *__restrict__ dmu_ptr,
                              const size_t n,
                              scalar_t *__restrict__ dupdates_out_ptr,
                              scalar_t *__restrict__ dmu_out_ptr) {
-#pragma omp parallel for num_threads( \
-    std::min(n / MIN_NUMEL_USE_OMP, \
-             static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)  // NOLINT
+#pragma omp parallel for num_threads(std::min( \
+    n / MIN_NUMEL_USE_OMP, static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)
   for (size_t tid = 0; tid < n; ++tid) {
     const scalar_t dmu = dmu_ptr[tid];
@@ -249,9 +244,8 @@ void adamBackwardNuCPUKernel(const scalar_t *__restrict__ dnu_ptr,
                              const size_t n,
                              scalar_t *__restrict__ dupdates_out_ptr,
                              scalar_t *__restrict__ dnu_out_ptr) {
-#pragma omp parallel for num_threads( \
-    std::min(n / MIN_NUMEL_USE_OMP, \
-             static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)  // NOLINT
+#pragma omp parallel for num_threads(std::min( \
+    n / MIN_NUMEL_USE_OMP, static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)
   for (size_t tid = 0; tid < n; ++tid) {
     const scalar_t dnu = dnu_ptr[tid];
     const scalar_t updates = updates_ptr[tid];
@@ -290,9 +284,8 @@ void adamBackwardUpdatesCPUKernel(const scalar_t *__restrict__ dupdates_ptr,
                                   const size_t n,
                                   scalar_t *__restrict__ dnew_mu_out_ptr,
                                   scalar_t *__restrict__ dnew_nu_out_ptr) {
-#pragma omp parallel for num_threads( \
-    std::min(n / MIN_NUMEL_USE_OMP, \
-             static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)  // NOLINT
+#pragma omp parallel for num_threads(std::min( \
+    n / MIN_NUMEL_USE_OMP, static_cast<size_t>(omp_get_num_procs()))) if (n > MIN_NUMEL_USE_OMP)
   for (size_t tid = 0; tid < n; ++tid) {
     const scalar_t dupdates = dupdates_ptr[tid];
     const scalar_t updates = updates_ptr[tid];

From 8ff3d12bb36915a47a84f524710c4b6594e89ec6 Mon Sep 17 00:00:00 2001
From: Xuehai Pan
Date: Mon, 30 Jan 2023 12:41:00 +0800
Subject: [PATCH 15/24] deps(workflows): bump Python version for linters

---
 .github/workflows/lint.yml | 4 ++--
 .pre-commit-config.yaml | 2 +-
 Makefile | 2 +-
 conda-recipe-minimal.yaml | 2 +-
 conda-recipe.yaml | 2 +-
 docs/conda-recipe.yaml | 2 +-
 6 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml
index 3af88008..474517a3 100644
--- a/.github/workflows/lint.yml
+++ b/.github/workflows/lint.yml
@@ -26,10 +26,10 @@ jobs:
           submodules: "recursive"
           fetch-depth: 1
-      - name: Set up Python 3.7
+      - name: Set up Python 3.8
         uses: actions/setup-python@v4
         with:
-          python-version: "3.7" # the lowest version we support (sync with requires-python in pyproject.toml)
+          python-version: "3.8"
           update-environment: true
       - name: Setup CUDA Toolkit
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 4d8231bc..9c4fd1fc 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -29,7 +29,7 @@ repos:
       - id: clang-format
         stages: [commit, push, manual]
   - repo: https://github.com/PyCQA/isort
-    rev: 5.11.4
+    rev: 5.12.0
     hooks:
       - id: isort
         stages: [commit, push, manual]
diff --git a/Makefile b/Makefile
index d539f3ee..24c688e0 100644
--- a/Makefile
+++ b/Makefile
@@ -6,7 +6,7 @@ SHELL = /bin/bash
 SOURCE_FOLDERS = $(PROJECT_PATH) examples include src tests docs
 PYTHON_FILES = $(shell find $(SOURCE_FOLDERS) -type f -name "*.py" -o -name "*.pyi")
 CXX_FILES = $(shell find $(SOURCE_FOLDERS) -type f -name "*.h" -o -name "*.cpp")
-CUDA_FILES = $(shell find $(SOURCE_FOLDERS) -type f -name "*.cuh" -o -name "*.cu") 
+CUDA_FILES = $(shell find $(SOURCE_FOLDERS) -type f -name "*.cuh" -o -name "*.cu")
 COMMIT_HASH = $(shell git log -1 --format=%h)
 PATH := $(HOME)/go/bin:$(PATH)
 PYTHON ?= $(shell command -v python3 || command -v python)
diff --git a/conda-recipe-minimal.yaml b/conda-recipe-minimal.yaml
index 4ae91303..5d17d45d 100644
--- a/conda-recipe-minimal.yaml
+++ b/conda-recipe-minimal.yaml
@@ -27,7 +27,7 @@ channels:
   - conda-forge
 dependencies:
-  - python = 3.9
+  - python = 3.10
   - pip
   # Learning
diff --git a/conda-recipe.yaml b/conda-recipe.yaml
index 0947e399..9f4e8f5e 100644
--- a/conda-recipe.yaml
+++ b/conda-recipe.yaml
@@ -27,7 +27,7 @@ channels:
   - conda-forge
 dependencies:
-  - python = 3.9
+  - python = 3.10
   - pip
   # Learning
diff --git a/docs/conda-recipe.yaml b/docs/conda-recipe.yaml
index 07ed7863..673563e4 100644
--- a/docs/conda-recipe.yaml
+++ b/docs/conda-recipe.yaml
@@ -27,7 +27,7 @@ channels:
   - conda-forge
 dependencies:
-  -
python = 3.9 + - python = 3.10 - pip # Learning From 66b0fcee631dee9680426eca47656d52a2b7301d Mon Sep 17 00:00:00 2001 From: Xuehai Pan Date: Tue, 31 Jan 2023 20:26:23 +0800 Subject: [PATCH 16/24] feat(.github): add DependaBot to bump GitHub Action versions --- .github/dependabot.yml | 13 +++++++++++++ 1 file changed, 13 insertions(+) create mode 100644 .github/dependabot.yml diff --git a/.github/dependabot.yml b/.github/dependabot.yml new file mode 100644 index 00000000..24937aad --- /dev/null +++ b/.github/dependabot.yml @@ -0,0 +1,13 @@ +version: 2 +updates: + - package-ecosystem: "github-actions" + directory: "/" + labels: + - dependencies + schedule: + interval: "weekly" + day: "monday" + time: "12:00" + timezone: "Asia/Shanghai" + commit-message: + prefix: "deps(workflows)" From 4b0ad30b5c7961d6be5e54cbef919e1091456f89 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 31 Jan 2023 21:22:19 +0800 Subject: [PATCH 17/24] deps(workflows): bump pypa/cibuildwheel from 2.11.2 to 2.12.0 (#132) Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- .github/workflows/build.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 93539731..fe4e587b 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -132,7 +132,7 @@ jobs: run: python .github/workflows/set_cibw_build.py - name: Build wheels - uses: pypa/cibuildwheel@v2.11.2 + uses: pypa/cibuildwheel@v2.12.0 env: CIBW_BUILD: ${{ env.CIBW_BUILD }} with: @@ -182,7 +182,7 @@ jobs: run: python .github/workflows/set_cibw_build.py - name: Build wheels - uses: pypa/cibuildwheel@v2.11.2 + uses: pypa/cibuildwheel@v2.12.0 env: CIBW_BUILD: ${{ env.CIBW_BUILD }} with: From 2c0a0dfb3dbdfd885935df8aae089de9aa2db66f Mon Sep 17 00:00:00 2001 From: Xuehai Pan Date: Thu, 2 Feb 2023 00:04:38 +0800 Subject: [PATCH 18/24] style: apply suggestions from `pylint` and `black` --- .pylintrc | 4 ++-- examples/L2R/helpers/utils.py | 5 ++--- examples/LOLA/helpers/agent.py | 3 +-- examples/MAML-RL/func_maml.py | 3 +-- .../few-shot/helpers/omniglot_loaders.py | 7 ++---- examples/few-shot/helpers/omniglot_loaders.py | 7 ++---- examples/iMAML/helpers/omniglot_loaders.py | 7 ++---- tests/helpers.py | 4 +--- torchopt/alias/utils.py | 3 +-- torchopt/diff/implicit/decorator.py | 1 - torchopt/diff/implicit/nn/module.py | 2 +- torchopt/distributed/api.py | 4 ++-- torchopt/transform/add_decayed_weights.py | 3 +-- torchopt/transform/utils.py | 4 +--- torchopt/visual.py | 22 +++++++++---------- tutorials/5_Implicit_Differentiation.ipynb | 2 ++ 16 files changed, 32 insertions(+), 49 deletions(-) diff --git a/.pylintrc b/.pylintrc index f0846434..efcfa8fd 100644 --- a/.pylintrc +++ b/.pylintrc @@ -319,8 +319,8 @@ min-public-methods=2 [EXCEPTIONS] # Exceptions that will emit a warning when caught. -overgeneral-exceptions=BaseException, - Exception +overgeneral-exceptions=builtins.BaseException, + builtins.Exception [FORMAT] diff --git a/examples/L2R/helpers/utils.py b/examples/L2R/helpers/utils.py index 954b27b2..fe923860 100644 --- a/examples/L2R/helpers/utils.py +++ b/examples/L2R/helpers/utils.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -33,7 +33,6 @@ def get_imbalance_dataset( class_0=4, class_1=9, ): - ratio = 1 - pos_ratio ratio_test = 0.5 @@ -116,7 +115,7 @@ def get_imbalance_dataset( x_test_subset = x_test_subset[idx].astype(np.float32) y_test_subset = y_test_subset[idx].astype(np.float32) - (x_train_subset, y_train_subset, x_val_subset, y_val_subset, x_test_subset, y_test_subset,) = ( + x_train_subset, y_train_subset, x_val_subset, y_val_subset, x_test_subset, y_test_subset = ( torch.tensor(x_train_subset), torch.tensor(y_train_subset), torch.tensor(x_val_subset), diff --git a/examples/LOLA/helpers/agent.py b/examples/LOLA/helpers/agent.py index 3b37daf2..a8f8ee31 100644 --- a/examples/LOLA/helpers/agent.py +++ b/examples/LOLA/helpers/agent.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -30,7 +30,6 @@ def __init__(self, theta): class Agent: def __init__(self, args): - self.args = args # init theta and its optimizer self.theta = nn.Parameter(torch.zeros(5, requires_grad=True)) diff --git a/examples/MAML-RL/func_maml.py b/examples/MAML-RL/func_maml.py index 6413cc71..2534caeb 100644 --- a/examples/MAML-RL/func_maml.py +++ b/examples/MAML-RL/func_maml.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -156,7 +156,6 @@ def main(args): param_orig = [p.detach().clone().requires_grad_() for p in params] _params = list(params) for idx in range(TASK_NUM): - for _ in range(inner_iters): pre_trajs = sample_traj(env, tasks[idx], fpolicy, _params) inner_loss = a2c_loss(pre_trajs, fpolicy, _params, value_coef=0.5) diff --git a/examples/distributed/few-shot/helpers/omniglot_loaders.py b/examples/distributed/few-shot/helpers/omniglot_loaders.py index d857d386..e8f02042 100644 --- a/examples/distributed/few-shot/helpers/omniglot_loaders.py +++ b/examples/distributed/few-shot/helpers/omniglot_loaders.py @@ -118,7 +118,7 @@ def download(self): def find_classes(root_dir): retour = [] - for (root, dirs, files) in os.walk(root_dir): + for root, dirs, files in os.walk(root_dir): for f in files: if f.endswith('png'): r = root.split('/') @@ -170,7 +170,7 @@ def __init__(self, root, batchsz, n_way, k_shot, k_query, imgsz, rng, device=Non # {label: [img1, img2..., img20], label2: [img1, img2, ...], ... 
1623 labels in total} temp = {} - for (img, label) in self.x: + for img, label in self.x: if label in temp.keys(): temp[label].append(img) else: @@ -255,15 +255,12 @@ def load_data_cache(self, data_pack): # print('preload next 50 caches of batchsz of batch.') for sample in range(10): # num of episodes - x_spts, y_spts, x_qrys, y_qrys = [], [], [], [] for i in range(self.batchsz): # one batch means one set - x_spt, y_spt, x_qry, y_qry = [], [], [], [] selected_cls = self.rng.choice(data_pack.shape[0], self.n_way, False) for j, cur_class in enumerate(selected_cls): - selected_img = self.rng.choice(20, self.k_shot + self.k_query, False) # meta-training and meta-test diff --git a/examples/few-shot/helpers/omniglot_loaders.py b/examples/few-shot/helpers/omniglot_loaders.py index d857d386..e8f02042 100644 --- a/examples/few-shot/helpers/omniglot_loaders.py +++ b/examples/few-shot/helpers/omniglot_loaders.py @@ -118,7 +118,7 @@ def download(self): def find_classes(root_dir): retour = [] - for (root, dirs, files) in os.walk(root_dir): + for root, dirs, files in os.walk(root_dir): for f in files: if f.endswith('png'): r = root.split('/') @@ -170,7 +170,7 @@ def __init__(self, root, batchsz, n_way, k_shot, k_query, imgsz, rng, device=Non # {label: [img1, img2..., img20], label2: [img1, img2, ...], ... 1623 labels in total} temp = {} - for (img, label) in self.x: + for img, label in self.x: if label in temp.keys(): temp[label].append(img) else: @@ -255,15 +255,12 @@ def load_data_cache(self, data_pack): # print('preload next 50 caches of batchsz of batch.') for sample in range(10): # num of episodes - x_spts, y_spts, x_qrys, y_qrys = [], [], [], [] for i in range(self.batchsz): # one batch means one set - x_spt, y_spt, x_qry, y_qry = [], [], [], [] selected_cls = self.rng.choice(data_pack.shape[0], self.n_way, False) for j, cur_class in enumerate(selected_cls): - selected_img = self.rng.choice(20, self.k_shot + self.k_query, False) # meta-training and meta-test diff --git a/examples/iMAML/helpers/omniglot_loaders.py b/examples/iMAML/helpers/omniglot_loaders.py index d857d386..e8f02042 100644 --- a/examples/iMAML/helpers/omniglot_loaders.py +++ b/examples/iMAML/helpers/omniglot_loaders.py @@ -118,7 +118,7 @@ def download(self): def find_classes(root_dir): retour = [] - for (root, dirs, files) in os.walk(root_dir): + for root, dirs, files in os.walk(root_dir): for f in files: if f.endswith('png'): r = root.split('/') @@ -170,7 +170,7 @@ def __init__(self, root, batchsz, n_way, k_shot, k_query, imgsz, rng, device=Non # {label: [img1, img2..., img20], label2: [img1, img2, ...], ... 1623 labels in total} temp = {} - for (img, label) in self.x: + for img, label in self.x: if label in temp.keys(): temp[label].append(img) else: @@ -255,15 +255,12 @@ def load_data_cache(self, data_pack): # print('preload next 50 caches of batchsz of batch.') for sample in range(10): # num of episodes - x_spts, y_spts, x_qrys, y_qrys = [], [], [], [] for i in range(self.batchsz): # one batch means one set - x_spt, y_spt, x_qry, y_qry = [], [], [], [] selected_cls = self.rng.choice(data_pack.shape[0], self.n_way, False) for j, cur_class in enumerate(selected_cls): - selected_img = self.rng.choice(20, self.k_shot + self.k_query, False) # meta-training and meta-test diff --git a/tests/helpers.py b/tests/helpers.py index 6c7c4f01..fce77725 100644 --- a/tests/helpers.py +++ b/tests/helpers.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -166,7 +166,6 @@ def assert_model_all_close( atol: Optional[float] = None, equal_nan: bool = False, ): - if isinstance(model, tuple): params, buffers = model elif isinstance(model, nn.Module): @@ -191,7 +190,6 @@ def assert_all_close( atol: Optional[float] = None, equal_nan: bool = False, ) -> None: - if base is not None: actual = actual - base expected = expected - base diff --git a/torchopt/alias/utils.py b/torchopt/alias/utils.py index 0ef0754c..08f9fa08 100644 --- a/torchopt/alias/utils.py +++ b/torchopt/alias/utils.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -57,7 +57,6 @@ def f(g, p): return updates, state else: # gradient ascent - if weight_decay == 0.0: # pylint: disable-next=unused-argument def update_fn(updates, state, *, params=None, inplace=True): diff --git a/torchopt/diff/implicit/decorator.py b/torchopt/diff/implicit/decorator.py index 64dd15dd..377bc1f4 100644 --- a/torchopt/diff/implicit/decorator.py +++ b/torchopt/diff/implicit/decorator.py @@ -91,7 +91,6 @@ def _root_vjp( argnums: Tuple[int, ...], solve: Callable[..., TensorOrTensors] = linear_solve.solve_normal_cg(), ) -> TupleOfOptionalTensors: - if output_is_tensor: def optimality_cond(solution: TupleOfTensors) -> TensorOrTensors: diff --git a/torchopt/diff/implicit/nn/module.py b/torchopt/diff/implicit/nn/module.py index 4e2436f6..adac97db 100644 --- a/torchopt/diff/implicit/nn/module.py +++ b/torchopt/diff/implicit/nn/module.py @@ -110,7 +110,7 @@ def enable_implicit_gradients( raise TypeError('Implicit gradients are already enabled for the `solve` method.') if cls.linear_solve is not None: - solve_kwargs = dict(solve=cls.linear_solve) + solve_kwargs = {'solve': cls.linear_solve} else: solve_kwargs = {} diff --git a/torchopt/distributed/api.py b/torchopt/distributed/api.py index d701dcd7..4a969a6a 100644 --- a/torchopt/distributed/api.py +++ b/torchopt/distributed/api.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -202,7 +202,7 @@ def __reduce__( return ( TensorDimensionPartitioner, (self.dim,), - dict(exclusive=self.exclusive, keepdim=self.keepdim, workers=self.workers), + {'exclusive': self.exclusive, 'keepdim': self.keepdim, 'workers': self.workers}, ) diff --git a/torchopt/transform/add_decayed_weights.py b/torchopt/transform/add_decayed_weights.py index d3a878e8..48a117f5 100644 --- a/torchopt/transform/add_decayed_weights.py +++ b/torchopt/transform/add_decayed_weights.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -100,7 +100,6 @@ def _masked( *, already_flattened: bool = False, ) -> GradientTransformation: - if already_flattened: tree_map = tree_map_flat else: diff --git a/torchopt/transform/utils.py b/torchopt/transform/utils.py index 943fe71e..b3adedc8 100644 --- a/torchopt/transform/utils.py +++ b/torchopt/transform/utils.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -119,7 +119,6 @@ def _update_moment(updates, moments, decay, *, order, inplace=True, already_flat assert order in (1, 2) if inplace: - if order == 2: def f(g, t): @@ -131,7 +130,6 @@ def f(g, t): return t.mul_(decay).add_(g, alpha=1 - decay) if g is not None else t else: - if order == 2: def f(g, t): diff --git a/torchopt/visual.py b/torchopt/visual.py index a30c27de..83872b8c 100644 --- a/torchopt/visual.py +++ b/torchopt/visual.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -138,16 +138,16 @@ def make_dot( else: param_map.update({v: k for k, v in cast(Mapping, param).items()}) - node_attr = dict( - style='filled', - shape='box', - align='left', - fontsize='10', - ranksep='0.1', - height='0.2', - fontname='monospace', - ) - dot = Digraph(node_attr=node_attr, graph_attr=dict(size='12,12')) + node_attr = { + 'style': 'filled', + 'shape': 'box', + 'align': 'left', + 'fontsize': '10', + 'ranksep': '0.1', + 'height': '0.2', + 'fontname': 'monospace', + } + dot = Digraph(node_attr=node_attr, graph_attr={'size': '12,12'}) seen = set() def size_to_str(size): diff --git a/tutorials/5_Implicit_Differentiation.ipynb b/tutorials/5_Implicit_Differentiation.ipynb index 31413fc8..21dd2ed6 100644 --- a/tutorials/5_Implicit_Differentiation.ipynb +++ b/tutorials/5_Implicit_Differentiation.ipynb @@ -123,6 +123,7 @@ "# backpropogate, in this case we want to backpropogate to the initial parameters so we set it as 1.\n", "# You can also set argnums as (1, 2) if you want to backpropogate through multiple meta-parameters\n", "\n", + "\n", "# Here we pass argnums=1 to the custom_root. That means we want to compute the gradient of\n", "# optimal_params w.r.t. 
the 1-indexed argument in inner_solver, i.e., params.\n", "# torchopt.linear_solve.solve_normal_cg specify that we use the conjugate gradient based linear solver\n", @@ -241,6 +242,7 @@ "fmodel, meta_params = functorch.make_functional(model)\n", "data = (x, y, fmodel)\n", "\n", + "\n", "# Clone function for parameters\n", "def clone(params):\n", " cloned = []\n", From 773d140d970464fe278aa33157b968d992bd4ba8 Mon Sep 17 00:00:00 2001 From: Bo Liu Date: Thu, 2 Feb 2023 23:51:25 +0800 Subject: [PATCH 19/24] docs: refactor documentation (#127) Co-authored-by: waterhorse1 <1098616530@qq.com> Co-authored-by: Xuehai Pan Co-authored-by: Jie Ren --- .gitignore | 1 + .pre-commit-config.yaml | 5 +- .pylintrc | 2 +- CHANGELOG.md | 15 +- LICENSE | 2 +- Makefile | 6 +- README.md | 49 +- conda-recipe-minimal.yaml | 3 +- conda-recipe.yaml | 3 +- docs/conda-recipe.yaml | 3 +- .../_static/images/explicit-gradient.png | Bin 0 -> 171327 bytes .../_static/images/implicit-gradient.png | Bin 0 -> 177159 bytes .../_static/images/visualization-fig1.svg | 57 ++ .../_static/images/visualization-fig2.svg | 106 +++ .../_static/images/visualization-fig3.svg | 339 ++++++++ docs/source/_static/images/zero-order.png | Bin 0 -> 129922 bytes docs/source/api/api.rst | 34 +- docs/source/basics/basics.rst | 34 + docs/source/conf.py | 36 +- docs/source/developer/contributing.rst | 4 +- docs/source/developer/contributor.rst | 4 +- docs/source/distributed/distributed.rst | 740 ++++++++++++++++++ docs/source/explicit_diff/explicit_diff.rst | 162 ++++ docs/source/implicit_diff/implicit_diff.rst | 178 +++++ docs/source/index.rst | 49 +- docs/source/optimizer/optim.rst | 193 +++++ docs/source/spelling_wordlist.txt | 23 + docs/source/visualization/visualization.rst | 146 ++++ .../zero_order_diff/zero_order_diff.rst | 146 ++++ tutorials/1_Functional_Optimizer.ipynb | 6 +- tutorials/2_Visualization.ipynb | 2 +- tutorials/3_Meta_Optimizer.ipynb | 8 +- tutorials/5_Implicit_Differentiation.ipynb | 2 +- tutorials/6_Zero_Order_Differentiation.ipynb | 52 +- 34 files changed, 2299 insertions(+), 111 deletions(-) create mode 100644 docs/source/_static/images/explicit-gradient.png create mode 100644 docs/source/_static/images/implicit-gradient.png create mode 100644 docs/source/_static/images/visualization-fig1.svg create mode 100644 docs/source/_static/images/visualization-fig2.svg create mode 100644 docs/source/_static/images/visualization-fig3.svg create mode 100644 docs/source/_static/images/zero-order.png create mode 100644 docs/source/basics/basics.rst create mode 100644 docs/source/distributed/distributed.rst create mode 100644 docs/source/explicit_diff/explicit_diff.rst create mode 100644 docs/source/implicit_diff/implicit_diff.rst create mode 100644 docs/source/optimizer/optim.rst create mode 100644 docs/source/visualization/visualization.rst create mode 100644 docs/source/zero_order_diff/zero_order_diff.rst diff --git a/.gitignore b/.gitignore index 62b1adbc..450d7b0c 100644 --- a/.gitignore +++ b/.gitignore @@ -77,6 +77,7 @@ instance/ # Sphinx documentation docs/_build/ docs/source/_build/ +_autosummary/ # PyBuilder .pybuilder/ diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 9c4fd1fc..66f0bdf0 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -34,7 +34,7 @@ repos: - id: isort stages: [commit, push, manual] - repo: https://github.com/psf/black - rev: 22.12.0 + rev: 23.1.0 hooks: - id: black-jupyter stages: [commit, push, manual] @@ -59,6 +59,7 @@ repos: stages: [commit, push, manual] exclude: | 
(?x)( + ^docs/| ^examples/| ^tests/| ^setup.py$ @@ -71,8 +72,8 @@ repos: exclude: | (?x)( ^.github/| + ^docs/| ^examples/| ^tests/| - ^docs/source/conf.py$| ^setup.py$ ) diff --git a/.pylintrc b/.pylintrc index efcfa8fd..accc71d5 100644 --- a/.pylintrc +++ b/.pylintrc @@ -48,7 +48,7 @@ ignore=CVS,.vscode,.history # ignore-list. The regex matches against paths and can be in Posix or Windows # format. Because '\' represents the directory delimiter on Windows systems, it # can't be used as an escape character. -ignore-paths=^_C/$,^examples/$,^tests/$ +ignore-paths=^_C/$,^docs/$,^examples/$,^tests/$ # Files or directories matching the regular expression patterns are skipped. # The regex matches against base names, not paths. The default value ignores diff --git a/CHANGELOG.md b/CHANGELOG.md index ea62b695..10d29960 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -13,6 +13,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Added +- Update Sphinx documentation by [@XuehaiPan](https://github.com/XuehaiPan) and [@Benjamin-eecs](https://github.com/Benjamin-eecs) and [@waterhorse1](https://github.com/waterhorse1) and [@JieRen98](https://github.com/JieRen98) in [#127](https://github.com/metaopt/torchopt/pull/127). - Add object-oriented modules support for zero-order differentiation by [@XuehaiPan](https://github.com/XuehaiPan) in [#125](https://github.com/metaopt/torchopt/pull/125). ### Changed @@ -147,10 +148,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ------ -[Unreleased]: https://github.com/olivierlacan/keep-a-changelog/compare/v0.6.0...HEAD -[0.6.0]: https://github.com/olivierlacan/keep-a-changelog/compare/v0.5.0...v0.6.0 -[0.5.0]: https://github.com/olivierlacan/keep-a-changelog/compare/v0.4.3...v0.5.0 -[0.4.3]: https://github.com/olivierlacan/keep-a-changelog/compare/v0.4.2...v0.4.3 -[0.4.2]: https://github.com/olivierlacan/keep-a-changelog/compare/v0.4.1...v0.4.2 -[0.4.1]: https://github.com/olivierlacan/keep-a-changelog/compare/v0.4.0...v0.4.1 -[0.4.0]: https://github.com/olivierlacan/keep-a-changelog/releases/tag/v0.4.0 +[Unreleased]: https://github.com/metaopt/torchopt/compare/v0.6.0...HEAD +[0.6.0]: https://github.com/metaopt/torchopt/compare/v0.5.0...v0.6.0 +[0.5.0]: https://github.com/metaopt/torchopt/compare/v0.4.3...v0.5.0 +[0.4.3]: https://github.com/metaopt/torchopt/compare/v0.4.2...v0.4.3 +[0.4.2]: https://github.com/metaopt/torchopt/compare/v0.4.1...v0.4.2 +[0.4.1]: https://github.com/metaopt/torchopt/compare/v0.4.0...v0.4.1 +[0.4.0]: https://github.com/metaopt/torchopt/releases/tag/v0.4.0 diff --git a/LICENSE b/LICENSE index 710ed864..8d26c203 100644 --- a/LICENSE +++ b/LICENSE @@ -187,7 +187,7 @@ same "printed page" as the copyright notice for easier identification within third-party archives. - Copyright [2022] [MetaOPT Team. All Rights Reserved.] + Copyright [2022-2023] [MetaOPT Team. All Rights Reserved.] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
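A side note on the `(?x)` patterns used in the `.pre-commit-config.yaml` `exclude` fields and the `.pylintrc` `ignore-paths` above: Python's verbose-regex flag ignores insignificant whitespace, which is what allows each excluded path to sit on its own line in the YAML. A small hypothetical check (the paths are illustrative, not taken from the patch):

```python
import re

# `(?x)` turns on verbose mode, so the alternation can be laid out
# one alternative per line, mirroring the pre-commit config above.
exclude = re.compile(
    r'''(?x)(
        ^docs/|
        ^examples/|
        ^tests/|
        ^setup.py$
    )'''
)

assert exclude.match('docs/source/conf.py')      # excluded from the hook
assert exclude.match('setup.py')                 # excluded from the hook
assert not exclude.match('torchopt/version.py')  # still checked
```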
diff --git a/Makefile b/Makefile index 24c688e0..241516f6 100644 --- a/Makefile +++ b/Makefile @@ -42,6 +42,7 @@ check_pip_install_extra = $(PYTHON) -m pip show $(1) &>/dev/null || (cd && $(PYT pylint-install: $(call check_pip_install_extra,pylint,pylint[spelling]) + $(call check_pip_install,pyenchant) flake8-install: $(call check_pip_install,flake8) @@ -82,6 +83,9 @@ pytest-install: $(call check_pip_install,pytest-cov) $(call check_pip_install,pytest-xdist) +test-install: pytest-install + $(PYTHON) -m pip install --requirement tests/requirements.txt + cmake-install: command -v cmake || $(call check_pip_install,cmake) @@ -105,7 +109,7 @@ addlicense-install: go-install # Tests -pytest: pytest-install +pytest: test-install cd tests && \ $(PYTHON) -m pytest --verbose --color=yes --durations=0 \ --cov="$(PROJECT_NAME)" --cov-report=xml --cov-report=term-missing \ diff --git a/README.md b/README.md index 9b7769c6..ea045c09 100644 --- a/README.md +++ b/README.md @@ -30,11 +30,11 @@ **TorchOpt** is an efficient library for differentiable optimization built upon [PyTorch](https://pytorch.org). TorchOpt is: -- **Comprehensive**: TorchOpt provides three differentiation mode - explicit differentiation, implicit differentiation and zero-order differentiation for handling different differentiable optimization situations. -- **Flexible**: TorchOpt provides both functional and objective-oriented API for user different preferences. Users can implement differentiable optimization in JAX-like or PyTorch-like style. -- **Efficient**: TorchOpt provides (1) CPU/GPU acceleration differentiable optimizer (2) RPC-based distributed training framework (3) Fast Tree Operations, to largely increase the training efficiency for bi-level optimization problem. +- **Comprehensive**: TorchOpt provides three differentiation modes - explicit differentiation, implicit differentiation, and zero-order differentiation for handling different differentiable optimization situations. +- **Flexible**: TorchOpt provides both functional and objective-oriented API for users' different preferences. Users can implement differentiable optimization in JAX-like or PyTorch-like style. +- **Efficient**: TorchOpt provides (1) CPU/GPU acceleration differentiable optimizer (2) RPC-based distributed training framework (3) Fast Tree Operations, to largely increase the training efficiency for bi-level optimization problems. -Beyond differentiable optimization, TorchOpt can also be regarded as a functional optimizer which enables [JAX-like](https://github.com/google/jax) composable functional optimizer for PyTorch. +Beyond differentiable optimization, TorchOpt can also be regarded as a functional optimizer that enables [JAX-like](https://github.com/google/jax) composable functional optimizer for PyTorch. With TorchOpt, users can easily conduct neural network optimization in PyTorch with functional style optimizer, similar to [Optax](https://github.com/deepmind/optax) in JAX. -------------------------------------------------------------------------------- @@ -66,13 +66,13 @@ The README is organized as follows: ## TorchOpt as Functional Optimizer The design of TorchOpt follows the philosophy of functional programming. -Aligned with [`functorch`](https://github.com/pytorch/functorch), users can conduct functional style programing with models, optimizers and training in PyTorch. +Aligned with [`functorch`](https://github.com/pytorch/functorch), users can conduct functional style programming with models, optimizers and training in PyTorch. 
We use the Adam optimizer as an example in the following illustration. You can also check out the tutorial notebook [Functional Optimizer](tutorials/1_Functional_Optimizer.ipynb) for more details. ### Optax-Like API -For those users who prefer fully functional programing, we offer Optax-Like API by passing gradients and optimizers states to the optimizer function. +For those users who prefer fully functional programming, we offer Optax-Like API by passing gradients and optimizer states to the optimizer function. Here is an example coupled with `functorch`: ```python @@ -114,7 +114,7 @@ for xs, ys in loader: # get data ### PyTorch-Like API -We also design base class `torchopt.Optimizer` that has the same interface as `torch.optim.Optimizer`. +We also design a base class `torchopt.Optimizer` that has the same interface as `torch.optim.Optimizer`. We offer origin PyTorch APIs (e.g. `zero_grad()` or `step()`) by wrapping our Optax-Like API for traditional PyTorch users. ```python @@ -134,8 +134,8 @@ optimizer.step() # step updates ### Differentiable On top of the same optimization function as `torch.optim`, an important benefit of functional optimizer is that one can implement differentiable optimization easily. -This is particularly helpful when the algorithm requires to differentiate through optimization update (such as meta-learning practices). -We take as the inputs the gradients and optimizer states, use non-in-place operators to compute and output the updates. +This is particularly helpful when the algorithm requires to differentiate through optimization updates (such as meta-learning practices). +We take as the inputs the gradients and optimizer states, and use non-in-place operators to compute and output the updates. The processes can be automatically implemented, with the only need from users being to pass the argument `inplace=False` to the functions. Check out section [Explicit Gradient (EG)](#explicit-gradient-eg) functional API for example. @@ -196,7 +196,7 @@ Refer to the example and the tutorial notebook [Meta-Optimizer](tutorials/3_Meta meta_params = ... model = ... # Define differentiable optimizer -optimizer = torchopt.MetaAdam(model) # a model instance as argument instead of model.parameters() +optimizer = torchopt.MetaAdam(model) # a model instance as the argument instead of model.parameters() for iter in range(iter_times): # Perform inner update @@ -212,7 +212,7 @@ loss.backward() By treating the solution $\theta^{\prime}$ as an implicit function of $\phi$, the idea of IG is to directly get analytical best-response derivatives $\partial \theta^{\prime} (\phi) / \partial \phi$ by [implicit function theorem](https://en.wikipedia.org/wiki/Implicit_function_theorem). This is suitable for algorithms when the inner-level optimal solution is achieved ${\left. \frac{\partial F (\theta, \phi)}{\partial \theta} \right\rvert}_{\theta=\theta^{\prime}} = 0$ or reaches some stationary conditions $F (\theta^{\prime}, \phi) = 0$, such as [iMAML](https://arxiv.org/abs/1909.04630) and [DEQ](https://arxiv.org/abs/1909.01377). TorchOpt offers both functional and OOP APIs for supporting both [conjugate gradient-based](https://arxiv.org/abs/1909.04630) and [Neumann series-based](https://arxiv.org/abs/1911.02590) IG methods. -Refer to the example [iMAML](https://github.com/waterhorse1/torchopt/tree/readme/examples/iMAML) and the notebook [Implicit Gradient](tutorials/5_Implicit_Differentiation.ipynb) for more guidances. 
 #### Functional API

@@ -275,7 +275,7 @@ class InnerNet(ImplicitMetaGradientModule, linear_solve=linear_solver):
 meta_params, data = ..., ...
 inner_net = InnerNet(meta_params)

-# Solve for inner-loop process related with the meta-parameters
+# Solve for inner-loop process related to the meta-parameters
 optimal_inner_net = inner_net.solve(data)

 # Get outer loss and solve for meta-gradient
@@ -356,7 +356,7 @@ grads = torch.autograd.grad(loss, net.parameters())

 We take the optimizer as a whole instead of separating it into several basic operators (e.g., `sqrt` and `div`).
 Therefore, by manually writing the forward and backward functions, we can perform the symbolic reduction.
 In addition, we can store some intermediate data that can be reused during the backpropagation.
-We write the accelerated functions in C++ OpenMP and CUDA, bind them by [`pybind11`](https://github.com/pybind/pybind11) to allow they can be called by Python, and then we define the forward and backward behavior using `torch.autograd.Function`.
+We write the accelerated functions in C++ OpenMP and CUDA, bind them with [`pybind11`](https://github.com/pybind/pybind11) so that they can be called from Python, and then define the forward and backward behavior using `torch.autograd.Function`.
 Users can use by simply setting the `use_accelerated_op` flag as `True`.
 Refer to the corresponding sections in tutorials [Functional Optimizer](tutorials/1_Functional_Optimizer.ipynb) and [Meta-Optimizer](tutorials/3_Meta_Optimizer.ipynb)
@@ -368,22 +368,22 @@ optimizer = torchopt.MetaAdam(model, lr, use_accelerated_op=True)

 `TorchOpt` provides distributed training features based on the PyTorch RPC module for better training speed and multi-node multi-GPU support.
 Different from the MPI-like parallelization paradigm, which uses multiple homogenous workers and requires carefully designed communication hooks, the RPC APIs allow users to build their optimization pipeline more flexibly.
-Experimental results show that we achieve approximately linear relationship between the speed-up ratio and the number of workers.
-Check out the [distributed MAML example](https://github.com/metaopt/torchopt/tree/main/examples/distributed/few-shot) for more specific guidance.
+Experimental results show that we achieve an approximately linear relationship between the speed-up ratio and the number of workers.
+Check out the [Distributed Training Documentation](https://torchopt.readthedocs.io/en/latest/distributed/distributed.html) and [distributed MAML example](https://github.com/metaopt/torchopt/tree/main/examples/distributed/few-shot) for more specific guidance.

 ### OpTree

-We implement the *PyTree* to enable fast nested structure flatten using C++.
+We implement the *PyTree* to enable fast nested structure flattening using C++.
 The tree operations (e.g., flatten and unflatten) are very important in enabling functional and Just-In-Time (JIT) features of deep learning frameworks.
-By implementing it in C++, we can use some cache/memory friendly structures (e.g., `absl::InlinedVector`) to improve the performance.
-For more guidance and comparison results, please refer to our open source project [`OpTree`](https://github.com/metaopt/optree).
+By implementing it in C++, we can use some cache/memory-friendly structures (e.g., `absl::InlinedVector`) to improve the performance.
+For more guidance and comparison results, please refer to our open-source project [`OpTree`](https://github.com/metaopt/optree).
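(For illustration, a flatten/unflatten round trip with OpTree looks like this; the nested container is an arbitrary toy example.)

```python
# Flatten an arbitrary nested container into leaves plus a structure spec,
# transform the leaves, and rebuild the original nesting.
import optree

tree = {'layer1': {'weight': 1.0, 'bias': 2.0}, 'layer2': (3.0, 4.0)}
leaves, treespec = optree.tree_flatten(tree)        # leaf list + structure spec
doubled = [2.0 * leaf for leaf in leaves]           # operate on the flat view
rebuilt = optree.tree_unflatten(treespec, doubled)  # same structure, new leaves
```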
 --------------------------------------------------------------------------------

 ## Visualization

 Complex gradient flow in meta-learning brings in a great challenge for managing the gradient flow and verifying the correctness of it.
-TorchOpt provides a visualization tool that draw variable (e.g., network parameters or meta-parameters) names on the gradient graph for better analyzing.
+TorchOpt provides a visualization tool that draws variable names (e.g., network parameters or meta-parameters) on the gradient graph for better analysis.
 The visualization tool is modified from [`torchviz`](https://github.com/szagoruyko/pytorchviz).
 Refer to the example [visualization code](examples/visualize.py) and the tutorial notebook [Visualization](tutorials/2_Visualization.ipynb) for more details.
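(A minimal sketch of such a visualization call on a toy model, assuming `torchopt.visual.make_dot` keeps `torchviz`'s calling convention.)

```python
# Render the backward graph of a toy loss with parameter names attached.
import torch
import torchopt

net = torch.nn.Linear(3, 1)
loss = net(torch.randn(8, 3)).sum()
dot = torchopt.visual.make_dot(loss, params=dict(net.named_parameters()))
dot.render('graph', format='png')  # requires the Graphviz binaries mentioned below
```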
@@ -398,7 +398,7 @@ Compared with [`torchviz`](https://github.com/szagoruyko/pytorchviz), TorchOpt f
 ## Examples

-In the [`examples`](examples) directory, we offer several examples of functional optimizer and light-weight meta-learning examples with TorchOpt.
+In the [`examples`](examples) directory, we offer several functional optimizer and light-weight meta-learning examples with TorchOpt.

 - [Model-Agnostic Meta-Learning (MAML) - Supervised Learning](https://arxiv.org/abs/1703.03400) (ICML 2017)
 - [Learning to Reweight Examples for Robust Deep Learning](https://arxiv.org/abs/1803.09050) (ICML 2018)
@@ -419,13 +419,15 @@ Requirements
 - (Optional) For visualizing computation graphs
   - [Graphviz](https://graphviz.org/download) (for Linux users use `apt/yum install graphviz` or `conda install -c anaconda python-graphviz`)

-**Please follow the instructions at <https://pytorch.org> to install PyTorch in your Python environment first.** Then run the following command to install TorchOpt from PyPI ([![PyPI](https://img.shields.io/pypi/v/torchopt?label=pypi&logo=pypi)](https://pypi.org/project/torchopt) / ![Status](https://img.shields.io/pypi/status/torchopt?label=status)):
+**Please follow the instructions at <https://pytorch.org> to install PyTorch in your Python environment first.**
+Then run the following command to install TorchOpt from PyPI ([![PyPI](https://img.shields.io/pypi/v/torchopt?label=pypi&logo=pypi)](https://pypi.org/project/torchopt) / ![Status](https://img.shields.io/pypi/status/torchopt?label=status)):

 ```bash
 pip3 install torchopt
 ```

-If the minimum version of PyTorch is not satisfied, `pip` will install/upgrade it for you. Please be careful about the `torch` build for CPU / CUDA support (e.g. `cpu`, `cu116`, `cu117`). You may need to specify the extra index URL for the `torch` package:
+If the minimum version of PyTorch is not satisfied, `pip` will install/upgrade it for you. Please be careful about the `torch` build for CPU / CUDA support (e.g. `cpu`, `cu116`, `cu117`).
+You may need to specify the extra index URL for the `torch` package:

 ```bash
 pip3 install torchopt --extra-index-url https://download.pytorch.org/whl/cu117
 ```
@@ -441,7 +443,8 @@ cd torchopt
 pip3 install .
 ```

-We provide a [conda](https://github.com/conda/conda) environment recipe to install the build toolchain such as `cmake`, `g++`, and `nvcc`:
+We provide a [conda](https://github.com/conda/conda) environment recipe to install the build toolchain such as `cmake`, `g++`, and `nvcc`.
+You can use the following commands with [`conda`](https://github.com/conda/conda) / [`mamba`](https://github.com/mamba-org/mamba) to create a new isolated environment.

 ```bash
 git clone https://github.com/metaopt/torchopt.git
diff --git a/conda-recipe-minimal.yaml b/conda-recipe-minimal.yaml
index 5d17d45d..c3d155b8 100644
--- a/conda-recipe-minimal.yaml
+++ b/conda-recipe-minimal.yaml
@@ -1,4 +1,4 @@
-# Copyright 2022 MetaOPT Team. All Rights Reserved.
+# Copyright 2022-2023 MetaOPT Team. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -44,7 +44,6 @@ dependencies:
   - cmake >= 3.11
   - make
   - cxx-compiler
-  - gxx = 10
   - nvidia/label/cuda-11.7.1::cuda-nvcc
   - nvidia/label/cuda-11.7.1::cuda-cudart-dev
   - pybind11 >= 2.10.1
diff --git a/conda-recipe.yaml b/conda-recipe.yaml
index 9f4e8f5e..74d2d94d 100644
--- a/conda-recipe.yaml
+++ b/conda-recipe.yaml
@@ -1,4 +1,4 @@
-# Copyright 2022 MetaOPT Team. All Rights Reserved.
+# Copyright 2022-2023 MetaOPT Team. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -50,7 +50,6 @@ dependencies:
   - cmake >= 3.11
   - make
   - cxx-compiler
-  - gxx = 10
   - nvidia/label/cuda-11.7.1::cuda-nvcc
   - nvidia/label/cuda-11.7.1::cuda-cudart-dev
   - patchelf >= 0.14
diff --git a/docs/conda-recipe.yaml b/docs/conda-recipe.yaml
index 673563e4..cc68310e 100644
--- a/docs/conda-recipe.yaml
+++ b/docs/conda-recipe.yaml
@@ -1,4 +1,4 @@
-# Copyright 2022 MetaOPT Team. All Rights Reserved.
+# Copyright 2022-2023 MetaOPT Team. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -42,7 +42,6 @@ dependencies:
   - cmake >= 3.11
   - make
   - cxx-compiler
-  - gxx = 10
   - nvidia/label/cuda-11.7.1::cuda-nvcc
   - nvidia/label/cuda-11.7.1::cuda-cudart-dev
   - pybind11 >= 2.10.1
diff --git a/docs/source/_static/images/explicit-gradient.png b/docs/source/_static/images/explicit-gradient.png
new file mode 100644
index 0000000000000000000000000000000000000000..90cf4d4d1b45d0c1e1bf74f7619eaa9d20eb7eb0
GIT binary patch
literal 171327
z`rMg+^~ooXiYtlhas)9=Kyn$Lzg`{mNS=@%QJs{SI1!ID%$#Djmm4+&4wiZ1PHSFS z;)i3?y@HXl=a*x!wV9uLo)l@MRodvyBB3KEbbckci8A5gE zanC1r9EOnJOy30v@QI2gEACuUcT5`IukM8t#Uj4|`{v2Yft&7gmD#9n^6U@qahY?< zrH1f!(;`SIzoID)0l#be>qBmx0iqkarN+luoz;P@?f7I4|BDr=e z&-L6xsL17ec&|b$fn#OQM&gH4+?eX%doAqx4=3mOh$>~h81uOhz2W?GL$~woJG?ln zf4_Sz&8EU!f?Ff3u1)sVH~81%N5(3(eE~wA_Y-{<^aQcmnM3_*b}Pe0i1hMih&Y6) zU09j>LTT>cPOj`OE%SZNxt2Ert3<9Fgt&G@(|UYb7F`^c8*{S}5uGKZTUIK+{`O(rU>K2ov8 zu@$;kzI-#zx5Au1bC8{CzgMbGFfuZd>5FIOk>AD@SN#Elyj#NNooUh--1K?kSa#PE z;zHtH_kOGx&mm2klM6laz7i*$8WS6v5BHx}qv2N(j9n!4W!(Wg9s>ut-SDW#%yxzA~)6HVevQkA-!3$Oo#O{Car86?Q zzP6Ei8zoI7cKNIgqf$~+#crZdjK6Qr2PuNsEcrbqDvY*TIgaw#pwXw!S7Siu2Ckg) zBT&O8){y@qB_opunsWiT@aP2vwfcR0L@zWjnBil@iCP^;vy>XLp&tr^;BD0cDb)l# z7XF+Xz0zx()paL4F=o>bUDJtlsoU9|AJt|wM$!zDD2{#PMo2}^n+WAx6=mIx{s+vC z*H92n^KZI^!TnPi6I%Lm)^+$%oZGjLaXl+St5}$GL*AH~cJDfsH(^nRD<8L*%ak$t z9?B2|FejEmJkWUEeGPLb&kgSt9cWLI$>4d>C1tG`e`S!5{`=m!>s`Xm4s*52&!V!> z@%#7FLOU}zRbDm?ialy<^#n!flx^#ekB8f{uYh^=piy%24+WM79r!cJvh`vr=rXrr zer*lofb&oy6MU*Vi@GJ@wGw0aTL0=dBd{ne05#$;a5J@tg29kg6zqQ22}eIC?ov*` zX7zdbF%dDHaBuO9CFWc%X@NcL82VO$AK@%C1MGMAw6ZLlrNN$!SdzhOGqYDz<0$`x z*Rm3 z)04EiKj^F5C9?);d=0Qfei$vgJW;_YqQWiwcX`xd_NnF)b=G8)>w~(bp{14nAX=Ez zk!KJL7x3`+fRgS>_r%aJ?ZI7KVS?TnKvIh(ne(8)aHacy3f%>zV71{5KE4EgtJYHP zO*NWU`@B37a=K~+3zYp57!FLZLv&ot=jt64<3q2IHptx!((K{jq&6*GP#AK{#*S;!|4~! zr}e)b?k~(x54HjgEROWjWJ53w4vi|=;rZoX8*<@;ix1uD{#ERB8$@=^OeX!FSKs-p z_UJ_a(S}$~X8q=(K4Uo+baYhv)fd8^t8sYG&!aGi+uuOkBvwC=#;ZQiJZ|p9##BaU zchL)BePXs;aH}>bS!n4W8XO$040wd4#WV!2(WQXi1{lZc^y0=pefk=m%C-y)hu1&7 zhOY0u^;}lj_+DFR_z!j)c_8Cgy`Li-laEca2GZTs@|Cpn6gOY=YpRHz2IBV2XWzLq zp}kd9UONBx=yO^XvJeu zRBFgLNZ64ZO&;9r&RK0Qed`DVbMIn9p4-W@C!9`0?YGpM*NfCOFj^wwc`Xm4^Gmaw8%sUa(k%zlo^VwUT*191d#i$eeq;Z`SiQYj$onTJS@q6~WW~t#JLD5i z<6mp8-46MA>qgGh^E%--8a{W1rj(Lo^D{DDiDC-S3Ay2z>jq-R3qUgmWfh6FWLq}4 zvz-5Z@}GD$Z}!DyP5nNRIft&oznevMx(>$|(&N0lc_EzDj5b}|beIuVO%Q}_i2I2s znWCwWTpE!z+o`Og0y@7E+`TFVmYB$lv@~`fB0HkmMnm$ZR!udvXHgbll{wWGcoS-; zD$nDuoec|jn1b1gV?ne{I-6>itR_BR`od`8Bf2g90=>}osg~5JMz{m=X3w1#R}_72=|XRj@8`_FM>!muZmy0_GO>7={yBYECWAl1CP%Z zJ+60J<9=7BY-V+~EFg|hq_h!C_yLz!an5TAB`B|JOym2}f53gw2!BBq8DZ>)0{zM4=b-~IS; zZob5JFY1L=!TIdspjCnhhZV)x-rgSC&7`mtFc5?Z+GkL7Fh~t8rL4E@xmp+tX=vIT zLrr;dVX?d=%H*?%az-}AEcz!gFK-I3ne|C*$u~Y~j-%8YZ8<6U9<|{#avigURzFzG z{~SK;1p+J&F=jKGv(XHyZKK)ZVFBXjAq3X{;>qIm`YAJv3mx2w3O;CTD35Y&zY@*E zyYSo0)Vq{@npc}jz4hAe+4d|wEftkkOrQjBKJtBx^g}p~yHZ%bQG|FdN2KnkklqKYYvl?a&Vy8?!Z{GolSl zKcEw45oo9ix$E;>Ve@&1d|=~wF5(enzj5y!PDETO`ECEP{>h`I;S&9H6orZYDm_t= zaEpC!TUNU6@49kn67=JxLIFST8K^k!0K>khrbf{E6~X#NId36`^!(q$_NT$Y@9M8U zI6A!kN`F1v=xtG;HKSy7P5`Gi&s+7VbiK-2a!m{D2iI*HFdqj19+n3N*@)Mv7>ztU zWTzW#QWzLG_G(JUE~83$;BbJ`u$2|B%=-GMKVx^Zm) zk`7qV#N880wQBLb!NJbnWS9+>9DiT=G%viyP*3w%M5+Ul>IjB$cF#0UVJSDsw{}1f zljs9%zqq)#(fIRyJeN9vRq0)Q)<+`+-6wiSdyQ&t>h6(ve~?u+SXwbEuPPC@GBw$n zYEAQfQZqx=c@b46A?h;!YUzDa!TAZni}E+l9C{U-Aa`bIrLFxB2uhwLjd}W%({Uz^ z+d$<{Riz|~!z`)n^`kITBk$B4F30KP|4e-27l(Hl0UDIa^t1 z_~1Dqg^BN7wHzEMx5pGHummSt;*m4(0~2+13R8I45nk5TlFL+*PduXcCIZ-{Q#fy2 zCg4-8DEPMF$6u%08FVwLXSTn9phb)6-E^*T-sU5gSRM55n1=tBBz#>R8+2Ny%0p{8yFPF-pjJnML`2nq6^viS*_KyMfU$pVS;)kM>A23Awg5et$1F4zTKZHY0KV87u%GnK? 
zMNtqRmVMr!T;m($8 zX*Y3hvcBNJP|WlHYOk==eWgt&-!qqz^wnlK<8p3f9-P$&zH&B;%nx9@tlmUwXi4{B zxqWV|_KlQu){AE|+mL5>A!iUJed}1^^Cvv_t@b9KpC*MJ72G4l}uGI%G(b z&rB~`1HjkMFYn75MgEn3;dQlMp^70$!;=mbeK5b-)zEuWnNu zDEVCkre)~axJn(&9M@o{OkjGx2SU{0XKZ&87JH%hfoRLbR($9cpICG3Ef6S|H`Bkr zSk{WJ$p8B2V$gp*(1f#G_>k$j`Uj1)XM@iqG8(fpzRUA-;U#{oBIR;?u#e)(RGgAkG1+>u2Rzmm9;Tek|ble zOi>$1+7r!j3-{q%>&pcd?KH?Gm~?%}S^HGLDH%=ZeGSGb@u0fj&@eD8QYFij_7?^@ z^XJ97Z}hfK=y098EVCUU)|m_z0&#nJMe%OJ%oJoj#6kl~bW*;2;5KK(moi#or}SAW z;urxKBN^qL)@o7b~*9#{ZAH z{gOs9m{p5jgs)0vXhu~zzQv)*bG`^uYpvgWGT|)mrI^!*8xM6^#*oH6EKc&J9w8>b zmlz2Z=pk{7bXUbq9~!#c`(wB0b3~)|ueXulwxJ=aOu#W?#CSOVYJtUU@ zJ~?lGP9MDYG4ND>EOhTvne&!ILaPx4Ahf6H=*z-oA+c#Ruz?jd0r9)&L}VGIWr_?2 zKx2gWvvB#az7|~xkP7_Er>v}uU9=3{)i9(zbBl=AKkuF2+1O1~$_)M1>ls%uF`r!}OD0EY z#HXc=>gSH)vd%Jdle01-xtUpRgfV;Kabp*?XwVmZa-aV&E5%TwGGqLU zTKl@vE91*NCxZl9p|Xq49|vYn!!m<{?e>Rt))uOXGJlI-JkbB!`a#q`Kka+1E!h{k zdCMn7oLZT~pt?|8-`nM5#=v4;cqRva_l}_QW&FgAZRNj81+E(m4G({$03E2retKzb zd@pdyFz`*0Mr!JZ1VWKY!rP zE~{^9O6&T)h9!-WWC#lj{v#O*q;FiGfuX?;Q{=7VYkOYyjzFUHVGLO>i16five)P4uV$3?B|a3Pa}>2n`cDXn|$k zs1N8{vADo8G=5omE!N$-iIP+a-FlE~!=_d`2g1{4@B}92svgR#BZbx znznG6@_A^Y`XAq5W`BqEWkI)2^WR+a$x`bI{e$x#1{@kP?U@_`K8)?EslWL54nDGN zOd(}&hg@nJR9Ylv7*^RjpicnS4GY`P#KrG$&>uT9IZ$Rephx^5j+J2$$!oG4aMj7; zpESnuX?Wn{Qir$Q8(DIRoVQCKO+2tOnHFbU6SBla)mpNk3TJT<=AG}oe-Q(Uq= zXA%AyxVbVnl7}r4B8Cval;siN^FKeX4RD1%)d_al&Qoy%fcD%MT8GgiY6FBU*YB1N zf_f>S8hXKczdbv#FVDbRdn`{rd__9v6(eLKWtDKVbATnGR#1lx-dn!my*B07Gb&GH zsQrTQT95lR&ldxftlG!dlj?MGr1KGYPR2}ONJDz{5Zi~+~}AiGhs`->oNW{ z__fpYWH9~k$DR7Cx@ZlBLDi&9l zYpT<8$m53B<0-oh7v`r}0-bIX&5{pe(X^-SeRoEc48(T0GWoM)RIPo>Y)eaw|I8U~ zv?URqY?1pkCj0|Qz&TgeEEG6mcD)eq4}i1NWI?m_20F( zDS3S@=mni9I8+bnDz+Pqp5*mET3r-syXF_|{^v_7=GBx#H~OJWgjVYCyFvEq?LwkSvF5srUf=O1|u^ zB?q8rgoi<=mv4<7m}+4KMlYn=XoJR^s1Q0~Os9t6sW!AlB==q}V~rhdFdOzAy=i3E z7%Ouw7PQv?D#RjcC$Ca3eB>mrmTzxVU%p1P@9ucQ?}=(@_0{ct*L?eKQtPj)(pU^P zDp;fhKz{FoB^bccD*g_Qd{;*BO@$K|J_@Qo!8}*Z#ZoAA1(Pkzs5Q-mTlBdxkx-2T z3{|HwoPt@W%O%tYN-q43ba>05hFsZy+Lv&}6Bb1)uEB#=kAi9HmRBd}aT`2I1p}Zb zWG<$dL18W<;%udksKe#P^CYLso%BuM^!41Mtlqz=IYO?0q0KzsZ(bj^ zQ2fHIJEg%lnzJK`Lk~$H{%hP$wK%2;7$Xz>R}-#RpoPYTfj-Ghk<#sJgDdPvfTi}} z{e$O0bus}B7DsCBOBG^B_C{lw!>vhPKy!%#Tt5|~G>}!6Gw;IqU1bgUS}XzSLgn_kuHB?>d|=?+%8);__*etj6i&Y?lf|!8&=1qNiW#unB|_)3={3SLvmwmxs#B zHTB>OQQypsPkiKMYO#ZjmirI12A+-y=7kMn6PBWl>N91!GGx zfCE0UtyJr_9aOWHFx>94BObM>>K%zLn!{GqmPs6f5#z?t1c6?p^o-F2x)il1^0KF{ ze*<})uwyxCEWAKahy8x89PVxkqw%9I!~)b=A3-XQyx=bAIo^_-aFpzt7cBwtdafXH zUX+m6$h6HQVC=~U1Ex~<)7D2}q`m;<66)jsf|yk!3^~85aV7`!F7ts&sEwo4lM$fv zSRSOmZ)r{Nn*pdgou0Ww@heA$1xRCs#@RLLe(GZz6+A!=%)5zp8nq})`u%*-5NX2k zNzmK~3**-uQ(<$y!v0w-^iew=icmXc{!|wzRX1!Y__5N)`s~~-bGe?g$i>!Zhwpb1 zq9jya0!5u5aI+s3jpiehF=?}cRT5DMThyikLS}~yXksgdb~ofs(&Mu?M+oksc~X(u zPgj3x24x%w$Z-ZU-od^iL?-u3y=A#el1J;ly3c-T-8|R>{kMYvd7Rsy11u#`?In@F zQDlgAGVpB+lVw0>N)BKEwcd|!1=E+q*X|qy&KzzWcMc_$3iU!P06jI&mV^0%n}Tj6*~&?6ZxF1e zj;|wbz>3}_+*RF%$PODrT_17#cZACDy|{?PZu#r}q@R#k@?r&`U`a>(L*~oz{ zs5@!SKFyJ`W`Ez zA%hifCe2EPiJOTF4F|iYHIIoZL!10H>cqDqsGBehDw2woyTHkmZmozr4g=Q1~g7 ziw(V+u93aeO~_p9&#*SEgO)F6_kD}@mq?YAT-=D08~LK@=k@y{<&B~STvoAQY~b!M zb)~k43K-otkmb|2A3S+pzD0)xP5d;X zv0UV6eiNM9382+jK2>EXqMXd8tmTl#qD1B0WOS>Cf^l2S{6WVSNI&3H3=rQz&}xf! 
zs&E6b*6K|s>Q`vM_tj>c)02xWJMFi2z8b?BV1?$N1pH*kQ68$()lU4Uzn><5HC#R~ zrRi=dJYz2RT zej--+!|xeF9{x1JO$3(1Ik%M39;rG@&E~EHhKKRx(^Si;N{-TmcrxuLLLl-2>^;go zW33=UQUC?Y@#WG4V0EgP0SjWP5BKvUN}t~V_!JN5=SuU8mcSH^kD!W9zOx6aBdQ>A zigya<0bcqG-HcFBU*r+X=DRN7 zgwCM2arR2qDC5{;u|DHqVQD9hxnIxxV3IZ=)R9DOCj2!jD*o=<0JeoLd9beU%6Gr@ ze-M(OuN~7!g;Nb%e3#81GS?cOl|I_mHf3%n@{`Kh{(OQRN7fgjb|V<}?5C0lE>=JQ z35Gs4RGFZV_=fV`;jg`aw(f$!)N6TC$CV+N4fe!((}_WUQnf@)H$&XtZ!z45yET$D z76hGA0GbUcCphK}1hmHZdNtwAIBY4%hJYQF(phAv0gCvo>{31;AK98HY=VW0n+1{r zVLG9YBI(6{Te`&cL!j7%_aDR`Lqm0hwH=q-z_iVd%_%V7HEHCF4OGf=mZA`_UxfMy znh9fEJx+kdU8$ChJh(>wBSmf!l-+t>wX zfjwQ6uXuL*B#5%BKJo=R*fMDeE`33;o{T|xt>LLLj*nl7qU|s4x{}Fx)B|`RFU))J%wg~u2u^4`o!8MxSZ~rFV0<_4S?z{ve8;hFV5oas}Nr9 zT`_$Nt*0g_Ag~++?Rc&1i;b8NDRlA)-Ke7W)~0j4?K#AIT>o<4h6$5xkUuPAlPUG9 zZS5`GDlD1B9r^XR`rQ-Z4cz^P^v1z>0y{i}+(tofzHHo1sE<22(GC9WEN_1SVsh?H zt_UK523qlWo?T4AcrPMkj5E)MDpUQk46MF9E*Q2_E;sapAub^Y)&ZwaQmuJ9t8znM z#5-8l1)gJAEKT>zK;r9|yc`x(`6ovO35dS%F^Cts@`DmuzRQ~mRgo{%3^~6QH+dQ> zBCK!afEM9s?%u5MsT!p0S@kLl-)Y1m>zJ6+x&KjjRG?iP6_y49TbMvmOw=W zkY)R&Zcap-f`EwMv*Etq)up=$tr|5L#i%d}t2L5N;Wm|>tpDJ#4Z?e*Lw>|-k}_`S z9gXB0v`__;WH%;CO;bhunRe7NOFg1x64zIBN+fr(cxsJyny|*_H4JJ z?Er{*m9A#nc-+FJfK_ghZ>YF7opyZX%cQKpbc)qO{XdA)>e!2p>3<`7Arc7rNgE#w z^S`=;{<{6zYi{8bcyyV{0lub5Eb5`4f-ST4yScMjK2D#v-6eBIPs`G=Ba9$Uw&%Cv z{;i$*$RJn6#$5AqS<9abtsSC_>KQ@Vw@x#7j;=~$w0f!KuBIDgsok#3t9FDYK(&Nf_Z*CWJTq)Qap&Kj@7gOsXo@SYgzFr3^;D<#K(LT{tWdSnmr+ z&+Z8)f-<&B91{g}S2&w`Kp7t;u!#sOn|k7(EE7J(b(RT7cO!pCj7`r67B?eBc5u-KUrFReZ7LJN3n|9M-?Y1KH}JeAA5TB26xOL; zEcFEg$6MoSPDFQFtNI)5+>%tuwPBe1aJ`8NE%SA2_PL> z;r&K?A{K{FFSWF+rfSky^;zPK>I4^o-CPGl7e0c}dMc3@X}eL-e&2rp;8fOQg(*If zEzkM$pVuz@t}}@LxiiNaWpB{l?B;zsrZtQr6DMKLgSF4!H zE*srrfX>+fvy30I-Bt5sAK0F~S1Q3>&_aT!KWL|Luq2al-H8oJ1@_SL=Q4&~sFSw= zX8C2MM={5&bGN~FKz7y4K-1q?PL|4PAmpB;>$L+CIPo-((WKG=ZX^dxA|>1J<(Vkx z>e(OIX_PtDNj}V0Byh&J^=&szQb!AB%g;?K>li&%ziDX(%{rwm9#c->ELTBHH{7$q zTRBBH3#-r-_G`Hx>ij8jK^hv^6~VcJTQ)$Z>U78VOB=jLfd(221|FalxS&7&vLO3x<#@EM(esgx}dGhme@%{_5St>i3)QA zkj|CeUoTYJ)UE538dKs$i*29;*3C4s(hJryPvkK)d*0&?t@(9 zU|%{x1`nUW0bQy6jL>3z)j?o=5@;0G=t^K#4FYptGw-j^d;9YcHOEVCrN$VHY@m;W z9uY?0N~2*wh>-tWo?zDm%lEzZH?Zr(0D8j zt2sC>mcXH*U>X*e5tJddu7=wulQh^+cdS7*<=5Nz)oK6|{L4ZMOWgqs)5b<1}Z+`Vt5JCew zK&hk09~^!reG4Wky$AELZI~x(-5o$1^c&g_yx=z_<=@?>g$%WpfRV4PYM;ZBv)fN6 ztG8`+vhM)bD+f>q%YE2BJ^(Fg)|RnMioDsX$UzJjvn*CC>zi}0UmDQF+G2hm3otnW zgQYBi1v5!3TEd5a-f@cH+P?OB+UuigV)SJLg zt}4#Sr;qllp3$*Oe^edkM|LbYQv+F`qOre_j#qqn)8lm7smAAb@~YPM&Aw0L&Ll(2 zlN`_Gpp8K6juJnL0*$JCpC)F*%_wTI-)RO4<6t`aGGxKTC99f&+Soxt_(&*00tAXr znaU=+LNRF^&=I73eit5JP5@Dj46+d~^}W2_p& zVU{7r@0C1nbG$WaTD=9J!CAmbPl+(7yGk;wHbD)?O#hAnfuDG(d;(~h)}RJ5?Rjr7 z-Jp&A!y-sSeFkt?8~F3_81^?Q+rzDN9}Pd65WH>62VR)XPgO!-9&!NNoR)k+2SByF z0gO_)IlXl3D5=}&Ae+T$Ld#bcIA;oxX)6Ik!;w5O$>w3@jvbs7v6OV zF^6-Y)}PHStvVy($7)@A2%7^_BXROcX5qEi)(|?j zJOa5!-k7|MYGqmmrtLHpzjZ16BBP)L; zPknp?6(5Tq2y9!w#dsU@bft0hrVrv7j@%6q?4Qj)N zu+(9L%1wNcnskzx?PDapUQJz^3>)M&7@hl(MIB5#m^@=KYVdJ6SXcE_afch>O~V1_ z_gm|cJX#Bw(`<1hqNuTvZ6Shd*=8bDk*lVpZ4}TAj?@V5uMg6Dfsxc3J2RxS?7z=8 zx%m6mz_jtF(|+an;(k6RgRSbxg3gRcN=J2D;AoGOJeN|+;0gzGr(-8%GQ_~Z;1nd` zt?$E>?Z`H|H?yuEUoL<)RSa$Z!}>#8`I{a(4#C znJ##@bdq{)@3+>BoJ|{hpF64SVV=~BSi~r?CyOlJUDj;}60XN4N)52qPofmrph7;- zDdpeAdGj<4GkIoo<#9hLQ$tHj3m&@pgS+dS%Sl7B{}FTp z?64(|SC!xLo^0q=N)HyB%h&iWIVepD-Oi9pDJLCRPGPvcgOl4S6)ny<>o6VNrQz`0 zEj+D5oJ!8MSSdU%=d*6Fw|W;JfjFd(YECZ8Yh+B{#(hCyx>YoJNub@f*X9)lG6$zE zD?vFtb@)L>=T~wqUD;NsYF?gEx7tv(pN_gze8*wzXQ%qqO(z3q-q%d6bIZ#5QJM1{ z*Sh__X2Xywi zCS~;G+;m;)$fqq|>BGjEt<+g%1s4QS?rP~?y^xmNoM?PFbvwHyy9X*BX8wu*oOw_^ 
zwyhb^1ojs)o~>a7KdHDB{4`>=;vPK?!VrTtv#~KgO`iYdYC#HMk?p%}t%Ye~u}YDX z$Hkg~P`~{ovC~wt zi+-W1c8X@uiQUyoTRI1s`2?qn6&{|z-pj1AQkaLzY^-D>;ZqUnud!!NxsC*NC0Ha3y`g-utB70;nCCVYA# zkG3-qdDA=RFQgVX{?^T);Y-7M7RszYd0Hj7gf&)8^%1^%ZU-*7n|2frC`v3ki6uV= zExur~67PriB>)U=5~>xf?L;_4!5>ZK<<@JrRO*#64~UdMh$jBZX#yqGd;rIvB_54` z-c9>E-Ar>{-UjMT^0oGL%6kF#m6`M<1hvQ~L~N5JI_~YWCP{+Wn;B>bqLEA)+;3)& z{M~Ckf90_2CXYG+@@bP}hsz-D|NpWU?Y~R zPn#Mp(i1>-P^9KcPb5(JGnSB~3j`L;OpJaMW7n@9SfMgIE3?=D&?hwj3FB{M(k5Q^ ztu|?ovZ@Xa;h9y5o_o$+5;6x8baaO)G-E_zkM#04j)Qh)>xFOK6XL_Ks@kFg;at3O z%I?o2)yG~3^C9E~`!}km#wA(ksi0SUfqp43h-*^a$yH7xs;_9idansZZyS7o8wwR^ zk@ITVb|qGkfx;`#z1i~cow-rJLD*ykQ;4QjL1LAnblM3{q9e&@CsznaE{3#50^gbA z4i=O^ghpSc1dD$`!G*(YO_Vz)cISz=-xWh9{b%j0+w}R*I)A2$_)4Zt5cK;FEeUrS zJlKyTRwjlAeIX4^Fnt)zw#RYAd3%91h$t9_1;K~YX$QY^T#27seL82qC5S4uL zKJD0P0kbU-Pb+#`tCGA$6GI8SnEir59{?DmwZ9W^yo;~hUj&peQHJc=-%XD_d~R_i z8@s!CMH0imr3(v+Ijq*$0XReJVmz$G>(JW7NfEE^1<@76<%@@o1NKU^W1G3hr#QkO z7JXk}7;ljJ*-oxRAHEy%yL?;Jm5f^}F!CC(mf_dlGV24WWFCW)_DEZg(p$A@OO5JO zRbQ2TjZvfgkJpdC4Aj;B~un{FG*`3^l7x zXDvyrNHShxVG-PD5Weg!=y~_2@bmf&YulB{Nxs8Pu|s}u8?!#cH@b(vGZy7J(FAuR zcF2c5Q#w#_-7xyWY`E6%QPOz-S5p@ZTL(4PI>0fT0c$~_8y)N@@+jFQm8ut+h|b61 z@DRTY-UvUv3p;7KC#7ZYgg^d#r$+-__&&>Vuua-~Yso)EwzFFYk)#<9JRA4Vv*oK7`=>zyPs#6*FqiDYjnwGPtyRP& zP4p&#LolyNvu;$2OCjY{(dTWKg{~et#xfspv)+{JR-3#HEP{IyRB0MMqSHO6Yllzv zOt-VFykk@=YACRKFT}J zGprZlFsRW`dM|TuT-u^ZGxwTfNc;ZcWqbTCXv7{$=9Ybhu~H`>=Z5%xh#a6$@u;7^ zbk}8f{_!`@%CQ&EV~;VMNj$E&AOWDRxLOcq^&13BPmgy^j^EZ9scWE#}#f9?WNw>*i}MPROJG?;>h=2R1722 zI33ap5yFl=L0OfX9e@x~b?wZPFV0~4yH>(dS-ADCbF+`HnZxP12#v{8#IpOLD1Yw%gM+{lpOzTtB@}1(m9CD!=f#2a_PRZ!g=nCK?b?Mx==T z)#P2+k*dUJZUC(EP;h~)r;XW5O!eN>;h_w}Eug2V*ny7LwsUH0LBYlNjRU|435~oR zk)-_x5ZmqSxnFqzT(I9_Ka3mvv|RM7cGxEHTAs1e-Nga9HoH`SQJ9p;+BehMk?VA} z1QWH;LCg?6;T3&+%;S8OH?1eK@9tvJv-B{M&ejT&(4-Xr`VdAmnj`rq7kgMqeYfc1 zIPyFiCqc>6YA5^tL^>5By?qB02z{}c@Tzu|y}_hX){rs8)Kg_LkJCrvzvQDYH3%?1m6p6eZZ2ME{u|=tSzRsGHwNNsFlPl@_hB8O1*G)#rlha98 zrzU}IEeQf~#%><66{r}XVsIfNp&xAWVjO@3spHt=VC0a@&%JM=dDfU|^q8op8kNdF zd~PZ92=UT@c{)Upbl)Y#1nK5h8Kye<2a$u#I3R?Sp*qGy#SgoUkyb_)a^9l6R1-IQ zOPF1D@K*%f)VtY>=z#g@aP4X;m;OVJsHgajQUNlgZt6@o96}V_#5=}pO+dde=APj* z3tD0zft>Zm9p0*?-O{jD%D49-rMJ}jY#X^D2@}@QMGYs^{rS4jPqU{#oWH~L;V!Xs zKgc`0!3f3^HnJRVPch+YZ%Oh3%*esnX7T%-ZHCIuzCB=t32L#4qxr^)jkqEs-R!Gi zm5-~>VxyS#`7MQ`BI4mI5jq3=a>oUVFYZ@8HnE&}A++)xs^7!Y`T^vfzuiCfg>q#K zr75O7Byy#$Z&|{a$h0yVXg9Jb7;lef>E3o-y2+PgdR~P)-kQUeW1Dq?O$hUB zjtGO*PDHr^AVRZs#D=f$WeeUy7co5#zzb&5qB^lxZPF3%Bof_pAL=RnKn}^O{#h0r z9;bTPZjf)8@&g_r%{nLFlJLk)*mDrOed|emK{Xag>ZKAmc9{@o>uvGC*|wcp29+aK z59%_f5TICoCV*n)Ph`BwI?E%4w^NwY`f}?B9F#>&AYYw%^GM+ny63>n9nvpWndk+Y z6#*ie)@>o5lae3$Fpj;Ei0toxUR)Vb_bVn@2GmgBkWQ8-B#5_w3(NxQ^{o(9lii~k z{Nix9Q)T>8eIei^DWJ0i@KgXlQ&bY>Km#~FD(990_(qoSP{y49V4Z2SP}FBh2AIwM z0k%__oXD|!EP~m+`2lw~ADpyjLP@ow*;%QZHKp6)cCKr8F0=gU#7cK|yqGhmlHmhF zH~ks&_U0+Bo@5N(tk)u-#IpWGg0g$t*0s&o`MB>^s`ytfq&qmFHp%{3IVnT|Y9kw> zP|szDVY6fXZA-}ESK593=*{sbgcok#W82v{0F>+Jw&{(}l3LsTFs>=r-s$ zr-QV&J_CIMi=vqqB!HVh8o*situ1t$)YihY(J*b zJgYP%+*)6a04nji;tVeBOjh564Q75j$YhP1Snqjd)v5+;R-I+-Jp-R-^AkKv^-z05 z-%it$I4TQWkASD^{;?fgz zLgqi9Eb>Dd3fzs9>2&%t)m|AlDp_DWCBG)3x%IXgrNL%P&?sP}RglT}3>e33nh0-E zR-Ky=i`G7)o}$&?@TLS;_XMz@Akhn{Q$G8hOu(zXuauz^WPs*(QR+rP>>l~!pk)-~ zRuw=_mmj9vTP|6>C^TXIe)KsQGBW~(U3~cgdIdX!5=S{lCL`=X+W)8TBY+8f#-)(@W9Rx&sf2u2U-=Ds{N!>)fmBSpPWzb zTmr!H%b0siweU;d#2fB;d6T5{PDAm0#zTy*Qtk^*>lMpdjze!^H=65(O&+dr=x6A< zkRwKOYz2x9KBE>X+Q9v5eGCewUcO;e`S7U1JIe)eWmQH|NueC_&Dwe&Egk>3YSBAV zOdNd%VuV=xiOU(@tuR@@O8(7c#!eE5NAbVVaJ;LV8xLwYj5DuJZ=`q~HoF_0eW zDf|Lz6_Ra{?;Sp6lEOiP<+-CwZleltMEJCHY!{FtM18rHbp5F-iPct<_%S)2_yGQV 
zBqZ-k4!cenKH_=naRsZR0f3+mYg9QdNET>zpV_%i$QT18DvOASem!+9eVV9u<<%Y8(t<_@J4=lt2hLXs9T3flCWpG7!9ht&4PLH%4Z|_&` z?Sb!b-M0n0S)3X>zS~_`-$MNQ92b|9sKS3Ew^x+*Gg1TdqUOQ}Zp(fUl&L1<$et<6 z!@D7+a@?A{9gg4k3I!CIY|s`F&g`;Iy`AzecD<+l>M<9({Co+2a@2VKVq{Rs(EpVT zV0M{+qm6)i2Gb2%<@w7(iC~r}9FwISIfqaZjIyiy{SJ0%~wQXbkRU60NK1XkR`{pdUbm51Gni7&|zz& zH}HuC3o6+W9%tZoY2tHw!F`t#TmDLc0v=9l7it?XD2FOv7xB2`$RP`sq#T*M;<1Nw zmGT*)+neK3|8baat&bTZaJ*n zA~olUx|smrbl%R`NQDs~@tvp%#|}|Df!#71b1Txa?;*Dln`2^hwlnwEB zV**ajr+2)eUdPhRE6Z)nXS28orT1DA8eY7;je$D2l<-pA4SfQnLl~?cl|_m-C|`&+ z*$)5=kOec$Y`39WB%JF8qydRi=T?l~EJ`^E7HH7~FkmZi8LdSr1wHrg*8|{lst6kq zC=uN%4anZ|YzvYBfpQ+0>eOn#NJr6gl)B&EW@3h@0?B`q$XX_qjAyBQR%WPh>F{|6 zUZTl0>=9S?GPm`C$4f)ppap?kz{7P7+Ed(5w%wpI3v zbm^~AAKa$!_dX26CSuC^8C|P3$n!&Kr|xUAK=hSciQ7E0Q8|gG-EPWMw}bxMqby5` zQff?6i|!JyB6{5GX^gnfZWsIi((mPcGvp3 z1vxV8yjs}K>75nzbyNUg)Z6aU020%bf&s9W7Z`3XYlx?1%~q~?z$F`YKAX6`pfH#9 zGr>A0#@$3-0Z3|{yNkgYud{;_89LddzW+tx?WeWlntEO&Oat+-|Q3=auF zqm>E`(|LITO4T`lXL00Bf942inY(praswgUcCHd0h-f}0MAv)2$J+q%@g{Dcmuy$s z7N&XJ9> zXB*bNdC&5Xv+w-=p1=M2>c`VLzd?O<|CO`!Jd>g^kn9OMUOLM(@KkaG#m{`VNU_v9 z2}bEE3_wXr_nS5t>??9>y)YgOt{8mC5V^=T(=2kO9e`imiRX_Medu^=6q^&_bu$}j z%}_9sQ6O2tect%ik*bo}FM_xc5Z0|-N>%zME&e+31Tt1A{Hg685g1s(t>+fmF(&h` zsvoTwn3IHrqB4HK(6Bgke0yO6>gRM>D-K~2OxLm(8!C75EGDb+0@MSvhW6F* z4BvNh#aGE9zl&P6$fBO#^Q=|X>O&DWy4a9ex%Xo?Ua~01rGn*}5~bkHmS;{xWO#|y zOvozCqd=5)=i~Z=(pFAv@Bmx4A7ynwS_i-(YyiaT zQNA*S=nzioiC~8;XO=l6szggXufp&HgAo_&XHEv`(YKSh(yVWuN{BPDgY%3(8q;G?o?XBIGWJ;NSUqB-|mp%%wNN+t^0 zC#8>oO<+IvuAhQ1!e9eoI4{cGpD_Vwxn^@~tKh>4-MJg{A1J~)Jj!jSjkbY}Hn z0BELBZlQEai4X^^maI~Oea37om5DD&Ae{P7Kt3U11~tO`9g~W(E$z3Kr1WDNlZBWN zY3>qCN(4hF3R*J_%0gGLkd}*#!~*JPn=FBi&ZXDZ7OPvB`AhR(t$(nW+0CG(3YR9W zY9w(bsh#zF(I|dkDl)!ny0Tma`zikqZ6ePj5SEWuYy?4hM z6x`wCUJppv&L7AB+&waJD#bQJw_$VW{6vYob-|oKTJ&u_OR)Solee2H13 z0VAxK5G;qwokR+t9c_F#_5!7Mwo$GJ;C0~+wsJXX{{PRON z_=I2AJuju9jo6z8@pzZE^;m5x(>VyduHT_HM)|LVH9iAp%2`!#*`xgammN_V`5Ome zB!;MM(eDA$Hz1%0YgZZynk|v;vOUzr9n}`r$88n$$l?WPFs!};j6X>8A`yVS?w=*s z(Z;OoO;keG_FMGELf3EKU`P4O?S8K}#b z8n3eh?pxM+F&@tD4EN;|Q;{f^@OSs2Wg|=blSYF8&pYBGng8^IgI|`%9#Ot$U__Wv zBA7AE0a`k^c z7m(Q(bvMCBi}G94lB|BIZ1gD``4IdWEQ#uu!%HPK5dpu>Q@`^0znlxhV@ZTbkogXo zmRY=NmZ}vP`b>K@91k0<6G-qv&?7hIjH`QMS`>f8=??OcOPElph9|w&kntb7QNMvS zVX?(zdFfz@=AtB#pU-SE`JV(D*u@?!W9|rY67QAgKw!?k8Q6f!k@d>?dO6Ma!6~6% zg0B8yfQnpJtCJM2(KP@$|9Hk*_K)R`?MCsJ!W%Cf8WrW$>m=ol?X%M``xji*7=jv}NP5Sx{~5~hq)^}!$uUc`q7q1>zbgA&BwjX()z<9e`h zTociEZX)9UJMAGn`7vne%-n?br3@)136dt{k451m6+HJ6!%w*<{vAGR0j%pjV}tdL zKW;Ay+F&9I*OTX@QVRk(SIQNYZ=AIn}LdH3!-sGNGVu@iO<ZUp50my43B0hqS7DtX{hFS^8aPfYSO_0P; zNBy6T7Q{GM5<*N)_%*Swi3EJpD>KbD3f&h}>%rmZ@tKa7i__~35*CP5TJfK?qMpf<5&F%c5*W2j_STWWxzruF0 z_?Vw!d=TU#Y!y)=J1At`|KS2q4I1V@bGiq%ZTL<#+qIly1=FEoLB);={P;;3t9>xm za~EN&v00k9?fGApaLXgm2(FiN2ZP4Y6UA)M>T+2N^v&-bvtnHT0KX&%_)%=kUPiC| z^gB$55tENw8=$nS zFr?j^d~QwO@KWrnefej?whUVE7L;s$hb z*gVRFT)zQ{ zv=kwm(@`+eo=P;Ty>EksZigb<-_tI%AKTRWrM08QeRRdN$3yCk!B7y*oD{>j;ped& z)&LcDeN6@kfhW7Npsvt3N1|nM9?aDha%hxVTC5rWHWTgocftb<>w@%*=ZtXcDU9H# z-bN>V1kmeUlg7=-&G$yyk6(KpZh&VViK+NP*~&%| zVnlz&)r((r5kZlFlqZP_`j6CeI(4kNH4tRSXNBp)COTrGBy2Sb0so4(RB_`b;kD4C z(tRW?28Pmo7Z?R$Uj0A%@XvjA1ux3%8z|wSplH%X8C4?j0<`4SGfgem#~1j7$hg~j zmT3uSd+U4JMMaqpqSretIWY#ih)Iz7da#9P(jXuKq|@MA|83|$MhHtf8YLK(x2Ik_ zR=gbKk)2t}m-zwzl{cE=UMA1=_S{3AWqQ>H12hMRcv?0v=8>>TYul(nRx0W13lj7c zJlF?;a_Qgz2PW2MEsX8@2jDxM9Jo2l)hUK;M0&s|KI?C``xw#kF6?(CFJmj{Csnxq zO%Z(Ky$Gny9Wdorz06Tt$*b7jTOot>6{ENigT20}3g9`)N?(ZNkDd7}Pn43vNQC|O zbg}4wD%ra&2Fdd1V9`)GOfTt!$&pXw6hc>?fnvj-g%{EReZu-qP4q%0nWLD-PzUR< zv4BQbr=OVrdKXS)n{2_w=ClGv)EZXnxk*{^U`!UgRF!Dj@W;$Nq`YnGKbm%g$&biU>%|N 
z(TTmwub-H=Ou}Zt`c;aj7qX>&H zC258UxMsBV9wh0E6@`notC_|~(_%=kHG_WmSI8yIuy%tP_?5=Vnii6*kp7ALf*U0iWL9)q1K4%$ckqNHT; z#rAmd2ljcE{%HEYvhpYzNT>7xnVt$TEEuut_@LH$&x^btpnp-&`jAdW|7E*T5Qy^? zUi77@A~41SS#gHLX8CDIg8=0hPRjMCe@HWL-H2y@7gt*cfb}!Yt<}dwWNZg?5vfh* z`diwK$(4$k39}3|;{)KLEMf}%jk;*?aWG2AFhQDQK{Td^SE4!sk6$1CdJR}Ww9q6{ z8ZpW}7;1=cvJ)n>X!)FPgVg%O+A!QS`x6#x0a0Qbv*N6BhI6wdmqW3)eN&4fm7S_8OitJRcJQu=*OCw6ZF?^IV z6yqcMbG4fY(PpDoZcW|3^OJ06<8dij8xd_9(jM$mT|a3e0mwGWaa78tgkj)4gX0FC zsqTnc3^*$Kg~_7Z^^PVIJ!7n&$j-2IEoywbsQf^0HgjJR;$K6o5nKQ&f)tRb^{FaC69;oMzQ0yFYg(mukt6dIM~lm?hf&_`4`*l?)%B`gjtS#^0PrV02!c=VIx0 zeO=BU?o=JreUHh)FkSufZTvwr8|nH);5Mhh70WoY9Mx-(x#sX!0gUU{&KW4LDK||y z=@XR2mr_KkW}d${y;^(iXnrLoiHlT$G&R5!khEsJ*kps09KcGXbedi=X>;RduuEN= z3XtJpq*3;Bx&T`AvG?@T6rN5~2hewD-pwjT_gBDBu&NnpVwTS4_AQ{mUnVD6o(utA z#NW;VAQK;f_oAS3zsYJvu{dQj@EKvr1jBsS=@K49D~jUW0^=n6+xGRPvXzlI^N(yFd!@f1jir6 zl>kS@+YghDXyn%yfCfJrd)FQ9wrxrgWrBz@o0HSaD6r)vqwyw!ZE}#@lkfjIcu;Sb z3+S+Gc&VH?Ceixx9Y8ibL_hp{^M-@`%@zi zVIF9H97>yM>}zop0HWu!L^3io8={3fXBOXZ@+6%DR&wh7(HrF$wDkVH)c^G#b4Xt+npENX3^0(}(^Pu2nJ!Hm z>9E{aWUN6!hoi}^y(j*U>-m2?WDOq#A88Geiq|3q&W9vZv`-T98gwz|$uY$8##dYB zw*|!$PFO3HrY+J(s{1Va|D%Tg`*grM`xwEcp1gcL_Qz3T2A7HCZ$t9`@#5?C5P^Mo z`jH0rddDF^xlU$M3}F3_i$(q;FyL6wK+|YpWM~1{M`VF=cBp%H_pf|iOX9!3Qv!>a zuvCkG=g()QMS8y9-8iW5{?|X~0lJQ7ElB^*M+lrpF5yB<0{7p~{ICC}`G~X#!~*tz zn*3#Xvn;e3^T_jA7p6gu}?QMjyK_gC1|Bj94mlhP69T5*nw)=Z_3zfVsKf6Q^ z;Y3%^;0QCt?40`WlYKbzVZuAc8udm>75PO=)IfSt#6Uy2fNued>pKq()~b4$n;)pK zLN1l(7vf%MSnWELITlab>WuTmi^-rdSi!P}e^6+83~qbM7~;dF?}l9~--+;mq7WG1 zbMtJo2IOpp2~xEGr2_D=ryH)43-Ao&?of#nl^e7#wMNkDjMvGN%sqXf8(QQjqPcMuj^x?$o|SAZVLZEZkxRytAmge@|DhZy=>1JJdhB|~6&p{}=->V1^@}u@6^D4D zKKs*-!yUIg);tI32cty z)*Mq`fZ%KpM-riWe78f^UqW*EA=d+4FNyK{#49@M?xecp54ea^q%b)A34;c@1fb+7oXxbJ5zsV7B7Bn(q- zN_Am(!3m`^U!^z#>kJZ0@+F7%)!6%z@bg+i>mu;;(O<`jc6N!3nB;DxkgC&|8~Ep` zJe9ra5taFYa^oFa{?BeqMtl}*&-mQcY(qJB4Knfci;WL{+`8@4kX^q!m=bT8yi zt+^E?Q8r)kdScdOqiXTf>3~Acp*^LE0*i0*_juO05l!W-SEcUi{Q3FkKvYBV<>1$n zJb)$9%B5M8{8d*cak#C|N#IsTZ=`u=BXjBDBb}L@hn)L|iK|(&7qvF~1JbCX-5~25 z*l2am8-8v}rGBoesbKvX`?$?4O*iuD-L8R;)$wKm(1oL&-z1)h;KQkOjy$v1&*hxi zjNgu_UM^Xov=v|w=NY6JSbA|A3Dx83@H%DLJt6b>l2$$hxiH*}$o$o<%f&h}pn$#V zRs4(KMj&jab=FjicYSIN>1SC8Sd#JNEQMf7#HfSIgaD zJ(H3mIqLhGbR3L0!1G)eb9E7z?72Zy=II(#mRaScuPP=S_bfNq7com^0$(c^D%^R| z@J-w_aByih(0B)Kds=klw?z5)Hu4d2-jv&0m~Vch#;sQsgF`58gqMqzm_QlT!25UU zr||_qCij%J@?Xh(ial4GH?v+VF~m9(nG;@8uc+5_qN|(@J>^}JT(zecAJ_brp!Adi zk(Yu9O2~e|kD^=`p-hJ%mj-i~G+N!?+?dYA$}1zTeYKsj^?q~dEAFug?dv8o4}y8B zwM{Jy(=N!mLP7|YAGlKl2{}sW6u^y`+DQE;Sb1BgGD+@yrzqZZec$Dfeg;3!xo-j& zU)~+mzj%X>B1l2x^EnkV74X;Cd8%*ZD<)o`C_`STzgu^!9~8t)%;Db=uQ({JQMLX| zYFCs~Uiav3y_AN*rTF#LQ)}g$Lm7z+nD$106#ers+ z=5UzF23ZUo#;_O)TvT3f3b=0*4U7R)=pwhymKzCszYhHrMj@>;WO^L!ft7a>UrSNB zTm8nDaVn)5f2TkGzAUwEj(3qu8>zetoy7Jrrap5x1!tF`g`rF;T3_a70s4STB>Z`N%!&pu?pYP%IsUW-iPA6HQ{NbjWAzuPKNb1DH(G28@mdI=k z8#^XXislYxew4<%bd~;^nCKo?@kRu1JC$v8ryBC*?n^f%O}Xdy+)slX^RwzwU2E$F zU0P47{6s2jMDLydb_kR=m6OHWk~Is58uf(#6n9yPJ(p|3%}34WH6e?SR0@st*;U0$ zCAPI6;^?&L(s=Uzw6$e1)Ua>8`Q7}dkygFgRhaEihyHzOu!+)&ory9{rfXngeP0cK ziT$iTPdvM@S&r5M2WZ%Q}y?;$M?44UVX6${~KiPw9+ zX4sG5?7$@~Ucj0T$<03!j&p=19&&C&iItV=xs*l?PjcU_lYwVUi|s4B(7dhm zk7uKCkDtACk9EGP=ssfe_5mBD7t33&7dJ{VKT=T&q36Co%JkgeoT&WpkUj)COHgL! 
zN8&fgK#LrsC7~ZKUDJXeL|%+hFiADnPoF0bKaVhA^0ja8>;Qh~P^cgcEy|IY(QdR_ z#%g%as617e=$k8|(Q;XuRHDV!Kw-)1a79STnQigRT7YJGkU;mr_S*mrvhYPYzkmIyrvrKJKsdO<7wE{h7OAz>A{ zN(8!Ek!NVPEx`3^~&1`ZRP&NF?zFr-+&D)fLI1Gwqy~Iiv zYVoGAPeM78S|s>$JS|)lT#|e|)T+AZmLEQMt-8#=*_AX7vD;Z0vH^WtY)>IDm&B6A z@YC*|ryDjOVYIUe?!$HC=K&!)Cv{k)r?i?BeT{jIy63?MwRKsMi8FLRG;Oum9xZs4 zv%NbdbmCa%E_`h>&#{&XG7<0x9nU&ROWS%=ovKOTfKQ1h9Zs)t@ zT3JRE986;VA=i7<`sdu~$CF`#wl*x?nH35NnHLQlpA>A?Gm$TemCfEYTbTFQVbI{Q-s5_pvHr>^l?B?RPU1o&AY1Qr3 zcyl_{7^g*x{5$XIa+wP7s^rGN>b%WfwAq39jOn)FvX7UJMxkg!yq<_o7tBQY?FYgU#mZ*+)!Zx7Qv;2^H{n0t6qBXk6cvOKfD|)LV8{)Q}&E#mnr4S$LG&woOe$yWypXB|G?Rnz;nWFfh*8M2^%y z29IorlPJ?tiTF{mk=|l?mefp13?Icj=DdkNJIbW?hRiu0M;a&-KHQl(C1bnwa#I%O z`CXQ|xbm6gJEcbc1(5hxXRRW_iv3ygPk9f;(@65cLc{4~G0NT&|Lf_agdEFRaVku)&|nO+ZZBiRqX#Wv6H9?pZrg0q3Ge9#j6hx`F5euIKVYb6 z3pQq1$iFItKRa$W%$Pq=mDdks}cQD%Zo2t(@P22r1B!Rq@!W#K$+=HA;nHP<5amt zBp!Xqc^_)jrVQ2gqtY4|m1WM20l+QenxtCCQtVSll9NU=eR@)+C&)17{00(BR_bRE z1~dZ|f$DPk=#$S&6y7u7Dm6tYzVD+8ULQ>PX5Ey60^*n85rBh%E<1s$=i_v&dz2XS zT~8&bfA7&fypfCWv(w)jJwHp@1r$Kba%FvZeg-clsk$v>M?rIfiad{oOjE;s$GK(Z z16vyU#-V{!4o!EHUxgr^rOx%2yzh=4ZmnFA%cfI4mTvq~LnESF_gnSr&D4K%t$7^W8iC_R) z27>(`jofL({uxQ<_H_x!k0Icibff1sN0~o8E$o;k?reI+K^=ujlC&vC0Qc)%^Wugb zBKP2vP#h5|@eZSftFgP58g2pV?Qf!FGmblw3{r<*C>^VUmG_vT5ANF!Pg zp%#bkSC_Vva}|lfY9?A)<@;hDrTlPy<){8b023^k=pZpyr(sBnweMs(;c~e{2JDhZ ztjdTOhqNi@Ce;}X3f#}CKZzBk_+jN0 zWk|n1eCb1zi7uAVlYa@j7XS;Vn{Tu#{4rm$I&fXPc3Oc=EFHKZcppgo&06iHY??EX z9hzXC=9$K03wI>m*-)t$K5b&XL#lqz8rhNx)p{?Vk4sipqFf{MEq2?q6_F0zk@M7) z&YtEC&Ua!|NjDFrZ(3;cdl6)u(z>tNiE5Qit_r!@OnGwwrVmi`Qva@awMODzn+h*` zbIkjB@cs5en6WunSwFsn5sv?Kh{G+0m8Ky%FlS4RfR`Xe!wg}i>1x8Vo^_@E?*5)kAdiO#897vW(ow#58rg< zfh$xL!FPK$;E!eMP?cRZyz(jNUTb5LSt5U2cIz52?n}jg%7>y1KiKgGzggrQNIra| z6c4IKp2Ql5q5LJ7}y=U2UNHllLB3S*O-4+D90jh z?TQ>6jn|tXHt_nSG-mr!GeF;R!=?<6Bc_b3Kg>WLrb{L?934n_xu=31h_hFLX%`$} zl<}Bop}jklcVKTMf@8GuGK)t^r|^o-ql6y@>*g-#*Ujf2rTcE+3mSJ zp478s0M#0-%dpuGws|p4DeRwTq-$lK>@Ks@<*ba00=%j%EE1qYyuG~(G;a8<#bR$N zHb}qCZX)j^V2@FZ#79{r!Zn}8pYS;+H-}m&3Qr{Nwbi!MqE(|Q0}2Vsx+XP1s5LtD{w5M2l^XE+k{MJAvx04M-lLFztb&M1qgO_gU z?!*F$lPg_*ZyDw&4?@|`IX~^YReJN%k%6fouJLpEq$)-lfXkTa3W=jEcb$ZW)^fw( z)#fFOXLhCPN#0<40A=FurXeg>h9+Nat9N6iT{Hdd7e>8lCvDQE6-V zRh`JjY9>(sQQlpZe9iRbaReNphIxtZPe&@tNC(*>OBXx_EqC6{u!(09qw@i~6Z0iY zUK{T6&81Nf3aBTbynFQUE6VNS6zY|)pC8?EHsEjd=5ckV?8%Bc?zzp9q|*|2hqfu2 zMF-=v^S#%|Qij~CV0Y8w_opX4Cp|}7y;}muh;93vyuJ6z4$`&9b-NCNu@t;e^Q)8c zXsMA)`pqMvOdDinF22Ly%HbjJapzJ|C=KtCm3YR)h#Ck>ox^OTFXO_A>=QkC&I6xG*`l|swPa$t)dFB(j z>dX?HT#ACDQ>kQUDn4SX$f}0FY!p1}TxE~6AFRjFHqK(CPdZ{1q8%G=VMLzgqPnWn}|j%RoIe7 zvFEL0;+?^mo?s*JJvYb}>DSZx4#bN5ceIlaFtdtRnMO-v1NaK=ti0ovyR!8cOckv8 zAhB7yqc0iLL<5#`wrF`d$KMUkcLI{lkFXjQQ@?)w#YO1rTaAir9#m?7u=M#Yj|pBQV^iq!8;|3U6^b(}R!G6ti(In#J7B7@f}emE%6T z4BFGhn2)Rx@!$(Su^Rutc2&GSzwUpc4$ZBf@|-W6>{PWn9^2GHUP)7jPh`;Sdgr?{ z^In*ij8B;u#@8KoHEyp?am`HaPmv4Ku$f(#_RvnR_{SY$E_i|^PgJLKtIfm&B{~rd z(pKU6PkEx!1x9!3TG!1ak3 znMb(yVL3lnW1Pb*uXfFLu5rWSb)AmmmOVfcR7rE};kZ7o15Xj;4dVSK0o3^+*6o3g z+n(Szx2y?%s%K2VEl;q4h-}-9uHNV{pthxN=SB)~dd@MIsQTZx4LNLR!;jTV(iXbB z6ME8Vh-$`J4R&WF@oEG&#liAaXNx#6TBD%W$=FQ{)$)(gc8yF9h_d3Cc9|*a0V16d zPcY_|-<1XMXKKR5WSPnLak z3Q^{dXQr&BqiG9s^8vZG-c`m|4e-l-&un6@0m|do7Imj2$$p{R7Tb-t6GLM z(!5}=`0^_778sS(R=n!j71K3NJ_iV1PZd{_E0`~yZ)DKVi0pqV&XHATKBxd)LzUBd z)6c%=aFHkn)$jWdeCjq0wjZf>9J_5w<%aXvdMpbQx_zBslu$6yA;UXPVbz5}7Y$KX z>v{HNvSQ98lo&0#*nXQp^VA_{Kw#fc{9J$kJA76?U1ckNC@NZMT1!I6N)eS|;nfXz z2q|C-S;4|eI?;ZL>Zf#)R#2p_*OiB^<1(SA*HFaT+K4qH+rB(_JamB0rj7u~7 zN~7=5+Mp>TY~_r5_(h@hKCei?(Hzqz29ceKn~Pv(6rQ{m5{;cPDAU(Dm`$=cV7 
z9XEIBFiSO+EQQ&x3hG`VCP}nkPVk(P@Jr9);`_zf-nTyV%$yJ(fnZarXk4&QrhS`j zD^V(PvnY~b0(072bUEcs*BX}}h*Qmea;w;lLWiatLe~>%tK5f*6Qlfzow>f^>~^c1 z7Uu#*>BZ2)4drVA-D1xSZmmRa*p}_`;fqiZg3%abYDrchZoV^&_xOFHbf4p()cI9p zfkFAi>DHiot)bvK+~p7q%A~J0y1E=k47Yn(dR8Y#1#xO&Wu0+7D@stAfxAkep4J>? zgX(K>H75)LT66DhNX!@3A2z$Yv(?Ht@$sdXfg0gjQmDq=u}h;S#MM89XX9nI(cDjm zNAhJadcQUkE4iGS%epxDmf~}@n-B5J(?^r?Z4#Th4Y8SSmPETFW=U3X!0$HQS94$j zFvmB=Zoq*u4x7(&Igf4_Tn6bOmZm@BHWXHnmuFXAzVY?%H%= z)r%dP27%)S7(=$a^^+*Ar815!OEvImQ&1#5r@PJNU|ZyyK8wij-@-xOA!vt`CjZZJdfbd0-6_uA66 z8JTzi%SCh$nKdGBtP{0LiKG7FL@@Q+Tz&dpDXw?}af`J6I=pi--w59BZM9REn-~kI z15dFTS-NtsEd(Kfk{RGSG-Y@xe?H%%p~M-hdKePmlP!3mBU8UAeYmCNHe00@5ITQf ztQ4EVH1|!%VVm|9t@4^J<4Vr5Bq-JH|;G$Q3 zJT4o3?RskKxRc{y8Uu@r6XZK_SlA8JSvfq?2%!ui(R@-=84W`{y8KaA<8z_PVIZJL ziR4hKT71~c5nG8^oAFpzj>fkWzCyj`_*mMHyr~vi$L1wgr%fr*m+RTxQqvDS&UW5BM$0Z z6FKfcYCG<_{5ZA^wR6s^Q7L^#(N{_*sXJ+DhS)Zs8mhmXmrXAApS3@U_F64f@Z~h% zq>GAcy`nK1yjsVv3z zmst34jKWt3P{Kzch*PTVQAtoB(Nz(_W#o@RnC=c|@tEO)yT|8^s4Kdt_>pyNb?5a| znw19Al-=`G1e2w`W+u3rW6I6MM|)B=If~?-MM)S>gPY633qxgOBG+yI|E;>GThkNP;H+;Sbq{L;)G(8}wKN!K4?Y;|hL9y*HKrgkVNIKt-oP}0jM_b60 zBP}p(ol2p-B6pNv9RMSBKAbI|y*3dj_sna1bq&KWZn~<;P$``dTkbd@fQ(d^vuN(T z^rRE>WcUeF!-PPbTslG;cKIggplnFO^^%u96Y6JbO&Y0&gDeEy5PO5-D{D@bEq ziC9V1YFb~eJW?vh#f`umWxYjcG5QdcqS6S> zhL~o!z~QQaAMA100Z-U9bEdU|@zd<j zOy#6bOsL8+n{pZbkR-M`4GBs%1<{Y)lyxN`u9Mo%mF1wT;gUh#%`8Y{kP&rNQFt-L z7v8}%eXVhAlQObe)y;@iV|ldYW;RR1N)mlUJk^ZK8Cj5FM6or*yO$nR&k?fkek2&qBEX4TH0M7rwmZpnWbn8GFAS zgcqd*YI75DgD<;l^KHK!D{D3Jk5R1defpq!Fv8(G?$Y%vdj9LdL60ulfNaOF?qqf# z3c9GO8Mu5HmSd)S0B52M4Smu?n}Cbjq|MHDbowW!x0{M}c^9t579^38lS=UKO1cl{ z>RmK=o=yrdzs%9lq`p$BB}64dYhC91@j-4x$h@whm%t(mkQPFl6{M9Q$NLD?Ufelnv1aZJc;jV038qM*+LrcO{ z5Heutt}k6uiyLEjlyB_5dR!HCJ?UC`Z>3;iKaIw^>^S)#DV|#fb@1~`?MYndIcAMH z)%WY+Hno-*@u^-^jPf#pyq7&ZYJ=wARyE6TChoVeGh>rEQ&|e?+_BvOEygZa(k%11 z8p$QzLu#^_T)PGSc2cxD1Z{paE`6vjW^uOpT>K6Njb`?0L;+zmYoZFh-ay~QQLzLQ zV}-n6xO;kqEKg)PyDK^S-X&V4X}H;DvcLh8-MrWC6Js2493{`t`uy>6q8ZCdAx9T) zSeH!(8Cu~F@gbMo%9ieB^Fhp|IRy&|B zJaeA*Pv$Ja`*EaZ_sI5WgpQmMyR!%sGNyQNBl;glJJPgVtG_dwV<8}w~4 z!7{Q(VjZGF&&JFZXNK5ccgFA)*@k!O80E-3iIG6s)j$Vii4Jg@f^bjMXYCE5rC>3Fmi< zC1YpnMp#Jv3TwHFt%aqUc{fUEJ!T2|a-Zq`-Ib!41SOmyRJdE_zJl2VZI<}()U15BkuyEo5~)f;1$rJd{2&JnPqL?njcvSO%wNsjb@K)67v zC|!f-uk2cgjsgfM!#Vk2KjX_k%NREdY(XJM$+HklsW|z-7ohsm!uU9wqAUzJ<^4NO zc<&G7k|x0v8vm21|Ng#E{39b{&x8o2#Fs8{)Qo?|s?;~%C@QeV`SZ%3A;w`ogivE&KGaV;|JK=sal*td#324FAuS zhKT@r#4}v+7ytFE+Vi|02}S*K>At}l4#^u6`}RjqqbOcNpYbR&-DiLZ95=<={LJir zhVBv%o>$~}6QBNh<iU{RJiQh8wf$E)N1>TJ~WL6uAS zR@2_Mzzv|CAvJ289tpf!!8xA!Gxl;hj;}p`nTVb!uv8^-@qb?GFSgme0{%aq{c-D! zN%f`v+|Tc-N?d!AzkL_*{J6Be~&)v~oqE!V@^@q_+YzqU^JLc~*g zQyJbY+0;%9BG-I^(;fVPwr}N{l8zdTF8j|a3Pu5yd zNt;g9!lMk&xW=uErTay@#~{FCiO%o0r6lAuOW5x`MOJqDcu~+%{I*)GUP_Xd1o&yg zU<2Uwi#$8VC2JQM^wA9gt^ffEq7(dZeO38$#~T9J0Rq^=Qmoe^-w_)J-Vm@15O7F8 zxHG_n7Ai4xjXDd({beOW>wqNE_1bPxQcyFg#(1E6g87);J}mttFoPmINUXl6_-!#j zfCvqtuN%Q~@XFU40t5g8`bf6{4lrc9^4$;+0}wzcF#vFYI*48DhJXWr0DuFhYmq+U z>Fe%3yDO-_;2`M}AW(T$-CLp9VfYNthI@^Q#7_k24}KCj6w=k6dc&ddRIQBU#{nYs z-w)o}zP=%V#P@~+|EUO?(gu&}n;dL^5#r#f!_lrPd z@#V#V8xC#Vh{&%kS?i5c8czy9KvH5opUB!?otCE0KQLD0Mub+F?CU8q$!L zMJ3q^gssBww_*DK=Q{GAh@gl$oBc9>Z(7~smt}{^0G2CZ@bII&=Yb{_@7%3hLbt?U z3%^fECm4HP(RF9hSM)#Em;Xc@DpBuau9VL#@_w771l8v`S_-y%%rwGI1A`CEu+*15 z_M1unbA9-t!NL&$QgV**Qe6MM^oKY3M<0?Dep4KfAdE5jXQ%a7;0CZAkZsP0 z+Sh|W{n9_)>5uz+DBbuEI%A_8lz;t!Kd+kH*odm-i>~{BUiych{PU_R%8iprOU|bJ zr>NqOU;XvjEIR;SOlp0oN&d|TjVJ*0D-HRQ5%cf(^It~(5lsEUnE&N`|I5gKt+Iby z{jWy;Q&9d7dh}n@{(JuX7bE}H)%}0vYyy>S7h1lxUAwGA8+uZ|%wz_7L4E|g`!m%! 
zR?GsuwNnLs(7C3V$4>k1;-CFHpV;*raK{ggjvf7x<*{X^t(wsdfI;5T%EC`4}8`Gr?W>3JKA1>6I!ry*w)FNlTSB0ol!Y)EGvRKLtW5 zX{$N@MzBB3tswP-;zb0qX%senB>U;##wn_d!1C>8%CM2*oj0<-A<T>f{Vr$>-y97&9*{woP?2gj85Q7{wtWC5d_0&S^f~;skL};D ziu@4}v%XIM(%0!5E`=0G%wHu8G`gtT*n378mmiLUtrPHsqc_fOGV7iwA@fI}B%Tpq z#QnWd?q8|x|8;Mn%^Rf%oKP{ieL7*9e~*H=1UQLr6d3^A6ow;%w%H*0=8Yfqp$B$X z=MnJW;{SD@8-(^45d*@wXC+QA7{IEd1tt$6$R7-6;Axf1*~(<^T#h&N$kQYAzxs?zoNaPn~h`piO5K~g^N_)ia$=m+4Ivp zME@ul{bOkN$JHb|AYfH&-lk860yDwLC~~=qKL#&tfmAWy^v$Wl+Dd@&s^23NKNOAymgHkpoXH%nFB#9{Mq`An5d0fHMMwjDLNBva zH=-E&jEOm(e@{|1o|NLeu4z}%?n5Z;-$kNH5jPd1 z?+pd-|=I%nJ>>^}+Q0r{DgK1w(HRh&+^D zM3im-dP(38EriPX!FvyeI%lKZs2*}8$|T^uPXijRd+g=ZnToFe>#E#uY9X-zhbb+t z6=eBg4~hA>dvNQOUigJLv;|(bpyubH?D8&$DSSWsck7hBS!coY0wvPEsUpwCqtv$O zJ4^%wUBj zA1{@eCMK-pG&>lj#HopK4pvnGkml9ZrI*yh`XAjYsaCJbBER+o)i%a@R%PC;c3Ud? z%Imx@Y&}?qsdnuGI4@xs6lc0|C;H)*z}^(IT1nok$6No}i~PR2@#RKRMzpi#u%viI zjBvHtFmk05mA~+)fDvQ;^=xp9R?U%BvHGe^ur1V@S!bBrc8-F~D|evEBoObp?B7DL zK46-PZTSP$Y9VFewAK5C$}#w}2=-hD;CvCsb>;D6w!3}y>B+L)dk&WXX__RI2QDb<)(H_eX#{2fPbKNuDOfJxCTl~JQAN<5VE zE)O4{UXn__OF8X|(q-CxO~>j=(9ve_cL74Tn)NZaosP;CfhQ~`aXfn=!NM%e_+wKL z;j5oN(T|OCadFtUlI%;&3iK#scG<0Ukj&Ll+++ua*Z@<5?-#e?sYL;!Dc*B1Vf+5V z@sig_ByMNPm796hJ%PMCMpbs;-O7j+i^S{8UH3PK%E0g~kq=?Ifjn&w3E8Rn?0NCk ztvpn;eQJ*1tzo` z)Sa)eX}O)JN8634&`DO{PG16Z4`S}(tM^OAazcA}cWVrQd5m0QB%n{vJ4{5MWc_xU zN%TmmVnLjliEdQ{92>H;BbjkFemuv+3f&U^u~+1KzzU*(8Kyj*i)V{LTq0i{ev3ma z_%rqB_%j$>oh@P(uci3MG-L4g5ZQFRFbL#X`P#Dh8)bgn1%##|C-t_O3y0fiwa%cQ z@hL@1PNIT>;e}DvmfM@OhDk_$NaaHnqiHWU8rFyTC&MrU?YyI_=3o{Q4 z7@wgkEW+~}Z}lJbr^Jy3a+6|J^so;}WoXsAt?V^pVk%8Vgn!)10OwV8@m*gWHjb+p zM7PE{twc9+vExA~G2G9W@ejvzgQ777&Khp!^$15-IqX_EAFXMRsM|`X9^j(|30_rH zo-O!8$%6QF*e!mXz(nyi^iF^;>6BZe|BB@LO;CWD&ErI$`h`6r!k^O@G$1uDV8*K% zCHv5>$jdHbDV7Zue#PACqRW3c24B?#9V-@h(Spx1zq1$b)FF00K0E|wSn^g-3P<3v zo0q9VF1lfVlt}lb!oP8SnO&w+4p0#XdYn zo5wfo5>i#3zMt^)aVAaAUAJSHvWH+XAlzZT(<$3G;Oxu(Lf!U{o|cI1a?G-rJ+iPj za}jD5EUFr#AEM05#(Ved$Wfbo2&8(nzgZah0RG8LM-R zra-6DqeM=9L!#2(9};o*PkkDa7A}nZp!4mc`*!JQZwVnbyyko*Ry#%}32G#XP)e-F=UT(rIlCct6N&BCvP!ylGbVl783 zutro(AyzEGy!f*-4zr-fv%Of$Iuyou$AxbzB||c7J$5bP)<7#bz;>2zw)hw;Nl&HY zxLKl@9p0&WMmM}D=hN=3yrfZ1|A-i_ydMLLSSR99mb||WU)^R=?p9B_7r-i+ghGFg zuu8UF?e!>FnzXFvD5{0Ph8c~pvJ_#e!P0MKh3+gGCOEa#0K;3dkw3U~6*WTaCoHPB zpS(qU90O29f-;F@*)S1gU$DTwZV-KAj)zFnX=$6s_A;-)K)>z#6QZ=$Cme+=qw2Q$ zs<`Lxjb`gW9f~nsLz%Lc?`>&@Ua^Oa0a#^NHf^t#3L#Naw{D%|6?D5e+bhA#tvMc1 zdo$VQxp5Wbe$l+foB3Rt6X=@*IvIaj9lrS!_vD(9RVSm8Ri#-BdY%*m(uxH}oU?ow zvz|0BzdO4~e+c%q{<@(CDg25@!GmtIoZ2cu7|V`V2MjeAN;+#oV?utXfAztZ>_I*u zS-ag&XZZFe943^YXd6I2pKJ8;st!9sbSF@X8G@wq|a>&x*5vV?*zo^N2pL@6*vHM zlD|pF1IK{J_3=|tNPVO^$6NUJ8ty0#+#7822F4iq;!b?@LYL4?!{#!|_Cop|>Hl*smZGxj{vtODrR4 z-7h93e#jiEdpy8&i#e{NXAV=^S68R?h6#$0Ad&a;CIO=CV)L=8VJK~V0totJB*JsP za9Qh24tD^yd=IswxgMkZawp)M?OIgmHln=hpkNzn0aAv%VjQQUfg~Bu!do7?Ur?~Q zEhrldj?P9Wx?e(!-q=D}Vpa6NiT2P+G8k}bnX43AoaTuxaxJ%-z$*!X*)Po8LtXTD zrR4*Ocm{K;cdG7!saKDAcPez5=RO6?aL*&#whzDV=bGvbscy1|XDG!0BS_SpdF6I;N2*)L%eO#=tql@~QDi`|^ZeLTNj zr=J>|g@Xjq?vV?~I0Hb~)cIt~cm}|?=jS(`ln8V*0PCRb&&Y>af2CE27Jefa8ejH#de#0+xV#HX7>=CJ?+bvv57Y zwVh67iV3PaDg_reuPr;#IAJoTs1K3 zNaRz@>w(G@jQYzHwdW-v{k?akcZ_bo4r!}&^#D3M_K2cih;+QRnt0wDYf)v4rSYoC zVaBPwa=7c(RZ!bRnIXYEEcvP~hP)+&fF$#|YeFK&7Gp_Cl4 zh_rr2C4YX~k3NPvx8|U01&AZQ(Z8p+Qd7w^wIfRDdhbGSGyG2M?iFu%WE-cdP$~~Z zzFARO1DCNPILO$zTi;x@YM8b1?2O!jJf~dWOo6{F?V-nr)D5#$XSwQ?Hm@L>mwRl* z2p+N?!&xayp_I!wbE)0%NLJk$_?T4m{%T9Of5{B0#&(iHhLA{iyzL~0EmXUA*4?zY z`5s-h0<`S-Qe8TVIXU-sl$Il;H&_XF*e6Jmz^LdYcEYbNl3MGt2 zNN>Jt1!;`k}YB5*v4ibQOsDqASD}Vv99w@#z z^x1#@LKScu0p8ZE;cuyfK1&g9f?hCqiV-`JvnE*6^m5I*-ZnkHWBfcP2n(n_!O8mv 
z+aJP=a6zWWh%u1Ao@i>6U8qF29C#562AJ5yFWrM_87lW&S6ZeyIo0wiYoeifM6f&*OSf9WhyE+=myWv4yJFW)9)pHUCx8wI@?dZrWLdrh(aI9CM#W;pm-hFil1Soxa zheK(|(xO74)Pn^!p)_5}beGqk!nKNVrCa>f$awq2b$#7uUwp1mqQiQR4n+ z>OTmYS#;adI6-SWx@vJku4dU|F=wGdV8C6qTCM;gx!fVLDjaar=yRsbrt|4|?n-pS z8{DstAtxVR&z=MGq zHavgq#vI z#T7htBL+`8m@Ef8o_Ob+<~P8 z*UF0L=wb{eqyV@d%O<4<5#$WK0!C`E_%l`+RX8pHm0xt97qE_erS=y&rlswq*MR?S z-UnYqyPeJZdeC{>@O1Z;^dNWX)xYAn48f%%6X^6G$iCue1G)aTAi`%y=^y&H5rok>+3eKyTj{2n}^PbPrBVYBH?b?Dex_1m;_nH@5IM^<- zoPQ(I0I;Aux3f`=bpMl8n`Ufmb6NHb?e;x_HkseW$V5*Hp4M8R1gpa`NYX_f*28y} ztgan(&jxVV`H4Ua$jicZV$-F=u2Bq|;H4lFKy^R)dhn3Ahcs)9=|^gi(tXO5i8sOE z-72`3@g;4Y)FTj$6)mYTheR(&+)E$*La$dYa9zJh7>N_*w*EnZT))KSyVN&#mxoUV zC7IAEZ_N~2GFHHO?1U^IZyd^Um80nuBaW-*-5}=fP5UT zg>u=s`_(m&+(%+VO}viQpR$MJ9egb{h_KcRga4>M>8t$cs1rmkEZQ5uyNRC(SP z6Mjk2RpWgUc3J-8_aBiY-5TM?oA@Q8X2sBz6x^u@c-;E~sT{2Il~q(t@u!vSW3Pme z8LQSvZCf6Rc{3YC>qpFiSlWqfy;nEv#8h+QU)rJ05acu)XT}@Iyj)M~0Us^~3Eqp5 zB5~al-ZMXemYXzbX6ToEAny!%zv3hQR&LiHS6Q89f$SYoOX`O`X?og(nNIjhI3ciM zLo@|Oq;b&CK4fK>3KL%R^d)wt?kSpLk&U{=dl$thM0S=FpM9XVjc|aavbqfq2`h^U ziNOiTfFdfELL__UeMyMP^JiV*(;1f95y>Mrnx^nwv=_)7`gJ~^EbrzFCOJhuwX9)z z`^NUZ#S5AA^^97Q#3f^KMJ{69*o^}Uy9XtBKuXDKVJy8OxtC=1s}dU_@FLMVn6$I@ zbbZvIIhB!1@bW-y=|c3rh3?5rZZg)wzVZy)Wa5<}XbRnMXS$DQ&>iIP!~(PGQ(EPD zr}ccH{fJ7Pztd6fCE04s!E?#Ra7yq(sBFyI?sT<-ieiWgd3JAg0g&0SmtAday> z?h-(0z|r6!EJzwC0ACYo?#H*FuA?mfg7J)yC7*n9DrQ*mlKp{$Fpd{CD%|t217wU7 zGAH3=Yi(SV0b>VpGoT%2d6g?M4Hy59y}ypCvfJK(;YS1%5J6Hv>267p z1`#&hu_=|@Ae|y1C6dyyNtNC-Y&whVxnvL*V7~5`4hJ28OYB61Oc?}G z>_V@=m6t4>V+s2A(@{YeHL-WBkZCs2kU^ar30tFDKg#npubjb)4cMbE)qE}5&mAy9 z;Ad0x^u;gCy%yvOXI#JRHx}Z9cWxltb_vj!*{Bc|=|BWMby^IuQvFIz4bCDz0PRuR zc2?11N9J@2H%VXol*Z6{!P8gZ zLghs3nNG{4{PIcWcs~~mSzH>jA5eXE-64?GS49}I6EFI(B*)E5_72by*i*4r4vMKD z!j@?yQLu<<9rqbpXyyBv-=Q(24{cauJWbE|nr5c^Ew4}XTIlJhCiLW{9JOMDWoTvWjJZM z71+0GcI&<0cRkipos-=U#X;47TUKq@S9N8I6xlGgmPJv837jt6L}PGer>I@~s{$84 z&BJr?qa2UY*ej%cYLtBnWJTB7Go~!U(C~mWRD5ks>Gb`=ce=JLyuE4uTm1;0(>Lh-8hD9=cdZNklEhB%^ zho|ncgUqGAIKMsy>bL+dau9R}E6eY!n>j|89@+7X7*|gKZ^8yitv@--ZZ00w``{)7 zWay8)2>~B_NUnhy?neo9mrk*3B_TA?_ymxaN~Tt7=p)Cr$1bSBa)&hn!IZb+&)M=1 zI`x~?6m=HsV|X%wW%J7!-1AY|xBnr%`|1t;O49>iI^-Y>T(M(u)rw=+?@otA`D`u~ z3bZ=zXpiGaG(FFCT*w&{<(W!~B(`NtJaQ<1VuK^t+C9TJQLAzt6_AiMck+~VAg{Fh zO?~Zl^hX?47nS+~w@0@1g7+#YW;oj&xBA(ItUHsH7`)7?afJBWRYggSukxoUqu2&* z?Zr{(Vaq3z{5w#1J}PC&*meN8=Qk0JNSpHMN}^1EAZ=4jujpPgzG;*K-+(nPuqYP; zeCjLOkAAjXFan&xqe#ee1$O$=rpQ7@13uud?1SDb8l|{ghqnixNdZH%GOUr_Qb7Iru)2yF_Cpr+ddbt&AlB+N#Y zkYhuthx-iE{=D3AohXTLo7yOPhNV)Gpbvm|T zWw>j9$nOu(V1_Tk>Wccb&&S@ag2eH@P25p_SpvAi2;W_j7q;lLAMG4Luf(IW(qMB~ z7t>z{E~jL7(}+S+A~2Y$+hr8MFz@x+SyukU>gFW7}$&+1KmW4_c! 
zQ9sCa4bPbGw9zHh!H=`!_Q+6g87sFthLtFja$!Of*hf5wF)<2gdG+e>-0rNIE@`-VK9FV<(DhryL+Yh2`*e?V4yvM+wqFm(8p=a+(TfZYu-QIypysO-?Q#5~%nB;I0Th!PB%iVmWXzX+?W88*IIUWF0rNiD9|r@vG{ zd$Fh!ia;O8Qwrq_8>#M?z^G^zMJ(XAs#WFx%n^r<63fu`ikxo^Q7Y0gap=JK202%~R-*XGDokXfe8p z14EuoEN8*xlLbZo@kwuohf2E`f)fORQ6mnKs|o6@IxAWhWKzsMw@|OBU-smExRl(z zUGt@^rD%Y^y-p5^6z%OQNaQVzEBzKL|l~H3{JH6yRwI5}* zQzqZ6;WqOxizt~udm-P=~KUT-kBE2dSdeVC6dBHvypK2Yc(|VI%!1lULVtC zx>;f5JFJ+1U%1(Ke#hx-SEH;R%gL|(QB6!V`fiR(%~ooYrV4%?tFxlVhcz?MznJ3H zHI3rIjp6}st9O(2Kk9PMwX*-H`7t;!%I=J9J#f;>j)(1e`(zABky<+N$Zs`taqmKa z%yK59>Cn~vv2Lmvc2M^?eP3*`f^VBIKCbx#)H2nuaH?C^x78H9<#S+$9l$e05k2Aag){cr6h(++L~w$@G(Xiu1TDa=DlZeu0&onP{Ux63|+XydJ`kl0tk|r zQbeVL^N4Nl8E)4^%(=h_lgRs!ILRn8kVMJvfRr*=@9gq;&SA9Hz&DV;CYK3M>CVb9 zf3Q0*##V*glctyvy}L_9_ta_xMZzKY<)3lAz@LVZVNnd;<)`<7>JmRbk z)T<_5S8B5b$;^$tLncvrSC8ZcrXvSlkkKjr&<*`0Bg7W}G2p3##y2qeKP{OcE|fm0*h5tZQX@dmG|yh|P*7DNnL$C#Y}sKz^7y=KR7U7EAn z6cSb~U>ypgeSPgTxP<6esmG!rD$S^&}UkO-9jyk7Uh_wJ$Eze2TXQ;=Yd(%P?&Gig)3 zq|>jJ9aMsMgW_i$AYE(z@(fik?}(dj=2tieEbxWu@XgVtbQ4%tuV*e?wm2I@W$8mN zC=G3#aY<83d{~UM`u*EtcA6E?CU1QnsABbTFTO!RU#!wac8kkCHe3z62$%Bc>d@lr z*xvY3#0U|oo|ZDRR%+v#d#UxZMlK>>U)Tm6yYrY#01{isN?hWmI4Bg=$&I)uF{f#J zZ?#EFnJUed%G6-T-i{M>L4ouGtSA2GvqKQi`$NM+S5I-=)vGGGY|M~VAtPswSw7pK zR!thu>BO)dq)t)QLjyfwwsQqtcXDrvj9&)dd9MA$D@~YP0w<0hE(Arqm_fQnXT z*j)x83{o4B>2?|r50(zCN#%l`GPX=Ry3jks74fYa(*(wPOU#Hh+Wb}x%~OrR7cnBb z({o&RWz?DusVb0IxXcAfJfgXLlY`(iutR%J zI^rUIA&S6wa&*sm%DxO5IT)5oChX?uTYFQFO`1J#xiaSKjp?!S_|>rIKKMv?jxmBK z!vo6rg9J}0R3lHcLt;SoS?Y_WPP**9a70D9!yh>PN8RIcV@)TT*EJ8007@HNSbw8WY=IU1x zj(SzhI0x%F+)*^*F0j++sAIP($IX^T86o8!rU_QP8*egiva6wBmX z=({I49krpgl(m+Uem`-r>(qr_#`s;IH(Un~i^F6EJ8`(?*2cIYOR{_0-@;^s43|1H z;*Khc7rDz%9cCknL*Kapvs%GxRDaoOwduub39&`??yKN)omJi!v~Tl4$>@9WoMD5| z3xSN~cxdR8{`lSDNB0mXrCoA4g1Rh%Tj~OKr3ZEy7(!mkUp#7_)on3kgUjh74vd{! 
zaeAmKOTyN^b9KZ?(|w+l@Tu(hEQ_Y4S;M?G`TQt?*PG?g%fGTJlNC)XW_9Y34NNAP2*Z$8AUTpiud%2fFkW(sfYsXjKV;|j`ex`9 z8(5QhFXL{(Fu7-q;k#{QpcE3S5#b8?R>?Y)OH2_sf~(h2n029E?F@)2W_VN284k&V z6Q!bajEUVBicyEa(UDd|U_GL=EXi80oD=!(d-KH2$&o!8|M|X3w7T&eMfk`=wsixk zf@4s0>$@x+L%l`rE6dr>ki z1@HztD-X*g1}??iUE3Q?2`5i{Qn0n4)U0tVa{v!iD{?TM zY;R#(UGis+ti%E?s3O_1r2z!Vkd_Mn7^N1fJVwgI=cNHVLTC zGfrbaALksG&qnF^K_BQrI2XN^mVZ_a;~uZQ>e(Waf1QsU4O-5kScEJLNK=G1tC8fB zjHPE2rk0vw(7ZkTF+igMzrwVOQ3`8i!QRuTg`-+^DuXoYN~FRR8$USLpW=!;mNu`O zE8lG|LL0j&hIfGA6PVbr{wsI^3wa+!I&G-t-rWIIaKqM|I0?HHKz(Q-T^BoAZDdvXL#u~*TnEy3k)~vwuhkyHJfVGi_ z-geB^lI&H{GI1Bqr~?O=KSWL z9tbYLDRocXiVzAc%-??Fpa8qz#Aa#d+IKJ+>HqEkbIxo0HjeR)Gk1jX$*8!qYhJRw zhn~)m#EYw(MXJyqV?o%fEm~Dq#ADWkWamjHA;rmJdJmLq!hJdF27s;xNCA9j`g4v3 zg_iS3QE}#vZ}!k3QPUpYv3sA8G+tVF9@&by_A1tDYysEmzU}AIe67ywuBWr0Jx})i zR{MqkSljOZaLN zx{J>h65kZY>uYLVwGzuocN!gOlk>XC;w}lxe9-fb#YBFP)_zn%RPLt9e9(9*v$WJ{ z!ZHV`$0?P#svLALEs;kUxYKHWZ;QEzo|IA%$It204%m+X%ml#-jl^vb_U`m+bNjYp z7Wx|1h;_8@b_15sV}r4IGh6`pd8^odO-W{g#wg@HJ@b)L9L=|yW z*JP>XbN;Nf@tUKwo6GxJawhGGRAj@#>%bAWDMBoUXrQ*|x_p5js*C=^W#6m&Plxpj=)Y zJ2zBQhZ)bs)bc(AssVwj*yuT_TZc`MTIfEW^tx<5fxy%^lgeiG>Xweqy52(bXiodD zBjmKoKgnSjdR;o1NA_k!7~9j;e-kQ*SlM?6oQ&_!nCn1IVn{#e%@T|2@+TVd<+#Qe zi0-*Bix~sq(g1baV(y5-p0d?;Y#vX+5tx{w-hEgM!6K=arpy>6^aco-TkWsx{A~@G zj+&=2lP|DI6Y%rl?`LPO&Q;F+yFbU=mgU<^1*ER|KE?ZSQO{eBf1EFj6%U{3+|q02 zNf@jK)lmGdL%{5m-j_WKt0c>qScvOl53-Hpnv<36?%NFE?ltH61Trz{od_7bwLUqN z-hu`MksWIqYM|x60F)TM-u0Cr)w0|EM%xgglc~WWf`|BQM|hk8Z8y$(m$@jSu?W8p z>IU^|$2b+DU96?cTZ;vHpJV!BrZ+p=h)@onr%f7tfSOYM+O99`0)_Ir^u+=|^IUk> z%5KZFJ{!wriMj2G>X6^ynYjvI%fPPs`?%7 z0~9B@ztp@*_>*m7jX;$!QQvk6RpL`J(#v>0t?B#R6`$J92us)-<<|A%lmtsrvqYH- z@);s)T%Zc?a!gW%t+_=Mh$vgL8RI)XWn*BNUFsYr&_#NmtK^q=P&Ypg?gND{fy<{5 zvPIvgh>j>%M3Q?=i6z-F$-H$CmWFlh-U}C|l29y+WnCt60yA(xg)P^BXt z`Ei9&Y!a^Yro+l`N~Q*M{tThd6wgJ-w)y$YnW7IcS@Li*O?b)ihKixiuO6*rRJZ(w zyyh}-^;$emjDLIl76Amnw7$BMIZRh$G<(Hw=>G)eEgnoM{6ptM8^vAXgu9nz)AxPp3G=cpPqE#sHApItCg}> zU*K;Jp{gI(K5mh`sIa3_^A54v_hD(}+spcBbhk|CclAy!pVe7(9+2xwC(!s6pB{{C zXRQ*Y6zXg9X|-C7=@*NW7*+{wugV&cvnv`-se*1%+&U8W(e@zoPL|e;Hxl*b~#PYIMC)_Zo zS)~?0p#pe)kKMuRM^s+Xe!h699rKHF!}|{T!@w*aUZ$)LLD;PuLbU*P?IT~nPgsd| z7HZa#mwOqfaRQ1~q|r|xSj-BZqpUA!^G}%V9=AP?ZMfL)s$x>hTt4{Td3wx@tk+pV z1X5lL6q}M@YEHqqu$L?vbs1&8H!7*UZI&eJgG`#KzBP+5%&{UU_dcuYGy7CIJaqsYcJOeAWlBtkLbn<5K6Gxp{l;<=HM?UQp;EAFvRC!Lz1~DV z(E+(H#3N}~rUpx1Y45WLd&0rj&Zlj}r8mH>IfF}GA)ZKnru1^G*Yxu;>`M_2KJu2`VN)mX zl}sDGdlOq^rwlsZOu6swwI2q)YN!%shQ|77W#boh5T!hM^p+S_8^u@V2SPrmoKZyC z*3zE4+;c!;bGfkB2)j8Jm;%zF@>9MI6H?QE$g1OPCMiE^nUz` z%8qM+@BB<0Ooopleal0me|3H@x5Tyi@j&qTqI~7L-!!y~eU>t1u;%RNHjSPBRZp=7 zS!FffkqXO-N`IQ`Zr$Rhwgcv{O?nb^!V3V-%RD3#H7>jDyjm16g?7JQA1wBV zSDdUj_C&^Ck>MESLtk{c1|g;Nm#BSXpGS_PBg6npg{&k?d1jQW)b>8AutvGUv8}`6 zn=p=g;hjBqnu&Z7_K2j40rE~AX8DbUTv2GRx%oH(pwl_!SRhw}_$U zpNrlwf32s`@yF&Hr|I&HkIZTO8upCYIQxi+R>o;%X!LC1 zZW*mdlcOG4PB>95kZn7U0aeOI1B;vtdxm63)%b1TFCXQ+GwRFiepTa{nco`8E)y&Z zCD=FZdE=i8mLx(+d8?l7R4)8i#=B7!#zWS9ZfV0Vvzr3|_|C>@)|Cc^{G%d(#>X3!rlk$#h_Lpze_P_RE z<7Nx~c5Pe=+ZDdb&V>;}+gl^jkOW)oaedu44Mt6-QFt0qQRmE-NfN zKjvt(%Az+lE$XL??En!wd53r?XkSIc%Pu{ir@&lDcdtGMI{`V=aW=i49z- zm@I4+zoB7tP8&vEr~B|ZN#Ei~C;V0lyak3Zh{y|B7z~x?0>wz2biH~bs*z7vj?z&eS z=oH{~5t!~YcDpH4?nSmT`Im8C9?aCEZu(2O?;8-kkj3Z`xUV9hJE@&caKv7~lX@X= z?=;0Cw?7(0U0EkS(Cex7F#&64PMAEQ*HM&vB#Xk|zLV^l$%q1l?WPr*xeP%T4mTc;%QMG$65KUe>mbdr=6Km}HQINmZ;~^eWJQXZ=U8W% zb-${Zm|RuO>9JSOlL$A>&pE!dE7$AclJ_A5nNQ0nJJVCj(nLTvO=J+4d4F1aqY@0# z|7C>k?764R>~EK{GylNS9_!MDpb9P*{oNL!0SA=V<2mZ|bU ze>&ZYDoPo$Ugcm{`T(+C?EqP=)O!Ze#UNQiobg7ugL4Qy#3ZH5u#RE-nc2~o$TQ7I 
z8$I8?k(oK!p+(?7KOWq^uAYi}D~_=+ZTC4}-m6)%sxMG3E80x;I6BIZ&T6@{qudGq z;SHX3wO~m=JMb3N|7m(V4rdt&2*!3@AovnHvnQy<^oRrYI2{3)BP?Sdl*Wl59HHSo z862-Z84_W5%3*-2gVM5RMh1jKROaFG8kIwhXZUGzrDo}3DnLR!%h1nhUW9y1N9-oq z$QCzYV=yd!V|>hj#+DvF$JIdoI#omY%yV)dBbGj0haignam!w^7lPHFUYS5Elj(Aw zXupD~DS(0lCc)KJ#)A>9F$V(%-1F0Sc_N zob@uY4Nc3xl{Cca;Z*bx2T;*FylL7bgetNmPp9p4ez$0QlTsMyc&Q?|_N@HVGeH*< zb`ut=F-VOS=-3)nvye5B? zKIO;|f@-)7ND;5lzH|j02Q+}nXMgF3uXvX}+C4w#^?Db@u&zyAOt57??F^08BZdA> zNG-ZIY0~-~H94%@sL5RrH{C5OUu3~|V*MqK_tTgJ&N9cknO4T^p(qO!VB0x1&!P;v z=qa~VucsKrRf-|R3eZz}6sN*MZ``bmziRj<0)fTYIPZ#;(64l5=qJS4}&n3Go@Y! zN)Sn4jM9^#C1aa$?ymN&rIiU9zcx5jP9R>(a?=w-oJrNQCt>2Uw=}g%8j$^n4aSPM z20t1fGZ=fQj71piZxQlp5qduI6H@TZWIayrn|{MC!9JwGon&emS@3Y`MYSEZ>{?((&B`;i)o_YN|J#|vOr*}XZ?o%;+g^!xEM7#E@ zFfW1W3OIi&CghtDLW4xWnK-XX)5xE*=`$~baUN%vzJ;Tu2P3KFGoFuGT!^ge`OuE@cNL9zufUw4&>CG%`mg29DH`?M0tK)w?a#)_BBI!W7)HY% zG&dWy+0N%DN>Fy@g^kgqTNTyGYQj6Ruct6(<~29Fz)d0xA3V8Q;aK-)4M&CSTxZXa zPc&)Rg+{{$i$%D793W1mSn(l;Uy1c9qP$-;IOP=y1HxS0^>Ber?cRrCC`l{aiVr{@ z^0svVguVyLm&v%<;~uE;-A`mwumZb}ogX}bPYj8IRTxqu$VM>N4;{SQrbwt@&kuZ zR&o4UD3T99MQ*Z~w5{o6PJd_@ePrrP0B5J7e${8U*654t==S?!`h?ICZNAYlSI$;- z_e=63tQ6n$gUhi;v5P-y@?A#{C2T{BdnN%XHr;JKy4%39J+-D;r{ppthKI#8-A+u!_gFuZXH zPQ!=i*ZiAbyOq2066FQHDEv{GDRwAt!lSz^dhQDQGVx{}wCF~!g|pz5#jE7L`N0ee z0V9k!ek{3?nlPAsCk4U+Z}?0-kkj7eX9aw@1o~ltNg-%FX5O1p2ZdhLd|xI?5uRr8 zMzyO!=^XEFP~gbo{jUC<4?5ZU2Vq6@w{G9cal)g} zE1gVO1z@9%a;ocG30$kyqKgEhC12xxTB=}E&Y7QI9lN@Hp`0%p>V~(NS_sK-A5X`w zMDeaK?|>QYM|<&LjIwM>@OwK47&axO%YZqQxK<^f}a+F*pS;bXZx|JPa5t zvDezVo^Y6y`i{L%J=m<6n_tJZv3I^eOrCsTiuNXQ*NK{FweUbw$y2ibibiUgSNx&) zD#E8Y<#5lON3rOt*ykyfCqwG>sHk496a*^srG?GEwFWx=b@$8ufiE}_C^ON7&caxJ z=)@JVlGC=2P?N^=ptbUaSWTBU5}~jq+?Xe_z>gO0s$Aq*&<%87G?=p9L~VQRbN)IY zJEiC=?Y`juwPKYOKT(HcTrQx=fc%?++g=#U#Pcy*aVH(~cZV#S_j?Zc+;kBbkLg8i zkiuPXgA6n|hcID}G9Pm@nDeQA?z}9EFx^u-QZlO;YE{QK<92@IXl6eCQHbA_nzQHr<$W<60EnO7cH zB%5!9^s<)?3mxa+*Vw2+vTbKw$yTw-=p?>@ss4!f}%Q$*- z6g3HUg&+(2GVSnrHBxzq59hVWO0ovjLe$crQ)yO4O`Ii;H|HUag<8IGxu+;i9N;?+ znOEh{^?oeZq4h?&gBPabm4sFh>j^y!vMe~w$yTzYtMXH=CrS>cX(~%G!)S@}Gqk22 zt;>pR9lhQs8{6l;#^t zXdkaaCM2Si`vV#7&}Jd@PSa-E?8nwqp)uCpIpZ!QbE_%%<4K zVl8@KqrR`nV&Q=w#${#dVHKSe=H=T>Bs=s1k&!H>6dXm0PkRcBF5wvBz?&7=t^o~) z9qo>eR&Bt zCZ8@us?#KIH<)K7slyBd`_Rgqm1pGPi@s=ijV||cBEI!=+97vRhOYFYGa0%iL16^^ z_OJuJ)teE0(@es$acK{&xvmBv9r2(ihgJkCL>~%+y@nMl8+jLF8zr`;JSdP%Gq{l! 
zRqcW(FJ9=41Xne>h+cN)y)_@s>%P*>GwGcw^on8B;Lw?qe;L_xy|E(S`L}!qP&qbc ztBmB7VJRi-e4q!z6Gv3e(2fE6T~GgKNx4Gpt^22ykwj5g`>>3Wsu*;Txz;6h@G1z& zbRpF9zsh#0tX~c;M+OR`CHO|_Je*tB&scX5{Vg#x%yA1=cs3b>*GPQUyI#W>ifu)R z$+r3vO=}-NXID|xTvpAS-OHnm_mo5YV_8AjQ=^+sueh#BP0L4LzLiB6%Lu&Ga0%5- zaz^q5xqGRsq<0AaOY;LLd$^%5ugOBS4vpD=wlWy-1?U}Rg}w_g8IKTzKYmA7xM*)@ zVHKRv2(^6B`L{+==!0zDrkGB|lbi6#v@A$XoL+O$w9yBP?&1dbm&Et0W@DjoL!O9&@#<#L$#|5o7|yxR`Sz`0eFYDqU;27)1P4!4hx z@mMg{Qy;|Z%$ZI?J;+JAP$xFm9(_~`W*0|;d)gDYF&CTF*$8O><8>@25v1!0d7l@b z$rsh_9m|WBC5OKi2XOz@B5%^VcZdD(CHv3tt{pCZkv-Y;avKAW==H&MCFzI=EME8d zM2u*U*8*SY{w48Yks!u~S`bRfCF6G94&87Ju(wN^7Q#S(y#dAlIn&g zeOg9BVL6`KDKe4-`Y$b`G*tiqGSg`+7DhF?%?9~Ta1QNVHQcq~O!}29L zxOUyooZ@>gwkP3A)@Ke*YOW>sohJ_|2x7SqG1ISn{;l|vE7nM?_xRmbvA6G9`Ww*T z^`%h>5bnHHBMlQ|70{{=kTDa)7#e`9KpwyGTjS%_ok#Q15TvO6vJ6eCCi}te+}t$K zSJ)}q${$pp#Dw=pd9f!D8CW<>&H>Cx%E1%})o06 zn1B2(8~FiKoxI_sg+q@IJ|@?2zS6s7P z(h`5&Ld6xM#JYu|rNc$BmPyD=D7N7e#t9d|Algwr}{m;jiwD*#*HpclGoK~pcLRPm+H%zU1$10Ad;)uE1P|u zN8A;OACli~=1XC#T@iU4GYo$;Uyzp!3*Q6GA`LTEm=0Z$A4C`Y@O@4xhX?iZ*LlUC0x4r$vA?`&y+OCs?BR!_-SrA% z08=*#=|jR!LWE zv?Owa>e<`UQ7Ks;xp9c_ca4Cz7kh0pHMs#j@lU=VR?3biEBrl4>3Ed3F+uqUzTdv1 z<&KREX@#*|&Xev4Yz9`Deo`w8O4=K~U2hdeHcCD*`h#U*R?~v2ppoid<9k8W@0-Cp zrpL|rWfjq1O9+8P=_3wHl>DK}h_*0t5hk|cX1rDK<_h<0Pu*qTYJ5t=Vy|pO%NwTF zg*22}=+}XohErk-7&sY>WLC{9OZ0q;duVo+d^#0$ktOUcwIu>iGC6~=rkc2QGpA{7 zn2gG4Vlkd_(Zf$yZhF#C#J#T1=Vp{WNH??~btQ!E`!r3Jp`Wk)Qnk!WKe!2A9t_P~ zXm?81`S21HZs6tKVPulA8`F28yE?Ko?8(otKS`e-1TF5}+O zC)n->tl?sd+OJ&Xpj~&yR7yBq_lvd5P1Q-S@F(B^PHrO%G_#VO;Us{EThUf>8|C+$ zu3+L;GHbE6T)AZr{irMOwEteWdSH2lTUD*ABszX?veM94m!?quVr9m3(doF7dT=T? zVwb=~^2XqQqG%bs%@6>P(!SeWAkotM0zdLxbGWc}KS%}3?67^nmQ`eZAsi}qe)@YHCZrm1XQ+KGRRFHnk_U$~ToHLKJ0`w=fBmt!LkQ#Yc ziCbFAWzFKlF)zaykn34q-R_@$oKa3>^TG0K$nW|-p<)-OMiUebUT8mC15M|6TO~KrRYsqL`MB0%z0|x5 zZ#3x&SL?zaKKX{0$eeSV@x2T1W;FtAgzhZPq0E0BhSyGoyZO3(*nHcSt@5=Sbd%4h zf9KAsH^2YFl2)kdm_ExgL&4VYOs(8{TDmnThc-p(K{Y*P z0nm!FGuQ#yi&L=iZlO-w3c;J7F(m^X!JW_Hvo)aQ7)vBtrC+j%maa!MBSjDb!AV%{ zPKY|t+^~YriQyot<$#=lIj-MOed3!E2xqOlOI!Kkc}6vI5$W*vUWvgnmvP5t6S5p- z{WaEb#oMmz)rKIhSS?Wf9-qG8=bL<8leTa<{B?@I{BRc$5V=7J*OF@rA~tj%Xjm~A z>4ISEEJErEhOyR?9C|T)%$eg>MFu&)GwW)091H0JmO!v6L^z#MNykPtfGIdQ`N7RV zm~km>HEtQ?1mvpb6b#iz3 zXPpWJqlNIeWcla=`wO!yiZnGB^SY8X%n49uZr}U+>xK>9H)@1-kKoDvA~|uKN&ua7 z)|KUkD@YXYJ1Hq<&cdgilmRKv%wGJ}i1PQQ`1^Ce{qlc)C@_nGyFU08>uxA~JH6Rd zw^Qrw^|1S9tFtB#msz@RD|v~~SI~de3J>=K{`^0m`+xk_!*KJMV|hvR0^}&PiKJ~; zusLMTrtM|2|2`W1)xI_NUiCYl+q^k%bai!$djH{t?(%%Pv9~z{Y$y}3#a~pF0+(na zjcwH47=27B@_c?cM{P#$HB;#l=Z$~=?d1RaM|SD&hx~O|Js9q-udlz;eUBvm3jNK= z_tG8~o9hviMqKap#{zi&{mFkl()+~&u;>AT4=A{7r?AJkDw5yIJeD1-&AQiTnSpru zv2nF7F8a|QKYoitY`#-&_{3>6PbagkPWUOyFMTwE8TRHiBwWBDlSIhXwxX^0U(V|P zF_(Y;$x{wS@N70UEbRW!!~{w7`*{s96;XhK%8X{c__*Z;j9V!yC(D|45G zhr_)w4HQyEW@03M>4(m_NOQZluR5`6ETudnFMDl(Xsy$VzW2v5c=Mp377S*axZZ0* z402*XV^*6D{_ur2C;iQgqgTGeG=PO=qG1tL4)*_Ic2m0xl@iwe?NA3mPGi+x4ea#OAl<%qHM+N5sm>< zz~1@&_2MRk<<8sYM^{6JXyM&3Lgk`=m@7GOR9F;Ek*5BVZJBcZj_kHFBOiQ8mv3zc zwh-p8#-U%nQMhH8jN1Qm0KW||&E#ej^Z8&DuvtV`7^$hRwk7||z*3gT59mT$U&wB3 zj<-EJO+&2dA55Q(4GcuwrqD+Vw4-w;Cnt+z5Y}3(ye(XmyuI;FI^^NPBNSAkKO4Q+ z-|umfpUi6+3$wBL6kRgnW|$KCgG52um}d%J;_yLA8Kq!JpPW ztOw04>FXOCLDA0FkNLP6wky8sIIIv-PbI>nzM~kt+x;3rZD}rc$Y*J38T{s-pQ#I; z88vN)RycfaF`TUufrmJbS*;tq(CUe%pCs4BFRWoGY`=j;A3uKlc533!3-%XEKG*pf^0OE8+>6_d&zLGzdk} zk)2`=NA6>q)`X7UKhhs6U^&%7%Wxmu_o?v10#^s>eF{Rl3-(w_$lRRU(PIWyGEf_bMT za`n5cPqK5lP0G*Pvh{JXKB^*2 zlq=V;L^t(yioc`cKNxqmI{5YspGTkmVFK?`h6e&qCjYPl^$$<^+Yh9vf|-uNv#}Gu78Y<(geXlJA;92KNnFRnV8V| zbrU+=eLa$tickH=Bd-4@cY&{J#uzFe=E3=_;OE5TI~+_|gl>h_XE+bdHT9}Ktv57j 
zTE`i%6ds*g?us-~?XtOB4eU?ZJZ!k??@pXPic3j;T$@@ZhRXu+xB7=eo)Wwt+lhek zE;CwQ5cciLl{8^rN{wr5bkp3lm!s2WretWPL1-l}K>YpzKb=@J^TPPUzfYC_IFf%q zUu@_m9huaPGyJ)OqKT<#HBuk9trBCE#@s*~Tz*2XGp6VMx`~qLR-Tjr>z`A={wNSp zARA}K!r#+Khw*?6_^sAZ-~H=+nAR^xi{5XTAYV55v=j2UH8QJe*Qm0-&qzxui1;V8L0A$P z6#lH);fotuYx*)Np6y9I1bgZ`v+B4XAzD%7Y7?CEl@FN+=K?O@pC{?PSTF{d>=Lrv z)6_)=oo%n?^bhghnXx)2O0Bsn<~S#68-5;A);yqj@$kd_dr$vg6TpG|&^!6VT^4WF zLg=f(9qe!(_xUt&U%j8X`T6-}^5a;4oZefgaK)P;O>^~_jR5GC35Gz=zmQ^f<$br> zvl>RmaITOEv>n-9GyWp~rwI3Vc?!b?U>vxVG@|cHX84!*q z_x#%p{15-oj|yBG=`wwbucpy6)A%d0dvC%Ac}<_Jif~WqP1}hnR{}$=e0AFwoAsaG zNbTm0-cOAT04kB7pl{?2v{9`eFT6?buuaE+ehzeSo7$uPY4`oc`g)J$9k`Jd6?Zbm z?fX+qPvT2`ortQ5E0;cW+}P_KLB-@}BO#9jxc>hT=>M8z_{~mE-c{4pP1GrR>CGb5 zaO-x@crS?)yUMR*gJ1eHo|nelS(~O>C*atD#~FH7owEDk{W-Pi|9+E-@7>0wp#J%d z%rM&s20tGL7+8vJ+FP%M%^ivO-Q22Oh79RHr9D3qz$|Y#&W+HRgU({i-umkYQQeJA zl3;Y5#CQ8#G7Nr$Gro6l*hOi={u2`a_t?CvLAI8QRbvN)3<}s?oIl*{-l3OIv3-AI z^6cNmebGsJs9yYCVf&vi_jgYD}Jg*vx6gH#4$uWq$H(Zpxxd{2hG3sJB3p33?norYF?fy z|F!p?QB7#uySJi(cq#pK-ZO4XlOO#K~VY zSiA2NT9a=yH-wNyTupN-f4{Raw?RrEXaihfpUcY!QzB~%^C6O+Ur#S&TUgXn-5=4! zAw~Zo%NheN+kB0Z_jBVvesqCk$;6dImiz4&86Q==&t!Nt;tM~3WKKN;@Y%OIR_MlE z*Uc6V?Z=$vy>;#S;5=PVA=Wx^9{-q%og@B;e&jm{mB(xt-_TN5Oxi+V&WY)I{TiM5 zmkIVRGG8C|wham;cz?kUV8PH6q?csS=o{S+V`U4Xu+6HM{)Zu3BGv%#vI(pAK?R%N z=fEXyq^G$hF$I4;Nud$lWjf^?5zkP(de7H$e&1fXTzCAye5jNzVG2>@TJON?Zd}u( zYwdjZ?}wL*3M|0{qjtIA;0ug{_nUX7jg8w|hhMcG+7klzyLS{8$_YnePB2`NZnm=T z_Q}_g?(zuNgb9l@QY_!$yy3zSCx7=xqa(GF!YPVbLn!*%-3=m`pU%KVRUxi+NKGlG z!9n5rZb=F#gy;^U(daczzS>_O&56)Lqm!+;ZqPzyRyAhMP!5W zgKxIL`eFfed9-__QD$E_+O0+jmYa7!0KLE33a1Mkix_N6{FaxN?g27H+yw&a+Eg&` z5&AR7lyDXxbgnorXU8QHM`PV&98?vpV}YxQdEMELf8_zC49-q@E;4Ncrs{-vZ-THx zswgO*cY|(py|^;Rfu|%xcCi6r@W`Ol&gYfbyDUi(60b7oEYBPb>Jr zuI|B}5XK3MHOlVU9DU%2U?14#R`^UDE5H4XFnPt6&=F!JDA~AYzb&cceREU`8@K`N zZODqgj(l#4@qcN3{paAFB-@wkC<443)X4XrH<;f@C#3BVG)0&|XQnx*U<$GL>_paL z8xOnAKpq)O9Bnjj+%*SE`-McCqWeaBkLSE4SV>pEa1-`>eY*f=`-O2SneVw8m$R@> zZ(vl@zzQP4sE)q#{dN93+urRZCnQ9vb{EjhEs|(CFLb@5mWRv|Eiu}v)S&h?v*(Aer|rJ{;J?x`!ElvbctX;T{mpvWW5@ zhW>BZk*;FjqdAsIhovy#scW8YGBf#b_ey%#Bqs^NiL)~DGd%5d&wD9g?*|PueK5hC zI&Dp0y_4V<5Nsar<)$N!|8{0!tNc>ZUS|ic^B3qw66T@{stL2JHpEWRM5dwxIRbxOl z#E@egO-yO#mNkJ9oNyROJK15+Oa~HNuoUlM@}61q`mv{ae66UiP9k!X^|G;mPqtSqD%3YT47>UjSweHlHt-GU~ zrvuZxw-mB&5J65am1&RHtzHBH#lUyU>_gD4dbV}>OBaw^Fcp;d(UbTdj!$>Bv!~e# zoGpWWkldRIh@?Q=cBJW%M-CIaS+{{w;H_}IXoZVCu0TzPdIolk8(}@P%y!34EqqS0 ze*oktc2i!qzt4kQ@&y?C1!9hxy}S3` zqnsC+Jr1s&@6+PFe7(%FHjKty#GFUlp--#6L#I4fXq;Q(nOKuh!Y>=HAEl^9+TjdE zVWV}@S2QlD-z%k8aOPNsxTft$!GSzu7tvtOk!~`-lir(?)f6+eg26b+h#Va6)B#H& z4KvHn8=~5GpG(QuST9{5r9*RtG-+DduLgqZ4rlrl7oe=a(@)kthlqP#nq}DXP>a$$ zdgzQkBw47!bUj8<52Y@#Xf5`3!rijabtC)Q5mw!NOWbu2)fo6CSUUl z0gn{TZzIQF!0ArTEF_@g`CaWV769f*i(KOQBN%i$&6mLMBt6$dNZH5GhmuG{3KQzL zTDW_YoWm!@w}#UvWfR5aQmMzZura#b&e=C&xO^q9)NEZHLs@G}iQn7IQ<+gjmJiWC z!yaZ0n^Ms7Y##Du13F^+D9dRZ znX=Co9i6ZzJNoapE5lz7`5%DAxs&8+C+&v-tVXx-{bjm*YE51E-TH9|XUT*lh|~ z-I$K4?d=hwQx>_gH5?nzXi>bK+*SoEetQOrW^9aT(p+12$@Almu`KTv;wN6bDk9<~ zL-)?likA8d>*OZ9S*RfIJuNqrZ@!a!-jA6x-K0~X;ztmJz2%)}@qHgo7XY>B0Flug zd8qzk1_R_!PBO>d^V~+SPY1}$7F0yPSqRpD`Ht<@NVM4u#zOKiVSg6ld#xP>CQHof z8Z%`tqw-u!SSLd`XCq|gd^BtN8?>_TfjY4hsovxZ4j)HP@ev5CYTld-{{S5{tYR@& zYX(jO+*A$lv9Nm&^9UYfIoTi^;oxW})QQE9J4iYMFcRr^w!dG6vrXtfL&5Q5X~BuP zIS9f0h4`_ZWX=wES>ozW%YX1ZRH+dskjfKoYm^?>RrJ@MkifUJtXw6Ah+lm zXu`Y)3epJ0$c_v8R@xNZm07Q2(frEAAjHCl!3&B}C$LPGVQ^j!d*cvBu5+r0m)V}A zAJt+#1Gh)KuLG8QApil&brA)rw0HF_hmCxF%7Fn;eQkZ!4{OB6dZby5LNi$|X_g`S 
[... base85-encoded binary patch data elided ...]

literal 0
HcmV?d00001

diff --git a/docs/source/_static/images/implicit-gradient.png b/docs/source/_static/images/implicit-gradient.png
new file mode 100644
index 0000000000000000000000000000000000000000..faf26486ac888a54dd944755e0305fd8fef40016
GIT binary patch
literal 177159

[... base85-encoded binary patch data elided ...]
z{Xu`8!L_QzQRdB*D08YpOeR0aTgN>tEJ9Mn>({ieUhOrknu)Tijn`0`IN;h#|%(lEH9%K!b#i#$Q|bs_cdy~$Mf^s$SnuT$zet@cl32N98KI1T#rZ} zSQ9u`AEmh#j6hq5JvI4D$=le&qxz>wE!cHb+~0G-IO60dBv2rNfqVxT3*jeyV$CIP z4-G}{1GhK!OC7avw>q;-IEfxM7KK{$e0uz`*!7y6H<_v*Gq;aUW6B$k#PVu&XNpV8 zLQDe=Is+HTvs+&d!&tg@Vkh0P3{ff>&KDD;aTWj6cJ+lP_VMo%6T#CdS`*gOZjIcr zf+rRq0FzeYJ23>*i&!I{pv*3o(o>3}X$@Rdsh*e~m_N1ryQhk#J`qJ-H^*|0YTTlo zI?N=e1g8NT66K#iXI`I>nL{@cB$*N3Z~czP%gITEgpgzQbzN9u!csCbd4yuag<@** zHDyy%fhC=!?J}5yJ$=&_s>y^2BbiSjpv0(T;pOEG|N7?G_n14ApW0QAYCVo(7L$2rx#`pn31IKrkLrAvv!VaY6a7{xcS0?@#g-KweN?t;TW3+PI?fH&j`g``I>XYs-_}!A!mIbLwP^lmE}+~IUD65TG|nfI2xt7gH+P@%Fh(mzFeXm@WvBPEv?9?yW+F#)2@>u zzg{gfAE(vQpBZo?dEBx+S?SF}X&I<(P!n#}sz5ra_^vAOX~x~|^gbLxSkeJg0B5_+ zA%{xU`-VLMR70z{bSI6>5(asy< z$l%z@NK;c)4Tll#**p-AIh>2QaN)wK+s2p4uI@AY{rqsViw6-Y)YxV)$D6k&zf;B3 z-E5$L6nwEj)=zE8KXhDMsL(+HGk22|Q;vVH{qvQAT%9kwWsL*`->_x1Z+qM#UpLmP z6QZo$n3Y~jKRlY1P5`ov9i=*)Z*UCEw8YZZwKZ)@t7gm-Im}o$dk1qh-yqsv;?vGp zj4E$7JA)MKH!#uLSnsg|W5uHG2I-Azo6N@^Nb89D2)bS86ZI_|nqZ0PxlQoNC@>nH zUeE=vjIJQf`_P1#-pO$kaoe-oHz0bSnaX2`2gdQW6MhqQZv5ck5}koNBZn1xNlp4s z^#`o38?vbw;2#y8zbpP=eRFqXichE!jw}Twr39-aV$gZxmjc&Pvl8Z+>KYokof`?N z`CaLn#!RyVa?ZyQ9OHyX61iJnxaqR+8;@o*~Mw2grU{ zN0*3hwgn2`JJMKjE?E>-*D2%Y`*~V#`-5Ux4x7nhbg|KvkZhW?F7o7ba8i)A(HcD{ zjn)q=N1K=9z0QIoNG9+RoF z+@_&?=FAy&b#>+_?gNkkD1O1@WXq@-!Q3r0Esf5YD&gWf+ZVtu-O7dlr})`?*<|c$43%5+SW+8ID1RpZ-KPEqyw~4Kj1*x7sx9@g7PqELUvEUY-4=E5Fo(`O+EbHvFl)~fCc8@giMZ~$^LPlx#gEd{0D zOc%vhclB8e6H2D_wmc`K`lGP4n&9h`EPGvSyyuS|NUSqjJ#J(rucsWCT5yrdR$6;C z!|C@%2xP99dTg)G>0{fGQ~G-ocgg&~;fg}WE(3P5$B!Q;WDvvZo9xXs6cfl5n0@io zg+e}Vk(ikHypWKN1LYiha%$?W%f+}8B%dMS*x5F@&=OeZHJMJ{)rsH?Lk#rkGM}}k zDdP&WV5(+3FX8k_-f??r^-P$=wL(-uj00UG@ubRYhmpQ`jh;*Pb-wO--`R(m;_u$I z`thrTRv~UH&UM;Mb3|o2u;dZWuPWZD3wQ2z*hV=8uo`{X{kGjz`b@pPgkd04>Vd_@ zN!Na<_$;CROZSB?Hn8R_Mw?K-)tMvO&bxOpx?WRaqN7B6E~>CG%l2qlVPE>P-l1i? 
zEnP4%BlgG=b`}omks17EN##T`;$sw`ILoLQ8xeiBy+e$3urBgGZ$3-d z0BxFt&GiYcDtLxttKnR4aFb8z?^{U2A%Iv6mXtFBoeT{O&qOOD0zTBDzUB<6KgF^= zDT8QEnB)_%_td7|p|a#uWRIkV25JUuUS8RS3AFRyOPl-%6bJp5fH&Hc1C5K2Xwn9v z=9tx}sHhweIo=S{J|n0pfAi+8+WvR%!b-XK=8U71IPB}(eSdTWJ)rX@PmFRwm2i4t zGC1UBTt{3gH*)Q(QsN~q(KoPtC59Q?@W9&VdTJsQABy5fOex>4ia2vr+ka9_@OjDH z!0P%t`%Up!aR>FH{MyYkck_40R`lTijBU4i4o`pB%(gAt?+$;@kD?VvalXfq#2=%7 z1HZL%q#1B-cO7Q9Y%a%E7FDbk@FZ~7eoe-+m$UDDX1=aqJnzuD;&rF&YH54yQxZMj zZBDw1LD3PIBN$uHrDbK2f=ETAFN2T|1(Vv36Us-RXt*H& zT3*e)FL78>R<`?v<*bB2xTc?&Hc%HBofIfnm_tG|lZsJ!%#7SrMYOosD?9kqXl#y<3!M7P zoI1X=!M9RLhn@ziy1FMi-4pqoVzu{lsb1s?dm_iiyi;FqM*W5~K4Yx; z$oALwx!tn|lnYtU7L9M(TYX4d{^5Qt-w-S}3Lun(%l^6_rE6lD44>bl{3mY)sjno( zY}YC3#rR_V5y+c0#xvd;z`GANl!Rbu9s+d5JdY`r_q26*m)dPODL$1EWlH-i4@BD% zYxOkQxc~l>*kE~Y`NPYnB84}%DLj^Tc|5h2-#nt#9B7gkyv)wxdORPh!5lT;C}R6l z)cvJ5bi(#PyE5TYPGlp2*s#ao66xFrT>ejfpufLtO$5T;zGvUS2SNk-YpHqi(1#ds z1Goi6Y~G7az`eELhKUwanP|+(bC?8bgEn+-$)m}lJ(q# zhFt|I1tuHv-{@Wi!Adm%Gs7crgg$_TVwKuEQ=<&SRkmu-z{cNrp^7BqjP60mYv1hY zi#OB}d(YdlXE#0;QN}{y&v8W~@4Cm2dBwgAv8oGI67Ec=^qL>$fFkPE1-CNIZUv)+b&N;us8>HjdgGBn$h~_}*C-Nn$f*i#1}|uRFS<^E{9@3XM<9Y=EK;#il4na4X8H+?>6Q1P)o}+#sc5Y|Ll&4 zx^u>(Q-tX|E)X83M^~}Rut(qP?g-^TU|xsiBXT%pXja zw4HJ`TORy8^<0XN2koEFKGC9Zm~6dU@Ba+duCqV(`-)LydZr!57od%mVKr%L746qh zd*wq8hqmHH3KNv?Gn`K}OiXJ3A6ai1m1WnpjY>!;As{UvASER&4bmVeAxI-gcO#7` z9a19Q-65RUNJzJUG<#mp`@H-6_Sj?i$9s4)bggx+HRmyp02V)vLKg^JvS9Eu z3f^R_@o7N)hHoZu(LPf6grs7)3cMu)E@+uR6cd|~(NlTU^(m@EA-A`$qC%0n~ia`5TQ9L@L2F5#>6r<{?cYf$N|iv`_pzD5Ln8u zF?!-(;PmI*JHl>Y_e@nJeLS>vh?`M4vvbV8?fFuYn*N(Z8QYOkMP?q&nd2m-+oAE} zJ(;W(-d)IvJS~Bb3Fhh1qaeBu{tC%)6(; znk^C1tFfp~-K4Zgu&&?Rix=gq+vL!SWhmc$aM_-sPNcVe=YmPj?e-n**;|bA4AZM~ z52|^`a3J<(!F?ClbO>MSDD>XqRzpMRsZ6h`Oca+ao#2W#2b6iNM4J!sDi#jIn8h3j z-$&2^i~^O9`+~oHtsls11pwPoUeGYPttObRgS*#3{z#!Xgeh%w?!_Lz0arscP%$)5 z(E<(D9H!(C<6v}Zq|bO8Dwb^cnU*0dnyGNbPWEg+Z?pRo=ndKn<4_*cEx|XW#y!b> zgLnf|UMp=lKcoy$Mm*j_rjw;wtFdiscrs50Ly5kKeoUG91$FddX{t1lb+BVze#8V$(cON}k8CqUxc;%ef;;AJgJD3lJtg0Ih;mNu6{qamEO}CU|TCCf-D8Yfe-pA^91wMiuEwPLnpS5b7 z^V4p{I6i2*Qh8^j9D)TZqc|QM-&Uc=((uj)(9266hwm(nYBZ0DVk+ynH9@|Ce@A$g zK|bnn$q0jORGA35SSwpX#`Kj=7Mt2~&hJU=(fga_ALjUaa5~-;@c(C7bEuCKNE?<2 zA_r68_gOe%K4JN%@!}enT+eXs1@R!;G~xUvyEmm7Vku@Y8y4Y^HNSp=S4`O-pE|68 zei}Ak^h&eAVIz4Yd-Hfcb4GDNeMBjAszmUK&YU2vL7hXQ$)oCSHWZ>k=|xSZ^ENJX zG@J+d#IzLj$l!&-K*;j?rK&)q=kjshGr}oFU2p;qGQC+xiR3Ap%x>XzOnt!bFa&|V z+|#WWyMpc@HsImZ`xWA2dW=BBGuT^(+;K+NFg4u4>sAfSWg?dlPGn$0`z3HTJ{@dT zju7<9$|uac5fvFda^hES?$w532Q^K^t0H4*h)91cx!Y!QeTAOLTNx@dxXHUG9%tFQ zOU;M>ljK)8Z|S*o)mFh~c)oIyhq^<51fxRd3{CH4m;be)k+bYvI`h!oqAW;saOx7J z^}>Ne+0vgJa807j!5+$^`-U^4uI39vSn|Vr-pCmOd2?fmdhIzYfcFucYzKgxVi5b=e zp9a2&DYXgO_Wj!L~}{)S4k<_y4uV#R?gdeVwxO zJd{bjuA*Yrj?rr2`o6_6Tq)6fk7ePOTB#1dR)fhD?g?LC=uJiBpGR`hB&!DuP7P3y zT!tGMTo}F5fDAY}xfrxn^{$NuLnnH~A$}dxm00?OLSz!8jYek*4M3CAL@_Z1 z!{>zh3R_QIiTg8_=!%BP=fd>|m=Ye`t59_{sB3UVhO6(E)cy!+s{J@jaF(VYMmd#y z=Q5o#%ftomgG_jtwz^~kgWC|IqB6l8wp7#G2WR&}hQM*2OwgrRMj1Nk9I)jXlo&wP zLn#8Kc%s+sJNMHeZ_NBH4F?!Dx^LvfOR7wtY>tN_W>a(&1+pMX+JF%rhPE@Jhyems zK*^=p-BOJ*O0U1+E=zYB16aTfKV2%|=qX95=Wn%z;xW&9+HtF?VHq3K?U`CHY(Jy$yv*$k_Os4P_H?76Q=_~EOyVpuRJEFY^0mFi(UxD@_t+aos*QlI%)knQ;2Jk+07w0v+Mu z*ZXKJmTTA!f}ywAANQ|u27+%1hXKmxU!^E@XsAAv-}t6V^tD?_Y9!t9Qh-k`9=)(oU}#nOr{C_ff2i#F6x+4-6R68P#$dMsW?`QR{&U@AK#2y-T5f)s{}v=7 z4uVFLV~Z(fTQhW&;qQEo@Oz_a(*kFce04F`O|9d~&h9ji^*YLz*K*E2)^9cRJCf4E ztv>_z_&h-5XG?TB#%@*6ryG=407&37W8s}VMEZ#@!^(6aLNC}D0EuMVcx z69Y{AsfH-u89TZesvxAk{~Y>GGxV15E=%8ZIxG#Eej8I{4ug6{TS=cT-eh+*ad&Ck z-)p2^`@aT9f6)5$_Wy`w#XXLmXg)bvQm&rH{PlW*f?UO4t|nmX_8!Oiy8&|6;=!OmhAXEhkUrr;QU3{gBKLgvaEL)fo$Zg 
z)!CXqCOW`lNQXO)Pm&@P~UslYofbTZAlK2<>rOcSM1EQb2)4~z0# z+%q43*?hXGw+lFLeigMlUth8zTk6@hF-M&mlJz57ezwKgzoGf?>0Gmi3>{sedM)<2 zIZ>i{UgCR-&p}_w-!?DGV+9hTe2#Ix?M^MteWbnKE1Wx=$t-P(fhf0~HH$ay5O8wp zU^@KIvIqM{K~=X9g4Flp-}6jS!%$1#hj_CtHv`Aqr8mqD)J=M*HzMEVhsaAMTT95w zXS4Y-BN0cQGB$O1A-Ij}{3~<(`qk&0l?43R+=jB_*oiR%YnFLYerqv3*b?`~}fA!8J45(~Lk@Sw$qTg)8Bmm|E%_*aUDMDn>PGM<<2Nb`T@+t-dt!b z3=3Sr-a)LTNbe=>e!Xr&!!Vr3(lBh*MERU_Bz>f4)w~%cg`$!^xK2EfnqENLFtjLI zJ9H%%tvs7O9?N-*lKC!T{(I%_Q)2_f63d=)>G1zREb&iLy**Vb2z(K161n$m1!JEq znt;J#^6`pu$f+Lpm!TFfk9VJA8?bmEfhPP5aJe&xESKPB%K(jt`CBLjfrsY=&i0lo zn;(=#-EQFOM#$Yai{FHUV{Qz>`$V}LxGNiAXMg<3g8*=bxhJA!!rv`F@pJ4a8lTsI zf;0vq%{PNqzuEmUBAIpZ|7odRabxAR&S!r+XK>{TT{lG3!^x2s~CG2Xv%IpuNkLJpi^J7BZINSt$3^!Qa zVFNm?tJCF3g4Q05K+rLO2WLE^T=XLgj-@yBh{tj%!DjPbh$bguP9~JP=>Q465^$Pd zg$gYJ@8eNweHM<1NkWzV3ShTtRP^2NzP|$gr|`-mKFL>qY@68#u=+7bAd~s^fPqdLQ{@nJn8a`O%bLwcNtJiOizM z56=9hSJ@BxtJ|xW8ks1k-NPtg+Cc7FsWO(J4Y|*euSCChX{N zINh1{)O7vyf|TzL5;ZI(6(u=!p}06p{4fi>b1ImEjeyX6oW@Ys zz{uAsA(~bHkDj*)@yH(bCUE|-tF3Me(8K07WL7b zlYf&}2m8_V%fZ_m%#+?JPK-E1W-_n5PkCrYkZ=1btHd^A zJMuio(+~OfgR|o&M>MgtyP8B!T`|E?o!AYAV>a;A#kIX1BD9<}Xsi6*w>t}ik38TZ zH6|^tpO2%l)C%cC8`GeA2YParWBC1gcuU?8tuUUY`PbdfZ|FG_>kn&xVPbT|jbf|6 z{|l+D1s3BTRBm@kZGi%!yQE2v(So`EoF}z{{u$H>h%OmVuJ0Gfyk53?UvYO=kavTm z-|VeAJ~)L4R#2-@5wYpWvN!F&*DIF$D~Ri|mxlw~Zl3Xyt$M6*fkBYGu5f7jik99$ zhcZ6>jgrs82LKXmQ~!6CKj9Bb&ANE~ryU0s-$*D$j5k;m{Q256hK+b9Gg^xNN%mvT@MaJ290qT{#Xj^V(vZtf?WK%Q!C5W>swqO3oBby`38V=g z@1}gaaBpayZluap%URiD-+T0aVTtVCQ;9ci=U_4t`z*PbPoDNBR4npHvN&hFVcDc( z9M(}`(^X-D@u6!q{yNZaJaKz07v4f};K3XsgDl8&e!T;VGbwB`?xE1A2|R;IODhb# zMbL@S@ZP^oHYM9-8Bu^TXtSiL!q@%u~X6qH{)a`?&w(sO_gf=du?vxUZAGt!HgfHa-@aOx~>B{zDz9E*>9$SK4y& z+dMnN#+@UY?G8l%kVpj)Fe(xNNceSF4fszRq3XwLd{RyeleY*^+bu4ZzMnHD5pGH{ zUk2}(V6T>{VR?#Eq|dD;il12$w5$XM*j=D!EBY~r`$qKUk_+_0^0c_d$N@!d<^pGc zLy7%fws3k>iX^Ru9K)|^dn8Q8CY(jp&ecE}np3UiI(2I$*ACQ8V7hhxKw%Cl+{S_p zGfPyAy8iSZTsGj`wIaR>xgN1xrb@W=?BalLnuA`{;WIv+{HZg;S`ka_wvqn`aMt2h z(Jbo>HEi6CFv>W1K&X&|QxpZjl5&||>Yf+8X+~xS_}~~Lnx^NzK^GNUE2N4r{IDl84t zm%ohRR5qv713eTUWK2@pei0WXvktM-O{YjhZnJFb?ydMAHgdeS^FN2nQEhQk-W2@& z^YKt2&9+r>-{AKc@qQiJhil0WfTnVVmP3fs*UG*~*%i(QodmNU$$rAHDVm_DB`N*T z9ulq&nTzRgyCkd*q>w+rcz}(}O8`FecMtlkoO>WJFVN%Zv3J(f)~Pk(%WtwnFexMB zG6_?j$d9h@Q}Pm}EM5$m)N@~nlHm?}e^v5E0$bnl*v~Y;-lO?;)^Uwn39FuCp9Uis z(*-JsdE*X#9|DNwRDe7C0?c@&;X^-S1UI>aF`&)h)<^l8z9Q`xsur;9%ip%^`*I9W zOsvIehfIqLKOMBYMvrEf6fV}=Q@D$h)~Y+K>hHW;^|iH-qn7 z?-}CS$%AVLO~cRKKW^Q=oIOU0f9k!jE^1TBTt}4^;J>B?n%iXdkPg_(VKO1E!g)i? 
z4>k`K%v{%Y%&=mW3S8G;!bDjd4Gn5tAvJJ+-i-Js;y)G@}vWfCb(1 zhM@WAw6xvmfSd~~U5pSn%8_J~vvTiLP!o{voC>MMlr%)#K2J$OIS8pQ2z3(|hxUdb-w{){Fl!%x*M3xQTvek=Ui-SK!1!_N& zrB<+X8f0#JlMtC-oq&`n17Tswev$+p=mG~?YSPq09}~g(UKKGd2oH&c$@9P8U!!mq z+P_PS$dsUdZI^$r*6Be%%v;gLBlZcSpBDj?>3|p7rMErn;i0;1IsV0LI$vgq;ay?; zN5~5eQm##;#gc>2|3bmfjATnC3w2xA)IDR~@^&Wu7EdP-W1oGJ493S>u>FgE=la|A z)dhkoNT+8T+&7FUEmZ+F+yMLBcwN&_*H`DB$tKH6;72HgMc5SC%$jd1T@i-TxC#jF zb@PW^QF4PZ%P%2wO-gK4M#P?=w<9z(E;>2P!N`$?@s#jvY|+0^H*mQ(gFBLjiToj; zm1(|{V;4|GF+s&8|Bhy)(bZO^jE8BZ>5sF=*-U=`x3huk)_jwRPJ>f$-HuR|JQeUh zHQd3H?s%y@kIHjjJo4l;$()|-7_{U)Ybmm!C9kgTbxS$)tqBaGsRcrRFbdl=Mx(t9 zw_lrny@Rn91nwkfN!*aewtn#{weaMQA?kl1tORU1qD;*EZw#R-LIQ+zJzIk`+2 zGWjdFfSE^JHT@4yu2_xc#e;e2uR$@z*+7JD(H9VVI_n~SYMc_Eo zQ*Hl%Q)>zJ9XRD=0LHg_UU0atI|?B=-t^PBM{@Xz{4~Qg*4pO_H`ff& zy)_`jg{V{NL2wh&#ifY7!CsYW^ zTs5jH(xX3e_qsXek)jEGBkkeSD>VMGPnV>n zp3NVKt?e^QpK7wB^3_5MID_X@RBGm7&;IA_)#6s~jyB)=^&`2Qq|r|4e%E9Kj)&eB zw%GdvjoBXmAn`uHDHC-setT{2B~t~VUg0A$tz!O|33V_3G#x>ffeId1x_C- ztBT-wptI$R8Uf#c*Ukqi`-G7QuYPX;*%m!4cuuDF9ulvl1;|kpfAOX*Z zRHL=F`nXwl2kTVI-t@k+ovjmxs*y&w#WNwM8g%S*0~-immO+#UlY#e{RPK>0Ufy=m zhFO=RVhsol@9BG>U~gBZ=&o#Zz}n`oRzT(*A7TwzjH!*n)Y|tOOZR5uF$#W7~$7AX2wD{(|Z^E{breNJvS>WVcN_;i_(*~cZ( zdC#+=F66i8D?@mTF9UG4uO(ar-TsxXuyadU$j5V#6TWMri7QY^t6XDSFT&9N$A65* zuk*5rwDe~aoDT;$ABya#MHJ-m!_oCd8)>1Y8~v07!MgShJYpKV;mN<8yVfrqL#}n| zJ^fs)UdyAOT#BzLQRMtJRg(4T{)0<1WfsTeq1StD;O(BR=KNT>Md_u!CaFC}9O6LL zpK&==q_?R^C~HtH_zQ4lxlJpOs< z(oWQ2MRw^c^3*tbndpz5qklQ+Ir4^2tJpgTTfhA7ihqaA?9&dX?3`Ed4_Ufxy!I&K zkQx2yw=M>`BFWoo0t%-e`~DFI4Qa4Se4}z z$plno{Z?!uL209B*C~_iheVK2A?K;j_8$I%Ym213a9qZI1_3%4eAxN^&pU4{rFrbIT|X@8Wx(LtK!$g?T2e4NY6b~Jjg-;?W0T=3nodIm%ZxM)#CR))XmSm z&3N_^VO^_amNDqbpE_vq@={~Zo+#h^U8S}2qRP;97rCD>t5`EO|!H>Z9u z7sK*KN5r7pTc%Spev8R9N2n5iXIAr0YzOa+?)DdjxiE8jPfulE$*2`(9>n9yODEA- zslDd3E!Gii{qVc1I*WfDTSHy%+g&!LF~Yp{j|TVIh3(jqMI<@b=~+Hc7yd#$``FmC zT*KE6x_8}aqcjTm{vnJ)07fs?<12hJ2>naV)@tc+P(GzQ^p64)=m&HFk{8jPwFPVXi zPlv84iyjK9YV%@-?VZZs$rbdB z@HFRZ?bi^`@i!~0{Tkl-7j7v`-WgO8Z2@S!PuEe%L%={Li`XOkT}nD=T;DZenZo#E zFfBfuh!S1yJ!rM?8hun7e|i9^4%-wZco{G+)gO*jA$%nG z*CEwg<FK&s>AFqIDm zBQp8zcNA<{k%ZGPzjI-nJy1u#ZnK9-IixH4{JvnxWYdBJvdm3HuoC#(MUK(@LewfG z?_q$UQ_sZss(_Mb>q)1K&4r7zPYLQ=or~>xfmv4-yK}l$xIl3B!p;kQ#dNAwH?jsj zA6p?|t(hBKj8?M;bB&lOBKEQc%m-I;`Cio{?{F^5U zlv0ufDmf~ladqDGv7~S#P1mr#p)XDeC<3H#tmC78`qus(Mxmmx`f?8mFM~aTZQf7( zfA?s}U61@sHx95^UHH4*dnfJKRH>$JeYcrDwtTzsw_tR;We=o{uR>Q&LCW@)_yeUN z3r3v|n?41?lXIFU`-xt~o6t$Zt+9fb?nFyZr5gZw^};JQ9a8S`B@Xulc=7%eU_x}2 z++%z_xWufA6wB2N<&hG!4Mmn6&k_qVbl&-^d*weBNcXvOh*Sh=2|DYrGdjO+K#ZhUO4C^g#IOd)5 zf;{K}92~%)I+`p!cEG;$@6Zg+(yJVnTCBEIB4-m|L1cJF`F1VL=D#h2BzrsF}a(~vQ zjx!ODfKi{M6u;-=j9r&BnoC?isE2qTxrNE)bDWYins7OgGsq*%Sxo|H2Ok+Wt2JWW zf7kVSL9w!E8-UZhQBAtFHYpm*vJV58U9g)$nw^8Ax9kM%lKc!~W*Ni%QsjEd)Tr;$ zGKZggF7INA~k=A6FQ6OadgSInr&Kdxa`Q_DQDEoj zf%cFJVxyh9)eblzvZpc!8G)41j@g6BG3Rp95Ar@H@s+98nnb#DZ;cOhY{;26<8t|h z%c%x9@Ap`^OiTp4S{V9SfvH*@L;r%w=OC4-$VFK=)t+Z1MZoD`RxYR1DWOtz;h@h; z&olNXUP$NHqGxL_sA4Ranaa)!V(=`aI|~;>Jszt58pn1@ES)MKuCU$8sa@hObQ+Ec73H&yHv)(`0!pVf51FXAAP$8?So-)vbr?9Pn{&4A@f1k zA=Xcidp3(%hw1rjb|xrN729@^&Wy9?*pRzU>*wrlYxo&H3QpdBYzLB$LjhsT5^S$9 zQ=?ZUTI1ygE(AT_Nu155$}==*Lj>@Y!bEL};D&w}4rzoy_NIutCVN!EHK1RscVpI6UR6d0_tKoIzQ_(Qbp$xJVL1(q;f1b;1 zN6%4~yZHHTuA90478XUv)2h!D@;`YV%*IZvCck=3{e|}TBi#xG_UoCiek~*yY`RY! 
zpZR@$p6L8q(gEG=u*X~A-O&rFfv5-+li?02d5gC;@?sjuEIl)WvQj!m&sqC;#<>*0K=q|&e z#Dn*IP`mZmqoLDs8%P#)Opc(MXYjE*5j$szWpzdBO=c~7qbK+gu)u(p(Y>^>=4$Q8 zF>8CrS*C{NM<_&C8knZsCOUNG7{_7EyjbOa4|S+DE8kWo)k8l)uHIhNd}K-KyfGrh z`-q$}9iOb^M^``{Ov-q^BpUf$xp7=(~RG^`rl&;-L-4;MM|VOz4kRt&QbCjT#h#}y@lx# z^@A(_v+O((jDQVaBR7sNPp@q2J&_8zAMh@j#cC`+YCBiCIx4)PX(H)dPhq>tV@n>G z7a0`}B@lj7S_vq`;W=4U@xpW6s~NSipN30D4YD@Thw{78mmhVL)JIk#=Z|_$mMrHA zpSmna8i{w6Z&G=j)`K_1Gn9vl9$ygw#(oEzSmBM^!sF`mDK= zP2LdJV_(wUY7}_(M5fKW!7xy7d#Wx>gu=nr{SZdDDqo4URlmBI=YNxYJDu6)%yE}R9Ce?fxu$UzSohTynO14=teWa?-+mzOTBvy=CbnKsRo7hNhahz zbyk}OZ;_EGka!!)lLVZfSmwX|=O~nkNtel4vL2zIt7_O#61R zaRbft1+M62`QnV;CNDpKi}@(MA)nu7Fc~mU8CnJ%41u9Gm=B}6r2o{|lm8qgS6u(6 zMYWz#o2?mKQ(GT)0I=U4_}*lfO4b5im+2-NxmNG;yLZvHzQr!n7BpA*s1u%fu5hJzeI{rhN3EV~TJqoz1_BWod}D4v}p0?+4pH`TcP z>qaZe{**C|^O1iMZ}f;J%uhyS$z?;b%hS0n|-eDc%zgH^@g8a&@5hww2?#mm7l8N@#3 zs32QQaA^8G4y3>PsKBMlf{v5XJWmQ(d0aG`9}N^<70UDrU>zWK$w#E}uJzd!x?)$E z6PYv}b5nZM2Ek^M_;JgGuxLgm-Qxl&z2$TA!=~E`vQ^C{9{q!YL`qc8`iQ-9fBc>A zzRP4fPeYbK5&T+Di2wDx(kNl$(b7Q5uK=6=%zjvNOdX?SzzT&c6nw&FmR|5$t7*(! z{`!|_wHJ9ERk>1EE1hE!h>f`(f<2^6hw(JoceD&8stK)}{D57$wTL8hN-FsL?n}yE zbSm!;^Q1y)M1!QX{pE^wx}8)%=jy#ijjE}7K7}vQmmGfjMGX{7r?He$zVlRRmB=G! z8HpL}2BQgI;Dd1|N3Bad!bAk99SX>93QSAsfPR0RrsEX&FzX+QtM{IH9IcB}c^yT) z@oru$e3|i#Y&mjRT!~x9UU0HZpPf9$W}$_ZPY?+G2!7;JB!dKnbQp=a6^UvfWPIN% z!y_y$2kMc8`PTS|90mphgo!vrB&8KX{K@1LIsLC zZRUwYnuw#)CSUs~-N!VVZ!H&%nzZZjWjA>RGFQf#Cfz>4&A+^}ZoEz#y8Rcqj(?5( zRcs)mSfXAI?A_CG<>=TxxKPvaQ4}n_rG(^U}+2y+S2am?{Y(XT`_vpogdE9=(NaA8KeDXCrs&4-tts1?{O9`;+YA4<{*cr86tq#R3hvu1H75inw=Q#83r1kPW+3|(E_ z=&|&IAgyrW<9H{kwWMqeTg*|;q6K6ZRNQ0wB@wONF6Td=ZK=V=#2Im){Z)!qGvEA?q1AS%NU624 zBI%1)wvyp9o`?^zk*_Rz(?^1KHNi?KtA8NTfd~9;4mgq!_o(3F>jkNkt=P08$OL1}AgHIOz3- zz|Y#1EJ693S9Dr0G{AYxs zDif3Aurpa;^Oq9wx*h#iBAZaO%r5-f>!G$IwWEP|zWgmky-~?(zKOp=@D8daD%^%H z>mT_H|Ieo7!KU@4WQ$7g{5tD)IDczL-jC1*$+LNN{EdsxrHO&tQSN0A8Zw8hm={I< znS)H>ba_@5qvR%;-Oi?{)6%}kkB7(EBR+cF7AKv0@>-G(Yd&EDM`^npF&5LcX(P4z zk1kKD8r$E%_wF>u(i_rA1kaSJ`c}AVTHkncL@Z^&R#-sYFJ~Fsw)cyV$p++)sG|`9 z(k~!idPprnz}Z5;ehKwpLc_q-8DWM^f43~S6OAeBC74oA=hL3h$whyF9FgZB!Xl*U zKY@BRmR1I*KX4aFWL7~&{>d1CI7DyZZM?JvVYfU8vQ$F;@cJNfH;U<`NDL6ZxQAM* z=N*w0g7j6xeg2AYc_z70Fypydfzh1e@uv9#bLiES1tx%RAHf|+hR>As2%O~kR1tMG+$pM z%%@iZ25US9yDKe67lSiAj;gszGJ9T^*ueVlFnYe`HIn9iVg8xMbn^Wb1HETl(RPp> zDr^{vE26Hj%A?%)wUp~r4Tz+{fxF>q$1zYL-u>vIr@lKn|w50IZMe}YVBObdB=eAPIw(h<09HT%NbF^x? z#>gg-s<8DlPKx=~S3mTd-txF-T}aYdYasR~#?>G8$2uPNXA$;i1E*DTEg{_o>xoXi z{a9vJ|MqwAb2LdD*l&1g_2H10ybu68Zg=pnCb{6Hx}z0Td+;~Bt` z;5Hk2_*FI{l9Vezz;$<<+B9nn>J_0C3#2l^uB{gZD3k5ZKyu<;7C{&Yv*J7*sA`&e zT5GBC%?n|rZHz+Xk%)*D`%?*4%+d&-h}Al~L6#H|uZHcJi12tAcaMyPgQ;ty-P zG+UscD!Wu3M2cqqo_4jl1E16b*00*gTvZ{@iO3JF;@MMZujQk{<@s!NxL8CLB0SaV zl@uOp9(1L*JYT3f)-V_UGs)vDv{1(^Jepo2o%^D^bq`liZ8C^?%rlOxdNV6ZiZjDS z<`csU%B4V>4{aDQQ>G~(p3j#GUA)tUTwAi(#ZOqd9emu#ciyR!VHGOapmIfJ_1znZeLaDmhIAGG03zuAh6y*V^8dVU=QU_kQ3Z$?f7x5HBDz={>7{C;4FPC8zq9nYrMBn!R(xnN*71X5GhNq}c~ zoKJk0o217+U=qAFngz3fI0u#7=S-~^KM|@?E`oyB6*LGcNQ^UtV^Lm>z7?aPK|Ot7 zD6t8qCvoSXGke^qwet68czawxkrea4FNV5#DsNQJe!f*{q?2WDez3*&raqy=s;@); zI2k0s-u2yR#e%iAs?dIAx1uvy3s{QE&kNYw?a`CXX>UE=pN-YM8V@+{nW14k{o#-M&kd4cjocHKU%%;*zBS5wSC^KCSo8W-DR3XR=}X`rwe$>dRxjrnGi)nKHjV9d$Bd-K(DCK%t!V5Q<& zdHk)iH9$6b-+i_Puflvt^XdGq$NRm3r*6c~xVtOH^mo2jq`+}ek8EmCH_lFOHXZzfx5|+0 z#3`{92aPiLQlDo^WlmBE#s3nmbZPRaYMnImi@_*=6Jn|a>X%-%Dtrc|=X*_mak{#? 
z67;%*O?45m6;bp4QjMOR;JB3d-3|+H&+RIk&pFZ$t6m@jy@d8)#>SIPN&=9GZ;XX=__2j* zk%`V2CC9&>sXmB0{x(f8c2gpsSR{oiq;C6AMc@g}rtT2*#IzE=;ZvZW00O}uQgt959p^uV z=Hmz^X)4ga3S%)-CSao?bOkUo90n4J@X6tf76^4SAjRm%h1*pDnBBXr7$FTtFHNh= ze}e%Yf{0KlDCV3B5bwP~@VJSOnS5y2K|zi`?q}L|pk;{h@E!zl>u9ELn$)5aHJA%9 z>kEMthGZ_f;>v<$*f^WvNSi;RND13X}cY3|#CbUG`Ofbno}SPY~M`H5m&-St97!g59vw zfX?yg?H!u$TM7p(?4|yu#rGLs`cCbn3zQ4;re5_`HM*rR9#ahpEKM)9ATR7afL5|} z4OZ|7Ax|ia`Lp*_-&op$s!?bqL2Gj#!Wr(7|<-RA7;n54~UQduN8gNHi=Y6+sel)ZPSxfeG$~FW^VA*&7a!QO0+>UeA2`PoeVNDT z^7wU}B7K|cV3)z!Adk%Hypww1ap1GaxV)O?yr8#|wVP!P`%-Vsh9xwrh30IaM>Xr_ zie*|4;ph)9A{q&+*kclmp?e`k@hEHD|16BS=}(x?&b;jQcV3y)XR%k}JMs4N;%_3v zG`?Yxboq!#np@0Ir5+kv=Pk@KKMVuF-cS5_(F`L|6<2y0 ztuAwp@uMK{oHka@{&kP}Sll+#GTZ)U<>$eN#omPF_v(JPFnq;|@4?(m^-`5G?SXkM`N zRK^W)Z0l>%@Snk$>+)_gw;Psz{2M#)KRWPWmn}83?Gb#(YYT20$7jl&;aeI7`bl>> zOS29M+2m40zWmX&`$9Jq_XI`y;NiU-)T2p1oGVk=f9N?`)wT1DJWv%^Z%ppRVlk=Y zCxjCnB!8l^3;Y5b+28FZ|G)k}J&wu`KJ6~JUbkBc@DMd7ZQHr#fy`@>BXKQ#> zKO6JORs111XHNWmn!8xZ{fF>0ZtaZoAnu&yz&%I+LHU z=6u!>*yzYi=rG^li@DFy=(WkUODvyC^OYu4P4em$aZ3;uuJgJ)@}y#`*9%a)U^lSN zlS)L!mn3}pk-@Ppe~OOQ5Z!hP2@>25t^^@Bj-fPGLxs4n^3MvRCuF9~j>q$^%sYB* zS9GNm)N&_;8MdlE3S;YH3L2{2LRMC|HHj20YO@?UE6I52cH6y3<9*|6YT{qRBk1Dj zIk^6kj3;ji-K)Br*TSmuB8H|~%4A_8KGw9w=w5xqMWa1c^TnEww_5LRz`UE(av?>X zkHfBAyWOAJ3h_Sp*NZ(Q!f3N2Y-8MN-3=Kpqq*I%;FGH8$ncL)?w$5ZaL9;4#|v(K z+1>f35PEEPiI}qMt>yy5l1cbD9__!mxlWvA(5}Cfw=Fn8_NfsY-WqJ-)jeI9=uXJV|Ir?B>77>IRoM)K;OQF)n#U&&l@P4#Wcq6!E)8pgA-&|M#ACcg_+ ziT861GOZtk*%ov$VNH?3g5`mMb^%v{lcROlWsI*L2)TVaT2$*dp()7|1#XF zpw1_R#kS~cyPa_y$)P_S{aY9xq7esSq!ZXyzY(+X7_^#9uV0goFmk3kr22-IUyP@N zMa*c6wWV1{VHq8+3U;pofK_J7GFn^w5iH><9CwE6QI+YLquJUWdp&%;Gz0-9S!}L6& z|HB1CH%|EM6CkY$i3g3D(%OgZ7?tV zdyhO_wq{4Jgy4~8kv2)6UH$;D6lL6E?1z5Mr=7sk4_I_} zko~^Ur>6<4{QVQ5R`sU)i$SHqW?=PZ;D)^ZUgMs~x!DYr$Xu}Pe~k+q!UVo`Pnr!W zEltWTBRxxDP_w_{xm8v2Olm@hzs`rV`$us7QG@@Q!&*2#Tm$3;)foS|27+Pt#lvc@ zw@OWR0y|fp^&&4qJCTDvU|#KhfMwWvC`~OhPotIt{T-dxm6^;_*KgFvPKY7cOp%F|IeROWW!?sHsu&UZ<`!8_taen__2Y z5-HR#+8OedMr(7{Dmg9er-IO;gQ@`p)+(a15w6?7E}+bJ*p_d_xqBsFX`&`o4U?dK|<}rZ;lM-Gad}(($B!P)5qxcIqcO-5r zNq{JUMnSA~e+$%U!BlqlXe4em;FAtyRR|UiSN7k7*!%qUzT58#~nV~&XA!-n)S&ts)IUe$kTRSJ+jedvhk0Mk3?AnbTQ3pAz3?*CFOk|3gYf-d-`3rw1i^!Y0|#?EQWE5RYPSD} zy|)gls$2U;1px&_5JVb8Qjjib1Oe&p2I&@%ZUO0#PU-G$5NQ^%=mvoW(%mf1T&T}} zz8&xToj=dH_H`{~Fqv!4G48&9V~l$U?gZVl;t_l_DQG*4jap{WhoV)E+*G2z5%l=; z{V&7V4<4f=BEZYN1ssYodd?+&8y!qRQ|6a&O^J@XF76k@qC{@breVQf!^9~20;mLO zMycaZIL~8Jcmf}NVDXqVfwD(&Gy(@P3W^30dTEp;6+CUwM^X6B{6^)4|I7#BVeI}M z8(1uH59YuV862|`(YTRhF&`sScvIMQgQJ`cVqP3rq<^(}`?i$BiQe&hSx+c1RJ$&C zIye}4fe!}|hR}S<{Ju*HRI^i`1@2RrmV3;3jqMFL}&$weQQZ?WVc4m7B(CJML>* zjS75K|2ds`e3hg{3=b>_(&tKxyjs9vvvH_y`*?^_B`YztYm+(tj3I*(M<02y)++V*#WC z9`2@r*Qw>5F4iC}EJh=MR@mM?_2^nK^V)SHyU+Q16kc9{eh?~#_J1nDwex<8Akd}_f4OxpL}eKd{f6x;U@N~P zVIVkS4sW5Q_IGfExy*Zy(6|+KXnj~j?w3?z`l_X*a1nw0TdD8DZj|DCM?WYz1#F%@ z9$h2^3OOWCrJ(pUJ_O=@e(m9yeU`AdXuuBz|B_E>OlE+dJWLh0pUz$nFPKWW>K#PI@E+<1@W+3O+Rw~>sy~6Zdyk<>rI8(>d5*J z)ca6l2Qb6U5sFSa-_t@~bP`ng);$pxl}@{y`B)2?L?ikU{BsO^cwj5{iry(C>KZ1=F&Syia2ONvp#?^R0Hg zpIkI)6!%H@n@C}$V>AoeOVspQU13x|jF=P5#AQ&(r-(&#sUrJB`=I6V*ZQ(K| zTb%+%Xapi2^GlX;oaeFD!FAjy*jy*Ou|86fCMK#vgV8i24PfCDVuPD+Wo} zfqG|8tO6SgJ3!@)-9wUF>9j`F7en{84u*PmIDwbpz1a{qd7beT}4NT-FiH~0+O$}_}rV4?dxzq1*2|yDFy0ksN z3`hY>c~g%aRxt?FS6XO{e;ZEI*H&R{uQMnC-(*FdPTPK9H##yLvm}DrfJw*?Y;Jt6 z%MYhL3mmQ$D=u`tQpXGquw~0eU&V04Qf<|T1K}?UeL)_2hEFJ$)CLS%DEKOO>AZf~ z1h-_4QQU;=PVE`sRPall70(%%z1>u#v7kvKN|<7B2kM-?1mzRD78jUXrotixI`RC8h6Zkl zzIoFK4DiAL&Va)$A!L5B&=NMFQ)NF;;%>OgLQn!n^x@mg)T50cDmUOC*1KKMCfnsE 
z@g(*ReqddRfCfxjW4t1_+Eh<>by*cYbrX|(7cN9Xm|);Bclyv#=}+}VH4=zp-RRFbs>XiGuUl!OnD?d~0J?2q(4>{kphaNM5Oz%y4;S;{g zd$^>wLi}isKVLcsH}sngcIN$XD_EZ7B?RP)YZ2bE937MQjqrM%loXitg$?#U8VsPv zY}b%lFEouzSsuT92tSNX$FmU>`#>HUkSgR?uaN+pKmY}J^(FGY=p&0Z`BL}J<}joC zu?)W0m3gqzap*iQxKH8c_)QO%pI2|okr&YEgFi{>A z_h<{DRFRQ@gk@74b(vlE1On^#dyr^(6kWc03y6B@bP1^Nb)(|WJ9}qQ!EK%95&;6aX;Zp^*iG@IM^C&_`a*NmFN z{qHhJ?z$ii>w??d_xqHd*c%`(r5y45fQhK;aLuqJ_JphVT<0o9qGcDfNqZvD_6lwz zNKL*6XA2m7xd+nMTiA=G0lD4=J*TH1yzh4|Flaf1vo1}NebV2W;}A8{*P}ZQ#D4Wd z#6o8EZUWqOV&f9bLRnF;u2+ zfK5=0A@E!lsKhAHyIiLVz_o$ZPeeUoP?dm$faZe`BVAi*j)8AKrj@p08NLED7Ly-$ zTc18s_E&xW$j57#fG6$&+>?L?f53K@;L)%GUJa1VC@)$xm2Meew=m;}_tFM5X-Ui% zjdGBd*S()R(Z2wDn~p9|$ZF%<4J*r0kZ5zfF=&7e$VR^99034$%pI~2C=6NhvWEY` zwbbv`Y1PMK)Jx&)Ol?q@`v8g&#>;WetZZw&j5&N3Z(zNT@M*z`Mb{S1i5@+bMI4wV zc%*WAD7ph9KgJeevW<~AydHdX%88+s15Efk~CiTL{J1o53aF!GID?S zB_}9=M$g@IxjKy8TAEgB`iMMxl}r~R%o9hg z-GxJ~E#gt~Ft`81f!o1)m_wD`+9V(Mr|@OBAMJyT0812bmorQ>keB9@iqGZlf|Tz( zfZAs)J!QO#c-hd0oVwUYtul$uz3NO&Y>TCMMV{cIkOrL@CN!&j~~`sFM-#vv_!h$F}21V zbtzEy_BHV!fsseVYMT!4AxR_L+k0@e7+xiHhFQbXjet?wCWaeBhXq`xm>83A?C)(K zo*g0btCV>M)m{hOgSR4oevCA^RU<(j`<-D-*|LaE}8kQ!CKX^6|4-1szV8|-PeLo|Z4ga?v0dm+I?x3$b zRVM)CswvW&@OO*|yY_#6^dkUbFumT`V1_;M^=II>$M}y}5&!wVzrMCj36uetC-b_5 z@fm=SfY;5ggfIMpzy3|9ykM39a0b8r^fn&&?FmxY=3meI=SEvE%sL%ze6_sO*BhAS zf-;Gq|B}alz8mO68!IqI$xK0!yG+cR5tjBnl9eO>4-^8}7X$Rciiop{;LbMr0>6zg zg>4}I$1i;P)OIj>8&5+e3SrGTB*prDqbeyt32gNq)ITx*UHAs;^}l{F*a24iq0ynh zf|~#B^E(%c!1<3%_y-xDTL1(?Ay9t`MAZNK+JCyD>j#Wg*@TqgKMwUD3>27xNfLV% zHSYgKFWOK59uy4=YX9$pyXRd7SQr`$lfRP8f4uWwMcNA+pqpB{U(~EKn^7 z_+9bpH);O&9)11*Pk}GPINl*uIh)WIAo=?Z|At0&5p7`|I2&6h0Y7(JZ{@3hOy{{p z7SV4bCT`c*+H1J_@^bCB~G#+c`c)?!~Nltz<1>2 z2B#F|9q6olGd6{RMC^ko4KAdFz>>uVA+`rl!CTG7;-&K%4eY)IrtBz@?-vMj>|s{s z)@8;5MJJPwO1)8QMtvLtUdlietCr^bj2@?>T@A@zC(z~`CW4~L+)-#$sDmk8i$W)} z&1*NVq(n`jGfrGiG~RpPIm8uy!Y8mnryU^azxfTA1nj~4 ze|Ycx8EeXI>6Jr#g*J1+9Gj9Oq^j3MeD*^$+5Hf@9It%lo0hYu4#^d7C`es$rJtCo z^QDRSf2u^Tvm~6KSkM*apSau#pWNYg{5g&8nU1PrJwh;ks4$D~cTMzSkL38|qdGMR zslokBN8tkW-lf6D2!FJqiqia5=w#2Hl6n9^@N(Vg{ z>Te8Q6_;S)Qr_>uM_bPo&2@W7FIKsjN>CMDBDQNF{8<>-UNQgDz3w>`9otc4H1TbnqHuJAioLl%6jwqqIvNMaZfRWM@{~La)iBNEuQBvc!w%x1q2itx%crv$#|dH<+-8!j*%;MRnv}KA|vS6C04lchw4{R`K)rNORIsQJLT2YK|sCbaUIy-#vsVYpo*(<`43kloyrcX`CDWp0J& zb@gidqG=6mkTP}ARJ_l~xx%-f7vt54R2$(wQi&unrn+u(R%E4W<?3~STi27be0w^<)fgirc$R!z@a8pW`N$jjna15-gA|KaFbhM`zt_-P&tIiMV8u+E;XYd@iRbQaa9FHH2KE zx%66b+GDv{KhSe>gM=$&>jl-ugk@iM%PB-g*V(M)td{IhyH`RPQ(EkO`CJQ=GtjxS zj&9AxT8=glxty01$I_mHo#VXKmFNT#z;oOTZ=D}xA6;<#B-C@Cc3yp97;~xJV)Lv( z{uKfC;DuL@QQe8dLg0O?cU;Zy`~ z85L$6VSlaD+t4b>e0=0wri0&qm4y^d7q7;}!JKJz(85%r`yc0HpPs6z1rJu{)#yJ5;~w{*edr4?Lk_tr@F^9_2~n!wG|6N52Q3WSE*4t zvY_`^VQ(|~fd9Vm1)s#m42U~}ro-|o(Qo5~MmgDW z9Gjo2q|NPg#A~GwFP(2Dw9-U5sFRYo#%Uz;zfg^=9*=7 z=fbw8dN2)G!Iv)738kdcN`h!jJw5MJgDaLdYnyZPMx)VK^Lkgn0ryjp>}l7HlX}ff zR&>_g*c;_XUa@94zvQytWM;!{0W+2|kZDtU608^VzRhf%6`IVlWh5ADFgV3^Z;`!~ z-t1qu&pYlMV3QleF`GwOJzMG)CUZ(IyXey{#I8_AbbFPs+EIMQ9 z>IX&kt*o=62&rNUa*1-!;fJhT&VoY^AJN8+dobgSJF#+WH9IGH%)5(Uir>UT4Ea2# z`Bf67Vt7pT&0gzM5S{AVxSaWAux`(4W#EHq3}XWFg+SlFvuG|PHYq!&_f$5jt1vtV zY4(a!W#KXSwoaEg*C&h4@KV1#0{iH;!F5bFm=2erGA;tu6Zyy#Ba?n zo!PpW$z;FJw1#4su@D8vP{>*+Kr&UH69}PwBIng#RPGeFL?xbE8?q z_>pz5D!q+?!ef=?_cj))S0Pq@}ILxo1N>ZhAc*IGtPH z3%pF;f-_~;l@}kZq?+c+L20$%PRC;kwsMHt4(dc5AWaoq;1T1S2or> zNoe--i16qS1rphzq5es0G5^BK3cdZaX5=lwKz|MouF~tP!FA$*^HG(&gFL0aJ?gaX z9r8@i?2Kt1qmd6#Stxs$T2nwy@p*7?OoxQwkt|R~>YDqq@%;e7$&FRRb5IL70Pw}j z=P6^gOiwTM4Yxb8HIzMX$IlpCC^xHOS`bJ51%_0p=7g9S3f)O+Omg#GsZUgGc%H7C z*A2dL*8pYd)iDbpJO2urz6k@7icO?&{a=wv4l+>I<|eKA$g4K--PQhiAhxiD>YKL5 
z)Kdfx^}BsanuCe792_jCFtBF}3a83{ln`lYxx&3E4>s)7rv8Y4y|r0zmgH&wAu2i9 zF3K*-b*T9)VR3IkGOG7IBq1UB{etr%NBlBnF8?N~qkH zcgqu}-;Yy(yj6G&v@eC0a?F8!@Uhd+bwzfsZlU@}$-G@|#Tx6}k!wfGqG(L24uyueFQd>x=IZ7i>5)gX3Mn9=D@=I}~)4B*n2}j0|zgv*r&z&72CoNs&UOceZ)OJgk}lRJkw!-nnF;GYijEeJ`ao#Sm3O|m zE|z^8Ez9pQEHa8^z3%{0R>r7zi9;+FtDkO0-oMgDNjf1ITSPJD(CKYdVY{&^7+1Xb zGOife+kkqemf6#kQTR!cdDr6rVI_X64gu+wr>Qp5KVHxd*O}1~iO~Ee@+H0rdY1@g4aVcGg?7!dGj%8_vh8Wpd!Jf{ zqjF5**rS7(@yY`$bEMOdk6pd7C15p-Qi)AG zKd-evf=|%6+kbv@p?#%KKkrFYt5M~C2pR(ESUHWX*w>!qW2Iz)f?P*J`w5*cd=nsE zg{dbB*A7WBpxx%i%d7UDla&RQIKK0b`w^U;rECKtyx*WE8CV^=@Cm>nECH!4)17ft zLseLF41Nd&;)Q{337*MxU*AL2>jQ6M+RYkZ8!LL7Y{ceX!|$ny5sI^zccl-pr_D-p929izubobC zeD;ZOGipUDAh*{i5K%FFMrW1j=-+63;Hi@@5q4<@cqD?kW_D`Hgm_0~1k2W-u zmuQ@%z|fy1a<||krUGZz3pjIEzLNKXmf=ri63=#yJ`S?q8kZ0>wWv*-p~s@A@e+8J zeLM12HaO_#uhLf0Zb-p~oCY>qR_(;(DYq-_i#YR89pN&V&hV9%$DCe}tZLGy=2)5b zMJRa|2RKq6NGP1KN*-;LI-j(ExS@7>+^!UXWDAoY3=<1@ki1Q2xUqpF*6xElnZoFa z6G-);HO1?38(sam=o(YERq766_B1hhYfjNB0RD#Tp9fJ`Oh*r3%@miKYvf@~Z;Zs& ztPfO5C?;UBXK|caT+sESjzjoF?Wvyo<)pGY$a6s$ELpdKkIXQ^dEgx()+p~oo8bRN z_Zy;=*`nQ?h*3T!Wh?YU#}TBLqm_}Ms45SS$;8qzp%#?rN`1u4?OWv4kMlEaFP~HotpvwnQNIsWIox~_{ zJFPWpj();72F^8^RxeuRFvaU__H+6*j-qMn=7-s7LZY4x#=unuwtpW|OW#hP{mx)$vDeEy2ez5hGx~{DeLB*-@{6_zXgNf zhQ+CLlK*<)B5R?&8A)7<=XNPai*8pl-~8-iJ4%*x$dh+BSDomf zGH&RKPUZNfGGm1V17JD;{@0mbCSOKev?o^BrAIG`f0>?5W|?q1kZNZb)OK%7Y<;9U zt+i>rwx^$D*X;sd9(<}b$|4F$tBNeDe{e*P9bO`-+?eOy=O;np(XYv0vqT_bdRsHO znd2EwaoxWFT_rHj*%kBjgrjwHJC2qLMKG{!znDf&4K_<(oepa#y{-_u>hMbn-!@Rz ztVo$fvn(Y(Dq$l%3-h!HaIhqqZ&5WVKp>fAswf^G{_y5iSJy**2R^aZq>FhOk(Rr+^i_Qg3Yj@2@)d#V6(QF4R6%b&EFXX2LLtZb1M{Ssa)AC(JY%p~1&`B@E)l3?pWkB9C8$za!I0iu z(CnlHHK6J_)S%jVw_K3c^c8}i#Qh?XA-stJ71pd2n~{?nY1b*YD}#1V<(KU-;IJyJ zN)<#(BR09~VKIWFyojZWfJOGnQ1>B|3Pp7_IZ$R%eW68+pyHK)x*Ri=3ccIuNqFjn z5~GwN_s@Bm)^c@4#P;zI!9&JY+y&PAQL>s?s@$54^jQwr*s75EF!1GvFXuGCxisTM zL()v^*IN#)kJk`Xn1$*_%6xwII~VPnd`r+cpF|=vz0}(0!U;bzr?}PUAR3vq*CgMr zlmrSWzwR&eEZ7}^DoWc_4`pKsdhD$Riw$L)`Bv4jSLeJmv^t#fo*yXLa?EEh%-G8C zDSW&GVMb4_5|iU1Ixl7GcR6<$a)ly%d00~LGY!laIiWP1RX`h(ikbw2E^ha(>j@JmILeDKD3jlQ-XJKg#w0v&G^%?da6H~ zPYUmhQj#%4j-#>jl{J0ko25_=VlAI*$i?xN4{{B;PvOQ*l_0b|j-R=fkBuS=8%r`S zd#S(#yz;&ET{NnT$3wFqM6T5|tTD14Xsp`Of zjT)XDZ_*@3#MVrb$P?;9v=yZ9)vr^?sDr|MWb!`v6gCzwX0pJ0g!dtrtSaOV<)dF& z08Mc8z|&WIwoC{s#~^+ zlNK7UP@uOMTH&e;Kcic39f`@0S`vro>r@B7j6yJ-v|KQM$!|I_)XUQ?kytZiU?IGh zwtBJWTY?b?(b{x;kyxU5r;k(r&_`Bx!cm?pb7J@5@nZ0^b|nhI+MKJEKNFTU5;@_a!I;>L5qYsE6LNZjmA+d0pPBM%_iOHZgaT6c@Oy=O=9Ku8A=bt&a=RziQ>R$^lWwOMiLUps`I} zg*qyMWyRiXENf_~o58iE;ePMNVME&K`NzhDY?0r*2l}eBE z>0^>plNaqubT>y6xl~E5^G70iW>+UVgM>8~LFopz+&}~sUNM(eHm5aJyIAp&v|Zea zi?|t?Kk9VI{(q{|(S~n{ zRZPvp5}GGJcqMJ-jpq=1bqmvM2!8CYpo0Ea8$=)Q3DzVJ3_-lAvGNDSMeK9NaF%s!+ix#>5U_swT9P>P4U1M_ zXlgkv5(+7DU$zwVn6-!zO-Rzqi z+modc1vfM@Q|ToddxA328yb{mUep!5{e1LGsHxx(RGh&S)^~9VAm*4)dK7FZu#}k% z*90*aspu`|Y7)ibkgYwepqJr+kT@NJrdN%Qnr+FglZm8@swd=i@}!wLyEPltS$Hk8 zW+Fbd{3-l{h26lYS$Me_Ry#9qDns%i~NES@;bdaS4fl>(tM3My`>cW>G6>yw`~sU7s=Fm0#=pw@c@<<4%d)) z-IiKPWUcr`6lcRtn_fSNLx6tm3E$x1W}454B39Cw)OF9J<{KrlbdV798~jXnyjfLb z^c-wWih&7QGt<-6D5n(&1WiX<$1$(D2J8J5S+3Om;F$F(NK zYtPtKi0iCIZhRilm}=haES9=bTKG;z2N|c^D(!M6tUHN~!eTF9&e{s zY=F-A!a{{JYGOWy6Z!v*k%*OPd)F7Fcg_Z@v4$giN{=l`@NlH#b#S#tn?Us|k&7?4D{mdlVP#AMD7W zK!}ynP*1fxj<;Is3e~r3I747^?*)_l$Vq-{%BB-Pu@|g?mUSo(wxnz*LQBXjQqnwJ z{SzEZra`F$AKA$6w=xt+D}Kc))>j_t%FEf1M+F%`-Q8|Cb_B2F+Q$vAN-Za}^yK(K zrLAmE@SNk7^6`!(>HtwoyoALsTJ2iP>FeHOMGWg4#@#fK&D|8?N^rOCGY~~u2&0xe%H``9J3Nlq|7v1 zJpa=ERRhb#(cXZxCy8}U)!_QjRCSyu*G`ti7*5~g3D%`RPxa2BA>j-k+T7*U!A8p& zWE`!y_u$U8y_!R;sx>u(fg%#<`s0K*H*su^d@LY01CVdZaV{}LX6w~TTCm7(jfuuC 
z2z6R29?mDXYzE?=fh=_6i%G4jy~LcMzTk_i5o9%Fe$m_NG*3SI58tG)iW$m=t`%^0 zEB$}Y7C(3E1q@N<63t-gI6zV+QOj}AI&O#?>yay&_xMadgIQt+EWQd7Q;@XqFi1m7 zNEldEp(3^%;LbtmRuBY<>g*KL| z)T{l*Law@4lp4Cez|uPhJs#7YfF$}(vK2aLy|#ZBPlZ;Yd_u9U-M-{crl~h<;d~pL%BI0f1(>9RcP>~U z6#+wYO)Rd7^cOCJ@$T(Xer;|NS&1uY^y%UK3HGfiPrtv7;}s(esEo za^rh`)Tw4pbN%ixTAwC%>ajKd9w9_;Hf~0>x~hrsPb~mTI-HCP`s%Oa7O&Hrj5@58 z`*%}hT2CtCM=uQLWdyGS4MiYo8wYnZnw*;t!lotJLOJN|iZn@qyZdFAEapaC~&kujUnC{!0cDiCw z7^rt_-Cb0Otd7;{(R3bmNjGRu$y?eKECfx}`mh`9Uw$|+xh*;2!~@Bk4VwK9n%RbK zKWzK1|CI`b{3G=GzfXng#J2MM#aT09Ir+f(Db?*h-Nb`I6p5OnQ~vX7jiaMm4;EUw zX0KY$wiubDI4aMzT&1^0tI|^~jnh`eWssPvhl5Q=O4f)sn#xQ2@hMN^#K>_n9@nk8 z8h7&VZh;YMT{SFIC-2Yja2MBxaOyQKQt@bUCA{Ol(kb;DnB(M^xV<{zIPxBcemHxl zU>$cWlFR$URIL7noR1>I zGE80@;YZBAEfW=-i9}DSDp#wTpCi?_T3?DBipMFIUGSsbegS7%oQ)>^uwnrvI9}pR zhiV7dw8E|MiT>dFsETPtdUkGSqDpFp-vR4pVw>}a*ik13e*$(;-_9rb zSwt6KkL0(r4UBJ=wZ9j%LBrd`1?Tc(e-6q+bX zqSzdTfS@8q9f|bIvcCWL1Il4==}})385Aq{4KcOfXy5Kd~(7u=5N6pB2KU za~jvv3`!*^aZM5D?XE=@oQrW6I;6SGvfWtiHJ!9aD~|+oMqeBy4z)l_v0dEsTdwvP zv@6JJ)qA&__1YOCAB(6gq?&>rTGh|Uk67HeC%lZ#C&I}}dCnp~$TMoL(8}B)(rQL2 z!!Ran8fU^^TirhsD9Y7O|1vjshOc}DaYfLQE;i95Y_tbeI)uTLE#Hd2p)|pGaQf!@X`pn^ z=9F4vu@U>8((w0DCY;C35VfPZMwaIJ$EX9AyIeUsa;eU>D>dsOY$O`(nW#X-6Lun0 z^%=l#GFM8Y6U4R1cQA{U(fy4BtW6;gC4X#bf<09FN|h+EjF_L)HI8RR8wGcX*x?xi z6_(4j*S|jybcgEH3^}M*S(YVSpSMhePnnOloSyX~b6vlU{Cw$6>A8lLAN|a9HLBp_ z$(+a5iq8;BJ*C^`0G*IvrWv#J41Y}RGju|?aTdsUz77GMQQgnad^!F4N7(c1R^ZFL zVik7R)1T_m?1jdeRgX)ZR#Rl5`u0BKNVBB6eM&p_(;e10ro&cEeLo;gypxQ~Xtboq zw?BUX(A-dOQx|Zg(f6h<<(YL)Fs0aN>1Z?%D;K!1!g7y@oL$p+$hZ!{)djr>{qwpH z}iRh~bATJ61+ceOng43mo*WP{CcC}3W(Cb2~EKVz3$IAZ8lpBdb z=p;GI<#ayOBl}(HAZlVEx;6uABo*msOB_E@gMH?1urrotHs!Td0^@YD=N>`2c@^0g zQI^)5X}9-OOSFF8<_=XpO^a;0;EAh?Go$!+6@HwC&hq<>30jkTm|V(c>zsv=_9g;k zrf9SDWOQ*^q^P@Mduw1V?o%zhi;m^qIb!A64fw*FrGb+heX7jc(>;!BzL75Jsjw;Q zV%4K)TdWH3Lwr-I4|K-W3_he>kvwxK?6setSU{%3MvL~XTxqA!{O@z9Ef7}xL}?e& zZc0yYC;Yf$*YZ{BL0maJx+&uzo2aT+uHB#5TlmK2C)7OQM0sOrk2b1mvGXMS4FR1w zv`JWXOUl^{{@^OjgUQ;1r$3qLk{^Oa8~^1r7@-Mdnrt}FPu0TTkywTrejwj6pC$aV z-AY!1Djz9(rVTMSJJU{+{Kfe_OEuN!O)l9 zLoYubc(JI|rMN#!JM6A}?IKe?KZ6qD`vw(zaNfY7QZC%bliQK6WUtZAcD8Pj969-8 zgRa}VcRMBH)*Ffa>YggGi&)kk#F+OQSZBoR_*Rl`6Dx~`N~G=^+4`DBeLr%N-fU3| zkIS+wbzNtx0rTA|#gdas&4Xh}gQ4o^d$K>`VX{QCS9E2r?JjF5g}F;aa!R1SnP_?W z?A9nwy#LqifR}(g{KJx{s24`pRe5l78q#cS9}lt{XPhpf%< zR=v<}DPe^xh^Sf+KZ6=CCVKK5yy-a$Iy7OqvM}0f=}$D|9>AehluOhdep@^Gkh}Zx zTAZa`4fRYH58TDzvLtmBen9e6Yog;)Dhg?WV8ShP^j*|N`O%qrhxB2WIeDj44~>D; zj6@!_j%20Bby~vyg#FPs*H&|Nu_c#*nt4HW^!9JS94Lc2To-`1vZ6O1m+Npi%Ba!K zHap@}&iyz_TA+1cWDdN@IGH{g;2ss?o6@}8RUWR*-Wj6RuuGbcwX#N3CUJqVcQ+p- zTmZ1I7PB?2_knWFZOMossmxSN)grGc)#&Yqj^OmBs?3T8C^64Pf9Vrsf8P26?0Y)a z?(uB3wz0yuW1lv+NZc*0oIg|~;O&=^8*O^Jp6r-$sfB`}ys>4J*sWDNH`&hLYSIY$ z-_$Vc&CJ(SUtb8bJ@);nc^y%hc5>-reO1)#G(59w1Dd^^>V(5oZRWq>wyR~+!`m_6 zMcSfo{p=2*w}{oQE1u5!q{b7+UMF>=IW`q>?>aQx=t8aM!2}uDQbv?s>aVdMy>CY6Mig`Gt?EQ+wlVe`+IrLibf%DJIiZs^NjJ z@{JAvjigfgvp!^fbDrzKl|c5ONb|#?Qp>x+GN{}8O&a>`na0YC{#e=o$uuH;wr|=G z&p!n=N_R__L(nq$D($aV68Pi4Si4SNQ)MVoj@pY_&x3SAJpo`hA$Kt~U1ESodDgHE zUn0Svc83_V?Wbmx3klmMBDWZ?R<8t7g6Q=m3FSix$_gqgPop5tv)xJF_GNdWOPrY8 zx$0d_wz>K0aVs`XhxGA}sEN^!Albd{3C?pvX6MmKWuekNeVXyt!ooK>^-bC3A*v$jd?Jl9X{1`JmWjFvfsG1>K z^QoG7mgrgX{f1uJwNMjkyH1W#_oL0BSpj63JU+vZUIE#nRM%6?wmB=m84yNqdP4g( zMBBBMXi-|1gm?Y4dS5qqIovrINqR9mcbevTh^=YCd{|j9sz|6S|G`>69ba1gPF_pk zXs$E8Hne9JGQ+foy+-$Dh>Xe_6U@xLzAFX^GtPOIjYb^7crIdI!HQqu#4bL1PPj#N zz(-pZh6--sGQCnNI^Ek+R=v5h9*{phGk!o@HaBrur(U*`Y#W!&6+3AM#?F|4!At}O z78K6V#s-+IXfT$V^~p+k7S}pbGvz6snI&+wRG&_E0kB<>#qTujL>xfunrj}d`TRAr z0e3-iaokS=DkAYRO#-{8*;){aW_+_bWF$iU#YMw+-v> 
z3x)R=j2E5YYj%lnSS6+cJ4_-dhm9*e_E`rYDV0*kRjoQw%{RdcA}zIit*a#2>&&RJ z$6L|l{W(h|ADT_sD7CNlX@_osGRVPdqpA{gtl>Wm-IFLx_clHiQTAI`Lw;)4TCzD` z)~1^FF|=OwCQc;{nH8&smc)Og>&#`EEmy=-2OmIQ51Q?yh0c4BfUXzjc=UoCXN5R) z+6OosPl2Nb8>OY(TY*&gk|>(9W|HrE=ABS^(s+hO_NY%Pr@;6@m~n3W%ZaoM)(Lji za-p85R>hAF-K3HFC+4~&%C<+GczJFMUkWMTu=Qomc}xs_oG(mlh@&-_tB)PiS=sDn z`i0T`OCAiMc8R>+b;#4K>ck-8d)P3rTGspCP%nzt>pwsS48INnCLtB;-aYyq0RxwY zXC5+=&d#VOUjnlgrpWurk`f`uzx4a=rnQI9>u5v+?EZGCIQ1S@MR5!2JwWU81iaSY zBMKz{nX>{ey%PqwO`}Qvw@W6L2&kRzx5!C6i7(W=)iw_M->Wu ztz^6f)Z;cOaP#td`u^`_DBiJe0YXcdNo~S=>jS|;P=XD@T~dP|4p6Jh0F?5#vHra& zdj`;A+#O3!07h=QMBwgc^G(1Z2Y+Me-S}BtG1Y-uoD~+=j-TtseeA+>Y^1 z5D+9x@hoo``MsMSR=sN2m|AnE2O@joAUOP)Y zTg;)^o9tn78+68HGc|AYBPmT$GS1K^&Wg86|6`RQ+!ISR-ALB1I!W<@h%j5AO%AjL zUncw8;z1(chubE6k|#x5sc3s)8e*Fetw>4)Yl{KxPx7zPR+tCKdWlz&h=5Vp2KRzp+tT>I)G5b@$7BK`z)gqkwNB zXWIZzi5HFH?J~p^_hSv|hY!wT>N1p<_p^t!$$>Ve_%d*vF`V2rq@O$*#S}LV@tK%G zNWYW3SeN#7458mI>5PUsW7Z@^V<&GhzkZ42)itDcC2v}iXTR>w%0l^h_f37+Q@#OD z(IXZ+#f=lA#s)LeqrQsj0@`BOU?y1mJM6mOC?kCbPt-OO{TEr9EIrL?tU@dYb;u zZnZKvc;t?8o~L_7&#d zV4fU^v?&1P*4FU<@H^7}Cqw*?8n@MYKQ}74m`ehvcy5TR!zZnB4u8SpJAL3t^ z{{2JM@c<`KT-2UV{||rsZy%bc0x0EXwIMam|8k>=A>fj#E0A&<{^lJ1UGu9pU`Y!m zW79SN-CzF@2j;+I0M|0!ho8LpH&^?p{YgVQBD)y+@PBcD&))%VKSG2y znETCmo?Iq=GekO~8aR&n#dkCN%Q=kylwsY$1Zl7Y(YSF{nZ{IEtk7(#OwVv{BwxsN z&ux!b3&c$HjMZKF=jM)Mg)L&OlH1I2Y1M(x4+Z{P_uF1E5~p=a$;55xZmTEyo%+kq z|0mD?bsXlT#S=AWl)zRfmpLfPT0S&XSu+MJY=S=dx(#M4m?pdzGY=p+3aDdnFw!d{+@frZ+ANX@sseF>x_DQ7xRWS<~?v{d__t4`UZ^kI>Yn&D`kZ5V~EX z1Uy!90a1P{@&N-^Rv)1<2m5Fk{y%G>)j?6|IcIne~;;djRtT;0?sebDyQwTBVJao&t->;4%-_Z?G6ihG}Z5_ zCLVrZ{{ednuCyQkTPPhU(;5D6?{y1!ul_m)Rs^k(y>7v+{(kP)1$)+>%1LCgXb{-q z-aj5Zx!vXU#g7VKPk^J94rsE#sxwR{gCb`Of7=Q{jnDed!tqaa!WyjF3j~hql^FXb&lkK6;cW8Ir@FD zaeoY9Uzm3Tl(DCt4h0T-+obbUY2`#2W{@iGbsTx`Z&2g^-uQp7MHTNlZ=dP!$YT_Bet&X1U_zHE(#M1u zp*;WMc3E`QW7dDP28`FSeW(6ew(9wAV;LX;u@*1;ZelrC3Y*~~PwShQ3 z7J`5JqCof9bxszO$p>A^uqeKkQ6&Q=3$bUmZ;4csFXL!kMid1cu@m!WEzon{_TtH3 z4?q5+v&i5Dj6+el4afaO#HhLeuxRgqY+<5D;ky|Bn-M~N?|0RKb3>W35vTvlg%xOA z=W7LKS!{Uw5Iw)8SwWjDT8X!agnMi^A!|yc|C4ZBzubQ@qn&!1g0DXvM`8YXnV*&; z(14On?O1!eYEWtw@L?R^VL$)rDFA}V3YY*jC21G5>rzKjgz$8ibkOG)O!EET3%Y-F zG6l{mDk}b4C;lVf`;T8UTCPtc=6HR!z6#h?S7ZR2J1=u8m2k@!<235!2t|}SF`h>o z&?eoDHF|~j&lh!cja)Mz!c-?n=mfj?xb~=+ytU5~2eVqa}Azzx?N5T_*R$(BI zk_s;?{_@TbYiG6cTRjRg{{mzG2My5_xmFDs9ek(>$<#Yb0jpZI%?w>6)tj#Blce5D zobLMK8n5U`$t-w(s3LE&9^5A<;Qh-EJ#J?vBv0(3@lP7?!Uwp;o`Y_jgUUz^MTuHu zYrDkBxM2EBlY9RgsTS*kB*5?BEY+xFwu%b7+%5j`S^U7WqFsA<^nhn!R}w7xC>lux zq;Y7Pbc1w8vp(EUGR#S`nA^W~is~l3IM0b>_h0QR|HDn+rhMT-wK`njr<14p%&fs< zA1%7Exulv;N{l`4=fPkanUY#MO{uw#wZspFejd+1_9)@>8fvgt#?Jnm;r-v5NzH4x z8RPFJ*$BirFswJ`^fL>JoY(H_WVq?6Xv^?_l&ZC;a+z0+t)vi78YYfz z8r^O>7|4T6tV&Kq^CTVC}d zhP`*YJ-?~{esdVpwF-^dj9_R5?VT~;>g<=t_T=!z@r<+0Ms9+z!b8`OX9ww54y#GF zX_{i5|D_m8*VToWoMD2%qIg-)xyidzjxi5+N4$*!&iYUC4$N&so$6%M_4+28DeUtlDkZ z;&lhstS9HE`Sysj%Zzt@f6`BFG8TzG(9E3rSM{Rd8o>JuW3yv(89{a?sWVVJWJO!K z&nU3BIC?(Eg=*TYZMVjWdd$pcS{_wCffq9prma!sKkFk10DNJZbX6FmZJQTQ;Q9%TjPnK{HumH8%nSwN-fZHu(Q$(%O0A^R`~NxdV4)qrZFa$*V*)VemY$BY&{WtAFY72Rb5_vs9^t z_c!~76`0JHQjITH_?q^Drkomgz6u81ClyC{>>K-1Z6Xhk-orFcjyfdfp^HPCVeJDM)yg#5$w`Nc-iwK`SuVI<4GlwJ+&a zO>J(-N{~4oU&H!iXR287jvD?O%?~2!MFyHKFukAg4ci`C2b=65ev*ps0kj<1hd?=R z9A8kz1vpvnVvDMRuGTA@b5+0ajMaB4V1mNmQfMssLAiC*z3*Ar*$nE^=9>hjOMkHF z;d}OU{}G4!mH5?jbgPJQj`tS47PeE1YMEs%9PF;uIX~XPd2t!3CJ90Gx!}2DBJjna z+IipJ;>Mb`@*b{YZ+FrdHZ=?xWPck{)@*}--;X!*G_ruGfUou=0}~}rW~)Z5R}TiI zPw>ACN}ZP^=+n8t=w-b1?lBPkt$06x#zG|i+zZO(>tJEG3CM5y;}mws@N>IX<1PGg 
z@FZaz#C;Zd0;8Efdm6i;6DU3Ffo`fhYrQ&e9oqMux{gs$gL;gfWfp3!Z3Z^<6$F?9;!EtLDucNl_L)D{HmgM4Wm=fM}qhvirFj>~I}e?sZ=%(*u#Qgq}{ zwUvumX9P4kKA!DF?5D)}z==9MLFw}TIxBPodgt8l%jksjx1Ih8V}jMM!g^Ub8*!V( zWfJL&>Lxp5IvPi7KYPr(l>T53MNU4?Z`YX;dtC6utk?Jv`wAv>y=+PKa8TUQ|GEtfq>ej4HvLp2(+?~RFBREV`8sGS3dI=LOHb;nJ(huF-99PlS z0BM&|uTlvC9IQCP*yn5|;l!UMR(WtA`(zJwbva=dqzoFircjL0aUYcQ!coOiAX#r< zHOz#GL+ms0m)*}g=$-E()s1Bvw<}fFABexf^HqlfTm4r1MNo(8DPHqFtAOZ`u=m($ z&~3|x-si4)=r!v)f8zFPY9gh7tWc-l7y@5OUeO8V7w`er1&3iS0j-f>9!G#jo0j-9 z%e>lwy|dD>yme2~^(s{tgYe(@wklFuwQK3Cw_IW&&aT@x$s{y^POm+sv^;C2>ypSL z3n0Y7tk5U>xgR(m^`A(?=$ZV6O;3j0SDQ6#I>tFS+RU)W{I(`2gK+Mr79%QdDhUbc zAW^9}6ieZ6kRn8>IDGkbwRV(?JgCEpZzco?JZRO#g!Su|zKKiRnC7XSeG4mI*aX}N zN2M(zmFeev{lKG*(N0Bi-Up06%lE1PLBBCbjVd4kqL&;si?V2q zl)`rHbT|oBWO}Gm8pha*%(}=rOmmaA?03+oJwdbhD4jmeM}5TLkC_MGBEe9adCk4c zgkH%Bgm+1N*N?Nkhdn_L{&FFT;;FuhE#M7f9?HSbY8|C`tY}FGB40_C$GdMNsI`tP zPVfO7mMk$aBA-a$J zqw^j64>yE5-PTgQ*jhA+f%Wvx&A&F3Zr`|#PzCM+mSGmWIT+J8tQTo9Oe6tkh;1Nw3N)HOF?vHZwe*ksihm=%=vUh<8 zf3MfR4;*=Q{mHza3M-GFDQa~j%5Npkpvi-)lh!sx>loIg zu^1!a+#YhVQYtej1Xh$fT?o#sr%SL0{QL3zkx)$zvcOdV$is|9fLJqP_EKi1s~JAk zL>M}O{?#i)CIwNiTa`LE;A;)#;Hh11UtN|I7BVRt?e7!qe7Kqs?j%wxb?L^@FFuj_ zn%ctq#=&;7PT!w|dF%Ph_3)uH;i{{2A56Qn36r79$8RFX>O+HZ7pg~F6J>di?WkNd z#knGC^!7ST94bP50K1EmtsG6YdG{soO^@PGimp42X*YKUvhi6Ez5!M; zeqlN!P14Vo4c(JesWm;mNXDlmiL?QHZcV4wz-GGNuBfwogXJ_u4S(PRA%h&>SHalU zi}WrcoOOcPYLdJ*9ayxh)*3>5m_9zfI98iR^b4o160R6i1KeYfv8=Io;qP=Gf+XbM z5b^fbhKbhTL7cvEx802>Vt%mqxRGuBTo!!x9{!+(OB=!bVmv@4d_VsiSb4tIs-V1( zQUAsM=h1|PGvIxt(QeJ7pB;hFp}4;^Z);D^k$N0M84- z4Ja91y`+Ztjv(sZ$}Y}8#DLMr*IMNX4Ub17WmzN26re@Z#%Fu3L)51oEj3=R@ESt* z&X_5KRCq=PntXQ1y{1(!J@Pi(4_Q}}neI?(iq%iMsU%gbkqoG%qWE4UburVR-T+y4 zqmfjzoDxHQd`3dY!lp&xI=c~)u{obj>v+~(t=Z5GQ)a0NXN@4rxde(xgUOJgvbGpsSPqs~`8GPiK;E3JUU4okI#O3H1`t1C;o~U>LMUYT1NY{5zu&qeW~G}U{~{{ z={YO$-7L(PkWNwtyk0NN3k98U4_PQxYm7?=ClSSV2ZTuAqT@jHQr*47-B&5pBq}bP z((icfGHa+ED6gY6d;k}_Gh+AAThm3zY9gXPYEBsZu`#dl_d2`zvvV`@fP15IKR(IU z0tD4HyTSft=yjVxH;6&iEhnhx9tU z4K3dwAz6JaCFSjRR)=KCj{H-=eT|QH!T_&2_?1C?TtJ+;_4UnLclV}Micdiy2wYCQ zRAFUKb2=!H$8j0+n)rk76PICmnxyP+Vm*BN$-<6Jwi7^txWrW4`6<91n4}%hYNr&S zG51T`-NO5iL4#aQ^;nhMk!+aI4PjunF}5@Mc4@3(>wYg!ZHke5zmVjO_;eZ2r=dV& znI}5GZR9iRFV#g@m#?#|Cy^2(prDqsL)8R=2=1T2N-{L~Su>DszLL5MvRM|Y=cV*A zMocG$`(_=D)`lbg1Yg25L|5*+V>j{@VnUC-rmSfOuj0fD(6Z3LP6F_T_)cpM-MQ-- zek6E)1&Fr}NjFT4Q4JBadrI?(2o;nHo+`Aq@ey+iHv##7N>?4*0jsxW-M^yJ{Pd{7 zszhhC^Vvi3RsMsY8dt<~t66O`Nw^vg>X#CjMH8D&fq=5e#)T5xtwd?E?Ilm#PG-Va z;VXZKtk$wF?3`F~*ucP`?W=X0XX3zkgTtT)&_APct^OZ%DXqS}C_3Qn`cve9*IUvL zEWDLq{w|*Hco=3r@D?~uf`k1|Cc>B`zuo#yhsK?n%V#UezYy)Mmq&b6bDlG26G~Cc z`Do*P;R#cBTD8zk0u%vM?xEjyMOx?^&AQA(*!x(G&lyTLr&h=1iZ=5TTxbsYc{5*Y zMlBA+&9G6KZkQW?7l!D!MJD$#DXPs8z}2O;%YXJE4J>xx%lbwvs-qu`_GD#1(??n{ zdQeC`dcq=b2TI;$tGZ|Tg6UyVxc+p#?f2sR8D&H7>)g-S7OOc-_|7zHPg!`N7h0}2 zs!_BP!(}OS0x6NA?ToS{?@E48Ed-F@)*RK@i?NE9MlD}Whpf)OiSMp^hS$JnkO8V+-aLCKM(0@fm8Lj8HQ=O2%A9w6mt=pm$xpgj4l=*mz&-kk$ z@8Ry;jABerh+7_>P`=B_oR?oftxy@cndufru_|szxgA7vC^-i=5vfhy(nnpMvJ6tW zF7{=Qf3X^IK%Tddo@wYw;D&PM%0QMvy<8Y^mY5$2rqLPj@G3#{2 zV9{!Wr6)0iySVNAVSy<6xWZ{-M#^)S*=8s1j&R0Y_j7nL!CG>sYV(F}Le6k4B^&hh ztrU-HFN@~CfYiHmlV34rW8B1<#O&(w9PMWl{*sV#w~Mok?T^{eudfanDvBAO!+;2T z#9eiFK!LzOjJGZ9;iL-3^7Z5|8hN-_bc`wyw`tUn5=@9!F|qt~G2E|FT~Xv@*4+<} z6uiMv;H;8(1i%?S=h(H6k9Oaid!~Z_@=`ScL(N=&Q>B%&c$>O{-FPay9uNPtK8d+g z$|C3PSz{JzEW=JBLX!hu_$b1!k{S8Lk&5Mk*m9JtskZx%8-(6?yj~gc*l{=1dUysX zQh%&4eao42E4$S1WOqI+36T=?2gp-Iu%BEh=iQ=;Y=7CNhoqBLYA^^wX#29K1GLrB z$DqZ{7CeskOZZ%Gk3IRY{`fXgDS$DJWM$d=Z+WSLn1p=D1(|hX>+X>%DpDQp7%I6S zjW7OG$NRGFysf@{9r-#O&KN}KdyQa_CSq8P=IYxkAKvf#vlGyOuA!D=ZiydkY45qs 
zxtsRdTL>u5AYuRnwnWJ;0a%uFUusU4jNuda+($R!=97bDQ2oAM_S`qV?luFNv2VPP zsi+5fAxqgyq9M)`_PpUyL}kXKZiWf{m-REwt#c-(Md(%TApF<6CdOxRlquG(t*;ri zP|EfyXZUKMr)}&$QTFd#!86^_5yZw0#g>1UU}S)>BuJ2sdA{D=R^Z*HE?OOcCg4EC(le?^2o!7W0Pn3xc^hrB6DP?}hH@9RAJc3H6i%$1BNj1N- z>mrO7=d>`8BojsLU}sL89=sLfo*Y0q_jMsQERwE~v^nhcCEEZb6u3pmyBfp|)Wx(p3>rEdYQIMJioULSTCz!e^0nT8; z+xX4Mt?0s^@S15ZmLJ4snl|jf5JWujB!SqD2x5MQ9DaL()F8HMU0xz$szp_nuCb^- zR<3o~7t`vzx+W-NbTN%|oV@ya_P4FyuP%<>aLVLx1$tc(e`J`4da8Lg*u zwjoBt@)dC3opYtF@alAq>>Q{z7*cOh?U-{;@knzJU+`Y`n!(dp3SkvR_yjXK5j-1#J!1Z19QXs?R*Qab=(qm5FA{G#z~+R#$a6Ue^BV49>E zX0`@99*-nHv-V{EOs;Xj02BjU6TD4$q~gd+bfc7fpy##Ds21Jmu}-1 z-`Q~Te&XHz#qM**o0c7TwP%`@!7qrf(bVtsHE9F}jVYo%5~nZU{0<&^SnD3>j$ush z&>pdp>73IL0qoiN#}*2B{YwRbW9L=~N{(-&C@0rbyx;$1X)sggB-Qo(lnc^Ywv!IR zuF|Gg?x+`_M*XV)Ft^Y^bEd;AzuO1g#bGC(sC#0@!CDtR#8C&U({DrOB|F(O`fWcO z#LrS??jQhbbkoEVYw@r!Bt-Gq{WJ`cO4@P_sK~KqT`R*d=tWG06f7v8jB*K{wMUId zA*L>|6Rn~8$x6YhOnjv$OuA?Po&|81E5DCJd8Uux>@ZsY*_`P=d6TV{x?uf|Y%hjZsQhGqgbdqVIsp{zNM z-J6Ui!TK+?$B=g|eKN%*zD(oiX_ttX%_&KRO^NlGijWu0_dcDcj!%BZQkB-aiCFt& zy*xEA_sO-F`K~(A>!zsLTXcQ3=w`OAZiQ7G~;&$>2q z@kw9Do>J-WOrur<-E;9>Z~b$rJk+uTG+HEdp=kVs;@Q}NO#FvsKLfgm#m6L)XYVG) zjkdn)`w5P@ISTR}GY{DlP0A?P$>DjvH56N!Pe<0-pRT4=Z7rg!zjq27{&{a99H({8 zH0!P1jgb^z?9%i#6Yx$xQpqy_#>88F$b!5|uhf`=V61Y4U3M~S3Rx!;Or`l&z^Q8E z(PqIy6*N5>KE{o$0crYwQHeQQMB&`qK2e#5jBGU2y;dU#9hoA6XIW9QsqT#$!jo^E zN7J9%3i)4naMh-g^Yrviqeye70Sly{@`lJMPlvC zjg>Bij&mD0`hC_?*U~4TB{rs7FvL;+W{c-zFP%d0N_LKZBi^CZHnmoi}N-;DA%D3L4719QKPf6C8R?0GdD2|1{>Z*G4E( zhhmcy*9!m@++MMUvWD&!hL4Vq+8>J;y4RBEKceWFxubQ_!10#7iB5NrgKx)#XXaM% z*+bAxyyjbt zSn=2dOhk*xhM5f7tIdy#(Z9B`{l~2>_DW`~xm`5_y$4@oQS-X*${V3(`t2Oc0*7IS zRIgTK5d>HgOt*dKN;6AY&ru!Di^v;Sk&ApFl=?!D{H$z4Y+x$5c z8i)8i%m>{;+jnxiEfrc1hEZ7ntXiHqlHSrAJTdhf|FTh;afLfBANls6H`D70cqSUotALJ|lAH&(BUB!p z{*2Fm!R9z`z(AYkp)Km?X;xYCVk7!z4Vh%gyX!KdF$QMjm5zLBo4iOAy}!n!j{(H7 zfDKx%X1_t=t3|Y$OY^j`J$sWIUYMsD&||_U!z&oF&{HbN$)!_1kh=(5{&H*U3KW~b zc3FAX$!#dEyVT*Bt$6#(>aE7b+BPgXq#U$Lr<=e2OBeAF#v!&_w?xMnaQbJ6Q%%nJ zz*jy-(^F4}Bh|C0(3$!Kxg2Mi;V9Y`XZCD8pNGA!GNAIpqi3|VoeU{b91yMR5{JYU zfwMJ#PknfidhhnPjRp4%(k6>Lr+fVro_n z62sHO(e^c)G~C0fWSbXQq*H$#S4J5LJT26 zV%XBRLbIl>UuyIPexPkB)awHzA>-U-gLUV2jc2RL=ak=lj+qR)hO-sU@dSx1;nYFm zD@02{F@EFt3GC}fl#`0)eL*LWNI&?p0FpcJy9M8>NPMTqCV4e`pdLVmS z&6a#R*#A0BA2QaOzDs|B^W4bVu~RE-)Twt=;R$kah<1V#B+T|0@>)Vo2z%{#@M2Zd zkbZeWM6?b0yJhHB```{pIjQm8`=&!lNI!WPYX7xa5tONT1uEiwrMwS};;e2-sH+se zQH&gS_6FeW**~|4a%V-4K0I%W6SwHEzKr+;6aq;S?;tdiI_`SD%Lx<*S?w>WF^F!( zclL?aej}n<+WP?Q+r!=Z_>I7ew9yK2)@j*#e&pI|Q5(=G7QZ(mWI67Yw>AIMOgr#= zyt*@)m8N^ZyfWRjU9s_l>_zNx(n9CxtL-nA2MfVup}a9H$177h{hmr= zUOAiFeWa5_^--|aR8yj)dH0F#z(`fTfOqz z%AQqgX?~&6T@xm@5b7TUNlae^gq8>L=U%B1^jJlIv%Y6#yirtK%FoYt&z4ks^OPS) zdDm#H7Ixl)i0OM?PrjDjo9e!LjNm&U@r;WWr<$IQTej(nYOnk5ZgJM*#tevdj>cx1 z(Hvj6up6)kJfo1PYwd%oPfwT`?6q{eB#H!F`u$8uoD*&FxXKQCetdLN{F&t$)jCEE zTcYWEzHd>T-eHgX)BP*FFIR61^vqFmCyQ`!c~FLs;l8~}5WKQ%LJhn~5(O z`)6Nms(EvBt7x*k+h{{9zqBiEn8fxz%|3Xt`IXr7$)hX-8r$4!RKO7I|mhc1KPkP-nL}EB<`l3%&P~tc*3hs5Fku8pqg> zfmUx0{1pONVe3l$RR**2a0Ry`rP6b&ee*v($%kI18xS9+;$!GUoWO=KpUwxPri21g zx}W@f@QfCF8nmzLiN>HylJUfO{QR?En{i0}_{t!_H#rUdtG%&<6<{+NYvkd1ZA^d= z#=krxLt7aTYDQ`WabltUf!s9<{??j zruD=zQlQv*$+a9K9j-~D6!OM`Td}DwWT^?`eL#78%a%RHmtSMC5C7FnJ68E{wBeKB zdoyt`1bVYx=TcZr^4&0r$Siq}fcjF8wG;I1DcD-9TfMJdCliiSbRODtPbv&I$xR_Z z4UvGh<%CLu?yCG&0h?~<^Mh!cc-5DbH?~>iIm|Sdl-H;ZNgwp7Os}~*k_!psR@^6% zA-TC_$?Q8lldu6>z-6tLU^b=OsMd^SEV$(QxlI0eYK?FwQCSb%xIISuVcj{nqfGUT zqKC5lrSFvy`)D)xCxDI9W#uhtNbLWbbU2hCl`=STqM)&QE1jtBn(xYx&@yg$&0}sr z0X6#J4Cf)YK$!@z!F&cN$hbFd)i+7U!+J>Aeu^d0Kp&mdR{wlX2Q5(jF+;eyI2@53 
z#N2!GMX+_rs3|fRiXs^oSCZh1C@z^-+0OE4johv1WUSB|5l@ zPV0?tW{dc%2!=4i)R0tPg(Tc&ANIAIAbrB-TIaZPXa&H?N}xdg0e;R|3Kj zNC7Rj--|DY#^7wSo6O71b@p6=-dK27yTRKL+bl>cklFB!5iWjY;#yBMA&a>v@X^3U<+viB=NTbS$L>r3k_;B-(ippl+N8c~ z^o)2_=^@^c@}pfZF)^pD>TNZn(u{PO;+>=h4vfrtn$dI*jm7NSd)EGAS_&Z}@fb6z z)CGeU-?SB~l9fo!dPIRXu?km+9n`9FoTswT!*P7=8h6rU5KM^n`h-pEn_;itj9KxM zA3^Px%y(jruTsJmxOz{bGhP_CJ2e|91b3wRufC#rGvVfV$ha+=Q4P@jkX@25iizjK zM{fQkbI{II(;OOgMq1M>@5-i^^KjpvAR$3Qc8tdvQt>ebm!zEGdILB&;=vawkk z|LVO#ZI@gk8BuSq3j}<8FieP+vPZ#3o;{_~pJ?Q%BIdD`+Fk6*9kJ9h!LN(vpAM)g ztT^lLv@ApT6?Z7_`Vf^?);a6k{@Vm#bEcV#&sXNygQ20%@iA=Wz-65eLU5t6$+O?d zUxJtOyJ6bBjdq_ggR(Bj*vq}jJ)^|Kr1ihHnq_u2FSF41cvEZKzLb(VC+Hua!iln8 zQ}8*dk@$ia*%B)kO*}^V2?213NK2VujTl~mZ!mlu-C=1u^p>X6Y-atJ-B4K$qNsApz6nq=`Ocf{3x}dY&Pzy|3lxxgnXxNo*cQt`D{ zYZCYdMlA_)1S7tkv8bYT)2NYpg8ouIq#L|9mMADopoll9;+dl*s9uW*lPd9D~EaVh7&-5^bUAT==kcw|VbqA;>=SGGy z@!!)yXElnHB|q8?!}c#(^|OCEZQQ2l z=R!F5w|KWav?80moIGD!9&9Upczm*j?2I!rvMk{gYfTw{7;6Nu{w|>M%jnH0Sr9qJ_wqpJXQ;NT)~Qc1SVd5VbXD&?`!K8dyQo6bA-FI&A`8w> z%~UL*^G8{~?*#+@XGZ$rQDFj=z|}3eN1C1ZBNyruu0lmzFseu1Qr;dFsS8K`r>y;U zzo`raF$4t}&fCuef39Fggwg}Bh3Rsrd*iW%t?0)XCRU5gfyh|1*`+f~6^w@Wb89tS zHS?9QFXJt!&N-357en}4%YM@H`rl_MhZ>jnzLzP07Va<%V9&*4eL_+N2A=;sAlQhG zU=SBR99aBjTkq(QzRcwpkET)z?5y%->OG%%P6kZJZb?{&YD*oN1C&Kgk+G`R>CoeS z{{R!7mLbtvo!)9*GGP4B4<`Jm1tH!KzR0?0&$u0m%+~%1(~6v1NVZ}gCS~zPGx%e8 z(Q|l#L77C}n>5kzYB(v=B)c%wl+lMFW<0sLb~BKIbX`BIzY&cIn&-DZ_&&-&eP5_D-Y*~u z_vNsy=U)b92$Nmz+3y=YfU?CKgSYB>98AiYVtqASHjeJ5TywV}B|pJlJu}nYl4w&- zX*)np<`C`bbP*uyTz=vBjyS5rGj1elE+VXCfh(4GGhBN*x|9J!MtS}U+5 zx6iR&TOL;_SIt`ac6GOt`d<#CAy#zQ7cN`95`A`@5m^YpWUPAnSVy-(Igp|9J2;{T zuG!sce%7VyWautCn+i=oqD&#Ml<{T!_ZG#Cey>Gm#tUH@ISe|k-S_>C5?mj zb|}wXO>CN30=II0Q)|OkUnq&e2Ay&-A@1?cs^jVUs*YuutF`@C`1B9lUV&rfZ7ud0 zj8H~oOb#=a!dA%VEnnP{mw~6FikkAP5H68uzp~LNDq$5i!p+pl&+3C`UuE z(O}o5d^=lSU^mrfr+VVsJllQ%BhEGx*pb?s$8XH@sdDA>s8#OYk_iJ z*rw*@+~eM$u0GeOt!pjRD`7?r#cGmhtcsqf6t7MRPZSi2C!xDaU*v?s^yTR)^BB zGtTA8TC^5T+}k<2V6unh>RJtayAQ_8kHqG_U}8PTQ}cICV&~&>pVivmG?>sXh8Jx- z)=FLh7$IM&yyXtYbK==TOp_20>lXIyO~L!>Qbc>?JsM7f2OJa^1y^ z2^jnMPThE!@SQ?13vI!&8y*qnACKA{kOh*`w6G!9y8v@%BBLprSxl7qgPH!H;<6Sl zF3S{Z-RN_ps`r^qS)jaXrgy@Z?ZkY7eTmPqWtjBs2}AydOMQ**)D#Q#YF17I);C?B zuPHZ3|A^pVOfriZK=%%Mlmoqu zPPtXZ-u#)&L)MMFv+Vdvbd)L+e9LDF2XM6yzo7v@fcHGhQQtqDpV$2ne4$8Hf4-2L zJu$15bNGbNAYPfyK!yo4e(0QaqNJw%3Tn189O!$rNwvwB%l6*POsI|CKXAxT;Xfq` zjSvirzED@0T<93e3}b}V*^k`zUi{2Xx2#}?S?>P48~6MmmM9x9903f*#%(=cpNbdk zO8JFA(Mg!5r z5ENeaYk{Z9TUXo|CNmu6(cW$b%!XH<@C0VZd9b*vQ@np}dkl1h9feLjfn}2qae7a-< z?=JLZID7V)%2OH=tlF15!C4daIJaI%lggsc$d9sG(Hv}f5SWvvEv)_-&pjE&89WU}nxqa9ix+K63 zHHn;b>}@9;?GSGRuh-DaPb@@A2Xt5LPJ#^5w4Zwp284vPicZ*fttADvlu@yfgonT0 zAunzhll`N7!nWlB&|-d>5^}ekE`|X0F|jvFsSf?Hsy031H=Txy-^I?cuG*YnVS^Ek z+BJBt7>4u0IDmNo0r1mts!ewpLsXh|ygC8F6v5I$53hpH0BPf0dMmt!J$%m1_3n zmYl{HKk?y){NWPamxo3pVm+u5kNs^}+rQF_@Ony*1#(R&jJ^KM-3DmUvf*g*{sjw9 zxnstWVQP1TRk~oe2*{?}cPjKn+U*=@zKYMU+cONt{p+=+7%$UA1L*lvskD->ETo4! 
zGa};(n(5|&o#2<@^AKNj816IBhQnLZ0asB)&oz2+TfkiOdbF9jQDu<5dR^s~-sLwr z@v#??Kuu>$J`r@K6$!$u-THMrcN{%_ux@Q-<3*-(A|T0V={Q1o>x5) zj4pC-XH=U~bL$LlR>X6(ov@9?K` ziMi*cX;0eVkZ<{3qs)KjP8XwmL!82Y5&rV<|5a*o|CYZOvi&9TePVsFdA}*vJ16M^ z5(X34Lu>Y5rBz0eL2o(lGD%73sqQ;e$Erw@kOEMLija`?SbxAdOqp{yib>^%z7 zQoC1^O6;k+QGsX#ikP_H$IMg1I`J?Jx2=s$n%s_H+vtpcj}TyLT-`%;zlnY@!{JHq!=C6 z`T~SgM2^qOtx<^MiwB0r=oD?~0X-*7n15m7=bi-ll#b=QW7pgU$KHc`*Hhd*E)U>uM`ObQ=lFEW_Rx{)7$ zAY}Z+U|>$NnTKlzk`dbwck&hr;S}gru&hOZg5QOvv`;8XUJM5a9Cw4sGX#`kH z9O@SnUqTUd@r-BXSX$M0f~~ojBfZ27lv`Zdz@=YA+M zi)X*(My;&X{vy+f$5DLN@Rd?-WXs^Im)xG*%9sioR-O>EN8cER4oQMN zwZn9S$-WTe(&}F~Q3^oD`)rKVS#5ohBaHn;K)>k09f}7)$6;wA@r(Wkqa`LJ*bsf4 zl^z2L4iaw`9=nl&??T@K93q&c=j#V`)>Nev#Y{$oHZHF_MF#9aq5@hoZk((i*n7uH zeXE*o#Ua<-wR;7U$5QL* zZ$4$}?hutl&|uX3EMccj4vtIRU;BJHxfeZa(MD@i5KMpBwCgLH&#R<;%)Y2?U$L=%$x-_< z)+lhleR&Oi5}cqODTt)PE`Yt*LknK}n3( ze&b0a9#HzJ>2fE)h6h!Rdxf5`)t7Si2FUZE_PMc^N}zRa%job3HoUOG{HEW!F=GaM z-{(N}y#nGF4tP_qYuPkl#I={_y zvvqVl`FuR}Afm%qKz^p13UUKdVs~$+M!&lQyCmXS-Z7bk>M(vK-+A$c;X=go^{`<@ z*W?WzOG05ao{2&?hdvd{z@hWU5vdDrOT^Rc8fU+}Gk}!Lh^V=^PXPdgP05ORz0Pa} z_o;pfOV?{#M|%2?5|fr+Q-4?7lEF(1&iyo(aQY zeSd(WdRzwveVQ{yQzlqyv2k;gni~bs+FV3?wvk4MyJX-W$Fb9;uokpm30BGPaAuDnh-HbygAz>0zjOh4Dy zn9}KKyoj)`*?lFpGB*=D!RXE7wz=$ECV)j3SK*QD&aglp)8{TmlVtf{`4=2aa(uf% zA=no>h%>r9DFadRk%zY%;vmeO%(iw zG{|WhUk&k7BD2uleb&cA);R&MwY#e-Q{Kq(RJpWG`8+L_t=U3d$v0ZsvR7p!<<$*;_lU3r4F+PT;aJh(V5c9$ z9c74^Sw6C6i@U04v)p#<1=Ux!bVvB?>Qv9(820->-l7-whL&QI-36D%e_L6iHdU5Ci{O3B)xwqFQrrgJ!~6KbKiR2S!8%3?NLxVr|Pm~3GbJq6Yb!OkDVP+iHy{z_)uo~u z(|Tz-?X>h)E)*ZDs;Gzlgempks;L>!i&PR0jw#3`0z^>l5fM#%UhLamJ8jJz!o%&; z_USxDOMrac#vPBik>EKX5nGEjS;kr92XtdqcL@)fP@Q|R&-w@&a3FB?V2z}Hdv zL)L8O{j%(n@{#}%Veg~@hLHI8%k1mR_R3xl5^2RoqUtF?20!dfgHGIMB7YX)?vJRc zW(tCg$)R6w}dWVIKnZyUjx=@(=3 zp*cDtNGAQpV&P=}nQFCz`kZ@L{0(E}Q1$o7rXdRXy)R9isGkdsWQ`>jmfwjG?5LV_ z^!Z$77Pd!2LUHm8gOu{@h@fE@sC)S)!7A*l9ySaUi!P89X2^^Ybu_lQtCiea?)W_( zmu%J&m=YHB%JQ=0Pg!?H)=!qVbk8;{gs~|vdps^GCG9DyuMKkbrCJNiw>Cgv2*NFX zz#wt9r7_pK(1-E!Uo0wPdP2Augy;+!;IZYU4I~iJz*^8xlx^`bE2c9!(EJ>sEt`6m zdqpgpUYWyUkI#v-6%^k5Hd3Z=y{`p~PbtKV6cmD% z4>rE*g}XX$OeavV0}B!}pVi2{g!S$r=8b)PRnIFzf?YCQLH&&1w9YHa9uN!neqt))msmoUFHf{_mH~Zc1E6n5o8iHEm2>7-@hYr)P2giIRkoceB zl8=rRR9ZjQ0c&Kv5;bL)YD7+r=yI3rG*a_Hho0E!x;q(!XAJUoaAq+k7OP+B0RNHf zRJAuhGi7P=Huf=l5A{l%UskAdqmwRBPyKMO+-g883C`wy^;XY1;R8wj0fVPAj*CX8 zG1zp|W1Z*GH_We)7P-F1^{OO^2h-kU`it6DU4r2FJV?ERb78Lf1*$%B=lq{$5J~=v^$>~O8bm7 z2i-refptxf(6CHA@NzOri)o3otFTO4&1%ta6Dc7eTm2-OR#P!4)PA6FX`WDBp&{#` z#?&J^YfiJi5!ncx#BBIX;mN=))X6kU@^TDuVuq8S(AIvRGjm|A%<;0?Iy~x0+ABjc zb*n`gVn7kWKmcdMPR)n2g|!~=akn+FP&Ol_Qtj3G7j(+|E9GZws!kq2KW%nYTriYRrHd-Cf4#z#K3VE^Jx?1ntUDzgE*z_fjHWMSRKoOg!{`%$$5&ol_ewcQVBty?I!90U+6ec%2lY&a;OKeU z@WxcQEGIfdX0QHAvI)3G;{wU9uVp&QMM?&UxDM&SFakniy1Bd5dKO=3mH0znX!#g3 z|H|;Z;q3t+U-6Mjr-cj_(lX0|bfNj@-BAg7>>QQnc0n^tkrSu|Hg zM`mUOygMm?1PKD6e;xLHLW&5=t!{QqpVrMU#em>~z3D4!(@yB)2xW>G7U4T>_vIZ~ z7L4yIEAVBn49}_M@x@^vEU17=66%#*jNrOaxKCXV}@yf>&r(vviuo234t+Y ze;NEL8B3m8*Q4lBT5&Maa>^&QBPF%@Drd*8wy&(KB;(Ss)jNeRB|4z zOG>#m3gE9U;7rpPyyZd(k3R9W(1$R3?|p%TcD^pqb-uV0b%y`Ad%GN884PWky0LVAxm{2|`@20r*U= zS6??L41lyZneVNG9z!8gp7JDp8PJB&1#n4(4si>;d%W1;YEeoR1%jmk848${=u@sg z1q2iA=uWQjAv8ZF>DrrXs&@g2PC4rSh%{)|Vh)7-Ii@5%0Co(8S!vev8YH1}@$PcC zl@)sQhFX<{oAlbI?(K}VT`%X!x|0GgkC}@(Ih5D68*cv(*4;<^nB${5u8GE|TnYYs zIjsvo7F5LdPLF=bB%q{u?YQ&n2<7L(cZtHQP8_Q4x3EaYdmk+GNJqAp|7;))nw$ZH z_FM#bnsbIyYB}hs?E+1_;auGdIyT%TjqLtPpa=MFkL1>e(7n{{N#~vzKrXY-rNiqw z4Jr{gCeN*~G*xQS?ShWB0HjUL{vsc#P%|kf6QuoTh|f_YKvPHX{7dW6XmoQB9iEj- z^HUW4Kihr%uiz)HxNz(`UxpFx-SwcE1i%$m+1(CCJ{|oNTiN->dfiRs>o{kRdqC-5 
[base85-encoded binary patch payload omitted]

[diff header lost in extraction — a new SVG file, presumably docs/source/_static/images/visualization-fig1.svg; recoverable Graphviz graph: x () -> AccumulateGrad -> MulBackward0 -> y ()]

diff --git a/docs/source/_static/images/visualization-fig2.svg b/docs/source/_static/images/visualization-fig2.svg
new file mode 100644
index 00000000..25db4e5a
--- /dev/null
+++ b/docs/source/_static/images/visualization-fig2.svg
@@ -0,0 +1,106 @@
[SVG markup lost in extraction; recoverable Graphviz graph: fc.weight (1, 5) -> AccumulateGrad -> TBackward0 -> AddmmBackward0; fc.bias (1) -> AccumulateGrad -> AddmmBackward0; AddmmBackward0 -> MseLossBackward0 -> loss ()]

diff --git a/docs/source/_static/images/visualization-fig3.svg b/docs/source/_static/images/visualization-fig3.svg
new file mode 100644
index 00000000..c041e0f6
--- /dev/null
+++ b/docs/source/_static/images/visualization-fig3.svg
@@ -0,0 +1,339 @@
[SVG markup lost in extraction; recoverable Graphviz graph: a one-inner-step differentiable-optimization trace — step1.fc.weight (1, 5) and step1.fc.bias (1) are built from step0.fc.weight (1, 5) and step0.fc.bias (1) through AddBackward0/MulBackward0 nodes fed by MseLossBackwardBackward0, with meta_param () entering via AddBackward0 in both the inner and outer losses, all flowing into MseLossBackward0 -> loss ()]

diff --git a/docs/source/_static/images/zero-order.png b/docs/source/_static/images/zero-order.png
new file mode 100644
index 0000000000000000000000000000000000000000..2c94d667e42b995799c4aed5fd21f8d844c8349f
GIT binary patch
literal 129922
[base85-encoded PNG payload omitted]
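The three visualization-fig*.svg diffs above are Graphviz renderings of PyTorch autograd graphs. A minimal sketch of how a graph like visualization-fig2.svg could be regenerated — using the third-party torchviz package here (TorchOpt's own `torchopt.visual.make_dot` exposes a similar interface); the `Linear(5, 1)` shape is read off the recovered `fc.weight (1, 5)` / `fc.bias (1)` node labels:

```python
import torch
import torch.nn as nn
from torchviz import make_dot  # third-party torchviz package

# fc.weight is (1, 5) and fc.bias is (1) in the recovered graph, i.e. a single
# Linear(5, 1) layer under an MSE loss.
net = nn.Linear(5, 1)
x = torch.randn(8, 5)
y = torch.randn(8, 1)
loss = nn.functional.mse_loss(net(x), y)  # AddmmBackward0 -> MseLossBackward0

# Label the parameter nodes with their names, then render the graph to SVG.
dot = make_dot(loss, params=dict(net.named_parameters()))
dot.render('visualization-fig2', format='svg')
```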
z>wUbcn%jAItH&MhY(gjJ3yeFpmZtaIby6*62DOzuYwLvEJon*y5nb=8k>>FJt=Por1Ck=PogX?)qlV86x>Z^~-1*Qr*K3VAcTD(!*+IyP9XES|%F^8UlN61vAonvKhE^tsi zu^l)*Ik5(D)-ba;HnUp8pL)AdbiEjdMLZNb|>3k!!;7(=DRu{Hvy;u-fFUWJxk40TXxpmha4clox1W+1yX)l!Jd{qD8RvH8|W{)3dL z>`^qGf={cfa5XvbUMU#I_jcSQNb>ow>Cis$7eB>5kymk+c=gZ`&I|!gDh^W^X)?pY z@7Du8LW@Q9lM^_vstSx>jl~JSHRvE>PWMJPXeBo_dmF>VotG|{w#d}UmFIfeL3pBnKbC-}g+u)7c?3APKvOuR zKktzPzhPhD;2U<$-@g$f0^sg|zaD^Zr!<6ry&DBT4e?)(5kG*};6#)}rKQ1dC4C!1 zL#VBZl^tY7=OK9FuCz4XpGGnVc-G zVV?uX@5BopS{mBvk~>*iKy7)Q1StP{2QPRGdm2JX{?}XV%mpY_WEIFot!xa*IhmN5 zm?;I($jQn1Z48Wf6~!d}Tn_#cpfs_wv*v|B9335*9NCzxY>Xi+JUl!QW>yF*D>l93WSA;8S?MX2AA@~p5;|AbuzS26*IL2YX-&;-4$lAIrT_8L|GxCKt)Y#ml_eO{PVj&B>(99V`Qo1o`5~~W|Ho4Nz0Q9<3l>@s zjUV#wO%p`>?I3Uh_L10BO#T)44Xh0I1>XvO(Ej}!_PAd`^jXv>9Gozmw3x^%C-{x2 zJ5H}&pPlT+PRr1~6qfpeh9stN9~md%*{~MfGvS21>Z+})mxai~WS@Mh^!y9hvuGWr z6fzAPpFN_Gf{0`tGaaTLZ}m)7#!K7Kv>R+CpH45&#a0^k)E+1G5*#1%@AtS)Q{u5< z%?ZQ7BVph60|A@%nHL2dV-=+r90Dr&jX%67guT465N>-JT!+LU>_wr&%5>W(u;CFf z+)!@44Qyd@9qMW-q2wDU$+I9yaAh;0PnvH0|$>Hh5Ck*F)__wsxYMc9}tbhFYQTO6~a4^f&+I8^z_Z(WT*RoPl%;N%<5_hh!Zg-hV zRBW9#WZEGsR|Hf-g}xC}Q&SOw2pa5k_&f4_BG)#z+t;kT7o4w9G#wmg|WF%t2!Dq?=(TU^OH3Rq_#L+j6A9P#V1c7-oEnyPDJk_6X!tk-4c zZ+esb%hu;ktM#`j_62&^Lj|r))uZZOmuw1M-%GqKa{U+;Wq?G4%5Fd-#&^ryuz7%! zJ*yb5+Wd&}=!tAX)Fv%^Tpwp%iW9=!GhX~anv(*P?l0C+4lyRm z$~m69ZSun8{J?D<{;o39Nw=D*^GN*?k0!uZ5&MYUSJ0OsD&x${_GR{KyaH-d`!N-NV?dy!=Y}l=0aK8g{kA>Sr2|1Z zsPh{K?t$_p0RcfwHn)I!BkL%!9KUzK{lR4;+;?TT!NbD7C~v;>ha-|(QT&Ic{-;Yb z!Ge)r^F*C{^?em78JXfq0SyH77->VW*qB%EL}?yBCYX5A+|puH8)|!N(8GhxXSTT$ z%9rU#Rx9b^VDY*^-gbBs+)R!-O`&B(@o;eeZF4~eDJY%%A=~yx&0(Km6{qb&SM(LS zt%~<3gjz|Q%Ab3*O$n^s+-eFfABz8Y0hH0eO?Vj&s-bUAl$TF_wbaneaI&4K zaEUpR;#Ezm`-RF5?8Qh7Xy9yzX$Zet6`aKBMZVa!VJ@awGvl*uBsUlIN zHfA(v{W^w$Rm(zEOn#tSjIRD9{$b5+(OQEBY&Mq6XLU-`B_Bs?k#e!;B!bLXjv+}h z7r%`97$b)wt1v$9^XPoO*#WRgVM$9v@{*O`ycW4k3=QM0m=V0mHe)1v8}r-WV*R1m z_io)KImf&NR>S1bIT_uPE>8!3#~S;w_F0}4F@K80>3_(9rhxDz1=Oa3pw(Wf|da$ zD>V&GXJ5J~`r;Tr(ya!Z0_=`KuIjP;`O#+g@E^A{OxgcrjU@3Rk)G;vFB=)B?hmB? 
z2CQ2Hl@}Y>Lj%=7O7VAbjF$6j&)0Ummv|s(%CqTQMr}i<^zQ=|!_uucoe6IVb>WF1 zSwPZr?D0N$u>4rp)WR+E!gZH(Gs@W*bscqLKg36KMRp>+?dn=EhwJeb((9%GOt!2q z)jEoP2grx`&pLXW?%(V!-WFidE@E3;&8gRz49=bJwi8Vs;*UP1qn2q9z2zEsG5}BG zsi)Pceg)?-tkTJ!3HM|3!cRSwUT1+t3=B#-I_W2JN%&Rpy9T%I7JTptT+bBNBx~EMUJv*nqlY$xXP~s}+!DCFQ6B`Iz=>K|h&53QSb&^=}+V~E0N<&Kb>v-9sLXZnS?elnuih~{uSk-Xc28=3^U9wNr{ z82Up8rZ_Am-O0!1jmAazXrjr+x4BUr!hIqga@WS=rr8+WBK*lB#Za7^P56f@IVB>(7p@r6bD+q(KSI8R=L3%Z|G=xty;xcM`%QSrQkFqp;E zj=;_=8x9v83Z{r3<|nB``EB+QL2rs1UvklH|+hIKM5ZsTNgOOP;zADBzlqZ?sicWedcg-=W6bL_f@Sw|9>$W98MN%gJ zjet=V-IFODiSdhKink+tE7^yB!>hXfx|`K`Ay=M}DmbK4td8gsPbP6fuBI42A|gVp zyzckyTS)93gwFhsPVr}yti1CxfXCvO=gxljT6t1NE-{q9$nTXMXn`p?5d=MT|al){eX0JtT0-Ar2D8s zK8csH)a62xNu$D~(r8TG^JAyEww|f&{M`xa%D2Dp*=~*xe!j3zqpL>#lLltH6$JuLi?~9SKS3@} zi@Vy_^D@#ug5S9;%QX-eb75cilED%xsGCU5l~1ZKd9xRmqlvxnN|#R+WHlY2)=v(w zTc;JRax!s7mD!5e$k-5za5%Q@@cnt{4M{O*2I*L!=rGsu%u zB4A7(;{CCR*E?b)Hp{hvA3D&L>OV@`#;8l!t=r2QbcAcglv#79-xKA|p|ril5&piP zNpL^c`wsEv^Ackh6CPtl>s}UPmD+Mif{~lc%-`GyD!+*^n*RDyR?=}XQS~8f%aw{>y5&xU%PpSBJHk#v3flv9WlD z&tt-kUF;$xq19UN9?_@7s{36KB4AKxBcR-}Bln+RAHfG$Gwkp$Y0tEylXZQ}pz*{y znOCbac+2Cylr*#cTMrW5?^&aiS6B04<~Vs8pKI1|tnj#irOR@nEZ8W|`u4?@B;P>9 zK*wW>0-Te`<&SYUeq`{gPT{s!BXwR^I_!-UV392djgUfMpu2)Sj;h#Mkdw zJtA%=pSEAVY8p7&^k66m*`oGucz^m`Y2z;egFqV6gm+AXXOUSUGPFkHQVV07N9Nb6f67 zj%1S0ZX3YlW1#w#8>;2G;^15$V=hyja#3^~&Q7xP4nVr&)Wi1Y+@4o8_iI$vvlR|T zAP2ETY~`08LgLA&LzivFHQT5{j%y)CI_m5U$|Gr%2z89 zp)zyZ8{*^L?7My$0EynoU$g9eAL;+STtz~q?A2XWHiv>&`5KJ_TkZq8JA6fM^N?xT zJ+&o==aqRH`nPdY9pszhi;c4+%5`myCY|LYcO7@l zJtLM5%~i*1X|>p?v~HTyX9+^}IftUS@?ZJ?JbrK)*hmPl=NG{-&tJHI+d#l_ELCt+!g5eSNOn zv{$B3ubFXknkKCNJpRIVSaJKw{Nb6WARWi|*GHy~dWi5SGn3UX;SmZI<9xV`4vS zG`8QbK&SLA3bu<+rv-l!k2i zDexdlJujrV#84oZXr~B9aRxND2fh`VbzK*i@SOTFMkLknhGO0fR)5F>RSr?v?#w5e zb%%`2O@ebfI&d+ebN!^mmC%8Nn=)4DBw~5!qS|?2@PTHfk8{dl^U}9 zZsP)&y)UGqxrDSAcsQ=62lFaJuRW(2Emh_7TvlF>6`6YigWSNf2p#BHwEH= zOOeeWBx2`!y09en=}Qd=vU6oL18olb;OQBO?A%e%WUgs_a%TCKJqW&QPG9iZ1rbD;=nJ z(jp${d#{00pe1fYR_6g=edWN5Nz;@uH~C@VGoNgMYAfDXv0rJV~Y+X<6nLZNqK9=5Gxi%zV@4T=Pn6KB)b7hLV5Ob`Y4_Hc3 zI<=pnaE1;roKnk>KjVv{)#|D+4#XY1Hk+yWD60Lc#yyCqC({55Lpg6$%Qkt*^WCcy zK6k0pE%#{eR4Q+-MH|`}b^C9;6FJwIZTNQ-GWa_R{XigDJUd)B+!!sC08(xMrXH79 z`WR-d%OPtqQAZ4;dfNPNqg&Qh_zMyxQv-9Rx+K9stTpRI=)w@$0Lp zg{p?Q-dXS4K&h>#0mT9lhcK7JwfHei*+kCnGPv@o^M0(G4H6qQT38GT|DbagNFszS zz=OLko?T8WI4_@8&g6`eht(ffs_K*Xa-D7a!}2T*n=j*TyhxBWH>rz$z>DPifw<=$ z==^uq!2Idx=!_hi{L$-FDX+FvKZ)IyaS*MvS-h~%xFtPdTW7)spHy4nS*KYV+OF>` zLH=&A8bDH{H=kd$Tkc*~zDD<^(tko7Y-U>k9YoU-k@H%U^zDFqDbiTgAJF38 zwS6^nXaXt@_+;arq(Xrn02$b-DVUf(Y@kfo*ooZ?lyvR_k${4DjrrspyXN$i(%In-nzKoqcFA4ZOv>MJ&uXRA(s=8twt zfHHEkr~!xI{2(X}PIQQ1?ZAOH?LS*tng1H{8#PO_e3i!B8UEk zgs>X#$RCDJjI$S)_^$^aA4&aj#e%rC&$5XOW0=dmspTB`zneos934cP-#xO_l%%3~VA#-#4vQV8JPLBsMpNH_=EVM155uE$#(q^_%3u^m0hd|6T-DMbicmbw$c&Zc@^ zsRA2a>CXsMtxi|WYD0a7Y_`%qHoA4n z^?kFEJP@a7Uw?Zw%#?Jqjwb8}lZHh`_=t<}J;wJEOU~ETW5TR<^~4QF4h-lX!TY=U zzMfPSB2s%e2GvCDL@-F1v~I#Ia@zf!)BS;arpo?%{8WL%rdna*RGJ9R_67qyF zmC?p9%T`x{ZlXlODtE;UyhN1yHdQh1gr$UQ-U(9^42|xHj_Lds=wVLUndW2tAZG)4 zJDo|bL<(Aa_=d-3A;{4-FF)iV9VOUlc5{Lf=#&~eV#6vhx=6d)F55b=;)V7k$v3WnaKk06a)27hfO714fsoqxr$mv6hiiA_<2>Jm_gy(ZCef-+ zwdBNB3u{VF%AD_DeFTzUPJ6JxWKR;3$angP*bLS#qFWU7^z_6=@|iyfxE@KxGHDus&*Txz)2el8i38fX z&JlgQs`dG7qpxC~22XK!H-x=fw)RBK8C%3rxr3-HUW+F zygr}tNX;xo1!@QlOv3_OOQhY<8_y=GR$@XiIhWh7&ev<*El+Z6&-b7Cl|71#hGteM# zAOGpb+f+gKaaxm$X3(A8S>AIXsXTk8$T7aroM=@9N{5Nk5yLr3XLDK4xL* z|AP1yMC9>GhD>*`SZhOHK`iXg7^Ym{!OS!2PB3!tMKNrHwn>NF3~8+cK!vi)zE4|s zBv*Yz{yK-iUWPPIl)R!`$cm@fOC6Q%g%0 z55>y$s@A@J@9G5pqL<|AvFSNwKN4*8g8Qhsd7wt5uQOa7*%^Ov2r#7a) 
zzC}Byq~cLYgobCQdhkxDzsl1Xqvg+qzmtnO7qEc>%2NOklqVIr4t;sq6fvbxW}$j} zw)CMD_L{4&NN7$|f9#AVxc){O2n^4YSH6(%*{)9_{YI%_;5#DatnxQA&)jG^S^J*f z0qpfXW4OciG-HCP0(NU|`Seq5&&$$W^;I@|8H(+V000vvsyL&P*{kPLuP?F-lLx>) zJ`UdqaNVvQNwTuDv!l&!EWXj6gG>PZ*Xqm{2>{x3KSbydH2#dBvO3w%8B(A*jcE3@ z2H1cb`J7wISF%&=p{k`2I1E)OvxwhaJkL=r4jmacTJGBj07(kiGU$Z~hH@<&h!zt8 z6k{|D<#sy}f`806KfDoehs>=nE+0xS?%+$padx91ufPoJ0pJFXsxV$|WmTpa)|g2< zgr~aB!~n~eR}a_7Ed*bL2m!|)H`;Ef-khn|Uh0ac`ug>2%M7S0FS_95(iH&|sBz9y z*z|#f5BPHZ^;LDQt{G>bqeIio)JbYVs7$c z<6h1>uaWwcjez{Rv**yt#Td7#iD10zy-o;lMtPx2YHtIsN!wllqfHd9hb<1Dgl<+K z8`Ob*Rvs|2o^yy#d$!?^lNpPIw|B=C6bq=TMbPasp z+NwK;+SaLYCm`NT_iTVf79R2ug`h+yZ<#!@k?N(jr{jhIHE0_W#m%;2vwCg~EGP;u z;WAAX9YC!kXRWN~(V4c(JuP@U&<^OyP7_x5IA&FJ)D$pT_EDHn)V~q1x7ayR8|881 zw8UV|eMFtLK)f@ZsjD4}B=?p6xQ(Y;Ew%R)VIz)V5r+R(rq5acZn6Fh6{f{xaE)^F~LNlL00 z5j}nNMdvp21G*e&eNuVpg8{FzI1oMij7gZnO+Dl>NK36J2p&fpV@1Zjq!StO?4}V1 zTT>Gbo-}p`_woQTfdOKyPQL0!%*IC(I2kur1UX?XPQrUzzw>u&d++5MNFB>fPdO#~ zHvB;+>MjM05&lgqG-`>u0NF0(U&b=1M(8^L6@n=9WroBD0?#zWO6xhF$JM$sb?#+W zFKb=w0lNdl<(Y%8CGU;kq~aB5L?XpyG7lHA>gUYMt2rL6Rixp4iscxZB=aDZz?^Q0 zn2Zb}8h9_-&(E(-#CrDy774F`5m#3NM<;#M$$X9p>XN1oO@Jb} z^BE4>EGIiAloByjZxmU$^oQOA1-)EheOGTc|Lzu*254nFtvQvW4c4GJ))I1!dNT1q z+84cQ^I;$#GJCa1h$sNpN{UKKiwWj=A3ZPT$>8NBijfXRuH}S4%54cHFyd^Tu64D< zQeDozcJ2SmNKtY%8*i+qcul}Y*GJ~~t8)!jvvF$D7oUMHa9LtT6`=;I+>mqWq3ul<@+eY5$El#ci@=)i=F74^kP=3e-N7BgM0*QZnHVIzQ=B&qxYy%ji`dZqFvtLGnod-x8(e%nFQB&3vmgVt{(Q$G3 zlHDPVH)UWkSW6svUZoM6Ng4$sWA|Q^H5$7?Y+;(VhZ+0^00^CvJR;;Y-azN<`1bmH zm-e|Qpiq@YTy~q|uYm#ER^{r?44%3ebD7MUR0A5bf~&;qnX7uUr3&l9x{iw~j3=lu z*`4sbhg;3kfT}IvoTg7dZxtS>s8c>ucpQE~qpzFNE5K2ZhMaF-M~u3lJi9a36tjWi zsofApDxh|Cu`INF{PIJER*0ypF&D9r$K+832(P@WQ`P17t%KgbFLfum8O*nR2lK7z zKz7oU^G2zS9`wZ|e5F!i^nJ!M-0oryTV=G|Ds=@{04SjYbZ@y|2zzc`56TOv{>8~@ z5;-j+P%sI(ElN(wKJ*-e7dV!yZI|UHxN;S<-o;P#RoWU)jP&2XDRp@ZgSd$F={20Y zfH%sNAnp3Fw2P~JnuvBFZ*gPf*8*Di??9CQEV{?TUXV71mQXr-G!JlXBmtX|l%C$F zP!hg3X1~5I9?|(9o^Vv__J0z$S zvw9)`SE3%#zmXyz!^~AL=PV{#1FUjXblLf)M5p@{7@!z>qIy~*8-siU#S`A=;~d}d zhH|6$Rne4|^4mDmW(i`qF7rhEt-ltgc_U*oJAD8+r;~5HhH9$X@tA#=D->zG#FR?t zV#k*y1LRa0m^=J#w_h82d+`DQ_2OPZwqhcW^G&V^{oxIi-Z#&=rtBg&*KZ-%Z$LU! 
z6@O+8Cx zh4knLg?b|!&86$rg|q2sLAu4nQesy67{+$pd0xewjPW+lF0U&f9rvW{>`F8$tQ)Pe zLAyg-V*rSJm6X-GpvC~84`7I=$%*wX=J~b57~DhOm3y`E#&68rJM_X{jS!g?!dhQU zUNp}M+DF4HcIQ;x3Vl9>b6O>;Q)F&CBB}*C-dI*3285C$j$sJ0=UK#YS|%K<^fwKU za#W{->2Bc^pe^tg$?X2h5kCWQGdKE=rQE|GIXsLap0Oj5G5LDuJ>>*uV zzK+5o#ShsFuuWUrK2pJ&N^E}ryqo$}DX}r|XEdf{7-_Us-D%4Of?)XX_V+|woW7V` z(@q)mMMNb|m5!Sz$@Nq7ASbaP-aA|$DR(8fylJ|WVgOa5v~kc#>tQ2gDBL2T?G!FYt9BffGxPK34j9-3Y`B=?RJGVB zn8GbRr$}~li_>G6gu-ok zPBvpjhT`TtQGhG>7hlW(W~e)mDJ|h;c)GU;%Wc1uwJlMI3E53{>J#_SZg5g^J>a8a zU5ae|HK^g;;drGxl`R0p*)F{}1wbFa;La4@@RnY-uSDtbrxO5%Oc=Mgu|BY!N?`Rr5ph0=Qg_>Z4mV7fW;)=U+#?s zRw(^dF$Ew-4SKg9_69CC)fz3yBVAf!~+ z0f$RkmY;n8@km)i7ljB~vn!!&z-4lIcY7Q&5R0sbH`k(9z0AVYh>P89h<3uG<|cqM z>joy(qfI5TA`D+dKwLq4k1WCnVuC9w7l<3S)tBx|U~Mhq(4?&GZV2*^Cy@hI_usXR zQ@}Nbwz3XHZ>Tv~d%=2N7!X8h@!_X%Jg2Jc9}p3lUxa`&N809z9`UMucg8S~83|L4 ztDSZ~@n7sT?d5x3%6MEJDo;e#TA;Z!yLSL;2sn=K(Nfw?%x@ zy7hpQC9@lcx_0oWIokDFKzV)w9AV$}H2}hl@OLnIovu60;e0Gm>L=NasKQfHpjYFb zyT6-l04TR`QA-3N6v&iOTr+NiX8}#UbysKRwsk;PD;5|Bjdf~8MBt~#oTcMzs>4#V9cFk_+6QgR8*tF-hJ5`hJ z`fzU9auyH_AFBa{^1UbMfhoU5s6&$jk)cu5h9(eo1(y|YB9jyeoJWry18Fmwki%?Y z)umV5fX#{eBvBb&-l3^0p6xwk7O-s{q@L%|PoqcFdCzz;(Oi5m0sFpZk^|JBET925 zKT<{=!-U`MC-`T;Xe$Ow%0KuvT?x4a1y&+>@6+r3;bpni>aXw0X|bo9@|?cfxP<$( zX%jTR@-+Eozn>p(LjWY?^xV|W*Q)i}zW)wAhDp;5g&0VsdakICp1uNW2}9*ar8ic} zP+x#K>S!G}1bb~6=TBrK*(?~FXC|dYDslRLV0rvl9AJd#K}IJmZ~nU=t%-cf$Ph0k zygxp7wly*g>1e)-F7xApH=5V%IY!lsm z3oGIUE>KW5LLr7VJ)D~+O!Z|*V%gR`mWe63Wh32SXFL-69y~z+yU$kke1I|1hOZ+Q z0!c7sHt9>7tXf)k2WHdU+6<3Yv)4htw-6~mUiTb5(-n<|hMKy9vU(1{MWX_d_18!? z?B>H=fZ&yw^b0d(L;hJk!rldJnDUAWdm2t{Lp4Njg69DJ7eMoO+|eHu5p3k3xT8H6 z4vAUE+pa3@`y?K+0PLm=nhW67j2{Xy2-|djN%Wdu_!zb8CNa1z#!K3dtl8UElBclq znoBHw`)+`P1>ksBBgzBKNqXy@S&@Ei!}Xb@4>pS(?LfrKVn&~h=nfxzx1+_31fnVv zuq!sidw^$^k!FY#0xYlYo;)O7OeAO~mXiy2_)8<@rb2F@5p1^*;h{$5_1`J9m z1xyEt(IHdrH5)#AWaCUfIM{d!md@a3(EYtOs<4yw2Sc1i1U?;|-M05nPM1?bge&fZ z_WfuZhY)`NFcQ!iA}(l&z~gS{%5-gYrF#?+n*Kxz@eTRT+YNRQ^A2fjm8h^eGrQHa zM#&T3hjh2OytzgNd47o zE%(D;KS<^JQXLoC?t>8TkB(;{R1EkIcvsz%>PF5RP(FDVuvNb?A6AlUgfo9rfSpf` z0DxMVJ|I-F?f@9@^6ZdpP1t}$38cA&&GH$}qaVFzq|2yCqQZzaXEX1Ib%D-hC>#c( z0xv-O!HHF#h%XAfCegPZdgVO!Vj@n9Q5{$Os@eoc2jAhm*&F(*bQ2hQTH^=Jj2QPi3gZli;Olp08fV{Y_ zCI)-+Q0OOjkZX*%fJZe?3a$Y?uFR4nL*%+#eEEBR$gQFN-`Uw2w@=#u$jxWXsnGWe z3%(X8H2L@RJ7v?<%b|O~VbAOa$C1IY%xjdxR9qd$rrr54A&^8ia6ybQUuIhRR9n#GvI>hp*P3&*q+f|>H)1%wnYSPQYOL;`3DxRU^h`vSblC9khW*DkBz$W z<{7}acTd_uE%qTeLHOhe;RJfA)r?k2T+tvFrlK5hevE=q#?ajTDj49~2mq}KkyEn= z%|>qjd-eR{0Tf_XSNM!W2r%li`5vYFJE$Aewd!WWIi_E26~4Wq3l0n8GACGx-Gy=k zo6tAny13Zyh4B*i<0<2wa!`K1+1FqPiLG~kv)CEm4ey@KIn-hQ`1nuDqSc4UJoWOt z)5U5dE`aKpTCe1s_CXyDaM!_X`3cfDAn${HdHg)8;O1P*2a?|gZT}tkM1-o+&e1~s zuiZL>AyCwZQ4JxGI$B80j?d^Nf>!Zg$enFA%^zq=nc3NiW0(L{VbWI5O}lP$EUSN2 ze9ZMC_Md0QyqaL%>gJ7~!iQYcX+*15@+C7XOBx_nUqKc#Hl*aZJKQg3LGK)^=am}_ zY`*N(>5O44lf(yD-0f7*?Zi6x-Mb=pOhfIk+_2h+9o7`ml5}_oQ~GlCR|m41`Kr@F&MwQ# z!1*Vv`1{{Ph~VxRu&u;i!s_s>h8;+!!FA;{iH#$EF>Md}w2xsuAt!s9Y%plFfoKhM z<_=J9TpQeK2>@sOKqH7;3csl|&TdPAOO zb*997LJA@g=op4o?P1JrmR=i5o9}&EF>e6TfSAP&mwu=Ont&97GH7cgFM%C`ggX2d z74Vl7*DV%vt_s8RFFzs|rT@2?iGW>OAU#!J6vW@bu( znnY1uGY^6;eOjYG$krWzb29=kjAfDn_BC>Mz7^CA9as890GuQ1J4|268N(#=I7uha z>9?(x^hLq!+6Vxo4FnEwIQD-J?)MeuW11?T{e|Ta4A>Sjn_N*x@tn?9vkd`+X+f@v zRsd2qkSMuK2Yz0?e>ST0H%J2H%&41iVhj@`{0E?R`tU7-2S4vo1t>wonhHExwX@|&*hc#b^(A2(HWvl7sCdT9dC(+|set+cTeXPh zm#OgNhnxSn?=W!RJE}594+0P%wFeP)+tZq$Yr%(;51=iH&i73k!t39La#RP084f#- zfQpv@9|(%6<5f&U$w03&2JGK)D=C)(lq9AcXDoZl0m#46>MDQ{_*M^x^h<;{w32x; z98bIEzxtve0*v7?(7G_zHb8283gN;M^#;mdTaEL6ml{eUmsOd7Ot}%4+d;qNl3N%F zpOI@F=!_}`bA;u($k6Ms(T@KCuRTe8I0^0IY6dlVy&)lOb0K@ju 
zv6D<=Mhd_pW9k(=&*{xzPt;v#`^ak2N1l3&R^)9F;4`$K^^w;4?7+02{4Z+f3B=lt z7$9lO0@0zxV#cwFi15X350u;G;LGM;oe3aE-CED|1OE_5v@O;Po_tnYrz?(y)oDj( zw;c2C7{*n%iFm_T1c0DQ!2+=u3IJ=O2DB~~ROI>WSDXn)KphK~3wu*a?a6Xq&w-u} zP{2|j>A(9=d!qvO#=gb>2woil;QN7kPtSNh$2@t@Yy&o!;0sDltf{;Gpr!r_Ry8Wd z1>F~q!5Q0VAa(rmY$a9$l`~2J`B}_L7W%sHTLA+JC1C5a_&_E3|MB(RfmFWz{}GZ= zns!l&QYxc^6jD-VX&|BOaFSWsDQSz!%qS&g6tXkYu!$lel#B>PX869Yqy2e)zkiDmC%;f4MpaL)>ED#pCq!BMSvW1)EMaO!Im$ zr&4bQh&HLiaw#_THF2gW#M;lvP#e-;mQsgNO9(+>0JyiBC=60W+e!rf?G>5Lk=t9c zGfP*;F%+6H-~VVL&fKc2kh~cFU)O}h16ir~KF@~f4bpa2uM&<~jMxV56!`>bQ0hh# zbQX2VG3%Q4#_P@epUNu*FX%W#*6yI5QfR>*)silkW2B2=k^A;a`zwHocIoW7%XSp9 zQyPqbZUJ7D8FVfV)ngm;p1o-;{j88i+qP2f{Yo^#Lf@`v+5g-|tr!KPH1R#1$IFjd zdSj@f$JQJE{+n%l{9I*^S){XOgWSF8rMx=YdZ@u-HaT=fat#H3YJ(efCo`w&sBSioEO$Lwgm_m|%I2oQhj zj=tark%)5j)2Yt2O#kv+Pd4K|byh76@HO#@t7F@_b7vCdcd_7LqaJ1tzP0iX3tRQ# z^5UW6%Yjy7?uu43J<(QtA8VRiKYbSD0u^7lf&Ta{_V~W;#U>4R5^hIY=h^4Xn7t@v zV*}VVbc|T4kN$TZIE)stPcVv(&Y9&{61SvPq;v1P)0@r`*wVFMoj@p2m(R~w!f}>< zwCMvuYdk^@UInIG=qLE~{rNRM>=&mxqo7144f@zd+p*7SSVyP1gV=dpm!JHbrsa4^ zy8II(!LI2uNmaJ_@%WjZ*^@|o0c9N2#bRwA^ytc{Iw=-cxNUXPs5h>Bt^{6FTfQR3 ztIPFqGVd%tX+Le}5|>9-wJ6lDuHIksb(8SFabCa(ZZD&6&1`fGED8X5=@nWRGBa#f zNZV6=vjT!%)(&_;)cn9f^5hO`s;ON?IXT0@?XK*Rx}<0RjgfTB8@2)LBXtEfx|7(g z{|q{sL;8T5*F%ezXr&_q&QG9dDhUM9J2yH8x(?Qvw=ErD9YXf!nZ&af(tFd`d4`E| z)A)e4M5yuP#CYl%WZR7=2G+K@lu6YB)%vfpk>fd@)01OIgL&vRI0QoTXWq}_;wX&B zLwR%?S;wyX&AMDlmEjn~NZ-uzT8c#z2kI*5J5j`)re?P^!Aa})--8u z3N`HM)2FTd33Y}V>y-xAXM1~AGp5jbQ51G#t6O`HAWBr;QRiLy#vFL}9r`*yghj#Lsl6_{M@pr7NI5TMFOwFaUlls9+& zRa|(o0_sRNy27=DKA!`^s|TQ{G-vV4w%WT)wQSS8_!ykJ-vD{XQfsGVML}Vm1i~_7 zpi0#~Jz>D(kov+IP6@4IUmk{icj6vNtYhS(YN}Hv50M;$}0%b*jO}=a&>VF*=k4 zpe!x{R_oTYWxe<8ne5C0MoQpHbB}#Fgf3zKgU**prm8R3d?@%_IpRJs(%$Ae)Lm!n zat@F7v5T|u20%z6hB?`vtvdge)_TsvJxL#2z%`v-a3Pn3*#)*mayn==+a%`xyr*dl z?js*LKa@DL>wOCB^jbtny`2SQG0rbLNn_7YjbJ@Oxov7yh?rlpIt|iZ2pNt4hZc-u z2ktEDReK$?w2DBQwa=BM+lnIRXjjXz2td| z!J%uJX3Gd*1mB5X1whlcLFMequru9u@pY3C>FSH9Zyvj(%VO4gvx_=2lixb_&!6G?INo zN~hT#!`?E6nw8H!%Yg_1%9l*I=-`%34V|b#}3mCUy`CQfGXGEct8ec8_XoiIL^YmxuIUym;|jr_%I)o2>k~U=~*C_MBG;Vm1jZ z)5<<}R5#Nhwj`(CSWL>yj{Ve*M_CeN;lzLiga^)17thQP z^i=3zARJMD$q)>UZ9$EdkZ}jEx6$ih+CNQm0-Ogr9Mh8PXB)eHfF=WNBb5BKTP+L! z%_iSbg%nJ#I(B**eFifU^P_I zRA;)Wmz0YwDpZw}BBg6g%Ed*~mlM-A1|M3Tj&ZIwpn8$bA~ ziaj);?7p|V=)c6DFPXVHCwo|p-^&)+O^|Qz6&Zo%RdlP_9#b1-?X`HKEkKWsSWSYk z$Xo&&8XJ@}w^ak7q-ma+VBvWiHpx4TYyPFge}N4gw(;Pk6As8Skq9Aa5xjq$Nuym+ zKjU|GN@HsXzGv>l+fr&}{I%{yvWA+UeE?q~4Lg18_}g=RDwi~jhQ?;To=X`>1nR`D zLfT@mku#BCxOkm+DaSf`uvFK_da^VxE!ue4s$8!m4`l;N_k==f<@-wPY*cP(8=Ij1 zuPbc>Bdq9~jsd;0^y3+OM+<&t;Lo}>Q>%atD~`3EzKDSpw<%+_kxGXZsi!(h>*}9~z(e+|uC+#MOOiGn6QR?o$7o zrhhe(9JVQ*J#>pWIfb`f;#>EkE>>F@d77_|?G7`ZbzTuY=mN=H*9f#mA3Z9Pc!y@o ze>XB&Z!(k!h4mspQvaKzPZ^-7>ZtXcBfxPKb?=r2e~E((9EDq((K~X+bARZ?%d#>9ab!4! 
zBqWC3dMlZgA5a(Rk!DkG=Heem#ZOaoz`-G6j2aa8PCj9E&fI|_%cE6@Q$+}zn7#6dTlw;$ z)ugbbB?jjCDW14o^5aS@x22@#{^r}2kX0@sZgW4`Tei~`(k&(9a@0&9-1gt9e#_3k z!6^l02k$?%i$V+O!{w~P5mPdlkrUHf{Z&mqzkReTC|Aqv!$k@)aa{ZG;_}#qy&n8> z;AS$qlS-xv0S;y#&l>UJiLqX!CjdW5a7sh-2rO}iG1qA)7^i~(1q#L14K)-L5Ky*VQ@y_G3Xyfm%|65FGT6P2=wA;}vJ~5UZZuY=A2k-t66D)7rxaz& z?`R*k9IM?@^vN*qJqm~B3Ws{l2DTy%h=bQbRj^TG`($~rgmCl83Q(n2@7@)O zHF%sE5Oer~I#OjtxQ!Xb`dxP2V9@>Dv4@JNwo36V^#T8s^&Fd#r39b0zhoAG@+FBo zELZsg2OYy#vA#3%-}4K7iB7gmkEvGQUOmt2T=r#w-hDl27W^BH()OO-^bq(%n`@|~ z0kj9x3sO%rrxXlAAtvV7i8g)&GG{7c$`#d zfIzRJ_It~df+fEpOHfw!ZH;48cl0CP6ahG_6uUIuY)sj78eS{Rj7qq>=b$t7AE^)v zK}Zxu>}E4?FlXeUagRRU-v+zM-p*IC0Wj=o&|qA#DHwN-0>*DbsAD`|v&PfgeU1mH z{ryYflp*aWym3NeV&yc5=19{faM_@Nt!Il03g-JaHv3r5`p+`(n}KC;T;ao91`gZs zyoiYb;p0F!%Dndc-AXD5ID0`dR^q=*707p@eunsqZl>{*2XY_OmNHQmg=7GLjMTMt zx(uZzi~$XPW&gd+Zp;9EAi5QPX4^PV_l(z(iqmBPlU8sQnb80m295`-BOkH-DD`~p z;O#->;#(ul97u2luuXCOWtQPEG=oMkbMR~Q3s~6uCv~fkP9~-XUNc$5ye0ilZgDC2 zcz~5xL9mZx@AG4SWhSDc-h+}?V_u~m6RH@|y>WgJ4RJg2g{`kS$#oe5uSv}>uc$!W zAAhb#`#=zJ)SpvUpZ}EZRP+eVcU~WgV;1ncpBJ>w*rcyHpP(2KIaA~*Dk1_;zDSv+ z^yu(w`pg#q-lJi=^$!ZNaHaG3{HS&Z=Ks)VZWECOqg24@0b zMiWZwsb?;KmhjCh@%j?<#F)-cqjzG!WB2p-Uqav`34WRt>fkccHyrPN_@Y#1>(;GB zkaSzS(j|H{_E~u#08);GqYedg&2DTaj7zw-*1|bN#ML6}F(IqqQ9}WCB%j+oJb(6^VKBw?EBGA9vHbW6KO|wi!gd z`26zX#jjSMM}t{O|I%%AVCIa&ZDhPmt4QP{S}=8`lQ%Etd-;~h>C$AyJGVU5oj2V0 z!Q?#ipvDa@&V%n(C%bsG_h&TJ1xrY@yPALNDVL~_DW+Orq>N+#V_^Gt z@Y3C#RFCtUN?cg9@ojKc{t=JYeIQzb4aYz2+Z(|vX`$exRGhe+{ZDy_mn{}W%aUy6 zX$PKE)g z-EtE6KX?53y^`4t$X^TN^_o*f0}i52)bQAsXz&@~RjDfjM&7F;bJ8bX3jC42I0`*b zA%1O=ih04@>f+UB?L#ySSv{GemSQFnLKj2kp?eLP+0p5a56NA+@v}T6nf9;u?7OYo zn?!L2E}-F5&|R0jbdWL@{v2a$DgkPU+f$CMcsT3NG&z?T8D2i&TTI75zrx5|f=}8u zuj5YncHtXW2b`d6aq|HBY8J%s6NGJ*zf2{c7NoU6zx`zoEoJwugS)1?mfp^=FcJ zznAWbnq`^51Q()Z%#DQ|?p4x%z6_N$SPGqvk1bZyGn+i{Oyb5XG`^#8BP2eMr{{gJ zdH6by;ey3UHpYx(4?-0|v~W^ZH3M<5oGvxD5HzJk7gSFaQ;(rZiTCtQ;w8gN>nMrW zARls0;(ayb(2=-LUgxjQG9L{P$AvBhEGksyp4LofPBM}k6r*RUdGIg7+3!_I1h|-8E}@cm~qZkI?ZF29;TF4`%_m%<$v6we@S!G$kWbVXl7l9 z(lCbLIglri$-*~}^Zhk{w~CN~A%+Sem#5Je&4Z-g`(o@Cv&S)vlcy zoMJwFx9YZP`|cYye_c7BTzU9?l@zX=n+H8*nJ3-!Qp|@);t#X-u@;IWZ=`Ck(NtJK zc~c5Es``-2RT1V6L`L1@%CA?V}(3G6KkI1eh8|ueZ z{RVNts)@N(c!Vmco<6HSVIo(sxap2m?SL}Bi>DTlUGSC_+ZFA&I3w%A3$bg6a$p1w zR;&4*slw+IbRHg3d!e%DBgl#~CY0~6m*s@)iqmR%e3P4r23(a*xGFm$Igvz z9}+>}j@fXiQNSih!W>VD2+HT7P9*FdgHE~hhYJFZ=TraOdkSKRQLXeB50H3e>lJA( zL2S$xCwH;XLWUg!A_Dbl`~#@ZnuMV@&#~`+j5KWdC1rA`$)gut-4lz)gZshjEPz5b z-S%K^)~>$>zR)khUK8LvX@*BX%DSb*jI?RMq6{~080}832qAqLlxXtx$1x1dO!)%W zK`@qAZFp(nh2j00$PC_1W8D%+S}2=G+}|47k6QHjU_5oud9h59iHoY0bFRH4WzhW! 
zGzi}sfpMSbLA*rt3yxf9r}M|Q=kSfevpja#ZWaEp+1yJ7I?k~n3M?3ho?M*YQOO<) zJ)PNxK%Wo5^=MghnpZRtVAZRI%hi$D8i+eDts66?J{@1|Hn7tkQFB{7=Tnwqvtux{ z-I#X1&8}}hvJUj#I^6v25gs{$S^0^4e`W=h1bl_o6ZV)^S}084QpX16?>ZEyYfmhZ z(mieMW3t_z&|Lyfbh&lx#q_Xg9PAcmZeE1>O0%KiaOw_9+`BTX1qkI)pUW%vr^Y3Y z-~9`cJ4?*X$BfIdTt;*h>ex`0Ng6-Dunx}29hnSuY}!Si8B+@$kUzQkA5z*dwa5=Pd8uRvrslc|_1+K-G7j~1^Pi`JsJlmy1389JFJ zT2c|7rP#lBxQr?d7G5cdc;7h{uf6_jvk_lc6zlliRWI^LZH6#P=HvW-SF0US z?{HdllX4+C85_FO^_vAbPsJ}e52FZ-pw;fw-QGXVGCe0QZdF$d?m7J1$y7+Vt|9fH ze?sa>AAW!S#zeSdjjm!B&*wjUr~;o+oJNl?izY`9DRiSCcuIY~i2S!KbAv;*dK-po z47`4YeIQF|x<=bghGVyr{KX|3U;=ge{IxNxzkEXlTHJTp8CY2q{sn(@=DwH^A-IH{ zmu%!$GcEiph)0W-%r0U%PbD)BmCQUO%P`(PyzKt*6ygiP!(O>BA#5PEz3tSe#Es6V z0ssep>RKC;)pGN^7}7NqltYgeU;CXuu|A1c`tFQGaeQq2);_0R+8L3xe4C!2BLTviscqsxYY(1;GWvYtphY;7DU(QtpT<^Lua@jkJTczg zkWua2U$Jqh0nrvf?C{^2td+<@Mn*nWJ7d8`=Yi0)ni#3@khQ4}ssmuLEHr=4K^oQ~ zK%qw{aHQsds-PTlQUdKz2Hl^%{W*ot$xsXlXC1_S=Opva*S9sNK-?s#HGXp)-owvC z!mu58=a;}_qlh@W<^jpOSxz>|IGDzIp^1da(t}g=lo%uU6y&AIIGHTsZ82-k^BOF< zFzkl^#~lEVD0j{+mucq%I)I8BGe4RfcK_Bv7tTQP_?o}<&Uq{^1A3mMySRWER&gCq z!c<-TGdY)*acHVvLPyKYU}%J=hoy$e5worC=IH3f^A2X7jVKfxKV+$yl8FH@GigRI zm6QW*5tanR~Wu4?P z_>IuBxQlj0f&VNo8zOo#O!}XZ)6a6j6`-Ytrzsq3IrVYfKJeT%d`gBK@VF!+_W6$* z%j>%kK41QT&FDFl>B0cS-yRU>A-?xkRWFj;L(`&=YVUw#)yNY(ki=V)aG=YCLV!&Y z13q5W#s)1%;CCG{h^+qoYDS1eaU%2Gz(N!by9GMx*Z_j2WB^*O-W_O0vktXxRp@8B zwnBUPC!yjHV-Zo&lF^;u1Pf<8CZchntbj2lKS`i~X>NOQ@nCq+I^vRLsp7+8}dAR9b26a?W(P;USLPw5n+Ewi4o?(n_>1& z&e(zbdsp{YD$S-fkvT)#9k-g&WO?Nc?5Gs`8>5k{PeUjU_-rWWkL9Q!pn|V((m~3C z72ly*TVU6EB^>8aib$-E@aDE?#n9_Qv%<&xE~MfbQ8a=Y1xrK5zHJMva=nj-)WFhK zU01GSCtUh#b5jc6Cf7In3?+B(M)`Kb`jW|a_Mz8nSN>7lG@}jko-K5b>^#K<%oS#p zT|k|e#b^9%C9!r^lf030ak2cn(U}!gk==Lw>N4zOc1Ft0D+Yzb@uMnitg_vS#vo&X zZ91AXwmSBl&bz-=0eSiEg@5ylN z+EZwzat-JHlInIp5jZOMbB1A9*gEHE1gbj_N7YnM0OywAp^9`M{6YNmFre(AY01JS z^}U|?0NaX^R{UP{%+b5B>Q{b~q2R^%c_D9@G;h7FO0GDK@ham>Ky5LmJUdkA&d_oa zhF+GZ3amk~wh8<`km{U}``tcpY08YhuG}&I=U2y<^Cw+)-FsSsMl=Ha>3v>}lT?@U zS>?Vwoo7}VsW@jpzAk%vTNaFg^H|l69~l{V0ME0kN_yi>V#(e!Xr7M9H1R z>sr0@x3XAZHVJW_FA``sBLl?8FL^dL7Ori#hB)1=-do6TQvv(R&bwZ^jvXK*1D^VsMf+9-uRFUKFBMWEpOceLQ)%JJtc6oMRlTK?y14+$aG)rY)PhTVvWvbyMI>IxIgxZelv7m#Ggh8F#%}Rj{O!4pf$JD2 z*$^XIM(`ZBL0(3I#V~jeb{Aiyyt5!=WQ9#< z%C0(g&?e6AKv-L|MgafWCGqN|QEdvTed>P{ab6Kvy?yJ+>-ZX#2S%SdF)@^D-j$KW z?Zd`>XmqGI!e+nRJ~SOa(vJX#x)Ef7*!Jj;(6YeXE){5S!ZM|?nmWd5=sE|9|9X+Ti@N5GA)iYTff!C8qY%G^TnT>jM)SEOzC*$H-=mM11sutn`On4O zfTh>Kvx?+|5jBN*YoYn7U`f1;kGeBObm{HTwwnxHYNnJr|3q4hY-!s z0*4@w#b<$_&K58+HPw}G4|fqe0$}k+ilc)V@azy42Gj=Y6f4Y*;S=I`W%Wg;{>84< zS{Sj@kB4ezGyYlvmWGM#o4Sr|8(sr-!aB3164I_R(m(;8# zf1zQ8u5{vD;*&hpO8DHoKn0E@ZVZj`2+HcsskazD*GIG$R&x)lD{r5wGzAvWLnSYP z68GGR!=@Aq=GzV~ZzqwZ*-#gNX@8Y-?+M0=SfiwbS0Mba_Q7ug&#Y>ar)oM`%n77G zy%yk23)m)pc^LM4G3nF5t4YW)KW3R1Qy+M>)5IM*QdTTetQ6dEI;?! z3k%88+fGYs>{lQ;FYTCFY63r~|FW{McP-s)Mr*w*zGgn|MA~ZKsnGGI$(;0W~&! 
z24Ne~|2tPdCEbPc+IzI^q)T$z9pFbRmZ7z{{XL9!tfRnX`ZKfm&&jab0*JeV&gc$+ zg7WH#PVvBe$lyMkPzZ~U@=iqOfsu-nlI-ji*IuNtWs#4hYle55P+Y$}saXw)+KqToBhRlAg z1SDAU)Xu&MqYa7>m>?Uf>FQD%V#S~u2_KCzrFr~13Njf=lhj!|S(=hT5;f|F=NH|x ztjcA(;;`EADE|iaj*}HeksqX;9^|_u@*qE^{MlfCa^5z3w;oW}&%a|%%>xS2!?s!y z^C#?w`wkki?{)I_nV$(By_D#QtRe(?i2RpTP!)@gYE5kv^UEv zq|}UYgl2?d$jh(LXTqoi3Fw$rl>6Vi&oZT%Q#*T61h_`rLJZKrcf7LX?*`Wpp~-dK zP8yqw2JeL4R|o-}onEU?fCTj~YxaFGwTu+rF_OZq!4*Io{w9gv0T4}7t1#%Gb)3(!G4cOR+NGlea_(R-`$aC8l=BRIBRI{sdcA7uL_U3bqa?H zUQX}6L|d*33|@c^1+l`yOB9q=y#}EE#Go@vuZ|r(jU)XkNZW@RxlJhp-F4MG8r!|j zK*!toSHV3@hG4y$Nqayu=T-V`#Vo={K2pr`XMd&N#W^by5<%f4gz?d8PJuE8hk^sT z?4vGxR@@kxK{{#2={bl_4|~G>M`6QpN+N2yk0X)Z`wBEt|5( z(n2*HwJ4jW1yS+*iOb66w$4Da6B`p={Q6=bgVs?M5*YFi9M> zL+__f7|rT(6?Pe?ZXJBSkN?;G5bU8F#xVqzf5Iw-`$!G=`s1Bs+^NDRhLe6}eJE-| zLDvQYelz#(z%(zZnT|w<{e!7M##Se+Ok1f$g{1t%uyJZagkAB9JD}J#@}l4ut^34q z6q{a>@(M=9i%Gmh?QK}L8|H3KgN}hT;6Lc<{C_pPt%L?2X<1oKre+B!U>h=Y9IE_L zEG%Kp9v`f4LzQ;vfs}(Iilwm7UV@f|cKg634*JOA5=$Jv6&R_2;D-x*@4o(3 z7RJGN7}b#ho^wt-Ev@?Uslq*S7 z&QPS1c<`-U+xk8no_n~>jCtRW8p?gXmaXEvtyUIMM?X9dMtwbO7KCYhFV~*x+%4%$ zjF5ngK_-teg-$0 zvskhHF4kwu_b}Sm#rm9Yr1d~iUdEI{4$bR+`=})B#HU!xnu5wUYF*MY>^+P{idwKF zyQ+a5#STEs!h$Fhtw~4Ld6-& zq0^nwQXbLXPm$hfy|<5WReD9Nh>1k><{S33eQ$Cu55p|*h=7BJpeud$>JdunP&i6L z0nj~9F}$}@H0vM^KiC@~ z?)_YTS%zXK zvXe+=p6Z?Pg+#)1V`{|$9Odp79!hf^ZBWlurCr^ zt65J&hK$n#WGamZQ3!y_N}^1UQQ|6sy7cz({;CaFYo0%xQ^Dq=br)r)k^;iPE_|}k z>EHlbqKVETfa{Oomx&P76SpZ591G|0> z{Imn580(Z`K5;oti&o@f=f52RI{ivxv+AXvyp5j-UjAACkLk4UuzjAiLx*3+x0?NB z7=dd}#yOn25%>$b!~*1_8$aMI{~1$`WX<=d0M{A#Tu-TAtNLys8(Xy}Qa zA8}HGp%-l=!eD4|f-QAw`*IYXA@Q$eYfK4=e|KiGajnh6K{Ky-mMG?qed`LW6a$~F zWiabUTJg<8AJx*}ih0W?;P9_MlG_MIW>qU!=fGwl1+INwug%qY_(AG0a_TpmP$a1h zLESzMnSCh>XaU1Z5(Xl_JDiuM(6H~CM(?tZ*KG0S*_$thsg&zc>i`ANC`70cCvjIg|~+qlUlRJQYiM*;j}JcVh;dy>h7SQ?$fvm$?tbeb{% zCeB$+gEb2ko1gO@kt{efyJD}>yiK#X#h+pTfP8)Gu>{nK;URpq7&gu84MWcNud0Ay zG3c$PZ0JCiQ!wD(r%sWQv5%1ik+Z(NCdDtrd+1tFE=EQii1q#v9w6O7eAu>VJ#Cvt zdwbHe@hrzsRkykjUZ~22BV4P9%CGM2b6PQvhvFOIiCp2K7_eOZ(58+9&W)xdUvx3K zqRLSNq-MNHImuaz#vF%DD_IVB8h4_vNRf_}yyg|LDTi0VQ=*6kFmeXs9gku!s& zVm}~qChZgw4aL}LgSGG#sw=aHv50x99)K?MU1!=@iZ4RKoOQozsBbCzZP&GF($oZntFT7fUGJX z(2$*aI0{b~7p~GVYBaj+a1*ydSOo51u{tu02z9~rHHM2o8*k-2VDe2DqJ`njzqpAd zXwH0v3QmwE>y@$qFUw{PKRYdwCjkz! zo7^(dDo25%)9}2Fv}r<#9zA5|KD7tvA$k-v;yl$x+vZO-kt2t7?S!WOQhZM{kdn3l z%}qzc2Mv?WF7_s@q@7w&&OQt-Z}c5y&jKl-R~XbCHX*h@wG?0+kk< zY0aiR3ucCGh=8^UjH_w4rQ)fOV(s&@5~VZWbQB$)^uhceWu78urTG~U`}^gI&j)iN z`d+t^S!@J!+Se$sXS64)7ZCKo+apJ@7(6CM5NuEX47S_^-1Zh*et>o*-&GtdLKO>^ zd zRR#`N`%Tqw7z)S;_tDsSMgXmAMpgh^!|4^*%A#oVg~xu!PQNyiSiaL&xzZlvB;gW? 
zjh8@ZJeo3paEZ9=`!(n&$)G?$*tw4W8+y_UL`21_&$S)4t-y2S`+gloqm0tF6g14n?ww5m^#2aVfLkuM7lG>84X+%#Nnrou-* z+Ryo%04yT6f7@~a1>27H0w?bYp*Zoo5(z<4C-0QC~O78)?s-EyqN9-5-E@O}!{&_fh^ z@i?XoB{}Z1=X^tHs$P+G5a|v0d4$_wFf`zx`G9qhU<&=di|T`^!|uHlFk^#x;A7NL z+Ls^MidFvI?sli}RG@vHIbwhb@3&TYhMf!VCQ_m>(rL5I7xAkW^LlfqB?5>H1oK0U z36KxUj@!J;mH~|Y@du2pa2S~~`uUCIsb}q{j!*v~pg?|K$|xx_FnzY1(Z(;iAh?>` zb~}-7Kc0w3aNx$(UN`MMJMAf@p%k!(IyjaEQXCW%VE$m%yGOMp#R*xRz5*QS)06`4 zM_jIa^)!L1mvxeRUHZOD!rYrqM7`9!$!hau;d4gLBXqrwvJGv&EbKGjO&L-RNZMaY z1Up36A*ExOkfpz|@Ipwo7{Y!ca|o+POtRs@gq=?F;`9Ih)##xTFMi%q(=wLO7mHj< zvmVMPe%ug8VHYcM))oI`)*Fxnk_R_OSkv!zGrb2CucRKEbUw{0_KyCY)$!cs_p6&9 z*HnysJR2Az^zDv2y7@gJ`er*KYU{Ejj_P;4IK8IhU2}T#A=bX8E34;>@&;RU@=I0)2eRADL=1bE*zavzbX4yC zP}Gz9_j}H-=`dI+x9(H?NmemohT+MQnXE%|#i$qOm86F~uwAnvZ*AwbdUcc!jrWzV z_kU>Wv=^N(ab(j3uQpsX9IQj+8UPuc>TqgG)$ z&p)0z?J|=y?fF?#eGQ%TKwI*Mm)ep5o&|iXAKP_ZiX*%XzxDKy{hFi6dGSXNaR<&@ z*t1N(fk*iIQJI4+22Y;-{KMNO8Q)Vpl66O4F744a->RyP}o-Wt5ndz^3uYR+qVC(8a_)!%EZefXjx0BHxe^E1@aC7yqyKH%1BmMBg z?^}1*_usgs>TJyoofE)o%AiP6|FV+PKi~G`Gu`)QUP^P4JXM*`^wV>mSzJJ~&vk9m2Xv0WYee;F7**>p(J+Ei#w(;CB zT9#+~AB?uRZdgAD0b_Z5#Yv8y7Ns%2~-nx5PH!o!nVh z9d8Wim$4$qsv6XJeIAZHb5E_*ukL@{;s5K$zyaIN+FLid;oayP1HDv?)ofx}5B?z-AkS@^yl2j5 z?C+|03eMnLPtw4gq7&P^bLQ~{7CUCHRUdEURqSH)|V6cR=B0Va(pk&uEO ztHnEdl=aZ>#ZPn`^nt`6#<6LMlsgm=8hHovPcp6pc?15fhsZD>GCsECL@6eR>*)ier~$8?Q+l= zH+>xbG1BZkJI)Qhl0!&)9LLm6kEAC9J#9&Y=NwpV%T;ZSFIiT4HJDJ^!M%DZ)6kBR zoR=1~C)|POeUo?R)F>G*{h7qGO88}<3_|A;^|O@V^^%_)sc5^h zZP7y~G!k*07&q9T;sH6Joqb_+Z?Z|zT;b-m9uwce$V-}g`|PUHMEB?a&WR|CNtv*O zlnJ{6ji>Nw4i0!-=YqsCoGDI{}T6GVE2$XhX2k>S7L!MM{ER{rQEf52Jb@)6IVAbeR1S<^&F~ zznxM(E6fF)u0k&!L)*ppA60_kw_&o~Lu#HTMPTt971w-9M2$W1 z96l~YX^?%fd}b0I4jS>CF72P(@Ye_$eGB{ljhJy9O7V$Tv0s`@$j#}Rar@?|h33MIN8ZUnyQqZW zwl}7!E|KSCdUno<_MD<)aBBnE)D#+t2XMnddlCvqaf~Qi3tB)5;}%pcG(TJ^BlHRw z5mhVekFXsJr#vQtURtFizKzTX6Vx2*txFa@)J~3_k#*?cIO-y6UFj`^!C4MTGC3n{ zyiH{_-h56p(BThJfgkYz&tz)rn_6qb3o3=Yc+S)w7-)TqR%Uu=Ss#WV^0*bfo_GAQ z%g}nFhsYYCrEl6V9SsF*icJ#pe|Q+S%)VeP}J+f|@&!X{QG7aYN1 z88PAH#R6ba4-~7`psg7;Sfu>CIIoG(%Gc-P8EEJT@B1oG-A7xxedp0^vI!2r2xLv z;bc*hEp2?@F*yNrfS8V*Jt{Wz%1t?)ev4)lIi>O-Dmb|-N^txi(##aBGlx^`!jI@_BKwe6C z?Zv&c4{n|AWkDIymZ5luGrbD%GPn&+!wjpkX3J}Xdfi`YdLg}LGWw288NL*Q5gKa1 z(Zv^4ML}--1P*&hi7qyEGpVzEE~nlFq^+N+>M1C8fscXY#rTtKjIBqVq>;(YL_B!X?c92r>gOx<-AwdsIqwebCXroyuYfI`&xLDujHgzAtD-<* z9h*U;EJRy43B@Yz?1IOs$-mqjDk*t5q|u_SBH{?#Ak+rwewaGBH_EldAnRgTWlCN4 zj&M7vWe)sbkT$|;@bxOBpzb#>XFd=aZcf91AJMI3s(^q50PohojRejq5!984HtL3Bwf^lgN9{Uenr7Mw#8tR+Lo+T;5Z#(@J zP&Kd*P;!B?<-#N^p0x(%*_4BotVvTqI1RXBS1;B;TxQMpSpWm8_k0_qSp-euYO!3%q5@R4h z$n01m+8jA!`&<111&(24=7rQi=Sj|xF4l0UJnewBUPUi*Pje%_ES=(;;{cjyqap@FBqgm1ZEwEG9@CO^}{ zcbe&9*Ke=)h07yoRje1h8F;y>y%u9Xv1HmF1v^fCDO4(NSzV!}P_~~t47`X;zlA>n z&{+y<5n0E3C>kn!-RapP9*=L+#u+l^Jw70@$WfgmQ8#Ci$e@tNNY81M-jY9b%l{%* zgXquqj=y=&IKU8E)%NXURoAw8%NL>QI$TKn^>BpW7*;+?D`QAb6=0+<{vL*jWA^=S zC?(6=BQ45g#s|PLF{ww!srfpHoZ$F)FIVl-+ue*@4YwhLR95t%$<=^7^^{G$u;?oa z&Gc;Q3GSO#E;b(8I#p_8{m8J$L4T=i*bUGr(D-pS)nSIfeBa@+Bksn~Y=zerBoQVD z{}95jV|1n`Z~*5^7HM++4N0u~LJi$VOzp?N)sEfcJ+;HV=fkNPTulac;FcW2dvjMN(50q##LqouNXOJ)#_++X&RQZ3=N5snoiwwr6LD7kG;(hDbLAXVefBQM#9u#a=9G(Fqr8`vm(R-_fjdD&DUL@wx(XU$gEI# zO87#1kuCHmrkS8O!tO+q5Q+xjg=;y2Sr8hK;N)^y(KfgKOz((kO2b;UWXdiPFM^V| zL?{VDznV~9A?hWTnS0Q#CI`($;ZlJgDi6?MA*@k9b4l#HtXH`px{cOJDhTw^gP5Q|{sVZsZ{InlDI^VV;0hybCoM9sOwsMB!HX0h z+Dm*1I!;Ok)7e@!YJ1|z2Vt%epg(ASxB+s4y?rvf%!+n{4CE@)R-Ty>b_*OJb#|cW z)(t=U&uMf=yl@IvDCTHZxO`Y+g*pZx1>`9}gnaiqDP>P*>$wC#7}5wJj68ZpM|SRo zfJ!kqlg3Yb4Fgs5&0VqW$H#vrAD^YF?EUNGRhvN`OZH~>UqU8rv+vV}hy#v7kb3s= 
zt*Judc^vPStIt9WLDXNz7F5)NPLwmz)zz)hL|MME>)~A5BX`0gc^LZe$A|{*6@*wY z*egpA6S7K2wlqU=CV_e=DSR*jhw#;5KucmctRo<7YUG%6oS5%lEuH2)_6h?bzz`5` zHF+`S5pGW46v49C$5klP9#|n-hb8q7ruLJUt1vl0w!cbs%|~fU0ReWwI0g1vcdY%n zrTxU{i%p0lZGklxg{luy3}&<{LfU71GPnmX?8R|RZ3I(`V~$7;1*!A_ix4g4_sS{I zy-*RMb(XaCkH$2YWlOOcDymEfg)&y`jni?z9o6(}!@ZqJf`-U_n$NAbuiJ+V%EXy1 z9)){yPtG;6r5J#c5x<)|N`gAMMfg}C(ZaMm6C7xI<@`l_lBZx^UMRsYNVm0wCB5BD zYczX#9XTNP>8SP377Q^b)l+;g=W4yy23JEI3|p6C86eXYmwf1OdXf;-8$$D*PE$Iz zjf{0f4Gu8j)V?pSZV2@t=9ExR{&;zX6?mdw`(MZOPTj;X$Ae^Uh>A?Z-3E0RyJ-Cp zxehSEvS3BzT|uE6Y?~FV%FhHG{&qgImsZRM#IY39R{A`n5lA%LB0xqAZg~j<;>FP3 z0Gh5>WggavrCpT8@~PJoJHIl33Vo^+2K8!~N=W~p74bzB6c{A;Xwug>MzOxON0${j>JnMDaO_(PV87{$#faMFPIfFLWvk!>upaR;($oC8F0n&G!J zAW(OpC7p?r<3R9~a+`;>WZBR}@N|?;kA-W!NcskGwchC4czYw7tU^d5ygVtRonEODw&a zUbs23+lQW!Ww_@c(o8Al8N`jU>?V_cmF*R^@Zny&iPrOn0RGR5k?uiDf!0+e(a$<-4h?nG=+56@ur< zOpxAqqCa9X2t&r-=)Rz#a%xzdMX%cQe{SX#uqYWQ%iP%-RiEOPgf*wNF2Bm3IeM)Ii; zBK4XJT`Opy+qv4LzJ6-B0&zbG) z8jiIk;(9`=Cm%+{xp6i?s`F?jq@3X!>$x`_M$;DVcHYqx6@MxGy%LAq&+Ck7xDntM z52!WNo)+@|-ckZf8A9~OU0=Dc;&!xuCF{QO6^?iw4pzuO>5dK%S^um3@YUL#P6ZHm z#V>|?g`5^h)rL5=6tCcjcC^YB{l(t26GP+?`M!(nobmS+XxmC4R6%??8z4ZBxX@l` z4cDR=FRQt(B8oNxz7F_2mfIxp;vnD;41CzDb7EcQ)TUY0mb+s5MUCyQIie3~n&Dfd zBrQ^lFS*74TdDf{nT_ZP1mk4aQG8W1$UhK*FtRTMHPOp=VIO)~*JM}1|1R-DR zaUSAOlsxDVcN;QgWx8Mlu4kE|-$*7mr1xT7JA7E`FO%KR=&(!7?;P?#3>kkprE|rZ z8q2cJ^^XEPrp(?n9N90+zmObuWRH0VA2Vay!|X4SlUETmP9$8;CzFm8gs?TPnCQTM z{`<@S_%FX(3LH4`iwRzG2nz8EAWyKrCsI$`QRQ6@Ivq{9xyNyu_t#giP!aN~i>Uhv zYF_AuE^HAVznqrN&m!T^Pv$rvV1WU23$v1W7mDdyFfJx)#J`t#Tztmzo43$LO6OjgLUY+@JWlsfKmpm#|?f`V=zLwq=<;b_O*7dp_ky%c(V<=jaosO8vZ3W=oq-7`{&c1l`=xU~-@L|$V2 z*z^`MGGnfs@4o8K0;lNN$%KWtg1HX9fnqQIzUU@lu<;mVnj82GYp6 z%%0Tu=EP6{5tjNJ_Hk7GA@D%xwC~j_C|4olK^|GltMODx)AU!{39_`sIyYvQfa2m= z^A>3JRnKDN@AjOY&m@27)!$e27sVBcxtMlEi?1EXCnuaI0v`c(qYx!I{j5RnFqxiK z`&-M+Yy7sctRga(O+z;^w@koKjs(x-@-J#UZ7T9uRYsS-rFKYnh~E741{8Z%3Ff9;xV^};N9tUV%(CI6>yW~whE zjgF9}XYZw=RCv9CC+AZ+ZXAf)X_fF%|=dW3pEGn3_UJgec z)X2@6Bbr2Z@b7Ai%&2g6zQDS4#&$o2J2SYfQ!$q@gxG_KLXzxT^zn6}cHa-g>8!hz z!5i-Sb-O)=K(*-v(xRp;3ximmZGm_1Hb*|t;;&>C6O25hFIB*B z((MzVAzNf!pIR0M5VWV@0Zo2+ibSE{?`O6zp7pJD5*sG`bD8mMm#dDi;{}!YOz#Js zT)-lE!DCD}*|yE4uADl9M^Z-8UdrUdlvVZDrsqhW*2HhT-1y9V7L%`Te7SKz!9mBd zNV{TaMI8NI!Z3UhD5|l-zYh!4>P&Q7PDUp__YvOB7XL5@2NOi=7 zNqiae{PI|O&&}q1$nZ_VkGFB(a&WjN9DeM{=-2>!X6g4s zh}Zjjm*J#1=A$#~=DLkJ^#elkiDUKHiIG3{Z!KfMWfcbz$eKl`qv3~~*={LfK*QaK znjEBZc3l9~7j;LVm<=)0hKc&vyjEK0+;H@al&-_xy$kz|d`9}v|D7a<2q@NWUOdb3 zmK#!{pt;IW-ewhZclS9fD#;Y{`;%;;^TnQ{mMmoOLOBz{X>ct6#9?>%iv_)W;5gI` z$XNif>Y;rLjL0m6GUyih?d+_f# z@@PXPm-LAcogF1Q(KlC}fom-Y1872l8^0kVg4lQ@3&|O6X^$J z=!-6%~u`I`GFjD-*$aJ^`W@X z&DZ9{)Q_Mvw>fKR6Myx1E(27d87D_q-LJUj6brCso^?7v^0E~%VTX>qrx`DIAwJG* zu2m1futTxd!Vu$@5cy5vsoboBH@+NuX!JZNLV_n;D$Cj0DRpJ2VlW#^3auDcTR^70 zbYg7Q2x(Kr262WLYUbzjMYQsCIW1exi6%Ut9_nHm26DP|k80Pq;$d>eO)Spj!F;HT zuQ?N^6@ZY5&*kO$q~gWIZa6>If-9Pw{6w^&EIfC^Pfp7neVth9(mQ^nJd(`QxEr2u z*~h%<>51a?TEQ3u@<_cvytAYZAgK8g#?xfR27om4&oLg0jG+?MK}(c zn_0L(!c|5qv@p)&ar9LD=GckYJ$b7o%lK)Uik>xcQcy|s1%r($ANCB?Wg>1um=C_# z;5BVnkPsQr9v1xVsr#`S60(D>sK@2X%~{^HNNy`Q+1VRBvsaF{O4?MSBbJXyrTt>* zF%AAv?9N+0G8jQ#{fr#8(}wZYb_Uz@j)B97i+Lwy;y~U$5MP(Mw=HH z_0^|M;%AL>!?3tnC*)3?FyF`S6|LqASRxp+sdQ3h_W|g;kPwii22}9-c&`U#-vP&K zJL{KRBc3ct9P?5amuop%Z^mgVFVer|bJF>yt|PVWv_@88VjgXMNPIk?iD4J}cFoT3 z$mcTEfEUd}tw5x6TZDEIweOZPKcrKENL13>Kbd)l*El!nSK6x7Hhc|Q;bvz2q5E0v z_7xj0(6b-ec1Z-}=P|!7p!L zNAkVslPM}_<_1lhpZ$2-~pz&va>pe5G|A z8^1$5r%Gv)4f^c;AOjM>Zf^`q3EUOB%bZCEN?Bm??@B-XyJScVDv401x0d&bFnX0l zbn|^#gi?U%99~iT)5(`$w`3VA;{%rsgvewUU~5%A5X^G&JRP^?w-j*y+Eu<(I^j2O 
z?@OH0uX*5$&1btMsa{k52ZWG5p;g%iU;T8I;wAq_yl~K^7+rn2Oc1w_yYfS)tm^>7 z?b9CdIV)g@2x-YqRfGQB+re(|!H2j-6LJ`zPrrQ*n&Y*~j2ic1;CYKi3u#_=i#oww zGYkj-nr5D#A?m|VUGPOZ{|cJZG{6O0C%>=Hoqbp0VzV8XiVCGnZHc!w>Cc^%M!@wU z@K8P)@(gAq0a^yYfhbA+{2?ZMzrYhvLfV1jmnyCRN^;6ZE zu^ikru=%OsHv{#3e{ZusG1K8dDxQ6AS}0m@EWEYC-$0gnMCjGDflafXqVco@CFFq& zdV0?t+n_f}hrv3NC%iyUFQge|x{F)4QOQ_uMh$L}udJY3-Utx7I7qA2Ol%~EDcw#P zG`oEuaU7TDd0Ru%CY8Y6S)T=(bAA(WsPq+8kyo$A8WYW;p1gj#m3rz23=a)8E1%B4 zhZ-N57OYw1NML_qV37-=sOIciZcVr9Ls3?HUEs3vM-pI1FpclHcr=B74(dlbaCJA5 z=gtO%+tnyx>edtR3KM)GmKb0Pj6mpMMe_!=0MHy2#>5C=hP?h!+{_4^n$z|vn+@`P z?+H%S)QCW0ICMD=0(9Cv_ef02I`w4X(sKJ;=Wbg!t5&e09LhL1sPx1esn08DCfv)M zYvC+Mgz>XM*`oN*>tJ0drJ#QLFQJVz!w73Fb-u1LbRuJ*i|yTRyB^?Spo@EypF9f< z#RUql#!C^<(C=dS1Gu^@s5IMaU__|#Z0ZnFiF5+JiY{E0#CAwO1Vuf(T$FiK;;)*w zr;S0#eXHm#PSHSLoy2^A+CVmd8!`8Du&8hbm+Syix)Cpa!1qU&=mS_1PFHD*%}C%=dnt;PnfYzA?LKYgr_%11d|vH5zGK}Gde7@QT$Y4x8QfB*4>lV^ z$y%y2-#cPRwpV0c;Sg^{9s~0iiw6Lz1B0FgI&D_+P1_0qAIEHd=7{q{P!g7S?=ELL zvygg#2-t2hWkz^0WqQPs44o3%FJEl2H-{2IRvY!|78CZ&&aMB%h_vih)S7La&x1eU zrdGr2Ko*lORx5lCC0AhI6t$rJLeeDcNV#2z^8```)U%1qEG)(V)#Q@cJAUosrEi|k zyn2CHs@xuD@T-c`tM`DEKw}mCRsHb(h~y)?N!CK7gxCD|f-Xt`1Ib$SphZ{m7;$M@ z#c=Qiv`*b|l|$+BnP8HGmp1+otqRAFvvw_2&-4+rHqlWxkO3YFt^O2{W)u+bS6VF` z-^U-uSNo#p3j_L&aRiCe_$D7Z%`p{-X33ST4*7>aC<}_Q8}bn57vsE}He3c$_su?2 zaEa2R93X`mwgKKJM0(j}uU$k%NAp0PHYq@-1p7KC$yXJDDOMo3y5Ib{~<)*bSI62ehVLxQl#xgOO&6bhd$VG2l|RK^+E3O03|=dPsSb-3LAw9QZ?^R#zIgr%NS!DO3T4=#;t*q z@}*76R+D`rUDh+}Fn;gu<>kRV+aGnGQ@hr{2Nex*nv9&DKO4xNE5uW5XIMD2D!%s* zDaDQhYM@O7K@A+`(ec0czer{%wEXYeMPPoR?2;(aJRhsI?iRmauw}{b!&nUzzLOe^ z;Kn7`+HeB;=kj6H_Gyw+6|?DYOI`z@+-KGR8dzPmAc$2oYuNJ zShsCr|6s%QDIYx1?1@L1smP8_C@pN9PfgGGRfJGkf^#2NT45}5>(=LBtC}spSp|9g zDbQvI*`X^72Bg+Vr5NNfReJQ8{k`||R|Z#@+#JUs?K6xRe)$C}1RU8Kw@9DI2zsjVE0AN{}=_~QdCy+(5n zp46VO!#OWEKLD%JIR~_y?FZms+ss74!7lPqDS{Rf&uSF1&q0H*6}%btYgFcB(SA6E zuF!QDGl_H*@^3JjOllyw5P2F~Pd__<`k^3lcT(5rJYDjW6m||Gqz)D=rxL{yB>qGP zQf@XnvjSz}oZqD}mk>e6hJDqWoT8bL05ZwB1Nt9NcfXTMhEx>XUlc0REK{(!+NA`x z&JFB4Q|iu=zvxXNRo!NcZzWRL^jr@)x+o$qh*Q7FJa9q&*|u^3zG4yl9|z9Kz{RgK zaqeVu97IL2Nk>UD4!6b9Hv`xNw)8kxS*N}!I(aMb$etH4Wdp7UgQnw8hH=m&!Ru!F zt229Le@o#1l@lNNWByyg{Mrq-aeo>4XPke#Mjgk4pbiYoROU)>*$~tdtzIrlPtcbe zT>(?&=yL#sOlpJv`**7=DD1gb1@Rsb3L$wM-PK+&o&T>Lh#y$A9{DaVXhz#psBaBt z;Y$5x)?N>LiAD)XNiv2wPjL7?33OhJxcmd67MlUFNOk1s1fQrVI;97Ro;?0%TY}JK z5RozZXaV}<(|Oz z8DUw~`QtZ|N>{j=MtWTp)$vCxfuA5n$eaTG9TctG8GC8eh2`IYjW7KH7WO*yeuko4 z2myZThd;pdFanKEuq-w5wC2sJUY@MTx^_x$o-%e%nBer&f6%{48R)z^mB4xZ-{q+( zsG9*wqiK(NJhsNWJ}bL+<$wL#ojHb&lYXAA7AcYJh6E6&w=x>Y80x}h#Z8MMw?c0# zLJ8iYfS%o0#Np&g^XtApGe1X76+~?IV={&ClAO66+|0<%7BEebfzY}z-%Yz46>rdM zylDLG>m0$(-+eUYD^ZeNb$!z_iPENQhM?ym98h5Uii{D+JAM>YO)9xvnfd1d_&@w^ z+Mte9F(AuukQF(-AwO#)9F%)T#9dtAh%XM$Tg|@(vu~vKOqFuRk&yxMYDg$h@}+?- z9~HXoflEj{<1(%JR1xrYw*l_@3WD9fh<=uEB2LX^{1IpHEXwLPSQCX~; z8S$@N3;JU=4-o0~h3_M(aM_;6pBLZ0mZVLV{MfN$;G^bNB!dj4qX_LkiOX+zw+dmeCIE(#+0`aD!yoC``(flRR^0 zNOfA84e+OeZ2)}RWP-e|1BsJz^Y?o(DLSGLkul;v=YC66_JYgqPd@2*0I*^~uu`lC z_Zt%;(H+iIIdcutOPG|W5KmH?RkW^@7X?dgeOrFkx? 
z>lMU?QoLL$Yl2GS*&}sN)qQ|0nM|uME<1*#h6M~HNFi|O(SK7wk1n!- z_wS>BfsEWRADv-7(Xn{3Nn7=&NtuC^q2=OBA{GQtr4A(a_tgC3U?_+zjf^4m)WGoc zN#8+V1Za%QA*(3;o0gbL#xOmgH{-|D?!Iq7stWJgH8$OaEP(JM3Y}WguYsksf#!JB zj3*}LakyWTd=LY(vZ-(T#z9` z5A}U3Ub_D{LZIm*z@TB$1kki{nv#P9A9PB={*XW|-StHWpAT9ZqP^{z^<{g=fO0{i zYA;38%F2RRA#mMlvrk}uPw)`jXL@ufczr)lGR#1;VGR^}?pjVZ03s^tUSzo@Ew!Jsp4xvfk_XB_Mzog5azeZEGA_ImFih^6kxjqH84R zTgy*;d^;n7cN)HYDg|ZYY&@6T+{RdH?A2yo9JFwX9t7Avg^#$yNa+;BY{fuKn`Snk> zw70#DJKr2PmqwW7&nzzufUKH0x@*$Q6>2rzkyFG1{r zqlvUlL;|{%=d8%(iIK=@yNYRL^O7PoR%OeVE>q`xm1-2W{lwdTCzxJytu1Ig0R+)7 zD>i+upj*X5K-W6%crh_<3dK)*elw%Oq-ItHQ(PxR*F+~Rd);|m2$ut5kMc4!x*mFJ zHCfbpE%^szz_g(KjnJ1#iv&sQa{r%i6|)A0~ZvLNQNi zq1$4=GIJBS=H+;D(3y!uEsC_(2ZPSF?A2LMEk#{&q?F-14Kr4y@(-*C$Yl z5U>Q@E1SN0a!6H~^xmODCWKu(2jmI3+oC77cpo~C0w#E2-Vj*Ul&eurO=MAdJ$Eex z&;rxn`?i#Yz%J5omt7;v7Hs5fpyWC_MOh(nL>_lE1o3QEB^0kF13cQX7N{F`uOZ+` z7<9<+@R=$z99xcW3JUw~n6qW6*O00iO$}V+pIaFNyw$QLZT_XRB6vWGNBQr+e`0fU z(++Ke5j;ppMCZy+WVOfh!)o8kqT=?pGpC5)mDr7kOc758Q)OliAQo{*xCA6Qz75}1tYCrvU zGx{_k0@{LrRl_VEgVe&@50C>}2v5wGTys44Wr9Yj$Qnh`XS2XJrv1UZi|8qE>6!Pd z-v&@C4mN)i7vM4tLkk>^uZ1tc1yR<);W`e8R1i4n5M2m)N}Z4XLDBQDw_=XKRqG?H zs`<9{e)CD$@n04E#=_#0WxrjX-M$VzNp##4x;(~6XQ|dVIear|ry1jV>+|yp>$|(k zo|Of#@JPcKUST;tgb!QtEALMM%?x!1Xa9`-CImS|63|h*8B1VeNzb*dK11B%q0TW{I|^a&)%@4`KQkVLRlT8uahSF>{axaQz`B@PDVRevyVXa? zsB)>7`sD!qgQ1Qb99vF!QJ_ofzAzV@L@FT39SKr+gut=Di=doc7ZbNI9u3C_*y^d z<@!r=FA)U;o)mxs5a{z=ss;IQlhdzqoy|VmS!@i1Jo*uvh@=7d%pRTWyXMul2Y&yP z7+$Dj+{4B%{lL_mqF%rtJ;y0l$?ph4-7O_O-9Z$n(HygvaLl6Hlu?akOsZW$8TyE9 zfhbp92XMKB;Ixmhu23(}-NsPy%Rhzp{f@f~uL1!77cwQ&4XiASPeeWXr=r6_MMrhL z7i5P5Dtj;wK+0~Z$3w5#^7560I+uW#x&zRv+k2YIVk}mDWHpnaS^_0{&}&VJFnP`$ zXUD&d{`Omsm){X@wkp`qepV}Ym%g~jl$Ns2Xgtdi#u>?B|+h^ zv-k6N-eu_r2RC6=W!#=|DFv%<01~rcJo0VxAS!a~X2|Sz&D2T~RvgvdMqln72B(_I@}3wQv3Hz~ff_6c9YNx2rukIIV7^5`YeoogJd*K*u>E>I zaT^j^4X>DAN3Kn&yLdl;+RmT)t=#sjiB$l{H`AVh9|}EZCDJwDKM6Neebnh0v$r+& zuHK~fsC0kmKp<~21W?~GYv9v6&5q7O>hnxjzJ(0ZVLv(};ZWP7&1!sgp3E`aOO)jx zulmn1{`;4&31GdCx1qDyYOzy+iyZlV?*8sDb^In!C!Tn=fYdr^%kx)e;?Fr1kr&|| zKp|PSs;cu`q|39m4KR{L58ltjK`IK!e!-n`#Htvkbx!n2BLa!n*0}@tEhjv6*LlJP zj1~C-*;YdhnSaDiAFvqkX)g&! 
zSJ04@_<5uC`VP7-%L(QPT{*tjkR9E>OUQpadCMf|paUV?J!4lM&Lng&Ah8Sn)Go(@ zR{-uzU$+wq7zL|;XCTqbv5*wLqY+hR5u~|t z&j1&2=JvO5HeV>v%>_F{F4uSmb)f)-i3L~Co8zWuH3KTq^wq*4-!ni+;|KpGcZMau zJ1#BecLnfnu!r8;a|4FlX`MIwE~C474~bys7dU{=IOV6c{qoV7+|g2vL=5r&ngRX4 z@8J}B5B3#=;5{@Iua!X{jZo2>ndT@kR)9na-b(edp$`aP<5|Aa4Z!_!7LG6&aK(TG zG|dtPcZw!iuUs){uiB%Zo&EFpu`MU2J4B$*b6DyaDX$C^RN$ zh3-D@)EKcxv)vngkEIL`?aWJ(a$jV8&=3K#;Wffd4p30u|Bg%koo!Vwba zB8HF4pb5-{x^$;wv_k$-bN$bkFT>acJ}5OlS6YHG@BYV(8*4-WcMW;>vDJ2V2jbT{ z1cF~^T+UA~LZ~3+MKe=|o|!iy-YFDa^{a|GWUH8qyB^@2{8` z!rroc&kmNH;~DNai(bUs*&G^M6f_k)4wB<=py7Es2M-UGJ%8gH%H-rePZY`7 zst*m-#i`gG-7AXwT+YFH)TK|uDfJR1gYtXc1P3;=z7sP$vjTOE2{ty{7orkv<X5F#-%D_7Hn2 zO3J+GXspb6dFB;h@H#&`w;~$y@guv6s%qxBX&{?tQg*gF@ap-~)}!1oJTQ>|tymMc zJdsvZLc+LeTiwplu?TDjvCR`=pRxerenLS(!5%?lM@J67fIx)%tk6sTbhJ!K^j2ni z37@>I2fVx=2M1LYavhPGs#mJ~ug|N$lm0qA_2Q?*OWs%cb}8_q3%2jOXf&nvH}+K8 zF++;1BMl$9E#`$`V!`aazxcsN25CqnFcdX7HXkxO0_~r_>0n2=D{hA8N?%2FjH-Ur zd#Hffv&km?ZSvRWQ5ydX?lHP{yzg-bvt26~tl~OV@K~b9N=v&BLltHK+A)VW^RoI` zKtNkuHh#`MN*Wrdm3}+t_)bj5mX-VELtsH&OCV_*u{pQ6SYG!eT1HRwamwH~ZAyCj zcT!QV;h%qj^p!f0WLK}u(O6%PZJzA<2ec(Asi|z=YjrU&F|kEyEU|HLl5(^h9m^d5 zY(*2P+eexB`iiEer1*S)uN!Aw7be@@d$M>YJXst;X83Az*GC3U;^m7Jm-&C6@!tpL zvqF*Z_J|+<BrP1rKNUYx6sY=^y$+V!NHv{3oCoHGEo-o2~Yxxr?*)RrTvbhySux?$w4$pUCXMS_H3o;SyXKIXbhOO_4P9A z;=lZubqd);lC5!i?47t`R9PU7*+X%~+hXZ?-AZUO{i+z{{+3%*QqazkuHKCVEgz(R>cJuI)r}Dho;$Q0%g*ZQNIwtS%Yn0K`drvCdFRtghh)e#nTEBw) zrx3@32QM-`bRv^sKrS^|!RQP7iaj|urwoNks(U4TN={4T$eORV!0Z}WQ7faP{+hkQ zqqqw6a@aB7tRNeo9<*Jz>}`pi(XPBH!f!w$Q>3=Xe=Wbs?e+*3UpRKh=17Q-gY&T} zIxWT`(QKK<#q(shzdy|8|Nm}-lo^RO~}A*(S;nw^ubO z^`vz=96Zyj_2T=g_r11|iIX$tV7c=lp^li_Ejn_4N-9{3KYRK#a&inf7=3PTHkhdA zKuqWu8a^;CsjpAftuW@CJudpWxv8bD9>>SZeL94l96_Rs3bGJ0o{Q-8+CiI}u0+hBwrv`t(P z<>l3wXxD+eH+}dJ0R(rlbaZq{7Uq_gU+m7@H-__n{jKico>+j?$wM{qXT*eC9 zb$Cc$`e@k}dMmU~39++3y+rwbmF63u#+ZmD;rl22NIv7;J94GQ-m`A#xW;*qYv;Io zlN6Lt$U#m27zLLeqpMHjT5>KaY$;sEA>2%U$YtA`5f`fN0BXRGdp4M1Hf4*Rd|%r& z8w>Nx%7)o7B*^uv!$Nw+(ZGXn3J7RzbBl>(`1$*jEz+P5Zv&GBWqs{N{g33lQsUx9 zgIX{_>h?#130F!=N{rVwxwo+Z_p;I3$P{S$%u>k{b&HJT&{#=} z6l4@0$;u)lhdq13cCADrjzpu3`=pBKNsGKp5n7SBg4lPLoQS#B)-w;lcTHa=^z!!d zvObk=^&cA+_}f8oQykjFp>21Ov;2emswR0^u!9)T+O>ddoeL}H^NJP^y2pKbZhE;i zXTOos&;D*Uh;59RY18|x;d2_V*$V`nR0>$mI%NM^4jxT^<=wE$;^`bTE}@Z<-F0I$ zlb{1}*MZ zs&5Vu9V-jtoB$sFYwgWL@+|jAUF$hQoUe>LMbVGs9pyNz-sA%r<3S=b!KRYJC*iQy4g_+?4 zz$9?D%3Ue2B&7t-4~$Uo6Ls-`q{)YYc9W4vm7;6|>G(u!7SkG?X4r3>Kl-`mZgve> z4L-DZ!RxOsoK`Ibq!bUXC%wNubW?6Qwl-$oED9-mGyA3n5RW}x>KFy?F+XuW3E+-dp~@R={Wg(oTYH^a+PI7zb0Hw&Ak zfsfoaKoQHMfhKAQa7pam{Pje7^$T#uYS=%D>4LH)`d{5 zCS@0qQoaU8EG3?2-t??fyNlaN8(3}ByFr;;3?1KPZ)-p4zqcM^8*3AjWu*1N;i9zcwm0O#gQhcDlPpMmRZ+)HZ?ep}*&BqZ!a8B6TE zZ;?|CK*d&Xh4&Vji)wAcw^k<{lQr=P8gca@VU-wbn~2% z;wKOoZG^p6N=(^yc8qs8TP+W4e^z!U)3O|9tMY$enNg4rw{JVVtX5YE>AJ-1T=xIQ zrneMY4Wp~(*T|M&Ph)f2`zmXcL+2O7NLC_T)oVc5;9|(;a$NU@)2X$SCAGrEUVY>8 zM*hnCt9$n7@_lrS84(Juv=y|r7~IQ-|L54SfT2g9e)9?`g~tIY*t9t-GYX>-mmgB@ zcC3|r`1Uhdk6wVL(D9eppRcvKY5l*KVicDrE{DHd-CF2Ksk8RJKE7~!$;rJXP9y2( z@xc8N1Oo$+m(l%IVKo#!Bedu9)Kx&kakh$9#{VkgVQ6z9s2Y4z!*>7%#0hdRL{)nb z2PpB>Yyj&%v*o`ynK7e?rmDt+Z3tdySHWa^7PJ^o5H1JG$?S7UFY5``7LO!@fuJs~ zVUjF-ZU=mZ1`I9Mna3zS^Xo6PMQ7?>=e`aQ*$_&<-S2PE*3&8BNw{@M;~DjJg&hj} z<`2a+02Yd$YP-?=+#2yJ3|4&U=5wb}SORqF8YlNO!u?@%%7}fm>loaY+kejkj;6#Y zfpYr=JzHE%Qhxx9py+hHJm8Z{!P3X1AkK%5s-U(lzFj)$kIs(5XCb+vcY*!mP3Z1A z+yQnNTDeb@ag&9z#n2Bd1nIj&tz!(nft16u^dG=#A)Sw@_wKdEkdrMMh%e_k@`OQY zW=YoAOAKaEWhJx^#Cehqa+|Zk%36bMzjQc;|It4A*qCX1@%KTauW^2O@Q>E0@FuBRuy9PJKc^ju!F|I^}HYPEM2v61! 
zrF-z3E+~fB`ffsR$(fuASc=$RKkGhY7^1z$SBOlo<^om+J$6Zz{_CQ|j3Hmxj(~$M zCBghkfD;(lLl2Ry=XENK4aPZ7>uAMHy>S&ij+(1Ly@Hp>yk%lkQ%m`R275^TksYQd z`aRJ$u;1sHYa4~v`DplF0+bt@LbPhb%?5Qa&)Jk6zlWw+(O^{s4AwX0iLvuS`JYEu64elk9;aUm&(UNjPjH@AoyyfaD~qQUL zaweeyG^hLL7T16;UtB}@c{c#a+LQq+Jz;1HpIaE508TnqVJ?VM=i$qg`319>i9$&-oW2U`h zJNh{8L zLtkNa3;!mx|2;S*rDEmomqD{nNx-F{O<;O;`P`=u3s*<(4NXrTSBdS0%D=zI@|0Ds z-awJqRWjrmcmWp9_o!SM^rs&?{SY(}$~Ahd(H;Bt83sdu^>8w{=U*|wt-k1M&?+^7 z%k&py3XnC0o+LGCCkM`55(ors(o_q+=Ebg?c^CafG zcdVk>tKG=upXD}(kKo0sxw04md`30aW4~Gy69fCVM6l5#$0_|uC>NNyw^~CJ;Zw`g z@E`{38H*2;@|IY5^b-2?!Pl<@1Rk3I#u#EMoTzoVL$JfSc(;ra z8o$B~=m8H4Tip%`HX{QOg%2R)o!Em&w$a00OMaKy8$k3HkMb zhHLd6yb`rE>?gQDKT@|5)}ft0=-oky#t?&@@M4TsqfBcd4ktlQ$`3D`wQC?qae7lr zk67TiX4LNi8k{uy@g+a0q$^171S1O-f?V#8#LeTNnvVuqR_uF#Jsetm^&+u^=yoJl z3XhR+qxQrn#Yt?uXD0a_%pO#HPgUMkI`7@@4t^*(@#J*>hh?dKp*Tab(Sho|WAD4I z?yi|!yp(1wsoLz84^|>}9(wnUjG1X2jmw86@gKV!U~9O-7*a#P_2z-{mYBi=e~pvh z=MuSW_m>N<7N?{?d(PI*VTVr+T2cf8@LZi|GwQuQqO_mUI5%}2%@EmMx_(no0j~$) z$Tdt^2z}R?duS5acp2Ebk6_P`?*g##|NWwWFU)-l4VugNRa^A*rnN}sxp!Lf_&fwqq3HND%%bMN4ovLuxJ zmeC&j885BSLy@jyB8Y);npGNhohhy~b?5D6snB3ULi!z_s3kt-LLa=f7*2)<;NN#( zR-e}Z$p4cP-E#XkZ~Bngw_r(eLU1aKWhMzUVj#3-qY-hgV>{VEnt#U;iRoIjZ~V54 z3=dgQrY`jmfNF!Fj>csi`@6LK95mvC8}>>M+)5h#?5z;_P@+@`v>{wL?^M8i(!B#$ z&2^djMfO?uj36+L&*)Z3E$!n=1roSS5hx!+zlo@hEDC_bk7*$9R*Uca%S0%0u_6zjhRiSqncQ($M%j5^b|CkJNDR=&}D0Pve(j6y$d} z{+CdL{Gku2;hWii2Nx;L26?8wwiMJ0|Bm9kV# z&KLo{tNeptw4)ig4o*&JF>+9md~ge_Yb&9i*Q7<8ant}+*U53Vs*C!UCxe{Q$UR&R)V9w)I?eq7=9_{f9m)(0*LEWV4H#&$ZK+dsPGe@2hTYpndP;s_`jjC5=A5O zTyTYp!O@Y1>)0xhhJ*wvEGFeNf_pE#7yVj2%yv`;S=MfHoKf(*7Fz-&>x&{^0 zreA5nQzIaU9Gz~LcVL~B-tRo8EJmUn^M_^P0v=1XApdT(p_3H^9k6+OS6yklnieMz zGR(4}GL#k*j_kt;ZB*lptA8IjlT(M#?IwJ|vh3#u=TxA?_>Q>;aKj&fm>%;l2N)XF zU>~C^Agw8VIh<_jRndNG*tLh_0C!SNQ^aPk;?1#pi`(qOPGsm9f`V=dc9{7lv~QDqGfj|gfZkh8t; z9?RImoz|ahi`P_cwZ@bODvD@ogjK-}`uXnWJA;EQmd~<(b7Mcrh2kEVz}3}R&WPuO z`PaxAjHjUiL`MV|yIe!MO}K<*ToT>)D1j)G#dGI-_2McpcaZyYFESu8S11680zUtE zksHO41TKM)ITpHurgef4{X?EQ=S9QBVj;-dnjeH`IB0)Al{^_?0G)M63`_7TaKnxK zg}OPI1wuwu4%*L5gWvZbUxR%&*Tc^18Ut$}EVTrf#YVM}SCR&->h?U?f5ES@E^OIP zh8C^_!++0do~PEEG0jnVrIjnC5gcFT5CCB%6ST6=?tTEPljYVmo5eYGcLo3dxh#!kknbEiMdOT1O1Uu2Ouw`4 zbP^I(B&%TJZen3~mU2U=@Qmt<=&MG5mHuW7?i_GkJwqX-YzpnVD08)nO8$Cb;TFS7 zG1grC<*617p;ib$WC$>Pf%B6doKiQDYbgTJ@e-DhSbu1noli46HKF`VS$~}EIc%J@ zDNY1=_ES{3pM%eTpb7Ni0B)ZGw#iEpg#F{LM0B_P`D=&z zJiW^B2|S&vc*~|e!Dn9!7EN)%IA~I!#uE5Zja5Mm(MCMX&1PQ(E}ZZFuy$oeDy%Om z#z?By`uLX0mU#<{f`2nSe=peol;h384kxE^B2gx#l5?OiRY0j`cLhBn6)F!=pyxkl zRp89DY~--~XbC0kA!vQj(J|CR=Y5C~+@5%8x#HfVy7*~`F1!mF6kTpaN*uvC4;sls z;Gu@u6H9Du=bw>=yo_`%Wx;m=c+r0{$Y`I7+cdE}!syn_7E9d`dn|ejCMB)jo?K+> zwq0m4{0gA+Uw}rN3V2A)PrU_ySrXKTnn`q(>i zo!5awd(EEHup~)naQ8M{;J($w@pp#yB^ULiA>4Pze%YCtO@Zm9nRC_7tS~ktqc^T9 zRytqGH+vQnJ~Y8wCI{S`c=~pg8DC+q_`k!_$f-`D=c39IHe*qc0>Jmvs5W4VC@kJG zjV#n&*~l^ft)uv{gJF(lwboutMDgQ4s=m_&Bbc`okbHoV@G(M^6~haZUPt3(tU$$6 zvKx^~qs#aZEZj?H4n6>e^(7-^Eq1pR3K;|6yFQo=5Qvsf(4U!;e+C4gKcJ1H-mGl< zqY@HKYlhrG&4#mxPZ}G>w|T&GA7tXic&nK!mA#VztWu~ng3WW6 zsS~4gV31^c#ap*(6&NJ{=Hf$O5{2W^1;n6;#TE0+it91JKJaEe7T=BU>^~|sr~6Xz z;=^kQNBmq9*%bO4wV35+o9|zoCYi)PWYDx7B4dtEBNK0FLeV> z4_?U7b{lnk{%;zDlQ|#?ho1FOR~2A#`7V%i@crc3Y7kb|$(7YyV)plPtm9)Jbq84xhsMips?ebWkjT zD*V>N9FqR0f(hl~`fPAAoDLL>mQ`Z|Oe|60PL%|^3CoYyUks7;ae$**q> znE!?qa;h~Z-mTVt;umJMsZ{94*HGq$iaJZ`99;Fls zC0Y3>;+`YoOS+Lu`;-^e?P*uQ3((Sy_(VU{i0TK9^bOVhuZgcmvH+FJm(DAZp#rl| z1_kZGrub_ed?H>s?DHGvaxIW?7)wT+gR#OS;jueCqKkKXyY6@1;c6ixZm=w}X-_RO z1#3_KV843HNd-nRs6 zCoGTNQdU>65^;wp#Q-pm!gZ|b-j5<>5{o<@O2~Uia8IM)77t)8F{((azaAy?q`}kg 
zfLu*D3lqcLq{(igE3~-?9A`u-*%$OOW7l=J?JDP3iewo|v^H$KEmd>wB}`L7O7p2a4B;N@k|*F( zdqw8u3=JrZ5Pvt?1M2unmIQ6Qo#L*9s2 zdB@EOm>hz`_}!n6fZ0m9a_khU{x70fX*f`TQFb<$QAJj;a z_*QT*2V!W}_xg&@Has_3o)40oJf-^c2HKD`4jD5r>;iZOvl}N#CLAlAi~oy}@i-;G z!FHnO+CtExarr(Z0W6GgZ*;Ty0384V*54X_69}0)u=#hw7 zH*|S>jkHGz9Vz$UA}zlt&JLVC>Mubrb)mDVNHcJzWigo7B{v|Lz6g z2AaX7o?_yq^r$%l>YQgvvu6eW(S5$h>O#GSFvQ`DQfau%$cuCl$GH$+wlb8i9abBJ zi;~)KPo7XXnu8DcjmJ%Kr7b4tkd4AaT(B;F*u4GQ*0>IAxQYxP;RzTAULODUidpvg zZVooA)?yOSTBE9Z2X&0?$)n8@>@s7-14@ba&iF@cIuP1nxe-SCwEO+~&PC!~Emt~B{Gr(*KwQbk8n|II=RnfuQ$iuw7S=lS;X+v9%& zjj|fUz=|>ntSYOrd7oEAg3a;|^7pow$^8WH2G%iXralG^K5>wop4mr}Bcobi=C$q6 zAAmDN=`Zb!+(!+fwGjTQr76i~+?+S?N)^Q}3`y+87Oki_>E*%rJ%uRT+Qy?W$sBWl zaz!>5!SwiH8)Uk?kHvreZlidoj^fOehmL_KxBwjWBddtYupc#^(aq`M?b*H21C62_ zDtJ#hggAI!8hqx00q3s%mT2*b57k(GaYo5(T$f^HovwBNlEZ?qOKHo@!7`U#T|o>T z!$l*<(-bTVK2At~@9)r6eEgSS<_YoOm)I-7|N9D@T|CJwLI3hOOE~jB>TJj|dKzZv z&8;iSum~g^jjY{yc1`{g#;|XRf9_8gJ;u5#G<^eH8he_S>_&$&o*u4?AnSTUrgfp4 z!y{eW51F%4xvQHrXRz@mI_yfFs_i`w69MiX=-I&t zIs*}&n8>e4?#fqZhlHQVIC#;*V`7rhkaXE!j+cPl2va7ZyZ`drz+MFX5oj&s?@<#w z5u2pGyA6~)7kEpm@4&B9hmDZ_4(xN7P4R}05f8(P;mQai4^PCN&+bY~e53QK?GH}s zo{K8ajZTe2Qsf0*zPzQXR+p1xI2lcD;Awiths1`h^&V1U>%JaAB3eUCVwehPho(#0 z@23`S@x?mKAwRM)W*o|d<+WD;^Cb|#fnH1jNTl!8^rdt8cZ4ZB>v^z`K7Br zVvwXN)WQi(-w+=2NW{n(^wh|~3lqNb9Do#&F_8O%JpI)q+Z|7isH_x&{h7>i)dkCB z*_2k0LzB;j$j%Sb1`mUWLp-R`X*}gJ3z6@-pAF^&O! zNQP(M11xRys5$u1rW@2!g|UFlOdaC(FR6iTT^!k#b@+R_VU;L%JG!=Z5si?pT*647Or^=Q zFtYPB#_~X;1u(yZ-z#aL81p>9V+R0E@^S-RxC#lW;Vl8U7u2i*(KcfJPiiz{<2{;V z-Z7|2ft}lcaXUO!%tyNFPu~a=Xb}Y?b*OgFmOKVh8}l6IZT~V1{mho)aXM)&UYGgo z`7N;F@n+ZL-11;r*Vt@T*^GM&OvuF19LO%v zgGRr!wg#;ND~uRX##Lqs90uW3QFPI``pr*lr9h(e+fec zwRL^AxFDuzRzOpgP3~Emgl~w9Jo3=~2kbxi;dQyY;@p!ES|8fD zbeFQ=)6X)^F@L+Q)&HxvpJr$|>TGUp{>(?SqQa|?hxZ<#aD^$iJVu>x34yrlxp^Nr z3nCK~jkh6davr_{-o#+aaea*kt?bRoW7be|@rk2(kVT-%etgHxDz`k5JT4Q`6+F+c z@Plc*-a)Sm(M$kwviBBzmslI(-JQ)fh;KiYA){7MmspWd3@s6abq_nv<)D1# zQ1GRI1Bhl00qH<0P_%Drt9~=l|Do=!qpIGT|8cscM5RkQ6c8k(6i`aK^U%^I-60_$ z-K}(Yr*tdb9ZE|#-+l1j=ef_l&*%5~egFKfb=Fy&b>8f=_w3m-vu9p2BUWE8Tfk9H zZfD`*slU}ILb;$64~ z@XA-c>!&XJzcFv3Vzai*Z$CVRpq^Z*IM z$x^_C!_vUeF$trxlvBeBAQZq;T5R8WaA6;m?xCc^HrE0O#qQ<0#`_pJpCCd?*nCz{ z0tUdv)Xmtt7Egt+`1C;lj4OFjH**7EbqmG%uK{F140x3L^|nF#MQLa@WKbKdVR&0D zt5U=!Ot}P0^D4cg8e$26Z?~0}WFMt>zSM8w)Bz^JYg}v{El@`t>yCV%)9Vb(eyV*o+8hanPZpufD%$QMtbS4n15r7@0_Wk$(b$K z^btMrL%_w60d@U&EduvQ_0(2%M5~!4x;LnTG7#L!5CF|6p+ zL8YS0lW1sxLEIfvMEEN_XzqB`5rEPFfn7JbOkwSLar)!CkX7+C9=$8gqSmWt@dz8O z(EOGB7*0#Asp=sMlQ~|3PRuJbXnD8&V;Jjn0BF%!jj?CqvtIUvHi>hbuOLH5F^HQC z3wXezNz3vc#OHQaetti%@i-H2N;EfiuG>&ft<|(QS2cZM2ep>o<(jUJ_UN(R#+H^j zbiJN&D$sbhpqH~c7;CRJFIGhoSE7Kw0Z{JFf;_r%EZzGNFY#QLo<-Ql#Ac1# z`RIoNX2LTevh|Niu!e3WUtjuzhP{OAJ}mhQ>U1D|=h+MGRFEFm45`HS#XyKt{<_%6 zuAN*3a@+jG$lSJP*8x_rf9$ zg$)9*c*y5K?bOCW27h|mxVL+BxEcv($qGJ4R5-NxLl~`H}psC(7 z(U8loUm&EFsZ^-e#KboA79_mxZ$Vz?B!l9umW3{#ciQ6F`Yl?OktMD8eziZi&zb+o zltzq^MJ2Yb?t4sDHZlmH$g^6n_8n|(@2wu?uH;5!02D*sc$4NPyD6jc!za<3u4(}# z)8Bk4g0Y~8mA@Po@V9V~AjJ{TKaz77+b^;VaSHL#(7#h8YMkrtgX{9H_ZYgDFNoI? 
zFk-TV?J+~ko#VG08^EE!!Soa9e5(eI;@q_~H!;8v?Nj+Mam-qOEK$vZ<$Um``{hk|1e zYgx9&4?mi~E;ayuBKs^8@W6`rm0feAT6G;}1kp&5p_(z`CD!^naK)bh<7vX8X4>y~w zkp(@53_Gp;5xgG*0Ef>^Y#q`_SWj4tghpg~v{UGB>Ho+3B0t>ri@M^su44giZ?xZ2 zczl8rT8(GzIz+JFouvmTe>TH)JUSUcQ0(t967K4V3I1-U4PRT_Si>i-MSt$3HZ`fQm9>PV?eJ_Q_}T5n98f-HF++gpRmei z!Z>M-)o%6--$wEtc_?txNr3KCZ9ShRpwQ31zCdS%Ne6V4NK+`}b0#aT1zUaoNeBYe zDF_`4SebUYTfadn4t0egDr6#%gUMD_r&)*(;sE8Z@V{+P06XjniQZe@iUTleh&q?D zECj`$w3r&~o>x0<*b$tMU9Rb8E*Gh*E~c{=B|JX}`CV&TR(Srw*6kl*E@NJ@qHCC0i8QQY=ME`W9oEi?qCK=YYa%Ie6vc(nmHuIvCsZvdC9H@~>hC zIJYAKq24=`m+j%Fo}{;6lkGlVH&2M9RE*T4}m!W^YX<_^&qX^k3jmqxu za9s?bA^-JiQG^bnXgkT!*7ODAjU*NwOZ!94dv`xIaX+oaDqW20L5JN?mTdZ;X!c8u z#>GJn-NJtl0)$I%ImTFwkj)hJvCl1h<^797n1`2bd6ZN`;t-U zV~f=LhvQAG&xcbAYM=c3h3x^Dw6hr_i5;@=U5u)#>mj~wUt(rBpq=LoiVUQrBEXhL ze8TGs6i^%Ua5jY6%i=VOBM79QK0tK%k9m0_EK7R2b#ZZeegT*u5R0e?PmoqGX-$+j) z=%W3UxU`W^NLChbNsy(E)B~X)m_$F+-_yW}1V8bne&COMg3jTk@<0m17eva;F^}QZ z64wUeo(X|cheo8f{(sq(yX{$i3nuPZ?TG>~PL}ZFe{LBai=!h_$wwW~i>xy~$?2Zx z?Wv)oXh+Vw`z1UZd7ui_qW_D3FmFu=VBTNm_rWy#QZxTFZ~!Uy_lY63roa{4&-Of8 z|8u~_r( zw3bC5oS6ErwNU(Ih{93VBD_*uc0r$d&_j9($k1Q8?e6bDK`^VyN*pjJy@7~7rc90v zRt8|PSR(lXemHA+5))o}R~j87t=J_ug6(!}hW#wQ$&dqx+F#6pyE7U<3xv)W=~Mvs zV>3kklTJ^-3SqHrr$;kJHek{a<9r3M;SoCY|3>8^+Lk80ql5tPzoiC)fBr8f^nZtm zSRkJ0z|SZ-_lKWx{h5O|?c$Xe7jW3M?TPGj|3drwT&$46TqqYXw_m(YF{ZM;+TWsf zFLznpHE|zdfxQ#Gg1R_B$_m&2^4G;KxuTk@aV|wC_%?L$1c*ImAU zTQMq>_TA3`?ZcPwcQoi&gcDf~0hEdy*5QFwa?9dLrd^6V-7u}tA{Ms?Inv-q`u`AN zo(MRxfV-C}0g`y>SJ2uRkQQQz26xip5~W58Vnz0Ol>OjHs8qOibJ%V%ZZwcU4`&fe z_@A;O!V?UX&dv(ntb-;LAL1{Iq^v;|l;;a@R!~Yw_Jyq51MYKl4Pg_g*%46H9^j%s zo2YP_O22-wTK>3kb%;G^fG9(i>YqaogGdqyyrl+sGg`}v0zipT!h(BH`nRV55y=w& zBH3ZBUtKU^env5GqMofkX{2*a%3q@2+bqjZ5No%pRY5(D_<;MZ@;)$ zUs*2)W6tsv5JA5dP6cD4uyl(10Sj}==JTPEY(B1YFMu$cX+WXFz(ri%@#{~K@~;yb z;RV*8q8{(|W=I2vItHMhD;{o%K@ZjU$AVsuUuhT(CeZ_TX*4iq6S%+X%Z<2@#R5wz z(YmL^C!SUy#BIV5Cs#C_qB9TiGe=J00)@U0yJD=>1RkD!vl{4PRQ;hTHSE!Jr(c zv+!)~F?5O|pLb*-q7}G$ZqOBwmjRv@j>(QEGR$bv9|m~Qey;5d_Hwb5n2KfoQWMw{ zV75N}CMwk6(+ii8eTa{fSF555eJFqn7C5n?8p9=$?9_6XGejK+ z5=K%kSI|TtQ!R`~*noPkhI5b?7V#frq9lP`@+#LAcK`}{vFHS14&6ZVaRd!omW&Po z$+&QDvLKKkB{|l_l02Gs$cVav;{2=MG75g9HCOWSQYT1U(wjBPOVmgBH!J#-+J?3j zgpCkX6vf~YSrHG6Vfdfyg>|sebR$*DErH`{_baywnDWn(l=V8`{-Gzs1dpJ$nOoQ5gz(_7Wv24L1QT`?-A`%N>yOBsyb6W7!3ddc-| z>06U#C(|INc#^433v>If*wku2%<1pj$m9|0MrN= zhAIjsQ#C#|L`I?#vVcYG5cL*u!~oPYWF0RMKL6XXZQ+BOaMJ2jE^p~rs4gG0T!J4G zwFlGJxIy?q7DD+tn8k$fY3s{-XjY?=hi{lX=v4UrV;X>jR@lJQg7x)(MhVWjVU8*` zl=3wZz<5#S2s#yCdP51P#Py@_Q(Ni$LY-Z{HR)F`Ol&Odk`93asc)B^s5fu^Q!fKm z5JrRYqv!FYt8jpr5*-@`vD4|A&9<*4^es zd=l*PdT>2|*ma!g2%io&y&v$F+Zxx-Xa{YIT{EJLfG+?=b zQx*m6|I35z(O_{k%SKCoY{8#D^!JZZ8Ji1csaJ3*x zuKrW^t`IG?yq`pdud`Bu0s}dxG7~xQ&4xcsg^bKFt-oY|TdL5^>vt7SOA3^z>Zz8f zx_8p-RG0Fw*ETnl(KVIFeKSc?tyHKP)Anp@%i#mcnC=F<%Mkh7`f_g# zd8*i}5;Onc1Oq%Vi()0wi+MXcnV*-rV?5|y_fe{L;QEeT^T-rkPA)e&Lt}j@Shjf` zUs<@=`E^|_Q?p3Qot|y23zg}wz)s~VE&4v#jOnAbtXlFcjnsLbDr=-_!BYe^*T+-= zHm2D)?zbKxe+CtU{q}l=e~7^rW-q<$!Cn+t#6R@e>c^n}WU9)HtG6#+?THbaPrdN(Z)EWdLimM^ zpH)2lbSPMhCe~1rw)x03DzB*OqdMkUkt)QuhYU$5f06Ya2j(Muwwa3%mRj#}wR~H+wPTP?k zo#=M1Lw;4oOk<;FW|m7t>|d@sER_hd6P5iv!zfT1ZmtQ>buGl_iW$N*ck^Vqa*k4S z_vn2Bsx5t^-Xw&Fh9fL&g&=qiT$y~)3j8M)}bnx_1!W< zq6`@`$VkGfVh+}(e}Frqyt=5_oj~8n9-qW_8rrJcctNti*6_4*6`m?bi?zC8;TMnk zv_9AVY0~RM0y{QsG;|9KlJPsw)@SJ!J2?2*agpZls&IFrQqbo9nMV9$rCMI zcvO%4w~D!%CpPL0#Xf_nPa-~;unToiaO*o}t2IR&5Tj0_TxBY?J$ExsIBNRht=x<@ z;=b^duk#68agcw@{`N+ney?Ty8ZMv(HyPXXfH;r+SrNAIl2qX3v36Sb(g@h(1 z1+SOd0^)(!dy;C|SREK}ZFMAXGR78b4Up*w_Kl6xb!*;uL5(yVKdLLMxlhCGq4Q5`7boL%NfRfuK2DM-OfshD(g0uN 
zS)28G=xualujKHIxzB@ze(`*gCQ~IsuN^X8do>H^8$5MgZ8&(Gul)RyHEua6`L>_d z(4});-XK#xe}{Ewb=u6rm9=>8%jmWr8%;NJ4Tzj5==IU>?DKJk3R+WJJS%E;KTQ~l-(QPd&Zpf{ zVU|9;v~ZD{Lr8-2NmTQ(g9bspCZYEym-Q#%e^?y4Qj3EVMudhUNr-r|>u+AauWhwT z+2q~ebvORH$LhYsVXRi`+3;<+zR_mGj#tV8&gF)O{^{8E(HGjG^N0BE3(hLjO`Hq9 zk4(LmVx#LbHjMhH)wABSR$5Yu?Xj%sEAJH7Z6QE~!r5q8 zEzu*y3K1#W0~xFxIg~+X53#-kFQCQK{ztmfv#ZiW%X6i|=i)Joog1TXx9M4Ga@kez zphjN6U+)uuAIR4c8ZewcQ?N^HggZ0^LE>@PP_e==Hcs37v3b(33OSAV9|B6S~@Y) zUd1Ds*XOT1i{q@|mBboMpP3&H$lGNGgeq#cW?@yk2fDklfF;duh0$-B-2{Dogzv7_ zB!A&r`?Ud486dX z`CGYNvM#I6cYAJko(?B!hC{*R<>8%}GXAYcc`a7EsUB&!`hWonmbwGkB2+uLlabU%giXk#T<;eQGV26!=O^r7M$*m083q`Ndjx9a z4XeTNM1OB=ONr7H zyrp?5@~#+!%lV){UxAl$Zk=}Y(&`Iq=zFy>0=5Oy7*w})7v6ribr%yG! zVD|`W$jZ6aI^05jD99HvMT|HHRk`twc+ht>0gbtTVNQ58!b9-vQ2MaW(q%N#yOxrLUi^2G|`6d3@nN&PY ze3~Tdj>1Vw{eqg3+89$2(l@C-QTEBxw&EA70fhLz?_>-0E!Lu{(|QlZ)k&8~(+XNm z)(<{oTpBDaWQS}6lUGb!D_jw6V}M}=&nzghjHLUnR7{=Ymbt&c$;z zo>W_JzlCsROzVsmye)%VyHC|@wNlIV$mk^JILPZZ_{_2O9NR zMr`Wk>^sbF&Z3)~4C8x*3=d6S{7eiyy~HhQ@x0t$h0G+DTTF(frkO2!)H~J_%PhCo z@8pR+J$C=e;x=ZXT65!ReKxRcUS(;pBCag1wATFEUrl+9-M0lYJn=MknM!S=T=Ja! zT*-}Se3w_(ootadTj9pcPqnVzyfw$^Sa`b11gq>-N_$8oxrwxvii@_n<8pU)*qvb3RbeKgF zymFF)cY9rUEOBJXwfh0ir&6R=8mer2-hd11K_&NI-0#Z_Dyj-HpQSSVG^L?MRI9+X z4khgU#mG0s%5a`W*E|GFmB`-XuC!M@gI&x3y{i2VM?t%DJwu~KCi$Ul>~ z@?cp*?J}biUQ-U@lPd4gmt0Ql_-ExkgYm`3HOxpt`FI?k zL0+_?#A=Mu>)XxB0qv&dvf1}<&%O?WTV{9v*yZY{%?}wvnvV(<(VYci06XGIy{KJt z3-WC#xo)vGE+QUM6Jn}lg}r%>&9cbrFvoxFHpOswGimt%Lra0*Ac1aym1p*_^NGPt z$604RZs;tin z3cl-&nj2&Gb(8v9_-EX<*5mnR>7H2IYW$x^9%q=+n{zUSNH3T_L1NyQti3*PLR}}a zPP}+=|J}@z`!mMINJ4b~P#6+%jg&D+Yx(bTSQ9I( z+MJfeHu9_Y=inBEIirrBI3rQA_zrW_k+z!o+{Yix+sJgkP0dVCAZFR0fB3+L$o}@} znR$sw;7{y{!lj)m#muS?U2SAcEbZk&CYd4ZM~V-~$SWv?4+zR*ysiVqpSWJvv)jl8 zQlOmiFRFS?7Q@dMKik&#(E>+D z&(v;%>7C!li;8lQ_^5s7b}wkn#BYO%DA09^t@jh z`6Z_JmP)D(GBCdmnBI_GMyVEfguI-7jQe;?in?Mp&CqpAhUeBGLYt2kMam17tu5A^ zM?a3ewLmX`K?>KglO~Vj)L{Nq?3($#L*bqA2Jp=kn!4(p9bYt9-W~XgWD{@^R326}|$j+e7^I+zmWeY?gak&3jya z@yu}w-_y_5ru6s1BVT4#4{q*W{G^D#?Sd6j`QfNX?<6y@?nhsXxOLhCTo&d&RqTDM zvNR1Q_N5=V30~Jk;t+IhX^xBshb~ zdDZ>q-l*X5u;#7m(GK-Ps()9bceLFkBp0F zq)8hGHKGmPJ`8TbHk(l*N5;s3~w#@`-rB)@QF(Z<-$>OVk;$shCVdlRaW+B?`N z0JaW`?EGE$OO9khF|rv=ZXrdVpW~jYou|z&x;U_JlW-nGmQi5WQCJ*L$3so^>pJY4 z2Ul9_Izr+pqukY2w_e_P_%tcQ`S#TqI_I&7b;micYdn;AwL(?iFle(Z%U`U3(b zuO%!43)n3A#nngoEJ^8wYCWD~==gM9l0&p_p8TqjgN5JMOpIZFE8%+7=X`FpiNy9I zZuP9n$n6mSML49GEm&WxFFI#*sqF)gsh(Ap$iV0IQoAmCmPKNvI^OY`irI9->aXMH z43**;UlI`WKhR_l6m4%SAL$s88M}TE>rGNzD0Yw3*sM(~=%+-~O`TJAFLj)dai^N@ z!Pq0(rN)7SKd8M7iT6`w-QL>E2B|;pZQ)2NQs)zy(~Itg()}V zam}7@gxe%J!u1xr?d#PXJ}h5*tu!>NC8ak0#IWx~`S|V5QIGD72~wj8GGi7w0@jUj zq2#%F^$t&g(hqBOC%vlchtw2$>?(-tIXrR$U*DiqYRo!hG!J>XQ^J|QxYB(0Yuqfd z3x3n`b#B&^ZJLyJ`VuRqmC{}1mn(AV?`yco{QPYlPZmEdozFikL0 zkjC9m(eW!@63j&JAtFdmY+WtHSB#hB7Kbt`XZh3c>h0~TBddmCHsL~ImR|d(emEWT z?4*r>#VGKr*+aS{PiCV1wj*qiczs^%!5JZBMzEw~&*&<6M0S~|f{2vdZ1)Yu+6v)1 zDbHo@z`lNqXza-fu4Oj5@8g{>oY5Rt&maqg79IwE+dz6gUEau5ow6bg!4^9S@4shv zPhSSWQXyiwV@**J39?qGF zbTF-9!7&mxZ7a=u3H@L+Ce9Xsz%%-!Q0ZBPbbKklbiA9h>lu#Q^kmZ)$&q*Rd}p1O z0#^NTilNc`;dGo4LlP_tD6>h;RkjD7sY#b6Q@1onwk`R#k&DyzI{;Pjb!@_u7`u7e1xTaPSN8^g=AjzJsQ zocWBWr4)1Uedu!r11`3)4WX$|nk6Dy@io&86^&)3Hvu+bCdpHn@|YnhsSG5FPSeOf zO)cC$xNNUdOZwBsewGvpEEi?8bU=+@6T`IMLhz$Zy;QkoU|9tnKk%+ zej3FNokmz3VujiH1(PgC@$DFKeuDd%T|Ewpa)M$oaE*-|B%4ropnbuN4x*=R#So7_ zF5N{}zOkvOC2Mc9Y{BY{^~P+@RV2V_Y!)OF*EEpY9pD)2*V~Yy>b2uXUgBj2+RBRg zW*U>m5^{h}Q;xm(B0Wj1Q3pdR++{n`F2lEj;Mp(_@a{x8(JRMaH4 zSA~XRVX9;y<&djLrmoPdRV>rq=RRLBVwB}9*<$m?9F5`)~K-DMyJ7QI?@KoyF|_+`vRH*?Awwu^;^*| 
z!{b!2ds`c%J7**17<`z#h)Gi)e80O-$VgVq(8mCELMsO+l=#>d!-Ezae|quYa?FPm z^W2j}SYEBhQm-dC!wOFry((CuN0rWlzr#wUkQJ!EBFmb7{aItfg=Fw>xxa6M6OQwN z=*-gvR~f^Cc}B&LYZ1;d(R#?yHP=3@fgBb$xKrl8viB9MdF15Ii%O8PTUcJkkI`?l zoM(@O${GHekFSs-BgCu~_~DrB6ii~&$X5sn5T2PtJGNaEgT##&hp(2L;eS_cC{!Z- zlv3G~9hf1ChCaAKF=~X)A4&6M-eUH3Jlxcp?V4V%x+h(weBsw88SU<8FH4b(++6vx z2dhqw+{8>5tV%w&K-_eJQ}e&x$M7qzmhsIMP+#tru!LTotEE&B69^~PwvboT57GETj|8I5{*j6V-2^~ zszKH_`Ak}VF}qD~Rh=lFuO@u14L-!*HoKXYY;%n|ghQ8kFXJ zQoWdXS^VdT`B!K^_Um!*r(X*~JRfYWC1P!vHCiOw*U7qbuK7-xoP2e*MqEowA4*o1 z$h`vPCOVtsj{>S%e(PcuZ-OXP1;}KUlIv``lKYRvhtcmFns>CjhD2BJ+@B#4`}5g1 zD%6p)oYxO7oWE-}r4tz5c^G*5bkqUk^Z}|DZ25?#+>S9u^vCrmozSrgkYyxwxeKB~ zYV-@7TlOi{=kw58*K2MAOlqzLqJI&8k3G&3p$Ii=2bX@tjZ8|@5|c$ci@0M!3$#a& zT@P;3v~gE@ae%H2;mYp6r*+$$y6UY+BbFU=`_%c|P~Y575Wx>9R#gbhi&hl$ZcIjl zkE;WO51_(Rr4d&AJ+ISX8mzp2DsRGt3pPw%+D#1NL2%tKB2pA&ZDTr?v(Ig|cfPz6 zsL;GL?|(rBwg&o2wO=Z9yitN@HuoEhHb<5tMq6=hJUAcy=qt(UE^Fci`&IaOKC)n{ zyDLBKfTq1b;Dc#Mk+gjCqWPBGK=sI6pYBy~}UOBRt)-*8!x2BN7p+)?Z3Ez)5~3%ggvTp-)VYRKMrcl$kxSDo+ih;D2k zDNiN*-b4LcPT&Tn`Jn_6NgqL*$^TRGvjbnB3+|yzl=T2~VW^P9e}uB%3_+}n}{uIZTN1D-g_I`{EuaNRKxDH`4u`$PPK=l_!wL*V&= z`ds~lBof)bKXN5|Xz0Rr`Jlo(J(uvwOm-6G=A!9qmf>wKM-2-PyDTXt73ybIE}Dm2 zaz6L=HCZNt1R=Ot_K>b-)v;IObhc#WkOjFxGENN^1N{`47jgG5Gs_l2yzY^J6qdAp z(Xvo|tsHe0AJ##y;dl4YL){u=a4mW1PA8aXr03Ky$3jE{PU`){U)Lp5{A&7eSoDFB z1=6y6;dN2^Z&u}Y*3c}HDM2RvnNOWEH_BRBoJ90NcoQK9i&G5{EZMKCszrCq75h& zo`y^CyPi4gY2SiC2~_^9A(mk%LH1*_i4gtPiwDDYdW*d=l@7gm25b4!l!TV{96`iz zK)#hX`zA$Jf3wrF#;N@AE`ZnECuY7AJI`MMu|sGqI*;@RUgL}cyFZQgpw(=V_6^e` z1AA`8eJYk8t!l}WGyOVl?v)2YF!w zlOlWS)%x>C(-$0Wvg4Hk*IDnU2MKjU+84m25RVeHYT~>B@*s++XM-Bhcy~g3CPene zZNBX6)Cn3}mEX0|FabJz-=KOrw|@?CW%WGlBIUU=e+REJaK){aCicdy>Cj?(J;pKD zV;mjwQb+K+ol||eTOix6Gz0irQ$W$({)P*(K6ij*@Tnl)(pKNCS>v$;CrL%2{clG9 z%xb(t^vUIO3k({x00vc912jxvIdeGpN0=Z$u*4@tkj-X1^aPXRu#m4A@Fp8-fa}7t=(u-ES)!Y zx`?-6`=sqYtzpyFH<>`nVzu^gm3zNMOG_6w-C_UET86=ZE@86y)*bctkG~ck9QZ0O z16)X<++$*>l8vUAlltYlU?s4Gg1+Afv)=V&Fj?ob;g$Tc8QZn&CrmGNn9YjsIQV`; zvvfl47eaWH`$BBHxpff&Muu#Ib++Q%sm&5|8`QnR0}dvYKA3~2ovDCXZ79zh5P+cJ zyELt5GCBSVw^l>)vi4QjIrH=Aqh8?%b}mG}q12qG21ON@BdD*ztt3fQ;N%xYc>r(- z_8ey(#&=Dk-bos$IfZTPr88VcGJKl9&@Z=yelc!(FQLoXsP)2a*kBUJElxb6D-i5Z zF+H2V_TY?eOf+{J^?~`&MenNKYcR=ji;erne2zu4s*NSQrq7CT?~G>=r~h@c%3qEc zOe{qX=hr5fd$@h~+#`nH!MCbLl7?$jT5~+zpkc-l&Fo25EtkI)|TH~)jxLW4}Zon@#)3v*pSrn=^ReQg0h$?%E4wu9(x zX>(vT>~6Dk3Omw8MdOYMwJi}1*L$7)Ijb2)wxjejn^&_3J5yOdGru~Ob^97^CQhh2 z%wOq*xN;bk6C7)Xg5bUb?h4m$^Savg@a50_)>q~8q4NrAJRDC9NYx#Z+_`z!nV#+t zWk_uuwPK4W3LJcgQ@z-DjJl>~n#{NJ2Czf?A19HlLH-!|Dp->ZLAO``a9;0V8?R5l zvCTEUdhf%a>n3o=Y?ZHFeA}q*J>Ly+EBh2xLy_oIXiNL;NNT;CR=O(qzy#bbL)w+A+s|BLPm^WMw3&Xgb5)zI?}ixiR<&Fg zH?qSUyIktEdu!MQ3ibGTzGy(fOQUO>RbUh`W;FF_=NYxr7nv>*8e?nS`RNCuH?Wwl z_EtaskV!#R&*I0l&tE{O&Rakl_Y8!kV~b+3JtjR)%914BKYDVL9V7B;_JaT|32&gT z6o~Zcl}{vThvfEsty+Dgy5_bfKp!il?-}3aAN8|r&LdgQW5iUHJ; z-n3Lhi-w-oatDg*k$?tCs`&Pjeg{bcQAT$JI^veLa@+Rq!5iW_oI$(_rmWi~*To<$ zJsZth#yn%~&&;$#H*JLSRNHBXt|+YxbCJDxF^KyWX%bROxXg|@CijTVnl>&d#D#M` zmxVAv0Gk25-|T7F)|FrryU^AXRipS4_=V-dDZ>L7Q{3>BYL}vsM$5qnfiZIWuWMEu z+a0i(wz%ucCNTLK24QS>OkwC#XSNSN>G&e4HfE);~9Su6}H+cTW~nkW|fBP-fK6z#&?^*GA70tdKRz;K-6$Wiq2Z zZ1{thlB{AbK&q10UA{uzNpru&g)wjHI`{gRQ4qlbKGD}hu%b>*~=C*#VMcgcxRXAnZqL#$hw%ra)Rp!{dR%e8` zb9@&24+$Tf-DL`)h`zEzj9esmf#gzIKPjnaWXr}daOZDj-S4Md>mu5R$keJxov(gR zxM0aB#KpIIwt1UXw&W)A>0vo_NwwQvVmab^1UA3hw)0G|UUqzdN$=7IGNylO4d%qc zN$-1Z>kBesaUOPZo_9Q^a9;Nn;>%YkBzJ~z{Y%3emHjZ^kB0(KQj&I6NG^> z;<3^m;U;m9{l|?t2K?>h z^nzMXEvmKF#t>rJOKp+rIq&aJ=9b&s&e#SUDQaW+?}s}6ipDl`?kD%3qRbZk+-GA! 
z)_l%jEMM*WbozY$p{&mrZMv>Qg2uS)wvK;s-R5rbP6}bAuwl`>e}-X8HI*%}r!D9B z5BK7nhRfh(&%5R^f-OtrA>CPnt@@wr##;BpH5{)-I`}(mlKGO-)IWq=TF}u*b24U2 zC(c;SboSGECVwyI?3ocLs?2n~sIsxqF4V265yT=VUdV8j{m}zo!vyXYHqp+ZtyuY; zR#K|xW#`xSlYiADiA+$-(Rp1rsafAT0w zEMQA~10w*zut(RP8wnc}@9KVcyO2HF>qBiqzHiB<-LF-Uhcq>GeVD3~g_;G6&5%n_ zhcDS}ZH&aW<2UEgCuuOe$i}{JJ*q2Df;ycOpEHp|kTDU{{d8;0v-Oq1sq<&na!;=} zrZt5oFBP>cj2-Q66bu;(em3l=BO+zIyxvo0Baz4jbwS45T1OFu@qFzu#6@E}H;lR{ zEDkJRtd0C6TKmys+v#+@pgHj-ht4iZau|j!Lxsenz)+l|^@EU7Jh}D$(*8QFaQpNRvm>e%^Bf4aPM0A&etZn7l zCgrO7XWa!IxXCBb@UU)pl3elCql0W`4%{4$Tl-%p3O>G~%9$!mdi%`Dgzrus@wl=q z;3rvt9`BlwhvplMjm@^doF`<8t!C&`_K1*H@Cp`~^P}2MnWJuFJQl5A_=d@1pv&IM3`I+y~=OW%vf$I+|jXoO-f>_UU!nAMrt(9I6oXf|Uaiwj>mO0-$ ze~IFnre&S0z?H2t(_F{;#ru>m?g%fvb}O`>Me!eQHe_PS?xU|L*5wvp(DGZL}|IFuVh6 z&^a_+RfEY`L~~mu;rDhXY%|`6KEDjHEdOGaO)2|_%`0Ofx5DP9Ddr6hjMZc@rH56M zo4%%{`^e2$5YimSCNnvy#s{sw^o;iO-vv6Z;tr=1G7r1wCLV?F;>wHjV9*HiI!V;# zTp@Yx-!5}D=Zj(9ZZ6B2Tkl=(7SRplWofB;x8J157jS@Zv+h%_@aCoQ%NBk_B-;C= zP{%0%X9Co=#?C-JRO*5gfde81kYfbV!He@s;dS-eT!Tl6dZx**sHj7N9(IS zRtZ!@d`SmC4OKQe0c?>B^Tr78Dev(#WRF7YLAS}i-ODwbDQEHybU!wXPkdAJCsBh0 z*@#G^7(Q=OURl^-B_~FO&x{KClFEEdLqk$_{ZKt>5|HdK-MzI?_$yix3KzC0lxs0$ z?H)n2>s^ZkKub314{h`Pt6!xu4!l)^0YGG?-)v0r_+~-#5yqRulzwHdE!GD4w`-0r zwcEVXxPb^6f*J#DfgD{8QkPmsHNjr1Z>p%)%lM=Zv$E~S#mUV zrF7psy{@-SjwL=lA)xYiz9kP+QhcRBy;dw(9Ci>JdlI_zl*CYdi1?d|bMU!h*0sBP zbaZtEk9%v-+YlqRxTC>6QbX2TX}*O)MFKi_Mz$0aC64Gl;y4{2ZVv6%Sa~}QJhR&G z&RSP)lrj?_8f?5IbG{asZ}zFV)CA3NLXG}D%(9ZHK^=yBAoj<#dWD;8C0Yfgb(;cM zZ=Qa=m&B9VNTGa)0S#`u1#}-8;~stOKerQ)J9%_7G@slu$3E*pOjg(!^V$a|!e7N( zCi4}eetBmTZ50(mj6#^4@bexaiW~;-3>>O}_NMGUK57p<7p4 zL)O#PSvRNM#`FHI=vmuAzOBoojd{>wtlu&VV{Wbj4Q@w`bHOFj_;$&qIiyoDfsja< z-c=yUo_^=6L9Hb1wOM}b*HT%??&=(8s5j@{<;#>~w(Py{LJnp|7=6dVKN}vAzi=5g zBP#f)?eF{qJ)|j;M_)8Pc87alg8(B`J3n=4iTzyyjF5(}6X=HunZhA&DKVR~EW*8O zkLvXqA**peMU_(A?i;;HNNQn@NEt(mR$TSBs)})ZcyWm@;z^Hmy*`*-=B-|eBeFZR zW}omj2YXYSaKAXc9_nxY%=;^^+pKxlWSw?ryv(VN(Mju^I6vwnb2=8O*?psJ=!QT|J#y-y72%34b&1k+qTE!-^H+G>wpzW&)@KVx|9?M%4t!oPHJ`?1MCC3LWuo+tEJy%1-vi){K? 
z2y4g;j)ivb#`9UrLJz-F%Ldwpr1qcIZJ3f8gRh`HXT7HJ$XjGCPs14h)qnQiooN5~ zD{UDL0iQu?5cV&frvLVIjvFi-dZh&DoG*b{%>)Moy4|?^l#^C?ju1y*6Aj!M}6Dk;_TL^q0RVn!R&tE4m5%z>>!?~9Q zHqpyKHq8(Y{hDa+*-*ok#nj&SK>uCH5wrY~MueSJYX|DS?JAX=wzb%?m2C}I;Ha{O z9N#Q4lN81AqgVwjPXC}tQMV52v9^Jw<^S?nCrfv^3!$S3nCqrAn9p{~K?fQ9xdkqs zMg(F+>_Nj~#FMaws13weeXrF_UgjFAdsNJ8POa3cjcr-P3U98~kpE@|ypfLA)i(1P|m=@ZivUq>&^5(|cWDleb{khr0gi z5^PHB`2RJNKL`0^5!et>cUBcBp-26}zD~NW1_>Y?%spZH|6I=B2Ki$VA7F$IB3tFa zNlW8LKw)X$rPRK97Pe<-{oiTw-v{|~5eWDZ|A)Qz3~O@R)<97fs8p4vAVpnBlOnxW z5eXnwaD)k4ZY%|@n|DOi==OV5O z{4RI<;U6NVbRo+4uidbVOZmqo{BIwBh|cPqtTWwz`_q5@A1D+MRpn}lTYodgKld|D zcJsbY;NN{wfV?uvRRJg>G5GF(4g8wVgH1KHuj%3tb z47y6jt^P&4IMbql?agH>zU5RMvcKDQqRW5>qZ!;A7(@m%#}|l*h@g}4VQ^fYs4S$E zgzo*kL-F4&xqnz{sTkm4hYoGD`wEhAQ?SZTW=nu{Yd0k2gzZMG%+AZh4L!!ZO=}Zk z&{ZQ|fw;s|PYWAL2!p^1m6>$Om6${V?|N>9;@phW}yERKAPH zY{jmd{)KqSvAcOJXHjaZB0H$Um^kn)Xp$uI@7_UZ6tKCg)eL9q2=^SL1~`2Bnd@fN zCkf2)J{>uC)m*EQ5D62l<&RsGt9|kc|J|njho6yS1ehE!R|ku(I1`?w#DoM$4@UPg zITr_hsw*=zMO8g56IcQ}(Bkvi_y@DUD2_iLyU2x?06$=?Wtat#{lhL$zAp61BaBy^Yy|JAwXYEcs%<0u7^k_;ufUK@OlgfMt3VOOs?!Q z&;ru> zb%LI?tD2=%uN|o|C&<|@VAG=qrwO7+4*I)=r!HRHq~(~FfqT4aUrAKH2pB6!R|Wa)1Bs!x5`VL^soKC7AP%2*CrORimi0Y3vqD+Y7-o0UCN+1*5yW^bCdhk+# zl*E+tZv>3!ayekKr+pxBvFOK1mNcwPI?V~P6j!***z_cVKbm}zTe)b$noI$|MDw>p zz<7Wm)GM=qmhR@jxxjJ1F-R+Sd;@OGBMu|4P@~q=-v4EDF~FH3e(is|;>I$W8+v=v zr&90^TGm~{dPi8ux}*8*RhCAI{owKcBHvhGyE_~mng*XztbSqIQq`HXxA%HoVGzH4 z%UQ^j_wP2H{o;#X_#bTj$|^4I!@2cTm8ws;=^lDLh$azUc4<3{JrGkx&U3~6m*@L0 zLHeiW!a#ssfA7%g+{wE{5MLZr>ITbH5O&pH(_d{f)#N8dC0JWpw%6UjUARX7a&&TQLznJ+S|IPJhH7 zIQ#r^7pBS2kAbEX0L%OudyEm>i-uI~go!R;Sz=z-`w_G~q4|HB$;H2g01*sMC|`xa zi@QcPSO(EGkF@ynzU;HG&D-!VD`x}GPJK}vv!;)_lB6p8vm3YnV^jbAYYE`9wO>Yb zPBe#7ivbUx`;ab^&#x(JSy5)Vms5vXz1Q6J0q^c1NDR#Ttsw=He#!HH+#bOFg98$> zvkTI%O-uVKeDQX-e(}zcz_jhZUxUpx`LOs;-|_kW+TWUTtwONZ$R{Sn0e#K;Q5izI z?ZV;v_muwqlQZz}eN5CEFGti{WJHMW;P`7i5Ge&A@9&DKi)7%ux>8n6YR|sudpzNn zn}N{m(mxbtJ~o=Amh(Gu9RRwH%k}>DkHZ||OMMgxkgS*u8~OW{u4M7XdKZl+HSJ0r z{0#11A1ggg2MBxP0Aj=OUYptCHWnedqrRxF7(M&=k}tRaI<>$Ka$xKGrSD80wr@_v z7iTkT&$IB<9wsk+5b^i%a>9`_A4Z=Y$A8{GRW{SO*WJgBe6Z4Rep={txDeZYL9&wU zf97?p(W$LxCo)iBr&rp>;uAOJKGB_5x6@(OgJ@y(-3kQAN1s2r8lhDJ@B<`@jOY&l zy4=iDzA&;uQD(H61cUr-)?7+V2;?`bb~J92DcL`@f3P}a1wh~;YT z0=hFBy#cL=@>(SPPD1-!Vu9ABSB5Z3V+b^3@2*}-wgGrsURO-H>+K#crA*LL0tCEO z)$`G6U9m=Rs}U_!6}RwHx8glNZT4P(_ps}wMUq*v&!$8}6E%E`xD+_ncuLXH#stie z)r-A4N}x&h$$n)6C4&$UNQB+{*?J4)A$fF?85;YjySv-G{o`Hp!JHQK%~}tf{e$OU zEai^-#};Z%(Amj?fRyAJou-HL30`6GOigk4e$9Yjbhg8A?03}40N)!RIHmhhTN{Aa*{b=Q9c zK@ng;;ohkFmp-6QlCpoD6)Bze4hw@1Z2NonpLnlh&l;uNEp5)cx4*;d86PwWt6C@J z#H6;wgtEd%f&4STe7jySxU(H8wJBgXUZy&UmpebyZ~-XJ;{f{Ef%?L_-iuc1c8}t# z6s|E~NY(-BSo_^)BcP*DCWpE{phtBZPYydS?vd%q2|)5^r}6QEbBcf9o5a$WIi5wx zv~kwM%jE~QNRpr=b6U-i?FWgr(N0Mm$)5L6Ltd@OwS#9U1&9V85}@~rpE*Axu&YgE z_;JyFtXs=(5pFO#W&~WZ|>=S+YpxQT-*?T!{5$MvtcbxFZ z^z913$(o{(Z~$-*KI=`E*?7SqISnue+I3`Ry1jh+*T+KB8!&B^<1i(9`82S9X+P!l z{GG&P&qF)&ojr@wz$+|8;kUi?HROyaX-pD+hv{m>38du=K#Gh<3_L_IA2r8-*7FhE zZQA-yEgz0g0bbp1dOcecs6;`@{U~y*BZPs49 zpHx^L0i5+)*)-Gw%ajXa(-tXywoz9q)45ZGi?Y2xYIg8zhWBb8!o(Hze*)T0@Pd-z zf_Ur@Sbx$^1*GkCQLLeuZl=?FwW?_0l;8hQO$dH+K%4u@g4PlsRi4mVR(fUg7u(tM3oa(e z;XF;EGj7ZxrQC{w6`&uOxZj8E{SW(2{t&RzXib$yh5m7r+cY4Ng)_Cf(X$kjp)EO|}U_CP9mAR3n3UHJrJqy-cmazKK4zTC+ z*Writ;0z#-GF<)f(V;rc1>r)zy33lSA<2(UfZl{}dezGLz zE(efb);&u^6j`^>uf%))>tS(~J<2xA&96x4lbJG&lIgOq-FOmKOyMyPaJvZ|VeiG3 z4C~sAS+K>ry}a^g9SPz9Ov|gCMYZSYJ`79xe6KvrCiWcoyG1~u!gl+)=D#e$+iteI z7P$?7a9&g)mSL?~6J6Zmlj|LoA&G$%!FLw;kCh? 
zQsfM>#GQ*oIJJPZ+Cv#zi6GQ#duvzk^0nCjbik(;ZhTyRJ2F?wmvuVfI)gTkO^M5) z;DoQ({qfGKWOA}eo}X!ZfhJ(aI)HxiP#I&1`;it5Xg|x7Hx~~)pLVLEDSODJ@Pi}O zU6WN1$v5E{$ihG0v9gg*D|ZHIEVq7s-CweKiMxiFcRj_AObF}kE_SPVPw;6eQR7p` zfG47x_57f%Y;BwL&o3uJ1yG%V1(?Un!}*C*HM(s^Rd$P|izQO#39*Z)tn&7y=-Sgv z>|yiwD+v)JHc7qa>;__YLQRhLR+8+MBp4{ZjHSGlgJY-48e-Q9u7d zv+?H%|Fl9z0WB_3;l(#Rhora4q9DnirK0o=cdrS2acsI~K3U}iuVl>B3cLd{xjeS% zlP{+if-Zt_{Whp;kUwm&GSkpd4XBW*b-`oI*9aIIosN6pr-H1>N!OhttxkVL*5p-L zceC(_57H|c(Vdn~Gyu>N*^+%d*aj46L-q&Q-VY^euiS&01nRaTaeznb)RSIyElV$J zFO;lx+2?hH$Z%1_%|Q${S~>8O)^mv#D)0QcRS)V2g+8 zpF!;8W3D&MzGfv0DLs`S5!3&YyWHr+V*TCy$Re&u_(AjPil3fP&KJ%Wx4J|QL{(ijiZ79@M?I+lg?nolWAY>?g!lRJ=2 zsu0L#uFJhGhF5ht&geZ{NO)ltve0eNd=zg`VHjP*W)>pY`^B|oV=+d>d?JHZ^2aWg zFl|F6z14^i8e+u*mXmWwiLDUGVe50s$?s+&5lpEHDRkeihIeXg#&AVE5%)yt71Ryd zPX>+{`A^;Pm~?X2v>?IoXf;NVs*cT4?i z_(n||IZEUFLx5wZX>m|{l?K+cEH3FT%br3M>@D^tpAyN^4AWnU!*tagE+$6kR$vGn z6Ale_2!H*Rr|`4gl!-lUMwR<$T4%oy77e7Xq{;$%M5upo@O9ggCigv53|M}re8jMn z_aXic6<~(ELEgp44+>?{-Ul(G5VyVgnuZ%xM}Rc>Gyx>5Ct0X?QqGK$BoThMnR2$r z;qNRq<@(vdZj*s8n5N;Nh2It5j{G3xkgV%s5nr5pZDQek?hS z+Gxm7?LGXR&Tk2woytS@fYtwG+uj{L0*sgaWbSt~CrT5bUWwJ^h$SI=zFhi38B349 zt&dKhW+pp(s9V;~0+|C(0*Vx%>J@#Tow_D7|50h(qexqm|GFQ8Wvz3p$Bf^IrES!O zVbsF-`@By?sd;w^SJR|B&?Z=voc>`>{I17R_n+Sv`~yJIHA*p3ZRtO|^p+rd^6if3 z?33M-L|m5s%Ms2T$J2PM`kIJNX(!>fbH0~tPjvCsC#TM=ySb-Q69oID?Yx!q&j(B!d&%~z+(8^`g_@EXM!(g{!6mUgeR^B|*SU36ZUghbQiSbPIG zgE!!FGd%2aFWxd=(Cq9>Jw``Dh1<)rhCcJ7m!%3-qFB!IKiyXuk2!%W2&A#)*Xz;E zPXPpir+Cubd1-XC z5DQS!k5{AxY`RH=dlN4ww@-=~b1rRziXHqaSCebfXRhrfDg8NA85jeE-!xPGZyDcm zSxsBaVfmh)Lq6%JkZAR@QKpyKA{ma~Kx|xL?r))xvG$O=ReTm_J5bx_yPa}=glDOr ze_h(VXm@(DV!A0hBDaS~8TnSYKQs2>$jqMjUbgLvdG%Yy%Ya2`N12S{L-v63+YS1N zx;12w-hP`{+d{@KU@j_siJkWQb|j_xup625?%W+$^5_ZM=^ak62UY1`GkV46D5Q=H z{ZBg=%e7HnqiPD5h{fW{oP`Sk@-TP5DHqsT?kgqaHVwyfVY##ID7%nA-Ea(hHV>#a zRXa_Re?#dK398G`a z^M?UW>yPI)6oG*K*UKsLK8zc+_@w&h(43+&QYb{<=x&?V4<#9@K`jNs43eC$s*&nl zE=Wbj2XvZgC~R{570~y&0#>qFe%X+$V!67%u9kl_KP^n(qhacdyc!2EZ^Fk?c&iy! 
zaRm`0t2CO_+4~z4$22^`@@pQN95iG^-1vwv+0W6ka^%&}Q{OeAXDfug-9pnU+#SD2 zn=Mm6!cwy!2&WsrD`E7G;3SC1uGt6;a zaM^lyXX1Q+Ex)FpY!R!+#4}CjH~dBHS?}HO-8}|IV@=((q7ozI?<=|y$K=n2(tZZm zC=|k=ss$huqySR6hA4PgizJEL7uG}+Y;gTFI}GFK`_7e)=numkcLjEaCcNP;X}*~p z6~iMEb%yRBqVdokFo7)XYi(>(gUoC?bR-uubCuJbj49GxI;!={c~<*vw#*2s5D zo5R@8**w)#{w}Mn4QJTn&^B02Mp5WQ z%K3@KvEPjXn&(CU(Uq)c!2+2$C$lC5{NwQ)J*Rwu_G-)iZ+!sZpARV@pX9m@dC2tW z)thH+^ZV>P3~U~qf>`c@yqH74OZjB(>Uz-!}Lo7XsEV;1vHA#B>G>m5($7{X8(efM5I?=P43-JMyKBX2+@bEHwo1EOIMRpZ(_2jEwtTfNPdauaM$THx3_S7?W(&#jmBwi+qvtn$ z;uHs%m#O;&U=iFw(7Vv5V3Im=PD`JKh+%OGZjeXjHWgHm|4349%` zx-S-bE#F0RJf-NB;ny3n&QCAVFBrME;J-QUj*(KhueBJ>T_TNmnWm777up2^HDkqB zq3^S38)>VqwPyaVKUQz%VOuvP0+tg(ZZs%ln~WWD{7dl zW&E=-@JW8J0oWnkx6RyXVz_e+h!B_(*YFwV1X?twTF8!|0~Wz5%;kflQlR{~ecXOF zPSB6#t#V2VnR*se)C2v>A6EFwnvG+0NudyPGqT^_$jFo@c@xmt+BNCPf@F1}ZK{>_ z6$^|$3dl3=)!V&zw!J`(6CAY{c`9v^%*t2t1grsGQ}fr%q5f;5`*-~qP;|l zigP-QaH|J3XShm-(BdCn)iQZwdhZRdsL(wMjma>WiJ~`d3lePIsG}*ak0)arswxM{)W^gR+RA;*NO` zvk+nv2A>(8ZQGTjo$Zk$#46p@fzXwQ$mqvt)tCiAi=I1D(-A`N2;R<}rC7|Y-d7S5 zygoSBmV^@@pFXV)(#lTnm4+3`p6(5Zdceh(DEHUGRNzsrv+|3j9U^~ z?+Knc^Ht0sh8&_$PzAz}Oq0hr3wH@$2a)GZNtbt$Zp`0~pncqvEe-e+Pc9u6)=@;*c4c3nPx{wGK#$nPW!YOoEu==n$;INa>y5 zdi%&v)qNFRwv_B+P7(QCm6DTo6k^TuGFL3J^&8|qZTt!$#|3keKTDlw?=5*Bq9em}urY<(Y3g zAKHJ=dP*mK@2S+6=<$y8raA8fEvnI$4a~hl)K!7yDCyI{rLcbDoFVVK%GIA>U+2Ou zQ(5QdK)w?NpXKN{Dk}Gy79ejQnHu>Y$O0amISxyxm{^lTS-#7DvE#7#=mxa_=hbxj zUdHP%8EWWuy8PSA_ZTzS48XY{*q{TDD1+0X*tI&hzJ-Nxb1hnu zC~|?Ek*-vQ;B_l=GJ)m#8ny}J1KoYAk9xxD+P;@ru64;U!Xv|%CYE8u9yX{+4#V!xUa5o`!*Zz_4=Q= zNLdyQ5BY)B{81h*?9=4ky!AQm-A1*)U#&C|jc&fVATIse+_K^ph$}@JE(H?t{%l&I^ z&RXBKPk9lkV_1y+9ZA}U^du%FId!EXldLZ0D6=08NK*=>ayPPPRV_3LzJ@JaIvUi{ z5CI~0Nm!2~0}yhF@y8@xgmPxe6)bmmY!De>ljZt7nUZAORqfS`d!>e3snw!8sE%>| z?G|*q&^5Z@cnY^}wM@8=-yg&=0kWmsTj(9f8Srt>r2uNR8j<6ZGvlzwTyUKF6A9Ob z%{F+@mqbk6IR6t4O*7;|3Dt4^V!g%7Y%FeZLcEx8%&JixJ;IS}Gko}~A6d!Ddo(G0 zr&Qc7VB#lg+ajKF*nYwMbZre+=TO1q7!Me!?56}Fj*a7Rgc&mKdg<+}0{)pfuAj72_Y(LT9T$)~N75Vdmnlrl45b@2*Zq3g zfzL;-Mq%3?1ms8zgQm^2fydNP$~#@&P`aeE2s2wC)?ZL$%DP+vyNix#Kcw}00)&%t zb71lcD+7C_zy-*(nknEEN8@hkK)3V!a&C^&p$1h)61($1<8rx;MJ!9WJ;-^j`5!8DAFW&l#=Pno) z8e-S^xNjS4F1%;bQ1Gw90ZGhi!?y++ND7m)b889#PE>0l9M6*xQ_pSvD3_KTv#04m zuET|(HRhZ@B9k$!8+ig)sN6h*j8Jw1l26l}z#sPOmu>=|w`$)05r3%pZ4jd)=TeYU zzCT)Q`bXf(ko3q9%=No*zQW2y2pN~%TkZIT9D73pn*q{9eIV~N^x0UDV=dRWa;B0! 
z`00pD&qiitk8U#yUoR)kE`);DcHde~vg#yz&R)x4VFT`4r46iUq4!pTsc;f;S`Pu0o^h#B2PbeKBW3lQ)2|W zjat%EXz`?O+wv`|hqC}N9{e~TykV$WvB71T5h1H8<2Y!^mSV_@q7rz6E*qQ>L_d1t zZA^8$WS%g8a7&o%sA^gepm0{d*508kL+6@L_P9E^G{iQ;5BzVF@_LRokX;~|zJ2-& zBWDczhi_rA!a_u{DT(W@*z_;9(%mYa*<{cYjSGE}w7k^(p8UaS%gJWN3cL?4WoPz) z5=d``(ukcL;ugzn=(Eq9I$3MBe%lStkIBq)>g+t!4Wm^(%FxI^Z%U_?I*gP^4U9+b zf2Avzgz;#_5^AC^k(S44^_U|CD1)RbZ3`DLM3Fib*~b|&6Ng;o0lGMUfK#{Pm6BBu z)M*ZcCZ=e)!w>IXpb((-nQ=kua|6e!vfy1mdAd|Cco~y7-20XX1YUHWs5R@H&24?^ z7+knCk$XQG@{F#@`ZO=_b91_@D(KT3sDBAm7fk?yhm}_PlcFe3O08Nef6SYxE0VKq zeezA#mvCV~?MRCuI-ShDx+Z;4fvh!?XJ^DCqLt};<#K%pR_eLbhVcDT#W3!l__39+ z9Rg2dMmo+5RjBoy*7)jxH65ezk2tnfWGX_=`cq-d;dZ(cnJWvRO=cNoI2)Y-3fS68 zUk!@N)E|NAYJkM!=#m;BVwfC`R}i*m5&X1|b*miH3^cgr2G^QVeGqDGlB+O;qN-f* zcKVDQou|{Kw~7_-ai1@0W&V}u{--&Rp?jHP6*GPGBW=-t#%*SJVQ<3aic3qtJpG65 zX)1=jvCc29ymeEBH_=?X5kh*ubDPN{{+2~ zFaJaB_Pp^w&|n^$0YGt1fj->!LJI?SexpAuKen4UWCZrHb=}YS8aJmb@mfrJ$9H!n z_I%p`wf3p#=bT2j`RNlB(5QPuRLFx$%?SO|LwEN5OP6V3na~fVJ}}+)0wYecq5W(j zCOp!V0(o-YNIl8ebh$5dFPza=_O774uUi@`=`__%T2EyKvVR7RHfbt^16dy_vvPfI zqCqnw1H3_BYT{9bs6o82RkWQ~o=d-c^gk{z!%gF>SGu9$&tva#)l~>Irn-`2%E;AZ zM|9S=XGipV01J+Qz1bVHsU&;$%g?mN`?pyubErbzaJ@-bE7P$m*3Nm(*|V=IN7^_r0lx6+7Tu& zs+mYD&PHyA*y@DzuqHjAS%&a%Q&kr*c^PG@mY&Mte#H@QPDUFp6gV-m0O?#rz;iT2 zAV#q9I;qQGBCLTri)Z!MO;T+U`ru?2I+DS?<7B3;mz{&Tswvb*wyH;U4}lYo{U1&k zsurM--7xh%HeqDu^oP-RK(F3L$$;{0hZ6Zky2YfDyhDs|$Kc#1LKcM(PhnZQ(c*)* z&wbnV;Ulv#pY6v+2dG3;+-Ih{wb#p`fX8Q}nR~#I;@h9%Q?t1a4>4|9xbGf5$ke!? zK4 zN#l`i1{k{)%}QzZRCl=wRu8 z`gAM{bGS{e!Xb7?0A6vlZRTpuNNY$JiibiU<&nLjgURRKlZ?HYMG4_%DUa(#b*0G7 zP^ahO=^kq`CV2KW)_|hsKoAWvEJGw~>Y*B6^*2sGEv|6y_UdW>)ri!NS(x@q_Kmxd zGW#!F2A+u>5_S)h@l13QX_rD_-`BzziK~v+W00$EKyZ#QlRIv4*VMVn<)7AO_8->f zi^F9509%h9)xz!G3+v*|%<<9G4?I1nh6Uf+hcmW6;U!@jOzfmXxj>adJ;B7ZB#cb~ ziF(-|?ItVg{)EHQbi3Tg--)>VTe=*@pjqP?(`2s7Zn}?3`Dg=Y>d#)~rLyg)5$m)0 zWb#5?3Hhc%=yETCa@H!H!G9`oK7uW|Qm8Cmw?=@a> zbl6M+yxB_#w{-e5t~JXC*2O6$&!I10jl9>T7E1vVd)<9(AG^WiBa1-3Mbr$-+2*kO z?#BIu4EjBI3OK}g*F4eAd?fsgXumV#%OKap;_Oec9)TMRx1NtV2-_>ZE&n_dc3A+< z+Q*9T25&VN-XgI}$k0AH^OI1?_uM!Jlc>AUekdL|)Hf<_^EugW)9T^q=w0s>`|>Ub z3daIbcQIccGszCR&5^aKT8|E&e`tTnU%n?=d>`X6H{LmaKVH(5v#9nxTNQfV>pys$ zW?gW9;QeC-6aNTsz}*217jC6=(2rjkZE~}L8X|gN|KRN94OFA5ZoUKtx_><4#NxAF z{0%=la|uk1`i2jCd62<++Mc2|*e^9#uy=R||8aM1ZC8pc%VIL?j)N=^9Y4D>&k(_& z8QDDQ_v8`+VVYkx8`ceu>Z^<5sLe4^?5KaBk-Scz!Ep4Iux2dzV!CsE?GiUkQ3K&1S6ETjMa`7-?4QZ*8jz9 zDyqKJjrWK_r1j}bIqSy$J9suct`>iE+IDm_tSno=iVi6fyWJ}3=l-r z=mMMU1;%>aDwM7yG1``fwd$o5b~|r+u`wyOYyOhYd^E7iZ3xf`dU|AkQz zm+QMT#$b+fX^WQtJA@WZ`hVTd>J(M$r+kA$`W$8Q*o#$43r?^+K%>0t+jP`g{r1P& zmP&^M{H*Qb?TJ7O_hj*dTHQbz(vBJxc+a7BLo|a{Z99D!%_yw7!V4W_1P7lhE09dM zDDqw(`rJfGpW`;TE04WD*eO4!$QgDd_br|Q@Jiya)R$QRa%z>~TTFBLez}PIiYKCN@+BG%OA3*)!F7K-|4y{-lU-Cc0MSiOf1(L zPRi)Ur97y|t*(*o7G8}*dRDkb9v;~Ef$-PX%Lbl5A--9gK23h~vI~l!TDm}%VB?ym zU#(Pt*ZfYUd8SfY1s#dAv-BDeok&qlD}sJuh_APR<?tUyaC)=A8n^)f%l)GYYpy!O{*v zbP-B@4gUT<;;7$Ti4SAu4vtPnmRw&T>{YBhcDRo0b&ekE4C+0WF%?b@0kQiCSa#h% zEyzrVs)Ddc4KAIqI!M;jewSa~)GLIdEka4A4ds9iS|qnt8kl^d?C z3PG{_&5(A{U;FBRLThQ#U53%F=b*LTbD#LCp?a9btUtw^7A!S(&h^?G+9Byq4e+EzsPF%LH^wfu>_!@Bzx?BU@lX2^Ij1p6}oG(Bu%k#nioj-Aylk+Ald?!3FNF}d3s9fSS?V3FWV z9@CpJUa1UM$!-&JC4wRC<1Jp9A*p*-%t;osSD#AM^}JkK-2c4CNcm$~&+!QzeK(j7 zXI}o8awLM(#jbw0M=Eg!$Oa-#mjv&BUXI#WOmwKPmN-(9F>8=~0)0R1dy%kGA@*A= zsa^D3Yul0@THPAADYT2^eg%olm}V;Bnm-DYFoa5s-60W!^Z+m$cPsb`D%Lk#Iq9=^ zt~F&k<8CmBYoC@oTknIiX{HDy&m^$=A1c zfGqICp(ZJ^R#aYA%-m}H;(oSHl26^Tkn9Q7)e-9KjN%)!iQ|Wyo)dW~&noSqX$e(^ zz9!K3>y>>KtAQV(d+{6l%neT};k2&%YnD^@BgLDruzsELVfUqIbW2t8?eaX2wI>RL zi$M0fGIJ?;0rdKQY51Yw;3zAQO=C@sLP|Gdf)g* 
z@5{RR{$K?e>Z#)Nj=ja4KxJPsu3f=*LD##%M2&n={@L1E)TDUEFh-=X$|b0OM9pw! z@0YE68L8n$M%Whvbbh4hCDjuTTITzVwd_u%iWRDpH^~2lm$$roX@Les%XVG+W*Gpr z=GDDa7pM)5d5X@k3uXymM(OI#30eAm|fSmT)o;x&j5 zBl#|1lwO?R@86ZVMANm4ZWB-&vt0B$)wmv5;evNX8BX!EoEPQ&8zP#+SDWr}r*^vj z9-Il)4&nxPZZ1Agl03vC+#l1;Vbm%8pYI9Dn6`dE7;p|fAG{~azI6kC@99ewAq+Ia z+wstJS#nl67g?u&ukiFq_ zi(GgVU0^ga3sJ86tgU+!^%!DJgZj)_0X_0;NiMA5U##cv?A7K1B7&@78*3Vv;E~z~ z?m62q(FQhAr)EmAxVcEF9=@TaYtcBtnYOoX*j(f3G1(=m?zVanHh3Oj3FS1H`W@U-?WfnP7WbT&QRVw`JZA;x5KQ>NyP1g1+)W3*1t4TsgywK#0Xfk} zsJuq(^wXV2enN~t#jTx~T_?Gk?S8IObh@GmtD19K`rz9+3DT)9BvN^F0l zIrSC4fzNyd7{#H5Kc4A;3&VCePd`mTO`|wdQyO)u#LnyO{q66AlY<2rr+WHuhPNbOEir|Ovo*{+bh0MVU{RHs%0-V9a>sVGdNSoF}gOe%6vffRLuVjAi zx<*2L>_O^5{W7|AuQ&V(Ir})3IK2?WsXeLARGb3227_^=#)}{<(sHpK}O=Q zNKRxLb4F)(?Fi?yGzlO@xnV0kx*C+*aMiS&8kSB=DFsJGfnry6fBkC`? zy+0819?Hn6A;BLmp0`=~j=J>yW@r}C=Qe_+M(44I>z#$ol{DFv)0Q%&YJA(s!8t!47H*8r?ohh6!)8@oZL z9Sl=T--OY`0%+#R2V8}}h)S*Lm{EHmFS&KMk9>`(Ki>uYUVymYnCfR^ylPoT=M!g1 zaf=4!?Ht;5z_<_p5r(OX>VG&!^g%bW){q6m=Fx+x`;dt4*P2jXu@YqAr?_dU+#Ra$ zL7gcV0DQP*Ao$%yV1pPD?B#1@^Et1ErgZXCcLz&ixXIoq&+Uyi=kDksn^(e;$z@FK z8x8TknyKK-Fd?l9Q0O~Tn*nP(hgK@;= zjXEd~#M}OTwy@UTB7!znFgi?kHMjDO?p*FSpDm#7w=d`3G}0F?Iw`7JNJg|=+Du0x zQ>Lzv=E?uW@4-sV(DkP>iu2jCm$?&$+4fgS(rmz=j_0D}l6PKar^wyTwa6h&8(I3a z=7zGwcx`;J~ljJ<--!H@`mY&?$GQn^BH)J<6)G!5Qnm z?q3{V`O~!4=k!-Di0Y4;WqGgPpf|+qC&hQS+rs5M$R$7!5F>0JQzF8+KC~&Q#l9DB zk!+QI$!^0NvkbNVT(nb^aG9cSlt0S@H@&Y9RFiBH6OT>Kry1S_Ma^?tl}8i^Q4Cwg zbB7Go$UZ!pQle|Sa>r4=N_?XXFs0dmP}}F21zp+?SG6{Bf?&v6>)V@Q+!jMyvq13F zJ)HXt0~H13qn6ZrBO$geKHuMPptR31!c@eXWH;Y@Ept58&l)mS{VPHVNmT(%t&V z9xZfB6I*-`J?2i7od2)-a(>iFbgj3;at$I{4NIxt=L4cdvzcZc{~D$?a&su-Cm~ID zlJVVops!*My1U|io$JI@I^w|hdS7Al$HD2j0wF1wr)Ve$IrtLN3zmYwG))%$d^UQs$Y@1Q- zF)X2Xr%-iUi?!xU3K^#po&M)Oe^eELz~SGm`98D2Tfg`&FrU#jlMyw1{G;+UF#0Md zLv4Re6@7omIuz+k_B_W~?4kAPM1g@+11BTx@Ex~7Fp#O!L}hVv8i2`Xiy$SyS(1_F zG(BSUolY*$l?#>Om=wwMfz{lF8FK@$P|cysG6sEE@M?+@wannS8RlIiF2n!)>;dDu zQU1v6k!^?9ahl`zBmC3I_KKG4%KTv;LkV2^*^mJSeaAr7aK)un=NSvh`hE&7x5%(f zvp9Or0r##gWgzr)8{&l?H)n>!497o_(Ir0q~#?9d*`nsXCu`1uf@$28M^44xcg` z^Ma^W@200FGs&fYw3}@+WBK#hT%g0}r*ejcRezZo5VASCQ;P{l%B$4^bA;*z!+oam z#>a6is=L9uqL3(Qh+)OA#=n9|5+KeLv=9 zEE-wkyqVjtzm(YszB`hLm-i}21-(M$MQ1?NtUj&`=j&Nag?rO-dZuq6lDTDnIt??w zeyLz+{Vu)nL;o{05(}X~GlXr12RR?oPsx!8AUaf!9Zg*=;7dvonetT6NB2QshXC`{|~% zru6Wdno+kC<3C&8S!Dd(^FA9fnmj3IO) zR<74qTK1(QzU-L*3Y>EJV%B6ORzt+m) zxq-?5=AoV6hLL#IzfTNzF!Wg>_D9Y6`&CKQb!wdYiadX_@@Y^K5_snyO4g1n$E`~D zZ7s+i#ndO?rmBp7482y6PT$YyD)tbV_9b|^eiQ@h4n*><&ZN^+oew1er9EbiH+%}# z>j4EU90uDeF? 
zb${{>Kyag;V{bVz|D~u_UgDx^T3$^3Gt@3v#{XoDA!`H7kg310>n}YZd~%?|)rqmo zTKZ_yohK1~g$r6RUhj9TJ;CKH`^PX|SIg`shApK@kbpKDiSE7kFdE;_38^>qk!}LQ`TioU1k&?R~e_=834Vv5Uzg{M(WQZ zWd4D9)iuyS<|78- zMWVj|V}T}E&cFi+GG@+F?cc;$6)jnn8i`yHoBml?f@jQ45OX-nPbqADUqlG%;Wuc} z6iJp^dUH`;_wk{CYS>M#RdB*pFR@7F*j4phXzeSK?wtss zQwzD{_{4JTb=t;*`Og~bB|uJ!Z-cGYg!oc^o~xP(HwOv7u&0fz+4^Yl6Rytygh)s) zzHoSth2mR3?fb+;(tHt+6dEY!1@pdAPznt3|iV<%PqGa zD5cqd2_b&N-*D2uSTo`Z)F$O_>Py`HQqbM&9=5asjakLZE^fHZ?%}t zMSYw*=@Uw|IJ|pwaA^JIst>twN=k&$2H;D@R|EH$1iSFQ`AVX1c1G@C7Jl9_{dJP!#@Te}^@q z`nooVM<7jmi}7IpqM$%4OZ`!)MQ_Ct^Cthq9#CG{odeGJz>SqV-P1}e;kZ*LVdaDR z0Er^gL2z^3sT<+#CBPIea;BBsq_27ilLPsMJ_STb2xl_ zz3<8`XJ}SvE)=7dJ5FfmNK?U?^hoC=rW98EE48_sjVyq)7dHM;r7hom)pBb}*6%1T zgq&gAMH@)tnq~@`wo)W+J{`d9H2??M^_Iy#XAIQ=Sd+Lmx_1rL7uC@~eMARK8#X)4 z8^!NzDpj@MJq3vFK%`p{P$-jFyIEs{PNx60AD=Z8t{_7*$ny#?GkeM~V?Z{gtC^bL z{(2E`l8Geb^sn{(U&6Jra`Q^Uo~5A_lDPhICxOm@7iDs9_J>Bpe|_1ejyoi0ILuAR zfp-0?7r-4?M~2XQ0T?z&S7@;jccjt>!>uq<<=_uKwcAWkkr!7$R zw207p{)m{ALI7lkD9G6{SmQK^=D}EQj+JI_1a1M<#{96f!oY|jp}U5(?b=%km#B0Q zJx@D@Z4A64-R7DBa!NjkNTA_nb55%$YOJ zx*zXd>&%ClLHXBH?~}hLM+XqasEWIDX&N^5(K}iyCYtE%_mWB>2S5%GTF=&P|0i(3 zpF{voDHRmZVl}jL@2=bEqu*zIjd(S7b{+rGo4<64gbo@i+Qa@(KK||F|Mf408DI!I z-+4yUn*k(`lY7MBuR(n%0p#zQmL6DGjP1{h#xZG!xOgi3p{n$+){)=-ELRuMfN1ZP z8$U9#yIc$AT?d+~2=nixe>lDW`05Hf@F%(j;2N$^mgkw7o0q)M?_X<~Ur6iKI2hHm zNdTZUEj9;2zqs5>cm#kRRB~;&%J_W@vnTJv_%=iuh+W zm;}BoaY^}bmjK>jiu*48Pd4Y@KMtn=qz=?s_RzL8)MJ+RH1AYA!*u|;f|?>_gvALU zSHQ{nmhP^R!Wu5-AM1Aiero`+00n-2KSJe85q}o#k3Q(Wt~DW-iV?4Yq+c^`zSi;wI8xB4f~uKr=wWOl6G%4ph5p|_3Vl13 z6G6tf=6}2R+>=H+!Cv`v$D)M$`|kiUjrg)Gv!bYX+5adve}GgG)&@u+IUlET9*Xh& zES1ZPMfMt1L)$E8v1w|f`{aIpR4hQCh|dPMc>iX_AO26_5YQxd9p79F8C2$@rFiSj zF*!2O7<0ya3Npd)s|t1Zmp}aaAD#^E7%dG5@r7EZi~^7%EgfJ_$?OzE5iq3vILtfC zGC{25=_{YnV)Oh7QI)EAclOg|F#WxR(7w1or(GP%K?e z*exFhRO+g;wofUj2?(f@SpXRejslb+YU}mK#)kiR!LE3q1H^R;)yO2$?q;a;rv1hk zv(x?k3Y#;fLcfq&fGYwIpv1TtHk7-FEZ^KlLc$r=^iC5UB-11M1y_qKLFZFW=|zE; z?*%tFtNCCu!)!5GSu_{9H@L!Y#07Aaai8~>C03%HVg15H0A+)}EaAPEcs4sY#m`O?2Ir@1fTp4Cjm z5Bt^LWQ#j!QUdsn>$VYe4)T6K#;fHR#(!jMzDV@!u5qkA4?_M? z0u|u!mAOaPba;xOEj*T&4+;N7P+zabg@Mrl!3;kGjl(p=AH<21!eUTa^^kwrw?Exb z!54m;A6X7NlMY|(C3ikILGNHtsJmW`(WbA=Jl< z?0L~j>lwvf%ls<7vJAN-g&=AuA*z{}*>43=yKg3JLD#XagZod9$ay zFgrdTZ%dPn-dMfr@-#zBnC0^8$1=pXpiQZ0BZhpx z@GR4eKo+bl;-&;&>rFQ*8$aij^dd1U#fUxnHV?0|V1^NBqcn;o0czdTNqZj^By8a*~!pCs7 z%nJ;9wE|qa!xz)(T_USFN=zq*U5m|Pm7#J{arW>I4Ipbw!MIxNzwm<n*z%JxYAV#kHQ z4?1dld&sY^PljLrMwf8-8DEJH`Wn{yuBu_b$=?t*23*kTrU@=^njh13>(t`BG;O@? zEBH64uDS5K*Q#d8v~AZJ73o=C@XPFiyBE$hvH17yy3?o(4hk~_DqwI6x53M^jd@zyB66mQTT`9=)-I?G8l@}Fiq~p zQzid7ORkCBs8kcmdSQN*K>!*T>cffq3lY5Kz4-uqL6+--6P@vY@|bBTa?`Fds(^Ru z(qMJwWRdVkwcsABs^4E*lk%rPW6)2TXI)>fbB4#4kItr{zsQ}_rdJs}yHj(4mO0q zEgY{mF#+D?v&Q-81@1&}q%5x`wCg?uw*aT_XL8D4(=`^pnC6>n+UW7s`EjYMWw|2> z`!;mr@$^MWZQj85FfN*t-wAWxE^d2iXJ-eQ4kP1(!mD7~HtO07zF2}R z0D8IO-5F{Ftl@mq7)*(j#kyqvb=WMt1E059s7D*3+Hn{cE*)FA10Y=wb~mSNZcN~C zcyXr>k#?`X5IH4i76`UkYj<);p)$civfz`wQY40I&MPrJ05;K5BAnc+#%WJy9&h-@ zm`WmqWlYO1)ERWT*fxy!aL-NKZ;kdOIj<&UxnG<-vuO+ZYb&Ba1}FbJif>1fBZ5_j zCc%On;Wy4D&JMj%tKU5vtpv|Z8`RBWDB_Q&99g!w?O6mcd1lMZ>pf5$Q0G*thp64b zjM-|ET`%iWTe$qQ4jg&@0c3m#K+K6=>83`@dC~qK*ph?)fi2mcA~1z)f0;PY46rP1 z{4vSQJuar^{DTuD{&A*x8!788eDhmvA<*R@Jv378h8z3{%yQn9&| z<(kMKmM`Az4V**S+>Nyrae$Z4o^w04h7eV7RyeBmY(!pP9D8=Vb8G8gpZtKWCf;}? 
zpKPjXTgloTeU<5nopO+>m3ur#s|uxd!btVJu2we)~ zw~~aqK&xyArJf|P8njlavwL2<*@3~q#D5?>zW3q(EDisSX` zH)U=w4Ez2;enb3sn_sfG_t4MD7P^=lIp7727o=~WuZF%O_)VL<+0fjM241X!asi7$ zTNo!mL#^4Et}eae%k2)(&T{J8L+m=*SzEM#FWCvSp8T=!UDm+h^NePB){FD?E2r+` zI82Tk7dxqfOFUJb5&KP-B6Zh#$JHI~)%a|R`mx}g!M8tFJlVGL8;f;E6>+d;xF#<> zT}==Q$Lx1+95fY&6@s8qAL;cOMOCEG#RH)hl2Yd8IIPY5G$QmLo=uZ5NnK-G)3J^z ze2mU(#k^-8q&`UU_u*X8ISUwx3!7Q41hsDHFmg~x*-Kh_L6215G<0B6VvuVI(I z`+;&?XU5r{m1w;wUu^ZryR?_LD};=1^s?`!U0`xeLk&k<>=wfP9QpZ=GE682Ip=hx z+A&?70QQVcQ(w{HnYX;)S@X30kgQyB+FN199$G^Cz2i^|YGNnG`lCT>?~b^JqsOHK z5+n&$6%(!7pwMI-qXIFDPTsojbqf)n2tj==kNks&CS&=q;_h5rp1ywBw-~nCjrT5HTu5$N)h8*&+3AzB`2?K<3`mMvRs$_a}ek$r*KQ_}%`u1SNIDz>oy3XYf zvN33Fb|{uvhMR(Cjm9DmdcEr@mrcW&@rJAh7+aXxbiu2eG|x)_mA={|UVJ} zd;c8(4(tKqY3Tuq;81v{1Sr#RM`{+J4(+0@d=eHqDwpkQx=UdR0M23sd)_8uzgtfa z=w_8o>zl#%wvBHl31SXJ9Gqw&TwefCyO@#jckq`pzgg0 zI7%{=N3fOu(m`9DJKP+#_{`Z%qG|rnoZ+!=D}M1d;O#2V5)hM>$CCKA_iTXDrpfh> z0HNlNX|+{260wGRurSaCudgPEWe@cLALZx!_XJuNH0^pgwcP}?qiYwyNc_mlc$TJV zx5J($yy610Z;*Yg@`q_N;yM@^mJO$~t~~%2L)7jS3nFgjZkg@NG=DZG+GEF8goxO1 z8o%Liy5FX&X5xEo+#LU<2XFY37Pq!LG!n@x?s2RJs4I2MaQgD-I`=+;J4h!TJ*=N? zINUF+@WN`%vq#<%x_tBziDD`q%wykECrA1Y0CJ29K&`W%AL;!l^_@XdZ;4awF)X3W zH)}$}V-c!N{W+MrOq%zE)9O)7{Te-JEfgzfp5rtYe{78Pd1PIgPqZw$56*v9zMZnD z0aTuhiw9Kz8n-9#NeUscj&f6)&vF=5g?`oLL01o`t`cna_reRul?}&Fxry*~#{oWF zGFnIhu~LiG>&7b-et^Lx0pS7AIwlv~vG}5Cx4a!j)!SgGr{b5(PlQ;rQ$1>gwu{_% ztJk{WUC82vw|J(_p1hKqDfEc1@+l zKuEa-7vHat)k}cJ;qbUD(Pv>ssF#(1w9(?5-`Cx{DSD2xr)`|B{2L!1tUAf_-!H%- ze_K4EU%ThGR4{2qfaL4J^D=?01>PqJ6o!H6-U30x4^Q zz)72Fj*7l@j2yBddPqxHHYx7ls_wc|9;#*ZZeaNoASNZ}p;hzszmI~{S(cALBz$WG zPR74mHk_>rI_u{(EbnNR9(N$BMLD3JXWlNG&NM>`*Lsi)$SL7#0-KLC>V~5nOf$95 zmBwuhWha8aXruM;(|WNnF@V|y?>2ezrMkyu4liUoKXcn`ompMnG-pM495O_b0QK9|`ebAPeCb*9>Qu3~Xp z(M9GStyflqicY5jOGaw8IV$o<_FI2_7GPlb8eA$zcrqr{m?A&;ibK6{(Sq7ee|*PE z1W19^SK`F?lK1uz&rkRE^6$F8AfY3SmctKHoZPI;B2bUNg5zOBu&&)|t&jPJ+g=Md z^`F-c2eEo<^i-uCpN0|R%OcPcN>}X>D^yr`t(mE&nx~{Z6F52u?qalA{hIlEmTd2I~I)-hK~FBhtT$KyyYA z9WH2cWVrqlwbEb$`!FcG)SoPdE4DRXvge-Fg=f%J$)6ES& zDpEv?;tZF#ZfD$$XQ4tX72s6N7!YO)UVg=5TCPV*$`IC+Mo2h!*6Wl+^m$WdDpM`O zf+As6x-95nSApmeGDqppmuT!wpgH zQ|65Pwh1QO@*F!>dhF)LG;IV9L6dBQx%zbm4Ww?1I@zMqq^mOODdIzgBrjZamjEgF z0)Zj9Amr+ap~_|%2ojj%e+%0SimsPDrFsLPgrT54kb|QR9m$u*`$+agQKl(vYh&ai zZ`?K4CMm-02k(H(G-D%7hhLYb_k();#1MgRx9YPd;mSq4cvD&#lzm`YiV^h_Y1>|K zVfj=;zM#7$3NAXVkq<_QlQR;6rs27l^Ll{V%P0N)MFEN^%rve{$!iU3Hrb??qPX>;Ao^HJN-8_+h^l5%`@Y^(#YLSQ)$%H}4rZg{=5`=#0 zy6NV3&#r$ig?i-Eo5s+HmzBI8%~yZa13140x|Cn--CWyn?-xo@%C>Gj z(=v)tN%|yK5a4mSppu9I1a75FAWl#K$T)qBq zR@iX(3UC+ijSBB0Mn!WIS4rcF30n)0yZsPx$`35&w(7UpTm3xIz^RlPOz(rz`xk0f zU7z(LkS5>&o{pdeXzppn%m=Sa`!>O9KFy|wT?BNv`U5;vWNy3-z1N>!2nQj_e(XHp zcuGq+_?%}q5D}`v0#B*7_yi}IMrv7Bfy1Qc;KBW_ZYugaewnis)0&*56(<}EZ z{}i!XNMb0bwqLz&TZ^iTld-$xhN?zbWjdj#j6Ge9wdi9F7doPN)I$)b-4s$|?7wmB z_xO9YIp31RuJx?**2o*P?0Qeb6P|a5JiEEcW5z;8G$G{Mxp9RO)yZ^9#B_ul-uZ!Q zvXd4gE{wKD;7lE>Qc2_W1c>$3QpBke6fZLAKBMazYlUCx(W{lvlo$+bI?3w00_=A? 
zz{xpGn3{^F0UXLWxkU7ASzi)z$2PKbSW%{>!+h7|6$@?E$7zxdGH(J5WuwrCLTx4S z5hyT%YKf1okC+wb*Y|pZQ;_x=OA{KLqqKE(M$g@}&sphIC5zjoMVh)Y^ghzw`@;A% z-VlK3Ak$Pc5uF)i#p$&bXDMGkjOKQYhQ1)ZrZs z>|Mp;Nio*#d$?2zvQ6_c+veh850HHK?o91U38VVG6A?3EAY3-k?Bt}IFc^6MQA?gO znCo`6$O!7Tc@UC3GLr9;riv#xrjVvIDQ(&AHa5p?HM>?KljiJ6N8Kf=r%X|4dZt%a z<}LAG=W6p&kz0{qjeW!YcK6aiEUa?^F!Rq?A+KzO%DzdXJrR|mZTLbdc>eX=gQDT{ zS*j;fQup*K-`#{eVWdMvb?pAuW`bv1qLoapq;8)k%=SK$tXequV6FDCuVGF#sSE+c z^GfsjZeC!m-L#?haY%jyFvuA7TSnWbpJR?Dr7xQskSDn1@Zki_>8ZQBbR z`EC$woj0vS6m)U+-M&kjru{^Nlh#Q{2a2+u2w|}->7ZdMD!P`t-fFz=YN3o^Yb|}H zooHS2C0HcQOXlNvjfgba@qqmBRt*HZld?HjpOCgfgwgpNQU?5l+P>=8gX1>q;>%}N zcu``-PpNJ9sPmB3Q<@I|o^7K<-=};^YE;K(0J1(JW8J;)lOC-`0C8e~=Q=6heJ|nc z2RYe#KSKMl%<6q5LI@XVw6D$}Hiz_t8Si6|bW*w@b6(tW5W;Q#FLfOFvTPdAaH?t>IZVtIy)?vnq*_jHtz$l;w@m{2#f2jE7|k z!c1pnYzjeU>WD$<`48@5U0bpS9ynI`h>(`6Q8`l3ZShw(_p^VDWsA>!DQn+8R;C|r znm-H@K5JAOp4rguk9gtuV#*d`L}x=+i#$@m2BG!U@D*8C(ctz;2o4vO_p~4m|e-7%hwpm-aAJ!^Zo)PII61#BTI*NUTKjwM`03l(U z5~^p5dv3MiD8sX#mVA22adLW`wWwEsLw1(W#?+rIT0;Vl$)<}Pm06|*tm>S9hFRd$ zU+hmrb&qjhLl(8pg@XcLz3}9Fd+znQk9}(U=&HH8fX5W>inZ?nq>|g-9@ATGCK`+L zBDSA#HBqBq`TJ0Hjg>tYY0b3Eq1k^ZC-joDxN%%RxH^M)y8YwVU>eelIyy=-B(eIE zj7x=!Fv0WnW_lMGu0lUE2G7Zyrzae-pE_uOx-PzJCH%Q`>8TrjSzqNLQY6GnS(;57TsTbZ^dxOcxXxy+T)5(bT?3$85ahTdL#c*}bf8;Fg1 zgHdEJgae%Q>xo%X8w>?i-@wkdy4Oz2L2`3))z?Fn8L{iN=Y==tg=7tTopzvklTSv1 zE{^jVQ>0ARlyz+n9FsY4T(pL!@fvD>;^ErrTWdYArF2vgILPqO~e@hq#+XVoMlM z_Q_f)%vsC?_d-}qNsmSfJvav}>GT$3!mjlnWHq2$;bqJ8eIPXbBqL2u=XsRh`bkaR zOlE}bpv|Eul{(@@@w}b%gY-rxLCC7={ zdz$V6=7EeOhOb^(@DG5~`sX`(A=r_sduN`L?_hH=>7bR_DXaIS=4e$KLo1@WCpnVX z=v$G5KKdnuna3G?*uqT;-?S}WAbtO-bi#GG+KpMSssF{OdweUEB~>?nX)`aPeYfst zN*5Yv5dJ|vfg;(#SKTJc3TLfmF-;p6-If5i@MC_aUIZEitUt<>O$-N5r;~OmBT9}f zPRllsK`{zb<3Xx;nYq$iVrnQgywy{*LPJPL%mhnU*om&u*9V}->f6&eB~y=J$E zFZLSuAPK6qyL}@TV-b(UL@l4CHVi=aoNPpN3oT=o3cI8==Gq~WI>4kbQvlE(xj?eY zwYx$;z3nP=K;K{{)(!M6lE%11jEw$QXjpBITX?>NcBRI`e&= zODU}|3dXP|Ik&!hi!->=HzGoAr|3>ruGtQKYecVH$&*1~1yyX{AMEJSz493K1&e~#CR_ZC|^MaF0Ahgr(4e0@hwEt>fQx%OeFr3!BZZO zb&UeO=7L>MHCZ)(v8qI`$tU*mOV-@JBW0St>STcDsNc($8m`=Nl{vgx)cz~k36#wm zt70Kz2EOegv7Z;EqjqG23_W1o5YFJbp!#E{7P+H^zc(R*^|N6;GhEZO&NJ!rpSZ{&7(WRG*pb~Z+JFJ?wEH%7qn)Z*^J=vd23c# z=jpX;7qY1lRQYXHpw3P^WP9=%SC4V6YV5j{7=LKYHV@W5*5%Cec%nI`A9LiY=6v)t z=eRrWI7Q`iev9i0$=pwgDjUecTsq8Alv_)6DxWfa(}p@P9&2Np+9ALe4wVz+u(dU4 zS=59aqkD5cMmE^|q-F_-B{MJ6dMB{DsUGCPA!wa$kYJCo;8g+=LwRc>5|4HQDsruKnf?s_ zd#*{mcsz@z^SMJh2M=jkq1jiIvv_SDCy{CtsFlhFa5!L^b!#XnOcOj23 zm(2!G$`nkRSG3|I3bPCni=UX_as1|gQ?b#@?=9DXY8Wu3TK@GHkpw4_6 zF6%HQ)A12&yjn``Na`6mS>p`xs4G^3v7)Ms!0d zE;{jxvks3rIjc-JDH*6(=}VZn*?JyLU|t{%+9mfh0SN|ZBS;&bKa+iL4TeV+8k+k= zO}59NmM|w>1r`qO6g6C2bd97DdMO6NqtkSoEl_>ZDlQI5I{2zFt&;C29MJgWDPd{s z8Xiepj`3lqOJ6R6#WR9QJs5B9sG0VVKgYGrjD~={q^5jRnoZ)3d#?CxE`Pm%%MF|v z=a*>>8GBLVoWy7j$3PHzKk2peGD)(VI=)PE#JegW=qcoS2*%t%RQJH;CT^_86@`MD zWVyQQ$&wR@#rw%`ye%Cs$}uW;53ZY0W2z_~PuDzu(4)cfv*Z2q*C%NssK^-e{8(G` zblUD&OOTfc*-xet%k_$^>F7uOK5l(>;Cof`jw3eBQ+>C-u z+VkXGRp~7APNZUn;0=I%6sJ(v?ot9>Z^)Ac?&ZMc$mZg-31W@(K!iu1VKoIUA@qU? 
zkodrUL{3SzDeY7_){`q%L0tL9wH`YL zHl?M79V~3OUW>1SaSoj%_-D(lOjez6sM?17dV;i=u#}ClZSl&XspySuSM^}lDr|Em|*%2_4@~!#~n2f*%B>Fq$u;; zw7d4oGBnWr`P~^bbZh#`^;z0=^WC1wY#XZnaNd!7ezS2l$WAHM_M0>`!{f2ZI2&M=u>{;>t6z1tlZZtF9JdlHahCXVUT?*%Wrp_v=@>iR6E(?&715_4l` zhihY@O?r4K5+dRC({H|G^ftnTEYRp;ne&6gdI za-Enesk|b5%y8lx2Ud#s$4g%)RUGu?d$sG}My(%@>0(`%Upcq17cqTKvax7Uk__&A z#YnwH-_c%Kn5-aAT^t2%rNvOoy`5rhN zmOPB*(dzI8kCiY#7>7V_s74U=RNjHE*Y8r=;_~gJS@uXMjta8U8N!_D=TPNxoR;o> zcSXj?cl_BV+I=7NA!-D2g4J|TA>3v;Sp>+yygn=KxE-Wen*Xz>&Ou$`(#7r}_hhF{ zohQ~=ne(PH`IUBCUo3O>EX1MJyZvx@}D|If4-Gp8fGIs4he2b zbEI!gNkOS~nOP^HXj4H=u-LX=WA(0yHqq+G1eK;|WWMR^RcRq6Dm(P&+RRX0E8&L< zAH3WjP$Ho;XO*>uq)cWqTr=10S33I5nRsmO_%4*KgPMmnk$t-q#tp1yw1%m^FMFqW zZDu<2DiB6sNVYgdlNVW%S*y;{?pd~$z5_}_+5~}+GYdbXI*kmxyzZgAv&3S(h#T{{ zN{y_14d>+{Wlf{_Y#Z?{nbYbRe01?WS9~=186?vEy>ZvKOB|F)9wX)ta@C1l)s{^{ z8meK~6_2fQR7@v*(q~U)M{&)a1kjM(0~Q-PWM5NFfIVy2&fCvD0~j6ISTepC3-mww zev@2;8WO~K=$2TfIEZ{KOFd@Z=`gm#!&E9A@|^d~3G&Pl`^%Ux{&00j$EE4l#>AANSLB;KN>YMN@)ZY0uNzvp}1 z%az1wxuVDCd4f|?u%P5Ox#NLLP=^iCflCLpHGLm1j3Uj*vodN39>S4f~XGct<&gP9NNb@qAa8- z4$e5Db|o^RrG>7g!=_1wiPP%A9=~3J_u5t;P0e}l7b5eBx5%ITtmv2#(FuH_vTc5z zJ7IwIJwqyrH=|>y^3$<{am3oE(<`unQj(v0U3$7z3pa ziklL=Aq%bCuvr8sDq6g?E^LvFqu#S;ws|@8BMn;nVZMrEN-WDerRC@Mj&m9`ti9Y^ z`IMUGw-{@lep3!Ylr1`QePB}_FX7F>DfLPwXthvcVI@KvAxSR``+jKgGhe1)wGh7!bp)thg^{UGHNWM0Wa~f#hvV!*t3=kZ6 z`l`~%K+tXX6v?*$FUB}kMc3xH^Q3{y@=-rJNYnHv$gBk7q2I-#7|X;t5aQYWh)J;% zNiaQHBugOg+Az3^EDe5s;&uDgUIsvYrQ9^xHeHs|*vWqGt z8pRqi9IsZo_y8U{b_Z}AkD6}|`-Jv7Wk(m_{4WUPyP=jP#=e^d6;eIz0_skA z2F(1khS~xYV|Qk^smyynJSNM(Hq}_X8Ov99t8pd^gE0gt#4f7m$Wh3DuaQS}i2kA{ zc@gUYa5MSJ7z77qaM6ueM&cE&Q?CT%Y*q|(=T2xSJ;`$H87{LrjPU;O#S9g2d>JSiH^^SHI`o!n&5$?Ov!kFkNwZRs0qi+X%o0omU4( z%Y;=iFFrXu3dDH=;Q~i=GiPS$bv6U^LtJ72yhoy-;WszIrR4^gO-Om)XttecEwTY> zh>GlDM>)V)wwF{~ZqO8(NypI7mdXdSrwt`L6;lw1uDgQ?^b2b6aQdb5PX+H?19Ik-{U-+10@>l$)nwdUX}R zs~qd}I!EO?S96zinPv^_Zot^EzXrRM9b3+9!aZomqQIWoKDoEc7gR_-s*FF~V2Uj3 zwToLL5Y+cBgCO%{#D<3z`WW7#@+@J5nUfsOOzRV;hOn%2M~Cu7(&`3Zm8eNUEX15hadIe} zTSa$RXcQ8=AS}`Q`MKJ~xNzT{qSDX(o*|sY3}1LRCZ(E+b_#rPD5~*PHufe#=NzM6YOGQL`dSn!3dL|p;blni zAY|Bj#qEe>d98%}3yw)lVvZb82`{YL%9UjJW<*YwJ5gQ3P8zo_vzdk4?flRz8Kd55kq zmJTfS3W6{y)Qd&vagI%evpZsq&mkFtbT}O9IF+wf54ql%8#VbL708%JLB?7ID`Rww zKiP_WGLiYz;YruGSTf8^!qX{Wz%+1kHP-E}3ZdiFA0upFqN5r@%>tD(f6r;=lSj9_ zdmqi?q3>j<{SaW*Y$_VlaeX|SXX`mEc@%oI_rl6*z7sQ@<&mg!Pj%Fl3syCPddT>x(M;VS`!P(fe=qkZN#6DN{6Y*Eqh?PoUw8`_ z3Z;Ju1t~%FF%XK8jCH4a(`$39MN&lb2OyCe{4(qiv`(T>bgZaq)XO|sshk!(qdK=B zv7RXsSh8_xIA}2vVlO_WIJ>@cY878tii_4jUVOZRbNC5N*M}&iFjtPA0Cw3GAew8B zoqF}w1ZYjAfTNT>h{m-SaY zwy2WC4N)E$`Pf8NOj(x7sx@Vc#B1@d*WpqL-Zw0Dx5#QJ3*W7RO{!`xf1@N1$r@V! zxn0`l7*t?BwrD55SG&ffiguvB+ZFr8_G#QB?JLVOu2RrL)%iG0Wgk^xE=;k)=TQau zkgNP$mxML8A?Lze?Jo3-C)v;=O@Re2pnM`% z&q~Z9?^WpOibY~YbA*FBCph?Yyk9-lo)yF*Pty8HxgsS1LUMEW#Lbw<2FU7jpH8OW zFZcxO6ZytHIq#&3Ef{Y?_(38t-BtWBz-OE;)XU@U%T&SovIE+C2s_egAACft-(T|e z*pxHs$2#4*MSLRlTts=DKDK1G6qFAkpP1a|h~00FHC50ia@AFu8#fG_9u6$#9(B5@ z%e9j!lW&9>ARIIQerP8w`IMxnl;^>j>`{K(TlfLO&8O~@mN49hdYr?p1@|x?kE84t z%}wxehX}RupDz2RPs&!zj-U(dcfQfGT8lfq60*CVjpGuuI3rmN>74427R`^eGWA<;FXWOu!dQYYHm4psu{`EtiRF92vfKRs%*Cq z*|7U7yW}$bhaF4iz$82)>Do+jRCz11%S*%3VDu1ehj<|EOkxV4%uJQ(*F}i0K^r>`0*Wt$T>Wqt}+C9~9mvn{tl^QIcMekt}tn zaXVcXeLq?M9L_bm$(-Z~N|7Pj+C*VjVdc&2a0@+?Uhcv%YW_`E+VU9IKD1VB&_>MH zgwt8niF24ZciB4raA`rS&cHK=Wb1$>sf0nhCq00iI4p6ZI7+y+U>;2!AbZ%_ffb~F zOLcrktRw6NCNpx`tn7OG~$%IP@(My?5Ep!T{^*t6C6HwF2p`JqClm>^rKP~ezPIYs(z-aF84 z-@AOCYxFp>7_zy`ZS2<5D!_JLPdiZA!qjYgao#%acdM2D$#gL&G|9{r@ej_lB5rD? 
zycOXKK^b0UYx}RE|Fxbl{HTkD447ob{-n|iU%w?jyqeOC>eiSZz`ZWf7b^Fs>qexBtu?RhFV2~5QaLw1 zQ5+PjG5BLf$y~c=G_kz?hG+mox?0Jqs5ynNTN2fgeFKN%Sm4={Q-68&+)KMUQ30W4 zr+7bNIwM91tmbpz+rSK7JvAx;Ht1nQoUn@zSDw!TXd7WuWN8jYg1Ks+_6&{6sPCz} zkL4SAD4^!JxwqwKMiyHqxH-JG(4MS8LV@FCB=9*EL@M&*TQuoiI*k|ja5tWh6n-<> zS}pp1D%}|bPe*WsI@V0Z3tx4xGnvA@JFV5KwEX{wkKf-y^`|`gR5!XjjSlwqCKneY zbU$j-BGlfgo2kvxu_UiPtWj8SoD=BYWQ`$+~L6I7W zsQre;>f4vq4fP@Ip3xWe#jKb$X?V&g|KOpQa30Efz!m6U!O0!q)}6biq(t(jsi<$? z+&P)D!HO!suB(7XjA2OxLRJM%r|rmSRvctwF%)$q;oIO147jSV_;1Y0KQm&?dEQ(_ zz(#Xlz;>fK0|-d!c#CUhQo}1ZVm^UQuc>Dn@}9)`*Oo;l?1t_44Yb$%EM=aSM_YtL zY&t)R;@{PQZj?HVtV2T1*~&&TYG@4fSNtG;=HFge9oRNMctJ>F@Qs42yqu|c`gvrZ zqw@6Zl=?u*;JRp7JY3Rc-HH&;L2bN1XH|0yF`O;;XMxaiiM(z!CxPsg*0Ty1 z*+ef!0>h^J!TEQ{J$}AErQ#AybrQOw>ThLc0SGiX6JRT^=&RAYC6O&Yy8Aj-i(-LB z80j5;YH)mZ&=nVJtN=;0L&a1^kb8Oe^%QNFCt4a)O7U)iCd?(US8-?d~u+tRq<9pj%q9Adb z4Z3fdkR%GQ31^{NI8h$Js3;Lv#z_-|ZN%@B0w@aSfb zJ>nEkz3-AsDi<$*l!1coso~4@7edlWVY z?;MqMvvu(p)#YQ>t5N*iO{5`k(#a@m1o?;1&K}AcBam(9VE;#{MeC$0t=pvhx5Ij0 z6>jQj)zs(pI)!}nUJ4kcb9Y( zTFTY^a4b!lsfco8@l1K(}rwXa8$WOx_!AOn8h^0t^Jy(973))pDw(aSA=yU86MJMW#s?(gko7;t$bDjt=h;+oQB|_LBEcfpQ>>Tz(zg< zS=_IEt4J)HZ`#tBPXTb^x_C5auGHT4)5iLCmN!+0(r_G6xv#`*g-%ku$2A+SeKA`c($&Zk&Yyo_kBjSkOb3J%kV$UGF1ccqY% zH#)l+85sjYS3e3*_&a(cqDg?Soi!jm&EKa)^fFGyywpt9TdveB#cg%L%)!d2Y=Z;#e z1lSG(y;qf5M!Sv2?0KjOWSts_-f-3e1Ds}ZUx7tjmfQp{w7d*jOEf?z$w1QjZF|oupW*;;21-tYYz3yPHgT;ug zolj#InJ@RlJ<&x{IB!#_N^9UThzmnrrM;S&%OC7MRF3{Kx%vcwL!D|j$mx}=VbdcS zD2MN-v%Ha(XK*)Q<#B^=1F2RH&-*HR+6RY*Z3)R_(Wq?=MidlOY|vTO`l z;Rwvxu#=23Rf^-CxDsxBbqmuGSAkcInLsKseJ!%hosZmk!TO3kZPc|%P}!1*Wo25= z2lg}Z^sI-EiFQl<>2JB?{QLKCAAw%ed-cjXSN9L!64ngL4(4BE#cxEl`j0+sTr^+( zN^#SiUt+9#*sEDs-EsbmS%{>7xp;VRwP=Z%d&_2YIvk)N20QAXN=DJhvWevv>LLD9 z;cnmKkP&9W0g&+z2wa6@(V&O}o9|$Z`Q@KVCTdESqqgp~kn@0c#7&y9pnlN*+^245 zS=@r}Fw{ygqO1OacGL&*ouP1=tND}4JAKMZ$xueBMK|YDJRT))*7A>UVCAE>B55mk zG=nf;)6wF7`A^E&I2;8cD^noZ%$>@oilt}E zDO@KUDHVpucc5Z$6Q^;Dh_H@VPwa>tTbvzt%xqev9QOZ^Ch7Gk8-GM zhL;c7Z$92{M(Cfe!as%f7J{}ks3&&(EU(cqcKTRg8htnIRckG?a5DTrg{>R7iH_n%FCtI0xj=GQIITTI2vgR5P->O+k6+uy#5 zpGj}k4R6f%vKx1^+_6kT`!{3%`!O|OoWLJ{`i`uRlS9!OzJ1) zpO^L&u*P)2>T6Is;dq_|23KL@&u9ii;4E<~PCooAg8Y+A{eobOq-7(bv? z%a1P>T%+Mq{5u1qgO5dpo8TeIUq9Tt0vs=m?`xb`^B543r zy&;m%e{=9(pD26*G9vwpu9OV^1D{_!(Rv*EPljk{*`3l0!n7V1*`3ySmf`=3t!hVr z9qaF1nI3=lx_ze`4B={9H5;to69}vA2Q9Hw#`k_n&)fIPm;ptS?QJ+-leZcCKH8J4 z?CPqD!$2@RNW^Z;u1B|t#^yZ0sm?_IMFQBh)JMDzR@cIQ)q=!dW%ql>`K%D`k`&;dsg3AkXB_r8S zb!q<;+_*<%HX`jP>(j_sGNREnF~-}r&F=_1nK+xT?yP3z z$;;Tk2II5Fpp8Dzj`a6y|NSlhVyN&b+{A_M#%a-*ExPYcSEs8ea*(oJio(P7HwAp# z?dGp1N3x+peV%+Al>pM)_+YsM(DuRK(N-7Gc5-`GCAce%#=81y*0^#n-1K4`ouWI{ z#N&DhR7Oj)p0s1zJo5Iv^DEnX3t@K}hAQ`_%V;{UP_)6M((6|9qh# zKagf!COz@c&xa9D@ZISSR2pd{TwetDZ!Qi-28uoNLvYMCIeIrpm1q5OjBA=6@q^Cm zl@YB$Jc!13t4!e^2Tiq}9K5k?3nLTqHA8Sa-l*l^I{rG~S+E}%RaLIW9(tZOsLOc9IwBY4zZmHV+IRm^%9VUy4WoFWj9YdGA**^qZD>J@N8s ziqb>-+g}nk2Jg=Hnex%!7<6#A-$o66;DN~B+${mTdwWRG`@g5d|A_S)3I9i|e~8lm pBnxg2{-=iju`_ is an essential concept in TorchOpt. +They can be thought as a generalization of vectors. +They are a way to structure parameters or weights using tuples and dictionaries. +Many solvers in TorchOpt have native support for pytrees. + +Floating-Point Precision +------------------------ + +TorchOpt uses single (32-bit) floating precision (``torch.float32``) by default. +However, for some algorithms, this may not be enough. +Double (64-bit) floating precision (``torch.float64``) can be enabled by adding the following lines at the beginning of the file: + +.. 
+
+Floating-Point Precision
+------------------------
+
+TorchOpt uses single-precision (32-bit) floating point (``torch.float32``) by default.
+However, for some algorithms, this may not be enough.
+Double-precision (64-bit) floating point (``torch.float64``) can be enabled by adding the following lines at the beginning of the file:
+
+.. code-block:: python
+
+    import torch
+
+    torch.set_default_dtype(torch.float64)
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 96736ebb..d8233da7 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -1,4 +1,4 @@
-# Copyright 2022 MetaOPT Team. All Rights Reserved.
+# Copyright 2022-2023 MetaOPT Team. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -19,7 +19,7 @@
 
 # pylint: disable=all
 
-# -- Path setup --------------------------------------------------------------
+# -- Path setup ----------------------------------------------------------------
 
 # If extensions (or modules to document with autodoc) are in another directory,
 # add these directories to sys.path here. If the directory is relative to the
@@ -62,16 +62,16 @@ def filter(self, record):
 
 sphinx_autodoc_typehints._LOGGER.logger.addFilter(RecursiveForwardRefFilter())
 
-# -- Project information -----------------------------------------------------
+# -- Project information -------------------------------------------------------
 
 project = 'TorchOpt'
-copyright = '2022 MetaOPT Team'
+copyright = '2022-2023 MetaOPT Team'
 author = 'TorchOpt Contributors'
 
 # The full version, including alpha/beta/rc tags
 release = get_version()
 
-# -- General configuration ---------------------------------------------------
+# -- General configuration -----------------------------------------------------
 
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
@@ -129,8 +129,9 @@ def filter(self, record):
 # The name of the Pygments (syntax highlighting) style to use.
 pygments_style = 'default'
 
-# -- Options for autodoc -----------------------------------------------------
+# -- Options for autodoc -------------------------------------------------------
 
+autosummary_generate = False
 autodoc_default_options = {
     'member-order': 'bysource',
     'undoc-members': True,
@@ -141,16 +142,21 @@ def filter(self, record):
 autoclass_content = 'both'
 simplify_optional_unions = False
 
-# -- Options for bibtex -----------------------------------------------------
+# -- Options for autosummary ---------------------------------------------------
+
+autosummary_generate = False
+# numpydoc_class_members_toctree = False
+
+# -- Options for bibtex --------------------------------------------------------
 
 bibtex_bibfiles = ['references.bib']
 
-# -- Options for myst -------------------------------------------------------
+# -- Options for myst ----------------------------------------------------------
 
 nb_execution_mode = 'force'
 nb_execution_allow_errors = False
 
-# -- Options for katex ------------------------------------------------------
+# -- Options for katex ---------------------------------------------------------
 
 # See: https://sphinxcontrib-katex.readthedocs.io/en/0.4.1/macros.html
 latex_macros = r"""
@@ -164,7 +170,7 @@ def filter(self, record):
 # Add LaTeX macros for LATEX builder
 latex_elements = {'preamble': latex_macros}
 
-# -- Options for HTML output -------------------------------------------------
+# -- Options for HTML output ---------------------------------------------------
 
 # The theme to use for HTML and HTML Help pages.  See the documentation for
 # a list of builtin themes.
@@ -203,27 +209,27 @@ def setup(app): # # html_sidebars = {} -# -- Source code links ------------------------------------------------------- +# -- Source code links --------------------------------------------------------- extlinks = { 'gitcode': ('https://github.com/metaopt/torchopt/blob/HEAD/%s', '%s'), 'issue': ('https://github.com/metaopt/torchopt/issues/%s', 'issue %s'), } -# -- Extension configuration ------------------------------------------------- +# -- Extension configuration --------------------------------------------------- -# -- Options for napoleon extension ------------------------------------------ +# -- Options for napoleon extension -------------------------------------------- napoleon_include_init_with_doc = True napoleon_include_private_with_doc = False napoleon_include_special_with_doc = True -# -- Options for intersphinx extension --------------------------------------- +# -- Options for intersphinx extension ----------------------------------------- # Example configuration for intersphinx: refer to the Python standard library. intersphinx_mapping = {'python': ('https://docs.python.org/3', None)} -# -- Options for todo extension ---------------------------------------------- +# -- Options for todo extension ------------------------------------------------ # If true, `todo` and `todoList` produce output, else they produce nothing. todo_include_todos = True diff --git a/docs/source/developer/contributing.rst b/docs/source/developer/contributing.rst index b4c4c825..ee66f560 100644 --- a/docs/source/developer/contributing.rst +++ b/docs/source/developer/contributing.rst @@ -12,7 +12,7 @@ Before contributing to TorchOpt, please follow the instructions below to setup. git remote add upstream git@github.com:metaopt/torchopt.git -2. Setup a development environment via `conda `_: +2. Setup a development environment via `conda `_ / `mamba `_: .. code-block:: bash @@ -53,7 +53,7 @@ We use several tools to secure code quality, including: * PEP8 code style: ``black``, ``isort``, ``pylint``, ``flake8`` * Type hint check: ``mypy`` - * C++ Google-style: ``cpplint``, ``clang-format`` + * C++ Google-style: ``cpplint``, ``clang-format``, ``clang-tidy`` * License: ``addlicense`` * Documentation: ``pydocstyle``, ``doc8`` diff --git a/docs/source/developer/contributor.rst b/docs/source/developer/contributor.rst index 407b53b0..2358f963 100644 --- a/docs/source/developer/contributor.rst +++ b/docs/source/developer/contributor.rst @@ -3,5 +3,5 @@ Contributor We always welcome contributions to help make TorchOpt better. Below is an incomplete list of our contributors (find more on `this page `_). -* Yao Fu (`future-xy `_) -* Vincent Moens (`vmoens `_) +- Yao Fu (`future-xy `_) +- Vincent Moens (`vmoens `_) diff --git a/docs/source/distributed/distributed.rst b/docs/source/distributed/distributed.rst new file mode 100644 index 00000000..f85eec3f --- /dev/null +++ b/docs/source/distributed/distributed.rst @@ -0,0 +1,740 @@ +Distributed Training +==================== + +.. currentmodule:: torchopt.distributed + +Distributed training is a technique that allows you to train your pipeline on multiple workers/machines. +This is useful when you have a large model or computation graph that doesn't fit on a single GPU/machine, or when you want to train a model faster by using more resources. + +TorchOpt offers a simple API to train your model on multiple GPUs/machines based on the PyTorch |Distributed RPC|_. 
+Here are some key concepts that TorchOpt's distributed mechanism relies on:
+
+- **Remote Procedure Call (RPC)** supports running a function on the specified destination worker with the given arguments and getting the return value back or creating a reference to the return value.
+
+  That is, you can treat the remote worker as an accelerator: you can call a function on a remote worker and get the result back on the local worker.
+
+- **Distributed Autograd** stitches together the local autograd engines on all the workers involved in the forward pass, and automatically reaches out to them during the backward pass to compute gradients.
+
+  This is much more flexible and fits the meta-learning use case, which has a complex task dependency tree.
+
+.. |Distributed RPC| replace:: Distributed RPC Framework (``torch.distributed.rpc``)
+.. _Distributed RPC: https://pytorch.org/docs/stable/rpc.html
+
+Here are some useful resources to learn more about distributed training:
+
+- `Distributed RPC Framework `_
+- `Distributed Autograd Design `_
+- `Remote Reference Protocol `_
+- `RPC tutorials `_
+- `Autograd mechanics `_
+- **Example**: :ref:`Using TorchOpt with Distributed Training `
+
+------
+
+Why RPC-Based Distributed Training
+----------------------------------
+
+Due to the Global Interpreter Lock (GIL) in Python, only one thread can execute Python code at a time.
+This means that you can't take advantage of multiple cores on your machine.
+To gain faster execution, distribute the workload across multiple processes, namely workers, that run in parallel.
+Each worker will have its own Python interpreter and memory namespace.
+
+Compared to single-process programming, you need to be aware of the following:
+
+- **Communication**: You need to explicitly send and receive messages between workers.
+- **Synchronization**: You need to explicitly synchronize the states between workers.
+
+Message Passing Interface (MPI) and Distributed Data-Parallel Training (DDP)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+`MPI `_ is a standard for message passing between processes.
+It is a popular choice for `Distributed Data-Parallel Training (DDP) `_.
+PyTorch has implemented this with several `backends `_, including `Gloo `_, `MPI `_, and `NCCL `_.
+
+However, MPI-based parallelism has some drawbacks:
+
+- **MPI is not user-friendly**.
+  MPI-like APIs only provide low-level primitives for sending and receiving messages.
+  They require the users to manage the message passing between workers manually.
+  The users should be aware of the communication pattern and the synchronization between workers.
+
+- **MPI is not flexible**.
+  MPI-like APIs are designed for `Distributed Data-Parallel Training (DDP) `_, which is a widely adopted `single-program multiple-data (SPMD) `_ training paradigm.
+  However, for meta-learning tasks, the task dependency tree is complex and dynamic.
+  It may not fit into the SPMD paradigm.
+  It is hard to implement a distributed autograd engine on top of MPI.
+
+- **MPI only communicates the values of tensors, not the gradients and autograd graphs**.
+  This is a limitation of MPI.
+  The users need to handle the gradients manually across multiple workers.
+  For example, receive the gradients from other workers and pass them as ``grad_outputs`` to the function |torch.autograd.grad|_, as in the sketch after this list.
+
+.. |torch.autograd.grad| replace:: ``torch.autograd.grad``
+.. _torch.autograd.grad: https://pytorch.org/docs/stable/generated/torch.autograd.grad.html
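+
+Here is such a minimal sketch of stitching gradients together by hand (assuming a two-worker process group has already been initialized with ``torch.distributed.init_process_group``; shapes and the division of labor between workers are illustrative only):
+
+.. code-block:: python
+
+    import torch
+    import torch.distributed as dist
+
+    # On worker 0: compute an intermediate result and ship it to worker 1.
+    w = torch.randn(3, 3, requires_grad=True)
+    y = w.sum(dim=0)
+    dist.send(y.detach(), dst=1)
+
+    # ... worker 1 computes a loss on `y` and sends back d(loss)/d(y) ...
+
+    # Back on worker 0: receive the upstream gradient and resume the local
+    # backward pass manually via `grad_outputs`.
+    grad_y = torch.empty_like(y)
+    dist.recv(grad_y, src=1)
+    (grad_w,) = torch.autograd.grad(outputs=y, inputs=w, grad_outputs=grad_y)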
+
+Distributed Autograd with Remote Procedure Call (RPC)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To address the needs of meta-learning tasks, whose training processes are complex and dynamic in nature, TorchOpt uses the PyTorch |Distributed RPC|_ to implement its distributed training mechanism.
+PyTorch implements the RPC communication operations with appropriate ``RpcSendBackward`` and ``RpcRecvBackward`` functions.
+The `Distributed Autograd Engine `_ automatically calls these functions to send and receive the gradients between workers.
+
+With **RPC** and **Distributed Autograd**, TorchOpt distributes a **differentiable optimization** job across multiple workers and executes the workers in parallel.
+It allows the users to build the whole computation graph (**both forward and backward**) across multiple workers.
+The users can wrap code in the distributed autograd module and achieve substantial speedup in training time with only a few changes in existing training scripts (:ref:`example `).
+
+Here is an example of a distributed autograd graph using RPC, taken from the `Distributed Backward Pass `_ documentation:
+
+.. code-block:: python
+    :emphasize-lines: 13, 18, 28, 31
+
+    import torch
+    import torch.distributed.autograd as dist_autograd
+    import torch.distributed.rpc as rpc
+
+    def my_add(t1, t2):
+        return torch.add(t1, t2)
+
+    # On worker 0:
+
+    # Setup the autograd context. Computations that take
+    # part in the distributed backward pass must be within
+    # the distributed autograd context manager.
+    with dist_autograd.context() as context_id:
+        t1 = torch.rand((3, 3), requires_grad=True)
+        t2 = torch.rand((3, 3), requires_grad=True)
+
+        # Perform some computation remotely.
+        t3 = rpc.rpc_sync("worker1", my_add, args=(t1, t2))
+
+        # Perform some computation locally based on the remote result.
+        t4 = torch.rand((3, 3), requires_grad=True)
+        t5 = torch.mul(t3, t4)
+
+        # Compute some loss.
+        loss = t5.sum()
+
+        # Run the backward pass.
+        dist_autograd.backward(context_id, [loss])
+
+        # Retrieve the gradients from the context.
+        dist_autograd.get_gradients(context_id)
+
+.. image:: https://pytorch.org/docs/stable/_images/distributed_dependencies_computed.png
+
+For more details, please refer to the `Distributed Autograd Design `_ documentation.
+
+------
+
+TorchOpt's Distributed Training
+-------------------------------
+
+TorchOpt's distributed package is built upon the PyTorch |Distributed RPC|_ and |Distributed Autograd Framework|_.
+
+.. |Distributed Autograd Framework| replace:: Distributed Autograd Framework (``torch.distributed.autograd``)
+.. _Distributed Autograd Framework: https://pytorch.org/docs/stable/rpc.html#distributed-autograd-framework
+
+TorchOpt provides some utility functions to make it easier to use the distributed training mechanism.
+
+Initialization and Synchronization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. autosummary::
+
+
+    torchopt.distributed.auto_init_rpc
+    torchopt.distributed.barrier
+
+Users can wrap their program entry function with the decorator :func:`torchopt.distributed.auto_init_rpc`:
+
+.. code-block:: python
+    :emphasize-lines: 14
+
+    import argparse
+    import torchopt.distributed as todist
+
+    def parse_arguments():
+        parser = argparse.ArgumentParser()
+        ...
+
+        return args
+
+    def worker_init_fn():
+        # set process title, seeding, etc.
+        ...
+
+    @todist.auto_init_rpc(worker_init_fn)
+    def main():
+        # Your code here
+        args = parse_arguments()
+        ...
+
+
+    if __name__ == '__main__':
+        main()
+
+The decorator will initialize the RPC framework and synchronize the workers on startup.
+
+.. note::
+
+    By default, all tensors must be moved to the CPU before being sent to other workers.
+    If you want to send/receive tensors directly between GPUs on different workers, you need to specify the ``rpc_backend_options`` with ``device_maps``.
+    Please refer to the documentation of |torch.distributed.rpc.init_rpc|_ for more details.
+
+.. |torch.distributed.rpc.init_rpc| replace:: ``torch.distributed.rpc.init_rpc``
+.. _torch.distributed.rpc.init_rpc: https://pytorch.org/docs/stable/rpc.html#torch.distributed.rpc.init_rpc
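+
+For illustration, here is a minimal sketch of such a device map using plain ``torch.distributed.rpc`` (the two-worker setup, worker names, and GPU indices are hypothetical):
+
+.. code-block:: python
+
+    import torch.distributed.rpc as rpc
+
+    options = rpc.TensorPipeRpcBackendOptions()
+    # Tensors leaving this worker's GPU 0 will arrive on GPU 1 of 'worker1'.
+    options.set_device_map('worker1', {0: 1})
+    rpc.init_rpc('worker0', rank=0, world_size=2, rpc_backend_options=options)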
+
+    if __name__ == '__main__':
+        main()
+
+The decorator will initialize the RPC framework and synchronize the workers on startup.
+
+.. note::
+
+    By default, all tensors must be moved to the CPU before being sent to other workers.
+    If you want to send/receive tensors directly between GPUs on different workers, you need to specify the ``rpc_backend_options`` with ``device_maps``.
+    Please refer to the documentation of |torch.distributed.rpc.init_rpc|_ for more details.
+
+.. |torch.distributed.rpc.init_rpc| replace:: ``torch.distributed.rpc.init_rpc``
+.. _torch.distributed.rpc.init_rpc: https://pytorch.org/docs/stable/rpc.html#torch.distributed.rpc.init_rpc
+
+Then, users can use |torchrun|_ to launch the program:
+
+.. code-block:: bash
+
+    torchrun --nnodes=1 --nproc_per_node=8 YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)
+
+.. |torchrun| replace:: ``torchrun`` (Elastic Launch)
+.. _torchrun: https://pytorch.org/docs/stable/elastic/run.html
+
+Process group information
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. autosummary::
+
+
+    torchopt.distributed.get_world_info
+    torchopt.distributed.get_world_rank
+    torchopt.distributed.get_rank
+    torchopt.distributed.get_world_size
+    torchopt.distributed.get_local_rank
+    torchopt.distributed.get_local_world_size
+    torchopt.distributed.get_worker_id
+
+After initializing the RPC server, users can use the functions above to get the process group information.
+
+For example, use :func:`torchopt.distributed.get_local_rank` to determine which GPU device to use:
+
+.. code-block:: python
+
+    import torch
+    import torchopt.distributed as todist
+
+    def worker_init_fn():
+        local_rank = todist.get_local_rank()
+        torch.cuda.set_device(local_rank)
+
+    @todist.auto_init_rpc(worker_init_fn)
+    def main():
+        ...
+
+Worker selection
+~~~~~~~~~~~~~~~~
+
+.. autosummary::
+
+
+    torchopt.distributed.on_rank
+    torchopt.distributed.not_on_rank
+    torchopt.distributed.rank_zero_only
+    torchopt.distributed.rank_non_zero_only
+
+TorchOpt provides some decorators to execute the decorated function only on specific workers.
+
+For example, use :func:`torchopt.distributed.rank_zero_only` to execute the function only on the main worker (``worker0``), such as saving checkpoints or logging the results:
+
+.. code-block:: python
+    :emphasize-lines: 3, 7, 11
+
+    import torchopt.distributed as todist
+
+    @todist.rank_non_zero_only
+    def greet():
+        print(f'Greetings from worker(rank={todist.get_rank()})!')
+
+    @todist.rank_zero_only
+    def save_checkpoint(model):
+        ...
+
+    @todist.rank_zero_only
+    def log_results(writer, results):
+        ...
+
+    @todist.auto_init_rpc()
+    def main():
+        greet()
+
+        ...
+
+        for epoch in range(args.epochs):
+            ...
+
+            if epoch % args.log_interval == 0:
+                log_results(writer, results)
+
+            if epoch % args.save_interval == 0:
+                save_checkpoint(model)
+
+Remote Procedure Call (RPC)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. autosummary::
+
+
+    torchopt.distributed.remote_async_call
+    torchopt.distributed.remote_sync_call
+
+TorchOpt provides two functions to execute remote procedure calls (RPCs) on remote workers.
+The asynchronous :func:`remote_async_call` function returns a |torch.Future|_ object, while the synchronous :func:`remote_sync_call` function executes the call and returns the result directly.
+
+.. |torch.Future| replace:: ``torch.Future``
+.. _torch.Future: https://pytorch.org/docs/stable/futures.html#torch.futures.Future
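+
+When dispatching several asynchronous calls at once, the returned futures can be collected and awaited together.
+A small sketch, where ``compute_chunk`` and the pre-split ``chunks`` are hypothetical placeholders:
+
+.. code-block:: python
+
+    import torch
+    import torchopt.distributed as todist
+
+    futures = [
+        todist.remote_async_call(compute_chunk, args=(chunk,), partitioner=worker_id)
+        for worker_id, chunk in enumerate(chunks)
+    ]
+    results = torch.futures.wait_all(futures)  # blocks until every future completes
+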
+
+Users can distribute their workload (a function) to a specific worker by:
+
+.. code-block:: python
+    :emphasize-lines: 12
+
+    import torchopt.distributed as todist
+
+    @todist.auto_init_rpc(worker_init_fn)
+    def main():
+        ...
+
+        # Execute the function on the remote worker (asynchronously)
+        future = todist.remote_async_call(
+            func,
+            args=(arg1, arg2, ...),
+            kwargs={...},
+            partitioner=worker_id,
+        )
+
+        # Wait for the result
+        result = future.wait()
+
+        ...
+
+or
+
+.. code-block:: python
+    :emphasize-lines: 12
+
+    import torchopt.distributed as todist
+
+    @todist.auto_init_rpc(worker_init_fn)
+    def main():
+        ...
+
+        # Execute the function on the remote worker
+        result = todist.remote_sync_call(
+            func,
+            args=(arg1, arg2, ...),
+            kwargs={...},
+            partitioner=worker_id,
+        )
+
+        ...
+
+TorchOpt follows the `MapReduce programming model `_ to distribute the workload.
+
+The ``partitioner`` argument specifies the worker to execute the function.
+Users can optionally specify the ``reducer`` argument to aggregate the results from the workers.
+Finally, the caller will get a reference to the result on the local worker.
+
+- ``partitioner``: a function that takes the ``args`` and ``kwargs`` arguments and returns a list of triplets ``(worker_id, worker_args, worker_kwargs)``.
+
+  The ``partitioner`` is responsible for partitioning the workload (inputs) and distributing it to the remote workers.
+
+  If the ``partitioner`` is given as a worker ID (:class:`int` or :class:`str`), the function will be executed on the specified worker.
+
+  If the ``partitioner`` is not given, :func:`torchopt.distributed.batch_partitioner` will be used.
+
+- ``mapper``: the ``func`` argument to be executed on the remote worker.
+- ``reducer`` (optional): an aggregation function that takes a list of results from the remote workers and returns the final result.
+
+  If the ``reducer`` is not given, the original unaggregated list is returned.
+
+Predefined partitioners and reducers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. autosummary::
+
+
+    torchopt.distributed.dim_partitioner
+    torchopt.distributed.batch_partitioner
+    torchopt.distributed.mean_reducer
+    torchopt.distributed.sum_reducer
+
+We provide some predefined partitioners and reducers.
+Users can combine :func:`torchopt.distributed.batch_partitioner` and :func:`torchopt.distributed.mean_reducer` to achieve distributed data parallelism (DDP) easily:
+
+.. code-block:: python
+    :emphasize-lines: 18, 19
+
+    import torchopt.distributed as todist
+
+    def loss_fn(model, batch):
+        ...
+
+    @todist.rank_zero_only
+    def train(args):
+
+        for epoch in range(args.epochs):
+            ...
+
+            for batch in dataloader:
+                # Partition the data on the batch (first) dimension and distribute it to the remote workers
+                # Aggregate the results from the remote workers and return the mean loss
+                loss = todist.remote_sync_call(
+                    loss_fn,
+                    args=(model, batch),
+                    partitioner=todist.batch_partitioner,
+                    reducer=todist.mean_reducer,
+                )
+
+                ...
+
+We also provide a :func:`torchopt.distributed.dim_partitioner` to partition the data along the specified dimension.
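+
+Conceptually, ``dim_partitioner(0)`` splits a batch of shape ``(T, B, *)`` along its first dimension into ``T`` independent sub-batches of shape ``(B, *)``, one per remote call.
+A plain-Python illustration of the partitioning (no RPC involved; the shapes are made up):
+
+.. code-block:: python
+
+    import torch
+
+    batch = torch.randn(4, 8, 5)  # (T=4, B=8, feature=5)
+    sub_batches = [batch[i] for i in range(batch.shape[0])]
+    assert all(sub.shape == (8, 5) for sub in sub_batches)
+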
+While implementing the **Model-Agnostic Meta-Learning** (MAML) :cite:`MAML` algorithm, users can use this to parallelize the training of the inner loop:
+
+.. code-block:: python
+    :emphasize-lines: 29, 30
+
+    import torchopt.distributed as todist
+
+    def inner_loop(model, task_batch, args):
+        # task_batch: shape = (B, *)
+        inner_model = torchopt.module_clone(model, by='reference', detach_buffers=True)
+
+        # Inner optimization
+        for inner_step in range(args.inner_steps):
+            inner_loss = inner_loss_fn(inner_model, task_batch)
+
+            # Update the inner model
+            ...
+
+        # Compute the outer loss
+        outer_loss = inner_loss_fn(inner_model, task_batch)
+        return outer_loss
+
+    @todist.rank_zero_only
+    def train(args):
+
+        for epoch in range(args.epochs):
+            ...
+
+            for batch in dataloader:
+                # batch: shape = (T, B, *)
+                outer_loss = todist.remote_sync_call(
+                    inner_loop,
+                    args=(model, batch),
+                    partitioner=todist.dim_partitioner(0, exclusive=True, keepdim=False),
+                    reducer=todist.mean_reducer,
+                )
+
+                ...
+
+The ``dim_partitioner(0, exclusive=True, keepdim=False)`` will split a batch of size ``(T, B, *)`` into ``T`` batches of size ``(B, *)``.
+Each task will be executed on its remote worker **independently** (``exclusive=True``).
+Finally, the results will be aggregated by :func:`torchopt.distributed.mean_reducer` to compute the mean loss.
+Inside the ``inner_loop`` function, users may issue another RPC call to further parallelize the inner-loop optimization.
+
+Function parallelization wrappers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. autosummary::
+
+
+    torchopt.distributed.parallelize
+    torchopt.distributed.parallelize_async
+    torchopt.distributed.parallelize_sync
+
+TorchOpt offers wrappers to parallelize function execution on the remote workers.
+They make remote execution more transparent to users and keep the code structure clean.
+
+.. code-block:: python
+    :emphasize-lines: 3, 9, 10, 11, 12
+
+    import torchopt.distributed as todist
+
+    @todist.parallelize(partitioner=todist.batch_partitioner, reducer=todist.mean_reducer)
+    def distributed_data_parallelism(model, batch, args):
+        # Compute local loss of the given batch
+        ...
+        return loss
+
+    @todist.parallelize(
+        partitioner=todist.dim_partitioner(0, exclusive=True, keepdim=False),
+        reducer=todist.mean_reducer,
+    )
+    def inner_loop(model, batch, args):  # distributed MAML inner loop
+        # batch: shape = (B, *)
+        inner_model = torchopt.module_clone(model, by='reference', detach_buffers=True)
+
+        # Inner optimization
+        ...
+
+        # Compute the outer loss
+        outer_loss = inner_loss_fn(inner_model, batch)
+        return outer_loss
+
+    @todist.rank_zero_only
+    def train(args):
+
+        for epoch in range(args.epochs):
+            ...
+
+            for batch in dataloader:
+                # batch: shape = (T, B, *)
+                outer_loss = inner_loop(model, batch, args)
+
+                ...
+
+Distributed Autograd
+~~~~~~~~~~~~~~~~~~~~
+
+.. autosummary::
+
+
+    torchopt.distributed.autograd.context
+    torchopt.distributed.autograd.get_gradients
+    torchopt.distributed.autograd.backward
+    torchopt.distributed.autograd.grad
+
+In this section, we will introduce the distributed autograd system.
+Please refer to `Autograd mechanics `_ and `Distributed Autograd Design `_ before going through this section.
+
+Recap: Autograd mechanics in single-process training
+""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+In single-process training, the autograd engine automatically tracks the operations on the forward pass and computes the gradients on the backward pass.
+For each operation, if the input tensors have ``requires_grad=True`` set, the output tensor will have a ``grad_fn`` attribute to trace the computation graph.
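+
+For instance, here is a minimal single-process illustration of this tracking:
+
+.. code-block:: python
+
+    import torch
+
+    x = torch.ones(2, requires_grad=True)
+    y = 2 * x
+    print(y.grad_fn)  # -> <MulBackward0 object ...>, linking ``y`` back to ``x``
+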
+On the backward pass, the autograd engine will traverse the computation graph from the output tensors to the input tensors and compute the gradients for each operation.
+
+The |torch.autograd.grad|_ function will compute the gradients of the given ``outputs`` with respect to the given ``inputs``.
+
+.. code-block:: python
+
+    import torch
+
+    model = build_model()
+    loss = compute_loss(model, data)
+
+    params = tuple(model.parameters())
+    grads = torch.autograd.grad(loss, params)
+
+    print(grads)
+
+In practice, users usually use the PyTorch Autograd Engine with ``loss.backward()`` (or |torch.autograd.backward|_) and optimizers:
+
+.. code-block:: python
+
+    import torch
+    import torch.optim as optim
+
+    model = build_model()
+    optimizer = optim.SGD(model.parameters(), lr=lr)
+
+    loss = compute_loss(model, data)
+
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+
+Compared to |torch.autograd.grad|_, the |torch.autograd.backward|_ function will sum and update the ``.grad`` attribute of the parameters.
+
+.. |torch.autograd.backward| replace:: ``torch.autograd.backward``
+.. _torch.autograd.backward: https://pytorch.org/docs/stable/generated/torch.autograd.backward.html
+
+RPC-based Distributed Autograd
+""""""""""""""""""""""""""""""
+
+The PyTorch RPC framework implements the communication ``send-recv`` operations with appropriate backward functions (``RpcSendBackward`` and ``RpcRecvBackward``).
+They can be tracked by the **Distributed Autograd Engine** like the single-process program we discussed above.
+
+The only difference between single-process and distributed training is that users need to explicitly create a **Distributed Autograd Context** and wrap it around the forward and backward passes.
+
+.. code-block:: python
+    :emphasize-lines: 4, 9, 12
+
+    import torch
+    import torch.distributed.autograd as dist_autograd
+
+    with dist_autograd.context() as context_id:
+        # Forward pass
+        loss = ...  # e.g. remote calls
+
+        # Backward pass
+        dist_autograd.backward(context_id, [loss])
+
+        # Retrieve the gradients from the context.
+        grad_dict = dist_autograd.get_gradients(context_id)  # type: Dict[Tensor, Tensor]
+
+.. warning::
+
+    Sending |torch.nn.Parameter|_\s over RPC will automatically detach them from the autograd graph.
+    This is an intentional behavior of the PyTorch framework because the |torch.nn.Parameter|_\s are always leaf nodes in the graph.
+    The leaf tensors will not have a ``grad_fn`` attribute and thus cannot be tracked by the autograd engine after sending them to other workers.
+
+    To make the graph properly trackable across workers, users should convert the |torch.nn.Parameter|_\s to |torch.Tensor|_\s before sending them over RPC.
+    For example, explicitly ``clone()`` the parameters to tensors before taking them as arguments of the RPC call.
+
+    .. code-block:: python
+
+        import torch
+        import torch.distributed.rpc as rpc
+
+        def compute_loss(param):
+            return param.mean()
+
+        param = torch.nn.Parameter(torch.randn(2, 2), requires_grad=True)
+
+        # The RPC call will detach the parameter from the autograd graph on worker1
+        loss1 = rpc.rpc_sync('worker1', compute_loss, args=(param,))
+
+        # The RPC call will keep the connection to the parameter in the autograd graph on worker1
+        loss2 = rpc.rpc_sync('worker1', compute_loss, args=(param.clone(),))
+
+    Users can use the :func:`torchopt.module_clone` function to clone the module and convert all its parameters to tensors.
+    The tensors will have a ``grad_fn`` attribute ``CloneBackward`` to track the computation graph to the original parameters.
+
+    .. code-block:: python
+
+        import torch
+        import torch.distributed.rpc as rpc
+        import torch.nn as nn
+        import torchopt
+
+        def compute_loss(model, batch):
+            ...
+            return loss
+
+        model = nn.Linear(2, 2)
+        tuple(model.parameters())  # -> `nn.Parameter`s
+
+        cloned_model = torchopt.module_clone(model, by='clone')
+        tuple(cloned_model.parameters())  # -> `torch.Tensor`s with `CloneBackward` grad_fn
+
+        # The RPC call will detach the parameter from the autograd graph on worker1
+        loss1 = rpc.rpc_sync('worker1', compute_loss, args=(model, batch))
+
+        # The RPC call will keep the connection to the parameter in the autograd graph on worker1
+        loss2 = rpc.rpc_sync('worker1', compute_loss, args=(cloned_model, batch))
+
+.. |torch.nn.Parameter| replace:: ``torch.nn.Parameter``
+.. _torch.nn.Parameter: https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html
+.. |torch.Tensor| replace:: ``torch.Tensor``
+.. _torch.Tensor: https://pytorch.org/docs/stable/tensors.html
+
+TorchOpt wraps the distributed autograd context and provides a more convenient interface.
+
+.. code-block:: python
+    :emphasize-lines: 5, 10
+
+    import torchopt.distributed as todist
+
+    model = build_model()
+
+    with todist.autograd.context() as context_id:
+        # Forward pass
+        loss = ...  # e.g. remote calls
+
+        # Backward pass
+        grads = todist.autograd.grad(context_id, loss, model.parameters())
+
+or
+
+.. code-block:: python
+    :emphasize-lines: 7, 13
+
+    import torch
+    import torchopt.distributed as todist
+
+    model = build_model()
+    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
+
+    with todist.autograd.context() as context_id:
+        # Forward pass
+        loss = ...  # e.g. remote calls
+
+        # Backward pass
+        optimizer.zero_grad()
+        todist.autograd.backward(context_id, loss)
+        optimizer.step()
+
+.. warning::
+
+    The distributed autograd context is not thread-safe.
+    Users should not use the same context in multiple threads.
+
+Users can update their single-process training code to distributed training code with minimal changes:
+
+#. Add the distributed autograd context around the forward and backward passes.
+#. Wrap the functions with :func:`torchopt.distributed.parallelize` to enable parallel execution.
+#. Convert the parameters to tensors before sending them over RPC.
+#. Replace ``torch.autograd`` with ``torchopt.distributed.autograd``.
+
+Here is a full example of converting the single-process training code to distributed training code:
+
+.. code-block:: python
+    :emphasize-lines: 19, 34, 42, 44, 45, 49, 54
+    :name: distributed-example
+
+    import argparse
+    import setproctitle
+    import torch
+    import torch.nn as nn
+    import torchopt.distributed as todist
+
+    def parse_arguments():
+        parser = argparse.ArgumentParser(description='TorchOpt Distributed Training')
+        ...
+
+        args = parser.parse_args()
+        return args
+
+    def worker_init_fn():
+        # set process title, seeding, etc.
+        setproctitle.setproctitle(f'Worker{todist.get_rank()}')
+        torch.manual_seed(args.seed + todist.get_rank())  # NOTE: assumes a module-level `args`
+
+    @todist.parallelize(partitioner=todist.batch_partitioner, reducer=todist.mean_reducer)
+    def compute_loss(model, batch):
+        device = torch.device(f'cuda:{todist.get_local_rank()}')
+        model = model.to(device)
+        batch = batch.to(device)
+
+        # Compute local loss of the given batch
+        ...
+        return loss.cpu()
+
+    def build_model():
+        return nn.Sequential(
+            ...
+ ) + + @todist.rank_zero_only + def train(args): + model = build_model() + optimizer = torch.optim.SGD(model.parameters(), lr=args.lr) + train_loader = ... + + for epoch in range(args.epochs): + for batch in train_loader: + with todist.autograd.context() as context_id: + # Forward pass + cloned_model = todist.module_clone(model, by='clone') + loss = compute_loss(cloned_model, batch) + + # Backward pass + optimizer.zero_grad() + todist.autograd.backward(context_id, loss) + + # Update parameters + optimizer.step() + + @todist.auto_init_rpc(worker_init_fn) + def main(): + args = parse_arguments() + train(args) + + if __name__ == '__main__': + main() + +Then, users can use |torchrun|_ to launch the program: + +.. code-block:: bash + + torchrun --nnodes=1 --nproc_per_node=8 YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...) diff --git a/docs/source/explicit_diff/explicit_diff.rst b/docs/source/explicit_diff/explicit_diff.rst new file mode 100644 index 00000000..89c38df6 --- /dev/null +++ b/docs/source/explicit_diff/explicit_diff.rst @@ -0,0 +1,162 @@ +Explicit Gradient Differentiation +================================= + +.. currentmodule:: torchopt + +Explicit Gradient +----------------- + +.. image:: /_static/images/explicit-gradient.png + :width: 80% + :align: center + +The idea of explicit gradient is to treat the gradient step as a differentiable function and try to backpropagate through the unrolled optimization path. +Namely, given + +.. math:: + + \boldsymbol{\theta}^{\prime} (\boldsymbol{\phi}) \triangleq \boldsymbol{\theta}_0 - \alpha \sum_{i=0}^{K-1} \nabla_{\boldsymbol{\theta}_i} \mathcal{L}^{\text{in}} (\boldsymbol{\phi},\boldsymbol{\theta}_i), + +we would like to compute the gradient :math:`\nabla_{\boldsymbol{\phi}} \boldsymbol{\theta}^{\prime} (\boldsymbol{\phi})`. +This is usually done by AutoDiff through an inner optimization's unrolled iterates. + +Differentiable Functional Optimizers +------------------------------------ + +By passing the argument ``inplace`` as :data:`False` to the ``update`` functions, we can make the optimization differentiable. +Here is an example of making :func:`torchopt.adam` differentiable. + +.. code-block:: python + + opt = torchopt.adam() + # Define meta and inner parameters + meta_params = ... + fmodel, params = make_functional(model) + # Initialize optimizer state + state = opt.init(params) + + for iter in range(iter_times): + loss = inner_loss(fmodel, params, meta_params) + grads = torch.autograd.grad(loss, params) + # Apply non-inplace parameter update + updates, state = opt.update(grads, state, inplace=False) + params = torchopt.apply_updates(params, updates) + + loss = outer_loss(fmodel, params, meta_params) + meta_grads = torch.autograd.grad(loss, meta_params) + +Differentiable OOP Meta-Optimizers +---------------------------------- + +For PyTorch-like API (e.g., ``step()``), we designed a base class :class:`torchopt.MetaOptimizer` to wrap our functional optimizers to become differentiable OOP meta-optimizers. + +.. autosummary:: + + torchopt.MetaOptimizer + torchopt.MetaAdam + torchopt.MetaSGD + torchopt.MetaRMSProp + torchopt.MetaAdamW + +By combining low-level API :class:`torchopt.MetaOptimizer` with the previous functional optimizer, we can achieve high-level API: + +.. 
code-block:: python

+    # Low-level API
+    optim = torchopt.MetaOptimizer(net, torchopt.sgd(lr=1.0))
+
+    # High-level API
+    optim = torchopt.MetaSGD(net, lr=1.0)
+
+Here is an example of using the OOP API :class:`torchopt.MetaAdam` to conduct meta-gradient calculation.
+
+.. code-block:: python
+
+    # Define meta and inner parameters
+    meta_params = ...
+    model = ...
+    # Define differentiable optimizer
+    opt = torchopt.MetaAdam(model)
+
+    for iter in range(iter_times):
+        # Perform the inner update
+        loss = inner_loss(model, meta_params)
+        opt.step(loss)
+
+    loss = outer_loss(model, meta_params)
+    loss.backward()
+
+CPU/GPU Accelerated Optimizer
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+TorchOpt performs the symbolic reduction by manually writing the forward and backward functions using C++ OpenMP (CPU) and CUDA (GPU), which largely increases the meta-gradient computational efficiency.
+Users can use the accelerated optimizer by setting the argument ``use_accelerated_op`` to :data:`True`.
+TorchOpt will automatically detect the device and allocate the corresponding accelerated optimizer.
+
+.. code-block:: python
+
+    # Check whether the `accelerated_op` is available:
+    torchopt.accelerated_op_available(torch.device('cpu'))
+
+    torchopt.accelerated_op_available(torch.device('cuda'))
+
+    net = Net(1).cuda()
+    optim = torchopt.Adam(net.parameters(), lr=1.0, use_accelerated_op=True)
+
+General Utilities
+-----------------
+
+We provide the :func:`torchopt.extract_state_dict` and :func:`torchopt.recover_state_dict` functions to extract and restore the state of the network and optimizer.
+By default, the extracted state dictionary is a reference (this design is for accumulating the gradients of multi-task batch training, e.g., MAML).
+You can also set ``by='copy'`` to extract a copy of the state dictionary or set ``by='deepcopy'`` to have a detached copy.
+
+.. autosummary::
+
+    torchopt.extract_state_dict
+    torchopt.recover_state_dict
+    torchopt.stop_gradient
+
+Here is a usage example.
+
+.. code-block:: python
+
+    net = Net()
+    x = nn.Parameter(torch.tensor(2.0), requires_grad=True)
+
+    optim = torchopt.MetaAdam(net, lr=1.0)
+
+    # Get the reference of the state dictionary
+    init_net_state = torchopt.extract_state_dict(net, by='reference')
+    init_optim_state = torchopt.extract_state_dict(optim, by='reference')
+    # If `detach_buffers=True` is set, the parameters are still referenced but the buffers are detached copies
+    init_net_state = torchopt.extract_state_dict(net, by='reference', detach_buffers=True)
+
+    # Set `copy` to get a copy of the state dictionary
+    init_net_state_copy = torchopt.extract_state_dict(net, by='copy')
+    init_optim_state_copy = torchopt.extract_state_dict(optim, by='copy')
+
+    # Set `deepcopy` to get a detached copy of the state dictionary
+    init_net_state_deepcopy = torchopt.extract_state_dict(net, by='deepcopy')
+    init_optim_state_deepcopy = torchopt.extract_state_dict(optim, by='deepcopy')
+
+    # Conduct two inner-loop optimization steps
+    for i in range(2):
+        inner_loss = net(x)
+        optim.step(inner_loss)
+
+    print(f'a = {net.a!r}')
+
+    # Recover the state and reconduct two inner-loop optimization steps
+    torchopt.recover_state_dict(net, init_net_state)
+    torchopt.recover_state_dict(optim, init_optim_state)
+
+    for i in range(2):
+        inner_loss = net(x)
+        optim.step(inner_loss)
+
+    print(f'a = {net.a!r}')  # the same result
+
+Notebook Tutorial
+-----------------
+
+Check the notebook tutorials at `Meta Optimizer `_ and `Stop Gradient `_.
diff --git a/docs/source/implicit_diff/implicit_diff.rst b/docs/source/implicit_diff/implicit_diff.rst
new file mode 100644
index 00000000..df0927c9
--- /dev/null
+++ b/docs/source/implicit_diff/implicit_diff.rst
@@ -0,0 +1,178 @@
+Implicit Gradient Differentiation
+=================================
+
+.. currentmodule:: torchopt.diff.implicit
+
+Implicit Differentiation
+------------------------
+
+.. image:: /_static/images/implicit-gradient.png
+    :width: 80%
+    :align: center
+
+Implicit differentiation is the task of differentiating the solution of a minimization problem with respect to its inputs.
+Namely, given
+
+.. math::
+
+    \boldsymbol{\theta}^{\prime} (\boldsymbol{\phi}) \triangleq \underset{\boldsymbol{\theta}}{\mathop{\operatorname{argmin}}} ~ \mathcal{L}^{\text{in}} (\boldsymbol{\phi},\boldsymbol{\theta}).
+
+By treating the solution :math:`\boldsymbol{\theta}^{\prime}` as an implicit function of :math:`\boldsymbol{\phi}`, the idea of implicit differentiation is to directly get the analytical best-response derivatives :math:`\nabla_{\boldsymbol{\phi}} \boldsymbol{\theta}^{\prime} (\boldsymbol{\phi})` by the implicit function theorem.
+This is suitable for algorithms where the inner-level optimal solution is achieved :math:`\left. \frac{\partial \mathcal{L}^{\text{in}} (\boldsymbol{\phi}, \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \right\rvert_{\boldsymbol{\theta} = \boldsymbol{\theta}^{\prime}} = 0` (e.g., the function :math:`F` in the figure means the solution is obtained by unrolled gradient steps) or reaches some stationary condition :math:`F (\boldsymbol{\phi}, \boldsymbol{\theta}^{\prime}) = 0`, such as `IMAML `_ and `DEQ `_.
+
+Custom Solvers
+--------------
+
+.. autosummary::
+
+    torchopt.diff.implicit.custom_root
+
+TorchOpt provides the :func:`custom_root` decorator for easily adding implicit differentiation on top of any existing solver (also called forward optimization).
+:func:`custom_root` requires users to define the stationary conditions for the problem solution (e.g., KKT conditions) and will automatically calculate the gradients for the backward pass.
+
+Here is an example of the :func:`custom_root` decorator, which is also the **functional API** for implicit gradients.
+
+.. code-block:: python
+
+    # Functional API for implicit gradient
+    def stationary(params, meta_params, data):
+        # stationary condition construction
+        return stationary_condition
+
+    # Decorator that wraps the function
+    # Optionally specify the linear solver (conjugate gradient or Neumann series)
+    @torchopt.diff.implicit.custom_root(stationary)
+    def solve(params, meta_params, data):
+        # Forward optimization process for params
+        return optimal_params
+
+    # Define params, meta_params and get data
+    params, meta_params, data = ..., ..., ...
+    optimal_params = solve(params, meta_params, data)
+    loss = outer_loss(optimal_params)
+
+    meta_grads = torch.autograd.grad(loss, meta_params)
+
+OOP API
+~~~~~~~
+
+.. autosummary::
+
+    torchopt.nn.ImplicitMetaGradientModule
+
+Coupled with PyTorch |torch.nn.Module|_, we also design the OOP API :class:`nn.ImplicitMetaGradientModule` for implicit gradients.
+The core idea of :class:`nn.ImplicitMetaGradientModule` is to enable the gradient flow from ``self.parameters()`` (usually the lower-level parameters) to ``self.meta_parameters()`` (usually the higher-level parameters).
+Users need to define the forward process ``forward()``, a stationary function ``optimality()`` (or ``objective()``), and the inner-loop optimization ``solve()``.
+
+.. |torch.nn.Module| replace:: ``torch.nn.Module``
+.. _torch.nn.Module: https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module
+
+Here is an example of the OOP API.
+
+.. code-block:: python
+
+    from torchopt.nn import ImplicitMetaGradientModule
+
+    # Inherited from the class ImplicitMetaGradientModule
+    class InnerNet(ImplicitMetaGradientModule):
+        def __init__(self, meta_module):
+            ...
+
+        def forward(self, batch):
+            # Forward process
+            ...
+
+        def optimality(self, batch, labels):
+            # Stationary condition construction for calculating implicit gradient
+            # NOTE: If this method is not implemented, it will be automatically derived from the
+            # gradient of the `objective` function.
+            ...
+
+        def objective(self, batch, labels):
+            # Define the inner-loop optimization objective
+            # NOTE: This method is optional if method `optimality` is implemented.
+            ...
+
+        def solve(self, batch, labels):
+            # Conduct the inner-loop optimization
+            ...
+            return self  # optimized module
+
+    # Get meta_params and data
+    meta_params, data = ..., ...
+    inner_net = InnerNet()
+
+    # Solve for inner-loop process related to the meta-parameters
+    optimal_inner_net = inner_net.solve(meta_params, *data)
+
+    # Get outer-loss and solve for meta-gradient
+    loss = outer_loss(optimal_inner_net)
+    meta_grad = torch.autograd.grad(loss, meta_params)
+
+If the optimization objective is to minimize/maximize an objective function, we offer an ``objective`` method interface to simplify the implementation.
+Users only need to define the ``objective`` method, while TorchOpt will automatically analyze it for the stationary (optimality) condition from the KKT condition.
+
+.. note::
+
+    In the ``__init__`` method, users need to define the inner parameters and meta-parameters.
+    By default, :class:`nn.ImplicitMetaGradientModule` treats all tensors and modules from the method inputs as ``self.meta_parameters()`` / ``self.meta_modules()``.
+    For example, the statement ``self.yyy = xxx`` will assign ``xxx`` as a meta-parameter with name ``'yyy'`` if ``xxx`` is present in the method inputs (e.g., ``def __init__(self, xxx, ...): ...``).
+    All tensors and modules defined in ``__init__`` are regarded as ``self.parameters()`` / ``self.modules()``.
+    Users can also register parameters and meta-parameters by calling ``self.register_parameter()`` and ``self.register_meta_parameter()`` respectively.
+
+Linear System Solvers
+---------------------
+
+.. autosummary::
+
+    torchopt.linear_solve.solve_cg
+    torchopt.linear_solve.solve_inv
+    torchopt.linear_solve.solve_normal_cg
+
+Usually, the computation of the implicit gradient involves the computation of the inverse Hessian matrix.
+However, the high-dimensional Hessian matrix makes direct computation intractable, and this is where linear solvers come into play.
+By iteratively solving the linear system, we can compute the inverse-Hessian solution up to some precision. We offer a `conjugate-gradient `_ based solver and a `Neumann-series `_ based solver.
+
+Here is an example of using a linear solver.
+
+.. code-block:: python
+
+    import torch
+
+    from torchopt import linear_solve
+
+    torch.manual_seed(42)
+    A = torch.randn(3, 3)
+    b = torch.randn(3)
+
+    def matvec_A(x):
+        return A @ x
+
+    sol = linear_solve.solve_normal_cg(matvec_A, b, tol=1e-5)
+    print(sol)
+
+    sol = linear_solve.solve_cg(matvec_A, b, tol=1e-5)
+    print(sol)
+
+Users can also select the corresponding solver in the functional and OOP APIs.
+
+.. code-block:: python
+
+    # For functional API
+    @torchopt.diff.implicit.custom_root(
+        functorch.grad(objective_fn, argnums=0),  # optimality function
+        argnums=1,
+        solve=torchopt.linear_solve.solve_normal_cg(maxiter=5, atol=0),
+    )
+    def solve_fn(...):
+        ...
+
+    # For OOP API
+    class InnerNet(
+        torchopt.nn.ImplicitMetaGradientModule,
+        linear_solve=torchopt.linear_solve.solve_normal_cg(maxiter=5, atol=0),
+    ):
+        ...
+
+Notebook Tutorial
+-----------------
+
+Check the notebook tutorial at `Implicit Differentiation `_.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index a4c20e22..02fab843 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -3,20 +3,23 @@
 TorchOpt
 --------
 
-**TorchOpt** is a high-performance optimizer library built upon `PyTorch `_ for easy implementation of functional optimization and gradient-based meta-learning. It consists of two main features:
+**TorchOpt** is an efficient library for differentiable optimization built upon `PyTorch `_.
+TorchOpt is:
 
-* TorchOpt provides functional optimizer which enables `JAX-like `_ composable functional optimizer for PyTorch. With TorchOpt, one can easily conduct neural network optimization in PyTorch with functional style optimizer, similar to `Optax `_ in JAX.
-* With the design of functional programming, TorchOpt provides efficient, flexible, and easy-to-implement differentiable optimizer for gradient-based meta-learning research. It largely reduces the efforts required to implement sophisticated meta-learning algorithms.
+- **Comprehensive**: TorchOpt provides three differentiation modes (explicit differentiation, implicit differentiation, and zero-order differentiation) for handling different differentiable optimization situations.
+- **Flexible**: TorchOpt provides both functional and object-oriented APIs to suit users' different preferences. Users can implement differentiable optimization in the JAX-like or PyTorch-like style.
+- **Efficient**: TorchOpt provides (1) CPU/GPU-accelerated differentiable optimizers, (2) an RPC-based distributed training framework, and (3) fast tree operations, which largely increase the training efficiency for bi-level optimization problems.
 
 Installation
 ------------
 
 Requirements:
 
-* `PyTorch `_
-* (Optional) `Graphviz `_
+- `PyTorch `_
+- (Optional) `Graphviz `_
 
-Please follow the instructions at https://pytorch.org to install PyTorch in your Python environment first. Then run the following command to install TorchOpt from PyPI:
+Please follow the instructions at https://pytorch.org to install PyTorch in your Python environment first.
+Then run the following command to install TorchOpt from PyPI:
 
 .. code-block:: bash
 
@@ -30,7 +33,8 @@ You can also build shared libraries from source, use:
    cd torchopt
    pip3 install .
 
-We provide a `conda `_ environment recipe to install the build toolchain such as `cmake`, `g++`, and `nvcc`:
+We provide a `conda `_ environment recipe to install the build toolchain such as ``cmake``, ``g++``, and ``nvcc``.
+You can use the following commands with `conda `_ / `mamba `_ to create a new isolated environment.
 
 .. code-block:: bash
 
@@ -42,21 +46,30 @@ We provide a `conda `_ environment recipe to ins
 
    conda activate torchopt
 
+
+.. 
toctree:: - :caption: Getting Started + :caption: Tutorial Notebooks :maxdepth: 1 torchopt101/torchopt-101.rst - .. toctree:: :caption: Examples :maxdepth: 1 examples/MAML.rst - .. toctree:: :caption: Developer Documentation :maxdepth: 1 @@ -75,12 +88,12 @@ The Team TorchOpt is a work by -* Jie Ren (`JieRen98 `_) -* Xidong Feng (`waterhorse1 `_) -* Bo Liu (`Benjamin-eecs `_) -* Xuehai Pan (`XuehaiPan `_) -* Luo Mai (`luomai `_) -* Yaodong Yang (`PKU-YYang `_). +- Jie Ren (`JieRen98 `_) +- Xidong Feng (`waterhorse1 `_) +- Bo Liu (`Benjamin-eecs `_) +- Xuehai Pan (`XuehaiPan `_) +- Luo Mai (`luomai `_) +- Yaodong Yang (`PKU-YYang `_). Support ------- @@ -114,6 +127,6 @@ If you find TorchOpt useful, please cite it in your publications. Indices and tables -================== +------------------ -* :ref:`genindex` +- :ref:`genindex` diff --git a/docs/source/optimizer/optim.rst b/docs/source/optimizer/optim.rst new file mode 100644 index 00000000..850bc8c7 --- /dev/null +++ b/docs/source/optimizer/optim.rst @@ -0,0 +1,193 @@ +Optimizers +========== + +.. currentmodule:: torchopt + +The core design of TorchOpt follows the philosophy of functional programming. +Aligned with |functorch|_, users can conduct functional-style programming with models, optimizers, and training in PyTorch. +We first introduce our functional optimizers, which treat the optimization process as a functional transformation. + +.. |functorch| replace:: ``functorch`` +.. _functorch: https://pytorch.org/functorch + +Functional Optimizers +--------------------- + +Currently, TorchOpt supports 4 functional optimizers: :func:`sgd`, :func:`adam`, :func:`rmsprop`, and :func:`adamw`. + +.. autosummary:: + + torchopt.FuncOptimizer + torchopt.adam + torchopt.sgd + torchopt.rmsprop + torchopt.adamw + +Apply Parameter Updates +----------------------- + +TorchOpt offers functional API by passing gradients and optimizer states to the optimizer function to apply updates. + +.. autosummary:: + + torchopt.apply_updates + +Here is an example of functional optimization coupled with |functorch|_: + +.. code-block:: python + + class Net(nn.Module): ... + + class Loader(DataLoader): ... + + net = Net() # init + loader = Loader() + optimizer = torchopt.adam(lr) + + model, params = functorch.make_functional(net) # use functorch extract network parameters + opt_state = optimizer.init(params) # init optimizer + + xs, ys = next(loader) # get data + pred = model(params, xs) # forward + loss = F.cross_entropy(pred, ys) # compute loss + + grads = torch.autograd.grad(loss, params) # compute gradients + updates, opt_state = optimizer.update(grads, opt_state) # get updates + params = torchopt.apply_updates(params, updates) # update network parameters + +We also provide a wrapper :class:`torchopt.FuncOptimizer` to make maintaining the optimizer state easier: + +.. code-block:: python + + net = Net() # init + loader = Loader() + optimizer = torchopt.FuncOptimizer(torchopt.adam()) # wrap with `torchopt.FuncOptimizer` + + model, params = functorch.make_functional(net) # use functorch extract network parameters + + for xs, ys in loader: # get data + pred = model(params, xs) # forward + loss = F.cross_entropy(pred, ys) # compute loss + + params = optimizer.step(loss, params) # update network parameters + +Classic OOP Optimizers +---------------------- + +Combined with the functional optimizers above, we can define our classic OOP optimizers. +We designed a base class :class:`torchopt.Optimizer` that has the same interface as |torch.optim.Optimizer|_. 
+We offer the original PyTorch APIs (e.g., ``zero_grad()`` or ``step()``) for the traditional PyTorch-like (OOP) parameter update.
+
+.. |torch.optim.Optimizer| replace:: ``torch.optim.Optimizer``
+.. _torch.optim.Optimizer: https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer
+
+.. autosummary::
+
+    torchopt.Optimizer
+    torchopt.Adam
+    torchopt.SGD
+    torchopt.RMSProp
+    torchopt.AdamW
+
+By combining the low-level API :class:`torchopt.Optimizer` with the previous functional optimizer, we can achieve the high-level API:
+
+.. code-block:: python
+
+    learning_rate = 1.0
+    # High-level API
+    optim = torchopt.Adam(net.parameters(), lr=learning_rate)
+    # which can be achieved by low-level API:
+    optim = torchopt.Optimizer(net.parameters(), torchopt.adam(lr=learning_rate))
+
+Here is an example of the PyTorch-like APIs:
+
+.. code-block:: python
+
+    net = Net()  # init
+    loader = Loader()
+    optimizer = torchopt.Adam(net.parameters())
+
+    xs, ys = next(loader)  # get data
+    pred = net(xs)  # forward
+    loss = F.cross_entropy(pred, ys)  # compute loss
+
+    optimizer.zero_grad()  # zero gradients
+    loss.backward()  # backward
+    optimizer.step()  # step updates
+
+Combining Transformation
+------------------------
+
+Users often need to conduct multiple gradient transformations (functions) before the final update.
+In the design of TorchOpt, we treat these functions as composable via :func:`torchopt.chain`.
+So we can build our own chain like ``torchopt.chain(torchopt.clip_grad_norm(max_norm=1.), torchopt.sgd(lr=1., moment_requires_grad=True))`` to clip the gradient and update parameters using :func:`sgd`.
+
+.. autosummary::
+
+    torchopt.chain
+
+.. note::
+
+    :func:`torchopt.chain` will conduct the transformations sequentially, so the order matters.
+    For example, we need to first conduct gradient normalization and then conduct the optimizer step.
+    The order should be (clip, sgd) in the :func:`torchopt.chain` function.
+
+
+Here is an example of chaining :func:`torchopt.clip_grad_norm` and :func:`torchopt.adam` for a functional optimizer and an OOP optimizer.
+
+.. code-block:: python
+
+    func_optimizer = torchopt.chain(torchopt.clip_grad_norm(max_norm=2.0), torchopt.adam(1e-1))
+    oop_optimizer = torchopt.Optimizer(net.parameters(), func_optimizer)
+
+Optimizer Hooks
+---------------
+
+Users can also add optimizer hooks to control the gradient flow.
+
+.. autosummary::
+
+    torchopt.hook.register_hook
+    torchopt.hook.zero_nan_hook
+    torchopt.hook.nan_to_num_hook
+
+For example, :func:`torchopt.hook.zero_nan_hook` registers a hook on the first-order gradients.
+During backpropagation, the **NaN** gradients will be set to 0.
+Here is an example of such an operation coupled with :func:`torchopt.chain`.
+
+.. code-block:: python
+
+    impl = torchopt.chain(torchopt.hook.register_hook(torchopt.hook.zero_nan_hook), torchopt.adam(1e-1))
+
+Optimizer Schedules
+-------------------
+
+TorchOpt also provides implementations of learning rate schedulers, which can be used to control the learning rate during the training process.
+TorchOpt mainly offers the linear learning rate scheduler and the polynomial learning rate scheduler.
+
+.. autosummary::
+
+    torchopt.schedule.linear_schedule
+    torchopt.schedule.polynomial_schedule
+
+Here is an example of combining an optimizer with a learning rate scheduler.
+
+.. code-block:: python
+
+    functional_adam = torchopt.adam(
+        lr=torchopt.schedule.linear_schedule(
+            init_value=1e-3, end_value=1e-4, transition_steps=10000, transition_begin=2000
+        )
+    )
+
+    adam = torchopt.Adam(
+        net.parameters(),
+        lr=torchopt.schedule.linear_schedule(
+            init_value=1e-3, end_value=1e-4, transition_steps=10000, transition_begin=2000
+        ),
+    )
+
+Notebook Tutorial
+-----------------
+
+Check the notebook tutorial at `Functional Optimizer `_.
diff --git a/docs/source/spelling_wordlist.txt b/docs/source/spelling_wordlist.txt
index 92244376..ca421ec8 100644
--- a/docs/source/spelling_wordlist.txt
+++ b/docs/source/spelling_wordlist.txt
@@ -59,9 +59,11 @@ Graphviz
 Autograd
 autograd
 attrs
+GradientTransformation
 GradientTransformations
 args
 kwargs
+kwds
 chainable
 adam
 Adam
@@ -78,6 +80,7 @@ Moens
 AdamW
 Loshchilov
 pytree
+pytrees
 booleans
 subtrees
 optimality
@@ -107,7 +110,13 @@ broadcasted
 keepdim
 ndim
 partitioner
+partitioners
 RPC
+rpc
+MPI
+async
+parallelization
+unaggregated
 maxiter
 str
 bool
@@ -137,6 +146,7 @@ pre
 numerics
 parallelize
 parallelizing
+JAX
 Optax
 func
 subfn
@@ -145,5 +155,18 @@ jvp
 ATen
 samplable
 conj
+TransformInitFn
+TransformUpdateFn
+argmin
+Jacobian
+autodiff
+backend
 reparameterize
 rtype
+backpropagate
+NaN
+iteratively
+issubclass
+abc
+ABCMeta
+subclasscheck
diff --git a/docs/source/visualization/visualization.rst b/docs/source/visualization/visualization.rst
new file mode 100644
index 00000000..718c6725
--- /dev/null
+++ b/docs/source/visualization/visualization.rst
@@ -0,0 +1,146 @@
+Visualization
+=============
+
+.. currentmodule:: torchopt.visual
+
+In `PyTorch `_, if the attribute ``requires_grad`` of a tensor is :data:`True`, the computation graph will be created when we use the tensor in any operation.
+The computation graph is implemented like a linked list -- ``Tensor``\s are nodes and they are linked by their attribute ``grad_fn``.
+`PyTorchViz `_ is a Python package that uses `Graphviz `_ as a backend for plotting computation graphs.
+TorchOpt uses PyTorchViz as the blueprint and provides easier-to-use visualization functions on the premise of supporting all its features.
+
+------
+
+Usage
+-----
+
+Let's start with a simple multiplication computation graph.
+We declare the variable ``x`` with the flag ``requires_grad=True``, compute ``y = 2 * x``, and then visualize the computation graph of ``y``.
+
+We provide the function :func:`make_dot`, which takes a tensor as input.
+The visualization code is shown as follows:
+
+.. code-block:: python
+
+    from IPython.display import display
+    import torch
+    import torchopt
+
+
+    x = torch.tensor(1.0, requires_grad=True)
+    y = 2 * x
+    display(torchopt.visual.make_dot(y))
+
+.. image:: /_static/images/visualization-fig1.svg
+    :width: 20%
+    :align: center
+
+The figure shows that ``y`` is connected by the multiplication edge.
+The gradient of ``y`` will flow through the multiplication backward function and then accumulate on ``x``.
+
+To add auxiliary notes to the computation graph, we can pass a dictionary as the argument ``params`` to :func:`make_dot`.
+The keys are the notes shown in the computation figure and the values are the tensors that need to be noted.
+So the code above can be modified as follows:
+
+.. code-block:: python
+
+    from IPython.display import display
+    import torch
+    import torchopt
+
+
+    x = torch.tensor(1.0, requires_grad=True)
+    y = 2 * x
+    display(torchopt.visual.make_dot(y, params={'x': x, 'y': y}))
+
+Then let's plot a neural network.
+Note that we can pass the generator returned by the method ``named_parameters`` for adding node labels.
+
+.. code-block:: python
+
+    from IPython.display import display
+    import torch
+    from torch import nn
+    import torch.nn.functional as F
+    import torchopt
+
+
+    class Net(nn.Module):
+        def __init__(self, dim):
+            super().__init__()
+            self.fc = nn.Linear(dim, 1, bias=True)
+
+        def forward(self, x):
+            return self.fc(x)
+
+
+    dim = 5
+    batch_size = 2
+    net = Net(dim)
+    xs = torch.ones((batch_size, dim))
+    ys = torch.ones((batch_size, 1))
+    pred = net(xs)
+    loss = F.mse_loss(pred, ys)
+
+    display(torchopt.visual.make_dot(loss, params=(net.named_parameters(), {'loss': loss})))
+
+.. image:: /_static/images/visualization-fig2.svg
+    :width: 45%
+    :align: center
+
+The computation graph of meta-learning algorithms will be much more complex.
+Our visualization tool allows users to take as input the extracted network state for better visualization.
+
+.. code-block:: python
+
+    from IPython.display import display
+    import torch
+    from torch import nn
+    import torch.nn.functional as F
+    import torchopt
+
+    class MetaNet(nn.Module):
+        def __init__(self, dim):
+            super().__init__()
+            self.fc = nn.Linear(dim, 1, bias=True)
+
+        def forward(self, x, meta_param):
+            return self.fc(x) + meta_param
+
+
+    dim = 5
+    batch_size = 2
+    net = MetaNet(dim)
+
+    xs = torch.ones((batch_size, dim))
+    ys = torch.ones((batch_size, 1))
+
+    optimizer = torchopt.MetaSGD(net, lr=1e-3)
+    meta_param = torch.tensor(1.0, requires_grad=True)
+
+    # Set enable_visual
+    net_state_0 = torchopt.extract_state_dict(net, enable_visual=True, visual_prefix='step0.')
+
+    pred = net(xs, meta_param)
+    loss = F.mse_loss(pred, ys)
+    optimizer.step(loss)
+
+    # Set enable_visual
+    net_state_1 = torchopt.extract_state_dict(net, enable_visual=True, visual_prefix='step1.')
+
+    pred = net(xs, meta_param)
+    loss = F.mse_loss(pred, torch.ones_like(pred))
+
+    # Draw computation graph
+    display(
+        torchopt.visual.make_dot(
+            loss, [net_state_0, net_state_1, {'meta_param': meta_param, 'loss': loss}]
+        )
+    )
+
+.. image:: /_static/images/visualization-fig3.svg
+    :width: 65%
+    :align: center
+
+Notebook Tutorial
+-----------------
+
+Check the notebook tutorial at `Visualization `_.
diff --git a/docs/source/zero_order_diff/zero_order_diff.rst b/docs/source/zero_order_diff/zero_order_diff.rst
new file mode 100644
index 00000000..11232c85
--- /dev/null
+++ b/docs/source/zero_order_diff/zero_order_diff.rst
@@ -0,0 +1,146 @@
+Zero-order Gradient Differentiation
+===================================
+
+.. currentmodule:: torchopt.diff.zero_order
+
+Evolutionary Strategy
+---------------------
+
+.. image:: /_static/images/zero-order.png
+    :width: 80%
+    :align: center
+
+When the inner-loop process is non-differentiable or one wants to eliminate the heavy computation burdens in the previous two modes (brought by the Hessian), one can choose zero-order differentiation.
+Zero-order differentiation typically gets gradients based on zero-order estimation, such as finite differences or the `Evolutionary Strategy `_ (ES).
+`ES-MAML `_ and `NAC `_ successfully solve the non-differentiable optimization problem based on ES.
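+
+To make the idea concrete before the formal treatment below, here is a minimal self-contained sketch of the naive ES gradient estimator.
+It is an illustration only; TorchOpt's decorator (introduced below) automates this procedure:
+
+.. code-block:: python
+
+    import torch
+
+    def es_grad(f, theta, sigma=0.01, num_samples=1000):
+        # Monte-Carlo estimate of (1 / sigma) * E[f(theta + sigma * z) * z]
+        grad = torch.zeros_like(theta)
+        for _ in range(num_samples):
+            z = torch.randn_like(theta)  # spherically symmetric, unit variance
+            grad += f(theta + sigma * z) * z
+        return grad / (sigma * num_samples)
+
+    theta = torch.randn(5)
+    grad_est = es_grad(lambda t: (t ** 2).sum(), theta)  # approximately 2 * theta
+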
+
+TorchOpt offers an API for ES-based differentiation.
+Instead of optimizing the objective :math:`f (\boldsymbol{\theta}): \mathbb{R}^n \to \mathbb{R}`, ES optimizes a Gaussian smoothing objective defined as :math:`\tilde{f}_{\sigma} (\boldsymbol{\theta}) = \mathbb{E}_{\boldsymbol{z} \sim \mathcal{N}( 0, {I}_d )} [ f (\boldsymbol{\theta} + \sigma \, \boldsymbol{z}) ]`, where :math:`\sigma` denotes the precision.
+The gradient of such an objective is :math:`\nabla_{\boldsymbol{\theta}} \tilde{f}_{\sigma} (\boldsymbol{\theta}) = \frac{1}{\sigma} \mathbb{E}_{\boldsymbol{z} \sim \mathcal{N}( 0, {I}_d )} [ f (\boldsymbol{\theta} + \sigma \, \boldsymbol{z}) \cdot \boldsymbol{z} ]`.
+Based on this technique, one can treat the bi-level process as a whole and calculate the meta-gradient with a pure forward process.
+Refer to `ES-MAML `_ for more explanations.
+
+Decorators
+----------
+
+.. autosummary::
+
+    torchopt.diff.zero_order.zero_order
+
+Similar to the implicit gradient, we also use a decorator for ES methods.
+
+Functional API
+~~~~~~~~~~~~~~
+
+The basic functional API is :func:`torchopt.diff.zero_order.zero_order`, which is used as the decorator for the forward process of zero-order gradient procedures.
+Users are required to implement the noise sampling function, which will be used as the input of the ``zero_order`` decorator.
+Here we show the specific meaning of each parameter used in the decorator.
+
+- ``distribution`` for the noise sampling distribution. The distribution :math:`\lambda` should be spherically symmetric and with a constant variance of :math:`1` for each element, i.e.:
+
+  - Spherically symmetric: :math:`\mathbb{E}_{\boldsymbol{z} \sim \lambda} [ \boldsymbol{z} ] = \boldsymbol{0}`.
+  - Constant variance of :math:`1` for each element: :math:`\mathbb{E}_{\boldsymbol{z} \sim \lambda} [ {\lvert z_i \rvert}^2 ] = 1`.
+  - For example, the standard multi-dimensional normal distribution :math:`\mathcal{N} (\boldsymbol{0}, \boldsymbol{1})`.
+
+- ``method`` for different kinds of algorithms; we support ``'naive'`` (`ES RL `_), ``'forward'`` (`Forward-FD `_), and ``'antithetic'`` (`antithetic `_).
+
+  .. math::
+
+      \begin{align*}
+          \text{naive} \qquad & \nabla_{\boldsymbol{\theta}} \tilde{f}_{\sigma} (\boldsymbol{\theta}) = \frac{1}{\sigma} \mathbb{E}_{\boldsymbol{z} \sim \lambda} [ f (\boldsymbol{\theta} + \sigma \, \boldsymbol{z}) \cdot \boldsymbol{z} ] \\
+          \text{forward} \qquad & \nabla_{\boldsymbol{\theta}} \tilde{f}_{\sigma} (\boldsymbol{\theta}) = \frac{1}{\sigma} \mathbb{E}_{\boldsymbol{z} \sim \lambda} [ ( f (\boldsymbol{\theta} + \sigma \, \boldsymbol{z}) - f (\boldsymbol{\theta}) ) \cdot \boldsymbol{z} ] \\
+          \text{antithetic} \qquad & \nabla_{\boldsymbol{\theta}} \tilde{f}_{\sigma} (\boldsymbol{\theta}) = \frac{1}{2 \sigma} \mathbb{E}_{\boldsymbol{z} \sim \lambda} [ ( f (\boldsymbol{\theta} + \sigma \, \boldsymbol{z}) - f (\boldsymbol{\theta} - \sigma \, \boldsymbol{z}) ) \cdot \boldsymbol{z} ]
+      \end{align*}
+
+- ``argnums`` specifies which parameter we want to compute the meta-gradient with respect to.
+- ``num_samples`` specifies how many times we want to conduct the sampling.
+- ``sigma`` is the precision, i.e., the scaling factor for the sampling distribution.
+
+We show the pseudocode below.
+
+.. code-block:: python
+
+    # Functional API for zero-order differentiation
+    # 1. Customize the noise distribution via a distribution class
+    class Distribution:
+        def sample(self, sample_shape=torch.Size()):
+            # Sampling function for noise
+            # NOTE: The distribution should be spherically symmetric and with a constant variance of 1.
+            ...
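+            # For example, one valid concrete choice (an assumption, not the only option)
+            # is the standard normal sampler:
+            #     noise_batch = torch.randn(sample_shape)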
+            return noise_batch
+
+    distribution = Distribution()
+
+    # 2. Customize the noise distribution via a sampling function
+    def distribution(sample_shape=torch.Size()):
+        # Sampling function for noise
+        # NOTE: The distribution should be spherically symmetric and with a constant variance of 1.
+        ...
+        return noise_batch
+
+    # 3. Distribution can also be an instance of `torch.distributions.Distribution`, e.g., `torch.distributions.Normal(...)`
+    distribution = torch.distributions.Normal(loc=0, scale=1)
+
+    # Decorator that wraps the function
+    @torchopt.diff.zero_order(distribution=distribution, method='naive', argnums=0, num_samples=100, sigma=0.01)
+    def forward(params, data):
+        # Forward optimization process for params
+        ...
+        return objective  # the returned tensor should be a scalar tensor
+
+    # Define params and get data
+    params, data = ..., ...
+
+    # Forward pass
+    loss = forward(params, data)
+    # Backward pass using zero-order differentiation
+    grads = torch.autograd.grad(loss, params)
+
+OOP API
+~~~~~~~
+
+.. autosummary::
+
+    torchopt.nn.ZeroOrderGradientModule
+
+Coupled with PyTorch |torch.nn.Module|_, we also design the OOP API :class:`nn.ZeroOrderGradientModule` for ES.
+The core idea of :class:`nn.ZeroOrderGradientModule` is to enable the gradient flow from the forward process to ``self.parameters()`` (which can be the meta-parameters when calculating the meta-gradient).
+Users need to define the forward process ``forward()`` and a noise sampling function ``sample()``.
+
+.. |torch.nn.Module| replace:: ``torch.nn.Module``
+.. _torch.nn.Module: https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module
+
+.. code-block:: python
+
+    from torchopt.nn import ZeroOrderGradientModule
+
+    # Inherited from the class ZeroOrderGradientModule
+    # Optionally specify the `method` and/or `num_samples` and/or `sigma` used for sampling
+    class Net(ZeroOrderGradientModule, method='naive', num_samples=100, sigma=0.01):
+        def __init__(self, ...):
+            ...
+
+        def forward(self, batch):
+            # Forward process
+            ...
+            return objective  # the returned tensor should be a scalar tensor
+
+        def sample(self, sample_shape=torch.Size()):
+            # Generate a batch of noise samples
+            # NOTE: The distribution should be spherically symmetric and with a constant variance of 1.
+            ...
+            return noise_batch
+
+    # Get model and data
+    net = Net(...)
+    data = ...
+
+    # Forward pass
+    loss = net(data)
+    # Backward pass using zero-order differentiation
+    grads = torch.autograd.grad(loss, net.parameters())
+
+Notebook Tutorial
+-----------------
+
+For more details, check the notebook tutorial at `zero-order `_.
diff --git a/tutorials/1_Functional_Optimizer.ipynb b/tutorials/1_Functional_Optimizer.ipynb
index 3d70eb62..07a8aeb8 100644
--- a/tutorials/1_Functional_Optimizer.ipynb
+++ b/tutorials/1_Functional_Optimizer.ipynb
@@ -18,7 +18,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "In this tutorial, we will introduce how TorchOpt can be treated as functional optimizer to conduct normal optimization with functional programing style. We will also illustrate how to conduct differentiable optimization with functional programing in PyTorch."
+    "In this tutorial, we will introduce how TorchOpt can be treated as a functional optimizer to conduct normal optimization with a functional programming style. We will also illustrate how to conduct differentiable optimization with functional programming in PyTorch."
] }, { @@ -70,7 +70,7 @@ "source": [ "### 1.1 Original JAX implementation\n", "\n", - "The first example is JAX implementation coupled with [Optax](https://github.com/deepmind/optax), which belongs to functional programing style." + "The first example is JAX implementation coupled with [Optax](https://github.com/deepmind/optax), which belongs to functional programming style." ] }, { @@ -391,7 +391,7 @@ "source": [ "## 2. Differentiable Optimization with Functional Optimizer\n", "\n", - "Coupled with functional optimizer, you can conduct differentiable optimization by setting the `inplace` flag as `False` in update and `apply_updates` function. (which might be helpful for meta-learning algorithm implementation with functional programing style). \n", + "Coupled with functional optimizer, you can conduct differentiable optimization by setting the `inplace` flag as `False` in update and `apply_updates` function. (which might be helpful for meta-learning algorithm implementation with functional programming style). \n", "\n", "Note that `torchopt.SGD` and `torchopt.Adam` do not support differentiable optimization. Refer to the Meta-Optimizer notebook for PyTorch-like differentiable optimizers." ] diff --git a/tutorials/2_Visualization.ipynb b/tutorials/2_Visualization.ipynb index 3141f522..11c68bec 100644 --- a/tutorials/2_Visualization.ipynb +++ b/tutorials/2_Visualization.ipynb @@ -18,7 +18,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In [PyTorch](https://pytorch.org), if the attribute `requires_grad` a tensor is `True`, the computation graph will be created if we use the tensor to do any operations. The computation graph is implemented likes link-list -- `Tensor`s are nodes and they are linked by their attribute `gran_fn`. [PyTorchViz](https://github.com/szagoruyko/pytorchviz) is a Python package that uses [Graphviz](https://graphviz.org) as a backend for plotting computation graphs. TorchOpt use PyTorchViz as the blueprint and provide more easy-to-use visualization functions on the premise of supporting all its functions." + "In [PyTorch](https://pytorch.org), if the attribute `requires_grad` of a tensor is `True`, the computation graph will be created if we use the tensor to do any operations. The computation graph is implemented like link-list -- `Tensor`s are nodes and they are linked by their attribute `gran_fn`. [PyTorchViz](https://github.com/szagoruyko/pytorchviz) is a Python package that uses [Graphviz](https://graphviz.org) as a backend for plotting computation graphs. TorchOpt use PyTorchViz as the blueprint and provide more easy-to-use visualization functions on the premise of supporting all its functions." ] }, { diff --git a/tutorials/3_Meta_Optimizer.ipynb b/tutorials/3_Meta_Optimizer.ipynb index d50ace2d..4a09836c 100644 --- a/tutorials/3_Meta_Optimizer.ipynb +++ b/tutorials/3_Meta_Optimizer.ipynb @@ -112,7 +112,7 @@ "# Low-level API\n", "optim = torchopt.MetaOptimizer(net, torchopt.sgd(lr=1.0))\n", "\n", - "# High level API\n", + "# High-level API\n", "optim = torchopt.MetaSGD(net, lr=1.0)" ] }, @@ -274,7 +274,7 @@ "\n", "We observe that how to reinitialize the inner-loop parameter in a new bi-level process vary in different meta-learning algorithms. For instance, in algorithm like Model-Agnostic Meta-Learning (MAML) ([arXiv:1703.03400](https://arxiv.org/abs/1703.03400)), every time a new task comes, we need to reset the parameters to the initial ones. 
In other cases such as Meta-Gradient Reinforcement Learning (MGRL) ([arXiv:1805.09801](https://arxiv.org/abs/1805.09801)), the inner-loop network parameters just inherit the previously updated parameters to continue the new bi-level process.\n",
    "\n",
-    "We provide the `torchopt.extract_state_dict` and `torchopt.recover_state_dict` functions to extract and restore the state of network and optimizer. By default, the extracted state dictionary is a reference (this design is for accumulating gradient of multi-task batch training, MAML for example). You can also set `by='copy'` to extract the copy of state dictionary or set `by='deepcopy'` to have a detached copy."
+    "We provide the `torchopt.extract_state_dict` and `torchopt.recover_state_dict` functions to extract and restore the state of the network and optimizer. By default, the extracted state dictionary is a reference (this design is for accumulating gradients in multi-task batch training, MAML for example). You can also set `by='copy'` to extract a copy of the state dictionary or set `by='deepcopy'` to have a detached copy."
 ] }, {
@@ -303,7 +303,7 @@
    "# If set `detach_buffers=True`, the parameters are referenced as references while buffers are detached copies\n",
    "init_net_state = torchopt.extract_state_dict(net, by='reference', detach_buffers=True)\n",
    "\n",
-    "# Set `copy` to get the copy of state dictionary\n",
+    "# Set `copy` to get a copy of the state dictionary\n",
    "init_net_state_copy = torchopt.extract_state_dict(net, by='copy')\n",
    "init_optim_state_copy = torchopt.extract_state_dict(optim, by='copy')\n",
    "\n",
@@ -680,7 +680,7 @@
    "source": [
    "**2. Get `Trying to backward through the graph a second time` error when conducting multiple meta-optimization.**\n",
    "\n",
-    "Please refer to the tutorial notebook [Stop Gradient](4_Stop_Gradient.ipynb) for more guidances."
+    "Please refer to the tutorial notebook [Stop Gradient](4_Stop_Gradient.ipynb) for more guidance."
 ] } ],
diff --git a/tutorials/5_Implicit_Differentiation.ipynb b/tutorials/5_Implicit_Differentiation.ipynb
index 21dd2ed6..f8258fcc 100644
--- a/tutorials/5_Implicit_Differentiation.ipynb
+++ b/tutorials/5_Implicit_Differentiation.ipynb
@@ -373,7 +373,7 @@
    "meta_params, data = ..., ...\n",
    "inner_net = InnerNet()\n",
    "\n",
-    "# Solve for inner-loop process related with the meta-parameters\n",
+    "# Solve for the inner-loop process related to the meta-parameters\n",
    "optimal_inner_net = inner_net.solve(meta_params, *data)\n",
    "\n",
    "# Get outer-loss and solve for meta-gradient\n",
diff --git a/tutorials/6_Zero_Order_Differentiation.ipynb b/tutorials/6_Zero_Order_Differentiation.ipynb
index d824ab61..968f6b6c 100644
--- a/tutorials/6_Zero_Order_Differentiation.ipynb
+++ b/tutorials/6_Zero_Order_Differentiation.ipynb
@@ -23,7 +23,7 @@
    "source": [
    "When the inner-loop process is non-differentiable or one wants to eliminate the heavy computation burden of the previous two modes (brought by the Hessian), one can choose ZD. ZD typically gets gradients based on zero-order estimation, such as finite-difference or Evolutionary Strategy.\n",
    "\n",
-    "TorchOpt offers API for ES-based differentiation. Instead of optimizing the objective $F$, ES optimizes a Gaussion smoothing objective defined as $\tilde{f}_{\sigma} (\theta) = \mathbb{E}_{{z} \sim \mathcal{N}( {0}, {I}_d )} [ f ({\theta} + \sigma \, z) ]$, where $\sigma$ denotes precision.
The gradient of such objective is $\nabla_\theta \tilde{f}_{\sigma} (\theta) = \frac{1}{\sigma} \mathbb{E}_{{z} \sim \mathcal{N}( {0}, {I}_d )} [ f({\theta} + \sigma \, z) \cdot z ]$. Refer to [ES-MAML](https://arxiv.org/pdf/1910.01215.pdf) for more details."
+    "TorchOpt offers an API for ES-based differentiation. Instead of optimizing the objective $f (\boldsymbol{\theta}): \mathbb{R}^n \to \mathbb{R}$, ES optimizes a Gaussian smoothing objective defined as $\tilde{f}_{\sigma} (\boldsymbol{\theta}) = \mathbb{E}_{\boldsymbol{z} \sim \mathcal{N}( 0, {I}_d )} [ f (\boldsymbol{\theta} + \sigma \, \boldsymbol{z}) ]$, where $\sigma$ denotes the precision. The gradient of this objective is $\nabla_{\boldsymbol{\theta}} \tilde{f}_{\sigma} (\boldsymbol{\theta}) = \frac{1}{\sigma} \mathbb{E}_{\boldsymbol{z} \sim \mathcal{N}( 0, {I}_d )} [ f (\boldsymbol{\theta} + \sigma \, \boldsymbol{z}) \cdot \boldsymbol{z} ]$. Refer to [ES-MAML](https://arxiv.org/pdf/1910.01215.pdf) for more details."
 ] }, {
@@ -59,9 +59,21 @@
    "The basic functional API is `torchopt.diff.zero_order.zero_order`, which is used as the decorator for the forward process zero-order gradient procedures. Users are required to implement the noise sampling function, which will be used as the input of the `zero_order` decorator. Here we show the specific meaning of each parameter used in the decorator.\n",
    "\n",
    "- `distribution` for the noise sampling distribution. The distribution $\lambda$ should be spherically symmetric with a constant variance of $1$ for each element. I.e.:\n",
+    "\n",
    "    - Spherically symmetric: $\mathbb{E}_{\boldsymbol{z} \sim \lambda} [ \boldsymbol{z} ] = \boldsymbol{0}$.\n",
-    "    - Constant variance of $1$ for each element: $\mathbb{E}_{\boldsymbol{z} \sim \lambda} [ {\lvert \boldsymbol{z}_i \rvert}^2 ] = 1$.\n",
+    "    - Constant variance of $1$ for each element: $\mathbb{E}_{\boldsymbol{z} \sim \lambda} [ {\lvert z_i \rvert}^2 ] = 1$.\n",
+    "    - For example, the standard multi-dimensional normal distribution $\mathcal{N} (\boldsymbol{0}, \boldsymbol{1})$.\n",
+    "\n",
-    "- `method` for different kind of algorithms, we support `'naive'` ([ES-RL](https://arxiv.org/abs/1703.03864)), `'forward'` ([Forward-FD](http://proceedings.mlr.press/v80/choromanski18a/choromanski18a.pdf)), and `'antithetic'` ([antithetic](https://d1wqtxts1xzle7.cloudfront.net/75609515/coredp2011_1web-with-cover-page-v2.pdf?Expires=1670215467&Signature=RfP~mQhhhI7aGknwXbRBgSggFrKuNTPYdyUSdMmfTxOa62QoOJAm-Xhr3F1PLyjUQc2JVxmKIKGGuyYvyfCTpB31dfmMtuVQxZMWVF-SfErTN05SliC93yjA1x1g2kjhn8bkBFdQqGl~1RQSKnhj88BakgSeDNzyCxwbD5VgR89BXRs4YIK5RBIKYtgLhoyz5jar7wHS3TJhRzs3WNeTIAjAmLqJ068oGFZ0Jr7maGquTe3w~8LEEIprJ6cyCMc6b1UUJkmwjNq0RLTVbxgFjfi4Z9kyxyJB9IOS1J25OOON4jfwh5JlXS7MVskuONUyHJim1TQ8OwCraKlBsQLPQw__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA)).\n",
+    "- `method` for different kinds of algorithms; we support `'naive'` ([ES-RL](https://arxiv.org/abs/1703.03864)), `'forward'` ([Forward-FD](http://proceedings.mlr.press/v80/choromanski18a/choromanski18a.pdf)), and `'antithetic'` ([antithetic](https://arxiv.org/abs/1803.07055)).\n",
+    "\n",
+    "    $$\n",
+    "    \begin{align*}\n",
+    "    \text{naive} \qquad & \nabla_{\boldsymbol{\theta}} \tilde{f}_{\sigma} (\boldsymbol{\theta}) = \frac{1}{\sigma} \mathbb{E}_{\boldsymbol{z} \sim \lambda} [ f (\boldsymbol{\theta} + \sigma \, \boldsymbol{z}) \cdot \boldsymbol{z} ] \\\n",
+    "    \text{forward} \qquad & \nabla_{\boldsymbol{\theta}} \tilde{f}_{\sigma} (\boldsymbol{\theta}) = \frac{1}{\sigma} \mathbb{E}_{\boldsymbol{z} \sim \lambda} [ ( f (\boldsymbol{\theta} + \sigma \, \boldsymbol{z}) - f (\boldsymbol{\theta}) ) \cdot \boldsymbol{z} ] \\\n",
+    "    \text{antithetic} \qquad & \nabla_{\boldsymbol{\theta}} \tilde{f}_{\sigma} (\boldsymbol{\theta}) = \frac{1}{2 \sigma} \mathbb{E}_{\boldsymbol{z} \sim \lambda} [ ( f (\boldsymbol{\theta} + \sigma \, \boldsymbol{z}) - f (\boldsymbol{\theta} - \sigma \, \boldsymbol{z}) ) \cdot \boldsymbol{z} ]\n",
+    "    \end{align*}\n",
+    "    $$\n",
+    "\n",
    "- `argnums` specifies which parameter we want to trace the meta-gradient for.\n",
    "- `num_samples` specifies how many times we want to conduct the sampling.\n",
    "- `sigma` is for precision. This is the scaling factor for the sampling distribution.\n",
@@ -126,26 +138,26 @@
    "output_type": "stream",
    "text": [
    "001: tensor(0.0265, grad_fn=)\n",
-    "002: tensor(0.0241, grad_fn=)\n",
+    "002: tensor(0.0243, grad_fn=)\n",
    "003: tensor(0.0222, grad_fn=)\n",
    "004: tensor(0.0202, grad_fn=)\n",
-    "005: tensor(0.0185, grad_fn=)\n",
+    "005: tensor(0.0184, grad_fn=)\n",
    "006: tensor(0.0170, grad_fn=)\n",
-    "007: tensor(0.0158, grad_fn=)\n",
-    "008: tensor(0.0147, grad_fn=)\n",
-    "009: tensor(0.0139, grad_fn=)\n",
-    "010: tensor(0.0132, grad_fn=)\n",
-    "011: tensor(0.0126, grad_fn=)\n",
-    "012: tensor(0.0122, grad_fn=)\n",
-    "013: tensor(0.0120, grad_fn=)\n",
-    "014: tensor(0.0118, grad_fn=)\n",
-    "015: tensor(0.0117, grad_fn=)\n",
-    "016: tensor(0.0117, grad_fn=)\n",
-    "017: tensor(0.0117, grad_fn=)\n",
-    "018: tensor(0.0118, grad_fn=)\n",
-    "019: tensor(0.0119, grad_fn=)\n",
+    "007: tensor(0.0157, grad_fn=)\n",
+    "008: tensor(0.0146, grad_fn=)\n",
+    "009: tensor(0.0137, grad_fn=)\n",
+    "010: tensor(0.0130, grad_fn=)\n",
+    "011: tensor(0.0123, grad_fn=)\n",
+    "012: tensor(0.0118, grad_fn=)\n",
+    "013: tensor(0.0114, grad_fn=)\n",
+    "014: tensor(0.0111, grad_fn=)\n",
+    "015: tensor(0.0111, grad_fn=)\n",
+    "016: tensor(0.0111, grad_fn=)\n",
+    "017: tensor(0.0113, grad_fn=)\n",
+    "018: tensor(0.0115, grad_fn=)\n",
+    "019: tensor(0.0118, grad_fn=)\n",
    "020: tensor(0.0120, grad_fn=)\n",
-    "021: tensor(0.0120, grad_fn=)\n",
+    "021: tensor(0.0121, grad_fn=)\n",
    "022: tensor(0.0121, grad_fn=)\n",
    "023: tensor(0.0122, grad_fn=)\n",
    "024: tensor(0.0122, grad_fn=)\n",
@@ -163,7 +175,7 @@
    "\n",
    "\n",
    "@torchopt.diff.zero_order(\n",
-    "    distribution=distribution, method='forward', argnums=0, num_samples=1000, sigma=0.01\n",
+    "    distribution=distribution, method='forward', argnums=0, num_samples=100, sigma=0.01\n",
    ")\n",
    "def forward_process(params, fn, x, y):\n",
    "    y_pred = fn(params, x)\n",

From b8939ca383da95d6016da8f0b3e8f026ca63649c Mon Sep 17 00:00:00 2001
From: Xuehai Pan
Date: Sun, 12 Feb 2023 22:20:03 +0800
Subject: [PATCH 20/24] refactor(workflows): rewrite setup CUDA Toolkit logic
 (#133)

---
 .github/workflows/build.yml |  2 +-
 .github/workflows/lint.yml  | 29 +++++++++++++++++------------
 .github/workflows/tests.yml | 29 +++++++++++++++++------------
 CHANGELOG.md                |  2 +-
 4 files changed, 36 insertions(+), 26 deletions(-)

diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index fe4e587b..71ba3dd1 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -14,7 +14,7 @@ on:
       - include/**
       - src/**
       - torchopt/version.py
-      - .github/workflow/build.yml
+      - .github/workflows/build.yml
   release:
     types:
       - published
diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml
index 474517a3..55dee661 100644
--- a/.github/workflows/lint.yml +++ b/.github/workflows/lint.yml @@ -15,6 +15,9 @@ concurrency: group: "${{ github.workflow }}-${{ github.ref }}" cancel-in-progress: ${{ github.event_name == 'pull_request' }} +env: + CUDA_VERSION: "11.7" + jobs: lint: runs-on: ubuntu-latest @@ -33,21 +36,23 @@ jobs: update-environment: true - name: Setup CUDA Toolkit - uses: Jimver/cuda-toolkit@v0.2.8 id: cuda-toolkit - with: - cuda: "11.7.0" - method: network - sub-packages: '["nvcc"]' - - run: | - CUDA_VERSION="${{steps.cuda-toolkit.outputs.cuda}}" - echo "CUDA_VERSION=${CUDA_VERSION}" >> "${GITHUB_ENV}" - PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cu$(echo "${CUDA_VERSION}" | cut -d'.' -f-2 | tr -d '.')" + run: | + CUDA_PKG_SUFFIX="$(echo "${CUDA_VERSION}" | cut -d'.' -f-2 | tr '.' '-')" + sudo apt-get update && sudo apt-get install wget --yes + ( + source /etc/os-release + wget -O cuda-keyring.deb "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${VERSION_ID//./}/$(uname -m)/cuda-keyring_1.0-1_all.deb" + sudo dpkg -i cuda-keyring.deb + ) + sudo apt-get update && sudo apt-get install "cuda-minimal-build-${CUDA_PKG_SUFFIX}" --yes + echo "PATH=/usr/local/cuda/bin${PATH:+:${PATH}}" >> "${GITHUB_ENV}" + echo "LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}" >> "${GITHUB_ENV}" + + PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cu$(echo "${CUDA_PKG_SUFFIX}" | tr -d '-')" echo "PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}" >> "${GITHUB_ENV}" - echo "Installed CUDA version is: ${CUDA_VERSION}" - echo "CUDA install location: ${{steps.cuda-toolkit.outputs.CUDA_PATH}}" - nvcc -V + /usr/local/cuda/bin/nvcc -V echo "Torch index URL: ${PIP_EXTRA_INDEX_URL}" - name: Upgrade pip diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 67732041..cbf8c350 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -26,6 +26,9 @@ concurrency: group: "${{ github.workflow }}-${{ github.ref }}" cancel-in-progress: ${{ github.event_name == 'pull_request' }} +env: + CUDA_VERSION: "11.7" + jobs: test: name: Test with CXX/CUDA extensions on ubuntu-latest @@ -45,21 +48,23 @@ jobs: update-environment: true - name: Setup CUDA Toolkit - uses: Jimver/cuda-toolkit@v0.2.8 id: cuda-toolkit - with: - cuda: "11.7.0" - method: network - sub-packages: '["nvcc"]' - - run: | - CUDA_VERSION="${{steps.cuda-toolkit.outputs.cuda}}" - echo "CUDA_VERSION=${CUDA_VERSION}" >> "${GITHUB_ENV}" - PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cu$(echo "${CUDA_VERSION}" | cut -d'.' -f-2 | tr -d '.')" + run: | + CUDA_PKG_SUFFIX="$(echo "${CUDA_VERSION}" | cut -d'.' -f-2 | tr '.' 
'-')" + sudo apt-get update && sudo apt-get install wget --yes + ( + source /etc/os-release + wget -O cuda-keyring.deb "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${VERSION_ID//./}/$(uname -m)/cuda-keyring_1.0-1_all.deb" + sudo dpkg -i cuda-keyring.deb + ) + sudo apt-get update && sudo apt-get install "cuda-minimal-build-${CUDA_PKG_SUFFIX}" --yes + echo "PATH=/usr/local/cuda/bin${PATH:+:${PATH}}" >> "${GITHUB_ENV}" + echo "LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}" >> "${GITHUB_ENV}" + + PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cu$(echo "${CUDA_PKG_SUFFIX}" | tr -d '-')" echo "PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}" >> "${GITHUB_ENV}" - echo "Installed CUDA version is: ${CUDA_VERSION}" - echo "CUDA install location: ${{steps.cuda-toolkit.outputs.CUDA_PATH}}" - nvcc -V + /usr/local/cuda/bin/nvcc -V echo "Torch index URL: ${PIP_EXTRA_INDEX_URL}" - name: Upgrade pip diff --git a/CHANGELOG.md b/CHANGELOG.md index 10d29960..d69bb40d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -18,7 +18,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Changed - +- Rewrite setup CUDA Toolkit logic by [@XuehaiPan](https://github.com/XuehaiPan) in [#133](https://github.com/metaopt/torchopt/pull/133). ### Fixed From c58dff7b34ac4a6af8865c6c283c95085e12088d Mon Sep 17 00:00:00 2001 From: Xuehai Pan Date: Tue, 14 Feb 2023 00:42:03 +0800 Subject: [PATCH 21/24] fix(Dockerfile): fix missing versioned packages (#134) --- Dockerfile | 19 +++++++++---------- Makefile | 2 +- 2 files changed, 10 insertions(+), 11 deletions(-) diff --git a/Dockerfile b/Dockerfile index d34eda03..7295af74 100644 --- a/Dockerfile +++ b/Dockerfile @@ -16,12 +16,12 @@ SHELL ["/bin/bash", "-c"] # Install packages RUN apt-get update && \ apt-get install -y sudo ca-certificates openssl \ - git ssh build-essential gcc-10 g++-10 cmake make \ - python3.9-dev python3.9-venv graphviz && \ + git ssh build-essential gcc g++ cmake make \ + python3-dev python3-venv graphviz && \ rm -rf /var/lib/apt/lists/* ENV LANG C.UTF-8 -ENV CC=gcc-10 CXX=g++-10 +ENV CC=gcc CXX=g++ # Add a new user RUN useradd -m -s /bin/bash torchopt && \ @@ -30,7 +30,7 @@ USER torchopt RUN echo "export PS1='[\[\e[1;33m\]\u\[\e[0m\]:\[\e[1;35m\]\w\[\e[0m\]]\$ '" >> ~/.bashrc # Setup virtual environment -RUN /usr/bin/python3.9 -m venv --upgrade-deps ~/venv && rm -rf ~/.pip/cache +RUN /usr/bin/python3 -m venv --upgrade-deps ~/venv && rm -rf ~/.pip/cache RUN PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cu$(echo "${CUDA_VERSION}" | cut -d'.' -f-2 | tr -d '.')" && \ echo "export PIP_EXTRA_INDEX_URL='${PIP_EXTRA_INDEX_URL}'" >> ~/venv/bin/activate && \ echo "source /home/torchopt/venv/bin/activate" >> ~/.bashrc @@ -48,14 +48,13 @@ FROM builder AS devel-builder # Install extra dependencies RUN sudo apt-get update && \ - sudo apt-get install -y golang-1.16 clang-format clang-tidy && \ - sudo chown -R "$(whoami):$(whoami)" /usr/lib/go-1.16 && \ + sudo apt-get install -y golang clang-format clang-tidy && \ + sudo chown -R "$(whoami):$(whoami)" "$(realpath /usr/lib/go)" && \ sudo rm -rf /var/lib/apt/lists/* # Install addlicense -ENV GOPATH="/usr/lib/go-1.16" -ENV GOBIN="${GOPATH}/bin" -ENV GOROOT="${GOPATH}" +ENV GOROOT="/usr/lib/go" +ENV GOBIN="${GOROOT}/bin" ENV PATH="${GOBIN}:${PATH}" RUN go install github.com/google/addlicense@latest @@ -74,7 +73,7 @@ COPY --chown=torchopt . . # Install TorchOpt RUN source ~/venv/bin/activate && \ - python -m pip install -e . 
&& \ + make install-editable && \ rm -rf .eggs *.egg-info ~/.pip/cache ~/.cache/pip ENTRYPOINT [ "/bin/bash", "--login" ] diff --git a/Makefile b/Makefile index 241516f6..4fc5abad 100644 --- a/Makefile +++ b/Makefile @@ -102,7 +102,7 @@ clang-tidy-install: go-install: # requires go >= 1.16 - command -v go || (sudo apt-get install -y golang-1.16 && sudo ln -sf /usr/lib/go-1.16/bin/go /usr/bin/go) + command -v go || (sudo apt-get install -y golang && sudo ln -sf /usr/lib/go/bin/go /usr/bin/go) addlicense-install: go-install command -v addlicense || go install github.com/google/addlicense@latest From c67476bf31d385ff79ba7493393afde31dd66c18 Mon Sep 17 00:00:00 2001 From: Bo Liu Date: Wed, 15 Feb 2023 18:33:58 +0900 Subject: [PATCH 22/24] test: improve test coverage (#78) Co-authored-by: Jie Ren Co-authored-by: Bo Liu Co-authored-by: Xuehai Pan --- .github/workflows/tests.yml | 11 + CHANGELOG.md | 1 + CMakeLists.txt | 4 +- Makefile | 5 +- README.md | 2 +- codecov.yml | 9 + conda-recipe.yaml | 3 +- docs/conda-recipe.yaml | 2 +- docs/source/spelling_wordlist.txt | 1 + include/adam_op/adam_op.h | 3 +- include/adam_op/adam_op_impl_cpu.h | 3 +- include/adam_op/adam_op_impl_cuda.cuh | 3 +- include/common.h | 2 +- include/utils.h | 2 +- pyproject.toml | 9 +- setup.py | 8 +- src/adam_op/adam_op.cpp | 18 +- src/adam_op/adam_op_impl_cpu.cpp | 11 +- src/adam_op/adam_op_impl_cuda.cu | 11 +- tests/.coveragerc | 8 + tests/conftest.py | 2 +- tests/helpers.py | 53 +++- tests/requirements.txt | 1 - tests/test_accelerated_op.py | 193 ++++++++++++ tests/test_alias.py | 140 ++++++--- tests/test_clip.py | 8 +- tests/test_combine.py | 51 +++ tests/test_hook.py | 38 +++ tests/test_implicit.py | 11 +- tests/test_import.py | 365 ++++++++++++++++++++++ tests/test_linalg.py | 27 ++ tests/test_meta_optim.py | 75 ++++- tests/test_nn.py | 180 +++++++++++ tests/test_optim.py | 75 +---- tests/test_pytree.py | 214 +++++++++++++ tests/test_schedule.py | 8 +- tests/test_transform.py | 65 ++++ tests/test_utils.py | 140 +++++++++ tests/test_zero_order.py | 4 +- torchopt/_C/adam_op.pyi | 3 +- torchopt/__init__.py | 7 +- torchopt/accelerated_op/__init__.py | 9 +- torchopt/accelerated_op/_src/adam_op.py | 48 +-- torchopt/accelerated_op/adam_op.py | 61 ++-- torchopt/alias/adam.py | 41 ++- torchopt/alias/adamw.py | 47 ++- torchopt/alias/rmsprop.py | 45 ++- torchopt/alias/sgd.py | 37 ++- torchopt/alias/utils.py | 140 +++++++-- torchopt/base.py | 26 +- torchopt/clip.py | 15 +- torchopt/combine.py | 16 +- torchopt/diff/implicit/nn/module.py | 6 +- torchopt/diff/zero_order/decorator.py | 13 +- torchopt/diff/zero_order/nn/module.py | 4 +- torchopt/distributed/__init__.py | 4 +- torchopt/distributed/api.py | 2 +- torchopt/hook.py | 15 +- torchopt/nn/__init__.py | 3 +- torchopt/nn/module.py | 4 + torchopt/nn/stateless.py | 7 +- torchopt/pytree.py | 4 +- torchopt/transform/__init__.py | 5 +- torchopt/transform/add_decayed_weights.py | 38 ++- torchopt/transform/nan_to_num.py | 19 +- torchopt/transform/scale.py | 30 +- torchopt/transform/scale_by_adam.py | 64 ++-- torchopt/transform/scale_by_rms.py | 37 ++- torchopt/transform/scale_by_schedule.py | 29 +- torchopt/transform/scale_by_stddev.py | 37 ++- torchopt/transform/trace.py | 33 +- torchopt/transform/utils.py | 85 ++++- torchopt/typing.py | 14 +- torchopt/utils.py | 64 ++-- torchopt/visual.py | 9 - 75 files changed, 2329 insertions(+), 458 deletions(-) create mode 100644 codecov.yml create mode 100644 tests/.coveragerc create mode 100644 tests/test_accelerated_op.py create mode 100644 
tests/test_combine.py create mode 100644 tests/test_hook.py create mode 100644 tests/test_import.py create mode 100644 tests/test_linalg.py create mode 100644 tests/test_nn.py create mode 100644 tests/test_pytree.py create mode 100644 tests/test_transform.py create mode 100644 tests/test_utils.py diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index cbf8c350..8bee5b9d 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -87,6 +87,7 @@ jobs: make pytest - name: Upload coverage to Codecov + if: runner.os == 'Linux' uses: codecov/codecov-action@v3 with: token: ${{ secrets.CODECOV_TOKEN }} @@ -133,3 +134,13 @@ jobs: - name: Test with pytest run: | make pytest + + - name: Upload coverage to Codecov + if: runner.os == 'Linux' + uses: codecov/codecov-action@v3 + with: + token: ${{ secrets.CODECOV_TOKEN }} + file: ./tests/coverage.xml + flags: unittests + name: codecov-umbrella-pure-python + fail_ci_if_error: false diff --git a/CHANGELOG.md b/CHANGELOG.md index d69bb40d..74d23144 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -22,6 +22,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Fixed +- Update tests and fix corresponding bugs by [@XuehaiPan](https://github.com/XuehaiPan) and [@Benjamin-eecs](https://github.com/Benjamin-eecs) and [@JieRen98](https://github.com/JieRen98) in [#78](https://github.com/metaopt/torchopt/pull/78). - Fix memory leak in implicit MAML omniglot few-shot classification example with OOP APIs by [@XuehaiPan](https://github.com/XuehaiPan) in [#113](https://github.com/metaopt/torchopt/pull/113). ### Removed diff --git a/CMakeLists.txt b/CMakeLists.txt index 50f6144f..3b091a22 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -17,7 +17,7 @@ cmake_minimum_required(VERSION 3.11) # for FetchContent project(torchopt LANGUAGES CXX) include(FetchContent) -set(PYBIND11_VERSION v2.10.1) +set(PYBIND11_VERSION v2.10.3) if(NOT CMAKE_BUILD_TYPE) set(CMAKE_BUILD_TYPE Release) diff --git a/Makefile b/Makefile index 4fc5abad..f856cb21 100644 --- a/Makefile +++ b/Makefile @@ -54,7 +54,6 @@ py-format-install: mypy-install: $(call check_pip_install,mypy) - $(call check_pip_install,types-setuptools) pre-commit-install: $(call check_pip_install,pre-commit) @@ -110,9 +109,9 @@ addlicense-install: go-install # Tests pytest: test-install - cd tests && \ + cd tests && $(PYTHON) -c 'import $(PROJECT_NAME)' && \ $(PYTHON) -m pytest --verbose --color=yes --durations=0 \ - --cov="$(PROJECT_NAME)" --cov-report=xml --cov-report=term-missing \ + --cov="$(PROJECT_NAME)" --cov-config=.coveragerc --cov-report=xml --cov-report=term-missing \ $(PYTESTOPTS) . 
test: pytest diff --git a/README.md b/README.md index ea045c09..c1fb97ba 100644 --- a/README.md +++ b/README.md @@ -11,11 +11,11 @@ ![Python 3.7+](https://img.shields.io/badge/Python-3.7%2B-brightgreen.svg) ![PyPI](https://img.shields.io/pypi/v/torchopt?logo=pypi) ![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/metaopt/torchopt/tests.yml?label=tests&logo=github) + ![CodeCov](https://img.shields.io/codecov/c/gh/metaopt/torchopt) ![Documentation Status](https://img.shields.io/readthedocs/torchopt?logo=readthedocs) ![Downloads](https://static.pepy.tech/personalized-badge/torchopt?period=total&left_color=grey&right_color=blue&left_text=downloads) ![GitHub Repo Stars](https://img.shields.io/github/stars/metaopt/torchopt?color=brightgreen&logo=github) ![License](https://img.shields.io/github/license/metaopt/torchopt?label=license&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNCAyNCIgd2lkdGg9IjI0IiBoZWlnaHQ9IjI0IiBmaWxsPSIjZmZmZmZmIj48cGF0aCBmaWxsLXJ1bGU9ImV2ZW5vZGQiIGQ9Ik0xMi43NSAyLjc1YS43NS43NSAwIDAwLTEuNSAwVjQuNUg5LjI3NmExLjc1IDEuNzUgMCAwMC0uOTg1LjMwM0w2LjU5NiA1Ljk1N0EuMjUuMjUgMCAwMTYuNDU1IDZIMi4zNTNhLjc1Ljc1IDAgMTAwIDEuNUgzLjkzTC41NjMgMTUuMThhLjc2Mi43NjIgMCAwMC4yMS44OGMuMDguMDY0LjE2MS4xMjUuMzA5LjIyMS4xODYuMTIxLjQ1Mi4yNzguNzkyLjQzMy42OC4zMTEgMS42NjIuNjIgMi44NzYuNjJhNi45MTkgNi45MTkgMCAwMDIuODc2LS42MmMuMzQtLjE1NS42MDYtLjMxMi43OTItLjQzMy4xNS0uMDk3LjIzLS4xNTguMzEtLjIyM2EuNzUuNzUgMCAwMC4yMDktLjg3OEw1LjU2OSA3LjVoLjg4NmMuMzUxIDAgLjY5NC0uMTA2Ljk4NC0uMzAzbDEuNjk2LTEuMTU0QS4yNS4yNSAwIDAxOS4yNzUgNmgxLjk3NXYxNC41SDYuNzYzYS43NS43NSAwIDAwMCAxLjVoMTAuNDc0YS43NS43NSAwIDAwMC0xLjVIMTIuNzVWNmgxLjk3NGMuMDUgMCAuMS4wMTUuMTQuMDQzbDEuNjk3IDEuMTU0Yy4yOS4xOTcuNjMzLjMwMy45ODQuMzAzaC44ODZsLTMuMzY4IDcuNjhhLjc1Ljc1IDAgMDAuMjMuODk2Yy4wMTIuMDA5IDAgMCAuMDAyIDBhMy4xNTQgMy4xNTQgMCAwMC4zMS4yMDZjLjE4NS4xMTIuNDUuMjU2Ljc5LjRhNy4zNDMgNy4zNDMgMCAwMDIuODU1LjU2OCA3LjM0MyA3LjM0MyAwIDAwMi44NTYtLjU2OWMuMzM4LS4xNDMuNjA0LS4yODcuNzktLjM5OWEzLjUgMy41IDAgMDAuMzEtLjIwNi43NS43NSAwIDAwLjIzLS44OTZMMjAuMDcgNy41aDEuNTc4YS43NS43NSAwIDAwMC0xLjVoLTQuMTAyYS4yNS4yNSAwIDAxLS4xNC0uMDQzbC0xLjY5Ny0xLjE1NGExLjc1IDEuNzUgMCAwMC0uOTg0LS4zMDNIMTIuNzVWMi43NXpNMi4xOTMgMTUuMTk4YTUuNDE4IDUuNDE4IDAgMDAyLjU1Ny42MzUgNS40MTggNS40MTggMCAwMDIuNTU3LS42MzVMNC43NSA5LjM2OGwtMi41NTcgNS44M3ptMTQuNTEtLjAyNGMuMDgyLjA0LjE3NC4wODMuMjc1LjEyNi41My4yMjMgMS4zMDUuNDUgMi4yNzIuNDVhNS44NDYgNS44NDYgMCAwMDIuNTQ3LS41NzZMMTkuMjUgOS4zNjdsLTIuNTQ3IDUuODA3eiI+PC9wYXRoPjwvc3ZnPgo=) -

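The `codecov.yml` added below tunes how the Codecov checks behave: the project status gets a small threshold, and the patch status is made informational. As a rough sketch of the semantics (an illustration of the config, not Codecov's actual implementation):

    def project_status_passes(head_coverage: float, base_coverage: float,
                              threshold: float = 0.05) -> bool:
        # The project check passes as long as total coverage does not drop by
        # more than `threshold` percentage points relative to the base commit.
        # The `patch` status below is `informational`, so it reports coverage
        # of the changed lines but never fails the pull request.
        return head_coverage >= base_coverage - threshold
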
diff --git a/codecov.yml b/codecov.yml new file mode 100644 index 00000000..65b70e6e --- /dev/null +++ b/codecov.yml @@ -0,0 +1,9 @@ +coverage: + round: nearest + status: + project: + default: + threshold: 0.05% + patch: + default: + informational: true diff --git a/conda-recipe.yaml b/conda-recipe.yaml index 74d2d94d..faee0a7c 100644 --- a/conda-recipe.yaml +++ b/conda-recipe.yaml @@ -88,12 +88,11 @@ dependencies: - conda-forge::black-jupyter >= 22.6.0 - pylint >= 2.15.0 - mypy >= 0.990 - - types-setuptools - flake8 - flake8-bugbear - doc8 < 1.0.0a0 - pydocstyle - clang-format >= 14 - - clang-tools # clang-tidy + - clang-tools >= 14 # clang-tidy - cpplint - pre-commit diff --git a/docs/conda-recipe.yaml b/docs/conda-recipe.yaml index cc68310e..9a14af3f 100644 --- a/docs/conda-recipe.yaml +++ b/docs/conda-recipe.yaml @@ -62,7 +62,7 @@ dependencies: - sphinx-copybutton - sphinxcontrib-spelling - sphinxcontrib-bibtex - - sphinx-autodoc-typehints >= 1.19.2, != 1.23.4 + - sphinx-autodoc-typehints >= 1.19.2 - pyenchant - hunspell-en - myst-nb diff --git a/docs/source/spelling_wordlist.txt b/docs/source/spelling_wordlist.txt index ca421ec8..8f9d6895 100644 --- a/docs/source/spelling_wordlist.txt +++ b/docs/source/spelling_wordlist.txt @@ -161,6 +161,7 @@ argmin Jacobian autodiff backend +reparametrize reparameterize rtype backpropagate diff --git a/include/adam_op/adam_op.h b/include/adam_op/adam_op.h index 76baea3f..a49b0a06 100644 --- a/include/adam_op/adam_op.h +++ b/include/adam_op/adam_op.h @@ -1,4 +1,4 @@ -// Copyright 2022 MetaOPT Team. All Rights Reserved. +// Copyright 2022-2023 MetaOPT Team. All Rights Reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. @@ -68,6 +68,7 @@ TensorArray<2> adamBackwardUpdates(const torch::Tensor &dupdates, const torch::Tensor &new_nu, const pyfloat_t b1, const pyfloat_t b2, + const pyfloat_t eps_root, const pyuint_t count); void buildSubmodule(py::module &mod); // NOLINT[runtime/references] diff --git a/include/adam_op/adam_op_impl_cpu.h b/include/adam_op/adam_op_impl_cpu.h index 20f12ae1..37aba528 100644 --- a/include/adam_op/adam_op_impl_cpu.h +++ b/include/adam_op/adam_op_impl_cpu.h @@ -1,4 +1,4 @@ -// Copyright 2022 MetaOPT Team. All Rights Reserved. +// Copyright 2022-2023 MetaOPT Team. All Rights Reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. @@ -64,6 +64,7 @@ TensorArray<2> adamBackwardUpdatesCPU(const torch::Tensor &dupdates, const torch::Tensor &new_nu, const pyfloat_t b1, const pyfloat_t b2, + const pyfloat_t eps_root, const pyuint_t count); } // namespace adam_op } // namespace torchopt diff --git a/include/adam_op/adam_op_impl_cuda.cuh b/include/adam_op/adam_op_impl_cuda.cuh index cdb3ae58..6e661564 100644 --- a/include/adam_op/adam_op_impl_cuda.cuh +++ b/include/adam_op/adam_op_impl_cuda.cuh @@ -1,4 +1,4 @@ -// Copyright 2022 MetaOPT Team. All Rights Reserved. +// Copyright 2022-2023 MetaOPT Team. All Rights Reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
@@ -64,6 +64,7 @@ TensorArray<2> adamBackwardUpdatesCUDA(const torch::Tensor &dupdates, const torch::Tensor &new_nu, const pyfloat_t b1, const pyfloat_t b2, + const pyfloat_t eps_root, const pyuint_t count); } // namespace adam_op } // namespace torchopt diff --git a/include/common.h b/include/common.h index ac281eb9..65f9ef33 100644 --- a/include/common.h +++ b/include/common.h @@ -1,4 +1,4 @@ -// Copyright 2022 MetaOPT Team. All Rights Reserved. +// Copyright 2022-2023 MetaOPT Team. All Rights Reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. diff --git a/include/utils.h b/include/utils.h index d5cd2e00..0ef98539 100644 --- a/include/utils.h +++ b/include/utils.h @@ -1,4 +1,4 @@ -// Copyright 2022 MetaOPT Team. All Rights Reserved. +// Copyright 2022-2023 MetaOPT Team. All Rights Reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. diff --git a/pyproject.toml b/pyproject.toml index 0b0fb346..12fd6fe3 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -72,10 +72,9 @@ lint = [ "black[jupyter] >= 22.6.0", "pylint[spelling] >= 2.15.0", "mypy >= 0.990", - "types-setuptools", "flake8", "flake8-bugbear", - "doc8 < 1.0.0a0", + "doc8 < 1.0.0a0", # unpin this when we drop support for Python 3.7 "pydocstyle[toml]", "pyenchant", "cpplint", @@ -209,3 +208,9 @@ convention = "google" [tool.doc8] max-line-length = 500 + +[tool.pytest.ini_options] +filterwarnings = [ + "error", + 'ignore:Explicitly requested dtype float64 requested in .* is not available, and will be truncated to dtype float32\.:UserWarning', +] diff --git a/setup.py b/setup.py index 75f32750..0297d43e 100644 --- a/setup.py +++ b/setup.py @@ -77,17 +77,17 @@ def build_extension(self, ext): and hasattr(self, 'parallel') and self.parallel ): - build_args.append(f'--parallel={self.parallel}') + build_args.extend(['--parallel', str(self.parallel)]) else: build_args.append('--parallel') - build_args.extend([f'--target={ext.target}', '--']) + build_args.extend(['--target', ext.target, '--']) try: os.chdir(build_temp) - self.spawn(['cmake', ext.source_dir] + cmake_args) + self.spawn([cmake, ext.source_dir] + cmake_args) if not self.dry_run: - self.spawn(['cmake', '--build', '.'] + build_args) + self.spawn([cmake, '--build', '.'] + build_args) finally: os.chdir(HERE) diff --git a/src/adam_op/adam_op.cpp b/src/adam_op/adam_op.cpp index 57b6ee0f..08c9fb74 100644 --- a/src/adam_op/adam_op.cpp +++ b/src/adam_op/adam_op.cpp @@ -1,4 +1,4 @@ -// Copyright 2022 MetaOPT Team. All Rights Reserved. +// Copyright 2022-2023 MetaOPT Team. All Rights Reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
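The hunks that follow pass the incoming cotangents through `.contiguous()` before dispatching to the CPU/CUDA kernels. Those kernels index raw storage and therefore assume a dense row-major layout, which gradients arriving from autograd are not guaranteed to have. A minimal illustration of the failure mode being guarded against (our own example, not TorchOpt code):

    import torch

    x = torch.arange(6.0).reshape(2, 3)
    y = x.t()                     # a transposed view that shares storage with `x`
    assert not y.is_contiguous()  # memory order no longer matches logical order
    z = y.contiguous()            # dense row-major copy, safe for pointer-based kernels
    assert torch.equal(y, z)
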
@@ -104,11 +104,11 @@ TensorArray<2> adamBackwardMu(const torch::Tensor &dmu, const pyfloat_t b1) { #if defined(__USE_CUDA__) if (dmu.device().is_cuda()) { - return adamBackwardMuCUDA(dmu, updates, mu, b1); + return adamBackwardMuCUDA(dmu.contiguous(), updates, mu, b1); } #endif if (dmu.device().is_cpu()) { - return adamBackwardMuCPU(dmu, updates, mu, b1); + return adamBackwardMuCPU(dmu.contiguous(), updates, mu, b1); } else { throw std::runtime_error("Not implemented"); } @@ -120,11 +120,11 @@ TensorArray<2> adamBackwardNu(const torch::Tensor &dnu, const pyfloat_t b2) { #if defined(__USE_CUDA__) if (dnu.device().is_cuda()) { - return adamBackwardNuCUDA(dnu, updates, nu, b2); + return adamBackwardNuCUDA(dnu.contiguous(), updates, nu, b2); } #endif if (dnu.device().is_cpu()) { - return adamBackwardNuCPU(dnu, updates, nu, b2); + return adamBackwardNuCPU(dnu.contiguous(), updates, nu, b2); } else { throw std::runtime_error("Not implemented"); } @@ -136,14 +136,17 @@ TensorArray<2> adamBackwardUpdates(const torch::Tensor &dupdates, const torch::Tensor &new_nu, const pyfloat_t b1, const pyfloat_t b2, + const pyfloat_t eps_root, const pyuint_t count) { #if defined(__USE_CUDA__) if (dupdates.device().is_cuda()) { - return adamBackwardUpdatesCUDA(dupdates, updates, new_mu, new_nu, b1, b2, count); + return adamBackwardUpdatesCUDA( + dupdates.contiguous(), updates, new_mu, new_nu, b1, b2, eps_root, count); } #endif if (dupdates.device().is_cpu()) { - return adamBackwardUpdatesCPU(dupdates, updates, new_mu, new_nu, b1, b2, count); + return adamBackwardUpdatesCPU( + dupdates.contiguous(), updates, new_mu, new_nu, b1, b2, eps_root, count); } else { throw std::runtime_error("Not implemented"); } @@ -207,6 +210,7 @@ void buildSubmodule(py::module &mod) { // NOLINT[runtime/references] py::arg("new_nu"), py::arg("b1"), py::arg("b2"), + py::arg("eps_root"), py::arg("count")); } diff --git a/src/adam_op/adam_op_impl_cpu.cpp b/src/adam_op/adam_op_impl_cpu.cpp index e242bedf..b9c14e49 100644 --- a/src/adam_op/adam_op_impl_cpu.cpp +++ b/src/adam_op/adam_op_impl_cpu.cpp @@ -1,4 +1,4 @@ -// Copyright 2022 MetaOPT Team. All Rights Reserved. +// Copyright 2022-2023 MetaOPT Team. All Rights Reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
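The forward-kernel hunks below factor the bias-corrected moments into `mu_hat` and `nu_hat` before forming the update. As a scalar pure-Python sketch of the arithmetic (the function name and layout are ours; the formulas mirror the kernel bodies):

    import math

    def adam_forward(update, mu, nu, b1, b2, eps, eps_root, count):
        mu_out = b1 * mu + (1 - b1) * update           # first-moment running average
        nu_out = b2 * nu + (1 - b2) * update * update  # second-moment running average
        mu_hat = mu_out / (1 - b1**count)              # bias correction
        nu_hat = nu_out / (1 - b2**count)
        # `eps_root` regularizes the sqrt argument; `eps` regularizes the division.
        return mu_out, nu_out, mu_hat / (math.sqrt(nu_hat + eps_root) + eps)
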
@@ -49,8 +49,10 @@ void adamForwardInplaceCPUKernel(const other_t b1, const scalar_t mu_out = b1 * mu + (1 - b1) * updates; const scalar_t nu_out = b2 * nu + (1 - b2) * updates * updates; - const scalar_t updates_out = - mu_out * inv_one_minus_pow_b1 / (sqrt(nu_out * inv_one_minus_pow_b2 + eps_root) + eps); + const scalar_t mu_hat = mu_out * inv_one_minus_pow_b1; + const scalar_t nu_hat = nu_out * inv_one_minus_pow_b2; + + const scalar_t updates_out = mu_hat / (sqrt(nu_hat + eps_root) + eps); mu_ptr[tid] = mu_out; nu_ptr[tid] = nu_out; @@ -313,10 +315,11 @@ TensorArray<2> adamBackwardUpdatesCPU(const torch::Tensor &dupdates, const torch::Tensor &new_nu, const pyfloat_t b1, const pyfloat_t b2, + const pyfloat_t eps_root, const pyuint_t count) { using other_t = pyfloat_t; const other_t one_minus_pow_b1 = 1 - std::pow(b1, count); - const other_t inv_one_minus_pow_b2 = 1 / (1 - std::pow(b2, count)); + const other_t inv_one_minus_pow_b2 = 1 / (1 - std::pow(b2, count) + eps_root); auto dmu_out = torch::empty_like(new_mu); auto dnu_out = torch::empty_like(new_nu); diff --git a/src/adam_op/adam_op_impl_cuda.cu b/src/adam_op/adam_op_impl_cuda.cu index 4b65869f..ea1526a6 100644 --- a/src/adam_op/adam_op_impl_cuda.cu +++ b/src/adam_op/adam_op_impl_cuda.cu @@ -1,4 +1,4 @@ -// Copyright 2022 MetaOPT Team. All Rights Reserved. +// Copyright 2022-2023 MetaOPT Team. All Rights Reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. @@ -51,8 +51,10 @@ __global__ void adamForwardInplaceCUDAKernel(const other_t b1, const scalar_t mu_out = b1 * mu + (1 - b1) * updates; const scalar_t nu_out = b2 * nu + (1 - b2) * updates * updates; - const scalar_t updates_out = - mu_out * inv_one_minus_pow_b1 / (sqrt(nu_out * inv_one_minus_pow_b2 + eps_root) + eps); + const scalar_t mu_hat = mu_out * inv_one_minus_pow_b1; + const scalar_t nu_hat = nu_out * inv_one_minus_pow_b2; + + const scalar_t updates_out = mu_hat / (sqrt(nu_hat + eps_root) + eps); mu_ptr[tid] = mu_out; nu_ptr[tid] = nu_out; @@ -445,10 +447,11 @@ TensorArray<2> adamBackwardUpdatesCUDA(const torch::Tensor &dupdates, const torch::Tensor &new_nu, const pyfloat_t b1, const pyfloat_t b2, + const pyfloat_t eps_root, const pyuint_t count) { using other_t = pyfloat_t; const other_t one_minus_pow_b1 = 1 - std::pow(b1, count); - const other_t inv_one_minus_pow_b2 = 1 / (1 - std::pow(b2, count)); + const other_t inv_one_minus_pow_b2 = 1 / (1 - std::pow(b2, count) + eps_root); auto dmu_out = torch::empty_like(new_mu); auto dnu_out = torch::empty_like(new_nu); diff --git a/tests/.coveragerc b/tests/.coveragerc new file mode 100644 index 00000000..462c4c3a --- /dev/null +++ b/tests/.coveragerc @@ -0,0 +1,8 @@ +[run] +omit = + ../torchopt/distributed/* + ../torchopt/visual.py + ../torchopt/version.py + ../docs/* + ../examples/* + ../tutorials/* diff --git a/tests/conftest.py b/tests/conftest.py index 41b7db0b..eaa734b2 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
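The next diff teaches the test helpers to compare whole pytrees instead of looping over leaves at each call site. A hypothetical usage sketch (the tensors are made up; `assert_pytree_all_close` is the helper added below):

    import torch

    import helpers

    actual = {'w': torch.tensor([1.0, 2.0]), 'b': torch.tensor(0.5)}
    expected = {'w': torch.tensor([1.0, 2.0]), 'b': torch.tensor(0.5)}
    # The tree structures are compared first; each matched pair of leaves is
    # then checked with the same tolerances as `assert_all_close`.
    helpers.assert_pytree_all_close(actual, expected)
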
diff --git a/tests/helpers.py b/tests/helpers.py index fce77725..4bba706e 100644 --- a/tests/helpers.py +++ b/tests/helpers.py @@ -26,6 +26,9 @@ import torch.types from torch.utils import data +from torchopt import pytree +from torchopt.typing import TensorTree + BATCH_SIZE = 64 NUM_UPDATES = 5 @@ -100,12 +103,8 @@ def forward(self, x: torch.Tensor) -> torch.Tensor: @torch.no_grad() -def get_models( - device: torch.types.Device = None, dtype: torch.dtype = torch.float32 -) -> Tuple[nn.Module, nn.Module, nn.Module, data.DataLoader]: - seed_everything(seed=42) - - model_base = nn.Sequential( +def get_model(): + return nn.Sequential( MyLinear( in_features=MODEL_NUM_INPUTS, out_features=MODEL_HIDDEN_SIZE, @@ -132,7 +131,16 @@ def get_models( bias=False, ), nn.Softmax(dim=-1), - ).to(dtype=dtype) + ) + + +@torch.no_grad() +def get_models( + device: torch.types.Device = None, dtype: torch.dtype = torch.float32 +) -> Tuple[nn.Module, nn.Module, nn.Module, data.DataLoader]: + seed_everything(seed=42) + + model_base = get_model().to(dtype=dtype) for name, param in model_base.named_parameters(recurse=True): if name.endswith('weight') and param.ndim >= 2: nn.init.orthogonal_(param) @@ -165,7 +173,7 @@ def assert_model_all_close( rtol: Optional[float] = None, atol: Optional[float] = None, equal_nan: bool = False, -): +) -> None: if isinstance(model, tuple): params, buffers = model elif isinstance(model, nn.Module): @@ -209,3 +217,32 @@ def assert_all_close( equal_nan=equal_nan, check_dtype=True, ) + + +@torch.no_grad() +def assert_pytree_all_close( + actual: TensorTree, + expected: TensorTree, + base: Optional[TensorTree] = None, + rtol: Optional[float] = None, + atol: Optional[float] = None, + equal_nan: bool = False, +) -> None: + actual_leaves, actual_treespec = pytree.tree_flatten(actual) + expected_leaves, expected_treespec = pytree.tree_flatten(expected) + assert actual_treespec == expected_treespec + if base is not None: + base_leaves, base_treespec = pytree.tree_flatten(base) + assert base_treespec == expected_treespec + else: + base_leaves = [None] * len(actual_leaves) + + for actual_leaf, expected_leaf, base_leaf in zip(actual_leaves, expected_leaves, base_leaves): + assert_all_close( + actual_leaf, + expected_leaf, + base=base_leaf, + rtol=rtol, + atol=atol, + equal_nan=equal_nan, + ) diff --git a/tests/requirements.txt b/tests/requirements.txt index 8404dd65..6706dca5 100644 --- a/tests/requirements.txt +++ b/tests/requirements.txt @@ -14,7 +14,6 @@ isort >= 5.11.0 black[jupyter] >= 22.6.0 pylint[spelling] >= 2.15.0 mypy >= 0.990 -types-setuptools flake8 flake8-bugbear # https://github.com/PyCQA/doc8/issues/112 diff --git a/tests/test_accelerated_op.py b/tests/test_accelerated_op.py new file mode 100644 index 00000000..4821a03d --- /dev/null +++ b/tests/test_accelerated_op.py @@ -0,0 +1,193 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# ============================================================================== + +import functorch +import torch +import torch.nn.functional as F + +import helpers +import torchopt + + +try: + import torchopt._C.adam_op +except ImportError: + CXX_ACCELERATED_OP_AVAILABLE = False +else: + CXX_ACCELERATED_OP_AVAILABLE = True + + +def test_accelerated_op_is_available() -> None: + assert torchopt.accelerated_op_available('cpu') + assert torchopt.accelerated_op_available(torch.device('cpu')) + + if CXX_ACCELERATED_OP_AVAILABLE: + assert not torchopt.accelerated_op_available('meta') + assert not torchopt.accelerated_op_available(torch.device('meta')) + assert not torchopt.accelerated_op_available(['cpu', 'meta']) + assert not torchopt.accelerated_op_available([torch.device('cpu'), torch.device('meta')]) + else: + assert torchopt.accelerated_op_available('meta') + assert torchopt.accelerated_op_available(torch.device('meta')) + assert torchopt.accelerated_op_available(['cpu', 'meta']) + assert torchopt.accelerated_op_available([torch.device('cpu'), torch.device('meta')]) + + if torch.cuda.is_available(): + assert torchopt.accelerated_op_available() + assert torchopt.accelerated_op_available('cuda') + assert torchopt.accelerated_op_available('cuda:0') + assert torchopt.accelerated_op_available(0) + assert torchopt.accelerated_op_available(['cpu', 'cuda']) + assert torchopt.accelerated_op_available(['cpu', 'cuda:0']) + assert torchopt.accelerated_op_available(['cpu', 0]) + else: + assert not torchopt.accelerated_op_available() + assert not torchopt.accelerated_op_available('cuda') + assert not torchopt.accelerated_op_available('cuda:0') + assert not torchopt.accelerated_op_available(0) + assert not torchopt.accelerated_op_available(['cpu', 'cuda']) + assert not torchopt.accelerated_op_available(['cpu', 'cuda:0']) + assert not torchopt.accelerated_op_available(['cpu', 0]) + + +@helpers.parametrize( + dtype=[torch.float64, torch.float32], + lr=[1e-2, 1e-3, 1e-4], + inplace=[True, False], +) +def test_accelerated_op( + dtype: torch.dtype, + lr: float, + inplace: bool, +) -> None: + if dtype is torch.float32 and inplace: + return + model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) + + fmodel, params, buffers = functorch.make_functional_with_buffers(model) + optim = torchopt.adam( + lr, + use_accelerated_op=True, + ) + optim_state = optim.init(params) + + fmodel_ref, params_ref, buffers_ref = functorch.make_functional_with_buffers(model_ref) + optim_ref = torchopt.adam( + lr, + use_accelerated_op=False, + ) + optim_state_ref = optim_ref.init(params_ref) + + for xs, ys in loader: + xs = xs.to(dtype=dtype) + pred = fmodel(params, buffers, xs) + pred_ref = fmodel_ref(params_ref, buffers_ref, xs) + loss = F.cross_entropy(pred, ys) + loss_ref = F.cross_entropy(pred_ref, ys) + + grads = torch.autograd.grad(loss, params, allow_unused=True) + updates, optim_state = optim.update(grads, optim_state, params=params, inplace=inplace) + params = torchopt.apply_updates(params, updates, inplace=inplace) + + grads = torch.autograd.grad(loss_ref, params_ref, allow_unused=True) + updates, optim_state_ref = optim_ref.update( + grads, optim_state_ref, params=params, inplace=inplace + ) + params_ref = torchopt.apply_updates(params_ref, updates, inplace=inplace) + + helpers.assert_pytree_all_close(params, params_ref) + + +@helpers.parametrize( + dtype=[torch.float64, torch.float32], + outer_lr=[1e-2, 1e-3, 1e-4], + inner_lr=[1e-2, 1e-3, 1e-4], + inner_update=[2, 3, 5], + inplace=[True, 
False], +) +def test_maml_accelerated_op( + dtype: torch.dtype, + outer_lr: float, + inner_lr: float, + inner_update: int, + inplace: bool, +) -> None: + model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) + + fmodel, params, buffers = functorch.make_functional_with_buffers(model) + outer_optim = torchopt.adam( + outer_lr, + use_accelerated_op=True, + ) + outer_optim_state = outer_optim.init(params) + + fmodel_ref, params_ref, buffers_ref = functorch.make_functional_with_buffers(model_ref) + outer_optim_ref = torchopt.adam( + outer_lr, + use_accelerated_op=False, + ) + outer_optim_state_ref = outer_optim_ref.init(params_ref) + + def maml_inner_solver(params, data, use_accelerated_op): + # Initial functional optimizer based on TorchOpt + x, y, f, b = data + inner_optimizer = torchopt.adam( + inner_lr, + use_accelerated_op=use_accelerated_op, + ) + inner_opt_state = inner_optimizer.init(params) + with torch.enable_grad(): + # Temporarily enable gradient computation for conducting the optimization + for _ in range(inner_update): + pred = f(params, b, x) + inner_loss = F.cross_entropy(pred, y) # compute loss + grads = torch.autograd.grad( + inner_loss, params, allow_unused=True + ) # compute gradients + updates, inner_opt_state = inner_optimizer.update( + grads, inner_opt_state, inplace=False + ) # get updates + params = torchopt.apply_updates(params, updates, inplace=False) + return (params, b) + + for xs, ys in loader: + xs = xs.to(dtype=dtype) + data = (xs, ys, fmodel, buffers) + data_ref = (xs, ys, fmodel_ref, buffers_ref) + + params_prime, buffers_prime = maml_inner_solver(params, data, use_accelerated_op=True) + params_prime_ref, buffers_prime_ref = maml_inner_solver( + params_ref, data_ref, use_accelerated_op=False + ) + + pred = fmodel(params_prime, buffers_prime, xs) + pred_ref = fmodel_ref(params_prime_ref, buffers_prime_ref, xs) + outer_loss = F.cross_entropy(pred, ys) + outer_loss_ref = F.cross_entropy(pred_ref, ys) + + grads = torch.autograd.grad(outer_loss, params, allow_unused=True) + updates, outer_optim_state = outer_optim.update( + grads, outer_optim_state, params=params, inplace=inplace + ) + params = torchopt.apply_updates(params, updates, inplace=inplace) + + grads = torch.autograd.grad(outer_loss_ref, params_ref, allow_unused=True) + updates, outer_optim_state_ref = outer_optim_ref.update( + grads, outer_optim_state_ref, params=params, inplace=inplace + ) + params_ref = torchopt.apply_updates(params_ref, updates, inplace=inplace) + + torchopt.stop_gradient(model) + torchopt.stop_gradient(model_ref) diff --git a/tests/test_alias.py b/tests/test_alias.py index 50b42835..c613d7d5 100644 --- a/tests/test_alias.py +++ b/tests/test_alias.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -13,7 +13,7 @@ # limitations under the License. 
# ============================================================================== -from typing import Tuple +from typing import Callable, Tuple import functorch import pytest @@ -22,6 +22,7 @@ import helpers import torchopt +from torchopt.alias.utils import _set_use_chain_flat @helpers.parametrize( @@ -33,6 +34,7 @@ inplace=[True, False], weight_decay=[0.0, 1e-2], maximize=[False, True], + use_chain_flat=[True, False], ) def test_sgd( dtype: torch.dtype, @@ -43,10 +45,13 @@ def test_sgd( inplace: bool, weight_decay: float, maximize: bool, + use_chain_flat: bool, ) -> None: if nesterov and (momentum <= 0.0 or dampening != 0.0): pytest.skip('Nesterov momentum requires a momentum and zero dampening.') + _set_use_chain_flat(use_chain_flat) + model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) fmodel, params, buffers = functorch.make_functional_with_buffers(model) @@ -85,6 +90,7 @@ def test_sgd( optim_ref.step() helpers.assert_model_all_close((params, buffers), model_ref, model_base, dtype=dtype) + _set_use_chain_flat(True) @helpers.parametrize( @@ -95,6 +101,8 @@ def test_sgd( inplace=[True, False], weight_decay=[0.0, 1e-2], maximize=[False, True], + use_accelerated_op=[False, True], + use_chain_flat=[True, False], ) def test_adam( dtype: torch.dtype, @@ -104,7 +112,11 @@ def test_adam( inplace: bool, weight_decay: float, maximize: bool, + use_accelerated_op: bool, + use_chain_flat: bool, ) -> None: + _set_use_chain_flat(use_chain_flat) + model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) fmodel, params, buffers = functorch.make_functional_with_buffers(model) @@ -115,6 +127,7 @@ def test_adam( eps_root=0.0, weight_decay=weight_decay, maximize=maximize, + use_accelerated_op=use_accelerated_op, ) optim_state = optim.init(params) optim_ref = torch.optim.Adam( @@ -143,64 +156,97 @@ def test_adam( optim_ref.step() helpers.assert_model_all_close((params, buffers), model_ref, model_base, dtype=dtype) + _set_use_chain_flat(True) @helpers.parametrize( dtype=[torch.float64], - lr=[1e-2, 1e-3, 1e-4], + outer_lr=[1e-2, 1e-3, 1e-4], + inner_lr=[1e-2, 1e-3, 1e-4], + inner_update=[2, 3, 5], betas=[(0.9, 0.999), (0.95, 0.9995)], eps=[1e-8], inplace=[True, False], weight_decay=[0.0, 1e-2], maximize=[False, True], + use_accelerated_op=[False, True], + use_chain_flat=[True, False], ) -def test_adamw( +def test_maml_adam( dtype: torch.dtype, - lr: float, + outer_lr: float, + inner_lr: float, + inner_update: int, betas: Tuple[float, float], eps: float, inplace: bool, weight_decay: float, maximize: bool, + use_accelerated_op: bool, + use_chain_flat: bool, ) -> None: + _set_use_chain_flat(use_chain_flat) + model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) fmodel, params, buffers = functorch.make_functional_with_buffers(model) - optim = torchopt.adamw( - lr, + outer_optim = torchopt.adam( + outer_lr, betas=betas, eps=eps, eps_root=0.0, weight_decay=weight_decay, maximize=maximize, + use_accelerated_op=use_accelerated_op, ) - optim_state = optim.init(params) - optim_ref = torch.optim.AdamW( - model_ref.parameters(), - lr, - betas=betas, - eps=eps, - amsgrad=False, - weight_decay=weight_decay, - maximize=maximize, - ) + outer_optim_state = outer_optim.init(params) + + def maml_inner_solver_torchopt(params, data, use_accelerated_op): + # Initial functional optimizer based on TorchOpt + x, y, f, b = data + inner_optimizer = torchopt.adam( + inner_lr, + betas=betas, + eps=eps, + eps_root=0.0, + weight_decay=weight_decay, + 
maximize=maximize, + use_accelerated_op=use_accelerated_op, + ) + inner_opt_state = inner_optimizer.init(params) + with torch.enable_grad(): + # Temporarily enable gradient computation for conducting the optimization + for _ in range(inner_update): + pred = f(params, b, x) + inner_loss = F.cross_entropy(pred, y) # compute loss + grads = torch.autograd.grad( + inner_loss, params, allow_unused=True + ) # compute gradients + updates, inner_opt_state = inner_optimizer.update( + grads, inner_opt_state, params=params, inplace=False + ) # get updates + params = torchopt.apply_updates(params, updates, inplace=False) + return (params, b) for xs, ys in loader: xs = xs.to(dtype=dtype) - pred = fmodel(params, buffers, xs) - pred_ref = model_ref(xs) - loss = F.cross_entropy(pred, ys) - loss_ref = F.cross_entropy(pred_ref, ys) - - grads = torch.autograd.grad(loss, params, allow_unused=True) - updates, optim_state = optim.update(grads, optim_state, params=params, inplace=inplace) + data = (xs, ys, fmodel, buffers) + + params_prime, buffers_prime = maml_inner_solver_torchopt( + params, data, use_accelerated_op=True + ) + pred = fmodel(params_prime, buffers_prime, xs) + outer_loss = F.cross_entropy(pred, ys) + + grads = torch.autograd.grad(outer_loss, params, allow_unused=True) + updates, outer_optim_state = outer_optim.update( + grads, outer_optim_state, params=params, inplace=inplace + ) params = torchopt.apply_updates(params, updates, inplace=inplace) - optim_ref.zero_grad() - loss_ref.backward() - optim_ref.step() + torchopt.stop_gradient(model) - helpers.assert_model_all_close((params, buffers), model_ref, model_base, dtype=dtype) + _set_use_chain_flat(True) @helpers.parametrize( @@ -209,10 +255,12 @@ def test_adamw( betas=[(0.9, 0.999), (0.95, 0.9995)], eps=[1e-8], inplace=[True, False], - weight_decay=[1e-2, 1e-1], + weight_decay=[0.0, 1e-2], maximize=[False, True], + use_accelerated_op=[False, True], + use_chain_flat=[True, False], ) -def test_adam_accelerated_cpu( +def test_adamw( dtype: torch.dtype, lr: float, betas: Tuple[float, float], @@ -220,21 +268,25 @@ def test_adam_accelerated_cpu( inplace: bool, weight_decay: float, maximize: bool, + use_accelerated_op: bool, + use_chain_flat: bool, ) -> None: + _set_use_chain_flat(use_chain_flat) + model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) fmodel, params, buffers = functorch.make_functional_with_buffers(model) - optim = torchopt.adam( + optim = torchopt.adamw( lr, betas=betas, eps=eps, eps_root=0.0, weight_decay=weight_decay, maximize=maximize, - use_accelerated_op=True, + use_accelerated_op=use_accelerated_op, ) optim_state = optim.init(params) - optim_ref = torch.optim.Adam( + optim_ref = torch.optim.AdamW( model_ref.parameters(), lr, betas=betas, @@ -260,32 +312,44 @@ def test_adam_accelerated_cpu( optim_ref.step() helpers.assert_model_all_close((params, buffers), model_ref, model_base, dtype=dtype) + _set_use_chain_flat(True) @pytest.mark.skipif(not torch.cuda.is_available(), reason='No CUDA device available.') @helpers.parametrize( dtype=[torch.float64], lr=[1e-2, 1e-3, 1e-4], + optimizers=[ + (torchopt.adam, torch.optim.Adam), + (torchopt.adamw, torch.optim.AdamW), + ], betas=[(0.9, 0.999), (0.95, 0.9995)], eps=[1e-8], inplace=[True, False], weight_decay=[0.0, 1e-2], maximize=[False, True], + use_chain_flat=[True, False], ) def test_adam_accelerated_cuda( dtype: torch.dtype, lr: float, + optimizers: Tuple[Callable, torch.optim.Optimizer], betas: Tuple[float, float], eps: float, inplace: bool, 
weight_decay: float, maximize: bool, + use_chain_flat: bool, ) -> None: + _set_use_chain_flat(use_chain_flat) + device = 'cuda' model, model_ref, model_base, loader = helpers.get_models(device=device, dtype=dtype) + torchopt_optimizer, torch_optimizer = optimizers + fmodel, params, buffers = functorch.make_functional_with_buffers(model) - optim = torchopt.adam( + optim = torchopt_optimizer( lr, betas=betas, eps=eps, @@ -295,7 +359,7 @@ def test_adam_accelerated_cuda( use_accelerated_op=True, ) optim_state = optim.init(params) - optim_ref = torch.optim.Adam( + optim_ref = torch_optimizer( model_ref.parameters(), lr, betas=betas, @@ -322,6 +386,7 @@ def test_adam_accelerated_cuda( optim_ref.step() helpers.assert_model_all_close((params, buffers), model_ref, model_base, dtype=dtype) + _set_use_chain_flat(True) @helpers.parametrize( @@ -333,6 +398,7 @@ def test_adam_accelerated_cuda( centered=[False, True], weight_decay=[0.0, 1e-2], inplace=[True, False], + use_chain_flat=[True, False], ) def test_rmsprop( dtype: torch.dtype, @@ -343,7 +409,10 @@ def test_rmsprop( centered: bool, weight_decay: float, inplace: bool, + use_chain_flat: bool, ) -> None: + _set_use_chain_flat(use_chain_flat) + model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) fmodel, params, buffers = functorch.make_functional_with_buffers(model) @@ -383,3 +452,4 @@ def test_rmsprop( optim_ref.step() helpers.assert_model_all_close((params, buffers), model_ref, model_base, dtype=dtype) + _set_use_chain_flat(True) diff --git a/tests/test_clip.py b/tests/test_clip.py index f8d3b289..0b191cfe 100644 --- a/tests/test_clip.py +++ b/tests/test_clip.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -20,6 +20,7 @@ import helpers import torchopt +from torchopt.alias.utils import _set_use_chain_flat @helpers.parametrize( @@ -31,6 +32,7 @@ nesterov=[False, True], weight_decay=[0.0, 1e-2], maximize=[False, True], + use_chain_flat=[True, False], ) def test_sgd( dtype: torch.dtype, @@ -41,10 +43,13 @@ def test_sgd( nesterov: bool, weight_decay: float, maximize: bool, + use_chain_flat: bool, ) -> None: if nesterov and (momentum <= 0.0 or dampening != 0.0): pytest.skip('Nesterov momentum requires a momentum and zero dampening.') + _set_use_chain_flat(use_chain_flat) + model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) chain = torchopt.chain( @@ -86,3 +91,4 @@ def test_sgd( optim_ref.step() helpers.assert_model_all_close(model, model_ref, model_base, dtype=dtype) + _set_use_chain_flat(True) diff --git a/tests/test_combine.py b/tests/test_combine.py new file mode 100644 index 00000000..ad018d21 --- /dev/null +++ b/tests/test_combine.py @@ -0,0 +1,51 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# ============================================================================== + +import torchopt +from torchopt.alias.utils import _set_use_chain_flat + + +def test_chain() -> None: + assert torchopt.chain() == torchopt.base.identity() + assert torchopt.chain(torchopt.base.identity()) == torchopt.base.identity() + assert ( + torchopt.chain(torchopt.base.identity(), torchopt.base.identity()) + == torchopt.base.identity() + ) + assert torchopt.base.identity().chain(torchopt.base.identity()) == torchopt.base.identity() + assert isinstance(torchopt.base.identity(), torchopt.base.IdentityGradientTransformation) + assert isinstance( + torchopt.base.identity().chain(torchopt.base.identity()), + torchopt.base.ChainedGradientTransformation, + ) + + _set_use_chain_flat(False) + adam = torchopt.adam() + assert isinstance(adam, torchopt.base.ChainedGradientTransformation) + assert isinstance( + adam.chain(torchopt.base.identity()), torchopt.base.ChainedGradientTransformation + ) + assert adam.chain(torchopt.base.identity()) == adam + assert torchopt.base.identity().chain(adam) == adam + assert torchopt.chain(torchopt.base.identity(), adam, torchopt.base.identity()) == adam + _set_use_chain_flat(True) + + assert isinstance(adam, torchopt.base.GradientTransformation) + assert isinstance( + adam.chain(torchopt.base.identity()), torchopt.base.ChainedGradientTransformation + ) + assert adam.chain(torchopt.base.identity()) == adam + assert torchopt.base.identity().chain(adam) == adam + assert torchopt.chain(torchopt.base.identity(), adam, torchopt.base.identity()) == adam diff --git a/tests/test_hook.py b/tests/test_hook.py new file mode 100644 index 00000000..1f3024c7 --- /dev/null +++ b/tests/test_hook.py @@ -0,0 +1,38 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================== + +import torch + +import torchopt +from torchopt import pytree + + +def test_nan_to_num_hook() -> None: + nan = torch.tensor(torch.nan) + inf = torch.tensor(torch.inf) + ninf = torch.tensor(-torch.inf) + hook = torchopt.hook.nan_to_num_hook(0.0, 1.0, -1.0) + result = pytree.tree_map(hook, [nan, inf, ninf]) + assert torch.equal(result[0], torch.tensor(0.0)) + assert torch.equal(result[1], torch.tensor(1.0)) + assert torch.equal(result[2], torch.tensor(-1.0)) + + +def test_zero_nan_hook() -> None: + tensor = torch.tensor(1.0, requires_grad=True) + hook = torchopt.hook.zero_nan_hook + fn = torchopt.register_hook(hook) + fn.update(tensor, None) + assert tensor._backward_hooks[0] is hook diff --git a/tests/test_implicit.py b/tests/test_implicit.py index ac61b3be..ce0ee23b 100644 --- a/tests/test_implicit.py +++ b/tests/test_implicit.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -230,8 +230,7 @@ def outer_level(p, xs, ys): nn.Parameter(torch.tensor(np.asarray(jax_params[j]), dtype=dtype)) for j in jax_params ) - for p, p_ref in zip(params, jax_params_as_tensor): - helpers.assert_all_close(p, p_ref) + helpers.assert_pytree_all_close(params, jax_params_as_tensor) @helpers.parametrize( @@ -358,8 +357,7 @@ def outer_level(p, xs, ys): nn.Parameter(torch.tensor(np.asarray(jax_params[j]), dtype=dtype)) for j in jax_params ) - for p, p_ref in zip(params, jax_params_as_tensor): - helpers.assert_all_close(p, p_ref) + helpers.assert_pytree_all_close(params, jax_params_as_tensor) @helpers.parametrize( @@ -470,8 +468,7 @@ def outer_level(p, xs, ys): nn.Parameter(torch.tensor(np.asarray(jax_params[j]), dtype=dtype)) for j in jax_params ) - for p, p_ref in zip(model.parameters(), jax_params_as_tensor): - helpers.assert_all_close(p, p_ref) + helpers.assert_pytree_all_close(tuple(model.parameters()), jax_params_as_tensor) @helpers.parametrize( diff --git a/tests/test_import.py b/tests/test_import.py new file mode 100644 index 00000000..30cf914e --- /dev/null +++ b/tests/test_import.py @@ -0,0 +1,365 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================== + +import torchopt + + +def test_accelerated_op_import() -> None: + torchopt.accelerated_op.adam_op.AdamOp + torchopt.accelerated_op.is_available + torchopt.accelerated_op_available + from torchopt.accelerated_op import is_available + from torchopt.accelerated_op.adam_op import AdamOp + + +def test_alias_import() -> None: + torchopt.adam + torchopt.adamw + torchopt.rmsprop + torchopt.sgd + torchopt.alias.adam + torchopt.alias.adamw + torchopt.alias.rmsprop + torchopt.alias.sgd + from torchopt import adam, adamw, rmsprop, sgd + from torchopt.alias import adam, adamw, rmsprop, sgd + + +def test_diff_import() -> None: + torchopt.diff.implicit + torchopt.diff.implicit.custom_root + torchopt.diff.implicit.ImplicitMetaGradientModule + torchopt.diff.implicit.nn.ImplicitMetaGradientModule + torchopt.diff.zero_order + torchopt.diff.zero_order.zero_order + torchopt.diff.zero_order.ZeroOrderGradientModule + torchopt.diff.zero_order.nn.ZeroOrderGradientModule + from torchopt.diff import implicit, zero_order + from torchopt.diff.implicit import ImplicitMetaGradientModule, custom_root + from torchopt.diff.zero_order import ZeroOrderGradientModule, zero_order + + +def test_distributed_import() -> None: + torchopt.distributed.api + torchopt.distributed.autograd + torchopt.distributed.world + torchopt.distributed.is_available + torchopt.distributed.TensorDimensionPartitioner + torchopt.distributed.dim_partitioner + torchopt.distributed.batch_partitioner + torchopt.distributed.mean_reducer + torchopt.distributed.sum_reducer + torchopt.distributed.remote_async_call + torchopt.distributed.remote_sync_call + torchopt.distributed.parallelize + torchopt.distributed.parallelize_async + torchopt.distributed.parallelize_sync 
+ torchopt.distributed.get_world_info + torchopt.distributed.get_world_rank + torchopt.distributed.get_rank + torchopt.distributed.get_world_size + torchopt.distributed.get_local_rank + torchopt.distributed.get_local_world_size + torchopt.distributed.get_worker_id + torchopt.distributed.barrier + torchopt.distributed.auto_init_rpc + torchopt.distributed.on_rank + torchopt.distributed.not_on_rank + torchopt.distributed.rank_zero_only + torchopt.distributed.rank_non_zero_only + torchopt.distributed.autograd.is_available + torchopt.distributed.autograd.context + from torchopt.distributed import api, autograd, world + + +def test_linalg_import() -> None: + torchopt.linalg.cg + torchopt.linalg.ns + torchopt.linalg.ns_inv + from torchopt.linalg import cg, ns, ns_inv + + +def test_linear_solve_import() -> None: + torchopt.linear_solve.solve_cg + torchopt.linear_solve.solve_inv + torchopt.linear_solve.solve_normal_cg + from torchopt.linear_solve import solve_cg, solve_inv, solve_normal_cg + + +def test_nn_import() -> None: + torchopt.nn.MetaGradientModule + torchopt.nn.ImplicitMetaGradientModule + torchopt.nn.ZeroOrderGradientModule + from torchopt.nn import ImplicitMetaGradientModule, MetaGradientModule, ZeroOrderGradientModule + + +def test_optim_import() -> None: + torchopt.FuncOptimizer + torchopt.MetaAdam + torchopt.MetaAdamW + torchopt.MetaRMSProp + torchopt.MetaRMSprop + torchopt.MetaSGD + torchopt.Adam + torchopt.AdamW + torchopt.Optimizer + torchopt.RMSProp + torchopt.RMSprop + torchopt.SGD + torchopt.optim.meta.MetaAdam + torchopt.optim.meta.MetaAdamW + torchopt.optim.meta.MetaRMSProp + torchopt.optim.meta.MetaRMSprop + torchopt.optim.meta.MetaSGD + torchopt.optim.Adam + torchopt.optim.AdamW + torchopt.optim.Optimizer + torchopt.optim.RMSProp + torchopt.optim.RMSprop + torchopt.optim.SGD + torchopt.optim.func.FuncOptimizer + from torchopt import ( + SGD, + Adam, + AdamW, + FuncOptimizer, + MetaAdam, + MetaAdamW, + MetaOptimizer, + MetaRMSProp, + MetaRMSprop, + MetaSGD, + Optimizer, + RMSProp, + ) + from torchopt.optim import SGD, Adam, AdamW, FuncOptimizer, Optimizer, RMSProp + from torchopt.optim.func import FuncOptimizer + from torchopt.optim.meta import ( + MetaAdam, + MetaAdamW, + MetaOptimizer, + MetaRMSProp, + MetaRMSprop, + MetaSGD, + ) + + +def test_schedule_import() -> None: + torchopt.schedule.linear_schedule + torchopt.schedule.polynomial_schedule + from torchopt.schedule import linear_schedule, polynomial_schedule + + +def test_transform_import() -> None: + torchopt.transform.add_decayed_weights + torchopt.transform.scale + torchopt.transform.scale_by_accelerated_adam + torchopt.transform.scale_by_adam + torchopt.transform.scale_by_rms + torchopt.transform.scale_by_schedule + torchopt.transform.scale_by_stddev + torchopt.transform.trace + torchopt.transform.nan_to_num + torchopt.nan_to_num + from torchopt import nan_to_num + from torchopt.transform import ( + add_decayed_weights, + nan_to_num, + scale, + scale_by_accelerated_adam, + scale_by_adam, + scale_by_rms, + scale_by_schedule, + scale_by_stddev, + trace, + ) + + +def test_base_import() -> None: + torchopt.base.EmptyState + torchopt.base.GradientTransformation + torchopt.base.ChainedGradientTransformation + torchopt.base.identity + from torchopt.base import ( + ChainedGradientTransformation, + EmptyState, + GradientTransformation, + identity, + ) + + +def test_clip_import() -> None: + torchopt.clip_grad_norm + torchopt.clip.clip_grad_norm + from torchopt import clip_grad_norm + from torchopt.clip import clip_grad_norm + 
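+# `torchopt.chain` is both a callable and a namespace: the flattened variant
+# is exposed as the `chain.flat` attribute, so the bare attribute accesses
+# below also verify that these re-exports stay intact.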
+ +def test_combine_import() -> None: + torchopt.chain + torchopt.chain.flat + torchopt.combine.chain + torchopt.combine.chain.flat + torchopt.combine.chain_flat + from torchopt import chain + from torchopt.combine import chain, chain_flat + + +def test_hook_import() -> None: + torchopt.register_hook + torchopt.hook.register_hook + torchopt.hook.zero_nan_hook + torchopt.hook.nan_to_num_hook + from torchopt import register_hook + from torchopt.hook import nan_to_num_hook, register_hook, zero_nan_hook + + +def test_pytree_import() -> None: + torchopt.pytree.tree_flatten_as_tuple + torchopt.pytree.tree_pos + torchopt.pytree.tree_neg + torchopt.pytree.tree_add + torchopt.pytree.tree_add_scalar_mul + torchopt.pytree.tree_sub + torchopt.pytree.tree_sub_scalar_mul + torchopt.pytree.tree_mul + torchopt.pytree.tree_matmul + torchopt.pytree.tree_scalar_mul + torchopt.pytree.tree_truediv + torchopt.pytree.tree_vdot_real + torchopt.pytree.tree_wait + from torchopt.pytree import ( + tree_add, + tree_add_scalar_mul, + tree_flatten_as_tuple, + tree_matmul, + tree_mul, + tree_neg, + tree_pos, + tree_scalar_mul, + tree_sub, + tree_sub_scalar_mul, + tree_truediv, + tree_vdot_real, + tree_wait, + ) + + +def test_typing_import() -> None: + torchopt.typing.GradientTransformation + torchopt.typing.ChainedGradientTransformation + torchopt.typing.EmptyState + torchopt.typing.UninitializedState + torchopt.typing.Params + torchopt.typing.Updates + torchopt.typing.OptState + torchopt.typing.Scalar + torchopt.typing.Numeric + torchopt.typing.Schedule + torchopt.typing.ScalarOrSchedule + torchopt.typing.PyTree + torchopt.typing.Tensor + torchopt.typing.OptionalTensor + torchopt.typing.ListOfTensors + torchopt.typing.TupleOfTensors + torchopt.typing.SequenceOfTensors + torchopt.typing.TensorOrTensors + torchopt.typing.TensorTree + torchopt.typing.ListOfOptionalTensors + torchopt.typing.TupleOfOptionalTensors + torchopt.typing.SequenceOfOptionalTensors + torchopt.typing.OptionalTensorOrOptionalTensors + torchopt.typing.OptionalTensorTree + torchopt.typing.TensorContainer + torchopt.typing.ModuleTensorContainers + torchopt.typing.Future + torchopt.typing.LinearSolver + torchopt.typing.Device + torchopt.typing.Size + torchopt.typing.Distribution + torchopt.typing.SampleFunc + torchopt.typing.Samplable + from torchopt.typing import ( + ChainedGradientTransformation, + Device, + Distribution, + EmptyState, + Future, + GradientTransformation, + LinearSolver, + ListOfOptionalTensors, + ListOfTensors, + ModuleTensorContainers, + Numeric, + OptionalTensor, + OptionalTensorOrOptionalTensors, + OptionalTensorTree, + OptState, + Params, + PyTree, + Samplable, + SampleFunc, + Scalar, + ScalarOrSchedule, + Schedule, + SequenceOfOptionalTensors, + SequenceOfTensors, + Size, + Tensor, + TensorContainer, + TensorOrTensors, + TensorTree, + TupleOfOptionalTensors, + TupleOfTensors, + UninitializedState, + Updates, + ) + + +def test_update_import() -> None: + torchopt.apply_updates + torchopt.update.apply_updates + from torchopt import apply_updates + from torchopt.update import apply_updates + + +def test_utils_import() -> None: + torchopt.utils.ModuleState + torchopt.utils.stop_gradient + torchopt.utils.extract_state_dict + torchopt.utils.recover_state_dict + torchopt.utils.module_clone + torchopt.utils.module_detach_ + from torchopt.utils import ( + ModuleState, + extract_state_dict, + module_clone, + module_detach_, + recover_state_dict, + stop_gradient, + ) + + +def test_version_import() -> None: + torchopt.__version__ + 
torchopt.version.__version__ + from torchopt import __version__ + from torchopt.version import __version__ + + +def test_visual_import() -> None: + torchopt.visual.make_dot + torchopt.visual.resize_graph + from torchopt.visual import make_dot, resize_graph diff --git a/tests/test_linalg.py b/tests/test_linalg.py new file mode 100644 index 00000000..7758b7db --- /dev/null +++ b/tests/test_linalg.py @@ -0,0 +1,27 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================== + +import torch + +import torchopt + + +def test_normalize_matvec() -> None: + A = [torch.rand(10, 10) for _ in range(10)] + x = [torch.rand(10, 1) for _ in range(10)] + AxFn = torchopt.linalg.utils.normalize_matvec(A) + Ax = AxFn(x) + for Ax_item, A_item, x_item in zip(Ax, A, x): + assert torch.equal(Ax_item, A_item @ x_item) diff --git a/tests/test_meta_optim.py b/tests/test_meta_optim.py index 5916574e..2c0966cc 100644 --- a/tests/test_meta_optim.py +++ b/tests/test_meta_optim.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -13,11 +13,78 @@ # limitations under the License. 
# ============================================================================== +from typing import Tuple + +import torch +import torch.nn.functional as F + import helpers import torchopt -def test_filter_nones_in_params(): - model = helpers.get_models()[0] +@helpers.parametrize( + dtype=[torch.float64], + outer_lr=[1e-2, 1e-3, 1e-4], + inner_lr=[1e-2, 1e-3, 1e-4], + inner_update=[2, 3, 5], + betas=[(0.9, 0.999), (0.95, 0.9995)], + eps=[1e-8], + eps_root=[0.0, 1e-8], + weight_decay=[0.0, 1e-2], + maximize=[False, True], + use_accelerated_op=[False, True], + moment_requires_grad=[True, False], +) +def test_maml_meta_adam( + dtype: torch.dtype, + outer_lr: float, + inner_lr: float, + inner_update: int, + betas: Tuple[float, float], + eps: float, + eps_root: float, + weight_decay: float, + maximize: bool, + use_accelerated_op: bool, + moment_requires_grad: bool, +) -> None: + model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) + + outer_optim = torchopt.Adam( + model.parameters(), + outer_lr, + betas=betas, + eps=eps, + eps_root=0.0, + weight_decay=weight_decay, + maximize=maximize, + use_accelerated_op=use_accelerated_op, + ) + + for xs, ys in loader: + xs = xs.to(dtype=dtype) + + inner_optim = torchopt.MetaAdam( + module=model, + lr=inner_lr, + betas=betas, + eps=eps, + eps_root=eps_root, + moment_requires_grad=moment_requires_grad, + weight_decay=weight_decay, + maximize=maximize, + use_accelerated_op=use_accelerated_op, + ) + + for _ in range(inner_update): + pred = model(xs) + inner_loss = F.cross_entropy(pred, ys) # compute loss + inner_optim.step(inner_loss) + + pred = model(xs) + outer_loss = F.cross_entropy(pred, ys) + outer_optim.zero_grad() + outer_loss.backward() + outer_optim.step() - meta_adam = torchopt.MetaAdam(model) + torchopt.stop_gradient(model) diff --git a/tests/test_nn.py b/tests/test_nn.py new file mode 100644 index 00000000..1b48c06b --- /dev/null +++ b/tests/test_nn.py @@ -0,0 +1,180 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# ============================================================================== + +import re + +import pytest +import torch +import torch.nn as nn + +import helpers +import torchopt + + +def test_property() -> None: + m = torchopt.nn.MetaGradientModule() + x = helpers.get_model() + m.add_module('x', x) + assert m.x is x + delattr(m, 'x') + assert not hasattr(m, 'x') + m.add_meta_module('x', x) + assert m.x is x + delattr(m, 'x') + assert not hasattr(m, 'x') + x = torch.tensor(1.0, requires_grad=True) + m.register_parameter('x', x) + assert m.x is x + delattr(m, 'x') + assert not hasattr(m, 'x') + x = torch.tensor(1.0, requires_grad=True) + m.register_meta_parameter('x', x) + assert m.x is x + delattr(m, 'x') + assert not hasattr(m, 'x') + m.register_buffer('x', x) + assert len(m._buffers) == 1 + assert m.x is x + delattr(m, 'x') + assert len(m._buffers) == 0 + assert not hasattr(m, 'x') + + +def test_register_tensors() -> None: + x = torch.tensor(1.0, requires_grad=True) + y = torch.tensor(1.0, requires_grad=True) + z = torch.tensor(1.0, requires_grad=False) + b = torch.tensor(1.0, requires_grad=False) + + m = torchopt.nn.MetaGradientModule() + m.register_meta_parameter('x', x) + assert m.x is x + + m = torchopt.nn.MetaGradientModule(x) + m.x = x + m.y = y + m.z = z + + assert m._meta_parameters['x'] is x + assert m._parameters['y'] is y + assert hasattr(m, 'z') and m.z is z and 'z' not in m._buffers + + del m.x + object.__setattr__(m, 'x', x) + assert hasattr(m, 'x') and m.x is x and 'x' not in m._meta_parameters + m.x = x + assert m._meta_parameters['x'] is x + + m.register_buffer('b', None) + assert m.b is None + m.b = b + assert m.b is b and 'b' in m._buffers + + +def test_no_super_init() -> None: + class NoSuper1(torchopt.nn.MetaGradientModule): + def __init__(self, x): + self.x = x + + with pytest.raises( + AttributeError, match=re.escape('cannot assign parameters before Module.__init__() call') + ): + NoSuper1(torch.tensor(1.0, requires_grad=True)) + + class NoSuper2(torchopt.nn.MetaGradientModule): + def __init__(self): + self.x = torch.tensor(1.0, requires_grad=True) + + with pytest.raises( + AttributeError, match=re.escape('cannot assign parameters before Module.__init__() call') + ): + NoSuper2() + + class NoSuper3(torchopt.nn.MetaGradientModule): + def __init__(self): + self.register_buffer('x', torch.tensor(1.0)) + + with pytest.raises( + AttributeError, match=re.escape('cannot assign buffer before Module.__init__() call') + ): + NoSuper3() + + class NoSuper4(torchopt.nn.MetaGradientModule): + def __init__(self): + self.x = torch.tensor(1.0, requires_grad=False) + + NoSuper4() # no error + + class NoSuper5(torchopt.nn.MetaGradientModule): + def __init__(self, x): + self.x = x + + with pytest.raises( + AttributeError, match=re.escape('cannot assign module before Module.__init__() call') + ): + NoSuper5(nn.Linear(1, 1)) + + class NoSuper6(torchopt.nn.MetaGradientModule): + def __init__(self): + self.x = nn.Linear(1, 1) + + with pytest.raises( + AttributeError, match=re.escape('cannot assign module before Module.__init__() call') + ): + NoSuper6() + + +def test_add_meta_module() -> None: + meta_module = helpers.get_model() + fc = nn.Linear(1, 1) + + m = torchopt.nn.MetaGradientModule(meta_module) + m.fc = fc + assert m.fc is fc + assert m._modules['fc'] is fc + + m.meta = meta_module + assert m.meta is meta_module + assert m._meta_modules['meta'] is meta_module + + assert all(p1 is p2 for p1, p2 in zip(m.parameters(), fc.parameters())) + assert all(p1 is p2 for p1, p2 in 
zip(m.meta_parameters(), meta_module.parameters())) + + m = torchopt.nn.MetaGradientModule(meta_module) + m.add_meta_module('fc', fc) + assert m.fc is fc + assert all(p1 is p2 for p1, p2 in zip(m.meta_parameters(), fc.parameters())) + + +def test_meta_module() -> None: + m = torchopt.nn.MetaGradientModule() + meta_module = torch.nn.Linear(1, 1) + m.add_meta_module('m', meta_module) + assert next(m.named_meta_modules())[1] is meta_module + assert next(m.named_meta_children())[1] is meta_module + assert next(m.meta_children()) is meta_module + assert next(m.meta_modules()) is meta_module + + +def test_add_meta_parameters() -> None: + m = torchopt.nn.MetaGradientModule() + x = torch.tensor(1.0, requires_grad=True) + m.register_meta_parameter('x', x) + assert next(m.named_meta_parameters())[1] is x + + +def test_named_modules() -> None: + m = torchopt.nn.MetaGradientModule() + assert next(m.named_modules())[1] is m diff --git a/tests/test_optim.py b/tests/test_optim.py index fe1697c9..c43bc438 100644 --- a/tests/test_optim.py +++ b/tests/test_optim.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -91,6 +91,7 @@ def test_SGD( eps=[1e-8], weight_decay=[0.0, 1e-2], maximize=[False, True], + use_accelerated_op=[False, True], ) def test_Adam( dtype: torch.dtype, @@ -99,6 +100,7 @@ def test_Adam( eps: float, weight_decay: float, maximize: bool, + use_accelerated_op: bool, ) -> None: model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) @@ -110,6 +112,7 @@ def test_Adam( eps_root=0.0, weight_decay=weight_decay, maximize=maximize, + use_accelerated_op=use_accelerated_op, ) optim_ref = torch.optim.Adam( model_ref.parameters(), @@ -146,6 +149,7 @@ def test_Adam( eps=[1e-8], weight_decay=[1e-2, 1e-1], maximize=[False, True], + use_accelerated_op=[False, True], ) def test_AdamW( dtype: torch.dtype, @@ -154,6 +158,7 @@ def test_AdamW( eps: float, weight_decay: float, maximize: bool, + use_accelerated_op: bool, ) -> None: model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) @@ -165,6 +170,7 @@ def test_AdamW( eps_root=0.0, weight_decay=weight_decay, maximize=maximize, + use_accelerated_op=use_accelerated_op, ) optim_ref = torch.optim.AdamW( model_ref.parameters(), @@ -194,66 +200,14 @@ def test_AdamW( helpers.assert_model_all_close(model, model_ref, model_base, dtype=dtype) -@helpers.parametrize( - dtype=[torch.float64], - lr=[1e-2, 1e-3, 1e-4], - betas=[(0.9, 0.999), (0.95, 0.9995)], - eps=[1e-8], - weight_decay=[0.0, 1e-2], - maximize=[False, True], -) -def test_Adam_accelerated_cpu( - dtype: torch.dtype, - lr: float, - betas: Tuple[float, float], - eps: float, - weight_decay: float, - maximize: bool, -) -> None: - model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) - - optim = torchopt.Adam( - model.parameters(), - lr, - betas=betas, - eps=eps, - eps_root=0.0, - weight_decay=weight_decay, - maximize=maximize, - use_accelerated_op=True, - ) - optim_ref = torch.optim.Adam( - model_ref.parameters(), - lr, - betas=betas, - eps=eps, - amsgrad=False, - weight_decay=weight_decay, - maximize=maximize, - ) - - for xs, ys in loader: - xs = xs.to(dtype=dtype) - pred = model(xs) - pred_ref = model_ref(xs) - loss = F.cross_entropy(pred, ys) - loss_ref = F.cross_entropy(pred_ref, ys) - - optim.zero_grad() - 
loss.backward() - optim.step() - - optim_ref.zero_grad() - loss_ref.backward() - optim_ref.step() - - helpers.assert_model_all_close(model, model_ref, model_base, dtype=dtype) - - @pytest.mark.skipif(not torch.cuda.is_available(), reason='No CUDA device available.') @helpers.parametrize( dtype=[torch.float64], lr=[1e-2, 1e-3, 1e-4], + optimizers=[ + (torchopt.Adam, torch.optim.Adam), + (torchopt.AdamW, torch.optim.AdamW), + ], betas=[(0.9, 0.999), (0.95, 0.9995)], eps=[1e-8], weight_decay=[0.0, 1e-2], @@ -262,6 +216,7 @@ def test_Adam_accelerated_cpu( def test_Adam_accelerated_cuda( dtype: torch.dtype, lr: float, + optimizers: Tuple[torchopt.Optimizer, torch.optim.Optimizer], betas: Tuple[float, float], eps: float, weight_decay: float, @@ -270,7 +225,9 @@ def test_Adam_accelerated_cuda( device = 'cuda' model, model_ref, model_base, loader = helpers.get_models(device=device, dtype=dtype) - optim = torchopt.Adam( + torchopt_optimizer, torch_optimizer = optimizers + + optim = torchopt_optimizer( model.parameters(), lr, betas=betas, @@ -280,7 +237,7 @@ def test_Adam_accelerated_cuda( maximize=maximize, use_accelerated_op=True, ) - optim_ref = torch.optim.Adam( + optim_ref = torch_optimizer( model_ref.parameters(), lr, betas=betas, diff --git a/tests/test_pytree.py b/tests/test_pytree.py new file mode 100644 index 00000000..5594e30b --- /dev/null +++ b/tests/test_pytree.py @@ -0,0 +1,214 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# ============================================================================== + +import torch + +import helpers +from torchopt import pytree + + +tree_a = (torch.randn(20, 10), torch.randn(20)) +tree_b = (torch.randn(20, 10), torch.randn(20)) + +tree_a_dict = ( + torch.tensor(1.0), + {'k1': torch.tensor(1.0), 'k2': (torch.tensor(1.0), torch.tensor(1.0))}, + torch.tensor(1.0), +) +tree_b_dict = ( + torch.tensor(1.0), + {'k1': torch.tensor(2.0), 'k2': (torch.tensor(3.0), torch.tensor(4.0))}, + torch.tensor(5.0), +) + +tensor_a = torch.randn(20) +tensor_b = torch.randn(20) + + +def test_tree_flatten_as_tuple() -> None: + expected_leaves, expected_treespec = (tensor_a,), pytree.tree_structure(tensor_a) + actual_leaves, actual_treespec = pytree.tree_flatten_as_tuple(tensor_a) + assert actual_leaves == expected_leaves + assert actual_treespec == expected_treespec + + leaves_a, treespec_a = pytree.tree_flatten(tree_a) + expected_leaves, expected_treespec = tuple(leaves_a), treespec_a + actual_leaves, actual_treespec = pytree.tree_flatten_as_tuple(tree_a) + assert actual_leaves == expected_leaves + assert actual_treespec == expected_treespec + + +def test_tree_pos() -> None: + expected = +tensor_a + actual = pytree.tree_pos(tensor_a) + helpers.assert_pytree_all_close(actual, expected) + + expected = (+tree_a[0], +tree_a[1]) + actual = pytree.tree_pos(tree_a) + helpers.assert_pytree_all_close(actual, expected) + + +def test_tree_neg() -> None: + expected = -tensor_a + actual = pytree.tree_neg(tensor_a) + helpers.assert_pytree_all_close(actual, expected) + + expected = (-tree_a[0], -tree_a[1]) + actual = pytree.tree_neg(tree_a) + helpers.assert_pytree_all_close(actual, expected) + + +def test_tree_add() -> None: + expected = tensor_a + tensor_b + actual = pytree.tree_add(tensor_a, tensor_b) + helpers.assert_pytree_all_close(actual, expected) + + expected = (tree_a[0] + tree_b[0], tree_a[1] + tree_b[1]) + actual = pytree.tree_add(tree_a, tree_b) + helpers.assert_pytree_all_close(actual, expected) + + +def test_tree_add_scalar_mul() -> None: + expected = (tree_a[0] + tree_b[0], tree_a[1] + tree_b[1]) + actual = pytree.tree_add_scalar_mul(tree_a, tree_b) + helpers.assert_pytree_all_close(actual, expected) + + expected = (tree_a[0] + 0.5 * tree_b[0], tree_a[1] + 0.5 * tree_b[1]) + actual = pytree.tree_add_scalar_mul(tree_a, tree_b, 0.5) + helpers.assert_pytree_all_close(actual, expected) + + +def test_tree_sub() -> None: + expected = tensor_a - tensor_b + actual = pytree.tree_sub(tensor_a, tensor_b) + helpers.assert_pytree_all_close(actual, expected) + + expected = (tree_a[0] - tree_b[0], tree_a[1] - tree_b[1]) + actual = pytree.tree_sub(tree_a, tree_b) + helpers.assert_pytree_all_close(actual, expected) + + +def test_tree_sub_scalar_mul() -> None: + expected = (tree_a[0] - tree_b[0], tree_a[1] - tree_b[1]) + actual = pytree.tree_sub_scalar_mul(tree_a, tree_b) + helpers.assert_pytree_all_close(actual, expected) + + expected = (tree_a[0] - 0.5 * tree_b[0], tree_a[1] - 0.5 * tree_b[1]) + actual = pytree.tree_sub_scalar_mul(tree_a, tree_b, 0.5) + helpers.assert_pytree_all_close(actual, expected) + + +def test_tree_mul() -> None: + expected = tensor_a * tensor_b + actual = pytree.tree_mul(tensor_a, tensor_b) + helpers.assert_pytree_all_close(actual, expected) + + expected = (tree_a[0] * tree_b[0], tree_a[1] * tree_b[1]) + actual = pytree.tree_mul(tree_a, tree_b) + helpers.assert_pytree_all_close(actual, expected) + + +def test_tree_matmul() -> None: + tree_a = (torch.randn(20, 10), torch.randn(20, 1)) + 
tree_b = (torch.randn(10, 20), torch.randn(1, 20)) + tensor_a = torch.randn(10, 20) + tensor_b = torch.randn(20) + expected = tensor_a @ tensor_b + actual = pytree.tree_matmul(tensor_a, tensor_b) + helpers.assert_pytree_all_close(actual, expected) + + expected = (tree_a[0] @ tree_b[0], tree_a[1] @ tree_b[1]) + actual = pytree.tree_matmul(tree_a, tree_b) + helpers.assert_pytree_all_close(actual, expected) + + +def test_tree_scalar_mul() -> None: + expected = 0.5 * tensor_a + actual = pytree.tree_scalar_mul(0.5, tensor_a) + helpers.assert_pytree_all_close(actual, expected) + + expected = (0.5 * tree_a[0], 0.5 * tree_a[1]) + actual = pytree.tree_scalar_mul(0.5, tree_a) + helpers.assert_pytree_all_close(actual, expected) + + +def test_tree_truediv() -> None: + expected = (tree_a[0] / tree_b[0], tree_a[1] / tree_b[1]) + actual = pytree.tree_truediv(tree_a, tree_b) + helpers.assert_pytree_all_close(actual, expected) + + actual = pytree.tree_truediv(tree_a_dict, tree_b_dict) + expected = ( + torch.tensor(1.0), + {'k1': torch.tensor(0.5), 'k2': (torch.tensor(1.0 / 3.0), torch.tensor(0.25))}, + torch.tensor(0.2), + ) + helpers.assert_pytree_all_close(actual, expected) + + +def test_tree_vdot_real() -> None: + expected = torch.vdot(tensor_a, tensor_b).real + actual = torch.tensor(pytree.tree_vdot_real(tensor_a, tensor_b)) + helpers.assert_pytree_all_close(actual, expected) + + expected = ( + torch.vdot(tree_a[0].contiguous().view(-1), tree_b[0].contiguous().view(-1)) + + torch.vdot(tree_a[1].contiguous().view(-1), tree_b[1].contiguous().view(-1)) + ).real + actual = torch.tensor(pytree.tree_vdot_real(tree_a, tree_b)) + helpers.assert_all_close(actual, expected) + + tensor_a_complex = torch.randn(20, dtype=torch.cfloat) + tensor_b_complex = torch.randn(20, dtype=torch.cfloat) + expected = torch.vdot(tensor_a_complex, tensor_b_complex).real + actual = torch.tensor(pytree.tree_vdot_real(tensor_a_complex, tensor_b_complex)) + helpers.assert_pytree_all_close(actual, expected) + + tree_a_complex, tree_b_complex = pytree.tree_map( + lambda x: torch.randn(x.size(), dtype=torch.cfloat), (tree_a, tree_b) + ) + expected = ( + torch.vdot(tree_a_complex[0].contiguous().view(-1), tree_b_complex[0].contiguous().view(-1)) + + torch.vdot( + tree_a_complex[1].contiguous().view(-1), tree_b_complex[1].contiguous().view(-1) + ) + ).real + actual = torch.tensor(pytree.tree_vdot_real(tree_a_complex, tree_b_complex)) + helpers.assert_all_close(actual, expected) + + +@helpers.parametrize( + tree_name=[ + 'tree_a', + 'tree_b', + 'tree_a_dict', + 'tree_b_dict', + 'tensor_a', + 'tensor_b', + ] +) +def test_tree_wait(tree_name: str) -> None: + tree = globals()[tree_name] + + future_tree = pytree.tree_map(lambda x: torch.futures.Future(), tree) + new_future_tree = pytree.tree_map( + lambda fut: fut.then(lambda f: torch.square(f.wait()) + 1.0), future_tree + ) + pytree.tree_map_(lambda fut, x: fut.set_result(x), future_tree, tree) + + expected = pytree.tree_map(lambda x: torch.square(x) + 1.0, tree) + actual = pytree.tree_wait(new_future_tree) + assert all(fut.done() for fut in pytree.tree_leaves(new_future_tree)) + helpers.assert_pytree_all_close(actual, expected) diff --git a/tests/test_schedule.py b/tests/test_schedule.py index 67e3429a..9590acf8 100644 --- a/tests/test_schedule.py +++ b/tests/test_schedule.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -22,6 +22,7 @@ import helpers import torchopt +from torchopt.alias.utils import _set_use_chain_flat def test_linear_schedule() -> None: @@ -55,6 +56,7 @@ def test_linear_schedule() -> None: ], inplace=[True, False], weight_decay=[0.0, 1e-2], + use_chain_flat=[True, False], ) def test_lr_linear_schedule( dtype: torch.dtype, @@ -63,7 +65,10 @@ def test_lr_linear_schedule( optimizers: Tuple[Callable, torch.optim.Optimizer], inplace: bool, weight_decay: float, + use_chain_flat: bool, ) -> None: + _set_use_chain_flat(use_chain_flat) + model, model_ref, model_base, loader = helpers.get_models(device='cpu', dtype=dtype) torchopt_optimizer, torch_optimizer = optimizers @@ -102,3 +107,4 @@ def test_lr_linear_schedule( torch_scheduler.step() helpers.assert_model_all_close((params, buffers), model_ref, model_base, dtype=dtype) + _set_use_chain_flat(True) diff --git a/tests/test_transform.py b/tests/test_transform.py new file mode 100644 index 00000000..4dfd034d --- /dev/null +++ b/tests/test_transform.py @@ -0,0 +1,65 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================== + +from typing import Tuple + +import functorch +import torch +import torch.nn.functional as F + +import helpers +import torchopt + + +def test_nan_to_num() -> None: + fn = torchopt.nan_to_num(0.0, 1.0, -1.0) + nan = torch.tensor(torch.nan) + inf = torch.tensor(torch.inf) + ninf = torch.tensor(-torch.inf) + updated, _ = fn.update(nan, None, inplace=False) + assert torch.equal(updated, torch.tensor(0.0)) + assert updated is not nan + + updated, _ = fn.update(inf, None, inplace=False) + assert torch.equal(updated, torch.tensor(1.0)) + assert updated is not inf + + updated, _ = fn.update(ninf, None, inplace=False) + assert torch.equal(updated, torch.tensor(-1.0)) + assert updated is not ninf + + updated, _ = fn.update(nan, None, inplace=True) + assert torch.equal(updated, torch.tensor(0.0)) + assert updated is nan + + updated, _ = fn.update(inf, None, inplace=True) + assert torch.equal(updated, torch.tensor(1.0)) + assert updated is inf + + updated, _ = fn.update(ninf, None, inplace=True) + assert torch.equal(updated, torch.tensor(-1.0)) + assert updated is ninf + + +def test_masked() -> None: + fn = torchopt.nan_to_num(0.0, 1.0, -1.0) + nan = torch.tensor(torch.nan) + updates = [nan, nan, nan] + + masked_fn = torchopt.transform.masked(fn, [True, False, True]) + state = masked_fn.init(updates) + + updates, _ = masked_fn.update(updates, state) + assert nan is updates[1] diff --git a/tests/test_utils.py b/tests/test_utils.py new file mode 100644 index 00000000..0c80cec0 --- /dev/null +++ b/tests/test_utils.py @@ -0,0 +1,140 @@ +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================== + +import torch + +import torchopt +from torchopt import pytree + + +def test_stop_gradient() -> None: + x = torch.tensor(1.0, requires_grad=True) + y = 2 * x + assert y.grad_fn is not None + torchopt.stop_gradient(y) + assert y.grad_fn is None + fc = torch.nn.Linear(1, 1, False) + fc._parameters['weight'] = fc.weight * 2 + assert fc.weight.grad_fn is not None + torchopt.stop_gradient(fc) + assert fc.weight.grad_fn is None + + +def test_module_clone() -> None: + x = torch.tensor(1.0, requires_grad=True) + y = 2 * x + assert y.grad_fn is not None + z = torchopt.module_clone(y, by='reference') + assert z is y + z = torchopt.module_clone(x, by='copy') + assert z is not x + assert z.grad_fn.next_functions[0][0].variable is x + + z = torchopt.module_clone(y, by='deepcopy') + assert z is not y + assert z.grad_fn is None + assert torch.equal(z, y) + + x = torch.tensor(1.0, requires_grad=True) + y = torchopt.module_clone(x, by='reference', device='meta') + assert y.grad_fn.next_functions[0][0].variable is x + assert y.is_meta + + y = torchopt.module_clone(x, by='copy', device='meta') + assert y is not x + assert y.grad_fn.next_functions[0][0].next_functions[0][0].variable is x + assert y.is_meta + + y = torchopt.module_clone(x, by='deepcopy', device='meta') + assert y is not x + assert y.grad_fn is None + assert y.is_meta + + if torch.cuda.is_available(): + x = torch.tensor(1.0, requires_grad=True) + y = torchopt.module_clone(x, by='reference', device='cuda') + assert y.grad_fn.next_functions[0][0].variable is x + assert y.is_cuda + + y = torchopt.module_clone(x, by='copy', device='cuda') + assert y is not x + assert y.grad_fn.next_functions[0][0].next_functions[0][0].variable is x + assert y.is_cuda + + y = torchopt.module_clone(x, by='deepcopy', device='cuda') + assert y is not x + assert y.grad_fn is None + assert torch.equal(y.to(x.device), x) + assert y.is_cuda + + +def test_extract_state_dict(): + fc = torch.nn.Linear(1, 1) + state_dict = torchopt.extract_state_dict(fc, by='reference', device=torch.device('meta')) + for param_dict in state_dict.params: + for k, v in param_dict.items(): + assert v.is_meta + assert v.grad_fn.next_functions[0][0].variable is fc._parameters[k] + + state_dict = torchopt.extract_state_dict(fc, by='copy', device=torch.device('meta')) + for param_dict in state_dict.params: + for k, v in param_dict.items(): + assert v.is_meta + assert v.grad_fn.next_functions[0][0].next_functions[0][0].variable is fc._parameters[k] + + state_dict = torchopt.extract_state_dict(fc, by='deepcopy', device=torch.device('meta')) + for param_dict in state_dict.params: + for k, v in param_dict.items(): + assert v.is_meta + assert v.grad_fn is None + + state_dict = torchopt.extract_state_dict(fc, by='reference') + for param_dict in state_dict.params: + for k, v in param_dict.items(): + assert v is fc._parameters[k] + + state_dict = torchopt.extract_state_dict(fc, 
by='copy') + for param_dict in state_dict.params: + for k, v in param_dict.items(): + assert torch.equal(v, fc._parameters[k]) + assert v.grad_fn.next_functions[0][0].variable is fc._parameters[k] + + state_dict = torchopt.extract_state_dict(fc, by='deepcopy') + for param_dict in state_dict.params: + for k, v in param_dict.items(): + assert torch.equal(v, fc._parameters[k]) + assert v.grad_fn is None + + optim = torchopt.MetaAdam(fc, 1.0) + loss = fc(torch.ones(1, 1)).sum() + optim.step(loss) + state_dict = torchopt.extract_state_dict(optim) + same = pytree.tree_map(lambda x, y: x is y, state_dict, tuple(optim.state_groups)) + assert all(pytree.tree_flatten(same)[0]) + + +def test_stop_gradient_for_state_dict() -> None: + fc = torch.nn.Linear(1, 1) + + state_dict = torchopt.extract_state_dict(fc, by='copy') + for param_dict in state_dict.params: + for k, v in param_dict.items(): + assert v.grad_fn.next_functions[0][0].variable is fc._parameters[k] + + torchopt.stop_gradient(state_dict) + for param_dict in state_dict.params: + for k, v in param_dict.items(): + assert v.grad_fn is None + assert torch.equal(v, fc._parameters[k]) diff --git a/tests/test_zero_order.py b/tests/test_zero_order.py index a2e2c1f7..ac7ae840 100644 --- a/tests/test_zero_order.py +++ b/tests/test_zero_order.py @@ -54,7 +54,7 @@ def test_zero_order(lr: float, method: str, sigma: float) -> None: fmodel, params = functorch.make_functional(model) x = torch.randn(batch_size, input_size) * coef - y = torch.randn(input_size) * coef + y = torch.randn(batch_size, 1) * coef distribution = torch.distributions.Normal(loc=0, scale=1) @torchopt.diff.zero_order( @@ -106,7 +106,7 @@ def sample(self, sample_shape=torch.Size()): return self.distribution.sample(sample_shape) x = torch.randn(batch_size, input_size) * coef - y = torch.randn(input_size) * coef + y = torch.randn(batch_size, 1) * coef model_with_loss = FcNetWithLoss(input_size, output_size) optimizer = torchopt.Adam(model_with_loss.parameters(), lr=lr) diff --git a/torchopt/_C/adam_op.pyi b/torchopt/_C/adam_op.pyi index bf7b72bf..bc3e8ebc 100644 --- a/torchopt/_C/adam_op.pyi +++ b/torchopt/_C/adam_op.pyi @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -53,5 +53,6 @@ def backward_updates( new_nu: torch.Tensor, b1: float, b2: float, + eps_root: float, count: int, ) -> Tuple[torch.Tensor, torch.Tensor]: ... diff --git a/torchopt/__init__.py b/torchopt/__init__.py index 38c00a79..0c36ac07 100644 --- a/torchopt/__init__.py +++ b/torchopt/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -15,13 +15,18 @@ """TorchOpt: a high-performance optimizer library built upon PyTorch.""" from torchopt import ( + accelerated_op, + alias, + base, clip, combine, diff, distributed, hook, + linalg, linear_solve, nn, + optim, pytree, schedule, typing, diff --git a/torchopt/accelerated_op/__init__.py b/torchopt/accelerated_op/__init__.py index 874174f2..003a8a9f 100644 --- a/torchopt/accelerated_op/__init__.py +++ b/torchopt/accelerated_op/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. 
All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -19,11 +19,10 @@ import torch from torchopt.accelerated_op.adam_op import AdamOp +from torchopt.typing import Device -def is_available( - devices: Optional[Union[int, str, torch.device, Iterable[Union[int, str, torch.device]]]] = None -) -> bool: +def is_available(devices: Optional[Union[Device, Iterable[Device]]] = None) -> bool: """Check the availability of accelerated optimizer.""" op = AdamOp() @@ -42,5 +41,5 @@ def is_available( updates = torch.tensor(1.0, device=device) op(updates, updates, updates, 1) return True - except BaseException: # pylint: disable=broad-except + except Exception: # pylint: disable=broad-except return False diff --git a/torchopt/accelerated_op/_src/adam_op.py b/torchopt/accelerated_op/_src/adam_op.py index 65752446..9f801b8d 100644 --- a/torchopt/accelerated_op/_src/adam_op.py +++ b/torchopt/accelerated_op/_src/adam_op.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -32,27 +32,32 @@ def forward_( count: int, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: """Adam forward inplace.""" - inv_one_minus_pow_b1 = 1.0 / (1.0 - pow(b1, count)) - inv_one_minus_pow_b2 = 1.0 / (1.0 - pow(b2, count)) - mu = mu.mul_(b1).add_(updates, alpha=1.0 - b1) - nu = nu.mul_(b2).add_(updates.square(), alpha=1.0 - b2) + nu = nu.mul_(b2).addcmul_(updates, updates, value=1.0 - b2) updates.copy_( - mu.mul(inv_one_minus_pow_b1).div_( - nu.mul(inv_one_minus_pow_b2).add_(eps_root).sqrt_().add_(eps) + mu.div(1.0 - pow(b1, count)).div_( + nu.div(1.0 - pow(b2, count)).add_(eps_root).sqrt_().add_(eps) ) ) return updates, mu, nu -def forward_mu(updates: torch.Tensor, mu: torch.Tensor, b1: float) -> torch.Tensor: +def forward_mu( + updates: torch.Tensor, + mu: torch.Tensor, + b1: float, +) -> torch.Tensor: """Adam forward mu.""" return mu.mul(b1).add_(updates, alpha=1.0 - b1) -def forward_nu(updates: torch.Tensor, nu: torch.Tensor, b2: float) -> torch.Tensor: +def forward_nu( + updates: torch.Tensor, + nu: torch.Tensor, + b2: float, +) -> torch.Tensor: """Adam forward nu.""" - return nu.mul(b2).add_(updates.square(), alpha=1.0 - b2) + return nu.mul(b2).addcmul_(updates, updates, value=1.0 - b2) def forward_updates( @@ -65,15 +70,16 @@ def forward_updates( count: int, ) -> torch.Tensor: """Adam forward updates.""" - inv_one_minus_pow_b1 = 1.0 / (1.0 - pow(b1, count)) - inv_one_minus_pow_b2 = 1.0 / (1.0 - pow(b2, count)) - return new_mu.mul(inv_one_minus_pow_b1).div_( - new_nu.mul(inv_one_minus_pow_b2).add_(eps_root).sqrt_().add_(eps) + return new_mu.div(1.0 - pow(b1, count)).div_( + new_nu.div(1.0 - pow(b2, count)).add_(eps_root).sqrt_().add_(eps) ) def backward_mu( - dmu: torch.Tensor, updates: torch.Tensor, mu: torch.Tensor, b1: float + dmu: torch.Tensor, + updates: torch.Tensor, + mu: torch.Tensor, + b1: float, ) -> Tuple[torch.Tensor, torch.Tensor]: """Adam backward mu.""" dupdates = dmu.mul(1.0 - b1) @@ -82,7 +88,10 @@ def backward_mu( def backward_nu( - dnu: torch.Tensor, updates: torch.Tensor, nu: torch.Tensor, b2: float + dnu: torch.Tensor, + updates: torch.Tensor, + nu: torch.Tensor, + b2: float, ) -> Tuple[torch.Tensor, torch.Tensor]: """Adam backward nu.""" dupdates = updates.mul(dnu).mul_(2.0 * (1.0 - b2)) 
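+    # The 2.0 * (1.0 - b2) factor comes from differentiating
+    # nu = b2 * nu + (1 - b2) * updates**2 with respect to `updates`.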
@@ -97,17 +106,18 @@ def backward_updates( new_nu: torch.Tensor, b1: float, b2: float, + eps_root: float, count: int, ) -> Tuple[torch.Tensor, torch.Tensor]: """Adam backward updates.""" one_minus_pow_b1 = 1.0 - pow(b1, count) - inv_one_minus_pow_b2 = 1.0 / (1.0 - pow(b2, count)) + inv_one_minus_pow_b2 = 1.0 / (1.0 - pow(b2, count) + eps_root) updates_div_new_mu = updates.div(new_mu) - denominator = updates_div_new_mu.mul_(one_minus_pow_b1) dnew_mu_out = dupdates.mul(updates_div_new_mu) + denominator = updates_div_new_mu.mul_(one_minus_pow_b1) dnew_nu_out = ( - dupdates.mul(updates).mul_(denominator.square_()).mul_(-0.5 * inv_one_minus_pow_b2) + denominator.square_().mul_(dupdates).mul_(updates).mul_(-0.5 * inv_one_minus_pow_b2) ) mask = new_mu == 0 diff --git a/torchopt/accelerated_op/adam_op.py b/torchopt/accelerated_op/adam_op.py index 9c19cd6a..6b93bf18 100644 --- a/torchopt/accelerated_op/adam_op.py +++ b/torchopt/accelerated_op/adam_op.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -16,6 +16,7 @@ # pylint: disable=c-extension-no-member,invalid-name +import contextlib from typing import Any, Optional, Tuple import torch @@ -104,8 +105,10 @@ def backward(ctx: Any, *args: Any) -> Any: """Define a formula for differentiating the operation with backward mode automatic differentiation (alias to the :meth:`vjp` function).""" dupdates = args[0] updates, new_mu, new_nu = ctx.saved_tensors - b1, b2, _, _, count = ctx.others - result = adam_op.backward_updates(dupdates, updates, new_mu, new_nu, b1, b2, count) + b1, b2, _, eps_root, count = ctx.others + result = adam_op.backward_updates( + dupdates, updates, new_mu, new_nu, b1, b2, eps_root, count + ) return result[0], result[1], None # pylint: disable-next=too-many-arguments @@ -126,24 +129,44 @@ def __init__( self.inplace = inplace def __call__( - self, mu: torch.Tensor, nu: torch.Tensor, updates: Optional[torch.Tensor], count: int + self, + mu: torch.Tensor, + nu: torch.Tensor, + updates: Optional[torch.Tensor], + count: int, ) -> Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]: """Apply the Adam operator.""" if updates is None: return mu, nu, None - if updates.is_cuda: - current_device = torch.cuda.current_device() - torch.cuda.set_device(updates.device) - if self.inplace: - new_updates, new_mu, new_nu = adam_op.forward_( - updates, mu, nu, self.b1, self.b2, self.eps, self.eps_root, count - ) - else: - new_mu = self.MuOp.apply(updates, mu, self.b1) - new_nu = self.NuOp.apply(updates, nu, self.b2) - new_updates = self.UpdatesOp.apply( - new_mu, new_nu, (self.b1, self.b2, self.eps, self.eps_root, count) - ) - if updates.is_cuda: - torch.cuda.set_device(current_device) + device_context = ( + torch.cuda.device(torch.cuda.current_device()) + if updates.is_cuda + else contextlib.nullcontext() + ) + with device_context: # type: ignore[attr-defined] + if self.inplace: + new_updates, new_mu, new_nu = adam_op.forward_( + updates, + mu, + nu, + self.b1, + self.b2, + self.eps, + self.eps_root, + count, + ) + else: + new_mu = self.MuOp.apply(updates, mu, self.b1) + new_nu = self.NuOp.apply(updates, nu, self.b2) + new_updates = self.UpdatesOp.apply( + new_mu, + new_nu, + ( + self.b1, + self.b2, + self.eps, + self.eps_root, + count, + ), + ) return new_mu, new_nu, new_updates diff --git a/torchopt/alias/adam.py 
b/torchopt/alias/adam.py index 471e58b0..a7f90a79 100644 --- a/torchopt/alias/adam.py +++ b/torchopt/alias/adam.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -33,8 +33,12 @@ from typing import Tuple -from torchopt.alias.utils import flip_sign_and_add_weight_decay, scale_by_neg_lr -from torchopt.combine import chain_flat +from torchopt.alias.utils import ( + _get_use_chain_flat, + flip_sign_and_add_weight_decay, + scale_by_neg_lr, +) +from torchopt.combine import chain from torchopt.transform import scale_by_accelerated_adam, scale_by_adam from torchopt.typing import GradientTransformation, ScalarOrSchedule @@ -93,31 +97,40 @@ def adam( """ b1, b2 = betas # pylint: disable=invalid-name # pylint: disable=unneeded-not - if not (callable(lr) or 0.0 <= lr): + if not (callable(lr) or 0.0 <= lr): # pragma: no cover raise ValueError(f'Invalid learning rate: {lr}') - if not 0.0 <= eps: + if not 0.0 <= eps: # pragma: no cover raise ValueError(f'Invalid epsilon value: {eps}') - if not 0.0 <= b1 < 1.0: + if not 0.0 <= b1 < 1.0: # pragma: no cover raise ValueError(f'Invalid beta parameter at index 0: {b1}') - if not 0.0 <= b2 < 1.0: + if not 0.0 <= b2 < 1.0: # pragma: no cover raise ValueError(f'Invalid beta parameter at index 1: {b2}') - if not 0.0 <= weight_decay: + if not 0.0 <= weight_decay: # pragma: no cover raise ValueError(f'Invalid weight_decay value: {weight_decay}') # pylint: enable=unneeded-not + chain_fn = chain + flip_sign_and_add_weight_decay_fn = flip_sign_and_add_weight_decay if use_accelerated_op: - adam_scaler = scale_by_accelerated_adam.flat # type: ignore[attr-defined] + adam_scaler_fn = scale_by_accelerated_adam else: - adam_scaler = scale_by_adam.flat # type: ignore[attr-defined] + adam_scaler_fn = scale_by_adam + scale_by_neg_lr_fn = scale_by_neg_lr - return chain_flat( - flip_sign_and_add_weight_decay(weight_decay=weight_decay, maximize=maximize), - adam_scaler( + if _get_use_chain_flat(): # default behavior + chain_fn = chain_fn.flat # type: ignore[attr-defined] + flip_sign_and_add_weight_decay_fn = flip_sign_and_add_weight_decay_fn.flat # type: ignore[attr-defined] + adam_scaler_fn = adam_scaler_fn.flat # type: ignore[attr-defined] + scale_by_neg_lr_fn = scale_by_neg_lr_fn.flat # type: ignore[attr-defined] + + return chain_fn( + flip_sign_and_add_weight_decay_fn(weight_decay=weight_decay, maximize=maximize), + adam_scaler_fn( b1=b1, b2=b2, eps=eps, eps_root=eps_root, moment_requires_grad=moment_requires_grad, ), - scale_by_neg_lr(lr), + scale_by_neg_lr_fn(lr), ) diff --git a/torchopt/alias/adamw.py b/torchopt/alias/adamw.py index 6cd12662..9aecc8ee 100644 --- a/torchopt/alias/adamw.py +++ b/torchopt/alias/adamw.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -33,8 +33,12 @@ from typing import Any, Callable, Optional, Tuple, Union -from torchopt.alias.utils import flip_sign_and_add_weight_decay, scale_by_neg_lr -from torchopt.combine import chain_flat +from torchopt.alias.utils import ( + _get_use_chain_flat, + flip_sign_and_add_weight_decay, + scale_by_neg_lr, +) +from torchopt.combine import chain from torchopt.transform import add_decayed_weights, scale_by_accelerated_adam, scale_by_adam from torchopt.typing import GradientTransformation, Params, ScalarOrSchedule @@ -42,7 +46,7 @@ __all__ = ['adamw'] -# pylint: disable-next=too-many-arguments +# pylint: disable-next=too-many-arguments,too-many-locals def adamw( lr: ScalarOrSchedule = 1e-3, betas: Tuple[float, float] = (0.9, 0.999), @@ -104,32 +108,43 @@ def adamw( """ b1, b2 = betas # pylint: disable=invalid-name # pylint: disable=unneeded-not - if not (callable(lr) or 0.0 <= lr): + if not (callable(lr) or 0.0 <= lr): # pragma: no cover raise ValueError(f'Invalid learning rate: {lr}') - if not 0.0 <= eps: + if not 0.0 <= eps: # pragma: no cover raise ValueError(f'Invalid epsilon value: {eps}') - if not 0.0 <= b1 < 1.0: + if not 0.0 <= b1 < 1.0: # pragma: no cover raise ValueError(f'Invalid beta parameter at index 0: {b1}') - if not 0.0 <= b2 < 1.0: + if not 0.0 <= b2 < 1.0: # pragma: no cover raise ValueError(f'Invalid beta parameter at index 1: {b2}') - if not 0.0 <= weight_decay: + if not 0.0 <= weight_decay: # pragma: no cover raise ValueError(f'Invalid weight_decay value: {weight_decay}') # pylint: enable=unneeded-not + chain_fn = chain + flip_sign_and_add_weight_decay_fn = flip_sign_and_add_weight_decay if use_accelerated_op: - adam_scaler = scale_by_accelerated_adam.flat # type: ignore[attr-defined] + adam_scaler_fn = scale_by_accelerated_adam else: - adam_scaler = scale_by_adam.flat # type: ignore[attr-defined] + adam_scaler_fn = scale_by_adam + add_decayed_weights_fn = add_decayed_weights + scale_by_neg_lr_fn = scale_by_neg_lr - return chain_flat( - flip_sign_and_add_weight_decay(weight_decay=0.0, maximize=maximize), - adam_scaler( + if _get_use_chain_flat(): # default behavior + chain_fn = chain_fn.flat # type: ignore[attr-defined] + flip_sign_and_add_weight_decay_fn = flip_sign_and_add_weight_decay_fn.flat # type: ignore[attr-defined] + adam_scaler_fn = adam_scaler_fn.flat # type: ignore[attr-defined] + add_decayed_weights_fn = add_decayed_weights_fn.flat # type: ignore[attr-defined] + scale_by_neg_lr_fn = scale_by_neg_lr_fn.flat # type: ignore[attr-defined] + + return chain_fn( + flip_sign_and_add_weight_decay_fn(weight_decay=0.0, maximize=maximize), + adam_scaler_fn( b1=b1, b2=b2, eps=eps, eps_root=eps_root, moment_requires_grad=moment_requires_grad, ), - add_decayed_weights.flat(weight_decay=weight_decay, mask=mask), # type: ignore[attr-defined] - scale_by_neg_lr(lr), + add_decayed_weights_fn(weight_decay=weight_decay, mask=mask), + scale_by_neg_lr_fn(lr), ) diff --git a/torchopt/alias/rmsprop.py b/torchopt/alias/rmsprop.py index 777966eb..18a5c5e8 100644 --- a/torchopt/alias/rmsprop.py +++ b/torchopt/alias/rmsprop.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -31,8 +31,12 @@ # ============================================================================== """Preset :class:`GradientTransformation` for the RMSProp optimizer.""" -from torchopt.alias.utils import flip_sign_and_add_weight_decay, scale_by_neg_lr -from torchopt.combine import chain_flat +from torchopt.alias.utils import ( + _get_use_chain_flat, + flip_sign_and_add_weight_decay, + scale_by_neg_lr, +) +from torchopt.combine import chain from torchopt.transform import scale_by_rms, scale_by_stddev, trace from torchopt.typing import GradientTransformation, ScalarOrSchedule @@ -95,30 +99,41 @@ def rmsprop( The functional optimizer wrapper :class:`torchopt.FuncOptimizer`. """ # pylint: disable=unneeded-not - if not (callable(lr) or 0.0 <= lr): + if not (callable(lr) or 0.0 <= lr): # pragma: no cover raise ValueError(f'Invalid learning rate: {lr}') - if not 0.0 <= alpha: + if not 0.0 <= alpha: # pragma: no cover raise ValueError(f'Invalid alpha value: {alpha}') - if not 0.0 <= eps: + if not 0.0 <= eps: # pragma: no cover raise ValueError(f'Invalid epsilon value: {eps}') - if not 0.0 <= momentum: + if not 0.0 <= momentum: # pragma: no cover raise ValueError(f'Invalid momentum value: {momentum}') - if not 0.0 <= weight_decay: + if not 0.0 <= weight_decay: # pragma: no cover raise ValueError(f'Invalid weight_decay value: {weight_decay}') # pylint: enable=unneeded-not + chain_fn = chain + flip_sign_and_add_weight_decay_fn = flip_sign_and_add_weight_decay if centered: - rmsprop_scaler = scale_by_stddev.flat # type: ignore[attr-defined] + rmsprop_scaler_fn = scale_by_stddev else: - rmsprop_scaler = scale_by_rms.flat # type: ignore[attr-defined] + rmsprop_scaler_fn = scale_by_rms + trace_fn = trace + scale_by_neg_lr_fn = scale_by_neg_lr - return chain_flat( - flip_sign_and_add_weight_decay(weight_decay=weight_decay, maximize=maximize), - rmsprop_scaler( + if _get_use_chain_flat(): # default behavior + chain_fn = chain_fn.flat # type: ignore[attr-defined] + flip_sign_and_add_weight_decay_fn = flip_sign_and_add_weight_decay_fn.flat # type: ignore[attr-defined] + rmsprop_scaler_fn = rmsprop_scaler_fn.flat # type: ignore[attr-defined] + trace_fn = trace_fn.flat # type: ignore[attr-defined] + scale_by_neg_lr_fn = scale_by_neg_lr_fn.flat # type: ignore[attr-defined] + + return chain_fn( + flip_sign_and_add_weight_decay_fn(weight_decay=weight_decay, maximize=maximize), + rmsprop_scaler_fn( alpha=alpha, eps=eps, initial_scale=initial_scale, ), - trace.flat(momentum=momentum, nesterov=nesterov), # type: ignore[attr-defined] - scale_by_neg_lr(lr), + trace_fn(momentum=momentum, nesterov=nesterov), + scale_by_neg_lr_fn(lr), ) diff --git a/torchopt/alias/sgd.py b/torchopt/alias/sgd.py index 27dd1a13..61b3d6e4 100644 --- a/torchopt/alias/sgd.py +++ b/torchopt/alias/sgd.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
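`sgd` repeats the same construction-time selection. Stripped of the alias boilerplate, the pattern looks like the sketch below; `make_sgd_like` is a hypothetical stand-in, and the `.flat` variants operate on pre-flattened leaf lists so the chain avoids repeated pytree traversals:

```python
from torchopt.alias.utils import (
    _get_use_chain_flat,
    flip_sign_and_add_weight_decay,
    scale_by_neg_lr,
)
from torchopt.combine import chain
from torchopt.transform import trace

def make_sgd_like(lr: float, momentum: float = 0.9):
    chain_fn, flip_fn = chain, flip_sign_and_add_weight_decay
    trace_fn, neg_lr_fn = trace, scale_by_neg_lr
    if _get_use_chain_flat():  # default: flatten once, then map over leaves
        chain_fn, flip_fn = chain_fn.flat, flip_fn.flat
        trace_fn, neg_lr_fn = trace_fn.flat, neg_lr_fn.flat
    return chain_fn(
        flip_fn(weight_decay=0.0),
        trace_fn(momentum=momentum),
        neg_lr_fn(lr),
    )
```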
@@ -31,8 +31,12 @@ # ============================================================================== """Preset :class:`GradientTransformation` for the SGD optimizer.""" -from torchopt.alias.utils import flip_sign_and_add_weight_decay, scale_by_neg_lr -from torchopt.combine import chain_flat +from torchopt.alias.utils import ( + _get_use_chain_flat, + flip_sign_and_add_weight_decay, + scale_by_neg_lr, +) +from torchopt.combine import chain from torchopt.transform import trace from torchopt.typing import GradientTransformation, ScalarOrSchedule @@ -83,23 +87,34 @@ def sgd( The functional optimizer wrapper :class:`torchopt.FuncOptimizer`. """ # pylint: disable=unneeded-not - if not (callable(lr) or 0.0 <= lr): + if not (callable(lr) or 0.0 <= lr): # pragma: no cover raise ValueError(f'Invalid learning rate: {lr}') - if not 0.0 <= momentum: + if not 0.0 <= momentum: # pragma: no cover raise ValueError(f'Invalid momentum value: {momentum}') - if not 0.0 <= weight_decay: + if not 0.0 <= weight_decay: # pragma: no cover raise ValueError(f'Invalid weight_decay value: {weight_decay}') - if nesterov and (momentum <= 0.0 or dampening != 0.0): + if nesterov and (momentum <= 0.0 or dampening != 0.0): # pragma: no cover raise ValueError('Nesterov momentum requires a momentum and zero dampening') # pylint: enable=unneeded-not - return chain_flat( - flip_sign_and_add_weight_decay(weight_decay=weight_decay, maximize=maximize), - trace.flat( # type: ignore[attr-defined] + chain_fn = chain + flip_sign_and_add_weight_decay_fn = flip_sign_and_add_weight_decay + trace_fn = trace + scale_by_neg_lr_fn = scale_by_neg_lr + + if _get_use_chain_flat(): # default behavior + chain_fn = chain_fn.flat # type: ignore[attr-defined] + flip_sign_and_add_weight_decay_fn = flip_sign_and_add_weight_decay_fn.flat # type: ignore[attr-defined] + trace_fn = trace_fn.flat # type: ignore[attr-defined] + scale_by_neg_lr_fn = scale_by_neg_lr_fn.flat # type: ignore[attr-defined] + + return chain_fn( + flip_sign_and_add_weight_decay_fn(weight_decay=weight_decay, maximize=maximize), + trace_fn( momentum=momentum, dampening=dampening, nesterov=nesterov, moment_requires_grad=moment_requires_grad, ), - scale_by_neg_lr(lr), + scale_by_neg_lr_fn(lr), ) diff --git a/torchopt/alias/utils.py b/torchopt/alias/utils.py index 08f9fa08..869aad87 100644 --- a/torchopt/alias/utils.py +++ b/torchopt/alias/utils.py @@ -13,29 +13,89 @@ # limitations under the License. 
r"""Utilities for the aliases of preset :class:`GradientTransformation`\s for optimizers.""" +import threading +from typing import Optional, Tuple + +from torchopt import pytree from torchopt.base import EmptyState, GradientTransformation, identity from torchopt.transform import scale, scale_by_schedule -from torchopt.transform.utils import tree_map_flat -from torchopt.typing import ScalarOrSchedule +from torchopt.transform.utils import tree_map_flat, tree_map_flat_ +from torchopt.typing import OptState, Params, ScalarOrSchedule, Updates __all__ = ['flip_sign_and_add_weight_decay', 'scale_by_neg_lr'] -def flip_sign_and_add_weight_decay(weight_decay: float = 0.0, maximize=False): +__USE_CHAIN_FLAT_LOCK = threading.Lock() +__USE_CHAIN_FLAT = True + + +def _set_use_chain_flat(use_chain_flat: bool) -> None: # only used for testing purposes + global __USE_CHAIN_FLAT # pylint: disable=global-statement + with __USE_CHAIN_FLAT_LOCK: + __USE_CHAIN_FLAT = use_chain_flat + + +def _get_use_chain_flat() -> bool: # only used for testing purposes + with __USE_CHAIN_FLAT_LOCK: + return __USE_CHAIN_FLAT + + +def flip_sign_and_add_weight_decay( + weight_decay: float = 0.0, maximize=False +) -> GradientTransformation: """Flip the sign of the updates and adds weight decay.""" - if not 0.0 <= weight_decay: # pylint: disable=unneeded-not + return _flip_sign_and_add_weight_decay( + weight_decay=weight_decay, + maximize=maximize, + already_flattened=False, + ) + + +def _flip_sign_and_add_weight_decay_flat( + weight_decay: float = 0.0, maximize=False +) -> GradientTransformation: + """Flip the sign of the updates and adds weight decay.""" + return _flip_sign_and_add_weight_decay( + weight_decay=weight_decay, + maximize=maximize, + already_flattened=True, + ) + + +def _flip_sign_and_add_weight_decay( + weight_decay: float = 0.0, + maximize=False, + *, + already_flattened: bool = False, +) -> GradientTransformation: + """Flip the sign of the updates and adds weight decay.""" + # pylint: disable-next=unneeded-not + if not 0.0 <= weight_decay: # pragma: no cover raise ValueError(f'Invalid weight_decay value: {weight_decay}') if not maximize and weight_decay == 0.0: return identity() - def init_fn(params): # pylint: disable=unused-argument + if already_flattened: + tree_map = tree_map_flat + tree_map_ = tree_map_flat_ + else: + tree_map = pytree.tree_map # type: ignore[assignment] + tree_map_ = pytree.tree_map_ # type: ignore[assignment] + + def init_fn(params: Params) -> OptState: # pylint: disable=unused-argument return EmptyState() if not maximize: # gradient descent - def update_fn(updates, state, *, params=None, inplace=True): + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, + inplace: bool = True, + ) -> Tuple[Updates, OptState]: assert params is not None, ( 'Parameters are required for weight decay. ' 'Call `update(updates, state, params=params)` instead.' 
@@ -48,34 +108,52 @@ def f(g, p): return g.add_(p, alpha=weight_decay) return g.add_(p.data, alpha=weight_decay) + updates = tree_map_(f, updates, params) + else: def f(g, p): return g.add(p, alpha=weight_decay) - updates = tree_map_flat(f, updates, params) + updates = tree_map(f, updates, params) + return updates, state else: # gradient ascent if weight_decay == 0.0: - # pylint: disable-next=unused-argument - def update_fn(updates, state, *, params=None, inplace=True): + + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, + ) -> Tuple[Updates, OptState]: if inplace: def f(g): return g.neg_() + updates = tree_map_(f, updates) + else: def f(g): return g.neg() - updates = tree_map_flat(f, updates) + updates = tree_map(f, updates) + return updates, state else: - def update_fn(updates, state, *, params=None, inplace=True): + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, + inplace: bool = True, + ) -> Tuple[Updates, OptState]: assert params is not None, ( 'Parameters are required for weight decay. ' 'Call `update(updates, state, params=params)` instead.' @@ -84,26 +162,39 @@ def update_fn(updates, state, *, params=None, inplace=True): if inplace: def f(g, p): - if g is not None: - if g.requires_grad: - return g.neg_().add_(p, alpha=weight_decay) - return g.neg_().add_(p.data, alpha=weight_decay) - return None + if g.requires_grad: + return g.neg_().add_(p, alpha=weight_decay) + return g.neg_().add_(p.data, alpha=weight_decay) + + updates = tree_map_(f, updates, params) else: def f(g, p): return g.neg().add_(p, alpha=weight_decay) - updates = tree_map_flat(f, updates, params) + updates = tree_map(f, updates, params) + return updates, state return GradientTransformation(init_fn, update_fn) -def scale_by_neg_lr(lr: ScalarOrSchedule): +flip_sign_and_add_weight_decay.flat = _flip_sign_and_add_weight_decay_flat # type: ignore[attr-defined] +flip_sign_and_add_weight_decay.impl = _flip_sign_and_add_weight_decay # type: ignore[attr-defined] + + +def scale_by_neg_lr(lr: ScalarOrSchedule) -> GradientTransformation: """Scale the updates by the negative learning rate.""" - if not (callable(lr) or 0.0 <= lr): + return _scale_by_neg_lr(lr=lr, already_flattened=False) + + +def _scale_by_neg_lr_flat(lr: ScalarOrSchedule) -> GradientTransformation: + return _scale_by_neg_lr(lr=lr, already_flattened=True) + + +def _scale_by_neg_lr(lr: ScalarOrSchedule, *, already_flattened=False) -> GradientTransformation: + if not (callable(lr) or 0.0 <= lr): # pragma: no cover raise ValueError(f'Invalid learning rate: {lr}') if callable(lr): @@ -111,5 +202,12 @@ def scale_by_neg_lr(lr: ScalarOrSchedule): def schedule_wrapper(count): return -lr(count) # type: ignore[operator] - return scale_by_schedule.flat(schedule_wrapper) # type: ignore[attr-defined] - return scale.flat(-lr) # type: ignore[attr-defined] + return scale_by_schedule.impl( # type: ignore[attr-defined] + schedule_wrapper, + already_flattened=already_flattened, + ) + return scale.impl(-lr, already_flattened=already_flattened) # type: ignore[attr-defined] + + +scale_by_neg_lr.flat = _scale_by_neg_lr_flat # type: ignore[attr-defined] +scale_by_neg_lr.impl = _scale_by_neg_lr # type: ignore[attr-defined] diff --git a/torchopt/base.py b/torchopt/base.py index 6e9faa20..bb37b147 100644 --- a/torchopt/base.py +++ b/torchopt/base.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. 
+# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -37,7 +37,7 @@ from typing_extensions import Protocol # Python 3.8+ -if TYPE_CHECKING: +if TYPE_CHECKING: # pragma: no cover from torchopt.typing import OptState, Params, Updates @@ -170,12 +170,21 @@ def __new__(cls, *transformations: GradientTransformation) -> 'ChainedGradientTr ) ) + if len(transformations) == 0: + transformations = (IdentityGradientTransformation(),) + init_fns, update_fns = tuple(zip(*transformations)) - def init_fn(params): + def init_fn(params: 'Params') -> 'OptState': return tuple(fn(params) for fn in init_fns) - def update_fn(updates, state, *, params=None, inplace=True): + def update_fn( + updates: 'Updates', + state: 'OptState', + *, + params: Optional['Params'] = None, + inplace: bool = True, + ) -> Tuple['Updates', 'OptState']: if len(update_fns) != len(state): raise ValueError( 'The number of updates and states has to be the same in chain! Make sure you' @@ -191,14 +200,13 @@ def update_fn(updates, state, *, params=None, inplace=True): instance.transformations = transformations return instance - def __str__(self) -> str: + def __repr__(self) -> str: """Return a string representation of the chained gradient transformation.""" - return '{}(\n {}\n)'.format( - self.__class__.__name__, ',\n '.join(repr(t) for t in self.transformations) + return '{}(\n {},\n)'.format( + self.__class__.__name__, + ',\n '.join(repr(t) for t in self.transformations), ) - __repr__ = __str__ - def __eq__(self, other: object) -> bool: """Return whether two chained gradient transformations are equal.""" if isinstance(other, ChainedGradientTransformation): diff --git a/torchopt/clip.py b/torchopt/clip.py index b4fa24e4..2469d17a 100644 --- a/torchopt/clip.py +++ b/torchopt/clip.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -17,12 +17,13 @@ # ============================================================================== """Utilities for gradient clipping.""" -from typing import Union +from typing import Optional, Tuple, Union import torch from torchopt import pytree from torchopt.base import EmptyState, GradientTransformation +from torchopt.typing import OptState, Params, Updates __all__ = ['clip_grad_norm'] @@ -49,10 +50,16 @@ def clip_grad_norm( An ``(init_fn, update_fn)`` tuple. """ - def init_fn(params): # pylint: disable=unused-argument + def init_fn(params: Params) -> OptState: # pylint: disable=unused-argument return ClipState() - def update_fn(updates, state, *, params=None, inplace=True): # pylint: disable=unused-argument + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, + ) -> Tuple[Updates, OptState]: available_updates = pytree.tree_leaves(updates) if len(available_updates) == 0: return updates, state diff --git a/torchopt/combine.py b/torchopt/combine.py index 7587e912..82297426 100644 --- a/torchopt/combine.py +++ b/torchopt/combine.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. 
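One behavioral consequence of the `__new__` change above: chaining zero transformations now degenerates to the identity transformation instead of failing on an empty `zip`. An illustrative check, assuming `chain()` forwards to `ChainedGradientTransformation`:

```python
import torch
from torchopt.combine import chain

params = (torch.zeros(3),)
grads = (torch.ones(3),)

noop = chain()  # empty chain now behaves as the identity transformation
state = noop.init(params)
updates, state = noop.update(grads, state)
assert torch.equal(updates[0], grads[0])
```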
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -31,9 +31,11 @@ # ============================================================================== """Utilities to define a chained transformation.""" +from typing import Optional, Tuple + from torchopt import pytree from torchopt.base import ChainedGradientTransformation, GradientTransformation, identity -from torchopt.typing import Updates +from torchopt.typing import OptState, Params, Updates __all__ = ['chain', 'chain_flat'] @@ -77,10 +79,16 @@ def chain_flat(*transformations: GradientTransformation) -> GradientTransformati else: inner = chain(*transformations) - def init_fn(params): + def init_fn(params: Params) -> OptState: return inner.init(pytree.tree_leaves(params, none_is_leaf=True)) - def update_fn(updates, state, *, params=None, inplace=True): + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, + inplace: bool = True, + ) -> Tuple[Updates, OptState]: flat_updates, treespec = pytree.tree_flatten(updates, none_is_leaf=True) if params is not None: flat_params = pytree.tree_leaves(params, none_is_leaf=True) diff --git a/torchopt/diff/implicit/nn/module.py b/torchopt/diff/implicit/nn/module.py index adac97db..f9bff4de 100644 --- a/torchopt/diff/implicit/nn/module.py +++ b/torchopt/diff/implicit/nn/module.py @@ -26,7 +26,7 @@ from torchopt.diff.implicit.decorator import custom_root from torchopt.nn.module import MetaGradientModule -from torchopt.nn.stateless import reparameterize, swap_state +from torchopt.nn.stateless import reparametrize, swap_state from torchopt.typing import LinearSolver, TupleOfTensors @@ -42,7 +42,7 @@ def _stateless_objective_fn( *input, **kwargs, ) -> torch.Tensor: - with reparameterize( + with reparametrize( self, itertools.chain( zip(__params_names, __flat_params), @@ -61,7 +61,7 @@ def _stateless_optimality_fn( *input, **kwargs, ) -> TupleOfTensors: - with reparameterize( + with reparametrize( self, itertools.chain( zip(__params_names, __flat_params), diff --git a/torchopt/diff/zero_order/decorator.py b/torchopt/diff/zero_order/decorator.py index 04ab35e1..80664d8b 100644 --- a/torchopt/diff/zero_order/decorator.py +++ b/torchopt/diff/zero_order/decorator.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -15,7 +15,7 @@ """Zero-Order Gradient Estimation.""" import functools -from typing import Any, Callable, List, Tuple, Union +from typing import Any, Callable, List, Sequence, Tuple, Union from typing_extensions import Literal # Python 3.8+ from typing_extensions import TypeAlias # Python 3.10+ @@ -23,14 +23,7 @@ from torch.autograd import Function from torchopt import pytree -from torchopt.typing import ( - ListOfTensors, - Numeric, - Samplable, - SampleFunc, - Sequence, - TupleOfOptionalTensors, -) +from torchopt.typing import ListOfTensors, Numeric, Samplable, SampleFunc, TupleOfOptionalTensors class WrappedSamplable(Samplable): # pylint: disable=too-few-public-methods diff --git a/torchopt/diff/zero_order/nn/module.py b/torchopt/diff/zero_order/nn/module.py index 9be7b16a..d76ac444 100644 --- a/torchopt/diff/zero_order/nn/module.py +++ b/torchopt/diff/zero_order/nn/module.py @@ -24,7 +24,7 @@ import torch.nn as nn from torchopt.diff.zero_order.decorator import Method, Samplable, zero_order -from torchopt.nn.stateless import reparameterize +from torchopt.nn.stateless import reparametrize from torchopt.typing import Numeric, TupleOfTensors @@ -55,7 +55,7 @@ def forward_fn( *input, **kwargs, ) -> torch.Tensor: - with reparameterize(self, zip(params_names, __flat_params)): + with reparametrize(self, zip(params_names, __flat_params)): return cls_forward(self, *input, **kwargs) return forward_fn(flat_params, *input, **kwargs) diff --git a/torchopt/distributed/__init__.py b/torchopt/distributed/__init__.py index d966691c..4272e37a 100644 --- a/torchopt/distributed/__init__.py +++ b/torchopt/distributed/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -25,6 +25,6 @@ __all__ = ['is_available', *api.__all__, *world.__all__] -def is_available(): +def is_available() -> bool: """Check if the distributed module is available.""" return dist.is_available() and rpc.is_available() and autograd.is_available() diff --git a/torchopt/distributed/api.py b/torchopt/distributed/api.py index 4a969a6a..53f87fba 100644 --- a/torchopt/distributed/api.py +++ b/torchopt/distributed/api.py @@ -33,7 +33,7 @@ import torch import torch.distributed.rpc as rpc -import torchopt.pytree as pytree +from torchopt import pytree from torchopt.distributed.world import get_worker_id, get_world_rank, get_world_size from torchopt.typing import Future diff --git a/torchopt/hook.py b/torchopt/hook.py index 625d4f25..949c76e7 100644 --- a/torchopt/hook.py +++ b/torchopt/hook.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -14,12 +14,13 @@ # ============================================================================== """Hook utilities.""" -from typing import Callable, Optional +from typing import Callable, Optional, Tuple import torch from torchopt import pytree from torchopt.base import EmptyState, GradientTransformation +from torchopt.typing import OptState, Params, Updates __all__ = ['zero_nan_hook', 'nan_to_num_hook', 'register_hook'] @@ -51,10 +52,16 @@ def register_hook(hook) -> GradientTransformation: An ``(init_fn, update_fn)`` tuple. 
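For context on the hunk being annotated here, a usage sketch of the hook transformation (illustrative only): `register_hook` maps `Tensor.register_hook` over every update tensor, and the bundled `zero_nan_hook` replaces NaN gradients with zeros during backpropagation.

```python
import torch
from torchopt.hook import register_hook, zero_nan_hook

transform = register_hook(zero_nan_hook)
params = (torch.zeros(3, requires_grad=True),)
state = transform.init(params)
transform.update(params, state)  # attaches zero_nan_hook to each tensor
```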
""" - def init_fn(params): # pylint: disable=unused-argument + def init_fn(params: Params) -> OptState: # pylint: disable=unused-argument return EmptyState() - def update_fn(updates, state, *, params=None, inplace=True): # pylint: disable=unused-argument + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, # pylint: disable=unused-argument + ) -> Tuple[Updates, OptState]: def f(g): return g.register_hook(hook) diff --git a/torchopt/nn/__init__.py b/torchopt/nn/__init__.py index e6dcc14f..8271ad7d 100644 --- a/torchopt/nn/__init__.py +++ b/torchopt/nn/__init__.py @@ -17,13 +17,14 @@ from torchopt.diff.implicit.nn.module import ImplicitMetaGradientModule # circular reference from torchopt.diff.zero_order.nn.module import ZeroOrderGradientModule # circular reference from torchopt.nn.module import MetaGradientModule -from torchopt.nn.stateless import reparameterize, swap_state +from torchopt.nn.stateless import reparameterize, reparametrize, swap_state __all__ = [ 'MetaGradientModule', 'ImplicitMetaGradientModule', 'ZeroOrderGradientModule', + 'reparametrize', 'reparameterize', 'swap_state', ] diff --git a/torchopt/nn/module.py b/torchopt/nn/module.py index 156b7b3f..3716f674 100644 --- a/torchopt/nn/module.py +++ b/torchopt/nn/module.py @@ -54,6 +54,10 @@ def __new__(cls, *args, **kwargs) -> 'MetaGradientModule': instance._meta_modules: Dict[str, Optional[nn.Module]] = OrderedDict() # type: ignore[misc] return instance + def __init__(self, *args, **kwargs) -> None: # pylint: disable=unused-argument + """Initialize a new module instance.""" + super().__init__() + def __getattr__(self, name: str) -> Union[torch.Tensor, nn.Module]: """Get an attribute of the module.""" if '_parameters' in self.__dict__: diff --git a/torchopt/nn/stateless.py b/torchopt/nn/stateless.py index e0a9ecb8..2fc0dbb4 100644 --- a/torchopt/nn/stateless.py +++ b/torchopt/nn/stateless.py @@ -21,7 +21,7 @@ import torch.nn as nn -__all__ = ['swap_state', 'reparameterize'] +__all__ = ['swap_state', 'reparametrize', 'reparameterize'] MISSING: torch.Tensor = object() # type: ignore[assignment] @@ -82,7 +82,7 @@ def recursive_setattr(path: str, value: torch.Tensor) -> torch.Tensor: @contextlib.contextmanager -def reparameterize( +def reparametrize( module: nn.Module, named_tensors: Union[Dict[str, torch.Tensor], Iterable[Tuple[str, torch.Tensor]]], allow_missing: bool = False, @@ -97,3 +97,6 @@ def reparameterize( yield module finally: swap_state(module, orig_named_tensors, allow_missing=allow_missing) + + +reparameterize = reparametrize diff --git a/torchopt/pytree.py b/torchopt/pytree.py index dc75d104..0abcf4fd 100644 --- a/torchopt/pytree.py +++ b/torchopt/pytree.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -167,7 +167,7 @@ def tree_wait(future_tree: PyTree[Future[T]]) -> PyTree[T]: return tree_unflatten(treespec, results) -if rpc.is_available(): +if rpc.is_available(): # pragma: no cover def tree_as_rref(tree: PyTree[T]) -> PyTree[RRef[T]]: r"""Convert a tree of local objects to a tree of :class:`RRef`\s.""" diff --git a/torchopt/transform/__init__.py b/torchopt/transform/__init__.py index 07c1a8e9..7006090f 100644 --- a/torchopt/transform/__init__.py +++ b/torchopt/transform/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -31,7 +31,7 @@ # ============================================================================== """Preset transformations.""" -from torchopt.transform.add_decayed_weights import add_decayed_weights +from torchopt.transform.add_decayed_weights import add_decayed_weights, masked from torchopt.transform.nan_to_num import nan_to_num from torchopt.transform.scale import scale from torchopt.transform.scale_by_adam import scale_by_accelerated_adam, scale_by_adam @@ -46,6 +46,7 @@ 'scale', 'scale_by_schedule', 'add_decayed_weights', + 'masked', 'scale_by_adam', 'scale_by_accelerated_adam', 'scale_by_rms', diff --git a/torchopt/transform/add_decayed_weights.py b/torchopt/transform/add_decayed_weights.py index 48a117f5..772e6291 100644 --- a/torchopt/transform/add_decayed_weights.py +++ b/torchopt/transform/add_decayed_weights.py @@ -32,12 +32,12 @@ # ============================================================================== """Preset transformations for adding weight decay to updates.""" -from typing import Any, Callable, NamedTuple, Optional, Union +from typing import Any, Callable, NamedTuple, Optional, Tuple, Union from torchopt import pytree from torchopt.base import EmptyState, GradientTransformation, identity -from torchopt.transform.utils import tree_map_flat -from torchopt.typing import Params +from torchopt.transform.utils import tree_map_flat, tree_map_flat_ +from torchopt.typing import OptState, Params, Updates __all__ = ['masked', 'add_decayed_weights'] @@ -108,12 +108,18 @@ def _masked( def tree_mask(params, mask_tree): return tree_map(lambda p, m: p if m else MaskedNode(), params, mask_tree) - def init_fn(params): + def init_fn(params: Params) -> OptState: mask_tree = mask(params) if callable(mask) else mask masked_params = tree_mask(params, mask_tree) return MaskedState(inner_state=inner.init(masked_params)) - def update_fn(updates, state, params=None, inplace=True): # pylint: disable=unused-argument + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, + inplace: bool = True, + ) -> Tuple[Updates, OptState]: mask_tree = mask(updates) if callable(mask) else mask masked_updates = tree_mask(updates, mask_tree) masked_params = None if params is None else tree_mask(params, mask_tree) @@ -123,7 +129,7 @@ def update_fn(updates, state, params=None, inplace=True): # pylint: disable=unu ) new_updates = tree_map( - lambda new_u, old_u, m: new_u if m else old_u, new_masked_updates, updates, mask_tree + lambda old_u, new_u, m: new_u if m else old_u, updates, new_masked_updates, mask_tree ) return new_updates, MaskedState(inner_state=new_inner_state) @@ -177,7 +183,8 @@ def _add_decayed_weights( *, already_flattened: bool = False, ) -> GradientTransformation: - if not 0.0 <= weight_decay: # pylint: 
disable=unneeded-not + # pylint: disable-next=unneeded-not + if not 0.0 <= weight_decay: # pragma: no cover raise ValueError(f'Invalid weight_decay value: {weight_decay}') if weight_decay == 0.0 and mask is None: @@ -185,13 +192,21 @@ def _add_decayed_weights( if already_flattened: tree_map = tree_map_flat + tree_map_ = tree_map_flat_ else: tree_map = pytree.tree_map # type: ignore[assignment] + tree_map_ = pytree.tree_map_ # type: ignore[assignment] - def init_fn(params): # pylint: disable=unused-argument + def init_fn(params: Params) -> OptState: # pylint: disable=unused-argument return AddDecayedWeightsState() - def update_fn(updates, state, params=None, inplace=True): # pylint: disable=unused-argument + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, + inplace: bool = True, + ) -> Tuple[Updates, OptState]: assert params is not None, ( 'Parameters are required for weight decay. ' 'Call `update(updates, state, params=params)` instead.' @@ -204,12 +219,15 @@ def f(g, p): return g.add_(p, alpha=weight_decay) return g.add_(p.data, alpha=weight_decay) + updates = tree_map_(f, updates, params) + else: def f(g, p): return g.add(p, alpha=weight_decay) - updates = tree_map(f, updates, params) + updates = tree_map(f, updates, params) + return updates, state # If mask is not `None`, apply mask to the gradient transformation. diff --git a/torchopt/transform/nan_to_num.py b/torchopt/transform/nan_to_num.py index b161f7ea..2c0b9d5e 100644 --- a/torchopt/transform/nan_to_num.py +++ b/torchopt/transform/nan_to_num.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -14,14 +14,17 @@ # ============================================================================== """Preset transformations that replaces updates with non-finite values to the given numbers.""" -from typing import Optional +from typing import Optional, Tuple from torchopt import pytree from torchopt.base import EmptyState, GradientTransformation +from torchopt.typing import OptState, Params, Updates def nan_to_num( - nan: float = 0.0, posinf: Optional[float] = None, neginf: Optional[float] = None + nan: float = 0.0, + posinf: Optional[float] = None, + neginf: Optional[float] = None, ) -> GradientTransformation: """Replace updates with values ``nan`` / ``+inf`` / ``-inf`` to the given numbers. @@ -29,10 +32,16 @@ def nan_to_num( An ``(init_fn, update_fn)`` tuple. """ - def init_fn(params): # pylint: disable=unused-argument + def init_fn(params: Params) -> OptState: # pylint: disable=unused-argument return EmptyState() - def update_fn(updates, state, *, params=None, inplace=True): # pylint: disable=unused-argument + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, + ) -> Tuple[Updates, OptState]: if inplace: def f(g): diff --git a/torchopt/transform/scale.py b/torchopt/transform/scale.py index 828b4b2f..4afac163 100644 --- a/torchopt/transform/scale.py +++ b/torchopt/transform/scale.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
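A pattern worth calling out, since it recurs throughout this patch: in-place branches now route through `tree_map_` (apply the function for its side effect and hand back the original tree), while out-of-place branches keep `tree_map` (build a new tree). A minimal sketch of the two flat helpers, with plain lists standing in for flattened pytrees and hypothetical names:

```python
from collections import deque

import torch

def map_flat(fn, flat, *rest):
    return flat.__class__(map(fn, flat, *rest))  # new container of the same type

def map_flat_(fn, flat, *rest):
    deque(map(fn, flat, *rest), maxlen=0)  # exhaust the map for its side effects
    return flat  # the original container, mutated in place

grads = [torch.ones(2)]
doubled = map_flat(lambda g: g.mul(2.0), grads)  # out-of-place copy
map_flat_(lambda g: g.mul_(2.0), grads)          # in-place; returns grads itself
```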
@@ -31,9 +31,12 @@ # ============================================================================== """Preset transformation for scaling updates by learning rate.""" +from typing import Optional, Tuple + from torchopt import pytree from torchopt.base import EmptyState, GradientTransformation -from torchopt.transform.utils import tree_map_flat +from torchopt.transform.utils import tree_map_flat, tree_map_flat_ +from torchopt.typing import OptState, Params, Updates __all__ = ['scale'] @@ -58,27 +61,42 @@ def _scale_flat(step_size: float) -> GradientTransformation: return _scale(step_size=step_size, already_flattened=True) -def _scale(step_size: float, *, already_flattened: bool = False) -> GradientTransformation: +def _scale( + step_size: float, + *, + already_flattened: bool = False, +) -> GradientTransformation: if already_flattened: tree_map = tree_map_flat + tree_map_ = tree_map_flat_ else: tree_map = pytree.tree_map # type: ignore[assignment] + tree_map_ = pytree.tree_map_ # type: ignore[assignment] - def init_fn(params): # pylint: disable=unused-argument + def init_fn(params: Params) -> OptState: # pylint: disable=unused-argument return ScaleState() - def update_fn(updates, state, *, params=None, inplace=True): # pylint: disable=unused-argument + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, + ) -> Tuple[Updates, OptState]: if inplace: def f(g): return g.mul_(step_size) + updates = tree_map_(f, updates) + else: def f(g): return g.mul(step_size) - updates = tree_map(f, updates) + updates = tree_map(f, updates) + return updates, state return GradientTransformation(init_fn, update_fn) diff --git a/torchopt/transform/scale_by_adam.py b/torchopt/transform/scale_by_adam.py index f0065712..039d31fb 100644 --- a/torchopt/transform/scale_by_adam.py +++ b/torchopt/transform/scale_by_adam.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -33,7 +33,7 @@ # pylint: disable=invalid-name -from typing import NamedTuple +from typing import NamedTuple, Optional, Tuple import torch @@ -41,13 +41,13 @@ from torchopt.accelerated_op import AdamOp from torchopt.base import GradientTransformation from torchopt.transform.utils import inc_count, tree_map_flat, update_moment -from torchopt.typing import SequenceOfTensors, Updates +from torchopt.typing import OptState, Params, Updates __all__ = ['scale_by_adam', 'scale_by_accelerated_adam'] -TRIPLE_PYTREE_SPEC = pytree.tree_structure((0, 1, 2)) # type: ignore[arg-type] +TRIPLE_PYTREE_SPEC = pytree.tree_structure((0, 1, 2), none_is_leaf=True) # type: ignore[arg-type] class ScaleByAdamState(NamedTuple): @@ -55,14 +55,20 @@ class ScaleByAdamState(NamedTuple): mu: Updates nu: Updates - count: SequenceOfTensors # type: ignore + count: OptState -def _bias_correction(moment, decay, count, *, already_flattened=False): +def _bias_correction( + moment: Updates, + decay: float, + count: OptState, + *, + already_flattened: bool = False, +) -> Updates: """Perform bias correction. 
This becomes a no-op as count goes to infinity.""" def f(t, c): # pylint: disable=invalid-name - return t.div(1 - decay**c) + return t.div(1 - pow(decay, c)) if already_flattened: return tree_map_flat(f, moment, count) @@ -134,11 +140,11 @@ def _scale_by_adam( already_flattened: bool = False, ) -> GradientTransformation: # pylint: disable=unneeded-not - if not 0.0 <= eps: + if not 0.0 <= eps: # pragma: no cover raise ValueError(f'Invalid epsilon value: {eps}') - if not 0.0 <= b1 < 1.0: + if not 0.0 <= b1 < 1.0: # pragma: no cover raise ValueError(f'Invalid beta parameter at index 0: {b1}') - if not 0.0 <= b2 < 1.0: + if not 0.0 <= b2 < 1.0: # pragma: no cover raise ValueError(f'Invalid beta parameter at index 1: {b2}') # pylint: enable=unneeded-not @@ -147,7 +153,7 @@ def _scale_by_adam( else: tree_map = pytree.tree_map # type: ignore[assignment] - def init_fn(params): + def init_fn(params: Params) -> OptState: zero = tree_map( # count init lambda t: torch.zeros(1, dtype=torch.int64, device=t.device).squeeze_(), params ) @@ -159,7 +165,13 @@ def init_fn(params): ) return ScaleByAdamState(mu=mu, nu=nu, count=zero) - def update_fn(updates, state, *, params=None, inplace=True): # pylint: disable=unused-argument + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, + ) -> Tuple[Updates, OptState]: mu = update_moment.impl( # type: ignore[attr-defined] updates, state.mu, b1, order=1, inplace=inplace, already_flattened=already_flattened ) @@ -258,19 +270,24 @@ def _scale_by_accelerated_adam( already_flattened: bool = False, ) -> GradientTransformation: # pylint: disable=unneeded-not - if not 0.0 <= eps: + if not 0.0 <= eps: # pragma: no cover raise ValueError(f'Invalid epsilon value: {eps}') - if not 0.0 <= b1 < 1.0: + if not 0.0 <= b1 < 1.0: # pragma: no cover raise ValueError(f'Invalid beta parameter at index 0: {b1}') - if not 0.0 <= b2 < 1.0: + if not 0.0 <= b2 < 1.0: # pragma: no cover raise ValueError(f'Invalid beta parameter at index 1: {b2}') # pylint: enable=unneeded-not if already_flattened: tree_map = tree_map_flat - # pylint: disable-next=unused-argument - def update_fn(updates, state, *, params=None, inplace=True): + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, + ) -> Tuple[Updates, OptState]: count_inc = inc_count.impl(updates, state.count, already_flattened=True) # type: ignore[attr-defined] op = AdamOp(b1=b1, b2=b2, eps=eps, eps_root=eps_root, inplace=inplace) @@ -282,11 +299,16 @@ def update_fn(updates, state, *, params=None, inplace=True): else: tree_map = pytree.tree_map # type: ignore[assignment] - # pylint: disable-next=unused-argument - def update_fn(updates, state, *, params=None, inplace=True): + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, + ) -> Tuple[Updates, OptState]: count_inc = inc_count.impl(updates, state.count, already_flattened=False) # type: ignore[attr-defined] - treespec = pytree.tree_structure(updates) + treespec = pytree.tree_structure(updates, none_is_leaf=True) op = AdamOp(b1=b1, b2=b2, eps=eps, eps_root=eps_root, inplace=inplace) out = pytree.tree_map(op, state.mu, state.nu, updates, count_inc) @@ -297,7 +319,7 @@ def update_fn(updates, state, *, params=None, inplace=True): new_mu, new_nu, new_updates = pytree.tree_transpose(treespec, 
TRIPLE_PYTREE_SPEC, out) # type: ignore[misc] return new_updates, ScaleByAdamState(mu=new_mu, nu=new_nu, count=count_inc) - def init_fn(params): + def init_fn(params: Params) -> OptState: zero = tree_map( # count init lambda t: torch.zeros(1, dtype=torch.int64, device=t.device).squeeze_(), params ) diff --git a/torchopt/transform/scale_by_rms.py b/torchopt/transform/scale_by_rms.py index 3451fafe..7a685f6b 100644 --- a/torchopt/transform/scale_by_rms.py +++ b/torchopt/transform/scale_by_rms.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -31,14 +31,14 @@ # ============================================================================== """Preset transformations for scaling updates by exponential root mean-squared (RMS).""" -from typing import NamedTuple +from typing import NamedTuple, Optional, Tuple import torch from torchopt import pytree from torchopt.base import GradientTransformation -from torchopt.transform.utils import tree_map_flat, update_moment -from torchopt.typing import Updates +from torchopt.transform.utils import tree_map_flat, tree_map_flat_, update_moment +from torchopt.typing import OptState, Params, Updates __all__ = ['scale_by_rms'] @@ -51,7 +51,9 @@ class ScaleByRmsState(NamedTuple): def scale_by_rms( - alpha: float = 0.9, eps: float = 1e-8, initial_scale: float = 0.0 + alpha: float = 0.9, + eps: float = 1e-8, + initial_scale: float = 0.0, ) -> GradientTransformation: """Rescale updates by the root of the exp. moving avg of the square. @@ -78,7 +80,9 @@ def scale_by_rms( def _scale_by_rms_flat( - alpha: float = 0.9, eps: float = 1e-8, initial_scale: float = 0.0 + alpha: float = 0.9, + eps: float = 1e-8, + initial_scale: float = 0.0, ) -> GradientTransformation: return _scale_by_rms( alpha=alpha, @@ -96,22 +100,30 @@ def _scale_by_rms( already_flattened: bool = False, ) -> GradientTransformation: # pylint: disable=unneeded-not - if not 0.0 <= alpha: + if not 0.0 <= alpha: # pragma: no cover raise ValueError(f'Invalid alpha value: {alpha}') - if not 0.0 <= eps: + if not 0.0 <= eps: # pragma: no cover raise ValueError(f'Invalid epsilon value: {eps}') # pylint: enable=unneeded-not if already_flattened: tree_map = tree_map_flat + tree_map_ = tree_map_flat_ else: tree_map = pytree.tree_map # type: ignore[assignment] + tree_map_ = pytree.tree_map_ # type: ignore[assignment] - def init_fn(params): + def init_fn(params: Params) -> OptState: nu = tree_map(lambda n: torch.full_like(n, initial_scale), params) # second moment return ScaleByRmsState(nu=nu) - def update_fn(updates, state, *, params=None, inplace=True): # pylint: disable=unused-argument + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, + ) -> Tuple[Updates, OptState]: nu = update_moment.impl( # type: ignore[attr-defined] updates, state.nu, alpha, order=2, inplace=inplace, already_flattened=already_flattened ) @@ -121,12 +133,15 @@ def update_fn(updates, state, *, params=None, inplace=True): # pylint: disable= def f(g, n): # pylint: disable=invalid-name return g.div_(n.sqrt().add_(eps)) + updates = tree_map_(f, updates, nu) + else: def f(g, n): # pylint: disable=invalid-name return g.div(n.sqrt().add(eps)) - updates = tree_map(f, updates, nu) + updates = tree_map(f, updates, nu) + return updates, 
ScaleByRmsState(nu=nu) return GradientTransformation(init_fn, update_fn) diff --git a/torchopt/transform/scale_by_schedule.py b/torchopt/transform/scale_by_schedule.py index 4b1d5d18..5556d111 100644 --- a/torchopt/transform/scale_by_schedule.py +++ b/torchopt/transform/scale_by_schedule.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -31,14 +31,14 @@ # ============================================================================== """Preset transformation for scaling updates by learning rate schedules.""" -from typing import NamedTuple +from typing import NamedTuple, Optional, Tuple import torch from torchopt import pytree from torchopt.base import GradientTransformation -from torchopt.transform.utils import inc_count, tree_map_flat -from torchopt.typing import Schedule, SequenceOfTensors +from torchopt.transform.utils import inc_count, tree_map_flat, tree_map_flat_ +from torchopt.typing import OptState, Params, Schedule, SequenceOfTensors, Updates __all__ = ['scale_by_schedule'] @@ -69,33 +69,46 @@ def _scale_by_schedule_flat(step_size_fn: Schedule) -> GradientTransformation: def _scale_by_schedule( - step_size_fn: Schedule, *, already_flattened: bool = False + step_size_fn: Schedule, + *, + already_flattened: bool = False, ) -> GradientTransformation: if already_flattened: tree_map = tree_map_flat + tree_map_ = tree_map_flat_ else: tree_map = pytree.tree_map # type: ignore[assignment] + tree_map_ = pytree.tree_map_ # type: ignore[assignment] - def init_fn(params): + def init_fn(params: Params) -> OptState: zero = tree_map( # count init lambda t: torch.zeros(1, dtype=torch.int64, device=t.device).squeeze_(), params ) return ScaleByScheduleState(count=zero) - def update_fn(updates, state, *, params=None, inplace=True): # pylint: disable=unused-argument + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, + ) -> Tuple[Updates, OptState]: if inplace: def f(g, c): # pylint: disable=invalid-name step_size = step_size_fn(c) return g.mul_(step_size) + updates = tree_map_(f, updates, state.count) + else: def f(g, c): # pylint: disable=invalid-name step_size = step_size_fn(c) return g.mul(step_size) - updates = tree_map(f, updates, state.count) + updates = tree_map(f, updates, state.count) + return ( updates, ScaleByScheduleState( diff --git a/torchopt/transform/scale_by_stddev.py b/torchopt/transform/scale_by_stddev.py index 37138566..c15a0d6c 100644 --- a/torchopt/transform/scale_by_stddev.py +++ b/torchopt/transform/scale_by_stddev.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
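Usage sketch for `scale_by_schedule` as refactored above: the schedule maps each tensor's step counter to a multiplicative step size (the inverse-decay schedule below is hypothetical):

```python
import torch
from torchopt.transform import scale_by_schedule

schedule = lambda count: 0.1 / (1.0 + count)  # hypothetical decay schedule
transform = scale_by_schedule(schedule)

params = (torch.zeros(2),)
grads = (torch.ones(2),)
state = transform.init(params)  # per-tensor int64 step counters starting at 0
updates, state = transform.update(grads, state)  # scaled by schedule(0) == 0.1
```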
@@ -33,14 +33,14 @@ # pylint: disable=invalid-name -from typing import NamedTuple +from typing import NamedTuple, Optional, Tuple import torch from torchopt import pytree from torchopt.base import GradientTransformation -from torchopt.transform.utils import tree_map_flat, update_moment -from torchopt.typing import Updates +from torchopt.transform.utils import tree_map_flat, tree_map_flat_, update_moment +from torchopt.typing import OptState, Params, Updates __all__ = ['scale_by_stddev'] @@ -54,7 +54,9 @@ class ScaleByRStdDevState(NamedTuple): def scale_by_stddev( - alpha: float = 0.9, eps: float = 1e-8, initial_scale: float = 0.0 + alpha: float = 0.9, + eps: float = 1e-8, + initial_scale: float = 0.0, ) -> GradientTransformation: """Rescale updates by the root of the centered exponential moving average of squares. @@ -81,7 +83,9 @@ def scale_by_stddev( def _scale_by_stddev_flat( - alpha: float = 0.9, eps: float = 1e-8, initial_scale: float = 0.0 + alpha: float = 0.9, + eps: float = 1e-8, + initial_scale: float = 0.0, ) -> GradientTransformation: return _scale_by_stddev( alpha=alpha, @@ -99,23 +103,31 @@ def _scale_by_stddev( already_flattened: bool = False, ) -> GradientTransformation: # pylint: disable=unneeded-not - if not 0.0 <= alpha: + if not 0.0 <= alpha: # pragma: no cover raise ValueError(f'Invalid alpha value: {alpha}') - if not 0.0 <= eps: + if not 0.0 <= eps: # pragma: no cover raise ValueError(f'Invalid epsilon value: {eps}') # pylint: enable=unneeded-not if already_flattened: tree_map = tree_map_flat + tree_map_ = tree_map_flat_ else: tree_map = pytree.tree_map # type: ignore[assignment] + tree_map_ = pytree.tree_map_ # type: ignore[assignment] - def init_fn(params): + def init_fn(params: Params) -> OptState: mu = tree_map(torch.zeros_like, params) # first moment nu = tree_map(lambda n: torch.full_like(n, initial_scale), params) # second moment return ScaleByRStdDevState(mu=mu, nu=nu) - def update_fn(updates, state, *, params=None, inplace=True): # pylint: disable=unused-argument + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, + ) -> Tuple[Updates, OptState]: mu = update_moment.impl( # type: ignore[attr-defined] updates, state.mu, alpha, order=1, inplace=inplace, already_flattened=already_flattened ) @@ -128,12 +140,15 @@ def update_fn(updates, state, *, params=None, inplace=True): # pylint: disable= def f(g, m, n): return g.div_(n.addcmul(m, m, value=-1.0).sqrt_().add(eps)) + updates = tree_map_(f, updates, mu, nu) + else: def f(g, m, n): return g.div(n.addcmul(m, m, value=-1.0).sqrt_().add(eps)) - updates = tree_map(f, updates, mu, nu) + updates = tree_map(f, updates, mu, nu) + return updates, ScaleByRStdDevState(mu=mu, nu=nu) return GradientTransformation(init_fn, update_fn) diff --git a/torchopt/transform/trace.py b/torchopt/transform/trace.py index 8bb138c2..45e043f0 100644 --- a/torchopt/transform/trace.py +++ b/torchopt/transform/trace.py @@ -1,4 +1,4 @@ -# Copyright 2022 MetaOPT Team. All Rights Reserved. +# Copyright 2022-2023 MetaOPT Team. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -33,14 +33,14 @@ # pylint: disable=invalid-name -from typing import NamedTuple +from typing import NamedTuple, Optional, Tuple import torch from torchopt import pytree from torchopt.base import GradientTransformation, identity -from torchopt.transform.utils import tree_map_flat -from torchopt.typing import Params +from torchopt.transform.utils import tree_map_flat, tree_map_flat_ +from torchopt.typing import OptState, Params, Updates __all__ = ['trace'] @@ -110,9 +110,9 @@ def _trace( already_flattened: bool = False, ) -> GradientTransformation: # pylint: disable=unneeded-not - if not 0.0 <= momentum: + if not 0.0 <= momentum: # pragma: no cover raise ValueError(f'Invalid momentum value: {momentum}') - if nesterov and (momentum <= 0.0 or dampening != 0.0): + if nesterov and (momentum <= 0.0 or dampening != 0.0): # pragma: no cover raise ValueError('Nesterov momentum requires a momentum and zero dampening') # pylint: enable=unneeded-not @@ -121,10 +121,12 @@ def _trace( if already_flattened: tree_map = tree_map_flat + tree_map_ = tree_map_flat_ else: tree_map = pytree.tree_map # type: ignore[assignment] + tree_map_ = pytree.tree_map_ # type: ignore[assignment] - def init_fn(params): + def init_fn(params: Params) -> OptState: return TraceState( trace=tree_map( lambda t: torch.zeros_like(t, requires_grad=moment_requires_grad), params @@ -133,7 +135,13 @@ def init_fn(params): first_call = True - def update_fn(updates, state, *, params=None, inplace=True): # pylint: disable=unused-argument + def update_fn( + updates: Updates, + state: OptState, + *, + params: Optional[Params] = None, # pylint: disable=unused-argument + inplace: bool = True, + ) -> Tuple[Updates, OptState]: nonlocal first_call if nesterov: @@ -148,7 +156,8 @@ def f2(g, t): return g.add_(t, alpha=momentum) new_trace = tree_map(f1, updates, state.trace) - updates = tree_map(f2, updates, new_trace) + updates = tree_map_(f2, updates, new_trace) + else: def f1(g, t): @@ -161,19 +170,21 @@ def f2(g, t): new_trace = tree_map(f1, updates, state.trace) updates = tree_map(f2, updates, new_trace) + else: if inplace: def f(g, t): if first_call: - return t.add(g) + return t.add_(g) return t.mul_(momentum).add_(g, alpha=1.0 - dampening) def copy_(g, t): return g.copy_(t) new_trace = tree_map(f, updates, state.trace) - updates = tree_map(copy_, updates, new_trace) + updates = tree_map_(copy_, updates, new_trace) + else: def f(g, t): diff --git a/torchopt/transform/utils.py b/torchopt/transform/utils.py index b3adedc8..a9f02295 100644 --- a/torchopt/transform/utils.py +++ b/torchopt/transform/utils.py @@ -32,7 +32,7 @@ """Utilities for the preset transformations.""" from collections import deque -from typing import Any, Callable, Iterable, List +from typing import Any, Callable, Sequence import torch @@ -46,7 +46,12 @@ INT64_MAX = torch.iinfo(torch.int64).max -def tree_map_flat(func: Callable, *flat_args: Any, none_is_leaf: bool = False) -> List[Any]: +def tree_map_flat( + func: Callable, + flat_arg: Sequence[Any], + *flat_args: Any, + none_is_leaf: bool = False, +) -> Sequence[Any]: """Apply a function to each element of a flattened list.""" if none_is_leaf: fn = func @@ -55,13 +60,16 @@ def tree_map_flat(func: Callable, *flat_args: Any, none_is_leaf: bool = False) - def fn(x, *xs): return func(x, *xs) if x is not None else None - return list(map(fn, *flat_args)) + return flat_arg.__class__(map(fn, flat_arg, *flat_args)) # type: ignore[call-arg] def tree_map_flat_( - func: Callable, flat_arg: Iterable[Any], *flat_args: Any, none_is_leaf: 
bool = False -) -> Iterable[Any]: - """Apply a function to each element of a flattened list.""" + func: Callable, + flat_arg: Sequence[Any], + *flat_args: Any, + none_is_leaf: bool = False, +) -> Sequence[Any]: + """Apply a function to each element of a flattened list and return the original list.""" if none_is_leaf: fn = func else: @@ -80,42 +88,85 @@ def inc_count(updates: Updates, count: TensorTree) -> TensorTree: Returns: A counter incremented by one, or :data:`INT64_MAX` if the maximum precision is reached. """ - return _inc_count(updates=updates, count=count, already_flattened=False) + return _inc_count( + updates=updates, + count=count, + already_flattened=False, + ) def _inc_count_flat(updates: Updates, count: TensorTree) -> TensorTree: - return _inc_count(updates=updates, count=count, already_flattened=True) + return _inc_count( + updates=updates, + count=count, + already_flattened=True, + ) def _inc_count( - updates: Updates, count: TensorTree, *, already_flattened: bool = False + updates: Updates, + count: TensorTree, + *, + already_flattened: bool = False, ) -> TensorTree: def f(c, g): # pylint: disable=invalid-name return c + (c != INT64_MAX).to(torch.int64) if g is not None else c if already_flattened: - return tree_map_flat(f, count, updates) - return pytree.tree_map(f, count, updates) + return tree_map_flat(f, count, updates, none_is_leaf=True) + return pytree.tree_map(f, count, updates, none_is_leaf=True) inc_count.flat = _inc_count_flat # type: ignore[attr-defined] inc_count.impl = _inc_count # type: ignore[attr-defined] -def update_moment(updates, moments, decay, *, order, inplace=True): +def update_moment( + updates: Updates, + moments: TensorTree, + decay: float, + *, + order: int, + inplace: bool = True, +) -> TensorTree: """Compute the exponential moving average of the ``order``-th moment.""" return _update_moment( - updates, moments, decay, order=order, inplace=inplace, already_flattened=False + updates, + moments, + decay, + order=order, + inplace=inplace, + already_flattened=False, ) -def _update_moment_flat(updates, moments, decay, *order, inplace=True): +def _update_moment_flat( + updates: Updates, + moments: TensorTree, + decay: float, + *, + order: int, + inplace: bool = True, +) -> TensorTree: return _update_moment( - updates, moments, decay, order=order, inplace=inplace, already_flattened=True + updates, + moments, + decay, + order=order, + inplace=inplace, + already_flattened=True, ) -def _update_moment(updates, moments, decay, *, order, inplace=True, already_flattened=False): +def _update_moment( + updates: Updates, + moments: TensorTree, + decay: float, + *, + order: int, + inplace: bool = True, + already_flattened=False, +) -> TensorTree: assert order in (1, 2) if inplace: @@ -141,7 +192,7 @@ def f(g, t): return t.mul(decay).add_(g, alpha=1 - decay) if g is not None else t if already_flattened: - return tree_map_flat(f, updates, moments) + return tree_map_flat(f, updates, moments, none_is_leaf=True) return pytree.tree_map(f, updates, moments, none_is_leaf=True) diff --git a/torchopt/typing.py b/torchopt/typing.py index 938f583e..2075dc62 100644 --- a/torchopt/typing.py +++ b/torchopt/typing.py @@ -14,6 +14,7 @@ # ============================================================================== """Typing utilities.""" +import abc from typing import Callable, Dict, List, Optional, Sequence, Tuple, TypeVar, Union from typing_extensions import TypeAlias # Python 3.10+ from typing_extensions import Protocol, runtime_checkable # Python 3.8+ @@ -24,7 +25,6 @@ from 
torch import Tensor from torch.distributions import Distribution from torch.futures import Future -from torch.types import Device from torchopt.base import ( ChainedGradientTransformation, @@ -72,6 +72,8 @@ T = TypeVar('T') +Device: TypeAlias = Union[torch.device, str, int] + Scalar: TypeAlias = Union[float, int, bool] Numeric: TypeAlias = Union[Tensor, Scalar] @@ -100,12 +102,13 @@ Updates: TypeAlias = Params # Gradient updates are of the same type as parameters. OptState: TypeAlias = TensorTree # States are arbitrary nests of `torch.Tensor`. -if rpc.is_available(): +if rpc.is_available(): # pragma: no cover from torch.distributed.rpc import RRef # pylint: disable=ungrouped-imports,unused-import __all__.extend(['RRef']) -else: - RRef = None # type: ignore[misc,assignment] # pylint: disable=invalid-name +else: # pragma: no cover + # pylint: disable-next=invalid-name + RRef = None # type: ignore[misc,assignment] # solver(matvec, b) -> solution LinearSolver: TypeAlias = Callable[[Callable[[TensorTree], TensorTree], TensorTree], TensorTree] @@ -121,12 +124,13 @@ class Samplable(Protocol): # pylint: disable=too-few-public-methods """Abstract protocol class that supports sampling.""" + @abc.abstractmethod def sample( self, sample_shape: Size = Size() # pylint: disable=unused-argument ) -> Union[Tensor, Sequence[Numeric]]: # pylint: disable-next=line-too-long """Generate a sample_shape shaped sample or sample_shape shaped batch of samples if the distribution parameters are batched.""" - raise NotImplementedError + raise NotImplementedError # pragma: no cover Samplable.register(Distribution) diff --git a/torchopt/utils.py b/torchopt/utils.py index c00e6b4f..4deaba8b 100644 --- a/torchopt/utils.py +++ b/torchopt/utils.py @@ -39,7 +39,7 @@ from torchopt.typing import Device, ModuleTensorContainers, OptState, TensorContainer, TensorTree -if TYPE_CHECKING: +if TYPE_CHECKING: # pragma: no cover from torchopt.optim.meta.base import MetaOptimizer @@ -65,7 +65,7 @@ class ModuleState(NamedTuple): CopyMode: TypeAlias = Literal['reference', 'copy', 'deepcopy', 'ref', 'clone', 'deepclone'] -def stop_gradient(target: Union[TensorTree, ModuleState, nn.Module, 'MetaOptimizer']) -> None: +def stop_gradient(target: Union[ModuleState, nn.Module, 'MetaOptimizer', TensorTree]) -> None: """Stop the gradient for the input object. Since a tensor use :attr:`grad_fn` to connect itself with the previous computation graph, the @@ -108,11 +108,11 @@ def extract_state_dict( target: nn.Module, *, by: CopyMode = 'reference', - device: Device = None, + device: Optional[Device] = None, with_buffers: bool = True, enable_visual: bool = False, visual_prefix: str = '', -) -> ModuleState: +) -> ModuleState: # pragma: no cover ... @@ -121,11 +121,11 @@ def extract_state_dict( target: 'MetaOptimizer', *, by: CopyMode = 'reference', - device: Device = None, + device: Optional[Device] = None, with_buffers: bool = True, enable_visual: bool = False, visual_prefix: str = '', -) -> Tuple[OptState, ...]: +) -> Tuple[OptState, ...]: # pragma: no cover ... 
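Illustrative consequences of the typing changes above: `Device` becomes a plain `torch.device` / `str` / `int` alias, and `Samplable` remains satisfied by `torch.distributions.Distribution` (registered as a virtual subclass) while `sample` is now explicitly abstract:

```python
import torch
from torch.distributions import Normal

from torchopt.typing import Samplable

dist = Normal(torch.zeros(2), torch.ones(2))
assert isinstance(dist, Samplable)  # virtual subclass via Samplable.register
draw = dist.sample(torch.Size((4,)))  # shape: (4, 2)
```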
@@ -134,7 +134,7 @@ def extract_state_dict( target: Union[nn.Module, 'MetaOptimizer'], *, by: CopyMode = 'reference', - device: Device = None, + device: Optional[Device] = None, with_buffers: bool = True, detach_buffers: bool = False, enable_visual: bool = False, @@ -191,10 +191,10 @@ def clone(t: torch.Tensor) -> torch.Tensor: def clone_detach_(t: torch.Tensor) -> torch.Tensor: if isinstance(t, nn.Parameter): - return nn.Parameter(t.clone().detach_(), requires_grad=t.requires_grad).to( - device=target_device + return nn.Parameter( + t.clone().to(device=target_device).detach_(), requires_grad=t.requires_grad ) - return t.clone().detach_().to(device=target_device).requires_grad_(t.requires_grad) + return t.clone().to(device=target_device).detach_().requires_grad_(t.requires_grad) else: @@ -367,8 +367,8 @@ def module_clone( *, by: CopyMode = 'reference', detach_buffers: bool = False, - device: Device = None, -) -> nn.Module: + device: Optional[Device] = None, +) -> nn.Module: # pragma: no cover ... @@ -378,8 +378,8 @@ def module_clone( *, by: CopyMode = 'reference', detach_buffers: bool = False, - device: Device = None, -) -> 'MetaOptimizer': + device: Optional[Device] = None, +) -> 'MetaOptimizer': # pragma: no cover ... @@ -389,8 +389,8 @@ def module_clone( *, by: CopyMode = 'reference', detach_buffers: bool = False, - device: Device = None, -) -> TensorTree: + device: Optional[Device] = None, +) -> TensorTree: # pragma: no cover ... @@ -400,7 +400,7 @@ def module_clone( *, by: CopyMode = 'reference', detach_buffers: bool = False, - device: Device = None, + device: Optional[Device] = None, ) -> Union[nn.Module, 'MetaOptimizer', TensorTree]: """Clone a module. @@ -460,10 +460,10 @@ def clone(t: torch.Tensor) -> torch.Tensor: def clone_detach_(t: torch.Tensor) -> torch.Tensor: if isinstance(t, nn.Parameter): - return nn.Parameter(t.clone().detach_(), requires_grad=t.requires_grad).to( - device=target_device + return nn.Parameter( + t.clone().to(device=target_device).detach_(), requires_grad=t.requires_grad ) - return t.clone().detach_().to(device=target_device).requires_grad_(t.requires_grad) + return t.clone().to(device=target_device).detach_().requires_grad_(t.requires_grad) else: @@ -488,9 +488,29 @@ def clone_detach_(t: torch.Tensor) -> torch.Tensor: return pytree.tree_map(replicate, cast(TensorTree, target)) +@overload +def module_detach_(target: ModuleState) -> ModuleState: # pragma: no cover + ... + + +@overload +def module_detach_(target: nn.Module) -> nn.Module: # pragma: no cover + ... + + +@overload +def module_detach_(target: 'MetaOptimizer') -> 'MetaOptimizer': # pragma: no cover + ... + + +@overload +def module_detach_(target: TensorTree) -> TensorTree: # pragma: no cover + ... + + def module_detach_( - target: Union[TensorTree, ModuleState, nn.Module, 'MetaOptimizer'] -) -> Union[TensorTree, ModuleState, nn.Module, 'MetaOptimizer']: + target: Union[ModuleState, nn.Module, 'MetaOptimizer', TensorTree] +) -> Union[ModuleState, nn.Module, 'MetaOptimizer', TensorTree]: """Detach a module from the computation graph. 
Args:
diff --git a/torchopt/visual.py b/torchopt/visual.py
index 83872b8c..e8145240 100644
--- a/torchopt/visual.py
+++ b/torchopt/visual.py
@@ -17,13 +17,11 @@
 # ==============================================================================
 """Computation graph visualization."""
 
-import warnings
 from collections import namedtuple
 from typing import Generator, Iterable, Mapping, Optional, Union, cast
 
 import torch
 from graphviz import Digraph
-from pkg_resources import parse_version
 
 from torchopt.typing import TensorOrTensors
 from torchopt.utils import ModuleState
@@ -113,13 +111,6 @@ def make_dot(
         max_attr_chars: If ``show_attrs`` is :data:`True`, sets max number of characters to display
             for any given attribute.
     """
-    if parse_version(torch.__version__) < parse_version('1.9') and (show_attrs or show_saved):
-        warnings.warn(
-            'make_dot: showing grad_fn attributes and saved variables '
-            'requires PyTorch version >= 1.9. (This does NOT apply to '
-            'saved tensors saved by custom autograd functions.)'
-        )
-
     param_map = {}
 
     if params is not None:

From e2157de631fe71a7473c14ff88d9facc1f9c3d71 Mon Sep 17 00:00:00 2001
From: Xuehai Pan
Date: Wed, 15 Feb 2023 23:53:25 +0800
Subject: [PATCH 23/24] style: use postponed evaluation of annotations and
 update docstring style (#135)

---
 CHANGELOG.md | 1 +
 README.md | 1 -
 docs/source/api/api.rst | 109 ++++++++++++++
 docs/source/distributed/distributed.rst | 7 -
 docs/source/spelling_wordlist.txt | 1 +
 tests/helpers.py | 22 +--
 tests/test_alias.py | 14 +-
 tests/test_implicit.py | 7 +-
 tests/test_meta_optim.py | 4 +-
 tests/test_optim.py | 14 +-
 tests/test_schedule.py | 6 +-
 tests/test_transform.py | 5 -
 torchopt/_C/adam_op.pyi | 10 +-
 torchopt/accelerated_op/__init__.py | 6 +-
 torchopt/accelerated_op/_src/adam_op.py | 10 +-
 torchopt/accelerated_op/adam_op.py | 8 +-
 torchopt/alias/adam.py | 43 +++---
 torchopt/alias/adamw.py | 63 ++++----
 torchopt/alias/rmsprop.py | 41 +++--
 torchopt/alias/sgd.py | 28 ++--
 torchopt/alias/utils.py | 15 +-
 torchopt/base.py | 67 +++++----
 torchopt/clip.py | 19 +--
 torchopt/combine.py | 14 +-
 torchopt/diff/implicit/decorator.py | 81 +++++-----
 torchopt/diff/implicit/nn/module.py | 28 ++--
 torchopt/diff/zero_order/decorator.py | 68 ++++-----
 torchopt/diff/zero_order/nn/module.py | 12 +-
 torchopt/distributed/api.py | 174 +++++++++++-----------
 torchopt/distributed/autograd.py | 50 +++----
 torchopt/distributed/world.py | 17 ++-
 torchopt/hook.py | 10 +-
 torchopt/linalg/cg.py | 68 +++++----
 torchopt/linalg/ns.py | 64 ++++----
 torchopt/linalg/utils.py | 8 +-
 torchopt/linear_solve/cg.py | 24 +--
 torchopt/linear_solve/inv.py | 26 ++--
 torchopt/linear_solve/normal_cg.py | 24 +--
 torchopt/linear_solve/utils.py | 6 +-
 torchopt/nn/module.py | 100 +++++++------
 torchopt/nn/stateless.py | 10 +-
 torchopt/optim/adam.py | 46 +++---
 torchopt/optim/adamw.py | 62 ++++----
 torchopt/optim/base.py | 30 ++--
 torchopt/optim/func/base.py | 28 ++--
 torchopt/optim/meta/adam.py | 46 +++---
 torchopt/optim/meta/adamw.py | 64 ++++----
 torchopt/optim/meta/base.py | 27 ++--
 torchopt/optim/meta/rmsprop.py | 44 +++---
 torchopt/optim/meta/sgd.py | 31 ++--
 torchopt/optim/rmsprop.py | 45 +++---
 torchopt/optim/sgd.py | 29 ++--
 torchopt/pytree.py | 25 ++--
 torchopt/schedule/polynomial.py | 23 ++-
 torchopt/transform/add_decayed_weights.py | 46 +++---
 torchopt/transform/nan_to_num.py | 10 +-
 torchopt/transform/scale.py | 8 +-
 torchopt/transform/scale_by_adam.py | 56 +++----
 torchopt/transform/scale_by_rms.py | 19 +--
 torchopt/transform/scale_by_schedule.py | 13 +-
torchopt/transform/scale_by_stddev.py | 19 +-
 torchopt/transform/trace.py | 22 +-
 torchopt/transform/utils.py | 2 +
 torchopt/update.py | 10 +-
 torchopt/utils.py | 155 +++++++++----------
 torchopt/visual.py | 41 ++---
 66 files changed, 1165 insertions(+), 1021 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 74d23144..3a3ad174 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -18,6 +18,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ### Changed
 
+- Use postponed evaluation of annotations and update docstring style by [@XuehaiPan](https://github.com/XuehaiPan) in [#135](https://github.com/metaopt/torchopt/pull/135).
 - Rewrite setup CUDA Toolkit logic by [@XuehaiPan](https://github.com/XuehaiPan) in [#133](https://github.com/metaopt/torchopt/pull/133).
 
 ### Fixed
 
diff --git a/README.md b/README.md
index c1fb97ba..321f39e3 100644
--- a/README.md
+++ b/README.md
@@ -14,7 +14,6 @@
 ![CodeCov](https://img.shields.io/codecov/c/gh/metaopt/torchopt)
 ![Documentation Status](https://img.shields.io/readthedocs/torchopt?logo=readthedocs)
 ![Downloads](https://static.pepy.tech/personalized-badge/torchopt?period=total&left_color=grey&right_color=blue&left_text=downloads)
- ![GitHub Repo Stars](https://img.shields.io/github/stars/metaopt/torchopt?color=brightgreen&logo=github)
 ![License](https://img.shields.io/github/license/metaopt/torchopt?label=license&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNCAyNCIgd2lkdGg9IjI0IiBoZWlnaHQ9IjI0IiBmaWxsPSIjZmZmZmZmIj48cGF0aCBmaWxsLXJ1bGU9ImV2ZW5vZGQiIGQ9Ik0xMi43NSAyLjc1YS43NS43NSAwIDAwLTEuNSAwVjQuNUg5LjI3NmExLjc1IDEuNzUgMCAwMC0uOTg1LjMwM0w2LjU5NiA1Ljk1N0EuMjUuMjUgMCAwMTYuNDU1IDZIMi4zNTNhLjc1Ljc1IDAgMTAwIDEuNUgzLjkzTC41NjMgMTUuMThhLjc2Mi43NjIgMCAwMC4yMS44OGMuMDguMDY0LjE2MS4xMjUuMzA5LjIyMS4xODYuMTIxLjQ1Mi4yNzguNzkyLjQzMy42OC4zMTEgMS42NjIuNjIgMi44NzYuNjJhNi45MTkgNi45MTkgMCAwMDIuODc2LS42MmMuMzQtLjE1NS42MDYtLjMxMi43OTItLjQzMy4xNS0uMDk3LjIzLS4xNTguMzEtLjIyM2EuNzUuNzUgMCAwMC4yMDktLjg3OEw1LjU2OSA3LjVoLjg4NmMuMzUxIDAgLjY5NC0uMTA2Ljk4NC0uMzAzbDEuNjk2LTEuMTU0QS4yNS4yNSAwIDAxOS4yNzUgNmgxLjk3NXYxNC41SDYuNzYzYS43NS43NSAwIDAwMCAxLjVoMTAuNDc0YS43NS43NSAwIDAwMC0xLjVIMTIuNzVWNmgxLjk3NGMuMDUgMCAuMS4wMTUuMTQuMDQzbDEuNjk3IDEuMTU0Yy4yOS4xOTcuNjMzLjMwMy45ODQuMzAzaC44ODZsLTMuMzY4IDcuNjhhLjc1Ljc1IDAgMDAuMjMuODk2Yy4wMTIuMDA5IDAgMCAuMDAyIDBhMy4xNTQgMy4xNTQgMCAwMC4zMS4yMDZjLjE4NS4xMTIuNDUuMjU2Ljc5LjRhNy4zNDMgNy4zNDMgMCAwMDIuODU1LjU2OCA3LjM0MyA3LjM0MyAwIDAwMi44NTYtLjU2OWMuMzM4LS4xNDMuNjA0LS4yODcuNzktLjM5OWEzLjUgMy41IDAgMDAuMzEtLjIwNi43NS43NSAwIDAwLjIzLS44OTZMMjAuMDcgNy41aDEuNTc4YS43NS43NSAwIDAwMC0xLjVoLTQuMTAyYS4yNS4yNSAwIDAxLS4xNC0uMDQzbC0xLjY5Ny0xLjE1NGExLjc1IDEuNzUgMCAwMC0uOTg0LS4zMDNIMTIuNzVWMi43NXpNMi4xOTMgMTUuMTk4YTUuNDE4IDUuNDE4IDAgMDAyLjU1Ny42MzUgNS40MTggNS40MTggMCAwMDIuNTU3LS42MzVMNC43NSA5LjM2OGwtMi41NTcgNS44M3ptMTQuNTEtLjAyNGMuMDgyLjA0LjE3NC4wODMuMjc1LjEyNi41My4yMjMgMS4zMDUuNDUgMi4yNzIuNDVhNS44NDYgNS44NDYgMCAwMDIuNTQ3LS41NzZMMTkuMjUgOS4zNjdsLTIuNTQ3IDUuODA3eiI+PC9wYXRoPjwvc3ZnPgo=)
diff --git a/docs/source/api/api.rst b/docs/source/api/api.rst
index c7e04e95..b2866407 100644
--- a/docs/source/api/api.rst
+++ b/docs/source/api/api.rst
@@ -285,6 +285,115 @@ Chain
 .. autofunction:: chain
 
+Distributed Utilities
+=====================
+
+.. currentmodule:: torchopt.distributed
+
+Initialization and Synchronization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. autosummary::
+
+    auto_init_rpc
+    barrier
+
+.. autofunction:: auto_init_rpc
+.. 
autofunction:: barrier + +Process group information +~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. autosummary:: + + get_world_info + get_world_rank + get_rank + get_world_size + get_local_rank + get_local_world_size + get_worker_id + +.. autofunction:: get_world_info +.. autofunction:: get_world_rank +.. autofunction:: get_rank +.. autofunction:: get_world_size +.. autofunction:: get_local_rank +.. autofunction:: get_local_world_size +.. autofunction:: get_worker_id + +Worker selection +~~~~~~~~~~~~~~~~ + +.. autosummary:: + + on_rank + not_on_rank + rank_zero_only + rank_non_zero_only + +.. autofunction:: on_rank +.. autofunction:: not_on_rank +.. autofunction:: rank_zero_only +.. autofunction:: rank_non_zero_only + +Remote Procedure Call (RPC) +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. autosummary:: + + remote_async_call + remote_sync_call + +.. autofunction:: remote_async_call +.. autofunction:: remote_sync_call + +Predefined partitioners and reducers +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. autosummary:: + + dim_partitioner + batch_partitioner + mean_reducer + sum_reducer + +.. autofunction:: dim_partitioner +.. autofunction:: batch_partitioner +.. autofunction:: mean_reducer +.. autofunction:: sum_reducer + +Function parallelization wrappers +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. autosummary:: + + parallelize + parallelize_async + parallelize_sync + +.. autofunction:: parallelize +.. autofunction:: parallelize_async +.. autofunction:: parallelize_sync + +Distributed Autograd +~~~~~~~~~~~~~~~~~~~~ + +.. currentmodule:: torchopt.distributed.autograd + +.. autosummary:: + + context + get_gradients + backward + grad + +.. autofunction:: context +.. autofunction:: get_gradients +.. autofunction:: backward +.. autofunction:: grad + + General Utilities ================= diff --git a/docs/source/distributed/distributed.rst b/docs/source/distributed/distributed.rst index f85eec3f..b6f00951 100644 --- a/docs/source/distributed/distributed.rst +++ b/docs/source/distributed/distributed.rst @@ -142,7 +142,6 @@ Initialization and Synchronization .. autosummary:: - torchopt.distributed.auto_init_rpc torchopt.distributed.barrier @@ -197,7 +196,6 @@ Process group information .. autosummary:: - torchopt.distributed.get_world_info torchopt.distributed.get_world_rank torchopt.distributed.get_rank @@ -228,7 +226,6 @@ Worker selection .. autosummary:: - torchopt.distributed.on_rank torchopt.distributed.not_on_rank torchopt.distributed.rank_zero_only @@ -275,7 +272,6 @@ Remote Procedure Call (RPC) .. autosummary:: - torchopt.distributed.remote_async_call torchopt.distributed.remote_sync_call @@ -354,7 +350,6 @@ Predefined partitioners and reducers .. autosummary:: - torchopt.distributed.dim_partitioner torchopt.distributed.batch_partitioner torchopt.distributed.mean_reducer @@ -439,7 +434,6 @@ Function parallelization wrappers .. autosummary:: - torchopt.distributed.parallelize torchopt.distributed.parallelize_async torchopt.distributed.parallelize_sync @@ -490,7 +484,6 @@ Distributed Autograd .. 
autosummary:: - torchopt.distributed.autograd.context torchopt.distributed.autograd.get_gradients torchopt.distributed.autograd.backward diff --git a/docs/source/spelling_wordlist.txt b/docs/source/spelling_wordlist.txt index 8f9d6895..aac17046 100644 --- a/docs/source/spelling_wordlist.txt +++ b/docs/source/spelling_wordlist.txt @@ -171,3 +171,4 @@ issubclass abc ABCMeta subclasscheck +ctx diff --git a/tests/helpers.py b/tests/helpers.py index 4bba706e..23e178f0 100644 --- a/tests/helpers.py +++ b/tests/helpers.py @@ -13,11 +13,13 @@ # limitations under the License. # ============================================================================== +from __future__ import annotations + import copy import itertools import os import random -from typing import Iterable, Optional, Tuple, Union +from typing import Iterable import numpy as np import pytest @@ -137,7 +139,7 @@ def get_model(): @torch.no_grad() def get_models( device: torch.types.Device = None, dtype: torch.dtype = torch.float32 -) -> Tuple[nn.Module, nn.Module, nn.Module, data.DataLoader]: +) -> tuple[nn.Module, nn.Module, nn.Module, data.DataLoader]: seed_everything(seed=42) model_base = get_model().to(dtype=dtype) @@ -166,12 +168,12 @@ def get_models( @torch.no_grad() def assert_model_all_close( - model: Union[nn.Module, Tuple[Iterable[torch.Tensor], Iterable[torch.Tensor]]], + model: nn.Module | tuple[Iterable[torch.Tensor], Iterable[torch.Tensor]], model_ref: nn.Module, model_base: nn.Module, dtype: torch.dtype = torch.float32, - rtol: Optional[float] = None, - atol: Optional[float] = None, + rtol: float | None = None, + atol: float | None = None, equal_nan: bool = False, ) -> None: if isinstance(model, tuple): @@ -194,8 +196,8 @@ def assert_all_close( actual: torch.Tensor, expected: torch.Tensor, base: torch.Tensor = None, - rtol: Optional[float] = None, - atol: Optional[float] = None, + rtol: float | None = None, + atol: float | None = None, equal_nan: bool = False, ) -> None: if base is not None: @@ -223,9 +225,9 @@ def assert_all_close( def assert_pytree_all_close( actual: TensorTree, expected: TensorTree, - base: Optional[TensorTree] = None, - rtol: Optional[float] = None, - atol: Optional[float] = None, + base: TensorTree | None = None, + rtol: float | None = None, + atol: float | None = None, equal_nan: bool = False, ) -> None: actual_leaves, actual_treespec = pytree.tree_flatten(actual) diff --git a/tests/test_alias.py b/tests/test_alias.py index c613d7d5..b609cf58 100644 --- a/tests/test_alias.py +++ b/tests/test_alias.py @@ -13,7 +13,9 @@ # limitations under the License. 
# ============================================================================== -from typing import Callable, Tuple +from __future__ import annotations + +from typing import Callable import functorch import pytest @@ -107,7 +109,7 @@ def test_sgd( def test_adam( dtype: torch.dtype, lr: float, - betas: Tuple[float, float], + betas: tuple[float, float], eps: float, inplace: bool, weight_decay: float, @@ -177,7 +179,7 @@ def test_maml_adam( outer_lr: float, inner_lr: float, inner_update: int, - betas: Tuple[float, float], + betas: tuple[float, float], eps: float, inplace: bool, weight_decay: float, @@ -263,7 +265,7 @@ def maml_inner_solver_torchopt(params, data, use_accelerated_op): def test_adamw( dtype: torch.dtype, lr: float, - betas: Tuple[float, float], + betas: tuple[float, float], eps: float, inplace: bool, weight_decay: float, @@ -333,8 +335,8 @@ def test_adamw( def test_adam_accelerated_cuda( dtype: torch.dtype, lr: float, - optimizers: Tuple[Callable, torch.optim.Optimizer], - betas: Tuple[float, float], + optimizers: tuple[Callable, torch.optim.Optimizer], + betas: tuple[float, float], eps: float, inplace: bool, weight_decay: float, diff --git a/tests/test_implicit.py b/tests/test_implicit.py index ce0ee23b..9e3722d3 100644 --- a/tests/test_implicit.py +++ b/tests/test_implicit.py @@ -13,10 +13,11 @@ # limitations under the License. # ============================================================================== +from __future__ import annotations + import copy from collections import OrderedDict from types import FunctionType -from typing import Tuple import functorch import jax @@ -55,7 +56,7 @@ def forward(self, x): return self.fc(x) -def get_model_jax(dtype: np.dtype = np.float32) -> Tuple[FunctionType, OrderedDict]: +def get_model_jax(dtype: np.dtype = np.float32) -> tuple[FunctionType, OrderedDict]: helpers.seed_everything(seed=42) def func(params, x): @@ -73,7 +74,7 @@ def func(params, x): @torch.no_grad() def get_model_torch( device: torch.types.Device = None, dtype: torch.dtype = torch.float32 -) -> Tuple[nn.Module, data.DataLoader]: +) -> tuple[nn.Module, data.DataLoader]: helpers.seed_everything(seed=42) model = FcNet(MODEL_NUM_INPUTS, MODEL_NUM_CLASSES).to(dtype=dtype) diff --git a/tests/test_meta_optim.py b/tests/test_meta_optim.py index 2c0966cc..61f8a7ad 100644 --- a/tests/test_meta_optim.py +++ b/tests/test_meta_optim.py @@ -13,7 +13,7 @@ # limitations under the License. # ============================================================================== -from typing import Tuple +from __future__ import annotations import torch import torch.nn.functional as F @@ -40,7 +40,7 @@ def test_maml_meta_adam( outer_lr: float, inner_lr: float, inner_update: int, - betas: Tuple[float, float], + betas: tuple[float, float], eps: float, eps_root: float, weight_decay: float, diff --git a/tests/test_optim.py b/tests/test_optim.py index c43bc438..b2be7500 100644 --- a/tests/test_optim.py +++ b/tests/test_optim.py @@ -13,7 +13,9 @@ # limitations under the License. 
# ============================================================================== -from typing import Callable, Tuple +from __future__ import annotations + +from typing import Callable import functorch import pytest @@ -96,7 +98,7 @@ def test_SGD( def test_Adam( dtype: torch.dtype, lr: float, - betas: Tuple[float, float], + betas: tuple[float, float], eps: float, weight_decay: float, maximize: bool, @@ -154,7 +156,7 @@ def test_Adam( def test_AdamW( dtype: torch.dtype, lr: float, - betas: Tuple[float, float], + betas: tuple[float, float], eps: float, weight_decay: float, maximize: bool, @@ -216,8 +218,8 @@ def test_AdamW( def test_Adam_accelerated_cuda( dtype: torch.dtype, lr: float, - optimizers: Tuple[torchopt.Optimizer, torch.optim.Optimizer], - betas: Tuple[float, float], + optimizers: tuple[torchopt.Optimizer, torch.optim.Optimizer], + betas: tuple[float, float], eps: float, weight_decay: float, maximize: bool, @@ -339,7 +341,7 @@ def test_RMSProp( def test_FuncOptimizer( dtype: torch.dtype, lr: float, - optimizers: Tuple[Callable, torch.optim.Optimizer], + optimizers: tuple[Callable, torch.optim.Optimizer], inplace: bool, weight_decay: float, ) -> None: diff --git a/tests/test_schedule.py b/tests/test_schedule.py index 9590acf8..ae714875 100644 --- a/tests/test_schedule.py +++ b/tests/test_schedule.py @@ -13,7 +13,9 @@ # limitations under the License. # ============================================================================== -from typing import Callable, Tuple +from __future__ import annotations + +from typing import Callable import functorch import numpy as np @@ -62,7 +64,7 @@ def test_lr_linear_schedule( dtype: torch.dtype, lr: float, total_iters: int, - optimizers: Tuple[Callable, torch.optim.Optimizer], + optimizers: tuple[Callable, torch.optim.Optimizer], inplace: bool, weight_decay: float, use_chain_flat: bool, diff --git a/tests/test_transform.py b/tests/test_transform.py index 4dfd034d..9598386d 100644 --- a/tests/test_transform.py +++ b/tests/test_transform.py @@ -13,13 +13,8 @@ # limitations under the License. # ============================================================================== -from typing import Tuple - -import functorch import torch -import torch.nn.functional as F -import helpers import torchopt diff --git a/torchopt/_C/adam_op.pyi b/torchopt/_C/adam_op.pyi index bc3e8ebc..7ecfe7c2 100644 --- a/torchopt/_C/adam_op.pyi +++ b/torchopt/_C/adam_op.pyi @@ -15,7 +15,7 @@ # pylint: disable=all -from typing import Tuple +from __future__ import annotations import torch @@ -28,7 +28,7 @@ def forward_( eps: float, eps_root: float, count: int, -) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: ... +) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]: ... def forward_mu(updates: torch.Tensor, mu: torch.Tensor, b1: float) -> torch.Tensor: ... def forward_nu(updates: torch.Tensor, nu: torch.Tensor, b2: float) -> torch.Tensor: ... def forward_updates( @@ -42,10 +42,10 @@ def forward_updates( ) -> torch.Tensor: ... def backward_mu( dmu: torch.Tensor, updates: torch.Tensor, mu: torch.Tensor, b1: float -) -> Tuple[torch.Tensor, torch.Tensor]: ... +) -> tuple[torch.Tensor, torch.Tensor]: ... def backward_nu( dnu: torch.Tensor, updates: torch.Tensor, nu: torch.Tensor, b2: float -) -> Tuple[torch.Tensor, torch.Tensor]: ... +) -> tuple[torch.Tensor, torch.Tensor]: ... def backward_updates( dupdates: torch.Tensor, updates: torch.Tensor, @@ -55,4 +55,4 @@ def backward_updates( b2: float, eps_root: float, count: int, -) -> Tuple[torch.Tensor, torch.Tensor]: ... 
+) -> tuple[torch.Tensor, torch.Tensor]: ... diff --git a/torchopt/accelerated_op/__init__.py b/torchopt/accelerated_op/__init__.py index 003a8a9f..ede60009 100644 --- a/torchopt/accelerated_op/__init__.py +++ b/torchopt/accelerated_op/__init__.py @@ -14,7 +14,9 @@ # ============================================================================== """The accelerated Ops.""" -from typing import Iterable, Optional, Union +from __future__ import annotations + +from typing import Iterable import torch @@ -22,7 +24,7 @@ from torchopt.typing import Device -def is_available(devices: Optional[Union[Device, Iterable[Device]]] = None) -> bool: +def is_available(devices: Device | Iterable[Device] | None = None) -> bool: """Check the availability of accelerated optimizer.""" op = AdamOp() diff --git a/torchopt/accelerated_op/_src/adam_op.py b/torchopt/accelerated_op/_src/adam_op.py index 9f801b8d..ab5ea195 100644 --- a/torchopt/accelerated_op/_src/adam_op.py +++ b/torchopt/accelerated_op/_src/adam_op.py @@ -16,7 +16,7 @@ # pylint: disable=invalid-name,too-many-arguments,unused-argument -from typing import Tuple +from __future__ import annotations import torch @@ -30,7 +30,7 @@ def forward_( eps: float, eps_root: float, count: int, -) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: +) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]: """Adam forward inplace.""" mu = mu.mul_(b1).add_(updates, alpha=1.0 - b1) nu = nu.mul_(b2).addcmul_(updates, updates, value=1.0 - b2) @@ -80,7 +80,7 @@ def backward_mu( updates: torch.Tensor, mu: torch.Tensor, b1: float, -) -> Tuple[torch.Tensor, torch.Tensor]: +) -> tuple[torch.Tensor, torch.Tensor]: """Adam backward mu.""" dupdates = dmu.mul(1.0 - b1) dmu = dmu.mul(b1) @@ -92,7 +92,7 @@ def backward_nu( updates: torch.Tensor, nu: torch.Tensor, b2: float, -) -> Tuple[torch.Tensor, torch.Tensor]: +) -> tuple[torch.Tensor, torch.Tensor]: """Adam backward nu.""" dupdates = updates.mul(dnu).mul_(2.0 * (1.0 - b2)) dnu = dnu.mul(b2) @@ -108,7 +108,7 @@ def backward_updates( b2: float, eps_root: float, count: int, -) -> Tuple[torch.Tensor, torch.Tensor]: +) -> tuple[torch.Tensor, torch.Tensor]: """Adam backward updates.""" one_minus_pow_b1 = 1.0 - pow(b1, count) inv_one_minus_pow_b2 = 1.0 / (1.0 - pow(b2, count) + eps_root) diff --git a/torchopt/accelerated_op/adam_op.py b/torchopt/accelerated_op/adam_op.py index 6b93bf18..232513d6 100644 --- a/torchopt/accelerated_op/adam_op.py +++ b/torchopt/accelerated_op/adam_op.py @@ -16,8 +16,10 @@ # pylint: disable=c-extension-no-member,invalid-name +from __future__ import annotations + import contextlib -from typing import Any, Optional, Tuple +from typing import Any import torch @@ -132,9 +134,9 @@ def __call__( self, mu: torch.Tensor, nu: torch.Tensor, - updates: Optional[torch.Tensor], + updates: torch.Tensor | None, count: int, - ) -> Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]: + ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor | None]: """Apply the Adam operator.""" if updates is None: return mu, nu, None diff --git a/torchopt/alias/adam.py b/torchopt/alias/adam.py index a7f90a79..08654577 100644 --- a/torchopt/alias/adam.py +++ b/torchopt/alias/adam.py @@ -31,7 +31,7 @@ # ============================================================================== """Preset :class:`GradientTransformation` for the Adam optimizer.""" -from typing import Tuple +from __future__ import annotations from torchopt.alias.utils import ( _get_use_chain_flat, @@ -49,7 +49,7 @@ # pylint: disable-next=too-many-arguments def adam( lr: 
ScalarOrSchedule = 1e-3, - betas: Tuple[float, float] = (0.9, 0.999), + betas: tuple[float, float] = (0.9, 0.999), eps: float = 1e-8, weight_decay: float = 0.0, *, @@ -68,26 +68,25 @@ def adam( - Kingma et al, 2014: https://arxiv.org/abs/1412.6980 Args: - lr: (default: :const:`1e-3`) - This is a fixed global scaling factor. - betas: (default: :const:`(0.9, 0.999)`) - Coefficients used for computing running averages of gradient and its square. - eps: (default: :const:`1e-8`) - A small constant applied to denominator outside of the square root (as in the Adam - paper) to avoid dividing by zero when rescaling. - weight_decay: (default: :const:`0.0`) - Weight decay, add L2 penalty to parameters. - eps_root: (default: :data:`0.0`) - A small constant applied to denominator inside the square root (as in RMSProp), to avoid - dividing by zero when rescaling. This is needed for example when computing - (meta-)gradients through Adam. - moment_requires_grad: (default: :data:`False`) - If :data:`True` the momentums will be created with flag ``requires_grad=True``, this - flag is often used in Meta-Learning algorithms. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. - use_accelerated_op: (default: :data:`False`) - If :data:`True` use our implemented fused operator. + lr (float or callable, optional): This is a fixed global scaling factor or a learning rate + scheduler. (default: :const:`1e-3`) + betas (tuple of float, optional): Coefficients used for computing running averages of + gradient and its square. (default: :const:`(0.9, 0.999)`) + eps (float, optional): A small constant applied to denominator outside of the square root + (as in the Adam paper) to avoid dividing by zero when rescaling. + (default: :const:`1e-8`) + weight_decay (float, optional): Weight decay, add L2 penalty to parameters. + (default: :const:`0.0`) + eps_root (float, optional): A small constant applied to denominator inside the square root + (as in RMSProp), to avoid dividing by zero when rescaling. This is needed for example + when computing (meta-)gradients through Adam. (default: :const:`0.0`) + moment_requires_grad (bool, optional): If :data:`True` the momentums will be created with + flag ``requires_grad=True``, this flag is often used in Meta-Learning algorithms. + (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of minimizing. + (default: :data:`False`) + use_accelerated_op (bool, optional): If :data:`True` use our implemented fused operator. + (default: :data:`False`) Returns: The corresponding :class:`GradientTransformation` instance. 
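
To illustrate the alias documented above: a minimal sketch, assuming a toy quadratic objective and a one-tensor parameter tuple (both illustrative, not from the patch), of driving the returned `GradientTransformation` by hand with `torchopt.apply_updates`:

```python
# Minimal sketch of the functional `torchopt.adam` alias as an
# `(init, update)` pair. The parameters and loss are toy assumptions.
import torch
import torchopt

params = (torch.ones(3, requires_grad=True),)
optimizer = torchopt.adam(lr=1e-2, betas=(0.9, 0.999), eps=1e-8)
opt_state = optimizer.init(params)  # initialize optimizer state for the parameter pytree

for _ in range(5):
    loss = (params[0] ** 2).sum()  # toy objective
    grads = torch.autograd.grad(loss, params)
    # `inplace=False` keeps the step differentiable (the use case behind
    # `moment_requires_grad=True` in meta-learning loops); plain training
    # would pass `inplace=True`.
    updates, opt_state = optimizer.update(grads, opt_state, inplace=False)
    params = torchopt.apply_updates(params, updates, inplace=False)
```

With `inplace=False` each step stays on the autograd graph, so the whole unrolled loop can be differentiated end-to-end, which is what the meta-optimizer wrappers automate.
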
diff --git a/torchopt/alias/adamw.py b/torchopt/alias/adamw.py index 9aecc8ee..21ef84ef 100644 --- a/torchopt/alias/adamw.py +++ b/torchopt/alias/adamw.py @@ -31,7 +31,9 @@ # ============================================================================== """Preset :class:`GradientTransformation` for the AdamW optimizer.""" -from typing import Any, Callable, Optional, Tuple, Union +from __future__ import annotations + +from typing import Callable from torchopt.alias.utils import ( _get_use_chain_flat, @@ -40,7 +42,7 @@ ) from torchopt.combine import chain from torchopt.transform import add_decayed_weights, scale_by_accelerated_adam, scale_by_adam -from torchopt.typing import GradientTransformation, Params, ScalarOrSchedule +from torchopt.typing import GradientTransformation, OptState, Params, ScalarOrSchedule __all__ = ['adamw'] @@ -49,12 +51,12 @@ # pylint: disable-next=too-many-arguments,too-many-locals def adamw( lr: ScalarOrSchedule = 1e-3, - betas: Tuple[float, float] = (0.9, 0.999), + betas: tuple[float, float] = (0.9, 0.999), eps: float = 1e-8, weight_decay: float = 1e-2, *, eps_root: float = 0.0, - mask: Optional[Union[Any, Callable[[Params], Any]]] = None, + mask: OptState | Callable[[Params], OptState] | None = None, moment_requires_grad: bool = False, maximize: bool = False, use_accelerated_op: bool = False, @@ -70,35 +72,34 @@ def adamw( - Loshchilov et al, 2019: https://arxiv.org/abs/1711.05101 Args: - lr: (default: :const:`1e-3`) - This is a fixed global scaling factor. - betas: (default: :const:`(0.9, 0.999)`) - Coefficients used for computing running averages of gradient and its square. - eps: (default: :const:`1e-8`) - A small constant applied to denominator outside of the square root (as in the Adam - paper) to avoid dividing by zero when rescaling. - weight_decay: (default: :const:`1e-2`) - Strength of the weight decay regularization. Note that this weight decay is multiplied - with the learning rate. This is consistent with other frameworks such as PyTorch, but - different from (Loshchilov et al, 2019) where the weight decay is only multiplied with - the "schedule multiplier", but not the base learning rate. - eps_root: (default: :data:`0.0`) - A small constant applied to denominator inside the square root (as in RMSProp), to avoid - dividing by zero when rescaling. This is needed for example when computing - (meta-)gradients through Adam. - mask: (default: :data:`None`) - A tree with same structure as (or a prefix of) the params PyTree, or a Callable that + lr (float or callable, optional): This is a fixed global scaling factor or a learning rate + scheduler. (default: :const:`1e-3`) + betas (tuple of float, optional): Coefficients used for computing running averages of + gradient and its square. (default: :const:`(0.9, 0.999)`) + eps (float, optional): A small constant applied to denominator outside of the square root + (as in the Adam paper) to avoid dividing by zero when rescaling. + (default: :const:`1e-8`) + weight_decay (float, optional): Strength of the weight decay regularization. Note that this + weight decay is multiplied with the learning rate. This is consistent with other + frameworks such as PyTorch, but different from (Loshchilov et al, 2019) where the weight + decay is only multiplied with the "schedule multiplier", but not the base learning rate. + (default: :const:`1e-2`) + eps_root (float, optional): A small constant applied to denominator inside the square root + (as in RMSProp), to avoid dividing by zero when rescaling. 
This is needed for example + when computing (meta-)gradients through Adam. (default: :const:`0.0`) + mask (tree of Tensor, callable, or None, optional): + A tree with same structure as (or a prefix of) the params pytree, or a function that returns such a pytree given the params/updates. The leaves should be booleans, :data:`True` for leaves/subtrees you want to apply the weight decay to, and - :data:`False` for those you want to skip. Note that the Adam gradient - transformations are applied to all parameters. - moment_requires_grad: (default: :data:`False`) - If :data:`True` the momentums will be created with flag ``requires_grad=True``, this - flag is often used in Meta-Learning algorithms. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. - use_accelerated_op: (default: :data:`False`) - If :data:`True` use our implemented fused operator. + :data:`False` for those you want to skip. Note that the Adam gradient transformations + are applied to all parameters. (default: :data:`None`) + moment_requires_grad (bool, optional): If :data:`True` the momentums will be created with + flag ``requires_grad=True``, this flag is often used in Meta-Learning algorithms. + (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of + minimizing. (default: :data:`False`) + use_accelerated_op (bool, optional): If :data:`True` use our implemented fused operator. + (default: :data:`False`) Returns: The corresponding :class:`GradientTransformation` instance. diff --git a/torchopt/alias/rmsprop.py b/torchopt/alias/rmsprop.py index 18a5c5e8..f0eb92cd 100644 --- a/torchopt/alias/rmsprop.py +++ b/torchopt/alias/rmsprop.py @@ -69,28 +69,25 @@ def rmsprop( - Graves, 2013: https://arxiv.org/abs/1308.0850 Args: - lr: (default: :const:`1e-2`) - This is a fixed global scaling factor. - alpha: (default: :const:`0.99`) - Smoothing constant, the decay used to track the magnitude of previous gradients. - eps: (default: :const:`1e-8`) - A small numerical constant to avoid dividing by zero when rescaling. - weight_decay: (default: :const:`0.0`) - Weight decay, add L2 penalty to parameters. - momentum: (default: :const:`0.0`) - The decay rate used by the momentum term. The momentum is not used when it is set to - :const:`0.0`. - centered: (default: :data:`False`) - If :data:`True`, use the variance of the past gradients to rescale the latest - gradients. - initial_scale: (default: :data:`0.0`) - Initialization of accumulators tracking the magnitude of previous updates. PyTorch - uses :data:`0.0`, TensorFlow 1.x uses :data:`1.0`. When reproducing results from a - paper, verify the value used by the authors. - nesterov: (default: :data:`False`) - Whether to use Nesterov momentum. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. + lr (float or callable, optional): This is a fixed global scaling factor or a learning rate + scheduler. (default: :const:`1e-2`) + alpha (float, optional): Smoothing constant, the decay used to track the magnitude of + previous gradients. (default: :const:`0.99`) + eps (float, optional): A small numerical constant to avoid dividing by zero when rescaling. + (default: :const:`1e-8`) + weight_decay (float, optional): Weight decay, add L2 penalty to parameters. + (default: :const:`0.0`) + momentum (float, optional): The decay rate used by the momentum term. The momentum is not + used when it is set to :const:`0.0`. 
(default: :const:`0.0`) + centered (bool, optional): If :data:`True`, use the variance of the past gradients to + rescale the latest gradients. (default: :data:`False`) + initial_scale (float, optional): Initialization of accumulators tracking the magnitude of + previous updates. PyTorch uses :data:`0.0`, TensorFlow 1.x uses :data:`1.0`. When + reproducing results from a paper, verify the value used by the authors. + (default: :data:`0.0`) + nesterov (bool, optional): Whether to use Nesterov momentum. (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of + minimizing. (default: :data:`False`) Returns: The corresponding :class:`GradientTransformation` instance. diff --git a/torchopt/alias/sgd.py b/torchopt/alias/sgd.py index 61b3d6e4..7d86b538 100644 --- a/torchopt/alias/sgd.py +++ b/torchopt/alias/sgd.py @@ -64,21 +64,19 @@ def sgd( - Sutskever et al, 2013: http://proceedings.mlr.press/v28/sutskever13.pdf Args: - lr: This is a fixed global scaling factor. - momentum: (default: :const:`0.0`) - The decay rate used by the momentum term. The momentum is not used when it is set to - :const:`0.0`. - weight_decay: (default: :const:`0.0`) - Weight decay, add L2 penalty to parameters. - dampening: (default: :const:`0.0`) - Dampening for momentum. - nesterov: (default: :data:`False`) - Whether to use Nesterov momentum. - moment_requires_grad: (default: :data:`False`) - If :data:`True` the momentums will be created with flag ``requires_grad=True``, this - flag is often used in Meta-Learning algorithms. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. + lr (float or callable): This is a fixed global scaling factor or a learning rate + scheduler. + momentum (float, optional): The decay rate used by the momentum term. The momentum is not + used when it is set to :const:`0.0`. (default: :const:`0.0`) + weight_decay (float, optional): Weight decay, add L2 penalty to parameters. + (default: :const:`0.0`) + dampening (float, optional): Dampening for momentum. (default: :const:`0.0`) + nesterov (bool, optional): Whether to use Nesterov momentum. (default: :data:`False`) + moment_requires_grad (bool, optional): If :data:`True` the momentums will be created with + flag ``requires_grad=True``, this flag is often used in Meta-Learning algorithms. + (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of + minimizing. (default: :data:`False`) Returns: The corresponding :class:`GradientTransformation` instance. diff --git a/torchopt/alias/utils.py b/torchopt/alias/utils.py index 869aad87..b5088164 100644 --- a/torchopt/alias/utils.py +++ b/torchopt/alias/utils.py @@ -13,8 +13,9 @@ # limitations under the License. r"""Utilities for the aliases of preset :class:`GradientTransformation`\s for optimizers.""" +from __future__ import annotations + import threading -from typing import Optional, Tuple from torchopt import pytree from torchopt.base import EmptyState, GradientTransformation, identity @@ -93,9 +94,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, + params: Params | None = None, inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: assert params is not None, ( 'Parameters are required for weight decay. ' 'Call `update(updates, state, params=params)` instead.' 
@@ -126,9 +127,9 @@ def update_fn(
         updates: Updates,
         state: OptState,
         *,
-        params: Optional[Params] = None,  # pylint: disable=unused-argument
+        params: Params | None = None,  # pylint: disable=unused-argument
         inplace: bool = True,
-    ) -> Tuple[Updates, OptState]:
+    ) -> tuple[Updates, OptState]:
         if inplace:
 
             def f(g):
@@ -151,9 +152,9 @@ def update_fn(
         updates: Updates,
         state: OptState,
         *,
-        params: Optional[Params] = None,
+        params: Params | None = None,
         inplace: bool = True,
-    ) -> Tuple[Updates, OptState]:
+    ) -> tuple[Updates, OptState]:
         assert params is not None, (
             'Parameters are required for weight decay. '
             'Call `update(updates, state, params=params)` instead.'
diff --git a/torchopt/base.py b/torchopt/base.py
index bb37b147..b250c387 100644
--- a/torchopt/base.py
+++ b/torchopt/base.py
@@ -31,9 +31,11 @@
 # ==============================================================================
 """The base classes for gradient transformation."""
 
+from __future__ import annotations
+
 import itertools
 from abc import abstractmethod
-from typing import TYPE_CHECKING, Callable, NamedTuple, Optional, Tuple
+from typing import TYPE_CHECKING, Callable, NamedTuple
 
 from typing_extensions import Protocol  # Python 3.8+
 
@@ -67,12 +69,11 @@ class TransformInitFn(Protocol):  # pylint: disable=too-few-public-methods
     """
 
     @abstractmethod
-    def __call__(self, params: 'Params') -> 'OptState':
+    def __call__(self, params: Params) -> OptState:
         """Initialize the gradient transformation state.
 
         Args:
-            params:
-                The initial value of the parameters.
+            params (tree of Tensor): The initial value of the parameters.
 
         Returns:
             The initial state of the gradient transformation.
@@ -93,21 +94,21 @@ class TransformUpdateFn(Protocol):  # pylint: disable=too-few-public-methods
 
     @abstractmethod
     def __call__(
         self,
-        updates: 'Updates',
-        state: 'OptState',
+        updates: Updates,
+        state: OptState,
         *,
-        params: Optional['Params'] = None,
+        params: Params | None = None,
         inplace: bool = True,
-    ) -> Tuple['Updates', 'OptState']:
+    ) -> tuple[Updates, OptState]:
         """Transform the updates and state.
 
         Args:
-            updates: A tree of candidate updates.
-            state: The state of the gradient transformation.
-            params: (optional)
-                The current value of the parameters.
-            inplace: (optional)
-                If :data:`True`, modify updates and state using inplace operations.
+            updates (tree of Tensor): A tree of candidate updates.
+            state (tree of Tensor): The state of the gradient transformation.
+            params (tree of Tensor or None, optional): The current value of the parameters.
+                (default: :data:`None`)
+            inplace (bool, optional): If :data:`True`, modify updates and state using inplace
+                operations. (default: :data:`True`)
 
         Returns:
             The transformed ``updates``, and the updated ``state``.
@@ -134,9 +135,9 @@ class GradientTransformation(NamedTuple):
             optimizer state.
         update:
             A pure function which takes as input a pytree of updates (with the same tree structure
-            as the original params ``pytree`` passed to :attr:`init`), the previous optimizer state
-            (which may have been initialized using the :attr:`init` function), and optionally the
-            ``inplace`` flag. The :attr:`update` function then returns the computed gradient
+            as the original params ``pytree`` passed to ``init``), the previous optimizer state
+            (which may have been initialized using the ``init`` function), and optionally the
+            ``inplace`` flag. The ``update`` function then returns the computed gradient
             updates, and an updated optimizer state.
If the ``inplace`` flag is :data:`True`, the output results are the same instance as the input. """ @@ -145,7 +146,7 @@ class GradientTransformation(NamedTuple): update: TransformUpdateFn # pylint: disable-next=redefined-builtin - def chain(self, next: 'GradientTransformation') -> 'ChainedGradientTransformation': + def chain(self, next: GradientTransformation) -> ChainedGradientTransformation: """Chain two gradient transformations together.""" return ChainedGradientTransformation(self, next) @@ -157,9 +158,9 @@ class ChainedGradientTransformation(GradientTransformation): gradient transformations. """ - transformations: Tuple[GradientTransformation, ...] + transformations: tuple[GradientTransformation, ...] - def __new__(cls, *transformations: GradientTransformation) -> 'ChainedGradientTransformation': + def __new__(cls, *transformations: GradientTransformation) -> ChainedGradientTransformation: """Create a new chained gradient transformation.""" transformations = tuple( itertools.chain.from_iterable( @@ -175,16 +176,16 @@ def __new__(cls, *transformations: GradientTransformation) -> 'ChainedGradientTr init_fns, update_fns = tuple(zip(*transformations)) - def init_fn(params: 'Params') -> 'OptState': + def init_fn(params: Params) -> OptState: return tuple(fn(params) for fn in init_fns) def update_fn( - updates: 'Updates', - state: 'OptState', + updates: Updates, + state: OptState, *, - params: Optional['Params'] = None, + params: Params | None = None, inplace: bool = True, - ) -> Tuple['Updates', 'OptState']: + ) -> tuple[Updates, OptState]: if len(update_fns) != len(state): raise ValueError( 'The number of updates and states has to be the same in chain! Make sure you' @@ -219,15 +220,15 @@ def __hash__(self) -> int: """Return the hash of the chained gradient transformation.""" return hash(self.transformations) - def __getstate__(self) -> Tuple[GradientTransformation, ...]: + def __getstate__(self) -> tuple[GradientTransformation, ...]: """Return the state of the chained gradient transformation for serialization.""" return self.transformations - def __setstate__(self, state: Tuple[GradientTransformation, ...]) -> None: + def __setstate__(self, state: tuple[GradientTransformation, ...]) -> None: """Set the state of the chained gradient transformation from serialization.""" self.transformations = state - def __reduce__(self) -> Tuple[Callable, Tuple[Tuple[GradientTransformation, ...]]]: + def __reduce__(self) -> tuple[Callable, tuple[tuple[GradientTransformation, ...]]]: """Serialize the chained gradient transformation.""" return ChainedGradientTransformation, (self.transformations,) @@ -240,18 +241,18 @@ def __new__(cls): return super().__new__(cls, init=cls.init_fn, update=cls.update_fn) @staticmethod - def init_fn(params: 'Params') -> 'OptState': # pylint: disable=unused-argument + def init_fn(params: Params) -> OptState: # pylint: disable=unused-argument """Return empty state.""" return EmptyState() @staticmethod def update_fn( - updates: 'Updates', - state: 'OptState', + updates: Updates, + state: OptState, *, - params: Optional['Params'] = None, # pylint: disable=unused-argument + params: Params | None = None, # pylint: disable=unused-argument inplace: bool = True, # pylint: disable=unused-argument - ) -> Tuple['Updates', 'OptState']: + ) -> tuple[Updates, OptState]: """Return updates unchanged.""" return updates, state diff --git a/torchopt/clip.py b/torchopt/clip.py index 2469d17a..b2aafb48 100644 --- a/torchopt/clip.py +++ b/torchopt/clip.py @@ -17,7 +17,7 @@ # 
============================================================================== """Utilities for gradient clipping.""" -from typing import Optional, Tuple, Union +from __future__ import annotations import torch @@ -33,18 +33,19 @@ def clip_grad_norm( - max_norm: Union[float, int], - norm_type: Union[float, int] = 2.0, + max_norm: float | int, + norm_type: float | int = 2.0, error_if_nonfinite: bool = False, ) -> GradientTransformation: """Clip gradient norm of an iterable of parameters. Args: max_norm (float or int): The maximum absolute value for each element in the update. - norm_type (float or int): type of the used p-norm. Can be ``'inf'`` for - infinity norm. - error_if_nonfinite (bool): if :data:`True`, an error is thrown if the total norm of the - gradients from :attr:`updates` is ``nan``, ``inf``, or ``-inf``. + norm_type (float or int, optional): Type of the used p-norm. Can be ``'inf'`` for infinity + norm. (default: :const:`2.0`) + error_if_nonfinite (bool, optional): If :data:`True`, an error is thrown if the total norm + of the gradients from ``updates`` is ``nan``, ``inf``, or ``-inf``. + (default: :data:`False`) Returns: An ``(init_fn, update_fn)`` tuple. @@ -57,9 +58,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, # pylint: disable=unused-argument + params: Params | None = None, # pylint: disable=unused-argument inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: available_updates = pytree.tree_leaves(updates) if len(available_updates) == 0: return updates, state diff --git a/torchopt/combine.py b/torchopt/combine.py index 82297426..0f1ed8ec 100644 --- a/torchopt/combine.py +++ b/torchopt/combine.py @@ -31,7 +31,7 @@ # ============================================================================== """Utilities to define a chained transformation.""" -from typing import Optional, Tuple +from __future__ import annotations from torchopt import pytree from torchopt.base import ChainedGradientTransformation, GradientTransformation, identity @@ -49,8 +49,8 @@ def chain(*transformations: GradientTransformation) -> GradientTransformation: :func:`update_fn` which chains the update transformations feeding the appropriate state to each. Args: - *transformations: - A sequence of chainable ``(init_fn, update_fn)`` tuples. + *transformations (iterable of GradientTransformation): A sequence of chainable + ``(init_fn, update_fn)`` tuples. Returns: A single ``(init_fn, update_fn)`` tuple. @@ -66,8 +66,8 @@ def chain_flat(*transformations: GradientTransformation) -> GradientTransformati """Wrap around the inner transformations that manipulate the flattened tree structure (:class:``list``). Args: - *transformations: - A sequence of chainable ``(init_fn, update_fn)`` tuples. + *transformations (iterable of GradientTransformation): A sequence of chainable + ``(init_fn, update_fn)`` tuples. Returns: A single ``(init_fn, update_fn)`` tuple. 
@@ -86,9 +86,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, + params: Params | None = None, inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: flat_updates, treespec = pytree.tree_flatten(updates, none_is_leaf=True) if params is not None: flat_params = pytree.tree_leaves(params, none_is_leaf=True) diff --git a/torchopt/diff/implicit/decorator.py b/torchopt/diff/implicit/decorator.py index 377bc1f4..a5908963 100644 --- a/torchopt/diff/implicit/decorator.py +++ b/torchopt/diff/implicit/decorator.py @@ -16,9 +16,11 @@ # pylint: disable=invalid-name +from __future__ import annotations + import functools import inspect -from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Type, Union +from typing import Any, Callable, Dict, Sequence, Tuple import functorch import torch @@ -47,7 +49,7 @@ def __init__( optimality_fn: Callable[..., TensorOrTensors], solution: TensorOrTensors, output_is_tensor: bool, - argnums: Tuple[int, ...], + argnums: tuple[int, ...], *args: Any, ) -> None: self.optimality_fn = optimality_fn @@ -88,7 +90,7 @@ def _root_vjp( args: Args, grad_outputs: TupleOfTensors, output_is_tensor: bool, - argnums: Tuple[int, ...], + argnums: tuple[int, ...], solve: Callable[..., TensorOrTensors] = linear_solve.solve_normal_cg(), ) -> TupleOfOptionalTensors: if output_is_tensor: @@ -145,14 +147,14 @@ def matvec(u: TupleOfTensors) -> TupleOfTensors: return tuple(true_output) -def _extract_kwargs(kwarg_keys: Sequence[str], flat_args: Tuple[Any, ...]) -> Tuple[Args, KwArgs]: +def _extract_kwargs(kwarg_keys: Sequence[str], flat_args: tuple[Any, ...]) -> tuple[Args, KwArgs]: nargs = len(flat_args) - len(kwarg_keys) args, kwarg_vals = flat_args[:nargs], flat_args[nargs:] kwargs = dict(zip(kwarg_keys, kwarg_vals)) return args, kwargs -def _signature_bind(signature: inspect.Signature, *args: Any, **kwargs: Any) -> Tuple[Args, KwArgs]: +def _signature_bind(signature: inspect.Signature, *args: Any, **kwargs: Any) -> tuple[Args, KwArgs]: bound = signature.bind(*args, **kwargs) bound.apply_defaults() return bound.args, bound.kwargs @@ -160,7 +162,7 @@ def _signature_bind(signature: inspect.Signature, *args: Any, **kwargs: Any) -> def _signature_bind_and_match( signature: inspect.Signature, *args: Any, **kwargs: Any -) -> Tuple[Args, KwArgs, Callable[[Args], Tuple[Args, KwArgs]]]: +) -> tuple[Args, KwArgs, Callable[[Args], tuple[Args, KwArgs]]]: # We want to bind *args and **kwargs based on the provided signature, but also to associate the # resulting positional arguments back. 
To achieve this, we lift arguments to a triple: # @@ -193,13 +195,13 @@ def map_args_back(out_args): def _split_tensor_and_others( - mixed_tuple: Tuple[Any, ...], -) -> Tuple[pytree.PyTreeSpec, Tuple[bool, ...], TupleOfTensors, Tuple[Any, ...]]: - flattened: List[Any] + mixed_tuple: tuple[Any, ...], +) -> tuple[pytree.PyTreeSpec, tuple[bool, ...], TupleOfTensors, tuple[Any, ...]]: + flattened: list[Any] flattened, treespec = pytree.tree_flatten(mixed_tuple, none_is_leaf=True) # type: ignore[arg-type] tensors: ListOfTensors = [] - non_tensors: List[Any] = [] - is_tensor_mask: List[bool] = [] + non_tensors: list[Any] = [] + is_tensor_mask: list[bool] = [] for item in flattened: is_tensor = isinstance(item, torch.Tensor) is_tensor_mask.append(is_tensor) @@ -212,10 +214,10 @@ def _split_tensor_and_others( def _merge_tensor_and_others( treespec: pytree.PyTreeSpec, - is_tensor_mask: Tuple[bool, ...], + is_tensor_mask: tuple[bool, ...], tensors: TupleOfTensors, - non_tensors: Tuple[Any, ...], -) -> Tuple[Any, ...]: + non_tensors: tuple[Any, ...], +) -> tuple[Any, ...]: tensor_counter = 0 non_tensor_counter = 0 results = [] @@ -231,13 +233,13 @@ def _merge_tensor_and_others( # pylint: disable-next=too-many-arguments,too-many-statements def _custom_root( - solver_fn: Callable[..., Union[TensorOrTensors, Tuple[TensorOrTensors, Any]]], + solver_fn: Callable[..., TensorOrTensors | tuple[TensorOrTensors, Any]], optimality_fn: Callable[..., TensorOrTensors], solve: Callable[..., TensorOrTensors], - argnums: Tuple[int, ...], + argnums: tuple[int, ...], has_aux: bool, - reference_signature: Optional[Union[inspect.Signature, Callable]] = None, -) -> Callable[..., Union[TensorOrTensors, Tuple[TensorOrTensors, Any]]]: + reference_signature: inspect.Signature | Callable | None = None, +) -> Callable[..., TensorOrTensors | tuple[TensorOrTensors, Any]]: solver_fn_signature = inspect.signature(solver_fn) if reference_signature is None: @@ -249,16 +251,16 @@ def _custom_root( reference_signature = inspect.signature(fn) def make_custom_vjp_solver_fn( - solver_fn: Callable[..., Union[TensorOrTensors, Tuple[TensorOrTensors, Any]]], + solver_fn: Callable[..., TensorOrTensors | tuple[TensorOrTensors, Any]], kwarg_keys: Sequence[str], - args_signs: Tuple[Tuple[int, int, Optional[Union[Type[tuple], Type[list]]]], ...], - ) -> Type[Function]: + args_signs: tuple[tuple[int, int, type[tuple] | type[list] | None], ...], + ) -> type[Function]: # pylint: disable-next=missing-class-docstring,abstract-method class ImplicitMetaGradient(Function): @staticmethod def forward( # type: ignore[override] # pylint: disable=arguments-differ ctx: Any, *flat_args: Any - ) -> Tuple[Any, ...]: + ) -> tuple[Any, ...]: output, aux, output_is_tensor = None, None, False args = [] @@ -361,12 +363,12 @@ def backward( # pylint: disable=too-many-locals @functools.wraps(solver_fn) def wrapped_solver_fn( *args: Any, **kwargs: Any - ) -> Union[TensorOrTensors, Tuple[TensorOrTensors, Any]]: + ) -> TensorOrTensors | tuple[TensorOrTensors, Any]: args, kwargs = _signature_bind(solver_fn_signature, *args, **kwargs) keys, vals = list(kwargs.keys()), list(kwargs.values()) - args_signs: List[Tuple[int, int, Optional[Union[Type[tuple], Type[list]]]]] = [] - flat_args: List[Any] = [] + args_signs: list[tuple[int, int, type[tuple] | type[list] | None]] = [] + flat_args: list[Any] = [] args_offset = 0 for idx, arg in enumerate(args): if idx in argnums: @@ -410,12 +412,12 @@ def wrapped_solver_fn( def custom_root( optimality_fn: Callable[..., TensorOrTensors], - 
argnums: Union[int, Tuple[int, ...]],
+    argnums: int | tuple[int, ...],
     has_aux: bool = False,
     solve: Callable[..., TensorOrTensors] = linear_solve.solve_normal_cg(),
 ) -> Callable[
-    [Callable[..., Union[TensorOrTensors, Tuple[TensorOrTensors, Any]]]],
-    Callable[..., Union[TensorOrTensors, Tuple[TensorOrTensors, Any]]],
+    [Callable[..., TensorOrTensors | tuple[TensorOrTensors, Any]]],
+    Callable[..., TensorOrTensors | tuple[TensorOrTensors, Any]],
 ]:
     """Return a decorator for adding implicit differentiation to a root solver.
@@ -442,18 +444,17 @@ def solver_fn(params, arg1, arg2, ...):
     **In best practice, the ``optimality_fn`` should have the same signature as ``solver_fn``.**
 
     Args:
-        optimality_fn: (callable)
-            An equation function, ``optimality_fn(params, *args)``. The invariant is
-            ``optimality_fn(solution, *args) == 0`` at the solution / root of ``solution``.
-        argnums: (int or tuple of ints)
-            Specifies arguments to compute gradients with respect to. The ``argnums`` can be an
-            integer or a tuple of integers, which respect to the zero-based indices of the arguments
-            of the ``solver_fn(params, *args)`` function. The argument ``params`` is included
-            for the counting, while it is indexed as ``argnums=0``.
-        has_aux: (default: :data:`False`)
-            Whether the decorated solver function returns auxiliary data.
-        solve: (callable, optional, default: :func:`linear_solve.solve_normal_cg`)
-            a linear solver of the form ``solve(matvec, b)``.
+        optimality_fn (callable): An equation function, ``optimality_fn(params, *args)``. The
+            invariant is ``optimality_fn(solution, *args) == 0`` at the solution / root of
+            ``solution``.
+        argnums (int or tuple of int): Specifies arguments to compute gradients with respect to. The
+            ``argnums`` can be an integer or a tuple of integers, which refer to the zero-based
+            indices of the arguments of the ``solver_fn(params, *args)`` function. The argument
+            ``params`` is included in the counting and is indexed as ``argnums=0``.
+        has_aux (bool, optional): Whether the decorated solver function returns auxiliary data.
+            (default: :data:`False`)
+        solve (callable, optional): A linear solver of the form ``solve(matvec, b)``.
+            (default: :func:`linear_solve.solve_normal_cg`)
 
     Returns:
         A solver function decorator, i.e., ``custom_root(optimality_fn)(solver_fn)``.
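
To make the decorator contract above concrete: a minimal sketch, assuming a closed-form quadratic inner problem (the names `optimality_fn` and `solve_quadratic` are illustrative, not from the patch):

```python
# Implicit differentiation through a trivial inner problem
#   x*(phi) = argmin_x 0.5 * ||x - phi||^2,
# whose optimality (stationarity) condition is x - phi = 0.
import torch

from torchopt.diff.implicit import custom_root


def optimality_fn(x, phi):
    # Gradient of the inner objective w.r.t. x; it vanishes at the solution.
    return x - phi


@custom_root(optimality_fn, argnums=1)  # differentiate w.r.t. `phi` (index 1)
def solve_quadratic(x_init, phi):
    # Closed-form inner solver; the meta-gradient w.r.t. `phi` comes from the
    # implicit function theorem, not from backpropagating through this body.
    return phi.detach().clone()


phi = torch.tensor([1.0, 2.0], requires_grad=True)
x_star = solve_quadratic(torch.zeros(2), phi)
x_star.sum().backward()  # expect phi.grad == torch.ones(2)
```

Because `optimality_fn` and `solve_quadratic` share the `(params, phi)` signature, `argnums=1` denotes `phi` in both, matching the best-practice note in the docstring above.
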
diff --git a/torchopt/diff/implicit/nn/module.py b/torchopt/diff/implicit/nn/module.py index f9bff4de..bbae37c9 100644 --- a/torchopt/diff/implicit/nn/module.py +++ b/torchopt/diff/implicit/nn/module.py @@ -16,10 +16,12 @@ # pylint: disable=redefined-builtin +from __future__ import annotations + import abc import functools import itertools -from typing import Any, Iterable, Optional, Tuple, Type +from typing import Any, Iterable import functorch import torch @@ -38,7 +40,7 @@ def _stateless_objective_fn( __flat_meta_params: TupleOfTensors, __params_names: Iterable[str], __meta_params_names: Iterable[str], - self: 'ImplicitMetaGradientModule', + self: ImplicitMetaGradientModule, *input, **kwargs, ) -> torch.Tensor: @@ -57,7 +59,7 @@ def _stateless_optimality_fn( __flat_meta_params: TupleOfTensors, __params_names: Iterable[str], __meta_params_names: Iterable[str], - self: 'ImplicitMetaGradientModule', + self: ImplicitMetaGradientModule, *input, **kwargs, ) -> TupleOfTensors: @@ -72,8 +74,8 @@ def _stateless_optimality_fn( def make_optimality_from_objective( - cls: Type['ImplicitMetaGradientModule'], -) -> Type['ImplicitMetaGradientModule']: + cls: type[ImplicitMetaGradientModule], +) -> type[ImplicitMetaGradientModule]: """Derives the optimality function of the objective function.""" if ( getattr(cls, 'objective', ImplicitMetaGradientModule.objective) @@ -81,7 +83,7 @@ def make_optimality_from_objective( ): raise TypeError('The objective function is not defined.') - def optimality(self: 'ImplicitMetaGradientModule', *input, **kwargs) -> TupleOfTensors: + def optimality(self: ImplicitMetaGradientModule, *input, **kwargs) -> TupleOfTensors: params_names, flat_params = tuple(zip(*self.named_parameters())) meta_params_names, flat_meta_params = tuple(zip(*self.named_meta_parameters())) @@ -102,8 +104,8 @@ def optimality(self: 'ImplicitMetaGradientModule', *input, **kwargs) -> TupleOfT def enable_implicit_gradients( - cls: Type['ImplicitMetaGradientModule'], -) -> Type['ImplicitMetaGradientModule']: + cls: type[ImplicitMetaGradientModule], +) -> type[ImplicitMetaGradientModule]: """Enable implicit gradients for the :func:`solve` method.""" cls_solve = cls.solve if getattr(cls_solve, '__implicit_gradients_enabled__', False): @@ -122,17 +124,17 @@ def stateless_solver_fn( __params_names: Iterable[str], __meta_params_names: Iterable[str], # pylint: enable=unused-argument - self: 'ImplicitMetaGradientModule', + self: ImplicitMetaGradientModule, *input, **kwargs, - ) -> Tuple[TupleOfTensors, Any]: + ) -> tuple[TupleOfTensors, Any]: """Solve the optimization problem.""" output = cls_solve(self, *input, **kwargs) flat_optimal_params = tuple(p.detach_() for p in self.parameters()) return flat_optimal_params, output @functools.wraps(cls_solve) - def wrapped(self: 'ImplicitMetaGradientModule', *input, **kwargs) -> Any: + def wrapped(self: ImplicitMetaGradientModule, *input, **kwargs) -> Any: """Solve the optimization problem.""" params_names, flat_params = tuple(zip(*self.named_parameters())) meta_params_names, flat_meta_params = tuple(zip(*self.named_meta_parameters())) @@ -159,9 +161,9 @@ class ImplicitMetaGradientModule(MetaGradientModule): _custom_optimality: bool _custom_objective: bool - linear_solve: Optional[LinearSolver] + linear_solve: LinearSolver | None - def __init_subclass__(cls, linear_solve: Optional[LinearSolver] = None) -> None: + def __init_subclass__(cls, linear_solve: LinearSolver | None = None) -> None: """Validate and initialize the subclass.""" super().__init_subclass__() 
cls.linear_solve = linear_solve diff --git a/torchopt/diff/zero_order/decorator.py b/torchopt/diff/zero_order/decorator.py index 80664d8b..43522028 100644 --- a/torchopt/diff/zero_order/decorator.py +++ b/torchopt/diff/zero_order/decorator.py @@ -14,8 +14,10 @@ # ============================================================================== """Zero-Order Gradient Estimation.""" +from __future__ import annotations + import functools -from typing import Any, Callable, List, Sequence, Tuple, Union +from typing import Any, Callable, Sequence from typing_extensions import Literal # Python 3.8+ from typing_extensions import TypeAlias # Python 3.10+ @@ -33,9 +35,7 @@ def __init__(self, sample_fn: SampleFunc) -> None: """Wrap a sample function to make it a :class:`Samplable` object.""" self.sample_fn = sample_fn - def sample( - self, sample_shape: torch.Size = torch.Size() - ) -> Union[torch.Tensor, Sequence[Numeric]]: + def sample(self, sample_shape: torch.Size = torch.Size()) -> torch.Tensor | Sequence[Numeric]: # pylint: disable-next=line-too-long """Generate a sample_shape shaped sample or sample_shape shaped batch of samples if the distribution parameters are batched.""" return self.sample_fn(sample_shape) @@ -44,14 +44,14 @@ def sample( def _zero_order_naive( # pylint: disable=too-many-statements fn: Callable[..., torch.Tensor], distribution: Samplable, - argnums: Tuple[int, ...], + argnums: tuple[int, ...], num_samples: int, sigma: Numeric, ) -> Callable[..., torch.Tensor]: @functools.wraps(fn) def apply(*args: Any) -> torch.Tensor: # pylint: disable=too-many-statements diff_params = [args[argnum] for argnum in argnums] - flat_diff_params: List[Any] + flat_diff_params: list[Any] flat_diff_params, diff_params_treespec = pytree.tree_flatten(diff_params) # type: ignore[arg-type] class ZeroOrder(Function): # pylint: disable=missing-class-docstring,abstract-method @@ -59,7 +59,7 @@ class ZeroOrder(Function): # pylint: disable=missing-class-docstring,abstract-m def forward(ctx: Any, *args: Any, **kwargs: Any) -> torch.Tensor: flat_diff_params = args[:-1] origin_args = list(args[-1][0]) - flat_args: List[Any] + flat_args: list[Any] flat_args, args_treespec = pytree.tree_flatten(origin_args, none_is_leaf=True) # type: ignore[arg-type] ctx.args_treespec = args_treespec @@ -107,7 +107,7 @@ def backward( # pylint: disable=too-many-locals flat_args.append(non_tensors[non_tensors_counter]) non_tensors_counter += 1 - args: List[Any] = pytree.tree_unflatten(ctx.args_treespec, flat_args) # type: ignore[assignment] + args: list[Any] = pytree.tree_unflatten(ctx.args_treespec, flat_args) # type: ignore[assignment] def add_perturbation(tensor, noises): return tensor.add(noises, alpha=sigma) @@ -119,7 +119,7 @@ def add_perturbation(tensor, noises): flat_noisy_params = [ add_perturbation(t, n) for t, n in zip(flat_diff_params, noises) ] - noisy_params: List[Any] = pytree.tree_unflatten( # type: ignore[assignment] + noisy_params: list[Any] = pytree.tree_unflatten( # type: ignore[assignment] diff_params_treespec, flat_noisy_params ) @@ -145,14 +145,14 @@ def add_perturbation(tensor, noises): def _zero_order_forward( # pylint: disable=too-many-statements fn: Callable[..., torch.Tensor], distribution: Samplable, - argnums: Tuple[int, ...], + argnums: tuple[int, ...], num_samples: int, sigma: Numeric, ) -> Callable[..., torch.Tensor]: @functools.wraps(fn) def apply(*args: Any) -> torch.Tensor: # pylint: disable=too-many-statements diff_params = [args[argnum] for argnum in argnums] - flat_diff_params: List[Any] + 
flat_diff_params: list[Any] flat_diff_params, diff_params_treespec = pytree.tree_flatten(diff_params) # type: ignore[arg-type] class ZeroOrder(Function): # pylint: disable=missing-class-docstring,abstract-method @@ -160,7 +160,7 @@ class ZeroOrder(Function): # pylint: disable=missing-class-docstring,abstract-m def forward(ctx: Any, *args: Any, **kwargs: Any) -> torch.Tensor: flat_diff_params = args[:-1] origin_args = list(args[-1][0]) - flat_args: List[Any] + flat_args: list[Any] flat_args, args_treespec = pytree.tree_flatten(origin_args, none_is_leaf=True) # type: ignore[arg-type] ctx.args_treespec = args_treespec @@ -209,7 +209,7 @@ def backward( # pylint: disable=too-many-locals flat_args.append(non_tensors[non_tensors_counter]) non_tensors_counter += 1 - args: List[Any] = pytree.tree_unflatten(ctx.args_treespec, flat_args) # type: ignore[assignment] + args: list[Any] = pytree.tree_unflatten(ctx.args_treespec, flat_args) # type: ignore[assignment] def add_perturbation(tensor, noises): return tensor.add(noises, alpha=sigma) @@ -221,7 +221,7 @@ def add_perturbation(tensor, noises): flat_noisy_params = [ add_perturbation(t, n) for t, n in zip(flat_diff_params, noises) ] - noisy_params: List[Any] = pytree.tree_unflatten( # type: ignore[assignment] + noisy_params: list[Any] = pytree.tree_unflatten( # type: ignore[assignment] diff_params_treespec, flat_noisy_params ) @@ -248,14 +248,14 @@ def add_perturbation(tensor, noises): def _zero_order_antithetic( # pylint: disable=too-many-statements fn: Callable[..., torch.Tensor], distribution: Samplable, - argnums: Tuple[int, ...], + argnums: tuple[int, ...], num_samples: int, sigma: Numeric, ) -> Callable[..., torch.Tensor]: @functools.wraps(fn) def apply(*args: Any) -> torch.Tensor: # pylint: disable=too-many-statements diff_params = [args[argnum] for argnum in argnums] - flat_diff_params: List[Any] + flat_diff_params: list[Any] flat_diff_params, diff_params_treespec = pytree.tree_flatten(diff_params) # type: ignore[arg-type] class ZeroOrder(Function): # pylint: disable=missing-class-docstring,abstract-method @@ -263,7 +263,7 @@ class ZeroOrder(Function): # pylint: disable=missing-class-docstring,abstract-m def forward(ctx: Any, *args: Any, **kwargs: Any) -> torch.Tensor: flat_diff_params = args[:-1] origin_args = list(args[-1][0]) - flat_args: List[Any] + flat_args: list[Any] flat_args, args_treespec = pytree.tree_flatten(origin_args, none_is_leaf=True) # type: ignore[arg-type] ctx.args_treespec = args_treespec @@ -309,7 +309,7 @@ def backward(ctx: Any, *grad_outputs: Any): # pylint: disable=too-many-locals flat_args.append(non_tensors[non_tensors_counter]) non_tensors_counter += 1 - args: List[Any] = pytree.tree_unflatten(ctx.args_treespec, flat_args) # type: ignore[assignment] + args: list[Any] = pytree.tree_unflatten(ctx.args_treespec, flat_args) # type: ignore[assignment] param_grads: ListOfTensors = [0.0 for _ in range(len(flat_diff_params))] # type: ignore[misc] @@ -318,7 +318,7 @@ def get_output(add_perturbation_fn, noises) -> torch.Tensor: add_perturbation_fn(t, n, alpha=sigma) for t, n in zip(flat_diff_params, noises) ] - noisy_params: List[Any] = pytree.tree_unflatten( # type: ignore[assignment] + noisy_params: list[Any] = pytree.tree_unflatten( # type: ignore[assignment] diff_params_treespec, flat_noisy_params ) @@ -349,28 +349,28 @@ def get_output(add_perturbation_fn, noises) -> torch.Tensor: def zero_order( - distribution: Union[SampleFunc, Samplable], + distribution: SampleFunc | Samplable, method: Method = 'naive', - argnums: 
Union[int, Tuple[int, ...]] = (0,), + argnums: int | tuple[int, ...] = (0,), num_samples: int = 1, sigma: Numeric = 1.0, ) -> Callable[[Callable[..., torch.Tensor]], Callable[..., torch.Tensor]]: """Return a decorator for applying zero-order differentiation. Args: - distribution: (function or Samplable) - A samplable object that has method ``samplable.sample(sample_shape)`` or a function that - takes the shape as input and returns a shaped batch of samples. This is used to sample - perturbations from the given distribution. The distribution should be sphere symmetric. - method: (str) - The algorithm to use. The currently supported algorithms are :const:`'naive'`, - :const:`'forward'`, and :const:`'antithetic'`. Defaults to :const:`'naive'`. - argnums: (int or tuple of int, default: :const:`0`) - Specifies arguments to compute gradients with respect to. - num_samples: (int, default :const:`1`) - The number of sample to get the averaged estimated gradient. - sigma: (Numeric) - The standard deviation of the perturbation. Defaults to :const:`1.0`. + distribution (callable or Samplable): A samplable object that has method + ``samplable.sample(sample_shape)`` or a function that takes the shape as input and + returns a shaped batch of samples. This is used to sample perturbations from the given + distribution. The distribution should be spherically symmetric. + method (str, optional): The algorithm to use. The currently supported algorithms are + :const:`'naive'`, :const:`'forward'`, and :const:`'antithetic'`. + (default: :const:`'naive'`) + argnums (int or tuple of int, optional): Specifies arguments to compute gradients with + respect to. (default: :const:`0`) + num_samples (int, optional): The number of samples used to estimate the averaged gradient. + (default: :const:`1`) + sigma (float or Tensor, optional): The standard deviation of the perturbation. + (default: :const:`1.0`) Returns: A function decorator that enables zero-order gradient estimation.
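A minimal sketch of the decorator described above, assuming a standard normal perturbation distribution; the objective and tensor shapes are illustrative only, and ``zero_order`` is imported from the module touched by this hunk:

```python
import torch
from torchopt.diff.zero_order.decorator import zero_order

distribution = torch.distributions.Normal(loc=0.0, scale=1.0)


@zero_order(distribution, method='naive', argnums=0, num_samples=100, sigma=0.01)
def forward(params, batch):
    # Treated as a black box: gradients w.r.t. `params` (argnums=0) are
    # estimated from random perturbations rather than by backpropagation.
    return ((batch @ params) ** 2).mean()


params = torch.randn(4, requires_grad=True)
batch = torch.randn(8, 4)
forward(params, batch).backward()  # params.grad holds the zero-order estimate
```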
diff --git a/torchopt/diff/zero_order/nn/module.py b/torchopt/diff/zero_order/nn/module.py index d76ac444..65014fb9 100644 --- a/torchopt/diff/zero_order/nn/module.py +++ b/torchopt/diff/zero_order/nn/module.py @@ -16,9 +16,11 @@ # pylint: disable=redefined-builtin +from __future__ import annotations + import abc import functools -from typing import Sequence, Type, Union +from typing import Sequence import torch import torch.nn as nn @@ -32,11 +34,11 @@ def enable_zero_order_gradients( - cls: Type['ZeroOrderGradientModule'], + cls: type[ZeroOrderGradientModule], method: Method = 'naive', num_samples: int = 1, sigma: Numeric = 1.0, -) -> Type['ZeroOrderGradientModule']: +) -> type[ZeroOrderGradientModule]: """Enable zero-order gradient estimation for the :func:`forward` method.""" cls_forward = cls.forward if getattr(cls_forward, '__zero_order_gradients_enabled__', False): raise TypeError( ) @functools.wraps(cls_forward) - def wrapped(self: 'ZeroOrderGradientModule', *input, **kwargs) -> torch.Tensor: + def wrapped(self: ZeroOrderGradientModule, *input, **kwargs) -> torch.Tensor: """Do the forward pass calculation.""" params_names, flat_params = tuple(zip(*self.named_parameters())) @@ -91,7 +93,7 @@ def forward(self, *args, **kwargs) -> torch.Tensor: @abc.abstractmethod def sample( self, sample_shape: torch.Size = torch.Size() # pylint: disable=unused-argument - ) -> Union[torch.Tensor, Sequence[Numeric]]: + ) -> torch.Tensor | Sequence[Numeric]: # pylint: disable-next=line-too-long """Generate a sample_shape shaped sample or sample_shape shaped batch of samples if the distribution parameters are batched.""" raise NotImplementedError diff --git a/torchopt/distributed/api.py b/torchopt/distributed/api.py index 53f87fba..b46ad67e 100644 --- a/torchopt/distributed/api.py +++ b/torchopt/distributed/api.py @@ -14,6 +14,8 @@ # ============================================================================== """Distributed APIs.""" +from __future__ import annotations + import functools import sys from typing import ( @@ -73,8 +75,8 @@ class TensorDimensionPartitioner: while the non-tensor values will be broadcasted to partitions. Args: - dim: The dimension to partition. - exclusive: Whether to partition the batch exclusively. + dim (int): The dimension to partition. + exclusive (bool, optional): Whether to partition the batch exclusively. (default: :data:`False`) If :data:`True`, the batch will be partitioned into ``batch_size`` partitions, where ``batch_size`` is the size of the batch along the given dimension. Each batch sample will be assigned to a separate RPC call. partitions, where ``num_workers`` is the number of workers in the world. When ``batch_size > num_workers``, there can be multiple batch samples forward in a single RPC call. - keepdim: Whether to keep the partitioned dimension. Defaults to :data:`True`, i.e., keep the - batch dimension. If :data:`False`, use select instead of slicing. This functionality - should be used with ``exclusive=True``. - workers: The workers to partition the batch to. If :data:`None`, the batch will be - partitioned to all workers in the world. + keepdim (bool, optional): Whether to keep the partitioned dimension. (default: :data:`False`) + If :data:`True`, keep the batch dimension. If :data:`False`, use select instead of + slicing. This functionality should be used with ``exclusive=True``.
+ workers (sequence of int or str, or None, optional): The workers to partition the batch to. + If :data:`None`, the batch will be partitioned to all workers in the world. + (default: :data:`None`) """ def __init__( @@ -95,7 +98,7 @@ def __init__( *, exclusive: bool = False, keepdim: bool = False, - workers: Optional[Sequence[Union[int, str]]] = None, + workers: Sequence[int | str] | None = None, ) -> None: """Initialize the partitioner instance.""" if not keepdim and not exclusive: @@ -111,7 +114,7 @@ def __call__( self, *args: Any, **kwargs: Any, - ) -> List[Tuple[int, Optional[Args], Optional[KwArgs]]]: + ) -> list[tuple[int, Args | None, KwArgs | None]]: """Partition the batch of inputs along the given dimension.""" if self.workers is None: workers = list(range(get_world_size())) @@ -120,7 +123,7 @@ def __call__( num_workers = len(workers) args_tree = (args, kwargs) - flat_args: List[Any] + flat_args: list[Any] flat_args, treespec = pytree.tree_flatten(args_tree) # type: ignore[arg-type] batch_size = None @@ -137,8 +140,8 @@ def __call__( if batch_size is None: return [(get_world_rank(), args, kwargs.copy())] - dim_slices: List[Union[int, slice]] - batch_slices: List[Tuple[Union[int, slice, Ellipsis.__class__], ...]] # type: ignore[name-defined] + dim_slices: list[int | slice] + batch_slices: list[tuple[int | slice | Ellipsis.__class__, ...]] # type: ignore[name-defined] if self.exclusive: num_replicas = batch_size if self.keepdim: @@ -172,7 +175,7 @@ def __call__( for dim_slice in dim_slices ] - flat_args_replicas: List[List[Any]] = [[] for _ in range(num_replicas)] + flat_args_replicas: list[list[Any]] = [[] for _ in range(num_replicas)] for arg in flat_args: if isinstance(arg, torch.Tensor): for i, batch_slice in enumerate(batch_slices): @@ -181,7 +184,7 @@ def __call__( for i in range(num_replicas): flat_args_replicas[i].append(arg) - args_replicas: List[Tuple[Args, KwArgs]] = [ + args_replicas: list[tuple[Args, KwArgs]] = [ pytree.tree_unflatten(treespec, args_replica) # type: ignore[misc] for args_replica in flat_args_replicas ] @@ -193,10 +196,10 @@ def __call__( def __reduce__( self, - ) -> Tuple[ - Callable[..., 'TensorDimensionPartitioner'], - Tuple[int], - Dict[str, Union[bool, Optional[Sequence[Union[int, str]]]]], + ) -> tuple[ + Callable[..., TensorDimensionPartitioner], + tuple[int], + dict[str, bool | Sequence[int | str] | None], ]: """Return a tuple that allows the partitioner to be pickled.""" return ( @@ -211,7 +214,7 @@ def dim_partitioner( *, exclusive: bool = False, keepdim: bool = True, - workers: Optional[Sequence[Union[int, str]]] = None, + workers: Sequence[int | str] | None = None, ) -> PartitionFunction: """Partition a batch of inputs along a given dimension. @@ -219,8 +222,8 @@ def dim_partitioner( while the non-tensor values will be broadcasted to partitions. Args: - dim: The dimension to partition. - exclusive: Whether to partition the batch exclusively. + dim (int, optional): The dimension to partition. (default: :const:`0`) + exclusive (bool, optional): Whether to partition the batch exclusively. (default: :data:`False`) If :data:`True`, the batch will be partitioned into ``batch_size`` partitions, where ``batch_size`` is the size of the batch along the given dimension. Each batch sample will be assigned to a separate RPC call. @@ -228,11 +231,12 @@ def dim_partitioner( partitions, where ``num_workers`` is the number of workers in the world. When ``batch_size > num_workers``, there can be multiple batch samples forward in a single RPC call. 
- keepdim: Whether to keep the partitioned dimension. Defaults to :data:`True`, i.e., keep the - batch dimension. If :data:`False`, use select instead of slicing. This functionality - should be used with ``exclusive=True``. - workers: The workers to partition the batch to. If :data:`None`, the batch will be - partitioned to all workers in the world. + keepdim (bool, optional): Whether to keep the partitioned dimension. (default: :data:`True`) + If :data:`True`, keep the batch dimension. If :data:`False`, use select instead of + slicing. This functionality should be used with ``exclusive=True``. + workers (sequence of int or str, or None, optional): The workers to partition the batch to. + If :data:`None`, the batch will be partitioned to all workers in the world. + (default: :data:`None`) Returns: A partition function. @@ -273,26 +277,26 @@ def sum_reducer(results: Iterable[torch.Tensor]) -> torch.Tensor: def remote_async_call( func: Callable[..., T], *, - args: Optional[Args] = None, - kwargs: Optional[KwArgs] = None, - partitioner: Optional[Partitioner] = None, - reducer: Optional[Callable[[Iterable[T]], U]] = None, - timeout: Optional[float] = UNSET_RPC_TIMEOUT, -) -> Union[Future[List[T]], Future[U]]: + args: Args | None = None, + kwargs: KwArgs | None = None, + partitioner: Partitioner | None = None, + reducer: Callable[[Iterable[T]], U] | None = None, + timeout: float | None = UNSET_RPC_TIMEOUT, +) -> Future[list[T]] | Future[U]: """Asynchronously do an RPC on remote workers and return a :class:`torch.Future` instance at the current worker. Args: - func (Callable[..., T]): The function to call. - args (Optional[Args], optional): The arguments to pass to the function. Defaults to - :data:`None`. - kwargs (Optional[KwArgs], optional): The keyword arguments to pass to the function. Defaults - to :data:`None`. - partitioner (Partitioner, optional): A partitioner that partitions the arguments to multiple - workers. Defaults to :func:`batch_partitioner`. - reducer (Callable[[Iterable[T]], U], optional): A reducer that reduces the results from - multiple workers. Defaults to :data:`None`. - timeout (float, optional): The timeout for the RPC call. Defaults to - :data:`rpc.api.UNSET_RPC_TIMEOUT`. + func (callable): The function to call. + args (tuple of object or None, optional): The arguments to pass to the function. + (default: :data:`None`) + kwargs (dict[str, object] or None, optional): The keyword arguments to pass to the function. + (default: :data:`None`) + partitioner (int, str, or callable, optional): A partitioner that partitions the arguments + to multiple workers. (default: :func:`batch_partitioner`) + reducer (callable or None, optional): A reducer that reduces the results from multiple + workers. If :data:`None`, do not reduce the results. (default: :data:`None`) + timeout (float, optional): The timeout for the RPC call. + (default: :data:`rpc.api.UNSET_RPC_TIMEOUT`) Returns: A :class:`torch.Future` instance for the result. The result is at the current worker.
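To make the partitioning semantics concrete, here is a hedged sketch of a direct asynchronous call (it assumes an RPC worker group has already been initialized, e.g. via ``torchopt.distributed.auto_init_rpc``; the function and tensor shapes are illustrative only):

```python
import torch
import torchopt.distributed as todist


def partial_sum(x: torch.Tensor) -> torch.Tensor:
    return x.square().sum()


# Slice the batch along dim 0 across all workers, then sum the partial results.
future = todist.remote_async_call(
    partial_sum,
    args=(torch.randn(8, 2),),
    partitioner=todist.dim_partitioner(0, exclusive=False, keepdim=True),
    reducer=todist.sum_reducer,
)
result = future.wait()  # the torch.Future resolves at the current worker
```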
@@ -330,26 +334,26 @@ def remote_async_call( def remote_sync_call( func: Callable[..., T], *, - args: Optional[Args] = None, - kwargs: Optional[KwArgs] = None, - partitioner: Optional[Partitioner] = None, - reducer: Optional[Callable[[Iterable[T]], U]] = None, - timeout: Optional[float] = UNSET_RPC_TIMEOUT, -) -> Union[List[T], U]: + args: Args | None = None, + kwargs: KwArgs | None = None, + partitioner: Partitioner | None = None, + reducer: Callable[[Iterable[T]], U] | None = None, + timeout: float | None = UNSET_RPC_TIMEOUT, +) -> list[T] | U: """Do an RPC synchronously on remote workers and return the result to the current worker. Args: - func (Callable[..., T]): The function to call. - args (Optional[Args], optional): The arguments to pass to the function. Defaults to - :data:`None`. - kwargs (Optional[KwArgs], optional): The keyword arguments to pass to the function. Defaults - to :data:`None`. - partitioner (Partitioner, optional): A partitioner that partitions the arguments to multiple - workers. Defaults to :func:`batch_partitioner`. - reducer (Callable[[Iterable[T]], U], optional): A reducer that reduces the results from - multiple workers. Defaults to :data:`None`. - timeout (float, optional): The timeout for the RPC call. Defaults to - :data:`rpc.api.UNSET_RPC_TIMEOUT`. + func (callable): The function to call. + args (tuple of object or None, optional): The arguments to pass to the function. + (default: :data:`None`) + kwargs (dict[str, object] or None, optional): The keyword arguments to pass to the function. + (default: :data:`None`) + partitioner (int, str, or callable, optional): A partitioner that partitions the arguments + to multiple workers. (default: :func:`batch_partitioner`) + reducer (callable or None, optional): A reducer that reduces the results from multiple + workers. If :data:`None`, do not reduce the results. (default: :data:`None`) + timeout (float, optional): The timeout for the RPC call. + (default: :data:`rpc.api.UNSET_RPC_TIMEOUT`) Returns: The result of the RPC call. The result is at the current worker. @@ -365,10 +369,10 @@ def remote_sync_call( def parallelize_async( - partitioner: Optional[Partitioner] = None, - reducer: Optional[Callable[[Iterable[T]], U]] = None, - timeout: Optional[float] = UNSET_RPC_TIMEOUT, -) -> Callable[[Callable[..., T]], Callable[..., Union[Future[List[T]], Future[U]]]]: + partitioner: Partitioner | None = None, + reducer: Callable[[Iterable[T]], U] | None = None, + timeout: float | None = UNSET_RPC_TIMEOUT, +) -> Callable[[Callable[..., T]], Callable[..., Future[list[T]] | Future[U]]]: """Return a decorator for parallelizing a function. This decorator can be used to parallelize a function call across multiple workers. The @@ -376,13 +380,12 @@ def parallelize_async( return a :class:`torch.Future` instance of the result. Args: - partitioner (Partitioner, optional): A partitioner that partitions the arguments to multiple - workers. Defaults to :func:`batch_partitioner`. - reducer (Callable[[Iterable[T]], U], optional): A reducer that reduces the results from - multiple workers. Defaults to :func:`mean_reducer` if the ``partitioner`` is not - specified, i.e., :func:`batch_partitioner`. Otherwise, it defaults to :data:`None`. - timeout (float, optional): The timeout for the RPC call. Defaults to - :data:`rpc.api.UNSET_RPC_TIMEOUT`. + partitioner (int, str, or callable, optional): A partitioner that partitions the arguments + to multiple workers. 
(default: :func:`batch_partitioner`) + reducer (callable or None, optional): A reducer that reduces the results from multiple + workers. If :data:`None`, do not reduce the results. (default: :data:`None`) + timeout (float, optional): The timeout for the RPC call. + (default: :data:`rpc.api.UNSET_RPC_TIMEOUT`) Returns: The decorator function. @@ -392,9 +395,9 @@ def parallelize_async( if reducer is None: reducer = mean_reducer # type: ignore[assignment] - def wrapper(func: Callable[..., T]) -> Callable[..., Union[Future[List[T]], Future[U]]]: + def wrapper(func: Callable[..., T]) -> Callable[..., Future[list[T]] | Future[U]]: @functools.wraps(func) - def wrapped(*args: Any, **kwargs: Any) -> Union[Future[List[T]], Future[U]]: + def wrapped(*args: Any, **kwargs: Any) -> Future[list[T]] | Future[U]: return remote_async_call( func, args=args, @@ -423,22 +426,21 @@ def wrapped(*args: Any, **kwargs: Any) -> Union[Future[List[T]], Future[U]]: def parallelize( - partitioner: Optional[Partitioner] = None, - reducer: Optional[Callable[[Iterable[T]], U]] = None, - timeout: Optional[float] = UNSET_RPC_TIMEOUT, -) -> Callable[[Callable[..., T]], Callable[..., Union[List[T], U]]]: + partitioner: Partitioner | None = None, + reducer: Callable[[Iterable[T]], U] | None = None, + timeout: float | None = UNSET_RPC_TIMEOUT, +) -> Callable[[Callable[..., T]], Callable[..., list[T] | U]]: """Return a decorator for parallelizing a function. This decorator can be used to parallelize a function call across multiple workers. Args: - partitioner (Partitioner, optional): A partitioner that partitions the arguments to multiple - workers. Defaults to :func:`batch_partitioner`. - reducer (Callable[[Iterable[T]], U], optional): A reducer that reduces the results from - multiple workers. Defaults to :func:`mean_reducer` if the ``partitioner`` is not - specified, i.e., :func:`batch_partitioner`. Otherwise, it defaults to :data:`None`. - timeout (float, optional): The timeout for the RPC call. Defaults to - :data:`rpc.api.UNSET_RPC_TIMEOUT`. + partitioner (int, str, or callable, optional): A partitioner that partitions the arguments + to multiple workers. (default: :func:`batch_partitioner`) + reducer (callable or None, optional): A reducer that reduces the results from multiple + workers. If :data:`None`, do not reduce the results. (default: :data:`None`) + timeout (float, optional): The timeout for the RPC call. + (default: :data:`rpc.api.UNSET_RPC_TIMEOUT`) Returns: The decorator function. 
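The decorator form documented above admits an equally small sketch (same assumption of an initialized RPC world; ``batch_partitioner`` and ``mean_reducer`` are the defaults referenced in the docstrings):

```python
import torch
import torchopt.distributed as todist


@todist.parallelize(partitioner=todist.batch_partitioner, reducer=todist.mean_reducer)
def mean_loss(batch: torch.Tensor) -> torch.Tensor:
    # Each worker computes the loss on its shard; the results are averaged.
    return batch.square().mean()
```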
@@ -448,9 +450,9 @@ def parallelize( if reducer is None: reducer = mean_reducer # type: ignore[assignment] - def wrapper(func: Callable[..., T]) -> Callable[..., Union[List[T], U]]: + def wrapper(func: Callable[..., T]) -> Callable[..., list[T] | U]: @functools.wraps(func) - def wrapped(*args: Any, **kwargs: Any) -> Union[List[T], U]: + def wrapped(*args: Any, **kwargs: Any) -> list[T] | U: return remote_sync_call( func, args=args, diff --git a/torchopt/distributed/autograd.py b/torchopt/distributed/autograd.py index 5fe51278..17fa9463 100644 --- a/torchopt/distributed/autograd.py +++ b/torchopt/distributed/autograd.py @@ -14,14 +14,15 @@ # ============================================================================== """Distributed Autograd.""" +from __future__ import annotations + from threading import Lock -from typing import Optional, overload import torch import torch.distributed.autograd as autograd from torch.distributed.autograd import context -from torchopt.typing import TensorOrTensors, TupleOfOptionalTensors, TupleOfTensors +from torchopt.typing import TensorOrTensors, TupleOfOptionalTensors __all__ = ['is_available', 'context'] @@ -43,22 +44,23 @@ def backward( autograd_ctx_id: int, tensors: TensorOrTensors, retain_graph: bool = False, - inputs: Optional[TensorOrTensors] = None, + inputs: TensorOrTensors | None = None, ) -> None: """Perform distributed backward pass for local parameters. Compute the sum of gradients of given tensors with respect to graph leaves. Args: - autograd_ctx_id: The autograd context id. - tensors (Sequence[Tensor] or Tensor): Tensors of which the derivative will be computed. + autograd_ctx_id (int): The autograd context id. + tensors (Tensor or sequence of Tensor): Tensors of which the derivative will be computed. retain_graph (bool, optional): If :data:`False`, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to :data:`True` is not needed and often can be worked around in a much more efficient way. - inputs (Sequence[Tensor] or Tensor, optional): Inputs w.r.t. which the gradient be will - accumulated into ``.grad``. All other Tensors will be ignored. If not provided, the - gradient is accumulated into all the leaf Tensors that were used to compute the - attr::tensors. + (default: :data:`False`) + inputs (Tensor, sequence of Tensor, or None, optional): Inputs w.r.t. which the gradient + will be accumulated into ``.grad``. All other Tensors will be ignored. If not + provided, the gradient is accumulated into all the leaf Tensors that were used to + compute the ``tensors``. (default: :data:`None`) """ if inputs is not None: if isinstance(inputs, torch.Tensor): inputs = [inputs] @@ -85,25 +87,6 @@ def backward( else: p.grad = g - @overload - def grad( - autograd_ctx_id: int, - outputs: TensorOrTensors, - inputs: TensorOrTensors, - retain_graph: bool = False, - ) -> TupleOfTensors: - ... - - @overload - def grad( - autograd_ctx_id: int, - outputs: TensorOrTensors, - inputs: TensorOrTensors, - retain_graph: bool = False, - allow_unused: bool = False, - ) -> TupleOfOptionalTensors: - ... - def grad( autograd_ctx_id: int, outputs: TensorOrTensors, @@ -114,16 +97,17 @@ def grad( """Compute and return the sum of gradients of outputs with respect to the inputs. Args: - autograd_ctx_id: The autograd context id. - outputs (sequence of Tensor): outputs of the differentiated function. - inputs (sequence of Tensor): Inputs w.r.t. which the gradient will be returned (and not - accumulated into ``.grad``).
+ autograd_ctx_id (int): The autograd context id. + outputs (Tensor or sequence of Tensor): Outputs of the differentiated function. + inputs (Tensor or sequence of Tensor): Inputs w.r.t. which the gradient will be returned + (and not accumulated into ``.grad``). retain_graph (bool, optional): If :data:`False`, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to :data:`True` is not needed and often can be worked around in a much more efficient way. + (default: :data:`False`) allow_unused (bool, optional): If :data:`False`, specifying inputs that were not used when computing outputs (and therefore their grad is always zero) is an error. - Defaults to :data:`False`. + (default: :data:`False`) """ outputs = [outputs] if isinstance(outputs, torch.Tensor) else list(outputs) inputs = (inputs,) if isinstance(inputs, torch.Tensor) else tuple(inputs) diff --git a/torchopt/distributed/world.py b/torchopt/distributed/world.py index 45140df1..804d4b9d 100644 --- a/torchopt/distributed/world.py +++ b/torchopt/distributed/world.py @@ -14,10 +14,12 @@ # ============================================================================== """Utilities for gathering information about the world.""" +from __future__ import annotations + import atexit import functools import os -from typing import Any, Callable, Iterable, NamedTuple, Optional, TypeVar, Union +from typing import Any, Callable, Iterable, NamedTuple, TypeVar import torch.distributed.rpc as rpc from torch.distributed.elastic.multiprocessing.errors import record @@ -127,32 +129,33 @@ def get_local_world_size() -> int: # pylint: disable-next=redefined-builtin,invalid-name -def get_worker_id(id: Optional[Union[str, int]] = None) -> int: +def get_worker_id(id: str | int | None = None) -> int: """Get the worker id from the given id.""" if isinstance(id, int): return id return rpc.get_worker_info(worker_name=id).id -def barrier(worker_names: Optional[Iterable[str]] = None) -> None: +def barrier(worker_names: Iterable[str] | None = None) -> None: r"""Synchronize local and remote RPC processes. This will block until all local and remote RPC processes specified under worker_names reach this method to wait for all outstanding work to complete. Args: - worker_names: The set of workers to synchronize. If :data:`None`, all workers. + worker_names (iterable of str or None, optional): The set of workers to synchronize. + If :data:`None`, all workers. 
(default: :data:`None`) """ worker_names = {} if worker_names is None else set(worker_names) rpc.api._barrier(worker_names) # pylint: disable=protected-access def auto_init_rpc( - worker_init_fn: Optional[Callable[[], None]] = None, + worker_init_fn: Callable[[], None] | None = None, worker_name_format: Callable[..., str] = default_worker_name_format, *, - backend: Optional['rpc.BackendType'] = None, - rpc_backend_options: Optional['rpc.RpcBackendOptions'] = None, + backend: rpc.BackendType | None = None, + rpc_backend_options: rpc.RpcBackendOptions | None = None, ) -> Callable[[F], F]: """Return a decorator to automatically initialize RPC on the decorated function.""" global _WORKER_NAME_FORMAT # pylint: disable=global-statement diff --git a/torchopt/hook.py b/torchopt/hook.py index 949c76e7..f188415c 100644 --- a/torchopt/hook.py +++ b/torchopt/hook.py @@ -14,7 +14,9 @@ # ============================================================================== """Hook utilities.""" -from typing import Callable, Optional, Tuple +from __future__ import annotations + +from typing import Callable import torch @@ -32,7 +34,7 @@ def zero_nan_hook(g: torch.Tensor) -> torch.Tensor: def nan_to_num_hook( - nan: float = 0.0, posinf: Optional[float] = None, neginf: Optional[float] = None + nan: float = 0.0, posinf: float | None = None, neginf: float | None = None ) -> Callable[[torch.Tensor], torch.Tensor]: """Return a ``nan`` to num hook to replace ``nan`` / ``+inf`` / ``-inf`` with the given numbers.""" @@ -59,9 +61,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, # pylint: disable=unused-argument + params: Params | None = None, # pylint: disable=unused-argument inplace: bool = True, # pylint: disable=unused-argument - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: def f(g): return g.register_hook(hook) diff --git a/torchopt/linalg/cg.py b/torchopt/linalg/cg.py index 94daee53..5456f076 100644 --- a/torchopt/linalg/cg.py +++ b/torchopt/linalg/cg.py @@ -33,8 +33,10 @@ # pylint: disable=invalid-name +from __future__ import annotations + from functools import partial -from typing import Callable, Optional, Union +from typing import Callable import torch @@ -100,14 +102,14 @@ def body_fn(value): def _isolve( _isolve_solve: Callable, - A: Union[TensorTree, Callable[[TensorTree], TensorTree]], + A: TensorTree | Callable[[TensorTree], TensorTree], b: TensorTree, - x0: Optional[TensorTree] = None, + x0: TensorTree | None = None, *, rtol: float = 1e-5, atol: float = 0.0, - maxiter: Optional[int] = None, - M: Optional[Union[TensorTree, Callable[[TensorTree], TensorTree]]] = None, + maxiter: int | None = None, + M: TensorTree | Callable[[TensorTree], TensorTree] | None = None, ) -> TensorTree: if x0 is None: x0 = pytree.tree_map(torch.zeros_like, b) @@ -133,14 +135,14 @@ def _isolve( def cg( - A: Union[TensorTree, Callable[[TensorTree], TensorTree]], + A: TensorTree | Callable[[TensorTree], TensorTree], b: TensorTree, - x0: Optional[TensorTree] = None, + x0: TensorTree | None = None, *, rtol: float = 1e-5, atol: float = 0.0, - maxiter: Optional[int] = None, - M: Optional[Union[TensorTree, Callable[[TensorTree], TensorTree]]] = None, + maxiter: int | None = None, + M: TensorTree | Callable[[TensorTree], TensorTree] | None = None, ) -> TensorTree: """Use Conjugate Gradient iteration to solve ``Ax = b``. @@ -153,30 +155,30 @@ def cg( solves converge. 
Args: - A: (tensor or tree of tensors or function) - 2D array or function that calculates the linear map (matrix-vector product) ``Ax`` when - called like ``A(x)``. ``A`` must represent a hermitian, positive definite matrix, and - must return array(s) with the same structure and shape as its argument. - b: (tensor or tree of tensors) - Right hand side of the linear system representing a single vector. Can be stored as an - array or Python container of array(s) with any shape. - x0: (tensor or tree of tensors, optional) - Starting guess for the solution. Must have the same structure as ``b``. - rtol: (float, optional, default: :const:`1e-5`) - Tolerances for convergence, ``norm(residual) <= max(rtol*norm(b), atol)``. We do not - implement SciPy's "legacy" behavior, so TorchOpt's tolerance will differ from SciPy - unless you explicitly pass ``atol`` to SciPy's ``cg``. - atol: (float, optional, default: :const:`0.0`) - Tolerances for convergence, ``norm(residual) <= max(tol*norm(b), atol)``. We do not - implement SciPy's "legacy" behavior, so TorchOpt's tolerance will differ from SciPy - unless you explicitly pass ``atol`` to SciPy's ``cg``. - maxiter: (integer, optional) - Maximum number of iterations. Iteration will stop after maxiter steps even if the - specified tolerance has not been achieved. - M: (tensor or tree of tensors or function) - Pre-conditioner for ``A``. The pre-conditioner should approximate the inverse of ``A``. - Effective preconditioning dramatically improves the rate of convergence, which implies - that fewer iterations are needed to reach a given error tolerance. + A (Tensor or tree of Tensor): 2D array or function that calculates the linear map + (matrix-vector product) ``Ax`` when called like ``A(x)``. ``A`` must represent a + hermitian, positive definite matrix, and must return tensor(s) with the same structure + and shape as its argument. + b (Tensor or tree of Tensor): Right hand side of the linear system representing a single + vector. Can be stored as a tensor or Python container of tensor(s) with any shape. + x0 (Tensor, tree of Tensor, or None, optional): Starting guess for the solution. Must have + the same structure as ``b``. If :data:`None`, use zero initialization. + (default: :data:`None`) + rtol (float, optional): Tolerance for convergence, ``norm(residual) <= max(rtol*norm(b), atol)``. + We do not implement SciPy's "legacy" behavior, so TorchOpt's tolerance will differ from + SciPy unless you explicitly pass ``atol`` to SciPy's ``cg``. (default: :const:`1e-5`) + atol (float, optional): Tolerance for convergence, ``norm(residual) <= max(rtol*norm(b), atol)``. + We do not implement SciPy's "legacy" behavior, so TorchOpt's tolerance will differ from + SciPy unless you explicitly pass ``atol`` to SciPy's ``cg``. (default: :const:`0.0`) + maxiter (int or None, optional): Maximum number of iterations. Iteration will stop after + maxiter steps even if the specified tolerance has not been achieved. If :data:`None`, + ``10 * size`` will be used, where ``size`` is the size of the flattened input tensor(s). + (default: :data:`None`) + M (Tensor, tree of Tensor, function, or None, optional): Pre-conditioner for ``A``. The + pre-conditioner should approximate the inverse of ``A``. Effective preconditioning + dramatically improves the rate of convergence, which implies that fewer iterations are + needed to reach a given error tolerance. If :data:`None`, no pre-conditioner will be + used.
(default: :data:`None`) Returns: the Conjugate Gradient (CG) linear solver diff --git a/torchopt/linalg/ns.py b/torchopt/linalg/ns.py index 04f5dd11..c1975203 100644 --- a/torchopt/linalg/ns.py +++ b/torchopt/linalg/ns.py @@ -16,13 +16,15 @@ # pylint: disable=invalid-name +from __future__ import annotations + import functools -from typing import Callable, Optional, Union +from typing import Callable import torch from torchopt import pytree -from torchopt.linalg.utils import cat_shapes, normalize_matvec +from torchopt.linalg.utils import normalize_matvec from torchopt.typing import TensorTree @@ -33,7 +35,7 @@ def _ns_solve( A: torch.Tensor, b: torch.Tensor, maxiter: int, - alpha: Optional[float] = None, + alpha: float | None = None, ) -> torch.Tensor: """Use Neumann Series Matrix Inversion Approximation to solve ``Ax = b``.""" if A.ndim != 2 or A.shape[0] != A.shape[1]: raise ValueError(f'`A` must be a square matrix, but has shape: {A.shape}') @@ -57,27 +59,26 @@ def _ns_solve( def ns( - A: Union[TensorTree, Callable[[TensorTree], TensorTree]], + A: TensorTree | Callable[[TensorTree], TensorTree], b: TensorTree, - maxiter: Optional[int] = None, + maxiter: int | None = None, *, - alpha: Optional[float] = None, + alpha: float | None = None, ) -> TensorTree: """Use Neumann Series Matrix Inversion Approximation to solve ``Ax = b``. Args: - A: (tensor or tree of tensors or function) - 2D array or function that calculates the linear map (matrix-vector product) ``Ax`` when - called like ``A(x)``. ``A`` must represent a hermitian, positive definite matrix, and - must return array(s) with the same structure and shape as its argument. - b: (tensor or tree of tensors) - Right hand side of the linear system representing a single vector. Can be stored as an - array or Python container of array(s) with any shape. - maxiter: (integer, optional) - Maximum number of iterations. Iteration will stop after maxiter steps even if the - specified tolerance has not been achieved. - alpha: (float, optional) - Decay coefficient. + A (Tensor or tree of Tensor): 2D array or function that calculates the linear map + (matrix-vector product) ``Ax`` when called like ``A(x)``. ``A`` must represent a + hermitian, positive definite matrix, and must return tensor(s) with the same structure + and shape as its argument. + b (Tensor or tree of Tensor): Right hand side of the linear system representing a single + vector. Can be stored as a tensor or Python container of tensor(s) with any shape. + maxiter (int or None, optional): Maximum number of iterations. Iteration will stop after + maxiter steps even if the specified tolerance has not been achieved. If :data:`None`, + :const:`10` will be used. (default: :data:`None`) + alpha (float or None, optional): Decay coefficient. If :data:`None`, :const:`1.0` will be + used. (default: :data:`None`) Returns: The Neumann Series (NS) matrix inversion approximation. @@ -111,7 +112,7 @@ def ns( return inv_A_hat_b -def _ns_inv(A: torch.Tensor, maxiter: int, alpha: Optional[float] = None): +def _ns_inv(A: torch.Tensor, maxiter: int, alpha: float | None = None): """Use Neumann Series iteration to solve ``A^{-1}``.""" if A.ndim != 2 or A.shape[0] != A.shape[1]: raise ValueError(f'`A` must be a square matrix, but has shape: {A.shape}') @@ -134,28 +135,27 @@ def _ns_inv(A: torch.Tensor, maxiter: int, alpha: Optional[float] = None): def ns_inv( A: TensorTree, - maxiter: Optional[int] = None, + maxiter: int | None = None, *, - alpha: Optional[float] = None, + alpha: float | None = None, ) -> TensorTree: """Use Neumann Series iteration to solve ``A^{-1}``.
Args: - A: (tensor or tree of tensors or function) - 2D array or function that calculates the linear map (matrix-vector product) ``Ax`` when - called like ``A(x)``. ``A`` must represent a hermitian, positive definite matrix, and - must return array(s) with the same structure and shape as its argument. - maxiter: (integer, optional) - Maximum number of iterations. Iteration will stop after maxiter steps even if the - specified tolerance has not been achieved. - alpha: (float, optional) - Decay coefficient. + A (Tensor or tree of Tensor): 2D array or function that calculates the linear map + (matrix-vector product) ``Ax`` when called like ``A(x)``. ``A`` must represent a + hermitian, positive definite matrix, and must return tensor(s) with the same structure + and shape as its argument. + maxiter (int or None, optional): Maximum number of iterations. Iteration will stop after + maxiter steps even if the specified tolerance has not been achieved. If :data:`None`, + :const:`10` will be used. (default: :data:`None`) + alpha (float or None, optional): Decay coefficient. If :data:`None`, :const:`1.0` will be + used. (default: :data:`None`) Returns: The Neumann Series (NS) matrix inversion approximation. """ if maxiter is None: - size = sum(cat_shapes(A)) - maxiter = 10 * size # copied from SciPy + maxiter = 10 return pytree.tree_map(functools.partial(_ns_inv, maxiter=maxiter, alpha=alpha), A) diff --git a/torchopt/linalg/utils.py b/torchopt/linalg/utils.py index 275232be..f301a624 100644 --- a/torchopt/linalg/utils.py +++ b/torchopt/linalg/utils.py @@ -14,8 +14,10 @@ # ============================================================================== """Utilities for linear algebra.""" +from __future__ import annotations + import itertools -from typing import Callable, Tuple, Union +from typing import Callable import torch @@ -23,14 +25,14 @@ from torchopt.typing import TensorTree -def cat_shapes(tree: TensorTree) -> Tuple[int, ...]: +def cat_shapes(tree: TensorTree) -> tuple[int, ...]: """Concatenate the shapes of the leaves of a tree of tensors.""" leaves = pytree.tree_leaves(tree) return tuple(itertools.chain.from_iterable(tuple(leaf.shape) for leaf in leaves)) def normalize_matvec( - matvec: Union[TensorTree, Callable[[TensorTree], TensorTree]] + matvec: TensorTree | Callable[[TensorTree], TensorTree] ) -> Callable[[TensorTree], TensorTree]: """Normalize an argument for computing matrix-vector product.""" if callable(matvec): diff --git a/torchopt/linear_solve/cg.py b/torchopt/linear_solve/cg.py index f75ef9f4..844c9407 100644 --- a/torchopt/linear_solve/cg.py +++ b/torchopt/linear_solve/cg.py @@ -33,8 +33,10 @@ # pylint: disable=invalid-name +from __future__ import annotations + import functools -from typing import Callable, Optional +from typing import Callable from torchopt import linalg from torchopt.linear_solve.utils import make_ridge_matvec @@ -47,8 +49,8 @@ def _solve_cg( matvec: Callable[[TensorTree], TensorTree], # (x) -> A @ x b: TensorTree, - ridge: Optional[float] = None, - init: Optional[TensorTree] = None, + ridge: float | None = None, + init: TensorTree | None = None, **kwargs, ) -> TensorTree: """Solve ``A x = b`` using conjugate gradient. This assumes that ``A`` is a hermitian, positive definite matrix. Args: - matvec: A function that returns the product between ``A`` and a vector. - b: A tree of tensors for the right hand side of the equation. - ridge: Optional ridge regularization.
- init: Optional initialization to be used by conjugate gradient. + matvec (callable): A function that returns the product between ``A`` and a vector. + b (Tensor or tree of Tensor): A tree of tensors for the right hand side of the equation. + ridge (float or None, optional): Optional ridge regularization. If provided, solves the + equation for ``A x + ridge x = b``. (default: :data:`None`) + init (Tensor, tree of Tensor, or None, optional): Optional initialization to be used by + conjugate gradient. If :data:`None`, uses zero initialization. (default: :data:`None`) **kwargs: Additional keyword arguments for the conjugate gradient solver. Returns: @@ -80,8 +84,10 @@ def solve_cg(**kwargs): This assumes that ``A`` is a hermitian, positive definite matrix. Args: - ridge: Optional ridge regularization. Solves the equation for ``(A + ridge * I) @ x = b``. - init: Optional initialization to be used by conjugate gradient. + ridge (float or None, optional): Optional ridge regularization. If provided, solves the + equation for ``A x + ridge x = b``. (default: :data:`None`) + init (Tensor, tree of Tensor, or None, optional): Optional initialization to be used by + conjugate gradient. If :data:`None`, uses zero initialization. (default: :data:`None`) **kwargs: Additional keyword arguments for the conjugate gradient solver :func:`torchopt.linalg.cg`. diff --git a/torchopt/linear_solve/inv.py b/torchopt/linear_solve/inv.py index c3224a52..399a0ef9 100644 --- a/torchopt/linear_solve/inv.py +++ b/torchopt/linear_solve/inv.py @@ -33,8 +33,10 @@ # pylint: disable=invalid-name +from __future__ import annotations + import functools -from typing import Callable, Optional +from typing import Callable import torch @@ -49,7 +51,7 @@ def _solve_inv( matvec: Callable[[TensorTree], TensorTree], # (x) -> A @ x b: TensorTree, - ridge: Optional[float] = None, + ridge: float | None = None, ns: bool = False, **kwargs, ) -> TensorTree: @@ -59,11 +61,13 @@ def _solve_inv( in memory. Args: - matvec: A function that returns the product between ``A`` and a vector. - b: A tensor for the right hand side of the equation. - ridge: Optional ridge regularization. Solves the equation for ``(A + ridge * I) @ x = b``. - ns: Whether to use Neumann Series matrix inversion approximation. If :data:`False`, - materialize the matrix ``A`` in memory and use :func:`torch.linalg.solve` instead. + matvec (callable): A function that returns the product between ``A`` and a vector. + b (Tensor or tree of Tensor): A tree of tensors for the right hand side of the equation. + ridge (float or None, optional): Optional ridge regularization. If provided, solves the + equation for ``A x + ridge x = b``. (default: :data:`None`) + ns (bool, optional): Whether to use Neumann Series matrix inversion approximation. + If :data:`False`, materialize the matrix ``A`` in memory and use :func:`torch.linalg.solve` + instead. (default: :data:`False`) **kwargs: Additional keyword arguments for the Neumann Series matrix inversion approximation solver :func:`torchopt.linalg.ns`. @@ -94,9 +98,11 @@ def solve_inv(**kwargs): in memory. Args: - ridge: Optional ridge regularization. Solves the equation for ``(A + ridge * I) @ x = b``. - ns: Whether to use Neumann Series matrix inversion approximation. If :data:`False`, - materialize the matrix ``A`` in memory and use :func:`torch.linalg.solve` instead. + ridge (float or None, optional): Optional ridge regularization. If provided, solves the + equation for ``A x + ridge x = b``. 
(default: :data:`None`) + ns (bool, optional): Whether to use Neumann Series matrix inversion approximation. + If :data:`False`, materialize the matrix ``A`` in memory and use :func:`torch.linalg.solve` + instead. (default: :data:`False`) **kwargs: Additional keyword arguments for the Neumann Series matrix inversion approximation solver :func:`torchopt.linalg.ns`. diff --git a/torchopt/linear_solve/normal_cg.py b/torchopt/linear_solve/normal_cg.py index 3199a490..8d38f77a 100644 --- a/torchopt/linear_solve/normal_cg.py +++ b/torchopt/linear_solve/normal_cg.py @@ -33,8 +33,10 @@ # pylint: disable=invalid-name +from __future__ import annotations + import functools -from typing import Callable, Optional +from typing import Callable from torchopt import linalg from torchopt.linear_solve.utils import make_normal_matvec, make_ridge_matvec, make_rmatvec @@ -47,8 +49,8 @@ def _solve_normal_cg( matvec: Callable[[TensorTree], TensorTree], # (x) -> A @ x b: TensorTree, - ridge: Optional[float] = None, - init: Optional[TensorTree] = None, + ridge: float | None = None, + init: TensorTree | None = None, **kwargs, ) -> TensorTree: """Solve the normal equation ``A^T A x = A^T b`` using conjugate gradient. @@ -57,10 +59,12 @@ def _solve_normal_cg( positive definite. Args: - matvec: A function that returns the product between ``A`` and a vector. - b: A tree of tensors for the right hand side of the equation. - ridge: Optional ridge regularization. Solves the equation for ``(A.T @ A + ridge * I) @ x = A.T @ b``. - init: Optional initialization to be used by normal conjugate gradient. + matvec (callable): A function that returns the product between ``A`` and a vector. + b (Tensor or tree of Tensor): A tree of tensors for the right hand side of the equation. + ridge (float or None, optional): Optional ridge regularization. If provided, solves the + equation for ``A^T A x + ridge x = A^T b``. (default: :data:`None`) + init (Tensor, tree of Tensor, or None, optional): Optional initialization to be used by + conjugate gradient. If :data:`None`, uses zero initialization. (default: :data:`None`) **kwargs: Additional keyword arguments for the conjugate gradient solver :func:`torchopt.linalg.cg`. @@ -93,8 +97,10 @@ def solve_normal_cg(**kwargs): positive definite. Args: - ridge: Optional ridge regularization. Solves the equation for ``(A.T @ A + ridge * I) @ x = A.T @ b``. - init: Optional initialization to be used by normal conjugate gradient. + ridge (float or None, optional): Optional ridge regularization. If provided, solves the + equation for ``A^T A x + ridge x = A^T b``. (default: :data:`None`) + init (Tensor, tree of Tensor, or None, optional): Optional initialization to be used by + conjugate gradient. If :data:`None`, uses zero initialization. (default: :data:`None`) **kwargs: Additional keyword arguments for the conjugate gradient solver :func:`torchopt.linalg.cg`. 
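All of these solver factories share the same ``solve(matvec, b)`` calling convention, which a short self-contained sketch makes explicit (the matrix construction is illustrative; ``A`` is made hermitian positive definite so that plain CG applies):

```python
import torch
from torchopt import linear_solve

M = torch.randn(3, 3)
A = M @ M.T + 0.1 * torch.eye(3)  # hermitian positive definite by construction
b = torch.randn(3)


def matvec(x: torch.Tensor) -> torch.Tensor:
    # The solvers only ever see the linear map x -> A @ x.
    return A @ x


solve = linear_solve.solve_cg(maxiter=100)
x_cg = solve(matvec, b)  # requires A to be hermitian positive definite

solve = linear_solve.solve_normal_cg(ridge=1e-6)
x_ncg = solve(matvec, b)  # solves A^T A x = A^T b, so it works for any A
```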
diff --git a/torchopt/linear_solve/utils.py b/torchopt/linear_solve/utils.py index 9c2f7ced..f4f34e2a 100644 --- a/torchopt/linear_solve/utils.py +++ b/torchopt/linear_solve/utils.py @@ -31,7 +31,9 @@ # ============================================================================== """Utilities for linear algebra solvers.""" -from typing import Callable, Tuple +from __future__ import annotations + +from typing import Callable import functorch @@ -75,7 +77,7 @@ def ridge_matvec(y: TensorTree) -> TensorTree: def materialize_matvec( matvec: Callable[[TensorTree], TensorTree], x: TensorTree -) -> Tuple[ +) -> tuple[ TensorTree, Callable[[TensorTree], TensorTree], Callable[[TensorTree], TensorTree], diff --git a/torchopt/nn/module.py b/torchopt/nn/module.py index 3716f674..f8804864 100644 --- a/torchopt/nn/module.py +++ b/torchopt/nn/module.py @@ -14,8 +14,10 @@ # ============================================================================== """Base class for neural network modules that hold meta-parameters and meta-modules.""" +from __future__ import annotations + from collections import OrderedDict -from typing import Any, Dict, Iterator, List, NamedTuple, Optional, Set, Tuple, Union +from typing import Any, Iterator, NamedTuple import torch import torch.nn as nn @@ -27,8 +29,8 @@ class MetaInputsContainer(NamedTuple): """Container for parameters and modules in the constructor input arguments.""" - meta_parameters: Set[torch.Tensor] - meta_modules: Set[nn.Module] + meta_parameters: set[torch.Tensor] + meta_modules: set[nn.Module] class MetaGradientModule(nn.Module): # pylint: disable=abstract-method @@ -36,12 +38,12 @@ class MetaGradientModule(nn.Module): # pylint: disable=abstract-method _meta_inputs: MetaInputsContainer _meta_parameters: TensorContainer - _meta_modules: Dict[str, Optional[nn.Module]] + _meta_modules: dict[str, nn.Module | None] - def __new__(cls, *args, **kwargs) -> 'MetaGradientModule': + def __new__(cls, *args, **kwargs) -> MetaGradientModule: """Create a new module instance.""" instance = super().__new__(cls) - flat_args: List[Any] + flat_args: list[Any] flat_args = pytree.tree_leaves((args, kwargs)) # type: ignore[arg-type] meta_parameters = {x for x in flat_args if isinstance(x, torch.Tensor) and x.requires_grad} meta_modules = {x for x in flat_args if isinstance(x, nn.Module) and x.training} @@ -51,14 +53,14 @@ def __new__(cls, *args, **kwargs) -> 'MetaGradientModule': instance._meta_inputs = MetaInputsContainer(meta_parameters, meta_modules) instance._meta_parameters: TensorContainer = OrderedDict() # type: ignore[misc] - instance._meta_modules: Dict[str, Optional[nn.Module]] = OrderedDict() # type: ignore[misc] + instance._meta_modules: dict[str, nn.Module | None] = OrderedDict() # type: ignore[misc] return instance def __init__(self, *args, **kwargs) -> None: # pylint: disable=unused-argument """Initialize a new module instance.""" super().__init__() - def __getattr__(self, name: str) -> Union[torch.Tensor, nn.Module]: + def __getattr__(self, name: str) -> torch.Tensor | nn.Module: """Get an attribute of the module.""" if '_parameters' in self.__dict__: _parameters = self.__dict__['_parameters'] @@ -83,7 +85,7 @@ def __getattr__(self, name: str) -> Union[torch.Tensor, nn.Module]: raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'") # pylint: disable-next=too-many-branches,too-many-statements - def __setattr__(self, name: str, value: Union[torch.Tensor, nn.Module]) -> None: + def __setattr__(self, name: str, value: torch.Tensor | 
nn.Module) -> None: """Set an attribute of the module.""" def remove_from(*dicts_or_sets): @@ -186,18 +188,17 @@ def __delattr__(self, name: str) -> None: else: object.__delattr__(self, name) - def register_parameter(self, name: str, param: Optional[torch.Tensor]) -> None: + def register_parameter(self, name: str, param: torch.Tensor | None) -> None: r"""Add a parameter to the module. The parameter can be accessed as an attribute using given name. Args: - name (string): name of the parameter. The parameter can be accessed - from this module using the given name - param (torch.Tensor or None): parameter to be added to the module. If - ``None``, then operations that run on parameters, such as :attr:`cuda`, - are ignored. If ``None``, the parameter is **not** included in the - module's :attr:`state_dict`. + name (str): The name of the parameter. The parameter can be accessed from this module + using the given name. + param (Tensor or None): The parameter to be added to the module. If :data:`None`, then + operations that run on parameters, such as ``cuda``, are ignored. If :data:`None`, + the parameter is **not** included in the module's ``state_dict``. """ if '_parameters' not in self.__dict__: raise AttributeError('cannot assign parameter before Module.__init__() call') @@ -231,18 +232,17 @@ def register_parameter(self, name: str, param: Optional[torch.Tensor]) -> None: self._parameters[name] = param # type: ignore - def register_meta_parameter(self, name: str, param: Optional[torch.Tensor]) -> None: + def register_meta_parameter(self, name: str, param: torch.Tensor | None) -> None: r"""Add a meta-parameter to the module. The meta-parameter can be accessed as an attribute using given name. Args: - name (string): name of the parameter. The parameter can be accessed - from this module using the given name - param (torch.Tensor or None): parameter to be added to the module. If - ``None``, then operations that run on parameters, such as :attr:`cuda`, - are ignored. If ``None``, the parameter is **not** included in the - module's :attr:`state_dict`. + name (str): The name of the meta-parameter. The meta-parameter can be accessed from this + module using the given name. + param (Tensor or None): The meta-parameter to be added to the module. If :data:`None`, + then operations that run on meta-parameters, such as ``cuda``, are ignored. If + :data:`None`, the meta-parameter is **not** included in the module's ``state_dict``. """ if '_meta_parameters' not in self.__dict__: raise AttributeError( @@ -273,15 +273,15 @@ def register_meta_parameter(self, name: str, param: Optional[torch.Tensor]) -> N self._meta_parameters[name] = param - def add_module(self, name: str, module: Optional[nn.Module]) -> None: + def add_module(self, name: str, module: nn.Module | None) -> None: r"""Add a child module to the current module. The module can be accessed as an attribute using the given name. Args: - name (string): name of the child module. The child module can be - accessed from this module using the given name - module (Module): child module to be added to the module. + name (str): The name of the child module. The child module can be accessed from this + module using the given name + module (nn.Module or None): The child module to be added to the module. 
""" if not isinstance(module, nn.Module) and module is not None: raise TypeError(f'{torch.typename(module)} is not a Module subclass') @@ -301,19 +301,19 @@ def add_module(self, name: str, module: Optional[nn.Module]) -> None: self._modules[name] = module - def register_module(self, name: str, module: Optional[nn.Module]) -> None: + def register_module(self, name: str, module: nn.Module | None) -> None: r"""Alias for :func:`add_module`.""" self.add_module(name, module) - def add_meta_module(self, name: str, meta_module: Optional[nn.Module]) -> None: + def add_meta_module(self, name: str, meta_module: nn.Module | None) -> None: r"""Add a child meta-module to the current module. The meta-module can be accessed as an attribute using the given name. Args: - name (string): name of the child meta-module. The child meta-module can be - accessed from this module using the given name - meta_module (Module): child meta-module to be added to the module. + name (str): The name of the child meta-module. The child meta-module can be accessed + from this module using the given name + meta_module (nn.Module or None): The child meta-module to be added to the module. """ if not isinstance(meta_module, nn.Module) and meta_module is not None: raise TypeError(f'{torch.typename(meta_module)} is not a Module subclass') @@ -328,7 +328,7 @@ def add_meta_module(self, name: str, meta_module: Optional[nn.Module]) -> None: self._meta_modules[name] = meta_module - def register_meta_module(self, name: str, meta_module: Optional[nn.Module]) -> None: + def register_meta_module(self, name: str, meta_module: nn.Module | None) -> None: r"""Alias for :func:`add_meta_module`.""" self.add_meta_module(name, meta_module) @@ -338,9 +338,9 @@ def meta_parameters(self, recurse: bool = True) -> Iterator[torch.Tensor]: This is typically passed to an optimizer. Args: - recurse (bool): if True, then yields parameters of this module and - all submodules. Otherwise, yields only meta-parameters that - are direct members of this module. + recurse (bool, optional): If :data:`True`, then yields parameters of this module and + all submodules. Otherwise, yields only meta-parameters that are direct members of + this module. (default: :data:`True`) Yields: Parameter: module meta-parameter @@ -358,14 +358,15 @@ def meta_parameters(self, recurse: bool = True) -> Iterator[torch.Tensor]: def named_meta_parameters( self, prefix: str = '', recurse: bool = True - ) -> Iterator[Tuple[str, torch.Tensor]]: + ) -> Iterator[tuple[str, torch.Tensor]]: r"""Return an iterator over module meta-parameters, yielding both the name of the meta-parameter as well as the meta-parameter itself. Args: - prefix (str): prefix to prepend to all meta-parameter names. - recurse (bool): if True, then yields meta-parameters of this module - and all submodules. Otherwise, yields only meta-parameters that - are direct members of this module. + prefix (str, optional): The prefix to prepend to all meta-parameter names. + (default: :const:`''`) + recurse (bool, optional): if :data:`True`, then yields meta-parameters of this module + and all submodules. Otherwise, yields only meta-parameters that are direct members + of this module. 
(default: :data:`True`) Yields: (string, Parameter): Tuple containing the name and parameter @@ -398,7 +399,7 @@ def meta_children(self) -> Iterator[nn.Module]: for _, module in self.named_meta_children(): yield module - def named_meta_children(self) -> Iterator[Tuple[str, nn.Module]]: + def named_meta_children(self) -> Iterator[tuple[str, nn.Module]]: r"""Return an iterator over immediate children meta-modules, yielding both the name of the meta-module as well as the meta-module itself. Yields: @@ -430,15 +431,18 @@ def meta_modules(self) -> Iterator[nn.Module]: yield meta_module def named_meta_modules( - self, memo: Optional[Set[nn.Module]] = None, prefix: str = '', remove_duplicate: bool = True - ) -> Iterator[Tuple[str, nn.Module]]: + self, memo: set[nn.Module] | None = None, prefix: str = '', remove_duplicate: bool = True + ) -> Iterator[tuple[str, nn.Module]]: r"""Return an iterator over all meta-modules in the network, yielding both the name of the meta-module as well as the meta-module itself. Args: - memo: a memo to store the set of meta-modules already added to the result - prefix: a prefix that will be added to the name of the meta-module - remove_duplicate: whether to remove the duplicated meta-module instances in the result - or not + memo (set of nn.Module or None, optional): A memory to store the set of meta-modules + already added to the result. If not provided, a new set will be created. + (default: :const:`None`) + prefix (str, optional): A prefix that will be added to the name of the meta-module. + (default: :const:`''`) + remove_duplicate (bool, optional): whether to remove the duplicated meta-module + instances in the result or not. (default: :const:`True`) Yields: (string, Module): Tuple of name and meta-module diff --git a/torchopt/nn/stateless.py b/torchopt/nn/stateless.py index 2fc0dbb4..9391352f 100644 --- a/torchopt/nn/stateless.py +++ b/torchopt/nn/stateless.py @@ -14,8 +14,10 @@ # ============================================================================== """Utility functions for stateless module calls.""" +from __future__ import annotations + import contextlib -from typing import Dict, Generator, Iterable, Tuple, Union +from typing import Generator, Iterable import torch import torch.nn as nn @@ -29,9 +31,9 @@ def swap_state( module: nn.Module, - named_tensors: Union[Dict[str, torch.Tensor], Iterable[Tuple[str, torch.Tensor]]], + named_tensors: dict[str, torch.Tensor] | Iterable[tuple[str, torch.Tensor]], allow_missing: bool = False, -) -> Dict[str, torch.Tensor]: +) -> dict[str, torch.Tensor]: """Swap the module parameters and/or buffers.""" if not isinstance(named_tensors, dict): named_tensors = dict(named_tensors) @@ -84,7 +86,7 @@ def recursive_setattr(path: str, value: torch.Tensor) -> torch.Tensor: @contextlib.contextmanager def reparametrize( module: nn.Module, - named_tensors: Union[Dict[str, torch.Tensor], Iterable[Tuple[str, torch.Tensor]]], + named_tensors: dict[str, torch.Tensor] | Iterable[tuple[str, torch.Tensor]], allow_missing: bool = False, ) -> Generator[nn.Module, None, None]: """Reparameterize the module parameters and/or buffers.""" diff --git a/torchopt/optim/adam.py b/torchopt/optim/adam.py index c56956f8..640eea1d 100644 --- a/torchopt/optim/adam.py +++ b/torchopt/optim/adam.py @@ -14,7 +14,9 @@ # ============================================================================== """Adam optimizer.""" -from typing import Iterable, Tuple +from __future__ import annotations + +from typing import Iterable import torch @@ -39,7 +41,7 @@ def 
__init__( self, params: Iterable[torch.Tensor], lr: ScalarOrSchedule, - betas: Tuple[float, float] = (0.9, 0.999), + betas: tuple[float, float] = (0.9, 0.999), eps: float = 1e-8, weight_decay: float = 0.0, *, @@ -50,25 +52,27 @@ def __init__( r"""Initialize the Adam optimizer. Args: - params: (iterable of torch.Tensor) - An iterable of :class:`torch.Tensor`\s. Specifies what tensors should be optimized. - lr: (default: :const:`1e-3`) - This is a fixed global scaling factor. - betas: (default: :const:`(0.9, 0.999)`) - Coefficients used for computing running averages of gradient and its square. - eps: (default: :const:`1e-8`) - A small constant applied to denominator outside of the square root (as in the Adam - paper) to avoid dividing by zero when rescaling. - weight_decay: (default: :const:`0.0`) - Weight decay, add L2 penalty to parameters. - eps_root: (default: :data:`0.0`) - A small constant applied to denominator inside the square root (as in RMSProp), to - avoid dividing by zero when rescaling. This is needed for example when computing - (meta-)gradients through Adam. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. - use_accelerated_op: (default: :data:`False`) - If :data:`True` use our implemented fused operator. + params (iterable of Tensor): An iterable of :class:`torch.Tensor`\s. Specifies what + tensors should be optimized. + lr (float or callable, optional): This is a fixed global scaling factor or a learning + rate scheduler. (default: :const:`1e-3`) + betas (tuple of float, optional): Coefficients used for computing running averages of + gradient and its square. (default: :const:`(0.9, 0.999)`) + eps (float, optional): A small constant applied to denominator outside of the square + root (as in the Adam paper) to avoid dividing by zero when rescaling. + (default: :const:`1e-8`) + weight_decay (float, optional): Weight decay, add L2 penalty to parameters. + (default: :const:`0.0`) + eps_root (float, optional): A small constant applied to denominator inside the square + root (as in RMSProp), to avoid dividing by zero when rescaling. This is needed for + example when computing (meta-)gradients through Adam. (default: :const:`0.0`) + moment_requires_grad (bool, optional): If :data:`True` the momentums will be created + with flag ``requires_grad=True``, this flag is often used in Meta-Learning + algorithms. (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of + minimizing. (default: :data:`False`) + use_accelerated_op (bool, optional): If :data:`True` use our implemented fused operator. 
+ (default: :data:`False`) """ super().__init__( params, diff --git a/torchopt/optim/adamw.py b/torchopt/optim/adamw.py index 19c70678..7db5e750 100644 --- a/torchopt/optim/adamw.py +++ b/torchopt/optim/adamw.py @@ -14,13 +14,15 @@ # ============================================================================== """AdamW optimizer.""" -from typing import Any, Callable, Iterable, Optional, Tuple, Union +from __future__ import annotations + +from typing import Callable, Iterable import torch from torchopt import alias from torchopt.optim.base import Optimizer -from torchopt.typing import Params, ScalarOrSchedule +from torchopt.typing import OptState, Params, ScalarOrSchedule __all__ = ['AdamW'] @@ -39,46 +41,48 @@ def __init__( self, params: Iterable[torch.Tensor], lr: ScalarOrSchedule = 1e-3, - betas: Tuple[float, float] = (0.9, 0.999), + betas: tuple[float, float] = (0.9, 0.999), eps: float = 1e-8, weight_decay: float = 1e-2, *, eps_root: float = 0.0, - mask: Optional[Union[Any, Callable[[Params], Any]]] = None, + mask: OptState | Callable[[Params], OptState] | None = None, maximize: bool = False, use_accelerated_op: bool = False, ) -> None: r"""Initialize the AdamW optimizer. Args: - params: (iterable of torch.Tensor) - An iterable of :class:`torch.Tensor`\s. Specifies what tensors should be optimized. - lr: (default: :const:`1e-3`) - This is a fixed global scaling factor. - betas: (default: :const:`(0.9, 0.999)`) - Coefficients used for computing running averages of gradient and its square. - eps: (default: :const:`1e-8`) - A small constant applied to denominator outside of the square root (as in the Adam - paper) to avoid dividing by zero when rescaling. - weight_decay: (default: :const:`1e-2`) - Strength of the weight decay regularization. Note that this weight decay is - multiplied with the learning rate. This is consistent with other frameworks such as - PyTorch, but different from (Loshchilov et al, 2019) where the weight decay is only - multiplied with the "schedule multiplier", but not the base learning rate. - eps_root: (default: :data:`0.0`) - A small constant applied to denominator inside the square root (as in RMSProp), to - avoid dividing by zero when rescaling. This is needed for example when computing - (meta-)gradients through Adam. - mask: (default: :data:`None`) - A tree with same structure as (or a prefix of) the params PyTree, or a Callable that + params (iterable of Tensor): An iterable of :class:`torch.Tensor`\s. Specifies what + tensors should be optimized. + lr (float or callable, optional): This is a fixed global scaling factor or a learning + rate scheduler. (default: :const:`1e-3`) + betas (tuple of float, optional): Coefficients used for computing running averages of + gradient and its square. (default: :const:`(0.9, 0.999)`) + eps (float, optional): A small constant applied to denominator outside of the square + root (as in the Adam paper) to avoid dividing by zero when rescaling. + (default: :const:`1e-8`) + weight_decay (float, optional): Strength of the weight decay regularization. Note that + this weight decay is multiplied with the learning rate. This is consistent with + other frameworks such as PyTorch, but different from (Loshchilov et al, 2019) where + the weight decay is only multiplied with the "schedule multiplier", but not the base + learning rate. (default: :const:`1e-2`) + eps_root (float, optional): A small constant applied to denominator inside the square + root (as in RMSProp), to avoid dividing by zero when rescaling. 
This is needed for + example when computing (meta-)gradients through Adam. (default: :const:`0.0`) + mask (tree of Tensor, callable, or None, optional): + A tree with same structure as (or a prefix of) the params pytree, or a function that returns such a pytree given the params/updates. The leaves should be booleans, :data:`True` for leaves/subtrees you want to apply the weight decay to, and :data:`False` for those you want to skip. Note that the Adam gradient - transformations are applied to all parameters. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. - use_accelerated_op: (default: :data:`False`) - If :data:`True` use our implemented fused operator. + transformations are applied to all parameters. (default: :data:`None`) + moment_requires_grad (bool, optional): If :data:`True` the momentums will be created + with flag ``requires_grad=True``, this flag is often used in Meta-Learning + algorithms. (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of + minimizing. (default: :data:`False`) + use_accelerated_op (bool, optional): If :data:`True` use our implemented fused operator. + (default: :data:`False`) """ super().__init__( params, diff --git a/torchopt/optim/base.py b/torchopt/optim/base.py index e894b93b..aac3a782 100644 --- a/torchopt/optim/base.py +++ b/torchopt/optim/base.py @@ -14,7 +14,9 @@ # ============================================================================== """The base class for optimizers.""" -from typing import Callable, Iterable, List, Optional, Sequence, Tuple +from __future__ import annotations + +from typing import Callable, Iterable, Sequence import torch @@ -37,8 +39,8 @@ def __init__(self, params: Iterable[torch.Tensor], impl: GradientTransformation) params (iterable of torch.Tensor): An iterable of :class:`torch.Tensor`\s. Specifies what tensors should be optimized. impl (GradientTransformation): A low level optimizer function, it could be a optimizer - function provided by ``alias.py`` or a customized ``chain`` provided by - ``combine.py``. + function provided in :mod:`torchopt.alias` or a customized :func:`torchopt.chain`\ed + transformation. Note that using ``Optimizer(sgd())`` or ``Optimizer(chain(sgd()))`` is equivalent to :class:`torchopt.SGD`. """ @@ -46,9 +48,9 @@ def __init__(self, params: Iterable[torch.Tensor], impl: GradientTransformation) raise TypeError(f'{impl} (type: {type(impl).__name__}) is not a GradientTransformation') self.impl: GradientTransformation = impl - self.param_groups: List[TupleOfTensors] = [] - self.param_treespecs: List[pytree.PyTreeSpec] = [] - self.state_groups: List[OptState] = [] + self.param_groups: list[TupleOfTensors] = [] + self.param_treespecs: list[pytree.PyTreeSpec] = [] + self.state_groups: list[OptState] = [] if not isinstance(params, (list, tuple)): params = tuple(params) @@ -60,7 +62,8 @@ def zero_grad(self, set_to_none: bool = False) -> None: The behavior is similar to :meth:`torch.optim.Optimizer.zero_grad`. Args: - set_to_none (bool): Instead of setting to zero, set the ``grads`` to :data:`None`. + set_to_none (bool, optional): Instead of setting to zero, set the ``grads`` to + :data:`None`. 
(default: :data:`False`) """ if set_to_none: @@ -80,7 +83,7 @@ def f(p): pytree.tree_map_(f, self.param_groups) # type: ignore[arg-type] - def state_dict(self) -> Tuple[OptState, ...]: + def state_dict(self) -> tuple[OptState, ...]: """Return the state of the optimizer.""" return tuple(self.state_groups) @@ -88,18 +91,19 @@ def load_state_dict(self, state_dict: Sequence[OptState]) -> None: """Load the optimizer state. Args: - state_dict: Optimizer state. Should be an object returned from a call to - :meth:`state_dict`. + state_dict (sequence of tree of Tensor): Optimizer state. Should be an object returned + from a call to :meth:`state_dict`. """ self.state_groups[:] = list(state_dict) - def step(self, closure: Optional[Callable[[], torch.Tensor]] = None) -> Optional[torch.Tensor]: + def step(self, closure: Callable[[], torch.Tensor] | None = None) -> torch.Tensor | None: """Perform a single optimization step. The behavior is similar to :meth:`torch.optim.Optimizer.step`. Args: - closure (callable, optional): A closure that reevaluates the model and returns the loss. + closure (callable or None, optional): A closure that reevaluates the model and returns + the loss. Optional for most optimizers. (default: :data:`None`) """ loss = None if closure is not None: @@ -120,7 +124,7 @@ def f(p): return loss def add_param_group(self, params: Params) -> None: - """Add a param group to the optimizer's :attr:`param_groups`.""" + """Add a param group to the optimizer's ``param_groups``.""" flat_params: TupleOfTensors flat_params, params_treespec = pytree.tree_flatten_as_tuple(params) self.param_groups.append(flat_params) diff --git a/torchopt/optim/func/base.py b/torchopt/optim/func/base.py index 7e51a21b..9dce3412 100644 --- a/torchopt/optim/func/base.py +++ b/torchopt/optim/func/base.py @@ -14,7 +14,7 @@ # ============================================================================== """Functional optimizer wrappers.""" -from typing import Optional +from __future__ import annotations import torch @@ -41,26 +41,27 @@ class FuncOptimizer: # pylint: disable=too-few-public-methods """ def __init__(self, impl: GradientTransformation, *, inplace: bool = False) -> None: - """Initialize the functional optimizer wrapper. + r"""Initialize the functional optimizer wrapper. Args: impl (GradientTransformation): A low level optimizer function, it could be a optimizer - function provided by `alias.py` or a customized `chain` provided by `combine.py`. - inplace (optional): (default: :data:`False`) - The default value of ``inplace`` for each optimization update. + function provided in :mod:`torchopt.alias` or a customized :func:`torchopt.chain`\ed + transformation. + inplace (bool, optional): The default value of ``inplace`` for each optimization update. + (default: :data:`False`) """ if not isinstance(impl, GradientTransformation): raise TypeError(f'{impl} (type: {type(impl).__name__}) is not a GradientTransformation') self.impl: GradientTransformation = impl - self.optim_state: Optional[OptState] = UninitializedState() + self.optim_state: OptState | None = UninitializedState() self.inplace: bool = bool(inplace) def step( self, loss: torch.Tensor, params: Params, - inplace: Optional[bool] = None, + inplace: bool | None = None, ) -> Params: r"""Compute the gradients of loss to the network parameters and update network parameters. @@ -69,13 +70,12 @@ def step( gradients and update the network parameters without modifying tensors in-place. 
Args: - loss: (torch.Tensor) - loss that is used to compute the gradients to network parameters. - params: (tree of torch.Tensor) - An tree of :class:`torch.Tensor`\s. Specifies what tensors should be optimized. - inplace (optional): (default: :data:`None`) - Whether to update the parameters in-place. If :data:`None`, use the default value - specified in the constructor. + loss (Tensor): The loss that is used to compute the gradients to network parameters. + params (tree of Tensor): An tree of :class:`torch.Tensor`\s. Specifies what tensors + should be optimized. + inplace (bool or None, optional): Whether to update the parameters in-place. If + :data:`None`, use the default value specified in the constructor. + (default: :data:`None`) """ if isinstance(self.optim_state, UninitializedState): self.optim_state = self.impl.init(params) diff --git a/torchopt/optim/meta/adam.py b/torchopt/optim/meta/adam.py index 36d54857..bd9804b9 100644 --- a/torchopt/optim/meta/adam.py +++ b/torchopt/optim/meta/adam.py @@ -14,7 +14,7 @@ # ============================================================================== """Differentiable Adam optimizer.""" -from typing import Tuple +from __future__ import annotations import torch.nn as nn @@ -39,7 +39,7 @@ def __init__( self, module: nn.Module, lr: ScalarOrSchedule = 1e-3, - betas: Tuple[float, float] = (0.9, 0.999), + betas: tuple[float, float] = (0.9, 0.999), eps: float = 1e-8, weight_decay: float = 0.0, *, @@ -51,28 +51,26 @@ def __init__( """Initialize the meta-Adam optimizer. Args: - module: (nn.Module) - A network whose parameters should be optimized. - lr: (default: :const:`1e-3`) - This is a fixed global scaling factor. - betas: (default: :const:`(0.9, 0.999)`) - Coefficients used for computing running averages of gradient and its square. - eps: (default: :const:`1e-8`) - A small constant applied to denominator outside of the square root (as in the Adam - paper) to avoid dividing by zero when rescaling. - weight_decay: (default: :const:`0.0`) - Weight decay, add L2 penalty to parameters. - eps_root: (default: :data:`0.0`) - A small constant applied to denominator inside the square root (as in RMSProp), to - avoid dividing by zero when rescaling. This is needed for example when computing - (meta-)gradients through Adam. - moment_requires_grad: (default: :data:`True`) - If :data:`True` the momentums will be created with flag ``requires_grad=True``, this - flag is often used in Meta-Learning algorithms. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. - use_accelerated_op: (default: :data:`False`) - If :data:`True` use our implemented fused operator. + module (nn.Module): A network whose parameters should be optimized. + lr (float or callable, optional): This is a fixed global scaling factor or a learning + rate scheduler. (default: :const:`1e-3`) + betas (tuple of float, optional): Coefficients used for computing running averages of + gradient and its square. (default: :const:`(0.9, 0.999)`) + eps (float, optional): A small constant applied to denominator outside of the square + root (as in the Adam paper) to avoid dividing by zero when rescaling. + (default: :const:`1e-8`) + weight_decay (float, optional): Weight decay, add L2 penalty to parameters. + (default: :const:`0.0`) + eps_root (float, optional): A small constant applied to denominator inside the square + root (as in RMSProp), to avoid dividing by zero when rescaling. This is needed for + example when computing (meta-)gradients through Adam. 
(default: :const:`0.0`) + moment_requires_grad (bool, optional): If :data:`True` the momentums will be created + with flag ``requires_grad=True``, this flag is often used in Meta-Learning + algorithms. (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of + minimizing. (default: :data:`False`) + use_accelerated_op (bool, optional): If :data:`True` use our implemented fused operator. + (default: :data:`False`) """ super().__init__( module, diff --git a/torchopt/optim/meta/adamw.py b/torchopt/optim/meta/adamw.py index dc869e30..c8a8ef9c 100644 --- a/torchopt/optim/meta/adamw.py +++ b/torchopt/optim/meta/adamw.py @@ -14,13 +14,15 @@ # ============================================================================== """Differentiable AdamW optimizer.""" -from typing import Any, Callable, Optional, Tuple, Union +from __future__ import annotations + +from typing import Callable import torch.nn as nn from torchopt import alias from torchopt.optim.meta.base import MetaOptimizer -from torchopt.typing import Params, ScalarOrSchedule +from torchopt.typing import OptState, Params, ScalarOrSchedule __all__ = ['MetaAdamW'] @@ -39,12 +41,12 @@ def __init__( self, module: nn.Module, lr: ScalarOrSchedule = 1e-3, - betas: Tuple[float, float] = (0.9, 0.999), + betas: tuple[float, float] = (0.9, 0.999), eps: float = 1e-8, weight_decay: float = 1e-2, *, eps_root: float = 0.0, - mask: Optional[Union[Any, Callable[[Params], Any]]] = None, + mask: OptState | Callable[[Params], OptState] | None = None, moment_requires_grad: bool = False, maximize: bool = False, use_accelerated_op: bool = False, @@ -52,37 +54,35 @@ def __init__( """Initialize the meta-AdamW optimizer. Args: - module: (nn.Module) - A network whose parameters should be optimized. - lr: (default: :const:`1e-3`) - This is a fixed global scaling factor. - betas: (default: :const:`(0.9, 0.999)`) - Coefficients used for computing running averages of gradient and its square. - eps: (default: :const:`1e-8`) - A small constant applied to denominator outside of the square root (as in the Adam - paper) to avoid dividing by zero when rescaling. - weight_decay: (default: :const:`1e-2`) - Strength of the weight decay regularization. Note that this weight decay is - multiplied with the learning rate. This is consistent with other frameworks such as - PyTorch, but different from (Loshchilov et al, 2019) where the weight decay is only - multiplied with the "schedule multiplier", but not the base learning rate. - eps_root: (default: :data:`0.0`) - A small constant applied to denominator inside the square root (as in RMSProp), to - avoid dividing by zero when rescaling. This is needed for example when computing - (meta-)gradients through Adam. - mask: (default: :data:`None`) - A tree with same structure as (or a prefix of) the params PyTree, or a Callable that + module (nn.Module): A network whose parameters should be optimized. + lr (float or callable, optional): This is a fixed global scaling factor or a learning + rate scheduler. (default: :const:`1e-3`) + betas (tuple of float, optional): Coefficients used for computing running averages of + gradient and its square. (default: :const:`(0.9, 0.999)`) + eps (float, optional): A small constant applied to denominator outside of the square + root (as in the Adam paper) to avoid dividing by zero when rescaling. + (default: :const:`1e-8`) + weight_decay (float, optional): Strength of the weight decay regularization. 
Note that + this weight decay is multiplied with the learning rate. This is consistent with + other frameworks such as PyTorch, but different from (Loshchilov et al, 2019) where + the weight decay is only multiplied with the "schedule multiplier", but not the base + learning rate. (default: :const:`1e-2`) + eps_root (float, optional): A small constant applied to denominator inside the square + root (as in RMSProp), to avoid dividing by zero when rescaling. This is needed for + example when computing (meta-)gradients through Adam. (default: :const:`0.0`) + mask (tree of Tensor, callable, or None, optional): + A tree with same structure as (or a prefix of) the params pytree, or a function that returns such a pytree given the params/updates. The leaves should be booleans, :data:`True` for leaves/subtrees you want to apply the weight decay to, and :data:`False` for those you want to skip. Note that the Adam gradient - transformations are applied to all parameters. - moment_requires_grad: (default: :data:`False`) - If :data:`True` the momentums will be created with flag ``requires_grad=True``, this - flag is often used in Meta-Learning algorithms. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. - use_accelerated_op: (default: :data:`False`) - If :data:`True` use our implemented fused operator. + transformations are applied to all parameters. (default: :data:`None`) + moment_requires_grad (bool, optional): If :data:`True` the momentums will be created + with flag ``requires_grad=True``, this flag is often used in Meta-Learning + algorithms. (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of + minimizing. (default: :data:`False`) + use_accelerated_op (bool, optional): If :data:`True` use our implemented fused operator. + (default: :data:`False`) """ super().__init__( module, diff --git a/torchopt/optim/meta/base.py b/torchopt/optim/meta/base.py index 8db4f0a7..c5c9ad73 100644 --- a/torchopt/optim/meta/base.py +++ b/torchopt/optim/meta/base.py @@ -14,7 +14,9 @@ # ============================================================================== """The base class for differentiable meta-optimizers.""" -from typing import List, Sequence, Tuple +from __future__ import annotations + +from typing import Sequence import torch import torch.nn as nn @@ -33,14 +35,13 @@ class MetaOptimizer: """The base class for high-level differentiable optimizers.""" def __init__(self, module: nn.Module, impl: GradientTransformation) -> None: - """Initialize the meta-optimizer. + r"""Initialize the meta-optimizer. Args: - module: (nn.Module) - A network whose parameters should be optimized. - impl: (GradientTransformation) - A low level optimizer function, it could be a optimizer function provided by - ``alias.py`` or a customized ``chain`` provided by ``combine.py``. + module (nn.Module): A network whose parameters should be optimized. + impl (GradientTransformation): A low level optimizer function, it could be a optimizer + function provided in :mod:`torchopt.alias` or a customized :func:`torchopt.chain`\ed + transformation. Note that using ``MetaOptimizer(sgd(moment_requires_grad=True))`` or ``MetaOptimizer(chain(sgd(moment_requires_grad=True)))`` is equivalent to :class:`torchopt.MetaSGD`. 
@@ -49,8 +50,8 @@ def __init__(self, module: nn.Module, impl: GradientTransformation) -> None: raise TypeError(f'{impl} (type: {type(impl).__name__}) is not a GradientTransformation') self.impl: GradientTransformation = impl - self.param_containers_groups: List[ModuleTensorContainers] = [] - self.state_groups: List[OptState] = [] + self.param_containers_groups: list[ModuleTensorContainers] = [] + self.state_groups: list[OptState] = [] self.add_param_group(module) @@ -62,8 +63,8 @@ def step(self, loss: torch.Tensor) -> None: # pylint: disable=too-many-locals gradients and update the network parameters without modifying tensors in-place. Args: - loss: (torch.Tensor) - The loss that is used to compute the gradients to the network parameters. + loss (torch.Tensor): The loss that is used to compute the gradients to the network + parameters. """ # Step parameter only for i, (param_container, state) in enumerate( @@ -94,12 +95,12 @@ def step(self, loss: torch.Tensor) -> None: # pylint: disable=too-many-locals container.update(new_param) def add_param_group(self, module: nn.Module) -> None: - """Add a param group to the optimizer's :attr:`state_groups`.""" + """Add a param group to the optimizer's ``state_groups``.""" params_container = extract_module_containers(module, with_buffers=False)[0] self.param_containers_groups.append(params_container) self.state_groups.append(UninitializedState()) - def state_dict(self) -> Tuple[OptState, ...]: + def state_dict(self) -> tuple[OptState, ...]: """Extract the references of the optimizer states. Note that the states are references, so any in-place operations will change the states diff --git a/torchopt/optim/meta/rmsprop.py b/torchopt/optim/meta/rmsprop.py index f4dfdae6..3aff20e1 100644 --- a/torchopt/optim/meta/rmsprop.py +++ b/torchopt/optim/meta/rmsprop.py @@ -50,30 +50,26 @@ def __init__( """Initialize the meta-RMSProp optimizer. Args: - module: (nn.Module) - A network whose parameters should be optimized. - lr: (default: :const:`1e-2`) - This is a fixed global scaling factor. - alpha: (default: :const:`0.99`) - Smoothing constant, the decay used to track the magnitude of previous gradients. - eps: (default: :const:`1e-8`) - A small numerical constant to avoid dividing by zero when rescaling. - weight_decay: (default: :const:`0.0`) - Weight decay, add L2 penalty to parameters. - momentum: (default: :const:`0.0`) - The decay rate used by the momentum term. The momentum is not used when it is set to - :const:`0.0`. - centered: (default: :data:`False`) - If :data:`True`, use the variance of the past gradients to rescale the latest - gradients. - initial_scale: (default: :data:`0.0`) - Initialization of accumulators tracking the magnitude of previous updates. PyTorch - uses :data:`0.0`, TensorFlow 1.x uses :data:`1.0`. When reproducing results from a - paper, verify the value used by the authors. - nesterov: (default: :data:`False`) - Whether to use Nesterov momentum. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. + module (nn.Module): A network whose parameters should be optimized. + lr (float or callable, optional): This is a fixed global scaling factor or a learning + rate scheduler. (default: :const:`1e-2`) + alpha (float, optional): Smoothing constant, the decay used to track the magnitude of + previous gradients. (default: :const:`0.99`) + eps (float, optional): A small numerical constant to avoid dividing by zero when + rescaling. 
(default: :const:`1e-8`) + weight_decay (float, optional): Weight decay, add L2 penalty to parameters. + (default: :const:`0.0`) + momentum (float, optional): The decay rate used by the momentum term. The momentum is + not used when it is set to :const:`0.0`. (default: :const:`0.0`) + centered (bool, optional): If :data:`True`, use the variance of the past gradients to + rescale the latest gradients. (default: :data:`False`) + initial_scale (float, optional): Initialization of accumulators tracking the magnitude + of previous updates. PyTorch uses :data:`0.0`, TensorFlow 1.x uses :data:`1.0`. When + reproducing results from a paper, verify the value used by the authors. + (default: :data:`0.0`) + nesterov (bool, optional): Whether to use Nesterov momentum. (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of + minimizing. (default: :data:`False`) """ super().__init__( module, diff --git a/torchopt/optim/meta/sgd.py b/torchopt/optim/meta/sgd.py index 5f9177e1..476ed9d6 100644 --- a/torchopt/optim/meta/sgd.py +++ b/torchopt/optim/meta/sgd.py @@ -47,23 +47,20 @@ def __init__( """Initialize the meta-SGD optimizer. Args: - module: (nn.Module) - A network whose parameters should be optimized. - lr: This is a fixed global scaling factor. - momentum: (default: :const:`0.0`) - The decay rate used by the momentum term. The momentum is not used when it is set to - :const:`0.0`. - weight_decay: (default: :const:`0.0`) - Weight decay, add L2 penalty to parameters. - dampening: (default: :const:`0.0`) - Dampening for momentum. - nesterov: (default: :const:`False`) - Whether to use Nesterov momentum. - moment_requires_grad: (default: :data:`True`) - If :data:`True` the momentums will be created with flag ``requires_grad=True``, this - flag is often used in Meta-Learning algorithms. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. + module (nn.Module): A network whose parameters should be optimized. + lr (float or callable): This is a fixed global scaling factor or a learning rate + scheduler. + momentum (float, optional): The decay rate used by the momentum term. The momentum is + not used when it is set to :const:`0.0`. (default: :const:`0.0`) + weight_decay (float, optional): Weight decay, add L2 penalty to parameters. + (default: :const:`0.0`) + dampening (float, optional): Dampening for momentum. (default: :const:`0.0`) + nesterov (bool, optional): Whether to use Nesterov momentum. (default: :data:`False`) + moment_requires_grad (bool, optional): If :data:`True` the momentums will be created + with flag ``requires_grad=True``, this flag is often used in Meta-Learning + algorithms. (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of + minimizing. (default: :data:`False`) """ super().__init__( module, diff --git a/torchopt/optim/rmsprop.py b/torchopt/optim/rmsprop.py index 9101984f..5c4e536f 100644 --- a/torchopt/optim/rmsprop.py +++ b/torchopt/optim/rmsprop.py @@ -52,30 +52,27 @@ def __init__( r"""Initialize the RMSProp optimizer. Args: - params: (iterable of torch.Tensor) - An iterable of :class:`torch.Tensor`\s. Specifies what Tensors should be optimized. - lr: (default: :const:`1e-2`) - This is a fixed global scaling factor. - alpha: (default: :const:`0.99`) - Smoothing constant, the decay used to track the magnitude of previous gradients. 
- eps: (default: :const:`1e-8`) - A small numerical constant to avoid dividing by zero when rescaling. - weight_decay: (default: :const:`0.0`) - Weight decay, add L2 penalty to parameters. - momentum: (default: :const:`0.0`) - The decay rate used by the momentum term. The momentum is not used when it is set to - :const:`0.0`. - centered: (default: :data:`False`) - If :data:`True`, use the variance of the past gradients to rescale the latest - gradients. - initial_scale: (default: :data:`0.0`) - Initialization of accumulators tracking the magnitude of previous updates. PyTorch - uses :data:`0.0`, TensorFlow 1.x uses :data:`1.0`. When reproducing results from a - paper, verify the value used by the authors. - nesterov: (default: :data:`False`) - Whether to use Nesterov momentum. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. + params (iterable of Tensor): An iterable of :class:`torch.Tensor`\s. Specifies what + tensors should be optimized. + lr (float or callable, optional): This is a fixed global scaling factor or a learning + rate scheduler. (default: :const:`1e-2`) + alpha (float, optional): Smoothing constant, the decay used to track the magnitude of + previous gradients. (default: :const:`0.99`) + eps (float, optional): A small numerical constant to avoid dividing by zero when + rescaling. (default: :const:`1e-8`) + weight_decay (float, optional): Weight decay, add L2 penalty to parameters. + (default: :const:`0.0`) + momentum (float, optional): The decay rate used by the momentum term. The momentum is + not used when it is set to :const:`0.0`. (default: :const:`0.0`) + centered (bool, optional): If :data:`True`, use the variance of the past gradients to + rescale the latest gradients. (default: :data:`False`) + initial_scale (float, optional): Initialization of accumulators tracking the magnitude + of previous updates. PyTorch uses :data:`0.0`, TensorFlow 1.x uses :data:`1.0`. When + reproducing results from a paper, verify the value used by the authors. + (default: :data:`0.0`) + nesterov (bool, optional): Whether to use Nesterov momentum. (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of + minimizing. (default: :data:`False`) """ super().__init__( params, diff --git a/torchopt/optim/sgd.py b/torchopt/optim/sgd.py index 223e856e..3da9595a 100644 --- a/torchopt/optim/sgd.py +++ b/torchopt/optim/sgd.py @@ -48,20 +48,21 @@ def __init__( r"""Initialize the SGD optimizer. Args: - params: (iterable of torch.Tensor) - An iterable of :class:`torch.Tensor`\s. Specifies what tensors should be optimized. - lr: This is a fixed global scaling factor. - momentum: (default: :const:`0.0`) - The decay rate used by the momentum term. The momentum is not used when it is set to - :const:`0.0`. - weight_decay: (default: :const:`0.0`) - Weight decay, add L2 penalty to parameters. - dampening: (default: :const:`0.0`) - Dampening for momentum. - nesterov: (default: :data:`False`) - Whether to use Nesterov momentum. - maximize: (default: :data:`False`) - Maximize the params based on the objective, instead of minimizing. + params (iterable of Tensor): An iterable of :class:`torch.Tensor`\s. Specifies what + tensors should be optimized. + lr (float or callable): This is a fixed global scaling factor or a learning rate + scheduler. + momentum (float, optional): The decay rate used by the momentum term. The momentum is + not used when it is set to :const:`0.0`. 
(default: :const:`0.0`) + weight_decay (float, optional): Weight decay, add L2 penalty to parameters. + (default: :const:`0.0`) + dampening (float, optional): Dampening for momentum. (default: :const:`0.0`) + nesterov (bool, optional): Whether to use Nesterov momentum. (default: :data:`False`) + moment_requires_grad (bool, optional): If :data:`True` the momentums will be created + with flag ``requires_grad=True``, this flag is often used in Meta-Learning + algorithms. (default: :data:`False`) + maximize (bool, optional): Maximize the params based on the objective, instead of + minimizing. (default: :data:`False`) """ super().__init__( params, diff --git a/torchopt/pytree.py b/torchopt/pytree.py index 0abcf4fd..d3b2d181 100644 --- a/torchopt/pytree.py +++ b/torchopt/pytree.py @@ -14,9 +14,11 @@ # ============================================================================== """The PyTree utilities.""" +from __future__ import annotations + import functools import operator -from typing import Callable, List, Optional, Tuple +from typing import Callable import optree import optree.typing as typing # pylint: disable=unused-import @@ -47,19 +49,20 @@ def tree_flatten_as_tuple( tree: PyTree[T], - is_leaf: Optional[Callable[[T], bool]] = None, + is_leaf: Callable[[T], bool] | None = None, *, none_is_leaf: bool = False, namespace: str = '', -) -> Tuple[Tuple[T, ...], PyTreeSpec]: +) -> tuple[tuple[T, ...], PyTreeSpec]: """Flatten a pytree to a tuple of leaves and a PyTreeSpec. Args: - tree: The pytree to flatten. - is_leaf: A function that returns :data:`True` if a given node is a leaf. - none_is_leaf: If :data:`True`, None is considered a leaf rather than a internal node with no - children. - namespace: The namespace of custom tree node types. + tree (pytree): The pytree to flatten. + is_leaf (callable or None, optional): An optionally specified function that returns + :data:`True` if a given node is a leaf. (default: :data:`None`) + none_is_leaf (bool, optional): If :data:`True`, :data:`None` is considered a leaf rather + than a internal node with no children. (default: :data:`False`) + namespace (str, optional): The namespace of custom tree node types. (default: :const:`''`) Returns: A tuple of (leaves, treespec). @@ -99,7 +102,7 @@ def tree_add(*trees: PyTree[T]) -> PyTree[T]: def tree_add_scalar_mul( - tree_x: TensorTree, tree_y: TensorTree, alpha: Optional[Scalar] = None + tree_x: TensorTree, tree_y: TensorTree, alpha: Scalar | None = None ) -> TensorTree: """Compute ``tree_x + alpha * tree_y``.""" if alpha is None: @@ -113,7 +116,7 @@ def tree_sub(minuend_tree: PyTree[T], subtrahend_tree: PyTree[T]) -> PyTree[T]: def tree_sub_scalar_mul( - tree_x: TensorTree, tree_y: TensorTree, alpha: Optional[Scalar] = None + tree_x: TensorTree, tree_y: TensorTree, alpha: Scalar | None = None ) -> TensorTree: """Compute ``tree_x - alpha * tree_y``.""" if alpha is None: @@ -190,4 +193,4 @@ def tree_local_value(rref_tree: PyTree[RRef[T]]) -> PyTree[T]: __all__.extend(['tree_as_rref', 'tree_to_here']) -del Callable, List, Optional, Tuple, optree, rpc, Scalar, T, RRef +del Callable, optree, rpc, Scalar, T, RRef diff --git a/torchopt/schedule/polynomial.py b/torchopt/schedule/polynomial.py index 8a8e51e8..d54dbf17 100644 --- a/torchopt/schedule/polynomial.py +++ b/torchopt/schedule/polynomial.py @@ -52,18 +52,17 @@ def polynomial_schedule( """Construct a schedule with polynomial transition from init to end value. Args: - init_value: Initial value for the scalar to be annealed. 
- end_value: End value of the scalar to be annealed. - power: The power of the polynomial used to transition from ``init`` to ``end``. - transition_steps: - Number of steps over which annealing takes place, the scalar starts changing at - ``transition_begin`` steps and completes the transition by - ``transition_begin + transition_steps`` steps. - If ``transition_steps <= 0``, then the entire annealing process is disabled and the - value is held fixed at ``init_value``. - transition_begin: - Must be *positive*. After how many steps to start annealing (before this many steps the - scalar value is held fixed at ``init_value``). + init_value (float or Tensor): Initial value for the scalar to be annealed. + end_value (float or Tensor): End value of the scalar to be annealed. + power (float or Tensor): The power of the polynomial used to transition from ``init`` to + ``end``. + transition_steps (int): Number of steps over which annealing takes place, the scalar starts + changing at ``transition_begin`` steps and completes the transition by + ``transition_begin + transition_steps`` steps. If ``transition_steps <= 0``, then the + entire annealing process is disabled and the value is held fixed at ``init_value``. + transition_begin (int, optional): Must be *positive*. After how many steps to start + annealing (before this many steps the scalar value is held fixed at ``init_value``). + (default: :const:`0`) Returns: schedule: diff --git a/torchopt/transform/add_decayed_weights.py b/torchopt/transform/add_decayed_weights.py index 772e6291..14745766 100644 --- a/torchopt/transform/add_decayed_weights.py +++ b/torchopt/transform/add_decayed_weights.py @@ -32,7 +32,9 @@ # ============================================================================== """Preset transformations for adding weight decay to updates.""" -from typing import Any, Callable, NamedTuple, Optional, Tuple, Union +from __future__ import annotations + +from typing import Any, Callable, NamedTuple from torchopt import pytree from torchopt.base import EmptyState, GradientTransformation, identity @@ -59,7 +61,7 @@ class MaskedNode(NamedTuple): def masked( inner: GradientTransformation, - mask: Union[Any, Callable[[Params], Any]], + mask: OptState | Callable[[Params], OptState] | None = None, ) -> GradientTransformation: """Mask updates so only some are transformed, the rest are passed through. @@ -75,11 +77,12 @@ def masked( of :data:`True`. Args: - inner: Inner transformation to mask. - mask: A tree with same structure as (or a prefix of) the params tree, or a Callable that - returns such a tree given the params/updates. The leaves should be booleans, :data:`True` - for leaves/subtrees you want to apply the transformation to, and :data:`False` for those - you want to skip. The mask must be static for the gradient transformation to be jit-compilable. + inner (GradientTransformation): Inner transformation to mask. + mask (tree of Tensor, callable, or None, optional): A tree with same structure as (or a + prefix of) the params tree, or a function that returns such a tree given the + params/updates. The leaves should be booleans, :data:`True` for leaves/subtrees you want + to apply the transformation to, and :data:`False` for those you want to skip. + (default: :data:`None`) Returns: A :class:`GradientTransformation` wrapping ``inner``. 
@@ -89,14 +92,14 @@ def masked( def _masked_flat( inner: GradientTransformation, - mask: Union[Any, Callable[[Params], Any]], + mask: OptState | Callable[[Params], OptState] | None = None, ) -> GradientTransformation: return _masked(inner, mask, already_flattened=True) def _masked( inner: GradientTransformation, - mask: Union[Any, Callable[[Params], Any]], + mask: OptState | Callable[[Params], OptState] | None = None, *, already_flattened: bool = False, ) -> GradientTransformation: @@ -117,9 +120,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, + params: Params | None = None, inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: mask_tree = mask(updates) if callable(mask) else mask masked_updates = tree_mask(updates, mask_tree) masked_params = None if params is None else tree_mask(params, mask_tree) @@ -145,16 +148,17 @@ def update_fn( def add_decayed_weights( weight_decay: float = 0.0, - mask: Optional[Union[Any, Callable[[Params], Any]]] = None, + mask: OptState | Callable[[Params], OptState] | None = None, ) -> GradientTransformation: """Add parameter scaled by `weight_decay`. Args: - weight_decay: a scalar weight decay rate. - mask: a tree with same structure as (or a prefix of) the params tree, or a Callable that - returns such a pytree given the params/updates. The leaves should be booleans, - :data:`True` for leaves/subtrees you want to apply the transformation to, and - :data:`False` for those you want to skip. + weight_decay (float, optional): A scalar weight decay rate. (default: :const:`0.0`) + mask (tree of Tensor, callable, or None, optional): A tree with same structure as (or a + prefix of) the params tree, or a function that returns such a tree given the + params/updates. The leaves should be booleans, :data:`True` for leaves/subtrees you want + to apply the transformation to, and :data:`False` for those you want to skip. + (default: :data:`None`) Returns: An (init_fn, update_fn) tuple. @@ -168,7 +172,7 @@ def add_decayed_weights( def _add_decayed_weights_flat( weight_decay: float = 0.0, - mask: Optional[Union[Any, Callable[[Params], Any]]] = None, + mask: OptState | Callable[[Params], OptState] | None = None, ) -> GradientTransformation: return _add_decayed_weights( weight_decay=weight_decay, @@ -179,7 +183,7 @@ def _add_decayed_weights_flat( def _add_decayed_weights( weight_decay: float = 0.0, - mask: Optional[Union[Any, Callable[[Params], Any]]] = None, + mask: OptState | Callable[[Params], OptState] | None = None, *, already_flattened: bool = False, ) -> GradientTransformation: @@ -204,9 +208,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, + params: Params | None = None, inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: assert params is not None, ( 'Parameters are required for weight decay. ' 'Call `update(updates, state, params=params)` instead.' 
diff --git a/torchopt/transform/nan_to_num.py b/torchopt/transform/nan_to_num.py index 2c0b9d5e..804f8219 100644 --- a/torchopt/transform/nan_to_num.py +++ b/torchopt/transform/nan_to_num.py @@ -14,7 +14,7 @@ # ============================================================================== """Preset transformations that replaces updates with non-finite values to the given numbers.""" -from typing import Optional, Tuple +from __future__ import annotations from torchopt import pytree from torchopt.base import EmptyState, GradientTransformation @@ -23,8 +23,8 @@ def nan_to_num( nan: float = 0.0, - posinf: Optional[float] = None, - neginf: Optional[float] = None, + posinf: float | None = None, + neginf: float | None = None, ) -> GradientTransformation: """Replace updates with values ``nan`` / ``+inf`` / ``-inf`` to the given numbers. @@ -39,9 +39,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, # pylint: disable=unused-argument + params: Params | None = None, # pylint: disable=unused-argument inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: if inplace: def f(g): diff --git a/torchopt/transform/scale.py b/torchopt/transform/scale.py index 4afac163..639c903e 100644 --- a/torchopt/transform/scale.py +++ b/torchopt/transform/scale.py @@ -31,7 +31,7 @@ # ============================================================================== """Preset transformation for scaling updates by learning rate.""" -from typing import Optional, Tuple +from __future__ import annotations from torchopt import pytree from torchopt.base import EmptyState, GradientTransformation @@ -49,7 +49,7 @@ def scale(step_size: float) -> GradientTransformation: """Scale updates by some fixed scalar ``step_size``. Args: - step_size: A scalar corresponding to a fixed scaling factor for updates. + step_size (float): A scalar corresponding to a fixed scaling factor for updates. Returns: An ``(init_fn, update_fn)`` tuple. @@ -80,9 +80,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, # pylint: disable=unused-argument + params: Params | None = None, # pylint: disable=unused-argument inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: if inplace: def f(g): diff --git a/torchopt/transform/scale_by_adam.py b/torchopt/transform/scale_by_adam.py index 039d31fb..36f30be9 100644 --- a/torchopt/transform/scale_by_adam.py +++ b/torchopt/transform/scale_by_adam.py @@ -33,7 +33,9 @@ # pylint: disable=invalid-name -from typing import NamedTuple, Optional, Tuple +from __future__ import annotations + +from typing import NamedTuple import torch @@ -88,17 +90,17 @@ def scale_by_adam( [Kingma et al, 2014](https://arxiv.org/abs/1412.6980) Args: - b1: (default: :const:`0.9`) - Decay rate for the exponentially weighted average of grads. - b2: (default: :const:`0.999`) - Decay rate for the exponentially weighted average of squared grads. - eps: (default: :const:`1e-8`) - Term added to the denominator to improve numerical stability. - eps_root: (default: :const:`0.0`) - Term added to the denominator inside the square-root to improve + b1 (float, optional): Decay rate for the exponentially weighted average of grads. + (default: :const:`0.9`) + b2 (float, optional): Decay rate for the exponentially weighted average of squared grads. + (default: :const:`0.999`) + eps (float, optional): Term added to the denominator to improve numerical stability. 
+ (default: :const:`1e-8`) + eps_root (float, optional): Term added to the denominator inside the square-root to improve numerical stability when backpropagating gradients through the rescaling. - moment_requires_grad: (default: :data:`False`) - If :data:`True`, states will be created with flag `requires_grad = True`. + (default: :const:`0.0`) + moment_requires_grad (bool, optional): If :data:`True`, states will be created with flag + ``requires_grad = True``. (default: :data:`False`) Returns: An (init_fn, update_fn) tuple. @@ -169,9 +171,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, # pylint: disable=unused-argument + params: Params | None = None, # pylint: disable=unused-argument inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: mu = update_moment.impl( # type: ignore[attr-defined] updates, state.mu, b1, order=1, inplace=inplace, already_flattened=already_flattened ) @@ -218,17 +220,17 @@ def scale_by_accelerated_adam( [Kingma et al, 2014](https://arxiv.org/abs/1412.6980) Args: - b1: (default: :const:`0.9`) - Decay rate for the exponentially weighted average of grads. - b2: (default: :const:`0.999`) - Decay rate for the exponentially weighted average of squared grads. - eps: (default: :const:`1e-8`) - Term added to the denominator to improve numerical stability. - eps_root: (default: :const:`0.0`) - Term added to the denominator inside the square-root to improve + b1 (float, optional): Decay rate for the exponentially weighted average of grads. + (default: :const:`0.9`) + b2 (float, optional): Decay rate for the exponentially weighted average of squared grads. + (default: :const:`0.999`) + eps (float, optional): Term added to the denominator to improve numerical stability. + (default: :const:`1e-8`) + eps_root (float, optional): Term added to the denominator inside the square-root to improve numerical stability when backpropagating gradients through the rescaling. - moment_requires_grad: (default: :data:`False`) - If :data:`True`, states will be created with flag `requires_grad = True`. + (default: :const:`0.0`) + moment_requires_grad (bool, optional): If :data:`True`, states will be created with flag + ``requires_grad = True``. (default: :data:`False`) Returns: An (init_fn, update_fn) tuple. 
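For context, `scale_by_adam` only rescales updates; composing it with a negative `scale` yields a full Adam descent step. A hedged sketch of that composition, assuming `torchopt.chain` and the `torchopt.transform` re-exports are the public entry points, with `apply_updates` as patched in `torchopt/update.py` later in this series:

```python
import torch
import torchopt
from torchopt.transform import scale, scale_by_adam  # import paths assumed

params = {'w': torch.randn(3, requires_grad=True)}
# Adam scaling followed by a negative learning-rate scale gives a descent step.
optimizer = torchopt.chain(scale_by_adam(b1=0.9, b2=0.999, eps=1e-8), scale(-1e-3))
state = optimizer.init(params)

loss = (params['w'] ** 2).sum()
(grad,) = torch.autograd.grad(loss, params['w'])

updates, state = optimizer.update({'w': grad}, state, inplace=False)
new_params = torchopt.apply_updates(params, updates, inplace=False)
```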
@@ -285,9 +287,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, # pylint: disable=unused-argument + params: Params | None = None, # pylint: disable=unused-argument inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: count_inc = inc_count.impl(updates, state.count, already_flattened=True) # type: ignore[attr-defined] op = AdamOp(b1=b1, b2=b2, eps=eps, eps_root=eps_root, inplace=inplace) @@ -303,9 +305,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, # pylint: disable=unused-argument + params: Params | None = None, # pylint: disable=unused-argument inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: count_inc = inc_count.impl(updates, state.count, already_flattened=False) # type: ignore[attr-defined] treespec = pytree.tree_structure(updates, none_is_leaf=True) diff --git a/torchopt/transform/scale_by_rms.py b/torchopt/transform/scale_by_rms.py index 7a685f6b..7a0c8c20 100644 --- a/torchopt/transform/scale_by_rms.py +++ b/torchopt/transform/scale_by_rms.py @@ -31,7 +31,9 @@ # ============================================================================== """Preset transformations for scaling updates by exponential root mean-squared (RMS).""" -from typing import NamedTuple, Optional, Tuple +from __future__ import annotations + +from typing import NamedTuple import torch @@ -61,12 +63,11 @@ def scale_by_rms( [Hinton](www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) Args: - alpha: (default: :const:`0.9`) - Decay rate for the exponentially weighted average of squared grads. - eps: (default: :const:`1e-8`) - Term added to the denominator to improve numerical stability. - initial_scale: (default: :const:`0.0`) - Initial value for second moment + alpha (float, optional): Decay rate for the exponentially weighted average of squared grads. + (default: :const:`0.9`) + eps (float, optional): Term added to the denominator to improve numerical stability. + (default: :const:`1e-8`) + initial_scale (float, optional): Initial value for second moment. (default: :const:`0.0`) Returns: An (init_fn, update_fn) tuple. @@ -121,9 +122,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, # pylint: disable=unused-argument + params: Params | None = None, # pylint: disable=unused-argument inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: nu = update_moment.impl( # type: ignore[attr-defined] updates, state.nu, alpha, order=2, inplace=inplace, already_flattened=already_flattened ) diff --git a/torchopt/transform/scale_by_schedule.py b/torchopt/transform/scale_by_schedule.py index 5556d111..d6e3b0fa 100644 --- a/torchopt/transform/scale_by_schedule.py +++ b/torchopt/transform/scale_by_schedule.py @@ -31,7 +31,9 @@ # ============================================================================== """Preset transformation for scaling updates by learning rate schedules.""" -from typing import NamedTuple, Optional, Tuple +from __future__ import annotations + +from typing import NamedTuple import torch @@ -54,9 +56,8 @@ def scale_by_schedule(step_size_fn: Schedule) -> GradientTransformation: """Scale updates using a custom schedule for the ``step_size``. Args: - step_size_fn: - A function that takes an update count as input and proposes the ``step_size`` to - multiply the updates by. 
+ step_size_fn (callable): A function that takes an update count as input and proposes the + ``step_size`` to multiply the updates by. Returns: An ``(init_fn, update_fn)`` tuple. @@ -90,9 +91,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, # pylint: disable=unused-argument + params: Params | None = None, # pylint: disable=unused-argument inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: if inplace: def f(g, c): # pylint: disable=invalid-name diff --git a/torchopt/transform/scale_by_stddev.py b/torchopt/transform/scale_by_stddev.py index c15a0d6c..228ed707 100644 --- a/torchopt/transform/scale_by_stddev.py +++ b/torchopt/transform/scale_by_stddev.py @@ -33,7 +33,9 @@ # pylint: disable=invalid-name -from typing import NamedTuple, Optional, Tuple +from __future__ import annotations + +from typing import NamedTuple import torch @@ -64,12 +66,11 @@ def scale_by_stddev( [Hinton](www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) Args: - alpha: (default: :const:`0.9`) - Decay rate for the exponentially weighted average of squared grads. - eps: (default: :const:`1e-8`) - Term added to the denominator to improve numerical stability. - initial_scale: (default: :const:`0.0`) - Initial value for second moment + alpha (float, optional): Decay rate for the exponentially weighted average of squared grads. + (default: :const:`0.9`) + eps (float, optional): Term added to the denominator to improve numerical stability. + (default: :const:`1e-8`) + initial_scale (float, optional): Initial value for second moment. (default: :const:`0.0`) Returns: An (init_fn, update_fn) tuple. @@ -125,9 +126,9 @@ def update_fn( updates: Updates, state: OptState, *, - params: Optional[Params] = None, # pylint: disable=unused-argument + params: Params | None = None, # pylint: disable=unused-argument inplace: bool = True, - ) -> Tuple[Updates, OptState]: + ) -> tuple[Updates, OptState]: mu = update_moment.impl( # type: ignore[attr-defined] updates, state.mu, alpha, order=1, inplace=inplace, already_flattened=already_flattened ) diff --git a/torchopt/transform/trace.py b/torchopt/transform/trace.py index 45e043f0..03d2441d 100644 --- a/torchopt/transform/trace.py +++ b/torchopt/transform/trace.py @@ -33,7 +33,9 @@ # pylint: disable=invalid-name -from typing import NamedTuple, Optional, Tuple +from __future__ import annotations + +from typing import NamedTuple import torch @@ -65,14 +67,12 @@ def trace( Both are frequently found in the optimization literature. Args: - momentum: (default: :const:`0.9`) - The decay rate for the trace of past updates. - dampening: (default: :const:`0.0`) - Dampening for momentum. - nesterov: (default: :data:`False`) - Whether to use Nesterov momentum. - moment_requires_grad: (default: :data:`False`) - If :data:`True`, states will be created with flag `requires_grad = True`. + momentum (float, optional): The decay rate for the trace of past updates. + (default: :const:`0.9`) + dampening (float, optional): Dampening for momentum. (default: :const:`0.0`) + nesterov (bool, optional): Whether to use Nesterov momentum. (default: :data:`False`) + moment_requires_grad (bool, optional): If :data:`True`, states will be created with flag + ``requires_grad = True``. (default: :data:`False`) Returns: An (init_fn, update_fn) tuple. 
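For context, `trace` is the momentum accumulator described in the docstring above. A minimal sketch, with the import path assumed from the `torchopt/transform/trace.py` diff:

```python
import torch

from torchopt.transform import trace  # import path assumed

# Heavy-ball momentum: keep an exponentially decayed trace of past updates.
momentum = trace(momentum=0.9, dampening=0.0, nesterov=False)
params = (torch.zeros(5, requires_grad=True),)
state = momentum.init(params)

for _ in range(3):
    updates, state = momentum.update((torch.ones(5),), state, inplace=False)
# With a constant unit gradient the emitted updates follow 1, 1.9, 2.71, ...,
# approaching 1 / (1 - 0.9) = 10 times the raw gradient.
```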
@@ -139,9 +139,9 @@ def update_fn(
         updates: Updates,
         state: OptState,
         *,
-        params: Optional[Params] = None,  # pylint: disable=unused-argument
+        params: Params | None = None,  # pylint: disable=unused-argument
         inplace: bool = True,
-    ) -> Tuple[Updates, OptState]:
+    ) -> tuple[Updates, OptState]:
         nonlocal first_call

         if nesterov:
diff --git a/torchopt/transform/utils.py b/torchopt/transform/utils.py
index a9f02295..77ba58ca 100644
--- a/torchopt/transform/utils.py
+++ b/torchopt/transform/utils.py
@@ -31,6 +31,8 @@
 # ==============================================================================
 """Utilities for the preset transformations."""

+from __future__ import annotations
+
 from collections import deque
 from typing import Any, Callable, Sequence

diff --git a/torchopt/update.py b/torchopt/update.py
index 3fdd38e1..9485896b 100644
--- a/torchopt/update.py
+++ b/torchopt/update.py
@@ -48,11 +48,11 @@ def apply_updates(params: Params, updates: Updates, *, inplace: bool = True) ->
     :func:`tree_map` (e.g. if you want to manipulate updates in custom ways before applying them).

     Args:
-        params: A tree of parameters.
-        updates:
-            A tree of updates, the tree structure and the shape of the leaf nodes must match that
-            of ``params``.
-        inplace: If :data:`True`, will update params in a inplace manner.
+        params (tree of Tensor): A tree of parameters.
+        updates (tree of Tensor): A tree of updates, the tree structure and the shape of the leaf
+            nodes must match that of ``params``.
+        inplace (bool, optional): If :data:`True`, will update params in an in-place manner.
+            (default: :data:`True`)

     Returns:
         Updated parameters, with same structure, shape and type as ``params``.
diff --git a/torchopt/utils.py b/torchopt/utils.py
index 4deaba8b..12adb214 100644
--- a/torchopt/utils.py
+++ b/torchopt/utils.py
@@ -14,21 +14,11 @@
 # ==============================================================================
 """Utilities for TorchOpt."""

+from __future__ import annotations
+
 import copy
 import itertools
-from typing import (
-    TYPE_CHECKING,
-    Dict,
-    List,
-    NamedTuple,
-    Optional,
-    Sequence,
-    Set,
-    Tuple,
-    Union,
-    cast,
-    overload,
-)
+from typing import TYPE_CHECKING, NamedTuple, Sequence, cast, overload

 from typing_extensions import Literal  # Python 3.8+
 from typing_extensions import TypeAlias  # Python 3.10+
@@ -56,32 +46,30 @@ class ModuleState(NamedTuple):
     """Container for module state."""

-    params: Tuple[Dict[str, torch.Tensor], ...]
-    buffers: Tuple[Dict[str, torch.Tensor], ...]
-    visual_contents: Optional[Dict] = None
+    params: tuple[dict[str, torch.Tensor], ...]
+    buffers: tuple[dict[str, torch.Tensor], ...]
+    visual_contents: dict | None = None
     detach_buffers: bool = False

 CopyMode: TypeAlias = Literal['reference', 'copy', 'deepcopy', 'ref', 'clone', 'deepclone']

-def stop_gradient(target: Union[ModuleState, nn.Module, 'MetaOptimizer', TensorTree]) -> None:
+def stop_gradient(target: ModuleState | nn.Module | MetaOptimizer | TensorTree) -> None:
     """Stop the gradient for the input object.

-    Since a tensor use :attr:`grad_fn` to connect itself with the previous computation graph, the
+    Since a tensor uses ``grad_fn`` to connect itself with the previous computation graph, the
     backpropagated gradient will flow over the tensor and continue to flow to the tensors that are
-    connected by :attr:`grad_fn`. Some algorithms requires manually detaching tensors from the
+    connected by ``grad_fn``. Some algorithms require manually detaching tensors from the
     computation graph.
 Note that the :func:`stop_gradient` operation is in-place.

     Args:
-        target: The target that to be detached from the computation graph, it could be a
-            :class:`nn.Module`, :class:`torchopt.MetaOptimizer`, state of the
-            :class:`torchopt.MetaOptimizer`, or just a plain list of tensors.
-        inplace: If :data:`True`, the target will be detached in-place. if :data:`Frue`, this
-            function will return a detached copy of the target. The in-place operation is fast and
-            memory efficient but may raise backpropagation error.
+        target (ModuleState, nn.Module, MetaOptimizer, or tree of Tensor): The target to be
+            detached from the computation graph; it could be a :class:`nn.Module`,
+            :class:`torchopt.MetaOptimizer`, state of the :class:`torchopt.MetaOptimizer`, or just
+            a plain list of tensors.
     """
     # pylint: disable-next=import-outside-toplevel
     from torchopt.optim.meta.base import MetaOptimizer
@@ -108,7 +96,7 @@ def extract_state_dict(
     target: nn.Module,
     *,
     by: CopyMode = 'reference',
-    device: Optional[Device] = None,
+    device: Device | None = None,
     with_buffers: bool = True,
     enable_visual: bool = False,
     visual_prefix: str = '',
@@ -118,57 +106,62 @@
 @overload
 def extract_state_dict(
-    target: 'MetaOptimizer',
+    target: MetaOptimizer,
     *,
     by: CopyMode = 'reference',
-    device: Optional[Device] = None,
+    device: Device | None = None,
     with_buffers: bool = True,
     enable_visual: bool = False,
     visual_prefix: str = '',
-) -> Tuple[OptState, ...]:  # pragma: no cover
+) -> tuple[OptState, ...]:  # pragma: no cover
     ...

 # pylint: disable-next=too-many-branches,too-many-locals
 def extract_state_dict(
-    target: Union[nn.Module, 'MetaOptimizer'],
+    target: nn.Module | MetaOptimizer,
     *,
     by: CopyMode = 'reference',
-    device: Optional[Device] = None,
+    device: Device | None = None,
     with_buffers: bool = True,
     detach_buffers: bool = False,
     enable_visual: bool = False,
     visual_prefix: str = '',
-) -> Union[ModuleState, Tuple[OptState, ...]]:
+) -> ModuleState | tuple[OptState, ...]:
     """Extract target state.

-    Since a tensor use :attr:`grad_fn` to connect itself with the previous computation graph, the
+    Since a tensor uses ``grad_fn`` to connect itself with the previous computation graph, the
     backpropagated gradient will flow over the tensor and continue to flow to the tensors that are
-    connected by :attr:`grad_fn`. Some algorithms requires manually detaching tensors from the
+    connected by ``grad_fn``. Some algorithms require manually detaching tensors from the
     computation graph.

     Note that the extracted state is a reference, which means any in-place operator will affect
     the target that the state is extracted from.

     Args:
-        target: It could be a :class:`nn.Module` or :class:`torchopt.MetaOptimizer`.
-        by: The extract policy of tensors in the target.
+        target (nn.Module or MetaOptimizer): It could be a :class:`nn.Module` or
+            :class:`torchopt.MetaOptimizer`.
+        by (str, optional): The extract policy of tensors in the target.
            (default: :const:`'reference'`)

            - :const:`'reference'`: The extracted tensors will be references to the original tensors.
            - :const:`'copy'`: The extracted tensors will be clones of the original tensors. This
-              makes the copied tensors have :attr:`grad_fn` to be a ``<CloneBackward>`` function
-              points to the original tensors.
+              makes the copied tensors have ``grad_fn`` to be a ``<CloneBackward>`` function pointing
+              to the original tensors.
            - :const:`'deepcopy'`: The extracted tensors will be deep-copied from the original
              tensors. The deep-copied tensors will detach from the original computation graph.
-        device: If specified, move the extracted state to the specified device.
-        with_buffers: Extract buffer together with parameters, this argument is only used if the
-            input target is :class:`nn.Module`.
-        detach_buffers: Whether to detach the reference to the buffers, this argument is only used
-            if the input target is :class:`nn.Module` and ``by='reference'``.
-        enable_visual: Add additional annotations, which could be used in computation graph
-            visualization. Currently, this flag only has effect on :class:`nn.Module` but we will
-            support :class:`torchopt.MetaOptimizer` later.
-        visual_prefix: Prefix for the visualization annotations.
+        device (Device or None, optional): If specified, move the extracted state to the specified
+            device. (default: :const:`None`)
+        with_buffers (bool, optional): Extract buffers together with parameters; this argument is
+            only used if the input target is :class:`nn.Module`. (default: :const:`True`)
+        detach_buffers (bool, optional): Whether to detach the reference to the buffers; this
+            argument is only used if the input target is :class:`nn.Module` and ``by='reference'``.
+            (default: :const:`False`)
+        enable_visual (bool, optional): Add additional annotations, which could be used in
+            computation graph visualization. Currently, this flag only has an effect on
+            :class:`nn.Module` but we will support :class:`torchopt.MetaOptimizer` later.
+            (default: :const:`False`)
+        visual_prefix (str, optional): Prefix for the visualization annotations.
+            (default: :const:`''`)

     Returns:
         The extracted state of the input object.
@@ -228,9 +221,9 @@ def clone_detach_(t: torch.Tensor) -> torch.Tensor:
     else:
         visual_contents = None

-    params: List[Dict[str, torch.Tensor]] = []
-    buffers: List[Dict[str, torch.Tensor]] = []
-    memo: Set[nn.Module] = set()
+    params: list[dict[str, torch.Tensor]] = []
+    buffers: list[dict[str, torch.Tensor]] = []
+    memo: set[nn.Module] = set()

     def update_params(container):
         if len(container) > 0:
@@ -287,12 +280,12 @@ def get_variable(t):
 def extract_module_containers(
     module: nn.Module, with_buffers: bool = True
-) -> Tuple[ModuleTensorContainers, ModuleTensorContainers]:
+) -> tuple[ModuleTensorContainers, ModuleTensorContainers]:
     """Extract the references to the containers of parameters and buffers from a module."""
     if isinstance(module, nn.Module):
-        params: List[TensorContainer] = []
-        buffers: List[TensorContainer] = []
-        memo: Set[nn.Module] = set()
+        params: list[TensorContainer] = []
+        buffers: list[TensorContainer] = []
+        memo: set[nn.Module] = set()

         def update_container(container, items):
             if len(items) > 0:
@@ -316,8 +309,8 @@ def update_container(container, items):
 def recover_state_dict(
-    target: Union[nn.Module, 'MetaOptimizer'],
-    state: Union[ModuleState, Sequence[OptState]],
+    target: nn.Module | MetaOptimizer,
+    state: ModuleState | Sequence[OptState],
 ) -> None:
     """Recover state.

@@ -327,8 +320,8 @@
     modified.

     Args:
-        target: Target that need to recover.
-        state: The recovering state.
+        target (nn.Module or MetaOptimizer): Target that needs to be recovered.
+        state (ModuleState or sequence of tree of Tensor): The state to be recovered.
""" # pylint: disable-next=import-outside-toplevel from torchopt.optim.meta.base import MetaOptimizer @@ -344,10 +337,7 @@ def clone_detach_(t: torch.Tensor) -> torch.Tensor: return nn.Parameter(t.clone().detach_(), requires_grad=t.requires_grad) return t.clone().detach_().requires_grad_(t.requires_grad) - buffers = cast( - Tuple[Dict[str, torch.Tensor], ...], - pytree.tree_map(clone_detach_, buffers), # type: ignore[arg-type] - ) + buffers = pytree.tree_map(clone_detach_, buffers) # type: ignore[assignment,arg-type] for tgt, src in itertools.chain( zip(params_containers, params), @@ -367,19 +357,19 @@ def module_clone( *, by: CopyMode = 'reference', detach_buffers: bool = False, - device: Optional[Device] = None, + device: Device | None = None, ) -> nn.Module: # pragma: no cover ... @overload def module_clone( - target: 'MetaOptimizer', + target: MetaOptimizer, *, by: CopyMode = 'reference', detach_buffers: bool = False, - device: Optional[Device] = None, -) -> 'MetaOptimizer': # pragma: no cover + device: Device | None = None, +) -> MetaOptimizer: # pragma: no cover ... @@ -389,34 +379,36 @@ def module_clone( *, by: CopyMode = 'reference', detach_buffers: bool = False, - device: Optional[Device] = None, + device: Device | None = None, ) -> TensorTree: # pragma: no cover ... # pylint: disable-next=too-many-locals def module_clone( - target: Union[nn.Module, 'MetaOptimizer', TensorTree], + target: nn.Module | MetaOptimizer | TensorTree, *, by: CopyMode = 'reference', detach_buffers: bool = False, - device: Optional[Device] = None, -) -> Union[nn.Module, 'MetaOptimizer', TensorTree]: + device: Device | None = None, +) -> nn.Module | MetaOptimizer | TensorTree: """Clone a module. Args: - target: The target to be cloned. - by: The extract policy of tensors in the target. + target (nn.Module, MetaOptimizer, or tree of Tensor): The target to be cloned. + by (str, optional): The extract policy of tensors in the target. (default: :const:`'reference'`) - :const:`'reference'`: The extracted tensors will be references to the original tensors. - :const:`'copy'`: The extracted tensors will be clones of the original tensors. This - makes the copied tensors have :attr:`grad_fn` to be a ```` function - points to the original tensors. + makes the copied tensors have ``grad_fn`` to be a ```` function points + to the original tensors. - :const:`'deepcopy'`: The extracted tensors will be deep-copied from the original tensors. The deep-copied tensors will detach from the original computation graph. - detach_buffers: Whether to detach the reference to the buffers, this argument is only used - if the input target is :class:`nn.Module` and ``by='reference'``. - device: If specified, move the cloned module to the specified device. + detach_buffers (bool, optional): Whether to detach the reference to the buffers, this + argument is only used if the input target is :class:`nn.Module` and ``by='reference'``. + (default: :const:`False`) + device (Device or None, optional): If specified, move the cloned module to the specified + device. (default: :const:`None`) Returns: The cloned module. @@ -499,7 +491,7 @@ def module_detach_(target: nn.Module) -> nn.Module: # pragma: no cover @overload -def module_detach_(target: 'MetaOptimizer') -> 'MetaOptimizer': # pragma: no cover +def module_detach_(target: MetaOptimizer) -> MetaOptimizer: # pragma: no cover ... 
@@ -509,12 +501,13 @@ def module_detach_(target: TensorTree) -> TensorTree:  # pragma: no cover

 def module_detach_(
-    target: Union[ModuleState, nn.Module, 'MetaOptimizer', TensorTree]
-) -> Union[ModuleState, nn.Module, 'MetaOptimizer', TensorTree]:
+    target: ModuleState | nn.Module | MetaOptimizer | TensorTree,
+) -> ModuleState | nn.Module | MetaOptimizer | TensorTree:
     """Detach a module from the computation graph.

     Args:
-        target: The target to be detached.
+        target (ModuleState, nn.Module, MetaOptimizer, or tree of Tensor): The
+            target to be detached.

     Returns:
         The detached module.
diff --git a/torchopt/visual.py b/torchopt/visual.py
index e8145240..7afe65a4 100644
--- a/torchopt/visual.py
+++ b/torchopt/visual.py
@@ -17,8 +17,10 @@
 # ==============================================================================
 """Computation graph visualization."""

+from __future__ import annotations
+
 from collections import namedtuple
-from typing import Generator, Iterable, Mapping, Optional, Union, cast
+from typing import Generator, Iterable, Mapping, cast

 import torch
 from graphviz import Digraph
@@ -71,14 +73,13 @@ def truncate(s):  # pylint: disable=invalid-name
 # pylint: disable-next=too-many-branches,too-many-statements,too-many-locals
 def make_dot(
     var: TensorOrTensors,
-    params: Optional[
-        Union[
-            Mapping[str, torch.Tensor],
-            ModuleState,
-            Generator,
-            Iterable[Union[Mapping[str, torch.Tensor], ModuleState, Generator]],
-        ]
-    ] = None,
+    params: (
+        Mapping[str, torch.Tensor]
+        | ModuleState
+        | Generator
+        | Iterable[Mapping[str, torch.Tensor] | ModuleState | Generator]
+        | None
+    ) = None,
     show_attrs: bool = False,
     show_saved: bool = False,
     max_attr_chars: int = 50,
@@ -89,7 +90,7 @@
     and is either blue, orange, or green:

     - **Blue**
-        Reachable leaf tensors that requires grad (tensors whose :attr:`grad` fields will be
+        Reachable leaf tensors that require grad (tensors whose ``grad`` fields will be
         populated during :meth:`backward`).
     - **Orange**
         Saved tensors of custom autograd functions as well as those saved by built-in backward
@@ -100,16 +101,16 @@
     If any output is a view, we represent its base tensor with a dark green node.

     Args:
-        var: Output tensor.
-        params: ([dict of (name, tensor) or state_dict])
-            Parameters to add names to node that requires grad.
-        show_attrs: Whether to display non-tensor attributes of backward nodes
-            (Requires PyTorch version >= 1.9)
-        show_saved: Whether to display saved tensor nodes that are not by custom autograd
-            functions. Saved tensor nodes for custom functions, if present, are always displayed.
-            (Requires PyTorch version >= 1.9)
-        max_attr_chars: If ``show_attrs`` is :data:`True`, sets max number of characters to display
-            for any given attribute.
+        var (Tensor or sequence of Tensor): Output tensor.
+        params (dict[str, Tensor], ModuleState, iterable of tuple[str, Tensor], or None, optional):
+            Parameters to add names to nodes that require grad. (default: :data:`None`)
+        show_attrs (bool, optional): Whether to display non-tensor attributes of backward nodes.
+            (default: :data:`False`)
+        show_saved (bool, optional): Whether to display saved tensor nodes that are not by custom
+            autograd functions. Saved tensor nodes for custom functions, if present, are always
+            displayed. (default: :data:`False`)
+        max_attr_chars (int, optional): If ``show_attrs`` is :data:`True`, sets max number of
+            characters to display for any given attribute.
            (default: :const:`50`)
    """
    param_map = {}

From 48bed065cb263240915816762c7287631078f495 Mon Sep 17 00:00:00 2001
From: Xuehai Pan
Date: Wed, 15 Feb 2023 16:00:19 +0000
Subject: [PATCH 24/24] ver: bump version to 0.7.0

---
 CHANGELOG.md        | 27 ++++++++++++++++++++++-----
 CITATION.cff        |  4 ++--
 torchopt/version.py |  2 +-
 3 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 3a3ad174..927cb1db 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -13,6 +13,26 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ### Added

+-
+
+### Changed
+
+-
+
+### Fixed
+
+-
+
+### Removed
+
+-
+
+------
+
+## [0.7.0] - 2023-02-16
+
+### Added
+
 - Update Sphinx documentation by [@XuehaiPan](https://github.com/XuehaiPan) and [@Benjamin-eecs](https://github.com/Benjamin-eecs) and [@waterhorse1](https://github.com/waterhorse1) and [@JieRen98](https://github.com/JieRen98) in [#127](https://github.com/metaopt/torchopt/pull/127).
 - Add object-oriented modules support for zero-order differentiation by [@XuehaiPan](https://github.com/XuehaiPan) in [#125](https://github.com/metaopt/torchopt/pull/125).
@@ -26,10 +46,6 @@
 - Update tests and fix corresponding bugs by [@XuehaiPan](https://github.com/XuehaiPan) and [@Benjamin-eecs](https://github.com/Benjamin-eecs) and [@JieRen98](https://github.com/JieRen98) in [#78](https://github.com/metaopt/torchopt/pull/78).
 - Fix memory leak in implicit MAML omniglot few-shot classification example with OOP APIs by [@XuehaiPan](https://github.com/XuehaiPan) in [#113](https://github.com/metaopt/torchopt/pull/113).

-### Removed
-
--
-
 ------

 ## [0.6.0] - 2022-12-07
@@ -150,7 +166,8 @@
 ------

-[Unreleased]: https://github.com/metaopt/torchopt/compare/v0.6.0...HEAD
+[Unreleased]: https://github.com/metaopt/torchopt/compare/v0.7.0...HEAD
+[0.7.0]: https://github.com/metaopt/torchopt/compare/v0.6.0...v0.7.0
 [0.6.0]: https://github.com/metaopt/torchopt/compare/v0.5.0...v0.6.0
 [0.5.0]: https://github.com/metaopt/torchopt/compare/v0.4.3...v0.5.0
 [0.4.3]: https://github.com/metaopt/torchopt/compare/v0.4.2...v0.4.3
diff --git a/CITATION.cff b/CITATION.cff
index aa997b82..83a207e6 100644
--- a/CITATION.cff
+++ b/CITATION.cff
@@ -32,7 +32,7 @@ authors:
     family-names: Yang
   affiliation: Peking University
   email: yaodong.yang@pku.edu.cn
-version: 0.6.0
-date-released: "2022-12-07"
+version: 0.7.0
+date-released: "2023-02-16"
 license: Apache-2.0
 repository-code: "https://github.com/metaopt/torchopt"
diff --git a/torchopt/version.py b/torchopt/version.py
index 6d66f945..b8136a22 100644
--- a/torchopt/version.py
+++ b/torchopt/version.py
@@ -14,7 +14,7 @@
 # ==============================================================================
 """TorchOpt: a high-performance optimizer library built upon PyTorch."""

-__version__ = '0.6.0'
+__version__ = '0.7.0'
 __license__ = 'Apache License, Version 2.0'
 __author__ = 'TorchOpt Contributors'
 __release__ = False
