feat: meta-gradient module #101

Merged
merged 14 commits on Nov 1, 2022
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -13,6 +13,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ### Added

+- Add object-oriented modules support for implicit meta-gradient by [@XuehaiPan](https://github.com/XuehaiPan) in [#101](https://github.com/metaopt/torchopt/pull/101).
 - Bump PyTorch version to 1.13.0 by [@XuehaiPan](https://github.com/XuehaiPan) in [#104](https://github.com/metaopt/torchopt/pull/104).
 - Add zero-order gradient estimation by [@JieRen98](https://github.com/JieRen98) in [#93](https://github.com/metaopt/torchopt/pull/93).
 - Add RPC-based distributed training support and add distributed MAML example by [@XuehaiPan](https://github.com/XuehaiPan) in [#83](https://github.com/metaopt/torchopt/pull/83).
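
For context on the first entry above: the new object-oriented API lets an inner-loop problem be declared as a module subclass. The sketch below is illustrative rather than canonical. It assumes the subclass contract introduced by this PR, namely that objective defines the inner loss, that solve runs the inner optimization (which need not be differentiated through, since meta-gradients are recovered via the implicit function theorem), and that self.parameters() yields only the inner parameters; the InnerNet name, shapes, and hyperparameters are made up.

import torch
import torchopt
from torchopt.diff.implicit.nn import ImplicitMetaGradientModule


class InnerNet(ImplicitMetaGradientModule):
    def __init__(self, meta_net: torch.nn.Module):
        super().__init__()
        self.meta_net = meta_net          # meta-module: differentiated implicitly
        self.net = torch.nn.Linear(4, 1)  # inner parameters: solved to optimality

    def forward(self, x):
        return self.net(x)

    def objective(self, x, y):
        # Inner loss whose stationarity defines the implicit gradient.
        return torch.nn.functional.mse_loss(self(x), y) + self.meta_net(x).mean()

    def solve(self, x, y):
        # Plain inner loop; any solver works since it is not backpropagated through.
        params = tuple(self.parameters())
        optimizer = torch.optim.SGD(params, lr=0.1)
        with torch.enable_grad():
            for _ in range(100):
                optimizer.zero_grad()
                loss = self.objective(x, y)
                loss.backward(inputs=params)
                optimizer.step()
        return self


meta_net = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
inner_net = InnerNet(meta_net)
inner_net.solve(x, y)
outer_loss = inner_net(x).mean()
outer_loss.backward()  # populates meta_net.weight.grad / meta_net.bias.grad implicitly
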
3 changes: 2 additions & 1 deletion conda-recipe.yaml
@@ -54,10 +54,11 @@ dependencies:
   - gxx = 10
   - nvidia/label/cuda-11.7.1::cuda-nvcc
   - nvidia/label/cuda-11.7.1::cuda-cudart-dev
-  - patchelf >= 0.9
+  - patchelf >= 0.14
   - pybind11

   # Misc
+  - optree >= 0.3.0
   - typing-extensions >= 4.0.0
   - numpy
   - matplotlib-base
2 changes: 2 additions & 0 deletions docs/conda-recipe.yaml
@@ -33,6 +33,7 @@ dependencies:
   # Learning
   - pytorch::pytorch >= 1.13  # sync with project.dependencies
   - pytorch::cpuonly
+  - pytorch::pytorch-mutex = *=*cpu*
   - pip:
       - torchviz
       - sphinxcontrib-katex  # for documentation
@@ -47,6 +48,7 @@ dependencies:
   - pybind11

   # Misc
+  - optree >= 0.3.0
   - typing-extensions >= 4.0.0
   - numpy
   - matplotlib-base
10 changes: 10 additions & 0 deletions docs/source/api/api.rst
@@ -139,12 +139,22 @@ Implicit differentiation
 .. autosummary::

     custom_root
+    nn.ImplicitMetaGradientModule

 Custom solvers
 ~~~~~~~~~~~~~~

 .. autofunction:: custom_root

+
+Implicit Meta-Gradient Module
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. currentmodule:: torchopt.diff.implicit.nn
+
+.. autoclass:: ImplicitMetaGradientModule
+    :members:
+
 ------

 Linear system solving
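
To complement the class-based entry above, here is a minimal sketch of the functional custom_root decorator. It assumes the convention shown in the project README: the optimality function mirrors the solver's signature with the solution in place of the first argument, and argnums selects which solver arguments are differentiated. The ridge-regression setup (optimality, ridge_solve) is purely illustrative.

import torch
from torchopt.diff.implicit import custom_root


def optimality(w, lam, X, y):
    # Stationarity of the ridge objective: X^T (X w - y) + lam * w = 0 at w*.
    return X.T @ (X @ w - y) + lam * w


@custom_root(optimality, argnums=1)
def ridge_solve(w_init, lam, X, y):
    # Closed-form inner solve; w_init only pins down the solution's shape.
    A = X.T @ X + lam * torch.eye(X.shape[1])
    return torch.linalg.solve(A, X.T @ y)


X, y = torch.randn(20, 5), torch.randn(20)
lam = torch.tensor(0.1, requires_grad=True)
w_star = ridge_solve(torch.zeros(5), lam, X, y)
w_star.sum().backward()  # d(sum of w*)/d(lam) via the implicit function theorem
print(lam.grad)
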
22 changes: 21 additions & 1 deletion docs/source/conf.py
@@ -25,6 +25,7 @@
 # add these directories to sys.path here. If the directory is relative to the
 # documentation root, use os.path.abspath to make it absolute, like shown here.
 #
+import logging
 import os
 import pathlib
 import sys
@@ -43,6 +44,24 @@ def get_version() -> str:
     return version.__version__


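+# Suppress spurious "name 'TensorTree' is not defined" warnings that
+# sphinx_autodoc_typehints emits for recursive forward references in type aliases.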
+try:
+    import sphinx_autodoc_typehints
+except ImportError:
+    pass
+else:
+
+    class RecursiveForwardRefFilter(logging.Filter):
+        def filter(self, record):
+            if (
+                "name 'TensorTree' is not defined" in record.getMessage()
+                or "name 'OptionalTensorTree' is not defined" in record.getMessage()
+            ):
+                return False
+            return super().filter(record)
+
+    sphinx_autodoc_typehints._LOGGER.logger.addFilter(RecursiveForwardRefFilter())
+
+
 # -- Project information -----------------------------------------------------

 project = 'TorchOpt'
@@ -75,7 +94,7 @@ def get_version() -> str:
     'sphinxcontrib.bibtex',
     'sphinxcontrib.katex',
     'sphinx_autodoc_typehints',
-    'myst_nb',  # This is used for the .ipynb notebooks
+    'myst_nb',  # this is used for the .ipynb notebooks
 ]

 if not os.getenv('READTHEDOCS', None):
@@ -120,6 +139,7 @@ def get_version() -> str:
     'exclude-members': '__module__, __dict__, __repr__, __str__, __weakref__',
 }
 autoclass_content = 'both'
+simplify_optional_unions = False

 # -- Options for bibtex -----------------------------------------------------
4 changes: 4 additions & 0 deletions docs/source/spelling_wordlist.txt
@@ -88,3 +88,7 @@ deepcopy
 deepclone
 RRef
 rref
+ints
+Karush
+Kuhn
+Tucker
5 changes: 2 additions & 3 deletions torchopt/alias/adamw.py
@@ -36,8 +36,7 @@
 from torchopt.alias.utils import flip_sign_and_add_weight_decay, scale_by_neg_lr
 from torchopt.combine import chain_flat
 from torchopt.transform import add_decayed_weights, scale_by_accelerated_adam, scale_by_adam
-from torchopt.typing import Params  # pylint: disable=unused-import
-from torchopt.typing import GradientTransformation, ScalarOrSchedule
+from torchopt.typing import GradientTransformation, Params, ScalarOrSchedule


 __all__ = ['adamw']
@@ -51,7 +50,7 @@ def adamw(
     weight_decay: float = 1e-2,
     *,
     eps_root: float = 0.0,
-    mask: Optional[Union[Any, Callable[['Params'], Any]]] = None,
+    mask: Optional[Union[Any, Callable[[Params], Any]]] = None,
     moment_requires_grad: bool = False,
     maximize: bool = False,
     use_accelerated_op: bool = False,
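
The retyped mask argument accepts either a pytree matching the parameters or, as typed here, a callable mapping the parameters to a pytree of booleans that marks which leaves receive weight decay. A hedged sketch of the functional API under that reading; the decay_mask policy (skip one-dimensional leaves such as biases) and the dict layout are illustrative:

import torch
import torchopt

params = {
    'weight': torch.randn(3, 3, requires_grad=True),
    'bias': torch.zeros(3, requires_grad=True),
}

# Decay only matrix-shaped leaves; leave biases undecayed.
def decay_mask(tree):
    return {name: leaf.ndim >= 2 for name, leaf in tree.items()}

optimizer = torchopt.adamw(lr=1e-3, weight_decay=1e-2, mask=decay_mask)
opt_state = optimizer.init(params)

loss = (params['weight'].sum() + params['bias'].sum()).pow(2)
grads = dict(zip(params, torch.autograd.grad(loss, tuple(params.values()))))
updates, opt_state = optimizer.update(grads, opt_state, params=params)
params = torchopt.apply_updates(params, updates)
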
5 changes: 5 additions & 0 deletions torchopt/diff/implicit/__init__.py
@@ -14,4 +14,9 @@
 # ==============================================================================
 """Implicit Meta-Gradient."""

+from torchopt.diff.implicit import nn
 from torchopt.diff.implicit.decorator import custom_root
+from torchopt.diff.implicit.nn import ImplicitMetaGradientModule
+
+
+__all__ = ['custom_root', 'ImplicitMetaGradientModule']
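
With these re-exports, both entry points are importable directly from the subpackage; a quick illustrative check:

from torchopt.diff.implicit import ImplicitMetaGradientModule, custom_root, nn

# Both import paths name the same class after this change.
assert nn.ImplicitMetaGradientModule is ImplicitMetaGradientModule
assert callable(custom_root)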