fix(README): update image link and future plan #56

Merged: 6 commits, Aug 9, 2022

6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -11,11 +11,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Added

- Add question/help/support issue template by [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#43](https://github.com/metaopt/TorchOpt/pull/43).

### Changed

- Update image link in README to support PyPI rendering by [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#56](https://github.com/metaopt/TorchOpt/pull/56).

### Fixed

- Fix CUDA build for accelerated OP by [@XuehaiPan](https://github.com/XuehaiPan) in [#53](https://github.com/metaopt/TorchOpt/pull/53).
- Fix gamma error in MAML-RL implementation by [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#47](https://github.com/metaopt/TorchOpt/pull/47).


### Removed

9 changes: 5 additions & 4 deletions README.md
@@ -2,7 +2,7 @@
<!-- markdownlint-disable html -->

<div align="center">
<img src="image/logo-large.png" width="75%" />
<img src="https://github.com/metaopt/TorchOpt/raw/HEAD/image/logo-large.png" width="75%" />
</div>

![Python 3.7+](https://img.shields.io/badge/Python-3.7%2B-brightgreen.svg)
@@ -113,7 +113,7 @@ params = torchopt.apply_updates(params, updates, inplace=False)
Meta-Learning has gained enormous attention in both Supervised Learning and Reinforcement Learning. Meta-Learning algorithms often contain a bi-level optimization process, with an *inner loop* updating the network parameters and an *outer loop* updating the meta-parameters. The figure below illustrates the basic formulation of meta-optimization in Meta-Learning. The main feature is that the gradients of the *outer loss* back-propagate through all `inner.step` operations.

<div align="center">
<img src="/image/TorchOpt.png" width="85%" />
<img src="https://github.com/metaopt/TorchOpt/raw/HEAD/image/TorchOpt.png" width="85%" />
</div>
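To make this formulation concrete, here is a minimal sketch of a single differentiable inner step followed by a meta-gradient, using the functional API from the hunk above (`torchopt.adam`, `optimizer.update`, `torchopt.apply_updates`); the toy least-squares data and the `w`/`meta` names are placeholder assumptions, not code from this PR:

```python
import torch
import torchopt

# Toy bi-level problem (placeholder): the inner loop fits `w`, and the
# outer loss differentiates through the inner update back to `meta`.
w = torch.zeros(3, requires_grad=True)
meta = torch.tensor(0.5, requires_grad=True)   # meta-parameter

optimizer = torchopt.adam(lr=0.1)              # functional, Optax-like optimizer
params = (w,)
opt_state = optimizer.init(params)

x, y = torch.randn(8, 3), torch.randn(8)

# inner.step: keep the graph (create_graph=True) and update out-of-place
# (inplace=False) so gradients can later flow back to `meta`.
inner_loss = ((x @ params[0] - y) ** 2).mean() * meta
grads = torch.autograd.grad(inner_loss, params, create_graph=True)
updates, opt_state = optimizer.update(grads, opt_state, inplace=False)
params = torchopt.apply_updates(params, updates, inplace=False)

# Outer step: the outer loss back-propagates through the inner update.
outer_loss = ((x @ params[0] - y) ** 2).mean()
(meta_grad,) = torch.autograd.grad(outer_loss, (meta,))
```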

Since network parameters become nodes of the computation graph, a flexible Meta-Learning library should let users manually control the gradient graph connection, which means users need access to the network parameters and optimizer states to detach or connect the computation graph by hand. In the PyTorch design, network parameters and optimizer states are members of the network (i.e., `torch.nn.Module`) or the optimizer (i.e., `torch.optim.Optimizer`), and this design makes it significantly harder for users to control network parameters or optimizer states. Previous differentiable-optimizer repositories, [`higher`](https://github.com/facebookresearch/higher) and [`learn2learn`](https://github.com/learnables/learn2learn), follow the PyTorch design, which leads to inflexible APIs.
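As a rough sketch of this manual graph control, the snippet below checkpoints, detaches, and restores state with TorchOpt's documented `MetaAdam`, `extract_state_dict`, `recover_state_dict`, and `stop_gradient` helpers; the linear network and random data are placeholder assumptions:

```python
import torch
import torchopt

net = torch.nn.Linear(4, 1)              # placeholder network
optim = torchopt.MetaAdam(net, lr=0.1)   # differentiable optimizer

# Checkpoint the network and optimizer state before the inner loop.
net_state = torchopt.extract_state_dict(net)
optim_state = torchopt.extract_state_dict(optim)

x, y = torch.randn(8, 4), torch.randn(8, 1)
inner_loss = ((net(x) - y) ** 2).mean()
optim.step(inner_loss)                   # differentiable in-place update

# Either cut the computation graph at the current parameters ...
torchopt.stop_gradient(net)
torchopt.stop_gradient(optim)

# ... or roll parameters and optimizer state back to the checkpoint.
torchopt.recover_state_dict(net, net_state)
torchopt.recover_state_dict(optim, optim_state)
```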
@@ -191,7 +191,7 @@ One can think of the scale procedures on gradients of optimizer algorithms as a
Here we evaluate the performance using the MAML-Omniglot code with the inner-loop Adam optimizer on GPU. We compare the run time of the overall algorithm and of the meta-optimization (outer-loop optimization) under different network architectures and inner-step numbers. We choose [`higher`](https://github.com/facebookresearch/higher) as our baseline. The figure below illustrates that our accelerated Adam achieves at least a $1/3$ efficiency improvement over the baseline.

<div align="center">
<img src="image/time.png" width="80%" />
<img src="https://github.com/metaopt/TorchOpt/raw/HEAD/image/time.png" width="80%" />
</div>

Notably, the operator fusion not only increases performance but also helps simplify the computation graph, which will be discussed in the next section.
@@ -205,7 +205,7 @@ Complex gradient flow in meta-learning brings in a great challenge for managing
The figure below shows the visualization result. Compared with [`torchviz`](https://github.com/szagoruyko/pytorchviz), TorchOpt fuses the operations within `Adam` together (orange) to reduce complexity and provide a simpler visualization.

<div align="center">
<img src="image/torchviz_torchopt.jpg" width="80%" />
<img src="https://github.com/metaopt/TorchOpt/raw/HEAD/image/torchviz_torchopt.jpg" width="80%" />
</div>
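A minimal sketch of producing such a plot is shown below; the `enable_visual`/`visual_prefix` arguments and `torchopt.visual.make_dot` follow TorchOpt's visualization API as we understand it, while the tiny network and node labels are placeholder assumptions:

```python
import torch
import torchopt

net = torch.nn.Linear(2, 1)              # placeholder network
optim = torchopt.MetaAdam(net, lr=0.1)

x = torch.randn(4, 2)

# Tag states so graph nodes get readable names in the rendered plot.
net_state_0 = torchopt.extract_state_dict(net, enable_visual=True, visual_prefix='step0.')
loss = net(x).mean()
optim.step(loss)
net_state_1 = torchopt.extract_state_dict(net, enable_visual=True, visual_prefix='step1.')

outer_loss = net(x).mean()
graph = torchopt.visual.make_dot(outer_loss, [net_state_0, net_state_1, {'outer_loss': outer_loss}])
graph.render('torchopt_graph', format='svg')   # writes torchopt_graph.svg
```

`make_dot` returns a `graphviz` digraph, so any format `graphviz` supports can be rendered.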

--------------------------------------------------------------------------------
@@ -258,6 +258,7 @@ pip3 install --no-build-isolation --editable .

## Future Plan

+ - [x] CPU-accelerated optimizer
- [ ] Support general implicit differentiation with functional programming.
- [ ] Support more optimizers such as AdamW, RMSProp.
