
Linear

class torch.ao.nn.qat.Linear(in_features, out_features, bias=True, qconfig=None, device=None, dtype=None)[source]

A linear module with a FakeQuantize module attached to its weight, used for quantization aware training.

We adopt the same interface as torch.nn.Linear; please see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for documentation.

Similar to torch.nn.Linear, with FakeQuantize modules initialized to their defaults.

Variables

weight_fake_quant – fake quant module applied to the weight
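
As a sketch of typical usage (not part of the official documentation), the module can be constructed directly by passing a QAT qconfig; the example below assumes the default QAT qconfig for the "fbgemm" (x86) backend, but any qconfig that supplies a weight fake-quantize constructor should work:

    import torch
    import torch.ao.nn.qat as nnqat
    from torch.ao.quantization import get_default_qat_qconfig

    # A QAT qconfig supplies FakeQuantize constructors for activations and
    # weights; a qconfig must be provided when constructing the QAT module.
    qconfig = get_default_qat_qconfig("fbgemm")

    qat_linear = nnqat.Linear(4, 8, bias=True, qconfig=qconfig)

    # The forward pass fake-quantizes the weight before the linear op, so the
    # output is an ordinary float tensor and gradients flow as usual.
    x = torch.randn(2, 4)
    out = qat_linear(x)
    print(out.shape)  # torch.Size([2, 8])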

classmethod from_float(mod, use_precomputed_fake_quant=False)[source]

Create a qat module from a float module or qparams_dict.

Parameters

mod – a float module, either produced by torch.ao.quantization utilities or directly from the user
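
A minimal sketch of converting a single float layer, assuming eager-mode quantization and the default "fbgemm" QAT qconfig; from_float expects the float module to carry a qconfig attribute (the eager-mode prepare_qat utility attaches qconfigs and performs this swap across a whole model):

    import torch.nn as nn
    import torch.ao.nn.qat as nnqat
    from torch.ao.quantization import get_default_qat_qconfig

    float_linear = nn.Linear(4, 8)
    # from_float reads the qconfig attached to the float module, so it must
    # be set before conversion.
    float_linear.qconfig = get_default_qat_qconfig("fbgemm")

    # The float weight and bias are copied into the QAT module, which then
    # trains with fake-quantized weights.
    qat_linear = nnqat.Linear.from_float(float_linear)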
