ConvTranspose1d
- class torch.ao.nn.quantized.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)[source]
Applies a 1D transposed convolution operator over an input image composed of several input planes. For details on input arguments, parameters, and implementation see ConvTranspose1d.

Note: Currently only the QNNPACK engine is implemented. Please set torch.backends.quantized.engine = 'qnnpack'.

For special notes, please refer to Conv1d.
- Variables:
  - weight (Tensor) – packed tensor derived from the learnable weight parameter.
  - scale (Tensor) – scalar for the output scale.
  - zero_point (Tensor) – scalar for the output zero point.
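The scale and zero_point variables define the affine mapping that torch.quantize_per_tensor also uses: q = round(x / scale) + zero_point, clamped to the integer range of the quantized dtype. A minimal pure-Python sketch of that mapping (the clamp bounds below assume quint8, i.e. [0, 255]):

```python
def quantize(x: float, scale: float, zero_point: int) -> int:
    """Affine quantization: real value -> quint8 integer."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the quint8 range


def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Inverse mapping: quint8 integer -> approximate real value."""
    return (q - zero_point) * scale
```

For example, with scale=0.5 and zero_point=0, the real value 1.0 maps to the integer 2, and dequantizing 2 recovers 1.0 exactly; values outside the representable range saturate at the clamp bounds.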
See ConvTranspose2d for other attributes.

Examples:
>>> torch.backends.quantized.engine = 'qnnpack'
>>> import torch
>>> from torch.ao.nn import quantized as nnq
>>> # With default padding and a given stride
>>> m = nnq.ConvTranspose1d(16, 33, 3, stride=2)
>>> # Larger kernel with stride and padding
>>> m = nnq.ConvTranspose1d(16, 33, 5, stride=2, padding=4)
>>> input = torch.randn(20, 16, 50)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> # The exact output size can also be specified as an argument
>>> input = torch.randn(1, 16, 12)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> downsample = nnq.Conv1d(16, 16, 3, stride=2, padding=1)
>>> upsample = nnq.ConvTranspose1d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(q_input)
>>> h.size()
torch.Size([1, 16, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12])
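The output_size argument in the last example resolves a length ambiguity inherent to transposed convolution: several input lengths downsample to the same length, so the inverse is not unique. The shape arithmetic can be checked with the standard length formulas; a plain-Python sketch mirroring the sizes used above:

```python
def conv1d_out_len(l_in, kernel_size, stride=1, padding=0, dilation=1):
    # Conv1d output length (floor division)
    return (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1


def conv_transpose1d_out_len(l_in, kernel_size, stride=1, padding=0,
                             output_padding=0, dilation=1):
    # ConvTranspose1d output length
    return ((l_in - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)


# Downsample then upsample, as in the example above:
h = conv1d_out_len(12, kernel_size=3, stride=2, padding=1)            # 6
default = conv_transpose1d_out_len(h, 3, stride=2, padding=1)         # 11
# Passing output_size=12 makes the module choose output_padding=1:
exact = conv_transpose1d_out_len(h, 3, stride=2, padding=1,
                                 output_padding=1)                    # 12
```

Without output_size the module would produce length 11; requesting length 12 is satisfied by an effective output_padding of 1.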