Xception

class torch_ecg.models.Xception(in_channels: int, **config)[source]

Bases: torch.nn.modules.container.Sequential, torch_ecg.utils.utils_nn.SizeMixin, torch_ecg.utils.misc.CitationMixin

Xception model.

Xception is an architecture that uses depthwise separable convolutions to build light-weight deep neural networks, as described in [1]. The official (Keras) implementation is available at [2], and a PyTorch implementation at [3]. Xception is not yet widely used in the field of ECG analysis, but has the potential to be highly effective for this task.

Parameters
  • in_channels (int) – Number of channels in the input.

  • config (dict) – Other hyper-parameters of the Module; see the corresponding config file. Keyword arguments that must be set inside the 3 sub-dicts, namely "entry_flow", "middle_flow", and "exit_flow", are documented in the docstrings of the corresponding classes.

References

[1]

Chollet, François. "Xception: Deep Learning with Depthwise Separable Convolutions." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.

[2]

https://github.com/keras-team/keras-applications/blob/master/keras_applications/xception.py

[3]

https://github.com/Cadene/pretrained-models.pytorch/blob/master/pretrainedmodels/models/xception.py
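
Example

A minimal construction sketch. The config name xception_vanilla and the 12-lead input are assumptions for illustration; check torch_ecg.model_configs for the actual default config and the keys required inside its "entry_flow", "middle_flow", and "exit_flow" sub-dicts.

import torch
from torch_ecg.models import Xception
# Assumption: ``xception_vanilla`` is the library's default Xception config;
# consult ``torch_ecg.model_configs`` for the exact name and its sub-dicts.
from torch_ecg.model_configs import xception_vanilla

model = Xception(in_channels=12, **xception_vanilla)  # 12-lead ECG input (assumed)
sig = torch.randn(2, 12, 5000)  # batch of 2 records, 5000 samples per lead
features = model(sig)           # feature map produced by the backbone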

compute_output_shape(seq_len: Optional[int] = None, batch_size: Optional[int] = None) → Sequence[Optional[int]][source]

Compute the output shape of the model.

Parameters
  • seq_len (int, optional) – Length of the input tensors.

  • batch_size (int, optional) – Batch size of the input tensors.

Returns

output_shape – The output shape of the module.

Return type

Sequence[Optional[int]]
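
A short sketch of querying the output shape without running data through the model; the model instance and the sequence length of 5000 are assumptions carried over from the example above.

out_shape = model.compute_output_shape(seq_len=5000, batch_size=2)
# e.g. (2, out_channels, reduced_seq_len); dimensions that cannot be
# determined (such as an unspecified batch size) are returned as None
print(out_shape)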

forward(input: torch.Tensor) → torch.Tensor[source]

Forward pass of the model.

Parameters

input (torch.Tensor) – Input signal tensor, of shape (batch_size, n_channels, seq_len).

Returns

output – Output feature tensor, of shape (batch_size, channels, seq_len), where channels is the number of output channels of the exit flow.

Return type

torch.Tensor
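
As a hedged usage sketch (reusing the model instance and 12-lead input assumed in the example above), the forward pass simply maps a batch of signals to a batch of feature maps:

sig = torch.randn(2, 12, 5000)  # (batch_size, n_channels, seq_len)
output = model(sig)             # equivalent to model.forward(sig)
assert output.ndim == 3         # (batch_size, channels, seq_len)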