Deep Separable Hypercomplex Networks

Authors

Nazmul Shahadat, Anthony S. Maida

DOI:

https://doi.org/10.32473/flairs.36.133540

Keywords:

Hypercomplex networks, Separable convolution, Separable hypercomplex networks, Quaternion CNNs, Vectormap CNNs

Abstract

Deep hypercomplex-inspired convolutional neural networks (CNNs) have recently enhanced feature extraction for image classification by sharing weights across input channels, which improves the networks' ability to learn representations. Hypercomplex-inspired networks, however, still incur higher computational costs than standard CNNs.
This paper reduces that cost by decomposing a quaternion 2D convolution module into two consecutive separable vectormap convolution modules.
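
A minimal PyTorch sketch of this decomposition follows, assuming a vectormap convolution that circularly shares one weight bank across channel groups; the class names, initialization, and activation placement are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectormapConv2d(nn.Module):
    """Vectormap-style convolution: channels are split into `dim`
    components and one weight bank is reused across components via a
    circular shift, cutting parameters by roughly a factor of dim."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 dim=4, stride=1, padding=0):
        super().__init__()
        assert in_channels % dim == 0 and out_channels % dim == 0
        self.dim, self.stride, self.padding = dim, stride, padding
        self.weight = nn.Parameter(0.02 * torch.randn(
            dim, out_channels // dim, in_channels // dim,
            kernel_size, kernel_size))

    def forward(self, x):
        xs = x.chunk(self.dim, dim=1)  # split input into components
        outs = []
        for i in range(self.dim):
            # Every output component mixes all input components, but the
            # weight banks are shared via a circular index shift.
            outs.append(sum(
                F.conv2d(xs[j], self.weight[(i + j) % self.dim],
                         stride=self.stride, padding=self.padding)
                for j in range(self.dim)))
        return torch.cat(outs, dim=1)

class SeparableVectormapBlock(nn.Module):
    """Depthwise spatial filtering followed by vectormap channel mixing,
    analogous to a depthwise-separable convolution."""

    def __init__(self, channels, dim=4):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels)
        self.pointwise = VectormapConv2d(channels, channels,
                                         kernel_size=1, dim=dim)

    def forward(self, x):
        return self.pointwise(F.relu(self.depthwise(x)))
```

Two such blocks in sequence would then stand in for one quaternion 2D convolution module, e.g. `nn.Sequential(SeparableVectormapBlock(64), SeparableVectormapBlock(64))`.
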
In addition, we use 4D and 5D parameterized hypercomplex multiplication (PHM)-based fully connected layers. Incorporating both yields our proposed hypercomplex CNN block, a novel architecture that can be stacked to construct deep separable hypercomplex networks (SHNNs) for image classification.
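
A PHM fully connected layer builds its weight matrix as a sum of Kronecker products, which is what makes 4D and 5D variants possible by varying a single hyperparameter n. Below is a minimal sketch following the published PHM formulation; the class name and initialization are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """PHM fully connected layer: the weight matrix is assembled as
    W = sum_i kron(A_i, S_i), so an n-dimensional PHM layer needs
    roughly 1/n of the parameters of a standard dense layer."""

    def __init__(self, in_features, out_features, n=4):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n = n
        # n small "rule" matrices (n x n) plus n shared weight blocks.
        self.A = nn.Parameter(0.02 * torch.randn(n, n, n))
        self.S = nn.Parameter(0.02 * torch.randn(
            n, out_features // n, in_features // n))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Sum of Kronecker products yields the full weight matrix.
        W = sum(torch.kron(self.A[i], self.S[i]) for i in range(self.n))
        return x @ W.t() + self.bias
```

Setting n=4 or n=5 gives the 4D and 5D cases; for n=5, the layer's input and output widths must be divisible by 5.
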
We conduct experiments on the CIFAR, SVHN, and Tiny ImageNet datasets and achieve better performance with fewer trainable parameters and FLOPs. Our proposed model achieves almost 2% higher performance on the CIFAR and SVHN datasets and more than 3% higher on Tiny ImageNet, while using 84%, 35%, and 51% fewer parameters than ResNet, quaternion, and vectormap networks, respectively.
We also achieve state-of-the-art performance on the CIFAR benchmarks among networks in hypercomplex space.

Published

08-05-2023

How to Cite

Shahadat, N., & Maida, A. S. (2023). Deep Separable Hypercomplex Networks. The International FLAIRS Conference Proceedings, 36(1). https://doi.org/10.32473/flairs.36.133540

Issue

Vol. 36 No. 1 (2023)

Section

Special Track: Neural Networks and Data Mining