APTx: Better Activation Function than MISH, SWISH, and ReLU’s Variants used in Deep Learning

Authors

  • Ravin Kumar¹*
    ¹ Department of Computer Science, Meerut Institute of Engineering and Technology, Meerut-250005, Uttar Pradesh, India.

DOI:

https://doi.org/10.51483/IJAIML.2.2.2022.56-61

Keywords:

Activation functions, ReLU, Leaky ReLU, ELU, SWISH, MISH, Neural networks

Abstract

Activation functions introduce non-linearity in deep neural networks. This
non-linearity helps neural networks learn faster and more efficiently from the
dataset. In deep learning, many activation functions have been developed and are
used depending on the type of problem statement. ReLU's variants, SWISH, and MISH
are the go-to activation functions. The MISH function is considered to have similar
or even better performance than SWISH, and much better performance than ReLU. In
this paper, we propose an activation function named APTx which behaves similarly
to MISH but requires fewer mathematical operations to compute. The lower
computational requirement of APTx speeds up model training and thereby also
reduces the hardware requirement for the deep learning model.
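For reference, below is a minimal PyTorch sketch comparing SWISH, MISH, and APTx. The abstract does not state the formula, so the sketch assumes the definition APTx(x) = (α + tanh(βx)) · γx with default parameters α = 1, β = 1, γ = 1/2 as given in the paper body; treat those parameter values as assumptions rather than the authoritative reference implementation.

```python
import torch
import torch.nn.functional as F

def swish(x, beta=1.0):
    # SWISH: x * sigmoid(beta * x)
    return x * torch.sigmoid(beta * x)

def mish(x):
    # MISH: x * tanh(softplus(x)) -- composes softplus and tanh
    return x * torch.tanh(F.softplus(x))

def aptx(x, alpha=1.0, beta=1.0, gamma=0.5):
    # APTx (assumed form): (alpha + tanh(beta * x)) * gamma * x
    # A single tanh replaces MISH's softplus + tanh composition,
    # which is where the saving in mathematical operations comes from.
    return (alpha + torch.tanh(beta * x)) * gamma * x

# Quick comparison on a small input range
x = torch.linspace(-3.0, 3.0, 7)
print(mish(x))
print(aptx(x))
```

With the assumed defaults, APTx tracks the shape of MISH closely while evaluating only one transcendental function per input, which is consistent with the abstract's claim of reduced computation.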

Published

2022-07-05

How to Cite

Kumar, R. (2022). APTx: Better Activation Function than MISH, SWISH, and ReLU’s Variants used in Deep Learning. International Journal of Artificial Intelligence and Machine Learning, 2(2), 56–61. https://doi.org/10.51483/IJAIML.2.2.2022.56-61
