Mohit Goyal


  • Learning Activation Functions: A new paradigm of understanding Neural Networks

    There has been limited research in the domain of activation functions, most of which has focused on improving the ease of optimization of neural networks (NNs). However, to develop a deeper understanding of deep learning, it is important to look at the non-linear component of NNs more carefully. In this paper, we aim to provide a generic form of activation function along with appropriate mathematical grounding, so as to allow for insights into the working of NNs in the future. We propose "Self-Learnable Activation Functions" (SLAF), which are learned during training and are capable of approximating most of the existing activation functions. SLAF is given as a weighted sum of pre-defined basis elements which can provide a good approximation of the optimal activation function. The coefficients of these basis elements allow a search over the entire space of continuous functions (which includes all the conventional activations). We propose various training routines which can be used to achieve good performance with SLAF-equipped neural networks (SLNNs). We prove that SLNNs can approximate any neural network with Lipschitz-continuous activations to any arbitrary error, highlighting their capacity and possible equivalence with standard NNs. Moreover, SLNNs can be completely represented as a collection of finite-degree polynomials up to the very last layer, obviating several hyperparameters such as width and depth. Since the optimization of SLNNs remains a challenge, we show that using SLAF along with standard activations (such as ReLU) can provide performance improvements with only a small increase in the number of parameters. (A minimal illustrative sketch of the SLAF idea follows below.)

    06/23/2019 ∙ by Mohit Goyal, et al.

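    The following sketch shows the SLAF idea in PyTorch: an activation expressed as a weighted sum of fixed basis elements (here a polynomial basis {1, x, x^2, ...}) with trainable coefficients. The basis choice, initialization and layer sizes are assumptions for illustration, not the paper's exact configuration.

        import torch
        import torch.nn as nn

        class SLAF(nn.Module):
            """Activation as a trainable weighted sum of polynomial basis elements."""
            def __init__(self, degree: int = 3):
                super().__init__()
                self.degree = degree
                # One trainable coefficient per basis element, shared across units.
                self.coeffs = nn.Parameter(torch.zeros(degree + 1))
                self.coeffs.data[1] = 1.0  # start close to the identity map

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # Stack the powers x^0 .. x^degree and mix them with learned weights.
                basis = torch.stack([x ** k for k in range(self.degree + 1)], dim=-1)
                return basis @ self.coeffs

        # Usage: drop SLAF in wherever a fixed activation such as ReLU would go.
        net = nn.Sequential(nn.Linear(16, 32), SLAF(degree=3), nn.Linear(32, 1))
        y = net(torch.randn(8, 16))
        print(y.shape)  # torch.Size([8, 1])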

  • Detection of Glottal Closure Instants using Deep Dilated Convolutional Neural Networks

    Glottal Closure Instants (GCIs) correspond to the temporal locations of significant excitation to the vocal tract occurring during the production of voiced speech. Detection of GCIs from speech signals is a well-studied problem, given its importance in speech processing. Most existing approaches for GCI detection adopt a two-stage approach: (i) transformation of the speech signal into a representative signal in which GCIs are better localized, and (ii) extraction of GCIs from the representative signal obtained in the first stage. The former stage is accomplished using signal-processing techniques based on the principles of speech production, and the latter with heuristic algorithms such as dynamic programming and peak-picking. These methods are thus task-specific and rely on the methods used for representative-signal extraction. In this paper, however, we formulate GCI detection from a representation-learning perspective, where an appropriate representation is implicitly learned from the raw speech samples. Specifically, GCI detection is cast as a supervised multi-task learning problem solved using a deep dilated convolutional neural network that jointly optimizes a classification and a regression cost (a minimal sketch of this setup appears below). The learning capabilities of the proposed model are demonstrated through several experiments on standard datasets. The results compare well with state-of-the-art algorithms while performing better in the presence of real-world non-stationary noise.

    04/26/2018 ∙ by Prathosh A. P., et al.

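    Below is a minimal sketch of the multi-task setup described above: a stack of 1-D dilated convolutions over raw speech with two heads, one classifying each sample as GCI/non-GCI and one regressing an auxiliary target. The channel widths, dilation schedule and regression target are assumptions for illustration, not the paper's exact architecture.

        import torch
        import torch.nn as nn

        class DilatedGCINet(nn.Module):
            def __init__(self, channels: int = 32, num_layers: int = 5):
                super().__init__()
                layers, in_ch = [], 1
                for i in range(num_layers):
                    d = 2 ** i  # exponentially growing dilation widens the receptive field
                    layers += [nn.Conv1d(in_ch, channels, 3, padding=d, dilation=d), nn.ReLU()]
                    in_ch = channels
                self.backbone = nn.Sequential(*layers)
                self.cls_head = nn.Conv1d(channels, 1, 1)  # per-sample GCI logits
                self.reg_head = nn.Conv1d(channels, 1, 1)  # per-sample regression output

            def forward(self, x):
                h = self.backbone(x)
                return self.cls_head(h), self.reg_head(h)

        # Jointly optimize a classification and a regression cost.
        model = DilatedGCINet()
        speech = torch.randn(4, 1, 1600)  # batch of raw-speech segments
        gci_labels = torch.randint(0, 2, (4, 1, 1600)).float()
        reg_target = torch.randn(4, 1, 1600)  # stand-in auxiliary target
        logits, reg = model(speech)
        loss = (nn.functional.binary_cross_entropy_with_logits(logits, gci_labels)
                + nn.functional.mse_loss(reg, reg_target))
        loss.backward()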

  • DeepZip: Lossless Data Compression using Recurrent Neural Networks

    Sequential data is being generated at an unprecedented pace in various forms, including text and genomic data. This creates the need for efficient compression mechanisms to enable better storage, transmission and processing of such data. To solve this problem, many existing compressors attempt to learn models for the data and perform prediction-based compression. Since neural networks are known to be universal function approximators capable of learning arbitrarily complex mappings, and in practice show excellent performance on prediction tasks, we explore and devise methods to compress sequential data using neural-network predictors. We combine recurrent neural network predictors with an arithmetic coder and losslessly compress a variety of synthetic, text and genomic datasets (a minimal sketch of this pairing appears below). The proposed compressor outperforms Gzip on the real datasets and achieves near-optimal compression for the synthetic datasets. The results also help to understand why and where neural networks are good alternatives to traditional finite context models.

    11/20/2018 ∙ by Mohit Goyal, et al.

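    A minimal sketch of the predictor-plus-coder pairing: an RNN assigns a probability to each next symbol, and an arithmetic coder would spend about -log2 p(symbol) bits on it. For brevity the sketch reports that ideal code length rather than implementing the coder itself; the model size and toy alphabet are assumptions.

        import math
        import torch
        import torch.nn as nn

        VOCAB = 4  # e.g. a DNA alphabet {A, C, G, T}

        class SymbolPredictor(nn.Module):
            def __init__(self, hidden: int = 32):
                super().__init__()
                self.embed = nn.Embedding(VOCAB, hidden)
                self.rnn = nn.GRU(hidden, hidden, batch_first=True)
                self.out = nn.Linear(hidden, VOCAB)

            def forward(self, seq):
                h, _ = self.rnn(self.embed(seq))
                return self.out(h)  # next-symbol logits at every position

        model = SymbolPredictor()
        seq = torch.randint(0, VOCAB, (1, 128))        # toy input sequence
        log_probs = model(seq[:, :-1]).log_softmax(-1)
        # Total bits an arithmetic coder driven by these predictions would need:
        nll = nn.functional.nll_loss(log_probs.squeeze(0), seq[0, 1:], reduction="sum")
        print(f"~{nll.item() / math.log(2):.1f} bits for {seq.shape[1] - 1} symbols")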

  • Achievable Rates of Attack Detection Strategies in Echo-Assisted Communication

    We consider an echo-assisted communication model wherein block-coded messages transmitted by a source reach the destination as multiple noisy copies. We address adversarial attacks on such models wherein a subset of the received copies at the destination are rendered unreliable by an adversary. In particular, we study a non-persistent attack model in which the adversary attacks 50% of the copies, requiring the destination to detect the attacked copies within every codeword before combining them to recover the information bits. Our main objective is to compute the achievable rates of practical attack-detection strategies as a function of their false-positive and miss-detection rates. However, due to the intractability of obtaining closed-form expressions for the mutual information, we present a new framework to approximate the achievable rates in terms of the false-positive rates under special conditions. We show that the approximate rates offered by our framework are lower-bounded by those of conservative countermeasures, thereby giving rise to interesting questions on code-design criteria at the source. Finally, we showcase the approximate rates achieved by traditional as well as neural-network-based attack-detection strategies, and study their applicability to detecting attacks on block-coded messages of short block lengths. (An illustrative detect-then-combine sketch follows below.)

    01/21/2019 ∙ by Mohit Goyal, et al.

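    A minimal simulation of the detect-then-combine idea: several noisy copies of a BPSK block arrive, an adversary corrupts roughly half of them, and a per-copy energy test flags attacked copies before the surviving ones are averaged. The attack model (extra Gaussian noise), detector and threshold are assumptions for illustration, not the paper's strategies.

        import numpy as np

        rng = np.random.default_rng(0)
        n, copies, noise_std, attack_std, thresh = 1000, 4, 0.5, 2.0, 1.5

        bits = rng.integers(0, 2, n)
        x = 1.0 - 2.0 * bits                       # BPSK: bit 0 -> +1, bit 1 -> -1
        rx = x + noise_std * rng.normal(size=(copies, n))
        attacked = rng.random(copies) < 0.5        # non-persistent attack on ~50% of copies
        rx[attacked] += attack_std * rng.normal(size=(int(attacked.sum()), n))

        # Energy test: attacked copies show visibly higher per-sample power.
        flag = rx.var(axis=1) > thresh             # false-positives and miss-detections arise here
        combined = rx[~flag].mean(axis=0) if (~flag).any() else rx.mean(axis=0)
        ber = np.mean((combined < 0).astype(int) != bits)
        print(f"flagged copies: {flag.tolist()}, BER after combining: {ber:.4f}")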