Praphul Singh

I work as a Deep Learning Engineer at Oracle, Bangalore. My interests lie in Natural Language Processing, Domain Adaptation, Deep Reinforcement Learning, Graph Neural Networks, and Capsule Networks.

I graduated from the Indian Institute of Technology, Kanpur with a major in Electrical Engineering in 2019. I took a variety of courses on NLP, Visual Recognition, Data Mining, Neural Networks, and Image Processing to strengthen my grasp of machine learning.

At Oracle, I have been working with the SR Analysis team on language understanding models for highly imbalanced datasets, exploring techniques based on LSTMs, multi-head attention, Transformers, and graph convolutional networks. I also have a Reinforcement Learning series published on the Oracle Data Science Blog.

I also have patent and research paper submissions in progress, which I intend to list here once they are approved.

Email  /  LinkedIn  /  Resume  /  Google Scholar  /  Personal Blog

Paper Implementations

Text-Based Graph Convolutional Networks

Explained the basics of graph neural networks, followed by graph convolutional networks and their use for language understanding. Also implemented a research paper based on GCNs in TensorFlow Keras.
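
To give a flavour of the core operation, here is a minimal sketch (not the repository code), assuming a normalized adjacency matrix a_hat = D^-1/2 (A + I) D^-1/2 has been precomputed:

import tensorflow as tf

class GraphConv(tf.keras.layers.Layer):
    """One propagation step: relu(a_hat @ x @ w)."""
    def __init__(self, in_features, units):
        super().__init__()
        self.w = self.add_weight(name="w", shape=(in_features, units),
                                 initializer="glorot_uniform", trainable=True)

    def call(self, x, a_hat):
        # x: node features (num_nodes, in_features); a_hat: normalized adjacency (num_nodes, num_nodes)
        return tf.nn.relu(tf.matmul(a_hat, tf.matmul(x, self.w)))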


Capsule Networks and Dynamic Routing Between Capsules

Explained the basics of capsule networks and the dynamic routing mechanism between capsules, and implemented a capsule network in PyTorch for image classification.
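
A minimal PyTorch sketch of the routing-by-agreement step (illustrative, not the repository code), assuming u_hat holds the prediction vectors of shape (batch, in_caps, out_caps, out_dim):

import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Non-linearity that keeps a vector's orientation but squashes its length into (0, 1).
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

def dynamic_routing(u_hat, num_iters=3):
    b = torch.zeros(u_hat.shape[:-1], device=u_hat.device)   # routing logits (batch, in_caps, out_caps)
    for _ in range(num_iters):
        c = F.softmax(b, dim=-1)                              # coupling coefficients over output capsules
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)              # weighted sum over input capsules
        v = squash(s)                                         # output capsules (batch, out_caps, out_dim)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)          # agreement update
    return v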


Gradient-weighted Class Activation Mapping

Explained and implemented Grad-CAM in PyTorch for finding the class activation map for a given (image, class) pair.
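
A minimal PyTorch sketch of the idea (hypothetical names, not the repository code), assuming model is a CNN in eval mode and target_layer is its last convolutional layer:

import torch
import torch.nn.functional as F

def grad_cam(model, image, class_idx, target_layer):
    activations, gradients = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

    logits = model(image.unsqueeze(0))          # (1, num_classes)
    model.zero_grad()
    logits[0, class_idx].backward()             # gradient of the chosen class score
    h1.remove(); h2.remove()

    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * activations["a"]).sum(dim=1))     # weighted sum of the feature maps
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:], mode="bilinear")
    return cam.squeeze()                                       # heatmap at input resolution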


Incremental Learning Without Forgetting

TensorFlow Keras implementation of a continual learning research paper that addresses catastrophic forgetting by pseudo-labelling the new data.
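
A minimal sketch of the combined loss (illustrative, not the repository code), assuming old_soft_labels are the frozen old model's logits on the new data (the "pseudo labels") and alpha balances distillation against the new-task loss:

import tensorflow as tf

def lwf_loss(new_logits_old_classes, new_logits_new_classes,
             old_soft_labels, new_labels, temperature=2.0, alpha=0.5):
    # Distillation term: keep the new model's predictions on the old classes close
    # to the old model's pseudo labels for the new data.
    soft_targets = tf.nn.softmax(old_soft_labels / temperature)
    soft_preds = tf.nn.log_softmax(new_logits_old_classes / temperature)
    distill = -tf.reduce_mean(tf.reduce_sum(soft_targets * soft_preds, axis=-1))

    # Standard cross-entropy on the new task's ground-truth labels.
    ce = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=new_labels, logits=new_logits_new_classes))
    return alpha * distill + (1.0 - alpha) * ce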


Generating Handwritten Sequences Using LSTMs and Mixture Density Networks

Implemented and successfully trained a model in TensorFlow based on the research paper by DeepMind, which uses LSTM layers followed by a mixture density network to generate handwriting strokes.
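
A minimal sketch of the mixture-density loss, simplified to K diagonal 2-D Gaussians over the pen offsets (the paper additionally models a correlation term and an end-of-stroke Bernoulli); illustrative only, not the repository code:

import math
import tensorflow as tf

def mdn_loss(params, target, num_mixtures):
    # params: (batch, 5*K) from a Dense layer on top of the LSTM output, split into
    # mixture weights, means, and log-stddevs; target: next pen offset (batch, 2).
    pi, mu, log_sigma = tf.split(
        params, [num_mixtures, 2 * num_mixtures, 2 * num_mixtures], axis=-1)
    log_pi = tf.nn.log_softmax(pi)                                   # (batch, K)
    mu = tf.reshape(mu, [-1, num_mixtures, 2])
    sigma = tf.exp(tf.reshape(log_sigma, [-1, num_mixtures, 2]))
    x = tf.expand_dims(target, 1)                                    # (batch, 1, 2)

    # Log-density of each diagonal Gaussian component, then the mixture's
    # negative log-likelihood via log-sum-exp over the components.
    log_prob = -0.5 * tf.reduce_sum(
        ((x - mu) / sigma) ** 2 + 2.0 * tf.math.log(sigma) + math.log(2.0 * math.pi), axis=-1)
    return -tf.reduce_mean(tf.reduce_logsumexp(log_pi + log_prob, axis=-1))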


Bidirectional Attention Flow Model for Machine Comprehension

Implemented the research paper in TensorFlow, which achieves a fine-grained representation of the context and query by learning a shared similarity matrix and a bidirectional attention mechanism, i.e., Context2Query and Query2Context.
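
A minimal sketch of the attention step (illustrative, not the repository code), assuming c and q are the contextual encodings of context and query and w is the learned trilinear weight vector of size 3d:

import tensorflow as tf

def bidaf_attention(c, q, w):
    # c: context encodings (batch, T, d); q: query encodings (batch, J, d).
    d = c.shape[-1]
    w_c, w_q, w_cq = w[:d], w[d:2 * d], w[2 * d:]
    # Trilinear similarity S[b, t, j] = w . [c_t ; q_j ; c_t * q_j],
    # computed without materializing the concatenation.
    s = (tf.einsum("btd,d->bt", c, w_c)[:, :, None]
         + tf.einsum("bjd,d->bj", q, w_q)[:, None, :]
         + tf.einsum("btd,bjd->btj", c * w_cq, q))              # (batch, T, J)

    # Context2Query: each context word attends over all query words.
    c2q = tf.matmul(tf.nn.softmax(s, axis=-1), q)               # (batch, T, d)
    # Query2Context: one attended context vector from the per-word max similarity.
    b = tf.nn.softmax(tf.reduce_max(s, axis=-1), axis=-1)       # (batch, T)
    q2c = tf.matmul(b[:, None, :], c)                           # (batch, 1, d)

    # Query-aware context representation G; q2c broadcasts to every position.
    return tf.concat([c, c2q, c * c2q, c * q2c], axis=-1)       # (batch, T, 4d)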


Multi-Head Self-Attention

Studied and explained the research paper "Attention Is All You Need", which proposes a new way of attending to the encoded information by means of multiple attention heads.
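
A minimal NumPy sketch of the mechanism (framework-agnostic, not from the post), assuming the projection matrices wq, wk, wv, wo are given and d_model is divisible by num_heads:

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, wo, num_heads):
    # x: (seq_len, d_model); wq, wk, wv, wo: (d_model, d_model) learned projections.
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    def split_heads(t):   # (seq_len, d_model) -> (num_heads, seq_len, d_head)
        return t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split_heads(x @ wq), split_heads(x @ wk), split_heads(x @ wv)
    # Scaled dot-product attention, computed independently per head.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)          # (heads, seq, seq)
    out = softmax(scores) @ v                                     # (heads, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)        # concatenate the heads
    return out @ wo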


Breaking into Transformers

Explained and implemented the fundamental building blocks of a BERT transformer in TensorFlow Keras.
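
A minimal sketch of one such encoder block as a Keras model (illustrative hyperparameters, not the repository code), with post-layer-norm residual connections as in the original Transformer:

import tensorflow as tf

def encoder_block(d_model=256, num_heads=4, d_ff=1024, dropout=0.1):
    inputs = tf.keras.Input(shape=(None, d_model))
    # Multi-head self-attention sub-layer with residual connection and layer norm.
    attn = tf.keras.layers.MultiHeadAttention(
        num_heads=num_heads, key_dim=d_model // num_heads)(inputs, inputs)
    attn = tf.keras.layers.Dropout(dropout)(attn)
    x = tf.keras.layers.LayerNormalization(epsilon=1e-6)(inputs + attn)
    # Position-wise feed-forward sub-layer.
    ff = tf.keras.layers.Dense(d_ff, activation="gelu")(x)
    ff = tf.keras.layers.Dense(d_model)(ff)
    ff = tf.keras.layers.Dropout(dropout)(ff)
    outputs = tf.keras.layers.LayerNormalization(epsilon=1e-6)(x + ff)
    return tf.keras.Model(inputs, outputs)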


Inspired by this website.