Bharatram Natarajan


2021

MRE : Multi Relationship Extractor for Persona based Empathetic Conversational Model
Bharatram Natarajan | Abhijit Nargund
Proceedings of the 18th International Conference on Natural Language Processing (ICON)

Artificial intelligence (AI) has come a long way in meeting user requirements across many fields and domains. However, current AI systems do not generate human-like responses to user queries. Research in this area has recently started gaining traction, with explorations of persona-based or empathy-based response selection. However, the combination of both parameters in an open domain has not been explored in detail by the research community. The current work highlights the effect of persona on empathetic responses. This paper concentrates on improving the response selection model for the PEC dataset, which contains both persona information and empathetic responses. This is achieved using an enhanced multi relationship extractor and phrase-based information for response selection.
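To make the response-selection setting concrete, here is a minimal sketch of a persona-aware selector: a shared encoder scores each candidate response against the persona-augmented dialogue context. This is not the paper's MRE architecture; every module and parameter name below is a hypothetical illustration.

```python
# Minimal sketch of persona-aware response selection (NOT the paper's MRE):
# a shared encoder scores each candidate against the context + persona.
import torch
import torch.nn as nn

class ResponseSelector(nn.Module):
    def __init__(self, vocab_size=30000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)

    def encode(self, token_ids):
        # Use the final hidden state as a fixed-size representation.
        _, h = self.encoder(self.embed(token_ids))
        return h[-1]                        # (batch, dim)

    def forward(self, context_ids, candidate_ids):
        # context_ids: persona sentences prepended to the dialogue history.
        # candidate_ids: (batch, n_cands, seq_len) candidate responses.
        ctx = self.encode(context_ids)      # (batch, dim)
        b, n, t = candidate_ids.shape
        cand = self.encode(candidate_ids.view(b * n, t)).view(b, n, -1)
        return torch.einsum("bd,bnd->bn", ctx, cand)  # similarity scores

selector = ResponseSelector()
scores = selector(torch.randint(0, 30000, (2, 20)),
                  torch.randint(0, 30000, (2, 5, 12)))
print(scores.shape)  # torch.Size([2, 5]) -> argmax picks the response
```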

2020

Semantic Slot Prediction on low corpus data using finite user defined list
Bharatram Natarajan | Dharani Simma | Chirag Singh | Anish Nediyanchath | Sreoshi Sengupta
Proceedings of the 17th International Conference on Natural Language Processing (ICON)

Semantic slot prediction is one of the important tasks for natural language understanding (NLU). It depends on the quality and quantity of human-crafted training data, which affects model generalization. With the advent of voice assistants exposing AI platforms to third-party developers, training data quality and quantity matter for any machine learning algorithm to learn and generalize properly. AI platforms provide a provision to add custom external plists, defined by the developers, for the training data. Hence we explore a dataset, called LowCorpusSlotData, containing low-corpus training data with a larger number of slots and significant test data. We also use an external plist for the above dataset to aid slot identification. We experimented with state-of-the-art architectures like Bi-directional Encoder Representations from Transformers (BERT) with variants and a Bi-directional Encoder with Custom Decoder. To address the low corpus problem, we propose a pipeline approach where we extract candidate slot information using the external plist extractor module and feed it as input along with the utterance.
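A hedged sketch of the plist-based pipeline idea: match utterance tokens against developer-defined lists (plists) to produce candidate slot tags, which are then fed to the model alongside the tokens. The function and variable names are illustrative, not the paper's exact module API.

```python
# Match tokens against developer-defined lists (plists) and emit one
# candidate slot tag per token; "O" means no candidate found.
def plist_candidates(tokens, plists):
    """plists: dict mapping slot name -> set of known phrases
    (single tokens here for brevity)."""
    tags = []
    for tok in tokens:
        match = next((slot for slot, vocab in plists.items()
                      if tok.lower() in vocab), "O")
        tags.append(match)
    return tags

plists = {"city": {"boston", "denver"}, "airline": {"delta", "united"}}
tokens = "book a flight to Boston on Delta".split()
print(list(zip(tokens, plist_candidates(tokens, plists))))
# [('book', 'O'), ('a', 'O'), ('flight', 'O'), ('to', 'O'),
#  ('Boston', 'city'), ('on', 'O'), ('Delta', 'airline')]
# The (token, candidate-tag) pairs are then encoded jointly, e.g. by
# concatenating a tag embedding to each token embedding before the encoder.
```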

Neighbor Contextual Information Learners for Joint Intent and Slot Prediction
Bharatram Natarajan | Gaurav Mathur | Sameer Jain
Proceedings of the Workshop on Joint NLP Modelling for Conversational AI @ ICON 2020

Intent Identification and Slot Identification are two important tasks for Natural Language Understanding (NLU). Exploration in this area has gained significance using networks like RNN, LSTM and GRU. However, models containing the above modules are sequential in nature, which consumes a lot of resources, like memory, to train the model in the cloud itself. With the advent of many voice assistants delivering offline solutions for many applications, there is a need to find a replacement for such sequential networks. Exploration in self-attention and CNN modules has gained pace in recent times. Here we explore CNN-based models like Trellis and modify the architecture to make it bi-directional with fusion techniques. In addition, we propose a CNN with Self Attention network called the Neighbor Contextual Information Projector using Multi Head Attention (NCIPMA) architecture. These architectures beat the state of the art on open source datasets like ATIS and SNIPS.
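A rough sketch of the "CNN + multi-head self-attention" combination the abstract describes: a 1-D convolution projects each token together with its neighbors, and multi-head self-attention then mixes those local features globally. Layer sizes and composition here are assumptions, not the published NCIPMA architecture.

```python
# CNN for neighbor context + multi-head self-attention for global mixing.
import torch
import torch.nn as nn

class CnnSelfAttentionBlock(nn.Module):
    def __init__(self, dim=128, heads=4, kernel=3):
        super().__init__()
        # padding keeps the sequence length unchanged
        self.conv = nn.Conv1d(dim, dim, kernel, padding=kernel // 2)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                  # x: (batch, seq, dim)
        local = self.conv(x.transpose(1, 2)).transpose(1, 2)
        mixed, _ = self.attn(local, local, local)
        return self.norm(x + mixed)        # residual connection

block = CnnSelfAttentionBlock()
out = block(torch.randn(2, 16, 128))
print(out.shape)  # torch.Size([2, 16, 128])
# Intent and slot heads sit on top: pool over positions for the intent
# logits, classify each position for the slot tags.
```

Unlike a recurrent encoder, every position here is computed in parallel, which is the property that makes such blocks attractive for the offline, on-device setting the abstract motivates.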

Unified Multi Intent Order and Slot Prediction using Selective Learning Propagation
Bharatram Natarajan | Priyank Chhipa | Kritika Yadav | Divya Verma Gogoi
Proceedings of the Workshop on Joint NLP Modelling for Conversational AI @ ICON 2020

Natural Language Understanding (NLU) involves two important tasks, namely Intent Determination (ID) and Slot Filling (SF). With recent advancements in Intent Determination and Slot Filling, exploration into handling multiple intents in a single utterance is increasing, to make NLU more conversation-based rather than command-execution-based. Many have approached this task with huge multi-intent training data, and much of the existing research addresses the multi-intent problem only. The multi-intent problem also poses the challenge of determining the order in which the found intents should be executed. Hence, we propose a unified architecture to address multi-intent detection, associated slot detection and the order of execution of found intents, using a low proportion of multi-intent corpus in the training data. The architecture consists of a Multi Word Importance relation propagator using Multi-Head GRU and an Importance learner propagator module using self-attention. This architecture beats the state of the art by 2.58% on the MultiIntentData dataset.
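An illustrative sketch of the joint prediction setup the abstract describes: a shared GRU encoder with self-attention feeds three heads, one for multi-label intent detection, one for per-token slot tagging, and one scoring the execution order of intents. The head names come from the task description; the internals below are assumptions, not the paper's propagator modules.

```python
# Shared encoder, three heads: multi-label intents, slot tags, order scores.
import torch
import torch.nn as nn

class MultiIntentModel(nn.Module):
    def __init__(self, vocab=10000, dim=128, n_intents=8, n_slots=20):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * dim, 4, batch_first=True)
        self.intent_head = nn.Linear(2 * dim, n_intents)  # sigmoid multi-label
        self.slot_head = nn.Linear(2 * dim, n_slots)      # per-token tags
        self.order_head = nn.Linear(2 * dim, n_intents)   # execution-order score

    def forward(self, ids):
        h, _ = self.encoder(self.embed(ids))   # (batch, seq, 2*dim)
        h, _ = self.attn(h, h, h)              # self-attention over tokens
        pooled = h.mean(dim=1)
        return (torch.sigmoid(self.intent_head(pooled)),  # intents present
                self.slot_head(h),                         # slot logits
                self.order_head(pooled))                   # rank found intents

model = MultiIntentModel()
intents, slots, order = model(torch.randint(0, 10000, (2, 12)))
print(intents.shape, slots.shape, order.shape)
# torch.Size([2, 8]) torch.Size([2, 12, 20]) torch.Size([2, 8])
```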