Hwee Tou Ng

2023

Mind the Biases: Quantifying Cognitive Biases in Language Model Prompting
Ruixi Lin | Hwee Tou Ng
Findings of the Association for Computational Linguistics: ACL 2023

We advocate exposing the uncertainty in the results of language model prompting, which display bias modes resembling cognitive biases, and propose simple quantifying metrics to help users grasp the level of uncertainty. Cognitive biases in the human decision making process can lead to flawed responses under uncertainty. Not surprisingly, language models trained on biased textual data exhibit biases resembling cognitive biases, which poses dangers in downstream tasks centered on people’s lives if users trust their results too much. In this work, we reveal two such bias modes when prompting BERT, accompanied by two bias metrics. On a drug-drug interaction extraction task, our bias measurements reveal an error pattern similar to the availability bias when the labels for training prompts are imbalanced, and show that a toning-down transformation of the drug-drug description in a prompt can elicit a bias similar to the framing effect, warning users not to blindly trust answers obtained by prompting language models.
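
The two bias metrics themselves are not defined in the abstract. As a rough, hypothetical illustration of this kind of measurement, the sketch below (function name and weighting are ours, not the paper’s) scores how strongly a prompted classifier over-predicts each label relative to the gold label distribution, an availability-bias-like error pattern:

```python
from collections import Counter

def label_preference_bias(gold, pred, labels):
    """Per-label over-prediction rate: how much more often a label is
    predicted than it occurs in the gold data. Positive values for a
    majority label suggest an availability-bias-like error pattern."""
    n = len(gold)
    gold_freq = Counter(gold)
    pred_freq = Counter(pred)
    return {lab: (pred_freq[lab] - gold_freq[lab]) / n for lab in labels}

# Toy example: the majority label "no_interaction" is over-predicted.
gold = ["advise", "effect", "no_interaction", "no_interaction"]
pred = ["no_interaction", "effect", "no_interaction", "no_interaction"]
print(label_preference_bias(gold, pred, ["advise", "effect", "no_interaction"]))
```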

Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data
Qingyu Tan | Lu Xu | Lidong Bing | Hwee Tou Ng
Findings of the Association for Computational Linguistics: ACL 2023

Relation extraction (RE) aims to extract relations from sentences and documents. Existing relation extraction models typically rely on supervised machine learning. However, recent studies showed that many RE datasets are incompletely annotated. This is known as the false negative problem, in which valid relations are falsely annotated as ‘no_relation’. Models trained on such data inevitably make similar mistakes during inference. Self-training has been proven effective in alleviating the false negative problem. However, traditional self-training is vulnerable to confirmation bias and exhibits poor performance on minority classes. To overcome this limitation, we propose a novel class-adaptive re-sampling self-training framework. Specifically, we re-sample the pseudo-labels for each class based on precision and recall scores. Our re-sampling strategy favors the pseudo-labels of classes with high precision and low recall, which improves the overall recall without significantly compromising precision. We conducted experiments on document-level and biomedical relation extraction datasets, and the results show that our self-training framework consistently outperforms existing competitive methods on the Re-DocRED and ChemDisgene datasets when the training data are incompletely annotated.
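
A schematic sketch of the class-adaptive re-sampling idea, assuming per-class precision and recall are estimated on a development set; the keep-probability formula is our own illustrative choice, not the paper’s exact weighting:

```python
import random

def resample_pseudo_labels(pseudo, precision, recall, seed=0):
    """Keep pseudo-labeled examples with a class-dependent probability that
    favors classes with high precision (trustworthy pseudo-labels) and low
    recall (under-predicted classes that need more positive examples).
    `pseudo` is a list of (example, predicted_class) pairs; `precision` and
    `recall` map each class to scores estimated on a dev set."""
    rng = random.Random(seed)
    kept = []
    for example, cls in pseudo:
        keep_prob = precision[cls] * (1.0 - recall[cls])  # illustrative weighting
        if rng.random() < keep_prob:
            kept.append((example, cls))
    return kept
```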

Unsupervised Grammatical Error Correction Rivaling Supervised Methods
Hannan Cao | Liping Yuan | Yuchen Zhang | Hwee Tou Ng
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

State-of-the-art grammatical error correction (GEC) systems rely on parallel training data (ungrammatical sentences and their manually corrected counterparts), which are expensive to construct. In this paper, we employ the Break-It-Fix-It (BIFI) method to build an unsupervised GEC system. The BIFI framework generates parallel data from unlabeled text using a fixer to transform ungrammatical sentences into grammatical ones, and a critic to predict sentence grammaticality. We present an unsupervised approach to build the fixer and the critic, and an algorithm that allows them to iteratively improve each other. We evaluate our unsupervised GEC system on English and Chinese GEC. Empirical results show that our GEC system outperforms previous unsupervised GEC systems, and achieves performance comparable to supervised GEC systems without ensemble. Furthermore, when combined with labeled training data, our system achieves new state-of-the-art results on the CoNLL-2014 and NLPCC-2018 test sets.

System Combination via Quality Estimation for Grammatical Error Correction
Muhammad Reza Qorib | Hwee Tou Ng
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Quality estimation models have been developed to assess the corrections made by grammatical error correction (GEC) models when the reference or gold-standard corrections are not available. An ideal quality estimator can be utilized to combine the outputs of multiple GEC systems by choosing the best subset of edits from the union of all edits proposed by the GEC base systems. However, we found that existing GEC quality estimation models are not good enough at differentiating good corrections from bad ones, resulting in a low F0.5 score when used for system combination. In this paper, we propose GRECO, a new state-of-the-art quality estimation model that gives a better estimate of the quality of a corrected sentence, as indicated by a higher correlation with the F0.5 score of a corrected sentence, and that results in a combined GEC system with a higher F0.5 score. We also propose three methods of varying generality for utilizing GEC quality estimation models in system combination: model-agnostic, model-agnostic with voting bias, and model-dependent. The combined GEC system outperforms the state of the art on the CoNLL-2014 test set and the BEA-2019 test set, achieving the highest F0.5 scores published to date.
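
For intuition, the combination step can be viewed as selecting a high-quality, non-conflicting subset of edits from the union of all base-system edits. The sketch below is our own simplification: GRECO scores whole corrected sentences, whereas this toy version assumes a per-edit quality score is available:

```python
def combine_edits(edit_union, qe_score, threshold=0.5):
    """Select edits from the union of all base-system edits. Each edit is
    (start, end, replacement) over the source sentence; qe_score maps an
    edit to an estimated quality in [0, 1]. Greedily keep high-scoring
    edits that do not overlap an already-kept edit, since overlapping
    edits would conflict when applied to the sentence."""
    kept = []
    for edit in sorted(edit_union, key=qe_score, reverse=True):
        start, end, _ = edit
        overlaps = any(not (end <= s or e <= start) for s, e, _ in kept)
        if qe_score(edit) >= threshold and not overlaps:
            kept.append(edit)
    return sorted(kept)
```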

WAMP: Writing, Annotation, and Marking Platform
Geonsik Moon | Muhammad Reza Qorib | Daniel Dahlmeier | Hwee Tou Ng
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: System Demonstrations

Grammatical Error Correction: A Survey of the State of the Art
Christopher Bryant | Zheng Yuan | Muhammad Reza Qorib | Hannan Cao | Hwee Tou Ng | Ted Briscoe
Computational Linguistics, Volume 49, Issue 3 - September 2023

Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject–verb agreement, but also orthographic and semantic errors, such as misspellings and word choice errors, respectively. The field has seen significant progress in the last decade, motivated in part by a series of five shared tasks, which drove the development of rule-based methods, statistical classifiers, statistical machine translation, and finally neural machine translation systems, which represent the current dominant state of the art. In this survey paper, we condense the field into a single article and first outline some of the linguistic challenges of the task, introduce the most popular datasets that are available to researchers (for both English and other languages), and summarize the various methods and techniques that have been developed with a particular focus on artificial error generation. We next describe the many different approaches to evaluation as well as concerns surrounding metric reliability, especially in relation to subjective human judgments, before concluding with an overview of recent progress and suggestions for future work and remaining challenges. We hope that this survey will serve as a comprehensive resource for researchers who are new to the field or who want to be kept apprised of recent developments.

Mitigating Exposure Bias in Grammatical Error Correction with Data Augmentation and Reweighting
Hannan Cao | Wenmian Yang | Hwee Tou Ng
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

The most popular approach in grammatical error correction (GEC) is based on sequence-to-sequence (seq2seq) models. Similar to other autoregressive generation tasks, seq2seq GEC also faces the exposure bias problem, i.e., the context tokens are drawn from different distributions during training and testing, caused by the teacher forcing mechanism. In this paper, we propose a novel data manipulation approach to overcome this problem, which includes a data augmentation method during training to mimic the decoder input at inference time, and a data reweighting method to automatically balance the importance of each kind of augmented sample. Experimental results on benchmark GEC datasets show that our method achieves significant improvements compared to prior approaches.

ALLECS: A Lightweight Language Error Correction System
Muhammad Reza Qorib | Geonsik Moon | Hwee Tou Ng
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations

In this paper, we present ALLECS, a lightweight web application to serve grammatical error correction (GEC) systems so that they can be easily used by the general public. We design ALLECS to be accessible to as many users as possible, including users who have a slow Internet connection and who use mobile phones as their main devices to connect to the Internet. ALLECS provides three state-of-the-art base GEC systems using two approaches (sequence-to-sequence generation and sequence tagging), as well as two state-of-the-art GEC system combination methods using two approaches (edit-based and text-based). ALLECS can be accessed at https://sterling8.d2.comp.nus.edu.sg/gec-demo/

Multi-Source Test-Time Adaptation as Dueling Bandits for Extractive Question Answering
Hai Ye | Qizhe Xie | Hwee Tou Ng
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In this work, we study multi-source test-time model adaptation from user feedback, where K distinct models are established for adaptation. To allow efficient adaptation, we cast the problem as a stochastic decision-making process, aiming to determine the best adapted model after adaptation. We discuss two frameworks: multi-armed bandit learning and multi-armed dueling bandits. Compared to multi-armed bandit learning, the dueling framework allows pairwise collaboration among K models, which is solved by a novel method named Co-UCB proposed in this work. Experiments on six datasets of extractive question answering (QA) show that the dueling framework using Co-UCB is more effective than other strong baselines for our studied problem.
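
For reference, the multi-armed bandit baseline that the dueling framework is compared against can be instantiated with the classical UCB1 rule, sketched below; Co-UCB itself operates on pairwise (dueling) feedback between two models and is not reproduced here:

```python
import math

def ucb1_select(counts, rewards, t):
    """UCB1 arm selection for choosing among K candidate models from user
    feedback: pick the model with the best mean reward plus an exploration
    bonus. counts[k] is how often model k was chosen, rewards[k] the sum of
    its feedback scores, and t the current round (1-indexed)."""
    best_k, best_ucb = None, float("-inf")
    for k in range(len(counts)):
        if counts[k] == 0:
            return k  # play every arm once first
        ucb = rewards[k] / counts[k] + math.sqrt(2.0 * math.log(t) / counts[k])
        if ucb > best_ucb:
            best_k, best_ucb = k, ucb
    return best_k
```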

Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models
Qingyu Tan | Hwee Tou Ng | Lidong Bing
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Reasoning about time is of fundamental importance. Many facts are time-dependent. For example, athletes change teams from time to time, and different government officials are elected periodically. Previous time-dependent question answering (QA) datasets tend to be biased in either their coverage of time spans or question types. In this paper, we introduce TempReason, a comprehensive probing dataset for evaluating the temporal reasoning capability of large language models. Our dataset includes questions of three temporal reasoning levels. In addition, we also propose a novel learning framework to improve the temporal reasoning capability of large language models, based on temporal span extraction and time-sensitive reinforcement learning. We conducted experiments in closed book QA, open book QA, and reasoning QA settings and demonstrated the effectiveness of our approach.

2022

Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation
Qingyu Tan | Ruidan He | Lidong Bing | Hwee Tou Ng
Findings of the Association for Computational Linguistics: ACL 2022

Document-level Relation Extraction (DocRE) is a more challenging task compared to its sentence-level counterpart. It aims to extract relations from multiple sentences at once. In this paper, we propose a semi-supervised framework for DocRE with three novel components. Firstly, we use an axial attention module for learning the interdependency among entity pairs, which improves the performance on two-hop relations. Secondly, we propose an adaptive focal loss to tackle the class imbalance problem of DocRE. Lastly, we use knowledge distillation to overcome the differences between human annotated data and distantly supervised data. We conducted experiments on two DocRE datasets. Our model consistently outperforms strong baselines, and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign_F1 points on the DocRED leaderboard.
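
For context, the paper’s adaptive focal loss builds on the standard focal loss, which down-weights well-classified examples so training focuses on hard, often minority-class, examples. A minimal sketch of the standard loss (the class-adaptive weighting itself is not shown):

```python
import math

def focal_loss(p_true, gamma=2.0):
    """Focal loss for one example: p_true is the model's probability for the
    gold class. The (1 - p)^gamma factor down-weights easy examples so that
    training focuses on hard examples, which is the class-imbalance
    motivation behind the paper's adaptive variant."""
    return -((1.0 - p_true) ** gamma) * math.log(p_true)

# An easy example (p=0.9) contributes far less than a hard one (p=0.3):
print(focal_loss(0.9), focal_loss(0.3))
```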

Robust Question Answering against Distribution Shifts with Test-Time Adaption: An Empirical Study
Hai Ye | Yuyang Ding | Juntao Li | Hwee Tou Ng
Findings of the Association for Computational Linguistics: EMNLP 2022

A deployed question answering (QA) model can easily fail when the test data has a distribution shift compared to the training data. Robustness tuning (RT) methods have been widely studied to enhance model robustness against distribution shifts before model deployment. However, can we improve a model after deployment? To answer this question, we evaluate test-time adaptation (TTA) to improve a model after deployment. We first introduce ColdQA, a unified evaluation benchmark for robust QA against text corruption and changes in language and domain. We then evaluate previous TTA methods on ColdQA and compare them to RT methods. We also propose a novel TTA method called online imitation learning (OIL). Through extensive experiments, we find that TTA is comparable to RT methods, and applying TTA after RT can significantly boost the performance on ColdQA. Our proposed OIL improves TTA to be more robust to variation in hyper-parameters and test distributions over time.

On the Robustness of Question Rewriting Systems to Questions of Varying Hardness
Hai Ye | Hwee Tou Ng | Wenjuan Han
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In conversational question answering (CQA), the task of question rewriting (QR) in context aims to rewrite a context-dependent question into an equivalent self-contained question that gives the same answer. In this paper, we are interested in the robustness of a QR system to questions varying in rewriting hardness or difficulty. Since there is a lack of questions classified based on their rewriting hardness, we first propose a heuristic method to automatically classify questions into subsets of varying hardness, by measuring the discrepancy between a question and its rewrite. To find out what makes questions hard or easy for rewriting, we then conduct a human evaluation to annotate the rewriting hardness of questions. Finally, to enhance the robustness of QR systems to questions of varying hardness, we propose a novel learning framework for QR that first trains a QR model independently on each subset of questions of a certain level of hardness, then combines these QR models as one joint model for inference. Experimental results on two datasets show that our framework improves the overall performance compared to the baselines.
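
A minimal version of such a discrepancy-based hardness heuristic, using word-level edit distance between a question and its rewrite; the normalization is our simplification, not the paper’s exact method:

```python
def levenshtein(a, b):
    """Word-level edit distance between token lists a and b (single-row DP)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def rewriting_hardness(question, rewrite):
    """Heuristic hardness: discrepancy between a question and its rewrite,
    normalized by rewrite length. 0 means the question is already
    self-contained; larger values mean more rewriting was needed."""
    q, r = question.lower().split(), rewrite.lower().split()
    return levenshtein(q, r) / max(len(r), 1)

q = "Where was he born?"
r = "Where was Albert Einstein born?"
print(rewriting_hardness(q, r))  # larger discrepancy -> harder rewrite
```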

Does BERT Know that the IS-A Relation Is Transitive?
Ruixi Lin | Hwee Tou Ng
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

The success of a natural language processing (NLP) system on a task does not imply that it fully understands the complexity of the task, as typified by many deep learning models. One open question is: can a black-box model make logically consistent predictions for transitive relations? Recent studies suggest that pre-trained BERT can capture lexico-semantic clues from words in the context. However, to what extent BERT captures the transitive nature of some lexical relations is unclear. From a probing perspective, we examine WordNet word senses and the IS-A relation, which is a transitive relation. That is, for senses A, B, and C, A is-a B and B is-a C entail A is-a C. We aim to quantify how much BERT agrees with the transitive property of IS-A relations via a minimalist probing setting. Our investigation reveals that BERT’s predictions do not fully obey the transitivity property of the IS-A relation.
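
A probe in this spirit needs (A, B, C) sense triples where transitivity entails A is-a C. Assuming NLTK with the WordNet data installed, such triples can be enumerated directly, and a model’s predictions on the three pairs can then be checked for consistency:

```python
from nltk.corpus import wordnet as wn  # requires the WordNet data download

def transitive_triples(max_triples=5):
    """Enumerate (A, B, C) noun-synset triples where A is-a B and B is-a C,
    i.e., cases where transitivity entails A is-a C. A probe would then
    check that a model predicting A is-a B and B is-a C also predicts
    A is-a C."""
    triples = []
    for a in wn.all_synsets("n"):
        for b in a.hypernyms():
            for c in b.hypernyms():
                triples.append((a, b, c))
                if len(triples) >= max_triples:
                    return triples
    return triples

for a, b, c in transitive_triples():
    print(a.name(), "->", b.name(), "->", c.name())
```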

Grammatical Error Correction: Are We There Yet?
Muhammad Reza Qorib | Hwee Tou Ng
Proceedings of the 29th International Conference on Computational Linguistics

There has been much recent progress in natural language processing, and grammatical error correction (GEC) is no exception. We found that state-of-the-art GEC systems (T5 and GECToR) outperform humans by a wide margin on the CoNLL-2014 test set, a benchmark GEC test corpus, as measured by the standard F0.5 evaluation metric. However, a careful examination of their outputs reveals that there are still classes of errors that they fail to correct. This suggests that creating new test data that more accurately measure the true performance of GEC systems constitutes important future work.

Domain Generalization for Text Classification with Memory-Based Supervised Contrastive Learning
Qingyu Tan | Ruidan He | Lidong Bing | Hwee Tou Ng
Proceedings of the 29th International Conference on Computational Linguistics

While there is much research on cross-domain text classification, most existing approaches focus on one-to-one or many-to-one domain adaptation. In this paper, we tackle the more challenging task of domain generalization, in which domain-invariant representations are learned from multiple source domains, without access to any data from the target domains, and classification decisions are then made on test documents in unseen target domains. We propose a novel framework based on supervised contrastive learning with a memory-saving queue. In this way, we explicitly encourage examples of the same class to be closer and examples of different classes to be further apart in the embedding space. We have conducted extensive experiments on two Amazon review sentiment datasets, and one rumour detection dataset. Experimental results show that our domain generalization method consistently outperforms state-of-the-art domain adaptation methods.

Frustratingly Easy System Combination for Grammatical Error Correction
Muhammad Reza Qorib | Seung-Hoon Na | Hwee Tou Ng
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

In this paper, we formulate system combination for grammatical error correction (GEC) as a simple machine learning task: binary classification. We demonstrate that with the right problem formulation, a simple logistic regression algorithm can be highly effective for combining GEC models. Our method successfully increases the F0.5 score from the highest base GEC system by 4.2 points on the CoNLL-2014 test set and 7.2 points on the BEA-2019 test set. Furthermore, our method outperforms the state of the art by 4.0 points on the BEA-2019 test set, 1.2 points on the CoNLL-2014 test set with original annotation, and 3.4 points on the CoNLL-2014 test set with alternative annotation. We also show that our system combination generates better corrections with higher F0.5 scores than the conventional ensemble.
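
A toy sketch of this formulation, representing each candidate edit by a binary vector of which base systems proposed it; the feature design and the labels below are fabricated purely for illustration:

```python
from sklearn.linear_model import LogisticRegression

# Each candidate edit is represented by a binary indicator vector, one
# entry per base system, marking which systems proposed the edit. The
# binary label says whether the edit matches the reference correction.
X = [
    [1, 0, 1],  # proposed by systems 1 and 3
    [1, 1, 1],  # proposed by all three systems
    [0, 1, 0],  # proposed by system 2 only
    [1, 1, 0],
]
y = [1, 1, 0, 1]

clf = LogisticRegression().fit(X, y)
# At combination time, keep an edit if the classifier accepts it.
print(clf.predict([[0, 0, 1], [1, 1, 1]]))
```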

Revisiting DocRED - Addressing the False Negative Problem in Relation Extraction
Qingyu Tan | Lu Xu | Lidong Bing | Hwee Tou Ng | Sharifah Mahani Aljunied
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

The DocRED dataset is one of the most popular and widely used benchmarks for document-level relation extraction (RE). It adopts a recommend-revise annotation scheme so as to have a large-scale annotated dataset. However, we find that the annotation of DocRED is incomplete, i.e., false negative samples are prevalent. We analyze the causes and effects of the overwhelming false negative problem in the DocRED dataset. To address the shortcoming, we re-annotate 4,053 documents in the DocRED dataset by adding the missed relation triples back to the original DocRED. We name our revised DocRED dataset Re-DocRED. We conduct extensive experiments with state-of-the-art neural models on both datasets, and the experimental results show that the models trained and evaluated on our Re-DocRED achieve performance improvements of around 13 F1 points. Moreover, we conduct a comprehensive analysis to identify the potential areas for further improvement.

2021

Do Multi-Hop Question Answering Systems Know How to Answer the Single-Hop Sub-Questions?
Yixuan Tang | Hwee Tou Ng | Anthony Tung
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Multi-hop question answering (QA) requires a model to retrieve and integrate information from multiple passages to answer a question. Rapid progress has been made on multi-hop QA systems with regard to standard evaluation metrics, including EM and F1. However, by simply evaluating the correctness of the answers, it is unclear to what extent these systems have learned the ability to perform multi-hop reasoning. In this paper, we propose an additional sub-question evaluation for the multi-hop QA dataset HotpotQA, in order to shed some light on explaining the reasoning process of QA systems in answering complex questions. We adopt a neural decomposition model to generate sub-questions for a multi-hop question, followed by extracting the corresponding sub-answers. Contrary to our expectation, multiple state-of-the-art multi-hop QA models fail to answer a large portion of sub-questions, although the corresponding multi-hop questions are correctly answered. Our work takes a step forward towards building a more explainable multi-hop QA system.

Improved Word Sense Disambiguation with Enhanced Sense Representations
Yang Song | Xin Cai Ong | Hwee Tou Ng | Qian Lin
Findings of the Association for Computational Linguistics: EMNLP 2021

Current state-of-the-art supervised word sense disambiguation (WSD) systems (such as GlossBERT and bi-encoder model) yield surprisingly good results by purely leveraging pre-trained language models and short dictionary definitions (or glosses) of the different word senses. While concise and intuitive, the sense gloss is just one of many ways to provide information about word senses. In this paper, we focus on enhancing the sense representations via incorporating synonyms, example phrases or sentences showing usage of word senses, and sense gloss of hypernyms. We show that incorporating such additional information boosts the performance on WSD. With the proposed enhancements, our system achieves an F1 score of 82.0% on the standard benchmark test dataset of the English all-words WSD task, surpassing all previous published scores on this benchmark dataset.
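
Assuming NLTK with the WordNet data installed, an enriched sense representation along these lines can be assembled from the gloss, synonyms, usage examples, and the hypernym’s gloss; the concatenation format is our assumption, not the paper’s:

```python
from nltk.corpus import wordnet as wn  # requires the WordNet data download

def enhanced_gloss(synset):
    """Build an enriched sense representation string: the sense gloss, plus
    synonyms, example sentences, and the gloss of the hypernym sense. The
    exact concatenation format is illustrative only."""
    parts = [synset.definition()]
    parts.append("synonyms: " + ", ".join(l.replace("_", " ") for l in synset.lemma_names()))
    if synset.examples():
        parts.append("examples: " + "; ".join(synset.examples()))
    for hyper in synset.hypernyms()[:1]:
        parts.append("hypernym: " + hyper.definition())
    return " | ".join(parts)

print(enhanced_gloss(wn.synset("bank.n.01")))
```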

Grammatical Error Correction with Contrastive Learning in Low Error Density Domains
Hannan Cao | Wenmian Yang | Hwee Tou Ng
Findings of the Association for Computational Linguistics: EMNLP 2021

Although grammatical error correction (GEC) has achieved good performance on texts written by learners of English as a second language, performance on low error density domains where texts are written by English speakers of varying levels of proficiency can still be improved. In this paper, we propose a contrastive learning approach to encourage the GEC model to assign a higher probability to a correct sentence while reducing the probability of incorrect sentences that the model tends to generate, so as to improve the accuracy of the model. Experimental results show that our approach significantly improves the performance of GEC models in low error density domains, when evaluated on the benchmark CWEB dataset.

System Combination for Grammatical Error Correction Based on Integer Programming
Ruixi Lin | Hwee Tou Ng
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)

In this paper, we propose a system combination method for grammatical error correction (GEC), based on nonlinear integer programming (IP). Our method optimizes a novel F score objective based on error types, and combines multiple end-to-end GEC systems. The proposed IP approach optimizes the selection of a single best system for each grammatical error type present in the data. Experiments on combining state-of-the-art standalone GEC systems show that the combined system outperforms all standalone systems. It improves the F0.5 score by 3.61% when combining the two best participating systems in the BEA 2019 shared task, and achieves an F0.5 score of 73.08%. We also perform experiments comparing our IP approach with another state-of-the-art system combination method for GEC, demonstrating IP’s competitive combination capability.
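
To make the objective concrete, the sketch below exhaustively searches over per-error-type system assignments to maximize F0.5 computed from per-type true positive, false positive, and false negative counts; the paper solves this with a nonlinear integer program rather than brute force:

```python
from itertools import product

def f05(tp, fp, fn):
    """F0.5 = 1.25 * P * R / (0.25 * P + R)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 1.25 * p * r / (0.25 * p + r) if p + r else 0.0

def best_assignment(counts, systems, error_types):
    """Pick one system per error type to maximize overall F0.5.
    counts[(sys, etype)] = (tp, fp, fn) for that system restricted to that
    error type, measured on a development set. Exhaustive search stands in
    for the paper's integer program; it is exponential in the number of
    error types but transparent."""
    best, best_f = None, -1.0
    for choice in product(systems, repeat=len(error_types)):
        tp = fp = fn = 0
        for sys_name, etype in zip(choice, error_types):
            t, f, n = counts[(sys_name, etype)]
            tp, fp, fn = tp + t, fp + f, fn + n
        score = f05(tp, fp, fn)
        if score > best_f:
            best, best_f = dict(zip(error_types, choice)), score
    return best, best_f
```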

A Hierarchical Entity Graph Convolutional Network for Relation Extraction across Documents
Tapas Nayak | Hwee Tou Ng
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)

Distantly supervised datasets for relation extraction mostly focus on sentence-level extraction, and they cover very few relations. In this work, we propose cross-document relation extraction, where the two entities of a relation tuple appear in two different documents that are connected via a chain of common entities. Following this idea, we create a dataset for two-hop relation extraction, where each chain contains exactly two documents. Our proposed dataset covers a higher number of relations than the publicly available sentence-level datasets. We also propose a hierarchical entity graph convolutional network (HEGCN) model for this task that improves performance by 1.1% F1 score on our two-hop relation extraction dataset, compared to some strong neural baselines.

2020

Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training
Hai Ye | Qingyu Tan | Ruidan He | Juntao Li | Hwee Tou Ng | Lidong Bing
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Adapting pre-trained language models (PrLMs) (e.g., BERT) to new domains has gained much attention recently. Instead of fine-tuning PrLMs as done in most previous work, we investigate how to adapt the features of PrLMs to new domains without fine-tuning. We explore unsupervised domain adaptation (UDA) in this paper. With the features from PrLMs, we adapt the models trained with labeled data from the source domain to the unlabeled target domain. Self-training is widely used for UDA, and it predicts pseudo labels on the target domain data for training. However, the predicted pseudo labels inevitably include noise, which will negatively affect training a robust model. To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd) to learn discriminative features from PrLMs, in which PrLM features are self-distilled into a feature adaptation module and the features from the same class are more tightly clustered. We further extend CFd to a cross-language setting, in which language discrepancy is studied. Experiments on two monolingual and multilingual Amazon review datasets show that CFd can consistently improve the performance of self-training in cross-domain and cross-language settings.

Learning to Identify Follow-Up Questions in Conversational Question Answering
Souvik Kundu | Qian Lin | Hwee Tou Ng
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Despite recent progress in conversational question answering, most prior work does not focus on follow-up questions. Practical conversational question answering systems often receive follow-up questions in an ongoing conversation, and it is crucial for a system to be able to determine whether a question is a follow-up question of the current conversation, for more effective answer finding subsequently. In this paper, we introduce a new follow-up question identification task. We propose a three-way attentive pooling network that determines the suitability of a follow-up question by capturing pair-wise interactions between the associated passage, the conversation history, and a candidate follow-up question. It enables the model to capture topic continuity and topic shift while scoring a particular candidate follow-up question. Experiments show that our proposed three-way attentive pooling network outperforms all baseline systems by significant margins.

A Survey of Unsupervised Dependency Parsing
Wenjuan Han | Yong Jiang | Hwee Tou Ng | Kewei Tu
Proceedings of the 28th International Conference on Computational Linguistics

Syntactic dependency parsing is an important task in natural language processing. Unsupervised dependency parsing aims to learn a dependency parser from sentences that have no annotation of their correct parse trees. Despite its difficulty, unsupervised parsing is an interesting research direction because of its capability of utilizing almost unlimited unannotated text data. It also serves as the basis for other research in low-resource parsing. In this paper, we survey existing approaches to unsupervised dependency parsing, identify two major classes of approaches, and discuss recent trends. We hope that our survey can provide insights for researchers and facilitate future research on this topic.

A Co-Attentive Cross-Lingual Neural Model for Dialogue Breakdown Detection
Qian Lin | Souvik Kundu | Hwee Tou Ng
Proceedings of the 28th International Conference on Computational Linguistics

Ensuring smooth communication is essential in a chat-oriented dialogue system, so that a user can obtain meaningful responses through interactions with the system. Most prior work on dialogue research does not focus on preventing dialogue breakdown. One of the major challenges is that a dialogue system may generate an undesired utterance leading to a dialogue breakdown, which degrades the overall interaction quality. Hence, it is crucial for a machine to detect dialogue breakdowns in an ongoing conversation. In this paper, we propose a novel dialogue breakdown detection model that jointly incorporates a pretrained cross-lingual language model and a co-attention network. Our proposed model leverages effective word embeddings trained on one hundred different languages to generate contextualized representations. Co-attention aims to capture the interaction between the latest utterance and the conversation history, and thereby determines whether the latest utterance causes a dialogue breakdown. Experimental results show that our proposed model outperforms all previous approaches on all evaluation metrics in both the Japanese and English tracks in Dialogue Breakdown Detection Challenge 4 (DBDC4 at IWSDS2019).

2019

Cross-Sentence Grammatical Error Correction
Shamil Chollampatt | Weiqi Wang | Hwee Tou Ng
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Automatic grammatical error correction (GEC) research has made remarkable progress in the past decade. However, all existing approaches to GEC correct errors by considering a single sentence alone and ignoring crucial cross-sentence context. Some errors can only be corrected reliably using cross-sentence context and models can also benefit from the additional contextual information in correcting other errors. In this paper, we address this serious limitation of existing approaches and improve strong neural encoder-decoder models by appropriately modeling wider contexts. We employ an auxiliary encoder that encodes previous sentences and incorporate the encoding in the decoder via attention and gating mechanisms. Our approach results in statistically significant improvements in overall GEC performance over strong baselines across multiple test sets. Analysis of our cross-sentence GEC model on a synthetic dataset shows high performance in verb tense corrections that require cross-sentence context.

An Interactive Multi-Task Learning Network for End-to-End Aspect-Based Sentiment Analysis
Ruidan He | Wee Sun Lee | Hwee Tou Ng | Daniel Dahlmeier
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Aspect-based sentiment analysis produces a list of aspect terms and their corresponding sentiments for a natural language sentence. This task is usually done in a pipeline manner, with aspect term extraction performed first, followed by sentiment predictions toward the extracted aspect terms. While easier to develop, such an approach does not fully exploit joint information from the two subtasks and does not use all available sources of training information that might be helpful, such as document-level labeled sentiment corpus. In this paper, we propose an interactive multi-task learning network (IMN) which is able to jointly learn multiple related tasks simultaneously at both the token level as well as the document level. Unlike conventional multi-task learning methods that rely on learning common features for the different tasks, IMN introduces a message passing architecture where information is iteratively passed to different tasks through a shared set of latent variables. Experimental results demonstrate superior performance of the proposed method against multiple baselines on three benchmark datasets.

Improving the Robustness of Question Answering Systems to Question Paraphrasing
Wee Chung Gan | Hwee Tou Ng
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Despite the advancement of question answering (QA) systems and rapid improvements on held-out test sets, their generalizability is a topic of concern. We explore the robustness of QA models to question paraphrasing by creating two test sets consisting of paraphrased SQuAD questions. Paraphrased questions from the first test set are very similar to the original questions designed to test QA models’ over-sensitivity, while questions from the second test set are paraphrased using context words near an incorrect answer candidate in an attempt to confuse QA models. We show that both paraphrased test sets lead to significant decrease in performance on multiple state-of-the-art QA models. Using a neural paraphrasing model trained to generate multiple paraphrased questions for a given source question and a set of paraphrase suggestions, we propose a data augmentation approach that requires no human intervention to re-train the models for improved robustness to question paraphrasing.

Effective Attention Modeling for Neural Relation Extraction
Tapas Nayak | Hwee Tou Ng
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)

Relation extraction is the task of determining the relation between two entities in a sentence. Distantly-supervised models are popular for this task. However, sentences can be long and two entities can be located far from each other in a sentence. The pieces of evidence supporting the presence of a relation between two entities may not be very direct, since the entities may be connected via some indirect links such as a third entity or via co-reference. Relation extraction in such scenarios becomes more challenging, as we need to capture the long-distance interactions among the entities and other words in the sentence. Also, the words in a sentence do not contribute equally to identifying the relation between the two entities. To address this issue, we propose a novel and effective attention model which incorporates syntactic information of the sentence and a multi-factor attention mechanism. Experiments on the New York Times corpus show that our proposed model outperforms prior state-of-the-art models.

Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations
Christian Hadiwinoto | Hwee Tou Ng | Wee Chung Gan
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Contextualized word representations are able to give different representations for the same word in different contexts, and they have been shown to be effective in downstream natural language processing tasks, such as question answering, named entity recognition, and sentiment analysis. However, evaluation on word sense disambiguation (WSD) in prior work shows that using contextualized word representations does not outperform the state-of-the-art approach that makes use of non-contextualized word embeddings. In this paper, we explore different strategies of integrating pre-trained contextualized word representations and our best strategy achieves accuracies exceeding the best prior published accuracies by significant margins on multiple benchmark WSD datasets.

Learning from the Experience of Doctors: Automated Diagnosis of Appendicitis Based on Clinical Notes
Steven Kester Yuwono | Hwee Tou Ng | Kee Yuan Ngiam
Proceedings of the 18th BioNLP Workshop and Shared Task

The objective of this work is to develop an automated diagnosis system that is able to predict the probability of appendicitis given a free-text emergency department (ED) note and additional structured information (e.g., lab test results). Our clinical corpus consists of about 180,000 ED notes based on ten years of patient visits to the Accident and Emergency (A&E) Department of the National University Hospital (NUH), Singapore. We propose a novel neural network approach that learns to diagnose acute appendicitis based on doctors’ free-text ED notes without any feature engineering. On a test set of 2,000 ED notes with an equal number of appendicitis (positive) and non-appendicitis (negative) diagnoses, in which all the negative ED notes consist only of abdominal-related diagnoses, our model is able to achieve a promising F0.5 score of 0.895 while ED doctors achieve an F0.5 score of 0.900. Visualization shows that our model is able to learn important features, signs, and symptoms of patients from unstructured free-text ED notes, which will help doctors make better diagnoses.

2018

Effective Attention Modeling for Aspect-Level Sentiment Classification
Ruidan He | Wee Sun Lee | Hwee Tou Ng | Daniel Dahlmeier
Proceedings of the 27th International Conference on Computational Linguistics

Aspect-level sentiment classification aims to determine the sentiment polarity of a review sentence towards an opinion target. A sentence could contain multiple sentiment-target pairs; thus the main challenge of this task is to separate different opinion contexts for different targets. To this end, attention mechanism has played an important role in previous state-of-the-art neural models. The mechanism is able to capture the importance of each context word towards a target by modeling their semantic associations. We build upon this line of research and propose two novel approaches for improving the effectiveness of attention. First, we propose a method for target representation that better captures the semantic meaning of the opinion target. Second, we introduce an attention model that incorporates syntactic information into the attention mechanism. We experiment on attention-based LSTM (Long Short-Term Memory) models using the datasets from SemEval 2014, 2015, and 2016. The experimental results show that the conventional attention-based LSTM can be substantially improved by incorporating the two approaches.

A Reassessment of Reference-Based Grammatical Error Correction Metrics
Shamil Chollampatt | Hwee Tou Ng
Proceedings of the 27th International Conference on Computational Linguistics

Several metrics have been proposed for evaluating grammatical error correction (GEC) systems based on grammaticality, fluency, and adequacy of the output sentences. Previous studies of the correlation of these metrics with human quality judgments were inconclusive, due to the lack of appropriate significance tests, discrepancies in the methods, and choice of datasets used. In this paper, we re-evaluate reference-based GEC metrics by measuring the system-level correlations with humans on a large dataset of human judgments of GEC outputs, and by properly conducting statistical significance tests. Our results show no significant advantage of GLEU over MaxMatch (M2), contradicting previous studies that claim GLEU to be superior. For a finer-grained analysis, we additionally evaluate these metrics for their agreement with human judgments at the sentence level. Our sentence-level analysis indicates that comparing GLEU and M2, one metric may be more useful than the other depending on the scenario. We further qualitatively analyze these metrics and our findings show that apart from being less interpretable and non-deterministic, GLEU also produces counter-intuitive scores in commonly occurring test examples.

Upping the Ante: Towards a Better Benchmark for Chinese-to-English Machine Translation
Christian Hadiwinoto | Hwee Tou Ng
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Neural Quality Estimation of Grammatical Error Correction
Shamil Chollampatt | Hwee Tou Ng
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Grammatical error correction (GEC) systems deployed in language learning environments are expected to accurately correct errors in learners’ writing. However, in practice, they often produce spurious corrections and fail to correct many errors, thereby misleading learners. This necessitates the estimation of the quality of output sentences produced by GEC systems so that instructors can selectively intervene and re-correct the sentences which are poorly corrected by the system and ensure that learners get accurate feedback. We propose the first neural approach to automatic quality estimation of GEC output sentences that does not employ any hand-crafted features. Our system is trained in a supervised manner on learner sentences and corresponding GEC system outputs with quality score labels computed using human-annotated references. Our neural quality estimation models for GEC show significant improvements over a strong feature-based baseline. We also show that a state-of-the-art GEC system can be improved when quality scores are used as features for re-ranking the N-best candidates.

Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification
Ruidan He | Wee Sun Lee | Hwee Tou Ng | Daniel Dahlmeier
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We consider the cross-domain sentiment classification problem, where a sentiment classifier is to be learned from a source domain and generalized to a target domain. Our approach explicitly minimizes the distance between the source and target instances in an embedded feature space. With the difference between source and target minimized, we then exploit additional information from the target domain by consolidating the idea of semi-supervised learning, jointly employing two regularizations, entropy minimization and self-ensemble bootstrapping, to incorporate the unlabeled target data for classifier refinement. Our experimental results demonstrate that the proposed approach can better leverage unlabeled data from the target domain and achieve substantial improvements over baseline methods in various experimental settings.
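
Of the two regularizations, entropy minimization is the simpler to state: the average prediction entropy on unlabeled target data is added to the training loss, pushing the classifier toward confident decisions on the target domain. A minimal sketch:

```python
import math

def entropy_min_loss(probs_batch):
    """Entropy minimization over unlabeled target examples: the average
    prediction entropy, added to the training loss so the classifier is
    pushed toward confident decisions on target-domain data. probs_batch
    is a list of per-example probability distributions."""
    total = 0.0
    for probs in probs_batch:
        total += -sum(p * math.log(p) for p in probs if p > 0)
    return total / len(probs_batch)

# A confident prediction contributes little; an uncertain one contributes a lot.
print(entropy_min_loss([[0.95, 0.05], [0.5, 0.5]]))
```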

A Nil-Aware Answer Extraction Framework for Question Answering
Souvik Kundu | Hwee Tou Ng
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Recently, there has been a surge of interest in reading comprehension-based (RC) question answering (QA). However, current approaches suffer from an impractical assumption that every question has a valid answer in the associated passage. A practical QA system must possess the ability to determine whether a valid answer exists in a given text passage. In this paper, we focus on developing QA systems that can extract an answer for a question if and only if the associated passage contains an answer. If the associated passage does not contain any valid answer, the QA system will correctly return Nil. We propose a novel nil-aware answer span extraction framework that is capable of returning Nil or a text span from the associated passage as an answer in a single step. We show that our proposed framework can be easily integrated with several recently proposed QA models developed for reading comprehension and can be trained in an end-to-end fashion. Our proposed nil-aware answer extraction neural network decomposes pieces of evidence into relevant and irrelevant parts and then combines them to infer the existence of any answer. Experiments on the NewsQA dataset show that the integration of our proposed framework significantly outperforms several strong baseline systems that use pipeline or threshold-based approaches.

Exploiting Document Knowledge for Aspect-level Sentiment Classification
Ruidan He | Wee Sun Lee | Hwee Tou Ng | Daniel Dahlmeier
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Attention-based long short-term memory (LSTM) networks have proven to be useful in aspect-level sentiment classification. However, due to the difficulties in annotating aspect-level data, existing public datasets for this task are all relatively small, which largely limits the effectiveness of those neural models. In this paper, we explore two approaches that transfer knowledge from document-level data, which is much less expensive to obtain, to improve the performance of aspect-level sentiment classification. We demonstrate the effectiveness of our approaches on 4 public datasets from SemEval 2014, 2015, and 2016, and we show that attention-based LSTM benefits from document-level knowledge in multiple ways.

2017

Connecting the Dots: Towards Human-Level Grammatical Error Correction
Shamil Chollampatt | Hwee Tou Ng
Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications

We build a grammatical error correction (GEC) system primarily based on the state-of-the-art statistical machine translation (SMT) approach, using task-specific features and tuning, and further enhance it with the modeling power of neural network joint models. The SMT-based system is weak in generalizing beyond patterns seen during training and lacks granularity below the word level. To address this issue, we incorporate a character-level SMT component targeting the misspelled words that the original SMT-based system fails to correct. Our final system achieves 53.14% F0.5 score on the benchmark CoNLL-2014 test set, an improvement of 3.62% F0.5 over the best previous published score.

Keynote Lecture 2: Grammatical Error Correction: Past, Present and Future
Hwee Tou Ng
Proceedings of the 14th International Conference on Natural Language Processing (ICON-2017)

An Unsupervised Neural Attention Model for Aspect Extraction
Ruidan He | Wee Sun Lee | Hwee Tou Ng | Daniel Dahlmeier
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Aspect extraction is an important and challenging task in aspect-based sentiment analysis. Existing works tend to apply variants of topic models on this task. While fairly successful, these methods usually do not produce highly coherent aspects. In this paper, we present a novel neural approach with the aim of discovering coherent aspects. The model improves coherence by exploiting the distribution of word co-occurrences through the use of neural word embeddings. Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space. In addition, we use an attention mechanism to de-emphasize irrelevant words during training, further improving the coherence of aspects. Experimental results on real-life datasets demonstrate that our approach discovers more meaningful and coherent aspects, and substantially outperforms baseline methods on several evaluation tasks.

2016

CoNLL 2016 Shared Task on Multilingual Shallow Discourse Parsing
Nianwen Xue | Hwee Tou Ng | Sameer Pradhan | Attapol Rutherford | Bonnie Webber | Chuan Wang | Hongmin Wang
Proceedings of the CoNLL-16 shared task

Automated Anonymization as Spelling Variant Detection
Steven Kester Yuwono | Hwee Tou Ng | Kee Yuan Ngiam
Proceedings of the Clinical Natural Language Processing Workshop (ClinicalNLP)

The issue of privacy has always been a concern when clinical texts are used for research purposes. Personal health information (PHI) (such as name and identification number) needs to be removed so that patients cannot be identified. Manual anonymization is not feasible due to the large number of clinical texts to be anonymized. In this paper, we tackle the task of anonymizing clinical texts that are written in sentence fragments and frequently contain symbols, abbreviations, and misspelled words. Our clinical texts therefore differ from those in the i2b2 shared tasks, which are in prose form with complete sentences. Our clinical texts are also part of a structured database which contains patient name and identification number in structured fields. As such, we formulate our anonymization task as spelling variant detection, exploiting patients’ personal information in the structured fields to detect their spelling variants in clinical texts. We successfully anonymized clinical texts consisting of more than 200 million words, using minimum edit distance and regular expression patterns.
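
A minimal sketch of anonymization as spelling variant detection: match tokens against structured-field names within a small edit distance, and against a regular expression for identification numbers. The ID format, distance threshold, and placeholder tags are our assumptions:

```python
import re

def levenshtein(a, b):
    """Character-level edit distance (single-row DP)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

ID_PATTERN = re.compile(r"[A-Z]\d{7}[A-Z]")  # hypothetical ID format

def anonymize(text, patient_names, max_dist=1):
    """Replace tokens that are spelling variants of the patient's name
    (within a small edit distance) and tokens matching an ID regex."""
    out = []
    for token in text.split():
        word = token.strip(".,;:()")
        if ID_PATTERN.fullmatch(word):
            out.append(token.replace(word, "<ID>"))
        elif any(levenshtein(word.lower(), n.lower()) <= max_dist for n in patient_names):
            out.append(token.replace(word, "<NAME>"))
        else:
            out.append(token)
    return " ".join(out)

print(anonymize("Pt Johm Tan, NRIC S1234567A, seen today.", ["John", "Tan"]))
```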

Source Language Adaptation Approaches for Resource-Poor Machine Translation
Pidong Wang | Preslav Nakov | Hwee Tou Ng
Computational Linguistics, Volume 42, Issue 2 - June 2016

A Neural Approach to Automated Essay Scoring
Kaveh Taghipour | Hwee Tou Ng
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

Adapting Grammatical Error Correction Based on the Native Language of Writers with Neural Network Joint Models
Shamil Chollampatt | Duc Tam Hoang | Hwee Tou Ng
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2015

Flexible Domain Adaptation for Automated Essay Scoring Using Correlated Linear Regression
Peter Phandi | Kian Ming A. Chai | Hwee Tou Ng
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

How Far are We from Fully Automatic High Quality Grammatical Error Correction?
Christopher Bryant | Hwee Tou Ng
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Semi-Supervised Word Sense Disambiguation Using Word Embeddings in General and Specific Domains
Kaveh Taghipour | Hwee Tou Ng
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

One Million Sense-Tagged Instances for Word Sense Disambiguation and Induction
Kaveh Taghipour | Hwee Tou Ng
Proceedings of the Nineteenth Conference on Computational Natural Language Learning

The CoNLL-2015 Shared Task on Shallow Discourse Parsing
Nianwen Xue | Hwee Tou Ng | Sameer Pradhan | Rashmi Prasad | Christopher Bryant | Attapol Rutherford
Proceedings of the Nineteenth Conference on Computational Natural Language Learning - Shared Task

2014

Domain Adaptation with Active Learning for Coreference Resolution
Shanheng Zhao | Hwee Tou Ng
Proceedings of the 5th International Workshop on Health Text Mining and Information Analysis (Louhi)

Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task
Hwee Tou Ng | Siew Mei Wu | Ted Briscoe | Christian Hadiwinoto | Raymond Hendy Susanto | Christopher Bryant
Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task

The CoNLL-2014 Shared Task on Grammatical Error Correction
Hwee Tou Ng | Siew Mei Wu | Ted Briscoe | Christian Hadiwinoto | Raymond Hendy Susanto | Christopher Bryant
Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task

A Beam-Search Decoder for Disfluency Detection
Xuancong Wang | Hwee Tou Ng | Khe Chai Sim
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

A Constituent-Based Approach to Argument Labeling with Joint Inference in Discourse Parsing
Fang Kong | Hwee Tou Ng | Guodong Zhou
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Combining Punctuation and Disfluency Prediction: An Empirical Study
Xuancong Wang | Khe Chai Sim | Hwee Tou Ng
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

System Combination for Grammatical Error Correction
Raymond Hendy Susanto | Peter Phandi | Hwee Tou Ng
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

Exploiting Zero Pronouns to Improve Chinese Coreference Resolution
Fang Kong | Hwee Tou Ng
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

A Beam-Search Decoder for Normalization of Social Media Text with Application to Machine Translation
Pidong Wang | Hwee Tou Ng
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Building a Large Annotated Corpus of Learner English: The NUS Corpus of Learner English
Daniel Dahlmeier | Hwee Tou Ng | Siew Mei Wu
Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications

Towards Robust Linguistic Analysis using OntoNotes
Sameer Pradhan | Alessandro Moschitti | Nianwen Xue | Hwee Tou Ng | Anders Björkelund | Olga Uryupina | Yuchen Zhang | Zhi Zhong
Proceedings of the Seventeenth Conference on Computational Natural Language Learning

Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task
Hwee Tou Ng | Joel Tetreault | Siew Mei Wu | Yuanbin Wu | Christian Hadiwinoto
Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task

The CoNLL-2013 Shared Task on Grammatical Error Correction
Hwee Tou Ng | Siew Mei Wu | Yuanbin Wu | Christian Hadiwinoto | Joel Tetreault
Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task

Grammatical Error Correction Using Integer Linear Programming
Yuanbin Wu | Hwee Tou Ng
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

Better Evaluation for Grammatical Error Correction
Daniel Dahlmeier | Hwee Tou Ng
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

NUS at the HOO 2012 Shared Task
Daniel Dahlmeier | Hwee Tou Ng | Eric Jun Feng Ng
Proceedings of the Seventh Workshop on Building Educational Applications Using NLP

Word Sense Disambiguation Improves Information Retrieval
Zhi Zhong | Hwee Tou Ng
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Character-Level Machine Translation Evaluation for Languages with Ambiguous Word Boundaries
Chang Liu | Hwee Tou Ng
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Combining Coherence Models and Machine Translation Evaluation Metrics for Summarization Evaluation
Ziheng Lin | Chang Liu | Hwee Tou Ng | Min-Yen Kan
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Source Language Adaptation for Resource-Poor Machine Translation
Pidong Wang | Preslav Nakov | Hwee Tou Ng
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

A Beam-Search Decoder for Grammatical Error Correction
Daniel Dahlmeier | Hwee Tou Ng
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

pdf bib
Grammatical Error Correction with Alternating Structure Optimization
Daniel Dahlmeier | Hwee Tou Ng
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Automatically Evaluating Text Coherence Using Discourse Relations
Ziheng Lin | Hwee Tou Ng | Min-Yen Kan
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Translating from Morphologically Complex Languages: A Paraphrase-Based Approach
Preslav Nakov | Hwee Tou Ng
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Automatic Evaluation of Chinese Translation Output: Word-Level or Character-Level?
Maoxi Li | Chengqing Zong | Hwee Tou Ng
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Correcting Semantic Collocation Errors with L1-induced Paraphrases
Daniel Dahlmeier | Hwee Tou Ng
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

pdf bib
Better Evaluation Metrics Lead to Better Machine Translation
Chang Liu | Daniel Dahlmeier | Hwee Tou Ng
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

pdf bib
A Probabilistic Forest-to-String Model for Language Generation from Typed Lambda Calculus Expressions
Wei Lu | Hwee Tou Ng
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

pdf bib
TESLA at WMT 2011: Translation Evaluation and Tunable Metric
Daniel Dahlmeier | Chang Liu | Hwee Tou Ng
Proceedings of the Sixth Workshop on Statistical Machine Translation

pdf bib
NUS at the HOO 2011 Pilot Shared Task
Daniel Dahlmeier | Hwee Tou Ng | Thanh Phu Tran
Proceedings of the 13th European Workshop on Natural Language Generation

2010

pdf bib
TESLA: Translation Evaluation of Sentences with Linear-Programming-Based Analysis
Chang Liu | Daniel Dahlmeier | Hwee Tou Ng
Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR

pdf bib
Joint Syntactic and Semantic Parsing of Chinese
Junhui Li | Guodong Zhou | Hwee Tou Ng
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

pdf bib
It Makes Sense: A Wide-Coverage Word Sense Disambiguation System for Free Text
Zhi Zhong | Hwee Tou Ng
Proceedings of the ACL 2010 System Demonstrations

pdf bib
Better Punctuation Prediction with Dynamic Conditional Random Fields
Wei Lu | Hwee Tou Ng
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf bib
PEM: A Paraphrase Evaluation Metric Exploiting Parallel Texts
Chang Liu | Daniel Dahlmeier | Hwee Tou Ng
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf bib
Maximum Metric Score Training for Coreference Resolution
Shanheng Zhao | Hwee Tou Ng
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

2009

pdf bib
Recognizing Implicit Discourse Relations in the Penn Discourse Treebank
Ziheng Lin | Min-Yen Kan | Hwee Tou Ng
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf bib
Natural Language Generation with Tree Conditional Random Fields
Wei Lu | Hwee Tou Ng | Wee Sun Lee
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf bib
Joint Learning of Preposition Senses and Semantic Roles of Prepositional Phrases
Daniel Dahlmeier | Hwee Tou Ng | Tanja Schultz
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf bib
Improved Statistical Machine Translation for Resource-Poor Languages Using Related Resource-Rich Languages
Preslav Nakov | Hwee Tou Ng
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf bib
NUS at WMT09: Domain Adaptation Experiments for English-Spanish Machine Translation of News Commentary Text
Preslav Nakov | Hwee Tou Ng
Proceedings of the Fourth Workshop on Statistical Machine Translation

pdf bib
The NUS Statistical Machine Translation System for IWSLT 2009
Preslav Nakov | Chang Liu | Wei Lu | Hwee Tou Ng
Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign

We describe the system developed by the team of the National University of Singapore for the Chinese-English BTEC task of the IWSLT 2009 evaluation campaign. We adopted a state-of-the-art phrase-based statistical machine translation approach and focused on experiments with different Chinese word segmentation standards. In our official submission, we trained a separate system for each segmenter and combined their outputs in a subsequent re-ranking step. Given the small size of the training data, we further re-trained the system on the development data after tuning. The evaluation results show that both strategies yield sizeable and consistent improvements in translation quality.
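The combination step described in this abstract lends itself to a short illustration. Below is a minimal sketch, in Python, of re-ranking pooled n-best lists from several per-segmenter systems under a linear model; the Hypothesis class, feature names, and weights here are illustrative assumptions, not the actual features or weights of the NUS submission.

    # Minimal sketch of output combination by re-ranking: each
    # per-segmenter system contributes an n-best list of translation
    # hypotheses, and a linear re-ranker picks the highest-scoring one.
    # Feature names and weights below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        text: str                    # candidate English translation
        features: dict[str, float]  # e.g. translation model score, LM score, length

    def rerank(nbest_lists: list[list[Hypothesis]],
               weights: dict[str, float]) -> str:
        """Pool hypotheses from all systems and return the best one
        under a linear model: score(h) = sum_k w_k * f_k(h)."""
        pooled = [h for nbest in nbest_lists for h in nbest]
        best = max(pooled,
                   key=lambda h: sum(weights.get(k, 0.0) * v
                                     for k, v in h.features.items()))
        return best.text

    # Illustrative usage with two per-segmenter systems.
    sys_a = [Hypothesis("i would like a window seat",
                        {"tm": -4.1, "lm": -12.3, "len": 6})]
    sys_b = [Hypothesis("i want a seat by the window",
                        {"tm": -3.8, "lm": -13.0, "len": 7})]
    weights = {"tm": 1.0, "lm": 0.6, "len": 0.1}  # hypothetical weights
    print(rerank([sys_a, sys_b], weights))

In practice the weights of such a re-ranker would be tuned on held-out development data, which is consistent with the tuning-and-re-training strategy the abstract describes.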

2008

pdf bib
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing
Mirella Lapata | Hwee Tou Ng
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

pdf bib
Decomposability of Translation Metrics for Improved Evaluation and Efficient Algorithms
David Chiang | Steve DeNeefe | Yee Seng Chan | Hwee Tou Ng
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

pdf bib
A Generative Model for Parsing Natural Language to Meaning Representations
Wei Lu | Hwee Tou Ng | Wee Sun Lee | Luke S. Zettlemoyer
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

pdf bib
Word Sense Disambiguation Using OntoNotes: An Empirical Study
Zhi Zhong | Hwee Tou Ng | Yee Seng Chan
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

pdf bib
MAXSIM: A Maximum Similarity Metric for Machine Translation Evaluation
Yee Seng Chan | Hwee Tou Ng
Proceedings of ACL-08: HLT

2007

pdf bib
Identification and Resolution of Chinese Zero Pronouns: A Machine Learning Approach
Shanheng Zhao | Hwee Tou Ng
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

pdf bib
A Statistical Language Modeling Approach to Lattice-Based Spoken Document Retrieval
Tee Kiah Chia | Haizhou Li | Hwee Tou Ng
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

pdf bib
Word Sense Disambiguation Improves Statistical Machine Translation
Yee Seng Chan | Hwee Tou Ng | David Chiang
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

pdf bib
Domain Adaptation with Active Learning for Word Sense Disambiguation
Yee Seng Chan | Hwee Tou Ng
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

pdf bib
Learning Predictive Structures for Semantic Role Labeling of NomBank
Chang Liu | Hwee Tou Ng
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

pdf bib
A Unified Tagging Approach to Text Normalization
Conghui Zhu | Jie Tang | Hang Li | Hwee Tou Ng | Tiejun Zhao
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

pdf bib
SemEval-2007 Task 11: English Lexical Sample Task via English-Chinese Parallel Text
Hwee Tou Ng | Yee Seng Chan
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

pdf bib
NUS-PT: Exploiting Parallel Texts for Word Sense Disambiguation in the English All-Words Tasks
Yee Seng Chan | Hwee Tou Ng | Zhi Zhong
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

2006

pdf bib
Estimating Class Priors in Domain Adaptation for Word Sense Disambiguation
Yee Seng Chan | Hwee Tou Ng
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

pdf bib
Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing
Hwee Tou Ng | Olivia O.Y. Kwong
Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing

pdf bib
Semantic Role Labeling of NomBank: A Maximum Entropy Approach
Zheng Ping Jiang | Hwee Tou Ng
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing

2005

pdf bib
A Maximum Entropy Approach to Chinese Word Segmentation
Jin Kiat Low | Hwee Tou Ng | Wenyuan Guo
Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing

pdf bib
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)
Kevin Knight | Hwee Tou Ng | Kemal Oflazer
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

2004

pdf bib
Mining New Word Translations from Comparable Corpora
Li Shao | Hwee Tou Ng
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

pdf bib
Supervised Word Sense Disambiguation with Support Vector Machines and Multiple Knowledge Sources
Yoong Keok Lee | Hwee Tou Ng | Tee Kiah Chia
Proceedings of SENSEVAL-3, the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text

pdf bib
Chinese Part-of-Speech Tagging: One-at-a-Time or All-at-Once? Word-Based or Character-Based?
Hwee Tou Ng | Jin Kiat Low
Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing

2003

pdf bib
Named Entity Recognition with a Maximum Entropy Approach
Hai Leong Chieu | Hwee Tou Ng
Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003

pdf bib
Closing the Gap: Learning-Based Information Extraction Rivaling Knowledge-Engineering Methods
Hai Leong Chieu | Hwee Tou Ng | Yoong Keok Lee
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

pdf bib
Exploiting Parallel Texts for Word Sense Disambiguation: An Empirical Study
Hwee Tou Ng | Bin Wang | Yee Seng Chan
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

2002

pdf bib
Teaching a Weaker Classifier: Named Entity Recognition on Upper Case Text
Hai Leong Chieu | Hwee Tou Ng
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics

pdf bib
Named Entity Recognition: A Maximum Entropy Approach Using Global Information
Hai Leong Chieu | Hwee Tou Ng
COLING 2002: The 19th International Conference on Computational Linguistics

pdf bib
An Empirical Evaluation of Knowledge Sources and Learning Algorithms for Word Sense Disambiguation
Yoong Keok Lee | Hwee Tou Ng
Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002)

2001

pdf bib
Question Answering Using a Large Text Database: A Machine Learning Approach
Hwee Tou Ng | Jennifer Lai Pheng Kwan | Yiyuan Xia
Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing

pdf bib
A Machine Learning Approach to Coreference Resolution of Noun Phrases
Wee Meng Soon | Hwee Tou Ng | Daniel Chung Yong Lim
Computational Linguistics, Volume 27, Number 4, December 2001

2000

pdf bib
A Machine Learning Approach to Answering Questions for Reading Comprehension Tests
Hwee Tou Ng | Leong Hwee Teo | Jennifer Lai Pheng Kwan
2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora

1999

pdf bib
A Case Study on Inter-Annotator Agreement for Word Sense Disambiguation
Hwee Tou Ng | Chung Yong Lim | Shou King Foo
SIGLEX99: Standardizing Lexical Resources

pdf bib
Corpus-Based Learning for Noun Phrase Coreference Resolution
Wee Meng Soon | Hwee Tou Ng | Chung Yong Lim
1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora

pdf bib
Learning to Recognize Tables in Free Text
Hwee Tou Ng | Chung Yong Lim | Jessica Li Teng Koo
Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics

1997

pdf bib
Getting Serious about Word Sense Disambiguation
Hwee Tou Ng
Tagging Text with Lexical Semantics: Why, What, and How?

pdf bib
Exemplar-Based Word Sense Disambiguation: Some Recent Improvements
Hwee Tou Ng
Second Conference on Empirical Methods in Natural Language Processing

1996

pdf bib
Integrating Multiple Knowledge Sources to Disambiguate Word Sense: An Exemplar-Based Approach
Hwee Tou Ng | Hian Beng Lee
34th Annual Meeting of the Association for Computational Linguistics
