
1 code implementation • EMNLP 2021 • Manling Li, Tengfei Ma, Mo Yu, Lingfei Wu, Tian Gao, Heng Ji, Kathleen McKeown

Timeline Summarization identifies major events from a news collection and describes them following temporal order, with key dates tagged.

no code implementations • EMNLP 2021 • Zeru Zhang, Zijie Zhang, Yang Zhou, Lingfei Wu, Sixing Wu, Xiaoying Han, Dejing Dou, Tianshi Che, Da Yan

Recent literature has shown that knowledge graph (KG) learning models are highly vulnerable to adversarial attacks.

no code implementations • NeurIPS 2021 • Zeru Zhang, Jiayin Jin, Zijie Zhang, Yang Zhou, Xin Zhao, Jiaxiang Ren, Ji Liu, Lingfei Wu, Ruoming Jin, Dejing Dou

Despite achieving remarkable efficiency, traditional network pruning techniques often follow manually-crafted heuristics to generate pruned sparse networks.

no code implementations • NeurIPS 2021 • Shen Kai, Lingfei Wu, Siliang Tang, Yueting Zhuang, Zhen He, Zhuoye Ding, Yun Xiao, Bo Long

The task of visual question generation (VQG) aims to generate human-like neural questions from an image and potentially other side information (e.g., answer type or the answer itself).

no code implementations • 20 Nov 2021 • Hanning Gao, Lingfei Wu, Po Hu, Zhihua Wei, Fangli Xu, Bo Long

Finally, we apply an answer selection model on the full KSG and the top-ranked sub-KSGs respectively to validate the effectiveness of our proposed graph-augmented learning to rank method.

no code implementations • 20 Nov 2021 • Hanning Gao, Lingfei Wu, Po Hu, Zhihua Wei, Fangli Xu, Bo Long

Most previous methods solve this task using a sequence-to-sequence model, or using a graph-based model to encode RDF triples and generate a text sequence.

no code implementations • 29 Sep 2021 • Dong Chen, Lingfei Wu, Siliang Tang, Fangli Xu, Yun Xiao, Bo Long, Yueting Zhuang

Furthermore, to obtain a more accurate main direction for Eigen-Reptile in the presence of label noise, we further propose Introspective Self-paced Learning (ISPL).

no code implementations • 24 Sep 2021 • Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Ethan Chang, Bo Long

In this work, we propose an end-to-end heterogeneous global graph learning framework, namely Graph Learning Augmented Heterogeneous Graph Neural Network (GL-HGNN) for social recommendation.

no code implementations • 24 Sep 2021 • Qi Shen, Lingfei Wu, Yitong Pang, Yiming Zhang, Zhihua Wei, Fangli Xu, Bo Long

Based on the global graph, MGCNet attaches the global interest representation to the final item representation based on local contextual intention to address limitation (iii).

no code implementations • 21 Jul 2021 • Zhiqian Chen, Fanglan Chen, Lei Zhang, Taoran Ji, Kaiqun Fu, Liang Zhao, Feng Chen, Lingfei Wu, Charu Aggarwal, Chang-Tien Lu

Deep learning's performance has been widely recognized in recent years.

no code implementations • 8 Jul 2021 • Yitong Pang, Lingfei Wu, Qi Shen, Yiming Zhang, Zhihua Wei, Fangli Xu, Ethan Chang, Bo Long, Jian Pei

Additionally, existing personalized session-based recommenders capture user preference only based on the sessions of the current user, but ignore the useful item-transition patterns from other users' historical sessions.

1 code implementation • 10 Jun 2021 • Lingfei Wu, Yu Chen, Kai Shen, Xiaojie Guo, Hanning Gao, Shucheng Li, Jian Pei, Bo Long

Deep learning has become the dominant approach in coping with various tasks in Natural Language Processing (NLP).

no code implementations • NAACL 2021 • Lingfei Wu, Yu Chen, Heng Ji, Yunyao Li

Due to its great power in modeling non-Euclidean data like graphs or manifolds, deep learning on graph techniques (i.e., Graph Neural Networks (GNNs)) have opened a new door to solving challenging graph-related NLP problems.

1 code implementation • Findings (EMNLP) 2021 • Yangkai Du, Tengfei Ma, Lingfei Wu, Fangli Xu, Xuhong Zhang, Bo Long, Shouling Ji

Unlike vision tasks, the data augmentation method for contrastive learning has not been investigated sufficiently in language tasks.

2 code implementations • Findings (EMNLP) 2021 • Xuye Liu, Dakuo Wang, April Wang, Yufang Hou, Lingfei Wu

Jupyter notebook allows data scientists to write machine learning code together with its documentation in cells.

no code implementations • 14 Feb 2021 • Xiao Qin, Nasrullah Sheikh, Berthold Reinwald, Lingfei Wu

Furthermore, the expressivity of the learned representation depends on the quality of negative samples used during training.

no code implementations • 27 Jan 2021 • Di Tong, Lingfei Wu, James Allen Evans

While substantial scholarship has focused on estimating the susceptibility of jobs to automation, little has examined how job contents evolve in the information age despite the fact that new technologies typically substitute for specific job tasks, shifting job skills rather than eliminating whole jobs.

no code implementations • 11 Jan 2021 • Aakash Bansal, Zachary Eberhart, Lingfei Wu, Collin McMillan

In this paper, we take initial steps to bringing state-of-the-art neural QA technologies to Software Engineering applications by designing a context-based QA system for basic questions about subroutines.

no code implementations • 1 Jan 2021 • Dong Chen, Lingfei Wu, Siliang Tang, Fangli Xu, Juncheng Li, Chang Zong, Chilie Tan, Yueting Zhuang

In particular, we first cast the meta-overfitting problem (overfitting on sampling and label noise) as a gradient noise problem since few available samples cause meta-learner to overfit on existing examples (clean or corrupted) of an individual task at every gradient step.

no code implementations • 1 Jan 2021 • Chengyue Huang, Lingfei Wu, Yadong Ding, Siliang Tang, Fangli Xu, Chang Zong, Chilie Tan, Yueting Zhuang

To this end, we learn a differentiable graph neural network as a surrogate model to rank candidate architectures, which enables us to obtain gradients w.r.t. the input architectures.

no code implementations • 1 Jan 2021 • Shen Kai, Lingfei Wu, Siliang Tang, Fangli Xu, Zhu Zhang, Yu Qiang, Yueting Zhuang

The task of visual question generation (VQG) aims to generate human-like questions from an image and potentially other side information (e.g., answer type or the answer itself).

no code implementations • 1 Jan 2021 • Xiang Ling, Lingfei Wu, Saizhuo Wang, Tengfei Ma, Fangli Xu, Alex X. Liu, Chunming Wu, Shouling Ji

The proposed MGMN model consists of a node-graph matching network for effectively learning cross-level interactions between nodes of a graph and the other whole graph, and a siamese graph neural network to learn global-level interactions between two graphs.

no code implementations • 25 Oct 2020 • Hanlu Wu, Tengfei Ma, Lingfei Wu, Shouling Ji

Besides, we exploit the unknown latent interaction between the same type of nodes (workers or tasks) by adding a homogeneous attention layer in the graph neural networks.

no code implementations • 24 Oct 2020 • Xiang Ling, Lingfei Wu, Saizhuo Wang, Gaoning Pan, Tengfei Ma, Fangli Xu, Alex X. Liu, Chunming Wu, Shouling Ji

To this end, we first represent both natural language query texts and programming language code snippets with the unified graph-structured data, and then use the proposed graph matching and searching model to retrieve the best matching code snippet.

no code implementations • 22 Oct 2020 • Devendra Singh Sachan, Lingfei Wu, Mrinmaya Sachan, William Hamilton

In this work, we introduce a series of strong transformer models for multi-hop question generation, including a graph-augmented transformer that leverages relations between entities in the text.

1 code implementation • NAACL 2021 • Wenhao Yu, Lingfei Wu, Yu Deng, Qingkai Zeng, Ruchi Mahindru, Sinem Guven, Meng Jiang

In this paper, we propose a novel framework of deep transfer learning to effectively address technical QA across tasks and domains.

1 code implementation • EMNLP 2020 • Hanlu Wu, Tengfei Ma, Lingfei Wu, Tariro Manyumwa, Shouling Ji

Experiments on Newsroom and CNN/Daily Mail demonstrate that our new evaluation method outperforms other metrics even without reference summaries.

1 code implementation • EMNLP 2020 • Wenhao Yu, Lingfei Wu, Yu Deng, Ruchi Mahindru, Qingkai Zeng, Sinem Guven, Meng Jiang

In recent years, the need for community technical question-answering sites has increased significantly.

1 code implementation • 8 Jul 2020 • Xiang Ling, Lingfei Wu, Saizhuo Wang, Tengfei Ma, Fangli Xu, Alex X. Liu, Chunming Wu, Shouling Ji

In particular, the proposed MGMN consists of a node-graph matching network for effectively learning cross-level interactions between each node of one graph and the other whole graph, and a siamese graph neural network to learn global-level interactions between two input graphs.
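As a rough illustration of the cross-level interaction described above, here is a minimal pure-Python sketch (not the authors' MGMN implementation; the dot-product attention and the cosine match signal are illustrative assumptions): each node attends over every node of the other graph and is compared with the attention-weighted graph summary.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def node_graph_match(node, other_nodes):
    """Attend one node over the other graph's node set, then compare the node
    with the attended summary (a cross-level node-graph interaction signal)."""
    scores = softmax([sum(a * b for a, b in zip(node, o)) for o in other_nodes])
    summary = [sum(w * o[k] for w, o in zip(scores, other_nodes))
               for k in range(len(node))]
    # cosine similarity between the node and the attended graph summary
    dot = sum(a * b for a, b in zip(node, summary))
    na = math.sqrt(sum(a * a for a in node))
    ns = math.sqrt(sum(s * s for s in summary))
    return dot / (na * ns)
```

A node identical to the other graph's only node scores 1.0; an orthogonal one scores 0.0.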

no code implementations • ACL 2020 • Ying Lin, Heng Ji, Fei Huang, Lingfei Wu

OneIE performs end-to-end IE in four stages: (1) Encoding a given sentence as contextualized word representations; (2) Identifying entity mentions and event triggers as nodes; (3) Computing label scores for all nodes and their pairwise links using local classifiers; (4) Searching for the globally optimal graph with a beam decoder.
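The four-stage pipeline can be sketched as a toy joint decoding: local classifiers score each candidate node label, and a beam search (stage 4) picks the globally best assignment by combining local scores with pairwise link compatibilities. This is an illustrative reconstruction, not the OneIE code; the example scores, labels, and beam size are made up.

```python
# Toy local label scores for two identified nodes (stages 1-3 output).
NODE_SCORES = {
    "Obama": {"PER": 0.9, "ORG": 0.1},
    "visited": {"Movement": 0.8, "Contact": 0.2},
}
# Pairwise link-compatibility scores between labels (hypothetical values).
PAIR_SCORES = {("PER", "Movement"): 0.5, ("PER", "Contact"): 0.4}

def beam_search(node_scores, pair_scores, beam_size=2):
    """Stage 4: search for the globally optimal joint labelling with a beam."""
    nodes = list(node_scores)
    beam = [((), 0.0)]  # (partial label tuple, accumulated score)
    for node in nodes:
        candidates = []
        for labels, score in beam:
            for label, local in node_scores[node].items():
                pair = sum(pair_scores.get((l, label), 0.0) for l in labels)
                candidates.append((labels + (label,), score + local + pair))
        beam = sorted(candidates, key=lambda c: -c[1])[:beam_size]
    best_labels, best_score = beam[0]
    return dict(zip(nodes, best_labels)), best_score
```

With the toy scores above, the beam keeps the PER/Movement assignment because its pairwise link bonus outweighs the alternatives.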

1 code implementation • NeurIPS 2020 • Yu Chen, Lingfei Wu, Mohammed J. Zaki

In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly and iteratively learning graph structure and graph embedding.
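The iterative idea can be sketched in a few lines: alternate between (a) learning a graph from current node embeddings via a similarity metric and (b) updating the embeddings with a message-passing step over that graph. A minimal pure-Python sketch under simplifying assumptions (thresholded cosine similarity, mean aggregation, fixed iteration count), not the IDGL implementation:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def learn_graph(feats, eps=0.5):
    # Structure-learning step: connect pairs whose similarity exceeds eps.
    n = len(feats)
    return [[1.0 if i != j and cosine(feats[i], feats[j]) > eps else 0.0
             for j in range(n)] for i in range(n)]

def propagate(feats, adj):
    # Embedding step: one mean-aggregation pass over the learned graph.
    out = []
    for i, f in enumerate(feats):
        nbrs = [feats[j] for j in range(len(feats)) if adj[i][j]]
        if not nbrs:
            out.append(f)
            continue
        agg = [sum(vals) / len(nbrs) for vals in zip(*nbrs)]
        out.append([(a + b) / 2 for a, b in zip(f, agg)])  # mix self + neighbours
    return out

def idgl_sketch(feats, iters=3):
    # Jointly and iteratively refine structure and embeddings.
    adj = learn_graph(feats)
    for _ in range(iters):
        adj = learn_graph(feats)
        feats = propagate(feats, adj)
    return adj, feats
```

Since the similarity is symmetric, the learned adjacency stays symmetric across iterations.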

1 code implementation • 9 Jun 2020 • Xiaojie Guo, Liang Zhao, Zhao Qin, Lingfei Wu, Amarda Shehu, Yanfang Ye

Disentangled representation learning has recently attracted a significant amount of attention, particularly in the field of image representation learning.

no code implementations • ACL 2020 • Wenhao Yu, Lingfei Wu, Qingkai Zeng, Shu Tao, Yu Deng, Meng Jiang

Existing methods learned semantic representations with dual encoders or dual variational auto-encoders.

no code implementations • ACL 2020 • Rajarshi Haldar, Lingfei Wu, JinJun Xiong, Julia Hockenmaier

The ability to match pieces of code to their corresponding natural language descriptions and vice versa is fundamental for natural language search interfaces to software repositories.

1 code implementation • ACL 2020 • Luyang Huang, Lingfei Wu, Lu Wang

Sequence-to-sequence models for abstractive summarization have been studied extensively, yet the generated summaries commonly suffer from fabricated content, and are often found to be near-extractive.

1 code implementation • 13 Apr 2020 • Yu Chen, Lingfei Wu, Mohammed J. Zaki

In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.

Ranked #3 on KG-to-Text Generation on WebQuestions

1 code implementation • 10 Apr 2020 • Sakib Haque, Alexander LeClair, Lingfei Wu, Collin McMillan

In this paper, we present an approach that models the file context of subroutines (i.e., other subroutines in the same file) and uses an attention mechanism to find words and concepts to use in summaries.

Software Engineering

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu, Fengyuan Xu, Sheng Zhong

In particular, we investigated our model on two problems: neural semantic parsing and math word problem solving.

1 code implementation • 6 Apr 2020 • Alexander LeClair, Sakib Haque, Lingfei Wu, Collin McMillan

The first approaches to use structural information flattened the AST into a sequence.

no code implementations • 27 Feb 2020 • Zhiqian Chen, Fanglan Chen, Lei Zhang, Taoran Ji, Kaiqun Fu, Liang Zhao, Feng Chen, Lingfei Wu, Charu Aggarwal, Chang-Tien Lu

Deep learning's success has been widely recognized in a variety of machine learning tasks, including image classification, audio recognition, and natural language processing.

1 code implementation • 17 Dec 2019 • Yu Chen, Lingfei Wu, Mohammed J. Zaki

In this paper, we propose an end-to-end graph learning framework, namely Deep Iterative and Adaptive Learning for Graph Neural Networks (DIAL-GNN), for jointly learning the graph structure and graph embeddings.

no code implementations • 25 Nov 2019 • Lingfei Wu, Ian En-Hsu Yen, Zhen Zhang, Kun Xu, Liang Zhao, Xi Peng, Yinglong Xia, Charu Aggarwal

In particular, RGE is shown to achieve (quasi-)linear scalability with respect to the number and the size of the graphs.

no code implementations • 25 Nov 2019 • Lingfei Wu, Ian En-Hsu Yen, Siyu Huo, Liang Zhao, Kun Xu, Liang Ma, Shouling Ji, Charu Aggarwal

In this paper, we present a new class of global string kernels that aims to (i) discover global properties hidden in the strings through global alignments, (ii) maintain positive-definiteness of the kernel, without introducing a diagonal dominant kernel matrix, and (iii) have a training cost linear with respect to not only the length of the string but also the number of training string samples.

1 code implementation • NeurIPS 2019 • Zhen Zhang, Yijian Xiang, Lingfei Wu, Bing Xue, Arye Nehorai

Graph matching plays a central role in such fields as computer vision, pattern recognition, and bioinformatics.

no code implementations • arXiv 2020 • Maxwell Crouse, Ibrahim Abdelaziz, Cristina Cornelio, Veronika Thost, Lingfei Wu, Kenneth Forbus, Achille Fokoue

Recent advances in the integration of deep learning with automated theorem proving have centered around the representation of logical formulae as inputs to deep learning systems.

Ranked #1 on Automated Theorem Proving on HolStep (Conditional)

no code implementations • WS 2019 • Siyu Huo, Tengfei Ma, Jie Chen, Maria Chang, Lingfei Wu, Michael Witbrock

Semantic parsing is a fundamental problem in natural language understanding, as it involves the mapping of natural language to structured forms such as executable queries or logic-like knowledge representations.

1 code implementation • 19 Oct 2019 • Yu Chen, Lingfei Wu, Mohammed J. Zaki

Natural question generation (QG) aims to generate questions from a passage and an answer.

no code implementations • 5 Oct 2019 • Fangli Xu, Lingfei Wu, KP Thai, Carol Hsu, Wei Wang, Richard Tong

Automatic analysis of teacher and student interactions could be very important to improve the quality of teaching and student engagement.

no code implementations • 25 Sep 2019 • Yu Chen, Lingfei Wu, Mohammed J. Zaki

In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly learning the graph structure and graph embedding.

no code implementations • 25 Sep 2019 • Xiang Ling, Lingfei Wu, Saizhuo Wang, Tengfei Ma, Fangli Xu, Chunming Wu, Shouling Ji

The proposed HGMN model consists of a multi-perspective node-graph matching network for effectively learning cross-level interactions between parts of a graph and a whole graph, and a siamese graph neural network for learning global-level interactions between two graphs.

1 code implementation • 26 Aug 2019 • Xiaojie Guo, Amir Alipour-Fanid, Lingfei Wu, Hemant Purohit, Xiang Chen, Kai Zeng, Liang Zhao

At present, object recognition studies are mostly conducted in a closed lab setting, where the classes seen in the test phase typically also appear in the training phase.

no code implementations • 22 Aug 2019 • Yuyang Gao, Lingfei Wu, Houman Homayoun, Liang Zhao

In this paper, we first formulate the transition of user activities as a dynamic graph with multi-attributed nodes, then formalize the health stage inference task as a dynamic graph-to-sequence learning problem, and hence propose a novel dynamic graph-to-sequence neural networks architecture (DynGraph2Seq) to address all the challenges.

1 code implementation • ICLR 2020 • Yu Chen, Lingfei Wu, Mohammed J. Zaki

Natural question generation (QG) aims to generate questions from a passage and an answer.

1 code implementation • 31 Jul 2019 • Yu Chen, Lingfei Wu, Mohammed J. Zaki

The proposed GraphFlow model can effectively capture conversational flow in a dialog, and shows competitive performance compared to existing state-of-the-art methods on CoQA, QuAC and DoQA benchmarks.

no code implementations • 10 Jun 2019 • Yao Ma, Suhang Wang, Tyler Derr, Lingfei Wu, Jiliang Tang

Graph Neural Networks (GNNs) have boosted the performance of many graph related tasks such as node classification and graph classification.

no code implementations • NAACL 2019 • Hongyu Gong, Suma Bhat, Lingfei Wu, JinJun Xiong, Wen-mei Hwu

Our generator employs an attention-based encoder-decoder to transfer a sentence from the source style to the target style.

2 code implementations • NAACL 2019 • Yu Chen, Lingfei Wu, Mohammed J. Zaki

When answering natural language questions over knowledge bases (KBs), different question components and KB aspects play different roles.

no code implementations • 9 Jan 2019 • Maxwell Crouse, Achille Fokoue, Maria Chang, Pavan Kapanipathi, Ryan Musa, Constantine Nakos, Lingfei Wu, Kenneth Forbus, Michael Witbrock

Machine learning systems regularly deal with structured data in real-world applications.

1 code implementation • 1 Dec 2018 • Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock

In this paper we formulate the attacks with discrete input on a set function as an optimization task.

no code implementations • NeurIPS 2018 • Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Liang Zhao, Yinglong Xia, Michael Witbrock

Graph kernels are one of the most important methods for graph data analysis and have been successfully applied in diverse applications.

1 code implementation • 12 Nov 2018 • Huimin Xu, Zhang Zhang, Lingfei Wu, Cheng-Jun Wang

Our analysis of thousands of movies and books reveals how these cultural products weave stereotypical gender roles into morality tales and perpetuate gender inequality through storytelling.

1 code implementation • EMNLP 2018 • Lingfei Wu, Ian E. H. Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, Michael J. Witbrock

While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively little success in extending it to generate unsupervised sentence or document embeddings.

1 code implementation • 14 Sep 2018 • Lingfei Wu, Ian En-Hsu Yen, Jin-Feng Yi, Fangli Xu, Qi Lei, Michael Witbrock

The proposed kernel does not suffer from the issue of diagonal dominance while naturally enjoying a Random Features (RF) approximation, which reduces the computational complexity of existing DTW-based techniques from quadratic to linear in terms of both the number and the length of time-series.
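The random-features idea can be sketched as follows: sample a bank of short random anchor series once, and map each time series to a feature vector of (transformed) DTW distances to the anchors, so the kernel becomes a linear inner product of features. This is a simplified illustration of the paper's construction, not the Random Warping Series algorithm itself; the anchor distribution, feature form exp(-gamma * DTW), and all parameter values here are assumptions.

```python
import math
import random

def dtw(a, b):
    """Dynamic time warping distance between two 1-D series (standard DP)."""
    INF = float("inf")
    d = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[len(a)][len(b)]

def random_features(x, anchors, gamma=1.0):
    """phi_i(x) = exp(-gamma * DTW(x, w_i)) against R random anchor series."""
    return [math.exp(-gamma * dtw(x, w)) for w in anchors]

random.seed(0)
ANCHORS = [[random.uniform(0.0, 1.0) for _ in range(3)] for _ in range(50)]

def k(x, y):
    """Linear-cost kernel estimate: k(x, y) ~ <phi(x), phi(y)> / R."""
    px, py = random_features(x, ANCHORS), random_features(y, ANCHORS)
    return sum(a * b for a, b in zip(px, py)) / len(ANCHORS)
```

Once the features are computed, comparing two series costs only an inner product, which is the source of the linear scaling in the number of series.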

2 code implementations • 14 Sep 2018 • Lingfei Wu, Ian E. H. Yen, Jie Chen, Rui Yan

We thus propose the first analysis of RB from the perspective of optimization, which by interpreting RB as a Randomized Block Coordinate Descent in the infinite-dimensional space, gives a faster convergence rate compared to that of other random features.

1 code implementation • EMNLP 2018 • Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Vadim Sheinin

Previous work approaches the SQL-to-text generation task using vanilla Seq2Seq models, which may not fully capture the inherent graph-structured information in SQL query.

1 code implementation • EMNLP 2018 • Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Li-Wei Chen, Vadim Sheinin

Existing neural semantic parsers mainly utilize a sequence encoder, i.e., a sequential LSTM, to extract word order features while neglecting other valuable syntactic information such as dependency graphs or constituency trees.

1 code implementation • ECCV 2018 • Zhiqiang Tang, Xi Peng, Shijie Geng, Lingfei Wu, Shaoting Zhang, Dimitris Metaxas

Finally, to reduce the memory consumption and high precision operations both in training and testing, we further quantize weights, inputs, and gradients of our localization network to low bit-width numbers.

Ranked #15 on Pose Estimation on MPII Human Pose

1 code implementation • 30 May 2018 • Pin-Yu Chen, Lingfei Wu, Sijia Liu, Indika Rajapakse

The von Neumann graph entropy (VNGE) facilitates measurement of information divergence and distance between graphs in a graph sequence.
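Computing VNGE exactly requires the spectrum of the trace-normalized Laplacian; a common way to avoid the eigendecomposition is a quadratic (Taylor) approximation, H ≈ 1 − tr(L²)/tr(L)². The sketch below is an illustrative pure-Python version of that approximation idea, not the paper's algorithm; the helper name is hypothetical.

```python
def vnge_quadratic(adj):
    """Quadratic approximation of von Neumann graph entropy:
    H ~ 1 - trace(L^2) / trace(L)^2, with L = D - A the graph Laplacian."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    lap = [[(deg[i] if i == j else 0.0) - adj[i][j] for j in range(n)]
           for i in range(n)]
    tr = sum(deg)  # trace(L) equals the sum of degrees
    tr_sq = sum(lap[i][j] * lap[j][i] for i in range(n) for j in range(n))
    return 1.0 - tr_sq / (tr * tr)
```

For a single edge the approximation gives 0 (matching the exact entropy, since the normalized spectrum is {0, 1}); for a triangle it gives 0.5, a lower bound on the exact value ln 2.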

1 code implementation • 25 May 2018 • Lingfei Wu, Pin-Yu Chen, Ian En-Hsu Yen, Fangli Xu, Yinglong Xia, Charu Aggarwal

Moreover, our method exhibits linear scalability in both the number of data samples and the number of RB features.

Ranked #5 on Image/Document Clustering on pendigits

2 code implementations • 25 May 2018 • Xiaojie Guo, Lingfei Wu, Liang Zhao

To achieve this, we propose a novel Graph-Translation-Generative Adversarial Networks (GT-GAN) which will generate a graph translator from input to target graphs.

4 code implementations • ICLR 2019 • Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Michael Witbrock, Vadim Sheinin

Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.
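The edge-direction idea can be illustrated by aggregating incoming and outgoing neighbors separately and concatenating the two pooled vectors with the node's own embedding. This is a minimal hypothetical sketch of directional aggregation, not the paper's Graph2Seq aggregator; the mean pooling and concatenation are illustrative choices.

```python
def aggregate(node, edges, emb):
    """Aggregate forward (outgoing) and backward (incoming) neighbours
    separately, so edge direction survives in the node embedding."""
    out_nbrs = [emb[d] for s, d in edges if s == node]
    in_nbrs = [emb[s] for s, d in edges if d == node]

    def mean(vecs, dim):
        return [sum(v[k] for v in vecs) / len(vecs) if vecs else 0.0
                for k in range(dim)]

    dim = len(emb[node])
    # concatenate [self ; forward-pooled ; backward-pooled]
    return emb[node] + mean(out_nbrs, dim) + mean(in_nbrs, dim)
```

An undirected aggregator would pool both neighbor sets together and lose which side of the edge the node sits on.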

Ranked #1 on SQL-to-Text on WikiSQL

no code implementations • 14 Feb 2018 • Lingfei Wu, Ian En-Hsu Yen, Fangli Xu, Pradeep Ravikumar, Michael Witbrock

For many machine learning problem settings, particularly with structured inputs such as sequences or sets of objects, a distance measure between inputs can be specified more naturally than a feature representation.

no code implementations • 14 Sep 2017 • Pin-Yu Chen, Lingfei Wu

The presented method, SGC-GEN, not only considers the detection error caused by the corresponding model mismatch to a given graph, but also yields a theoretical guarantee on community detectability by analyzing Spectral Graph Clustering (SGC) under GENerative community models (GCMs).

1 code implementation • 7 Sep 2017 • Lingfei Wu, Dashun Wang, James A. Evans

Teams dominate the production of high-impact science and technology.

Physics and Society • Digital Libraries • Social and Information Networks

no code implementations • 12 Feb 2017 • Qi Lei, Jin-Feng Yi, Roman Vaculin, Lingfei Wu, Inderjit S. Dhillon

A considerable amount of clustering algorithms take instance-feature matrices as their inputs.
