Zhang Chengzhi
Information Discovery with Machine Intelligence for Language
2019-07-07 22:01
Tags: CFP, Machine Intelligence, NLP

Call for Papers: Information Discovery with Machine Intelligence for Language


Special issue call for papers from Information Discovery and Delivery

Machine Intelligence for Natural Language Processing has been a rapidly developing research area in recent years (Young et al. 2018). With the development of neural language modeling and transfer learning techniques such as ULMFiT (Howard and Ruder 2018), BERT (Devlin et al. 2018) and GPT-2, researchers have achieved state-of-the-art results on a variety of NLP tasks, at times even claiming to have outperformed human beings (Czapla, Howard, and Kardas 2018). We are interested in whether and how the exciting new technologies currently being developed in deep learning and Natural Language Processing can lead to a boom of applications in the field of information discovery, and thereby benefit human beings.

Here are some examples of the types of questions we hope may be addressed by submissions to this special issue:

  1. We know that labeling datasets can be very hard and/or expensive. Can we transfer a model trained on vast amounts of English text to a language for which only thousands, or even hundreds, of labeled examples exist? Should we exploit existing word-level embeddings, try sentence- and/or paragraph-level language modeling, or go in the opposite direction and employ character-level modeling?

  2. In computer vision, data augmentation is common practice: images are cropped or rotated to create "new" images for deep learning models, in the hope of avoiding overfitting to the training and test data. Can we do something similar with synthetic text data? Why or why not? Experiments are needed to show the results, and analysis to explain the reasons for those results.

  3. How can we lead a machine to understand the meaning of a text, instead of just making predictions based on things like frequency, probability, or pure luck?

  4. We cannot forget that real people generate most texts. Can we use data generated by people to model the users and find their potential information needs?

  5. We invite authors to submit papers that address the questions above, as well as related questions not outlined in this proposal. Whether a particular paper addresses the concerns of the special issue will be left to the discretion of the guest editors, in consultation with the senior editor(s) where necessary.
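As a concrete illustration of question 2, here is a minimal sketch of synthetic text generation via EDA-style random word deletion and swapping. This is our own illustrative example, not a method prescribed by this call; the function name and parameters are hypothetical, and real studies would compare far richer augmentation strategies:

```python
import random

def augment(sentence, n_aug=3, p_delete=0.1, seed=42):
    """Generate synthetic variants of a sentence by randomly
    deleting words and swapping one word pair (EDA-style)."""
    rng = random.Random(seed)
    words = sentence.split()
    variants = []
    for _ in range(n_aug):
        # randomly drop each word with probability p_delete
        new = [w for w in words if rng.random() > p_delete]
        if len(new) >= 2:
            # swap two randomly chosen positions
            i, j = rng.sample(range(len(new)), 2)
            new[i], new[j] = new[j], new[i]
        variants.append(" ".join(new))
    return variants

for v in augment("deep learning models often overfit small text corpora"):
    print(v)
```

Whether such perturbed sentences actually help a text classifier, or merely add noise, is exactly the kind of empirical question this special issue hopes submissions will examine.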

Topics of interest include, but are not limited to:

  • Language Modeling for Information Retrieval

  • Transfer Learning for Text Classification

  • Word and Character Representations for Cross-Lingual Analysis

  • Information Extraction and Knowledge Graph Building

  • Discourse Analysis at Sentence Level and Beyond

  • Synthetic Text Data for Machine Learning Purposes

  • User Modeling and Information Recommendation based on Text Analysis

  • Semantic Analysis with Machine Learning

  • Other applications of CL/NLP for Information Discovery

  • Other related topics 

Guest Editors

Dr. Shuyi Wang, Tianjin Normal University, nkwshuyi@gmail.com

Dr. Alexis Palmer, University of North Texas, alexis.palmer@unt.edu

Dr. Chengzhi Zhang, Nanjing University of Science and Technology, zhangcz@njust.edu.cn

Important Dates

First announcement/CfP: June 3, 2019
Second CfP: October 15, 2019
Final Reminder: November 11, 2019
Submissions due: November 18, 2019
Papers sent to reviewers: November 25, 2019
Reviews due: December 20, 2019
Author notification: January 13, 2020
Final papers: February 7, 2020

Submissions must comply with the journal's author guidelines, available at www.emeraldgrouppublishing.com/products/journals/author_guidelines.htm?id=idd

Submissions must be made through ScholarOne Manuscripts, the online submission and peer review system. Registration and access are available at mc.manuscriptcentral.com/idd

Information Discovery and Delivery covers information discovery and access for digital information researchers. This includes educators, knowledge professionals in education and cultural organizations, knowledge managers in media, health care and government, as well as librarians. IDD is a member of and subscribes to the principles of the Committee on Publication Ethics. 

References:

Czapla, Piotr, Jeremy Howard, and Marcin Kardas. 2018. “Universal Language Model Fine-Tuning with Subword Tokenization for Polish.” arXiv:1810.10222 [Cs, Stat], October. http://arxiv.org/abs/1810.10222.

Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” arXiv:1810.04805 [Cs], October. http://arxiv.org/abs/1810.04805.

Howard, Jeremy, and Sebastian Ruder. 2018. “Universal Language Model Fine-Tuning for Text Classification.” In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 328–39. Melbourne, Australia: Association for Computational Linguistics. https://aclweb.org/anthology/P18-1031.

Young, T., D. Hazarika, S. Poria, and E. Cambria. 2018. “Recent Trends in Deep Learning Based Natural Language Processing [Review Article].” IEEE Computational Intelligence Magazine 13 (3): 55–75. https://doi.org/10.1109/MCI.2018.2840738.

To reproduce this post, please contact the original author for authorization, and please note that it comes from Zhang Chengzhi's ScienceNet blog.

Link: https://wap.sciencenet.cn/blog-36782-1188507.html?mobile=1
