3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS)

6 Dec 2023 @ EMNLP 2023 in Singapore

With great scientific breakthroughs come solid engineering and open communities. The Natural Language Processing (NLP) community has benefited greatly from an open culture of sharing knowledge, data, and software. The primary objective of this workshop is to further the sharing of insights on the engineering and community aspects of creating, developing, and maintaining NLP open source software (OSS), which we seldom talk about in scientific publications. Our secondary goal is to promote synergies between different open source projects and encourage cross-software collaborations and comparisons.

There are many workshops focusing on the creation and curation of open language resources and annotations (e.g. BUCC, GWN, LAW, LOD, WAC). Moreover, we have the flagship LREC conference dedicated to linguistic resources. However, the engineering aspects of NLP-OSS are overlooked and under-discussed within the community. There are open source conferences and venues (such as FOSDEM, OSCON, Open Source Summit) where discussions range from operating system kernels to air traffic control hardware, but the representation of NLP-related presentations is limited. In the Machine Learning (ML) field, the Journal of Machine Learning Research - Machine Learning Open Source Software (JMLR-MLOSS) is a forum for discussion and dissemination of ML OSS topics. We envision the Workshop for NLP-OSS becoming a similar avenue for NLP-OSS discussions.

Recently, the successful BigScience workshop series has examined and promoted open science in NLP. While important and complementary, the goals of BigScience are distinct from those of NLP-OSS, which focuses more on the community of practice in open-source software in support of NLP and language technologies. We expect many BigScience participants to join NLP-OSS, as many of them served as PC members in past editions of NLP-OSS. Another grassroots community movement, EleutherAI, started with researchers attempting to replicate commercial language models and has since grown into an active, decentralized community of volunteer researchers, engineers, and developers focused on AI alignment, scaling, and open source AI research.

With the rise of open source startups like Hugging Face, the democratization of NLP gives researchers and the general public easy access to language models once available only to a handful of industrial research labs. This accelerated availability of NLP tools creates new synergies with cloud integrations, e.g. Hugging Face x AWS SageMaker, that allow engineers and researchers to train and deploy live applications with minimal infrastructure setup. Building on the shoulders of giants, the scikit-learn and Hugging Face ecosystems are now interoperable through the skops library. We want to highlight these emergent communities and synergies in the third NLP-OSS workshop and promote future collaborations with like-minded open source NLP researchers. We hope that the NLP-OSS workshop, hosted at an *ACL conference, can be the intellectual forum to collate this type of knowledge, announce new software and features, and promote the open source culture and OSS best practices.

Call for Papers

We invite full papers (8 pages) or short papers (4 pages) on topics related to NLP-OSS, broadly categorized into (i) software development, (ii) scientific contributions, and (iii) NLP-OSS case studies.

Submission information

Authors are invited to submit a full paper (8 pages) or a short paper (4 pages).

Submissions can be non-archival and still be presented at the NLP-OSS workshop, but we require at least a 4-page submission so that reviewers have enough information to make an acceptance decision. This non-archival option is helpful for authors who want to publish, or have published, the work elsewhere and would like to present and discuss pertinent NLP-OSS work with the workshop PC and attendees.

All papers are allowed unlimited (but sensible) pages for references. Final camera-ready versions will be allowed an additional page of content to address reviewers' comments.

Due to the nature of open source software, we find it tricky to anonymize "open source". For this reason, we do not require submissions to be anonymous. However, if you prefer your paper to be anonymized, please mask any identifiable phrases with REDACTED.

Submission should be formatted according to the EMNLP 2023 LaTeX or MS Word templates at https://2023.emnlp.org/calls/style-and-formatting/

Submissions should be uploaded to OpenReview conference management system at https://openreview.net/group?id=EMNLP/2023/Workshop/NLP-OSS

Important dates

The 3rd NLP-OSS workshop will be co-located with the EMNLP 2023 conference.

Invited Speakers

trlX: A Framework for Large Scale Open Source RLHF

Louis Castricato

Reinforcement learning from human feedback (RLHF) utilizes human feedback to better align large language models with human preferences via online optimization against a learned reward model. Current RLHF paradigms rely on Proximal Policy Optimization (PPO), which quickly becomes a challenge to implement and scale up to large architectures. To address this difficulty we created the trlX library as a feature-complete open-source framework for RLHF fine-tuning of models up to and exceeding 70 billion parameters. This talk presents the trlX implementation, which supports multiple types of distributed training, including distributed data parallel and model-sharded training, as well as tensor, sequential, and pipeline parallelism.


Louis Castricato is a research scientist at EleutherAI, working on RLHF infrastructure and engineering. Previously, Louis was head of LLMs at Stability AI and team lead at CarperAI, the largest open source RLHF group, as well as a PhD student at Brown University.

Southeast Asia LLMs: SEA-LION and Wangchan-LION

David Tat-Wee Ong and Peerat Limkonchotiwat

SEA-LION (Southeast Asian Languages In One Network) is a family of multilingual LLMs that is specifically pre-trained and instruct-tuned for the Southeast Asian (SEA) region, incorporating a custom SEABPETokenizer specially tailored for SEA languages. The first part of this talk will cover our design philosophy and pre-training methodology for SEA-LION. The second part will cover PyThaiNLP's work on Wangchan-LION, an instruct-tuned version of SEA-LION for the Thai community.


David is presently the Head of Engineering of AI Singapore's (AISG) Products pillar, managing a team of software engineers who support the implementation of AISG's products research. David Tat-Wee holds an M.Sc in Computer Science and an M.Sc in Financial Engineering from NUS.


Peerat Limkonchotiwat is a Ph.D. student in information science and technology (IST) at VISTEC, Thailand. He contributes to WangchanX, a Thai NLP group developing applications whose projects include Thai sentence embedding benchmarks, Thai text processing datasets (VISTEC-TPTH-2021 and NNER-TH), and the WangchanGLM and Wangchan-Sealion generative models.

Towards Explainable and Accessible AI

Brandon Duderstadt and Yuvanesh Anand

Large language models (LLMs) have recently achieved human-level performance on a range of professional and academic benchmarks. Unfortunately, the explainability and accessibility of these models have lagged behind their performance. State-of-the-art LLMs require costly infrastructure, are only accessible via rate-limited, geo-locked, and censored web interfaces, and lack publicly available code and technical reports. Moreover, the lack of tooling for understanding the massive datasets used to train LLMs, and produced by them, presents a critical challenge for explainability research. This talk will give an overview of Nomic AI's efforts to address these challenges through its two core initiatives: GPT4All and Atlas.


Brandon Duderstadt is the founder and CEO of Nomic AI, a startup whose mission is to improve the explainability and accessibility of AI. His experiences at Rad AI and Johns Hopkins convinced him of both the profound impact that this new wave of AI technology would have and the need to improve the explainability and accessibility of AI models.


Yuvanesh Anand is a freshman computer science student at Virginia Tech with broad interests in natural language processing and open-source software development. While still in high school, Yuvanesh joined Nomic AI as a software engineering intern, where he led the data collection and early development of the GPT4All project.

Workshop Program

The program schedule below is in Singapore Time (GMT+8).

09:00 - 09:15    Opening Remarks

09:15 - 10:15    Invited Talk - trlX: A Framework for Large Scale Open Source RLHF
Louis Castricato

10:30 - 11:00    Coffee Break

11:00 - 11:30    Lightning Session 1

11:30 - 12:15    Poster Session 1

An Open-source Web-based Application for Development of Resources and Technologies in Underresourced Languages
Siddharth Singh, Shyam Ratan, Neerav Mathur and Ritesh Kumar

AWARE-TEXT: An Android Package for Mobile Phone Based Text Collection and On-Device Processing
Salvatore Giorgi, Garrick Sherman, Douglas Bellew, Sharath Chandra Guntuku, Lyle Ungar and Brenda Curtis

Beyond the Repo: A Case Study on Open Source Integration with GECToR
Sanjna Kashyap, Zhaoyang Xie, Kenneth Steimel and Nitin Madnani

calamanCy: A Tagalog Natural Language Processing Toolkit
Lester James Validad Miranda

Deepparse: A State-Of-The-Art Library for Parsing Multinational Street Addresses
David Beauchemin

EDGAR-CRAWLER: Finding Needles in the Haystack of Financial Documents
Lefteris Loukas, Manos Fergadiotis and Prodromos Malakasiotis

Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models
Michael Günther, Georgios Mastrapas, Bo Wang, Han Xiao and Jonathan Geuter

Kani: A Lightweight and Highly Hackable Framework for Language Model Applications
Andrew Zhu, Liam Dugan, Alyssa Hwang and Chris Callison-Burch

nanoT5: Fast & Simple Pre-training and Fine-tuning of T5 Models with Limited Resources
Piotr Nawrot

PyThaiNLP: Thai Natural Language Processing in Python
Wannaphong Phatthiyaphaibun, Korakot Chaovavanich, Charin Polpanumas, Arthit Suriyawongkul, Lalita Lowphansirikul, Pattarawat Chormai, Peerat Limkonchotiwat, Thanathip Suntorntip and Can Udomcharoenchaikit

Rumour Detection in the Wild: A Browser Extension for Twitter
Andrej Jovanovic and Björn Ross

SOTASTREAM: A Streaming Approach to Machine Translation Training
Matt Post, Thamme Gowda, Roman Grundkiewicz, Huda Khayrallah, Rohit Jain and Marcin Junczys-Dowmunt

Zelda Rose: a tool for hassle-free training of transformer models
Loïc Grobol

12:15 - 13:45    Lunch Break

13:45 - 14:45    Invited Talk - SEA-LION (Southeast Asian Languages In One Network): A Family of Southeast Asian Language Models
David Ong and Peerat Limkonchotiwat

14:45 - 15:15    Lightning Session 2

15:15 - 15:30    Coffee Break

15:30 - 16:15    Poster Session 2

Antarlekhaka: A Comprehensive Tool for Multi-task Natural Language Annotation
Hrishikesh Terdalkar and Arnab Bhattacharya

DeepZensols: A Deep Learning Natural Language Processing Framework for Experimentation and Reproducibility
Paul Landes, Barbara Di Eugenio and Cornelia Caragea

GPT4All: An Ecosystem of Open Source Compressed Language Models
Yuvanesh Anand, Zach Nussbaum, Adam Treat, Aaron Miller, Richard Guo, Benjamin M Schmidt, Brandon Duderstadt and Andriy Mulyar

GPTCache: An Open-Source Semantic Cache for LLM Applications Enabling Faster Answers and Cost Savings
Fu Bang

Improving NER Research Workflows with SeqScore
Constantine Lignos, Maya Kruse and Andrew Rueda

LaTeX Rainbow: Open Source Document Layout Semantic Annotation Framework from LaTeX to PDF
Changxu Duan and Sabine Bartsch

nerblackbox: A High-level Library for Named Entity Recognition in Python
Felix Stollenwerk

News Signals: An NLP Library for Text and Time Series
Chris Hokamp, Demian Gholipour Ghalandari and Parsa Ghaffari

PyTAIL: An Open Source Tool for Interactive and Incremental Learning of NLP Models with Human in the Loop for Online Data
Shubhanshu Mishra, Jana Diesner

The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
Dung Nguyen Manh, Nam Le Hai, Anh T. V. Dau, Anh Minh Nguyen, Khanh Nghiem, Jin Guo and Nghi D. Q. Bui

Two Decades of the ACL Anthology: Development, Impact, and Open Challenges
Marcel Bollmann, Nathan Schneider, Arne Köhn and Matt Post

torchdistill Meets Hugging Face Libraries for Reproducible, Coding-free Deep Learning Studies: A Case Study on NLP
Yoshitomo Matsubara

Using Captum to Explain Generative Language Models
Vivek Miglani, Aobo Yang, Aram H. Markosyan, Diego Garcia-Olano and Narine Kokhlikyan

16:15 - 17:15    Invited Talk - Towards Explainable and Accessible AI
Brandon Duderstadt and Yuvanesh Anand

17:15 - 17:30    Closing Remarks


Programme Committee

Previous Workshops

Second Workshop for Natural Language Processing Open Source Software (NLP-OSS 2020)

First Workshop for Natural Language Processing Open Source Software (NLP-OSS 2018)