
This paper outlines the system with which team Nowruz participated in SemEval 2022 Task 7, 'Identifying Plausible Clarifications of Implicit and Underspecified Phrases', covering both subtasks A and B (Roth et al., 2022). Using a pre-trained transformer as a backbone, the model jointly addressed classification and ranking of candidate fillers for a cloze task over instructional texts from the website wikiHow. The system combined two ordinal regression components to tackle the two subtasks in a multi-task learning setup. According to the official leaderboard of the shared task, this system ranked 4th in both the classification and ranking subtasks out of 21 participating teams. With additional experiments, the models have since been further optimised. The code used in the experiments will be made freely available at
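The ordinal regression components mentioned above can be illustrated with a minimal cumulative-link sketch. This is not the paper's actual head architecture; the function name, threshold values, and the three-class labelling are illustrative assumptions. The idea is that a single scalar score is mapped to probabilities over ordered classes via differences of adjacent cumulative sigmoids.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ordinal_probs(score, thresholds):
    # Cumulative-link ordinal model: P(y <= k) = sigmoid(t_k - score).
    # Class probabilities are differences of adjacent cumulative terms,
    # so they are non-negative and sum to 1 when thresholds are increasing.
    cum = [sigmoid(t - score) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Illustrative: three ordered plausibility levels for a candidate filler
# (e.g. implausible / neutral / plausible), with hypothetical thresholds.
probs = ordinal_probs(score=0.8, thresholds=[-1.0, 1.0])
```

Because the classes are ordered rather than independent, such a head penalises predictions by how far they fall from the true level, which also yields a natural ranking score for subtask B.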


Conference paper

Pages 1071–1077