Mr-Fosdick at SemEval-2023 Task 5: Comparing Dataset Expansion Techniques for Non-Transformer and Transformer Models: Improving Model Performance through Data Augmentation
Christian Falkenberg, Erik Schönwälder, Tom Rietzke, Chris-Andris Görner, Robert Walther, Julius Gonsior, Anja Reusch
The 17th International Workshop on Semantic Evaluation (SemEval-2023), Task 5: Clickbait Spoiling
TLDR:
Augmenting the provided dataset with examples generated by transformer and non-transformer models yields a higher balanced accuracy during validation than training on the original dataset alone.
Abstract:
In supervised learning, a significant amount of data is essential. To obtain it, we generated and evaluated datasets based on a provided dataset using transformer and non-transformer models. By utilizing these generated datasets during the training of new models, we attain a higher balanced accuracy during validation compared to using only the original dataset.
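
The approach described in the abstract, expanding the training data with model-generated examples and comparing validation balanced accuracy against training on the original data only, can be illustrated with a minimal sketch. The generation model ("t5-base"), the TF-IDF/logistic-regression classifier, and the helper names below are illustrative assumptions, not the paper's actual setup.

# Minimal sketch of the dataset-expansion idea: enlarge the training split with
# model-generated paraphrases, then compare validation balanced accuracy of a
# classifier trained on the original vs. the expanded data.
# "t5-base" and the TF-IDF/logistic-regression pipeline are placeholders,
# not the models evaluated in the paper.
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.pipeline import make_pipeline

def expand_dataset(texts, labels, n_variants=2):
    """Append generated paraphrases of each example, keeping the original label."""
    generator = pipeline("text2text-generation", model="t5-base")  # placeholder model
    new_texts, new_labels = list(texts), list(labels)
    for text, label in zip(texts, labels):
        outputs = generator("paraphrase: " + text,
                            num_beams=n_variants, num_return_sequences=n_variants)
        for out in outputs:
            new_texts.append(out["generated_text"])
            new_labels.append(label)
    return new_texts, new_labels

def balanced_val_accuracy(train_texts, train_labels, val_texts, val_labels):
    """Train a simple non-transformer baseline and score it with balanced accuracy."""
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(train_texts, train_labels)
    return balanced_accuracy_score(val_labels, clf.predict(val_texts))

# Usage: compute balanced_val_accuracy on the original training data, then on
# expand_dataset(train_texts, train_labels), and compare the two scores to
# measure the gain from data augmentation.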