An article by our scientists entitled “Combining Recommender Systems and Language Models in Early Detection of Signs of Anorexia” has been published in open access. The paper describes the approach that achieved the best result on the ERDE metric in eRisk 2024, the international competition on early risk prediction on the Internet. Authors of the work: Oskar Riewe-Perła and Prof. Agata Filipowska.
Continued achievements in the global information verification competition
In 2024, scientists from the Department of Information Systems participated for the second time in the CheckThat! competition, part of the international CLEF conference, focused on identifying check-worthy texts. Leveraging transformer-based language models and cross-lingual transfer learning techniques, the team took second place in both the English and the Arabic tracks.
Combining multiple attack methods for effective adversarial text generation
An article by our scientists entitled “OpenFact at CheckThat! 2024: Combining Multiple Attack Methods for Effective Adversarial Text Generation” has been published in open access. The paper describes the approach that won first place in an international competition on information credibility.
Participation in an international competition in the area of analysis of multi-author writing style
Scientists from the Department of Information Systems took part in an international competition on the analysis of multi-author writing style – PAN 2024. PAN is a series of scientific events devoted to stylometric analysis and forensic linguistics, organized during the CLEF 2024 conference. The competition task was to detect the points at which authorship changes within a text written by several authors.
Artificial intelligence in fake news campaigns: experiments with ChatGPT
The scientific work of members of our Department was published in the Economics and Business Review journal. The article “Artificial intelligence – friend or foe in fake news campaigns” analyzes the impact of large language models (LLMs) on the phenomenon of fake news. On the one hand, their strong text-generation capabilities can be misused for the mass production of fake news. On the other, LLMs trained on huge volumes of text …
First place in the international competition CLEF-2023 CheckThat! Lab
The OpenFact project team took part in the CheckThat! Lab organized as part of the international conference CLEF 2023 (Conference and Labs of the Evaluation Forum). The method proposed by our scientists took first place. The method detects English sentences that are potentially misleading and therefore worth fact-checking.