Question Generation Capabilities of “Small” Large Language Models
- Authored by
- Joshua Berger, Jonathan Koß, Markos Stamatakis, Anett Hoppe, Ralph Ewerth, Christian Wartena
- Abstract
Questions are an integral part of test formats in education. Online learning platforms such as Coursera and Udemy also use questions to check learners’ understanding. However, manually creating questions can be very time-intensive. This problem can be mitigated through automatic question generation. In this paper, we present a comparison of fine-tuned text-generating transformers for question generation. Our methods include (i) a comparison of multiple fine-tuned transformers to identify differences in the generated output, (ii) a comparison of multiple token search strategies, evaluated on each model, to find differences in the generated questions across strategies, and (iii) a newly developed manual evaluation metric that assesses generated questions with respect to naturalness and suitability. Our experiments show differences in question length, structure, and quality depending on the transformer architecture used, which indicates a correlation between transformer architecture and question structure. Furthermore, different search strategies for the same model architecture do not greatly impact question structure or quality.
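
The “token search strategies” mentioned above refer to decoding strategies such as greedy search, beam search, and nucleus sampling. The sketch below illustrates how such strategies can be swapped at generation time using the Hugging Face `transformers` library; it is a minimal illustration, not the paper’s implementation. The model name `t5-small`, the `generate question:` prompt prefix, and the example context are placeholder assumptions, since the fine-tuned checkpoints and exact strategy settings used in the paper are not specified here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint; the paper's fine-tuned models are not named on this page.
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical input: a context passage with a task-prefix prompt (assumption).
context = "The mitochondrion is the organelle that produces most of the cell's energy."
inputs = tokenizer("generate question: " + context, return_tensors="pt")

# Greedy decoding: always pick the highest-probability next token.
greedy = model.generate(**inputs, max_new_tokens=32)

# Beam search: keep the 5 most probable partial sequences at each step.
beam = model.generate(**inputs, max_new_tokens=32, num_beams=5, early_stopping=True)

# Nucleus (top-p) sampling: sample from the smallest token set covering 90% probability mass.
sampled = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.9, top_k=0)

for name, out in [("greedy", greedy), ("beam", beam), ("top-p", sampled)]:
    print(name, "->", tokenizer.decode(out[0], skip_special_tokens=True))
```

Comparing the printed outputs across strategies, as the paper does at larger scale with manual evaluation, makes it possible to judge whether the decoding strategy changes the structure or quality of the generated questions.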
- Organisation(s)
- L3S Research Centre
- External Organisation(s)
- University of Applied Sciences and Arts Hannover (HsH)
- German National Library of Science and Technology (TIB)
- Type
- Conference contribution
- Pages
- 183-194
- No. of pages
- 12
- Publication date
- 20.09.2024
- Publication status
- Published
- Peer reviewed
- Yes
- ASJC Scopus subject areas
- Theoretical Computer Science, General Computer Science
- Electronic version(s)
- https://doi.org/10.1007/978-3-031-70242-6_18 (Access: Closed)