<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
<identifier identifierType="DOI">10.14278/rodare.3137</identifier>
<creators>
<creator>
<creatorName>Völschow, Marcel</creatorName>
<givenName>Marcel</givenName>
<familyName>Völschow</familyName>
</creator>
<creator>
<creatorName>Buczek, P.</creatorName>
<givenName>P.</givenName>
<familyName>Buczek</familyName>
</creator>
<creator>
<creatorName>Carreno-Mosquera, P.</creatorName>
<givenName>P.</givenName>
<familyName>Carreno-Mosquera</familyName>
</creator>
<creator>
<creatorName>Mousavias, C.</creatorName>
<givenName>C.</givenName>
<familyName>Mousavias</familyName>
</creator>
<creator>
<creatorName>Reganova, S.</creatorName>
<givenName>S.</givenName>
<familyName>Reganova</familyName>
</creator>
<creator>
<creatorName>Roldan-Rodriguez, E.</creatorName>
<givenName>E.</givenName>
<familyName>Roldan-Rodriguez</familyName>
</creator>
<creator>
<creatorName>Steinbach, Peter</creatorName>
<givenName>Peter</givenName>
<familyName>Steinbach</familyName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-4974-230X</nameIdentifier>
</creator>
<creator>
<creatorName>Strube, A.</creatorName>
<givenName>A.</givenName>
<familyName>Strube</familyName>
</creator>
</creators>
<titles>
<title>mlphys101 - Exploring the performance of Large-Language Models in multilingual undergraduate physics education</title>
</titles>
<publisher>Rodare</publisher>
<publicationYear>2024</publicationYear>
<subjects>
<subject>machine learning</subject>
<subject>deep learning</subject>
<subject>large language models</subject>
<subject>chatgpt</subject>
<subject>blablador</subject>
</subjects>
<dates>
<date dateType="Issued">2024-09-09</date>
</dates>
<language>en</language>
<resourceType resourceTypeGeneral="Dataset"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://rodare.hzdr.de/record/3137</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsIdenticalTo">https://www.hzdr.de/publications/Publ-39561</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.14278/rodare.3136</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://rodare.hzdr.de/communities/rodare</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/restrictedAccess">Restricted Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract"><p>Large Language Models such as ChatGPT have the potential to revolutionize academic teaching in physics in a similar way the electronic calculator, the home computer, or the internet did. AI models are patient, produce answers tailored to a student’s needs, and are accessible whenever needed. Those involved in academic teaching face a number of questions: Just how reliable are publicly accessible models in answering, how does the question’s language affect the models’ performance, and how well do the models perform on more difficult tasks beyond retrieval? To address these questions, we benchmark a number of publicly available models on the mlphys101 dataset, a new set of 823 university-level MC5 questions and answers released alongside this work. While the original questions are in English, we employ GPT-4 to translate them into various other languages, followed by revision and refinement by native speakers. Our findings indicate that state-of-the-art models perform well on questions involving the replication of facts, definitions, and basic concepts, but struggle with multi-step quantitative reasoning. This aligns with existing literature that highlights the challenges LLMs face in mathematical and logical reasoning tasks. We conclude that the most advanced current LLMs are a valuable addition to the academic curriculum and LLM-powered translations are a viable method to increase the accessibility of materials, but their utility for more difficult quantitative tasks remains limited.</p>
<p>The dataset is available here in English only and will be removed once the mlphys101 publication has been accepted and released to the public.</p></description>
<description descriptionType="Other">The dataset is available here in English only and will be removed once the mlphys101 publication has been accepted and released to the public.</description>
</descriptions>
</resource>
| | All versions | This version |
|---|---|---|
| Views | 552 | 552 |
| Downloads | 2 | 2 |
| Data volume | 660.5 kB | 660.5 kB |
| Unique views | 498 | 498 |
| Unique downloads | 2 | 2 |