Dataset Restricted Access

mlphys101 - Exploring the performance of Large-Language Models in multilingual undergraduate physics education

Völschow, Marcel; Buczek, P.; Carreno-Mosquera, P.; Mousavias, C.; Reganova, S.; Roldan-Rodriguez, E.; Steinbach, Peter; Strube, A.


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.14278/rodare.3137</identifier>
  <creators>
    <creator>
      <creatorName>Völschow, Marcel</creatorName>
      <givenName>Marcel</givenName>
      <familyName>Völschow</familyName>
    </creator>
    <creator>
      <creatorName>Buczek, P.</creatorName>
      <givenName>P.</givenName>
      <familyName>Buczek</familyName>
    </creator>
    <creator>
      <creatorName>Carreno-Mosquera, P.</creatorName>
      <givenName>P.</givenName>
      <familyName>Carreno-Mosquera</familyName>
    </creator>
    <creator>
      <creatorName>Mousavias, C.</creatorName>
      <givenName>C.</givenName>
      <familyName>Mousavias</familyName>
    </creator>
    <creator>
      <creatorName>Reganova, S.</creatorName>
      <givenName>S.</givenName>
      <familyName>Reganova</familyName>
    </creator>
    <creator>
      <creatorName>Roldan-Rodriguez, E.</creatorName>
      <givenName>E.</givenName>
      <familyName>Roldan-Rodriguez</familyName>
    </creator>
    <creator>
      <creatorName>Steinbach, Peter</creatorName>
      <givenName>Peter</givenName>
      <familyName>Steinbach</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-4974-230X</nameIdentifier>
    </creator>
    <creator>
      <creatorName>Strube, A.</creatorName>
      <givenName>A.</givenName>
      <familyName>Strube</familyName>
    </creator>
  </creators>
  <titles>
    <title>mlphys101 - Exploring the performance of Large-Language Models in multilingual undergraduate physics education</title>
  </titles>
  <publisher>Rodare</publisher>
  <publicationYear>2024</publicationYear>
  <subjects>
    <subject>machine learning</subject>
    <subject>deep learning</subject>
    <subject>large language models</subject>
    <subject>chatgpt</subject>
    <subject>blablador</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2024-09-09</date>
  </dates>
  <language>en</language>
  <resourceType resourceTypeGeneral="Dataset"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://rodare.hzdr.de/record/3137</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsIdenticalTo">https://www.hzdr.de/publications/Publ-39561</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.14278/rodare.3136</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://rodare.hzdr.de/communities/rodare</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="info:eu-repo/semantics/restrictedAccess">Restricted Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Large-Language Models such as ChatGPT have the potential to revolutionize academic teaching in physics in a similar way the electronic calculator, the home computer or the internet did. AI models are patient, produce answers tailored to a student’s needs and are accessible whenever needed. Those involved in academic teaching are facing a number of questions: Just how reliable are publicly accessible models in answering, how does the question’s language affect the models’ performance and how well do the models perform with more difficult tasks beyond retrieval? To address these questions, we benchmark a number of publicly available models on the mlphys101 dataset, a new set of 823 university-level MC5 questions and answers released alongside this work. While the original questions are in English, we employ GPT-4 to translate them into various other languages, followed by revision and refinement by native speakers. Our findings indicate that state-of-the-art models perform well on questions involving the replication of facts, definitions, and basic concepts, but struggle with multi-step quantitative reasoning. This aligns with existing literature that highlights the challenges LLMs face in mathematical and logical reasoning tasks. We conclude that the most advanced current LLMs are a valuable addition to the academic curriculum and LLM-powered translations are a viable method to increase the accessibility of materials, but their utility for more difficult quantitative tasks remains limited.&lt;/p&gt;

&lt;p&gt;The dataset is available here in English only and will be removed once the mlphys101 publication has been accepted and released to the public.&lt;/p&gt;</description>
    <description descriptionType="Other">The dataset is available here in English only and will be removed once the mlphys101 publication has been accepted and released to the public.</description>
  </descriptions>
</resource>
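The record above can also be read programmatically. Below is a minimal sketch using only Python's standard library; for brevity it embeds a trimmed two-creator excerpt of the record rather than the full export, and the helper name `parse_record` is illustrative, not part of any DataCite tooling. Note that the elements live in the `http://datacite.org/schema/kernel-4` default namespace declared on `<resource>`, so lookups need a namespace mapping.

```python
# Sketch: extract the DOI and creator names from a DataCite kernel-4 record.
import xml.etree.ElementTree as ET

# Namespace prefix mapping for the kernel-4 schema used by the record.
DATACITE_NS = {"dc": "http://datacite.org/schema/kernel-4"}

# Trimmed excerpt of the export above, embedded for a self-contained example.
record = """<resource xmlns="http://datacite.org/schema/kernel-4">
  <identifier identifierType="DOI">10.14278/rodare.3137</identifier>
  <creators>
    <creator><creatorName>V\u00f6lschow, Marcel</creatorName></creator>
    <creator><creatorName>Steinbach, Peter</creatorName></creator>
  </creators>
</resource>"""

def parse_record(xml_text: str) -> dict:
    """Return the DOI and the list of creator names from a kernel-4 record."""
    root = ET.fromstring(xml_text)
    doi = root.find("dc:identifier", DATACITE_NS).text
    names = [
        c.text
        for c in root.findall("dc:creators/dc:creator/dc:creatorName", DATACITE_NS)
    ]
    return {"doi": doi, "creators": names}

meta = parse_record(record)
```

Without the `DATACITE_NS` mapping, paths like `"identifier"` would find nothing, because every element is qualified by the default namespace.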