How Large Language Models Prove Themselves in Mathematics Education

and Where They (Still) Fall Short

Authors

  • Valentin Katter, Universität Bielefeld
  • Daniel Barton

Abstract

The article examines the rapid advancements in the performance of large language models such as ChatGPT within the context of mathematics. It contextualizes these developments using current benchmarks while also highlighting persistent limitations—especially in terms of technical language precision and the capacity to deliver individualized feedback. The article shows that large language models (LLMs) can already assist with routine tasks and illustrate solution strategies, yet they fall short of replacing the essential didactic roles of teachers. It provides a framework for the meaningful integration of AI-powered tools into the classroom.

Published

2025-09-23

Section

Digital Teaching and Learning - Concepts and Examples