Use of AI in teaching

The following principles offer guidance for the responsible use of AI systems in teaching. They are intended to help lecturers make meaningful use of the opportunities offered by digital tools without compromising academic integrity, individual achievement, or fairness in teaching.

Training and information opportunities

Lecturers can draw on the university's training and consulting services on the use of AI in teaching. Collegial exchange on good practices and lessons learned with AI tools is expressly encouraged.

The university's data protection-compliant AI tool

RAI (Responsible AI) is the university's central, data protection-compliant AI tool for text-related tasks. Tasks such as image generation or web-based research are covered by other approved tools on the Whitelist, so instructors can select the appropriate tool for each purpose. RAI is accessible only on the university network or via VPN and offers two model options.

RAI is operated in the EU, and inputs are not used for model training. It can be used to create teaching materials, design courses, draft assessments and feedback, and much more. In addition, RAI can be made mandatory in courses, and a group chat feature is available.

Choosing an RAI model

Which model is appropriate depends on the confidentiality class of the content. Typical examples:

  • Teaching materials or syllabi that can be made available to the general public are classified as C1.
  • Presentation materials by students, which are intended to be shown only within a course, are classified as C2.
  • Email drafts containing real names and personal feedback are classified as C3. Another example: exam questions with answer keys, as long as the exam has not yet been taken.
  • Email texts revealing students’ health information are classified as C4. 

Confidentiality classes

Whitelist of AI tools

GWDG models
Suitable for C1–C3 (C3 possibly with documented approval), ideal for sensitive text/code drafts, exam questions, and sample solutions.

Azure/OpenAI in RAI
Suitable for C1–C2; no confidential content.
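
As a rough illustration only – the mapping below simply restates the notes above, and the names and structure are placeholder assumptions rather than an official configuration – the model choice per confidentiality class can be sketched in a few lines of Python:

    # Illustrative sketch, not official policy: which RAI model option
    # may be used for which confidentiality class, per the notes above.
    PERMITTED_MODELS = {
        "C1": ["GWDG", "Azure/OpenAI"],  # content fit for the general public
        "C2": ["GWDG", "Azure/OpenAI"],  # internal course content
        "C3": ["GWDG"],                  # confidential; possibly with documented approval
        "C4": [],                        # e.g., health data: no AI tool at all
    }

    def models_for(confidentiality_class: str) -> list[str]:
        """Return the RAI model options permitted for a confidentiality class."""
        return PERMITTED_MODELS.get(confidentiality_class.upper(), [])

    print(models_for("C3"))  # ['GWDG']

Unknown classes deliberately fall back to an empty list, i.e., no model is suggested.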

Use of AI in examinations

As an examiner, you decide whether the use of AI is permitted for your exam. Select one option per graded assessment and document it in the syllabus/ILIAS. Communicate the chosen option early and consistently.

Allow

AI assistance is permitted, but individual performance remains decisive

Avoid

Choose exam formats and content in such a way that the use of AI is not possible, e.g., oral supplementary examination, live demonstration

Require

Deliberately require the use of AI

Prohibit

Clearly exclude the use of AI

  • Option 1 (allow): "You may use AI tools for brainstorming, structuring, and linguistic refinement. Please indicate the tool, date, and purpose. Responsibility for the content remains with you."
  • Option 2 (avoid): "The exam is designed so that the use of AI is not possible (e.g., oral supplementary exam, supervised work, individual data, or live demonstrations). Only your individual work will be assessed."
  • Option 3 (require): "The use of [Tool] is required to complete the task. There is no requirement to use private accounts. Identification as described in [Identification Rules] is required."
  • Option 4 (prohibit): "The use of generative AI for completing this graded assessment is not permitted. Violations are considered an attempt to deceive and will be assessed according to the examination regulations."
  • Design AI-resilient tasks: Choose exam formats or content where individual performance is clearly visible (e.g., oral supplementary exams, supervised work, individual datasets, practical projects).
  • Use AI as a targeted learning aid: Leverage new opportunities by asking reflective questions about AI-generated results and encouraging students to critically evaluate and refine AI suggestions.
  • Ensure traceability: Ask for intermediate steps, versions, or brief protocols to keep the work process transparent.
  • Ensure fairness and access: Do not demand private accounts; always provide an equivalent alternative without AI.
  • Ensure transparency: Communicate your chosen option (1–4) in the syllabus clearly and at an early stage, and explain how labeling requirements are to be implemented.
  • Use AI detectors with caution: Keep in mind that automated detection is not legally binding; sanctions require reliable evidence (e.g., fabricated sources, contradictions, implausible results).
  • Emphasize professional responsibility: Make it clear that students are responsible for checking the accuracy and source quality of AI results themselves.

Discuss AI issues

Since May 2024, the university-wide AI Circle has been meeting regularly online via Webex for open discussion. The aim is to reflect jointly on the dynamic development of generative AI systems and to promote their meaningful and responsible use at the University of Stuttgart. The circle discusses current AI topics as well as university-specific questions on strategy, tools, regulations, training, and ethical aspects. It connects initiatives, pools activities, and helps disseminate knowledge about AI applications.

The AI Circle is organized by the Vice-Rectorate for IT. Interested parties can get in touch by email to be added to the mailing list and receive information about upcoming meetings.

Guidelines for the responsible use of AI for lecturers

1. Using AI is permitted

AI systems can generally be used in teaching, provided that this is done in a didactically meaningful way and in compliance with the provisions of the AI Regulation.

2. Designing examination conditions and courses

Lecturers must clearly and transparently define whether, and to what extent, the use of AI systems is permitted or prohibited in graded assessments (e.g., term papers, project reports, theses) and courses (e.g., exercises, assignments, group work).

These regulations should be communicated to students clearly and in a timely manner – for example, via ILIAS, during the course introduction, or in written form. They should specify in particular:

  • the permitted purpose of AI use (e.g., research, writing assistance),
  • any restrictions on use,
  • labeling requirements (e.g., disclosure of AI tools used in the appendix; a sample wording follows below).
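
For example – a sample wording, not a prescribed format – a labeling note in the appendix might read: "Sections 2 and 3 were drafted with the help of [Tool] on [date] for linguistic refinement; all content was checked and revised by me."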

Detailed options and recommendations are provided in the guide for examiners on AI tools and examinations (University of Stuttgart, version 1.0, July 2023).

3. Responsibility for data and input

When using AI systems, data protection, copyright, and ethical requirements must be observed. In particular, no personal, confidential, or copyrighted content may be entered into AI systems that use input data for training or may reproduce it elsewhere. Confidential, internal, or personal data may only be entered into AI systems whose use has been expressly approved by the university for this data.

4. Responsibility for AI-generated output

Lecturers are responsible for the content generated by AI systems and used in teaching. AI-generated output must be carefully checked before use, particularly with regard to:

  • technical accuracy,
  • ethical acceptability,
  • discriminatory or biased content,
  • as well as possible plagiarism and hallucinations.

5. Transparency towards students

If AI is used in teaching materials, assignments, or assessment criteria, this must be clearly communicated to students in a comprehensible manner. Transparency is essential for promoting trust, fairness, and critical understanding of AI.

6. Promotion of AI expertise

Lecturers are called upon to guide students toward a reflective and competent use of AI tools. Where appropriate, the potential, risks, and limitations of AI systems should be addressed and critically discussed – e.g., in the context of teaching content or exercise formats.

7. Preferred use of university systems & fair use

Lecturers are encouraged to use university-operated, GDPR-compliant AI systems such as the RAI tool provided by the University of Stuttgart. The use of such systems makes it possible to integrate AI applications into courses and, where didactically appropriate, to make them mandatory. When using university systems, care must also be taken to ensure fair, resource-efficient use in order to enable equal access for all members of the university.

Students cannot be required to create or use private accounts with external AI systems (e.g., ChatGPT) that may not be GDPR-compliant. The use of AI tools is only permissible if there are no data protection concerns (as with RAI) or if the university provides an equivalent alternative that does not use AI.

8. Legal Limits under the EU AI Act

Certain AI practices such as “social scoring” or hidden behavioral influence by AI systems are prohibited without exception under Article 5 of the EU AI Regulation. Other systems, such as those used for automated performance assessment or exam monitoring (including in learning management systems), are considered high-risk AI: they are not prohibited, but may only be used under strict conditions (e.g., with transparency, human oversight, and in compliance with data protection regulations). Please note that, according to current legislation, the use of AI systems for automatic grading or automatic admission without a final human decision is not permitted.

Frequently asked questions (FAQ)

May students use AI tools in examinations?

Yes, if this is intended as part of your course. Select one of the four options and document it in the syllabus.

Which AI tool should I use for which content?

Preferably use RAI with the GWDG models (C1–C3). Public AI systems are only suitable for C1 content.

May AI systems be used to grade examinations?

No. AI systems may offer support, but the final judgment must remain a human one.

May I require students to use external AI tools?

No. Students are not obliged to use AI systems that are not provided by the university in compliance with data protection regulations. Use RAI or provide an equivalent alternative without AI.


Lisa Schöllhammer

AI consultant

Center for Higher Education and Lifelong Learning (zlw)

Azenbergstraße 16, 70174 Stuttgart
