Framework for the use of AI for all employees

How can I use AI systems safely and responsibly in everyday university work? The following content provides guidance: What am I allowed to do? Where are the boundaries? And what support is available?

Using AI responsibly in everyday work

A policy for researchers will follow.

The use of AI systems opens up a wide range of opportunities for the University of Stuttgart to increase efficiency, quality, and innovation. At the same time, it requires conscious, responsible, and legally compliant handling. The university therefore adopts a balanced approach: While sensitive information is rigorously protected, access to powerful AI applications is provided in a responsible and transparent manner. To this end, the university supplements freely available systems (e.g., ChatGPT) for public (C1) data with its own data protection-compliant solutions such as RAI and other university AI offerings. This creates a secure, broad-based portfolio that allows employees to benefit from modern AI technologies without compromising data protection or information security.

Before using an AI system in a work context, please note the following basic steps and complete the mandatory training module for all employees. They serve as guiding principles of the regulation and help you act safely and in full compliance with the law.

Determining confidentiality – How sensitive is my information?

Before using an AI system, you must determine which confidentiality class (C1–C4) your information belongs to. This classification determines which systems you are permitted to use. Every AI system processes information differently: some store entries permanently, others – such as our university systems – only temporarily or not at all; some learn from user input, others do not. You can only decide safely where to enter information if you understand both the sensitivity of your data and how the respective AI system operates.

C1 – Public

Information that is freely accessible or has already been published, such as websites, press releases, and public lectures.

C2 – Internal

Information intended for university use only, such as internal circulars, internal procedures, and unpublished teaching materials.

C3 – Confidential

Information requiring increased protection, such as internal protocols, unpublished research data, personnel information.

C4 – Strictly confidential

Information requiring the highest level of protection, such as health data, contract documents, confidential research collaborations.

Selecting the appropriate system type – What AI systems are available, and how do they differ from each other?

Not all AI systems are the same. Some are operated within the university, others externally by partners or commercial providers. The following classification into system types helps you decide which system is suitable for which information.

Type 1 – Systems operated by the university itself
  • Contractual safeguards: Internal data protection/information security approval; audit and traceability mandatory
  • Hosting: Isolated on-premises; cloud (including EU) generally not permitted
  • Training by provider: Never permitted
  • Account access control: University accounts only; role-based; logging/traceability

Type 2 – University-operated systems with scientific partners (e.g., RAI with GWDG models)
  • Contractual safeguards: Cooperation agreement with data processing agreement (AVV) in place; strong assurance that no third-party access is permitted
  • Hosting: On-premises or in a partner’s EU-based cloud; hosting outside the EU is not allowed
  • Training by provider: Not permitted (explicitly excluded by contract)
  • Account access control: University login / federated authentication; data processed pseudonymously outside the university

Type 3 – University-operated access to commercial models (e.g., RAI with Azure/OpenAI models)
  • Contractual safeguards: Enterprise contract with AVV/SCC
  • Hosting: EU cloud
  • Training by provider: Not permitted (contractually excluded)
  • Account access control: University login; no long-term logging or evaluation by the provider

Type 4 – External enterprise systems with academic contracts (e.g., DeepL Pro EDU, Microsoft Copilot EDU)
  • Contractual safeguards: Enterprise contract plus AVV
  • Hosting: Provider cloud (usually EU/EEA)
  • Training by provider: Not permitted (enterprise setting)
  • Account access control: University login; no long-term logging or evaluation by the provider

Type 5 – Freely available systems (e.g., ChatGPT Free, Perplexity, Gemini)
  • Contractual safeguards: None (terms and conditions only)
  • Hosting: External cloud, EU or non-EU
  • Training by provider: Allowed; inputs may be used
  • Account access control: Individual use only; no university link or traceability

Which information belongs in which AI system?

The combination of confidentiality class and system type determines what information you are allowed to enter into which AI system. 
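As an illustration only, this combination rule can be sketched as a simple lookup. The system-type keys and the exact sets of permitted classes below are assumptions drawn from the portfolio description on this page, not an official matrix; the authoritative mapping is always the current Whitelist.

```python
# Illustrative sketch only -- the official Whitelist is authoritative.
# Permitted confidentiality classes per system type (assumed from the
# portfolio description on this page; verify against the Whitelist).
PERMITTED = {
    "university-internal": {"C1", "C2", "C3", "C4"},  # planned in-house systems
    "rai-gwdg": {"C1", "C2", "C3"},                   # C3 with documented approval
    "rai-azure-openai": {"C1", "C2"},
    "enterprise-edu": {"C1", "C2"},                   # e.g. DeepL Pro EDU, Copilot EDU
    "freely-available": {"C1"},                       # e.g. ChatGPT Free, Gemini
}

def may_enter(system: str, confidentiality_class: str) -> bool:
    """Return True if information of the given confidentiality class may be
    entered into the given system type, per the assumed matrix above.
    Unknown systems are treated restrictively: nothing is permitted."""
    return confidentiality_class in PERMITTED.get(system, set())

print(may_enter("freely-available", "C1"))  # True
print(may_enter("freely-available", "C2"))  # False
print(may_enter("rai-gwdg", "C3"))          # True, but needs documented approval
```

Note the restrictive default for unknown systems: this mirrors the rule elsewhere on this page that unclassified systems may only be used for C1 content.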

 

The University of Stuttgart's AI portfolio

The portfolio combines systems operated within, or in close partnership with, the university with external commercial offerings. This is gradually creating a networked, responsible AI ecosystem that combines performance, security, and learning progress. The currently approved systems are listed in the University of Stuttgart’s Whitelist, which is being updated continuously.

The graphic illustrates the University of Stuttgart’s AI portfolio as a tiered model that differentiates between various types of AI systems according to their levels of data protection and operational control. It shows that the University combines different systems – ranging from in-house developments to external services – to provide a complementary portfolio that balances security and innovation.

On the left, the University’s own systems are shown. These are fully operated by the University and offer the highest level of data protection. This category includes a planned internal chatbot based on an open-weight language model to be hosted at the University’s Computing Center (TIK). These systems are intended for sensitive and confidential information and are still under development.

Next are the University-operated systems in cooperation with scientific partners. This includes RAI with the GWDG models, which can be used as a secure system for internal and confidential content. The portfolio also includes University-operated systems that access powerful commercial models through University-managed interfaces. An example is RAI with Azure or OpenAI models, hosted within the EU and suitable for public and internal content (C1–C2).

Below that, external enterprise systems with academic contracts are shown, such as DeepL Pro EDU or Microsoft Copilot EDU. These can be accessed via the University account and are approved for public and internal content. To the right are freely available, unverified systems such as ChatGPT Team, ChatGPT Free, Perplexity, or Gemini. These tools may only be used for public content (C1) and are not subject to institutional data protection controls or central funding.

The illustration is titled “AI Portfolio”. It conveys the core idea of a complementary AI ecosystem: the University supplements powerful, freely available tools with data protection–compliant, University-operated systems so that all members of the University can use Artificial Intelligence safely, responsibly, and in ways that promote innovation.
The AI portfolio of the University of Stuttgart

If you use external tools, you should make sure to disable the use of inputs as training data. Support in selecting freely available tools is provided by the AI toolbox in ILIAS and the VK:KIWA resource list.

RAI – The university's data protection-compliant AI tool

RAI (Responsible AI) is the university's central, data protection-compliant tool for text-related tasks. Tasks such as image generation or web-based research are covered by other approved tools from the Whitelist, allowing instructors to select the appropriate tool for each purpose. RAI is accessible only on the university network or via VPN and offers two model options.

RAI is operated in the EU. Inputs are not used for training. RAI can be used for creating teaching materials, course design, drafts for assessments/feedback, and much more. Additionally, RAI can be specified for use in courses, and a group chat feature is available.

GWDG models
Suitable for C1–C3 (C3 possible with documented approval), ideal for sensitive text/code drafts, exam questions, and sample solutions.

Azure/OpenAI in RAI
For C1–C2, no confidential content.

Training and skills development

The AI learning portal in ILIAS offers training courses for employees:

  • Mandatory module: Either the BW basic course or the AI Campus video course. Both provide the basics for the safe use of AI systems. After passing, you will receive a certificate that must be presented to your supervisor before using AI for work purposes.
  • Advanced self-study options: In-depth courses on AI Campus, OpenHPI, and internal ILIAS courses.
  • Training on AI regulations: Online video on confidentiality classification, system types, and guidelines; management training (dates in the event calendar, contact: Alina Gräber).
  • Quick guides & tutorials: Help with registering and using RAI and freely available tools.

To access some of the offerings, you must log in to ILIAS with your ac or gs account.

The Vice-Rectorate for IT offers an AI question-and-answer session every Friday from November 7 to December 12, from 10 to 11 a.m. You can find the Webex link in the AI learning portal.

Discuss AI issues

Since May 2024, the university-wide AI Circle has been meeting regularly online via Webex for open discussion. The aim is to reflect jointly on the dynamic development of generative AI systems and to promote their meaningful and responsible use at the University of Stuttgart. The Circle discusses current AI topics as well as university-specific questions on strategy, tools, regulations, training, and ethical aspects. It networks initiatives, bundles activities, and helps disseminate knowledge about AI applications.

The AI Circle is organized by the Vice-Rectorate for IT. Interested parties can get in touch by email to be added to the mailing list and receive information about upcoming meetings.

Frequently asked questions about the use of AI systems

May I enter work-related information into an AI system?
Only if the system type and confidentiality classification permit this, in accordance with the Whitelist. Personal data also always requires a legal basis. For C3/C4, documented approval from management or a higher authority is generally required. If you are unsure, do not use the system for this data and ask for clarification.

What applies to AI systems that have not yet been classified?
The restrictive case applies: the system may only be used for public content (C1) until its classification has been clarified or it has been approved on the Whitelist.

Do I have to label the use of an AI system?
Yes, if an AI system plays a major role in shaping content (e.g., text drafting) or if AI-generated content is published without further editing (website, official letter, email to external parties). No labeling is required for very minor use (e.g., individual wording suggestions). When responding to students or external parties, the use of an AI system must be clearly indicated.

What happens to my entries in freely available systems?
Entries can be saved, evaluated, or used for training purposes and assigned to an account. Only use such systems for C1 content and, if possible, disable the use of your entries as training data.

Can RAI replace specialist research tools or databases?
No. RAI is a data-secure, all-round AI system for text work (ideas, rephrasing, translations, analyses within the scope of the regulations). It does not replace specialist research, special analyses, or databases.

Which language model am I using in RAI, and how do I switch it?
After logging in to RAI, click in the input window of the AI tool. The language model currently in use is now displayed on the right, e.g., “Microsoft Azure (C1-C2): OpenAI GPT 4.1 Mini, Apr 2025.” To the left of the model name, you will find a small arrow/triangle pointing upwards. When you click on the arrow, you will see all the language models you can use. All names of GWDG models begin with “GWDG (C1-C3)”. All names of OpenAI models via Microsoft Azure begin with “Microsoft Azure (C1-C2)”. You can find a visual representation in the RAI tutorials.

How do I find out which AI systems I am allowed to use?
Take a look at the current version of the Whitelist. Only systems listed there are approved for official use, with details of the permitted confidentiality classes. If a system is not listed, the following applies: use it for C1 content only, or initiate the approval process.

How can a new AI system be approved?
Systems that are not on the Whitelist must be checked before they can be used for official purposes with C2 or higher. Please submit your request through your supervisors to the Vice-Rectorate for IT. The university management will make the final decision regarding admission.

May I enter personal data into an AI system?
Only in approved systems and only after approval (C3/C4) or where there is a legal basis for doing so. This is prohibited in freely available systems (type 5).

What should I do if I have accidentally entered sensitive information?
Report the incident to Data Protection and Information Security. Prompt, open acknowledgement of the mistake helps to limit risks and handle the incident correctly.

Do I have to complete training before using AI for work purposes?
Yes. Before using AI for work purposes, you must inform yourself about safe, legal, and responsible use. The university offers asynchronous training courses for this purpose. Among other things, they raise awareness of bias, discrimination, hallucinations, and sustainable, digitally sovereign use.

Does this regulation apply to all of my activities?
Yes, for work activities outside of research. Research activities are subject to additional research-specific regulations (see the brief policy on AI in research).

Contact

Lisa Schöllhammer

AI consultant

Data Protection

Information Security Staff Unit
