Understanding how data and information are managed with AI is crucial. ITS strives to thoroughly vet and approve AI tools for university use based on data privacy and protection measures, including:
- Secure handling and storage: Personal and institutional data used with AI tools should be managed securely and not shared with third parties.
- Data retention: Prompts, uploaded files and the responses they generate may be copied or retained by the LLM provider. ITS encourages the use of tools that do not use university data to train public models.
- Encryption: Data transmitted to and from AI platforms should be encrypted in transit. ITS supports tools that use strong encryption methods to safeguard university information (see the sketch below).
Reminder: Even with these precautions, data privacy and security remain paramount. Do not enter sensitive or confidential information into any AI chat interface.
This is particularly important given the risks posed by third-party add-ons and models the university has not vetted. Withholding such data protects sensitive personal and financial information as well as intellectual property.
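To illustrate what "encrypted in transit" means in practice, the short Python sketch below connects to an HTTPS endpoint and reports the TLS version and cipher suite actually negotiated with the server. The hostname is hypothetical; substitute the API host of the AI platform you are evaluating.

```python
import socket
import ssl

# Hypothetical hostname for illustration; substitute the AI platform's real API host.
HOST = "api.example-ai-tool.com"
PORT = 443  # standard HTTPS port

# The default context verifies the server certificate against system trust roots.
context = ssl.create_default_context()

with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # Report what was actually negotiated with the server.
        print("TLS version:", tls.version())      # e.g. 'TLSv1.3'
        print("Cipher suite:", tls.cipher()[0])   # negotiated cipher name
```

A connection that fails certificate verification, or that negotiates a protocol older than TLS 1.2, is a reasonable signal to raise with ITS before sending any university data to the tool.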
Awareness
Any data entered into a third-party AI system is collected, analyzed and stored by that system, raising concerns about surveillance, data privacy and security.
To build awareness, explore how large language models work (2024) and related coverage from the New York Times (free to App State students, faculty and staff through University Libraries).
Caution
AI systems can be unreliable and inconsistent. Do not enter confidential or sensitive data into AI systems, whether using a personal or App State-approved AI platform. Be aware that browser extensions and third-party add-ons like Grammarly may also collect data without clear disclosure.
Review the university's confidential and sensitive data guidance. Have more questions? Start a conversation with OIS.
Transparency
If your final product is significantly shaped or influenced by an AI platform, consider disclosing how you used AI.
For more information, review How to Cite Generative AI.
Acceptable Use
Allowable
Public or published data: Data that is publicly available or published university information, as defined by the App State Data Classification standards, can be used with AI tools.
Acceptable use: In all cases, use should be consistent with the Acceptable Use Policy.
Prohibited or Sensitive
Unauthorized AI tools: Tools that lack a university contract or appropriate data-sharing controls must not be used with university data. This includes free or non-App State-managed versions of AI tools like ChatGPT.
Sensitive information: Student records subject to FERPA, health information, proprietary information and any other data classified as confidential must not be used with any AI tools.
Fraudulent or illegal activities: AI tools must not be used for activities that are illegal, fraudulent or violate any state or federal laws, or UNC System or Appalachian State University policies.
Additional Consideration
Personal liability: Accepting click-through agreements without delegated signature authority may result in personal responsibility for compliance with the terms and conditions of the AI tool.
- AI and cyber security: what you need to know - U.K. National Cyber Security Centre (2024).
- Protecting Data Privacy as a Baseline for Responsible AI - Center for Strategic and International Studies (2024), includes audio (15:02).
- Cost of a Data Breach Report 2024 - IBM (2024), includes video (14:35).
- Why True Data And AI Literacy Goes Beyond Technical Training - Forbes (2025).

Data Governance
Learn more about data governance, classifications, practices, access and resources at App State.