Guidelines for Using Generative AI at App State
The AI landscape is evolving rapidly, with its capabilities constantly improving and new tools being released daily. These guidelines for AI use will change as more work around this topic takes place at our university and as the field progresses.
App State encourages safe exploration and use of generative AI tools to further our teaching, learning, research and other pursuits.
Work within current university policies and standards.
Our policies are here to help protect the university and its people.
The use of AI tools must adhere to existing policies, including the Acceptable Use of Computing and Electronic Resources Policy, as well as applicable federal and state regulations.
Students are expected to abide by the Academic Integrity Code.
Faculty are asked to communicate clear expectations to students on generative AI use in their courses. View syllabus guidance.
AI tools used with university information must follow review processes.
Faculty, students and staff may use university information with generative AI tools or services only when:
The information is classified as public, or the tool has been approved for use with non-public data.
The AI tool or service being used has undergone appropriate internal reviews and contract terms are in place to protect data assets. Visit Approved AI Tools for more information.
Act ethically with AI.
Adhere to ethical and legal standards and norms of the disciplines and the university, with regard to data privacy, consent, ownership and academic integrity. The Office of Research and Innovation has published guidelines to help researchers navigate the laws, rules and policies governing research.
You are 100% responsible for the output you use.
Generative AI-produced content may be biased, inaccurate or completely fabricated, or may contain copyright-protected or proprietary information; it therefore requires thorough human review prior to use.
Be transparent and cite usage of AI.
The use of generative AI for any university-related research, scholarship or work should be clearly documented and disclosed. Review How to Cite GenAI (App State LibGuides).
Additional Resources
National AI Policies & Guidance
- National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use and evaluation of AI products, services and systems.
- Cybersecurity & Infrastructure Security Agency (CISA) has published a summary of recent actions taken by the U.S. government's executive and legislative branches related to AI-based software systems.
- Federal Trade Commission (FTC) Artificial Intelligence Compliance Plan explains the principles, guidelines and policies the agency will adopt to ensure consistent oversight of AI.
- Equal Employment Opportunity Commission (EEOC) Guidelines were created to help employers using AI comply with anti-discrimination laws. The guidance applies to AI used in hiring, promotion and other employment decisions.
International AI Policies & Principles
- European Union (EU) AI Strategy aims to make the EU a world-class hub for AI and ensure that AI is human-centric and trustworthy.
- Organisation for Economic Co-operation and Development (OECD) AI Principles promote the use of AI that is innovative and trustworthy while respecting human rights and democratic values and encourage international cooperation on AI standards and policies.
- United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of AI provides a global framework for the ethical development and use of AI, emphasizing human rights, inclusivity and sustainability.