AI Governance in Academia: Guidelines for Generative AI

Authors

  • Clayton Peterson Université du Québec à Trois-Rivières https://orcid.org/0000-0003-0592-2197
  • Marie-Catherine Deschênes Université du Québec à Trois-Rivières

DOI:

https://doi.org/10.32473/flairs.38.1.138855

Keywords:

Ethical guidelines, Ethics of AI, Explainable AI, Large language models, Responsible use of AI

Abstract

Generative AI tools tend to be used as if they were built to gather or confirm truthful information, that is, as if they were knowledge-based systems. As such, there is a discrepancy between how generative AI (e.g., ChatGPT) is conceived and used by the general public, and what it really is and can accomplish. Given the lack of a proper legal framework and the widespread usage of these tools, organizations have raised red flags and urged academic institutions to reflect on governance principles for the use of generative AI. In this paper, we present the principles adopted by an institutional AI committee to guide the usage of generative AI, as well as the theoretical and practical considerations motivating their introduction.

Published

14-05-2025

How to Cite

Peterson, C., & Deschênes, M.-C. (2025). AI Governance in Academia: Guidelines for Generative AI. The International FLAIRS Conference Proceedings, 38(1). https://doi.org/10.32473/flairs.38.1.138855

Section

Special Track: Semantic, Logics, Information Extraction and AI