AI on Campus: The 5 Things Higher Ed Needs to Consider When Crafting AI Workplace Policies
Insights
3.27.24
Higher education leaders have plenty of cutting-edge legal issues to follow – affirmative action, freedom of speech, discrimination – but you are now on the front lines when it comes to the appropriate use of generative artificial intelligence. You have a more complicated relationship with AI than most employers, however, because you must address its use on two fronts. You need to ensure students are using AI ethically and fairly, leveraging its benefits while avoiding risky behavior. And of course, you cannot lose sight of appropriate AI policies for your employees, since administrators and faculty face both risks and benefits when using AI in their roles. Like all employers, you can benefit from thinking proactively about these issues and establishing clear policies. Here are five key concerns you should consider when addressing AI in your handbooks.
Quick Background
- Generally, AI is technology that replaces or augments aspects of human decision making or simulates human intelligence.
- There are many third-party, publicly available AI systems that permit your students and employees to generate content, including ChatGPT, Google’s Gemini, Microsoft’s Copilot, DALL-E, and Midjourney.
- Concerns about the use of AI in the workplace are widespread. In a recent Executive Order, President Biden discussed concerns such as worsening job quality, encouraging undue worker surveillance, and lessening market competition.
- FP has released a summary of the 10 things employers must include in any workplace AI policy – but what follows are five specific considerations for higher ed institutions.
5 Considerations When Creating Higher Ed Workplace Policies on AI
1. Student Privacy and Data Security
Universities typically collect significant amounts of student data and generally have robust mechanisms in place to protect that data. Preventing this data from being inadvertently exposed through AI systems should be top of mind for university administrators.
Because many AI systems retain user inputs and may use them to train future models, all AI users should assume that any data or queries they enter into AI systems could become public information. In fact, when I asked ChatGPT if a teacher could upload student information to it and be assured that it would not be disclosed to others, ChatGPT told me, “I cannot guarantee 100% data security or complete confidentiality.” It added, “It is always advisable to exercise caution when sharing sensitive information online. If you have concerns about privacy and security, it’s best to follow your organization’s guidelines regarding data handling.”
ChatGPT itself makes a good argument that universities should consider adopting explicit prohibitions on uploading or inputting any personal information about any student into any AI system.
2. Trade Secrets
Does your institution own original content or trade secrets that are not in the public domain? As with inadvertently sharing student information, universities risk public disclosure of privately held information if administrators or faculty input trade secrets – including important scientific work – into third-party AI systems.
Since it is practically impossible to remove content from an AI system once it has been entered, the best practice is to prevent protected information from being entered in the first place. Training your staff and establishing a clear policy against using trade secrets or confidential information in AI systems may help prevent inadvertent information-sharing and public release of that data.
3. Academic Freedom
A core tenet of most universities is that faculty have the freedom to teach, research, and express ideas without censorship or unreasonable interference. Appropriate use of AI systems is a topic ripe for disagreement among faculty. While some professors perceive AI as a fundamental threat to the integrity of education, others see it as an opportunity for new and creative ways to teach and learn.
As in other areas, debates on the appropriate place of AI on a campus can become vitriolic and heated. Does your handbook or faculty supplement address academic freedom? Do your policies contemplate strong disagreements among faculty about the impact and use of AI? Clear policies supporting diverse opinions on the use of AI in the classroom may foster healthy discussion on this topic as AI systems progress and become more prevalent on campus.
4. Research Ethics and Integrity
While faculty members may disagree on the influence of AI systems in the educational process, there is no doubt that using AI tools for research creates risks. For example, AI tools have been shown to “hallucinate” – that is, to confidently generate false or fabricated information. Does your school have standards for misconduct in research?
You may want to adopt a policy on research ethics that requires all faculty to thoroughly and independently vet all research conducted with the aid of AI. Also, your university may want to explicitly prohibit professors and others from representing AI-generated work as being their own original work.
5. Tenure and Promotion Criteria
If your handbook or faculty supplement details criteria or guidelines for tenure or promotion, consider adopting policies that reflect how AI contributions will be considered. For example, universities can articulate the importance of AI-related research, like developing and evaluating AI-assisted tutoring systems or AI decision support in healthcare settings. Your university may expect that AI-related research will meet certain originality, peer recognition, and documentation criteria. A faculty handbook can articulate the role of AI contributions in tenure or promotion and detail related criteria to ensure that promotion expectations are clear from the outset and are fairly and consistently applied.
Conclusion
All employers should consider updating their employee handbooks and policies to address acceptable uses of AI – but universities face specific challenges and pressures. You would be well served to provide state-of-the-art AI training and clear policies for your students as well as your faculty and administrators.
Make sure that you are subscribed to Fisher Phillips’ Insights to get the most up-to-date information directly to your inbox. If you have questions about implementing training or policies for your institution, contact your Fisher Phillips attorney, the authors of this Insight, or any attorney in our Education Practice Group or AI Practice Group for additional information.
Related People
- Erin Gibson Allen, Of Counsel