What is Artificial Intelligence (AI)?
Artificial Intelligence, or AI, is a branch of computer science that focuses on creating systems capable of performing tasks that typically require human intelligence. These tasks include understanding language, recognizing patterns, solving problems, and making decisions. AI systems learn from data and improve over time, much like how humans learn from experience. This technology is widely used in various fields to enhance efficiency and innovation.
Please Note:
Faculty have discretion over whether the students enrolled in their courses are permitted to use AI tools. Each course outline will state that course's AI-use policy. Faculty may set different AI policies for each of their courses, so it is imperative that each course outline be reviewed. Additionally, all parties are responsible for abiding by CCAC's Academic Integrity policy.
Types of AI
🔓 Open AI
Open AI refers to artificial intelligence systems that operate on broadly gathered data, whether open-source, user-provided, or purchased from a company. A key technique is machine learning, in which a model improves its performance by training on large amounts of data supplied by others. By analyzing patterns in that data, open AI models can understand and generate human-like text, recognize images, and even play games.
🔒 Closed AI
Closed AI refers to artificial intelligence systems that operate within a secure, controlled environment. These systems are designed to handle sensitive or confidential data, ensuring that information is protected from unauthorized access. Unlike open AI systems, which may interact with a wide range of external data sources, closed AI systems are restricted to specific datasets to maintain security and privacy.
To learn more: Open AI vs Closed AI: What’s the Difference and Why Does It Matter? | Formtek Blog
Why does it matter if it is Open or Closed AI?
Open AI can pose a risk to Personally Identifiable Information (PII) because it processes and learns from large amounts of data, which may include sensitive user information. If this data is not properly secured, unauthorized individuals could access it, leading to privacy breaches. Additionally, AI models can sometimes inadvertently reveal personal details when generating responses. Robust security measures and strict data-handling protocols are essential to protect user information and maintain privacy.
Users should only use AI tools from the approved list below. Using unapproved AI tools can expose sensitive information or lead to data breaches, putting the college and its users at risk.
General use cases:
- Reviewing Public Information – Analyze and summarize public content, general academic concepts, and non-sensitive data
- Content Generation – Generate content for personal use or academic work
- Lesson Planning – Get ideas for syllabi, learning objectives, and assessments
- Drafting Emails – Use generic names and avoid personal information
- Meeting Notes – Automate the capture and organization of meeting notes
- Event Planning – Suggest themes, timelines, and activities
- Professional Development – Draft workshop or presentation content
- Learning and Research Support – Support learning, research, and analysis in ways that respect data-security guidelines; ensure anonymized data sets are used
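The "avoid personal info" and "anonymized data sets" guidance above can be sketched in code. The following is a minimal, illustrative example of stripping a few common PII patterns from text before it is shared with an AI tool. The patterns and the `redact_pii` helper are assumptions for illustration, not an approved or complete CCAC filter; any real anonymization process should be reviewed with ITS first.

```python
import re

# Illustrative sketch only: regex patterns for a few common PII formats
# (email addresses, US phone numbers, SSN-like strings). These are
# examples, not a complete or approved PII filter.
PII_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII patterns with placeholder tags."""
    for placeholder, pattern in PII_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Contact Jane at jane.doe@ccac.edu or 412-555-1234."
print(redact_pii(note))
# -> Contact Jane at [EMAIL] or [PHONE].
```

Running the redacted text, rather than the original, through an AI tool reduces (but does not eliminate) the risk of exposing PII; regex-based scrubbing misses names, addresses, and other context-dependent identifiers.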
Be cautious when using AI
Users are responsible for adhering to the guidelines outlined in this regulation regarding the input of information into generative AI tools such as Microsoft Copilot and ChatGPT. Specifically, no confidential information may be entered into these or any similar generative AI tools without explicit approval from ITS and the AI Oversight Committee.
Listing of AIs
Approved for use
| Product | Use Case | Notes |
| --- | --- | --- |
| Anthology | Blackboard | |
| Cisco | Meeting summaries | |
| Microsoft 365 (Enterprise) Copilot Chat | Copilot Chat | Definition of Enterprise |
| SurveyMonkey | All features | |
| OpenAI/ChatGPT | Writing content, knowledge assistance | OpenAI has a robust privacy policy covering how it collects, stores, and uses data. Data is stored primarily in the US. |
| Zoom AI Companion | Zoom meeting notes | |
| Formstack | Form creation, simplified workflows | |
| Google Gemini | Search assistance | |
| Grammarly | Grammar/clarity | Output can be flagged by plagiarism checkers |
Under Review
| Product | Notes |
| --- | --- |
| Microsoft Copilot (with enterprise data collection) | Requires additional licensing. If CCAC adopts an official AI tool, this would be the best option, since Microsoft can secure it. |
| Adobe | Requires licensing and renouncement of ownership. Adobe is also known for issues with user accounts and vulnerabilities. |
| Apple Intelligence | Built into newer Apple devices. Like Gemini, it will be hard to tell whether usage is intentional. |
| White Rabbit Neo | Free use; cybersecurity AI model |
| Calendly | Enhances scheduling, improves meeting coordination, and streamlines workflows. |
| DocuSign | Iris; agreement analysis and insights |
| EAB Navigate | AI-powered tools for content generation, filtering, and reports; however, access permissions are not currently enabled. |
Not Approved for use
| Product | Notes |
| --- | --- |
| DeepSeek | Stores user data on non-US servers. |
| Llama | Provides harmful information |
| WormGPT | Malicious (BEC toolkit) |
| FraudGPT | Malicious (all-in-one cybercrime solution) |
| WolfGPT | Malicious (anonymous malware creation) |
| XXXGPT | Malicious (malware dropper) |
| PoisonGPT | Malicious (misinformation AI model) |
| Meta AI | Meta AI platforms are not marketed for educational use and lack FERPA-specific compliance assurances. |
How to get an AI reviewed
You can fill out a supportal form, which will create a ticket on your behalf. The ITS teams will then review your request and the software and let you know whether the AI tool is approved or should not be used.