George Mason University Infoguides | University Libraries

Generative Artificial Intelligence (AI)

AI and Ethics

What is AI Ethics?

AI ethics is the study of how to optimize artificial intelligence's beneficial impact while reducing risks and harmful outcomes. As AI becomes increasingly integrated into research, education, and daily life, understanding ethical considerations is essential for responsible use.

Practical Steps for Ethical AI Use

Before You Start

  • Review your instructor's AI policy for the course
  • Check George Mason's current AI guidelines
  • Understand the privacy policy of any AI tool you plan to use

During Use

  • Keep records of your AI interactions for citation purposes
  • Fact-check all AI-generated information
  • Ensure AI enhances your work rather than replacing your thinking

In Your Final Work

  • Clearly disclose all AI assistance
  • Cite AI tools appropriately according to your style guide
  • Take responsibility for the accuracy of your final product

 

This guide follows George Mason University's AI Guidelines and is regularly updated to reflect evolving best practices and policies. Claude.ai was used to format the information, and the content was reviewed by Dr. Heidi Blackburn.

Mason-Specific Guidelines for Ethics

George Mason University's AI Guidelines emphasize six key principles:

  1. Human Oversight - You remain responsible for all AI-assisted work
  2. Transparency - Clearly disclose AI use in your work
  3. Compliance - Follow all university policies and legal requirements
  4. Data Privacy - Protect confidential and sensitive information
  5. Critical Thinking - Develop AI literacy and question AI outputs
  6. Accuracy - Verify all AI-generated content before use

What's Prohibited at George Mason ⚠️

  • Uploading confidential data or personally identifiable information (sensitive research data, student information, proprietary content)
  • Using AI to create deceptive or misleading content
  • Violating copyright through AI-generated materials
  • Academic dishonesty involving undisclosed AI use

Key Ethical Considerations

Bias and Fairness

AI systems can perpetuate or amplify biases present in their training data. Because generative AI models ingest enormous amounts of text from across the internet, their output can reproduce the biases, stereotypes, and hate speech found on the web, in both explicit and implicit forms.

What this means for you:

  • Be aware that AI responses may reflect societal biases
  • Critically evaluate AI outputs, especially on sensitive topics
  • Consider diverse perspectives when using AI for research

Privacy and Data Security

There are ongoing privacy concerns about how AI systems harvest personal data from users, including information you may not realize you're sharing. Personal or sensitive user-submitted data can become part of the material used to train AI without explicit consent.

Best practices:

  • Never input confidential or sensitive data into AI tools
  • Read privacy policies before using new AI tools
  • Be especially cautious with research data, personal information, or proprietary content
  • Follow George Mason's AI Guidelines regarding data security
  • For IRB protocols, consult the Information Technology Services (ITS) IT Security Office

Academic Integrity

Using AI in academic work raises important questions about originality, citation, and honest representation of your work.

Key principles:

  • Disclosure: Always indicate when and how you've used AI assistance
  • Verification: Check all AI outputs; AI tools are known for producing "hallucinations" - false information created by the system, including partially or fully fabricated citations
  • Original thinking: Ensure AI enhances rather than replaces your critical thinking

Accuracy and Misinformation

AI-generated content may contain factual errors, outdated information, or completely fabricated details presented convincingly.

Critical evaluation tips:

  • Always fact-check AI-generated information
  • Cross-reference with reliable, primary sources
  • Be aware that AI models are trained on past datasets and may not reflect recent developments

Broader Ethical Impacts

Environmental Impact

AI technologies rely on vast physical infrastructures that require tremendous amounts of natural resources, including energy, water, and rare earth minerals.

Labor and Consent

Academic publishers have struck deals with AI companies to provide access to books and scholarly journals, without necessarily giving notice to authors. This raises questions about consent and fair compensation for intellectual property use.

Graduate Student-Specific Considerations

Discipline-Specific Guidelines

While George Mason's AI guidelines apply universally, individual academic units may have additional requirements or interpretations. Different fields may have varying levels of AI acceptance based on disciplinary norms and methodological traditions.

Action steps:

  • Check with your specific college, school, or department for additional guidance
  • Consult with your advisor about field-specific AI practices and expectations
  • Review professional organizations' emerging AI policies in your discipline

Collaborative Research and Co-Authorship

Working with others adds complexity to AI disclosure and decision-making. Team members may have different comfort levels and institutional requirements for AI use.

Best practices:

  • Establish team agreements about AI use early in collaborative projects
  • Ensure all collaborators understand disclosure requirements
  • Document AI use decisions for shared reference
  • When working with advisors, discuss AI policies upfront to avoid conflicts

Research Applications and IRB Considerations

AI use in research involving human subjects requires special attention to ethics and IRB compliance.

Key areas:

  • IRB protocols: Fully and explicitly describe any use of AI in your Institutional Review Board (IRB) protocols, and consult the Information Technology Services (ITS) IT Security Office to understand the levels of risk
  • Using AI for data analysis raises confidentiality and replicability concerns:
    • with qualitative research (transcripts, interview content)
    • with quantitative research (datasets containing sensitive content)
    • with reproducibility and replicability of research
    • For secondary data, consult terms of service or data use agreements
  • Literature reviews: AI assistance in systematic reviews may affect methodology reporting
  • Grant writing: Some funding agencies have specific AI disclosure requirements or prohibit the use of AI

🚨 Important: Research involving or using sensitive data requires an approved, protected AI environment that does not share information outside of the project. Consult ITS data classification guidance.

University-Approved Tools for Graduate Students

George Mason provides specific AI tools that meet university security and privacy standards. Using approved tools helps ensure compliance with institutional policies.

Enterprise-approved tools (no restrictions):

  • Adobe AI
  • Microsoft Copilot Chat
  • LinkedIn Learning AI Career Coach
  • PatriotAI (university-managed access to large language models)
  • Zoom AI Companion

Approved but not supported (public data only):

  • ChatGPT (with specific privacy settings - see AI Toolkit for details)

Getting tools reviewed: Contact the Architectural Standards Review Board (ASRB) to request evaluation of tools not on the approved list.

Professional Development Implications

Skill Development and Learning

Consider how AI use affects your academic and professional growth as a student.

Balance considerations:

  • Use AI to enhance learning without replacing critical thinking skills
  • Develop AI literacy as a professional competency
  • Understand when human expertise is irreplaceable
  • Build skills that complement rather than compete with AI

Academic Publishing and Career Preparation

The academic publishing landscape is rapidly evolving regarding AI use, with different journals and conferences developing varying policies.

Evolving standards:

  • Journal policies on AI use are still developing across disciplines
  • Conference submission guidelines increasingly address AI disclosure
  • Publishers are creating new standards for AI-generated content
  • Peer review processes may prohibit or restrict AI use

Career implications:

  • AI literacy is becoming an expected skill in many fields
  • Demonstrable human expertise remains valuable
  • Understanding ethical AI use is a professional asset

"Can I use AI for this task?" Flowchart

"Can I use AI for this task?" Flowchart

Note: These flowcharts are meant as guides for the thought process. Always consult your instructor, advisor, or publisher for the most current guidelines.

Academic Writing:

  1. Is this a high-stakes assignment (thesis, dissertation, publication)? ➡️ Proceed with extra caution, consult advisor
  2. Does your instructor/program have specific AI policies? ➡️ Follow those requirements first
  3. Are you using AI to replace your own thinking? ➡️ Not recommended
  4. Are you using AI to enhance organization, grammar, or brainstorming? ➡️ Generally acceptable with disclosure

Research Tasks:

  1. Does your task involve confidential or sensitive data? ➡️ Do NOT use public AI tools. Use tools approved for the task. Understand the sensitivity level of your data.
  2. Are you using university-approved tools only? ➡️ Check the AI Toolkit
  3. Is this part of IRB-approved research? ➡️ Must be disclosed in your IRB protocol
  4. Can you verify all AI outputs independently? ➡️ Required for research integrity

Data and Privacy:

  1. Is this information you would share publicly? ➡️ If no, don't share with AI
  2. Does this contain student information, research participant data, or proprietary content? ➡️ Prohibited
  3. Are you using an enterprise-approved tool? ➡️ Safer choice but check guidelines for IRB
  4. Can you complete your work without sharing sensitive information? ➡️ Best practice
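For readers who think in code, the Data and Privacy questions above can be sketched as a small decision helper. This is an illustrative paraphrase of the checklist only, not an official Mason policy tool; the function and message wording are invented for the example.

```python
# Illustrative sketch: walks the "Data and Privacy" flowchart questions
# in order. Paraphrased from this guide; not official university policy.

def data_privacy_check(publicly_shareable: bool,
                       contains_protected_data: bool,
                       enterprise_approved_tool: bool) -> str:
    """Return guidance for sharing information with an AI tool."""
    if contains_protected_data:
        # Student information, research participant data, proprietary content
        return "Prohibited: do not share this with any AI tool"
    if not publicly_shareable:
        # Question 1: would you share this publicly? If no, don't share with AI
        return "Do not share: treat non-public information as off-limits"
    if not enterprise_approved_tool:
        # Question 3: prefer enterprise-approved tools
        return "Use an enterprise-approved tool instead; check the AI Toolkit"
    # Safer choice, but IRB guidelines may still apply to research use
    return "Safer choice: proceed, but check IRB guidelines for research"

print(data_privacy_check(publicly_shareable=True,
                         contains_protected_data=False,
                         enterprise_approved_tool=True))
```

As the final flowchart question notes, the best practice is to complete your work without sharing sensitive information at all, regardless of which branch you land on.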

Template Disclosure Statement Examples

For course assignments: "I used [AI tool name] on [date] to help with [specific task, e.g., brainstorming ideas, organizing content, checking grammar]. All final content was reviewed, revised, and verified by me."

For research papers: "AI assistance was used in this research for [specific tasks]. [AI tool name] was used on [dates] to [specific description]. All AI-generated content was verified through independent sources and analysis."

For dissertation/thesis acknowledgments: "I acknowledge the use of [AI tool name] for [specific assistance] during the completion of this work. All analysis, interpretations, and conclusions remain my own."