AI-Generated Content Policy

Institute of Applied Artificial Intelligence and Robotics (IAAIR)
Last updated: June 15, 2025

1. Purpose

The purpose of this policy is to define how AI-generated content should be used, disclosed, and governed at the Institute of Applied Artificial Intelligence and Robotics (IAAIR). This includes content generated using large language models (LLMs), image generators, code generators, voice synthesis tools, and other generative AI systems.

IAAIR supports the innovative use of generative AI to accelerate research, improve communication, and enhance productivity. However, we recognize the ethical, legal, and reputational risks associated with misuse or nondisclosure of AI-generated material. This policy ensures that all such content is used responsibly, transparently, and with human accountability.

2. Scope

This policy applies to all individuals affiliated with IAAIR, including:

  • Full-time and part-time staff

  • Researchers, faculty, and postdocs

  • Interns, fellows, and visiting scholars

  • Contractors, developers, and external collaborators

It governs any content created or modified using AI tools, including:

  • Written content (e.g., manuscripts, abstracts, reports, emails, documentation)

  • Code and scripts

  • Images, figures, charts, and visualizations

  • Synthetic media (e.g., voice, video, avatars)

  • Educational or training materials

  • Web content and social media posts

3. Key Principles

IAAIR’s use of AI-generated content must follow these core principles:

3.1 Transparency

Use of AI tools to generate or assist with content creation must be disclosed clearly in any formal publication, technical document, public release, or internal report.

3.2 Human Accountability

All AI-generated content must be reviewed and validated by a human author or responsible party. That individual is fully accountable for its accuracy, appropriateness, originality, and ethical compliance.

3.3 Originality and Attribution

Generative AI content must not infringe on copyrights, misuse proprietary data, or misrepresent original authorship. Any external datasets, models, or tools used must be properly cited or credited.

3.4 Ethical Use

AI-generated content must not be used to:

  • Fabricate or manipulate research data

  • Generate misleading media, deepfakes, or deceptive simulations

  • Circumvent authorship or peer review standards

  • Replace genuine stakeholder engagement or informed consent

4. Disclosure Requirements

Use of generative AI tools in formal outputs must be disclosed in one of the following ways:

  • Research Papers: Include a statement in the Methods or Acknowledgments section (e.g., "Portions of the manuscript were generated using OpenAI's GPT-4 and reviewed by the authors for accuracy.")

  • Technical Reports: Add a note in the introduction or footnotes describing the role of the AI tool

  • Visual Media: Include a caption or alt text (e.g., "AI-generated image using DALL·E 3")

  • Internal Use: Document tool usage in research logs or version history

  • Web and Outreach Content: Use visual disclaimers (e.g., “Generated with the assistance of AI”) where appropriate
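For internal code changes, one lightweight way to satisfy the "Internal Use" option above is a commit-message trailer recorded in version history. The sketch below is illustrative only: the trailer name "AI-Assisted:" and the reviewer wording are examples, not a format mandated by this policy.

```shell
# Illustrative sketch: documenting AI tool usage in version history
# via a commit-message trailer (trailer name is an example, not policy).
mkdir -p /tmp/ai-disclosure-demo && cd /tmp/ai-disclosure-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "data loader" > loader.py
git add loader.py
git commit -q -m "Add data loader" \
  -m "AI-Assisted: GitHub Copilot (code suggestions); reviewed by the committing author"
# The disclosure then travels with the change in the repository history:
git log -1 --format=%B
```

Teams that prefer structured metadata could instead record the same information in a research log or changelog entry; the point is that the tool, its role, and the human reviewer are captured alongside the work.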

Failure to disclose AI assistance may be considered a violation of research integrity and institutional policy.

5. Prohibited Uses

The following uses of AI-generated content are prohibited at IAAIR:

  • Publishing AI-generated content as human-authored without disclosure

  • Submitting fully AI-written academic papers or grant proposals

  • Using synthetic voice or media to impersonate individuals or create false representations

  • Generating AI content to fabricate experimental data or participant responses

  • Using generative AI for deceptive communications, marketing, or lobbying

6. AI Tools Permitted with Oversight

Commonly used AI tools include, but are not limited to:

  • Large language models (e.g., ChatGPT, Claude, Bard, LLaMA)

  • Image and video generators (e.g., DALL·E, Midjourney, Runway)

  • Code assistants (e.g., GitHub Copilot, Amazon CodeWhisperer)

  • Voice synthesizers and avatars (e.g., Descript, Synthesia)

These tools may be used, provided their use is subject to human oversight and complies with this policy. Custom or experimental models must be vetted by the IAAIR Research Office before use in formal publications or public-facing content.

7. AI as a Tool, Not an Author

IAAIR follows the position of major academic publishers and professional bodies: AI systems cannot be listed as authors on any publication, dataset, or official document. Authorship requires human accountability, intent, and responsibility—qualities that current AI systems do not possess.

8. Review and Oversight

The Office of Research Integrity and the Responsible AI and Ethics Team are responsible for:

  • Auditing compliance with this policy

  • Investigating potential misuse or non-disclosure

  • Updating guidelines based on evolving AI capabilities and legal standards

  • Supporting researchers in understanding appropriate use cases

9. Training and Awareness

IAAIR will provide regular training to researchers, staff, and students on:

  • Responsible use of generative AI

  • Disclosure requirements

  • Intellectual property and licensing risks

  • Emerging ethical concerns in AI content creation

Completion of training may be required for access to specific AI tools or approval for public release of AI-generated content.

10. Contact and Reporting

For questions about this policy or to report a suspected violation, please contact:

Office of Research Integrity and Ethics
📧 hello@iaair.ai