Recommendations for Research Organizations:
- Promote, guide, and support the responsible use of generative AI in research activities.
- Actively monitor the development and use of generative AI systems within their organizations.
- Reference or integrate generative AI guidelines into general research guidelines for good research practices and ethics.
- Implement locally hosted or cloud-based generative AI tools that they govern themselves, to safeguard data protection and confidentiality.
Promote
The guidelines on the responsible use of generative AI in research, developed by the European Research Area Forum, emphasize the importance of promoting, guiding, and supporting the responsible use of AI in research activities. Research organizations are encouraged to design funding instruments that are open to, and supportive of, the ethical use of AI, and to ensure that funded research complies with national and international legislation and good practice. They should also urge researchers to use generative AI ethically and responsibly, respecting legal requirements and research standards.
Furthermore, the guidelines stress that research organizations should review how generative AI is used in their internal processes and apply it transparently and responsibly. They should remain fully accountable for its use, ensuring that it aligns with ethical principles and does not compromise the confidentiality or fairness of those processes. By choosing generative AI tools carefully and ensuring adherence to quality standards, data protection, and intellectual property rights, research organizations can contribute to a culture of responsible AI use in the research community.
Monitor
This means that research organizations need to stay informed about how generative AI is being utilized in their research activities. By actively monitoring these systems, organizations can provide guidance, identify training needs, and understand what support is most beneficial. This knowledge helps in anticipating and preventing possible misuse or abuse of AI tools, ultimately contributing to a more responsible and ethical use of generative AI in research.
For research organizations, actively monitoring the development and use of generative AI systems is crucial for ensuring compliance with ethical and legal requirements. By analyzing the limitations of the technology and its tools, organizations can provide valuable feedback and recommendations to their researchers. This feedback loop helps maintain a critical approach to generative AI, address biases, and safeguard the integrity of generated content. Sharing this knowledge with the wider scientific community further fosters transparency and accountability across research organizations.
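To make this concrete, here is a minimal sketch of one way an organization might support such monitoring in practice: a small helper that researchers call to record each use of a generative AI tool in a shared audit log, which the organization can later aggregate to spot training needs or potential misuse. The log location, format, and function names are illustrative assumptions, not part of the ERA Forum guidelines; a real deployment would use the organization's own logging infrastructure.

```python
import csv
import datetime
import getpass
from pathlib import Path

# Illustrative audit-log location; a real deployment would write to the
# organization's own logging infrastructure (assumption, not ERA guidance).
AUDIT_LOG = Path("genai_usage_log.csv")

def log_genai_use(tool: str, purpose: str) -> None:
    """Append one record of generative AI use to a shared audit log.

    Aggregated records let an organization see which tools are used,
    for what purposes, and where guidance or training may be needed.
    """
    is_new = not AUDIT_LOG.exists()
    with AUDIT_LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "user", "tool", "purpose"])
        writer.writerow([
            datetime.datetime.now(datetime.timezone.utc).isoformat(),
            getpass.getuser(),
            tool,
            purpose,
        ])

if __name__ == "__main__":
    # Example: a researcher records that they used a chat assistant
    # to draft a literature summary.
    log_genai_use("chat-assistant-v1", "draft literature summary")
```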
Reference guidelines
This action ensures that the principles of responsible AI use are incorporated into the overall research framework, promoting transparency and accountability in the use of generative AI tools within research organizations.
By referencing or integrating the generative AI guidelines into their general research guidelines for good research practices and ethics, research organizations can create a cohesive approach to using AI responsibly. This integration helps establish clear standards and expectations for the ethical and transparent use of generative AI tools, ensuring that research activities align with the principles of integrity and responsible innovation in the rapidly evolving landscape of AI technology.
Implement self-governed AI tools
This means that organizations should ensure that the AI tools they use are under their control, either by hosting them on their own servers or by using cloud-based tools that provide guarantees of data protection and confidentiality. By governing these tools themselves, organizations can better protect sensitive research data and ensure compliance with data protection regulations.
In line with these recommendations, implementing locally hosted or self-governed cloud-based generative AI tools is crucial for maintaining data protection and confidentiality in research activities. Direct control over the AI tools being used allows organizations to safeguard sensitive information and adhere to ethical and legal requirements. By hosting these tools locally or through trusted cloud services under their own governance, research organizations can mitigate risks to data privacy and intellectual property rights, fostering a secure environment for the responsible use of generative AI in research.
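To illustrate the difference in practice, below is a minimal sketch of a researcher's script calling a model server hosted inside the organization's own network rather than a third-party service. It assumes an Ollama-compatible API; the hostname, port, and model name are placeholders, not part of the guidelines. Because the request never leaves the organization's infrastructure, prompts containing unpublished or sensitive data stay under the organization's control.

```python
import json
import urllib.request

# Assumed self-governed, Ollama-compatible model server inside the
# organization's network; hostname, port, and model are illustrative.
LOCAL_ENDPOINT = "http://ai.internal.example.org:11434/api/generate"
MODEL_NAME = "llama3"

def generate(prompt: str) -> str:
    """Send a prompt to the organization's own model server.

    Sensitive research data in the prompt never leaves the
    organization's infrastructure.
    """
    payload = json.dumps({
        "model": MODEL_NAME,
        "prompt": prompt,
        "stream": False,  # one JSON object instead of a token stream
    }).encode("utf-8")
    request = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["response"]

if __name__ == "__main__":
    print(generate("Draft a one-paragraph summary of our anonymization protocol."))
```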