Construction and validation of the AI Ethics Scale in language research
DOI: https://doi.org/10.14742/ajet.11034
Keywords: language research, artificial intelligence (AI), ethics, validity, reliability, scale development
Abstract
Despite the growing relevance of artificial intelligence (AI) in language research, there is a lack of validated instruments to assess researchers’ ethical awareness regarding its use. Given the increasing integration of AI into technology-enhanced language teaching, assessment and learning environments, there is also a need to develop scales to provide a foundation for strengthening ethical AI practices across language research contexts. This study aimed to develop and validate the AI Ethical Awareness and Responsibility Scale (AI-EARS) to measure ethical awareness and responsibility in AI use among language researchers. The scale development process followed established validation standards and involved exploratory and confirmatory factor analyses. After item reduction, a two-factor structure was identified in the exploratory analysis, while a confirmatory factor analysis supported a refined one-factor, five-item model with strong model fit and excellent internal consistency. This final structure emerged because several items demonstrated weak loadings, high residual correlations or conceptual overlap during confirmatory analysis, which indicated that they did not sufficiently contribute to the latent construct. Evidence for concurrent validity and test-retest reliability further supported the psychometric robustness of the scale.
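The abstract reports excellent internal consistency for the final five-item model. For readers unfamiliar with that statistic, the sketch below shows how internal consistency is commonly quantified with Cronbach's alpha. The `cronbach_alpha` helper and the synthetic response matrix are illustrative only; they are not the study's data or code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 5-point Likert responses to five items (illustrative only):
# each respondent has a baseline tendency plus small item-level noise,
# so the items are positively correlated and alpha should be high.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(30, 1))
noise = rng.integers(-1, 2, size=(30, 5))
scores = np.clip(base + noise, 1, 5).astype(float)
alpha = cronbach_alpha(scores)
```

As a sanity check, a matrix of perfectly correlated items yields an alpha of exactly 1; values around 0.9 or above are conventionally described as excellent.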
Implications for practice or policy:
- Language researchers should adopt the AI-EARS to self-assess and enhance their ethical awareness when integrating AI tools into their studies.
- Academic institutions could incorporate the AI-EARS into research ethics training programmes to promote responsible AI use among scholars.
- Research policymakers may use the AI-EARS as a benchmark to develop guidelines addressing ethical AI practices in language research.
- AI-EARS serves institutions as a diagnostic tool, enabling targeted training to ensure ethical and transparent AI integration in language research.
License
Copyright (c) 2026 Bora Demir, Selami Aydın

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Articles published in the Australasian Journal of Educational Technology (AJET) are available under Creative Commons Attribution Non-Commercial No Derivatives Licence (CC BY-NC-ND 4.0). Authors retain copyright in their work and grant AJET right of first publication under CC BY-NC-ND 4.0.
This copyright notice applies to articles published in AJET volumes 36 onwards. Please read about the copyright notices for previous volumes under Journal History.
