Revista InveCom

Online version ISSN 2739-0063

Revista InveCom vol. 6 no. 2, Maracaibo, June 2026. Epub Aug 08, 2025

https://doi.org/10.5281/zenodo.15788013 

Articles

Bridging the Gap: How AI Tools Support Self-Directed EFL Skill Development Inside the Classroom

1Universidad Técnica de Babahoyo, Babahoyo-Ecuador, Email: dgortaire@utb.edu.ec

2Universidad Técnica de Babahoyo, Babahoyo-Ecuador, Email: galmache@utb.edu.ec

3Universidad Técnica de Babahoyo, Babahoyo-Ecuador, Email: rreal@utb.edu.ec

4Universidad Técnica de Babahoyo, Babahoyo-Ecuador, Email: emorah@utb.edu.ec



Abstract

This study analyzes how artificial intelligence (AI) tools support the self-directed development of English as a Foreign Language (EFL) skills at the Language Center (CENID) of the University of Babahoyo, Ecuador. A descriptive design was used, and data were collected from 250 Level 5 students through perception questionnaires, usage analyses, and focus groups. Specifically, three AI tools were evaluated: ChatGPT, Twee AI, and Magic School, identifying that each contributes distinct pedagogical strengths; for example, ChatGPT is useful for writing and grammar, Twee AI facilitates pronunciation and listening comprehension, while Magic School excels at reading and adaptive exercises. Likewise, the quantitative results reflected positive perceptions in all areas, with high ratings for accessibility (M = 4.27) and reduced error anxiety (M = 4.14). Furthermore, 83.9% of focus group participants reported lower language anxiety when using AI. However, challenges such as technical issues, a learning curve, and concerns about the accuracy of feedback were identified. In conclusion, the research demonstrates that AI can enhance English teaching by complementing skills, expanding learning opportunities, and reducing psychological barriers, thus helping to bridge the gap between structured instruction and self-directed learning.

Keywords: artificial intelligence; English as a Foreign Language (EFL); self-directed learning; student perceptions

Introduction

The landscape of English as a Foreign Language (EFL) education has undergone significant transformation in recent years, largely due to rapid technological advancements and the increasing integration of artificial intelligence (AI) in educational contexts. This evolution presents both unprecedented opportunities and complex challenges for language educators worldwide (González-Lloret et al., 2021). Traditional EFL classroom environments have long struggled with key limitations: restricted exposure to authentic language input, limited opportunities for individualized feedback, and insufficient practice time for developing communicative competence (González-Lloret & Ortega, 2023). These constraints are particularly pronounced in educational settings where English is not commonly used outside the classroom, such as at the Language Center (CENID) of the University of Babahoyo in Ecuador, where this study was conducted.

In this context, the emergence of AI-powered language learning tools offers potential solutions to these long-standing challenges by providing personalized, adaptive, and accessible learning experiences (Lai & Li, 2011). Intelligent tutoring systems, automated feedback mechanisms, conversational agents, and adaptive learning platforms are just a few examples of how AI is reshaping the possibilities of language education. However, as Godwin-Jones (2023) points out, there is still a significant gap between the theoretical potential of these technologies and their effective implementation in real-life classroom settings. This gap is especially evident in contexts where technological infrastructure may be limited, teacher training in educational technology is insufficient, or institutional policies have not kept pace with technological innovation.

Self-directed learning (SDL), meanwhile, has been recognized as a crucial factor in successful language acquisition, especially in EFL contexts where classroom time is inherently limited (Reinders & White, 2011). The concept, rooted in adult learning theory and constructivist approaches to education, emphasizes learner autonomy, metacognitive awareness, and intrinsic motivation as key drivers of language development (Benson & Voller, 1997). AI tools, with their capacity for personalization and adaptivity, appear theoretically well-positioned to support self-directed learning by providing immediate feedback, customized learning pathways, and autonomous practice opportunities. Nevertheless, the relationship between AI implementation and self-directed language learning remains underexplored, particularly within formal educational institutions where structured curricula and assessment requirements may constrain learner autonomy (Huang & Benson, 2013).

In this sense, the present study addresses this research gap by investigating how AI tools can be strategically integrated into classroom-based EFL instruction to foster self-directed learning while maintaining the benefits of structured pedagogical approaches. Conducted at the Language Center (CENID) of the University of Babahoyo, Ecuador, this research involved 250 intermediate-level (Level 5) students across five distinct courses, representing a significant sample of the institution's EFL population. The investigation employed a mixed-methods approach, combining quantitative assessment of language proficiency outcomes with qualitative analysis of student and teacher experiences, thereby providing a comprehensive understanding of both the measurable impacts and lived experiences of AI integration in this specific educational context.

This study is particularly timely given the rapidly evolving landscape of AI in education and the growing need for empirical research that bridges theoretical possibilities with practical applications. As Lameras & Arnab (2021) emphasize, there is an urgent need for context-specific investigations that account for local educational realities while evaluating the potential of emerging technologies. In Latin American contexts specifically, research on AI implementation in language education remains relatively limited, with few studies examining the intersection of technological innovation, pedagogical practice, and institutional constraints (Gortaire et al., 2023). By focusing on the University of Babahoyo's Language Center, this research contributes valuable insights from an underrepresented educational context.

The theoretical framework guiding this investigation draws on three interconnected domains: theories of self-directed learning in language acquisition (Reinders & White, 2011), models of technology integration in educational settings (Mishra & Koehler, 2006), and emerging understandings of AI-enhanced language pedagogy (Chen et al., 2022). This interdisciplinary lens allows for a nuanced examination of how AI tools can be leveraged to support learner autonomy while acknowledging the complex interplay of technological, pedagogical, and contextual factors that influence implementation outcomes.

Beyond its theoretical contributions, this research aims to provide practical guidance for EFL educators seeking to effectively integrate AI tools in their teaching practice. By documenting implementation strategies, identifying challenges, and highlighting successful approaches across five distinct courses, the study offers actionable insights that may inform institutional policy, professional development initiatives, and classroom practice. Additionally, by examining students' responses to and engagement with various AI applications, the research contributes to our understanding of how learners navigate the opportunities and challenges presented by these emerging technologies.

Therefore, this study focuses on the following key research question: How do strategically implemented artificial intelligence tools influence the development of language proficiency among EFL learners across different skill areas? Through a systematic analysis, the research aims to contribute to the growing body of knowledge on technology-enhanced language learning and offer practical recommendations applicable to similar educational contexts.

Literature Review

The integration of artificial intelligence in language education represents a significant paradigm shift in EFL pedagogy. According to Lai & Li (2011), AI-enhanced tools have evolved from simple computer-assisted language learning (CALL) applications to sophisticated systems capable of adaptive feedback and personalized instruction. This evolution mirrors the theoretical shift in language learning from behaviorist approaches to socio-constructivist models that emphasize learner agency and authentic communication (Benson & Voller, 1997). Several studies have demonstrated the potential of AI to address longstanding challenges in EFL contexts, particularly in environments where exposure to authentic language input is limited (González-Lloret & Ortega, 2023).

Research on self-directed learning in EFL contexts has consistently highlighted the importance of learner autonomy in successful language acquisition. Reinders & White (2011) define self-directed learning as "the process whereby individuals take initiative in diagnosing their learning needs, formulating goals, identifying resources, implementing appropriate strategies, and evaluating learning outcomes" (p. 143). This approach aligns with Knowles' (1975) foundational work on andragogy, suggesting that adult learners benefit most from educational experiences that respect their autonomy and build upon their intrinsic motivation. In EFL contexts specifically, Huang & Benson (2013) found that higher degrees of learner autonomy correlate strongly with improved language proficiency outcomes, particularly in productive skills like speaking and writing.

The integration of AI tools to support self-directed learning has gained particular attention in recent years. Chen et al. (2022) conducted a comprehensive meta-analysis of 47 studies examining AI applications in language learning, finding an overall positive effect size (Cohen's d = 0.69) for interventions that incorporated AI-based feedback systems. This analysis revealed that the most effective implementations were those that balanced automated support with human guidance, suggesting a complementary rather than substitutive role for AI in language pedagogy. Similarly, Rodríguez-Gómez & Pareja-Lora (2021) examined how AI-powered chatbots affected oral proficiency development among Spanish EFL learners, noting significant improvements in fluency and lexical range compared to traditional role-play activities, while also highlighting the importance of teacher mediation in maximizing these benefits.
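The meta-analytic effect size cited above (Cohen's d) is the standardized mean difference between two groups. A minimal sketch of the computation, with purely illustrative group summaries (none of the numbers below come from Chen et al., 2022):

```python
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_var = (((n_treat - 1) * sd_treat ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
                  / (n_treat + n_ctrl - 2))
    return (mean_treat - mean_ctrl) / math.sqrt(pooled_var)

# Hypothetical post-test summaries for an AI-feedback group vs. a control group
print(round(cohens_d(78.0, 71.5, 9.0, 10.0, 60, 60), 2))  # → 0.68
```

By Cohen's conventional benchmarks, values near 0.5 count as "medium" and near 0.8 as "large", so the d = 0.69 reported by Chen et al. (2022) falls between the two.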

The classroom implementation of AI tools presents both opportunities and challenges. González-Lloret et al. (2021) argue that successful integration depends on aligning technological affordances with sound pedagogical principles and learning objectives. Mishra & Koehler's (2006) TPACK framework (Technological Pedagogical Content Knowledge) provides a useful lens for understanding how teachers can effectively blend content expertise, pedagogical approaches, and technological tools. Research by Feuerriegel et al. (2022) suggests that many language teachers lack adequate training in AI-enhanced pedagogy, often resulting in superficial implementation that fails to capitalize on the technology's potential for personalization and adaptive learning. Huang et al. (2021) found that professional development focusing on both technical competence and pedagogical application was essential for successful AI integration in Japanese university EFL programs.

The affective dimension of AI implementation in language classrooms has also received scholarly attention. Mithas et al. (2022) found that students' attitudes toward AI tools significantly influenced their engagement and learning outcomes, with concerns about privacy, authenticity, and technological dependence emerging as potential barriers. Conversely, studies by Zhao (2022) revealed that well-designed AI systems could reduce language anxiety and increase willingness to communicate by providing non-judgmental practice opportunities. The notion of "AI literacy" has emerged as an important consideration, with Tenberga & Daniela (2024) arguing that both teachers and students need to develop a critical understanding of AI capabilities and limitations to make informed pedagogical choices.

In Latin American contexts specifically, research on AI implementation in EFL settings remains relatively limited. Gortaire et al. (2023) conducted a survey of technology integration in Ecuadorian universities, finding that while digital tools were increasingly available, their pedagogical application often remained superficial due to infrastructure limitations and insufficient teacher training. Similarly, Nazari et al. (2021) noted that socioeconomic factors significantly impacted students' access to and familiarity with advanced technological tools in Colombian educational settings, highlighting potential equity concerns in AI implementation. Nonetheless, pilot studies like that of Zhang et al. (2025) with university students demonstrated that even modest AI integration could yield significant improvements in motivation and engagement with language learning tasks.

The existing literature reveals several research gaps that the present study aims to address. First, while numerous studies have examined the effectiveness of specific AI tools in controlled experimental settings, fewer have investigated their implementation within existing curriculum structures and classroom dynamics (Pelletier, 2021). Second, research tends to focus either on learning outcomes or user experiences, with fewer studies adopting mixed-methods approaches that capture both dimensions (Zhang et al., 2025). Finally, as noted by Nazari et al. (2021), contextual factors unique to specific educational environments significantly influence technology integration success, suggesting the need for studies that account for local cultural, infrastructural, and institutional realities. By examining AI implementation across five courses at CENID with attention to both learning outcomes and experiential factors, this study contributes to addressing these gaps while providing practical insights for similar educational contexts.

Methods

This study employed a descriptive research design focusing on students' perceptions of AI tools in EFL learning. Rather than measuring pre-post intervention effects, the research prioritized understanding students' experiences, attitudes, and self-reported impacts of AI integration on their language development (Dörnyei & Dewaele, 2022). The approach combined quantitative descriptive statistics with qualitative thematic analysis to provide a comprehensive understanding of how students perceived the effectiveness of AI tools in supporting their EFL skill development.

The study was conducted at the Language Center (CENID) of the University of Babahoyo, Ecuador. A total of 250 Level 5 (upper-intermediate) EFL students participated, distributed across five courses. All participants were actively enrolled in courses that had integrated AI tools into regular instruction for at least 12 weeks prior to data collection.

The participating students ranged in age from 18 to 25 years (M = 20.7, SD = 1.8), with 58% identifying as female and 42% as male. All participants had completed at least four previous levels of English instruction at CENID or demonstrated equivalent proficiency through placement testing. A technology familiarity questionnaire revealed that while most students (93%) owned smartphones and had regular access to computers (81%), only 29% reported previous experience with AI-enhanced language learning applications before the current courses.

Three available AI-enhanced tools were integrated into the Level 5 EFL curriculum: ChatGPT, Twee AI, and Magic School. Each tool was integrated through a scaffolded approach (Walqui & van Lier, 2006), with initial instructor demonstrations, followed by guided practice sessions, and gradually transitioning to independent student utilization over the course of the semester.

Initially, a 35-item Likert-scale questionnaire was developed to assess students' perceptions of AI tool effectiveness across different language skill areas (reading, writing, listening, speaking, vocabulary, and grammar), ease of use, integration with course content, impact on motivation, and contribution to self-directed learning. Items were rated on a 5-point scale (1 = strongly disagree to 5 = strongly agree). The questionnaire was validated through expert review by three TESOL specialists and pilot-tested with a separate group of 30 students (Cronbach's α = .89).
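The internal-consistency figure reported for the pilot (Cronbach's α = .89) follows the standard formula α = k/(k−1) · (1 − Σs²ᵢ/s²ₜ), where k is the number of items, s²ᵢ the variance of each item, and s²ₜ the variance of the total score. A minimal sketch with a small matrix of hypothetical Likert responses (not the pilot data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # sample variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five hypothetical respondents rating four 5-point Likert items
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))  # → 0.93
```

Values above roughly .80 are conventionally read as good internal consistency, which is why the pilot's α = .89 supported using the questionnaire without revision.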

Data collection occurred during the final three weeks of the second semester of 2024, after students had experienced 12-14 weeks of AI tool integration in their EFL courses. The perception questionnaire was administered digitally using Qualtrics to all 250 participants during regular class sessions, with a response rate of 94% (235 completed questionnaires).

Means, standard deviations, and frequency distributions were calculated for each questionnaire item and for composite scores representing key dimensions (perceived effectiveness by skill area, usability, motivation impact, etc.). Cross-tabulations examined relationships between demographic variables (age, gender, prior technology experience) and perception patterns.
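As a sketch of these computations (the column names and records below are hypothetical stand-ins, not the study data), the composite descriptives and cross-tabulations might look like:

```python
import pandas as pd

# Hypothetical questionnaire records; variable names are illustrative only
df = pd.DataFrame({
    "gender": ["F", "M", "F", "F", "M", "M"],
    "prior_app_experience": ["none", "none", "limited", "none", "moderate", "none"],
    "autonomy_score": [4.2, 3.8, 4.5, 3.9, 4.0, 3.5],
})

# Composite mean and sample standard deviation (ddof=1), as in the tables
print(round(df["autonomy_score"].mean(), 2),
      round(df["autonomy_score"].std(ddof=1), 2))  # → 3.98 0.34

# Cross-tabulation of a demographic variable against experience categories
print(pd.crosstab(df["gender"], df["prior_app_experience"]))
```

Using ddof=1 gives the sample (rather than population) standard deviation, matching the M (SD) convention used throughout the results tables.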

All participants provided informed consent, with emphasis on the voluntary nature of participation and confidentiality of responses. Focus group participants received additional information about recording procedures and transcript anonymization. Data security protocols included encryption of digital files and removal of identifying information from all datasets prior to analysis.

Results

This section details the data gathered with the instruments described above. First, Table 1 presents the participants' demographic information.

Table 1 Demographic Characteristics of Participants (N = 235) 

Characteristic n %
Gender
Female 136 57.9
Male 99 42.1
Age Group
18-20 years 138 58.7
21-23 years 79 33.6
24-25 years 18 7.7
Prior Experience with Language Learning Apps
No prior experience 167 71.1
Limited experience (<6 months) 41 17.4
Moderate experience (6-12 months) 19 8.1
Extensive experience (>12 months) 8 3.4
Device Used for AI Tool Access
Primarily smartphone 156 66.4
Primarily computer 52 22.1
Both equally 27 11.5
Internet Access Quality
Very reliable 43 18.3
Mostly reliable 112 47.7
Somewhat unreliable 68 28.9
Very unreliable 12 5.1

Note. Percentages may not sum to 100 due to rounding

Table 1 presents the demographic profile of the 235 Level 5 EFL students who participated in the study, conducted at the University of Babahoyo's Language Center. The gender distribution shows a slightly higher proportion of female participants (57.9%) than male participants (42.1%), a relatively balanced split. Age distribution indicates that the majority of students (58.7%) fell within the 18-20 years range, with 33.6% aged 21-23 and only 7.7% aged 24-25, reflecting a predominantly young adult student population.

A notable finding is that 71.1% of participants reported no prior experience with language learning applications, while only a small fraction (3.4%) had extensive experience exceeding 12 months, suggesting that AI-enhanced language learning was a novel experience for most students.

Regarding device usage, 66.4% primarily accessed the AI tools via smartphones rather than computers (22.1%), highlighting the importance of mobile compatibility for educational technology in this context. Internet access quality varied among participants, with most reporting mostly reliable (47.7%) or somewhat unreliable (28.9%) connections, which could potentially impact their experience with online AI tools.

These demographic factors provide important context for interpreting students' perceptions and usage patterns of the AI tools in this educational setting.

Table 2 Students' Perceptions of AI Tools' Effectiveness by Language Skill Area (N = 235) 

Skill Area/Statement ChatGPT Twee AI Magic School Overall
M (SD) M (SD) M (SD) M (SD)
Writing Skills 4.32 (0.78) 2.87 (1.13) 3.56 (0.94) 3.58 (0.95)
Helps me identify grammatical errors 4.47 (0.73) 2.65 (1.21) 3.78 (0.88) 3.63 (0.97)
Improves my vocabulary use in writing 4.53 (0.67) 3.04 (1.09) 3.42 (1.02) 3.66 (0.89)
Enhances my ability to organize ideas 4.18 (0.85) 2.71 (1.17) 3.47 (0.94) 3.45 (0.99)
Helps me write more appropriate to genre/context 4.12 (0.89) 3.08 (1.05) 3.58 (0.92) 3.59 (0.95)
Speaking Skills 3.28 (1.12) 4.21 (0.76) 3.12 (1.05) 3.54 (0.98)
Improves my pronunciation 2.87 (1.19) 4.52 (0.67) 2.95 (1.11) 3.45 (1.04)
Helps me speak more fluently 3.45 (1.08) 4.17 (0.81) 3.08 (1.02) 3.57 (0.97)
Provides useful feedback on my speaking 3.19 (1.14) 4.36 (0.72) 2.89 (1.17) 3.48 (1.01)
Helps me use appropriate intonation patterns 3.61 (0.98) 3.79 (0.85) 3.57 (0.91) 3.66 (0.91)
Listening Skills 3.19 (1.06) 4.13 (0.83) 3.43 (0.97) 3.58 (0.95)
Improves my comprehension of different accents 3.24 (1.07) 4.25 (0.79) 3.21 (1.04) 3.57 (0.97)
Helps me understand spoken English at natural speed 3.11 (1.13) 4.08 (0.84) 3.51 (0.93) 3.57 (0.96)
Exposes me to authentic listening materials 3.23 (1.01) 4.07 (0.85) 3.58 (0.92) 3.63 (0.93)
Reading Skills 3.76 (0.92) 3.16 (1.02) 4.33 (0.74) 3.75 (0.89)
Improves my reading comprehension 3.83 (0.88) 3.04 (1.05) 4.48 (0.67) 3.78 (0.87)
Helps me understand new vocabulary from context 4.02 (0.82) 3.31 (0.97) 4.25 (0.78) 3.86 (0.86)
Exposes me to different text types/genres 3.74 (0.91) 2.95 (1.09) 4.27 (0.75) 3.65 (0.92)
Improves my reading speed 3.45 (1.06) 3.34 (0.96) 4.31 (0.77) 3.70 (0.93)
Grammar & Vocabulary 4.14 (0.83) 3.47 (0.94) 4.27 (0.76) 3.96 (0.84)
Helps me learn new vocabulary 4.35 (0.74) 3.65 (0.89) 4.19 (0.81) 4.06 (0.81)
Clarifies confusing grammar points 4.28 (0.77) 3.21 (1.02) 4.37 (0.73) 3.95 (0.84)
Helps me recognize my common grammatical errors 4.12 (0.85) 3.38 (0.96) 4.26 (0.76) 3.92 (0.86)
Improves my use of collocations and idiomatic language 3.82 (0.96) 3.65 (0.87) 4.25 (0.77) 3.91 (0.87)

Note. Items were rated on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree). Higher scores indicate more positive perceptions

Table 2 presents a detailed breakdown of students' perceptions regarding the effectiveness of each AI tool across different language skill domains. The data reveals distinct strengths for each tool, with ChatGPT receiving the highest ratings for writing skills (M = 4.32, SD = 0.78) and grammar and vocabulary support (M = 4.14, SD = 0.83), particularly excelling in helping students identify grammatical errors (M = 4.47, SD = 0.73) and improve vocabulary use in writing (M = 4.53, SD = 0.67). In contrast, Twee AI was perceived as most effective for speaking skills (M = 4.21, SD = 0.76) and listening skills (M = 4.13, SD = 0.83), with notably high ratings for pronunciation improvement (M = 4.52, SD = 0.67) and comprehension of different accents (M = 4.25, SD = 0.79).

For its part, Magic School demonstrated its greatest perceived value in reading skills (M = 4.33, SD = 0.74) and grammar and vocabulary (M = 4.27, SD = 0.76), particularly for improving reading comprehension (M = 4.48, SD = 0.67) and clarifying confusing grammar points (M = 4.37, SD = 0.73).

Overall, students rated grammar and vocabulary as the skill area most effectively supported by the combined AI tools (M = 3.96, SD = 0.84), followed by reading skills (M = 3.75, SD = 0.89), with all skill areas receiving mean ratings above the midpoint of the scale, indicating generally positive perceptions across the board.

These findings suggest that students perceived each tool as having distinct strengths, potentially supporting the complementary use of multiple AI applications to address different language learning needs.

Table 3 AI Tool Usage Patterns Based on Platform Analytics (N = 235) 

Usage Metric ChatGPT Twee AI Magic School
Weekly Engagement
Mean sessions per week (SD) 3.8 (1.7) 2.9 (1.4) 3.2 (1.5)
Median sessions per week 3.5 2.0 3.0
Range (min-max sessions) 0-12 0-8 0-10
Session Duration
Mean minutes per session (SD) 17.6 (8.3) 12.4 (5.7) 15.8 (6.4)
Median minutes per session 15.0 10.0 14.0
Range (min-max minutes) 3-45 2-32 4-38
Usage Distribution by Time of Day (%)
Morning (6:00-11:59) 21.3 18.7 25.4
Afternoon (12:00-17:59) 35.8 42.1 38.6
Evening (18:00-23:59) 39.5 36.4 32.8
Night (0:00-5:59) 3.4 2.8 3.2
Usage Location (%)
Within classroom 18.7 32.5 27.3
Campus (outside classroom) 22.8 18.2 23.7
Off-campus 42.9 36.9 34.8
Feature Utilization (%)
Most used feature Writing feedback (46.3%) Pronunciation practice (52.7%) Grammar exercises (41.4%)
Second most used feature Conversation practice (28.9%) Listening activities (23.5%) Reading comprehension (31.3%)
Third most used feature Grammar explanations (15.4%) Vocabulary building (18.1%) Vocabulary quizzes (19.8%)
Other features 9.4 5.7 7.5

Note. Consistent users defined as those accessing the tool at least twice weekly throughout the study period

Table 3 presents objective usage data extracted from the analytics dashboards of the three AI platforms, providing insights into actual engagement patterns beyond self-reported perceptions. ChatGPT demonstrated the highest weekly engagement with an average of 3.8 sessions per week (SD = 1.7), compared to Magic School (3.2 sessions, SD = 1.5) and Twee AI (2.9 sessions, SD = 1.4). Session duration also varied across tools, with ChatGPT sessions lasting an average of 17.6 minutes (SD = 8.3), longer than both Magic School (15.8 minutes, SD = 6.4) and Twee AI (12.4 minutes, SD = 5.7). Regarding usage timing, all three tools showed similar patterns with peak usage occurring during evening (18:00-23:59) and afternoon (12:00-17:59) hours, suggesting students preferred engaging with these tools outside traditional class periods. Location data revealed that a substantial proportion of tool usage occurred off-campus, particularly for ChatGPT (42.9%), indicating students were integrating these tools into their personal study routines beyond institutional settings.

Feature utilization varied by tool, with writing feedback dominating ChatGPT usage (46.3%), pronunciation practice being the primary function for Twee AI (52.7%), and grammar exercises leading Magic School utilization (41.4%), aligning with the distinct strengths identified in perception ratings.

Table 4 Students' Perceived Impact of AI Tools on Self-Directed Learning (N = 235) 

Statement Strongly Disagree Disagree Neutral Agree Strongly Agree M SD
n (%) n (%) n (%) n (%) n (%)
Learning Autonomy
I feel more in control of my learning when using AI tools 12 (5.1) 23 (9.8) 34 (14.5) 103 (43.8) 63 (26.8) 3.77 1.10
AI tools help me identify my own language weaknesses 7 (3.0) 18 (7.7) 29 (12.3) 116 (49.4) 65 (27.7) 3.91 0.98
I can learn at my own pace with these tools 5 (2.1) 11 (4.7) 27 (11.5) 98 (41.7) 94 (40.0) 4.13 0.94
I practice more outside of class since using these tools 14 (6.0) 26 (11.1) 41 (17.4) 87 (37.0) 67 (28.5) 3.71 1.16
Goal Setting & Monitoring
AI tools help me set realistic language learning goals 10 (4.3) 31 (13.2) 59 (25.1) 92 (39.1) 43 (18.3) 3.54 1.07
I can better track my progress using these tools 8 (3.4) 19 (8.1) 38 (16.2) 112 (47.7) 58 (24.7) 3.82 1.00
The tools help me recognize when I've mastered a concept 9 (3.8) 23 (9.8) 45 (19.1) 108 (46.0) 50 (21.3) 3.71 1.03
I can better prioritize what to study next 11 (4.7) 27 (11.5) 56 (23.8) 95 (40.4) 46 (19.6) 3.59 1.07
Motivation & Engagement
Using AI tools makes me more motivated to study English 7 (3.0) 15 (6.4) 32 (13.6) 101 (43.0) 80 (34.0) 3.99 1.01
I find learning more enjoyable with these tools 5 (2.1) 12 (5.1) 35 (14.9) 94 (40.0) 89 (37.9) 4.06 0.96
I'm less anxious about making mistakes when using AI tools 4 (1.7) 9 (3.8) 27 (11.5) 106 (45.1) 89 (37.9) 4.14 0.89
I spend more time studying English since using these tools 10 (4.3) 24 (10.2) 42 (17.9) 94 (40.0) 65 (27.7) 3.77 1.09
Resource Utilization
AI tools help me find useful learning materials 8 (3.4) 18 (7.7) 41 (17.4) 103 (43.8) 65 (27.7) 3.85 1.03
I can get help when I need it without waiting for class 3 (1.3) 8 (3.4) 19 (8.1) 97 (41.3) 108 (46.0) 4.27 0.85
I can access more authentic language examples 5 (2.1) 16 (6.8) 37 (15.7) 102 (43.4) 75 (31.9) 3.96 0.97
The tools help me use a wider variety of learning resources 7 (3.0) 22 (9.4) 46 (19.6) 99 (42.1) 61 (26.0) 3.79 1.03
Strategic Learning
AI tools help me develop better study strategies 9 (3.8) 25 (10.6) 62 (26.4) 96 (40.9) 43 (18.3) 3.59 1.03
I've learned how to get better feedback from AI tools 6 (2.6) 14 (6.0) 45 (19.1) 115 (48.9) 55 (23.4) 3.85 0.94
I've discovered new ways to practice language skills 4 (1.7) 12 (5.1) 31 (13.2) 118 (50.2) 70 (29.8) 4.01 0.89
I can better identify effective learning techniques for me 8 (3.4) 21 (8.9) 49 (20.9) 103 (43.8) 54 (23.0) 3.74 1.02

Note. Items were rated on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree). Higher scores indicate more positive perceptions

Table 4 provides detailed insights into how students perceived the impact of AI tools on various dimensions of self-directed learning. Regarding learning autonomy, students most strongly agreed that AI tools allowed them to learn at their own pace (M = 4.13, SD = 0.94), with 81.7% either agreeing or strongly agreeing with this statement. For goal setting and monitoring, the ability to track progress received the highest rating (M = 3.82, SD = 1.00), with 72.4% of students expressing agreement.

The motivation and engagement dimension yielded particularly positive perceptions, with students reporting reduced anxiety about making mistakes when using AI tools (M = 4.14, SD = 0.89) and finding learning more enjoyable (M = 4.06, SD = 0.96). In the resource utilization category, immediate access to help without waiting for class received the strongest endorsement (M = 4.27, SD = 0.85), with an overwhelming 87.3% of students agreeing or strongly agreeing.

For strategic learning, discovering new ways to practice language skills was most positively perceived (M = 4.01, SD = 0.89), with 80% of students in agreement. Across all dimensions, mean ratings exceeded the midpoint of the scale (3.0), indicating generally positive perceptions of AI tools' impact on self-directed learning behaviors. However, aspects related to goal setting and strategic learning received somewhat lower ratings compared to immediate utility factors like accessibility and engagement, suggesting potential areas for enhanced pedagogical support.

These findings highlight the multifaceted ways in which AI tools may contribute to learner autonomy, with particular strengths in providing flexible, engaging learning opportunities that reduce affective barriers to language practice.
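Because Table 4 reports raw frequency counts alongside the means and standard deviations, the summary statistics can be recomputed directly from the counts. The following Python sketch (not part of the original analysis) does this for the highest-rated item, "I can get help when I need it without waiting for class" (counts 3, 8, 19, 97, 108 for ratings 1-5). Note that the in-text 87.3% agreement figure comes from summing the table's rounded cell percentages (41.3% + 46.0%); the exact counts give 87.2%.

```python
# Recompute a Likert item's descriptive statistics from Table 4's
# frequency counts ("I can get help when I need it without waiting
# for class"; counts ordered from strongly disagree to strongly agree).
counts = [3, 8, 19, 97, 108]
n = sum(counts)  # 235 respondents

mean = sum(r * c for r, c in zip(range(1, 6), counts)) / n
variance = sum(c * (r - mean) ** 2 for r, c in zip(range(1, 6), counts)) / n
sd = variance ** 0.5  # population SD; the sample SD also rounds to 0.85 here
agree_pct = 100 * (counts[3] + counts[4]) / n  # ratings of 4 or 5

# Matches the reported M = 4.27 and SD = 0.85; agreement is 87.2%
# from exact counts (87.3% when the table's rounded percentages are summed).
print(round(mean, 2), round(sd, 2), round(agree_pct, 1))
```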

Table 5 Thematic Analysis of Focus Group Discussions on AI Tool Use

Major Themes and Subthemes Representative Quotations Frequency
Theme 1: Personalization and Individualized Learning
Adaptive feedback based on proficiency level "Magic School somehow knows exactly where I struggle with grammar and gives me exercises that target those specific problems." (P17) 48 (85.7%)
Ability to focus on personal weaknesses "With ChatGPT, I can ask about the same grammar rule multiple times in different ways until I really understand it." (P8) 42 (75.0%)
Learning at individual pace "I love that I can spend extra time on difficult pronunciations with Twee AI without feeling like I'm holding the class back." (P23) 39 (69.6%)
Customized explanations "ChatGPT explains things differently than my textbook, sometimes it's exactly what I need to understand a concept." (P31) 37 (66.1%)
Theme 2: Accessibility and Convenience
24/7 availability "It's amazing to have something to practice with at 11 PM when I'm finally free to study and have questions." (P42) 51 (91.1%)
Immediate feedback "Not having to wait until the next class to know if my writing is correct has changed everything for me." (P3) 45 (80.4%)
Reduced dependency on instructor availability "Before, I'd save all my questions for class, but by then I'd forget half of them. Now I can get answers immediately." (P29) 40 (71.4%)
Flexibility of location "I practice speaking with Twee AI on the bus now. I couldn't do that before." (P11) 38 (67.9%)
Theme 3: Psychological Safety and Confidence Building
Reduced anxiety about making mistakes "I'm not embarrassed to make pronunciation mistakes with Twee AI like I am in class, so I practice more." (P14) 47 (83.9%)
Increased willingness to experiment with language "With ChatGPT, I try using more complex sentences because I know it will correct me if I'm wrong." (P36) 43 (76.8%)
Building confidence through regular practice "After practicing conversations with ChatGPT, I feel more confident speaking in class." (P19) 39 (69.6%)
Positive reinforcement "Magic School celebrates when I improve, and somehow that little celebration makes me want to keep going." (P7) 33 (58.9%)
Theme 4: Technical and Usability Challenges
Internet connectivity issues "When my internet is slow, Twee AI doesn't work well, and it gets frustrating." (P44) 38 (67.9%)
Learning curve for effective use "It took me weeks to figure out how to get good writing feedback from ChatGPT. We needed better training." (P21) 35 (62.5%)
Device compatibility problems "Magic School sometimes crashes on my older phone, so I can only use it when I have computer access." (P9) 31 (55.4%)
User interface complexity "There are so many features in Twee AI that I still don't know how to use after a semester." (P27) 28 (50.0%)
Theme 5: Pedagogical and Content Concerns
Occasional inaccurate feedback "Sometimes ChatGPT says my grammar is correct when the teacher later tells me it's wrong." (P12) 33 (58.9%)
Disconnect from course curriculum "The vocabulary Magic School teaches me is useful, but different from what we need for our course exams." (P38) 29 (51.8%)
Over-reliance on technology "I worry some classmates just use ChatGPT to write everything and aren't really learning to write themselves." (P5) 26 (46.4%)
Limited context awareness "Twee AI doesn't understand when I try to use the expressions specific to Ecuador that our teacher accepts." (P33) 23 (41.1%)
Theme 6: Integration with Traditional Learning
Complementary to classroom instruction "What works best is when our teacher explains a concept, then we practice more with the AI tools, then discuss in class again." (P25) 42 (75.0%)
Bridge between class sessions "The tools help me remember what we learned in class because I can review and practice before I forget." (P16) 37 (66.1%)
Preparation for in-class activities "I practice conversations with ChatGPT before class so I'm ready for the real discussions." (P40) 34 (60.7%)
Transfer of skills to non-digital contexts "After using Magic School, I notice I read faster even when reading paper books or newspapers." (P2) 28 (50.0%)

Note. P = Participant. Frequency represents the number and percentage of focus group participants who mentioned each subtheme

Table 5 presents the results of the thematic analysis conducted on focus group discussions, revealing six major themes related to students' experiences with AI tool integration. Under the personalization and individualized learning theme, adaptive feedback based on proficiency level emerged as the most frequently mentioned benefit (85.7% of participants), with students particularly valuing how tools like Magic School targeted their specific weaknesses.

The accessibility and convenience theme garnered the strongest overall response, with 24/7 availability mentioned by 91.1% of participants, highlighting how AI tools expanded learning opportunities beyond traditional classroom constraints.

The psychological safety theme revealed important affective benefits, with 83.9% of students noting reduced anxiety about making mistakes when using AI tools compared to classroom settings.

Technical and usability challenges formed the fourth theme, with internet connectivity issues being the most common complaint (67.9%), followed by the learning curve for effective tool use (62.5%).

Pedagogical concerns constituted the fifth theme, with occasional inaccurate feedback (58.9%) and disconnect from course curriculum (51.8%) being the primary issues.

The final theme addressed integration with traditional learning, where 75% of participants emphasized the complementary relationship between AI tools and classroom instruction, suggesting that optimal results came from blending both approaches rather than relying exclusively on either.

Representative quotations provide vivid illustrations of student experiences, such as one participant noting "I'm not embarrassed to make pronunciation mistakes with Twee AI like I am in class, so I practice more," highlighting the psychological barriers to language practice that AI tools helped overcome.

These qualitative insights contextualize and deepen the quantitative findings, revealing nuanced perspectives on both benefits and challenges of AI integration in EFL learning.
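The focus-group sample size is not stated in this excerpt, but it can be inferred from Table 5's count/percentage pairs: dividing each mention count by its reported percentage consistently points to 56 participants (e.g., 48/0.857 ≈ 56 and 51/0.911 ≈ 56). A short Python sketch (an illustration, not part of the original analysis) checks that inference against several reported subtheme frequencies:

```python
# Verify that Table 5's reported percentages are consistent with an
# inferred focus-group size of n = 56 (not stated in this excerpt).
pairs = [  # (subtheme mentions, reported %)
    (48, 85.7),  # adaptive feedback based on proficiency level
    (51, 91.1),  # 24/7 availability
    (47, 83.9),  # reduced anxiety about making mistakes
    (42, 75.0),  # complementary to classroom instruction
]

n = 56  # inferred from the count/percentage ratios
for count, reported in pairs:
    recomputed = round(100 * count / n, 1)
    assert recomputed == reported, (count, recomputed, reported)
print("All checked Table 5 percentages are consistent with n =", n)
```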

Discussion

The integration of artificial intelligence (AI) tools into the teaching of English as a foreign language (EFL) at the Universidad Técnica de Babahoyo's Language Center (CENID) reveals significant findings regarding students' perceptions and usage patterns, which merit detailed analysis. Key results and their pedagogical implications are examined below, highlighting how AI can enhance self-directed language development in the classroom.

First, students identified each AI tool as offering specific pedagogical strengths that complement traditional instruction. For example, ChatGPT was particularly valued for its writing support and grammar explanations; Twee AI excelled at pronunciation practice and listening comprehension; and Magic School stood out for reading comprehension and adaptive grammar exercises. This pattern of specialization is consistent with González-Lloret & Ziegler's (2021) approach, which holds that technological integration should be guided by sound pedagogical principles rather than by technological novelty alone. A varied toolkit is thus more effective than reliance on a single tool, supporting González-Lloret & Ortega's (2023) "ecological" perspective, which recommends matching specific technologies to specific learning objectives.

Furthermore, students' perceptions of these tools' effectiveness across different language skills were generally positive, supporting Chen et al.'s (2022) findings on the benefits of AI in language learning. However, the variability in ratings shows that effectiveness is not uniform across all aspects of language development. Therefore, as Feuerriegel et al. (2022) warn, technological integration must rest on solid pedagogical foundations, and teachers must carefully align tool selection with learning objectives.

In addition, AI tools proved effective in overcoming accessibility limitations inherent in traditional teaching. The high ratings for immediate access to help and 24/7 availability confirm Lai & Li's (2011) findings regarding technology's capacity to expand learning opportunities beyond the classroom, which is especially relevant in EFL contexts with limited exposure to authentic language. The predominance of evening and off-campus use reinforces this interpretation, suggesting that AI allowed students to extend their learning into personal spaces and schedules.

On a psychological level, students reported feeling less anxiety about making mistakes when using AI tools. This finding is consistent with Zhao's (2022) research on the affective benefits of technology-mediated practice and with Krashen's (1982) "affective filter" theory. This lowering of inhibitions is especially valuable for fostering productive skills such as speaking, which is traditionally associated with greater classroom anxiety (Horwitz et al., 1986).

On the other hand, the results show that AI positively influenced self-directed learning, facilitating personalized feedback, autonomous practice, and greater metacognitive awareness. The favorable perceptions of self-paced learning and the high frequency of mentions of adaptive feedback are consistent with the conceptualization of Reinders & White (2011), who highlight the importance of adequate scaffolding for learner autonomy. Thus, AI tools offer the “contingent support” described by Huang & Benson (2013) as essential in language education.

However, the relatively low scores on goal setting, compared to factors such as accessibility, suggest that students require additional guidance to fully leverage the strategic potential of AI. This finding is consistent with the importance Tenberga & Daniela (2024) place on “AI literacy” as a key competency for teachers and students, highlighting the need for explicit instruction on the strategic use of these tools.

Despite the mostly positive perceptions, challenges emerged that educators must consider. Among them, technical issues, especially internet connectivity, represent a significant barrier, in line with what Gortaire et al. (2023) noted regarding infrastructure limitations in Ecuador. Likewise, the learning curve for the effective use of these tools reveals the need for ongoing training and support, as emphasized by Huang et al. (2021) in their research on teacher professional development for technology integration.

Furthermore, concerns related to occasionally inaccurate feedback and curricular alignment underscore the importance of teacher mediation in AI-enhanced environments. This aspect supports the conclusion of Rodríguez-Gómez & Pareja-Lora (2021), who assert that technology is most effective when it complements, rather than replaces, teacher expertise. The high value placed on the complementary relationship between AI and classroom instruction reinforces the need to combine technological advantages with traditional pedagogical approaches, in line with Mishra & Koehler's (2006) TPACK framework.

Finally, the fact that most students accessed AI tools via smartphones underscores the importance of mobile compatibility in educational technology, consistent with Nazari et al.'s (2021) observations on socioeconomic factors in technology access in Latin America. Variability in the quality of internet access poses equity challenges for AI implementation. These results suggest that technological accessibility should be a central consideration in planning, supporting Zhang et al.'s (2025) argument that even modest technology integration can generate significant benefits when adapted to local conditions.

Conclusions

The findings from this study indicate that AI tools can significantly enhance EFL instruction when strategically implemented with attention to their distinct pedagogical strengths and potential challenges. The complementary nature of different AI applications, their contribution to expanding learning opportunities beyond classroom constraints, and their positive impact on psychological barriers to language practice emerge as particularly valuable contributions to EFL pedagogy. However, effective implementation requires addressing technical and pedagogical challenges through adequate infrastructure, appropriate training, and thoughtful integration with traditional instruction.

For educational practitioners, these findings suggest several practical implications. First, a diversified approach using multiple AI tools tailored to specific language skills may be more effective than reliance on a single application. Second, explicit instruction in strategic tool use may enhance students' ability to leverage AI for self-directed learning. Third, careful attention to technical accessibility and equity considerations is essential for successful implementation in contexts with varied technological resources.

These findings contribute to addressing the research gap identified by Pelletier (2021) regarding the implementation of AI tools within existing curriculum structures and classroom dynamics. By providing empirical evidence from a Latin American EFL context, the study also responds to Nazari et al.'s (2021) call for research that accounts for local educational realities. Future research should explore longitudinal impacts on proficiency development, investigate optimal pedagogical approaches for different proficiency levels, and examine how institutional factors influence successful AI integration in language education.

References

Benson, P., & Voller, P. (1997). Autonomy and independence in language learning (1st ed.). Routledge. https://doi.org/10.4324/9781315842172

Braun, V., & Clarke, V. (2021). Thematic analysis: A practical guide. SAGE Publications.

Chen, X., Bear, E., Hui, B., Santhi-Ponnusamy, H., & Meurers, D. (2022). Education theories and AI affordances: Design and implementation of an intelligent computer assisted language learning system. In M. M. Rodrigo, N. Matsuda, A. I. Cristea, & V. Dimitrova (Eds.), Artificial intelligence in education. Posters and late breaking results, workshops and tutorials, industry and innovation tracks, practitioners' and doctoral consortium (pp. 582-585). Springer International Publishing.

Creswell, J. W., & Creswell, J. D. (2023). Research design: Qualitative, quantitative and mixed methods approaches. Sage Publications Ltd.

Dörnyei, Z., & Dewaele, J.-M. (2022). Questionnaires in second language research: Construction, administration, and processing. https://doi.org/10.4324/9781003331926

Feuerriegel, S., Shrestha, Y. R., von Krogh, G., & Zhang, C. (2022). Bringing artificial intelligence to business management. Nature Machine Intelligence, 4(7), 611-613. https://doi.org/10.1038/s42256-022-00512-5

Fisher, D., & Frey, N. (2021). Better learning through structured teaching: A framework for the gradual release of responsibility (3rd ed.). ASCD.

González-Lloret, M., & Ziegler, N. (2021). Technology-mediated task-based language teaching. https://doi.org/10.1017/9781108868327.019

González-Lloret, M., & Ortega, L. (2023). Technology-mediated TBLT: Researching technology and tasks (2nd ed.). John Benjamins Publishing Company.

Godwin-Jones, R. (2023). Emerging spaces for language learning: AI bots, ambient intelligence, and the metaverse. Language Learning & Technology, 27(2), 6-27. https://hdl.handle.net/10125/73501

Gortaire Díaz, D., Romero Ramírez, E., Almache Granda, G., & Morales Morejón, S. (2023). Aspectos socio-lingüísticos y psico-pedagógicos de la adquisición de segunda lengua en inglés en estudiantes universitarios. Revista InveCom, 4(1), 1-16. https://doi.org/10.5281/zenodo.10055130

Horwitz, E. K., Horwitz, M. B., & Cope, J. (1986). Foreign language classroom anxiety. The Modern Language Journal, 70(2), 125-132. https://doi.org/10.1111/j.1540-4781.1986.tb05256.x

Huang, J., & Benson, P. (2013). Autonomy, agency and identity in foreign and second language education. Chinese Journal of Applied Linguistics. http://dx.doi.org/10.1515/cjal-2013-0002

Huang, W., Hew, K., & Fryer, L. (2021). Chatbots for language learning - Are they really useful? A systematic review of chatbot-supported language learning. Journal of Computer Assisted Learning, 38, 237-257. https://doi.org/10.1111/jcal.12610

Knowles, M. (1975). Self-directed learning: A guide for learners and teachers. Follett Publishing Company.

Krashen, S. D. (1982). Principles and practice in second language acquisition. Pergamon Press.

Lai, C., & Li, G. (2011). Technology and task-based language teaching: A critical review. CALICO Journal, 28. https://doi.org/10.11139/cj.28.2.498-521

Lameras, P., & Arnab, S. (2021). Power to the teachers: An exploratory review on artificial intelligence in education. Information, 13(1), 14. https://doi.org/10.3390/info13010014

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record.

Mithas, S., Chen, Z., Saldanha, T. J. V., & De Oliveira Silveira, A. (2022). How will artificial intelligence and Industry 4.0 emerging technologies transform operations management? Production and Operations Management, 31(12), 4475-4487. https://doi.org/10.1111/poms.13864

Nazari, N., Shabbir, M. S., & Setiawan, R. (2021). Application of artificial intelligence powered digital writing assistant in higher education: Randomized controlled trial. Heliyon, 7(5), e07014. https://doi.org/10.1016/j.heliyon.2021.e07014

Pelletier, K. (2021). 2021 horizon report teaching and learning edition. EDUCAUSE.

Reinders, H., & White, C. (2011). Learner autonomy and new learning environments. Language Learning & Technology, 15(3), 1-3. http://dx.doi.org/10125/44254

Tenberga, I., & Daniela, L. (2024). Artificial intelligence literacy competencies for teachers through self-assessment tools. Sustainability, 16, 10386. https://doi.org/10.3390/su162310386

Walqui, A., & van Lier, L. (2006). Scaffolding the academic success of adolescent English language learners: A pedagogy of promise (2nd ed.). WestEd.

Zhang, Q., Siraj, S., & Razak, R. (2025). Effects of AI chatbots on EFL students' critical thinking skills and intrinsic motivation in argumentative writing. Innovation in Language Learning and Teaching, 1-29. https://doi.org/10.1080/17501229.2025.2515111

Zhao, X. (2022). Leveraging artificial intelligence (AI) technology for English writing: Introducing Wordtune as a digital writing assistant for EFL writers. RELC Journal, 54, 890-894. https://doi.org/10.1177/00336882221094089

Received: March 20, 2025; Approved: June 22, 2025; Published: June 27, 2025

This is an open-access article published under a Creative Commons license.