Analyzing University Students' Attitude and Behavior Toward AI Using the Extended Unified Theory of Acceptance and Use of Technology Model



INTRODUCTION
Artificial Intelligence (AI), which automates tasks and emulates human intelligence (Geetha & Bhanu Sree Reddy, 2018; Jarrett & Choo, 2021; Khanagar et al., 2021; Saravanan et al., 2017), is rapidly growing (Barton et al., 2017; Beig & Qasim, 2023; Hassani et al., 2020; Hilale, 2021; Olhede & Wolfe, 2018). It has become an important technology that benefits society and the economy (Cockburn et al., 2018; Hall & Pesenti, 2017; Lu et al., 2018) and pervades many aspects of people's daily lives (Hilale, 2021; Loble et al., 2017; Mintz & Brodie, 2019; Olhede & Wolfe, 2018; Tahiru, 2021). As AI has become prominent in various sectors (Berdiyorova et al., 2021; Hall & Pesenti, 2017; Jindal et al., 2021; Paul et al., 2021), it has also been integrated into higher education (Crompton & Burke, 2023; Pedro et al., 2019; Zawacki-Richter et al., 2019; Zhang & Aslan, 2021). AI has been implemented in language learning to improve instruction and address learners' needs (Chen, 2021), facilitate interaction between instructors and learners (Seo et al., 2021), and foster learners' educational experience (Alam, 2021; Kavitha & Lohani, 2018). Although artificial intelligence is thriving (Shao et al., 2020; Tan & Ran, 2022; Zhou et al., 2019), it is also seen as a threat to education (Humble & Mozelius, 2022; Xie & Wang, 2023). Some college students were concerned with ethical issues (Farhi et al., 2023; Ghotbi et al., 2022) and were wary of an unnatural learning environment (Kushmar et al., 2022). The majority of college students have no intention of using AI to complete assignments or exams in the near future (Welding, 2023), and according to Skeat and Ziebell (2023), a significant number of students still strongly oppose such technology use. A similar study revealed that although respondents understand the essence of AI technology and how it benefits their daily lives, they are not entirely clear about the
benefits of incorporating artificial intelligence-enhanced technologies in learning and teaching (Slavov et al., 2023). Prior studies have explored people's attitudes toward and behavioral intention to use artificial intelligence. In the study of Yadrovskaia et al. (2023), respondents had a positive attitude toward the use of AI despite not fully grasping the fundamentals of these technologies. Additionally, some students believe that AI will benefit the field of education (Kairu, 2020; Marrone et al., 2022), and they also have a positive attitude toward using it because it engages students and accommodates their varying cognitive levels (Obenza et al., 2023b; Pande et al., 2020). These attitudes regarding AI affect people's level of trust in AI technology (Liehner et al., 2023). Moreover, artificial intelligence such as chatbots appeals to language learners, since learners can use them without teachers' assistance, which helps them develop into independent learners (Mohamed & Alian, 2023). According to Chen et al. (2021), students' behavioral intention to study a language was positively correlated with their knowledge of AI-enabled language applications, attitude toward using AI, perceived ease of use, and subjective norm. Romero-Rodriguez et al. 
(2023) used the Unified Theory of Acceptance and Use of Technology (UTAUT) model for technology adoption and found that university students accept artificial intelligence such as ChatGPT because they think it could help them learn; usefulness, performance expectancy, hedonic motivation, private value, and habit also affect students' use of an AI chatbot prototype. Using the UTAUT model, Kim (2017) found that expectation, social influence, work usefulness, and anxiety significantly affected healthcare university students' intention to use AI technology; after verifying indirect effects, the use-intention factor partially mediated the effects of the anxiety factor and the task-usefulness factor on the attitude factor. Kaya et al. (2022) also reported that AI anxiety can hinder the adoption, use, or acceptance of technology and can cause people to underestimate the usefulness of AI technology and fail to recognize its simplicity and benefits. Moreover, in the study of Gado et al. (2022), the perceived usefulness of AI, attitude towards AI, perceived social norm regarding AI, and AI literacy proved to be significant indicators of students' intent to use artificial intelligence. Additionally, Alzahrani (2023) discovered that while students' attitudes were adversely affected by perceived risk, their behavioral intention to utilize AI in education was significantly influenced by performance expectancy and facilitating conditions; effort expectancy, however, had no substantial effect on attitudes toward the use of AI in higher education. In light of these empirical investigations, the pivotal role of AI literacy in shaping attitudes toward artificial intelligence (AI) emerges prominently. In a recent study conducted by Obenza et al. 
(2024), it was found that cultivating cognitive absorption among students is a viable strategy for enhancing AI literacy, given its established status as a significant predictor thereof. In addition to the above studies, the UTAUT model has been used to study students' adoption of AI-enabled e-learning systems (Lin et al., 2021), intelligence-based robots (Roy et al., 2022), and AI-powered web-based English writing assistance software (Intiser et al., 2023). However, despite existing studies and literature concerning people's attitudes and behavioral intention to use AI, no study using UTAUT has incorporated AI trust and AI awareness in explaining and building a structural model of college students' attitudes and behavior toward AI. Therefore, this study was conducted to address this research gap. Its results can contribute specifically to academic sectors that are progressively integrating artificial intelligence into pedagogical approaches, as well as aid technology sectors in enhancing AI tools that increase people's positive view and adoption of AI. It can also benefit future researchers in further exploring the factors influencing students' attitude and behavioral intention to use artificial intelligence.

MATERIALS AND METHODS
This study utilized a quantitative research design, specifically a non-experimental correlational approach, throughout the research process. As defined by Creswell and Creswell (2023), quantitative studies use inquiry methodologies such as surveys and experiments, and the data are gathered on predetermined instruments that generate statistical measurements. The researchers distributed Google Forms survey questionnaires to participants selected through stratified random sampling, which allowed the study variables to be represented equitably. For latent variable path models, complex cause-effect relationships can be estimated through partial least squares structural equation modeling (PLS-SEM). As PLS-SEM gains popularity, more researchers are using it (Hair et al., 2019a; Hair et al., 2017b; Ringle et al., 2015; Sarstedt et al., 2019b). Using PLS-SEM, researchers can estimate large models containing multiple constructs, indicator variables, and structural paths without making distributional assumptions, and the method emphasizes prediction in statistical model estimation while explaining causal relationships (Wold, 1982; Sarstedt et al., 2017a). This partial data analysis method allows for smaller sample sizes, although larger samples should be used whenever possible to extrapolate sample results to the relevant population (Hair et al., 2022b; Kock & Hadaya, 2018). The researchers used adapted questionnaires in the form of 5-point Likert scales to gather the data: the attitude toward AI scale (Suh & Ahn, 2022); the facilitating condition, performance expectancy, effort expectancy, and behavioral intention to use scales (Chatterjee & Bhattacharjee, 2020); the social influence scale (Kandoth & Shekhar, 2022); the AI awareness 
scale (Isaac et al., 2017); and the AI trust scale (Choung et al., 2022). The study applied the 10-times rule proposed by Hair et al. (2011) to determine the required sample size. This method is commonly used in PLS-SEM to determine the minimal sample size and rests on the premise that the sample must exceed ten times the largest number of inner or outer model links directed at any latent variable in the model (Hair et al., 2017). The minimal sample size computed based on this criterion is 90. The study selected 322 college students from different universities in Region XI by stratified random sampling, exceeding the recommended sample size to ensure accurate results, particularly for the concise model proposed here. The validity and reliability of the measurement model were further evaluated using Cronbach's alpha. Convergent validity was assessed with the Average Variance Extracted (AVE), and discriminant validity with the Heterotrait-Monotrait Ratio (HTMT); the Variance Inflation Factor (VIF) was used to check for collinearity. To evaluate the hypothesized structural model, the standardized bootstrapping algorithm was run in SmartPLS 4.0.
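As a minimal sketch of the 10-times rule computation, the snippet below uses illustrative per-construct path counts chosen to reproduce the reported minimum of 90; they are assumptions, not the study's published model specification.

```python
# 10-times rule (Hair et al., 2011): the minimum sample size is ten times
# the largest number of model links directed at any latent variable.
# The path counts below are assumptions for illustration only.
inbound_paths = {
    "Attitude toward AI (At-AI)": 6,
    "Behavioral Intention to Use (BIU)": 9,  # assumed largest in-degree
}

def min_sample_size_10x(paths_per_construct, multiplier=10):
    """Return the 10-times-rule minimum sample size."""
    return multiplier * max(paths_per_construct.values())

print(min_sample_size_10x(inbound_paths))  # 90, as reported in the text
```

The study's 322 respondents comfortably exceed this floor, which is the point the passage above makes.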

RESULTS AND DISCUSSION

Assessment of Measurement Model
Cronbach's alpha and composite reliability are the two measurements most frequently used to determine internal consistency. These measurements assess reliability based on the interrelationship of the observed item variables (Hamid et al., 2017). The reliability of the instruments used in the research is presented in Table 1; Cronbach's alpha was used to evaluate the instruments. Cronbach's alpha values for the questionnaires are as follows: 0.854 for AI Awareness (AI-A), 0.937 for AI Trust (AI-T), 0.951 for Attitude towards AI (At-AI), 0.904 for Behavioral Intention to Use (BIU), 0.767 for Effort Expectancy (EE), 0.864 for Facilitating Conditions (FC), 0.873 for Performance Expectancy (PE), and 0.896 for Social Influence (SI). These values indicate that the questionnaires have a high degree of internal consistency. Composite reliability and Cronbach's alpha values between 0.60 and 0.70 are considered acceptable; in the more advanced stage, the value must exceed 0.70 (Hair et al., 2014). Convergent validity, the degree of agreement regarding the correlation between multiple indicators of the same construct (Hamid et al., 2017), was evaluated by calculating the AVE. BIU (0.777), AI-A (0.696), AI-T (0.613), SI (0.766), EE (0.682), FC (0.647), and PE (0.664) all had AVE values that surpassed the minimum acceptable threshold of 0.50. An AVE of 0.50 or greater signifies that the construct accounts for at least 50 percent of the variance in its items (Bagozzi & Yi, 1988; Fornell & Larcker, 1981; Hair et al., 2014; Henseler et al., 2009). Prior to evaluating the structural relationships, collinearity must be examined to ensure that it does not 
introduce any bias into the regression results. VIF values greater than five indicate probable collinearity issues among the predictor constructs (Hair et al., 2019). However, collinearity problems can also occur at lower VIF values, between three and five (Mason & Perreault, 1991; Becker et al., 2014). Ideally, VIF values should be close to three or lower.
A common remedy when collinearity is a problem is to create higher-order models that can be supported by theory (Hair et al., 2017b).
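The three checks discussed in this section follow standard formulas. The NumPy sketch below is our own illustration of those formulas (helper names are ours), not the SmartPLS 4.0 computation the study actually used.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

def average_variance_extracted(std_loadings):
    """AVE: mean of the squared standardized indicator loadings."""
    return float(np.mean(np.square(std_loadings)))

def vif(predictors):
    """VIF per predictor: diagonal of the inverse correlation matrix."""
    corr = np.corrcoef(np.asarray(predictors, dtype=float), rowvar=False)
    return np.diag(np.linalg.inv(corr))
```

For example, a construct whose standardized loadings are all 0.80 yields an AVE of 0.64, clearing the 0.50 threshold cited above; uncorrelated predictors yield VIF values of exactly 1.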
Turning to AI Trust, the results indicate a positive association both with Attitude toward AI (coefficient = 0.366) and Behavioral Intention to Use (coefficient = 0.173). These findings suggest that as students' trust in AI increases, so do their favorable attitude toward it and their inclination to use AI technology. Both relationships are statistically significant, underscoring the importance of trust in shaping attitudes and intentions toward AI adoption (T = 5.621, p = 0.000 and T = 2.679, p = 0.007, respectively). The results also suggest that students' perceptions of the effort required to use artificial intelligence and the presence of favorable conditions do not significantly affect their attitudes and intentions regarding AI adoption in this particular environment. These findings are in direct opposition to research, such as that of Hasan Emon (2023) and Alzahrani (2023), demonstrating that enabling environments affect behavioral intention. Similarly, while Performance Expectancy exhibits a positive relationship with Attitude toward AI, the relationship is not statistically significant (T = 1.711, p = 0.087), indicating that perceptions of AI's performance benefits may not strongly influence attitudes toward AI among university students; by contrast, Alzahrani (2023) found that performance expectancy significantly influences students' attitudes toward AI. Finally, Social Influence proves to be a substantial indicator of Attitude toward AI.

Assessment of Structural Model
Examining the path from AI Awareness to Attitude toward AI, the coefficient of 0.156 indicates a positive relationship: as students' awareness of AI grows, their attitude toward AI tends to become more favorable. The statistical analysis reveals a significant connection (T = 3.004, p = 0.003). The association between AI Awareness and Behavioral Intention to Use is larger, with a value of 0.337, implying that higher levels of AI awareness among students are associated with a greater intention to use AI technology; this relationship is not only statistically significant but also notably stronger (T = 6.009, p = 0.000). This corroborates the findings of Marrone et al. (2022), who discovered that students with a greater comprehension of AI expressed more favorable attitudes toward incorporating AI into their educational environments, whereas students with limited comprehension of AI tended to experience apprehension towards it.
With a coefficient of 0.21, the influence of classmates, professors, and societal norms is thus crucial in molding students' attitudes toward AI. The statistical analysis reveals a significant association (T = 4.051, p = 0.000), indicating that social variables play a crucial role in shaping attitudes toward AI adoption.
Am. J. Appl. Stat. Econ. 3(1) 99-108, 2024

To determine how well the estimated model fits the data, the Normed Fit Index (NFI) compares it to a null model, with values closer to one indicating a more satisfactory fit. Each of the models has an NFI of approximately 0.72, which is lower than the typically recommended threshold of 0.95, indicating room for improvement in the model (Venkatesh et al., 2016). Furthermore, the significant impact of AI trust on both attitude and behavioral intention emphasizes the importance of credibility and reliability in the adoption process, which is consistent with the model's suggestion that social influence and enabling conditions are critical in technology acceptance. However, the weaker, non-significant connections of effort expectancy and facilitating conditions with attitude and behavioral intention, respectively, call into question the UTAUT model's assertions about student AI adoption. This suggests that other factors, presumably related to AI technology, such as ethical issues or the type of AI applications, may have a greater impact on students' views and intentions. The robust association between attitude toward AI and behavioral intention to use AI further reinforces the main premise of the UTAUT model: positive attitudes toward technology greatly contribute to its acceptance and utilization (Venkatesh & Davis, 2000; Venkatesh et al., 2003). This suggests a direct channel for educators and policymakers to influence AI adoption by instilling a positive attitude toward AI in students.
The significant predictive power of social influence on attitude toward AI emphasizes the model's assertion that social factors are crucial in technology adoption (Venkatesh et al., 2003). This underscores the need for educational institutions to foster a culture that supports and encourages AI learning and exploration. In light of these findings, this study expands upon the UTAUT model by highlighting the subtle impacts of AI awareness, trust, and social influence on university students' views and actions toward AI. It implies that although the basic elements of the UTAUT model are still applicable, the distinct features of AI technology and its specific usage in educational environments require modifications to the model in order to understand the process of AI adoption in educational settings comprehensively.

RECOMMENDATIONS
The findings initiate a discourse regarding the strategic emphasis of educational programs and interventions. Efforts to improve AI acceptance should focus primarily on establishing trust and promoting awareness while also creating an environment that supports positive social influence. Future research should investigate the specific reasons why performance and effort expectancy are not significant and further examine how social influence works in the acceptability of technology in educational contexts. The correlation between AI Awareness and the inclination to use AI indicates a pressing need for educational initiatives focused on enhancing AI literacy among students. This entails teaching not only the technical facets of AI but also its ethical, social, and practical ramifications. Integrating AI subjects into the curriculum, from fundamental principles to sophisticated applications, can cultivate a better-informed and favorable disposition towards AI technologies. The impact of AI Trust on attitude, and more modestly on behavioral intention, suggests that trust plays a crucial role in shaping favorable views but may not be enough on its own to drive actual usage. Hence, establishing confidence should be a comprehensive undertaking, encompassing not just the dependability and transparency of AI but also addressing students' apprehensions and misunderstandings about AI. Given the increasing importance of AI across different industries, educating students about AI equips them with the necessary skills for future employment. Proficiency in AI will be an essential aptitude, and early familiarity can give students a distinct advantage.

LIMITATIONS AND FUTURE RESEARCH DIRECTIONS
Although this study offers valuable information, it has limitations. Further inquiry is needed to explore additional moderating or mediating variables, given the lack of a significant association between several dimensions of the UTAUT model and At-AI and BIU. Subsequent investigations could examine how elements such as AI efficacy, aspects of the Technology Acceptance Model (TAM), AI ethics, or specific AI features influence students' attitudes and behavioral intentions.
The swift advancement of AI technology may outpace the conclusions of this study, requiring ongoing research. Furthermore, longitudinal studies have the potential to offer a more profound comprehension of how students' perspectives and intentions progress as they become more familiar with AI technologies. Subsequent investigations ought to overcome these constraints by broadening the range of participants, consistently incorporating the most recent advancements in artificial intelligence, and potentially integrating supplementary constructs, theories, and methodologies to enhance comprehension of students' attitudes and behaviors toward technology acceptance.
These trust findings lend credence to Obenza et al. (2023a), which indicated that trust in artificial intelligence had a significant impact on attitudes toward AI. The findings are also consistent with the assertions of Choung et al. (2022), who stated that trust acts as a precursor to positive attitudes, which in turn affect usage intentions. The level of trust that an individual has in artificial intelligence (AI) is a significant factor in determining their attitude toward AI technology as well as their willingness to interact with it, as stated by Schepman and Rodway (2023). Furthermore, research by Emon et al. (2023) and Cook (2023) demonstrated that trust in AI plays a significant part in determining whether an individual intends to use it. The path from Attitude toward AI to Behavioral Intention to Use exhibits a robust positive relationship (coefficient = 0.457), implying that a positive attitude toward AI among students significantly influences their intention to use it; this relationship is highly significant (T = 8.597, p = 0.000). These findings are consistent with multiple studies that have demonstrated a substantial correlation between attitude toward AI and the intention to employ it (Hasan Emon, 2023; Saxena, 2023). However, the paths from Effort Expectancy to Attitude toward AI and from Facilitating Conditions to Behavioral Intention to Use demonstrate weaker relationships, as evidenced by their non-significant p-values (p > 0.05).
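The T statistics and p-values reported for these paths come from PLS-SEM bootstrapping: the path coefficient is divided by the standard deviation of its bootstrap resamples. The sketch below illustrates that test with synthetic resamples and an assumed standard error of about 0.053, not the study's actual bootstrap output.

```python
import math
import numpy as np

def bootstrap_t_and_p(original_coef, bootstrap_coefs):
    """T = coefficient / bootstrap SE; two-tailed p via a normal approximation."""
    se = np.std(bootstrap_coefs, ddof=1)   # bootstrap standard error
    t = original_coef / se
    p = math.erfc(abs(t) / math.sqrt(2))   # two-tailed p, normal approximation
    return t, p

# Synthetic bootstrap distribution for the Attitude -> Intention path (0.457):
rng = np.random.default_rng(7)
resamples = rng.normal(loc=0.457, scale=0.053, size=5000)  # assumed SE ~ 0.053
t, p = bootstrap_t_and_p(0.457, resamples)
print(t > 8, p < 0.001)  # a T statistic near the reported 8.597, clearly significant
```

SmartPLS reports these quantities directly; the point of the sketch is only to show why a coefficient of 0.457 with a small bootstrap spread yields a T statistic far above conventional significance cutoffs.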

Figure 1: Partial least squares structural equation modeling (PLS-SEM) results using SmartPLS 4.0

Table 3 presents various statistical measures used to evaluate the fit and predictive power of a structural equation model (SEM) that analyzes university students' attitudes and behavior toward AI using the UTAUT model. The Bayesian Information Criterion (BIC) values of both endogenous variables, At-AI and BIU, are negative, suggesting a good fit; lower and negative BIC values can occur and generally indicate a very strong model according to the likelihood function. The R-squared (R²) and adjusted R² values represent the proportion of variance explained by the model: 61.2% of the variance is explained for At-AI and 71.0% for BIU. The high values of both R² and adjusted R² indicate a strong model. The Predictive Relevance (Q²) value shows the model's predictive relevance; a value larger than zero suggests the model has predictive relevance for the construct, and both endogenous variables show values well above zero, indicating good predictive power. The Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) measure the average error between predicted and observed values; lower values are preferable because they demonstrate that the model's predictions are relatively close to the values that actually occur. Because the model's values are relatively low, it appears that the model makes accurate predictions. The Standardized Root Mean Square Residual (SRMR) is a measure of fit used in structural equation models; values less than 0.08 are generally considered good, and the model shows SRMR values close to this threshold, suggesting an acceptable fit. The Unweighted Least Squares discrepancy (d_ULS) and Geodesic discrepancy (d_G) are discrepancy functions based on unweighted least squares and geodesic distances; the smaller these values, the better the model fit. Based on the fact that the 
values of the saturated model (the most complex possible model) and the estimated model (the proposed model) are relatively close to one another, the proposed model fits almost as well as the most complex model possible. The Chi-square statistic analyzes the disparity between the observed and expected covariance matrices, with a lower value indicating a better fit between the data and the model; however, the chi-square value is sensitive to sample size and should be interpreted in the context of other fit indices. The model has a high chi-square value, which may indicate that it does not fit the data well, but this should be weighed against the sample size and the other fit indices.
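Two of the fit indices discussed here can be sketched directly. Definitions vary slightly across software packages, so the version below follows one common convention (our assumption) rather than SmartPLS's exact implementation, and the NFI inputs are illustrative numbers, not the study's chi-square values.

```python
import numpy as np

def srmr(observed_corr, implied_corr):
    """SRMR: root mean square of the lower-triangular residuals between
    the observed and model-implied correlation matrices."""
    R_o = np.asarray(observed_corr, dtype=float)
    R_i = np.asarray(implied_corr, dtype=float)
    rows, cols = np.tril_indices_from(R_o)      # lower triangle incl. diagonal
    residuals = R_o[rows, cols] - R_i[rows, cols]
    return float(np.sqrt(np.mean(residuals ** 2)))

def nfi(chi2_model, chi2_null):
    """Normed Fit Index: 1 - (model chi-square / null-model chi-square)."""
    return 1.0 - chi2_model / chi2_null

# A model chi-square at 28% of the null model's reproduces an NFI of
# roughly 0.72, the value reported above (illustrative inputs only):
print(round(nfi(28.0, 100.0), 2))  # 0.72
```

This also makes the interpretation in the text concrete: identical observed and implied matrices give an SRMR of exactly zero, and larger residuals push SRMR toward and past the 0.08 cutoff.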

Table 4: Model Fit