Assessing the Teaching Quality of Economics Programme: Instructor Course Evaluations

Introduction. In recent years, measuring the efficiency and effectiveness of higher education has become a major issue. Most developed countries use national surveys to measure teaching and assessment as key determinants of students' approaches to learning, which have a direct effect on the quality of their learning outcomes. In less developed countries, no such national survey exists. This paper proposes an original questionnaire for assessing teaching quality. The specific feature of this questionnaire, termed the Instructor Course Evaluation Survey, is that it addresses three main dimensions: Learning Resources, Teaching Effectiveness, and Student Support. Materials and Methods. The paper opted for an analytic study using 3,776 completed questionnaires. This is a case study of the students enrolled in the economics program of a private university in Albania. The design of the Instructor Course Evaluation Survey was supported by the literature review, which identified the three main dimensions included in the questionnaire. Reliability was tested with Cronbach's alpha and with Confirmatory Factor Analysis; the latter also helps to identify issues of multi-dimensionality in scales. Results. The paper provides empirical insights into the assessment methodology and proposes a new model for it. The findings suggest that Learning Resources, Teaching Effectiveness and Student Support increase the quality of teaching. Because of the chosen research target group, students of an economics program, the results may not be generalizable; researchers are therefore encouraged to test the proposed statements further. Discussion and Conclusion. The paper includes implications for the development of a simple and useful questionnaire assessing the quality of teaching. Although the Instructor Course Evaluation Survey was applied specifically to an economics program, the proposed questionnaire can be broadly applied.

This paper fulfills an identified need for an original and simple questionnaire that different universities and programs can use to measure the quality of teaching.


Introduction
According to the Standards and Guidelines for Quality Assurance in the European Higher Education Area 1 , 'universities have to review their programs on a regular basis ensuring their compliance with international aims meeting learners' and social needs, especially on quality assurance'. Academic knowledge and skills, reinforced by concrete examples directly linked to the real world, remain crucial issues to be absorbed and transmitted to students as learning tools and added value [1].
Stergiou and Airey and Darwin state that 'the systems for the evaluation of teaching and course quality in higher education institutions have long been established both in the United States and Australia and they have also become increasingly common in the United Kingdom' [2; 3]. Other authors, such as Clayson and Haley, Kuzmanovic et al. and Surgenor, state that they have been established in other countries too [4][5][6]. Student evaluations of teaching (SET) are the most commonly used method, as they provide rapid feedback [7] and ratings that are easily compared across units and between instructors [8]. These surveys are used to identify problem areas and to set up action plans to address them. The evaluation of both teachers and teaching is an important part of higher education [9] and can be used to help improve teaching quality [10]. It is often an important part of accreditation processes too. Marsh, Paulsen and Richardson suggest that 'student ratings demonstrate acceptable psychometric properties which can provide important evidence for educational research' [11][12][13].
Law no. 9741, dated 21.05.2007, on higher education introduced new provisions with respect to the administration, organization and financing of Albanian HEIs, aiming to improve their quality in alignment with the European Standards 2 . Even though further amendments were carried out, Law no. 9832, dated 12.11.2007 and Law no. 10 307, dated 22.07.2010, concerns about quality weaknesses in the HEIs remained 3 .
The new Law No. 80, dated 17.09.2015, "On Higher Education and Scientific Research in Higher Education Institutions of the Republic of Albania", enforced the establishment of internal and external mechanisms of quality control in each institution 4 . Article 103/3 of this law states that each institution must distribute and collect questionnaires before the final exams of each semester in order to track data on the quality of teaching within its programs.
In 2014, the Ministry of Education and Sport of Albania and the Quality Assurance Agency for Higher Education (QAA) in the UK signed a Memorandum of Understanding, and during 2016-2017 all 35 HEIs in Albania, both public and private, entered the process of institutional accreditation 5 . One of the standards that an HEI has to fulfill is that "study programmes are subject to continuous improvement to increase quality", and the concrete examples of this standard are as follows:
1. Lecturers are regularly assessed by institution structures that pursue qualitative implementation of study programmes.
2. Students are involved in the evaluation of lecturers and study programme implementation.
3. Outcomes of examinations and competitions are published.
4. Study programmes are improved by taking into account the outcomes of their evaluation by academic staff and students.
5. Study programme quality is also evaluated by statistics on the employment of graduates in the relevant study programme 6 .
Even though the evaluation of study programs is a requirement, a systematic data collection and evaluation process is not well established in most Albanian universities. Hoxhaj and Hysa stated in 2015 that the main and most difficult challenge for the HEIs in Albania is improving the control, monitoring and review of quality assurance in universities. Many public and private universities do not meet the standards of existence and are still operating in the education market [14].

MODERNIZATION OF EDUCATION
Motivated by the requirements to measure teaching and course quality and by the lack of instructor evaluation survey analysis in the Albanian higher education system, this study provides a useful starting point for the present investigation. It is the first study of this kind conducted for any Albanian university. Epoka University (EU) is one of the leading universities in Albania and is often included in the list of this country's top three universities 7 . This is the main reason for selecting EU as a case study. Secondly, this study can serve as good practice, and the survey can be proposed as a quality measurement tool to other higher education institutions of the region.
More specifically, this study is conducted for the economics program of the first cycle of study. The research begins with a general literature review on the usage of different surveys and the variety of dimensions they include; a special part of the literature covers previous studies that have used similar methods of analyzing students' surveys. The second part is devoted to the methodology and data collection for our survey. The next section includes the descriptive statistics, the measurement of reliability and internal consistency using Cronbach's alpha, and the Confirmatory Factor Analysis (CFA) used to investigate the correlation between the dimensions of the survey. Finally, the conclusions, discussion and study limitations are presented.

Literature Review
Bassi et al. state that one of the aspects of students' surveys is the measurement of the quality of teaching [15]. However, it is arduous to define the quality of something, since it depends on many various elements: 'different interest groups, or stakeholders, have different priorities' [16].
Spooren et al. state that different surveys have used a great number of the instruments available to students for assessing teaching [17]. Some examples are given in Table 1 below.
Although some level of consensus regarding the characteristics of effective or good teaching has been reached [17], existing SET instruments vary widely in the dimensions they try to capture [15]. The authors of [20] employed questionnaires including a total of nine dimensions, three of which are similar to ours. Most of these works have used the Cronbach's alpha reliability test and confirmatory factor analysis.
Kember and Leung used the case of 'designing a new course questionnaire to discuss the issues of validity, reliability and diagnostic power in good questionnaire design' [21]. The authors interviewed award-winning teachers about their principles and practices, resulting in nine dimensions of good teaching, which were developed into nine questionnaire scales. Along with testing reliability with Cronbach's alpha and with confirmatory factor analysis, the authors introduced 'the concept of diagnostic power as the ability of an instrument to distinguish between related constructs'.
Barth examined the student evaluation of teaching instrument used in the College of Business Administration at Georgia Southern University, which measured five dimensions: quality of instruction, course rigor, level of interest, grades and instructor helpfulness [22]. Apart from level of interest and grades, the other three dimensions match those of our survey. The author found that 'the overall instructor rating is primarily driven by the quality of instruction'.
Ginns et al. used the Course Experience Questionnaire to receive the students' perceptions on a number of dimensions, including 'Good Teaching, Clear Goals and Standards, Appropriate Assessment, Appropriate Workload, and Generic Skills development' [23]. 'Confirmatory factor analyses supported the hypothesised factor structure and estimates of inter-rater agreement on SCEQ scales indicated student ratings of degrees can be meaningfully aggregated up to the faculty level' [23].
Entwistle et al. define the teaching and learning environment as the aggregate of four elements: 'course contexts, teaching and assessment of contents, relationship between students and staff, and students and their cultures' [24]. This definition is close to our survey. The course context corresponds to the learning resources scale: 'course contexts include, among others, aims and intended learning outcomes for a specific course' [24]. 'Teaching and assessment of contents refer to pedagogical practices that support students' understanding of discipline-specific ways of thinking and reasoning' [25], which is consistent with the teaching effectiveness scale in our survey. 'Relationship between students and staff describes the affective quality of the relationships between students and teachers, such as the provision of flexible instructional support for both cognitively and affectively diverse learners' [26; 27]; this element corresponds to the last scale of our survey, the student support scale. The fourth element, students and their cultures, is not considered in the ICES.
Usage of Reliability, Validity and Confirmatory Factor Analysis in Literature Review. Though the validity and reliability of the instrument are important, often they are not given sufficient attention [20]. Generally, the reported ratings from SET are assumed to be valid indicators of teaching performance [27]. 'There is only limited evidence-based research on the validity and the reliability of SET instruments in the literature' [8]. 'SETs typically contain groupings of items reflecting different dimensions of the student experience of a particular course, referred to as scales' [2].
Both reliability and validity are considered important psychometric properties of surveys. Although reliability may be measured in a number of ways, the most commonly accepted measure is internal consistency reliability using the alpha coefficient. In their studies, Nunnally 8 and Hinkin [28] define 'reliability as being concerned with the accuracy of the actual measuring instrument, and validity referring to the instrument's success at measuring what it purports to measure'.
Traditionally, the internal structure of a questionnaire is evaluated via Confirmatory Factor Analysis [2; 21; 29-31], 'which tests the theoretically justified measurement model against the data collected with the questionnaire' [32].

Methodology and Data Collection
The main objective of this study is to validate the scales of the student evaluation of teaching used in the bachelor program of economics at Epoka University in Albania. The population for the study consisted of the students of the above-mentioned program for the academic year 2017-2018. EU has been using its own survey, named the "Instructor Course Evaluation Survey", which is completed electronically; the participants were assured that their responses would be kept confidential and anonymous. The students had to complete the form before the final exam period of the fall and spring semesters.
Two categories of students fill in the survey: those enrolled in the economics department and those taking the courses of this department as electives. These students are in the first, second and third year of their studies. Survey results are displayed electronically in the university's interactive system. Individual results are reported to each faculty member accordingly, and the list of the courses offered per program under the department is available in each department head's account.
The Instructor Course Evaluation Survey was completed for 41 courses in the fall semester and 43 courses in the spring semester, a total of 84 courses for the academic year 2017-2018. These 84 courses represent the collective evaluations of 32 different instructors, based on the surveys of 3,776 students. The response rate was very high; the lowest response rate per course was 90.00% and the highest 100%.
The ICES used is a 14-item instrument grouped into three scales reflecting different dimensions of teaching: the Learning Resources Scale, the Teaching Effectiveness Scale, and the Student Support Scale. Students are required to evaluate the teaching of each course by responding to the questions on a 5-point Likert scale, from 0 for 'definitely disagree' to 4 for 'definitely agree'.
The ICES also included a section in which students could write additional comments. Even though reading all the comments and including this information in the analysis is demanding, these comments are sometimes rich and much more informative, and they can often serve to substantiate the students' ratings.
The 14 items are categorized under 3 dimensions (see Table 2), which can be summarized as follows:
Learning Resources Scale (LRS) - mostly related to the course type, structure and organization.
Teaching Effectiveness Scale (TES) - covering the teaching methodology, effectiveness and assessment.
Student Support Scale (SSS) - comprising the lecturers' readiness to support students and their punctuality.
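As a minimal sketch (not the authors' code), the aggregation of the 0-4 Likert responses into the three scale scores can be expressed as follows. The item codes and the split of the non-LRS items (6 TES and 5 SSS items) are assumptions for illustration, since Table 2 is not reproduced here.

```python
# Illustrative sketch: aggregating 0-4 Likert responses into the three
# ICES scale scores. The TES/SSS item counts are assumed, not taken
# from Table 2.
import statistics

# Hypothetical responses of one student to the 14 ICES items,
# coded 0 ('definitely disagree') to 4 ('definitely agree').
responses = {
    "LRS_1": 3, "LRS_2": 4, "LRS_3": 3,             # Learning Resources
    "TES_1": 4, "TES_2": 3, "TES_3": 4, "TES_4": 3,
    "TES_5": 4, "TES_6": 3,                          # Teaching Effectiveness
    "SSS_1": 4, "SSS_2": 4, "SSS_3": 3, "SSS_4": 4,
    "SSS_5": 4,                                      # Student Support
}

def scale_mean(prefix: str) -> float:
    """Mean of all items whose code starts with the given scale prefix."""
    items = [v for k, v in responses.items() if k.startswith(prefix)]
    return statistics.mean(items)

# Scale scores for this hypothetical student.
scores = {s: scale_mean(s) for s in ("LRS", "TES", "SSS")}
print(scores)
```

Averaging within each scale (rather than summing) keeps the scores on the original 0-4 metric, so a scale mean above 3 reads directly as agreement.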

Results and Discussion
Descriptive statistics. Table 2 shows the descriptive and reliability statistics of the Instructor Course Evaluation Survey (ICES) for the learning resources scale (LRS), teaching effectiveness scale (TES) and student support scale (SSS). These three scales measure the efficiency and effectiveness of teaching. The first column reports the ICES information obtained from the students of economics; the second column shows the code of each statement (see Table 2), coded as LRS_1, LRS_2, LRS_3, TES_1 and so forth. Mean values and standard deviations are reported in the third and fourth columns. Overall, the mean values are greater than 3, which indicates that most of the students ranked their teachers' performance as satisfactory. Regarding the reliability and internal consistency between the statements, the Cronbach's alpha values indicate that the statements related to each of the three scales are highly correlated, and all three scales have excellent (alpha > 0.90) reliability scores.
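The internal-consistency check above can be reproduced with a few lines of code. The sketch below (with synthetic data, not the survey responses analysed in the paper) implements the standard Cronbach's alpha formula: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).

```python
# Sketch of the Cronbach's alpha computation used to assess internal
# consistency; the data are synthetic, not the paper's survey data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores (0-4)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the row sums
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic example: 6 respondents answering a 3-item scale (e.g. LRS).
# Each item is the respondent's shared "true" attitude plus small noise,
# so the items should be highly correlated and alpha should be high.
rng = np.random.default_rng(0)
base = rng.integers(1, 4, size=6)
data = np.column_stack([
    np.clip(base + rng.integers(-1, 2, size=6), 0, 4) for _ in range(3)
])
print(round(cronbach_alpha(data), 3))
```

By construction alpha equals 1 when all items are identical, and alpha > 0.90 (the threshold cited above) signals excellent internal consistency.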

Confirmatory factor analysis of Instructor Course Evaluation Survey (ICES).
To investigate the correlation between LRS, TES and SSS, we have used path covariance analysis, also known as confirmatory factor analysis (see Figure). The Figure reports the latent variables in three circles labeled 'LRS', 'TES' and 'SSS'. Each latent (unobserved) variable is linked with its proxies (observed variables). For example, the learning resources scale (LRS) is associated with LRS_1, LRS_2 and LRS_3, each with an error term (in small circles). Similarly, the teaching effectiveness scale (TES) is a latent variable related to TES_1, TES_2, TES_3 and so forth. A one-sided arrow shows the linear (regression) relationship between a latent variable and its proxies; the factor loading reported along each arrow shows the strength of the relationship between the latent and the observed variable. A two-sided arrow presents the correlation (covariance) between latent variables; a higher value means that the two variables are more strongly correlated. To check the fitness of the factor model, we observe that the comparative fit index (CFI) value is 0.90, which suggests that the model is a good fit (see Table 3). Similarly, other model fit statistics, such as the root mean square error of approximation (RMSEA) with a value of 0.05 and the standardized root mean square residual (SRMR) with a value of 0.019, show that our confirmatory factor analysis is appropriate.
The standardized factor loadings are reported in Table 4. The learning resources scale is measured with 3 items. The standardized coefficients (or factor loadings) are high and show a significant association (β > 0.90; p-value = 0.000) with the learning resources scale (LRS). This outcome indicates that clarity regarding the syllabus, textbook and reading materials positively enhances students' learning skills. Regarding teaching effectiveness (TES), the teaching methodology, the instructor's use of course-related knowledge, effective communication and the assessment of students' grades are positively correlated (β = 0.9; p-value = 0.000) with the teaching effectiveness scale.
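For readers wishing to replicate the analysis, the measurement model in the Figure can be written in the lavaan-style model syntax accepted by common CFA/SEM packages (e.g. lavaan in R or semopy in Python). This is a sketch under stated assumptions: LRS has the 3 items named above, while the 6/5 split of the remaining TES and SSS items is assumed, since Table 2 is not reproduced here.

```
# Three-factor measurement model of the ICES (lavaan-style syntax).
# '=~' reads "is measured by"; each latent variable loads on its items.
LRS =~ LRS_1 + LRS_2 + LRS_3
TES =~ TES_1 + TES_2 + TES_3 + TES_4 + TES_5 + TES_6
SSS =~ SSS_1 + SSS_2 + SSS_3 + SSS_4 + SSS_5
```

Covariances between the three latent factors (the two-sided arrows in the Figure) are typically estimated by default in such packages; the fit is then judged with the same indices reported in Table 3 (CFI, RMSEA, SRMR).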
The student support scale (SSS) also showed strong statistical evidence (β > 0.90; p-value = 0.000), rejecting the null hypothesis. This outcome indicates that the instructor's preparation for lectures, punctuality and interaction with students support their learning abilities (see Table 4). Lastly, the three latent (unobserved) variables (see Table 4 or Figure) show a strong positive correlation with each other. This finding suggests that using the learning resources scale (LRS), teaching effectiveness scale (TES) and student support scale (SSS) will increase the quality of teaching in degree programs. Thus, our results confirmed the validity of these three ICES scales, and they are useful to implement in Albanian higher education institutions.
Table 4. Confirmatory factor analysis with standardized factor loadings of ICES

Conclusion
Researchers widely use student ratings of instruction as a metric of instructor performance [22]. 'From the university pedagogics perspective, in order to support students' learning and thinking, it is important to know how students perceive their teaching-learning environments' [25].
This study aimed to validate the scales of the students' evaluation of teaching used by Epoka University in Albania, with particular regard to the economics program and the indicators assessing the teaching carried out by the instructors of this university. The satisfying results concerning the statistical validity and reliability of the questionnaire lay the foundation for improving the quality of teaching and learning processes. The three scales/dimensions used in the ICES, related to learning resources, teaching effectiveness and student support, are found to be highly correlated, and all these variables are reliable and internally consistent.
Both the comparative fit index and the root mean square error of approximation show that the model is a good fit and that the confirmatory factor analysis used is appropriate. The three scales are correlated with each other, underlining the fact that together they significantly and positively contribute to the quality of teaching in this program.
Although the results reported here are specific to Epoka University, its economics program and the ICES, researchers can use the same survey to measure teaching performance and to find out whether the three dimensions of the ICES are reliable and valid for their institution. The usage of such surveys and the examination of their dimensions enable a better understanding of teaching quality and of the factors affecting it.