
Numerical Rating Scales and Class Participation:

A Pilot Study

Harry W. Harris, Jr.


(Faculty of Education, Hakuoh University)

This pilot study explores the development and trial use of a numerical rating scale distributed to Japanese university students to explain participation requirements, and reports survey results as to its effectiveness in encouraging this performance factor. The results do not indicate that the rating scale enhanced student participation more than did a participation system transmitted orally to a control group. Future research should target systematic participation guideline methods.

Key Words: class participation, pilot study, rating scale, rubric


Introduction

  I had doubts. Most of my Japan-based English teaching experience has been gained in a senmon gakko (vocational school) with highly motivated students who usually came to class sufficiently prepared, making participation assessment a curricular non-issue. Now, in my newer university context, I had inherited publicly available, department-wide evaluative measurements which required that participation comprise 30%-50% (course dependent) of final grades in large classes of students who might attend class without textbooks, paper, or even writing utensils. With no further guidance, I found myself wondering how I could justify this. For me, attendance, 30% of final grades, was not an issue because I kept careful records and students were, for encouragement, frequently reminded of the attendance and punctuality policy. As well, tests, 20%-40% of grades, allowed me to gauge student achievement, at least quantifiably. However, although it was clear to me that participation is an important performance criterion, allowing me to consider student efforts over a period of time, the concept seemed just too vague and subjective for meaningful student guidance. I needed a resolution to this issue. Would a carefully explained rating scale help?

  This paper reports the results of a pilot study focused on the effects of teacher-generated participatory performance criteria arranged in a numerical rating scale, which was used in several EFL classrooms at two Japanese universities. Through hands-on engagement, pilot studies help provide researchers with experiential knowledge, offering a “practical sense of the domain within which the phenomenon is situated” (Kezar, 2000, p. 385), and allow the researcher to modify or discard the research instrument afterwards and learn more about the research process (Van Teijlingen & Hundley, 2001). That this study was a pilot was thus important because, in determining an instrument for its purposes, options ranged from a yes-no checklist, to Likert rating scales, with anchor numbers incrementally approaching the optimum behavior presented in the criteria, to a rubric, with different descriptions of different levels of the performance elicited (see, e.g., Airasian, 1997, for further discussion). Because it was ultimately decided that a checklist would not provide enough detail whereas a rubric might provide too much, a numerical rating scale was developed; successful similar use of such scales has been reported elsewhere (Dancer & Kamvounias, 2005). This paper examines the development and trial use of this instrument in Japanese university classrooms and examines student response as to its effectiveness in providing guidelines that encourage student participation.

Definitions

  This paper defines assessment as “the collection, synthesis, interpretation, and use of information to aid teacher decision making” (McMillan & Workman, 1998, p. 10), for summative (grading) and formative (teaching and learning adjustment) purposes (Garrison & Ehringhaus, n.d.), with a “focus on academic achievement and social behavior” (Airasian, 1984: cited in McMillan & Workman, 1998, p. 11), in that the classroom is a social context (Getzels & Thelen, 1960). For our purposes, evaluation will be the performance-quality judgments which inform the decision-making process (McMillan & Workman, 1998). Participation assessment, then, concerns teacher-solicited and self-initiated student behavior (Day, 1984) that the teacher has noted and can use for summative and formative purposes.


Literature Review

  Little is known about EFL/ESL assessment and evaluation at universities in particular (Cheng, Rogers, & Hu, 2004), and a review of the literature reveals few studies of rating scale use for participation standards in ESL/EFL contexts. In Japan-based studies, however, Luc & Muta (2007) report successful use of a rubric with space added for teacher comments. The final form allowed multiple ratings of each student by up to six teachers in an intensive program, meeting the overall goal of providing students with meaningful feedback and summative grades based on the criteria, and gaining positive student feedback. In another Japanese academic context, Gage (2004) reports the introduction of a rubric which resulted in previously challenged (and challenging) students attending class better prepared and eager to participate. In a third context, White (2009) reports that a pilot study in which students self-assessed resulted in enhanced class participation, though he questions the reliability of student scoring.

  As for whether participation should play a role in student evaluation for grade purposes, it is important to report here that researchers disagree. Jacobs & Chase (1992) explain that students usually do not receive instruction for improving class participation, that interpretation of student behavior is subjective, that shy students are at a disadvantage in classes that require oral student response, and that record-keeping is problematic (pp. 195-196). As well, other researchers suggest that class participation is used as a “fudge” factor in computing final grades (Bean & Peterson, n.d.), purportedly allowing teachers to make adjustments in student grade assignment (Jacobs & Chase, 1992, pp. 196-197).

  Other researchers, however, support alternative forms of assessment in a move away from dependence on testing. (See Lashway, 2001, for a presumably exhaustive delineation of arguments against testing.) Surveys of U.S. teacher grade-determination practices report that teachers routinely use ability, attitude, effort, participation, and other criteria in addition to achievement (Friedman & Manley, 1991: as cited in McMillan & Workman, 1998), that 31% consider laudatory and disruptive behavior (Frary, Cross, & Weber, 1993: as cited in McMillan & Workman, 1998), that 39% consider conduct and attitude important (Cross & Frary, 1996), and that 32% factored in student behavior whereas only 9% factored in ability (Truog & Friedman, 1996: as cited in McMillan & Workman, 1998).

  In response to the above-cited criticisms that participatory evaluation has drawn, it is the thesis of this paper that proactive explanation of participatory standards in itself provides the instruction that can encourage students to improve class participation and can make its evaluation more objective. As with other grade criteria, to be fair and meaningful, the grading system must be explicit (Anderson, 2003). In this researcher’s opinion, to maintain that a criterion should not be considered for grade purposes because it has not been explained to students is issue avoidance at best. Organized educators should determine the validity, within their academic framework, of all potential parts of their grading system and inform students when they have opted for their use. As well, suggestions have been made to help encourage introverted students to participate by, for example, giving them more time to prepare for speaking activities (e.g., Bean & Peterson, n.d.) and to motivate all students by providing a wide variety of novel tasks that actively engage them (e.g., Ames, 1992: as cited in Alkharusi, 2009). Finally, though there is indeed a need to keep records, these need not be extensive (as we shall see later in this paper), and the “fudge” factor, the decision process of raising (or lowering) borderline (or perhaps non-borderline) grades, becomes less possible, or at least less opaque, with focused, explicit guidelines, the need for which has been pointed out above.

  Before looking at our methodology, it should be pointed out that there are theoretical frameworks with which participation assessment and test de-emphasis coalesce. For one, there is a constructivist body of research that maintains that knowledge and skills derive from social and environmental interaction (e.g., Dewey, 1997; Piaget, 1990; Vygotsky, 1986). Schindler (2003, p. 21), in particular, crystallizes the importance of this idea in his observation that participation assessment can help students internalize a concept of quality behavior, promoting healthy group behavior and ultimately growth. With this constructivist theoretical framework in mind, we must remember that learners pass through developmental stages at different times, making knowledge acquisition variable; that alternative forms of assessment, including that of participation, are important because we have different human capacities (Gardner, 1993); and that in the real world we often have multiple opportunities to show that we can complete a task (Hancock, 1994), unlike with a test.

  Another theoretical basis is that of self-efficacy, the belief that one’s performance ability (which, we will remember, includes that for participation) can influence events in one’s life (Bandura, 1994). Students with a strong sense of efficacy are more deeply interested in engaging in learning activities because they see them as challenges rather than obstacles, unlike those with a weaker sense of efficacy, who focus on their own deficiencies, the problematic nature of tasks, and the possibility of failed outcomes (Bandura, 1994). As McMillan & Workman (1998) have pointed out, student knowledge of grading criteria enhances self-efficacy because students can anticipate steps they should take to satisfy teacher expectations and are, therefore, more likely to see tasks as within their ability.

Methods

Setting and Context

  The pilot was conducted at two private suburban universities in eastern Japan, outside of Tokyo. Students in all departments at both schools must complete two years of communicative English classes, with the broad goals of improving basic speaking fluency and listening skills. English-language classes involved in this pilot were for the most part conducted in English, as per (Japanese) government guidelines (MEXT, 2011, p. 17), and included pair, group, and whole-class activities with listening exercises. The first university, hereafter referred to as home, has English-language course objectives and requirements across three faculties that encourage the use of the same textbooks and similar consideration of attendance, participation, and assessment for final-grade purposes, though individual teachers vary in their interpretation of these factors and, of course, in their teaching practices. The second university, with two faculties, was included in the pilot to augment the study’s scope, as scheduling issues made extensive cooperation difficult at the home university. The faculty at the second university whose students formed part of this study has a published syllabus with a suggested textbook, a list of chapters that can be covered, and grading criteria based on attendance, participation, quizzes, and final exams. To this researcher’s knowledge, textbook consideration and selection there is teacher-initiated.


Subjects

  The subjects in the pilot study were 212 male and female first-year Japanese university students, majoring in Sports (n = 26), Law (n = 72), and Business (n = 83) at the home university and Economics (n = 20) and Business (n = 11) at the second university. Each class met once a week for 90 minutes in the first semester (April-July) of two-semester, 30-week courses. TOEFL scores are unavailable.

Instructor Participants

  The three participant instructors were all professionals, each with more than thirty years of English-language education experience in Japan, at universities and community colleges and in other professional contexts. Instructor A, male, holds a part-time position at the home university and at the second university. Instructors B, male, and C, female, are full-time contractual instructors at the home university. All three instructors have degrees from North American universities, Instructors A and B with graduate degrees in TEFL and linguistics, respectively, and Instructor C with a B.A. in Asian Studies.

Instrument and Procedure

  During the planning stage, different instructors at the home university were approached and asked to participate in this first-semester pilot study. Ultimately, two agreed to collaborate with this researcher, though, as we shall see below, there was feedback from other colleagues. After some discussion, it was decided that Instructor A, with two classes totaling 31 students (n = 20, n = 11) at the second university, and this researcher (Instructor B), with one class of 26 students at the home university, would use a numerical rating scale. Instructor C, with a control group of four classes totaling 155 home-university students (n = 44, n = 43, n = 40, n = 28), would not use a rating scale but would use her own student-participation system, explained below.

  Before the semester began, Instructors A and B and a fourth, otherwise non-participating instructor devised a rough draft of the rating scale, which we will call the Class Participation Assessment Score Sheet (CPASS). Several drafts were exchanged by e-mail and made available to other colleagues for their input. The final criteria are based on behavior that most colleagues who provided feedback agreed needed to be addressed. (One colleague felt uncomfortable with the 2nd criterion, but the consensus was that frequent student requests for bathroom visits could be disruptive in pair-work and group-work structured language classes and thus merited greater teacher guidance.) Instructors A and B would distribute copies of the final CPASS version to their participating classes to explain the participation criteria. The final draft included a native-generated in-text Japanese translation (see Appendix A; part of the Japanese translation has been omitted due to space concerns).

  Instructors A and B handed out the CPASS to students during the 2nd or 3rd class session and again during the 10th, 11th, or 12th session, explaining that the participation portion of student grades would derive from those criteria. Students completed a self-assessment with this form (even in the first instance, as practice) to familiarize themselves with participation expectations. Though student self-assessment has received support (Brown & Hudson, 1998), as has student peer assessment (Okuda & Otsu, 2010), because student self-assessment accuracy has been questioned (Blanche, 1988; Yamashita, 1996: as cited in Brown & Hudson, 1998; Burke, 1969: as cited in Jacobs & Chase, 1992), student self-assessment scores were not used for grade purposes in this study.

  Rather than use the CPASS or provide students with an alternative explanatory handout, Instructor C explained her own criteria to students in the first class session. Used only with large classes, her system includes bonus points for active listening participation, asking and answering questions, using English for task completion, doing good pair work, carefully completing class work and homework, and generally showing positive attitude and effort. It also includes penalties for attending class without the textbook or homework, sleeping, using Japanese (especially during pair work), cell phone texting, being tardy, and not making an effort. The teacher begins each class with a fresh seating chart, which she places on a podium at the front of the class. As she circulates during the class, she visibly annotates this chart whenever she revisits the podium and, after class, transfers this information as points onto a class roster, on which she color-codes the bonus and penalty points. Students are reminded of grade consequences when the teacher feels there is a need.
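  Though the paper gives no code, Instructor C's roster-and-points record-keeping can be sketched as a simple running ledger. The sketch below is a hypothetical illustration only: the event names and the one-point bonus/penalty values are assumptions for illustration, not Instructor C's actual categories or weights.

from collections import defaultdict

# Hypothetical event categories drawn from the description above;
# the one-point weights are illustrative assumptions, not actual values.
BONUS = {"active_listening", "asks_question", "uses_english",
         "good_pair_work", "completed_homework"}
PENALTY = {"no_textbook", "sleeping", "uses_japanese", "texting", "tardy"}

class ParticipationLedger:
    """Running bonus/penalty points per student, transferred from a seating chart."""

    def __init__(self):
        self.points = defaultdict(int)

    def record(self, student: str, event: str) -> None:
        if event in BONUS:
            self.points[student] += 1
        elif event in PENALTY:
            self.points[student] -= 1
        else:
            raise ValueError(f"unknown event: {event}")

ledger = ParticipationLedger()
ledger.record("S001", "asks_question")  # +1 bonus
ledger.record("S001", "uses_japanese")  # -1 penalty
print(ledger.points["S001"])            # 0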

  During the 13th week, all student subjects completed a five-question survey (see Table for questions), in English with a Japanese translation. The survey asked subjects to rate five class-participation-related items on a 6-point agree-disagree Likert scale, with lower scores indicating stronger agreement.

Table. Survey questions

⑴ The teacher has helped me understand the importance of participation in this class. このクラスに参加する事の重要性を理解するのを先生は助けてくれる。

⑵ I understand the teacher’s criteria for grading my participation in this class. 先生のこのクラスで私への参加評価基準を私は理解する。

⑶ The teacher’s criteria for grading my participation in this class are fair. このクラスで私の参加評価の先生の基準は公平である。

⑷ The teacher’s criteria for grading my participation in this class have encouraged me to participate more. このクラスで私の参加評価の先生の基準は、より参加するよう私を励ましてくれる。

⑸ The teacher’s criteria for grading participation have helped make a better learning environment in this class. 先生の参加評価基準は、このクラスでより良い学習環境をつくるのに役立っている。

  Following the survey, the raw counts on the 6-point (agree-disagree) Likert scale were entered into an Excel spreadsheet and the means calculated. The class mean scores for each separate item, the class mean averages of those scores, the total item mean averages, the total class mean averages, and the standard deviations (SD) for the class mean scores of the separate items were determined and recorded for all CPASS and non-CPASS pilot responses (see Appendix B). Furthermore, t-test figures were determined with SPSS software to explore the differing effects of rating scale use and non-use and of home and non-home institutions (see below).
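  The paper reports only the SPSS output; as a minimal re-creation of the analysis in Python (assuming, since the reported t(210) implies df = 212 − 2, an independent-samples, equal-variance t-test on per-student scores; the data below are synthetic placeholders, not the study's responses):

import numpy as np
from scipy import stats

# Synthetic placeholder scores (1 = strong agreement ... 6 = strong disagreement);
# the real groups were n = 57 (CPASS) and n = 155 (non-CPASS).
rng = np.random.default_rng(42)
cpass = rng.normal(loc=2.03, scale=0.9, size=57)       # Instructors A and B
non_cpass = rng.normal(loc=1.88, scale=0.9, size=155)  # Instructor C

# An equal-variance (pooled) t-test gives df = 57 + 155 - 2 = 210,
# the form of the paper's reported t(210) = 1.51, p = .133.
t_stat, p_val = stats.ttest_ind(cpass, non_cpass, equal_var=True)
print(f"t(210) = {t_stat:.2f}, p = {p_val:.3f}")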

Results and Discussion

  The results shown in Appendix B indicate that it cannot be concluded that the CPASS used in this pilot study encourages greater student participation than an alternative system with similar objectives. All item mean averages for Instructors A and B reflect less agreement with the survey items than those for Instructor C, and t-test results reveal that the difference between rating scale use and non-use was non-significant, t(210) = 1.51, p = .133. Also, the total class mean average for Instructors A and B was 2.03, and that for Instructor C, 1.88, a difference from which no conclusions can be drawn. Though these scores reflect high reported satisfaction with the CPASS, the averages are in fact more favorable for Instructor C, who distributed no paper guidelines.

  It should be pointed out that subjects expressed particularly strong agreement with Item 3, which inquired about fairness, with total item mean averages of 1.87 for Instructors A and B and 1.77 for Instructor C. Though this information is difficult to interpret, it may reflect student acknowledgement of teacher guidance efforts and hesitancy to pass negative judgment on teacher fairness. It should also be pointed out that the class mean average for Instructor B (this researcher) indicated notably less student agreement than in all other classes. Though there are too many variables to allow conclusions to be drawn from this (e.g., teacher gender and personality, class composition, and simple anomaly), this issue will be explored in a later context.

  The above said, it would be counterintuitive to maintain that these results indicate that rating scales are inferior substitutes for other well-planned and well-executed guidance methods such as that of Instructor C. As the high-agreement ratings reported for the CPASS participation criteria show, subjects evidenced no major confusion with, or lack of enthusiasm for, this instrument or its implementation. The results therefore return our attention to, and confirm, the recognition that, at least in the context of this pilot study, systematic guidance and reminders of purposes and consequences are vital. Because it is this researcher’s opinion that the CPASS has provided these elements, this instrument will not be discarded; rather, the possibility of working it and elements of Instructor C’s methods into a future participation assessment context will be explored.


Conclusion

  Before closing, we should mention that the voluntary nature of this pilot study was one of its limitations. Instructor participants volunteered to collaborate and then engaged their classes in this study, the purpose of which was to explore the effectiveness of a rating scale (CPASS) in providing guidelines that encourage student participation. The nature of this study meant that potential instructor participants had to be informed of this purpose, and ethical issues precluded a search for an instructor willing not to transmit this kind of information. As has been indicated above, Instructor C, with many years of English-language education experience, had a fully developed participation system in place, though she did not distribute handouts. She informed students of the details and consequences of this system, with later reminders. It was thus not unexpected that her students would report satisfaction with her guidance.

  Another possible limitation of the study was the involvement of classes from another university. As Sporn (1996) points out, universities, like other institutions, can have distinctive organizational cultures. Though this issue cannot be examined here in detail, organizational culture differences may mean variation in student expectations and response to teacher expectations, as well as in subject response to inquiries of an empirical nature. However, t-test results indicate that the difference between the home and non-home university samples was non-significant, t(210) = -0.71, p = .479, possibly reflecting the similarities of the study groups at the two universities in terms of their demographics and English-language education contexts. We will also recall that this is a pilot study, one purpose of which was to provide experiential knowledge to help prepare for future research.

  This pilot study examined the relationship between rating scale standards and participation in an effort to investigate whether a rating scale instrument developed for the study would provide meaningful guidelines that encouraged students to participate more fully. It was found that an alternative oral guidance system without student handouts produced somewhat more favorable survey results. However, none of the results should be taken as conclusive, given the exploratory nature of the study. Future research must take into account the study’s limitations, and further exploratory consideration must be given to systematic participation guideline methods.

Acknowledgements: The author would like to thank Instructors A and C and other colleagues for their cooperation on this project as well as for their draft comments. Special thanks also to Professor O’ki for his very professional knowledge and assistance, without which the statistical analysis used in this paper would not have been possible.

References

Airasian, P. W. (1997). Classroom assessment (3rd ed.). New York: McGraw-Hill.

Alkharusi, H. (2009). Classroom assessment environment, self-efficacy, and mastery goal orientation: A causal model. Paper presented at the Second International Conference of Teaching and Language Learning, INTI University College, Malaysia. Retrieved from http://ictl.intimal.edu.my/ictl2009/proceedings/index.html


Anderson, L. W. (2003). Classroom assessment: Enhancing the quality of teacher decision making. Mahwah, N.J.: Lawrence Erlbaum Associates, Publishers.

Bandura, A. (1994). Self-efficacy. In V. S. Ramachaudran (Ed.), Encyclopedia of human behavior (Vol. 4, pp. 71-81). New York: Academic Press.

Bean, J. C. & Peterson, D. (n.d.). Grading classroom participation. Retrieved from http://www.csufresno.edu/academics/documents/participation/grading_class_participation.pdf

Brown, J. D. & Hudson, T. (1998). The alternatives in language assessment. TESOL Quarterly, 32(4), 653-675.

Cheng, L., Rogers, T., & Hu, H. (2004). ESL/EFL instructors’ classroom assessment practices: Purposes, methods, and procedures. Language Testing, 21(3), 360-389.

Cross, L. H. & Frary, R. B. (1996). Hodgepodge grading: Endorsed by students and teachers alike. Paper presented at the annual meeting of the National Council on Measurement in Education, New York.

Dancer, D. & Kamvounias, P. (2005). Student involvement in assessment: A project designed to assess class participation fairly and reliably. Assessment & Evaluation in Higher Education, 30(4), 445-454.

Day, R. R. (1984). Student participation in the ESL classroom or some imperfections in praxis. Language Learning, 34(3), 69-98.


Dewey, J. (1997). Experience and education. New York: MacMillan Publishing Co.

Gage, P. (2004). A participation rubric for university classes. JALT 2004 at Nara Conference Proceedings, 1203-1209.

Gardner, H. (1993). Multiple intelligences: The theory in practice. New York: Basic Books.

Garrison, C. & Ehringhaus, M. (n.d.). Formative and summative assessments in the classroom. Retrieved from http://www.nmsa.org/portals/0/pdf/publications/Web_Exclusive/Formative_Summative_Assessment.pdf

Getzels, J. W., & Thelen, H. A. (1960). The classroom as a unique social system. In N. B. Henry (Ed.), The dynamics of instructional groups: Sociopsychological aspects of teaching and learning (Fifty-ninth yearbook of the National Society for the Study of Education, Part 2). Chicago: University of Chicago Press.

Hancock, C. R. (1994). Alternative assessment and second language study: What and why? (ERIC 376695)

Jacobs, L. C. & Chase, C. I. (1992). Developing and using tests effectively: A guide for faculty. San Francisco: Jossey-Bass.

Kezar, A. (2000). The importance of pilot studies: Beginning the hermeneutic circle. Research in Higher Education, 41(3), 385-400.


Lashway, L. (2001). The new standards and accountability: Will rewards and sanctions motivate America’s schools to peak performance? Eugene, OR: ERIC Clearinghouse on Educational Management.

Luc, R. & Muta, Y. (2007). Making participation assessment more meaningful: A formative approach to grading classroom participation. The Bulletin of Nagasaki Junior College, 19, 31-42. Retrieved from http://ci.nii.ac.jp/naid/110006405270/en

McMillan, J. H. & Workman, D. J. (1998). Classroom assessment and grading practices: A review of the literature. Metropolitan Educational Research Consortium, Richmond, VA. (ERIC 453263)

MEXT. (2011). Higher education in Japan. Retrieved from http://www.mext.go.jp/english/highered/__icsFiles/afieldfile/2011/02/28/1302653_001.pdf

Newfields, T. (2006). Suggested answers for assessment literacy self-study quiz #1. Shiken: JALT Testing & Evaluation SIG Newsletter, 10(2), 25-32.

Okuda, R. & Otsu, R. (2010). Peer assessment for speeches as an aid to teacher grading. The Language Teacher, 34(4), 41-47.

Piaget, J. (1990). The child’s conception of the world. New York: Littlefield Adams.


Schindler (2003). … assessing student participation and process. The Online Journal of Peace and Conflict Resolution, 5(1), 20-32.

Sporn, B. (1996). Managing university culture: An analysis of the relationship between institutional culture and management approaches. Higher Education, 32(1), 41-61.

Van Teijlingen, E., & Hundley, V. (2001). The importance of pilot studies. Social Research Update, Issue 35, Winter.

Vygotsky, L. (1986). Thought and language. Cambridge, MA: MIT Press.

White, E. (2009). Assessing the assessment: An evaluation of self-assessment of class participation procedure. The Asian EFL Journal Quarterly, 11(3), 75-109.


Appendix A. Class Participation Assessment Score Sheet (CPASS)

授業参加評価基準スコアーシート

Student Name:

(Adapted from White, 2009)

Each criterion below is rated Poor 悪い (0-1), Average 平均 (2-3), or Excellent 優秀 (4-5).

Class Participation Criteria 授業参加評価基準

1. Preparation 準備: Student comes to class with homework and with textbook and writing tools.

2. Attentiveness 注意: Student stays focused on English and does not waste time chatting, checking cell phone, sleeping, or making frequent requests for bathroom privileges.

3. Cooperativeness and Completion of Tasks 課題に対する協調性と、課題の完了: Student actively cooperates to complete lone, pair, or group in-class tasks.

4. Active Listening and Note Taking 積極的なリスニングとノートをとる事: Student listens actively to teacher and classmates, taking notes when important.

5. Language Use 使用言語: Student communicates as much as possible in English, showing attempts not to use Japanese in class.

6. Overall Effort and Attitude 全体的な努力と態度: Student has been an active member of class, showing efforts to communicate in English with the teacher and other students and to improve speaking skills.
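The paper does not state how the six criterion ratings were combined into the participation portion of the grade; purely as a hypothetical sketch, assuming a simple sum of the six 0-5 ratings into a 0-30 score:

# Hypothetical CPASS aggregation: a simple sum of the six 0-5 criterion
# ratings is assumed here; the paper does not specify the actual method.
CRITERIA = ("preparation", "attentiveness", "cooperativeness",
            "active_listening", "language_use", "effort_attitude")

def cpass_total(ratings: dict) -> int:
    """Sum six 0-5 criterion ratings into a 0-30 participation score."""
    for name in CRITERIA:
        if not 0 <= ratings[name] <= 5:
            raise ValueError(f"{name} must be rated 0-5")
    return sum(ratings[name] for name in CRITERIA)

print(cpass_total({"preparation": 4, "attentiveness": 3, "cooperativeness": 5,
                   "active_listening": 3, "language_use": 2, "effort_attitude": 4}))  # 21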


Appendix B. CPASS and non-CPASS pilot study survey results: mean scores (N = 212), with standard deviations in parentheses

Instructors A and B (CPASS)
                                 Q1           Q2           Q3           Q4           Q5           Class Mean Avg.
A/Class 1 (Economics) (n = 20)   1.85 (0.75)  2.00 (0.79)  2.00 (0.86)  2.10 (0.91)  1.90 (0.72)  1.97
A/Class 2 (Business) (n = 11)    2.00 (0.77)  1.73 (0.79)  1.37 (0.67)  1.72 (1.01)  1.63 (0.67)  1.69
B/Class 1 (Sports) (n = 26)      2.42 (1.10)  2.46 (0.99)  2.23 (1.31)  2.35 (0.98)  2.62 (1.27)  2.42
CPASS Total Item Mean Average (n = 57):
                                 2.09         2.06         1.87         2.06         2.05         2.03

Instructor C (non-CPASS)
                                 Q1           Q2           Q3           Q4           Q5           Class Mean Avg.
C/Class 1 (Law) (n = 44)         2.23 (1.05)  2.18 (1.08)  1.95 (1.14)  2.32 (1.07)  2.00 (1.01)  2.14
C/Class 2 (Business) (n = 43)    1.72 (0.77)  1.74 (0.79)  1.63 (0.95)  1.86 (0.91)  1.65 (0.95)  1.72
C/Class 3 (Business) (n = 40)    1.19 (1.08)  2.05 (1.34)  1.85 (1.29)  1.98 (1.21)  1.88 (1.18)  1.79
C/Class 4 (Law) (n = 28)         1.86 (0.71)  1.89 (0.92)  1.64 (0.73)  1.96 (0.84)  1.96 (0.83)  1.86
Non-CPASS Total Item Mean Average (n = 155):
                                 1.75         1.97         1.77         2.03         1.87         1.88
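As a quick arithmetic check on the table (a sketch only: it simply recomputes each pooled item mean as a size-weighted average of the class means), the non-CPASS Q1 total of 1.75, for example, follows from the four class means:

# Recompute a pooled item mean as the size-weighted average of class means.
def pooled_mean(class_means, class_sizes):
    weighted = sum(m * n for m, n in zip(class_means, class_sizes))
    return weighted / sum(class_sizes)

# Non-CPASS Q1 class means and class sizes from Appendix B.
q1 = pooled_mean([2.23, 1.72, 1.19, 1.86], [44, 43, 40, 28])
print(round(q1, 2))  # 1.75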
