Appendix B
AN OPINION SURVEY OF TEACHERS AND ADMINISTRATORS CONCERNING TAAS AND PEIMS DATA IN TEXAS SCHOOLS

Prepared by
The University of Texas at Austin
School of Social Work

David W. Springer, Ph.D., Principal Investigator
Michael Lauderdale, Ph.D., Principal Investigator
Noel Landuyt, Ph.D., Survey Coordinator

May 2000

The opinions expressed in this report represent the responses of the individuals surveyed, and do not necessarily reflect the opinions of the Task Force, the Comptroller, or the University of Texas.

AN OPINION SURVEY OF TEACHERS AND ADMINISTRATORS CONCERNING TAAS AND PEIMS DATA IN TEXAS SCHOOLS

Introduction

This report has been prepared by The University of Texas at Austin, School of Social Work under contract with the State of Texas, Office of the Comptroller. The purpose of this opinion survey is to assist the Public Education Integrity Task Force ("Task Force") in meeting its charge of addressing and examining issues surrounding the reporting and administration of Texas Assessment of Academic Skills (TAAS) data, dropout data, and instructional and administrative costs. The survey described below was developed to obtain the opinions of teachers and administrators regarding the adequacy of the controls, processes, and procedures used to administer, collect, and report Public Education Information Management System (PEIMS), dropout, and TAAS data to the state. The survey also contained open-ended questions designed to solicit teacher and administrator suggestions for changes or improvements to those processes and procedures.

Methodology

Sampling Strategy

The database of teacher and administrator names and addresses was obtained from the State Board for Educator Certification (SBEC). The most recent database (1999) was used. A step-by-step description of how the final sample was selected is provided below.

The original database from SBEC contained over 338,000 names. The budget allowed us to sample approximately 5,000 respondents, so we began a process of stratifying the sample. Stratified sampling is a method of reducing sampling error (Rubin & Babbie, 1997). Rather than selecting the sample from the total population at large (n = 338,000), we ensured that appropriate numbers of elements were drawn from homogeneous subsets of the population. The elements used to guide the stratification process were as follows: type of school (i.e., high school); role (i.e., teacher or administrator); size of district (i.e., major urban, major suburban, rural); and ethnicity. These were the elements identified as critical by the Task Force. Once the sample was stratified by these key elements, a systematic sampling procedure was employed in which we selected every kth name from each district type (major urban, major suburban, rural).

We first sorted the sample by city, dropping all names with no mailing addresses (n = 17,460). We then limited the sample to high schools only, which reduced the potential sample to just over 92,000. Next, from this list of high school personnel, some categories were eliminated, such as nurses; librarians; social workers; speech, music, art, occupational, and physical therapists; substitute teachers; and so on. We then stratified the sample by district (major urban, major suburban, and rural), which resulted in the following potential samples: major urban (n = 11,709); major suburban (n = 19,381); and rural (n = 5,278). In short, at this point, we had narrowed the list from over 338,000 potential respondents to approximately 36,000 potential respondents. However, we could only select 5,000 total respondents. Therefore, we systematically selected approximately 1,670 respondents from each of the three types of districts.

Due to the disproportionate number of potential respondents in each type of district and to ensure equal representation from each group, we chose every 7th person from urban, every 12th person from suburban, and every 3rd person from rural. We also stratified by ethnicity to ensure a diverse sample.
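To make the selection step concrete, the following is a minimal sketch of systematic sampling under the intervals described above. The function name, the random-start convention, and the placeholder lists are illustrative assumptions; the actual selection was carried out on the SBEC files rather than with this code.

```python
import random

def systematic_sample(frame, k, seed=None):
    """Select every k-th record from a stratum, starting at a random
    offset within the first k records (illustrative sketch)."""
    rng = random.Random(seed)
    start = rng.randrange(k)        # random start so the first name is not always chosen
    return frame[start::k]

# Hypothetical strata of high-school personnel, sized as reported in the text.
urban    = [f"urban_{i}" for i in range(11_709)]
suburban = [f"suburban_{i}" for i in range(19_381)]
rural    = [f"rural_{i}" for i in range(5_278)]

# Intervals from the text: every 7th urban, every 12th suburban, every 3rd rural name.
sample = (systematic_sample(urban, 7, seed=1)
          + systematic_sample(suburban, 12, seed=1)
          + systematic_sample(rural, 3, seed=1))
print(len(sample))                  # roughly 1,600 to 1,750 per stratum, about 5,000 in total
```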

Sample

Basic demographic characteristics of respondents on the mailing list are described below. Of the potential survey respondents to be sampled, there were 268,996 (94%) teachers and 15,778 (6%) administrators to choose from. Once the sample was stratified, as described above, a sample of 5,000 potential respondents was randomly selected for the survey mailout. This resulted in 4,737 (94%) teachers and 263 (6%) administrators. Thus, the random selection process served the purpose for which it was intended: the sample is closely proportionate to, and therefore representative of, the larger sampling frame with which we started.

Most respondents (91.5%) were teachers, with the remainder (8.3%) being administrators. Respondents' years as educators were distributed as follows: 0 to 5 years, 18.2%; 6 to 10 years, 16.5%; 11 to 15 years, 15.5%; 16 to 20 years, 11.5%; and 20+ years, 38.3%. Thus, a wide range of experience was represented among respondents, with the majority (81.8%) having 6 or more years of experience. About three-fifths (58.2%) of respondents were female and about two-fifths (41.2%) were male. The ethnic breakdown of respondents was as follows: Anglo, 82.7%; African American, 6.2%; Hispanic, 8.7%; Asian, 0.2%; and Other, 2.2%. There was roughly equal representation among urban (32.2%), suburban (30.7%), and rural (37.1%) respondents.

Of the 5,000 potential respondents who were mailed a survey, 1,515 (30%) returned a completed survey or completed it on-line. Most respondents (n = 1,341; 88%) returned the survey by mail, while the remaining 12% (n = 174) completed the survey on-line. This demonstrates the importance of the Internet in this type of data collection.
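As a simple check on the figures above, the response-rate arithmetic can be reproduced directly. This minimal sketch uses only the counts reported in the text; no additional data are assumed.

```python
mailed = 5_000
by_mail, on_line = 1_341, 174

completed = by_mail + on_line                                 # 1,515 completed surveys
print(f"overall response rate: {completed / mailed:.1%}")     # 30.3%, reported as 30%
print(f"returned by mail:      {by_mail / completed:.1%}")    # 88.5%
print(f"completed on-line:     {on_line / completed:.1%}")    # 11.5%
```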

Urban

Of the 1,669 participants on the urban mailing list, 925 were female and 744 were male. Almost three-fifths (n = 978; 59%) were white, with the rest of the sample being African American (n = 367; 22%), Hispanic (n = 308; 18%), and Asian (n = 10; less than 1%).

Suburban

Of the 1,670 participants on the suburban mailing list, 987 were female and 683 were male. A majority (n = 1392; 83%) were white, with the rest of the sample being African American (n = 108), Hispanic (n = 150), Asian (n = 9), and Native American (n = 11).

Rural

Of the 1,667 participants on the rural mailing list, half were female (n = 836) and half were male (n = 831). A majority (n = 1446; 87%) were white, with the rest of the sample being African American (n = 74), Hispanic (n = 145), and Asian (n = 2).

Results

Reliability and Validity of Survey Tool

Any time a pencil-and-paper instrument is used to capture data in a research project, one is concerned with whether that tool is reliable and valid. In other words, we want to know if that tool performs consistently over time (reliability), and if it measures what it is intended to measure (validity) (Springer, 1998).

Reliability

There are several ways to determine if a tool is reliable. In most situations the researcher is not primarily concerned with how the respondents score on only those specific survey items; usually, the researcher wants to generalize from these specific items to a larger domain of possible items that might have been asked. One way to estimate how consistently respondents' performance on a specific instrument can be generalized to the domain (or pool) of items that might have been asked is to determine how consistently the respondents performed across items on this single survey form. Procedures designed to estimate reliability in this manner are called internal consistency methods. When respondents perform consistently across items within an instrument, item homogeneity is indicated.

Cronbach's (1951) coefficient alpha, as a measure of internal consistency, is the preferred method of establishing reliability (Springer, Abell, & Hudson, in press). Cronbach's alpha ranges from 0.0 to 1.0. An alpha coefficient of .60 or greater indicates that a tool is appropriate for use in research projects such as this one (Hudson, 1982).

The Cronbach's coefficient alpha computed for the survey used in this study is .799, indicating that it far exceeds the threshold needed for purposes of conducting a research study. In short, the tool used in this study is reliable.
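For readers unfamiliar with the statistic, the sketch below shows how coefficient alpha is computed from item-level responses. The data matrix and 1-5 item coding are hypothetical placeholders; the reported value of .799 was computed from the actual survey data, not from this sketch.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 1,515 respondents by 33 items coded 1-5.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(1515, 33)).astype(float)
print(round(cronbach_alpha(responses), 3))         # random data yields alpha near 0, unlike the survey's .799
```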

Validity

Factor analysis is useful in determining how item responses cluster together, and how many factors are necessary to explain the relationships among the items in an instrument. In essence, one can determine the factor structure of a new instrument by using factor analysis, and this should be done with any new tool (Springer, Abell, & Nugent, in press).

The factor analysis computed on this survey tool reveals that the tool has the factor structure we intended. If one inspects the items at face value, it is apparent that they fall into three clusters: items that ask about TAAS data, items that ask about dropout data, and items that ask about PEIMS data. The factor analysis confirmed that three factors, or three domains, make up the survey, as intended.
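The following is a minimal sketch of how one might check a three-factor structure. The use of scikit-learn's FactorAnalysis is an illustrative assumption rather than the estimation method used in the study, and the response matrix is a hypothetical placeholder standing in for the 1,515 completed surveys.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical (respondents x items) matrix of 1-5 responses.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(1515, 33)).astype(float)

# Three hypothesized domains: TAAS items, dropout items, PEIMS items.
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(responses)

# Loadings: rows are items, columns are factors; an item belongs to the
# factor on which it loads most heavily.
loadings = fa.components_.T
print(loadings.shape)                           # (33, 3)
print(np.abs(loadings).argmax(axis=1)[:10])     # dominant factor for the first ten items
```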

Thus, the above psychometric analyses indicate that the survey tool used in this study is both reliable (consistent) and valid (accurate), which allows one to place a certain degree of confidence in the results presented below. In other words, because the psychometric properties of this survey tool are solid, if this study were conducted again with this survey, we would most likely obtain similar results.

Table 1: Overall Responses to Survey
(For each item, percentages are listed in the order: Disagree or Strongly Disagree; Neutral/Don't Know; Agree or Strongly Agree.)
1. Teachers at my school administer the TAAS honestly. 2.1% 3.2% 94.7%
2. At my school, the TAAS is administered in the same way from class to class. 2.9% 5.8% 91.3%
3. Teachers at my school clearly understand the guidelines for administering the TAAS. 3.0% 3.4% 93.6%
4. I am aware of teachers at my school who assist students during TAAS testing or revise answer sheets. 85.4% 10.0% 4.5%
5. TAAS booklets at my school are tightly guarded until tests are administered. 2.0% 5.2% 92.7%
6. TAAS test sheets at my school are tightly guarded until they are sent in for grading. 2.4% 9.1% 88.6%
7. Our administration provides clear guidelines for administering the TAAS. 4.3% 2.8% 92.9%
8. Administrators at my school alter TAAS answer sheets. 88.3% 10.3% 1.5%
9. Administrators at my school ask our teachers to alter TAAS answer sheets. 92.0% 5.9% 2.1%
10. Our campus administrators receive adequate training on administering the TAAS correctly. 2.9% 23.9% 73.2%
11. Our teachers receive adequate training on administering the TAAS correctly. 5.3% 4.3% 90.5%
12. Our teachers feel pressure to have students perform at a certain level on the TAAS. 11.8% 6.7% 81.5%
13. Students predicted to do poorly on TAAS are classified as exempt or encouraged not to attend on TAAS days. 79.2% 12.9% 7.9%
14. Our teachers receive adequate training to identify and report why students leave the district (dropout reporting). 36.7% 32.3% 31.0%
15. Our campus administrators receive adequate training on dropout reporting. 5.9% 51.5% 42.6%
16. Our central office administrators know the true numbers of students that drop out. 6.8% 42.9% 50.3%
17. Our teachers are involved in identifying the reasons that students have left the school. 39.5% 22.9% 37.6%
18. Dropout data for my school is altered on purpose to make our school "look good." 68.1% 25.0% 7.0%
19. Our campus administrators are involved in identifying the reasons students have left the district. 5.7% 40.4% 53.8%
20. Our administration has a good system in place to capture dropout and leaver data. 9.6% 49.8% 40.5%
21. The RESC regularly provides training on attendance reporting. 11.5% 68.5% 20.0%
22. The RESC regularly provides training on dropout data reporting. 11.1% 73.7% 15.3%
23. People on my campus are classified as instructional staff but perform primarily administrative duties. 63.2% 22.4% 14.4%
24. I understand how data is collected for PEIMS in my district. 32.0% 32.9% 35.1%
25. The number of teachers working in our district is accurately reported on PEIMS. 2.6% 57.2% 40.2%
26. Our administrators can verify the accuracy of PEIMS before it is submitted to the state. 1.9% 59.8% 38.4%
27. I believe that the amount of information requested for PEIMS is appropriate. 5.9% 65.3% 28.7%
28. TEA has adequate personnel available to answer questions about data reporting. 7.4% 74.3% 18.2%
29. TEA's TAAS testing manuals are clear and easy to understand. 6.1% 7.8% 86.1%
30. Guidelines for reporting dropouts and leaver codes are clear and understandable. 6.7% 71.0% 22.3%
31. Guideline changes for reporting dropouts and PEIMS data are communicated well in advance of implementation dates. 8.0% 78.1% 14.0%
32. Our school takes the integrity of our TAAS data seriously. 2.6% 1.4% 95.9%
33. Our school takes the integrity of our dropout data seriously. 6.7% 14.4% 78.8%