- This publication may be copied and distributed without permission. Electronic reproduction is not permitted.
- Reproduced from: Jarwan, F. A. & Feldhusen, J. F. (1993). Residential schools of mathematics and science for academically talented youth: An analysis of admission programs (CRS93304). Storrs, CT: The National Research Center on the Gifted and Talented, University of Connecticut.

NRC/GT is funded under the Jacob K. Javits Gifted and Talented Students Program Act

Residential Schools of Mathematics and Science for Academically Talented Youth: An Analysis of Admission Programs

Fathi A. Jarwan
John F. Feldhusen
Purdue University
West Lafayette, Indiana


Table of Contents

ABSTRACT

EXECUTIVE SUMMARY

Statement of the Problem

Purpose of the Study

Introduction and Background of the Study

General Characteristics of Sound Admission Programs

Defining the Target Population

Identification/Selection Criteria

Special Populations

Selection Strategy

Validating the Identification/Selection Procedures

Methodology of the Current Study: An Application of Regression Analysis

Population and Sampling

Instrumentation

Pre- and Post-Admission Data
Methods of Data Analyses
Results

Overview of Selection and Admission Programs
File Review
On-campus Interview
Selection of Finalists
Appeal Review

Descriptive Statistics of Validation Data
Predictor Variables
Criterion Variables

Correlational and Regression Analyses
Regression Analyses

Analysis of Interviews

Discussion and Conclusions

Limitations of Study

Recommendations

Lessons From This Research for the Development and Validation of Identification/Selection Systems

References

Appendices


ABSTRACT

The purpose of this project was to analyze and evaluate the procedures used in selecting youth for state supported residential schools of mathematics and science. A combination of qualitative and quantitative research designs was used to test the predictive potential of selection variables. Special forms were used to collect quantitative and demographic data. The predictor variables included home school grade point average (GPA), standardized aptitude test (SAT-M, SAT-V, or ACT) scores, interview ratings, file ratings, and composite scores. The criterion variables included first and second year adjusted grade point averages (GPA), and the overall first and second year GPAs. An interview protocol composed of 12 questions was developed to gather information about admission programs from administrators. Promotional literature of all schools was another source of information about admissions. In sum, the data collected included:
  1. pre- and post-admission data for 742 students in seven schools,
  2. demographic distribution of student populations in terms of ethnicity and gender for seven schools,
  3. 12 taped, semi-structured interviews with directors and/or coordinators of admissions in seven schools, and
  4. promotional literature of nine schools.
Correlational and multiple regression procedures were used to:
  1. determine the relative potential of different selection criteria for predicting academic success, as measured by school grades in mathematics, science, and English language courses, and
  2. develop a "best predictor."
Interview tapes were transcribed, content analyzed, and summarized. Promotional literature of schools was analyzed to identify common selection procedures and policies.

Results of the correlation and regression analyses of pre- and post-admission data from seven schools indicated that the students' home school adjusted grade point average was the best predictor of first and second year grade point averages. The Scholastic Aptitude Test (SAT) was the second best predictor.

Ratings of complete files and ratings of applicants by admission interviewers were of far less value in predicting student achievement; there was a great deal of fluctuation and inconsistency in how they correlated with criterion variables. Composite scores functioned poorly and inconsistently in predicting first year GPA in most schools. Overall, statistical prediction is superior to professional prediction by interview or rating of complete files.

Analysis of enrollment data indicates that African American and Hispanic students are proportionally underrepresented, while Asian students are proportionally over-represented. White students are fairly represented in some schools, underrepresented in others, and over-represented in still others. Male students outnumbered female students in some schools and vice versa. Male students outscored female students on the mathematical section of the SAT.

Results of the interviews indicated that the use of multiple criteria is seen by administrators as a major strength of their identification systems, but the lack of minority representation is viewed as a major weakness. The relatively high rate of attrition is also viewed as a weakness. Teachers in most schools are not directly involved in identification and selection processes. Instead, decisions were made by admission personnel, counselors, and administrative staff.



EXECUTIVE SUMMARY

The purpose of this project was to analyze and evaluate the procedures used in selecting youth for state supported residential schools of mathematics and science. A combination of qualitative and quantitative research designs was used to test the predictive potential of selection variables.

Selection of gifted and talented youth for state supported residential schools of mathematics and science poses a wide range of problems different from those addressed in current theory and research on identification of the gifted. What selection criteria are used in the identification/selection process? How are school faculty involved in the identification process? Are the selection criteria valid for identifying youth who will succeed in the program and go on to high level achievement in mathematics and/or science, and to what extent? How can the identification process be made practical and efficient in these schools? These are some of the problems and questions that inspired this research.

A review of the literature reveals that the identification of gifted students has been the focus of a large number of publications. However, there has been little research on the identification of gifted students for high school programs (Feldhusen, Hoover, and Sayler, 1990). Also, little attention has been paid to the identification and selection of gifted and talented youth in specialized public high schools. Specialized residential schools for gifted and talented youth have to deal with a wide range of variables beyond those encountered in typical public school programs. Legal and political considerations and the diversity of student populations are some examples of the complexity of the identification and selection processes at these schools.

Few validation studies of selection procedures have been reported, although ten years have passed since the first school opened (Hoge, 1988, 1989); there is only a small number of published works (e.g., Cox and Daniel, 1983; Cox, Daniel, and Boston, 1985; Kolloff, 1991; Stanley, 1986, 1991a, 1991b) on special schools. Most reports offer general descriptions of admission procedures without further analysis of the relationship between selection procedures and the curricular or instructional outcomes.

Defining the Target Population

Defining the target population to be served is the first and perhaps the most important step in planning programs for gifted and talented youth (Borland, 1989). Almost all essential components of any well-structured program for the gifted and talented are shaped by the definition of the target population. It is important because of the close link that must exist between the definition and the identification system (Feldhusen, Asher, and Hoover, 1984; Hoge, 1988; Ward, 1983). It is also important because of its relationship with program goals and curriculum offerings (Feldhusen, 1982). Finally, the definition adopted or developed by a school will determine, in general terms, who will be selected and who will be excluded.

Identification/Selection Criteria

The literature suggests a variety of data sources for the identification of academically talented youth, including: standardized tests of intelligence, aptitude, and achievement; school grades; rating scales; references; essay writing (Feldhusen and Baska, 1989); awards and accomplishments (Coleman, 1985); interviews; creativity tests (Torrance, 1984); and creativity inventories (Rimm, 1984).

Validating the Identification/Selection Procedures

The use of multiple identification/selection criteria is generally recommended, but multiple criteria present a serious problem concerning the method of weighting and combining data. How should a school synthesize the accumulated set of data in a defensible way that facilitates the final step of making selection decisions? The way the data are synthesized and summarized is critical to making reliable and valid selection decisions (Feldhusen, Baska, and Womble, 1981). Of equal importance is the method used for weighting the different components of the selection process.

A major principle underlying the use of measurement in identification of gifted and talented students is that the measures used must have predictive validity (Hoge, 1988, 1989; Petersen, 1976). They must be correlated with indices of successful performance in or as a result of the program. Justification for using a measurement instrument in the selection process also assumes that there is a relationship between the measure used or the data collected and (a) program objectives, (b) program offerings, and (c) measures of success in the program. Thus, data must be collected which indicate that the instruments used to identify students do indeed predict success in the program (Feldhusen, Asher, and Hoover, 1984).

Multiple Regression

Multiple regression analysis is widely used in industry, business, and educational selection and placement, but rarely used in programs for talented youth. It can potentially handle the problems of both combining data and validating the identification system. Multiple regression analysis allows for the most accurate predictions; no other method offers better or more accurate predictions (Meehl, 1954; Sawyer, 1966).

The use of multiple regression analysis as a basis for combining, selecting, and validating the identification/selection data implies the need for linking the major components of the program: 1) Program Objectives, 2) Identification/Selection Measures (Predictors), 3) Instructional Program, and 4) Measures of Outcomes (Criteria).

Methodology of the Current Study: An Application of Regression Analysis

A special form for recording pre- and post-admission data of students was prepared and mailed with written instructions to nine residential schools in June 1991. The process of data collection continued from July through November, 1991, including phone calls, letters, and on-site contacts; additional demographic information about enrollments was collected during field visits to all schools during the period of June to October, 1991.

The pre-admission data included the home school (the high school students attended prior to enrollment in the residential school) grade point average (HS-GPA), scores on both mathematical and verbal sections of the Scholastic Aptitude Test (SAT-M, SAT-V), the American College Testing (ACT) Assessment, or the Preliminary SAT; interview ratings and file ratings by faculty of the residential schools, and composite scores. The criterion performance data included adjusted grade point averages and overall GPA for the first and second years of study at the residential school. An adjusted grade point average (GPA) for each student was obtained by averaging all grades in science, mathematics, and English courses for students in each year at the residential school.

Population and Sampling

The population for this study consisted of students in seven state supported residential schools of mathematics and science. The sample included 406 male students and 336 female students. Four schools had proportional representation of both sexes. In the other three schools, percentages of female students were in the range of 34% to 40%. The analysis of data for all schools collected in the fall of 1992 indicates that females outnumbered males in two schools and males outnumbered females in three schools. In the other schools they were almost even. In all but one school, students enter in their junior year of high school; in one, they enter in the sophomore year.

Conclusions and Recommendations

  1. The regression analyses yielded quite accurate predictions of achievement in the residential schools and indicated which variables were best predictors in the identification/selection process.
  2. The best predictors or selection criteria are SAT or ACT scores or GPA in the high school courses taken prior to selection and admission to the residential school.
  3. Adequate training of committee members and faculty who are involved in the selection process is necessary to assure a reasonable degree of cross-rater or cross-interviewer reliability in the interview data and composite scores.
  4. Active involvement of teachers in identification and selection processes and the use of information collected during these processes may be important factors for lowering attrition rates and for planning successful instruction.
  5. Identification/selection of students for residential school programs is basically a measurement and statistical process and should be carried out by personnel who are well trained and competent in these areas.

Applicability of the Results to Gifted Programs in Public Schools

The results of this research can be generalized to identification methods used in all gifted programs, to all youth programs in which applications for admission and selection methods are used, and to talent search programs. The two most powerful messages are that identification/selection systems should be empirically validated and that individual identification/selection variables should be evaluated in terms of their contributions to the identification process. The field of gifted education has spent several decades debating the pros and cons of identification methods and the potential value of individual tests and rating scales. It is high time to begin using empirical data to validate identification/selection systems.

The results of this research also suggest that professionals who are called upon to do ratings, recommendations, and comprehensive evaluations of student potential for selection into a special academic program need intensive orientation and/or training for the tasks to assure reliability of assessment. It cannot be assumed that their general professional training readies them for the specific tasks of evaluating student potential for success in highly challenging academic programs.

We are also reminded by this research that articulation of the identification/selection system with the curriculum and evaluation methods is essential to program success. That is, the identification/selection system must bring into the program youth who need and will profit from the specific curriculum offered, and the evaluation of student success must be linked to both the selection criteria and the curriculum. If the curriculum stresses mathematics and science, then the identification/selection system should find youth with particular strength, precocity or talents in those areas, and the evaluation methods should focus on mathematics and science achievement in the program.

Finally, the educational programs and curricula that we observed in the residential schools were of very high quality and could readily serve as models for public school programs for gifted and talented youth.

References

Borland, J. H. (1989). Planning and implementing programs for the gifted. New York: Teachers College, Columbia University.

Coleman, L. J. (1985). Schooling the gifted. Menlo Park, CA: Addison-Wesley Publishing.

Cox, J., & Daniel, N. (1983, May/June). Specialized schools for high ability students. Gifted Child Today, 28, 2-9.

Cox, J., Daniel, N., & Boston, B. (1985). Educating able learners: Programs and promising practices. Austin, TX: University of Texas Press.

Feldhusen, J. F. (1982). Meeting the needs of gifted students through differentiated programming. Gifted Child Quarterly, 26, 37-41.

Feldhusen, J. F., Asher, J. W., & Hoover, S. M. (1984). Problems in identification of giftedness, talent, or ability. Gifted Child Quarterly, 28, 149-151.

Feldhusen, J. F., Hoover, S. M., & Sayler, M. F. (1990). Identification of gifted students at the secondary level. Monroe, NY: Trillium.

Hoge, R. D. (1988). Issues in the definition and measurement of the giftedness construct. Educational Researcher, 17(7), 12-16.

Hoge, R. D. (1989). An examination of the giftedness construct. Canadian Journal of Education, 14(1), 6-17.

Kolloff, P. B. (1991). Special residential high schools. In N. Colangelo & G. A. Davis (Eds.), Handbook of gifted education (pp. 209-215). Needham Heights, MA: Allyn and Bacon.

Meehl, P. E. (1954). Clinical versus statistical prediction. Minneapolis, MN: University of Minnesota Press.

Petersen, N. S. (1976). An expected utility model for "optimal" selection. Journal of Educational Statistics, 1(4), 333-358.

Rimm, S. (1984). The characteristics approach: Identification and beyond. Gifted Child Quarterly, 28, 181-187.

Sawyer, J. (1966). Measurement and prediction, clinical and statistical. Psychological Bulletin, 66, 178-200.

Stanley, J. C. (1986). Residential state high schools for youths who are highly talented mathematically and/or scientifically: Several suggestions. Paper presented at Ball State University (December, 1986), Muncie, IN.

Stanley, J. C. (1991a). An academic model for educating the mathematically talented. Gifted Child Quarterly, 35, 36-42.

Stanley, J. C. (1991b). A better model for residential high schools for talented youth. Phi Delta Kappan, 72, 471-473.

Torrance, E. P. (1984). The role of creativity in identification of the gifted and talented. Gifted Child Quarterly, 28, 153-162.

Ward, V. S. (1983). Gifted education: Exploratory studies of theory and practice. Manassas, VA: The Reading Tutorium.


Statement of the Problem

There are now nine residential high schools in the United States offering a uniquely challenging education for students talented in the areas of mathematics and science. Approximately 1,800 students are selected each year for admission, while hundreds of others apply and are rejected. By law the selection process must be fair, equitable, and valid.

Selection of gifted and talented youth for state supported residential schools of mathematics and science poses a wide range of qualitatively different problems from those addressed in current theory and research studies on identification of the gifted. How do these schools provide equal access to information about their programs for potential students? How do they handle demographic variables in their admission procedures? Are they operating under state mandates regarding representation of special populations? How is the context in which selection takes place shaped by state mandates? Are the selection criteria valid for identifying youth who will succeed in the program and go on to high level achievement in mathematics and/or science, and to what extent? How can the identification process be made practical and efficient in these schools? These are the problems and questions that inspired this research.

Purpose of the Study

Twelve years have passed since the first state supported residential school of mathematics and science was opened in Durham, North Carolina. Following the model of the North Carolina school, eight others have been established. Yet, little information is available about the validity of the selection procedures at these institutions. The purposes of this research are to:
  1. analyze and evaluate identification procedures and policies used in selecting students;
  2. assess the validity of the identification procedures in predicting success as measured by school grades;
  3. construct a "best predictor" model from the set of selection measures used in the admission procedures in these schools;
  4. evaluate the effectiveness of selection procedures, as perceived by teachers, as reflected in the size of applicant pools, and rates of attrition;
  5. identify strengths and weaknesses of the selection procedures, as expressed by the schools' directors and coordinators of admissions.
Four major questions were formulated to guide this research:
  1. What are the common policies and procedures used for selecting students in state supported residential schools of mathematics and science?
  2. What is the predictive validity of the admission procedures employed by these schools?
  3. Are teachers trained for and involved in the selection process?
  4. What are problems, strengths, and weaknesses of selection systems, as perceived by the schools' administrators?

Introduction and Background of the Study

During the last decade the phenomenon of establishing state supported schools of science and mathematics for intellectually talented students has extended from North Carolina to several other states. With the opening of the Alabama School in September 1991, nine state supported residential schools are currently in operation. (A list of these schools is presented in Appendix A.) Plans were underway for the opening of another school in September 1993 in the state of Arkansas. All of these schools have common features and similar policies and practices for fulfilling their special missions.

The change from traditional programs to specialized full time schools is a dramatic development gradually becoming acceptable at both national and international levels. The Israeli Arts and Science Academy, the Jubilee School of Jordan, and the Cairo School for Superior Students are examples of how the phenomenon is also advancing outside the United States.

One of the most critical problems facing these schools is how to select students, as the selection procedures and policies act as a keystone of the entire school program. Although residential schools are often seen as serving gifted youth, one step common to most gifted programs is eliminated from their selection and admission process: there is no intermediate step of labeling and categorizing youth as "gifted," and the educational programs are not labeled as "gifted education." These schools stress selection of youth for a specific program to facilitate their intellectual growth and meet their educational needs.

A review of the literature reveals that the identification of gifted students has been the focus of a large number of publications. However, there has been little research on the identification of gifted students for high school programs (Feldhusen, Hoover, and Sayler, 1990). Also, little attention has been paid to the identification and selection of gifted and talented youth in specialized public high schools. Specialized residential schools for gifted and talented youth have to deal with a wide range of variables beyond those encountered in typical public school programs. Legal and political considerations and the diversity of student populations are some examples of the complexity of the identification and selection processes at these schools. Few validation studies of selection procedures have been reported, although ten years have passed since the first school was created (Hoge, 1988, 1989); there is only a small number of published works (e.g., Cox and Daniel, 1983; Cox, Daniel, and Boston, 1985; Kolloff, 1991; Stanley, 1986, 1991a, 1991b) on special schools. In most cases, however, these works offer general descriptions of admission procedures without further analysis of the relationship between selection procedures and the curricular or instructional outcomes.

General Characteristics of Sound Admission Programs

A statewide specialized residential school is similar to postsecondary or higher education institutions in terms of the process of admitting students. All students throughout a state who are interested, and who meet certain criteria, have the right to apply for admission. The term "identification" does not accurately describe this process, nor is it the term used by the current schools; it is really a selection and admission program. Therefore, it may be appropriate to begin with what can serve as a framework for a sound admission program, as given by Hills (1971) in a discussion focusing on the use of measurement in selection and placement:
Whether an institution is open, selective, or competitive in admission, it seems that certain characteristics must inhere in a sound admission program. The program must be orderly. The proper steps must be taken in the proper sequence and on time, and they must be done reliably, one term after another. The program should be fully specified and clear so that all who are involved or who may become involved can follow the steps without faltering. To be sound, the program must be rational (i.e., it must be designed to achieve carefully determined objectives), and the design must be logical and thoroughly planned to eliminate any nonessentials while including all essentials in their proper places. Finally, the program must be modifiable on the basis of observations of its operation and its success in meeting the specified objectives efficiently. (p. 682)
In this precise description, Hills (1971) pinpointed major characteristics applicable to any sound admission program. The order and sequence of steps, clarity of objectives, logic and purposefulness of each component, and the rationale and modifiability of the program are important factors in comprehensive selection and admission programs.

A defensible goal for a special school's admission program should be selecting and admitting students who are most likely to benefit from the school's educational experiences and pass criteria of success, as defined by the school's goals. In addition to guiding all activities to be undertaken from the very beginning, precisely stated objectives provide a solid foundation for any formal or informal attempt to monitor and evaluate: (a) the degree to which the objectives have been achieved, (b) the weaknesses and strengths of procedures and methods, and (c) the value of the objectives themselves. A key question to be answered is: Under what conditions will the goals and objectives of the identification/selection program be achieved?

Defining the Target Population

Defining the target population to be served is the first and perhaps the most important step in planning programs for gifted and talented youth (Borland, 1989). Almost all essential components of any well-structured program for the gifted and talented are shaped by the definition of the target population. It is important because of the close link that must exist between the definition and the identification system (Feldhusen, Asher, and Hoover, 1984; Hoge, 1988; Ward, 1983). It is also important because of its relationship with program goals and curriculum offerings (Feldhusen, 1982). Finally, the definition adopted or developed by a school will determine, in general terms, who will be selected and who will be excluded.

In addition, accurate and updated information about the statewide population of students in the grade level from which the selection is to be done is fundamental to informed planning for admissions. The area of the state, the size of student population, and the make-up of this population are all important elements in the planning stage. Statistics about the number of high schools, school districts, and the distribution of students in the state provide basic data to develop a sound plan for admissions.

Historically, the gifted education movement has witnessed extensive efforts in both theoretical and empirical areas to define the construct giftedness. Yet, there is disagreement among researchers and educators on a precise definition and measurement of giftedness (Horowitz and O'Brien, 1985; Janos and Robinson, 1985). Hallahan and Kauffman (1982) proposed that the reasons for disagreement are mainly due to differences of approach regarding four issues: (a) the range of skills and behaviors to which the term giftedness should be applied, (b) the measurement of giftedness, (c) the cutoff point above which a child is considered gifted, and (d) the nature of the comparison group.

Conceptions of giftedness have changed over the years from the psychometric tradition that equates giftedness with high IQ (Terman, 1925), to multidimensional conceptions (e.g., Marland, 1971; Renzulli, 1978; Tannenbaum, 1983) which include intellectual and nonintellectual domains or factors (Feldhusen, 1986; Feldhusen and Hoover, 1986), to more talent-oriented or domain-specific ones (Stanley, 1979). The domain specific conception of giftedness appears to be more appropriate to the mission of schools of mathematics and science. The term "talented" generally refers to students who are outstanding in a specific skill such as arts, music, mathematics, science, or any other aesthetic or academic area (Feldhusen, 1992).

The major dimensions upon which definitions of giftedness can be categorized include comprehensiveness, degree of superiority, level of potentiality, and terminology.

Degree of comprehensiveness or breadth refers to the nature and number of variables included in the definition. At one extreme are definitions with a single variable and domain such as mathematical aptitude or creativity. At the other extreme are multivariate definitions that include a wide range of traits, in addition to cognitive variables (Sternberg and Davidson, 1986).

Degree of superiority ranges from conservative definitions, such as Terman's top one percent in general ability as measured by the Stanford-Binet Intelligence Test, to liberal definitions such as Taylor's (1978) multiple talent conception that considers almost everyone to be gifted or talented in some way.

Gifted versus potentially gifted refers to an important aspect of the conceptualization of giftedness in terms of the extent to which definitions involve a static or dynamic view of components or characteristics of giftedness. According to Hoge (1989), the continuum ranges from definitions stressing performance on IQ tests to definitions that involve a set of potentialities to be developed.

Terminology of giftedness varies a great deal. A variety of terms have been used in defining the giftedness construct such as "genius," "talent," "creative," "precocious," and "aptitude." Some use the terms "giftedness" and "talent" synonymously, some distinguish between them, others associate them with the term "creativity." The variability has been well documented by Richert, Alvino, and McDonnel (1982).

It should also be noted that the term "gifted" is often criticized as being undesirable and outdated. The major problems with the term are that:
  1. it evokes a sense of elitism,
  2. it carries undesirable genetic connotations,
  3. measurement assumptions associated with it are often naive,
  4. it incorrectly communicates a fixed or entity conception of ability,
  5. labeling may in itself be undesirable, and
  6. simply identifying a student as gifted leaves unexplained the nature of his/her special talent or aptitude.
The latter is often considered essential in developing sound instructional programs.

In sum, there is much disagreement and conflict regarding the definition of giftedness. No one definition will fit all programs and situations; nevertheless, a definition should be a central component of all organized programs. The following criteria for the employment of a definition are derived from research:
  1. a definition should be based on theoretical and empirical literature in psychology and education about characteristics and needs of gifted students;
  2. a definition should be explicitly stated in operational form;
  3. a definition should tolerate a degree of subjectivity in estimating performance and potential.
We advocate the use of the term "talented." "Talented" may mean high general intellectual ability, specific aptitude or talent, a strong emerging knowledge base, and non-cognitive characteristics including an individual's achievement motivation and internal locus of control. Knowledge base can be demonstrated by prior academic achievement. Personal characteristics associated with talent development can be assessed through questionnaires, standardized scales such as the California Psychological Inventory, and interviews. A final comprehensive evaluation should lead to the selection of talented youth who can perform well in the program.

Identification/Selection Criteria

In almost every program for the gifted there is a fixed number of openings and limited resources. There are criteria defining eligibility to apply for admission. These are intended to give students who are interested in the program the information needed to decide whether or not to apply for admission. They also function as a means of keeping the number of applications within a reasonable range. The literature suggests a variety of data sources for the identification of academically gifted and talented youth. The list includes: standardized tests of intelligence, aptitude, and achievement; school grades; rating scales; references; essay writing (Feldhusen and Baska, 1989); awards and accomplishments (Coleman, 1985); interviews; creativity tests (Torrance, 1984); and creativity inventories (Rimm, 1984). The Scholastic Aptitude Test (SAT) has been widely used as a requirement for college admissions, as an off-level testing procedure in The Johns Hopkins University Study of Mathematically Precocious Youth (Stanley and Benbow, 1983), and in other talent search programs. The use of appropriate standardized tests as part of selection criteria provides a reasonable basis for equitable assessment of students' abilities across varying schools and programs.

Rating scales can also provide some valuable information but they often lack reliability and validity (Feldhusen, Asher, and Hoover, 1984). If rating procedures or scales are used, teachers must be given training in their use before they rate program applicants (Hoge and Cudmore, 1986). Additional sources of data and other procedures have been tried and evaluated for educational selection with little consistent evidence of increases in validity (Hills, 1971, p. 692).

Although many leaders in gifted education have argued in support of the use of multiple selection criteria in identification and selection (e.g., Howley, Howley, and Pendarvis, 1986; Reynolds and Birch, 1977; Richert, Alvino, and McDonnel, 1982), the quality and relevance of the specific measures place limits on the reliability and validity of such decisions. The question to be raised, therefore, is not how many measures are used in the identification process, but rather what purpose each measure serves and what contribution each piece of information makes to valid decisions or to specific objectives. It is a waste of money, time, and effort to collect data that are not going to be used or that do not contribute to the validity of decisions.

Special Populations

A policy statement developed by the National Association for Gifted Children Committee on Special Populations defined the term "special populations" to include "children and adults who are African American, Hispanic, Native American, Asian Pacific, rural, economically disadvantaged, handicapped, or female" (Jenkins-Friedman, Richert, and Feldhusen, 1991). Identifying gifted students from special populations and from ethnic groups in particular, has posed a great challenge for professionals in gifted education and leaders of special programs for talented youth.

Underrepresentation of special populations in programs for the gifted has been documented by several researchers (Baldwin, 1985; Baska, 1989; Davis and Rimm, 1985; Frasier, 1989; Richert, 1985; VanTassel-Baska and Willis, 1987). The problem of underrepresentation is especially acute for African American students. Studies of racial and ethnic group differences on ability and college admissions standardized tests reveal that in random samples of African Americans and Whites the mean score of Whites exceeds that of African Americans by about one standard deviation (Hilliard, 1984).

The debate over bias in using psychological and educational standardized tests has developed political and legal dimensions (Reynolds and Brown, 1984). Suggestions for dealing with this problem of identifying gifted minorities include the adoption of a quota system, lowering the achievement criteria for admission (VanTassel-Baska, 1989), applying a culturally specific assessment system or culture fair tests (Frasier, 1989; Richert, 1991), and using a case study approach (Maker, 1989). Stanley (1986), however, warns residential schools of mathematics and science about the consequences of applying quotas or exceptions policies in their identification. Such approaches can lead to weakening of programs or severe frustration for youth who are selected but unable to achieve at acceptable levels in the program. Instead, he suggested setting an appropriate minimum ability level and adhering to it in all selections. His suggestion is based on a conviction that special schools for academically talented students should provide advanced and rigorous curriculum experiences in order to satisfy the intellectual needs of talented youth not addressed in their regular schools.

Selection Strategy

A straightforward objective strategy for selection is to rank order all applicants based on a composite index (score), and to select from high to low until all openings are filled. The best available choices can be guaranteed by using this strategy. However, there is still the problem of determining whether all those selected are qualified according to minimal standards for admissions. Popham (1990) suggested that there are many situations in education where the selection decision "revolves around not who is best or worst but rather who is qualified" (p. 35). Highly structured programs for the gifted can be included in those situations. An efficient program for admissions can play a role in defining the requirements for making selection decisions on the basis of "who is qualified" or likely to succeed in the program. It is important for administrators to be clear and specific in adopting a selection strategy that specifies minimum competencies of applicants, regardless of the number of applicants or the number of openings.

The decision to set a minimal cutoff score on a test or tests as a condition for admission is influenced by the selection strategy and the orientation of the school program. Empirical evidence and/or professional judgment can be used to make sound decisions. A formula for setting the minimum level can be based, as suggested by Stanley (1986), on state or national statistics of students' performance on standardized tests.
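
To illustrate the strategy described above, the following sketch (in Python, with hypothetical applicant data and a hypothetical cutoff) shows how rank-ordering on a composite index can be combined with a minimum qualification standard. It is offered only as an illustration, not as any school's actual procedure.

```python
# Minimal sketch (not any school's actual procedure): rank applicants by a
# composite score, screen out those below a minimum cutoff, and fill the
# available openings from the top of the ranked list.

def select_applicants(applicants, openings, min_cutoff):
    """applicants: list of (name, composite_score) tuples -- hypothetical data."""
    # Keep only applicants who meet the minimum qualification standard.
    qualified = [a for a in applicants if a[1] >= min_cutoff]
    # Rank qualified applicants from highest to lowest composite score.
    ranked = sorted(qualified, key=lambda a: a[1], reverse=True)
    # Select from high to low until all openings are filled.
    return ranked[:openings]

pool = [("A", 88.5), ("B", 92.0), ("C", 79.0), ("D", 85.5)]
print(select_applicants(pool, openings=2, min_cutoff=80.0))
# [('B', 92.0), ('A', 88.5)]
```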

Validating the Identification/Selection Procedures

The use of multiple identification/selection criteria has presented a serious problem concerning the method of weighting and combining data. How should a school synthesize the accumulated set of data in a defensible way that facilitates the final step of making selection decisions? First, the way the data are synthesized and summarized is critical to making reliable and valid selection decisions (Feldhusen, Baska, and Womble, 1981). Of equal importance is the method used for weighting the different components of the selection process. The rank order of student candidates, and consequently their chances for selection, may differ depending on the method used for combining the data. Individual students may be accepted or rejected depending on the method used. Second, most of the methods for synthesizing data deal with only one aspect of the problem, the summarization of the data. These methods include: (a) using matrices such as the Baldwin Identification Matrix (1984), (b) converting all raw scores into standard scores and adding them to get a composite standard score, (c) setting a cutoff score for each measure used in the identification process, and (d) using a holistic case study method or professional judgment for ranking or assigning an overall score for each candidate. None of these methods answers the empirical question concerning the value of identification or selection procedures for predicting student success in programs.

The principle underlying the use of measurement in identification of gifted and talented students is that the measures used must have predictive validity (Hoge, 1988, 1989; Petersen, 1976). They must be correlated with indices of successful performance in or as a result of the program. Justification for using a measurement instrument in the selection process implies an assumption that there is a relationship between the measure used or the data collected and (a) program objectives, (b) program offerings, and (c) measures of success in the program. Thus, data must be collected which indicate that the instruments used to identify students do indeed predict success in the program (Feldhusen, Asher, and Hoover, 1984).

Multiple regression analysis is widely used in industry, business, and educational selection and placement, but rarely used in programs for talented youth. It can potentially handle the problems of both combining data and validating the identification system. Multiple regression analysis allows for the most accurate predictions; no other method offers better or more accurate predictions (Meehl, 1954; Sawyer, 1966).

The use of multiple regression analysis as a basis for combining, selecting, and validating the identification/selection data implies the need for linking the major components of the program:
  1. Program Objectives
  2. Identification/Selection Measures (Predictors)
  3. Instructional Program
  4. Measures of Outcomes (Criteria)
A comprehensive arrangement such as this requires an institutional commitment not just to obtain reliable and valid measures for selection, but also to establish reliable and valid measures of criterion performance in the program. A mathematical model (equation) for prediction can be developed using available data from the first cohort of students or from subsequent years. The relative value of each measure or variable used in selecting students can be demonstrated using this method. All components of selection data are combined and each component is weighted based on its contribution to the making of predictions. The multiple correlation coefficient produced by the analysis is an accurate indicator of the relationship between the composite obtained score and the predicted criterion score. It indicates whether the whole identification/selection program is working and whether it needs modification.
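
The following sketch illustrates, with entirely hypothetical data and assumed variable names, how a prediction equation of the kind described above might be fit to a first cohort's selection measures and how the multiple correlation coefficient summarizes the match between obtained and predicted criterion scores. It is a minimal illustration, not the analysis reported in this study.

```python
# Illustrative sketch only: fit a least-squares prediction equation for first
# year residential school GPA (GPA1) from hypothetical selection measures, and
# report the multiple correlation coefficient R. Variable names are assumed,
# not taken from any school's records.
import numpy as np

# Columns: HS-GPA, SAT-M, SAT-V, interview rating (hypothetical cohort data)
X = np.array([
    [3.9, 700, 640, 4.5],
    [3.6, 650, 600, 4.0],
    [4.0, 720, 680, 3.5],
    [3.5, 610, 590, 4.8],
    [3.8, 680, 630, 4.2],
    [3.7, 640, 660, 3.9],
])
y = np.array([3.7, 3.2, 3.9, 3.0, 3.5, 3.4])    # GPA1 at the residential school

X1 = np.column_stack([np.ones(len(y)), X])       # add intercept term
coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)  # regression weights
y_hat = X1 @ coeffs                              # predicted GPA1

# Multiple R: correlation between obtained and predicted criterion scores.
R = np.corrcoef(y, y_hat)[0, 1]
print("prediction equation weights:", np.round(coeffs, 4))
print("multiple R:", round(R, 3))
```

With real data the equation would of course be fit to a full cohort and checked against later classes before being used to guide selection decisions.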

Methodology of the Current Study: An Application of Regression Analysis

A special form for recording pre- and post-admission data of students was prepared and mailed with written instructions to the seven participating schools in June 1991. The process of data collection continued from July through November, 1991, including phone calls, letters, and on-site contacts; additional demographic information about enrollments was collected during field visits to all schools during the period of June to October, 1991.

The pre-admission data included the home school grade point average (HS-GPA), scores on both mathematical and verbal sections of the Scholastic Aptitude Test (SAT-M, SAT-V), the American College Testing (ACT) Assessment, or the Preliminary SAT; interview ratings, file ratings, and composite scores. The criterion performance data included adjusted grade point averages and overall GPA for the first and second years of study at the residential school. An adjusted grade point average (GPA) for each student was obtained by averaging all grades in science, mathematics, and English courses taken in each year. Grade point averages were coded as follows:

HS-GPA = home school grade point average
GPA1 = first year adjusted grade point average, residential school
GPA2 = second year adjusted grade point average, residential school
GPAO1 = first year overall grade point average, residential school
GPAO2 = second year overall grade point average, residential school
Data were collected for all students of the class that graduated in 1991 from five schools (N=636). In addition, one of the two schools that opened in 1990 provided data for fifty students randomly selected from those who finished their junior year in the spring of 1991, while the other school provided data for its total inaugural class of 58 students. Data on gender, attrition rates, and ethnicity were also collected at each school.

On-site visits to six schools were scheduled during the period from June to October 1991. Semi-structured interviews were conducted with directors and coordinators of admissions in six schools to gather additional qualitative information about strengths and weaknesses of schools' selection procedures. An interview protocol containing 12 questions was developed to guide the interviews (Appendix B). All of the interviews were taped. Tapes were transcribed and analyzed to identify important variables.

Population and Sampling

The population for this study consisted of students in seven state supported residential schools of mathematics and science. All these schools share the following characteristics:
  • They are all public, residential, specialized high schools.
  • They provide a two or three year program.
  • They are mathematics and science oriented.
  • They use similar procedures for selecting and admitting students.
In all but one school, students enter in their junior year of high school. In one, they enter in the sophomore year. A breakdown of the sample for which data were collected by school, gender, and status is presented in Table 1. Two schools did not provide information about race/ethnicity.



Table 1

Breakdown of the Sample by School, Sex, Race/Ethnicity, Status

School      N     M     F    AA    H     A     W    NA  Dropped  Others*

A         150    90    60    14    2    32   102     -       42      108
B          50    25    25     3    1     4    42     -        -       50
C          85    54    31     -    -     -     -     -       30       55
D         195    96    99     9    1    28   156     1       26      169
E          64    33    31     8    2     6    48     -        2       62
F          58    38    20     -    -     -     -     -        9       49
G         140    70    70    26    -    13   101     -       26      114

Total     742   406   336    60    6    83   449     1      135      607

N=Number, M=Male, F=Female, AA=African American, H=Hispanic, A=Asian, W=White, NA=Native American
* Includes students who graduated or were still enrolled in the senior class


Instrumentation

Pre- and Post-Admission Data

In order to evaluate the predictive validity of the identification procedures, two types of data were collected. The first included all preadmission data upon which the selection decisions were based, and the second included measures of academic performance in the program during the junior year and/or upon graduation. The first type was used in regression analyses as independent variables or predictors, and the second as dependent variables or criteria to be predicted. The predictor variables used included:

Home School Grade Point Average (HS-GPA). The home school grade point average represents the academic achievement earned by students during the ninth grade and the first semester of the tenth grade at the previously attended high school. Grades given in letter form were transformed to a 4 point scale as follows: A+ = 4.33, A = 4.0, A- = 3.62, B+ = 3.33, B = 3.0, B- = 2.62, C+ = 2.33, C = 2.0, C- = 1.67, D+ = 1.33, D = 1.0, D- = .67. Grades given in percentage form were converted to a 4 point scale by dividing each individual grade by 25. A total of 713 students had GPAs in their files.
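
A minimal sketch of the grade conversions just described is shown below; the letter-grade values reproduce those listed above, and the sample grades are hypothetical.

```python
# Sketch of the grade conversions described above; the mapping reproduces the
# values given in the report (e.g., A- = 3.62, B- = 2.62).
LETTER_TO_POINTS = {
    "A+": 4.33, "A": 4.0, "A-": 3.62,
    "B+": 3.33, "B": 3.0, "B-": 2.62,
    "C+": 2.33, "C": 2.0, "C-": 1.67,
    "D+": 1.33, "D": 1.0, "D-": 0.67,
}

def to_four_point(grade):
    """Convert a letter grade or a percentage grade to the 4 point scale."""
    if isinstance(grade, str):
        return LETTER_TO_POINTS[grade.strip().upper()]
    return grade / 25.0          # percentage grades are divided by 25

print(to_four_point("A-"))       # 3.62
print(to_four_point(92))         # 3.68
```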

Scholastic Aptitude Test (SAT). Separate verbal (SAT-V) and mathematics (SAT-M) scores are reported on a 200 to 800 scale. The SAT score is a transformed standard score with a mean of 500 and a standard deviation of 100. Scores of the SAT were found for 542 students from five schools. Sixty students had scores on the PSAT, which were converted to the SAT scale. One of the residential schools used the ACT instead of the SAT, and another school accepts more than one kind of standardized test.

File Ratings. In all schools an overall rating of the documents in each applicant's file is assigned by a file review committee. File ratings were collected from school records for a total of 656 students. Whether serving as final indices for selection or as part of selection indices, these ratings were used in regression analyses as a predictor variable entitled "file rating."

Interview Ratings. A personal interview is a main component of the selection criteria in most residential schools. Data on interview ratings were collected for 501 students.

Composite Scores. A composite score (or file rating in some schools) is used as the final index for selection. Two methods were identified for generating composite scores, the clinical and statistical methods. The clinical method means that someone examines the file and makes a professional judgment of the composite score. The statistical method means that scores are combined using a statistical guide that yields the composite score. Different formulas were used at the different schools in the statistical method to calculate composite scores. Composite scores were available for 619 students.
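
As an illustration of the statistical method only, the following sketch standardizes hypothetical component scores and combines them with assumed weights; the actual formulas used by the schools differed from one another and are not reproduced here.

```python
# One possible "statistical" composite (the schools' actual formulas differed
# and are not shown here): convert each component to a z score and combine
# the z scores with chosen weights.
import statistics

def zscores(values):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical component data for five applicants.
hs_gpa    = [3.9, 3.6, 4.0, 3.5, 3.8]
sat_total = [1340, 1250, 1400, 1200, 1310]
file_rate = [4.5, 4.0, 3.5, 4.8, 4.2]
weights   = {"gpa": 0.4, "sat": 0.4, "file": 0.2}   # assumed, not a school's policy

composites = [
    weights["gpa"] * g + weights["sat"] * s + weights["file"] * f
    for g, s, f in zip(zscores(hs_gpa), zscores(sat_total), zscores(file_rate))
]
print([round(c, 2) for c in composites])
```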

Gender. The gender of each student was determined and the data were used in both correlational and regression analyses as an independent variable. Information was collected and coded with 1 for males, and 0 for females for 742 students.

The criterion measures of success used for regression analyses included the following:

First Year Adjusted Grade Point Average (GPA1). The adjusted GPA1 represents an average grade, computed on a 4 point scale, for only mathematics, science, and English courses which are required or studied in the first and second years of enrollment at the residential school. All schools included in this study are mathematics and science oriented. For selection purposes most of these schools also adjust applicants' previous GPA to include only mathematics, science, and English courses. The decision to compute an adjusted GPA as a major criterion of success was based on an assumption that the selection criteria should reflect the orientation of schools. Data for 644 students were collected from six schools.
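
The following sketch illustrates the adjusted GPA computation with a hypothetical course record: only the mathematics, science, and English grades enter the average.

```python
# Sketch of the adjusted GPA computation: average only the mathematics,
# science, and English grades on the 4 point scale (course list hypothetical).
ADJUSTED_SUBJECTS = {"mathematics", "science", "english"}

def adjusted_gpa(courses):
    """courses: list of (subject, grade_points) for one student's first year."""
    grades = [g for subject, g in courses if subject.lower() in ADJUSTED_SUBJECTS]
    return sum(grades) / len(grades)

first_year = [("mathematics", 3.62), ("science", 4.0), ("english", 3.33),
              ("history", 2.62), ("physical education", 4.0)]
print(round(adjusted_gpa(first_year), 2))   # averages only the first three grades
```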

Second Year Adjusted Grade Point Average (GPA2). An adjusted grade point average for the second year (GPA2) was computed in the same way as for the first year. The GPA2 was used to examine the prediction power of the selection measures after two years of admission. Four hundred and eighty-eight students had grades for the mathematics, sciences, and English courses of the second year in their files.

First and Second Year Overall Grade Point Averages (GPAO1, GPAO2). An overall grade point average for 334 students from three schools was calculated for the first year, and for 315 students upon graduation. The first year overall grade point average represents all required courses studied by the student during the first year, while the second overall grade point average represents all courses studied by the student during two years. This group also included all students who dropped out before completion of two years of studies.

In addition to predictor and criterion variables, information was collected on ethnicity of students in order to examine the composition of student populations in terms of minority representation. Students were classified into five groups: Native Americans, African Americans, Asians, Hispanics, and Whites. This information was available for 599 students (graduated or still enrolled by the end of the school year 1991-1992) distributed as follows:

Race/Ethnicity        Number    Percent
Whites                   449         75
Asians                    83         14
African Americans         60         10
Hispanics                  6          1
Native Americans           1         .1
Based on these data, it was obvious that ethnic groups were disproportionately represented in all schools. Asians were overrepresented while African Americans and Hispanics were underrepresented. The total enrollment of all residential schools in the fall of 1992 was 2,993. As shown in Table 2, Asians maintain their status as the dominant minority; Whites, Native Americans, and Hispanics are fairly represented; and African Americans are underrepresented in the range of 7-19%, based on statistics of the seven states' populations and their public school enrollments.



Table 2

Distribution of Residential Schools' Enrollment by Race/Ethnicity (Fall, 1992)

School        W     AA     H      A    NA  Other

A           196     24     -      -     -     15
B           356     37    33    188     -     15
C           214     16     4     15     -     24
D           314     26     3     60     -      4
E           211     40     1     20     1      2
F           351    121     7     63     7      -
G            99      6     4     18    15      -
H           105     15     1     10     -      -
I           248      9    27     68     -      -

Total     2,094    294    80    442    23     60
Percent     70%    10%    3%    15%    1%     2%

W=White, AA=African American, H=Hispanic, A=Asian, NA=Native American


Methods of Data Analyses

For each student, there were two profiles: the admission profile and the residential school achievement profile. The admission profile contained all quantitative data gathered during the identification process including home school grades (HS-GPA), SAT scores, and ratings of characteristics and interviews by selection committees. The achievement profile contained information about the student's achievements during the first and/or the second year of enrollment including grades in mathematics, science, and English courses. Information about gender and race/ethnicity were included in both profiles.

Descriptive statistics including the mean, median, mode, standard deviation, lowest score, and highest score were determined for HS-GPA, SAT-M, SAT-V, GPA1, GPA2, file ratings, interview ratings, and composite scores. Normal distribution statistics for grades and scores, histograms, and graphic representations of descriptive statistics were generated for the HS-GPA, SAT-M, SAT-V, GPA1, and GPA2. Analyses of variance were conducted for school, gender, and racial background variables, and Duncan's multiple-range test was used to examine the differences between means when the analysis of variance indicated a significant difference. The p values reported for the analyses of variance were the exact values produced by the computer when differences were significant at p < .05. Otherwise, the alpha level for tests of significance was set at .05.
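
As an illustration of the analysis of variance step, the sketch below compares hypothetical SAT-M scores across three schools with a one-way ANOVA; Duncan's multiple-range test, which would follow a significant F test, is not shown.

```python
# Illustrative only: a one-way analysis of variance of SAT-M scores across
# three hypothetical schools, of the kind described above. A post hoc
# multiple-range comparison would follow when the F test is significant.
from scipy import stats

school_a = [680, 700, 650, 720, 690]
school_b = [640, 660, 630, 650, 670]
school_c = [700, 710, 690, 730, 720]

f_stat, p_value = stats.f_oneway(school_a, school_b, school_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # report the exact p when p < .05
```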

In order to determine the nature and magnitude of relationships among predictor and criterion variables, Pearson product-moment correlations were calculated and used to generate a matrix involving both predictor and criterion variables for each school and for the combined data from all schools. The ratio of the number of students to the number of variables used in both correlational and regression analyses was at least 15:1, except in two schools for which it was approximately 8:1. Three statistics were reported for each combination of two variables in the correlational matrices: the correlation coefficient, the p level of significance, and the number of observations. The number of observations was omitted when it was the same as that for the majority of the correlational combinations.
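
The correlational summary can be illustrated as follows with hypothetical data and assumed variable names; for each pair of variables the Pearson r, its p value, and the number of paired observations are reported.

```python
# Sketch of the correlation summary described above: for each pair of
# variables, report the Pearson r, its p value, and the number of paired
# observations (data are hypothetical).
from itertools import combinations
from scipy import stats

variables = {                      # assumed variable names
    "HS-GPA": [3.9, 3.6, 4.0, 3.5, 3.8, 3.7],
    "SAT-M":  [700, 650, 720, 610, 680, 640],
    "GPA1":   [3.7, 3.2, 3.9, 3.0, 3.5, 3.4],
}

for (name1, x), (name2, y) in combinations(variables.items(), 2):
    r, p = stats.pearsonr(x, y)
    print(f"{name1} x {name2}: r = {r:.2f}, p = {p:.3f}, n = {len(x)}")
```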

An optimum multiple regression equation for prediction was generated for each school, for the first year adjusted grade point average, and for the second year adjusted grade point average, if available. The predictor variables used in the analysis of individual school data were not always identical, depending on the school's system. Because of the exploratory nature of this study, a relatively lenient level of significance (p = .15) was chosen as the statistical criterion for the inclusion or exclusion of variables in the regression equations.

A general stepwise multiple regression procedure was used. The criterion for inclusion or exclusion of variables was the extent to which a variable contributed to the improvement of prediction. Regression equations were assembled using beta weights (standardized regression coefficients). Beta weights provide information about the relative predictive value of each predictor variable compared to other variables in the equation. Separate analyses for males and females were conducted; however, the small number of students classified as African American, Hispanic, and Asian did not allow for analysis by ethnicity.
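
The sketch below illustrates a simplified forward-selection procedure of the general kind described above, using hypothetical data; it applies the p < .15 entry criterion and reports standardized (beta) weights, but it is not the specific software routine used in this study.

```python
# Simplified forward-selection sketch (not the exact package routine used in
# the study): variables enter the equation if their coefficient's p value is
# below the .15 criterion; standardized (beta) weights are obtained by fitting
# on z-scored variables.
import numpy as np
import statsmodels.api as sm

def zscore(a):
    a = np.asarray(a, dtype=float)
    return (a - a.mean()) / a.std(ddof=1)

def forward_select(X, y, names, p_enter=0.15):
    """X: dict of predictor lists, y: criterion list, names: candidate entry order."""
    y_z = zscore(y)
    selected = []
    for name in names:                       # simplified: one pass over candidates
        trial = selected + [name]
        X_z = np.column_stack([zscore(X[n]) for n in trial])
        fit = sm.OLS(y_z, sm.add_constant(X_z)).fit()
        if fit.pvalues[-1] < p_enter:        # keep the variable if it meets p < .15
            selected = trial
    if not selected:
        return [], {}
    X_z = np.column_stack([zscore(X[n]) for n in selected])
    final = sm.OLS(y_z, sm.add_constant(X_z)).fit()
    return selected, dict(zip(selected, np.round(final.params[1:], 3)))

# Hypothetical data for eight students.
X = {"HS-GPA": [3.9, 3.6, 4.0, 3.5, 3.8, 3.7, 3.4, 3.95],
     "SAT-M":  [700, 650, 720, 610, 680, 640, 600, 730],
     "FILE":   [4.5, 4.0, 3.5, 4.8, 4.2, 3.9, 4.1, 4.4]}
y = [3.7, 3.2, 3.9, 3.0, 3.5, 3.4, 2.9, 3.8]

print(forward_select(X, y, ["HS-GPA", "SAT-M", "FILE"]))
```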

Interviews with six directors and six coordinators of admission were taped, transcribed, content analyzed, and organized around the interview questions. Frequencies were counted for each question. Samples of responses were also reported for each question. Promotional literature, together with information gathered from interviews, was used to present an overview of the identification and selection systems, and to supplement the quantitative aspects of the validation process.

Results

Results and conclusions of the study are presented in four sections as follows:
  1. overview of the selection and admission programs;
  2. descriptive statistics of validation data;
  3. correlational and multiple regression analyses of pre- and post-admission data; and
  4. analysis of interviews.
The results will be presented without specific identification of the participating schools.

Overview of Selection Admission Programs

Admission to state supported residential schools of mathematics and science is a multiphase competitive process. Students are selected from a statewide pool of applicants during their tenth grade year, except for the Illinois Academy, where the selection process takes place during the ninth grade. A review of the schools' promotional literature on admissions and interview information indicates that the major components of admission systems in residential schools are almost identical. The overall process may be broken down into eight stages:
  1. Preparing and mass mailing print materials describing the school's programs and its admission processes to public and private high schools, public libraries, and other civic organizations;
  2. Conducting onsite regional presentations and meetings in selected locations throughout the state;
  3. Broadcasting public service announcements on both television and radio;
  4. Doing press releases on admission procedures and school programs;
  5. Organizing a visitation for prospective students to tour the school and talk with students and teachers;
  6. Designating one or more full or part time recruiters in targeted areas to encourage greater participation of minority and underrepresented populations;
  7. Setting up a statewide toll free telephone number to encourage inquiries about the school's programs and its admission procedures all year long or during the recruitment season; and
  8. Establishing regional support groups and/or booster clubs to promote the school's programs and maintain a systematic broad network throughout the state.
The purpose of these activities is to develop an applicant pool of as many qualified students as possible.

State resident students seeking admission who are enrolled in the tenth grade (or in the ninth grade for the Illinois Academy) are asked to complete and submit an application form ranging from two to seven pages in length. Typically, the packet includes the application form and three or four recommendation forms. The application form has sections asking for information on the applicant, the family, the applicant's educational background, his/her interests, activities, and accomplishments. Specific essay questions are also often required. The application must be signed by the applicant and his/her parent or guardian, and postmarked before the school's deadline.

Recommendation forms are completed by teachers of mathematics, science, and English; and/or an administrator, counselor, or teacher who knows the applicant. Before distributing them to teachers, the student and a parent or guardian complete and sign an information release section included in the recommendation forms. The forms include a behavioral Likert-type rating scale of general characteristics of superior students. In addition, they include open-ended questions designed to elicit information about the applicant's aptitude, ability to adapt to the rigors of academic and social life, motivation, and any other significant information. In some cases a special section of the application form is completed by a counselor. The recommendation forms are usually mailed directly to the school. Occasionally, they are collected by the applicant in sealed envelopes and mailed to the school admissions office. The minimal admission criteria include:
  1. Resident of the state of......
  2. Current enrollment in tenth grade (or ninth grade for the Illinois Academy);
  3. School performance above average in most subjects and superior in science and mathematics; (The GPA for 9th grade and the first semester of 10th grade are usually required.)
  4. Interests in related areas such as electronics, research, computers, and math games;
  5. Evidence of intellectual curiosity, analytical thinking, and imagination;
  6. Strong personal desire to attend the school;
  7. High scores on a standardized test of aptitude or ability;
  8. Evidence of strong interest in science and math;
  9. Samples of writing;
  10. Letters of recommendation from high school teachers of science, mathematics, and English; and/or a high school principal or counselor;
  11. Interview; and
  12. Other criteria (e.g., diagnostic tests, Raven's Progressive Matrices, Test of Standard Written English).

File Review

Completed files are evaluated and rated by a review committee composed of selected educators and lay persons from across the state along with representatives from the schools' faculty and staff. Each committee has between two and five members. The composition and the task of file review teams vary from one school to another. In some instances the committee is composed of at least one representative from the school, one representative from the zone or legislative district from which an application originates, and an additional member chosen at random from a pool of available reviewers. In other instances, all committee members are from outside the zone from which an application originates as a precaution against bias.

Files are screened without inclusion of test scores and transcripts in some schools, while in others they are submitted to reviewers with all relevant data included. In all schools except one, a reviewer is required to assign a final holistic score for each file reviewed. In the one exception, the content of the file is divided into different areas. The number of people on each review team corresponds to the number of these areas. Each member rates an assigned section in all files given to the team. Guidelines and instructions are provided for reviewers to facilitate their task.

A framework for screening an applicant file is given in writing along with intensive short training for reviewers. The framework for evaluation generally includes three main sub-areas: accomplishments, achievement, and aptitude. The first sub-area involves evidence of an applicant's potential for mathematical and scientific reasoning, communication skills, interpersonal relations, and school performance. Scores from standardized tests and transcripts, when included in applicants' files, provide evidence on aptitude and achievement.

Each piece of information is used in this phase to get a total picture of the applicant's accomplishments from the perspective or the context of the student's home community. Therefore, information about the home school district, the size, the economic base, and the average achievement of the applicant's age peers is provided by some schools. After members rate each file independently they come to a final consensus, or maintain a difference in their ratings on any one file within a restricted range (e.g., half a point on a scale of five points). Otherwise, another committee is asked to review problematic cases. A final score is generated by summing the individual reviewers' scores. The file ratings are used in some schools as a final index by which applicants are rank ordered for selection, while in other schools file ratings are treated as one component of the selection index. This latter index is generated mathematically by combining a standardized test score (usually the SAT), high school GPA, a file rating, and the interview rating, and is usually assembled by an external consulting authority.

On-campus Interview

Based on the file ratings and/or scores and grades of applicants, a list of finalists and alternates or a list of semifinalists is prepared by the school admission staff or admission committee. In both cases candidates are invited for a tour and an interview/audition day on campus. In all schools except one, the interview serves as one of the final selection criteria. An interview questionnaire or a rating scale is usually used for rating invited candidates on their demonstrated verbal and social skills, emotional maturity, academic goals, and personal ambition. Independent ratings of a candidate are conducted by each member of the interview committee. In some cases only one person is assigned the task of assessment.

The interview, as mentioned above, does not always have the same function. According to promotional literature of one school, it is not intended to uncover any new information that would reverse earlier decisions made by the admission committee. Rather, it is an opportunity for both parties to exchange information, and to affirm that the student is the one making the decision to attend the academy. Nevertheless, the evaluation may not lead to a recommendation for admission. In such cases the candidate has the right to a second interview. At another school only those applicants identified through the file review process as having discrepancies or ambiguities in their records or as being exceptionally young may be asked for a personal interview.

In addition to interviews and tours, candidates may audition in their talent areas or submit portfolios of original artwork or photography. Additional placement and/or selection tests may be administered during the interview day. Parents are requested to accompany their children, and general sessions for parents and students are provided to inform them more fully about rules, regulations, facilities, and expectations of the school.

Selection of Finalists

In this phase of the admission process, whether prior to or after the interview assessment, the task of making selection decisions is carried out. Any admission committee or any individual consultant in charge of the decision making process has to take into account political and logistical constraints in addition to an array of data including test scores, school grades, file ratings, and interview ratings. An analysis of the schools' promotional literature on admission revealed that selection of finalists is not a straightforward process. Rather, it is complicated and multifaceted. The first step is to develop a weighting system for each selection variable on the basis of its importance in relationship to other variables. In all schools weighting schema have been developed on the basis of professional judgments.

The second step is the computation of a composite selection score for each candidate. That is, all the scores and ratings of a student are combined in some way to get a single total score. The computation procedure and the form in which composite scores are calculated differ from school to school. They may be in raw score form or standard score form. The third step is rank ordering candidates based on their composite scores by region, gender, race/ethnicity, and at large. Beyond this point most schools have adopted a selection policy corresponding to legislative mandates or expectations of state authorities.
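
The second and third steps can be illustrated with a short sketch. The column names and weights below are invented for the example and are not taken from any participating school; the sketch standardizes each measure, applies judgmental weights, sums them into a composite, and rank orders the candidates.

```python
# Illustrative composite-score computation; weights and column names are hypothetical.
import pandas as pd

def composite_rank(df: pd.DataFrame, weights: dict) -> pd.DataFrame:
    out = df.copy()
    composite = 0.0
    for col, w in weights.items():
        z = (df[col] - df[col].mean()) / df[col].std()   # standard-score form
        composite = composite + w * z
    out["composite"] = composite
    out["rank"] = out["composite"].rank(ascending=False, method="min")
    return out.sort_values("rank")

# Example with invented weights:
# ranked = composite_rank(candidates, {"HS_GPA": .4, "SAT_M": .2, "SAT_V": .1,
#                                      "file_rating": .2, "interview": .1})
```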

Approximately 65% of the semifinalists and 65% of the finalists are selected purely on the basis of the objective ranking of all candidates. The remaining students are selected by the director of the school and the director of admissions to balance race/ethnicity, gender, region, and high school representation.

Appeal Review

The admission policies and procedures of most state supported residential schools of science and mathematics include statements on appeals. Applicants denied admission may appeal the decision of the admission committee. A written request for review is accepted within two weeks of the date the notification of denial is received. The basis for the appeal must be clearly stated in the request. Appeals are reviewed by officials from the school or by a special committee appointed for this purpose in accordance with the school's policy. The charge to whoever is responsible for the review is to determine whether appropriate procedures were followed in all processing stages for the student seeking a review. The original decision may be affirmed or reversed. If the student does not agree with the decision of the review committee, he or she may appeal again, within a limited period, to the school director. The decision of the director in such cases is final.

Descriptive Statistics of Validation Data

In order to evaluate the predictive validity of identification/selection procedures, two types of data were analyzed. The first included pre-admission data upon which the selection decisions were based, and the second included outcome measures of success or performance in the school after one year of study and/or upon graduation.

Predictor Variables

Home School Grade Point Average (HS-GPA). Home school grade point averages were available for 713 students. The highest HS-GPA was 4.14 and the lowest was 2.33, giving a range of 1.81. The mean of the HS-GPA was 3.77, and the standard deviation was .27. Analysis of variance was used to test for possible variations among HS-GPA means for different schools. The F-test indicated that at least one of the GPA means was significantly different from one of the other means (p=.0001).

Means and standard deviations of HS-GPA were computed for male/female and graduated/dropped students (see Table 3). A t-test was used to determine whether there were significant differences between means. While there was no significant difference between male/female means, a significant difference was found between those who dropped out before graduation and those who graduated (p=.001). The mean HS-GPA earned was higher for graduated students. Analysis of variance was used to test for differences among means of racial groups. Results of the analysis indicated that there were no significant differences among means.



Table 3

Analysis of Entry HS-GPA for Male/Female, Graduated/Dropped, and All Students

Variable     N     Mean   STD   Lowest   Highest   t-test   p-value

Male         384   3.76   .28   2.33     4.14      -1.7     .09
Female       327   3.79   .26   2.50     4.10

Graduated    507   3.78   .25   2.60     4.00
Dropped      107   3.65   .38   2.33     4.10      -3.3     .001

All          713   3.77   .27   2.33     4.14




Scholastic Aptitude Test-Math Scores (SAT-M). The highest SAT-M score was 800 and the lowest SAT-M was 330, a range of 470. The mean of the SAT-M for all students' scores was 599; the median was 600; and the mode was 660. The standard deviation of the distribution of SAT-M scores was 94.

Analysis of variance indicated that there were large differences among schools on the SAT-M (p=.0001). The difference between the highest (649) and lowest (532) mean was 117 points, more than one standard deviation. The standard deviations of scores across schools ranged from 62 to 88.

Analysis of variance and Duncan's multiple range test of SAT-M scores for different racial groups of the 1991 class also indicated that there were large differences in mean scores. As shown in Table 4, Asian students had the highest mean, more than one standard deviation above African American students.



Table 4

Analysis of SAT-M Scores by Race/Ethnicity

Race/Ethnicity      N     Mean   SD    Lowest   Highest

African American    34    510    79    350      790
Hispanic            6     545    66    460      640
White               342   589    96    330      800
Asian               69    629    88    370      770

All                 542   599    94    330      800




There was no significant difference between Asian and White students' mean SAT-M scores, and there was no significant difference between Hispanic and African American students' mean scores. The Asian mean score was significantly higher than the means for both Hispanics and African Americans. The mean score for White students was significantly higher than African Americans', but not significantly different from the mean for Hispanic students.

Means and standard deviations were computed for male and female and graduated/dropped students on SAT-M scores to test whether there were significant differences among means. As shown in Table 5, the t-test indicated that there was no significant difference between students who dropped before graduation and those who graduated. The SAT-M mean for male students was significantly and considerably higher than the mean for female students (p=.0001).



Table 5

Analysis of SAT-M Scores for Male/Female, Graduated/Dropped Students

Variable     N     Mean   Std Dev   Std Error   Lowest   Highest   t-test   p-value

M            295   617    93        5           330      800       5.18     .0001
F            245   576    92        6           330      770

Graduated    390   594    96        5           330      800
Dropped      102   600    100       10          330      800       .6       .6




Scholastic Aptitude Test-Verbal Scores (SAT-V). The highest SAT-V score was 790 and the lowest was 240, a range of 550. The overall mean of the SAT-V scores was 520; the median was 520; and the mode was 530. The standard deviation of the SAT-V scores was 80, with a range of 36 points across schools.

Analysis of SAT-V scores for different groups of the 1991 class is summarized in Table 6. White students had the highest mean score with more than one-half standard deviation above African American students. Variability of scores was highest for Hispanic students and lowest for African American students. Analysis of variance indicated that there was a significant difference among means of different groups.



Table 6

Analysis of SAT-V Scores by Race/Ethnicity

Race/Ethnicity      N     Mean   SD    Lowest   Highest

African American    34    469    68    240      600
Asian               69    495    82    300      680
Hispanic            6     492    120   350      690
White               342   529    87    300      790

All                 542   520    80    240      790




Analysis of variance was used to examine possible variation among schools on the SAT-V scores. The F test was significant (p=.0001). Results of t-tests indicated that there were no significant differences between males and females nor between graduated and dropped students on the SAT-V scores (Table 7). The test for equality of variances indicated that there was a significant difference between male and female students.



Table 7

Analysis of Mean SAT-V Scores for Male/Female, Graduated/Dropped Students

Variable     N     Mean   Std Dev   Std Error   Lowest   Highest   t-value   p-value

M            295   523    80        5           240      790       .6        .5
F            245   518    91        6           290      730

Graduated    390   521    90        4.5         240      790
Dropped      102   518    79        8           290      720       -.4       .7




File Ratings. Different types of scales were used to rate applicants' files. Means and standard deviations for individual schools are presented in Table 8. School C did not provide numerical ratings of applicant files; ratings in this school were assigned on a five point Likert type scale as follows: recommended for admission very highly, recommended highly, recommended, recommended with reservation, not recommended. File ratings for schools B and C actually represent final indices for selection. Therefore, file ratings for these schools were analyzed both as composite scores and as file ratings. Statistics for the file ratings presented in Table 8 may express, to some extent, the degree of importance each school places on this selection criterion. Analysis of variance was not possible because there was no common scale for comparing file ratings among schools.



Table 8

Analysis of File Ratings by School

School   N     Mean   Median   Mode   SD   Lowest   Highest

A        150   67     70       70     9    30       80
B        50    184    180      170    23   40       150
D        194   70     70       65     11   30       80
E        64    19     19       17     6    8        35
F        58    79     80       78     4    60       84
G        140   11     11       10     2    7        15




Interview Ratings. Interview ratings, like file ratings, were assigned using different scales for different schools depending on the importance of the interview for the selection process. Interview ratings were available for 501 students. They were used in the original raw score form for correlational and regression analyses of individual schools. As shown in Table 9, two schools did not include interviews among their selection criteria. Five schools provided interview ratings assigned on scales the upper limits of which ranged from 4 to 65.



Table 9

Analysis of Interview Ratings by School*

School   N     Mean   SD    Median   Mode   Lowest   Highest

C        53    4      .4    4        4      3        4
D        188   4      .9    4        5      1        5
E        62    8      3.5   9        6      1        17
F        58    53     8.4   55       60     26       65
G        140   5      1.1   6        6      2        6

* Interviews are not part of the composite score in schools A & B


Gender. The sample included 406 male students and 336 female students. The analysis of enrollment data for the seven schools, collected in the Fall of 1992, indicates that females outnumbered males in two schools and males outnumbered females in two schools. In the other three schools enrollment was almost even. Table 10 shows the distribution of male and female students in each school.



Table 10

Distribution of Residential Schools' Enrollment by Gender (Fall, 1992)

School   Total   Male    Female   Dominant Gender

A        235     111     124      None
B        629     348     281      Male (55%)
C        273     115     158      Female (58%)
D        407     178     229      Female (56%)
E        275     134     141      None
F        549     289     260      None
G        142     85      57       Male (60%)

Total    2,510   1,260   1,250    None




Criterion Variables

First and Second Year Adjusted Grade Point Averages (GPA1, GPA2). Data on first year grades in mathematics, science, and English were available for 644 students. The first year adjusted grade point averages were computed and coded as GPA1. One school was excluded from this category because its grading system differed from that of the other schools and it offered a wide range of coursework options; it was therefore inappropriate to produce an adjusted grade point average for comparison. The highest GPA1 was 4.0 and the lowest was 1.33, a range of 2.67. The mean of the GPA1s was 3.21; the median was 3.33. The standard deviation of the distribution of GPA1s was .60.

Analysis of variance indicated that there was a significant difference among means (p=.0001). Analysis of variance for ethnicity indicated that differences among groups were significant. The highest mean was for Asian students, and the lowest mean was for Hispanic students. The t-test showed no significant differences between means of male and female students. Variances of male and female students were also not significantly different. The GPA1 means for students who graduated or were still attending school were significantly different from the mean of students who dropped (p=.0001). The higher mean was for students who were attending school.

Second Year Grade Point Average (GPA2). Second year grades were available for only 480 students. Grade point averages were computed and coded as GPA2. Two schools did not have these data because they were starting their second year of operation at the time of data collection. In addition, a number of students discontinued their study in the other schools. The highest GPA2 was 4.00, and the lowest, 1.00. The mean for GPA2 was 3.16, the median, 3.76.

Analysis of variance indicated that there was a significant difference among GPA2 means for schools (p=.0001). A t-test for the gender variable indicated no significant difference between means of male and female students. Variance of GPA2 female students was significantly greater than that of male students.

Analysis of variance of GPA2 means for race/ethnicity indicated that there were no significant differences. A t-test for GPA2 means of students who graduated and students who dropped school indicated a significant difference. The mean GPA2 for graduated students was 3.20, and for dropped students was 2.59.

First and Second Year Overall Grade Point Averages (GPAO1, GPAO2). Data on first year overall grade point average were available for 334 students. The highest GPAO1 was 4.0 and the lowest was .67, a range of 3.33. The mean for GPAO1 was 3.15, the median, 3.24.

Second year overall grade point averages were available for 315 students. The highest GPAO2 was 4.0 and the lowest was 1.0, a range of 3.0. The mean for GPAO2 was 3.28; the median was 3.30.

Correlational and Regression Analyses

Correlation coefficients between the predictor variables used in selecting students at the different schools and the criterion variables were calculated to determine whether there were any trends or differences in their relationships (Table 11).



Table 11

Correlations of Predictor Variables With Criterion Variables

Predictor      GPA1      GPA2      GPAO1     GPAO2

HS-GPA         .53 **    .41 **    .51 **    .42 **
SAT-M          .27 **    .28 **    .34 **    .12 *
SAT-V          .18 **    .33 **    .28 **    .23 **
ACT            .25 **    .23 **
File Rating    .04       -.18 **   .40 **    -.14 *
Interview      -.15 **   -.02      .27 **    .11
Composite      -.08      .28 **    .18 **    .18 **
Gender         -.02      -.09      -.03      -.10 *

* indicates significance at the .05 level
** indicates significance at the .01 level


Of all predictor variables used in this study, the HS-GPA had the strongest relationship with all criterion variables, ranging from .41 to .53 (p < .01). Among the other predictor variables, only the SAT-M, SAT-V, and ACT correlated significantly with all of the criterion variables for which data were available. The relationship between interviews and criterion variables was inconsistent. The file ratings were similarly inconsistent: two of their correlations with the criteria were negative, one was near zero, and only one was positive and significant. The correlations between composite scores and the criteria were much lower than those of the HS-GPA and the standardized tests.

Regression Analyses

As shown in Table 12, the HS-GPA was the first variable selected in six of the nine regression equations. It was selected first in all cases where the number of degrees of freedom was greater than 100. In four of these cases it was followed by the SAT-M. The R-square values in these cases ranged from .30 to .44, while the R-square values in the models not led by the HS-GPA ranged from .15 to .29. Two cases with one-variable equations, namely school B and school C, had very low values of R-square. File ratings were selected first in three models, one of which was a single-variable model representing a selection index, and one of which had a negative correlation coefficient. The interview rating was selected third in one model.



Table 12

Summary of Stepwise Selection for Criterion Variable GPA1 by School

School   DF    First    Second   Third       R2

A        141   HS-GPA   SAT-M    Sex         .44
B        50    File                          .15
C        64    HS-GPA                        .07
D        184   HS-GPA   SAT-M    File        .40
E        56    File     Sex      Interview   .29
F        54    File     HS-GPA   Sex         .22
G        140   HS-GPA   File     ACT         .38
A & D    333   HS-GPA   SAT-M                .33
All      416   HS-GPA   SAT-M    SAT-V       .30




Analysis of Interviews

Interviews with directors and admission coordinators at six residential schools were taped, transcribed, content analyzed, and summarized in relation to the research questions. Frequencies of responses were counted and reported for each question or each category.

  1. How do you maintain equal access to information about your program?
Several strategies were mentioned in response to this question. Similar procedures were used to reach statewide school systems and, to some extent, to contact prospective students. The following strategies were typical for most schools:
  1. mailing out packets of information to superintendents, principals, coordinators of gifted education, guidance counselors, and department heads of mathematics and science (N=12);
  2. holding evening meetings for prospective students and their parents at strategic points statewide (N=12);
  3. hosting visitor information days on campus (N=10);
  4. involving students in advertising and recruitment work in their home schools (N=10);
  5. issuing press releases and radio announcements (N=10); and
  6. hiring recruiters (N=6).
Nevertheless, the outcome of these efforts, as reflected in applicant pool size, was disappointing in all cases but one. For instance, one director described the development of his school's applicant pool size over a period of four years by saying, "The first year we had 186; the second year we had 224; the next year we had 263 and we were elated with that; this year we only had 237." Since this school admits between 140 and 175 students each year, the problem is rather serious. The director of one of the schools summarized the situation by saying, "We rely heavily on the local school districts to disseminate information within schools. That's not always a reliable way." A director of another school explained: "We're confident that the information gets to schools. From that point, we have no control over who in the school actually has access." Another director made the same point by saying, "We had only one school district that would not let us in at all. We had, I think, eight school districts that would not let us talk to students but allowed us to talk with staff." In addition, five out of six schools do not have adequate human resources to maintain direct recruitment of students statewide. Consequently, it seems that the process of providing equal access to information about programs of residential schools is far from optimum.

  2. Are your faculty involved in the identification and selection process?
Although a variety of responses was reported, ten administrators out of twelve answered "yes." The extent of involvement and their role in the process varied widely from school to school. One director answered the question by saying, "A very minor involvement. The English people put together a list of topics on which we ask students to write essays. They do not serve on any of the committees. They do not have any direct contact with the students. So, they really do not, it's almost none." Another director responded, "Some of them are involved in the recruiting process. They actually visit schools. As far as reading the files and evaluating the overall profiles of students, the faculty really are not involved." A third director said, "Yes, with only one or two exceptions, all of our faculty are involved in the selection process in one form or another."
  3. Do you provide training for all people involved in the identification and selection process?
All respondents reported that they provide some kind of training for those who are involved in the process of identification and selection. However, the adequacy and quality of the training are limited. Except for one school, nothing was mentioned about the importance of including people having expertise in measurement and evaluation as trainers or as members of selection or reviewing committees. One director answered the question as follows, "The faculty is trained before they go out, and then we bring in the citizens committee and the faculty and our consultant, our director of admissions, and I do a 2-3 hour workshop on how to evaluate the written material."

Another director responded by saying, "I'm not personally familiar with the extent of the training, but they are certainly briefed in regard to what they should be looking for. Some kind of inservice is provided before they commence reading the files, but I don't know the details." A third one described the training process provided during a two day file review session as follows, "It's very labor intensive and we really take the first half of the first day to train them in the file review process."

  4. Do you accept exceptionally talented students who are enrolled in lower grades than the tenth and/or the ninth grades?
All administrators interviewed answered, "No," except for two in one school. From all responses to this question it seems that there are no systematic recruiting programs directed to those students who may be extremely gifted but do not meet the grade requirement as stated by rules or laws. A typical response was: "No, we do not. Our admission policies require that applicants have completed the tenth grade." However, the director of one school answered, "Yes, and many come from the 8th grade and skip the 9th grade. About 12% of our students come directly from the 8th grade, have completed 8th grade, and go into 10th grade. We do not have an age cut-off. We say students have to have completed the equivalent of 9th grade." Another director said, "No, we don't. We simply have permission from the board to work with those students on an individual basis. We are not going out and recruiting them. Only upon request will we offer our services."

  5. Do you use the information gathered during the selection process to plan instruction?
All schools except one use the information for placement purposes only. One director answered, "We use the essays that the applicants submit to us. The English teachers evaluate them and recommend which level of English they go to." A second director responded, "Probably, the information that we use in planning curriculum has to do with what they are taking, courses they are taking in the local high school." A third director presented a different perspective by saying, "Absolutely. We ask our instructors to read the files of the students who are in their classes. There are some who believe that that's a danger, that that's going to bias the teacher, but I think there's nothing in there that would bias the teachers negatively about students."

Another director gave a precise statement regarding the major role of the identification system of his school, "It is not a case where we redefine the curriculum on an annual basis with each applicant pool that we have, because we have designed our curriculum to meet the needs of students whom we have characterized in our mission statement. Then it is our identification method that is intended to select students who are consistent with that statement of mission."

  6. Do you offer any kind of remedial instruction for newcomers, and, if yes, in what areas and for how many?
Without exception all six directors and six coordinators of admissions answered, "Yes," although some of them did not like the term "remedial." In most cases, special programs are given in mathematics. In some schools such programs are also provided in science and English. On the average 10% of new students need some kind of support in order to catch up with what is already planned. One year after his school opened, a director clearly analyzed the situation by saying,
We were confident when we began last year that students who came to us would be able to succeed (if they exerted the effort—that's the one variable we can't control) in our lowest math level. But we found that students did not learn some of the basic mathematics concepts in the math courses that we knew they had. There were some assumptions that we made that were proven incorrect. We found more students than we anticipated having completed algebra II but never having had any trigonometry. We also had more students who took algebra II who really had not yet completed a standard algebra I curriculum. So, we designed a course to accommodate such deficits that our students had in comparison with other students.
Another response was, "We have students who come in with nothing more than algebra I and geometry. It may not be a very good background. So what we do is just focus on it and move them along. We require them to go to some extra tutorial sessions. We do have a three week program for students who demonstrate some skill deficiencies, and we have some students who are worked with and monitored and supported throughout the year. Typically, the students in the Excel program are minority students or students from very rural areas or foreign background."

  7. What is the average rate of attrition and what are the reasons for attrition?
The rates of attrition varied widely among schools and within schools over time. On average, the percentage ranged between 10% and 19%. The freshman class has the highest rate of attrition in all schools. The most significant reason, as reported in most schools, was student homesickness. Additionally, some students are not invited back after the freshman year because they do not meet academic expectations. This latter reason was explained as, "Some of these students had never been challenged academically, and all of a sudden they didn't know how to cope with it."

  8. Are your selection policies restricted by state mandates, and, if yes, how are they shaped by them?
The typical answer to this question was negative; none of the administrators interviewed reported any formal restrictions. However, the following samples of their answers suggest that political considerations do play a role. "No, the only restriction would be on the number of students that we could physically fit into the facilities. We've worked hard to have a representative population, and by placing heavier weight on actual performance in their home schools, minority students and students from rural small schools have a better chance to be admitted."

Another director answered, "We do strive for geographical distribution, and to that extent I suppose we are being responsive to political realities. There have been threats that we would be required to take students from every county, but this has never come to pass and in my opinion would be very detrimental."

A third director elaborated on the point, "No. As a matter of fact, our policies have never been questioned. However, I think we are missing some very talented kids who need nontraditional instrumentation to really figure out their talents."

It was clear from the analysis of responses on different occasions during the interviews that political pressures do exist and that their impact is reflected in practice in the strategies used to increase the numbers of gifted minority students.

  9. Do you have a clearly stated definition of the type of students you are looking for?
Based on responses of directors and coordinators of six schools, the target population of these schools has not been defined uniformly. Two schools attempt to avoid using terms such as gifted or talented. For example, one director said, "I think we have an understanding of the type of students that we're looking for. We try to steer clear of the label, 'gifted,' because some of the students don't have gifted programs in their schools. What we try to say is we offer an opportunity for academically able students."

A sample of other responses makes clear that theoretical definitions are generally not used: "We simply are looking for bright, talented young people who are interested in math and science as a career possibility. So, being bright by itself is not enough. Well, in our strategic plan we'd say that we are recruiting and admitting students of exceptional talent in mathematics and science and who are capable of completing our graduation requirements."

  10. What kind of relationships do you have among the major components of your program: goals and mission, admission procedures, curriculum, and criteria of success?
Responses of directors and coordinators of admissions in six schools suggest that there is some lack of unity among the major components of their programs. In some cases administrators were aware of the importance of such a unity, but were doing little to reach that goal. For example, a director of one school said, "Well, I think the admissions process has operated perhaps too independently of the academic program. I mentioned earlier that the academic program has responded, in some ways, to the nature of the students that we actually accept, but I think we could probably do a better job of bringing the admissions process and the curriculum closer together."

Only one school seemed to have some kind of structured planning to integrate all components in a dynamic unity. This is clearly expressed by the school's director in the following response, "I think it is getting better and better. We have an institutional strategic plan, which very clearly states the mission of the school. We are defining what learners of exceptional talent in mathematics and science look like, and that is part of the selection process. We have three strategic initiatives: concept-centered curriculum, teaching as facilitative discovering, and success defined by evidence of student work. It's a very congruent kind of package from admissions to program to assessment. I think we are beginning to be very congruent in the multiple systems that are part of the institution."

  11. Do you evaluate the effectiveness of your identification and selection system on a regular basis?
All directors and coordinators of admissions interviewed expressed the need for, and importance of, evaluation. However, administrators of four schools responded in the negative. For two schools information related to the question was not available at the time of data collection. Samples of responses are the following:

One director answered by saying, "First of all, no, I don't think we do evaluate in any formal way. We have had relatively low attrition. We have had 8-10% in most years. So, I think our procedures have been successful in terms of the students we've been able to retain. They've been successful in terms of the students we've been able to place in good colleges and universities and win scholarships for, things of that sort. We have not formally evaluated the process."

Another director responded, "We had contracted with a person to develop a follow-up program for our graduates, and another was to set up a program to validate our selection process. After a year we never got anything, so we're now looking for someone to set up a statistical analysis of our selection process to validate it."

A third director answered, "No. We do not have a process, although we have developed one for this year. We will be going out to the selection group and asking them to evaluate the process. But, heretofore, we have not had a formal written evaluation of our selection process. We do have a state audit that comes in periodically that is called compliance audit, to evaluate our compliance with policies, and we have always come out of that extremely well."

  12. What are the strengths of your identification and selection system?
The following characteristics were reported by directors and coordinators of admission as strengths of their selection systems:
  1. The freedom of the school's administration and faculty to design the system internally, and to make changes and adaptations as necessary.
  2. The use of multiple criteria and multiple sources of information.
  3. Evaluating the student in the context of his or her home and home school, because, as one director said, "The way a young boy or girl in a very rural and potentially poor farm community will evidence giftedness will be different from a very rich student in a suburban school. Otherwise, we are penalizing students for lack of experience or exposure which was not their fault."
  4. The involvement of people from across the state, with diverse backgrounds, in the processes of screening and reviewing applications and interviewing applicants. By so doing, "Every student gets a very fair opportunity for selection."
  5. The flexibility of the system to integrate the efficiency of traditional measures with some kind of clinical decisions.
  6. Placing emphasis on home school GPA relative to other selection criteria in order to give "minority students or students from deprived areas a better chance to be admitted."
The weaknesses mentioned in interviews were:
  1. The small size of applicant pools. Ten out of twelve administrators expressed major concerns about the applicant pool size in their schools.
  2. The lack of instruments to identify students who do not perform well on traditional tests of ability. One director explained, "The SAT score reflects opportunity and access more than aptitude."
  3. The inclusion of components that are very labor intensive, such as interviews and file reviews.
  4. The lack of appropriate validation or evaluation designs.
  5. The inadequacy of some types of recommendation forms.
A director of one school said, "We are trying to get recommendations that are meaningful. Most of the teachers describe students as wonderful, pleasant, but we are really looking for intellectual capacity, potential."

Discussion and Conclusions

This study analyzed and evaluated the identification and selection systems used in state supported residential schools of mathematics and science. In order to evaluate these systems from different perspectives, quantitative and qualitative methods were used. The following questions were addressed.

Question One: What are the common policies and procedures used for identifying students in state supported residential schools of mathematics and science?

The typical identification/selection system found in residential schools can best be described as flexible, based on multiple criteria, and organized in multiple stages. Five selection criteria and five stages were involved in most schools. The selection criteria included standardized tests (verbal and mathematics sections), the home school GPA (i.e., the GPA earned at the high school in which the student was enrolled before attending the residential school), ratings of all the information gathered on a student by a selection committee, and interviews. The identification and selection stages included: a recruitment campaign, application file development, file review, interviews, and selection decisions. Except for one school, there were no cutoffs on the selection measures, a practice that allows administrators to make adjustments for representation by region and race/ethnicity. While Stanley (1986), for example, takes a strong position in favor of setting minimal standards without exceptions, other researchers recommend using practices similar to those used in the residential schools (Baska, 1989; Maker, 1989).

Different committees were used for different stages of the selection process. Subjective judgments were used either as components of the overall composite index for selection (i.e., file review, interview rating) or as a final index for selection based on the file review. In the latter case, all transcripts and standardized test scores were included in the applicants' files. One large school did not provide reviewers with applicants' test scores. Zero correlations between the SAT-M and SAT-V scores and file ratings were found in the analysis of data from this school. In contrast, significant correlations were found between these two variables and file ratings in most other schools. These findings support a policy of not including standardized test scores in the applicant's file for the review process. Reviewers can be influenced by extreme scores while rating student files. Uncorrelated components of a selection system, when combined properly, can generate a better validity coefficient with criterion variables.

Grades, ratings, and standardized scores were combined in order to generate a composite index for selection. All schools except two used weighted raw-score grades and different methods for developing composite scores. There were no empirical data concerning the validity of any of these methods. At least two problems are involved in these methods. First, adding variables together in raw score form will weight them in unknown ways and produce undesirable statistical artifacts (Lauer and Asher, 1988). Second, the reliability of a composite score is a function of the reliabilities of its components, so the unreliability of the components will be reflected in the reliability of the composite. Some of the components used in the selection process probably lack reliability. A composite score with low reliability, when correlated with a criterion with low reliability, results in a low correlation coefficient, since the upper limit of the correlation between two variables is restricted by their internal consistency reliabilities; the higher the reliabilities, the higher the possible correlation. In two schools there was no mechanical addition of different data sources; rather, a holistic score was assigned to each student based on personal judgments of the credentials and accomplishments by members of the selection committee.
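
The bound referred to above is the standard attenuation inequality; it is offered here only as an illustrative note, with r_XX' and r_YY' denoting the reliabilities of the composite and the criterion:

```latex
% Observed validity is bounded by the reliabilities of predictor and criterion
r_{XY} \le \sqrt{r_{XX'}\, r_{YY'}}
```

For example, if both reliabilities were .70, no observed validity coefficient could exceed .70.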

Following this discussion and based on the results of this study, two questions can be raised: Why use a multiple criterion system if the final product of the system, the composite score, is invalid? Why use personal judgments to qualify what has already been quantified in the form of standardized test scores and the HS-GPA? Given that the subjective portions of the composite score (i.e., interviews, file ratings) are of questionable reliability and are costly relative to the objective components, the value of the whole system is questionable.

Question Two: Are the identification and selection systems as used in residential schools valid for predicting success as measured by students' grades?

The correlational analyses of data for most schools indicate that the final index for selection, the composite score, is invalid for predicting success as measured by the first year adjusted grade point average (GPA1). The correlation of the composite score with first year GPA was lower than the correlations of home school GPA, SAT, ACT, and interviews with the first year GPA. The stepwise multiple regression analysis excluded the composite score from the variables selected for an optimum prediction equation for the criterion GPA1, given the set of variables used for all schools. Composite scores, therefore, function poorly as a predictor of first year adjusted GPA earned at residential schools. The correlations of composite scores with the other criterion variables were also lower than the correlations of most predictors.

Another aspect of the analysis carried out in this research was to check the validity of statistical versus clinical judgment since two schools used the latter method. The correlational and regression analyses of data indicate that the use of statistical prediction is far superior to professional judgments for predicting the criterion variables. This finding is in agreement with previous research (Sawyer, 1966). The use of regression equations to combine data and for selection can actually support the strategy of using multiple criteria. Once a regression equation has been cross validated, it can be used for selection of future classes. Mainframe or personal computer statistical programs can do the job in a very short time. The process simply requires entering data for the variables in the equation, calculating the predicted criterion scores, ranking students based on their predicted scores, and using the rankings as selection indices.
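
A minimal sketch of that routine follows; the coefficient values and column names are hypothetical placeholders, not the equations estimated in this study.

```python
# Apply a previously cross-validated prediction equation to new applicants and
# rank them on the predicted criterion score. All values here are placeholders.
import pandas as pd

COEFFS = {"intercept": -0.50, "HS_GPA": 0.80, "SAT_M": 0.0008, "SAT_V": 0.0004}

def rank_applicants(applicants: pd.DataFrame) -> pd.DataFrame:
    predicted = COEFFS["intercept"] + sum(
        COEFFS[name] * applicants[name] for name in ("HS_GPA", "SAT_M", "SAT_V")
    )
    ranked = applicants.assign(predicted_GPA1=predicted)
    return ranked.sort_values("predicted_GPA1", ascending=False)

# The resulting order can serve directly as the selection index described above.
```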

A correlation coefficient in the range of .30 to .40 is commonly considered meaningful for educational selection (Kaplan and Saccuzzo, 1989). The HS-GPA was the only variable to meet this criterion consistently with GPAs earned at the end of the first and second years in the residential schools, and for both males and females. This finding was based on data from all schools, from two large schools, and from the only school offering a three year program. Accordingly, the home school GPA proved to be a valid predictor of success as measured by residential school grades. Also, both the SAT-M and SAT-V (or ACT) increased the predictive power of the HS-GPA for the criterion variables. These three variables were selected as the best linear combination for predicting GPA1 in most cases. This result is consistent with previous research done at two of the largest residential schools. An evaluation study conducted at the North Carolina School of Mathematics and Science in 1987 concluded that "high school grade point average is the best predictor of first year grade point average (r=.49)" (p. 24). Another research study, conducted in 1988 at the Louisiana School for Math, Science, and the Arts, concluded that the highest relationship was between the grade point average earned at the student's home school and the grade point average earned at the Louisiana residential school. What was missing in both studies was the inclusion of composite scores in the analyses.

As for interview ratings, the results of this study suggest a great deal of fluctuation and inconsistency in their correlations with criterion variables. The file ratings correlated significantly and positively with only one (GPAO1) of four criterion variables. They were not consistent in their correlations with criterion variables across schools. File ratings in most schools were based mainly on references and biographical data. As reported by many researchers, interviews, letters of recommendation, and biographical data, in addition to their cost, are poor predictors of future academic success (Hills, 1971).

Question Three: Are teachers trained for and involved in selection?

This study found the involvement of teachers in the selection process to be minimal, as is their training for identification and selection. Inadequate training and involvement of teachers in the selection processes may lead to unrealistic expectations of students and may be related to both lower student grades and the high attrition rates found in most schools. Also, the range of student abilities, as reflected in SAT scores, is close to what is expected in the general population even though the mean is much higher. Additionally, it was found that identification data are used only for placement in mathematics and language courses rather than for general instructional planning and counseling. Yet, several researchers have urged that such planning and counseling should be guided by information gathered from the identification and selection processes (Borland, 1989; Feldhusen, 1982; Renzulli, 1984). Without factual information about students, teachers' expectations of gifted students will often be too high. This is evidenced by the findings of a qualitative study conducted at the Texas Academy (1990) in which some of the university faculty reported that they did indeed make their teaching and tests more difficult in courses in which students in the state residential school program were enrolled along with other university students.

A system to evaluate student performance or achievement should be included as an integral part of instructional planning. In order for teachers to participate in making the gifted program successful, with a high retention rate, emphasis should be placed on informing them about the characteristics of their prospective students. Teachers should be provided with systematic orientation and should be actively involved in the selection process. Adequate training increases the accuracy of teacher ratings (Hoge and Cudmore, 1986) and provides opportunities for a better understanding of student needs. The role the teachers play is crucial in programs for the gifted, and therefore they should be part of the whole system.

Question Four: What are common problems, strengths, and weaknesses of selection systems as perceived by school administrators?

The underrepresentation of minority students is viewed by administrators as both a weakness of and a major problem for the selection systems. Different strategies were reported during interviews and in the promotional literature for increasing the number of minority students in both the applicant pools and the schools' populations. These strategies include the following:
  1. placing heavier weight on the students' home school GPA, or lowering the weight of standardized tests;
  2. using a quota system by admitting a fixed number of students from each congressional district or geographical region;
  3. authorizing administrators to make selection decisions for a fixed percentage of the total freshman class;
  4. hiring recruiters and locating them in certain regions or communities to recruit minority students; and
  5. initiating long term programs aimed at early screening and identification of gifted minority students.
The identification and selection systems have been shaped in part by the above strategies. However, their impact on the actual numbers of minority students found in the schools in this study was minimal. There were six Hispanic students, one Native American student, and 60 African American students in a sample of 596 students. This result is consistent with the findings of Zappia (1989) and VanTassel-Baska and Willis (1987) on minority underrepresentation in gifted programs.

Attrition was found to be a major concern for administrators of residential schools. The most significant reason for attrition, as reported in all interviews, is homesickness. Although this might be one reason for attrition, adjustment to the restrictions of residential life and to high academic requirements may also be important and call for further exploration to gain a better understanding of the problem. Students are exposed to at least two kinds of pressures in residential schools of mathematics and science. First, there is the pressure to achieve in an unusually rigorous academic atmosphere, including high level expectations from peers, instructors, and parents. For the first time these students may find that they are not "the stars" in their classes as they were in their home schools. Second, there is the need to adjust to residential life far away from home. To avoid failure experiences in one or both of these areas, students may opt for the readily available solution, withdrawal from school.

Without exception across schools, the overall mean of first year grade point averages was lower than the mean of the entry grade point averages and showed larger variation. This may indicate that students are faced with much more challenging tasks at the residential schools than they were used to at their home schools. The larger variability among students' first year grade point averages also indicates that the challenging curricula at the residential schools lead to greater differences in performance among students than the curricula in their home schools.

On the positive side, administrators agreed that the use of multiple criteria for selection is a major strength of their identification and selection systems. This view is supported by what has been repeatedly emphasized and recommended by authorities in the field of gifted education (Cox, Daniel, and Boston, 1985; Feldhusen, 1989; Renzulli, 1984; Richert, Alvino, and McDonnel, 1982). Nevertheless, the actual value of any component in a selection system lies in its validity or incremental validity for predicting specified criteria. It is difficult to justify using a variable that does not correlate significantly with measures of achievement or outcome performance. Further, in this study the use of multiple criteria proved largely ineffective for predicting later achievement as represented by residential school grades.

The involvement of a large number of individuals from across the state in the identification and selection system is also seen as a strength by most of the administrators. These individuals, who were intentionally selected from diverse backgrounds, serve on various committees as file reviewers or interviewers. Such involvement can be a strength or a weakness, depending on how well the participants are trained to do the evaluation.

The analyses of pre-admission data indicate that differences in SAT-M means between male and female students, between African American and White students, and between African American and Asian students were large. The mean SAT-M for African American students is about one standard deviation lower than the mean for White students, and more than one standard deviation lower than the mean for Asian students. This result is consistent with findings of previous research (Colangelo and Kerr, 1990; Manning and Jackson, 1984; Stanley, 1992; Stanley and Benbow, 1983).

In conclusion, the selection systems used for identifying and selecting students in residential schools can best be described as variable, heavily influenced by subjective judgments, and labor intensive. The student's home school grade point average is, in most cases, the best predictor of success, as measured by grades in courses at residential schools. The current strategies to improve representation of minority students and the size of applicant pools are at best only moderately successful.

Limitations of Study

The correlations and regression equations derived in this study should be interpreted or used for prediction in light of the following limitations:
  1. The restriction in range of the predictor variables, especially the students' home school GPAs, reduces the correlations between predictors and criteria. Therefore, the correlations obtained in this study are lower than they would be for the entire applicant pool or the general population (see the illustrative formulas following this list).

  2. Most predictor variables and outcome measures used in the study probably contain considerable measurement error. Grades and ratings are incomplete measures of achievement and lack reliability. Low reliability of the predictors reduces their power to predict later outcomes, and the predictors are also less effective than they would be if the criterion measures were more reliable.

  3. For some schools the sample size is relatively small. Chance relationships in small samples have substantial effects on the regression coefficients and may consequently produce misleading results.

  4. The relatively low reliability of teachers' grades, as reflected in GPA1 and GPA2, sets the upper limit on the power of the selection predictor variables. The R-square value cannot exceed the reliability of the criteria. Thus, the level of prediction achieved in this research may be as high as possible, given the limited reliability of the criteria.
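
The second and fourth limitations can be illustrated with two standard results from classical test theory; the formulas below are offered only as an illustration and use no values estimated from this study's data. The observed correlation between a predictor x and a criterion y is bounded by the reliabilities of the two measures, which also caps the attainable R-square:

$$
r_{xy} \le \sqrt{r_{xx}\, r_{yy}}, \qquad \hat{r}_{xy} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}, \qquad R^{2} \le r_{yy},
$$

where $r_{xx}$ and $r_{yy}$ are the reliabilities of the predictor and the criterion and $\hat{r}_{xy}$ is the correlation corrected for attenuation. If, hypothetically, first year GPA had a reliability of .80, no set of predictors could achieve an R-square above .80. The first limitation corresponds to Thorndike's Case 2 correction for restriction of range,

$$
R = \frac{r\,(S/s)}{\sqrt{1 - r^{2} + r^{2}\,(S/s)^{2}}},
$$

where $r$ is the correlation observed in the selected group, $s$ is the predictor's standard deviation in that group, and $S$ is its standard deviation in the full applicant pool; because $S > s$ for admitted students, the corrected correlation $R$ exceeds the observed $r$.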

Recommendations

The following recommendations are made based on the results of this study:
  1. The size of the applicant pool is a direct measure of whether a campaign for recruitment of students is successful, and it is critical in maintaining a highly selective admission process. Intensive direct contacts with students and parents through field visits and presentations should be emphasized. In addition, more systematic efforts and solid networking with postsecondary institutions, including universities and community colleges, may be useful in publicizing residential school programs.

  2. Combining data from multiple sources in any form other than standard scores or regression analyses is unacceptable, as is weighting components of selection criteria by professional judgment. Correlational and multiple regression methods are more accurate, defensible, and appropriate (see the sketch following this list).

  3. The annual attrition rate may be a significant indicator of the efficiency of selection systems in residential schools. It is a disruptive phenomenon and has negative effects on the whole program of residential schools. Future research on the attrition problem should explore the use of personality scales to assess student adjustment and counseling services specifically directed to the attrition problem.

  4. Accurate and updated information about the statewide population of students in the grade levels from which the selection is to be made is fundamental to informed planning for admission. Therefore, keeping track of statistics and demographics of the population of potential students may be a requisite for a comprehensive identification system for residential schools.

  5. Adequate training of committee members and faculty who are involved in the selection process is necessary to assure a reasonable degree of cross-rater or cross-interviewer reliability. Systematic training programs should be developed and conducted for committee members on campus and for high school teachers who are responsible for writing references and completing rating scales.

  6. Active involvement of teachers in identification and selection processes and the use of information collected during these processes may be important factors for lowering attrition rates and for planning successful instruction.

  7. Future research should address the process of reviewing applicant files. While high GPA is an excellent predictor of academic success in residential schools, knowledge of a student's GPA might have a strong biasing influence on reviewers of files and limit their capacity to detect additional variance that could contribute significantly to prediction and decision making.

  8. Identification and selection of students for residential school programs is basically a measurement process. Consultants on measurement and evaluation for the admissions staff could help the schools develop more accurate and valid systems for selection.

  9. Future research is needed to cross validate results of this study on samples of students from new classes and with long range success criteria after graduation.

  10. There is a critical need to continue efforts to find qualified minority students and to develop counseling and instructional methods to help them succeed once enrolled.
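
As an illustration of the second recommendation, the sketch below shows one defensible way to combine selection data: convert each component to standard (z) scores and let a multiple regression fitted to a past cohort, rather than professional judgment, supply the weights. This is only a minimal sketch; the variable names (gpa, sat_m, interview, gpa1) and the synthetic data are hypothetical stand-ins for whatever components and criterion a school actually uses.

```python
import numpy as np

# Hypothetical past cohort (one row per student): selection components and the
# criterion (first year GPA). Real data would replace these synthetic arrays.
rng = np.random.default_rng(0)
n = 200
gpa = rng.normal(3.7, 0.2, n)          # home school GPA
sat_m = rng.normal(620, 60, n)         # SAT-M score
interview = rng.normal(4.0, 0.7, n)    # interview rating
gpa1 = 0.6 * (gpa - 3.7) / 0.2 + 0.3 * (sat_m - 620) / 60 + rng.normal(0, 0.7, n)

def zscore(x):
    """Convert raw scores to standard scores so components share one scale."""
    return (x - x.mean()) / x.std(ddof=1)

# Standardized predictors with an intercept column.
Z = np.column_stack([np.ones(n), zscore(gpa), zscore(sat_m), zscore(interview)])

# Least-squares weights estimated from the cohort's outcomes, not assigned by judgment.
beta, *_ = np.linalg.lstsq(Z, zscore(gpa1), rcond=None)

# Composite for ranking applicants; the same weights and z-score transformation
# would then be applied to a new applicant pool.
composite = Z @ beta
print("weights (intercept, GPA, SAT-M, interview):", np.round(beta, 2))
```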

Lessons From This Research for the Development and Validation of Identification/Selection Systems

The results of this research can be generalized to identification methods used in all gifted programs, to all youth programs in which application for admission and selection methods are used, and to talent search programs. The two most powerful messages are that identification/selection/search programs should be empirically validated and that individual identification/selection variables should be evaluated in terms of their contributions to the identification process. The field of gifted education has spent several decades debating the pros and cons of identification methods and the potential value of individual tests and rating scales. Rarely have questions been raised or studied about the predictive validity of the process or of individual selection variables. If the fields of gifted education and talent search are serious in their aspiration to launch gifted and talented students to high and creative levels of achievement, it is imperative that efforts be made to determine whether the identification/selection systems and variables are finding youth who need and will profit from the program services offered, or are missing youth who need them. Short and long range follow-up of youth who have been in programs, and searches among the general population for youth with gifted and talented potential who were not in programs, are rare.
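
One way to act on the second message, evaluating each selection variable's contribution, is to estimate its incremental validity: the gain in R-square when the variable is added to a model that already contains the strongest predictors. The helper below is a minimal sketch using ordinary least squares; the arrays named in the commented usage line are the hypothetical ones from the previous sketch, not data from this study.

```python
import numpy as np

def r_squared(X, y):
    """Proportion of criterion variance explained by an OLS fit of y on X."""
    X = np.column_stack([np.ones(len(y)), np.asarray(X)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def incremental_validity(base_predictors, new_predictor, criterion):
    """Gain in R-square when one selection variable joins the base model."""
    base = np.column_stack(base_predictors)
    full = np.column_stack(list(base_predictors) + [new_predictor])
    return r_squared(full, criterion) - r_squared(base, criterion)

# Usage (hypothetical arrays from the previous sketch): how much does the
# interview rating add beyond home school GPA and SAT-M?
# delta_r2 = incremental_validity([gpa, sat_m], interview, gpa1)
```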

The results of this research also suggest that professionals who are called upon to do ratings, recommendations, and comprehensive evaluations of student potential for selection into a special academic program need intensive orientation and/or training for the tasks to assure reliability of assessment. It cannot be assumed that their general professional training readies them for the specific tasks of evaluating student potential for success in highly challenging academic programs.

We are also reminded by this research that articulation of the identification/selection system with the curriculum and evaluation methods is essential to program success. That is, the identification/selection process must bring into the program youth who need and will profit from the specific curriculum offered, and the evaluation of student success must be linked to both the selection criteria and the curriculum. If the curriculum stresses mathematics and science, then the identification/selection system should find youth with particular strengths or talents in those areas, and the evaluation methods should focus on mathematics and science achievement in the program.

This research also corroborates the need for psychometric and statistical expertise in designing and implementing identification/selection systems. Identification/selection systems in public schools should be guided by professionals who are well trained in measurement and statistical methodology to assure that the process of selection is reliable and valid.

Finally, it is clear from this research and other research focused on public school programs that representation of African American, Hispanic, and Native American youth in special programs for gifted and talented youth falls far short of their representation in the general population. Their scores on the standardized tests used in the identification/selection process are about one standard deviation below the mean of White youth. Is the search for gifted and talented youth reaching all potentially gifted and talented youth in those minority populations? Are those youth in school programs that are preparing them to do well on the tests and to have the necessarily high grade point averages? This problem remains unresolved.

In summary, the search for and identification/selection of youth for special educational programs in public schools should profit from knowledge of the care and effort exhibited by the residential schools in their quest to find and enroll youth who need and will profit from special educational programs. It is also noteworthy and meritorious that personnel in these schools generally avoid the promiscuous and pretentious use of the label "gifted" which characterizes many public school programs. Finally, the educational programs and curricula that we observed in the residential schools were of very high quality and could readily serve as models for public school programs for gifted and talented youth.

References

Baldwin, A. Y. (1984). Baldwin identification matrix for the identification of gifted and talented. New York: Trillium.

Baldwin, A. Y. (1985). Programs for the gifted and talented: Issues concerning minority populations. In F. D. Horowitz & M. O'Brien (Eds.), The gifted and talented: Developmental perspectives (pp. 223-250). Washington, DC: American Psychological Association.

Baska, L. (1989). Characteristics and needs of the gifted. In J. Feldhusen, J. VanTassel-Baska, & K. Seeley (Eds.), Excellence in educating the gifted (pp. 15-28). Denver, CO: Love Publishing.

Borland, J. H. (1989). Planning and implementing programs for the gifted. New York: Teachers College, Columbia University.

Colangelo, N., & Kerr, B. (1990). Extreme academic talent: Profiles of perfect scorers. Journal of Educational Psychology, 82(3), 404-449.

Coleman, L. J. (1985). Schooling the gifted. Menlo Park, CA: Addison-Wesley Publishing.

Cox, J., & Daniel, N. (1983, May/June). Specialized schools for high ability students. Gifted Child Today, 28, 2-9.

Cox, J., Daniel, N., & Boston, B. (1985). Educating able learners: Programs and promising practices. Austin, TX: University of Texas Press.

Davis, G. A., & Rimm, S. B. (1985). Education of the gifted and talented. Englewood Cliffs, NJ: Prentice-Hall.

Feldhusen, J. F. (1982). Meeting the needs of gifted students through differentiated programming. Gifted Child Quarterly, 26, 37-41.

Feldhusen, J. F. (1986). A new conception of giftedness and programming for the gifted. Illinois Council for the Gifted Journal, 5, 2-6.

Feldhusen, J. F. (1989). Synthesis of research on gifted youth. Educational Leadership, 46(6), 6-11.

Feldhusen, J. F. (1992). Talent identification and development in education (TIDE). Sarasota, FL: Center for Creative Learning.

Feldhusen, J. F., Asher, J. W., & Hoover, S. M. (1984). Problems in identification of giftedness, talent, or ability. Gifted Child Quarterly, 28, 149-151.

Feldhusen, J. F., & Baska, L. K. (1989). Identification and assessment of the gifted. In J. Feldhusen, J. VanTassel-Baska, & K. Seeley (Eds.), Excellence in educating the gifted (pp. 85-101). Denver, CO: Love Publishing.

Feldhusen, J. F., Baska, L. K., & Womble, S. R. (1981). Using standard scores to synthesize data in identifying the gifted. Journal for the Education of the Gifted, 4, 177-185.

Feldhusen, J. F., & Hoover, S. M. (1986). A conception of giftedness: Intelligence, self-concept and motivation. Roeper Review, 8, 140-143.

Feldhusen, J. F., Hoover, S. M., & Sayler, M. F. (1990). Identification of gifted students at the secondary level. Monroe, NY: Trillium.

Frasier, M. M. (1989). Identification of gifted black students: Developing new perspectives. In C. J. Maker & S. W. Schiever (Eds.), Critical issues in gifted education (Vol. II, pp. 213-225). Austin, TX: PRO-ED.

Hallahan, D. P., & Kauffman, J. M. (1982). Exceptional children (2nd Ed.). Englewood Cliffs, NJ: Prentice-Hall.

Hilliard, A. G. (1984). IQ testing as the emperor's new alternative to heredity-environment controversy. In C. R. Reynolds & R. T. Brown (Eds.), Perspectives on bias in mental testing (pp. 139-169). New York: Plenum Press.

Hills, J. R. (1971). Use of measurement in selection and placement. In R. L. Thorndike (Ed.), Educational Measurement (2nd Ed.) (pp. 680-732). Washington, DC: American Council on Education.

Hoge, R. D. (1988). Issues in the definition and measurement of the giftedness construct. Educational Researcher, 17(7), 12-16.

Hoge, R. D. (1989). An examination of the giftedness construct. Canadian Journal of Education, 14(1), 6-17.

Hoge, R. D., & Cudmore, L. (1986). The use of teacher-judgment measures in the identification of gifted pupils. Teaching and Teacher Education, 2(2), 181-196.

Horowitz, F. D., & O'Brien, M. (Eds.). (1985). The gifted and talented: Developmental perspectives. Washington, DC: The American Psychological Association.

Howley, A., Howley, C. B., & Pendarvis, E. D. (1986). Teaching gifted children: Principles and strategies. Boston: Little, Brown.

Janos, P. M., & Robinson, M. N. (1985). Psychological development in intellectually gifted children. In F. D. Horowitz & M. O'Brien (Eds.), The gifted and talented: Developmental perspectives (pp. 149-196). Washington, DC: The American Psychological Association.

Jenkins-Friedman, R. (1982). Cosmetic use of multiple selection criteria. Gifted Child Quarterly, 26, 24-26.

Jenkins-Friedman, R., Richert, E. S., & Feldhusen, J. F. (Eds.). (1991). Special populations of gifted children. New York: Trillium.

Kaplan, R. M., & Saccuzzo, D. P. (1989). Psychological testing: Principles, application, and issues (2nd Ed.). Pacific Grove, CA: Brooks/Cole Publishing Company.

Kerr, B., & Colangelo, N. (1988). The college plans of academically talented students. Journal of Counseling and Development, 67, 42-48.

Kolloff, P. B. (1991). Special residential high schools. In N. Colangelo & G. A. Davis (Eds.), Handbook of gifted education (pp. 209-215). Needham Heights, MA: Allyn and Bacon.

Lauer, J. M., & Asher, J. W. (1988). Composition research: Empirical design. New York: Oxford University Press.

Louisiana School for Math, Science, and the Arts. (1988). Learning from Louisiana's academically talented students. Unpublished research report.

Maker, C. J. (1989). Program for gifted minority students: A synthesis of perspectives. In C. J. Maker & S. W. Schiever (Eds.), Critical issues in gifted education (Vol. II, pp. 293-309). Austin, TX: PRO-ED.

Manning, W. H., & Jackson, R. (1984). College entrance examinations: Objective selection or gatekeeping for the economically privileged. In C. R. Reynolds & R. T. Brown (Eds.), Perspectives on bias in mental testing (pp. 189-220). New York: Plenum Press.

Marland, S. P. (1971). Education of the gifted and talented. Report to the Congress of the United States by the Commissioner of Education. Washington, DC: U.S. Government Printing Office.

Meehl, P. E. (1954). Clinical versus statistical prediction. Minneapolis, MN: University of Minnesota Press.

North Carolina School of Science and Mathematics. (1987). Review of recruitment and admissions: Final report. Unpublished research report.

Popham, W. J. (1990). Modern educational measurement (2nd Ed.). Englewood Cliffs, NJ: Prentice-Hall.

Renzulli, J. S. (1978). What makes giftedness? Reexamining a definition. Phi Delta Kappan, 60, 180-184, 261.

Renzulli, J. S. (1984). The triad/revolving door system: A research-based approach to identification and programming for the gifted and talented. Gifted Child Quarterly, 28, 163-171.

Reynolds, M. C., & Birch, J. W. (1977). Teaching exceptional children in all America's schools. Reston, VA: The Council for Exceptional Children.

Reynolds, C. R., & Brown, R. T. (1984). Bias in mental testing: An introduction to the issues. In C. R. Reynolds & R. T. Brown (Eds.), Perspectives on bias in mental testing (pp. 1-39). New York: Plenum Press.

Richert, E. S. (1985). Identification of gifted children: An update. Roeper Review, 8, 66-72.

Richert, E. S. (1991). Rampant problems and promising practices in identification. In N. Colangelo & G. A. Davis (Eds.), Handbook of gifted education (pp. 81-96). Needham Heights, MA: Allyn and Bacon.

Richert, E. S., Alvino, J., & McDonnel, R. (1982). The national report on identification of gifted and talented youth: Assessment and recommendations for comprehensive identification of gifted and talented youth. Sewell, NJ: Educational Improvement Center-South.

Rimm, S. (1984). The characteristics approach: Identification and beyond. Gifted Child Quarterly, 28, 181-187.

Sawyer, J. (1966). Measurement and prediction, clinical and statistical. Psychological Bulletin, 66, 178-200.

Stanley, J. C. (1979). Identifying and nurturing the intellectually gifted. In W. C. George, S. J. Cohn, & J. C. Stanley (Eds.), Educating the gifted: Acceleration and enrichment (pp. 172-180). Baltimore: The Johns Hopkins University Press.

Stanley, J. C. (1986). Residential state high schools for youths who are highly talented mathematically and/or scientifically: Several suggestions. Paper presented at Ball State University (December, 1986), Muncie, IN.

Stanley, J. C. (1991a). An academic model for educating the mathematically talented. Gifted Child Quarterly, 35, 36-42.

Stanley, J. C. (1991b). A better model for residential high schools for talented youth. Phi Delta Kappan, 72, 471-473.

Stanley, J. C. (1992). Gender differences on eighty-six nationally standardized aptitude and achievement tests. In N. Colangelo, S. G. Assouline, & D. L. Ambroson (Eds.), The Henry B. and Jocelyn Wallace National Research Symposium on Talent Development: Book of Proceedings (pp. 41-61). Unionville, NY: Trillium.

Stanley, J. C., & Benbow, C. P. (1983). SMPY's first decade: Ten years of posing problems and solving them. The Journal of Special Education, 17(1), 11-25.

Sternberg, R. J., & Davidson, J. E. (Eds.). (1986). Conceptions of giftedness. New York: Cambridge University Press.

Tannenbaum, A. J. (1983). Gifted children: Psychological and educational perspectives. New York: Macmillan.

Taylor, C. W. (1978). How many types of giftedness can your program tolerate? Journal of Creative Behavior, 12, 39-51.

Terman, L. M. (1925). Genetic studies of genius (Vol. 1). Mental and physical traits of a thousand gifted children. Stanford, CA: Stanford University Press.

Texas Academy of Math and Science. (1990). TAMS evaluation executive summary. Unpublished report.

Torrance, E. P. (1984). The role of creativity in identification of the gifted and talented. Gifted Child Quarterly, 28, 153-162.

VanTassel-Baska, J. (1989). The disadvantaged gifted. In J. Feldhusen, J. VanTassel-Baska, & K. Seeley (Eds.), Excellence in educating the gifted (pp. 53-70). Denver, CO: Love Publishing.

VanTassel-Baska, J., & Willis, G. (1987). A three year study of the effects of low income on SAT scores among the academically able. Gifted Child Quarterly, 31, 169-173.

Ward, V. S. (1983). Gifted education: Exploratory studies of theory and practice. Manassas, VA: The Reading Tutorium.

Zappia, I. A. (1989). Identification of gifted Hispanic students: A multidimensional view. In C. J. Maker & S. W. Schiever (Eds.), Critical issues in gifted education (Vol. II, pp. 19-26). Austin, TX: PRO-ED.

Appendix A

List of State Supported Residential Schools of Mathematics and Science

Alabama School of Mathematics & Science
P.O. Box 161628
Mobile, AL 36616-2628
Dr. Robert Peters, Associate Director for Academic Affairs

Governor's School for Science & Mathematics
306 East Home Avenue
Hartsville, SC 29550
Dr. Leland Cox, Director
Mr. Fred Lynn, Assistant Director
Mr. Van Sturgeon, Director of Admissions

Illinois Mathematics & Science Academy
1500 West Sullivan Road
Aurora, IL 60506-1039
Dr. Stephanie Marshall, Executive Director
Dr. Lou Ann Smith, Director of Admissions

Indiana Academy for Science, Mathematics, & Humanities
Ball State University
Muncie, IN 47306
Dr. Philip L. Borders, Superintendent, Director
Dr. Walter K. Lambert, Associate Director for Academic Life

Louisiana School for Math, Science, & the Arts
715 College Avenue
Natchitoches, LA 71457
Dr. Arthur Williams, Director
Mrs. Dottie DeSette, External Affairs Coordinator

Mississippi School for Mathematics & Science
P.O. Box W-1627
Columbus, MS 39701
Dr. Katherine Bunch, Director of Admissions

North Carolina School of Science & Mathematics
P.O. Box 2418
1219 Broad Street
Durham, NC 27705
Mr. John Fredrick, Director
Mr. Doug Gray, Principal

Oklahoma School of Science & Mathematics
1515 North Lincoln Boulevard
Oklahoma City, OK 73104-1253
Dr. Edna Manning, President
Mrs. Suzanne Donnolo, Director of Admissions

Texas Academy of Mathematics & Science
University of North Texas
P.O. Box 5307
Denton, TX 76203
Dr. Richard Steam, Director of Admissions

Appendix B

Interview Protocol

  1. How do you maintain equal access to information about your program?

  2. Are your faculty involved in the identification and selection process?

  3. Do you provide training for all people involved in the identification and selection process?

  4. Do you accept exceptionally talented youth who are enrolled in lower grades than the tenth and/or the ninth grades?

  5. Do you use the information gathered during the selection process for planning instruction?

  6. Do you offer any kind of remedial instruction for newcomers, and, if yes, in what areas and for how many?

  7. What is the average rate of attrition and what are the reasons for attrition?

  8. Are your selection policies restricted by state mandates, and if yes, how are they shaped by that?

  9. Do you have a clearly stated definition of the type of students you are looking for?

  10. What kind of relationships do you have among the major components of your program: goals and mission, admission procedures, curriculum, and criteria of success?

  11. Do you evaluate the effectiveness of your identification and selection system on a regular basis?

  12. What are the strengths of your identification and selection system?
