Survey methods

Self-reported learning gains: A theory and test of college student survey response (Research in Higher Education, 2013)

Recent studies have asserted that self-reported learning gains (SRLG) are valid measures of learning, because gains in specific content areas vary across academic disciplines as theoretically predicted. In contrast, other studies find no relationship between actual and self-reported gains in learning, calling into question the validity of SRLG. I reconcile these two divergent sets of literature by proposing a theory of college student survey response that relies on the belief-sampling model of attitude formation. This theoretical approach demonstrates how students can easily construct answers to SRLG questions that will result in theoretically consistent differences in gains across academic majors, while at the same time lacking the cognitive ability to accurately report their actual learning gains. Four predictions from the theory are tested, using data from the 2006–2009 Wabash National Study. Contrary to previous research, I find little evidence for the construct and criterion validity of SRLG questions.

Do college student surveys have any validity? (Review of Higher Education, 2011)

Using standards established for validation research, I review the theory and evidence underlying the validity argument of the National Survey of Student Engagement (NSSE). I use the NSSE because it is the preeminent survey of college students, arguing that if it lacks validity, then so do almost all other college student surveys. I find that it fails to meet basic standards for validity and reliability, and recommend that higher education researchers initiate a new research agenda to develop valid college student surveys.

The validity of student engagement survey questions: Can we accurately measure academic challenge? (New Directions for Institutional Research, 2011)

This chapter examines the validity of several questions about academic challenge taken from the National Survey of Student Engagement. We compare student self-reports of the number of books assigned with counts derived from course syllabi, finding little relationship between the two measures.

Mixed mode contacts in web surveys: Paper is not necessarily better (Public Opinion Quarterly, 2007)

This paper investigates the impact of paper and email contacts on web survey response rates. We use six combinations of paper and email prenotifications and reminders to test the impact of mixed-mode contacts. In addition, we use two survey samples that differ in their relationship with the sponsoring institution to test whether the impact of contact mode is conditional on the relationship between respondents and the survey researchers. Contrary to previous research, we find little difference in response rates across experimental groups.

Student survey response rates across institutions: Why do they vary? (Research in Higher Education, 2006)

While many studies have examined nonresponse in student surveys, little research investigates why some schools achieve higher student survey response rates than other schools. Using hierarchical linear modeling, we analyze survey data from 321 institutions that participated in the 2003 National Survey of Student Engagement to understand how characteristics of colleges and universities relate to student survey response rates. We find that the makeup of the student body and institutional characteristics such as public/private status and urban location affect response rates, and that the number of computers per undergraduate has a strong positive effect on web survey response rates.
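
As a rough sketch of this kind of multilevel analysis (not the paper's actual code; the file and column names below are hypothetical), a random-intercept model with students nested within institutions can be fit in Python with statsmodels:

    # Hypothetical sketch of a hierarchical (mixed) model: student-level
    # response (0/1) predicted by institution characteristics, with a
    # random intercept for each institution. Column names are invented.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("nsse_2003.csv")  # hypothetical data file

    # MixedLM is linear, so this is a linear probability simplification;
    # a logistic mixed model would be the stricter choice.
    model = smf.mixedlm(
        "responded ~ private + urban + computers_per_ug + pct_female",
        data=df,
        groups=df["institution_id"],  # level-2 grouping: institution
    )
    result = model.fit()
    print(result.summary())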

Non-response in student surveys: The role of demographics, engagement and personality (Research in Higher Education, 2005)

What causes a student to participate in a survey? This paper looks at participation across multiple surveys to understand survey non-response; by using multiple surveys we minimize the impact of survey salience. Students at a selective liberal arts college were administered four different surveys throughout the 2002–2003 academic year, and we use the number of surveys participated in to understand how student characteristics such as demographics, engagement, and Holland personality type affect cooperation. We find that respondents are more likely to be female, socially engaged, and of an investigative Holland type, and less likely to be on financial aid or of an enterprising type.
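
One simple way to model an outcome like this (a 0–4 count of surveys completed) is a count regression; the sketch below is illustrative only, with invented column names, and an ordered logit would be a reasonable alternative:

    # Hypothetical sketch: number of surveys completed (0-4) as a
    # function of student characteristics. Column names are invented.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey_participation.csv")  # hypothetical file

    model = smf.poisson(
        "n_surveys ~ female + financial_aid + social_engagement "
        "+ investigative + enterprising",
        data=df,
    )
    print(model.fit().summary())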

The impact of lottery incentives on student survey response rates (Research in Higher Education, 2003)

Lottery incentives are widely used by institutional researchers despite a lack of research documenting the effectiveness of postpaid incentives in general and lottery incentives in particular. A controlled experiment tested the effects of lottery incentives in a Web survey of prospective college applicants, with e-mails sent to more than 9,000 high school students. The impact of the level of lottery incentive on response rates and response bias is discussed.
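
A standard way to compare response rates across experimental incentive groups is a chi-square test of independence; the sketch below uses invented counts purely to illustrate the mechanics, not results from the study:

    # Hypothetical sketch: comparing response rates across lottery
    # incentive levels. All counts are invented for illustration.
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: incentive condition; columns: [responded, did not respond].
    counts = np.array([
        [310, 2690],   # no incentive
        [335, 2665],   # smaller lottery prize
        [352, 2648],   # larger lottery prize
    ])

    chi2, p, dof, expected = chi2_contingency(counts)
    rates = counts[:, 0] / counts.sum(axis=1)
    print("response rates:", np.round(rates, 3))
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")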

The impact of contact type on web survey response rates (Public Opinion Quarterly, 2003)

Using a web survey of high school students, we investigated the impact of characteristics of the e-mail contact on response rates, varying such attributes as personalization of salutation, e-mail address, job title and office of sender, statements of deadlines, and statements of selectivity. Our results indicate that some of the tactics used to increase response rates in paper surveys may not directly translate to the electronic realm.