Insights and Implications

Mining Data from Scientific, Random-Sample Surveys

January 2015
Don E. Lifto and Chris Deets

A group of senior administrators in a suburban district were stumped by what they read. The charge from the superintendent was to carefully review an 80-page workbook of detail from a scientific, random-sample phone survey, which provided a wealth of data compared and contrasted demographically. Specifically, they were to look for eye-popping surprises or differences of more than 15 percentage points between demographic groups (for example, responses from males vs. females or young respondents vs. older respondents).

As they mined the survey analysis, a team member spotted a data point that qualified for further review on both criteria. A random sample of 300 registered voters had been asked the following question:

“Would you be more likely or less likely to support a school operating referendum if you knew that some of the money would be used to improve instruction for students who were behind in reading achievement?”

The data summary detailed in the survey workbook provided the team with a broad range of comparative statistics based on multiple demographic characteristics such as parent status, age, geography, frequency of past voting, education level, household income, housing type, and length of residence in the school district. The data point that jumped off the page, however, was the stark difference in results by gender. When women responded to the reading instruction question, a strong 76 percent said they would be “more likely to support a school operating referendum.” Men, however, barely eclipsed a majority, tipping the scales at an unenthusiastic 54 percent support. As the group focused on this starkly different response, their shared but not yet spoken question was, “A 22-point difference by gender – what’s this all about?”

Survey methodology and analysis

Many school districts have designed and administered scientific, random-sample surveys to collect feedback for a broad range of planning needs. Phone surveys are commonly administered to random samples ranging from 200 respondents in the smallest school districts up to as many as 600 respondents in larger, more diverse communities. The vast majority of school surveys can obtain excellent data from a sample of 300-400, producing an acceptable margin of error, typically in the +/- 4.5-5.5 percent range. In using this engagement tool, school districts can probe such topics as general satisfaction, support for elements of a strategic plan, where constituents get their information about the public schools, or residents’ tax tolerance for a future operating or debt issuance referendum.
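Those margin-of-error figures follow from the standard formula for a proportion at 95 percent confidence, z * sqrt(p(1-p)/n), which is most conservative at p = 0.5. A minimal sketch (the sample sizes mirror those mentioned above; published survey margins may also reflect design adjustments not modeled here):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence margin of error for a proportion; worst case is p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (200, 300, 400, 600):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f}%")
```

Running this shows why samples of 300-400 land near the range quoted above, and why doubling a sample shrinks the margin only by a factor of roughly 1.4, not 2.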

To maximize the investment of time and money in a scientific, random-sample survey, we emphasize two key methodologies that enhance the quality of the data analysis and the potential to translate survey findings into value-added planning by the school district. First, it is important to understand that the power of the post-survey data analysis is directly related to the richness of the demography in the sample from which the random calls are drawn.

There are two ways to achieve rich demographic analysis: (1) ask many demographic questions in the survey and hope respondents answer them accurately and honestly (e.g., “How old are you?” or “Are you a registered voter?”), or (2) merge public and commercial databases into the registered-voter database so that data analysis can draw on the demography in the file without asking for the information during the phone interview. In our experience, the second approach produces the best results and also makes for a shorter phone call and a higher percentage of respondents finishing the interview.
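As a sketch of the second approach, the append amounts to a keyed merge of the voter file with outside demographic data. All field names and records below are hypothetical; real voter files and commercial appends vary by state and vendor:

```python
# Hypothetical registered-voter file: the frame from which random calls are drawn.
voters = [
    {"voter_id": 101, "phone": "555-0101", "vote_history": 4},
    {"voter_id": 102, "phone": "555-0102", "vote_history": 1},
]

# Hypothetical demographic append from public/commercial databases, keyed on voter ID.
demographics = {
    101: {"age_band": "35-44", "parent_status": "parent"},
    102: {"age_band": "65+", "parent_status": "non-parent"},
}

# Merge: each voter record keeps its row; voters missing from the append get no extra fields.
merged = [{**v, **demographics.get(v["voter_id"], {})} for v in voters]
```

Because the demography travels with each sampled record, the interviewer never has to ask for it, which is what shortens the call and raises completion rates.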

The second recommended methodology is to “roll up your collective sleeves” and be strategic and thorough in analyzing the demographic detail contained in a survey workbook. Maintaining a 40,000-foot perspective in a PowerPoint presentation of major findings is a good place to start, but it is not a substitute for thorough data analysis. Best practice is to systematically sort through the survey findings, peeling back the demographic layers and focusing on large percentage differences and surprises similar to the reading achievement example. Processing the data in this way produces a broader, richer understanding of which data are most important and provides the fuel that propels effective planning post-survey. One data analysis tool we have used with success is the Insights and Implications model.

Insights and implications

Turning hard data into actionable strategies and objectives is a process we call Insights and Implications. It can seem complex at first blush but is ultimately simple and critical to successful planning. Like a seasoned journalist, the Insights and Implications process probes for the “story behind the story” – processing the valuable data from the survey results, sorting through it to identify directional results compared by demography, and then identifying implications that may arise from acting (or, in some cases, not acting) upon the results.

While generating the data in the first place is essential, mining those data for actionable insights is critically important. The first step of data mining involves looking for large discrepancies and gaps in the percentages. These gaps signal to your team that “something is there; we need to look closer and try to understand it.” More often than not, these gaps represent information that runs counter to prevailing assumptions; the larger the gap and the more counterintuitive it is, the greater the need for analysis. As the data are mined and the gaps are identified, they are placed into the “Insights” column of the accompanying chart in order of the size of the gap, from largest to smallest. Divining insights from data is the first step in converting data into actionable information.
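The gap-hunting step described above, flagging demographic splits above a threshold and ranking them largest to smallest, can be sketched as follows. The reading-instruction numbers come from the example earlier in the article; the other rows and the 15-point threshold echo the charge described in the opening but are otherwise illustrative:

```python
# Each entry: (question, group A, group A % support, group B, group B % support).
results = [
    ("Improve reading instruction", "women", 76, "men", 54),
    ("General satisfaction with schools", "parents", 81, "non-parents", 72),
    ("Tax tolerance for operating levy", "under 45", 58, "65+", 51),
]

THRESHOLD = 15  # flag gaps wider than 15 percentage points

# Keep only above-threshold gaps, ranked largest to smallest for the "Insights" column.
insights = sorted(
    ((abs(a_pct - b_pct), q, a, a_pct, b, b_pct)
     for q, a, a_pct, b, b_pct in results
     if abs(a_pct - b_pct) > THRESHOLD),
    reverse=True,
)

for gap, q, a, a_pct, b, b_pct in insights:
    print(f"{gap}-point gap on '{q}': {a} {a_pct}% vs. {b} {b_pct}%")
```

With these illustrative numbers, only the 22-point reading-instruction split survives the filter, which is exactly the kind of short, prioritized list the model feeds into the next step.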

The second step is developing implications from the insights. This ensures the wisdom from our insights is translated into actions that help lay the foundation for successful planning, whether that is a referendum or implementing a new strategic plan. Developing implications starts by asking two questions of each insight. The first is, “If we take action to ensure these percentages remain the same or even grow, would that help achieve the planning objective?” The second flows naturally from the first: “If we take action to alter these percentages, would that help achieve our goals?”

The primary insight from the reading achievement example is that women care greatly about reading achievement. The implication is that women can be motivated by communicating that funds from a successful referendum will be allocated to address reading achievement. The deeper insight, however, is that men do not, at least on the surface, seem to place as much value on reading achievement. We would counsel the team to consider whether men truly care less about reading achievement, or whether their responses were driven by a lack of knowledge of and exposure to the teaching and learning process. Could the campaign bridge this divide and capture more male supporters by getting them into the classroom and directly involved in the reading program, thereby moving the campaign in the team’s direction?

Usually there are a number of insights gleaned from any set of data. However, we find focusing on a limited number that show the greatest discrepancy — and thus hold the greatest promise for action — is the best use of the team’s resources.

Research to practice

In processing the reading instruction gender gap through the Insights and Implications model, the superintendent posed a question to the group: “How many advisory committees do we have throughout the school district, and what is their gender makeup?” Not surprisingly, the answer was that the district had more than a dozen advisory committees and that females were overrepresented, making up close to 80 percent of the membership. The follow-up question was even more intriguing: “How much could we positively influence men’s attitudes about reading instruction over time if they were equally represented on our advisory committees and we got them into our classrooms for reading instruction to a greater extent?” Focusing on the data and processing the survey results through the Insights and Implications model resulted in a planning directive in this suburban district requiring that all advisory committees be gender-balanced within one year. A “Reading with Dad” initiative was also launched. While just two small steps forward, these are examples of effectively translating research into better practice in data analysis while maximizing the value and positive impact of a scientific, random-sample survey.

About the Authors

Don E. Lifto, Ph.D., is executive vice president at Springsted Incorporated, an independent financial advisor and consultant to school districts, cities, counties, and non-profit organizations. He previously served as a public school superintendent for 25 years in rural, suburban, and intermediate districts. Lifto is coauthor of School Finance Elections: A Comprehensive Planning Model for Success, 2nd Edition, and is a frequent presenter on referendum planning at AASA, ASBO, and NSBA.

Chris Deets, President of Dustin Deets, LLC, is passionate about turning insights into innovative ways of communicating that help school districts achieve their objectives – whether that is introducing a new technology to parents, helping implement new strategic priorities, or passing the next bond or levy. Leveraging his commercial experience taking innovative technologies to market for both start-ups and established, publicly traded companies, Deets has served both as a project-based consultant and as temporary support to internal teams.
