Political leaders often embrace positive satisfaction ratings, but should they?
It should come as no surprise that political leaders enjoy quoting the positive ratings from surveys of the communities they serve – a sort of badge of honor for their job performance. Former Dallas City Manager A.C. Gonzalez is no exception, citing a recent citizen satisfaction survey of 1,512 Dallas residents that showed a community that, with some exceptions, appeared quite happy with City services. Mayor Mike Rawlings has referenced these positive ratings as well. At the national level, Republican nominee Donald Trump recently pointed to positive student satisfaction ratings to counter allegations of fraud in lawsuits against Trump University. Indeed, positive ratings are like candy to politicians, whether deserved or not.
But how much faith can we place in these satisfaction ratings? Dallas Morning News columnist Robert Wilonsky recently noted the apparent paradox of the City's continuing high ratings given the multitude of problems that remain unresolved: potholes, loose dogs mauling residents in poor neighborhoods, contracting irregularities, deteriorating air quality, traffic congestion, and a host of other issues. Wilonsky also pointed out that the survey vendor's report curiously omitted information about the ages of the study respondents. Indeed, the report tells us nothing about satisfaction levels across racial-ethnic, income, age, or other key demographic groups – information that would provide more insight into how well the study sample mirrored Dallas' diverse population. The City of Dallas is now 41 percent Latino, 24 percent black, 3 percent Asian, and 29 percent white – a diverse community of residents who are entitled to have their voices heard in surveys sponsored by their tax dollars.
While City leaders have no problem embracing citizen satisfaction ratings, the rest of us should be cautious about accepting the results of satisfaction surveys, especially those that consistently show their sponsors in a positive light. In the case of the City of Dallas, there is reason to believe that these satisfaction ratings could be inflated and a self-serving exercise for City leaders:
  • Past community surveys for the City have shown a pattern of under-representing certain racial-ethnic groups, age groups, non-English speakers, and lower-income residents – groups that are more likely to have negative experiences with and opinions of City services. Loose dogs and potholes, for example, are more common in poor neighborhoods. To what extent would the positive ratings diminish if the voices of such residents were properly represented in the survey?
  • The most recent City satisfaction report omitted standard demographic information about the 1,512 city residents who completed the survey. One has no idea whether the survey respondents accurately reflected the diversity of this community by race, ethnicity, gender, or age. This information is considered standard in industry research reports – information commonly used to judge the scientific credibility of survey findings. Why have City staff allowed the omission of this important information from the report?
  • Given the positive ratings that the City continues to enjoy from these surveys, it is not surprising that the company that conducts them has enjoyed preferred-vendor status for many years. Of course, the vendor's quality of work may simply be spectacular, making it easier to eliminate the competition. But while the survey contract is bid competitively, the same out-of-state vendor has won the contract year after year even though various local vendors are equally qualified to conduct the work. Are City leaders and staff concerned that a different vendor would change the positive ratings they enjoy?
Community satisfaction ratings provide one measure of the City's performance in serving a community, but they paint an incomplete picture of its actual performance, since key groups are often omitted or under-represented in such studies. The fascination of City leaders with these positive ratings and comparisons to other U.S. cities creates the false impression that everything in Dallas is just peachy. A guided tour of City neighborhoods tells quite a different story.

Clearly, the next City Manager for Dallas, as well as the next Mayor, will have a long list of City-related needs requiring their immediate attention. If City leaders and staff continue to use the results of citizen satisfaction surveys as a benchmark of their annual or periodic performance, some changes will be needed to inspire more confidence in the ratings. First, it is essential that the public be given access to a detailed methodology that describes the steps used to conduct the study, including the extent of support in languages other than English. This matters because many studies confirm that over half of Latino and Asian adults prefer to communicate in their native language, which improves comprehension and survey participation. Second, the report must provide a detailed demographic profile of the survey respondents – a standard requirement in research industry studies, and perhaps the only evidence that the random selection of City households produced a fair and unbiased representation of the City's diverse community. Lastly, to remove the appearance of favoritism in the vendor selection process, City staff should be required to justify the continued selection of one vendor over several years despite the availability of equally qualified alternatives.
Is your multicultural research misleading marketing decisions?

Despite the dramatic growth of multicultural populations in the U.S., many survey companies continue to use outdated assumptions and practices in the design and execution of surveys in communities that are linguistically and culturally diverse. Following are some of the more problematic practices that may warrant your attention, whether you are a survey practitioner or a buyer of survey research.

1. Is your survey team culturally sterile?
If your survey team lacks experience conducting surveys in diverse communities, you may already be dead on arrival. Since most college courses on survey or marketing research do not address the problems that are likely to occur in culturally diverse communities, mistakes are very likely to happen. An experienced multicultural survey team member is needed to assess the study's challenges and resources. Really, how else will you know if something goes wrong?
2. Are you planning to outsource to foreign companies?
So your firm has decided to outsource its Latino or Asian surveys instead of hiring your own bilingual interviewers. Think twice about this. If you have ever monitored interviews conducted by foreign survey shops, you are likely to have discovered several issues that impact survey quality: language articulation problems and a lack of familiarity with U.S. brands, institutions, and geography. The money you save by outsourcing will not fix the data quality issues that emerge from these studies. It is better to use an experienced, U.S.-based research firm with multilingual capabilities that does not outsource to foreign survey shops.
3. Are you forcing one mode of data collection on survey respondents?
Think about it: mail surveys require reading and writing ability; phone surveys require the ability to hear and speak clearly; and online surveys require reading ability and Internet access. Forcing one mode of data collection can exclude important segments of consumers and bias your survey results. Increasingly, survey organizations are using mixed-mode methods (i.e., a combination of mail, phone, and online) to remove these recognized limitations, achieving improved demographic representation and better quality data.
4. English-only surveys make little sense in a multicultural America.
Of course, everyone in America should be able to communicate in English, and most do. But our own experience confirms that two-thirds of Latino adults and 7 in 10 Asians prefer a non-English interview when given a choice. The reason is simple: Latino and Asian communities include large numbers of immigrants who understand their native language better than English – which translates to enhanced comprehension of survey questions, more valid responses, and improved response rates. Without bilingual support, the quality of survey data is increasingly suspect in today's diverse communities.
5. Are you still screening respondents with outdated race-ethnic labels?
Multicultural persons dislike surveys that classify them with outdated or offensive race-ethnic labels – which can result in the immediate termination of the interview, misclassification of survey respondents, or missing data. Published research by the Pew Research Center and our own experience suggest that it is better to use multiple rather than single labels in a question: that is, "Do you consider yourself Black or African American, Hispanic or Latino, Asian or Asian American, white or Anglo American?" Since Latinos and Asians identify more strongly with their country of origin, it is a good idea to record their country of origin or provide a listing of the countries represented by the terms Latino or Asian. The label Caucasian is often used along with white, but it should be avoided because the Caucasian category also includes Latinos.
6. Are your survey respondents consistently skewed towards women?
A common problem is that multicultural males are considerably more reluctant than white males to participate in surveys, which often results in survey data that is overly influenced by female sentiments and behaviors. The imbalance often results from poor management of interviewers, who dedicate less effort to getting males to cooperate. Rather than improve the data collection practices that create such imbalances, survey analysts typically apply post-stratification weights to correct the imbalance even when large imbalances are found – a practice that can distort the survey results. It is always good practice to review both unweighted and weighted survey data to judge the extent of this problem, as the sketch below illustrates.
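To make the unweighted-versus-weighted comparison concrete, here is a minimal sketch in Python using pandas. Every number in it is a hypothetical assumption chosen for illustration: a sample that skews 75/25 toward women is weighted back to an assumed 50/50 population benchmark, and the gap between the two estimates shows how hard the weights are working.

```python
# A minimal sketch of post-stratification weighting by gender.
# All figures are hypothetical and chosen only for illustration.
import pandas as pd

# Hypothetical sample: 300 women (80% satisfied), 100 men (50% satisfied)
survey = pd.DataFrame({
    "gender": ["F"] * 300 + ["M"] * 100,
    "satisfied": [1] * 240 + [0] * 60 + [1] * 50 + [0] * 50,
})

# Assumed population benchmark (e.g., from Census figures): a 50/50 split
population_share = {"F": 0.50, "M": 0.50}

# Each respondent's weight = population share / sample share of their group
sample_share = survey["gender"].value_counts(normalize=True)
survey["weight"] = (
    survey["gender"].map(population_share) / survey["gender"].map(sample_share)
)

unweighted = survey["satisfied"].mean()
weighted = (survey["satisfied"] * survey["weight"]).sum() / survey["weight"].sum()

print(f"Unweighted satisfaction: {unweighted:.1%}")  # 72.5%
print(f"Weighted satisfaction:   {weighted:.1%}")    # 65.0%
```

A 7.5-point swing between the two estimates is a signal worth investigating: the larger the gap, the more the final numbers depend on the weighting model rather than on who actually answered the survey.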
7. Online panels are not the solution for locally-focused multicultural studies.
With high anxiety running throughout the survey industry after the Gallup Organization's recent $12 million settlement over its telephone survey calling practices, many survey companies will likely replace their telephone studies with online panels. For nationally focused surveys, online panels may be an adequate way to reach a cross section of multicultural online consumers. For local markets, however, the number of multicultural panel members is often insufficient to complete a survey with a minimum sample of 400 respondents. Worse yet, the majority of multicultural panel members are the more acculturated, English-speaking, higher-income individuals – immigrants are scarce on such panels. Online panel companies will have to do a better job of recruiting multicultural consumers. In the meantime, don't get your hopes too high.
8. Translators are definitely not the last word on survey questionnaires.
So your questionnaire has just been translated by a certified translator, and you are confident that you are ready to begin the study of multicultural consumers. After a number of interviews, however, you learn that the survey respondents are having difficulty understanding some of the native-language vocabulary, and interviewers are having to "translate on the fly" by substituting more familiar wording – a major problem in multicultural studies. The survey team clearly placed undue confidence in the work of the certified translator and did not conduct a pilot study of the translated questionnaire to check its comprehension and relevance among the target respondents. A good pilot study can save you time, money, and headaches.
These tips represent only a partial listing of the many ways in which a survey can misrepresent multicultural communities. Industry recognition of these problems is a first step toward their elimination, although survey practitioners are slow to change their preferred ways of collecting data. Raising the standards for multicultural research will perhaps pick up steam once higher education institutions require the study of these issues in their research courses and buyers of research demand higher standards from their vendors.

You can reach Dr. Rincón at edward@rinconassoc.com

© Rincón & Associates LLC 2015

Is Mayor Rawlings Hiding Behind Inflated Satisfaction Ratings of Dallas Residents?
“Dallas residents generally say they’re more satisfied than people in many other cities.” 
According to the Dallas Morning News, that is the response Mayor Rawlings gave to challenger Marcos Ronquillo during their recent debate at the Belo Mansion when Mr. Ronquillo challenged the Mayor's misplaced priorities on the Trinity toll road. As Mr. Ronquillo asserted, it makes little sense to make such an expensive investment of questionable value given the evidence that the City's urban core is crumbling – the third-highest poverty rate in the nation, a public school system beset by many problems, and thousands of potholes that residents endure on a daily basis. But are Dallas residents really more satisfied than people in other cities? A closer look at how these satisfaction ratings are produced should raise some eyebrows.
We are all accustomed to hearing of efforts to inflate performance ratings – colleges leaving out the test scores of athletes, school districts omitting or doctoring the test scores of low performers – all efforts to inflate performance and deceive the public. Although less obvious to the public, opinion polling firms also use questionable practices that distort survey results. In reviewing the survey reports behind the City's satisfaction ratings, it turns out that the ratings are inflated because segments of City residents who are the most likely to receive poor services are excluded from the surveys. Curiously, for several years now the City has awarded the contract for satisfaction surveys to the same survey company, which uses the same flawed methodology to produce the same inflated ratings. It really makes you wonder. The reports are available to the public for independent review.

Mayor Rawlings, you owe the public an explanation of how these satisfaction ratings are produced. More importantly, you cannot hide behind inflated satisfaction ratings that have little credibility. The public deserves a more reasoned explanation of your willingness to overlook the City's crumbling infrastructure while you continue to promote the questionable investment in the Trinity toll road.
The Texas Recipe for Muting the Hispanic Voice in Public Opinion Polls


If you are a tax-paying Texas resident, should your opinion matter in decisions related to publicly funded programs or services in Texas? Of course, you may say, the opinions of all Texas residents are important. But one Texas state agency thinks it is acceptable to exclude Spanish-speaking Hispanics from state-funded public opinion polls that are used to decide how tax dollars are spent. I would like to share the details of an actual case study that vividly illustrates how one Texas agency is being allowed to silence the voice of Texas Hispanics in its public opinion polls.
The State of Texas plans to spend billions of dollars to improve its transportation system, including the possibility of high-speed rail. To ensure that the improved system meets the needs and expectations of Texas residents, the Texas Transportation Institute is responsible for conducting important surveys of Texans who reside in the specific geographic areas, or corridors, that are likely to be affected by these improvements.
A recently released report by the Texas Transportation Institute for the first of these two surveys, conducted during the fall of 2012, provides concrete evidence that the voice of Texas Hispanics was muted by the survey planners. Indeed, Hispanics represented only 20 percent of the survey respondents, despite making up 38 percent of the Texas population (American Community Survey, 2011). Even worse, only 19 percent of the few Hispanics included in the study were interviewed in Spanish – which compares poorly with other state surveys showing that 50 to 67 percent of Hispanics prefer a Spanish-language interview.
How could this occur, especially when the study design was reviewed by a “panel of experts” at the Institute?  A careful review of the study methodology reveals several missteps in the planning and execution of this survey:

  • The 16,000 households selected as respondents received only an English-language version of the survey.
  • The cover letter that was included with the English-language survey was provided only in English, and did not offer respondents any support to complete the survey in another language.
  • A question asking respondents to identify their race-ethnic background provided only one identifier for Spanish-speaking respondents – "Hispanic" – which could partly explain the undercount of these respondents, because other labels are often preferred over the "Hispanic" option.
  • A call center was supposedly set up to receive incoming calls from survey respondents who had questions or needed Spanish-language support. But this call center probably received few calls from Spanish-speaking respondents, since the cover letter did not provide the needed contact information. Moreover, the report did not include a copy of the Spanish-language telephone survey that was supposedly used by the survey vendor's call center to handle incoming calls from survey respondents.
  • The study design required that automated advance calls (or "robo calls") be made to the selected households prior to the survey mailing. Automated calls are a recognized nuisance from telemarketers and political campaigns that often depress response rates to legitimate public opinion polls.
  • The report indicated that the survey "participation rate" was 34.6 percent – a rate that appears subjectively defined and does not correspond to any standard rate recognized by the American Association for Public Opinion Research (2011). Instead, the overall survey response rate was more likely a much lower 9.7 percent (1,559 completions / 16,000 invited households) – not surprising given the recognized shortcomings in the methodology (see the sketch after this list).
  • While the vendor acknowledged that Hispanic respondents were significantly under-represented and non-Hispanic whites were over-represented, no explanation was provided about the potential causes or consequences of this imbalance.
  • The fact that the survey planners ignored a previous warning about potential flaws in the survey methodology suggests that the poor survey outcomes did not result from simple carelessness.
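For readers who want to check the response-rate arithmetic cited above, here is a minimal sketch. It treats every invited household as eligible, a simplifying assumption; AAPOR's formal definitions (RR1 through RR6) adjust the denominator for ineligible and unknown-eligibility cases.

```python
# Minimal response-rate check, treating all 16,000 invited households
# as eligible (a simplification of AAPOR's RR1 definition).
completions = 1_559   # completed surveys reported by the vendor
invited = 16_000      # households that received the mail survey

response_rate = completions / invited
print(f"Response rate: {response_rate:.1%}")  # Response rate: 9.7%
```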

To make matters worse, the same survey vendor was awarded a second contract to conduct another public opinion poll of Texas residents using the same flawed methodology. Why are state officials allowing such flawed practices, especially at a time when the state's population is so heavily influenced by the growing Hispanic population?

As one of the vendors that competed for both survey contracts, Rincón & Associates LLC monitored both competitions with some concern. In both cases, state procurement staff decided to award the contract to the lowest bidder – which may not have been the brightest decision, given the poor study outcomes. Since both studies required a mixed-mode survey methodology that few U.S. companies are capable of executing, more weight should have been given to proven experience using this specialized methodology with diverse communities in Texas. Procurement staff did not have to settle for the lowest bidder, as other Texas vendors were ready, willing, and able to conduct both studies.
Fortunately, an investigation was initiated on March 3, 2013 to find out why a state agency like the Texas Transportation Institute is allowed to design a study that deliberately minimizes the participation of Hispanic residents, especially Spanish speakers. The outcome of this investigation is important because it could either (a) allow other state agencies to exclude Spanish-speaking Hispanics from state-funded studies, or (b) raise the standards of research for all state agencies to ensure that all state-funded studies provide adequate Spanish-language support.
The practical significance of this issue cannot be overstated. Spanish-speaking residents are often the most likely to be overlooked in the delivery of public services, the most likely to receive the lowest quality services, and likely to hold attitudes or values that differ significantly from those of English speakers. Thus, excluding Spanish speakers from opinion polls can lead to more positive satisfaction ratings than is actually the case and result in erroneous public policy decisions, as the simple example below illustrates.
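A short worked example makes the inflation effect concrete. All of the numbers are hypothetical assumptions: suppose Spanish speakers are 20 percent of a community and 55 percent of them are satisfied with a service, while 80 percent of everyone else is satisfied.

```python
# Hypothetical illustration of how excluding a less-satisfied group
# inflates an overall satisfaction rating. All figures are assumptions.
spanish_share = 0.20       # Spanish speakers' share of the community
spanish_satisfied = 0.55   # their assumed satisfaction rate
other_satisfied = 0.80     # everyone else's assumed satisfaction rate

true_rating = spanish_share * spanish_satisfied + (1 - spanish_share) * other_satisfied
english_only_rating = other_satisfied  # what an English-only poll would report

print(f"True community rating:    {true_rating:.0%}")          # 75%
print(f"English-only poll rating: {english_only_rating:.0%}")  # 80%
```

An English-only poll would report 80 percent satisfaction when the true community figure is 75 percent, and the inflation grows with the size of the excluded group and the gap in satisfaction between the two groups.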
It is time to require a higher standard for public opinion polling in communities that are linguistically and culturally diverse. Although professional research organizations have long defined quality and ethical standards for the research industry, it is apparent from the case just discussed that public agencies may not feel the need to follow these guidelines. Following are a few ideas suggested by our experiences with public agencies like the Texas Transportation Institute:
  • Research firms that compete for opinion polls in the public sector should be required to produce evidence that they have the staff, facilities, and past experience to conduct polls in linguistically and culturally diverse communities. If a research firm does not produce a representative sample of such communities in a contracted study, it should not be rewarded with another contract that utilizes the same flawed methodology.
  • The committee members convened by public agencies to evaluate research proposals may not have the expertise to judge whether the proposals are adequate for diverse communities. Including experts with experience in conducting polls in diverse communities might have prevented the missteps in the Texas A&M studies.
  • In the haste to award a contract to the lowest bidder, proposal evaluators do not regularly check the references provided by the different bidders – but oddly enough, they still find a way to rate the bidders' relevant experience without this information. Prior to contract award, an audit should be conducted to ensure that references were verified for all vendors that submitted bids.

It is unclear that the State of Texas got the "best value" by selecting the lowest bidder from outside of Texas. Indeed, what is the economic benefit to Texans when a contract is awarded to a non-Texas vendor whose payroll, taxes, and local spending for goods and services will only benefit another state?

Legislators and advocacy organizations, especially those that represent the needs of Texas Latinos, should show their concern about public opinion polling practices that minimize or eliminate the voice of the constituents that they represent. Can we afford to remain silent on this issue? 
The more conservative members of the Texas community may believe that all public work should be conducted only in English, and that no special accommodations should be made for non-English speakers. But unless we are also willing to exempt non-English speakers from paying taxes, I believe they should be given the option of voicing their opinions on topics that impact their quality of life. Although many Hispanic and Asian residents are proficient in both English and their native language, about 50 to 70 percent of these residents still prefer to express their opinions in their native language. By providing the appropriate linguistic options, public opinion researchers are more likely to establish rapport, increase response rates, and obtain more valid responses from ethnic respondents – all desirable outcomes for high quality research.
Texas public agencies, especially the Texas Transportation Institute, must be required to raise their standards when conducting opinion polls of Texas residents, and legislators must take a more assertive role to ensure this outcome. We cannot afford to bury our collective heads in the sand on this issue.

National Poll on Arizona’s Immigration Law May Be Misleading

A recent national poll released by the Pew Research Center (5-12-10) reported widespread public support for Arizona's new immigration law – a resounding 73 percent of the survey respondents! Headlines such as these, reinforced by the scientific credibility of an established polling organization, undoubtedly add more momentum to the call for similar laws in other states.

Is the national mood really that supportive of Arizona’s new immigration law? Not being one to embrace polling results uncritically, I reviewed the study methodology and discovered that the entire survey was conducted in one language: English. Let me explain why this bias seriously limits the usefulness of the poll results.

Having conducted studies of multicultural populations over the past 30 years, I can assure you that two-thirds of Hispanics and 80 percent of Asians prefer to communicate in their native language when given the choice. When a poll that includes these segments is conducted only in English, the results are predictable: lower response rates, less valid information, and more missing data. More importantly, because these respondents are more likely to be foreign-born, their exclusion from the Pew study has no doubt inflated the reported level of public support for Arizona's new law.

One can only wonder why the Pew Research Center decided to address such a controversial topic in a manner that silenced the very voices that might have offered a different point of view about Arizona's new immigration law.