MGMT610: Advanced Quantitative Approaches To Management
Developing Research Instruments for Quantitative Data Collection
You need to find ways to measure the constructs you want to include in your model. Usually this is done through reflective latent measures on a Likert scale. It is conventional and encouraged to leverage existing scales that have already been either proposed or, better yet, validated in extant literature. If you can’t find existing scales that match your construct, then you might need to develop your own.
-
Find existing scales
-
A convenient way to find existing scales for management research is to go to:
-
Google Scholar
-
Handbook of Management Scales
-
Where necessary, adapt the scales to suit your context (maintain the ‘spirit’ of the construct, and report the fact that you adapted your scales).
-
Trim the scale as needed.
-
Reflective constructs rarely require more than 4 or 5 items.
-
Select the 4-5 items that best capture the construct of interest.
-
If the scale is multidimensional, it is likely formative. In this case, you can either:
-
Keep the entire scale (this can greatly inflate your survey, but it allows you to use a latent structure)
-
Keep only one dimension (just pick the one that best reflects the construct you are interested in)
-
Keep one item from each dimension (this allows you to create an aggregate score; i.e., sum, average, or weighted average)
-
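The three aggregation options above (sum, average, or weighted average of one item per dimension) can be illustrated with a short sketch. The dimension names, response values, and weights here are hypothetical, purely for illustration:

```python
# One retained item per dimension of a formative, multidimensional scale.
# Dimension names, 5-point Likert responses, and weights are hypothetical.
responses = {"dim_a": 4, "dim_b": 3, "dim_c": 5}

total = sum(responses.values())      # sum score: 12
mean = total / len(responses)        # average score: 4.0

weights = {"dim_a": 0.5, "dim_b": 0.25, "dim_c": 0.25}  # assumed importance weights
weighted = sum(v * weights[k] for k, v in responses.items())  # weighted average: 4.0
```

Whichever aggregate you choose, report it and use it consistently across respondents.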
Develop new scales
-
Precisely define your construct. You cannot develop new measures for a construct if you do not know precisely what it is you are hoping to measure.
-
Once you have defined your construct, develop scales (reflective rather than formative where possible, as formative measures are a bit more complicated to analyze from a statistical perspective).
-
For reflective measures, simply create 5 interchangeable statements that can be measured on a 5-point Likert scale of agreement, frequency, or intensity. We develop 5 items so that we have some flexibility in dropping 1 or 2 during the EFA if needed. If the measures are truly reflective, using more than 5 items would be unnecessarily redundant.
-
If developing your own scales, do pretesting
-
To ensure the newly developed scales make sense to others and will hopefully measure the construct you think they should measure, you need to do some pretesting. Two very common pretesting exercises are ‘talk-aloud’ and ‘Q-sort’.
-
Talk-aloud exercises involve sitting down with between five and eight individuals who are within, or close to, your target population. For example, if you plan on surveying marketers, then you should do talk-alouds with marketers. If you are surveying a more difficult-to-access population, such as CEOs, you can probably get away with doing talk-alouds with upper-level management instead. The purpose of the talk-aloud is to see whether the newly developed items make sense to others. Invite the participant (just one participant at a time) to read each item out loud and respond to it. If they struggle to read it, then it is worded poorly. If they have to think very long about how to answer, then it needs to be more direct. If they are unsure how to answer, then it needs to be clarified. If they say “well, it depends,” then it needs to be simplified or made more contextually specific. You get the idea. After the first talk-aloud, revise your items accordingly, and then do the second talk-aloud. Repeat until you stop getting meaningful corrections.
-
Q-sort is an exercise where the participant (ideally from the target population, but not strictly required) has a card (physical or digital) for each item in your survey, including items from existing scales. They then sort these cards into piles based on what construct they think each item is measuring. To do this, you’ll need to let them know your constructs and the construct definitions. This should be done for formative and reflective constructs, but not for non-latent constructs (e.g., gender, industry, education). You should have at least 8 people participate in the Q-sort. If you arrive at consensus (>70% agreement between participants) after the first Q-sort, then move on. If not, identify the items that did not achieve adequate consensus, and then try to reword them to be more conceptually distinct from the construct they mis-loaded on while being more conceptually similar to the construct they should have loaded on. Repeat the Q-sort (with different participants) until you arrive at adequate consensus.
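One way to check the >70% consensus rule is to compute, for each item, the share of participants who sorted it onto its intended construct. A minimal sketch (the participant labels, items, and construct names below are hypothetical):

```python
# Each participant sorts every item into the construct they think it measures.
# Participants, items, and constructs here are hypothetical examples.
intended = {"item1": "Trust", "item2": "Commitment"}

placements = {
    "P1": {"item1": "Trust", "item2": "Commitment"},
    "P2": {"item1": "Trust", "item2": "Commitment"},
    "P3": {"item1": "Trust", "item2": "Trust"},
    "P4": {"item1": "Trust", "item2": "Commitment"},
    "P5": {"item1": "Trust", "item2": "Trust"},
}

def hit_ratios(placements, intended):
    """Fraction of participants who sorted each item onto its intended construct."""
    return {
        item: sum(p[item] == target for p in placements.values()) / len(placements)
        for item, target in intended.items()
    }

ratios = hit_ratios(placements, intended)        # item1: 1.0, item2: 0.6
to_reword = [i for i, r in ratios.items() if r <= 0.70]  # item2 misses consensus
```

Items that fall at or below the 70% threshold (here, item2) are the ones to reword before repeating the Q-sort with new participants.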
-
Identify target sample and obtain approval to collect data
-
Conduct a Pilot Study
-
It is exceptionally helpful to conduct a pilot study if time and target population permit. A pilot study is a smaller data collection effort (between 30 and 100 participants) used to obtain reliability scores (like Cronbach’s alpha) for your reflective latent factors, to confirm the direction of relationships, and to do preliminary manipulation checks (where applicable). Usually the sample size of a pilot study will not allow you to test the full model (either measurement or structural) all at once, but it can give you sufficient power to test it in pieces. For example, you could do an EFA with 20 items at a time, or you could run simple linear regressions between an IV and a DV.
-
Often, however, time and target population do not make a pilot study feasible. For example, you would never want to cannibalize your target population if that population is difficult to access and you are concerned about final sample size. If the results of the pilot study reveal poor Cronbach’s alphas, poor loadings, or significant cross-loadings, you should revise your items accordingly. Poor Cronbach’s alphas and poor loadings indicate too much conceptual inconsistency between the items within a construct. Significant cross-loadings indicate too much conceptual overlap between items across separate constructs.
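Cronbach’s alpha for a reflective factor can be computed directly from the pilot data’s item responses. A minimal sketch using only the Python standard library (the response data below are hypothetical):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(item_scores):
    """item_scores: one list of responses per item, all of equal length.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(item_scores)
    total_scores = [sum(resp) for resp in zip(*item_scores)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(total_scores))

# Hypothetical 4-item reflective scale, five respondents, 5-point Likert.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 3, 4, 1],
    [4, 5, 2, 4, 2],
]
alpha = cronbach_alpha(items)  # ≈ 0.95 for this sample
```

An alpha this high suggests the items move together; values well below the conventional 0.70 cutoff would signal the kind of conceptual inconsistency described above.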
-
Collect Data
Data collection could take anywhere from one week to three months, depending on many factors. Be prepared to send reminders or give incentives. Also be prepared to obtain only a fraction of the responses you expected. Your response rate is what matters the most.
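The response rate is simply usable responses divided by invitations sent. A trivial sketch with hypothetical counts:

```python
invited = 400      # hypothetical invitations sent
completed = 52     # hypothetical usable responses received
response_rate = completed / invited  # 0.13, i.e. 13%
```

Report this rate alongside your final sample size, since reviewers use it to judge nonresponse bias.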
CALL TO ACTION: Now it’s your turn. Use the steps outlined above to develop a suitable research questionnaire for your study.
(Culled from SW)
