Conclusion

In this chapter, we explored how nursing research can account for factors that influence study results beyond the intervention itself. You learned about extraneous variables and strategies to control them, including random assignment, matching, holding variables constant, and incorporating them into the study design. We also discussed practical strategies for measuring constructs, from developing conceptual definitions to selecting or creating reliable tools, implementing them consistently, and evaluating their validity. Understanding these concepts helps ensure that research findings accurately reflect the effects of the intervention and can be applied appropriately in real-world nursing practice.

 

Key Takeaways

  • Measurement is the assignment of scores to individuals so that the scores represent some characteristic of the individuals. Measurement can be achieved in a wide variety of ways, including self-report, behavioral, and physiological measures.
  • Nursing constructs such as anxiety, quality of life, and compassion are variables that are not directly observable because they represent behavioral tendencies or complex patterns of behavior and internal processes. An important goal of scientific research is to conceptually define constructs in ways that accurately describe them.
  • For any conceptual definition of a construct, there will be many different operational definitions or ways of measuring it.
  • Variables can be measured at four different levels—nominal, ordinal, interval, and ratio—that communicate increasing amounts of quantitative information. The level of measurement affects the kinds of statistics you can use and conclusions you can draw from your data.
  • Researchers do not simply assume that their measures work; they conduct research to demonstrate that they do, and they stop using measures that cannot be shown to work.
  • There are two distinct criteria by which researchers evaluate their measures: reliability and validity. Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (interrater reliability). Validity is the extent to which the scores actually represent the variable they are intended to measure.
  • Validity is a judgment based on various types of evidence. The relevant evidence includes the measure’s reliability, whether it covers the construct of interest, and whether the scores it produces correlate with other variables as expected and do not correlate with conceptually distinct variables.
  • Good measurement begins with a clear conceptual definition of the construct to be measured. This is accomplished both by clear and detailed thinking and by a review of the research literature.
  • You often have the option of using an existing measure or creating a new measure. You should make this decision based on the availability of existing measures and their adequacy for your purposes.
  • Several simple steps can be taken in creating new measures and in implementing both existing and new measures that can help maximize reliability and validity.
  • Once you have used a measure, you should reevaluate its reliability and validity based on your new data. Remember that the assessment of reliability and validity is an ongoing process.


License

Advancing Evidence Based Nursing Research Copyright © by jobando; ffehr; gregsonk19; and stavingai23. All Rights Reserved.
