Resampling Methods: Testing Robustness and Reliability (but Really Replicability)

  • Paul D. Bliese

    Editor, Organizational Research Methods
    Professor, Department of Management
    Darla Moore School of Business
    University of South Carolina
    1014 Greene Street
    Columbia, SC 29208
    Phone: 803-777-5957

    Paul D. Bliese received a Ph.D. from Texas Tech University and a B.A. from Texas Lutheran University. After graduating in 1991, he worked for a year for the Bureau of Labor Statistics. In 1992, he joined the US Army, where he spent 22 years as a research psychologist at the Walter Reed Army Institute of Research (WRAIR). In 2009, he formed the Center for Military Psychiatry and Neuroscience at WRAIR and served as its Director until he retired at the rank of Colonel in 2014. Over his military career, Dr. Bliese directed a large portfolio of research initiatives examining stress, leadership, well-being, and performance. In this capacity, from 2007 to 2014, he oversaw the US Army’s Mental Health Advisory Team program assessing the morale and well-being of Soldiers deployed to Iraq and Afghanistan. His applied research was influential in policy decisions within the US Army and the Department of Defense. Throughout his professional career, Dr. Bliese has led efforts to advance statistical methods and apply analytics to complex organizational data. He developed and maintains the multilevel package for the open-source statistical programming language R, and his research has been influential in advancing organizational multilevel theory. Currently, Dr. Bliese is a professor in the Management Department of the Darla Moore School of Business at the University of South Carolina. He has served on numerous editorial boards, was an associate editor for the Journal of Applied Psychology from 2010 to 2016, and is the editor of Organizational Research Methods.


Abstract:
A simple modification of the non-parametric bootstrap can be used to count how often a finding is or is not statistically significant. In so doing, a researcher could report summary information of the form "an exact replication would be expected to find a statistically significant result XX% of the time" (the percent significant index). Two examples are provided: one using existing data and the other using simulated data from a published study. The percent significant index potentially represents a way to use statistical power in a post hoc fashion to help readers draw inferences about the replicability of findings. The idea can also be implemented efficiently without the bootstrap. This talk is designed to spur discussion of whether such an index would be useful, and to show that many published findings have only about a 50% probability of replicating.
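
To make the procedure concrete, here is a minimal sketch of one way such an index could be computed: resample the rows of the data with replacement, re-run the significance test on each resample, and report the proportion of resamples in which p falls below the chosen alpha. This is an illustration in Python rather than R, not code from the talk; the function name percent_significant, the use of a correlation test as the focal analysis, and defaults such as n_boot = 1000 and alpha = .05 are assumptions made for the example.

# Illustrative sketch only (assumed names and defaults; not code from the talk).
# Resample cases with replacement, re-run the significance test on each
# resample, and report the proportion of resamples in which p < alpha.
import numpy as np
from scipy import stats

def percent_significant(x, y, n_boot=1000, alpha=0.05, seed=0):
    """Proportion of non-parametric bootstrap resamples in which the x-y
    correlation (a simple stand-in for any focal test) is significant."""
    rng = np.random.default_rng(seed)
    n = len(x)
    hits = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)       # sample case indices with replacement
        _, p = stats.pearsonr(x[idx], y[idx])  # re-run the test on the resample
        hits += p < alpha
    return hits / n_boot

# Simulated example: a modest effect in a modest sample
rng = np.random.default_rng(1)
x = rng.normal(size=60)
y = 0.3 * x + rng.normal(size=60)
print(percent_significant(x, y))  # proportion of resamples reaching p < .05

The only change relative to a conventional non-parametric bootstrap is what gets tabulated from each resample: a count of significant results rather than a distribution of estimates, which is why the abstract describes it as a simple modification.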