The preregistration can be found here. In what follows, we describe where we deviated from the preregistration.
In general, instead of analyzing self-disclosure, we now analyze communication behavior. Originally, we operationalized self-disclosure as the log of communication frequency plus likes and dislikes. However, we realized that this operationalization is too coarse to capture self-disclosure. We now analyze the sheer quantity of communication, which is much less debatable.
Initially, we planned to exclude all participants who finished the questionnaire in less than 6 minutes. However, we realized that this criterion would have excluded participants whose answers seemed perfectly fine. We therefore relaxed the criterion to 3 minutes, which led to the exclusion of 27 participants. Results changed only marginally and do not hinge on these exclusions (see additional analyses).
In the preregistration, we stated that we would measure expected benefits by means of five general items (e.g., “Using the participation platform had many benefits for me”). However, we had also designed additional items for more specific gratifications that we did not include in the preregistration (the preregistration manual stated that additional variables one does not plan to analyze need not be preregistered). These specific measures of gratifications were hence used for exploratory analyses.
Originally, we operationalized trust in three entities (i.e., provider, website, and other users) using four subdimensions (i.e., general trust, ability, benevolence, and integrity). Only later did we realize that the literature differentiates between general and specific trust beliefs, a distinction we could capture with the items we had measured. As a result, the paper now differentiates between these two dimensions.
Given that we found no apparent effects of the three websites on the privacy calculus, we did not test for indirect effects as proposed in the preregistration. In addition, we controlled for education (we forgot to mention this control variable explicitly in the preregistration; results do not differ, but we believe it should be included, which is why we did so).
In the preregistration, power analyses were conducted assuming 80% statistical power. However, in the meantime we have come to believe that, whenever logistically possible, one should strive for balanced alpha and beta errors, which, given an alpha of 5%, implies a desired power of 95%. As a result, we report power analyses aiming for 95% power.
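To illustrate the practical consequence of this deviation, the following minimal sketch compares the approximate per-group sample size required at 80% versus 95% power for a two-sided two-sample comparison, using the normal approximation; the effect size of d = 0.3 and the function name are hypothetical placeholders for illustration, not values or tools from our actual power analyses:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample z-test
    (normal approximation; d is a hypothetical standardized effect size)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # critical value for the desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Moving from 80% to 95% power noticeably increases the required n:
print(n_per_group(0.3, power=0.80))  # 175 per group
print(n_per_group(0.3, power=0.95))  # 289 per group
```

Under these illustrative assumptions, balancing alpha and beta errors increases the required per-group sample size by roughly two thirds.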
In the preregistration, we stated that we would subject the additional exploratory analyses to Bonferroni-Holm correction. However, in the meantime we have come to understand that formal inference tests for exploratory analyses are debatable, for example because the number of tests one could potentially correct for is infinite. Instead, we decided not to make strong inferences on the basis of the exploratory analyses and report p-values and confidence intervals only descriptively, as measures of precision.
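For reference, the Bonferroni-Holm procedure we originally preregistered is a simple step-down adjustment of p-values; the sketch below (with invented p-values and a hypothetical helper name) shows how it operates:

```python
def holm_adjust(pvals):
    """Bonferroni-Holm step-down adjusted p-values."""
    m = len(pvals)
    # Process p-values from smallest to largest.
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        # Step down: multiply the rank-th smallest p-value by (m - rank),
        # cap at 1, and enforce monotonicity across ranks.
        running_max = max(running_max, min(1.0, (m - rank) * pvals[idx]))
        adjusted[idx] = running_max
    return adjusted

# Invented p-values for illustration:
print([round(p, 4) for p in holm_adjust([0.01, 0.04, 0.03, 0.005])])
# [0.03, 0.06, 0.06, 0.02]
```

An adjusted value is compared against the nominal alpha; because the correction depends on the number of tests m, its interpretation becomes ambiguous when the set of exploratory tests is open-ended, which motivated our deviation.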