Introduction

Understanding why people disclose personal information online remains a critical question for both society and research. Originally, it was assumed that online self-disclosure is erratic and that it cannot be predicted by people’s personal beliefs, concerns, or standpoints. Most prominently, the privacy paradox stated that people self-disclose vast amounts of personal information online despite having substantial concerns about their privacy (Barnes 2006; Taddicken and Jers 2011).

Somewhat surprisingly, and despite its popularity in the media (New York Public Radio 2018), the privacy paradox has garnered comparatively little empirical support. A recent meta-analysis reported a correlation between privacy concerns and self-disclosure on SNS of r = -.13 (Baruh, Secinti, and Cemalcilar 2017), which shows that privacy concerns are indeed often related to self-disclosure online.

Hence, rather than further pursuing the privacy paradox, a large share of current research builds on the so-called privacy calculus (Laufer and Wolfe 1977), which states that self-disclosure online can be explained—at least partly—by means of expected risks and expected benefits (Krasnova et al. 2010). Specifically, by operationalizing expected risks as privacy concerns, several studies have shown that experiencing greater privacy concerns is related to disclosing less information online, whereas expecting benefits is related to disclosing more information online (Heirman, Walrave, and Ponnet 2013; Koohikamali, French, and Kim 2019).

However, although the privacy calculus has gained momentum in academic research, several important questions remain unanswered. First, we still know little about whether the privacy calculus can be replicated with behavioral data in an authentic long-term setting (Kokolakis 2017). Thus far, most research supporting the privacy calculus has used either self-reports of behavior (e.g., Krasnova et al. 2010), vignette approaches (e.g., Bol et al. 2018), or one-shot experiments in the lab (e.g., Trepte, Scharkow, and Dienlin 2020). Still missing is a more long-term field study in which actual behavior is observed in an authentic context.

Second, current research on the privacy calculus is often criticized for not explicitly focusing on the deliberation process of self-disclosure. According to critics (e.g., Knijnenburg et al. 2017), showing that concerns and gratifications both correlate with self-disclosure is not evidence for an explicit weighing process of pros and cons. We agree. In this study, we therefore explicitly focus on the privacy deliberation process. Relatedly, and on a more general level, we explore the usefulness of further extending the privacy calculus model by adding new variables such as privacy deliberation, trust, and self-efficacy.

Finally, because the privacy calculus does not take place in a vacuum and because it is often argued that self-disclosure can be easily triggered by external circumstances, we analyze whether the privacy calculus is affected by the design of a website. Specifically, we investigate whether popularity cues such as like and dislike buttons have the power to affect the privacy calculus and to foster self-disclosure.

To test our research questions, we collected a representative sample of the German population and conducted a preregistered online field experiment. Participants were randomly assigned to one of three different websites, which included either a like button, both a like and a dislike button, or no buttons at all. Over the course of one week, participants had the chance to discuss a topical issue (i.e., prevention of terrorist attacks in Germany). Afterward, they answered a follow-up questionnaire with items measuring the privacy calculus variables.

The Privacy Calculus

Self-disclosure is a primary means of regulating privacy (e.g., Masur 2018). It is our key variable of interest. There are two different understandings of self-disclosure in the literature: The first limits self-disclosure to deliberate acts of sharing truthful information about the self with others (Jourard 1964). The second considers all acts of sharing information—be they active or passive, deliberate or unwitting—as self-disclosure, because each piece of information allows for meaningful inferences about a person (Watzlawick et al. 2011). In this paper we follow the latter approach, not least because recent years have illustrated how easy it is to derive personal insights simply by analyzing exchanged communication (Kosinski, Stillwell, and Graepel 2013). Moreover, independent of which position one adopts, it is possible to differentiate the content of self-disclosure into three dimensions: breadth (i.e., number of topics covered), depth (i.e., intimacy of topics covered), and length (i.e., quantity of disclosure) (Omarzu 2000). In this study we mainly focus on communication quantity as a proxy for self-disclosure. The relation between communication quantity and self-disclosure is not linear: Impressions are formed quickly, and the more we have already expressed about ourselves, the harder it becomes to self-disclose novel information.

Privacy concerns have been defined as follows: “Concerns about online privacy represent how much an individual is motivated to focus on his or her control over a voluntary withdrawal from other people or societal institutions on the Internet, accompanied by an uneasy feeling that his or her privacy might be threatened” (Dienlin, Masur, and Trepte 2019, 6). Previous research has found that people who are more concerned about their privacy than others are less likely to share personal information (Baruh, Secinti, and Cemalcilar 2017; Heirman, Walrave, and Ponnet 2013; Koohikamali, French, and Kim 2019).

H1: People are more likely to self-disclose on a website when they are less concerned about their privacy.

Although privacy concerns are related to self-disclosure, one can argue that most effects reported in the literature are only small, and that there should be additional factors explaining self-disclosure. For example, it has been argued that people trade a loss of privacy for a gain in gratifications (e.g., Taddicken and Jers 2011). The most prominent gratifications include social support (Krasnova et al. 2010), social capital (Ellison et al. 2011), entertainment (Dhir and Tsai 2017), information-seeking (Whiting and Williams 2013), and self-presentation (Min and Kim 2015).

H2: People are more likely to self-disclose on a website when they obtain more gratifications from using the website.

As mentioned above, there is still a shortage of studies explicitly analyzing the decision process behind the disclosure of information—although this point of criticism has been leveled several times (Knijnenburg et al. 2017) and although other fields such as behavioral economics have long focused on the underlying problem (Zhu et al. 2017). This criticism is justified. The observation that privacy concerns and expected gratifications are related to self-disclosure is by itself not sufficient evidence for an explicit weighing process. Hence, research on the privacy calculus would benefit from analyzing this decision process explicitly. Building on Omarzu (2000) and Altman (1976), we address a novel concept that might best be termed privacy deliberation, which captures the extent to which individuals explicitly compare potential positive and negative outcomes before communicating with others.

On the one hand, it seems plausible that deliberating about one’s privacy would dampen subsequent self-disclosure, because refraining from regular communication—the primary means of connecting with others—requires at least a minimum of active and hence deliberate restraint. On the other hand, deliberating about one’s privacy might also increase self-disclosure, because a person concerned about his or her privacy might arrive at the conclusion that in this situation self-disclosure is not only appropriate but expedient. In light of the lack of empirical studies and the plausibility of both effects, we formulate the following research question:

RQ1: Are people more or less likely to self-disclose on a website depending on how actively they deliberate about whether they should self-disclose?

Several attempts have already been made to expand the privacy calculus, introducing additional variables such as self-efficacy or trust (Dinev and Hart 2006). Self-efficacy in the context of the privacy calculus captures whether people believe in their own capacity to implement particular privacy behaviors in the future (Dienlin and Metzger 2016). These privacy behaviors refer to either self-disclosure (e.g., publishing a blog post) or self-withdrawal (e.g., deleting inappropriate content). People who report more privacy self-efficacy also engage in more self-withdrawal (Chen 2018). In light of our focus on active communication, in this study we investigate the influence of self-disclosure self-efficacy.

H3: People are more likely to self-disclose on a website when their self-efficacy about self-disclosing on the website is higher.

The next variable, trust, can be conceptualized in two different ways (Gefen, Karahanna, and Straub 2003): It either captures “specific beliefs dealing primarily with the integrity, benevolence, and ability of another party” (Gefen, Karahanna, and Straub 2003, 55, emphasis added) or a “general belief that another party can be trusted” (Gefen, Karahanna, and Straub 2003, 55, emphasis added). Whereas specific trust focuses on the causes of trust, general trust emphasizes the experience of trust. Gefen, Karahanna, and Straub (2003) prioritize specific trust (p. 60). In the online context, it is also important to differentiate among several targets of trust (Söllner, Hoffmann, and Leimeister 2016). Potential targets include (a) the information system, (b) the provider, (c) the Internet, and (d) the community of other users (Söllner, Hoffmann, and Leimeister 2016). Trust plays a key role in online communication (Metzger 2004). For example, people who put more trust in the providers of networks also disclose more personal information (Li 2011).

H4: People are more likely to self-disclose on a website when they have greater trust in the provider, the website, and the other users.

The Effect of Popularity Cues

How does the communication context affect the privacy calculus and self-disclosure? First, it has often been noted that researchers should not exclusively focus on specific features of particular websites, for they are prone to change and to quickly become obsolete (Fox and McEwan 2017). Instead, it has been suggested to prioritize the underlying latent structures by analyzing so-called affordances (Ellison and Vitak 2015; Fox and McEwan 2017). The concept of affordances was developed by Gibson (2015), who argued that it is not the objective features of objects that determine behavior, but our subjective perceptions. Affordances are mental representations of how objects might be used; as such, they are by definition subjective. There is an ongoing debate on what exactly defines an affordance (Evans et al. 2017). For example, whereas Evans et al. (2017) propose three affordances for mediated communication (i.e., anonymity, persistence, and visibility), Fox and McEwan (2017) suggest 10 affordances for SNSs alone (i.e., accessibility, bandwidth, social presence, privacy, network association, personalization, persistence, editability, conversation control, and anonymity).

As the privacy calculus states that both benefits and costs determine behavior, we suggest that popularity cues such as like and dislike buttons—which are categorized as “paralinguistic digital affordances” (Carr, Hayes, and Sumner 2018, 142)—map neatly onto the two sides of the privacy calculus. The like button is positive and as such a potential benefit: It expresses an endorsement, a compliment, a reward (Carr, Hayes, and Sumner 2018; Sumner, Ruge-Jones, and Alcorn 2017). The dislike button is negative and therefore a potential cost: It expresses criticism and is a major means of downgrading content.

Paralinguistic digital affordances, and popularity cues specifically, can affect behavior (Krämer and Schäwel 2020; Trepte, Scharkow, and Dienlin 2020). For example, a large-scale field experiment in which 101,281 comments were analyzed found that comments with dislikes were more likely to receive further dislikes (Muchnik, Aral, and Taylor 2013). Stroud, Muddiman, and Scacco (2017) demonstrated that when users disagreed with a post, they were more likely to click on a button labeled respect compared to a button labeled like. The potentially stark negative effect of the dislike button might also explain why to date only a handful of major websites have implemented it (e.g., YouTube, Reddit, or Stack Exchange).

In this vein, it seems plausible that popularity cues might also impact the privacy calculus (Krämer and Schäwel 2020), and that they serve as a means of reward and punishment. Receiving a like online is similar to receiving a compliment offline. Likes are positive and represent the positivity bias typical of social media (Reinecke and Trepte 2014). Introducing the option to receive likes might thereby afford and emphasize a gain frame (see also Rosoff, Cui, and John 2013). These gains can be garnered only through participation. In addition, because like buttons emphasize positive outcomes, privacy concerns are likely to decrease. Finally, in situations where there is more to win, people should deliberate more actively about whether or not to disclose information.

H5: Compared to people who use a website without like or dislike buttons, people who use a website with like buttons (a) self-disclose more, (b) obtain more gratifications, (c) are less concerned about their privacy, and (d) deliberate more about whether they should communicate online.

By contrast, receiving a dislike should feel more like a punishment. Dislikes introduce a loss frame. Although most communication emphasizes positive aspects, the Internet is also replete with spite, envy, and arguments. As a result, websites featuring both like and dislike buttons should be more ambivalent compared to websites without any popularity cues. In online contexts, gains often outweigh losses, which is why having both types of popularity cues might still lead to more gratifications and self-disclosure. However, privacy concerns should no longer be reduced: Because people who are more concerned about their privacy are also shyer and more risk-averse (Dienlin 2017), implementing the dislike button might increase privacy concerns, thereby canceling out the positive effects of the like button. And because both wins and losses can accrue, participants should deliberate even more about whether or not to disclose.

H6: Compared to people who use a website without like or dislike buttons, people who use a website with like and dislike buttons (a) self-disclose more, (b) obtain more gratifications, and (c) deliberate more about whether they should communicate online.

When directly comparing websites that include both like and dislike buttons with websites that include only like buttons, the rationales presented above suggest that the former should lead to more privacy concerns and more privacy deliberation.

H7: Compared to people who use a website with only like buttons, people who use a website with like and dislike buttons (a) are more concerned about their privacy, and (b) deliberate more about whether they should communicate online.

For a simplified overview of our theoretical model, see Figure @ref(fig:model).

Overview of theoretical model.

Methods

Open Science

The online supplementary material (OSM) of this study includes the data, research materials, analysis scripts, and a reproducible version of this manuscript, all of which can be found on the manuscript’s companion website (https://tdienlin.github.io/privacy_calc_exp). We preregistered the study using the registration form OSF Prereg, which includes the hypotheses, sample size, research materials, analyses, and exclusion criteria (see https://osf.io/a6tzc/?view_only=5d0ef9fe5e1745878cd1b19273cdf859). We needed to change our predefined plan in some cases; for a full account of all changes, see the OSM. New analyses that were not preregistered appear in the section Exploratory analyses.

Procedure

The study was designed as an online field experiment with three different groups. The first group used a website without like or dislike buttons, the second the same website but with like buttons only, and the third the same website but with both like and dislike buttons. Participants were randomly assigned to one of the three websites in a between-subjects design.

We collaborated with a professional market research company to recruit participants. As an incentive, participants were awarded digital points, which they could redeem for special offers from other online commerce services. Participants were above the age of 18 and lived in Germany. In a first step, the agency sent its panel members an invitation to participate in the study (invitation). In this invitation, panel members were asked to participate in a study analyzing the current threat posed by terrorist attacks in Germany.1 Members who decided to take part were subsequently sent the first questionnaire (T1), in which we (a) asked about their sociodemographics, (b) provided more details about the study, and (c) included a registration link for the website, which was described as a “participation platform”. Afterward, participants were randomly assigned to one of the three websites. After registration was completed, participants were invited (but not obliged) to discuss the topic of the terrorism threat in Germany over the course of one week (field). Subsequently, participants received a follow-up questionnaire in which the self-reported measures were collected (T2). Measures were collected after rather than before the field phase in order not to prime participants or reveal our primary research interest.

We programmed the website based on the open-source software Discourse (https://www.discourse.org/). We conducted several pretests with students from the local university to make sure the website had an authentic feel (see Figure @ref(fig:website)). Participants used the website actively: Overall, they spent 162 hours online, wrote 1,171 comments, and clicked on 560 popularity cues. Notably, we did not find any instances of people providing meaningless text. For an example of the communication that took place, see Figure @ref(fig:comments).

The website’s homepage. (Translated to English.)

Communication that took place on the website with like and dislike buttons. (Translated to English.)

Participants

We ran a priori power analyses to determine how many participants to recruit. The power analysis was based on a smallest effect size of interest (SESOI; Lakens, Scheel, and Isager 2018). In other words, we defined a minimum effect size that we considered sufficiently large to support our hypotheses. Because small effects should be expected when researching aspects of privacy online (e.g., Baruh, Secinti, and Cemalcilar 2017), with standardized small effects beginning at r = .10 (Cohen 1992), we set our SESOI to r = .10. Our aim was to detect this SESOI with a probability of at least 95%. Using the regular alpha level of 5%, basic power analyses revealed a minimum sample size of n = 1,077. In the end, we were able to include n = 559 participants in our analyses (see below). This means that our study had a probability (power) of 77% to find an effect at least as large as r = .10. Put differently, we were able to make reliable inferences (i.e., power = 95%) about effects at least as large as r = .14.
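These figures can be reproduced with the pwr package (Champely 2018) reported in the Data analysis section; the following is a minimal sketch, where the one-sided specification is our assumption, consistent with the directional hypothesis tests:

```r
library(pwr)

# Sample size needed to detect the SESOI of r = .10 with 95% power
pwr.r.test(r = .10, sig.level = .05, power = .95,
           alternative = "greater")  # yields n of about 1,077

# Power achieved for the SESOI with the final sample of n = 559
pwr.r.test(n = 559, r = .10, sig.level = .05,
           alternative = "greater")  # yields power of about .77

# Smallest effect detectable with 95% power given n = 559
pwr.r.test(n = 559, sig.level = .05, power = .95,
           alternative = "greater")  # yields r of about .14
```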

We collected a representative sample of the German population in terms of age, sex, and federal state. 1,619 participants completed the survey at T1, 960 participants created a user account on the website, and 982 participants completed the survey at T2. Using tokens and IP addresses, we connected the data from T1, participants’ behavior on the website, and T2 by means of objective and automated processes. The data of several participants could not be matched for technical reasons, for example because they used different devices for the respective steps. In the end, the data of n = 590 participants could be matched successfully. We excluded n = 29 participants who finished the questionnaire at T2 in less than three minutes, which we considered unreasonably fast.2 To detect potentially problematic data, we calculated Cook’s distance; we excluded two participants who showed clear response patterns (i.e., straight-lining). The final sample included 559 participants. The sample characteristics were as follows: T1: mean age = 45 years, 49% male, 22% with a college degree; T2: mean age = 46 years, 49% male, 29% with a college degree. One participant did not report his or her sex.

Measures

In what follows, we present the materials we used to measure the variables. Wherever possible, we operationalized the variables using established measures. Where this was impossible (for example, to date there exists no scale for privacy deliberation), we designed novel items, which we pretested for legibility and comprehensibility. To assess factorial validity we ran confirmatory factor analyses (CFAs). If the CFAs revealed insufficient fit, we deleted malfunctioning items. All items were formulated as statements to which participants indicated their (dis)agreement on a bipolar 7-point scale. Answer options were visualized as follows: -3 (strongly disagree), -2 (disagree), -1 (slightly disagree), 0 (neutral), +1 (slightly agree), +2 (agree), +3 (strongly agree). For the analyses, answers were coded from 1 to 7. In the questionnaire, all items measuring a variable were presented on the same page in randomized order.

For an overview of the means, standard deviations, factorial validity, and reliability, see Table @ref(tab:CFA). For an overview of the variables’ distributions, see Figure @ref(fig:corrplot). For the exact wording of all items and their individual distributions, see OSM.

(#tab:CFA)
Psychometric Properties, Factorial Validity, and Reliability of Measures

| Measure                 | M    | SD   | χ²     | df | p    | CFI  | TLI  | RMSEA | SRMR | ω    | AVE  |
|-------------------------|------|------|--------|----|------|------|------|-------|------|------|------|
| Privacy concerns        | 3.21 | 1.51 | 11.04  | 9  | 0.27 | 1.00 | 1.00 | 0.02  | 0.01 | 0.96 | 0.80 |
| General gratifications  | 4.76 | 1.22 | 34.03  | 5  | 0.00 | 0.98 | 0.95 | 0.10  | 0.02 | 0.93 | 0.74 |
| Specific gratifications | 4.71 | 1.02 | 269.77 | 85 | 0.00 | 0.94 | 0.93 | 0.06  | 0.05 | 0.93 | 0.59 |
| Privacy deliberation    | 3.93 | 1.29 | 15.55  | 5  | 0.01 | 0.98 | 0.96 | 0.06  | 0.02 | 0.84 | 0.53 |
| Self-efficacy           | 5.25 | 1.12 | 3.23   | 1  | 0.07 | 0.99 | 0.96 | 0.06  | 0.01 | 0.86 | 0.59 |
| General trust           | 5.21 | 1.04 | 2.07   | 1  | 0.15 | 1.00 | 0.99 | 0.04  | 0.01 | 0.86 | 0.70 |
| Specific trust          | 5.08 | 0.94 | 99.48  | 26 | 0.00 | 0.96 | 0.94 | 0.07  | 0.04 | 0.92 | 0.62 |

Note. ω = Raykov’s composite reliability coefficient omega; AVE = average variance extracted.
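As a minimal sketch of how such indices can be computed with lavaan and semTools, consider the following example; the item names and the data object `d` are hypothetical placeholders, and the robust estimator mirrors the one used for the structural models:

```r
library(lavaan)
library(semTools)

# Single-factor CFA for one scale; pc1-pc6 are placeholder item names
cfa_model <- 'concerns =~ pc1 + pc2 + pc3 + pc4 + pc5 + pc6'
fit_cfa <- cfa(cfa_model, data = d, estimator = "MLR")

# Global fit indices as reported in the table
fitMeasures(fit_cfa, c("chisq", "df", "pvalue",
                       "cfi", "tli", "rmsea", "srmr"))

# Composite reliability (omega) and average variance extracted (avevar)
reliability(fit_cfa)
```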

Above diagonal: zero-order correlation matrix; diagonal: density plots for each variable; below diagonal: bivariate scatter plots for zero-order correlations. Solid regression lines represent linear regressions, dotted regression lines represent quadratic regressions. Calculated with the model-predicted values for each variable (baseline model).

Privacy concerns

Privacy concerns were measured with seven items based on Buchanan et al. (2007). One example item was “When using the participation platform, I had concerns about my privacy”. One item was deleted due to poor psychometric properties.

Gratifications

We differentiated between two separate types of gratifications. General gratifications were measured with five items based on Sun et al. (2015). One example item was “Using the participation platform has paid off for me”. Specific gratifications were measured with 15 items on five different subdimensions with three items each. The scale was based on Scherer and Schlütz (2002). Example items were: “Using the participation platform made it possible for me to” … “learn things I would not have noticed otherwise” (information), “react to a subject that is important to me” (relevance), “engage politically” (political participation), “try to improve society” (idealism), and “soothe my guilty conscience” (extrinsic benefits).

Privacy deliberation

Privacy deliberation was measured with five self-designed items. One example item was “While using the participation platform I have weighed the advantages and disadvantages of writing a comment.”

Self-efficacy

Self-efficacy was captured with six self-designed items, which measured whether participants felt that they had sufficient self-efficacy to write a comment on the website. For example, we asked “I felt technically competent enough to write a comment.” Two inverted items were deleted due to poor psychometric properties.

Trust

We differentiated between two types of trust. General trust was operationalized based on Söllner, Hoffmann, and Leimeister (2016), addressing three targets (i.e., provider, website, and other users) with one item each. One example item was “The operators of the participation platform seemed trustworthy.” Specific trust was operationalized for the same three targets with three subdimensions each (i.e., ability, benevolence/integrity, and reliability), which were measured with one item each. Example items were “The operators of the participation platform have done a good job” (ability), “The other users had good intentions” (benevolence/integrity), and “The website worked well” (reliability). The results showed that the provider and website targets were not sufficiently distinct, as was evidenced by a Heywood case. We hence adapted the scale to combine these two targets. The updated scale exhibited adequate fit.

Self-disclosure

Self-disclosure was calculated as the logarithm of the number of words each participant wrote in comments, plus twice the number of likes and dislikes the participant gave (as preregistered). The number of likes and dislikes was multiplied by two because, rudimentarily, the like button abbreviates the sentence “I like” and the dislike button “I dislike”. The sum of words and likes/dislikes was log-scaled because the relative amount of self-disclosure diminishes the more a person has already expressed.
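In code, the index might be computed as follows; this is a sketch with hypothetical column names, and the +1 offset that avoids log(0) for participants who neither wrote nor clicked is our assumption, as this implementation detail is not spelled out above:

```r
# Sketch of the self-disclosure index; `words`, `likes_given`, and
# `dislikes_given` are hypothetical column names
d$self_disclosure <- log(d$words + 2 * (d$likes_given + d$dislikes_given) + 1)
```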

Data analysis

All hypotheses and research questions were tested using structural equation modeling with latent variables. The influence of the three websites was analyzed using contrast coding, which allows for testing the effects of experimental manipulations within a theoretical framework while using latent variables (Kline 2016). Because the dependent variable self-disclosure was not normally distributed, we estimated the model using robust maximum likelihood (Kline 2016). As recommended by Kline (2016), to assess global fit we report the model’s \(\chi^2\), RMSEA (90% CI), CFI, and SRMR. Because sociodemographic variables are often related to self-disclosure and other privacy-related variables (Dindia and Allen 1992), we controlled all variables for the influence of sex, age, and education. Preregistered hypotheses were tested with a one-sided significance level of 5%. Research questions were tested with a two-sided 5% significance level using family-wise Bonferroni-Holm correction. Exploratory analyses were conducted from a descriptive perspective, which is why the reported p-values and confidence intervals should not be overinterpreted.
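A skeletal lavaan version of such a model might look as follows; variable and item names are hypothetical, c1 and c2 denote the two contrast codes for the three websites, and the measurement models are abbreviated:

```r
library(lavaan)

model <- '
  # Measurement models (item names are placeholders)
  concerns       =~ pc1 + pc2 + pc3
  gratifications =~ gr1 + gr2 + gr3
  deliberation   =~ de1 + de2 + de3
  self_efficacy  =~ se1 + se2 + se3
  trust          =~ tr1 + tr2 + tr3

  # Structural model with contrast-coded websites and sociodemographic controls
  self_disclosure ~ concerns + gratifications + deliberation +
                    self_efficacy + trust + c1 + c2 + sex + age + education
'

# Robust maximum likelihood to accommodate the non-normal outcome
fit <- sem(model, data = d, estimator = "MLR")
fitMeasures(fit, c("chisq", "df", "cfi", "rmsea", "srmr"))
summary(fit, standardized = TRUE, ci = TRUE)
```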

We used R (Version 3.6.1; R Core Team 2018) and the R-packages lavaan (Version 0.6.5; Rosseel 2012), papaja (Version 0.1.0.9942; Aust and Barth 2018), pwr (Version 1.2.2; Champely 2018), quanteda (Version 1.5.2; Benoit 2018), semTools (Version 0.5.2; Jorgensen et al. 2018), and tidyverse (Version 1.3.0; Wickham 2017) for all our analyses.

Results

Descriptive Analyses

We first computed and plotted all bivariate relations between the study variables (see Figure @ref(fig:corrplot)). The results did not reveal any markedly curvilinear relationships. Furthermore, all privacy calculus variables demonstrated the expected relationships with self-disclosure. For example, people who were more concerned about their privacy disclosed less information (r = -.16). Notably, specific gratifications predicted self-disclosure better than general gratifications (r = .23 vs. r = .13). The mean of privacy deliberation was m = 3.93. Altogether, 32% of participants reported having actively deliberated about their privacy.

It is important to note that the bivariate results showed three large correlations: First, between specific trust and general gratifications (r = .79); second, between privacy concerns and privacy deliberation (r = .61); third, between specific gratifications and self-efficacy (r = .55). As all six variables were later analyzed within a single multiple regression, problems of multicollinearity might occur.
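Such latent correlations can be read directly off a fitted lavaan model; continuing the hypothetical fit object from the sketch above:

```r
# Latent-variable correlation matrix; values approaching |.80|
# flag potential multicollinearity among predictors
lavInspect(fit, what = "cor.lv")
```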

Privacy Calculus

Preregistered analyses

First, we ran the model as specified in the preregistration. The model fit our data acceptably, \(\chi^2\)(388) = 953.45, \(p\) < .001, CFI = .94, RMSEA = .05, 90% CI [.05, .05], SRMR = .05. Regarding H1, privacy concerns did not significantly predict self-disclosure (\(\beta\) = .07, \(b\) = 0.14, 95% CI [-0.19, 0.47], \(z\) = 0.84, \(p\) = .199; one-sided). Regarding H2, we did not find that general gratifications predicted self-disclosure (\(\beta\) = -.04, \(b\) = -0.06, 95% CI [-0.22, 0.09], \(z\) = -0.78, \(p\) = .217; one-sided). Concerning RQ1, privacy deliberation was not significantly related to self-disclosure (\(\beta\) = -.10, \(b\) = -0.16, 95% CI [-0.34, 0.02], \(z\) = -1.72, \(p\) = .085; two-sided). Regarding H3, however, we found that experiencing self-efficacy substantially predicted self-disclosure (\(\beta\) = .38, \(b\) = 0.78, 95% CI [0.49, 1.07], \(z\) = 5.29, \(p\) < .001; one-sided). Concerning H4, trust was not associated with self-disclosure (\(\beta\) = -.12, \(b\) = -0.30, 95% CI [-0.83, 0.22], \(z\) = -1.13, \(p\) = .129; one-sided).

However, these results should be treated with caution, because they indeed exhibited problems typical of multicollinearity, such as large standard errors or “wrong” signs of the predictors (Grewal, Cote, and Baumgartner 2004). For example, in the multiple regression trust had a negative relation with self-disclosure, whereas in the bivariate analysis it was positive.

Exploratory analyses

Thus, we slightly adapted our preregistered model on the basis of the insights described above. First, instead of specific trust and general gratifications we now included general trust and specific gratifications, which were correlated slightly less strongly. The adapted model fit our data comparatively well, \(\chi^2\)(507) = 1501.14, \(p\) < .001, CFI = .93, RMSEA = .06, 90% CI [.06, .06], SRMR = .06.

In the adapted privacy calculus model, specific gratifications were positively related to self-disclosure online (\(\beta\) = .16, \(b\) = 0.46, 95% CI [0.06, 0.86], \(z\) = 2.26, \(p\) = .024). Furthermore, people who deliberated more about their privacy disclosed less information (\(\beta\) = -.13, \(b\) = -0.20, 95% CI [-0.39, -0.02], \(z\) = -2.17, \(p\) = .030; two-sided). Self-efficacy remained substantially related to self-disclosure (\(\beta\) = .33, \(b\) = 0.68, 95% CI [0.40, 0.96], \(z\) = 4.78, \(p\) < .001; two-sided). However, we again found a negative relation between trust and self-disclosure (\(\beta\) = -.18, \(b\) = -0.53, 95% CI [-0.96, -0.10], \(z\) = -2.44, \(p\) = .015; two-sided), which again implies multicollinearity.

When confronted with multicollinearity, two responses are typically recommended (Grewal, Cote, and Baumgartner 2004): (a) combining collinear variables into a single measure, or (b) keeping only one of the collinear variables. Combining variables was not an option in our case, because trust and expected benefits are theoretically distinct constructs. Because several variables were closely related to one another, we instead decided to fit a simple privacy calculus model containing only privacy concerns and specific gratifications.

The simple model fit our data well, \(\chi^2\)(202) = 712.53, \(p\) < .001, CFI = .95, RMSEA = .07, 90% CI [.06, .07], SRMR = .05. First, we found that people who experienced more privacy concerns than others disclosed less information (\(\beta\) = -.14, \(b\) = -0.20, 95% CI [-0.32, -0.08], \(z\) = -3.26, \(p\) = .001; two-sided). Second, people who reported more specific gratifications than others self-disclosed more information (\(\beta\) = .22, \(b\) = 0.64, 95% CI [0.36, 0.93], \(z\) = 4.45, \(p\) < .001; two-sided). Both effect sizes were above our predefined SESOI of r = .10, which implies that they were large enough to be theoretically relevant.

When comparing the three models with one another, the adapted model explained the most variance in self-disclosure (17.56%), followed by the preregistered model (16.34%) and the simple privacy calculus model (8.03%). At the same time, the simple privacy calculus model was the most parsimonious one (BIC = 37,168, AIC = 36,567), followed by the preregistered model (BIC = 48,949, AIC = 48,097) and the adapted model (BIC = 57,409, AIC = 56,441). For a visual overview of all results, see Figure @ref(fig:plotpc).
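Explained variance and information criteria for such comparisons can be extracted from each fitted lavaan object, for example:

```r
# Explained variance in the dependent variable
lavInspect(fit, what = "rsquare")["self_disclosure"]

# Information criteria; lower values indicate the more parsimonious model
fitMeasures(fit, c("aic", "bic"))
```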

Predictors of self-disclosure. Displayed are the 95% CIs of unstandardized effects.

Popularity Cues

Preregistered analyses

In a next step, we analyzed the potential effects of the popularity cues. For example, we expected that websites with like buttons would lead to more self-disclosure, gratifications, and privacy deliberation, and to lower privacy concerns. Somewhat surprisingly, we found no effects of the popularity cues on the privacy calculus variables whatsoever. For an illustration, see Figure @ref(fig:popularitycues), which displays the model-predicted values for each variable (using the baseline model). The confidence intervals of all preregistered variables overlap, illustrating that there were no statistically significant differences across websites. For the detailed results of the specific inference tests using contrasts, see the OSM.

Exploratory analyses

The picture remained the same when analyzing variables not included in the preregistration. Note that some differences missed statistical significance only marginally (e.g., specific gratifications for the comparison between the website with like buttons and the control website without like and dislike buttons). Nevertheless, we refrain from reading too much into these differences and conclude that the three websites were comparable regarding the privacy calculus variables and the amount of self-disclosure.

Overview of the model-predicted values for each variable, separated for the three websites. Control: website without buttons. Like: website with like buttons. Like & Dislike: website with like and dislike buttons.

Discussion

In this study, we analyzed the privacy calculus using actual observed behavior in a preregistered field experiment. We additionally asked whether the privacy calculus is affected by popularity cues such as like and dislike buttons. The data stem from a representative sample of the German population and were analyzed using structural equation modeling with latent variables.

In the bivariate analyses, all privacy calculus variables significantly predicted self-disclosure. In the preregistered analyses using multiple regression, however, only self-efficacy significantly predicted self-disclosure. All other variables were not significant. The preregistered extended privacy calculus model was therefore not supported by the data. However, the model showed problems typical of multicollinearity, which is why we also explored (a) an adapted version of the preregistered model, in which we exchanged two variables, and (b) a simple privacy calculus model, which included only privacy concerns and specific gratifications.

The adapted model suggests that even when holding all other variables constant, people who deliberate more about their privacy disclose less, and people who expect more specific gratifications and who feel more self-efficacious disclose more. However, the model also suggests that if trust increases while all other factors remain constant, self-disclosure decreases. This seems theoretically implausible. As a result, we also fit the above-mentioned simple privacy calculus model, which showed that both privacy concerns and obtained gratifications significantly and meaningfully predicted self-disclosure. Taken together, the results support the privacy calculus framework and suggest that—at least in specific contexts—self-disclosure online is not erratic but can be explained by several psychological variables.

Aligned with this observation, the results also suggest that in new communication contexts at least one third of all Internet users actively deliberate about their privacy. Determining whether this figure is large or small is a normative question. Although the share seems substantial to us, one could argue that it should be higher and that more people should actively deliberate about their self-disclosure practices online. Interestingly, privacy deliberation and privacy concerns behaved remarkably similarly, as evidenced by their strong correlation with one another and their comparable correlations with other variables. This either implies that thinking about one’s privacy increases one’s concern or, conversely, that being concerned about one’s privacy leads one to think about one’s options more actively. Future research might tell.

The next major implication is that popularity cues do not always seem to have a strong influence on the privacy calculus and self-disclosure. Although some studies have found that popularity cues can substantially impact behavior (e.g., Muchnik, Aral, and Taylor 2013), in our study we found no such effects. Users disclosed the same amount of personal information regardless of whether or not a website included like or dislike buttons, potentially highlighting the agency of users. This is of course not to say that popularity cues have no impact on the privacy calculus in general. Instead, the results only suggest that there exist contexts in which the influence of popularity cues is negligible.

The results also have several more fine-grained implications. First, one can question the tendency to further increase the complexity of the privacy calculus model by adding additional variables (e.g., Dienlin and Metzger 2016). “Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration. […] Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity” (Box 1976, 792). Specifically, we have come to believe that adding self-efficacy to privacy calculus models is of limited value, because self-efficacy is often only a self-reported proxy of behavior offering little epistemic insight. Instead, it might be more interesting to find out why some people feel sufficiently efficacious to self-disclose whereas others do not. In addition, although adding variables increases the amount of explained variance, it introduces further problems, for example spurious results due to multicollinearity.

Interestingly, multicollinearity might not even be a problem per se, but rather a helpful warning sign. From a statistical perspective, strongly correlated predictors only mean that standard errors become larger (Vanhove 2019). In other words, when predictors are strongly correlated we can be less certain about the effects we obtain, because there is less unique variance (Vanhove 2019). As a remedy, researchers could simply collect larger samples, which would increase statistical power and precision. Fortunately, using accessible statistical software it is now possible to run a priori power analyses that explicitly account for correlated/collinear predictors (Wang and Rhemtulla 2020).
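A toy simulation illustrates the point that collinearity mainly inflates standard errors; the correlation of .79 mirrors the one we observed between specific trust and general gratifications, while all other values are arbitrary:

```r
set.seed(42)
n <- 559
x1 <- rnorm(n)
x2 <- .79 * x1 + sqrt(1 - .79^2) * rnorm(n)  # predictor correlated at ~.79
y  <- .2 * x1 + .2 * x2 + rnorm(n)

# Compare the standard errors of x1 with and without the collinear x2
summary(lm(y ~ x1))
summary(lm(y ~ x1 + x2))  # larger standard errors, less certainty
```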

From a theoretical perspective, multicollinearity could also suggest that the underlying theoretical model is ill-configured. Multiple regression is often used with the aim of isolating effects, to make sure that they are not simply caused by a third variable. In cases of highly correlated measures, however, this often makes little sense theoretically. For example, in our case, including both trust and gratifications asks how increasing benefits affects self-disclosure while holding trust constant. Theoretically, however, it is more plausible to assume that increasing gratifications also automatically increases trust (Söllner, Hoffmann, and Leimeister 2016). In the preregistered analysis we went even further and tested whether trust increases self-disclosure while holding constant gratifications, privacy concerns, privacy deliberation, and self-efficacy—measures that are all strongly correlated. In short, the effects we found could even be correct, but their interpretation is more difficult, potentially artificial, and thereby of little theoretical and practical value.

Furthermore, we found a surprisingly strong correlation between specific trust and expected gratifications (i.e., r = .79). At first glance, this strong relation seemed somewhat peculiar to us. On closer inspection, however, we realized that the way trust is typically operationalized is remarkably close to expected gratifications. To illustrate, the trust subdimension ability includes items such as “The comments of other users were useful”. In fact, in the literature trust is often operationalized as a formative construct that directly results from factors such as expected benefits (Söllner, Hoffmann, and Leimeister 2016). In conclusion, our results suggest that causes of trust should not be confused with measures of trust, for this might introduce problems of homogeneity and/or multicollinearity. Instead, we recommend using general and reflective measures of trust.

Limitations

The results do not allow for causal interpretation on the within-person level. First, all results are based on analyses of between-person variance. However, between-person relations often do not translate well to within-person effects (Hamaker, Kuiper, and Grasman 2015). While some studies on privacy concerns online have begun to examine both sources of variance (Dietvorst et al. 2017), similar analyses are still lacking for the privacy calculus.

Second, the self-reported measures were collected after the field phase in which the dependent variable was measured. As a result, the coefficients might overestimate the actual relations, because demand effects might have led participants to artificially align their answers with their prior behavior. Nevertheless, we deliberately decided to measure the self-reported variables afterward in order not to bias participants.

Third, the stable unit treatment value assumption states that in experiments we should manipulate only the experimental variable while holding all others constant (Kline 2016). In this study, we explicitly manipulated the popularity cues. However, because the experiment was conducted in the field, several other variables could not be held constant, including the content of communication by other users, the unfolding communication dynamics, and the characteristics of other users. As a result, the stable unit treatment value assumption was violated.

Again, although we did not find significant effects of like and dislike buttons in this study, this does not mean they have no effect on the privacy calculus in general. Null findings pose the Duhem–Quine problem (Dienes 2008), which—put somewhat crudely—states that null findings can result either from the actual non-existence of effects or from a poor operationalization of the research question. In this case, we were not able to send participants notifications when their comments were liked or disliked, which substantially decreased the popularity cues’ salience.

This paper analyzes self-disclosure in the context of political participation. Our focus was on understanding self-disclosure, which is why we deliberately excluded variables pertaining to political participation, such as informational self-efficacy (Loy et al. 2018). Moreover, operationalizing self-disclosure via communication quantity is, of course, only a proxy.

Conclusion

Whereas some scholars discuss whether we should wish “Death to the privacy calculus?” (Knijnenburg et al. 2017, 1), we think that the privacy calculus is alive and kicking. In this study, people who were more concerned about their privacy than others disclosed less information online, whereas people who received more gratifications from using a website than others disclosed more information online. In addition, the results suggest that a substantial share of Internet users, approximately one third, consciously engage in a privacy calculus by actively deliberating about whether or not to disclose information. Popularity cues such as like and dislike buttons seem to play only a minor role in this process. In conclusion, the results provide further evidence against the privacy paradox. Internet users are at least somewhat proactive and reasonable—maybe no more or less proactive or reasonable than in other everyday situations.

References

Altman, Irwin. 1976. “Privacy: A Conceptual Analysis.” Environment and Behavior 8 (1): 7–29. https://doi.org/10.1177/001391657600800102.

Aust, Frederik, and Marius Barth. 2018. papaja: Create APA Manuscripts with R Markdown. https://github.com/crsh/papaja.

Barnes, Susan B. 2006. “A Privacy Paradox: Social Networking in the United States.” First Monday 11 (9).

Baruh, Lemi, Ekin Secinti, and Zeynep Cemalcilar. 2017. “Online Privacy Concerns and Privacy Management: A Meta-Analytical Review.” Journal of Communication 67 (1): 26–53. https://doi.org/10.1111/jcom.12276.

Benoit, Kenneth. 2018. Quanteda: Quantitative Analysis of Textual Data. https://doi.org/10.5281/zenodo.1004683.

Bol, Nadine, Tobias Dienlin, Sanne Kruikemeier, Marijn Sax, Sophie C. Boerman, Joanna Strycharz, Natali Helberger, and Claes H. Vreese. 2018. “Understanding the Effects of Personalization as a Privacy Calculus: Analyzing Self-Disclosure Across Health, News, and Commerce Contexts.” Journal of Computer-Mediated Communication 23 (6): 370–88. https://doi.org/10.1093/jcmc/zmy020.

Box, George E. P. 1976. “Science and Statistics.” Journal of the American Statistical Association 71 (356): 791–99. https://doi.org/10.1080/01621459.1976.10480949.

Buchanan, Tom, Carina Paine, Adam N. Joinson, and Ulf-Dietrich Reips. 2007. “Development of Measures of Online Privacy Concern and Protection for Use on the Internet.” Journal of the American Society for Information Science and Technology 58 (2): 157–65. https://doi.org/10.1002/asi.20459.

Carr, Caleb T., Rebecca A. Hayes, and Erin M. Sumner. 2018. “Predicting a Threshold of Perceived Facebook Post Success via Likes and Reactions: A Test of Explanatory Mechanisms.” Communication Research Reports 35 (2): 141–51. https://doi.org/10.1080/08824096.2017.1409618.

Champely, Stephane. 2018. Pwr: Basic Functions for Power Analysis. https://CRAN.R-project.org/package=pwr.

Chen, Hsuan-Ting. 2018. “Revisiting the Privacy Paradox on Social Media with an Extended Privacy Calculus Model: The Effect of Privacy Concerns, Privacy Self-Efficacy, and Social Capital on Privacy Management.” American Behavioral Scientist 62 (10): 1392–1412. https://doi.org/10.1177/0002764218792691.

Cohen, Jacob. 1992. “A Power Primer.” Psychological Bulletin 112 (1): 155–59. https://doi.org/10.1037/0033-2909.112.1.155.

Dhir, Amandeep, and Chin-Chung Tsai. 2017. “Understanding the Relationship Between Intensity and Gratifications of Facebook Use Among Adolescents and Young Adults.” Telematics and Informatics 34 (4): 350–64. https://doi.org/10.1016/j.tele.2016.08.017.

Dienes, Zoltán. 2008. Understanding Psychology as a Science: An Introduction to Scientific and Statistical Inference. New York, N.Y.: Palgrave Macmillan.

Dienlin, Tobias. 2017. The Psychology of Privacy: Analyzing Processes of Media Use and Interpersonal Communication. Hohenheim, Germany: University of Hohenheim.

Dienlin, Tobias, Philipp K. Masur, and Sabine Trepte. 2019. “A Longitudinal Analysis of the Privacy Paradox.” Preprint. SocArXiv. https://doi.org/10.31235/osf.io/fm4h7.

Dienlin, Tobias, and Miriam J. Metzger. 2016. “An Extended Privacy Calculus Model for SNSs: Analyzing Self-Disclosure and Self-Withdrawal in a Representative U.S. Sample.” Journal of Computer-Mediated Communication 21 (5): 368–83. https://doi.org/10.1111/jcc4.12163.

Dietvorst, Evelien, Marieke Hiemstra, Manon H. J. Hillegers, and Loes Keijsers. 2017. “Adolescent Perceptions of Parental Privacy Invasion and Adolescent Secrecy: An Illustration of Simpson’s Paradox.” Child Development, January. https://doi.org/10.1111/cdev.13002.

Dindia, K., and M. Allen. 1992. “Sex Differences in Self-Disclosure: A Meta-Analysis.” Psychological Bulletin 112 (1): 106–24.

Dinev, Tamara, and Paul Hart. 2006. “An Extended Privacy Calculus Model for E-Commerce Transactions.” Information Systems Research 17 (1): 61–80. https://doi.org/10.1287/isre.1060.0080.

Ellison, Nicole B., and Jessica Vitak. 2015. “Social Network Site Affordances and Their Relationship to Social Capital Processes.” In The Handbook of the Psychology of Communication Technology, edited by S. Shyam Sundar, v.33:205–27. Handbooks in Communication and Media. Chichester, MA: Wiley Blackwell.

Ellison, Nicole B., Jessica Vitak, Charles Steinfield, Rebecca Gray, and Cliff Lampe. 2011. “Negotiating Privacy Concerns and Social Capital Needs in a Social Media Environment.” In Privacy Online: Perspectives on Privacy and Self-Disclosure in the Social Web, 19–32. Berlin, Germany: Springer. https://doi.org/10.1007/978-3-642-21521-6_3.

Evans, Sandra K., Katy E. Pearce, Jessica Vitak, and Jeffrey W. Treem. 2017. “Explicating Affordances: A Conceptual Framework for Understanding Affordances in Communication Research.” Journal of Computer-Mediated Communication 22 (1): 35–52. https://doi.org/10.1111/jcc4.12180.

Fox, Jesse, and Bree McEwan. 2017. “Distinguishing Technologies for Social Interaction: The Perceived Social Affordances of Communication Channels Scale.” Communication Monographs 9 (January): 1–21. https://doi.org/10.1080/03637751.2017.1332418.

Gefen, David, Elena Karahanna, and Detmar W. Straub. 2003. “Trust and TAM in Online Shopping: An Integrated Model.” MIS Quarterly 27 (1): 51–90.

Gibson, James J. 2015. The Ecological Approach to Visual Perception. New York, NY: Psychology Press.

Grewal, Rajdeep, Joseph A. Cote, and Hans Baumgartner. 2004. “Multicollinearity and Measurement Error in Structural Equation Models: Implications for Theory Testing.” Marketing Science 23 (4): 519–29. https://doi.org/10.1287/mksc.1040.0070.

Hamaker, Ellen L., Rebecca M. Kuiper, and Raoul P. P. P. Grasman. 2015. “A Critique of the Cross-Lagged Panel Model.” Psychological Methods 20 (1): 102–16. https://doi.org/10.1037/a0038889.

Heirman, Wannes, Michel Walrave, and K. Ponnet. 2013. “Predicting Adolescents’ Disclosure of Personal Information in Exchange for Commercial Incentives: An Application of an Extended Theory of Planned Behavior.” Cyberpsychology, Behavior, and Social Networking 16 (2): 81–87. https://doi.org/10.1089/cyber.2012.0041.

Jorgensen, Terrence D., Sunthud Pornprasertmanit, Alexander M. Schoemann, and Yves Rosseel. 2018. semTools: Useful Tools for Structural Equation Modeling. https://CRAN.R-project.org/package=semTools.

Jourard, S. M. 1964. The Transparent Self. New York, NY: Van Nostrand.

Kline, Rex B. 2016. Principles and Practice of Structural Equation Modeling. 4th ed. Methodology in the Social Sciences. New York, NY: The Guilford Press.

Knijnenburg, Bart, Elaine Raybourn, David Cherry, Daricia Wilkinson, Saadhika Sivakumar, and Henry Sloan. 2017. “Death to the Privacy Calculus?” SSRN Electronic Journal, January. https://doi.org/10.2139/ssrn.2923806.

Kokolakis, Spyros. 2017. “Privacy Attitudes and Privacy Behaviour: A Review of Current Research on the Privacy Paradox Phenomenon.” Computers & Security 64 (January): 122–34. https://doi.org/10.1016/j.cose.2015.07.002.

Koohikamali, Mehrdad, Aaron M. French, and Dan J. Kim. 2019. “An Investigation of a Dynamic Model of Privacy Trade-Off in Use of Mobile Social Network Applications: A Longitudinal Perspective.” Decision Support Systems 119 (April): 46–59. https://doi.org/10.1016/j.dss.2019.02.007.

Kosinski, Michal, David Stillwell, and Thore Graepel. 2013. “Private Traits and Attributes Are Predictable from Digital Records of Human Behavior.” Proceedings of the National Academy of Sciences of the United States of America 110 (15): 5802–5. https://doi.org/10.1073/pnas.1218772110.

Krasnova, Hanna, Sarah Spiekermann, Ksenia Koroleva, and Thomas Hildebrand. 2010. “Online Social Networks: Why We Disclose.” Journal of Information Technology 25 (2): 109–25. https://doi.org/10.1057/jit.2010.6.

Krämer, Nicole C, and Johanna Schäwel. 2020. “Mastering the Challenge of Balancing Self-Disclosure and Privacy in Social Media.” Current Opinion in Psychology, Privacy and Disclosure, Online and in Social Interactions, 31 (February): 67–71. https://doi.org/10.1016/j.copsyc.2019.08.003.

Lakens, Daniël, Anne M. Scheel, and Peder M. Isager. 2018. “Equivalence Testing for Psychological Research: A Tutorial.” Advances in Methods and Practices in Psychological Science 1 (2): 259–69. https://doi.org/10.1177/2515245918770963.

Laufer, Robert S., and Maxine Wolfe. 1977. “Privacy as a Concept and a Social Issue: A Multidimensional Developmental Theory.” Journal of Social Issues 33 (3): 22–42. https://doi.org/10.1111/j.1540-4560.1977.tb01880.x.

Li, Yuan. 2011. “Empirical Studies on Online Information Privacy Concerns: Literature Review and an Integrative Framework.” Communications of the Association for Information Systems 28 (January): 453–96.

Loy, Laura S., Philipp K. Masur, Josephine B. Schmitt, and Cornelia Mothes. 2018. “Psychological Predictors of Political Internet Use and Political Knowledge in Light of the Perceived Complexity of Political Issues.” Information, Communication & Society 45 (January): 1–18. https://doi.org/10.1080/1369118X.2018.1450886.

Masur, Philipp K. 2018. Situational Privacy and Self-Disclosure: Communication Processes in Online Environments. Cham, Switzerland: Springer.

Metzger, Miriam J. 2004. “Privacy, Trust, and Disclosure: Exploring Barriers to Electronic Commerce.” Journal of Computer-Mediated Communication 9 (4). https://doi.org/10.1111/j.1083-6101.2004.tb00292.x.

Min, Jinyoung, and Byoungsoo Kim. 2015. “How Are People Enticed to Disclose Personal Information Despite Privacy Concerns in Social Network Sites? The Calculus Between Benefit and Cost.” Journal of the Association for Information Science and Technology 66 (4): 839–57. https://doi.org/10.1002/asi.23206.

Muchnik, Lev, Sinan Aral, and Sean J. Taylor. 2013. “Social Influence Bias: A Randomized Experiment.” Science 341 (6146): 647–51. https://doi.org/10.1126/science.1240466.

Omarzu, Julia. 2000. “A Disclosure Decision Model: Determining How and When Individuals Will Self-Disclose.” Personality and Social Psychology Review 4 (2): 174–85. https://doi.org/10.1207/S15327957PSPR0402_5.

New York Public Radio. 2018. “The Privacy Paradox.” https://project.wnyc.org/privacy-paradox/.

R Core Team. 2018. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.

Reinecke, Leonard, and Sabine Trepte. 2014. “Authenticity and Well-Being on Social Network Sites: A Two-Wave Longitudinal Study on the Effects of Online Authenticity and the Positivity Bias in SNS Communication.” Computers in Human Behavior 30 (January): 95–102. https://doi.org/10.1016/j.chb.2013.07.030.

Rosoff, Heather, Jinshu Cui, and Richard S. John. 2013. “Heuristics and Biases in Cyber Security Dilemmas.” Environment Systems and Decisions 33 (4): 517–29. https://doi.org/10.1007/s10669-013-9473-2.

Rosseel, Yves. 2012. “lavaan: An R Package for Structural Equation Modeling.” Journal of Statistical Software 48 (2): 1–36. http://www.jstatsoft.org/v48/i02/.

Scherer, H., and D. Schlütz. 2002. “Gratifikation à La Minute: Die Zeitnahe Erfassung von Gratifikationen.” In Empirische Perspektiven Der Rezeptionsforschung, edited by Patrick Rössler, 133–51. Munich, Germany: Reinhard Fischer.

Söllner, Matthias, Axel Hoffmann, and Jan Marco Leimeister. 2016. “Why Different Trust Relationships Matter for Information Systems Users.” European Journal of Information Systems 25 (3): 274–87. https://doi.org/10.1057/ejis.2015.17.

Stroud, Natalie Jomini, Ashley Muddiman, and Joshua M. Scacco. 2017. “Like, Recommend, or Respect?: Altering Political Behavior in News Comment Sections.” New Media & Society 19 (11): 1727–43. https://doi.org/10.1177/1461444816642420.

Sumner, Erin M., Luisa Ruge-Jones, and Davis Alcorn. 2017. “A Functional Approach to the Facebook Like Button: An Exploration of Meaning, Interpersonal Functionality, and Potential Alternative Response Buttons.” New Media & Society 20 (4): 1451–69. https://doi.org/10.1177/1461444817697917.

Sun, Yongqiang, Nan Wang, Xiao-Liang Shen, and Jacky Xi Zhang. 2015. “Location Information Disclosure in Location-Based Social Network Services: Privacy Calculus, Benefit Structure, and Gender Differences.” Computers in Human Behavior 52 (January): 278–92. https://doi.org/10.1016/j.chb.2015.06.006.

Taddicken, Monika, and Cornelia Jers. 2011. “The Uses of Privacy Online: Trading a Loss of Privacy for Social Web Gratifications?” In Privacy Online: Perspectives on Privacy and Self-Disclosure in the Social Web, 143–58. Berlin, Germany: Springer.

Trepte, Sabine, Michael Scharkow, and Tobias Dienlin. 2020. “The Privacy Calculus Contextualized: The Influence of Affordances.” Computers in Human Behavior 104 (March): 106115. https://doi.org/10.1016/j.chb.2019.08.022.

Vanhove, Jan. 2019. “Collinearity Isn’t a Disease That Needs Curing.” https://osf.io/8x4uc/.

Wang, Yilin Andre, and Mijke Rhemtulla. 2020. “Power Analysis for Parameter Estimation in Structural Equation Modeling: A Discussion and Tutorial,” March. https://doi.org/10.31234/osf.io/pj67b.

Watzlawick, Paul, Janet Beavin Bavelas, Don D. Jackson, and Bill O’Hanlon. 2011. Pragmatics of Human Communication: A Study of Interactional Patterns, Pathologies, and Paradoxes. New York, NY: W.W. Norton & Co.

Whiting, Anita, and David Williams. 2013. “Why People Use Social Media: A Uses and Gratifications Approach.” Qualitative Market Research: An International Journal 16 (4): 362–69. https://doi.org/10.1108/QMR-06-2013-0041.

Wickham, Hadley. 2017. Tidyverse: Easily Install and Load the ’Tidyverse’. https://CRAN.R-project.org/package=tidyverse.

Zhu, Hui, Carol X. J. Ou, W. J. A. M. van den Heuvel, and Hongwei Liu. 2017. “Privacy Calculus and Its Utility for Personalization Services in E-Commerce: An Analysis of Consumer Decision-Making.” Information & Management 54 (4): 427–37. https://doi.org/10.1016/j.im.2016.10.001.


  1. Although the terrorism topic was not of primary interest for this study, the data can and will also be used to analyze perceptions of the terrorism threat. Hence, no deception took place, and in the debriefing participants were informed about our additional research interest in privacy.↩︎

  2. We preregistered to exclude participants with answer times of less than six minutes. However, this led to the exclusion of too many high-quality data points, which is why we relaxed this criterion. In the OSM, we also report the results using all participants.↩︎