Jinkwon Lee and Jose Luis Lima

We compare two payoff mechanisms in a finitely repeated linear public good game: the accumulated payoff mechanism (APM) and the random round payoff mechanism (RRPM). Although subjects' behavior should not, in theory, differ between the two payoff mechanisms, we find a clear behavioral difference between them. We also attempt to explain the so-called "restart effect" by a kind of background risk hypothesis that we call the "opportunity effect." The results, however, are inconclusive.

Introduction

In a finitely repeated linear public good game, we compare a random round payoff mechanism (RRPM), under which a subject's cash payoff depends on the token earnings of one randomly chosen round after the whole session is completed, with an accumulated payoff mechanism (APM), under which the cash payoff is based on token earnings accumulated over all rounds. RRPM has been widely used in individual decision-making experiments because it is known to control the wealth effect.1 However, a generic linear public good game has a unique dominant strategy, so wealth effects should not matter there. Hence, almost all experiments in the finitely repeated linear public good game literature have used APM.2

However, we hypothesize that RRPM may also control subjects' risk attitudes, and that this might make a difference. Our psychological intuition is that under APM (but not under RRPM), subjects will be less risk averse in early periods and more risk averse (and more attentive) as future opportunities dwindle in the last few periods. Let us call this the "opportunity effect."3 An example from daily life: many students enjoy life in the first and second years without studying when their final standing is accumulated over four years. Suppose instead that their final standing depends on the grade of only one year, chosen at random after the four years are completed. Then they might pay more attention to classes in the early years. This is because they feel different degrees of risk in each year under APM and RRPM. See also Davis and Holt (1993:85). The opportunity effect is related to the background risk hypothesis suggested by Selten et al. (1999). The background risk hypothesis implies that RRPM introduces a bias. However, if the opportunity effect exists, then the difference that RRPM makes relative to APM may be interpreted not as a bias but as the result of controlling risk attitude across rounds.

We use this hypothesis to attempt to explain the "restart effect," which has been a puzzle since Andreoni (1988). He reports that the contribution level to the public good in the first round of a restarted run is much higher than in the last round of the previous run.4 This restart effect is found even in the Strangers treatment, which is intended to isolate learning effects by eliminating any reputation effects possible in the Partners treatment.5 Why do subjects start again with a high level of contribution if they learned in the first run that contributions decay? Should the learning hypothesis be rejected?

The leading explanation is not very satisfying. It points to cognitive dissonance theory: subjects feel psychological discomfort when their choices are inconsistent with their beliefs, and they react to reduce that discomfort. However, this explanation requires that subjects have enough time to reconsider their previous choices, so it may not be able to explain the restart effect when the second run immediately follows the first. See Burlando and Hey (1997).

Our suggested explanation combines the opportunity effect in the initial rounds of the second run with the possibility of signaling behavior there and with the possibility of indirect feedback in the Strangers treatment.6 When the Strangers treatment is used and the second session starts, subjects may still be uncertain about the types of the other group members even though uncertainty about the game itself has almost disappeared. Hence, under APM they may have an incentive to take the risk of signaling to others by contributing to the public good, in order to obtain more desirable payoffs (the Pareto optimal level) in later rounds, at least in the initial rounds of the second session, provided the opportunity effect exists and they take the indirect feedback into account.7 Under RRPM, however, such signaling is too risky an attempt, because the cash payoff depends on only one randomly chosen round: that is, RRPM controls the opportunity effect. Therefore, under RRPM the restart effect should not appear and subjects should keep the pace of the learning process, while under APM the restart effect may occur.

Experimental design

We follow a generic linear public good game experiment design. At each round, subjects i = 1, 2, ..., N are given an endowment of e tokens, which they can invest in a private good, x, or contribute to a public good, g. They must use the whole endowment at each round, so e = x_i + g_i. At each round, the token earnings of subject i are determined by π_i = x_i + a Σ_{j=1}^{N} g_j, where N is the number of group members and a is the marginal rate of return from the public good. If a is chosen such that 1/N < a < 1, then zero contribution to the public good (g_i = 0) is the unique dominant strategy for every subject, while full contribution of the endowment (g_i = e) by all i is the symmetric Pareto efficient outcome.
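As a concrete illustration of these payoffs (a minimal sketch of our own, not part of the experiment software; the parameter values anticipate the design described below), the following Python snippet computes one round's token earnings and shows why zero contribution is the dominant strategy while full contribution is Pareto efficient:

```python
# Minimal sketch of one round of the linear public good game (illustration only).
# Parameters follow the design described below: N = 4 group members,
# endowment e = 100 tokens, marginal rate of return a = 0.5, so 1/N < a < 1.

N, e, a = 4, 100, 0.5

def token_earnings(contributions):
    """Token earnings pi_i = x_i + a * sum_j g_j for each subject,
    where x_i = e - g_i is the amount kept in the private good."""
    total_g = sum(contributions)
    return [(e - g_i) + a * total_g for g_i in contributions]

# Zero contribution is the dominant strategy: a unilateral contribution of one
# token costs the contributor 1 but returns only a = 0.5 of it.
print(token_earnings([0, 0, 0, 0]))          # [100.0, 100.0, 100.0, 100.0]
print(token_earnings([1, 0, 0, 0]))          # [99.5, 100.5, 100.5, 100.5]

# Full contribution by everyone is the symmetric Pareto efficient outcome.
print(token_earnings([100, 100, 100, 100]))  # [200.0, 200.0, 200.0, 200.0]
```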

In our experiments, there were sixteen subjects divided into four groups of four members each. Subjects were undergraduate economics students at Trento University, Italy. Each group played two consecutive sessions of six rounds, and subjects were told about this before they started the experiment. Two groups faced the APM in both sessions, and their membership was randomly rematched among eight subjects each round (Strangers treatment): subjects' total cash payoff in each session depended on accumulated token earnings. We call these groups GAPM. The other two groups faced the APM in the first session and the RRPM in the second session: their total cash payoff in the first session depended on accumulated token earnings, while their total cash payoff in the second session depended on six times the token earnings of one randomly chosen round after that session was completed. They also were randomly rematched among eight subjects each round (Strangers). We call these groups GRRPM. The conversion rate was 350 Italian lire for each 120 tokens, and a participation fee of 5,000 lire was given to all subjects. The token endowment (e) was 100 each round, and the marginal rate of return from the public good (a) was 0.5. Hence, GAPM and GRRPM differed only in the payoff mechanism of the second session.
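To make the difference between the two payoff mechanisms concrete, here is a small sketch (our illustration, assuming the parameters just described: 350 lire per 120 tokens and six rounds per session) of how a subject's session cash payoff would be computed under APM and under RRPM:

```python
import random

LIRE_PER_TOKEN = 350 / 120   # conversion rate: 350 Italian lire per 120 tokens
ROUNDS = 6                   # rounds per session

def apm_cash(round_earnings):
    """APM: cash payoff depends on token earnings accumulated over all rounds."""
    return sum(round_earnings) * LIRE_PER_TOKEN

def rrpm_cash(round_earnings):
    """RRPM: cash payoff is six times the token earnings of one round
    chosen at random after the session is completed."""
    chosen_round = random.choice(round_earnings)
    return ROUNDS * chosen_round * LIRE_PER_TOKEN

# Example: token earnings of one hypothetical subject over the six rounds.
earnings = [150, 140, 130, 120, 110, 100]
print(apm_cash(earnings))   # deterministic: 750 tokens -> 2187.5 lire
print(rrpm_cash(earnings))  # random: e.g. the 120-token round -> 2100.0 lire
```

Note that the expected cash payoff under RRPM (six times the average round earnings) equals the APM payoff, which is why the two mechanisms are theoretically equivalent in a game with a dominant strategy; they differ only in the risk attached to any single round.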

Results

The average contribution level to the public good of both GAPM and GRRPM in both sessions is shown in Figure 20.1.8 In the first session (rounds 1-6), where both face the identical payoff mechanism (APM), GAPM and GRRPM show a similar decay pattern, though the contribution level of GRRPM is higher than that of GAPM (Table 20.1).9 Our main interest is in the second session (rounds 7-12). There is a clear difference between GRRPM and GAPM. However, the direction of the difference is opposite to our prediction. Surprisingly, GRRPM shows a clear restart effect while GAPM shows only a slight one.

Our prediction needs reconsideration, but the evidence suggests related explanations. GRRPM faces a change in the payoff procedure between the first session and the second session, but GAPM does not. It is possible that subjects in GRRPM regard the second session as a new game, which requires them to build a new initial belief before the first round of the second session. A higher level of risk aversion induced by RRPM may then lead subjects to an initial decision that imitates their decision in the first round of the first session.10 The fact that six of the eight GRRPM subjects contribute exactly the same amount as in the first round of the first session is consistent with this interpretation.

Figure 20.1 Average contribution of GAPM and GRRPM.
Table 20.1 Subjects' token contributions in APM and RRPM, by session, round, treatment, and subject
