For all t < T − 1, the strategy of the RA depends on its own and its competitor's reputation.
When A is large, RA1 always gives a GR to a bad project. Conversely, when A is small, RA1 behaves honestly and gives NR to bad projects. In the intermediate range, RA1 plays a mixed strategy, with 0 < x1 < 1. Note that the lower threshold for A is increasing with RA1's reputation.
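As a stylized illustration of this threshold structure, the sketch below maps A and RA1's reputation into the equilibrium lying probability x1. The threshold functions and the linear interpolation in the interior region are placeholders of our own, not the paper's closed-form expressions; only the corner/interior pattern and the fact that the lower threshold rises with reputation come from the text.

```python
def ra1_strategy(A, reputation, lower_threshold, upper_threshold):
    """Equilibrium lying probability x1 of RA1 (stylized sketch).

    lower_threshold(reputation) and upper_threshold(reputation) are
    hypothetical callables standing in for the model's thresholds on A;
    per the text, the lower threshold is increasing in RA1's reputation.
    """
    lo, hi = lower_threshold(reputation), upper_threshold(reputation)
    if A <= lo:
        return 0.0          # honest corner: bad projects receive NR
    if A >= hi:
        return 1.0          # inflating corner: bad projects receive GR
    # Interior region: mixed strategy with 0 < x1 < 1.  The linear form
    # below is purely illustrative, not derived from the model.
    return (A - lo) / (hi - lo)


# Example with hypothetical thresholds: the honest region widens as
# RA1's reputation increases, so x1 falls for a fixed A.
x1_low_rep  = ra1_strategy(0.5, 0.2, lambda r: 0.3 * r, lambda r: 0.8)
x1_high_rep = ra1_strategy(0.5, 0.9, lambda r: 0.3 * r, lambda r: 0.8)
```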
The results imply that RA1 tends to lie less as its reputation increases (Corollary 3). The intuition behind this result is straightforward. Since we assumed pG = 1, a good project never fails, so a failure reveals that the project was bad and was rated GR; the reputation of RA1 therefore drops to zero immediately after a project fails. This means that the cost of lying increases with RA1's reputation while the benefit of lying stays constant. Hence, it is not surprising that RA1 prefers to lie less as its reputation increases.18

18 Our results in Section 5 show that this is no longer true if pG < 1. The reputational penalty becomes smaller as the reputation of the RA increases, that is, the cost of rating inflation can decrease with reputation, resulting in a "u-shaped" relationship between strategy and reputation.
Furthermore, according to Corollary 3, RA1's strategy is also increasing in RA2's reputation. As explained before, competition has two opposite effects on the actions of RA1: the disciplining effect and the market-sharing effect. When the reputation of its competitor increases, RA1 finds it less attractive to build its own reputation, given a smaller expected future market share, and hence behaves more laxly. On the other hand, RA1 has incentives to behave honestly when RA2's reputation increases in order to maintain its position as market leader. Our analysis shows that the market-sharing effect tends to dominate the disciplining effect. One potential explanation is that the market share of a rating agency is determined not only by its reputation relative to that of its competitor, but also by the absolute level of its reputation. That is, even a monopolistic RA cannot behave entirely laxly, since otherwise its reputation would become too low to credibly rate most projects. Hence, the incentives of a RA to maintain a good reputation, even in the absence of competition, make the disciplining effect of competition weaker. We believe this is reasonable since, in reality, given rational investors, a monopolistic RA would not have unbounded market power.
However, the results above are based on a three-period model with the assumption that pG = 1, that is, the strategic RA is caught immediately after the project fails. The results may be driven by the fact that the RAs only live for three periods, and hence have limited potential gains associated with higher reputation. In order to capture the long-term benefits of reputation under a more general setting, we move on to the next section, where we relax parameter assumptions and compute numerical solutions in the infinite-horizon case.
5 Infinite-Horizon Solutions
We now introduce the numerical solution of the model in the infinite horizon. The numerical solution is again computed using backward induction; that is, we first solve the model in the finite-period case, then increase the number of periods so that the equilibrium strategy converges to the infinite-horizon solution.
We assume that the model ends at period T and solve the model backwards. We know that the strategic RA will always lie at periods T and T − 1, according to Corollary 2. For earlier periods we solve for the equilibrium strategy of the RA described in Section 3: we compare the pay-offs from lying and from being honest and determine the strategy. If the pay-off from lying exceeds the pay-off from being honest even at xt = 1, RA1 will always choose to lie. Conversely, if the pay-off from being honest exceeds the pay-off from lying at xt = 0, RA1 will always tell the truth. In all other intermediate cases, there exists a unique xt at which RA1 is indifferent between lying and telling the truth. Hence, we deduce the equilibrium strategies of RA1 by backward induction. As T goes to infinity, we approach the infinite-horizon solution. Since the discount factor is strictly less than one, the Blackwell conditions are satisfied.
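To make the procedure concrete, the following is a minimal Python sketch of this backward-induction loop, under a simplified interface of our own: the function gain(t, x, x_future) stands in for the difference between RA1's pay-off from lying and from being honest at period t, given mixing probability x and the already-solved future strategies. The actual pay-offs in the model depend on reputations, fees, the competitor's strategy, and the discount factor, none of which are reproduced here.

```python
def solve_finite_horizon(T, gain, tol=1e-10):
    """Equilibrium lying probabilities (x_1, ..., x_T) of the strategic RA.

    gain(t, x, x_future): pay-off from lying minus pay-off from honesty at
    period t, given mixing probability x and the future strategies x_future
    (a hypothetical stand-in for the model's actual pay-off difference).
    Assumes gain is continuous and decreasing in x, so the indifference
    point is unique whenever it lies in (0, 1).
    """
    x = [None] * (T + 1)          # x[t] for t = 1, ..., T (x[0] unused)
    x[T] = 1.0                    # Corollary 2: the RA lies at T ...
    x[T - 1] = 1.0                # ... and at T - 1
    for t in range(T - 2, 0, -1):
        g = lambda z: gain(t, z, x[t + 1:])
        if g(1.0) >= 0:           # lying dominates even at x_t = 1
            x[t] = 1.0
        elif g(0.0) <= 0:         # honesty dominates even at x_t = 0
            x[t] = 0.0
        else:                     # interior case: bisect for indifference
            lo, hi = 0.0, 1.0
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
            x[t] = 0.5 * (lo + hi)
    return x[1:]


def solve_infinite_horizon(gain, T0=4, T_max=200, eps=1e-6):
    """Increase T until the first-period strategy stops changing; with a
    discount factor below one the finite-horizon strategies converge."""
    prev = None
    for T in range(T0, T_max + 1):
        head = solve_finite_horizon(T, gain)[0]
        if prev is not None and abs(head - prev) < eps:
            return head, T
        prev = head
    return prev, T_max
```

In use, gain would encode the expected fee from inflating the rating net of the expected reputational loss, computed from the evolution of reputations implied by x_future; the bisection step mirrors the indifference condition that pins down the interior xt.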