The code used to find fixed points of the game described in Chapter 2 was written in Python. It was written with flexibility in mind so it could, for example, be adapted for use in the experimental environment of Chapter 3 and under the alternative hypotheses of Chapter 4.
The code consists of two basic types of objects: environments and states. An environment specifies the variables of the game, such as the number of senders, the type-space, and the probability distributions over types. A state is initialized with an environment object and strategies for both senders and receivers. A state contains methods for computing aggregate message densities as well as methods for computing best responses given the initialization variables (both are described in more detail below).
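For concreteness, the skeleton below sketches how these two objects fit together. The attribute and method names are illustrative placeholders rather than the identifiers used in the actual code.

    class Environment(object):
        """Holds the fixed variables of the game (illustrative sketch)."""
        def __init__(self, n_senders, type_space, type_densities, spaces):
            self.n_senders = n_senders            # number of senders, N
            self.type_space = type_space          # discretized type-space
            self.type_densities = type_densities  # distributions over types
            self.spaces = spaces                  # feasible message and aggregate-message spaces

    class State(object):
        """Couples an environment with a strategy profile (illustrative sketch)."""
        def __init__(self, env, message_strategy, participation_strategy, receiver_strategy):
            self.env = env
            self.message_strategy = message_strategy
            self.participation_strategy = participation_strategy
            self.receiver_strategy = receiver_strategy

        def aggregate_densities(self):
            """Densities of aggregate messages conditional on the state of the world."""
            raise NotImplementedError

        def best_responses(self):
            """Best-response strategies given the environment and current strategies."""
            raise NotImplementedError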
The algorithm starts with an initial state, calculates best responses to the strategies of the initial state given its environment, and then generates a new state using the initial environment and a convex combination of the initial strategies and the best-response strategies. The new state then becomes the “initial” state. This process is repeated until the best-response strategies are sufficiently close to the initial strategies, at which point the algorithm terminates and outputs the final strategies. We varied the initial state, but most often it consisted of the sincere messaging strategy, a participation strategy that satisfied the extreme participation conjecture, and a cutoff strategy for the receiver around $v_I$ such that $A(v_I) = A(v_\Phi) = 0.5$. Most often we accepted the best-response strategies as the new strategies ($\alpha = 1$ in the convex combination), but we altered $\alpha$ as needed to locate fixed points. Finally, to determine whether the initial strategies and best responses were “sufficiently close,” we took the norm of the differences between the vectors of strategies; if this norm was less than $10^{-4}$, the program terminated.
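A minimal sketch of this iteration is given below. The function and its arguments (best_response, env, and the list-of-vectors representation of strategies) are hypothetical stand-ins for the corresponding pieces of the actual code.

    import numpy as np

    def find_fixed_point(initial_strategies, best_response, env, alpha=1.0,
                         tol=1e-4, max_iter=10000):
        """Iterate best responses until the strategy profile stops moving.

        best_response(strategies, env) is assumed to return the best-response
        profile (a list of numpy vectors, one per strategy) to `strategies`.
        """
        strategies = [np.asarray(s, dtype=float) for s in initial_strategies]
        for _ in range(max_iter):
            responses = best_response(strategies, env)
            # Terminate once the best responses are sufficiently close to the
            # strategies they respond to.
            if max(np.linalg.norm(r - s) for r, s in zip(responses, strategies)) < tol:
                return strategies
            # Otherwise take a convex combination and iterate; alpha = 1 simply
            # adopts the best responses as the new strategies.
            strategies = [(1 - alpha) * s + alpha * r
                          for s, r in zip(strategies, responses)]
        raise RuntimeError("no fixed point found within max_iter iterations")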
The best-response message strategy specifies, for each type, the message that maximizes a sender’s expected utility given the state’s initial variables. To calculate this, for each message we computed the likelihood of each type of pivotality for the sender. These likelihoods were then weighted by the sender’s posterior expected utility given the outcome specified by that type of pivotality. The sum of these weighted likelihoods represents the part of expected utility that depends on the sender’s message; therefore, the maximizing message also maximizes expected utility. In the event of indifference (to at least 4 decimal places), the program selected the message nearest to the message specified by the initial strategy (provided that was a pure strategy).
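The sketch below illustrates this calculation for a single sender type. The inputs (pivot likelihoods per message, posterior utilities per pivotality event, and the message prescribed by the initial strategy) are hypothetical stand-ins for the corresponding objects in the actual code.

    import numpy as np

    def best_response_message(messages, pivot_probs, pivot_utilities, initial_message):
        """Return the expected-utility-maximizing message for one sender type.

        pivot_probs[m][k] is the likelihood that message m makes the sender
        pivotal in way k; pivot_utilities[k] is the sender's posterior expected
        utility from the outcome induced by pivotality event k.
        """
        messages = np.asarray(messages, dtype=float)
        # Message-dependent part of expected utility: pivot likelihoods weighted
        # by the posterior utility of the corresponding outcome.
        values = np.array([np.dot(pivot_probs[m], pivot_utilities)
                           for m in range(len(messages))])
        # Treat messages whose values agree with the maximum to 4 decimal places
        # as ties, and break ties toward the initial (pure) strategy's message.
        ties = np.flatnonzero(np.round(values, 4) == np.round(values.max(), 4))
        best = ties[np.argmin(np.abs(messages[ties] - initial_message))]
        return messages[best]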
The best-response participation strategy is the likelihood that a sender of each type participates given the environment and the initial strategies. The method we used took, as both input and output, the vector of $F_C(X(v_i))$ for each $v_i$. This is without loss of generality for our purposes. In this case, $X(v_i)$ (the net expected utility of participation, excluding the cost of participation) used the expected utility of participation given that the sender played the expected-utility-maximizing message (calculated as described above).
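Interpreting $F_C$ as the CDF of the participation cost, a type participates whenever its realized cost falls below the benefit $X(v_i)$, so $F_C(X(v_i))$ is its participation probability. A minimal sketch under that interpretation, with hypothetical argument names:

    import numpy as np

    def best_response_participation(X_values, cost_cdf):
        """X_values[i] is X(v_i), the expected benefit of participating for a
        type-v_i sender (excluding the participation cost and computed under the
        utility-maximizing message); cost_cdf is the CDF F_C of the cost.
        Returns the vector of participation probabilities F_C(X(v_i))."""
        return np.array([cost_cdf(x) for x in X_values])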
We calculated the state-conditional densities of aggregate messages both when considering all $N$ senders and when considering only $N-1$ senders (the former is necessary for calculating the receiver’s best response and the latter is necessary for computing a sender’s best response). To calculate these, we first computed the densities of aggregate messages conditional on both the state and each level of participation (more on this below), and then calculated the likelihood of each level of participation given the initial strategies. The likelihood of a given aggregate message was then the sum, over levels of participation, of the likelihood of that aggregate message at each level of participation weighted by the likelihood of that level of participation.
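A sketch of this weighting step, assuming each level-conditional density has already been placed on a common grid of feasible aggregate messages (the argument names are hypothetical):

    import numpy as np

    def mix_over_participation(densities_by_level, level_probs):
        """densities_by_level[j] is the density of aggregate messages conditional
        on the state and on the j-th level of participation; level_probs[j] is
        the likelihood of that level of participation under the initial
        strategies. Returns the density of aggregate messages conditional on
        the state alone."""
        total = np.zeros(len(densities_by_level[0]))
        for density, prob in zip(densities_by_level, level_probs):
            total += prob * np.asarray(density, dtype=float)
        return total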
Calculating the densities of aggregate messages conditional on the level of participation represented the computationally intensive portion of the algorithm. To obtain these densities, we used a recursive function that took as arguments five scalars, a state object, and two vectors of densities with length equal to the number of feasible aggregate messages given the level of participation (this number is finite because we approximate continuous spaces with discrete spaces). This recursive function is reproduced below. The idea of the function was to compute the likelihood of every permutation of sent-messages in each state of the world in a computationally parsimonious way. It worked by looping through the set of feasible messages for each participant, keeping track of the sum of all sent-messages as well as the product of the likelihoods of those sent-messages in each state. Once the function reached the final participant, it calculated the average value of the sent-messages (the sum divided by total participation) and added the final products of sent-message likelihoods to the entries of the respective density vectors that represented that average value. Because the loops over feasible sent-messages were nested (achieved by calling the function recursively), we ensured that every permutation was accounted for.
def aggregate_densities(prods, theSum, player, z, st, densities):
    """
    Calculates the likelihood of each feasible aggregate message, given the
    state, st, and given the total level of participation, z.

    prods is a 2-tuple of scalars. It tracks the interim likelihood of the
        sent-messages.
    theSum is the interim sum of sent-messages. This is a sufficient statistic
        for the average of sent-messages, given that we know the level of
        participation.
    z is the total level of participation.
    player is the number of the player the function is currently considering.
        player starts at 1 and increases to z.
    st is a state object. It is used for access to the environment object,
        which provides the feasible message and aggregate-message spaces.
    densities is a 2-tuple of vectors with length equal to the number of
        feasible aggregate messages. Both start as vectors of zeros.

    Called as: aggregate_densities((1, 1), 0, 1, z, self, densities)
    """
    if player == z:
        # Base case: the final participant. Complete each permutation of
        # sent-messages and record its likelihood in each state of the world.
        for i in range(len(st.env.spaces["messages"])):
            em = st.env.spaces["messages"][i]
            theSumTemp = theSum + em
            theMean = theSumTemp * 1. / z
            # Locate the grid point of feasible aggregate messages that is
            # (approximately) equal to the mean sent-message.
            mean_ind = appxEqualIndex(theMean,
                                      st.env.spaces["aggregateMessagesByN"][z - 1])
            if not isinstance(mean_ind, int):
                print("ERROR: %s not in %s" %
                      (theMean, st.env.spaces["aggregateMessagesByN"][z - 1]))
            densities[0][mean_ind] += st.fm_lVector[i] * prods[0]
            densities[1][mean_ind] += st.fm_hVector[i] * prods[1]
        return densities
    else:
        # Recursive case: loop over this participant's feasible messages and
        # recurse on the next participant, so that the nested loops cover
        # every permutation of sent-messages.
        for i in range(len(st.env.spaces["messages"])):
            em = st.env.spaces["messages"][i]
            theSumTemp = theSum + em
            prodsTemp = (prods[0] * st.fm_lVector[i],
                         prods[1] * st.fm_hVector[i])
            densities = aggregate_densities(prodsTemp, theSumTemp, player + 1,
                                            z, st, densities)
        return densities