*****  To join INSNA, visit http://www.insna.org  *****




Here's my two cents on the excellent points raised by Tom, Christian and
Sinan:

Manski highlights three hypotheses in his classic 1995 monograph "to explain
the common observation that individuals belonging to the same group tend to
behave similarly... endogenous effects, wherein the propensity of an
individual to behave in some way varies with the prevalence of that behavior
in the group; contextual effects, wherein the propensity of an individual to
behave in some way varies with the distribution of background
characteristics in the group; and correlated effects, wherein individuals in
the same group tend to behave similarly because they face similar
institutional environments or have similar individual characteristics."
(Identification Problems in the Social Sciences, Harvard University Press)
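
For concreteness, the setup behind these three hypotheses is often written as
a linear-in-means model (a sketch of the canonical specification in the
reflection-problem literature, not a quote from the monograph):

  y_{ig} = \alpha + \beta E[y | g] + \gamma' E[z | g] + \eta' z_{ig} + u_{ig}

where \beta captures the endogenous effect (dependence on the group's mean
outcome), \gamma the contextual effects (dependence on the group's mean
background characteristics z), and correlated effects enter through
group-level correlation in the errors u_{ig}.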

The first two hypotheses express inter-agent causality in a model; the third
does not. The important distinction between the two inter-agent causal effects
is that only the endogenous effect involves feedback that can be
self-reinforcing over time. The policy implications of the two differ widely,
especially when such a feedback dynamic is indeed at work, its strength
depending on the size of the endogenous effect relative to the other effects.
Access to temporal panel data is highly desirable in order to better
distinguish the effects empirically.

An important econometric issue also arises in the empirical estimation of
discrete choice models using a multinomial logit specification, in that the
Gumbel error terms are assumed to be independently and identically distributed
across choice alternatives and across individuals. It is not obvious that this
assumption remains valid when we are specifically considering interdependence
between individuals' choices. As above, we might reason that if each
individual's choice depends systematically on an explanatory variable that
captures the choices of other individuals who are in some way related to that
individual, then there might be an analogous dependence in the error
structure. Put differently, the same unobserved effects are likely to
influence both the choice made by a given individual and the choices made by
those in the individual's reference group. If this correlation is ignored, the
estimated coefficients of such a model are likely to be biased.
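
To make the assumption explicit in standard notation (e.g. Train's Discrete
Choice Methods with Simulation): the familiar closed form

  P_{ni} = exp(V_{ni}) / \sum_j exp(V_{nj})

only follows if the error terms \epsilon_{nj} are IID Gumbel. A mixed
(error-components) logit relaxes this by integrating the logit kernel over a
distribution of random terms,

  P_{ni} = \int [ exp(V_{ni}(\beta)) / \sum_j exp(V_{nj}(\beta)) ] f(\beta | \theta) d\beta,

which can accommodate correlation across alternatives and across individuals
in the same reference group.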

Furthermore, when considering longitudinal panel data, there may also be an
additional correlation across the responses of a single individual over time.
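
In the mixed logit framework this is typically handled by holding the random
terms fixed across an individual's repeated choices, so that the probability
of individual n's observed sequence of T choices becomes

  P_n = \int [ \prod_{t=1}^{T} L_{nt}(\beta) ] f(\beta | \theta) d\beta,

where L_{nt}(\beta) is the logit probability of the choice individual n
actually made at occasion t; this induces exactly this kind of correlation
across the T responses of the same individual.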

In work with Michiel van Meeteren (who replied to the list recently in the
query about snowball sampling) and Ate Poorthuis, we demonstrate an example of
the empirical estimation of a discrete choice model with network interaction
effects, specifically testing for correlation in the error structure in a
particular empirical case study through the use of a mixed multinomial panel
logit model. We presented this work at Sunbelt last year in Riva del Garda, at
the RC33 Social Network Analysis session organized by Anuška Ferligoj,
Vladimir Batagelj and Peter Carrington at the 17th ISA World Congress of
Sociology, at the WIN workshop hosted by Sinan and colleagues at NYU, and most
recently at a workshop on Transportation and Social Networks in Manchester
hosted by Martin Everett together with the Futurenet team at Nottingham and
Loughborough. We have been a little slow to write up the full paper, but an
extended 5-page abstract including the estimation results appears in the WIN
proceedings. If anyone is interested, please send us a mail! We'd love to hear
from you. Capturing the correlation yields a substantial increase in model
fit, even in a previously seemingly saturated model.
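
To give a flavor of what estimating such a model involves (this is *not* our
actual specification -- just a minimal, self-contained sketch with made-up
data and a hypothetical "peer share" covariate), the simulated maximum
likelihood for a panel mixed logit looks roughly like this in Python:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, T, J, K = 100, 5, 3, 2   # individuals, periods, alternatives, generic covariates
R = 50                      # simulation draws per individual

# Fabricated data purely for illustration: K generic covariates plus a
# hypothetical peer_share covariate (share of an individual's contacts
# currently choosing each alternative).
X = rng.normal(size=(N, T, J, K))
peer_share = rng.uniform(size=(N, T, J))
eta_true = 1.0 + 0.5 * rng.normal(size=N)              # individual peer sensitivity
V_true = X @ np.array([0.8, -0.5]) + eta_true[:, None, None] * peer_share
y = np.array([[rng.choice(J, p=np.exp(v) / np.exp(v).sum()) for v in V_true[n]]
              for n in range(N)])                      # observed choices, shape (N, T)

draws = rng.normal(size=(N, R))                        # fixed draws per individual

def neg_simulated_loglik(theta):
    beta, mu, log_sigma = theta[:K], theta[K], theta[K + 1]
    eta = mu + np.exp(log_sigma) * draws               # (N, R) random peer coefficients
    v = (X @ beta)[:, None, :, :] + eta[:, :, None, None] * peer_share[:, None, :, :]
    p = np.exp(v - v.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)                 # choice probabilities, (N, R, T, J)
    chosen = p[np.arange(N)[:, None, None],
               np.arange(R)[None, :, None],
               np.arange(T)[None, None, :],
               y[:, None, :]]                          # prob. of observed choice, (N, R, T)
    seq_prob = chosen.prod(axis=-1).mean(axis=-1)      # product over T, average over draws
    return -np.log(seq_prob + 1e-300).sum()

fit = minimize(neg_simulated_loglik, np.zeros(K + 2), method="BFGS")
print(fit.x)  # fixed coefficients, mean and log-sd of the random peer coefficient

The key point is that the draws for each individual are held fixed across that
individual's choice occasions, which is what lets the model pick up the panel
correlation mentioned above.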

In this same research, we also compare the contribution of several different
classical centrality measures in explaining choice behavior. Drawing on
insights from transportation research, we introduce a measure, addressing your
question, Steve, about "adoption contagion", that incorporates not only
network characteristics but also relevant individual characteristics.
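
Purely to fix ideas (this is not the measure we introduce in the paper, just a
generic illustration with hypothetical attribute names), classical
centralities and a simple characteristic-weighted exposure term can be put
side by side along these lines:

import networkx as nx

# Toy directed network and made-up node attributes (illustrative only).
G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (2, 4), (4, 5)])
adopted = {1: 1, 2: 0, 3: 1, 4: 0, 5: 0}        # hypothetical adoption status
income = {1: 30, 2: 45, 3: 32, 4: 60, 5: 28}    # hypothetical individual attribute

centrality = {
    "in_degree": nx.in_degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
}

def weighted_exposure(node):
    """Share of a node's out-neighbours who adopted, weighted by attribute similarity."""
    nbrs = list(G.successors(node))
    if not nbrs:
        return 0.0
    w = {m: 1.0 / (1.0 + abs(income[node] - income[m])) for m in nbrs}
    return sum(w[m] * adopted[m] for m in nbrs) / sum(w.values())

exposure = {n: weighted_exposure(n) for n in G}
print(centrality["betweenness"])
print(exposure)

Covariates like these can then enter the systematic utility of the discrete
choice model and be compared on model fit.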

In a different application, in earlier work with Laszlo Gulyas, we embed
estimation results from a simpler discrete choice model in an agent-based
model to observe the simulated evolution of choice behavior over time. We
compare the case where unobserved heterogeneity is captured to some extent in
the original estimation with the case where it is not. We find that even when
the estimation results show little statistical difference between the choice
models for many of the estimated utility parameters, accounting for the
unobserved heterogeneity or not has a dramatic long-run impact, due to the
difference in the feedback from the endogenous effect. Conclusion: capturing
heterogeneity matters, even when it is unobserved!
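
As a toy illustration of the general mechanism (not the published model, and
with arbitrary parameter values), one can let agents repeatedly make a binary
logit choice whose utility depends on the share of their network neighbours
currently making the same choice, with or without an individual-specific
taste term:

import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
G = nx.watts_strogatz_graph(n=500, k=6, p=0.1, seed=1)
A = nx.to_numpy_array(G)
deg = A.sum(axis=1)

def simulate(beta_endog=2.0, sigma_taste=0.0, steps=50):
    """Share of agents choosing option 1 over time, for a given amount of
    endogenous feedback and unobserved taste heterogeneity."""
    taste = sigma_taste * rng.normal(size=len(deg))     # unobserved heterogeneity
    choice = rng.integers(0, 2, size=len(deg)).astype(float)
    shares = []
    for _ in range(steps):
        peer_share = A @ choice / np.maximum(deg, 1.0)  # neighbours choosing option 1
        v = -0.5 + beta_endog * peer_share + taste      # systematic utility of option 1
        p = 1.0 / (1.0 + np.exp(-v))                    # binary logit probability
        choice = (rng.uniform(size=len(deg)) < p).astype(float)
        shares.append(choice.mean())
    return shares

print("no heterogeneity  :", simulate(sigma_taste=0.0)[-1])
print("with heterogeneity:", simulate(sigma_taste=2.0)[-1])

Because the peer-share term feeds each period's choices back into the next
period's utilities, even modest differences in the heterogeneity assumption
can accumulate into quite different long-run aggregate outcomes.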

Dugundji ER, Gulyás L (2008) Socio-dynamic discrete choice on networks:
Impacts of agent heterogeneity on emergent equilibrium outcomes. Environment
& Planning B: Planning and Design 35(6): 1028-1054
http://www.envplan.com/abstract.cgi?id=b33021t

Best regards,
Elenna



________________________________

From: Social Networks Discussion Forum [mailto:[log in to unmask]] On behalf
of Sinan Aral
Sent: Sunday 27 February 2011 21:52
To: [log in to unmask]
Subject: Re: Measuring contagion in longitudinal behavior data


***** To join INSNA, visit http://www.insna.org *****

Excellent point Christian, and I agree that Tom is right. In the Marketing
Science commentary I pointed to, I make this exact point: experiments are
difficult and observational data are abundant. That's why we need statistical
methods that try to tease out contagion from selection/homophily and other
confounds. In that review I list your work with Tom Snijders as one of those
methods (as well as our own dynamic propensity score methods published in
PNAS -- I pointed to this article in a previous post as well). Both of these
are excellent dynamic methods for teasing out influence/contagion from other
confounds. But, again, the problem is *very* difficult in my opinion. The
reason I think we have to focus on this problem as a community and make it
such a high priority in networks research (and thus the reason I focus on it
so much in my own research) is that a) it is critical to knowing when we are
observing "network effects" and when we are not, which is in turn critical to
policy choices (for example, peer-to-peer contagion management policies won't
work if our estimates tell us contagion is at play when it really isn't), and
b) it is such a hard nut to crack.

For those of you who will be at the SONIC/ANN/NICO conference at Northwestern
this weekend, Michael Macy and I will have a session on "Causality in
Networks" on Saturday.

Here is the schedule of the Workshop portion of the event:
 
http://sonic.northwestern.edu/events/webnetsciworkshop/conference-schedule/

And here is the abstract of my talk, for those who are interested:

Title: "Causality in Networks"

Abstract: Many of us are interested in whether "networks matter." Whether in
the spread of disease, the diffusion of information, the propagation of
behavioral contagions, the effectiveness of viral marketing, or the
magnitude of peer effects in a variety of settings, a key question that must
be answered before we can understand whether networks matter is whether the
statistical relationships we see can be interpreted causally. Several sources
of bias in the analysis of interactions and outcomes among peers can
confound assessments of peer influence and social contagion in networks. If
uncorrected, these biases can lead researchers to attribute observed
correlations to causal peer influence, resulting in misinterpretations of
social network effects as well as biased estimates of the potential
effectiveness of different intervention strategies. Several approaches for
identifying peer effects have been proposed. However, randomized trials are
considered to be one of the most effective ways to obtain unbiased estimates
of causal peer effects. I will review a) the importance of establishing
causality in networks, b) the various methods that have been proposed to
address causal inference in networks, and in particular focus on c) the use
of randomized trials to establish causality. I will provide an example from
a randomized field experiment we conducted on a popular social networking
website to test the effectiveness of "viral product design" strategies in
creating peer influence and social contagion among the 1.4 million friends
of 9,687 experimental users. In addition to estimating the effects of viral
product design on social contagion and product diffusion, our work also
provides a model for how randomized trials can be used to identify peer
influence effects in networks.


Best

Sinan


Sinan Aral
Assistant Professor, NYU Stern School of Business.
Research Affiliate, MIT Sloan School of Management.
Personal Webpage: http://pages.stern.nyu.edu/~saral
SSRN Page: http://ssrn.com/author=110270
WIN Workshop: http://www.winworkshop.net
Twitter: http://twitter.com/sinanaral

On 2/27/2011 12:38 PM, Christian Steglich wrote:

        ***** To join INSNA, visit http://www.insna.org *****

        Hi Steve,

        as I understand it, your network is static and does not change in
itself? If it does change as well, this can potentially undermine all sorts of
conclusions you may want to draw from an analysis, as any behaviour
association between connected actors could be due to selection effects as
well.
       
        See here an article I participated in, for some methodological
arguments, a brief critical review of methods, and a proposal to use
actor-based modelling in Snijders' tradition:
http://dx.doi.org/10.1111/j.1467-9531.2010.01225.x
       
        For a static network, the most serious confounders of contagion are
context effects - so you should capture all context information you can.
Cohen-Cole & Fletcher wrote a funny piece illustrating how some analysis
methods, when failing to include context, can provide patently misleading
results (they showed how some methods would diagnose e.g. body height as
socially contagious): http://dx.doi.org/10.1136/bmj.a2533
       
        In general, I agree with Sinan that experiments are the best way to
obtain "unequivocal" contagion effects, but Tom is right when pointing out
that this very often is not possible in applied research settings...
       
        Greetings,
        Christian
       
       
        On 25/02/2011 20:28, Steve Eichert wrote:

                ***** To join INSNA, visit http://www.insna.org *****

                Hello SOCNET,

                I'm looking for books, papers, algorithms, and/or ideas on
how best to measure contagion in a network.  We have longitudinal behavior
data for all actors in a directed network and want to calculate the degree
of contagion occurring between all connected nodes.  We would like to use
the calculated "contagion score" to identify nodes that we can do further
analysis on, as well as to measure the overall level of contagion in the
network.  The longitudinal behavior data we have indicates how much of
something the nodes within the network are using over time.  We're
interested in better understanding the algorithms folks are using for
"adoption contagion" (someone who has already adopted influences a non
adopter to adopt) as well as "behavior contagion" (a high user influences
those connected to them to use more).

                Thoughts?

                Thanks,
                Steve
 





_____________________________________________________________________
SOCNET is a service of INSNA, the professional association for social
network researchers (http://www.insna.org). To unsubscribe, send
an email message to [log in to unmask] containing the line
UNSUBSCRIBE SOCNET in the body of the message.