*****  To join INSNA, visit  *****


I agree. Important and useful research begins with the right questions - 
that goes without saying.

But the problem of looking under the lamp post is not solely a problem 
of RCTs. The same problem arises when we consider what observational 
data we have access to. In fact, it seems likely that this problem would 
be more pronounced for observational data that we happen to get access 
to than for experimental studies that we have to design explicitly and 
in advance in order to examine a particular question or set of questions.

That we should use theory and scholarly intuition to seek out 
interesting questions and phenomena to study is clear. However, in this 
case, the value of the "question" of estimating social influence 
(broadly defined) is already quite clear, and its relevance is relatively 
well accepted. The reason causal estimation is important in this 
already well-defined research area (and people have been writing about 
this for decades) is that separating correlation from causation in this 
specific case can inform us about what the effects of various policy 
alternatives might be for programs aimed at peer-to-peer HIV prevention, 
smoking cessation, obesity prevention, product marketing, and so on.

I could not agree more that the study that finds a valid instrument and 
then searches for the question that the instrument helps address is 
misguided. But it doesn't seem to me that finding novel solutions to 
causal estimation in networks leads us to the question of estimating the 
magnitude of peer effects. In most contexts, that question itself is 
already well motivated. On the contrary, it seems to me that our methods 
lag behind the theoretical development of the questions (and the 
theories that explain social influence) in this case.

Another danger, besides looking under the lamp post, is remaining 
content with showing correlation and assuming causation. Based on your 
own previous work and our previous work together, I know you believe 
this as well. But it's worth repeating, I think, in order to extend this 
discussion a bit.



Sinan Aral
Assistant Professor, NYU Stern School of Business.
Research Affiliate, MIT Sloan School of Management.
Personal Webpage:
SSRN Page:
WIN Workshop:

On 2/28/2011 11:36 PM, Arun Sundararajan wrote:
> The Onion clip is wonderful. Very much in sync with the fake news
> often being more informative than the real...should be required
> viewing.
> It seems to say a lot more than just "give observational data a
> chance" to me in the context of this larger discussion. It isn't
> merely that a large fraction of the phenomena we want to study occur
> before we can design trials to measure them (and economists have been
> dealing with this reality for decades), or that observational data are
> more likely to lead to interesting discoveries of new things. Or that
> whatever the methods, there are always alternative explanations,
> especially when dealing with people in social settings. It's also that
> if we start to believe in experiments and RCTs as the holy grail,
> there's a danger of focusing too much on the kinds of questions that
> lend themselves to that specific methodology, rather than going after
> the ones that matter. (Even in the context of social influence in
> networks.) Analogously, there's a lot of time spent by researchers in
> economics and marketing looking for "natural experiments" for
> identification, and this sometimes gets to the point where it seems
> like the research question was designed merely to exploit the cool
> natural experiment...
> I think that many aspects of this RCT vs. observational data debate
> (or, for observational data, the matched-sample versus "structural"
> methods debate over claiming causation) aren't unique to the context
> of social influence in networks. For example:
> cheers, Arun.
> On Mon, Feb 28, 2011 at 12:40 AM, James Fowler<[log in to unmask]>  wrote:
>> We have also relied on methods like the ones Tom Valente mentioned in many
>> of our observational studies, and we summarize the pluses and minuses of
>> this approach in a new paper here:
>> I also in principle like the actor-oriented model approach of Siena, but in
>> the past I could never get the model to converge for networks larger than
>> 1000 nodes (this might be my own failing, though, as there is always a bit
>> of art to getting models like that to work).
>> We also have relied on experiments like this one in PNAS:
>> and I am a big fan of David Nickerson's voter experiment and Sinan's new
>> RCT.
>> But I would resist abandoning evidence from observational studies.
>> The resistance to observational studies reminds me of this Onion story:
>> Multiple Stab Wounds May Be Harmful To Monkeys
>> :)
>> j
>> James H. Fowler
>> Professor of Medical Genetics and Political Science
>> UC San Diego

SOCNET is a service of INSNA, the professional association for social
network researchers ( To unsubscribe, send
an email message to [log in to unmask] containing the line
UNSUBSCRIBE SOCNET in the body of the message.