*****  To join INSNA, visit  *****

Just to add some brief comments to this thread -

I would certainly agree that the existing evidence indicates that both 
self- and third-party reports of social interaction (broadly defined) 
are error-prone.  However, I am not aware of any study whose 
data actually supports the extreme cognitivist view that such reports 
are effectively unrelated to behavior.  (There are certainly those who 
have claimed this in print, but reanalysis of the data in question does 
not support this assertion.  This includes the BKS data, although I 
don't know that B, K, or S would currently endorse that position in any 
event!)  On the contrary, there is both direct and indirect evidence 
that informants' accounts do reflect observable social reality, albeit 
with a fair amount of noise.  (Some of this literature is reviewed in 
the 2003 _Social Networks_ piece cited below.  See Freeman et al. (1987) 
for a nice example of a case with a well-observed criterion.)

As my colleagues Kim Romney and Bill Batchelder demonstrated some years 
ago, it is often possible to pull external reality out of such noisy 
reports using appropriate inferential techniques.  (Gest's comments seem 
to be in this vein as well, although I am not familiar with their 
particular approach.)  There are obviously limits on the efficacy of 
such methods, and they are not "magic bullets" which can solve all data 
problems; as folks like Devon Brewer have emphasized over the years, 
there is no substitute for good data.  However, they have been shown to 
be effective in a range of settings, and Steve Borgatti tells me that he 
can even use the approach to accurately grade his students' exams (not 
that he does, mind you!).  In the network case, simulation studies I 
have performed vis-à-vis a Bayesian version of this approach (some of 
which are very briefly described in section 3 of the 2003 paper) 
indicate that the models are fairly robust to structurally correlated 
errors, high variation in error rates, etc.  This does _not_ mean that 
they are always right, or that they cannot be broken...however, there is 
some reason to think that they can ferret out a pretty good 
approximation of the truth under the kinds of error rates which have 
been observed in the prior literature.
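To make the general logic concrete, here is a toy sketch (a drastic simplification, not the bbnam model or Batchelder-Romney consensus analysis itself): assuming known false-positive and false-negative rates for informants, Bayes' rule pools several independent reports on a single tie into a posterior probability that the tie exists.  The error rates and prior used here are purely illustrative.

```python
# Toy illustration only: posterior probability that one tie exists,
# given independent informant reports and *assumed* (known) error rates.
# The real models estimate error rates jointly with the network.

def tie_posterior(reports, fp=0.1, fn=0.2, prior=0.5):
    """reports: list of 0/1 informant reports on a single edge.
    fp: P(informant reports 1 | no tie)   (false positive rate)
    fn: P(informant reports 0 | tie)      (false negative rate)
    prior: prior probability that the tie exists."""
    like_tie = prior          # running (unnormalized) posterior mass: tie
    like_no_tie = 1.0 - prior # running mass: no tie
    for r in reports:
        # Multiply in each report's likelihood under both hypotheses.
        like_tie *= (1.0 - fn) if r == 1 else fn
        like_no_tie *= fp if r == 1 else (1.0 - fp)
    return like_tie / (like_tie + like_no_tie)
```

With these rates, even one dissenting report among three (e.g. `tie_posterior([1, 1, 0])`) still yields a posterior of roughly 0.93 for the tie, which is the sense in which multiple reports per edge buy accuracy: independent errors wash out quickly as reports accumulate.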

I think that the debate over informant accuracy has been tremendously 
important to the field.  Unfortunately, I suspect that many researchers 
are still unaware of the nature of the problem, and of the kinds of 
techniques which now exist for minimizing it (or at least estimating its 
consequences).  Obtaining multiple reports on each edge, for instance, 
can greatly improve the accuracy of subsequent analyses, yet few 
researchers include such measures when designing their studies. 
Hopefully, this situation will improve as these newer tools and 
techniques diffuse into the network community.



Batchelder, W.H. and Romney, A.K.  1988.  Test Theory Without an Answer 
Key.  _Psychometrika_, 53(1):71-92.

Butts, C.T.  2003.  Network Inference, Error, and Informant 
(In)accuracy: A Bayesian Approach.  _Social Networks_, 25:103-140.

Freeman, L.C.; Romney, A.K.; and Freeman, S.C.  1987.  Cognitive 
Structure and Informant Accuracy.  _American Anthropologist_, 89(2):310-325.

See also the "bbnam" and "consensus" functions in the sna library for R, 
or Borgatti's ANTHROPAC, for software implementations of these models.

SOCNET is a service of INSNA, the professional association for social
network researchers.  To unsubscribe, send
an email message to [log in to unmask] containing the line
UNSUBSCRIBE SOCNET in the body of the message.