*****  To join INSNA, visit  *****

Hello all,

I am looking for advice on calculating the reliability of peer ratings.
I have collected network data (n=84) in which individuals rated the
"influence" of their peer contacts.  Ratings were assigned to contacts
using a seven-point scale ranging from 1 = very little influence to 7 =
very great influence.  Although I think Cronbach's alpha would be an
appropriate measure of inter-rater reliability for these data, the number
of "missing values" (individuals do not all have the same number of
ties, hence an unequal number of ratings) seems to prevent its use.
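To make the problem concrete, here is a minimal Python sketch (the matrix and the `cronbach_alpha` helper are my own illustration, not from any package) of alpha computed with raters treated as "items": the simplest way to handle the missing values, listwise deletion, drops every ratee who was not rated by all raters, which in a sparse network matrix can discard most of the data.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha treating raters as "items".

    `ratings` is a ratee-by-rater matrix (rows = ratees,
    columns = raters) with np.nan marking missing ratings.
    Listwise deletion drops every ratee not rated by all
    raters -- in sparse network data this can discard most
    of the matrix, which is the problem, not a fix.
    """
    complete = ratings[~np.isnan(ratings).any(axis=1)]
    k = complete.shape[1]                       # number of raters
    item_vars = complete.var(axis=0, ddof=1).sum()
    total_var = complete.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Three ratees rated by both raters; a fourth ratee missing one
# rating is dropped entirely before alpha is computed.
X = np.array([[1.0, 1.0],
              [4.0, 4.0],
              [7.0, 7.0],
              [np.nan, 5.0]])
print(cronbach_alpha(X))   # raters agree exactly, so this prints 1.0
```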

Given the same problem, Brass and Burkhardt (AMJ 1993) employed
Coefficient C, which appears to assume ordinal data and compares "the
degree to which any pair of raters is concordant in their ordinal
ratings of a pair of ratees" (p. 456).  To the best of my knowledge,
Coefficient C is either an alias for a modified form of Cohen's Kappa,
known as Kappa-n, or is Kendall's Coefficient of Concordance.
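If Coefficient C is indeed Kendall's W, the statistic itself is easy to compute once the data are complete; here is a minimal sketch of plain W (the `kendalls_w` helper is my own illustration, not Brass and Burkhardt's procedure, and it omits the tie-correction term that averaged ranks on a seven-point scale would strictly require):

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's coefficient of concordance W.

    `ratings` is a rater-by-ratee matrix (rows = raters,
    columns = ratees) with no missing values: every rater
    must rate the same set of ratees, which is exactly the
    assumption sparse network data violates.  W = 1 means
    all raters order the ratees identically; W = 0 means
    no agreement.  Ties produce averaged ranks, which
    strictly call for a correction term omitted here.
    """
    ranks = np.apply_along_axis(rankdata, 1, ratings)
    m, n = ranks.shape                        # m raters, n ratees
    col_sums = ranks.sum(axis=0)              # rank sum per ratee
    s = ((col_sums - col_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Two raters whose ratings order three ratees identically.
print(kendalls_w(np.array([[1.0, 4.0, 7.0],
                           [2.0, 5.0, 6.0]])))   # prints 1.0
```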

This leads me to two questions.  First, given the "missing values"
problem, which seems common in medium- to large-sized interpersonal
networks, can anyone advise me on the appropriateness of Coefficient C
or other tests of inter-rater reliability for these data?
Second, is anyone aware of software that can calculate this type of
reliability despite the missing values, or will it require hand
calculation?

Thank you in advance :)

Matt Seevers
[log in to unmask]

SOCNET is a service of INSNA, the professional association for social
network researchers ( To unsubscribe, send
an email message to [log in to unmask] containing the line
UNSUBSCRIBE SOCNET in the body of the message.