***** To join INSNA, visit http://www.sfu.ca/~insna/ *****
I am working with a database consisting of various subsets and need some help with some of the analyses; I feel I am in over my head, since I am a novice in this area. My main question concerns normalizing/standardizing values within and across subsets of the data, but I thought it might help to give an overview of my research and data before getting more specific about my question.
In my research I am interested in different intra-organizational knowledge networks (hierarchical, disciplinary/competence, subsidiary, etc.) and an individual's position within each of these networks. I
have collected advice and friendship network data within one multinational with
1698 individuals spread across 25 subsidiaries in different countries (87% response
rate). Each individual has a variety of attributes: a) hierarchy level, b)
competence group, and c) Thompson classification (i.e., producing goods and services
in the organizational technical core vs. working in support functions on the
organizational periphery), plus a few others.
I have calculated various ego-network and centrality measures for each individual
within each of the different knowledge networks in which that individual is nested.
Thus, there are several different subsets of the data. To give an example:
for John, who works as a programmer in the Hong Kong office, I have calculated
several different measures of advice closeness: a) within the programming competence
group in the Hong Kong office, b) within the technical core of the Hong Kong
office, c) within the entire Hong Kong office, d) within the programming
competence group across the entire multinational, e) within the technical core of
the entire multinational, and f) within the entire multinational as a whole.
The main question I have is whether I need to normalize/standardize the values
for the different subsets and, if so, how to do this. For example, I calculated
the closeness measure for the individuals in each of the five competence
groups in the Hong Kong office by splitting the data into the different competence
groups and then running the analyses for each subset. Now I need to rejoin the
measures for the different competence groups into a single closeness variable
that I will enter into an SPSS database. In other words, I need to join the
closeness values I got for the programming group with those of the strategy,
project management, interface, and administration groups. However, these groups
are of different sizes, and in my reading of the literature I have seen that
people sometimes normalize for this. So, my question is: if I want to create an
SPSS variable that is, say, closeness within one's competence group, do I
a) cut and paste the UCINET-generated closeness values for each of the competence
groups into one column in SPSS? Or b) cut and paste the non-normalized UCINET
closeness values into SPSS and then standardize them with the database split by
competence group? Or c) just use the non-normalized closeness values that UCINET
produces and not worry about normalizing/standardizing at all? And do I then use
the same procedure for all the various measures (both egocentric and
sociocentric) and all the subsets?
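To make option (b) above concrete, here is a toy sketch of within-group standardization before stacking the values into one column (the raw scores and group names are invented; this illustrates the mechanics, not a recommendation of which option is right):

```python
from statistics import mean, stdev

# Hypothetical raw (non-normalized) closeness scores per competence group;
# the groups differ in size, so raw values are not directly comparable.
raw = {
    "programming": [0.50, 0.40, 0.45],
    "strategy": [0.30, 0.25, 0.35, 0.28, 0.33],
}

def zscore_within(groups):
    """Standardize values within each group (subtract the group mean,
    divide by the group standard deviation), then stack the groups
    into one column, preserving group order."""
    out = []
    for name, vals in groups.items():
        m, s = mean(vals), stdev(vals)
        out.extend((v - m) / s for v in vals)
    return out

column = zscore_within(raw)  # one stacked column across all groups
```

Option (a) would instead stack the size-normalized scores directly, and option (c) would stack the raw values unchanged; only (b) involves this extra step.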
Finally, on a less complicated note, I would like to learn more about the eigenvector
and power centrality measures. What are some references for articles in which
these measures have been used (other than those in the UCINET help file),
preferably ones that explain the measures at a level suitable for a novice in this field?
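For what it is worth, the core of eigenvector centrality can be sketched in a few lines: scores are the leading eigenvector of the adjacency matrix, which power iteration finds by repeatedly multiplying and rescaling (a toy illustration with an invented graph; Bonacich's power centrality generalizes this with a beta parameter):

```python
def eigenvector_centrality(adj_matrix, iters=200):
    """Power iteration: repeatedly set x <- A x and rescale, so x
    converges to the leading eigenvector of the adjacency matrix."""
    n = len(adj_matrix)
    x = [1.0] * n
    for _ in range(iters):
        x_new = [sum(adj_matrix[i][j] * x[j] for j in range(n))
                 for i in range(n)]
        norm = max(x_new) or 1.0
        x = [v / norm for v in x_new]
    return x

# Invented 4-node graph: a triangle (0, 1, 2) with node 3 pendant on node 2.
A = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
scores = eigenvector_centrality(A)
```

Intuitively, a node scores high when its neighbors score high, so the hub of the triangle outranks its symmetric partners, and the pendant node ranks lowest.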
Thanks very much and any help would be most appreciated!
Institute of International Business
Stockholm School of Economics
[log in to unmask]
SOCNET is a service of INSNA, the professional association for social
network researchers (http://www.sfu.ca/~insna/). To unsubscribe, send
an email message to [log in to unmask] containing the line
UNSUBSCRIBE SOCNET in the body of the message.