A Social Network Analysis of the Operations Research/Industrial Engineering Faculty Hiring Network

02/28/2018, by Enrique del Castillo, et al.

We study the Operations Research/Industrial-Systems Engineering (ORIE) faculty hiring network, consisting of origin and destination data for 1,179 faculty together with attribute data from 83 ORIE departments. A social network analysis of the faculty hires can reveal patterns that have an impact on the dissemination of educational innovations within a profession. We first statistically test for the presence of a linear hierarchy in the network and for its steepness. We proceed to find a near linear hierarchical order of the departments, which we contrast with other indicators of hierarchy, including published rankings. Since a single index cannot capture the structure of a complex network, we next fit a latent variable exponential random graph model to the faculty hires, which is able to reproduce the main observed network characteristics: a high incidence of self-hiring, a skewed out-degree distribution, low density (except at the top of the hierarchy), and clustering. Finally, we simplify the network to one where faculty hires take place among three groups of departments. We discuss the implications of these findings for the flow of education and teaching ideas within the ORIE discipline and compare our findings with those reported for related disciplines, Computer Science and Business.


1. Introduction

A faculty hiring network is represented by a graph composed of vertices denoting university departments in a given academic discipline, and directed arcs $(u, v)$ whose integer attribute $A_{uv}$ denotes the number of faculty hired by department $v$ who received their Ph.D.'s in department $u$. A department hiring a Ph.D. from another department generates a directed edge in the network going from the sender (Ph.D.-granting) department to the receiver (hiring) department. In the pre-internet era, the study of faculty hiring networks was confined to departments of Sociology (Schichor, 1970; Burris, 2004), where directories with the necessary faculty information existed in print. Today, with most departments and faculty posting their information on personal web pages, studying a faculty hiring network has become easier to do even if no directories are available, provided the faculty information is gathered. There now exist analyses of faculty hiring networks in a broad range of disciplines in the US, such as Political Science (Fowler et al., 2007; Schmidt & Chingos, 2007), Mathematics (Myers et al., 2011), Communication (Mai et al., 2015), Business (Clauset et al., 2015), Computer Science (Clauset et al., 2015; Huang et al., 2015), Law (Katz et al., 2011), and History (Clauset et al., 2015).
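As a minimal illustration of this representation (with hypothetical departments and hire counts, not our data), such a network can be encoded in R as a weighted adjacency matrix, with self-hires on the diagonal:

    # Toy hiring network: A[u, v] = number of faculty trained at department u
    # and currently employed at department v (hypothetical values).
    depts <- c("DeptA", "DeptB", "DeptC")
    A <- matrix(c(2, 3, 1,
                  1, 1, 0,
                  0, 1, 1),
                nrow = 3, byrow = TRUE, dimnames = list(depts, depts))
    library(igraph)
    # diag = TRUE keeps self-loops (self-hires) in the graph.
    g <- graph_from_adjacency_matrix(A, mode = "directed", weighted = TRUE,
                                     diag = TRUE)
    E(g)$weight  # multiplicity (number of hires) of each directed edge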

Researchers have studied the hiring and placement patterns of their academic fields for a variety of reasons. One concept these studies attempt to clarify is the prestige of a department, a rather elusive and inadequately theorized concept (Burris, 2004) that is nevertheless popular among university administrators. Departmental prestige can be studied as an effect of the position of the department in a hierarchy existing in the faculty hiring network, as these positions provide a ranking within a discipline. Some authors have considered hiring networks for the purpose of determining the inequality in the production of Ph.D.s (Clauset et al., 2015), an apparently general phenomenon in which a small number of departments produce a large fraction of the Ph.D.s hired as professors. Others have studied these networks to determine inequalities in the hiring process (e.g., of women, see Way et al. (2016)), or to study the sociological aspects of a discipline (Fowler et al., 2007; Katz et al., 2011), e.g., whether communities or a dominance hierarchy exist among departments. The hiring and placing of Ph.D. students is also studied in some areas to understand how new ideas are disseminated through a profession.

In this paper we make available and model the U.S. ORIE faculty hiring network. The network is composed of departments/programs/schools (from now on, "departments") within these three interrelated fields. Appendix 1 gives a list of all 83 ORIE departments considered and how they were selected for this study. The ORIE dataset was compiled in summer 2016. We first perform statistical tests on the existence and steepness of a linear dominance hierarchy among the 83 ORIE departments. Having found strong evidence of such a hierarchy, we then introduce the notion of a minimum violation and strength (MVS) measure of departmental prestige, and use it to compute a near linear hierarchy for the ORIE network, contrasting it with other, more common measures of individual importance in a social network, including published ORIE rankings (US News and NRC). Given that single measures of department prestige provide an inherently incomplete description of a complex network, we further model the positions of each department in the network as latent variables in an exponential random graph model, which allows us to consider the uncertainty in the edge data and permits finding groups of related departments. With these positions, we reduce the ORIE network to a simpler network of faculty hires between and within three groups of departments. Throughout this study, we compare the ORIE faculty hiring network with similar networks in related academic disciplines. We conclude with a summary and discussion of the implications of our findings.

2. Descriptive statistics of the ORIE network compared to those from related disciplines.

ORIE faculty data were collected during May-June 2016 from the web pages of faculty working in institutions in the USA. A list of all Ph.D.-granting institutions considered and the criteria used for their inclusion are given in Appendix 1. In total, 1,179 faculty from 83 ORIE departments were considered in the analysis. Table 1 lists general descriptive statistics of the ORIE network compared to those of two closely related fields, Computer Science (CS) and Business schools, networks analyzed by Clauset et al. (2015), excluding their "Earth" (outside US) department and links from and to this vertex. The ORIE network is considerably smaller, with an average department size of 14.2 faculty, compared to 21.4 faculty for CS and the much bigger schools of Business, with an average of 70.1 faculty.

Network    Vertices  Edges (% female)  Self-edges/edges  Density  Assortativity (degree)  Assortativity (MVR)  Reciprocity
ORIE       83        1179 (19.5)       0.1399            0.1121   0.1452                  0.4614               0.1538
CS         205       4388 (16.8)       0.0711            0.0688   0.2964                  0.5327               0.1264
Business   112       7856 (25.6)       0.0556            0.2738   0.2661                  0.4330               0.2197
Table 1. Descriptive statistics of the ORIE network compared to those of Computer Science departments and Schools of Business. Assortativity is a measure of how much vertices with similar values of an attribute connect to each other (the minimum violation ranking (MVR) is explained in section 4).

Two distinctive characteristics of the ORIE network relative to CS and Business (see Table 1) are a) its much larger proportion of "incestuous" Ph.D. hiring, given by the self-edges in the network (14% of all faculty, almost three times the proportion in Business and close to twice that of CS), and b) its lower degree assortativity. Figure 1 (top) depicts the number of self-hires among the 83 ORIE departments, sorted in a hierarchy that will be explained below. Note how prevalent self-hiring is: 74% of the ORIE departments have (as of summer 2016) at least one former Ph.D. student hired as a faculty member. This is similar to Business schools (74%) but higher than in Computer Science departments (58%). Almost 14% of all ORIE faculty were hired by their originating department. In some departments, self-hires account for more than 40% of the faculty (see Figure 1, bottom). Our dataset, however, lacks information about cases where a faculty member returns to her alma mater after working in other departments, that is, cases where the self-hiring was not direct from graduate school to faculty.
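The self-hiring quantities discussed above can all be read off the diagonal of the adjacency matrix; a minimal sketch, reusing the toy matrix A from the sketch in section 1 (column sums give each department's faculty size, since columns are the hiring departments):

    diag(A)                    # self-hires per department
    sum(diag(A)) / sum(A)      # fraction of all faculty who are self-hires
    mean(diag(A) > 0)          # fraction of departments with at least one self-hire
    diag(A) / colSums(A)       # self-hires as a share of each department's faculty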

The assortativity index is a correlation measure of how much vertices with similar values of a given scalar attribute tend to be connected to each other (for this and other descriptive network statistics, see, e.g., Newman (2010)). Shown in Table 1 are the assortativity measurements with respect to two vertex attributes. The first, assortativity based on the total degree of each vertex (department), indicates a lower tendency for ORIE departments, compared to those in CS and Business, to establish hiring-placement connections with departments having a similar total number of hires and placements. The assortativity with respect to the MVR indices indicates how likely departments are to establish hiring-placement relations with departments ranked similarly in a hiring hierarchy to be discussed in section 4. For ORIE, this assortativity is between that of CS and Business.
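Both assortativity measures can be computed with the igraph package; a sketch, reusing the graph g built in section 1 and a hypothetical rank vector (the exact argument name for the vertex scores has varied across igraph versions, so check ?assortativity):

    library(igraph)
    assortativity_degree(g, directed = TRUE)  # degree assortativity
    mvr <- c(1, 2, 3)                         # hypothetical MVR-style ranks, one per vertex
    assortativity(g, mvr, directed = TRUE)    # assortativity w.r.t. a vertex score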

Figure 1. Top: in blue, observed number of self-hires among the 83 ORIE departments, sorted by the MVS hierarchy index described later in this paper; in red, expected number of self-hires as predicted by the latent location ERGM described in section 5. Bottom: self-hires as a percentage of all faculty in each department, sorted by MVS index. While self-hires are more numerous in the top departments, self-hires as a proportion of total faculty are significant across all the ORIE departments.

Other descriptive statistics of the ORIE network shown in Table 1 also tend to fall between those of the CS and Business networks. The density or connectance of the ORIE network, defined as the number of non-zero entries in the adjacency matrix divided by the maximum possible number of edges, is between CS (which has a very sparse network) and Business (which has the densest). As will be shown below, the ORIE faculty hiring network is actually quite sparse except for a group of departments at the top of a hiring hierarchy, which connect more often among themselves. The proportion of female faculty in ORIE is 19.5%, higher than in CS (16.8%) but lower than in Business schools (25.6%), an inequality that calls for further analysis beyond the present study. Also shown in Table 1 is the reciprocity, the proportion of hiring relations that are mutual (pairs of departments that have each hired from the other). The reciprocity of the ORIE network is also between that of CS and Business.

Finally, the inequality in the production of Ph.D.'s among ORIE departments, although quite large, is also between that of CS departments and Business schools. Figure 2 shows the Lorenz curves for the proportion of Ph.D.'s produced. Approximately 10% of the ORIE departments generate about 50% of the faculty, although this is not as pronounced as in Business schools, which have a very steep Lorenz curve near zero, with around 3% of schools generating 40% of the faculty.
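A Lorenz curve like those of Figure 2 can be sketched in base R from the production totals (row sums of the adjacency matrix); a minimal sketch with the toy matrix A:

    prod <- sort(rowSums(A), decreasing = TRUE)  # faculty produced by each department
    p_depts   <- seq_along(prod) / length(prod)  # cumulative fraction of departments
    p_faculty <- cumsum(prod) / sum(prod)        # cumulative fraction of faculty produced
    plot(c(0, p_depts), c(0, p_faculty), type = "l",
         xlab = "Fraction of departments", ylab = "Fraction of faculty produced")
    abline(0, 1, lty = 2)                        # line of perfect equality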

Figure 2. Lorenz curves indicating the inequality in the percentage of faculty production in ORIE, Computer Science, and Business as a percentage of the number of departments. About 10% of ORIE departments generate close to half of all ORIE faculty.

3. Existence of a linear dominance hierarchy

Given a faculty hiring network, a question of interest is whether it is possible to order the departments in such a way as to form a linear dominance hierarchy. The concept of dominance, popular in social networks in ecology and competitive sports, refers to an attribute of the pattern of repeated interactions between two individuals, characterized by a consistent outcome in favor of the same individual in a dyad (de Vries, 1995). The consistent winner is dominant and the loser subordinate. Individuals form a linear dominance hierarchy if and only if a) for every dyad $(i, j)$ either $i$ dominates $j$ or $j$ dominates $i$, and b) every triad is transitive, i.e., for any individuals $i$, $j$, and $k$, if $i$ dominates $j$ and $j$ dominates $k$, then $i$ dominates $k$. Transitivity is equivalent to a dominance hierarchy that is acyclic (Ali et al., 1986). The notion of a dominance hierarchy has been studied in faculty networks by Clauset et al. (2015), Huang et al. (2015) and Mai et al. (2015). Applied to faculty hiring networks, if department $i$ hires more Ph.D.s from department $j$ than $j$ hires from $i$, the relationship between these two vertices implies a "dominance" relation of department $j$ with respect to department $i$, given that $i$ wishes more strongly to have access to the students produced in department $j$ and not vice versa (ties are therefore allowed). In his study of animal societies, Landau (1951) defined a score structure $(s_1, s_2, \ldots, s_n)$ where each $s_i$ equals the number of individuals dominated by element $i$. A perfect linear hierarchy is then a score structure

$$s_i = n - i, \qquad i = 1, \ldots, n,$$

so that the members of the society can be ordered as $1, 2, \ldots, n$, with each dominating all the members below it, and being dominated by all those above. At the opposite extreme is an "egalitarian" society where, assuming $n$ odd, $s_i = (n-1)/2$ for all $i$. Landau (1951) then introduced his "hierarchy index" $h$, a measure of the variability of the $s_i$'s normalized so that $h = 0$ implies equality (egalitarian dominance) and $h = 1$ implies a perfect linear hierarchy, with $h$ equal to

$$h = \frac{12}{n^3 - n} \sum_{i=1}^{n} \left( s_i - \frac{n-1}{2} \right)^2.$$

Of course, a perfect linear hierarchy may not exist in a society, and some violations of a perfect linear hierarchy may occur, a topic we discuss in section 4.
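Landau's index is straightforward to compute from a 0/1 dominance matrix; a minimal sketch (ignoring the corrections for tied or unknown relationships used in de Vries' modified index $h'$ below):

    # D[i, j] = 1 if individual i dominates individual j, 0 otherwise.
    landau_h <- function(D) {
      n <- nrow(D)
      s <- rowSums(D)                  # s_i = number of individuals dominated by i
      (12 / (n^3 - n)) * sum((s - (n - 1) / 2)^2)
    }
    D <- matrix(c(0, 1, 1,
                  0, 0, 1,
                  0, 0, 0), nrow = 3, byrow = TRUE)
    landau_h(D)                        # = 1, a perfect linear hierarchy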

A practical difficulty when determining if a linear hierarchy exists in a society (and, in general, for any other inference one desires to attempt based on observed social interactions) is that we may have few data points about individuals interacting, with many not interacting during the observation period. The basic approach to deal with the sparseness of the observed adjacency matrix (or sociomatrix) is to treat it as a realization of a stochastic process, and to use nonparametric tools for statistical inference. Along these lines, de Vries (1995) introduces a randomization test for the hypothesis of no linear hierarchy based on a modified Landau statistic $h'$ that takes into account unknown or tied relationships (in our case, these are departments that have never hired each other's Ph.D. students). To perform the test, each dyad $(i, j)$ is randomized $N$ times and the statistic is computed for each random sociomatrix, giving a set of numbers $h'_1, \ldots, h'_N$; the statistic $h'_{\text{obs}}$ is also computed for the observed sociomatrix. If the empirical p-value of the test, defined as $|\{k : h'_k \geq h'_{\text{obs}}\}| / N$ (where $|\cdot|$ indicates cardinality), is small, this is evidence that the observed linear hierarchy is statistically different from that of a random matrix, which has no hierarchy.
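The logic of the randomization test can be sketched as follows; this is a simplified illustration using $h$ rather than de Vries' $h'$, deciding every dyad at random under the null, and reusing landau_h and the toy matrix A from the earlier sketches (it is not the paper's code):

    # Dominance matrix from hiring counts: i dominates j if j hires more
    # Ph.D.s from i than i hires from j, i.e., A[i, j] > A[j, i].
    dominance <- function(A) (A > t(A)) * 1
    perm_test_h <- function(D, N = 10000) {
      n <- nrow(D)
      h_obs <- landau_h(D)
      h_rand <- replicate(N, {
        R <- matrix(0, n, n)
        for (i in 1:(n - 1)) for (j in (i + 1):n)   # randomize each dyad
          if (runif(1) < 0.5) R[i, j] <- 1 else R[j, i] <- 1
        landau_h(R)
      })
      mean(h_rand >= h_obs)            # empirical p-value
    }
    perm_test_h(dominance(A), N = 1000)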

Ordering individuals in a linear or near-linear hierarchy is justified only if there is statistical evidence in favor of such a hierarchy, comparing the existing hierarchy to what could be obtained from a random matrix. Finding a near-linear hierarchy (one with $h' < 1$) is a hard combinatorial problem, about which we comment below. Figure 3 shows the results of the de Vries tests applied to the ORIE, CS and Business networks (data for CS and Business taken from Clauset et al. (2015)). The empirical p-values are zero for all three disciplines, implying there is a statistically significant linear hierarchy in each of the three networks.

Figure 3. De Vries' test for the presence of a linear dominance hierarchy in the complete ORIE network (left), the Computer Science network (middle), and the Business network (right). Shown are the randomization distributions (10K simulations) of Landau's $h'$ values under the null hypothesis of a random graph. Red vertical lines are the observed values. In all cases, the empirical p-values are 0.0, indicating strong evidence of the presence of a hierarchy in each of the three faculty networks.

If a linear or near-linear hierarchy is significant, a related question is how steep the hierarchy is. In Appendix 2 we statistically test for the significance of the steepness of a linear hierarchy (de Vries et al., 2006), and find that the ORIE, CS, and Business networks all have a hierarchy with a statistically significant slope.

The tests for the significance of a linear hierarchy and for a significant slope were repeated for the top ORIE departments (sorted according to the MVS index in Table A1 in Appendix 1, discussed further below). Table 2 shows the results of these tests applied to an increasing number of (sorted) departments. The slope is quite steep only for the first 10 departments. Overall, for all 83 ORIE departments the slope is statistically different from zero, but its numerical value is rather small.

As with the tests in this section, in the sequel we will again treat the observed network as a noisy realization of possible networks and, following Clauset et al. (2015), will use a "bootstrapping" approach to network generation to help perform an analysis that considers other possible (sparse) networks that could have been observed by chance.

No. of departments  Density (connectance)  Observed h'  p-value  Slope    p-value
Top 10              0.6000                 0.5590       0.0307   -0.4281  0.0000
Top 20              0.4025                 0.3036       0.0098   -0.2061  0.0000
Top 50              0.1912                 0.1328       0.0004   -0.0688  0.0000
All 83              0.1121                 0.0701       0.0000   -0.0278  0.0000
Table 2. Dominance hierarchy linearity and steepness test results for different ORIE subnetworks. The hierarchy is steeper closer to the top, where it is also much denser. The hierarchy considered is the MVS hierarchy explained in section 4.

3.1. A caveat: “observational zeroes” in a sparse sociomatrix

The preceding analysis indicates that the 83 ORIE departments can be ranked according to a statistically significant near linear hierarchy using hire-placement data, even though the hierarchy appears quite flat for most departments except at the very top. The de Vries steepness test (Appendix 2) depends on the number of interactions between the actors or individuals in the network. In animal behavior, the ideal way to find a hierarchy is to observe pairwise competitions in a balanced (designed) "tournament" (David, 1987), but with observational social network data an abundance of "observational zeroes" makes it more difficult to determine a hierarchy. Although no formal power analysis is available, de Vries (1995) gives some numerical evidence suggesting that the power of the linearity test to detect an existing hierarchy goes down as the frequency of observational zeroes increases, and it is quite possible the same occurs in the steepness test. Table 2 shows how the connectance (density) among the departments at the top of the hierarchy is much higher than among the rest, which implies the test statistics carry more information about placements and hires within this group than among dyads that include at least one department ranked lower in the hierarchy.

The predominance of zeroes in a sociomatrix does not necessarily indicate there are no dominance relations among the dyads. As warned by de Vries et al. (2006), one should be careful when interpreting linearity and steepness tests for societies in which very few pairwise interactions are recorded. This warning is by extension worth keeping in mind when trying to find a dominance relation or ranking in a faculty hiring network. We can distinguish between three types of "observational zeroes" in the adjacency matrix (or sociomatrix): a) in animal behavior, if two individuals are not observed to interact, it may be because a dominance relation is present, with the subordinate individual avoiding the dominant one. We will call the analogous case of departments avoiding hiring from higher ranked departments deference or avoidance zeroes. Alternatively, b) there may simply be no dominance-subordinate relation between two individuals that "respect" each other, a case analogous to departments that have no hiring-placement relations simply due to their finite capacity, which limits the number of observable hiring-placement relations; we will call these equality zeroes. Finally, c) it could happen that we have not observed the network long enough, and the question of whether there is a dominance relation between two individuals is unresolved. This would be the case when relatively young departments have not established hiring-placement relations with many departments simply because of their young age; we will call these unresolved zeroes. It is not possible, however, to determine from the data which kind of "observational zero" one is dealing with in a particular hiring network.

With these precautionary notes we now proceed to find a near linear hierarchy in the ORIE network (for the CS and Business networks, see Clauset et al. (2015)). The presence of "observational zeroes" and the low density of the ORIE network justify the bootstrapping approach of the next section. Likewise, the different connectance among strata of the hierarchy, and the corresponding changes in steepness, justify the search for groups (communities) of departments, a task we address in section 5.

4. The minimum violations (MVR) and minimum violations and strength (MVS) indices

Let $A$ be the adjacency matrix of a faculty hiring network, with $A_{uv}$ the number of faculty trained at department $u$ and hired by department $v$. Clauset et al. (2015) define a hierarchical index given by a permutation $\pi$ (where $\pi(u)$ is the rank of department $u$, rank one being highest) that induces the minimum number of edges that "point up" the hierarchy. This is found by

$$\min_{\pi} \; V(\pi) = \sum_{(u,v):\, A_{uv} > 0} \operatorname{sgn}\bigl(\pi(u) - \pi(v)\bigr). \qquad (1)$$

Thus, if there is an edge from $u$ to $v$ and $\pi(u) > \pi(v)$ (i.e., $u$ is ranked below $v$), this means a lower ranked department in the hierarchy has placed a faculty member at a higher ranked department (recall rank one is highest); in other words, we have a "violation" of the hierarchy implied by $\pi$, and this will increase $V(\pi)$. Similarly, if there is an edge from $u$ to $v$ and $\pi(u) < \pi(v)$ (i.e., $u$ is ranked above $v$), this is not a violation, and will make $V(\pi)$ decrease. A permutation that minimizes (1) is called a minimum violation ranking (MVR), which has been proposed as the optimal way of ranking players in a round-robin tournament (Ali et al., 1986). For networks where a perfect linear hierarchy does not exist ($h < 1$, thus violations are unavoidable), solving (1) is a hard combinatorial problem. Problem (1) is in particular equivalent to reordering the columns and rows of the adjacency matrix so that it becomes upper triangular, a problem which has been proved to be NP-complete (Charon & Hudry, 2010). For the 83 departments in the ORIE network, however, an exact algorithm for finding the MVRs, such as the binary linear integer programming formulation of Pedings et al. (2012), is computationally feasible. Note that multiple optimal MVR rankings with the same number of violations may exist in a complex network. Although we will argue that the MVRs do not fully reflect the hierarchy of a faculty hiring network, Table A1 in Appendix 1 lists the MVR rankings of the 83 ORIE departments considered in this study for completeness.

Rather than solving (1), which considers only the number of violations, we could also consider the strength of each violation. To obtain a dominance relation in a society of animals, de Vries (1998) defines the strength of the violations in a sociomatrix as

$$\min_{\pi} \; S(\pi) = \sum_{(u,v):\, \pi(u) > \pi(v)} A_{uv}\,\bigl(\pi(u) - \pi(v)\bigr), \qquad (2)$$

where the difference $\pi(u) - \pi(v)$ (with $\pi(u) > \pi(v)$) measures the strength of a violation in the ranking (which exists if $A_{uv} > 0$). In our case, entries under the diagonal of the rank-sorted adjacency matrix are unexpected faculty hires, where department $v$ hires a number of faculty from a lower ranked department $u$. We will refer to the rankings resulting from solving (2) as the minimum violations and strength (MVS) rankings. To obtain these rankings, we modify the stochastic search algorithm in Clauset et al. (2015) to account for the strength of a violation (see Appendix 3). The ORIE MVS rankings are shown in Table A1 in Appendix 1.
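For a candidate ranking, the number of violations in (1) and the strength criterion (2) can be computed directly from the adjacency matrix; a minimal sketch consistent with the definitions above, using the toy matrix A (rk[u] is the rank of department u, rank 1 highest):

    violations <- function(A, rk) {
      up <- outer(rk, rk, ">")       # TRUE where the sender is ranked below the receiver
      sum(A[up] > 0)                 # number of edges that point "up" the hierarchy
    }
    violation_strength <- function(A, rk) {
      d <- outer(rk, rk, "-")        # d[u, v] = rk[u] - rk[v]
      sum(A[d > 0] * d[d > 0])       # violating hires weighted by the rank difference
    }
    rk <- c(1, 2, 3)                 # hypothetical ranking of the toy departments
    violations(A, rk)
    violation_strength(A, rk)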

A problem with both the MVR and MVS rankings previously proposed in the literature on hiring networks is that a department that places few of its own Ph.D. students, despite hiring from the top departments, may be ranked very low. For instance, this is the case, under both the MVR and MVS rankings, of the Naval Postgraduate School OR department, which received among the lowest rankings under these two indices. A department that consistently hires from top ranked departments should be ranked higher than one that not only does not place its Ph.D.s but also has no hiring interactions with the top ranked group. To illustrate, a bar graph of the adjacency matrix sorted according to the MVS rankings indicates the nature of the problem (Figure 4, left): while both (1) and (2) tend to minimize the number of entries below the diagonal, the entries above the diagonal are not considered. Note the large values in the upper right corner of the matrix plot; these correspond to departments ranked low under the MVS criterion that hired repeatedly from the very top departments in the ranking, yet ended up with a very low MVS ranking. To correct this anomaly, we propose to also account for this type of secondary violation, which we will call unexpected placements, i.e., when a top department places too many Ph.D.s in other, lower ranked departments that should not be that attractive to its graduating students. Penalizing this kind of violation (and its strength) will result in an improved ranking for a department that "hires high". A compound criterion, including the strength of both unexpected hires and unexpected placements, is:

$$\min_{\pi} \; \Bigl[ \sum_{(u,v):\, \pi(u) > \pi(v)} A_{uv}\,\bigl(\pi(u) - \pi(v)\bigr) \; + \sum_{(u,v):\, \pi(u) < \pi(v)} A_{uv}\,\bigl(\pi(v) - \pi(u)\bigr) \Bigr], \qquad (3)$$

where the second term considers entries above the diagonal (since $\pi(u) < \pi(v)$) and $\pi(v) - \pi(u)$ measures the strength of this type of violation, similarly to (2). It would be wrong, however, to use this compound criterion as stated, since it gives equal weight to the two types of violations, whereas unexpected hires (i.e., violations, the entries below the diagonal) should be penalized more severely than unexpected placements (the entries above the diagonal). Assigning weights to each objective is ad hoc and therefore not a solution. Instead, to give primary importance to violations (unexpected hires), the pairwise stochastic swapping algorithm that we utilize in solving (2) (Appendix 3) was defined to accept a new ranking (i.e., exchanging the rankings of the two departments in question) if either:

  1. the number of violations is lower after the switch, or

  2. the number of violations is the same as before the switch, but the sum of the strengths of the two types of violations is lower after the switch.

We refer to the resulting rankings as MVS* rankings. They are reported in Appendix 1, Table A1. A sketch of this acceptance rule is given below.
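A minimal sketch of the acceptance rule, reusing violations() from the sketch above; total_strength is our own illustrative helper implementing the unweighted sum in (3), not the Appendix 3 code:

    total_strength <- function(A, rk) {
      d <- outer(rk, rk, "-")
      sum(A * abs(d))                # strengths of unexpected hires plus placements
    }
    accept_swap <- function(A, rk, i, j) {
      rk2 <- rk
      rk2[c(i, j)] <- rk[c(j, i)]    # tentatively swap the two departments' ranks
      v_old <- violations(A, rk); v_new <- violations(A, rk2)
      v_new < v_old ||
        (v_new == v_old && total_strength(A, rk2) < total_strength(A, rk))
    }
    accept_swap(A, rk, 1, 2)         # would swapping departments 1 and 2 be accepted?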

A concept related to the two types of violations, unexpected hires and unexpected placements, found in the area of sports team rankings, is that of an adjacency matrix in "hillside" form (Pedings et al., 2012). In sports team rankings, the adjacency matrix contains the point or goal differential between all pairs of teams in a tournament. A sports tournament sociomatrix is in hillside form if it is ascending along all rows and descending along all columns, i.e., if $A_{i,j} \leq A_{i,j+1}$ and $A_{i,j} \geq A_{i+1,j}$ (Pedings et al., 2012). Associated with this definition there are two types of violations: "upsets", nonzero entries below the matrix diagonal, and "weak wins", entries above the diagonal that do not follow a hillside pattern, i.e., when team $i$ did not score as many points as would have been expected when playing team $j$, with $\pi(i) < \pi(j)$.

While the concept of weak wins and upsets in a sports tournament is similar to that of unexpected hires and placements, there are important differences. In a sports tournament, the sociomatrix contains goal differentials, and therefore the hillside form as defined above is the ideal form of a hierarchy. Contrary to a tournament, where every team plays against all other teams and therefore a complete comparison network can be formed, we do not have pairwise comparisons for all dyads in a faculty hiring network, i.e., not all edges are observed, as we have "observational zeroes" as mentioned before, and this requires special consideration. In addition, in a faculty hiring network the definition of a hillside form adjacency matrix needs to be modified to account for both unexpected hiring and unexpected placements, since we wish to find a hierarchy with both descending columns and descending rows:

$$A_{i,j} \geq A_{i+1,j} \quad \text{and} \quad A_{i,j} \geq A_{i,j+1} \quad \text{for all } i, j.$$

That is, instead of winning by more goals against decreasingly ranked opponents, higher ranked departments are expected to place fewer faculty at decreasingly lower ranked departments. The MVS-ranked matrix (see Figure 4, left) is not in what we define as hillside form, since there is a "peak" in the upper right corner of the matrix.

Figure 4 (right) shows the adjacency matrix for the ORIE departments sorted according to the MVS* indices. Note how the matrix is closer to hillside form, i.e., a matrix with both rows and columns closer to being monotonically descending (the Naval Postgraduate School then deservedly ranks much higher, and the peak of the left plot moves considerably further to the left of the matrix).

Figure 4. Left: matrix bar plot of the adjacency matrix under the MVS rankings, which consider the strength of type 1 violations (unexpected hires, i.e., the placement of a faculty member from a lower to a higher ranked department). Right: matrix bar plot of the adjacency matrix under the MVS* rankings, which consider the strength of type 1 violations and also of type 2 violations (unexpected placements into lower ranked departments). The plot on the right is closer to "hillside" form than the one on the left, a consequence of the different objectives. Note the peak in the upper right corner of the left plot; these are departments that hire an unexpected number of faculty from the top departments, yet ended up, under the MVS rankings, in a very low position. The MVS* rankings ameliorate this situation to a certain degree.

Bootstrapping the observed network

Given the sparseness of the ORIE network due to observational zeroes, we provide more robust MVS (and MVS*) indices by optimizing 1000 "bootstrapped", or randomly sampled (with replacement), ORIE networks. The bootstrapped networks are obtained by sampling edges, with replacement, from the observed ORIE network, such that the probability of sampling an edge is proportional to the number of faculty on that edge (see the sketch below). Each bootstrapped network was then optimized according to criterion (2) or (3) using the stochastic search method of Appendix 3, and the resulting MVS values were recorded for each department. The results for MVS across the ensemble of bootstrapped replications are shown in Figure 5. Note how there is less uncertainty in the MVS values at the extremes of the hierarchy, and more uncertainty for those ranked in the middle, a phenomenon also reported for the CS and Business networks (Clauset et al., 2015). The final MVS and MVS* indices reported in Table A1 are the average ranks computed from the 1000 bootstrapped and optimized networks. A network representation of the top 15 MVS departments is shown in Figure 6; this is the densest part of the ORIE network.
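A sketch of this edge bootstrap, our own illustration of the sampling scheme described above (reusing the toy matrix A):

    bootstrap_network <- function(A) {
      idx <- which(A > 0, arr.ind = TRUE)          # distinct observed edges
      m   <- sum(A)                                # total number of faculty
      draw <- sample(nrow(idx), m, replace = TRUE,
                     prob = A[idx])                # probability proportional to multiplicity
      B <- matrix(0, nrow(A), ncol(A), dimnames = dimnames(A))
      for (k in draw)
        B[idx[k, 1], idx[k, 2]] <- B[idx[k, 1], idx[k, 2]] + 1
      B
    }
    B <- bootstrap_network(A)                      # one bootstrapped network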

Figure 5. Boxplots of the bootstrapped minimum violation and strength (MVS) ranks for all 83 departments in the ORIE network, sorted by mean MVS value (1000 bootstrapped networks). The reported MVS ranks correspond to the average values. Note how the extreme ranks are less uncertain, a characteristic also reported for other disciplines (Clauset et al., 2015).
Figure 6. ORIE faculty hiring network for the top 15 departments sorted according to the MVS index. Edge width is proportional to the number of hires. The edge density (or connectance) of the ORIE network is high only for the departments at the top of the hierarchy; otherwise it is quite sparse, see Table 2.

Correlations between different ranks

Table 3 shows the Spearman correlation coefficients between various rankings computed for the ORIE departments, including two published rankings (NRC and US News & World Report, see Appendix 1) and the MVR and MVS rankings described earlier. All these rankings are shown in Table A1 in Appendix 1. Not surprisingly, the MVR and MVS rankings are highly correlated, but a more unexpected high correlation is between the MVS rankings and the "Hub" importance ranks. In a social network, Hubs and Authorities are types of vertices defined in an intertwined manner: a hub is a vertex that points to many other vertices with high authority, and an authority is a vertex pointed to by many hubs (Newman, 2010). In a faculty hiring network, vertices with a high hub ranking are departments that "feed" faculty to departments sought after by faculty candidates; thus hub importance refers to placement capacity into departments that are in turn important (similarly, the authority ranks refer to hiring capacity, the ability to attract and hire faculty from important departments). MVS is also correlated with the out-degree rankings, but the correlation is not as high as with hub importance: a department must not only generate many Ph.D.s but must place them in the best possible departments. The PageRank, (left) eigenvector, and out-degree rankings are all highly correlated because the importance of "centrality" in a faculty hiring network is bestowed by out-degree.

While there is significant correlation between published rankings and the MVS and MVR indices, there are notable differences, especially among the top 20 departments (see Appendix 1, Table A1). For the top 20 MVS departments, the Spearman correlations between the MVS and the US News and NRC rankings are 0.796 and 0.711, respectively, with two departments in the top 20 MVS indices not on the US News list.
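The correlations of Table 3 follow from a pairwise-complete Spearman computation; a minimal sketch with a hypothetical rank table (NAs mark departments absent from a published ranking):

    ranks <- data.frame(MVS = c(1, 2, 3, 4),
                        USN = c(2, 1, NA, 3),
                        Hub = c(1, 3, 2, 4))     # hypothetical rank columns
    cor(ranks, method = "spearman", use = "pairwise.complete.obs")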

MVS MVR USN NRC In-deg Out-deg Eigen PgRank Bet. Hub Auth.
MVS 1.000
MVR 0.921 1.000
USN 0.857 0.776 1.000
NRC 0.849 0.819 0.876 1.000
In-deg 0.521 0.275 0.690 0.499 1.000
Out-deg 0.884 0.832 0.825 0.773 0.585 1.000
Eigen 0.873 0.840 0.757 0.789 0.438 0.863 1.000
PgRank 0.861 0.860 0.799 0.775 0.461 0.942 0.938 1.000
Bet. 0.490 0.418 0.479 0.518 0.539 0.754 0.659 0.709 1.000
Hub 0.923 0.855 0.857 0.845 0.569 0.933 0.917 0.906 0.637 1.000
Auth. 0.724 0.557 0.801 0.693 0.776 0.613 0.607 0.574 0.341 0.712 1.000
Table 3. Spearman correlation coefficients between the ranks of common measures of vertex importance, including public rankings, in the observed ORIE network. Only pairs of entries with no NA values were considered. (USN = US News 2016 ranks, NRC = 2011 NRC ranks).

5. Latent location variables and clustering of ORIE departments: a latent exponential random graph model

Descriptive statistics such as those in section 2 can only provide partial information about the structure of a complex network. Single indices such as the MVR and MVS indices (or published rankings) that try to capture the "prestige" of an academic department are inherently incomplete. One feature evident from the descriptive analysis of the ORIE network (sections 2 and 4) is that there is a core of departments at the top of the hierarchy that form denser connections, while the periphery is much more sparsely connected. Also, there is evidence that the steepness of the hierarchy varies within the hierarchy (Appendix 2 and Table 2). This indicates that in order to better understand the structure of the network, rather than finding a linear hierarchy based on single indices, it is worth finding groups of similar departments.

In this section, rather than simply applying clustering algorithms directly to the ORIE network data, we first consider the observed ORIE network as a noisy realization, or sample, from a stochastic network model. We study a particular type of exponential random graph model (ERGM) in which the conditional probability of a tie between two actors, given covariates (if any), depends on the distance between the actors in an unobserved or latent "social space" (Krivitsky & Handcock, 2008).

In a latent ERGM, the sociomatrix (or adjacency matrix) $A$ is viewed as a random variable that depends on parameters $\beta$, covariates $x_{ij}$, and positions $Z_i$ in a "social space". The latent ERGM we used is

$$A_{ij} \sim \text{Poisson}(\lambda_{ij}), \qquad \log \lambda_{ij} = \beta^{\top} x_{ij} - \|Z_i - Z_j\|.$$

We thus adopted a Poisson density for the number of hires between two departments and a link function which is log-linear in the covariates and in the distances in a Euclidean latent space (defined by the $Z_i$'s). The latent positions are assumed to follow a mixture of spherical multivariate normals in the social space, which could in principle have any dimension $d$:

$$Z_i \sim \sum_{g=1}^{G} p_g \, \text{MVN}_d\bigl(\mu_g, \sigma_g^2 I_d\bigr), \qquad (4)$$

where $p_g$ is the probability an individual belongs to group $g$ ($\sum_{g=1}^{G} p_g = 1$).

Fitting the model implies finding estimates for the parameters $\beta$, $\mu_g$, $\sigma_g^2$, and $p_g$, for $g = 1, \ldots, G$, together with the latent positions $Z_i$. A Bayesian formulation has been proposed by Krivitsky et al. (2009), who developed a Markov chain Monte Carlo (MCMC) estimation approach based on Gibbs sampling updates for the model parameters and a Metropolis-Hastings update for the latent positions $Z_i$. This method has been implemented in the latentnet R package (Krivitsky & Handcock, 2008), whose MCMC fitting routine is the function ergmm. This function allows one to use model terms that consider self-loops, an important characteristic of the ORIE network as discussed in section 2. We use the method described in Krivitsky & Handcock (2008) to set up the priors for all hyperparameters.

For the MCMC estimation, for each alternative model we used a warm-up period of 50,000 iterations, then ran the MCMC routine for 5,000,000 more iterations, and finally collected statistics only every 100 iterations over 50,000 more iterations.
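A sketch of such a fit with latentnet is below; the term and control-argument names follow our reading of the latentnet documentation and should be treated as assumptions to check against ?ergmm, and the logMVS attribute (like the toy matrix A) is hypothetical:

    library(latentnet)                   # also loads the network package
    net <- as.network(A, directed = TRUE, loops = TRUE,
                      matrix.type = "adjacency",
                      ignore.eval = FALSE, names.eval = "hires")
    net %v% "logMVS" <- log(c(1, 2, 3))  # hypothetical MVS ranks as a vertex attribute
    # In the paper A is 83 x 83; the toy 3-node network is for illustration only.
    fit <- ergmm(net ~ euclidean(d = 3, G = 3) + loops +
                   sendercov("logMVS") + receivercov("logMVS"),
                 response = "hires", family = "Poisson",
                 control = ergmm.control(burnin = 50000,
                                         interval = 100,
                                         sample.size = 500))
    summary(fit)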

There are not many techniques available for ERGM model selection. We selected a final model by comparing the alternative models' Bayesian Information Criterion (BIC), considering whether the MCMC estimation procedure converged satisfactorily for a given model, and looking at model diagnostics involving the prediction performance of each model considered. Models with 2-dimensional ($d = 2$) location vectors $Z_i$ had difficulty converging; the chains did not show adequate mixing. For $G = 2$ (2 groups or clusters) in $d = 3$ dimensions, BIC = 5171 but the coefficients failed to converge; for $G = 3$ and $d = 3$, BIC = 5221 (somewhat worse) but the fit had satisfactory MCMC convergence for all its parameters (see Figures 7 and 8).

The fitted final model, obtained from the posterior means of all parameters, is

$$\log \hat{\lambda}_{ij} = \hat{\beta}_0 + \hat{\beta}_1 \delta_{ij} + \hat{\beta}_2 \log(\text{MVS}_i) + \hat{\beta}_3 \log(\text{MVS}_j) - \|\hat{Z}_i - \hat{Z}_j\|, \qquad (5)$$

where $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ for $i \neq j$ is a self-"loop" covariate modeling self-hires, $\log(\text{MVS}_i)$ is a "sender" covariate, and $\log(\text{MVS}_j)$ is a "receiver" covariate (Krivitsky & Handcock, 2008). The 95% posterior credible intervals for the coefficients all exclude zero. The sender and receiver covariates model the propensity of a department to either provide or receive faculty. The motivation for this model was that the MVS indices can provide an indication of the number of faculty flowing between departments; taking log(MVS) implies we assume its effect is linear, given that this is a log-linear model. For instance, for a self-hire at the top ORIE department (MVS$_i$ = MVS$_j$ = 1, so both log terms vanish, $\delta_{ii} = 1$, and the latent distance is zero), the fitted model predicts the expected number of self-hires directly from the estimated intercept and self-loop coefficients.
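To make this self-hire calculation concrete, a sketch with placeholder coefficients (hypothetical values, not the paper's estimates, which we do not reproduce here):

    b0 <- -1.0; b_loop <- 1.5; b_snd <- -0.5; b_rcv <- -0.5  # hypothetical values
    log_mvs <- log(1)                  # top-ranked department: log(MVS) = 0
    # For i = j the latent distance ||Z_i - Z_j|| is zero, so it drops out.
    lambda_self <- exp(b0 + b_loop + b_snd * log_mvs + b_rcv * log_mvs)
    lambda_self                        # expected number of self-hires under these values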

Figure 7. MCMC diagnostics for model (5): log-likelihood and values of three of the model coefficients. All convergence diagnostics are adequate, i.e., trace plots are stable and parameter values converge in distribution. Traces shown for the last 5M iterations, distributions for the last 50K iterations.
Figure 8. MCMC diagnostics for model (5), continued: values of the remaining model parameters. All convergence diagnostics are adequate, i.e., trace plots are stable and parameter values converge in distribution. Traces shown for the last 5M iterations, distributions for the last 50K iterations.

5.1. Model prediction performance

The fitted latent location ERGM (5) adequately predicts the expected number of hires within and between departments. Using $i = j$ in (5), the predicted mean self-hire values are obtained; these are contrasted with the observed self-hire values in the ORIE network in the top plot of Figure 1 (red line), showing excellent agreement. Figure 9 diagrammatically shows the fitted expected number of hires between all 83 ORIE departments, which can be compared with the plot on the right of Figure 4, again demonstrating very close agreement with the observations. More specific model diagnostics for ERGMs are based on Monte Carlo simulations of the fitted network and comparison of the observed statistics to those in the ensemble of simulations.

Figure 9. Predicted mean number of faculty hires in the ORIE network ($\hat{\lambda}_{ij}$) given by the fitted latent location ERGM (5). Height equals the expected number of faculty; axes are the 83 departments sorted according to MVS*. Compare with the observed number of hires in the plot on the right of Figure 4.

We performed posterior checks based on one thousand networks simulated from the posterior of the fitted model using the latentnet R package, comparing posterior properties of the simulated networks against the corresponding statistics of the observed ORIE faculty network. Figure 10 contrasts the observed in-degree and out-degree distributions (bold red) with the interquartile ranges of the simulated values from the posteriors (blue boxplots). The model approximately generates both observed distributions, including the very skewed out-degree distribution, with some over-generation of nodes with in-degrees equal to 6 and 7 and of nodes with out-degree equal to one. Figure 11 (left) contrasts posterior simulations of the minimum geodesic distances between the vertices with the actual values, which are reproduced well by the model. The plot on the right contrasts the posterior simulations vs. actual values for the edge-wise shared partners, defined as the number of edges $(u, v)$ such that both $u$ and $v$ have common neighbors. The peculiar shape of this distribution is very well reproduced by the latent ERGM. Finally, in Figure 12 (left) we computed the posterior edge density distribution of the simulated networks and contrasted it with the actual value (red dotted line), which falls among the simulated values, indicating the model can reproduce networks with the correct density of edges. The number of departments that self-hire ranges from around 49 to 78 in the posterior simulated networks, which includes the observed value (61) in the real ORIE network (red dotted line in Figure 12, center). Finally, the proportion of Ph.D.s self-hired by their original department ranges from about 12% to 20%, again including the observed 14% (red dotted line, Figure 12, right).
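A sketch of such a posterior check; simulate() on an ergmm fit draws networks using the posterior sample, but the exact return structure can vary across latentnet versions, so we draw one network per call:

    sims <- lapply(1:1000, function(k) simulate(fit))  # 1000 posterior draws
    dens <- sapply(sims, network.density)              # edge density of each draw
    hist(dens, main = "Posterior edge densities")
    abline(v = network.density(net), col = "red", lty = 2)  # observed value
    # Analogous checks apply to the number of self-hiring departments, degree
    # distributions, geodesic distances, etc.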

Figure 10. Diagnostic plots for the ERGM (5) based on 1000 simulations. Left: in-degree distribution. Right: out-degree distribution. Boxplots give the interquartile ranges and the whiskers the extreme values simulated from the posterior of the fitted model (5); red continuous lines are the observed values. With the exception of a slight over-generation of vertices with in-degrees 6 and 7 and out-degree equal to one, the fitted model reproduces the observed degree distributions.

We conclude from these "goodness of fit" posterior simulations that the general structure of the ORIE faculty hiring network is captured by the fitted latent ERGM (5).

5.2. Determination of groups of departments

The latent ERGM permits us to determine groups in which the nodes are clustered in the location space. We found the clustering provided directly by the ERGM unreliable; the clusters did not seem to form in any particularly meaningful way. Instead, to select groups of departments, we took the means of the posterior of the latent locations $Z_i$ and ran the PAM (Partitioning Around Medoids) algorithm (Kaufman & Rousseeuw, 2008) to find groups (clusters) of departments (a sketch is given below). This provided a much better group separation than using the mixture proportions in (4) as the basis for clustering. The number of clusters was chosen using PAM and the "gap" statistic (Tibshirani et al., 2001), which provides a goodness-of-clustering measure. The best number of clusters is either 1 (no clustering) or 3 according to the gap statistic (see Figure 13). We therefore form 3 groups or clusters of departments; the groups are displayed in different colors in Figure 14 and listed in Appendix 1 (Table A1).
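A sketch of this grouping step; the slot holding the posterior position estimates is an assumption (in latentnet the MKL point estimates are commonly accessed as fit$mkl$Z), and the code assumes fit was estimated on the full 83-department network:

    library(cluster)
    Z <- fit$mkl$Z                     # estimated latent positions (slot name assumed)
    gap <- clusGap(Z, FUNcluster = pam, K.max = 8, B = 100)  # gap statistic
    plot(gap)                          # inspect the suggested number of clusters
    groups <- pam(Z, k = 3)$clustering # final 3-group PAM solution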

Figure 11. Diagnostic plots for the latent ERGM (5) based on 1000 simulations. Left: geodesic distance distribution. Right: edge-wise shared partner distribution. The number of edge-wise shared partners is defined as the number of edges $(u, v)$ such that both $u$ and $v$ have common neighbors ("shared partners"). Boxplots give the interquartile range and the whiskers the extreme values simulated from the posterior of the fitted model (5); red continuous lines are the observed values. The fitted model is able to reproduce the observed statistics. Note how almost 40% of the edges in the ORIE network have no common neighbors.
Figure 12. Diagnostic plots for the latent location ERGM (5) based on 1000 simulations. Left: edge density distribution. Center: number of departments that self-hire. Right: proportion of self-hired faculty. Observed values in the ORIE network are given by red vertical lines. In all cases, the fitted model reproduces the observed values in the actual network.
Figure 13. Simulated "gap" statistic (Tibshirani et al., 2001) used to determine the number of clusters in the latent variables identified in the exponential random graph model (5). Either 1 or 3 clusters are identified; we chose 3 clusters due to better interpretability.
Figure 14. Latent positions $\hat{Z}_i$ after fitting (5). Vertices are colored according to the three identified groups (black = group 1, red = group 2, green = group 3). Left: plot over the first two latent dimensions. Right: plot over the 3-dimensional latent space (edges not shown). Numbers correspond to MVS indices; see Table A1. Evident features are the few connections between groups 1 and 3, and how group 2 contains departments with a high degree of "betweenness" (Newman, 2010).

It is notable how the first latent group includes only departments that most consistently hire from the top departments in the hierarchical MVS order. Using the three groups of departments, we can reduce the ORIE network to a simple aggregated network where faculty flow within and between the three groups (Figure 15; a sketch of the aggregation is given below). Most of the hires (62%) take place within groups; only 38% are between groups. Note how groups 3 and 1 are very thinly connected: only 3 faculty receiving their Ph.D. in the 34 departments of group 3 have been hired by the 14 departments of group 1. Conversely, only 14 Ph.D.'s from group 1 have been hired by departments in group 3. Group 3 has also provided few (26) faculty to group 2. This indicates that the ORIE network is strongly separated into clusters, with departments at the bottom of the hierarchy (group 3) producing almost no faculty for those at the top. Interestingly, the "intermediate" departments of group 2 (35 departments) include the top 7 departments as ranked by "betweenness", and overall have the lowest (i.e., most important) average betweenness ranks (38.3 for group 2, 42.2 for group 1, 44.7 for group 3). It could be argued that the intermediate departments in group 2 keep ORIE a single, connected discipline. An interesting question for further work is to determine whether the degree of interaction between groups 1 and 3 is higher or lower than the degree of interaction between different (but close) disciplines, such as OR and Computer Science, for instance.
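The aggregation itself is a double group-sum over the adjacency matrix; a minimal sketch with hypothetical group labels (one per department of the toy matrix A):

    groups <- c(1, 2, 2)              # hypothetical group memberships
    agg <- t(rowsum(t(rowsum(A, groups)), groups))
    agg                               # agg[g, h] = Ph.D.s from group g hired by group h
    sum(diag(agg)) / sum(agg)         # fraction of hires occurring within groups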

Figure 15. Simplified ORIE network, showing the faculty hires between the three identified groups of departments. Edge width is proportional to the number of faculty hires, which are the numbers shown on each edge. The bulk of the hire-placements occurs within group. Note also how there are very few connections between groups 1 and 3, with departments in group 3 contributing only to three hires in departments in group 1.

6. Discussion and Conclusions.

We have demonstrated the existence of a near linear hierarchy in the ORIE faculty hiring network through the application of modern techniques for the statistical analysis of social networks that exclude subjective assessments. We provided an approximate linear hierarchy index (minimum violations and strength, MVS) obtained by optimizing bootstrapped networks sampled from the original ORIE network, and therefore not sensitive to the unequal density of edges (and standing in contrast to published rankings). Single indices of hierarchy, however, do not capture all important features of a complex network, for instance, the highly connected core of departments at the top of the hierarchy, the high incidence of self-hires, or the skewed out-degree distribution. This is the reason we propose and fit the latent location exponential random graph model (ERGM) of section 5 and the associated cluster analysis as a better approach for understanding the hiring patterns in the ORIE faculty network (and in any such hiring network), resulting in groups that do not necessarily follow the MVS ranks. The model successfully captures the main characteristics of the ORIE faculty network, including its node degree distributions, its incidence of self-hiring, and its edge density distribution, among others. This model allowed us to simplify the ORIE network to one with only three groups of departments, with most hiring taking place within groups, also showing the little interaction between groups at the extremes of the hierarchy. To be in the first group, a department needs to consistently hire from top departments, even if it does not place many Ph.D.'s in other top departments, a notion without an equivalent in competitive sports or ecology. Furthermore, departments in the intermediate group act as a "link" between the other two groups, keeping the network connected as a single discipline, with departments in this group having among the highest "betweenness" importance (Newman, 2010).

ORIE is a field with a high self-hiring rate across the whole network compared to Computer Science and Business schools (close to two and three times higher, respectively). A high self-hiring rate may reduce the transfer of information and discipline-specific knowledge (including curriculum innovations) between departments.

We are aware that hiring decisions are not made solely on the basis of the "prestige" of the potential "sender" and "receiver" departments. Besides the obvious impact of each candidate's Ph.D. research credentials, they are also a consequence of complex politics, personalities, and styles of the faculty in the "receiver" department and its college administration. Sometimes the reputation of the Ph.D. adviser, and not only that of the sending department, matters in a hiring decision. Sometimes department chairs override the majority opinion of their own faculty in hiring decisions, so some actual hires may not represent the normal behavior of a department. These aspects of the hiring-placement process can be thought of as adding "noise" to the collected data, which is already sparse at the lower levels of the hierarchy. In our analysis we handled this noise via permutation tests or bootstrapping when finding evidence of a hierarchy or determining an index, or through a statistical model for the network. But as discussed in section 3.1, zero-valued edges do not necessarily indicate the absence of a dominance relation. Unfortunately, it is not possible to determine from the available data whether an "observational zero" in a social network implies the lack of a dominance relation or rather its presence, with the dominated party avoiding interactions with the dominant one (i.e., departments avoiding hiring from higher ranked departments).

The analysis presented in this paper refers to data collected at a single point in time (summer 2016). This neglects the dynamic effect of hires over the years, which could be modeled if more complete data about the years of subsequent employment of each professor in the network were available (our dataset contains only partial data, too incomplete to attempt such an analysis; this is left for future work). Dynamic exponential random graph models seem a good way to study such dynamic networks.

References

  • Ali et al.  (1986) Ali, I., Cook, W. D., & Kress, M. 1986. On the Minimum Violations Ranking of a Tournament. Management Science, 32(6), 660–672.
  • Burris (2004) Burris, V. 2004. The Academic Caste System: Prestige Hierarchies in PhD Exchange Networks. American Sociological Review, 69, 239–264.
  • Charon & Hudry (2010) Charon, I., & Hudry, O. 2010. An updated survey on the linear ordering problem for weighted or unweighted tournaments. Annals of Operations Research, 175, 107–158.
  • Clauset et al.  (2015) Clauset, A., Arbesman, S., & Larremore, D. B. 2015. Systematic inequality and hierarchy in faculty hiring networks. Science Advances, 1(1), 1–6.
  • David (1987) David, H. A. 1987. Ranking from unbalanced paired-comparison data. Biometrika, 74(2), 432–436.
  • de Vries (1995) de Vries, H. 1995. An improved test of linearity in dominance hierarchies containing unknown or tied relationships. Animal Behavior, 50, 1375–1389.
  • de Vries (1998) de Vries, H. 1998. Finding a dominance order most consistent with a linear hierarchy: a new procedure and review. Animal Behavior, 55, 827–843.
  • de Vries et al.  (2006) de Vries, H., Stevens, J. M. G., & Vervaecke, H. 2006. Measuring and testing the steepness of dominance hierarchies. Animal Behavior, 71, 585–592.
  • Fowler et al.  (2007) Fowler, J. H., Grofman, B., & Masouka, N. 2007. Social Networks in Political Science Hiring and Placement of Ph.D.s, 1960-2002. PS: Political Science and Politics, 40(4), 729–739.
  • Huang et al. (2015) Huang, B., Wang, S., & Reddy, N. 2015. Who is Hiring Whom: A New Method in Measuring Graduate Programs. 122nd ASEE Annual Conference & Exposition, Seattle, WA.
  • Katz et al. (2011) Katz, D. M., Gubler, J. R., Zelner, J., Bommarito, M. J., Provins, E., & Ingall, E. 2011. Reproduction of Hierarchy? A Social Network Analysis of the American Law Professoriate. Journal of Legal Education, 61(1), 76–103.
  • Kaufman & Rousseeuw (2008) Kaufman, L., & Rousseeuw, P. J. 2008. Finding Groups in Data. NY: John Wiley & Sons.
  • Krivitsky & Handcock (2008) Krivitsky, P. N., & Handcock, M. S. 2008. Fitting Position Latent Cluster Models for Social Networks with latentnet. J. of Statistical Software, 24(5), 1–23.
  • Krivitsky et al. (2009) Krivitsky, P. N., Handcock, M. S., Raftery, A. E., & Hoff, P. D. 2009. Representing degree distribution, clustering, and homophily in social networks with latent cluster random effects models. Social Networks, 31, 204–213.
  • Landau (1951) Landau, H. G. 1951. On Dominance relations and the structure of animal societies: I. Effect of Inherent Characteristics. Bulletin of Mathematical Biophysics, 13, 1–19.
  • Mai et al.  (2015) Mai, B., Liu, J., & Gonzalez-Bailon, S. 2015. Network effects in the academic market: mechanisms for hiring and placing Ph.D.s in Communication (2007-2014). J. of Communications, 65, 558–583.
  • Myers et al. (2011) Myers, S. A., Mucha, P. J., & Porter, M. A. 2011. Mathematical genealogy and department prestige. Chaos, 21(4), 041104.
  • Newman (2010) Newman, M. E. J. 2010. Networks: An Introduction. Oxford, UK: Oxford University Press.
  • NRC (2011) NRC, (National Research Council). 2011. A Data-Based Assessment of Research-Doctorate Programs in the United States (with CD). Washington, DC: The National Academies Press.
  • Pedings et al.  (2012) Pedings, K. E., Langville, A. N., & Yamamoto, Y. 2012. A minimum violations ranking method. Optimization in Engineering, 13, 349–370.
  • Schichor (1970) Schichor, D. 1970. Prestige of Sociology Departments and the Placing of New Ph.D.’s. The American Sociologist, 5(2), 157–160.
  • Schmidt & Chingos (2007) Schmidt, B. M., & Chingos, M. M. 2007. Ranking doctoral programs by placement: a new method. PS: Political Science and Politics, 40(3), 523–529.
  • Tibshirani et al.  (2001) Tibshirani, R., Walther, G., & Hastie, T. 2001. Estimating the number of data clusters via the Gap statistic. Journal of the Royal Statistical Society B, 63, 411–423.
  • Way et al.  (2016) Way, S. F., Larremore, D. B., & Clauset, A. 2016. Gender, Productivity, and Prestige in Computer Science Faculty Hiring Networks. Pages 1169–1179 of: Proceedings of the 25th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee.

Appendix 1: Institution data

The list of ORIE departments was formed by merging the Ph.D.-granting departments in the 2016 US News & World Report "Industrial/Manufacturing/Systems Engineering" graduate rankings (accessed 5/12/2016) with those in the 2011 National Research Council (NRC, 2011) rankings for "Operations Research, Systems Engineering and Industrial Engineering". The NRC lists both departments and programs, so an institution may appear more than once; we included each institution only once in the network (its NRC ranking shown below corresponds to the higher ranking of either program or department) and collected the faculty information as if it were a single department. Only departments with existing web pages as of May 2016 and with a Ph.D. program were included (this excluded U. of Nebraska-Lincoln). The NRC rankings listed in Table A1 below are those given by the average of the 5% and 95% "R" rankings, listed in standard competition order. There are Industrial Engineering departments that are merged with other disciplines in a single department (e.g., Mechanical and Industrial Engineering, or Computer Systems and Industrial Engineering); an effort was made to include only those faculty in the ORIE field. It was assumed, in the absence of information, that a faculty member working in an ORIE department obtained his/her Ph.D. in the ORIE department of the listed institution (a number of faculty list only their alma mater institution but not their alma mater's department). Also, faculty who received their Ph.D. outside of the USA (or outside of our list of departments) were not considered. Only faculty in tenured or tenure-track positions were included, although it was not always clear whether some faculty positions were tenured or not.

It is important to point out that the edges $(i,j)$ of the ORIE network are formed by faculty currently (as of summer 2016) in department $j$ who received their Ph.D. in department $i$. This evidently neglects the movement of faculty through other, intermediate departments between $i$ and $j$, and is a potential source of error. Our dataset contains only partial information about the intermediate departments where each faculty member worked before their current job, information too incomplete to attempt an analysis.
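As a concrete illustration, such a network can be assembled from per-faculty records in a few lines of Python. This is a minimal sketch: the file name faculty.csv and the column names phd_dept and current_dept are hypothetical stand-ins for the actual dataset files described below, not the paper's own code.

    import networkx as nx
    import pandas as pd

    # Hypothetical file and column names; each record is one faculty member,
    # listing the Ph.D.-granting department and the current (hiring) department.
    faculty = pd.read_csv("faculty.csv")

    G = nx.DiGraph()
    for (src, dst), hires in faculty.groupby(["phd_dept", "current_dept"]).size().items():
        # Edge (i, j) points from the Ph.D.-granting department i to the hiring
        # department j; the weight counts how many such hires occurred.
        G.add_edge(src, dst, weight=int(hires))

    print(G.number_of_nodes(), "departments;", G.number_of_edges(), "distinct hiring relations")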

The final ORIE dataset is contained in two files, one for the 83 institutions (vertices) and their attributes, and one for the 1,179 faculty (edges) and their attributes. The data collection effort was undertaken in summer 2016 over a four-week span. After data gathering, all faculty data were checked manually for errors by three people. Table A1 contains the attributes in the institutions file, which include the ranks given by various measures of vertex importance and the latent groups found in section 5.

Institution Group | Ranks: MVS MVS′ MVR USNews NRC In-deg Out-deg Eigen. PageR. Bet. Hub Auth.
Stanford 1 1 2 2 4 1 5 2 2 2 17 2 2
UC Berkeley 1 2 1 1 2 4 36 6 3 3 26 3 18
MIT 1 3 3 3 6 3 3 1 1 1 18 1 1
Carnegie M. 1 4 4 4 NA 7 79 12 6 8 14 8 59
Princeton 1 5 5 5 NA 18 66 33 8 16 68 15 16
Cornell 1 7 7 7 7 8 16 10 4 6 25 7 12
Columbia 1 8 8 8 11 20 22 17 16 12 58 17 6
Northwest. 1 11 11 11 4 6 20 15 10 9 39 10 28
U. Penn 1 14 15 18 28 14 14 24 7 17 43 20 7
USC 1 17 19 19 12 29 12 28 22 22 35 25 10
UT (Aus) 1 19 18 16 19 23 46 29 12 15 33 22 21
UNC (Ch-H) 1 31 35 42 NA 34 32 58 23 45 48 38 36
Naval Post. 1 50 83 75 23 NA 23 69 69 78 70 60 14
Wash. U. 1 67 66 43 39 NA 69 71 71 68 67 71 34
U. Michigan 2 6 6 6 2 5 11 3 9 5 5 4 8
Purdue 2 9 9 9 9 9 9 5 13 7 6 6 13
GA Tech 2 10 10 4 1 2 5 4 1 5 1 5 3
U. Illi. (UC) 2 12 14 14 15 33 13 13 17 14 28 14 5
U. Wis (Ma) 2 13 12 12 7 9 21 14 15 11 10 9 22
U. Florida 2 15 13 13 19 23 56 9 19 13 36 16 38
Penn St. 2 16 16 15 12 12 7 8 18 10 4 13 15
Ohio St. 2 18 21 21 17 9 19 11 25 19 12 19 19
U. Minn. 2 20 20 20 32 46 61 39 37 38 61 29 31
U. Maryland 2 21 17 17 NA 16 60 22 11 21 37 11 32
U. Pitt 2 23 24 24 23 39 25 16 32 24 19 24 20
VPI 2 24 25 25 9 15 10 7 26 18 7 12 25
U. Arizona 2 25 27 27 28 35 53 26 41 37 55 27 51
NC State 2 26 33 32 12 12 6 19 40 30 15 30 29
Lehigh 2 27 23 26 18 28 39 34 30 29 47 43 40
SUNY Buf 2 28 29 31 28 37 28 21 27 25 20 26 33
U. Miss (Co) 2 29 26 23 58 31 78 54 34 51 69 45 63
Rutgers 2 30 28 30 21 38 47 35 21 23 34 44 49
Texas A&M 2 32 31 34 15 24 15 20 20 20 2 23 30
U. Virginia 2 33 37 35 28 23 27 37 48 42 38 34 23
U. Mass. 2 34 39 46 36 50 8 25 29 31 23 18 9
Boston U. 2 35 30 28 39 30 62 60 47 57 66 53 24
U. Arkansas 2 36 40 41 39 57 31 44 39 48 40 49 43
RPI 2 39 41 37 21 19 37 36 55 41 46 39 45
U. Wash. 2 40 42 40 26 35 57 48 49 52 59 42 35
Iowa St. 2 43 45 47 26 32 26 40 35 39 21 41 37
U. Conn. 2 44 59 54 NA 54 35 57 58 67 63 31 39
G. Wash. U. 2 45 44 51 53 53 43 42 33 40 50 37 27
Arizona St. 2 47 48 49 23 21 1 18 28 27 3 21 11
Northeastern 2 53 81 82 36 27 4 70 70 79 71 51 4
Stevens 2 55 61 74 39 NA 17 49 50 55 53 40 17
G. Mason 2 56 51 60 32 NA 30 38 36 43 22 35 42
NJIT 2 62 79 80 NA 57 34 83 82 83 82 80 26
Case West. 2 64 32 53 38 43 77 59 14 26 41 36 55
Worcester P 2 73 74 64 53 NA 73 80 75 80 74 82 46
Table A1. Latent group membership and importance-measure ranks of the ORIE departments, sorted by group and then by MVS index. The two MVS columns (MVS and MVS′) correspond to the two acceptance criteria used in the SWAP procedure of Appendix 3.
Institution Group | Ranks: MVS MVS′ MVR USNews NRC In-deg Out-deg Eigen. PageR. Bet. Hub Auth.
U. Iowa 3 22 22 22 39 22 80 31 24 35 54 28 71
U. S. Florida 3 37 38 36 46 54 54 27 42 34 11 33 67
Oklahoma St 3 38 34 38 39 41 40 30 38 36 45 32 44
Kansas St. 3 41 50 39 46 47 49 46 62 49 65 57 48
U. Illi. (Chi) 3 42 36 33 46 45 71 52 45 53 62 48 61
Texas Tech 3 46 47 48 53 40 55 23 51 33 9 47 68
U. Oklahoma 3 48 53 56 46 50 42 53 57 61 49 58 50
Clemson 3 49 58 58 32 47 29 51 63 58 44 61 52
Miss. U. 3 51 43 44 58 62 50 32 44 32 31 50 78
Auburn 3 52 49 52 32 42 48 41 54 47 29 52 66
Wayne St. 3 54 54 61 53 44 44 64 52 60 42 62 41
U. Louisville 3 57 57 50 66 62 51 55 65 54 56 65 58
U. Alabama 3 58 52 45 NA 61 67 47 64 44 57 63 57
UC Florida 3 59 55 57 39 54 38 45 56 50 24 59 62
UT (Dal) 3 60 46 29 58 NA 83 74 53 77 76 55 82
U. Houston 3 61 62 55 53 65 72 50 60 56 52 56 77
Air Force IT 3 63 71 77 46 NA 18 62 72 76 51 54 47
SUNY (Bin) 3 65 80 81 58 57 24 65 76 75 75 67 53
Oregon St. 3 66 77 72 46 49 41 63 73 69 72 69 69
Wichita St. 3 68 69 78 66 NA 33 76 67 73 60 78 56
W. Virginia 3 69 63 70 66 NA 52 56 59 59 13 66 64
UT (Arling) 3 70 60 66 58 NA 45 43 43 46 8 46 70
U. Tenn. 3 71 56 69 58 62 58 61 31 28 27 68 72
NCA&T St 3 72 78 71 66 NA 64 82 78 82 78 81 81
Florida St. 3 74 64 79 66 NA 59 66 46 62 16 64 65
U. NC (Ch.) 3 75 65 73 58 NA 63 73 66 74 32 73 76
U. Wis (Mil) 3 76 76 65 58 NA 74 81 77 81 77 83 60
Old Dom U 3 77 67 76 NA 66 65 68 68 66 64 74 83
Ohio U. 3 78 68 83 66 NA 68 67 61 63 30 72 80
New Mex St 3 79 73 63 NA 60 76 79 83 72 83 75 73
U. Miami 3 80 75 68 66 52 75 75 79 71 79 77 79
U. Ark. LR 3 81 82 67 46 NA 70 72 74 70 73 70 54
Montana St. 3 82 72 59 NA NA 82 78 81 65 81 76 74
Florida IT 3 83 70 62 NA NA 81 77 80 64 80 79 81
Table A1. Latent group membership and importance-measure ranks of the ORIE departments, sorted by group and then by MVS index (cont.).

Appendix 2: Statistical tests on the steepness of a linear hierarchy in a network

de Vries et al. (2006) introduce the concept of the steepness of a linear hierarchy, namely, the size of the absolute differences between adjacently ranked individuals in their overall success in winning dominance encounters. When these differences are large the hierarchy is said to be steep, and it is called shallow otherwise. De Vries' steepness measure is based on the score by David (1987), a measure of the dominance success of an individual given by the unweighted and weighted sums of the individual's dyadic proportions of “wins” (in our case, Ph.D.s placed in other departments) combined with the unweighted and weighted sums of its dyadic proportions of “losses” (Ph.D.s hired from other departments). The proportion of “wins” of individual $i$ over individual $j$ is defined as $P_{ij} = s_{ij}/n_{ij}$, where $s_{ij}$ is the $(i,j)$ entry in the sociomatrix and $n_{ij}$ is the total number of interactions between $i$ and $j$ (i.e., $n_{ij} = s_{ij} + s_{ji}$). David's (normalized) scores are defined as
$$\mathrm{NormDS}_i = \frac{DS_i + N(N-1)/2}{N}, \qquad \text{with} \quad DS_i = w_i + w_i^{(2)} - l_i - l_i^{(2)},$$
where $w_i = \sum_j P_{ij}$, $w_i^{(2)} = \sum_j w_j P_{ij}$, $l_i = \sum_j P_{ji}$, and $l_i^{(2)} = \sum_j l_j P_{ji}$. The factor containing the total number of individuals $N$ scales the score so that it varies between 0 and $N-1$. To measure the steepness of a hierarchy, order the individuals in decreasing order of their $\mathrm{NormDS}_i$ values and call $r_i \in \{1,\dots,N\}$ the rank of individual $i$. Then a simple linear regression model
$$\mathrm{NormDS}_i = \alpha + \beta\, r_i + \varepsilon_i$$
is fit to the data. The absolute value of the estimated coefficient $\hat{\beta}$ is an estimate of the steepness of the hierarchy.
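As a minimal computational sketch of this procedure (assuming the sociomatrix is available as a NumPy array of hire counts; the function name and toy data are ours, not the paper's code):

    import numpy as np

    def steepness(S):
        """de Vries' steepness from a sociomatrix S, where S[i, j] counts the
        'wins' of i over j (here, Ph.D.s from department i hired by department j)."""
        N = S.shape[0]
        n = S + S.T                                  # total interactions per dyad
        with np.errstate(divide="ignore", invalid="ignore"):
            P = np.where(n > 0, S / n, 0.0)          # dyadic winning proportions P_ij
        np.fill_diagonal(P, 0.0)

        w = P.sum(axis=1)                            # unweighted wins w_i
        w2 = P @ w                                   # wins weighted by opponents' wins
        l = P.sum(axis=0)                            # unweighted losses l_i
        l2 = P.T @ l                                 # losses weighted by opponents' losses
        DS = w + w2 - l - l2                         # David's scores
        norm_ds = (DS + N * (N - 1) / 2) / N         # normalized to lie in [0, N-1]

        ranks = np.argsort(-norm_ds).argsort() + 1   # rank 1 = highest NormDS
        slope = np.polyfit(ranks, norm_ds, 1)[0]     # beta in NormDS = alpha + beta*rank
        return abs(slope), norm_ds

    # Example: a tiny 3-department network
    S = np.array([[0, 5, 3],
                  [1, 0, 4],
                  [0, 1, 0]])
    print(steepness(S)[0])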

To test for the significance of the steepness of a hierarchy using a permutation test, take as null hypothesis that the steepness is that given by a randomly formed network. Fitting the linear regression model to each of a large number of randomly generated networks, each with $N$ ranked vertices, results in an empirical distribution of the slope coefficient $\beta$, which can then be compared to the estimate $\hat{\beta}$ from the actual network. Small empirical p-values imply the hierarchy is significantly steeper than that given by a simple random graph.
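The test is then a short simulation loop reusing the steepness function sketched above. The null model here, which keeps each dyad's total number of interactions fixed and reallocates the wins uniformly at random, is one plausible randomization scheme; the paper's exact random-network generator may differ.

    import numpy as np

    def steepness_pvalue(S, n_sims=10_000, seed=0):
        """Empirical p-value for the observed steepness vs. randomized networks.
        Null model (an assumption of this sketch): each dyad's total interactions
        n_ij are fixed, and i's wins are drawn as Binomial(n_ij, 1/2)."""
        rng = np.random.default_rng(seed)
        obs, _ = steepness(S)
        n = S + S.T
        iu = np.triu_indices_from(S, k=1)

        count = 0
        for _ in range(n_sims):
            R = np.zeros_like(S)
            wins = rng.binomial(n[iu], 0.5)   # wins for i; the rest go to j
            R[iu] = wins
            R.T[iu] = n[iu] - wins
            if steepness(R)[0] >= obs:
                count += 1
        return (count + 1) / (n_sims + 1)

    print(steepness_pvalue(S))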

Figure 16 shows the regression models fitted to the David's scores for the ORIE, CS and Business networks. The ORIE and CS networks appear to have a steep hierarchy only for approximately the first 10 departments. This is in contrast to the Business schools, which have a steeper slope for about a third of the departments. Figure 17 indicates, however, that the slope is significantly different from that of a random graph in all three cases. The steepness results shown are related to the connectedness (density) of the three networks, since the CS network is the sparsest, followed by ORIE and then Business.

Figure 16. Fitted line and observed David's scores for a linear hierarchy. Left: ORIE network (slope = -0.02782). Middle: Computer Science (slope = -0.01281). Right: Business (slope = -0.1358). Note how for ORIE and CS the observed absolute slope increases for the first 10 departments, indicating a steeper dominance at the top of the hierarchy relative to other departments; otherwise the hierarchy is quite flat.
Figure 17. Randomized distribution (10K simulations) of the de Vries test statistic for the significance of the steepness in the dominance hierarchy of (Left) the complete ORIE network, (Middle) the Computer Science network, and (Right) the Business network. Red vertical lines are the observed David's score statistics. In all cases, the empirical p-values are 0.0, indicating significant steepness in the hierarchy of each faculty network.

Appendix 3: Stochastic search algorithm for finding MVS rankings

procedure OPTIMIZE(A, burnin, iterations, interval)
    for b = 1 to B do
        A_b ← bootstrap(A)                       ▷ resample edges with replacement
        π ← out-degree ranking of A_b            ▷ initial ranking
        for t = 1 to burnin do
            π ← SWAP(π)                          ▷ burn-in swaps, discarded
        for t = 1 to iterations do
            π ← SWAP(π)
            if t mod interval = 0 then save π
        MVS_b ← average(saved π's)
    return MVS ← average(MVS_1, …, MVS_B)
Algorithm 1 Minimum Violation and Strength (MVS) ranking of bootstrapped networks

procedure SWAP(π)
    Randomly select vertices u and v and swap their positions in π to form π′
    if v(π′) < v(π) or (v(π′) = v(π) and s(π′) < s(π)) then
        accept the swap and return π′
    else
        return π
Algorithm 2 Stochastic swapping to improve a given ranking

Algorithm 1 is a modification of the optimization approach in Clauset et al. (2015), who found minimum violation rankings, adapted here for finding MVS rankings. They reported that exchanges of more than 2 vertices did not improve on the solutions found by swapping pairs of vertices. The algorithm takes as its initial ranking the out-degree ranking of the nodes, which gives preference to departments that place faculty in other departments. In our setting, a ranking violation occurs when a department hires a Ph.D. produced by a lower-ranked department; $v(\pi)$ denotes the number of violating edges under ranking $\pi$, and $s(\pi)$ their total strength, i.e., the sum of the weights $w_{ij}$ of the violating edges. In analogy with Markov chain Monte Carlo methods, the algorithm was first run for a burn-in period whose iterations are discarded, after which 1,000 ranks were saved every 100 iterations (i.e., interval = 100, and hence iterations = 100,000, in Algorithm 1). The average of these 1,000 saved ranks gives the MVS (or MVS′) ranking for one network. In addition, to incorporate the uncertainty related to the low-density areas of the ORIE network, which could be considered “noise”, the optimization was repeated over bootstrapped networks, where the edges of each network were randomly sampled with replacement from the edges of the real ORIE network, with sampling probabilities proportional to the edge weights $w_{ij}$. The two sets of MVS ranks reported in Table A1 (MVS and MVS′) correspond to the ensemble means of the optimal ranks obtained from these bootstrapped replications, using either the violation count $v(\pi)$ or the violation strength $s(\pi)$ as the primary acceptance criterion in the SWAP procedure (Algorithm 2).
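For concreteness, the following Python sketch implements the swap acceptance rule of Algorithm 2 and a bare-bones search loop under the definitions of $v(\pi)$ and $s(\pi)$ given above. The matrix representation, function names, and toy data are our own; the burn-in, saving interval, and bootstrap ensemble of Algorithm 1 are omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(1)

    def violations_and_strength(W, rank):
        """v = number of edges i -> j with rank[i] > rank[j] (a department hiring
        from one ranked below it); s = total weight of those violating edges.
        W[i, j] = Ph.D.s from i hired by j; rank 1 is the top of the hierarchy."""
        up = rank[:, None] > rank[None, :]
        viol = (W > 0) & up
        return viol.sum(), W[viol].sum()

    def swap_step(W, rank):
        """One stochastic swap (Algorithm 2): accept if it lowers v, or keeps v
        equal and lowers s; otherwise keep the current ranking."""
        u, x = rng.choice(len(rank), size=2, replace=False)
        new = rank.copy()
        new[u], new[x] = new[x], new[u]
        v0, s0 = violations_and_strength(W, rank)
        v1, s1 = violations_and_strength(W, new)
        return new if v1 < v0 or (v1 == v0 and s1 < s0) else rank

    # Toy weighted adjacency matrix and an out-degree initial ranking
    W = rng.integers(0, 3, size=(6, 6))
    np.fill_diagonal(W, 0)
    rank = np.argsort(-W.sum(axis=1)).argsort() + 1
    for _ in range(2000):
        rank = swap_step(W, rank)
    print(rank)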