Further Validation of the Perceptions of Politics Scale (POPS): A Multiple Sample Investigation
K. Michele Kacmar
The use of political tactics in organizations is widespread. Virtually every employee in America can recount a political incident in which he or she was directly or indirectly involved. The consequences of these political events lead those involved to view organizational politics in their own way. Some, who may have been negatively affected by a political incident, perceive it to be a negative influence in organizations, while others, mostly those whose position was advanced by political means, view it as a useful tool in an organization (Ferris & Kacmar, 1992). Since people act upon their perceptions of reality, not reality per se, recognizing and understanding employees’ perceptions of politics is of importance to organizations (Lewin, 1936; Porter, 1976).
The perceptions individuals hold about the political nature of their work environment influence the way they do their jobs. These perceptions affect how employees feel about their company, boss, and co-workers, and they shape workers’ productivity, satisfaction, and turnover intentions (Ferris & Kacmar, 1992). Individuals’ perceptions about politics in the organization also determine how political the environment will be. If employees perceive that others get ahead by acting politically, they will be more likely to engage in political behaviors themselves (Ferris, Fedor, Chachere, & Pondy, 1989). Therefore, organizational culture is influenced by the degree of political activity found in an organization and by how the employees in that organization react to these activities.
Although perceptions of organizational politics play an important role in the organization, relatively little is known about this process (Ferris, Russ & Fandt, 1989). One reason for the lack of knowledge in this area is that no established scale existed to measure perceptions of organizational politics. However, this void was recently filled by Kacmar and Ferris (1991), who developed and performed an initial validation of a measure entitled the Perceptions of Politics Scale (POPS). This twelve-item scale was designed to assess the degree to which respondents view their work environments as political. Armed with this information, managers can better understand how employees perceive and react to politics in the organizational environment around them.
While an initial validation of POPS was conducted during the development of the scale, further examination of the dimensionality and construct validity of POPS was performed by Nye and Witt (1993). However, before researchers interested in the area of organizational politics accept this scale for widespread use, further validation is needed. Hence, the purpose of the present study is to provide subsequent empirical validation of this scale.
Perceptions of Organizational Politics
Although politics is virtually endemic in organizational life (Ferris & Kacmar, 1992; Frost, 1987; Kumar & Ghadially, 1989; Porter, Allen & Angle, 1981), only a limited amount of attention has been paid to this research area. Some interest in organizational politics surfaced in the late 1970s and early 1980s (Farrell & Peterson, 1982; Gandz & Murray, 1980; Madison, Allen, Porter, Renwick & Mayes, 1980; Mayes & Allen, 1977; Porter et al., 1981; Schein, 1977; Tushman, 1977); however, interest in the topic soon waned. This lack of interest may be attributed to the difficulty early organizational politics researchers experienced in defining, quantifying, and measuring this elusive phenomenon. However difficult the task, some consensus about what constitutes organizational politics must be reached before the past research in this area can be reviewed. Therefore, the following section grapples with the difficult task of defining organizational politics.
Toward A Definition of Organizational Politics
One thing that becomes immediately evident when reviewing the organizational politics literature is that no one definition of the term has received widespread support (Cropanzano, Kacmar & Bozeman, 1995; Drory & Romm, 1988, 1990; Ferris, Russ & Fandt, 1989; Porter et al., 1981). Virtually every article written in the area includes some reference to the fact that the concept is difficult to define. In 1990, Drory and Romm wrote an entire article on the definition of organizational politics. However, after reviewing and integrating the various definitions of organizational politics that had been presented in the literature, they too avoided offering a definition for the term.
When examining the definitions that have been offered in the literature (Allen, Madison, Porter, Renwick & Mayes, 1979; Farrell & Peterson, 1982; Ferris et al., 1993; Ferris & Judge, 1991; Ferris, Russ & Fandt, 1989; Kumar & Ghadially, 1989; Porter et al., 1981), several commonalities emerge. One theme is the fact that political activities are a means of exercising social influence. A second notion is that political behaviors are designed to promote or protect one’s own self-interests. Finally, the notion that at least two parties must be included and that these two parties have the potential to possess divergent interests is either explicit or implicit in many definitions. Combining these perspectives into one general definition allows one to view organizational politics as social influence attempts directed at those who can provide rewards that will help promote or protect the self-interests of the actor (Cropanzano et al., 1995).
Armed with a working definition of organizational politics, we are now ready to explore past research in this domain. In the sections that follow, theoretical and empirical efforts in the area of organizational politics will be reviewed briefly. The review is organized around the three factors generated by Kacmar and Ferris (1991): general political behavior, which includes the behaviors of individuals who act in a self-serving manner to obtain valued outcomes; go along to get ahead, which consists of a lack of action by individuals (e.g., remain silent) in order to secure valued outcomes; and pay and promotion policies, which involves the organization behaving politically through the policies it enacts.
General Political Behavior
It has been suggested that political behavior in organizations will increase when rules and regulations are not available to govern actions (Drory & Romm, 1990; Fandt & Ferris, 1990; Ferris, Fedor, Chachere & Pondy, 1989; Ferris & King, 1991; Ferris, Russ & Fandt, 1989; Kacmar & Ferris, 1993; Madison et al., 1980; Tushman, 1977). In the absence of specific rules and policies for guidance, individuals have few clues as to acceptable behavior and, therefore, develop their own. When left to their own devices, individuals often develop rules that are self-serving and that better the position of the rule maker. Individuals who are more adept at dealing with uncertain situations, and persons who impose their own rules on others, are more likely to have their rules adopted.
Another process impacted by uncertainty is decision making. Decision making under uncertainty has been found to be susceptible to political influence (Drory & Romm, 1990). When the information needed to make an informed decision is lacking or ambiguous, decision makers rely upon their own interpretations of the data. Multiple translations of the same information can result in ineffective decisions that may appear political to those not directly involved in the decision making process (Cropanzano et al., 1995).
Scarcity of valued resources (e.g., transfers, raises, office space, budgets) generates competition. Several researchers have suggested that jockeying for a position that will allow one to receive a valued resource is quintessential political behavior (Drory & Romm, 1990; Farrell & Peterson, 1982; Kumar & Ghadially, 1989). This implies that organizations with limited resources will have political environments. Since most organizations will have limited resources in at least one area, political activities may occur in virtually any organization.
Examining exactly why resources are scant can help to predict who the target of the political activities will be, as well as how heated the contest may become. Any individual who has control over critical resources that cannot be secured elsewhere will be a probable target of political influence tactics (Frost, 1987). Further, the attractiveness and immediate benefit of the resource also will factor into the decision to engage in political activities (Drory & Romm, 1990). In some cases, a scarce resource, such as the organization’s tickets to a sporting event, may only be valued by a few individuals, and hence, the actions engaged in to secure this resource may not be as competitive as those used to secure a scarce resource valued by all, such as a raise or a promotion.
Go Along to Get Ahead
Conflict is consistently related to organizational politics in the literature (Drory & Romm, 1988; Frost, 1987; Gandz & Murray, 1980; Mintzberg, 1985; Porter et al., 1981; Tushman, 1977). The essence of this connection is that political behavior is self-serving, and thus, has the potential to threaten the self-interests of others. When a threat is followed by retaliation, conflict arises (Porter et al., 1981). According to Drory and Romm (1990), the existence of conflict is a necessary underlying element of organizational politics. Further, the actual influence attempts themselves are an indication of the potential state of conflict that exists between the two parties.
Some individuals may desire to avoid conflict, and therefore, not resist others’ influence attempts. While this may appear to be a nonpolitical act, it can actually be considered a form of political behavior. It has been suggested that the distinction between political and nonpolitical behavior in organizations can be made on the basis of intent (Drory & Romm, 1990). That is, if a behavior is enacted specifically for the purposes of advancing one’s own self-interests, then the individual is acting politically (Frost, 1987). Individuals who “don’t rock the boat” are not viewed as threatening opponents by those who are acting politically. Hence, the nonthreatening individual may be welcomed into the “in-group” and receive valued outcomes simply for not interfering with a politically acting individual’s or group’s agenda. Lack of action, or going along to get ahead, can be a reasonable and profitable approach to take in order to advance one’s own self-interests when working in a political environment.
Pay and Promotion Policies
The final category of organizational politics concerns the ways organizations reward and perpetuate political behavior through the policies they implement (Ferris, Fedor, Chachere & Pondy, 1989; Ferris & King, 1991; Kacmar & Ferris, 1993). Even though organizational decision makers may not do so consciously, the human resource systems they develop and implement may reward individuals who engage in influence behaviors and penalize those who do not. Such practices result in a culture in which political activity becomes commonplace in virtually every aspect of human resource decisions.
Organizations can design reward systems that perpetuate political behavior in a variety of ways. For example, individually oriented rewards induce individually oriented behavior. Individually oriented behavior, as opposed to organizationally oriented behavior, is often self-interested and political in nature. When this type of behavior is rewarded or reinforced, the tactics used to secure the reward will likely be repeated. Hence, organizations may develop environments that foster and reward political behavior. Rewarding political behavior also can influence those who have not acted politically in the past to do so in the future. That is, individuals who perceive themselves as inequitably rewarded relative to others who engage in organizational politics may be more likely to engage in political behaviors in the future (Ferris, Russ & Fandt, 1989; Kacmar & Ferris, 1993).
Perceptions of Organizational Politics Scale (POPS)
Each of the three factors just discussed is represented in the Perceptions of Organizational Politics Scale (POPS) (Kacmar & Ferris, 1991), which is the focus of this paper. A brief explanation of previous development and validation efforts for POPS is provided below.
The original development and validation of POPS was a two-phase process (Kacmar & Ferris, 1991). To begin, 31 items were generated by examining research, theory, and anecdotal evidence relevant to organizational politics. Three hundred and eighty-seven responses to these items were factor analyzed using principal components analysis with an orthogonal (varimax) rotation. In addition, a dataset that contained random data equivalent to 387 responses to 31 items was subjected to an identical factor analysis. The resulting eigenvalues from the two factor analyses were plotted over one another to determine how many factors to extract (Humphreys & Montanelli, 1975). Results indicated that the two lines crossed at five factors, so five factors were retained. Next, classical test theory (i.e., reliability analysis; Nunnally, 1967) was applied to the resulting factors to produce the most parsimonious set of items that still had acceptable reliability.
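The factor-retention rule applied here (Humphreys & Montanelli, 1975) is a form of parallel analysis: eigenvalues from the observed data are compared with the average eigenvalues of random data of the same dimensions, and factors are retained only while the observed values exceed the random benchmark. The following Python sketch illustrates the logic; the function name and simulation details are illustrative, not the original analysis code.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Retain factors whose observed eigenvalues exceed the average
    eigenvalues of random data of the same shape (n respondents x k items)."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    # Eigenvalues of the observed correlation matrix, largest first
    real_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Average eigenvalues across n_iter random datasets of the same size
    rand_eigs = np.zeros(k)
    for _ in range(n_iter):
        rand = rng.standard_normal((n, k))
        rand_eigs += np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    rand_eigs /= n_iter
    # Count factors until the two plotted eigenvalue "lines" cross
    retained = 0
    for real, rand in zip(real_eigs, rand_eigs):
        if real > rand:
            retained += 1
        else:
            break
    return retained
```

In the original study, the point at which the two plotted eigenvalue lines crossed determined the number of factors to extract.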
After applying classical test theory, two of the factors reduced to two items. Therefore, in the second phase of this study, nine additional items were generated for these factors. In phase two, a survey containing the original 31 and the additional 9 items was distributed to approximately 3,000 state workers. Also included in the second survey administration were four subscales (i.e., co-workers, pay, promotions, and supervisors) of the Job Descriptive Index (JDI) (Smith, Kendall & Hulin, 1969). These subscales were included as a means of assessing the discriminant validity of POPS because several of the POPS factors from phase one seemed to parallel the concepts tapped by the JDI subscales.
The 40 items from POPS and the four JDI subscales were subjected to a principal components analysis with an orthogonal varimax rotation which yielded five factors, one for each JDI subscale and one for the politics items. However, as expected, several of the POPS items (i.e., 13) loaded on one of the JDI factors. Items that loaded only on POPS and not on a JDI factor were retained. The retained items were subjected to a principal components factor analysis with a varimax rotation resulting in three factors. Finally, classical test theory was applied to the remaining three factors to further reduce them as described above. The final results for the second phase produced a twelve-item, three-factor scale.
This final scale was further examined by Nye and Witt (1993). Specifically, these authors examined the dimensionality and construct validity of POPS. With respect to the dimensionality of POPS, Nye and Witt used both exploratory and confirmatory factor analysis. First, they used principal components analysis with a varimax rotation and selected factors with an eigenvalue over 1.0. They also used structural equation modeling (i.e., LISREL) to compare a three-factor solution to a one-factor solution. The exploratory factor analysis results (eigenvalues greater than 1) indicated that a one-factor solution was appropriate for the sample used. The confirmatory factor analysis results, however, were not as clear. The three-factor model produced a better model-to-data fit (GFI = .87 versus .89, NFI = .93 versus .90) but had less parsimony (.74 versus .72). However, the high factor correlations (-.85, -.94, and .91) provide additional evidence in support of a one-factor solution.
Nye and Witt (1993) also examined the construct validity of POPS by comparing it to the Survey of Perceived Organizational Support (SPOS; Eisenberger, Huntington, Hutchison & Sowa, 1986). Given that SPOS was designed to measure the degree to which individuals viewed their organizations as concerned about their well-being and appreciative of their efforts, a negative relationship between SPOS and POPS was expected and found. Specifically, results from these analyses indicated that POPS and SPOS were strongly and inversely related (-.85). Further, each of these scales produced significant but oppositely signed correlations with other job-related measures (e.g., job satisfaction: POPS = -.62, SPOS = .68; commitment: POPS = -.58, SPOS = .59). While these results indicate some conceptual overlap between POPS and SPOS, there also are distinctions. The main difference between these two scales is their focus. Each item in SPOS asks the respondent to comment on how the “organization” treats him or her, while POPS more explicitly delineates the group to be evaluated (e.g., influential group, supervisors, people in my department). If raters focus on the same referent group when completing SPOS and POPS, the conceptual overlap could be great. However, if top management rather than departmental colleagues and immediate supervisors is the target of individuals’ SPOS ratings, there could be virtually no conceptual overlap.
Several limitations of the Nye and Witt (1993) examination should be noted. First, the wording of two of the POPS items had to be changed in order for the survey to be approved and distributed. The original item “There is no place for yes-men around here; good ideas are desired even when it means disagreeing with supervisors.” (reverse coded) was changed to read “It is safer to agree with managers than to say what you think is right.” Also, the original item “Since I have worked in this department, I have never seen pay and promotion policies applied politically.” (reverse coded) was modified to read “Pay and promotion decisions are consistent with policies.” Hence, the integrity of the scale was violated. Second, analysis of individual items was not performed. That is, POPS and SPOS items were not factor analyzed together as was done with the JDI in the initial validation of the POPS. An in-depth examination of the individual scale items might have produced more useful results.
Since the validation research to date on POPS has not fully established the dimensionality and psychometric properties of the scale, additional work is needed. To address this need, the present study used a structural equation modeling approach (i.e., LISREL) to examine, validate, and modify POPS.
The Current Study
Figure 1 shows the structural model for POPS as empirically derived by Kacmar and Ferris (1991). The model has three latent constructs: General Political Behavior, Go Along to Get Ahead, and Pay and Promotion. Each of these constructs maps to specific items, presented as Item 1 through Item 12 in Figure 1. Further, because these three subscales were designed to measure portions of the overall construct of perceptions of organizational politics, it is assumed that they are correlated. Hence, paths between the latent factors also should be estimated as indicated in Figure 1.
Study 1 – Dimensionality
Nye and Witt’s (1993) results indicated that the model shown in Figure 1 may not be the best representation of POPS. Therefore, Study 1 was designed to examine the dimensionality of POPS. Specifically, the model proposed in Figure 1 and a one-factor model in which all twelve items were mapped to one general political factor were estimated. The model providing the best model-to-data fit was then further examined in detail for overall model acceptability.
Study 2 – Individual Item Analysis
Study 2 was designed to consider the performance of each individual item in POPS. Specifically, items that did not load significantly on their intended factor or items that loaded on multiple factors were identified. To do this, content adequacy data (Schriesheim, Powers, Scandura, Gardiner & Lankau, 1993) were collected on each of the twelve items in POPS. These data were factor analyzed, and mean values for each item across the three factors were calculated. As a third decision criterion, results from exploratory factor analyses of the four datasets containing responses to the 12 POPS items were used. Based on all of these results, items that did not adequately map to the factor to which they belonged were deleted. Structural equation modeling analysis was then applied to the remaining items using a series of four different datasets to determine the overall fit of the model as well as the validity of the remaining items. Finally, additional scales were introduced at this point to examine the convergent and discriminant validity of POPS.
Study 3 – Augmenting POPS
In the third study, additional items were developed to augment the reduced set of items. The development of the new items was based upon the rationale described by Kacmar and Ferris (1991) when they originally developed items for POPS and the theory outlined earlier. Content adequacy, content analysis, and exploratory factor analyses were performed for each of these new items to determine which to keep. Those that remained were then examined via structural equation modeling using a new dataset collected specifically for this purpose.
STUDY 1: DIMENSIONALITY
Sample and Data Collection
The sample in Study 1 consisted of 749 responses (64% response rate) from an attitude survey for a large state agency in which POPS was one of many variables included. The survey was mailed via interoffice mail to all members of the agency. A cover letter written by the director of the agency was included with each survey. The cover letter introduced the project and stressed the importance of participation in the project. An envelope addressed to the researchers also was enclosed so that the respondents could return the surveys directly to the researchers.
Perceptions of politics. The Kacmar and Ferris (1991) 12-item Perceptions of Politics Scale was administered to respondents from the state agency sample. The internal reliability estimate for this sample was .87.
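The internal reliability estimate reported here is Cronbach's alpha. For reference, a minimal computation from an (n respondents × k items) score matrix is sketched below; the function name is illustrative.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    # Sum of the individual item variances (sample variances, ddof=1)
    item_variances = items.var(axis=0, ddof=1).sum()
    # Variance of each respondent's total score
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)
```

Alpha rises as the items covary more strongly relative to their individual variances; perfectly redundant items yield an alpha of 1.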
To assess the dimensionality of POPS, structural equation modeling analysis using LISREL 8 (Joreskog & Sorbom, 1993) was applied to the data from the state agency sample to compare the fit of the three-factor model shown in Figure 1 to a one-factor model in which all 12 items were linked to one general political factor. A χ² difference test was used to compare the two models. The best-fitting model was then examined more closely for overall model fit using the available LISREL indicators (e.g., GFI, AGFI, NFI, RFI).
Results and Discussion
Table 1 provides an intercorrelation matrix of the 12 POPS items for the state agency sample. Also included in Table 1 are the means and standard deviations. [Tabular data for Table 1 omitted.] To examine the dimensionality of POPS, a variance/covariance matrix of the state agency data was used as input. The model shown in Figure 1 and a one-factor model (i.e., all twelve items loading on one factor) were examined using LISREL 8. All of the fit indices for the 3-factor model were better than those for the one-factor model (3-factor: χ²(51) = 498.76, p = .00, NFI = .87, CFI = .88, PNFI = .67, RMSEA = .10; 1-factor: χ²(54) = 919.55, p = .00, NFI = .76, CFI = .77, PNFI = .56, RMSEA = .14); however, the overall fit was modest, most likely indicating model misspecification. Further, a χ² difference test between these two models (426 with 3 degrees of freedom) was significant (p < .01). Finally, the correlations between the three factors were not excessively large (.74, .44, .62). All of these results suggest that the three-factor model best fit the data, so further examination of this model was conducted.
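Because the one-factor model is nested within the three-factor model, the difference between their χ² statistics is itself χ²-distributed, with degrees of freedom equal to the difference in model degrees of freedom. A SciPy sketch of this test (the function name is illustrative), using the fit statistics reported above:

```python
from scipy.stats import chi2

def chi_square_difference(chi2_restricted, df_restricted, chi2_full, df_full):
    """Likelihood-ratio (chi-square difference) test for nested models.

    The restricted model (here, one factor) must be nested within the
    full model (three factors); the difference in chi-square values is
    tested against a chi-square distribution with df equal to the
    difference in degrees of freedom.
    """
    delta = chi2_restricted - chi2_full
    ddf = df_restricted - df_full
    p_value = chi2.sf(delta, ddf)  # survival function: P(X >= delta)
    return delta, ddf, p_value

# Fit statistics reported above: 1-factor chi2(54) = 919.55,
# 3-factor chi2(51) = 498.76
delta, ddf, p = chi_square_difference(919.55, 54, 498.76, 51)
```

A significant result indicates that the less constrained (three-factor) model fits significantly better than the one-factor model.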
Modification indices. Discriminant validity of the factors can be determined by examining the modification indices of Λ_X. According to Medsker, Williams and Holahan (1994), values less than 4 are acceptable, while values higher than 4 indicate that the items are loading on multiple factors and that the error terms might be correlated. Results from the present data indicated that half (12 of 24) exceeded the recommended cutoff.
Item loadings/lambdas. The standardized parameter estimates, often referred to as the lambdas, are presented in the middle of Figure 2. In order to determine the significance of the loadings, t-values are examined because they are independent of units of measurement (Joreskog & Sorbom, 1993). The t-values, which ranged from 18.36 to 24.24, were all significant at the p < .01 level. This indicates that each item is significantly related to its specified construct.
Squared multiple correlations. The squared multiple correlations (SMCs) provide information about the reliability of the items as well as the extent to which they measure what they purport to measure. Specifically, SMC values indicate the percent of variance accounted for by each item in the factor. These values will always be less than the composite reliabilities. While composite reliabilities should be greater than .70, it is not possible to even suggest a rule of thumb cutoff for SMCs (Bagozzi & Yi, 1988).
For the current sample, the SMCs ranged from .39 to .66 (mean = .49; median = .48; 9 items ≥ .45) with a composite reliability of .87. Since this value was above the suggested cutoff of .70, the scale had acceptable reliability.
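Under a congeneric measurement model with standardized loadings λ, each item's SMC is λ², and a common formula for composite reliability (Fornell & Larcker, 1981) is (Σλ)² / [(Σλ)² + Σ(1 − λ²)]. The sketch below illustrates this relationship; it is not necessarily the exact computation used in the original analysis.

```python
import numpy as np

def smc_and_composite_reliability(loadings):
    """Squared multiple correlations and composite reliability from
    standardized factor loadings (congeneric one-factor model)."""
    lam = np.asarray(loadings, dtype=float)
    smc = lam ** 2                  # per-item reliability (SMC)
    theta = 1.0 - smc               # standardized error variances
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + theta.sum())
    return smc, cr
```

For example, twelve items that each load .70 have SMCs of .49 and a composite reliability above .90, illustrating why individual SMCs are always lower than the composite reliability of the scale.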
Chi-square. The chi-square test is a measure of the overall fit of the model to the data. The goal when using LISREL is to fail to reject the null hypothesis that the model fits the data (i.e., Σ_data = Σ_model). In this case, the chi-square value was statistically significant (χ²(51) = 489.76, p < .001), indicating that the model does not fit the data. It is important to note, however, that when the dataset used with this test is large, it is virtually impossible to fail to reject the null hypothesis.
Goodness-of-fit. The goodness-of-fit index (GFI) is another indication of how well the model fits the data. The current data produced a value of .89 which indicates that the model does a reasonable job of fitting the data. However, to adjust for model parsimony, an adjusted goodness-of-fit index (AGFI) is calculated which should exceed .90 (Medsker et al., 1994). The AGFI value for the current sample dropped to .84, which placed it out of the acceptable range.
Normed fit index. To examine the proportion of total variance accounted for by a model, the normed fit index (NFI) is used (Medsker et al., 1994). An acceptable value for the NFI is .90 (Medsker et al., 1994). The NFI for the three-factor model was .87, indicating that this fit statistic falls just below the border of acceptability.
Comparative fit index. The comparative fit index (CFI) is similar to the NFI, except that it overcomes the difficulties associated with sample size (Medsker et al., 1994). The CFI for the present model was .88, which falls just below the acceptable level of .90 (Mulaik et al., 1989).
Parsimony fit index. The parsimony fit index (PFI) reflects the amount of covariance explained by a model when its number of parameters is taken into account (Renn & Vandenberg, 1995). That is, if the same amount of construct covariance is explained by two models, the less complex of the two models will have a higher PFI value. PFI values over .60 are considered acceptable (Mulaik et al., 1989). A value of .67 was found for the present data, indicating adequate parsimony for the model.
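The incremental indices discussed above can be computed from the χ² of the fitted model and of a null (independence) baseline model: NFI = (χ²_null − χ²_model) / χ²_null (Bentler & Bonett, 1980), and the parsimony-adjusted version scales NFI by the ratio of model to baseline degrees of freedom (Mulaik et al., 1989). A sketch with illustrative numbers, since the baseline χ² for this dataset is not reported above:

```python
def normed_fit_index(chi2_null, chi2_model):
    """Proportion of the baseline (independence-model) chi-square
    accounted for by the fitted model."""
    return (chi2_null - chi2_model) / chi2_null

def parsimony_fit_index(chi2_null, df_null, chi2_model, df_model):
    """NFI scaled by the ratio of model df to baseline df; rewards
    models that achieve fit with fewer estimated parameters."""
    return (df_model / df_null) * normed_fit_index(chi2_null, chi2_model)
```

Because the scaling ratio is always below 1, the parsimony index is necessarily lower than the NFI itself, which is why its acceptability threshold (.60) is lower than the .90 used for NFI and CFI.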
Summary. While it is clear that the three-factor model is superior to the one-factor model, the detailed assessment of the overall fit of the three-factor model demonstrated that it had shortcomings. Therefore, further adjustments to the model needed to be made. One approach that can be taken to increase the overall fit of a confirmatory factor analysis model is to remove items that do not load significantly on the intended factor as well as those items that load on multiple factors. To determine which, if any, POPS items fell into one or the other of these categories, each individual item of POPS must be examined. Study 2 was undertaken to ascertain each item’s contribution to the overall model fit.
STUDY 2: INDIVIDUAL ITEM ANALYSIS
A variety of analyses were performed in Study 2. First, content adequacy data (Schriesheim et al., 1993) were collected for each of the twelve items in POPS. This required respondents to indicate how closely each POPS item represented a definition of each of the three factors of POPS. To determine which items did not relate to the factor they were intended to measure, an exploratory factor analysis of the content adequacy data was conducted. In addition, the mean values for each item across the three factors were calculated using the content adequacy data. As a third decision criterion, exploratory factor analyses of the four POPS datasets were used to ascertain which items to retain and which to discard. Next, a multiple groups analysis using structural equation modeling was applied to the remaining items using a series of four different datasets to determine the overall fit of the model and the validity of the remaining items. Finally, additional scales were introduced to examine the convergent and discriminant validity of the reduced POPS scale. Each of these steps is examined in greater detail below.
Sample and Data Collection
Content adequacy. A total of 102 upper level undergraduate students in the College of Business at a large southern university were asked to complete a content adequacy analysis of the original 12 items of POPS. Specifically, each respondent was asked to determine the degree to which each item of POPS represented a factor definition. This required the judges to rate an item three times, each time comparing it to a different factor definition. For example, a respondent read the following definition for the “Go Along to Get Ahead” factor:
These are actions used to maintain the current way of thinking or the way things are done in an organization in order to get ahead (e.g., not making your opinion known to get along with important others).
and then rated how closely each of the 12 items from POPS fit this factor definition. This process was repeated after each new factor definition was presented. The scale used was a 5-point Likert-type scale anchored by definitely not representative (1) and definitely representative (5). The gender mix of the sample included 63 (62%) males and 39 (38%) females. The average age was 23.5 years.
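Given ratings collected this way, a simple summary is to average each item's representativeness scores under every factor definition and check whether each item's highest mean falls on its intended factor. The sketch below assumes the ratings are arranged as a (judges × items × factor definitions) array; this layout is an assumption for illustration, not the original analysis code.

```python
import numpy as np

def content_adequacy_summary(ratings):
    """ratings: (n_judges, n_items, n_factors) array of 1-5 scores.

    Returns the per-item factor means and, for each item, the index of
    the factor definition it represents best (highest mean rating)."""
    means = np.asarray(ratings, dtype=float).mean(axis=0)  # (items, factors)
    return means, means.argmax(axis=1)
```

Items whose best-fitting factor is not the one they were written for, or whose means are similar across several definitions, are candidates for deletion.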
POPS data. A total of four separate samples were used to test the model-to-data fit for the refined POPS scale (i.e., after the content adequacy and exploratory factor analyses were conducted). Each of these samples used POPS as a variable in a larger data collection project. All of the data were collected via mail-out surveys that were returned directly to the researchers. The first sample was from an attitude survey for an electric cooperative from which a total of 466 (94% response rate) responses were included. The next sample came from a survey about current policies, perceptions, and attitudes of Human Resource professionals in a two-state area. A total of 581 (39% response rate) responses were included from this survey. The third sample represented 220 (44% response rate) nonfaculty employees at a small northeastern university who responded to an attitude survey. In the fourth sample, 320 (64% response rate) responses were generated from full-time employees in the private sector specifically for POPS validation purposes. A correlation matrix for each of the four samples for each of the twelve individual POPS items is available from the first author.
Perceptions of politics. The Kacmar and Ferris (1991) 12-item Perceptions of Politics Scale was administered to respondents from each sample. The internal reliability estimate for each sample was as follows: cooperative = .88, human resource professionals = .86, university employees = .89, and validation study respondents = .88.
The validation study respondents also were asked to respond to a variety of scales that were thought to be distinct from but related to POPS as a means of assessing the discriminant validity of POPS. These scales were used in the second set of analyses in which the reduced set of items was examined across a variety of samples. Each of these scales is described below.
Faith in people. The faith in people scale (Rosenberg, 1957) was designed to measure one’s degree of confidence in the trustworthiness, honesty, and goodness in mankind. A reverse scored sample item is “If you don’t watch yourself, people will take advantage of you.” In our study, individuals with high scores represented people who were willing to use unscrupulous means to get ahead and believed in the superior efficacy of “contacts” over ability. Thus, these individuals were likely to believe that political behavior can and should be used to get ahead in life. The Cronbach alpha for this scale was .68.
Alienation. The alienation via rejection scale (Streuning & Richardson, 1965) measures the emotional distance and purposelessness people experience when dealing with others. A sample item is “Most people don’t realize how much their lives are controlled by plots hatched in secret by others.” Thus, individuals high in alienation believe that they cannot trust the people around them. This would be consistent with POPS in that someone who saw activities as political also would think that he or she has to beware of people who have an influence on his or her life. The internal consistency estimate for this scale was .81.
An alienation scale developed by Dean (1961) also was used to examine the alienation concept. This scale taps the powerlessness and normlessness that an individual feels in his or her life. A sample item is “Sometimes I have the feeling that other people are using me.” Thus, individuals scoring high on this scale would feel they have little power over their own lives and view their lives as controlled by others. This concept is very similar to perceptions of political behavior in that an individual who views his or her work environment as political would feel that he or she is negatively affected by the political actions of others. This scale produced an internal reliability estimate of .64.
Cynicism. A cynicism subscale from the New F (Authoritarian) Scale (Webster, Sanford & Freeman, 1955) was used to tap the degree of cynicism or skepticism an individual has toward people in the world. An example item is “I don’t blame anyone for trying to grab all he can get in this world.” Hence, this scale measures ways in which political behavior is enacted in a social context. As with POPS, the higher the cynicism score, the more likely the respondent will view others’ behaviors as political. The Cronbach alpha for this scale was .67.
Altruism. The altruism subscale from the Philosophy of Human Nature Scale (Wrightsman, 1964) was used to measure the extent of unselfishness, sincere sympathy, and concern one has for others. Wrightsman has shown both reliability and validity of this measure. A sample item (reverse scored) is “People pretend to care more about one another than they really do.” The scale was scored such that a high score indicated selfishness and lack of concern for others. The overall internal reliability estimate for this scale was .64.
Trust. Trust, a second subscale from the Philosophy of Human Nature Scale (Wrightsman, 1964), was used to measure the expectancies people have about the way other people generally behave. A sample item that is reverse scored is “Most people would tell a lie if they could gain by it.” The scale was scored such that a high score indicated low overall favorability toward human nature, which would be consistent with POPS. This scale produced an internal consistency estimate of .81.
Social Attitude. Campbell’s (1966) social attitude scale assesses how positive a view one has about humankind. In this scale, a positively viewed item was paired with a negatively viewed item and the respondents were asked to indicate their agreement with one of the paired items. A sample item is “The golden rule is still the best rule to live by” versus “Nice guys finish last.” Similar to POPS, the items were coded such that a high score represented a negative view of mankind. The Cronbach alpha for this scale was .67.
Self-activity inventory. The self-activity scale is used as a general measure of self-concept adjustment (Worchel, 1958). An example of one of the items used is “I am a person who plays up to others in order to advance his/her position.” This scale measures specific behaviors an individual will enact to adapt to his or her environment. Those who attempt to fit into their environment should be more political in nature, and thus be more aware of political behavior around them. Therefore, an individual who scores high on the self-activity inventory also should score high on POPS. This scale produced an internal consistency estimate of .78.
Data generated from the content adequacy analysis were examined using exploratory factor analysis (principal components with an oblimin rotation) to determine which items should be retained.
In order for an item to be retained, it had to load on its intended factor in a minimum of three of the five factor analyses. Further, the mean for the item had to be largest on the factor it was designed to measure, and this value had to be significantly different from the mean for the other two factors. Implementing these rules resulted in items 2, 4, 9, 10, 11, and 12 being retained and items 1, 3, 5, 6, 7, and 8 being deleted.
[TABULAR DATA FOR TABLE 3 OMITTED]
Figure 3 depicts the structural model of POPS after removing the items that did not perform as expected in the content adequacy analysis. The model shown in Figure 3 was analyzed using the multiple group feature of LISREL 8. Specifically, each of the remaining four datasets was used to explore the data-to-model fit of the reduced model.
The first analysis run was a four-group comparison that required the path loadings, factor correlations, and the error variances for each dataset to be equivalent. The χ² for the comparison between the datasets was significant with a value of 284.88 and 69 degrees of freedom (p < .001). However, the comparative fit index for the datasets was acceptable (CFI = .90), as was the IFI (.90) and the GFI (.94). These results suggest that the four different datasets map well to the model with respect to the factor loadings, factor correlations, and error variances, indicating that the model is generalizable to a variety of datasets. This model was rerun allowing the path loadings and factor correlations to be re-estimated for each dataset. The fit statistics for this model were stronger (GFI = .97, CFI = .93, IFI = .93). Table 3 presents the specific fit statistics for each of the four datasets and the resulting path loadings and correlations for the freed model.
Convergent and Discriminant Validity
A scale has convergent validity if it is associated positively with other measures of the same construct or other theoretically relevant constructs. That is, different measures of the same construct should converge. A scale has discriminant validity if it can be distinguished from measures of conceptually similar constructs (Kerlinger, 1986). That is, constructs similar to one another should be correlated, just not too highly. A number of different measures that purport to measure constructs that are conceptually similar to POPS were used in order to establish the convergent and discriminant validity of the POPS. Specifically, the validation sample respondents were simultaneously administered POPS and eight other scales that measured theoretically similar constructs to POPS: faith in people scale (Rosenberg, 1957), alienation via rejection scale (Streuning & Richardson, 1965), alienation (Dean, 1961), cynicism (Webster et al., 1955), altruism and trust (Wrightsman, 1964), social attitude (Campbell, 1966), and a self-activity inventory (Worchel, 1958).
To examine the convergent and discriminant validity of POPS, the process described by Anderson and Gerbing (1988) was followed. Anderson and Gerbing suggested developing and testing a measurement model with all of the variables of interest included. This model should include multiple indicators that are linked directly to the variable of interest and to no other variables. These multiple indicators can be individual scale items or composites of these items. The only requirement is that each set of indicators be unidimensional. Therefore, exploratory factor analyses were run on the eight scales used for validity purposes to create subscales where necessary to ensure unidimensionality.
Since the overall fit for this model was reasonable (GFI = .88, PGFI = .74, PNFI = .70, CFI = .88), both discriminant and convergent validity could be assessed. Convergent validity is present when each indicator’s estimated path coefficient for its assigned variable is significant. Since all of the path coefficients in the measurement model were significant, convergent validity was achieved.
Discriminant validity can be assessed by examining the confidence intervals for the factor correlations. This required that confidence intervals for the correlations between POPS and faith in people, alienation, alienation via rejection, cynicism, altruism, trust, social attitude, and self-activity inventory be calculated. In each case, the confidence intervals showed that the correlations were significantly different from 1.0. The magnitude of the range of correlations was from .24 to .59, further showing that different constructs were being measured. Thus, the results suggested, both statistically and practically, that the concepts considered here are distinct from POPS.
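The confidence-interval test described above checks whether each factor correlation is reliably below 1.0. A minimal sketch, using hypothetical correlation estimates and standard errors rather than the study's actual SEM output:

```python
def ci_excludes_unity(r: float, se: float, z: float = 1.96) -> bool:
    """True if the 95% confidence interval around a factor correlation
    excludes 1.0, supporting discriminant validity."""
    return r + z * se < 1.0

# Hypothetical correlation estimates and standard errors (not the study's output);
# .24 and .59 echo the endpoints of the range of correlations reported above
for r, se in [(0.24, 0.06), (0.59, 0.05)]:
    print(r, ci_excludes_unity(r, se))
```

A correlation whose interval reaches 1.0 would suggest the two factors are empirically indistinguishable; all intervals here fall well short of that bound.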
Study 2 provided many significant and important findings. First, the content adequacy analysis and exploratory factor results showed that several of the original POPS items were not functioning as originally intended. Several items cross loaded or did not clearly relate to the factor that they were intended to measure. These results suggested that the original scale should be reduced to six items (i.e., 2, 4, 9, 10, 11, and 12) with two items remaining on each of the three factors.
Next, the generalizability of the new factor structure for POPS was examined via a multiple group LISREL 8 analysis. Results from this test indicated that the new factor structure did indeed map to the four different datasets used in the analyses. Individual fit statistics for each of the datasets were extremely strong. These results suggested that the remaining items were stable across a variety of datasets representing a variety of samples and that the new refined POPS model fit all of the datasets well.
Finally, the convergent and discriminant validity of the reduced scale was examined. With respect to convergent validity, all of the retained items in POPS loaded significantly on the factor that they were intended to measure. To assess discriminant validity, POPS was compared to a variety of scales that were suggested to represent a construct similar in nature to the underlying construct measured by POPS. All of these analyses indicated that while POPS was similar to these scales, it was distinct (i.e., it showed discriminant validity).
STUDY 3: AUGMENTING POPS
While all of the results reported for Study 2 were positive, the resulting POPS scale had three two-item factors. In general, two-item factors are considered less stable and reliable than factors with more than two items. Therefore, the next step required an expansion of the two-item factors. This was the goal of Study 3. Specifically, additional items were generated for each of the three POPS factors to strengthen them. These items were subjected to the same analyses reported in Study 2 (i.e., content adequacy analysis and structural equation modeling).
Developing Additional Items
A variety of approaches were undertaken to develop new items for POPS. First, the items that were deleted in Study 2 were examined to determine why they did not adequately or uniquely represent the POPS factors, and whether they could be altered to perform as intended. For example, the exploratory factor analysis results showed that item 1 (“There is a group of people in my department who always get things their way because no one wants to challenge them.”) cleanly loaded on the general political factor as originally intended. However, the content adequacy results indicated that respondents could not decide if the item belonged on the general political factor (3.57) or the go along to get ahead factor (3.45). Close examination of this item may suggest why this is the case. A literal interpretation of this item would suggest that this group is acting in a self-serving manner and, hence, the group’s behavior clearly falls under the category of general political behavior. However, a more subtle interpretation might lead to the conclusion that inaction, not crossing the powerful group, is actually going along to get ahead. Thus, it appears that some respondents may have taken the item at face value, while others looked deeper into the actual behaviors being described in the items. To avoid this problem in the future, new items need to clearly represent only the factor on which they are intended to load.
A second method used for generating new POPS items was to use existing organizational politics theory outlined in the literature as a guide. To do this, we searched for themes in the literature that were commonly accepted or agreed upon by scholars. One example was the multitude of definitions of politics provided by the literature. While these definitions were all unique, they shared a common thread. Specifically, all of the definitions incorporated the idea that political behavior was self-serving in nature. Hence, items that clearly showed individuals or groups engaging in self-serving behaviors would fit well with the literature.
Finally, in an effort to make the items seem real to the respondents, a final approach used to develop new POPS items was to rely on anecdotal evidence and our own experiences. We each suggested personal examples of events in which we were affected by organizational politics and then made the specifics surrounding these incidents more generic in nature. Utilizing all three of these approaches allowed us to create 14 new items to be tested in Study 3. A list of these items is provided in the appendix.
Sample and Data Collection
Content adequacy. The same respondents who performed the content adequacy analysis for the original items in Study 2 were used to perform the content adequacy analysis on the new items that were developed to augment the existing POPS scale. The same procedure was followed in that respondents were required to rate each item three times, once for each factor definition. As previously explained, the ratings made determined the degree to which the item measured the definition of the factor being considered.
[TABULAR DATA FOR TABLE 4 OMITTED]
Content analysis. Fifteen graduate students and faculty members were given the fourteen new items on slips of paper and the definitions of the three factors on index cards. In addition, a “none of the above” index card was included. Judges were instructed to content analyze the items by placing each with the index card which best represented it. The judges included 8 males (53%) and 7 females (47%) who had an average age of 35.6 years.
POPS data. The revised and updated POPS scale was mailed to 600 members of the Society for Human Resource Management who lived in the south. Included with the survey was a self-addressed, stamped envelope for them to use to return the survey directly to the researchers. A total of 123 usable surveys were returned (21% response rate). Demographics for this sample included 37 (30%) males and 107 (87%) Caucasians. The average age of the respondents was 45.2 years, while the average tenure with their organizations was 10.7 years.
This sample was augmented with a second sample collected from night students enrolled in a business course at a large western university. Of the 182 respondents, 161 (89%) were employed. The gender composition of the sample included 114 (63%) males and 68 (37%) females. The average age of the sample was 25.4 years, and the average organizational tenure was 3 years. A correlation matrix of the combined sample is available from the first author.
Perceptions of politics. The 6 items that were retained from the Kacmar and Ferris (1991) 12-item Perception of Politics Scale and the 14 new items that were developed for Study 3 composed the measure of perceptions of politics. The internal reliability estimate for these items was .81.
The exploratory factor analysis and mean calculations performed on the content adequacy data in Study 2 were repeated in Study 3 on the new items. As a third decision criterion, 15 people were asked to sort (i.e., content analyze) the 20 items into categories representing the three factor definitions or a none of the above category. Finally, exploratory factor analysis on the POPS data was included as a fourth decision criterion. Once a determination of which new items to include was made, the structural equation modeling analyses described in Study 1 (dimensionality and overall fit of POPS) were performed to test the final POPS scale.
Results and Discussion
As in Study 2, the data from the content adequacy test for the new items were analyzed using a principal components exploratory factor analysis with an oblimin rotation. Also, the mean values for each item across the three factors were calculated using the content adequacy test data. In addition, 15 judges content analyzed the new items, providing another decision criterion. Finally, exploratory factor analysis was conducted on the new items using the POPS dataset. Results for these analyses are presented in Table 4.
Results from these four analyses were used to decide which new items to include and exclude. In order to pass the factor analysis tests, an item had to load at .40 or higher on its intended factor and less than or equal to .35 on all others. With respect to the mean decision criterion, the mean for an item had to be greater than or equal to 4.00 on the intended factor and less than 3.5 on all others to pass. Finally, 10 judges (71%) had to place the item on its intended factor to meet this decision criterion. Applying these four decision criteria, 9 of the 14 new items were retained (i.e., 1, 2, 3, 4, 5, 6, 8, 9, 10). When these 9 items were combined with the original 6, the final scale was composed of 15 items (shown in Table 5) that represent three factors.
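The retention rules above can be expressed as a simple filter. This is a simplified sketch with hypothetical item statistics: the study applied the loading rule to two separate factor analyses (content adequacy data and POPS data), which are collapsed into one check here, and the factor names are invented:

```python
def retain_item(loadings, means, judge_votes, intended):
    """Apply the four retention criteria to a single candidate item.

    loadings/means are dicts keyed by factor name; judge_votes is the number
    of judges who sorted the item onto its intended factor (cutoff of 10).
    """
    others = [f for f in loadings if f != intended]
    loading_ok = (loadings[intended] >= 0.40
                  and all(loadings[f] <= 0.35 for f in others))
    mean_ok = (means[intended] >= 4.00
               and all(means[f] < 3.5 for f in others))
    judges_ok = judge_votes >= 10          # the study's judge cutoff
    return loading_ok and mean_ok and judges_ok

# Hypothetical statistics for one candidate item
print(retain_item(
    loadings={"general": .62, "get_ahead": .21, "pay": .18},
    means={"general": 4.3, "get_ahead": 2.9, "pay": 2.4},
    judge_votes=12, intended="general"))
```

An item failing any one of the three checks (loading pattern, mean pattern, or judge agreement) is dropped, which is how 5 of the 14 candidates were eliminated.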
To examine the dimensionality of the modified POPS scale, a structural equation modeling approach was applied to a variance/covariance matrix of the POPS data created by including the items shown in Table 5. Specifically, the three-factor model shown in Figure 4, as well as a one-factor model (i.e., all fifteen items loading on one factor), were examined using LISREL 8. Results for these analyses indicated that the fit statistics for the three-factor model were better than the fit statistics for the one-factor model on all indicators (three-factor: RMSEA = .076, GFI = .91, AGFI = .87, NFI = .86, NNFI = .87, PNFI = .72, CFI = .91, IFI = .91, RFI = .84; one-factor: RMSEA = .13, GFI = .79, AGFI = .72, NFI = .70, NNFI = .70, PNFI = .60, CFI = .74, IFI = .74, RFI = .66). Additionally, a χ² difference test between these two models (276.55 with 3 degrees of freedom) was significant (p < .001). Finally, two of the three correlations between the factors were moderate (.54, .71, and 1.00). All of these results suggest that the three-factor model best fit the data, so further examination of this model was conducted.
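The significance of the chi-square difference between the nested one-factor and three-factor models can be verified from the values reported above:

```python
from scipy.stats import chi2

# Chi-square difference between the one-factor and three-factor models,
# using the values reported above: a difference of 276.55 on 3 df
diff, df = 276.55, 3
p = chi2.sf(diff, df)      # survival function gives the upper-tail p-value
print(p < .001)
```

With 3 degrees of freedom, any difference above 16.27 is significant at the .001 level, so a difference of 276.55 overwhelmingly favors the less constrained three-factor model.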
Modification indices. An examination of the modification indices for the lambda-X matrix can be performed to examine the discriminant validity of the factors. According to Medsker et al. (1994), values less than 4 are acceptable, while values higher than 4 indicate that the items are loading on multiple factors and that the error terms might be correlated. Results from the present data indicated that only 8 of the 30 values (27%) exceeded the recommended cutoff.
Item loadings/lambdas. The completely standardized parameter estimates, often referred to as the Lambdas, are presented in the middle of Figure 4. In order to determine the significance of the loadings, t-values were examined because they are independent of units of measurement (Joreskog & Sorbom, 1993). The t-values, which ranged from 3.02 to 17.89, were all significant at the p < .01 level. This indicates that each item is significantly related to its specified construct.
Squared multiple correlations. The squared multiple correlations (SMCs) provide information about the reliability of the items, as well as the extent to which they measure what they purport to measure. Specifically, SMC values indicate the percent of variance accounted for by each item in the factor. These values will always be less than the composite reliabilities. While composite reliabilities should be greater than .70, it is not possible to even suggest a rule of thumb cutoff for SMCs (Bagozzi & Yi, 1988).
For the current sample, the SMCs ranged from .03 to .75 (mean = .41; median = .34; 6 items ≥ .45), with a composite reliability of .87 for the overall scale. Since this value was above the suggested cutoff of .70, the scale had acceptable reliability.
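Both quantities discussed above follow directly from the completely standardized loadings: each item's SMC is its squared loading, and the composite reliability combines the loadings with the implied error variances. A sketch with hypothetical loadings, not the study's estimates:

```python
def composite_reliability(lambdas):
    """Composite reliability from completely standardized loadings.
    Each item's SMC is the square of its standardized loading."""
    s = sum(lambdas)
    error_var = sum(1 - l ** 2 for l in lambdas)   # standardized error variances
    return s ** 2 / (s ** 2 + error_var)

# Hypothetical standardized loadings (not the study's estimates)
lambdas = [0.55, 0.60, 0.72, 0.48, 0.81, 0.66]
smcs = [round(l ** 2, 2) for l in lambdas]
print(smcs)
print(round(composite_reliability(lambdas), 2))
```

This also shows why every SMC is below the composite reliability: the composite pools reliable variance across all items, while each SMC reflects a single item.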
Chi-square. The chi-square test is a measure of overall fit of the model to the data. The goal when using LISREL is to fail to reject the null hypothesis that the model fits the data (i.e., Σ_data = Σ_model). In this case, the chi-square value was statistically significant (χ²(87) = 237.29, p < .001), indicating that the model does not fit the data. However, when using large samples, it is normally impossible to fail to reject the null hypothesis. One way in which the size of the sample can be taken into consideration is by dividing the χ² value by the degrees of freedom. Values less than 5 indicate that the model does fit the data (Wheaton, Muthen, Alwin & Summers, 1977). For the present sample, the χ²/df ratio was 2.73, indicating good model-to-data fit.
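The relative chi-square adjustment is simple arithmetic on the reported values:

```python
# Chi-square divided by degrees of freedom, using the values reported above
chi_sq, df = 237.29, 87
ratio = chi_sq / df
print(round(ratio, 2))   # below the Wheaton et al. cutoff of 5, so fit is acceptable
```

The ratio discounts the chi-square for model complexity and sample size, which is why it can indicate acceptable fit even when the raw chi-square test rejects the model.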
Goodness-of-fit. The goodness-of-fit index (GFI) is another indication of how well the model fits the data. The current data produced a GFI value of .91, which indicates that the model fits the data. However, to adjust for model parsimony, an adjusted goodness-of-fit index (AGFI) is calculated, which should exceed .90 (Medsker et al., 1994). The value for the current sample was .87, which is just below the acceptable range.
Normed fit index. To examine the proportion of total variance accounted for by a model, the normed fit index (NFI) is used (Medsker et al., 1994). An acceptable value for the NFI is .90. The NFI for the three-factor model was .86, indicating that the model does a moderate job in accounting for the variance in the data.
Comparative fit index. The comparative fit index (CFI) is similar to the NFI, except that it overcomes the difficulties associated with sample size (Medsker et al., 1994). The CFI was .91, which is above the acceptable level of .90 (Mulaik et al., 1989).
Parsimonious normed fit index. The parsimonious normed fit index (PNFI) reflects the amount of covariance explained by a model taking into account the number of parameters (Renn & Vandenberg, 1995). If the same amount of construct covariance is explained by two models, but one model does so with fewer estimated parameters, the PNFI would be greater for this model. PNFI values over .60 are considered acceptable (Mulaik et al., 1989). A value of .72 was found for the present data, indicating adequate parsimony for the model.
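The normed and comparative fit indices discussed above are both defined relative to a null (independence) model. A sketch of the standard formulas, using a hypothetical null-model chi-square since the paper reports only the fitted indices:

```python
def nfi(chi_model, chi_null):
    """Normed fit index: proportion of the null (independence) model's
    chi-square accounted for by the hypothesized model."""
    return (chi_null - chi_model) / chi_null

def cfi(chi_model, df_model, chi_null, df_null):
    """Comparative fit index: like the NFI but based on noncentrality
    (chi-square minus df), which reduces sensitivity to sample size."""
    num = max(chi_model - df_model, 0.0)
    den = max(chi_model - df_model, chi_null - df_null, 0.0)
    return 1.0 - num / den if den else 1.0

# Model values from the text; the null-model chi-square (1700 on 105 df)
# is a hypothetical illustration, not a reported result
print(round(nfi(237.29, 1700.0), 2), round(cfi(237.29, 87, 1700.0, 105), 2))
```

The comparison makes the NFI/CFI contrast concrete: because the CFI subtracts the degrees of freedom before forming the ratio, it credits the model for expected sampling error and typically runs a bit higher than the NFI in moderate samples.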
In an effort to further validate POPS, a three-study investigation of the dimensionality, reliability, and validity of POPS was undertaken. In the first study, a comparison was made of the original three-factor model to a one-factor model. Results suggested that while the three-factor model produced a better model-to-data fit than the one-factor model, the fit of the three-factor model could be improved. Model fit can be improved by adding or deleting paths. Given the format of the model tested, adding paths meant adding new items to the scale, while deleting paths meant deleting items from the scale. Both of these possibilities were investigated in Study 2 and Study 3.
In Study 2, each item of the original scale was examined for its contribution to the overall scale. Six items that either cross loaded or did not load on the intended factor were removed. The reduced scale was then examined across four different samples to assess its stability. Results indicated good fit for the refined scale across each dataset as well as within each. In addition, the new six-item scale demonstrated both convergent and discriminant validity.
The final step was to create new items to add to the scale. In Study 3, fourteen new items were generated and examined for inclusion. Applying four decision criteria to the new items indicated that 9 items should be retained. The resulting fifteen-item scale was then examined for overall dimensionality by comparing it to a one-factor model as was done in Study 1. Results indicated that the three-factor model fit substantially better than the one-factor model. Further, the χ² difference test was significant, indicating that a three-factor model was more appropriate than a one-factor model. The three-factor model was then examined for overall model fit. Results suggested that the model-to-data fit was strong.
As with any empirical study, there are limitations which need to be mentioned. For example, using students as content adequacy raters may be considered a limitation. It is important to note, however, that content adequacy ratings require judges to compare the content of an item to a predetermined category. Therefore, the only skill required of judges is that they possess sufficient intellectual ability to perform the rating task, making college students a highly appropriate sample choice (Schriesheim et al., 1993). Furthermore, the final model was not confirmed on a second sample. Until this is done, it is impossible to determine whether the current results are only sample specific. Finally, the current study did not include the Survey of Perceived Organizational Support (SPOS) in its convergent and discriminant validity tests. Given that past research has shown this scale to correlate highly with POPS, future research efforts that examine POPS also should include SPOS.
Despite these limitations, the present study does make a contribution to the organizational politics literature. First, the study refined and extended a measure of perceptions of politics in the organization. This refinement included deleting items that did not perform as expected, as well as adding new items to produce a better fitting model. Additionally, a variety of datasets, all of which were large and representative of the organizations from which they were drawn, were used to refine the scale.
There are a variety of ways in which future researchers can focus their attention to further contribute to the field of organizational politics. One area in particular that is lacking is theoretical development. In 1989, Ferris, Russ and Fandt proposed three areas that could be examined to advance the field of organizational politics: conditions under which political behavior occurs, the specific types of political behavior enacted, and the antecedents and consequences of political behavior. Over the past several years, Ferris and his colleagues (Fandt & Ferris, 1990; Ferris et al., 1993; Ferris & Buckley, 1990; Ferris, Fedor, Chachere, & Pondy, 1989; Ferris et al., 1996; Ferris, Frink, Gilmore & Kacmar, 1994; Ferris & Judge, 1991; Ferris & Kacmar, 1992; Ferris & King, 1991) have been diligently working in this final area by verifying and testing a theoretical model of perceptions of organizational politics developed by Ferris, Russ and Fandt (1989). This model delineates a collection of antecedents and consequences of perceptions of organization politics, many of which have been found to hold. While this stream of research has been informative, more and different antecedents and consequences need to be explored. One way to expand the list offered by Ferris, Russ and Fandt (1989) is to ask individuals affected by organizational politics to describe the antecedents and consequences they view as relevant. Content analyzing responses from individuals who hold an assortment of positions in a variety of organizations should produce an array of new antecedents and consequences that can be used to augment Ferris, Russ and Fandt’s (1989) model.
Based upon all of the findings reported in this manuscript, the new version of POPS appears to be a reliable and valid measure of perceptions of organizational politics. However, some improvements still could be made. For example, some of the SMCs were low, indicating that the variance accounted for by certain individual items could be improved. In addition, the correlation between factor 1 and factor 2 was high (1.00). To examine whether the remaining two items for factor 1 should be included in factor 2, a χ² difference test was performed to compare a three-factor model to a two-factor model. Results (χ²(2) = 16.95, p < .001) indicated that factor 1 was unique and should be included as a separate factor. However, future research efforts should be undertaken to add additional items to this factor in an effort to distinguish it even more from factor 2.
Given that the lack of interest in organizational politics research may be attributed to the difficulty early researchers had defining, quantifying, and measuring this elusive phenomenon, the final scale resulting from this work is an important contribution to the organizational politics literature. Only when consensus is reached about what organizational politics is and how it should be measured will the field be advanced. The development and refinement of POPS is offered as a step toward reaching a meeting of the scholarly minds.
Table 5. Final POPS Scale Items
Factor 1: General Political Behavior
1. People in this organization attempt to build themselves up by tearing others down.
2. There has always been an influential group in this department that no one ever crosses.
Factor 2: Go Along to Get Ahead
3. Employees are encouraged to speak out frankly even when they are critical of well-established ideas.
4. There is no place for yes-men around here; good ideas are desired even if it means disagreeing with superiors.
5. Agreeing with powerful others is the best alternative in this organization.
6. It is best not to rock the boat in this organization.
7. Sometimes it is easier to remain quiet than to fight the system.
8. Telling others what they want to hear is sometimes better than telling the truth.
9. It is safer to think what you are told than to make up your own mind.
Factor 3: Pay and Promotion Policies
10. Since I have worked in this department, I have never seen the pay and promotion policies applied politically.
11. I can’t remember when a person received a pay increase or promotion that was inconsistent with the published policies.
12. None of the raises I have received are consistent with the policies on how raises should be determined.
13. The stated pay and promotion policies have nothing to do with how pay raises and promotions are determined.
14. When it comes to pay raise and promotion decisions, policies are irrelevant.
15. Promotions around here are not valued much because how they are determined is so political.
Acknowledgment: This manuscript was greatly improved by comments received from Martha Andrews, Dennis P. Bozeman, Charles Hofacker, Larry J. Williams, and three anonymous reviewers. The authors would like to thank J. Michael Whitfield for his technical help. Portions of this manuscript were presented at the 1994 National Academy of Management Meetings in Dallas, Texas.
Appendix
1. When it comes to pay raises and promotion decisions, policies are irrelevant.
2. Agreeing with powerful others is the best alternative in this organization.
4. Promotions around here are not valued much because how they are determined is so political.
5. I have seen changes made here that only serve the purposes of a few individuals, not the whole work unit or department.
6. Sometimes it is easier to remain quiet than to fight the system.
7. Favoritism, rather than merit, determines who gets good raises and promotions around here.
8. Telling others what they want to hear is sometimes better than telling the truth.
9. It is safer to think what you are told than to make up your own mind.
10. Inconsistent with organizational policies, promotions in this organization generally do not go to top performers.
11. None of the raises I have received are consistent with the policies on how raises should be determined.
12. This organization is not known for its fair pay and promotion policies.
13. Rewards such as pay raises and promotions do not go to those who work hard.
14. The stated pay and promotion policies have nothing to do with how pay raises and promotions are determined.
Allen, R.W., Madison, D.L., Porter, L.W., Renwick, P.A. & Mayes, B.T. (1979). Organizational politics: Tactics and characteristics of its actors. California Management Review, 22: 77-83.
Anderson, J.C. & Gerbing, D.W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103: 411-423.
Bagozzi, R.P. & Yi, Y. (1988). On the evaluation of structural equation models. Journal of the Academy of Marketing Science, 16: 74-94.
Campbell, D. (1966). Unpublished paper, Department of Psychology, Northwestern University.
Cropanzano, R.S., Kacmar, K.M. & Bozeman, D.P. (1995). Organizational politics, justice, and support: Their differences and similarities (pp. 1-18), in R.S. Cropanzano & K.M. Kacmar (Eds.), Organizational politics, justice, and support: Managing social climate at work. Westport, CT: Quorum Books.
Dean, D. (1961). Alienation: Its meaning and measurement. American Sociological Review, 25: 753-758.
Drory, A. & Romm, T. (1988). Politics in organization and its perception in the organization. Organization Studies, 9: 165-179.
Drory, A. & Romm, T. (1990). The definition of organizational politics: A review. Human Relations, 43: 1133-1154.
Eisenberger, R., Huntington, R., Hutchison, S. & Sowa, D. (1986). Perceived organizational support. Journal of Applied Psychology, 71: 500-507.
Fandt, P.M. & Ferris, G.R. (1990). The management of information and impressions: When employees behave opportunistically. Organizational Behavior and Human Decision Processes, 45: 140-158.
Farrell, D. & Petersen, J.C. (1982). Patterns of political behavior in organizations. Academy of Management Review, 7: 403-412.
Ferris, G.R., Brand, J.F., Brand, S., Rowland, K.M., Gilmore, D.C., King, T.R., Kacmar, K.M. & Burton, C.A. (1993). Politics and control in organizations (pp. 83-111), in E.J. Lawler, B. Markovsky, J. O’Brien, K. Heimer (Eds.), Advances in group processes (vol. 10). Greenwich, CT: JAI Press.
Ferris, G.R. & Buckley, M.R. (1990). Performance evaluation in high technology firms: Process and politics (pp. 243-263), in L.R. Gomez-Mejia & M.E. Lawless (Eds.), Organizational issues in high technology management. Greenwich, CT: JAI Press.
Ferris, G.R., Fedor, D., Chachere, J.G. & Pondy, L. (1989). Myths and politics in organizational contexts. Group & Organization Studies, 14: 88-103.
Ferris, G.R., Frink, D.D., Galang, M.C., Zhou, J., Kacmar, K.M. & Howard, J.L. (1996). Political work environments. Human Relations, 49: 233-266.
Ferris, G.R., Frink, D.D., Gilmore, D.C. & Kacmar, K.M. (1994). Understanding politics: Antidote for the dysfunctional consequences of organizational politics as a stressor. Journal of Applied Social Psychology, 24: 1204-1220.
Ferris, G.R. & Judge, T.A. (1991). Personnel/human resources management: A political influence perspective. Journal of Management, 17: 447-488.
Ferris, G.R. & Kacmar, K.M. (1992). Perceptions of organizational politics. Journal of Management, 18: 93-116.
Ferris, G.R. & King, T.R. (1991). Politics in human resource decisions: A walk on the dark side. Organizational Dynamics, 20: 59-71.
Ferris, G.R., Russ, G.S. & Fandt, P.M. (1989). Politics in organizations (pp. 143-170), in R.A. Giacalone & P. Rosenfeld (Eds.), Impression management in the organization. Hillsdale, NJ: Lawrence Erlbaum.
Frost, P.J. (1987). Power, politics, and influence. In F. Jablin, L. Putnam, K. Roberts, & L. Porter (Eds.), Handbook of organizational communication. Beverly Hills, CA: Sage.
Gandz, J. & Murray, V.V. (1980). The experience of workplace politics. Academy of Management Journal, 23: 237-251.
Humphreys, L.G. & Montanelli, R.G. (1975). An investigation of the parallel analysis criterion for determining the number of common factors. Multivariate Behavioral Research, 10: 193-205.
Joreskog, K.G. & Sorbom, D. (1993). LISREL 8: Structural equation modeling with the SIMPLIS command language. Hillsdale, NJ: SSI.
Kacmar, K.M. & Ferris, G.R. (1991). Perceptions of organizational politics scale (POPS): Development and construct validation. Educational and Psychological Measurement, 51: 193-205.
Kacmar, K.M. & Ferris, G.R. (1993). Politics at work: Sharpening the focus of political behavior in organizations. Business Horizons, 36: 70-74.
Kerlinger, F.N. (1986). Foundations of behavioral research (3rd ed.). New York: Holt, Rinehart, and Winston.
Kumar, P. & Ghadially, R. (1989). Organizational politics and its effect on members of organizations. Human Relations, 42: 305-314.
Lewin, K. (1936). Principles of topological psychology. New York: McGraw-Hill.
Madison, D.L., Allen, R.W., Porter, L.W., Renwick, P.A. & Mayes, B.T. (1980). Organizational politics: An exploration of managers’ perceptions. Human Relations, 33: 79-100.
Mayes, B.T. & Allen, R.W. (1977). Toward a definition of organizational politics. Academy of Management Review, 2: 672-678.
Medsker, G.J., Williams, L.J. & Holahan, P.J. (1994). A review of current practices for evaluating causal models in organizational behavior and human resources management research. Journal of Management, 20: 439-464.
Mintzberg, H. (1985). The organization as political arena. Journal of Management Studies, 22: 133-154.
Mulaik, S.A., James, L.R., Van Alstine, J., Bennett, N., Lind, S. & Stillwell, C.D. (1989). An evaluation of goodness-of-fit indices for structural equation models. Psychological Bulletin, 105: 430-445.
Nunnally, J.C. (1967). Psychometric theory. New York: McGraw-Hill.
Nye, L.G. & Witt, L.A. (1993). Dimensionality and construct validity of the perceptions of politics scale (POPS). Educational and Psychological Measurement, 53: 821-829.
Porter, L.W. (1976). Organizations as political animals. Presidential address, Division of Industrial-Organizational Psychology, 84th Annual Meeting of the American Psychological Association, Washington, DC.
Porter, L.W., Allen, R.W. & Angle, H.L. (1981). The politics of upward influence in organizations (pp. 109-149), in L.L. Cummings & B.M. Staw (Eds.), Research in organizational behavior (vol. 3). Greenwich, CT: JAI Press.
Renn, R.W. & Vandenberg, R.J. (1995). The critical psychological states: An underrepresented component in job characteristics model research. Journal of Management, 21: 279-303.
Rosenberg, M. (1957). Occupations and values. Glencoe, IL: The Free Press.
Schein, V.E. (1977). Individual power and political behaviors in organizations: An inadequately explored reality. Academy of Management Review, 2: 64-72.
Schriesheim, C.A., Powers, K.J., Scandura, T.A., Gardiner, C.C. & Lankau, M.J. (1993). Improving construct measurement in management research: Comments and a quantitative approach for assessing the theoretical content adequacy of paper-and-pencil survey-type instruments. Journal of Management, 19: 385-417.
Smith, P.C., Kendall, L.M. & Hulin, C.L. (1969). The measurement of satisfaction in work and retirement. Chicago: Rand McNally.
Streuning, E. & Richardson, A. (1965). A factor analytic exploration of the alienation, anomia, and authoritarianism domain. American Sociological Review, 30: 768-776.
Tushman, M.E. (1977). A political approach to organization: A review and rationale. Academy of Management Review, 2: 206-216.
Webster, H., Sanford, N. & Freeman, M. (1955). A new instrument for studying authoritarianism in personality. Journal of Psychology, 40: 73-85.
Wheaton, B.B., Muthen, B., Alwin, D.F. & Summers, G.F. (1977). Assessing reliability and stability in panel models. In D.R. Heise (Ed.), Sociological Methodology, San Francisco: Jossey-Bass.
Worchel, P. (1958). Personality factors in the readiness to express aggression. Journal of Clinical Psychology, 14: 355-359.
Wrightsman, L. (1964). Measurement of philosophies of human nature. Psychological Reports, 14: 743-751.
COPYRIGHT 1997 JAI Press, Inc.