The Center for Media and Public Affairs (CMPA) is a nonpartisan research and educational organization that conducts scientific studies of news coverage using content analysis. “Content analysis” is a social scientific method for producing an objective and systematic description of communicative material. To be scientific, such analysis requires explicit rules and procedures that minimize a researcher’s subjective predispositions. Categories and criteria are rigorously defined and applied consistently to all material. Each coding system must be reliable, meaning that additional researchers using the same criteria should reach the same conclusions. Because it is both systematic and reliable, content analysis allows research to transcend impressionistic generalizations, which are subject to individual preferences and prejudices.
CMPA researchers have honed their skills on a wide variety of projects since 1987, making them among the best-trained and most experienced practitioners of news media content analysis. Researchers examine news stories statement by statement, recording every overt opinion expressed by the reporter or by individuals quoted in the story. Each opinion is catalogued according to the source of the comment, its target, and the issue under discussion. Researchers do not assign overall positive or negative scores to entire stories, since such an approach fails to capture the nuances within each story. Individual statements are logged into a computerized database, allowing statistical analyses to fully describe the relationships among news sources, time periods, the focus of coverage, and the tone of coverage.
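To make the statement-level approach concrete, the record structure and aggregation described above can be sketched as follows. This is only an illustration, not CMPA's actual database schema: the field names (`source`, `target`, `issue`, `tone`) and the `tone_by_target` helper are assumptions drawn from the description in the text.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical statement-level record, assuming the fields named in the
# text: who spoke, who or what the opinion targets, the issue, and the tone.
@dataclass
class Statement:
    source: str   # reporter or quoted individual who expressed the opinion
    target: str   # person or entity the opinion is about
    issue: str    # topic under discussion
    tone: str     # "positive" or "negative"

def tone_by_target(statements):
    """Tally positive and negative statements for each target,
    letting tone be described at the statement level rather than
    as a single score for the whole story."""
    tallies = {}
    for s in statements:
        tallies.setdefault(s.target, Counter())[s.tone] += 1
    return tallies

# Illustrative data only; the names and issues are invented.
sample = [
    Statement("reporter", "Candidate A", "economy", "negative"),
    Statement("analyst", "Candidate A", "economy", "positive"),
    Statement("reporter", "Candidate B", "health care", "positive"),
]
print(tone_by_target(sample))
```

Keeping each opinion as its own row is what makes the later cross-tabulations (by source, time period, topic, and tone) possible, whereas a single per-story score would discard that detail.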
Depending on the length and breadth of the study, CMPA’s codebooks (which contain the categories and rules for coding) range from 100 to 300 pages and include 20 to 50 analytic variables. Research assistants are trained for 150 to 200 hours before they begin work on a project. During training, researchers code sets of stories, and their work is compared with that of previous coders until a minimum reliability level of 80% is reached on all variables. That is, new coders must reach the same conclusions as their counterparts at least four times out of five. For most variables, the level of agreement is much higher.
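The 80% reliability threshold is a simple percent-agreement calculation between two coders. A minimal sketch, assuming each coder's judgments for the same set of statements are stored in parallel lists (the function name and sample data are invented for illustration):

```python
def percent_agreement(coder_a, coder_b):
    """Fraction of items on which two coders made the same judgment."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("need two equal-length, non-empty lists of codes")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# A trainee disagrees with an experienced coder on 1 of 5 statements:
trainee = ["positive", "negative", "positive", "neutral", "positive"]
veteran = ["positive", "negative", "negative", "neutral", "positive"]
print(percent_agreement(trainee, veteran))  # 0.8, exactly four out of five
```

In practice, reliability checks often use chance-corrected statistics (such as Cohen's kappa) alongside raw agreement, but the four-out-of-five rule stated in the text corresponds to this raw percent-agreement measure.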