Analyzing Research Methods in Mass Communication

Jose Arrona
6 min read · Apr 3, 2022

The field of mass communication has been continuously evolving. Driven by technological change, research in this field has been propelled by the need to understand audiences, the effects of media, and everything in between. So, how exactly are mass communication hypotheses and theories evaluated and verified? This blog post will shed some light on the two principal research methods often employed by mass communication researchers: quantitative and qualitative methods. We'll do this by examining two peer-reviewed mass communication studies and highlighting the notable contributions they make to the field.

Quantitative versus qualitative research.
Image Source: Reveall

Quantitative

As the name implies, quantitative research uses numbers and measurements in its methodology. More specifically, quantitative research focuses on the “exploration of numerical patterns” and discovering relationships among these numerical data sets. Quantitative data is concrete and objective. Some commonly used quantitative methodologies include experiments, surveys, and content analyses.

Analyzing the article Why do so few people share fake news? It hurts their reputations, from the journal New Media & Society, should give us a better understanding of the characteristics found in quantitative studies. The first part of the article provides background information, including a definition of fake news and statistical data on the sharing of misinformation online. The authors posit that despite the belief that misinformation is a rampant problem, statistics show that relatively little misinformation is actually shared online, and they attribute most of what does circulate to a small number of actors. Using this information and previous empirical and experimental studies, the study's authors develop seven hypotheses and test them across four experiments.

In the first experiment, the authors address Hypothesis 1: a good reputation is more easily lost than gained. The experiment's methodology was simple: 1,040 participants received one of four combinations of politically neutral news stories to read:

  • Three real news stories
  • Three fake news stories
  • Three real news stories and one fake news story (fake story presented last)
  • Three fake news stories and one real news story (real story presented last)

Participants were told the stories originated from a fabricated news source or were shared by an individual on social media. After reading all the stories, participants completed a survey using a Likert scale to rate the reliability of the stories' source. Respondents also used a Likert scale to indicate how likely they were to revisit (or pay attention to) this source again. The experiment sought to correlate these two variables to understand how perceived reliability affected a person's willingness to interact with the source again. A significant decline in trust was observed in participants given the three-real-one-fake combination compared to those who received only real stories, while the three-fake-one-real combination showed only a slight increase in trust over the all-fake combination. The second experiment mimicked the first but used politically biased content. The third and fourth experiments used Likert scales to explore how much money individuals would want to be paid to share false information anonymously or from their personal accounts (and whether political impact affected this). Following each experiment, the authors provide detailed results, and they conclude the article with a discussion of the findings.
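To make the correlation step concrete, here is a minimal sketch of how two Likert-scale measures might be correlated. This is not the authors' actual analysis, and the responses below are invented for illustration:

```python
# Hypothetical illustration: correlating two Likert-scale measures,
# as in the kind of analysis the first experiment calls for.
# The response data below are fabricated, not the study's data.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# 1 = "not at all", 5 = "very much" on both scales (made-up responses)
reliability = [5, 4, 4, 2, 1, 3, 5, 2]   # perceived reliability of the source
revisit     = [5, 5, 4, 2, 2, 3, 4, 1]   # willingness to revisit the source

print(round(pearson_r(reliability, revisit), 2))  # → 0.87
```

A strong positive coefficient like this would suggest that the less reliable a source seems, the less willing people are to return to it, which is exactly the relationship the experiment probes.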

Likert scales are often used to assign numerical values to non-numerical variables | Image from Opinion Stage

Experiments offer the advantage of giving researchers a controlled environment to test within. In this first experiment, all individuals received politically neutral news articles, removing any political biases that could cloud their judgment of perceived trustworthiness. The first and second experiments serve as a foundation; they confirmed the researchers' belief that sharing misinformation can damage someone's reputation. The experiments in the article were simple but effectively designed to find a correlation between sharing fake news and the perceived toll it may inflict on one's reputation (although the later experiments examine this in terms of the compensation needed to share misinformation). One notable pitfall of the study is that participants' income levels are not considered: a sum of $100 may be significant to some respondents and insignificant to others. Empirical studies like these allow other researchers to critique, re-evaluate, and possibly disprove or reaffirm the findings.

Qualitative

In qualitative studies, the focus shifts to comprehending the perspectives and understandings of people. These studies focus on subjective interpretations of events experienced by people (subjects) in their natural settings. Some of the tools often employed in qualitative research include interviews, field observations, and focus groups. To better understand qualitative research, we'll explore an article in the Journal of Media Practice titled Sourcing practices in online journalism: an ethnographic study of the formation of trust in and the use of journalistic sources. The article takes a closer look at online journalists, focusing on which sources they use and how they rationalize that use. To accomplish this, the researchers take an ethnographic approach, conducting field observations and interviews to answer their research questions.

From The Sweet Spot on YouTube.

The article begins by providing background on the context of online journalism. It outlines many pitfalls of online sources and describes the working environment the authors perceive for online journalists (understaffed newsrooms, overworked journalists, and short deadlines). Next, the authors describe the methodology employed in the study: ethnographic observations coupled with interviews of fifteen Finnish online journalists. Researchers observed each journalist for one full workday at their workplace. The participants knew they were part of a study but did not know its focus. The researchers examined the journalists' finished products, which sources they used (or considered using), and how they obtained them. Work the journalists had started before the observation period was disregarded, as were non-journalistic tasks. Researchers also noted the amount of time spent on each task. In the interviews, participants were asked whether this was a typical workday for them and whether they had anything to add. Most respondents were young (27–30 years old) and had five years or less of journalistic experience.

Finally, the article presents the findings. The researchers were surprised at the fragmented structure of the journalists' workday. Journalists would bounce from task to task (illustrating stories written by others, editing and re-purposing stories from one medium to another, moderating online comment forums, and answering telephone calls). Researchers noted that half the day was lost to non-journalistic work. The average number of stories produced was 7.5, meaning journalists spent an average of 28 minutes on each story. Lacking the time and resources to leave the newsroom, most journalists were bound to online sources: a quarter of the stories relied solely on press releases, and another quarter relied on a single other media outlet. Stories that did combine sources still leaned on one main source, supplemented by telephone or email correspondence with the author or source for elaboration. Finally, the researchers noted a lack of editorial oversight in most offices. In their findings, the researchers outlined five “trust discourses” that journalists used to justify their use of a particular source (i.e., how they picked the source):

  • Ideological trust — trustworthy by default (e.g., police press releases)
  • Pragmatic trust — some reservation; trustworthy enough
  • Cynically pragmatic trust — high level of distrust, trustworthiness is irrelevant
  • Consensual trust — evaluation of the information contained in the source; trustworthy because the source confirms the data
  • Contextual trust — the source qualities were evaluated, only deemed trustworthy if the circumstances allowed it (researchers consider this the correct method to use)
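As a quick sanity check on the 28-minutes-per-story figure reported above, the arithmetic works out if we assume a roughly seven-hour workday. That workday length is my assumption for illustration; the study as summarized here only says that half the day went to non-journalistic work:

```python
# Back-of-the-envelope check of the 28-minutes-per-story figure.
# Assumption (mine, not the study's): a 7-hour workday.
workday_minutes = 7 * 60      # 420 minutes in the assumed workday
journalistic_share = 0.5      # half the day was lost to non-journalistic work
stories_per_day = 7.5         # average output reported in the study

minutes_per_story = workday_minutes * journalistic_share / stories_per_day
print(minutes_per_story)      # → 28.0
```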

Ethnographic research studies like this one contribute to the field of mass communication because they look at actual events as they unfold in the field. It would be impossible to replicate the environmental constraints online journalists face in a controlled study. That said, the study has some shortcomings. The authors themselves admit that the sample size is too small for generalization. Further, I would argue that the observation period was too short to determine whether these environmental factors are commonplace. Despite these pitfalls, qualitative studies allow researchers to develop subjective viewpoints that open channels for further dialogue and discussion.
