Radiologic Technology

Understanding Research

Did your professor tell you that you had to use certain types of articles for your research topic? Learn the distinguishing characteristics of the types of articles by navigating to the following sections:

Journals, Periodicals, & Magazines: What's the Difference?

The terms journal, magazine, and periodical are often used interchangeably, but they do not all refer to the same type of publication.

  • Journals consist of scholarly and/or peer-reviewed articles with references and citations; they are authoritative and carry more academic credibility.
  • Magazines contain articles that are non-scholarly, are not peer-reviewed, and have less academic credibility. Magazines are considered "popular" publications.
  • Periodicals are any type of publication that publishes articles on a regular basis (daily, weekly, monthly, quarterly, etc.). Examples include journals, magazines, and newspapers.

Both magazines and journals serve a purpose in research. Depending on your topic and guidelines from your professor, you may need to utilize one or both of these types of periodicals.

Scholarly, Peer Reviewed & Non-Peer Reviewed Articles: What's the Difference?

Scholarly articles are written by an expert in the subject matter for an audience of other experts.


Peer-reviewed articles are also written by an expert in the subject matter for an audience of other experts. Additionally, these articles go through an extensive peer review process by other experts before publication. How does the process work?

  • After conducting research (often but not always original research) an expert in a field writes an article and submits it to a journal for publication.
  • If the editor(s) believe the article is a good fit, it is sent to other subject matter experts for peer review. Reviewers usually do not know the name of the article's author (blind peer review).
  • Once the review process is complete, the article is returned to the journal editors and either rejected or approved for publication in the journal (pending any changes the editors require based on the peer reviewed feedback).
  • Note that just because an article is "scholarly" or "academic" in nature does not mean it is also peer-reviewed.

Non-peer-reviewed articles are articles that were not reviewed by the author(s)' peers prior to publication. Articles of this type appear in magazines, but there are also non-peer-reviewed scholarly academic journals.

How Can I Identify a Peer-Reviewed Article?

There are several ways to check whether the article you are looking at is peer-reviewed:

  • If you obtain the article from one of the databases MCC Libraries has access to, you can filter your search results to show only peer-reviewed articles.
  • Check the electronic record. In EBSCO and Gale databases, the electronic record will tell you if the article has been peer-reviewed (sometimes called "refereed").
  • Go to the website of the journal the article appears in to see if it is a peer-reviewed publication.
  • Ask your librarian.

Research Articles vs. Review Articles

 

There are two types of scholarly publications: research articles and review articles. Please note that both of these types of articles can be peer-reviewed. Do not assume that because an article is peer-reviewed, it is a research article.


Research articles are a type of scholarly, peer-reviewed article. In a research article, the author(s) performed original research by conducting an experiment or study. Research articles follow a specific format. Sometimes the sections may be labeled differently, but the basic elements below are consistent:

  • Abstract: A brief summary of the article.  
  • Introduction: Introduces the problem, explains why it is important, and outlines the background, purpose, and hypotheses the authors are trying to test. The introduction is usually not labeled and typically contains the following elements (also usually unlabeled):
    • Literature Review: Summarizes and analyzes previous research related to the problem being investigated.
    • Hypothesis or Specific Question: Often (but not always) in quantitative and mixed methods studies, specific questions or hypotheses are stated just before the methodology.
  • Methods: Researchers indicate who or what was studied (the source of data), the methods used to gather information (how the experiment or study was conducted), and a summary of the procedures. This section often contains charts, graphs, or tables, and it is important not to skip over reading them.
  • Results (findings): Summarizes the data and describes how it was analyzed. It should be sufficiently detailed to justify the conclusions. 
  • Discussion: The author(s) explain how the data fits their original hypothesis, state their conclusions, and look at the theoretical and practical implications of their research. This section is sometimes labeled "Interpretation" or "Analysis."
  • Conclusion: A summary statement that reflects the overall answers to the research questions. Implications and recommendations for future research are also included in this section. This section is not always labeled.

In short, research articles are articles where an original experiment or study was conducted and will typically contain the following section headings: Methods, Results, Discussion, and Conclusion. See Anatomy of a Research Article.

Example: Skin Cancer Prevention Behaviors Among Parents of Young Children


Review articles summarize current or existing research on a topic. The author(s) did not conduct an experiment or study, and as a result review articles are typically not broken into the same kinds of sections as research articles (but don't assume this is always the case).

  • Will contain an abstract (summary of the article).
  • The core of a review article is summarizing experiments or studies that other researchers performed.
  • Review articles are a great way to quickly learn about new research in a particular field. It is important to remember that review articles can be peer-reviewed; just because you have a peer-reviewed article, don't assume it must be a research article.
  • Review articles are often labeled as such in the title or at the top of the article.

In short, review articles are articles in which new research in a field is summarized and no original experiment or study was conducted. If an article does not have methods, results, and discussion sections, it is likely a review article.

Example: Dissecting Kawasaki Disease: A State of the Art Review

Permission to use Research vs. Review Articles provided by the creator, Jennifer Lee of University of Calgary Libraries. Copyright © 2014.

What are Variables in Research?

 

One of the most commonly used terms in research is variable. A variable is a measurable or quantifiable characteristic of a concept, person, object or phenomenon that can take different values, numerically or categorically. Both quantitative and qualitative variables are used in research.

Make sure to test your knowledge at the end with the Independent vs. Dependent Variables Quiz.

Measurement is assigning numbers to indicate different values of a trait, characteristic, or other unit that is being investigated as a variable. The purpose is to quantitatively describe the variables and units of study being investigated.

 

Quantitative Variables are variables whose values result from measuring or counting something. Examples include height, age, weight, blood pressure, and number of items sold in a store.

Qualitative Variables (Categorical Variables) are variables that express a quality and whose values do not result from measuring or counting something. Instead, they describe data that fit into categories. Examples include hair color, eye color, and horse breeds.

Types of variables:

Independent variable - The variable used to describe or measure the factors that are assumed to cause, or at least influence, the other variable. The independent variable is given to the participants, usually for some specified time period.

It is the variable manipulated or changed by the person(s) conducting the study or experiment. 

Dependent Variable - The variable that gets modified under the influence of the independent variable. A good way of looking at it: the dependent variable is "dependent" on the independent variable. As the independent variable is changed, you can observe the changes (if any) in the dependent variable.

Control Variable - A variable or variables that are kept the same during all aspects of an experiment. The only variable the person conducting the experiment wants to change is the independent variable.
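To make these three roles concrete, here is a minimal Python sketch of a made-up imaging experiment; all names and numbers (the exposure times, tube voltage, detector type, and the quality formula) are hypothetical and serve only to show which role each variable plays.

```python
# Hypothetical example: roles of variables in a made-up imaging experiment.
# All values below are invented for illustration, not real data.

exposure_times = [0.5, 1.0, 1.5, 2.0]   # independent variable: changed by the experimenter (quantitative)
tube_voltage_kvp = 80                   # control variable: deliberately kept the same in every trial
detector_type = "digital"               # qualitative (categorical) variable: a category, not a count

def measure_image_quality(exposure_s, voltage_kvp):
    """Stand-in for the measurement step; a real study would record observed data."""
    return round(50 + 20 * exposure_s, 1)

# Dependent variable: observed for each value of the independent variable.
for t in exposure_times:
    quality = measure_image_quality(t, tube_voltage_kvp)
    print(f"exposure={t} s, voltage={tube_voltage_kvp} kVp, detector={detector_type} -> quality={quality}")
```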


 

What is Sampling in Research?

Researchers commonly examine traits or characteristics (parameters) of populations in their studies. A population is a group of individual units with some commonality. 

For example, a researcher may want to study characteristics of female smokers in the United States. This would be the population being analyzed in the study, but it would be impossible to collect information from all female smokers in the U.S. Therefore, the researcher would select individuals from which to collect the data. This is called sampling.

Random Sampling Explained:

Simple Random: Every numbered population element has an equal probability of being selected.

Example: In a study by Pimenta et al, researchers obtained a list of all elderly people enrolled in the Family Health Strategy and, by simple random sampling, selected a sample of 449 participants.

Systematic Sampling: A list of members of the population is used, and every nth element on the list is selected, typically starting from a randomly chosen point.

Example: In the study by Kelbore et al, children seen at the Pediatric Dermatology Service were selected to evaluate factors associated with atopic dermatitis, with every second child selected in order of consultation.

Stratified Random: Elements are selected randomly from strata that divide the population. 

Example: A South Australian study investigated factors associated with vitamin D deficiency in preschool children. Using the national census as the sampling frame, households were randomly selected in each stratum, and all children in the age group of interest identified in the selected households were investigated.

Complex (Cluster) Sampling: Groups (clusters) of similar size are identified, a number of clusters are selected randomly, and the participants in each selected cluster make up the sample.

Example: Five of 25 city blocks, each containing a high percentage of low socioeconomic status families, are selected randomly, and parents in each selected block are surveyed.
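As a rough illustration, the short Python sketch below shows how each of these four random strategies might be carried out on an invented population of 100 participant IDs; the population, strata, cluster sizes, and sample sizes are all hypothetical.

```python
import random

# Hypothetical population of 100 participant IDs, split into two invented strata.
population = list(range(1, 101))
strata = {"smokers": population[:40], "non_smokers": population[40:]}

# Simple random sampling: every element has an equal chance of being selected.
simple_sample = random.sample(population, k=10)

# Systematic sampling: choose a random starting point, then take every nth element.
n = 10
start = random.randrange(n)
systematic_sample = population[start::n]

# Stratified random sampling: draw randomly from each stratum separately.
stratified_sample = {name: random.sample(members, k=5) for name, members in strata.items()}

# Cluster sampling: randomly choose whole groups (clusters) and keep all their members.
clusters = [population[i:i + 20] for i in range(0, 100, 20)]   # five clusters of 20
cluster_sample = [person for block in random.sample(clusters, k=2) for person in block]

print(simple_sample, systematic_sample, stratified_sample, cluster_sample, sep="\n")
```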


Non-Random Sampling Explained:

Convenience: A sample of participants who are convenient for the researchers to recruit.

Example: Study participants are recruited because they are easy for the researchers to reach (for example, patients attending the clinic where the study is conducted); in many clinical trials, such conveniently recruited participants are then randomly allocated to the intervention or control group.

Purposeful: Participants are selected by researchers based on specific criteria in order to fulfill the study's objective.

Example: Women between the ages of 40 and 60, diagnosed with rheumatoid arthritis and Sjogren's syndrome, were selected to participate in the study.

Quota: A population is first segmented into mutually exclusive sub-groups. Researcher judgment is then used to select study participants from each sub-group, based on a specified proportion. 

Example: A combination of vemurafenib and cobimetinib versus placebo was tested in patients with locally advanced melanoma. The study recruited 495 patients from 135 health centers located in several countries.

Snowball: The researcher selects an initial group of individuals; these participants then refer other potential members with similar characteristics to take part in the study.

Example: Snowball sampling is frequently used in studies investigating special populations, such as illicit drug users, as in the study by Gonçalves et al, which assessed 27 users of cocaine and crack in combination with marijuana.

List of Research Terms

Case Study: In-depth study of a case (can be a program, an event, activity, or individual) studied over time using multiple sources of information (e.g. observations, documents, archival data, interviews). 

Credibility: Researcher's ability to demonstrate that the object of a study is accurately identified and described, based on the way in which the study was conducted.

Data Analysis: Systematic process of working with data. 

Deductive Reasoning: Form of reasoning in which conclusions are formulated about particulars from general or universal premises. Sometimes referred to as top-down logic.

Dependent Variable: Variable that varies due to the impact of the independent variable. In other words, its value “depends” on the value of the independent variable. 

Empirical Research: Process of developing systematized knowledge gained from observations formulated to support insights and generalizations about the phenomena being researched.

Error Bar: Representation of the variability of data, used on graphs to indicate the error or uncertainty in a reported measurement. In other words, error bars give a general idea of how precise a measurement is or, conversely, how far from the reported value the true value might be.

External Validity: Extent to which the results of a study can be generalized or applied to the population at large.

Graph: Diagram or chart showing the relationship between variable quantities, typically of two variables, each measured along one of a pair of axes at right angles. Graphs are used for understanding large amounts of data. Types of graphs include bar graphs, line graphs, scatter graphs (scatter plots or scatter charts), pie charts, histograms, and many more.

Hypothesis: Hypotheses (plural) are educated "guesses" or expectations about a solution to a problem, possible relationships between variables, or variable differences. A hypothesis is the investigator's prediction or expectation, made prior to data collection, of what the results will show.

Complex Hypothesis: Predicts the relationship between two or more independent variables, and two or more dependent variables.

Null Hypothesis: Used when the researcher believes there is no relationship between two variables, or when there is inadequate theoretical or empirical information to state a research hypothesis.

Simple Hypothesis: Predicts the relationship between a single independent variable and a single dependent variable. 

Inductive Reasoning: Form of reasoning in which a general conclusion is formulated from particular instances. Sometimes referred to as bottom-up logic.

Independent Variable: Variable that is not impacted by the dependent variable, and that itself impacts the dependent variable. 

Internal Validity: Rigor with which a study was conducted [e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and was not measured]. 

Margin of Error: Allowed or accepted deviation from the target or a specific value. The allowance for slight error or miscalculation or changing circumstances in a study.

Measurement: Assigning numbers to indicate different values of a trait, characteristic, or other unit that is being investigated as a variable.

Methodology: Theory or analysis of how research does and should proceed.

Method: Systematic approach to a process. It includes steps of procedure, application of techniques, systems of reasoning or analysis, and the modes of inquiry employed by a discipline.

Mixed-Method: Research approach that uses two or more methods from both the quantitative and qualitative research categories. 

Peer-Review: Process in which the author of a work has their work evaluated by other experts in the field prior to publication; peer review is most often done anonymously (blind).

Population: Target group under investigation. Samples are drawn from populations.

Probability: Chance that a phenomenon will occur randomly. 

Reliability: Degree to which a measure yields consistent results. Reliability is a prerequisite for validity. An unreliable indicator cannot produce trustworthy results.

Rigor: Degree to which research methods are scrupulously and meticulously carried out. 

Sample: Population researched in a particular study. Usually, attempts are made to select a "sample population" that is considered representative of groups of people to whom results will be generalized or transferred. Samples can be random or non-random.

Non-Random Sample: Occurs when members of the sampling frame are selected by factors other than random chance.

Random Sample: Occurs when all members of the sampling frame have an equal opportunity of being selected for the study.

♦ For specific examples, see What is Sampling in Research?

Sensitivity: Ability of a test to correctly identify those with the disease (true positive rate).

Specificity: Ability of a test to correctly identify those without the disease (true negative rate).

Standard Deviation: Measure of variation that indicates the typical distance between the scores of a distribution and the mean. 

Statistical Analysis: Application of statistical processes and theory to the compilation, presentation, discussion, and interpretation of numerical data.

Statistical Test: Researchers use statistical tests to make quantitative decisions about whether a study's data indicate a significant effect from the intervention and allow the researcher to reject the null hypothesis. Most researchers treat a significance value (p value) of .05 or less as statistically significant, meaning that results at least this extreme would be expected less than 5% of the time if the null hypothesis were true (see the code sketch at the end of this list).

Table (Data Table): Representation of data or information in rows and columns. Tables are used for keeping track of large amounts of data (quantities, names, numbers, and other details).

Testing: Act of gathering and processing information about individuals' ability, skill, understanding, or knowledge under controlled conditions.

Validity: Degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. A method can be reliable, consistently measuring the same thing, but not valid.

Variable: A measurable or quantifiable characteristic of a concept, person, object, or phenomenon that can take different values, numerically or categorically.
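Several of the quantitative terms above (standard deviation, sensitivity, specificity, and statistical test) can be illustrated with a short Python sketch. All numbers below are invented, and the t-test assumes the third-party SciPy package is available.

```python
import statistics
from scipy import stats   # third-party package, assumed installed; used only for the t-test

# Standard deviation: typical distance of scores from the mean (invented sample).
scores = [72, 75, 78, 80, 85, 90, 91]
print(f"mean={statistics.mean(scores):.1f}, standard deviation={statistics.stdev(scores):.1f}")

# Sensitivity and specificity from a hypothetical 2x2 table of diagnostic test results.
tp, fn = 90, 10            # diseased patients: correctly flagged / missed by the test
tn, fp = 80, 20            # healthy patients: correctly cleared / wrongly flagged
print(f"sensitivity={tp / (tp + fn):.2f}")   # true positive rate
print(f"specificity={tn / (tn + fp):.2f}")   # true negative rate

# Statistical test: compare two invented groups and check the p value against .05.
group_a = [5.1, 5.4, 5.8, 6.0, 6.3]
group_b = [4.2, 4.5, 4.8, 5.0, 5.1]
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"p={p_value:.3f}, reject null hypothesis: {p_value < 0.05}")
```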

Hierarchy of Study Design

This page is designed to give you an understanding of different types of clinical medical studies and how they relate to each other.

Study designs can be thought of as a pyramid. Case reports are the first articles published on new topics, so they make up the base of the pyramid. As we progress up the pyramid, the studies become more evidence-based and less numerous. Meta-analyses are at the top of the pyramid because they can only be written after a great deal of other research has been done on a topic. There are far fewer of them, but they offer very strong evidence.

                                 

The links below take you to a brief definition of each design along with a real-life example.

Case Report

Cohort Study

Randomized Controlled Trial (RCT)

Systematic Review

Meta-Analysis

Case Report

An article that describes and interprets an individual case, often written in the form of a detailed story. Case reports often describe:

  • Unique cases that cannot be explained by known diseases or syndromes
  • Cases that show an important variation of a disease or condition
  • Cases that show unexpected events that may yield new or useful information
  • Cases in which one patient has two or more unexpected diseases or disorders

Case reports are considered the lowest level of evidence, but they are also the first line of evidence, because they are where new issues and ideas emerge. This is why they form the base of our pyramid. A good case report will be clear about the importance of the observation being reported.

If multiple case reports show something similar, the next step might be a case-control study to determine if there is a relationship between the relevant variables.

Advantages

  • Can help in the identification of new trends or diseases
  • Can help detect new drug side effects and potential uses (adverse or beneficial)
  • Educational – a way of sharing lessons learned
  • Identifies rare manifestations of a disease

Disadvantages

  • Cases may not be generalizable
  • Not based on systematic studies
  • Causes or associations may have other explanations
  • Can be seen as emphasizing the bizarre or focusing on misleading elements

Example

Hymes, K. B., Cheung, T., Greene, J. B., Prose, N. S., Marcus, A., Ballard, H., William, D. C., & Laubenstein, L. J. (1981). Kaposi's sarcoma in homosexual men: A report of eight cases. Lancet, 2(8247), 598-600.

This case report was published by eight physicians in New York City who had unexpectedly seen eight male patients with Kaposi's sarcoma (KS). Prior to this, KS was very rare in the U.S. and occurred primarily in the lower extremities of older patients. These patients were decades younger, had generalized KS, and had a much lower rate of survival. This was before the discovery of HIV or the use of the term AIDS, and this case report was one of the first published items about AIDS patients.

Cohort Study

A study design in which one or more samples (called cohorts) are followed prospectively, and subsequent status evaluations with respect to a disease or outcome are conducted to determine which of the participants' initial exposure characteristics (risk factors) are associated with it. As the study is conducted, outcomes from participants in each cohort are measured and relationships with specific characteristics are determined.

Advantages

  • Subjects in cohorts can be matched, which limits the influence of confounding variables
  • Standardization of criteria/outcome is possible
  • Easier and cheaper than a randomized controlled trial (RCT)

Disadvantages

  • Cohorts can be difficult to identify due to confounding variables
  • No randomization, which means that imbalances in patient characteristics could exist
  • Blinding/masking is difficult
  • Outcome of interest could take time to occur

Example

Lao, X., Liu, X., Deng, H., Chan, T., Ho, K., Wang, F., ... Yeoh, E. (2018). Sleep Quality, Sleep Duration, and the Risk of Coronary Heart Disease: A Prospective Cohort Study With 60,586 Adults. Journal of Clinical Sleep Medicine, 14(1), 109-117. https://doi.org/10.5664/jcsm.6894

This prospective cohort study explored "the joint effects of sleep quality and sleep duration on the development of coronary heart disease." The study included 60,586 participants and an association was shown between increased risk of coronary heart disease and individuals who experienced short sleep duration and poor sleep quality. Long sleep duration did not demonstrate a significant association. 

Randomized Controlled Trial (RCT)

A study design that randomly assigns participants into an experimental group or a control group. As the study is conducted, the only expected difference between the control and experimental groups in a randomized controlled trial (RCT) is the outcome variable being studied.
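As a simplified sketch of the random-assignment step only (real trials use dedicated randomization software, allocation concealment, and often stratification), the following hypothetical Python example shuffles invented participant IDs into two groups.

```python
import random

# Hypothetical participant IDs; real trials randomize with dedicated software.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)              # random order, so neither group is hand-picked
half = len(participants) // 2
experimental_group = participants[:half]  # receives the intervention being tested
control_group = participants[half:]       # receives placebo or usual care

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```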

Advantages

  • Good randomization will "wash out" any population bias
  • Easier to blind/mask than observational studies
  • Results can be analyzed with well known statistical tools
  • Populations of participating individuals are clearly identified

Disadvantages

  • Expensive in terms of time and money
  • Volunteer biases: the population that participates may not be representative of the whole
  • Loss to follow-up attributed to treatment

Example

van der Horst, N., Smits, D., Petersen, J., Goedhart, E., & Backx, F. (2015). The preventive effect of the Nordic hamstring exercise on hamstring injuries in amateur soccer players: A randomized controlled trial. The American Journal of Sports Medicine, 43(6), 1316-1323. https://doi.org/10.1177/0363546515574057

This article reports on the research investigating whether the Nordic Hamstring Exercise is effective in preventing both the incidence and severity of hamstring injuries in male amateur soccer players. Over the course of a year, there was a statistically significant reduction in the incidence of hamstring injuries in players performing the NHE, but for those injured, there was no difference in severity of injury. There was also a high level of compliance in performing the NHE in that group of players.

Systematic Review

A document often written by a panel that provides a comprehensive review of all relevant studies on a particular clinical or health-related topic/question. The systematic review is created after reviewing and combining all the information from both published and unpublished studies (focusing on clinical trials of similar treatments) and then summarizing the findings.

Advantages

  • Exhaustive review of the current literature and other sources (unpublished studies, ongoing research)
  • Less costly to review prior studies than to create a new study
  • Less time required than conducting a new study
  • Results can be generalized and extrapolated into the general population more broadly than individual studies
  • More reliable and accurate than individual studies
  • Considered an evidence-based resource

Disadvantages

  • Very time-consuming
  • May not be easy to combine studies

Example 

Parker, H. W., & Vadiveloo, M. K. (2019). Diet quality of vegetarian diets compared with nonvegetarian diets: A systematic review. Nutrition Reviews. https://doi.org/10.1093/nutrit/nuy067

This systematic review was interested in comparing the diet quality of vegetarian and non-vegetarian diets. Twelve studies were included. Vegetarians more closely met recommendations for total fruit, whole grains, seafood and plant protein, and sodium intake. In nine of the twelve studies, vegetarians had higher overall diet quality compared to non-vegetarians. These findings may explain better health outcomes in vegetarians, but additional research is needed to remove any possible confounding variables.

Meta-Analysis

A subset of systematic reviews; a method for systematically combining pertinent qualitative and quantitative study data from several selected studies to develop a single conclusion that has greater statistical power. This conclusion is statistically stronger than the analysis of any single study, due to increased numbers of subjects, greater diversity among subjects, or accumulated effects and results.

Meta-analysis would be used for the following purposes:

  • To establish statistical significance with studies that have conflicting results
  • To develop a more correct estimate of effect magnitude
  • To provide a more complex analysis of harms, safety data, and benefits
  • To examine subgroups with individual numbers that are not statistically significant

If the individual studies utilized randomized controlled trials (RCTs), combining several selected RCT results provides the highest level of evidence on the evidence hierarchy, followed by systematic reviews, which analyze all available studies on a topic.
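To show the general idea of combining study results, here is a minimal Python sketch of fixed-effect (inverse-variance) pooling, one common meta-analytic approach; the study names, effect sizes, and standard errors are all invented for illustration.

```python
import math

# Hypothetical per-study results: effect sizes and standard errors (invented numbers).
studies = [
    {"name": "Study A", "effect": 0.30, "se": 0.10},
    {"name": "Study B", "effect": 0.10, "se": 0.15},
    {"name": "Study C", "effect": 0.25, "se": 0.08},
]

# Fixed-effect (inverse-variance) pooling: each study is weighted by 1 / variance,
# so larger, more precise studies contribute more to the combined estimate.
weights = [1 / (s["se"] ** 2) for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled_effect:.3f} (standard error {pooled_se:.3f})")
```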

Advantages

  • Greater statistical power
  • Confirmatory data analysis
  • Greater ability to extrapolate to general population affected
  • Considered an evidence-based resource

Disadvantages

  • Difficult and time consuming to identify appropriate studies
  • Not all studies provide adequate data for inclusion and analysis
  • Requires advanced statistical techniques
  • Heterogeneity of study populations

Example

Nakamura, A., van Der Waerden, J., Melchior, M., Bolze, C., El-Khoury, F., & Pryor, L. (2019). Physical activity during pregnancy and postpartum depression: Systematic review and meta-analysis. Journal of Affective Disorders, 246, 29-41. https://doi.org/10.1016/j.jad.2018.12.009

This meta-analysis explored whether physical activity during pregnancy prevents postpartum depression. Seventeen studies were included (93,676 women) and analysis showed a "significant reduction in postpartum depression scores in women who were physically active during their pregnancies when compared with inactive women." Possible limitations or moderators of this effect include intensity and frequency of physical activity, type of physical activity, and timepoint in pregnancy (e.g. trimester).