Mohave Community College Libraries

This page is here to help you research and find materials specific to your subject.

Research Methodology

What is a hypothesis?  

A hypothesis is a tool of quantitative studies. Hypotheses (plural) are educated "guesses" or expectations about a solution to a problem, possible relationships between variables, or differences between variables. A hypothesis is the investigator's prediction of what the results will show, made prior to data collection. 
 

To be complete, a hypothesis must include three components:

  • The variables - dependent and independent.
  • The population - the entire group or elements who meet the sampling criteria. A sample is representative of that population.
  • The relationship between one variable and another, for example, smoking and lung cancer.

A hypothesis should be:

  • stated in declarative form
  • consistent with known facts, previous research, and theory
  • testable
  • a statement of relationships between variables
  • limited in scope (focused)
  • clear and concise

Examples of a hypothesis are:

  • Health Education programs influence the number of people who smoke.
  • Newspapers affect people's voting patterns.
  • Attendance at lectures influences exam marks.
  • Diet influences intelligence.

Types of hypothesis

Simple hypothesis - predicts the relationship between a single independent variable (IV) and a single dependent variable (DV).

For example:  Lower levels of exercise postpartum (IV) are associated with greater weight retention (DV).  

Complex hypothesis - predicts the relationship between two or more independent variables, and two or more dependent variables.

For example: The implementation of an evidence-based protocol for urinary incontinence (IV) will result in (DVs):

  •  decreased frequency of urinary incontinence episodes;
  •  decreased urine loss per episode;
  •  decreased avoidance of activities among women in ambulatory care settings.

Null hypotheses

Used when the researcher believes there is no relationship between two variables, or when there is inadequate theoretical or empirical information to state a research hypothesis.

Glossary  

Case study -- in-depth study of a case (can be a program, an event, an activity, or an individual) studied over time using multiple sources of information (e.g., observations, documents, archival data, interviews). 

Credibility -- researcher's ability to demonstrate that the object of a study is accurately identified and described, based on the way in which the study was conducted.

Data analysis -- systematic process of working with data. 

Deductive -- a form of reasoning in which conclusions are formulated about particulars from general or universal premises. Sometimes referred to as top-down logic.

Dependent Variable -- a variable that varies due to the impact of the independent variable. In other words, its value “depends” on the value of the independent variable. 

Empirical Research -- the process of developing systematized knowledge gained from observations formulated to support insights and generalizations about the phenomena being researched.

External Validity -- the extent to which the results of a study can be generalized or applied to the population at large.

Hypothesis -- a tentative explanation based on theory to predict a causal relationship between variables.

Inductive -- a form of reasoning in which a general conclusion is formulated from particular instances. Sometimes referred to as bottom-up logic.

Independent Variable -- a variable that is not impacted by the dependent variable, and that itself impacts the dependent variable. 

Internal Validity -- the rigor with which the study was conducted [e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and was not measured]. 

Margin of Error -- the allowed or accepted deviation from the target or a specific value. The allowance for slight error or miscalculation or changing circumstances in a study.

Methodology -- a theory or analysis of how research does and should proceed.

Method -- a systematic approach to a process. It includes steps of procedure, application of techniques, systems of reasoning or analysis, and the modes of inquiry employed by a discipline.

Mixed-Methods -- a research approach that uses two or more methods from both the quantitative and qualitative research categories. 

Peer-Review -- the process in which the author of a book, article, or other type of publication submits his or her work to experts in the field for critical evaluation, usually prior to publication. 

Population -- the target group under investigation. Samples are drawn from populations.

Probability -- the chance that a phenomenon will occur randomly. 

Reliability -- the degree to which a measure yields consistent results. Reliability is a prerequisite for validity. An unreliable indicator cannot produce trustworthy results.

Rigor -- degree to which research methods are scrupulously and meticulously carried out. 

Sample -- the population researched in a particular study. Usually, attempts are made to select a "sample population" that is considered representative of groups of people to whom results will be generalized or transferred. 

Sensitivity -- the ability of a test to correctly identify those with the disease (true positive rate).

Specificity -- the ability of a test to correctly identify those without the disease (true negative rate).

Standard Deviation -- a measure of variation that indicates the typical distance between the scores of a distribution and the mean. 

Statistical Analysis -- application of statistical processes and theory to the compilation, presentation, discussion, and interpretation of numerical data.

Statistical Tests -- researchers use statistical tests to make quantitative decisions about whether a study's data indicate a significant effect from the intervention and allow the researcher to reject the null hypothesis. Most researchers agree that a significance value of .05 or less [i.e., less than a 5% probability that the observed differences arose by chance] sufficiently determines significance.

Testing -- the act of gathering and processing information about individuals' ability, skill, understanding, or knowledge under controlled conditions.

Validity -- the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. A method can be reliable, consistently measuring the same thing, but not valid.
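The significance testing described under "Statistical Tests" above can be illustrated with a permutation test, one simple way to estimate a p-value. This is a minimal sketch using only the Python standard library; the exam-mark figures (echoing the lecture-attendance hypothesis earlier on this page) are hypothetical:

```python
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=5000, seed=0):
    """Estimate a two-sided p-value for the difference in group means
    by repeatedly reshuffling the pooled observations."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical exam marks: lecture attendees vs. non-attendees
attended = [78, 82, 88, 75, 90, 85, 79, 84]
skipped = [70, 65, 72, 68, 74, 66, 71, 69]
p = permutation_test(attended, skipped)
print(f"estimated p-value = {p:.4f}")
```

With groups this well separated, the estimated p-value falls well below .05, so the null hypothesis of no difference would be rejected.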

How to Read an Empirical Research Article (12 min)

by morgankenneth12

Elements of a Research Article

Research articles are a specific type of scholarly, peer-reviewed article. They typically follow a particular format and include specific elements that show how the research was designed, how the data was gathered, how it was analyzed, and what the conclusions are. Sometimes these sections may be labeled a bit differently, but these basic elements are consistent:

Abstract: A brief, comprehensive summary of the article, written by the author(s) of the article. This abstract must be part of the article, not a summary in the database. Abstracts can appear in secondary source articles as well as primary source articles.

Introduction: This introduces the problem, tells you why it’s important, and outlines the background, purpose, and hypotheses the authors are trying to test. The introduction comes first, just after the abstract, and is usually not labeled.

Review of Literature: Summarizes and analyzes previous research related to the problem being investigated. 

Specific Question or Hypothesis: Often (but not always) in quantitative and mixed methods studies, specific questions or hypotheses are stated just before the methodology. 

Method and Design: Researchers indicate who or what was studied (source of data), the methods used to gather information, and a procedures summary.  

Results (findings): Summarizes the data and describes how it was analyzed. It should be sufficiently detailed to justify the conclusions. 

Discussion: The authors explain how the data fits their original hypothesis, state their conclusions, and look at the theoretical and practical implications of their research. Sometimes called "Analysis."

Conclusions: A summary statement that reflects the overall answers to the research questions. Implications and recommendations are also included in this section.

References: A listing of the sources cited in the report. 

What is the Research Process?  

As you read a peer-reviewed (scholarly) article, note the method(s) used by the author(s) to plan and conduct their research. There are many different kinds of research methods, but they all share two objectives: to help the researcher design the study so that the research question can be answered, and to ensure the research is conducted systematically so that the findings are scientifically rigorous and have genuine potential for practical application. To achieve these objectives, the researcher follows a stepwise research process. 


Research involves a systematic process that focuses on being objective and gathering a multitude of information for analysis so that the researcher can come to a conclusion. This process is used in all research and evaluation projects, regardless of the research method. 

Step 1. Develop the research question

Step 2. Conduct a literature review (a review of existing research) 

Step 3. Select the research approach, also called a research design

  • Design determined by research question and literature review
  • Qualitative design - inductive approach*
  • Quantitative design - deductive approach**
  • Mixed methods design - uses design characteristics from both qualitative and quantitative approaches

Step 4. Select a representative study population and sample size

  • Probability sampling - random selection of participants
  • Non-probability sampling - researcher determines participants (non-random)

Step 5. Collect and code the data

Step 6. Analyze and interpret the data

Step 7. Write and disseminate report

* inductive approach: the "bottom-up" approach. Moves from specific observations to broader generalizations and theories.

** deductive approach: the "top-down" approach. Works from the general to the more specific. 

A Brief Introduction to Research Designs: Quantitative, Qualitative, and Mixed Methods (5:40 min)

by Christopher Smallwood

Review of Research Design  

Quantitative Research 

The goal in conducting a quantitative research study is to determine the relationship between one thing [an independent variable] and another [a dependent or outcome variable] within a population. Quantitative research designs are either descriptive [subjects usually measured once] or experimental [subjects measured before and after a treatment]. A descriptive study establishes associations between variables; an experimental study establishes causality (cause and effect).

Characteristics:

  • Data is usually gathered using structured research instruments.
  • Results are based on larger sample sizes that are representative of the population.
  • Research study can usually be replicated or repeated, given its high reliability.
  • Researcher has a clearly defined research question to which objective answers are sought.
  • Research question may be in the form of a hypothesis.
  • Data are in the form of numbers and statistics, often arranged in tables, charts, figures, or other non-textual forms.
  • Researcher uses tools, such as questionnaires or computer software, to collect numerical data.

Qualitative Research 

Qualitative research looks at meaning, perspectives and motivations, rather than cause and effect. It typically has smaller sample sizes, uses focus groups, interviews, and/or observation, and the interviewer often plays an integral role in the investigation. 

Characteristics:

  • Concerned with opinions, feelings, and experiences
  • Describes social phenomena as they occur naturally - no attempt is made to manipulate the situation, just to understand and describe it
  • Takes a holistic approach rather than examining a discrete set of variables
  • Used to develop concepts and theories that help in understanding social phenomena
  • Qualitative data are collected through direct encounters, e.g., interviews or observation, which can be time-consuming

Approaches to qualitative research:

  • Case Study - refers to an in-depth, detailed study of an individual or a small group of individuals. 
  • Ethnography - highly detailed accounts of how people in a social setting lead their lives, based on systematic and long-term observation of, and discussion with, those within the setting.
  • Phenomenology - seeks to understand how people experience a particular situation or phenomenon. Delves into the emotional level of meaning. 

Mixed Methods Research 

Combines elements of qualitative and quantitative design for the purpose of breadth and depth of understanding and corroboration. Mixed methods designs allow one design to strengthen the other, addressing the weaknesses inherent in using either approach alone. 

There are three approaches:

  • Sequential: the design starts with either a quantitative or qualitative phase. Once that phase is completed, the second approach is employed.  
  • Explanatory Sequential: Quantitative methods are used first to collect information. Qualitative methods are then used to explain the results from the quantitative phase. 
  • Exploratory Sequential: Qualitative methods are used to gather information that is then used for the subsequent quantitative phase.

  

Sampling: Simple Random, Convenience, Systematic, Cluster, Stratified

by the Statistics Learning Centre

Probability Sampling vs. Non-Probability Sampling

Researchers commonly examine traits or characteristics (parameters) of populations in their studies. A population is a group of individual units with some commonality. For example, a researcher may want to study characteristics of female smokers in the United States. This would be the population being analyzed in the study, but it would be impossible to collect information from all female smokers in the U.S. Therefore, the researcher would select individuals from which to collect the data. This is called sampling. If the group from which the data are drawn is a representative sample of the population, the results of the study can be generalized to the population as a whole.

There are two main types of sampling: probability (random) and non-probability (non-random) sampling. The difference between the two is whether or not the sampling selection involves randomization. Randomization occurs when all members of the sampling frame have an equal opportunity of being selected for the study. The video below covers random and non-random sampling techniques commonly used in clinical research.

Random sampling techniques

Simple Random - every numbered population element has an equal probability of being selected.

Example: In a study by Pimenta et al, researchers obtained a list of all elderly enrolled in the Family Health Strategy and, by simple random sampling, selected a sample of 449 participants.

Systematic Sampling - a list of members of the population is used so that each nth element has an equal probability of being selected.

Example: In the study of Kelbore et al, children who were assisted at the Pediatric Dermatology Service were selected to evaluate factors associated with atopic dermatitis, always selecting the second child by consulting order.

Stratified Random - elements are selected randomly from strata that divide the population.

Example: A South Australian study investigated factors associated with vitamin D deficiency in preschool children. Using the national census as the sample frame, households were randomly selected in each stratum, and all children in the age group of interest identified in the selected houses were investigated.

Complex (Cluster) Sampling - equal groups are identified and selected randomly, and participants in each selected group are used as the sample.

Example: Five of 25 city blocks, each containing a high percentage of low socioeconomic status families, are selected randomly, and parents in each selected block are surveyed.
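The random sampling techniques above can be sketched with Python's standard library. This is a minimal illustration; the numbered population and the age strata are hypothetical:

```python
import random

population = list(range(1, 101))  # hypothetical population of 100 numbered patients
rng = random.Random(42)           # seeded for reproducibility

# Simple random sampling: every element has an equal chance of selection
simple = rng.sample(population, 10)

# Systematic sampling: pick a random start, then take every nth element
n = len(population) // 10
start = rng.randrange(n)
systematic = population[start::n]

# Stratified random sampling: draw separately within predefined strata
strata = {"under_65": list(range(1, 61)), "65_plus": list(range(61, 101))}
stratified = [unit for group in strata.values() for unit in rng.sample(group, 5)]

print(len(simple), len(systematic), len(stratified))
```

Each approach yields a sample of 10, but only the stratified draw guarantees representation from both age groups.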

 

  

Non-Random Sampling Techniques

Convenience - a sample of participants who are convenient for the researchers to recruit.

Example: Participants who are readily available are recruited; study participants are then randomly allocated to the intervention or control group. 

Purposeful - participants are selected by researchers based on specific criteria in order to fulfill the study's objective.

Example: Women between the ages of 40 and 60, diagnosed with rheumatoid arthritis and Sjogren's syndrome, were selected to participate in the study. 

Quota - a population is first segmented into mutually exclusive sub-groups. Researcher judgment is then used to select study participants from each sub-group, based on a specified proportion.

Example: A combination of vemurafenib and cobimetinib versus placebo was tested in patients with locally-advanced melanoma. The study recruited 495 patients from 135 health centers located in several countries. 

Snowball - the researcher selects an initial group of individuals. These participants then refer other potential members with similar characteristics to take part in the study.

Example: Frequently used in studies investigating special populations, for example, those including illicit drug users, as was the case of the study by Gonçalves et al, which assessed 27 users of cocaine and crack in combination with marijuana.

Types of Data: Nominal, Ordinal, Interval/Ratio (6:19 min)

by the Statistics Learning Centre

Quantitative Variables

One of the most commonly used terms in quantitative research is variable. A variable is a measurable or quantifiable characteristic of a concept, person, object or phenomenon that can take different values, numerically or categorically. 

Examples: Values for variables can be a measurable quantity e.g. height, age, weight, blood pressure, or it may be a qualitative factor, e.g. color, sex, behavior.

Types of variables

Independent variable - The variable used to describe or measure the factors that are assumed to cause, or at least influence, the other variable. The independent variable is given to the participants, usually for some specified time period. It is often manipulated and controlled by the investigator, who sets its values by specifying how it will be used in the study. 

Example: if we study the role of cholesterol in the genesis of hypertension and atherosclerosis, cholesterol is the independent variable, and hypertension and atherosclerosis are the dependent variables.

Dependent Variable - The variable that gets modified under the influence of some other (independent) variable is called the dependent variable.


Introduction to Measurement 

Measurement is assigning numbers to indicate different values of a trait, characteristic, or other unit that is being investigated as a variable. The purpose is to quantitatively describe the variables and units of study that are being investigated.

Measurement requires that variables be differentiated and there are four ways to achieve this, dependent on the nature of the data. These four methods are referred to as scales of measurement. 

1st Method - nominal variables - the values assigned to each category are simply labels rather than meaningful numbers. 

Examples: 

  • MEAL PREFERENCE: Breakfast, Lunch, Dinner
  • RELIGIOUS PREFERENCE: 1 = Buddhist, 2 = Muslim, 3 = Christian, 4 = Jewish, 5 = Other
  • POLITICAL ORIENTATION: Republican, Democratic, Libertarian, Green

Nominal Time of Day - categories; no additional information


2nd Method - ordinal variables - values are placed in meaningful order (categories are rank ordered), but the distances between each unit are not equal.

Examples: 

  • RANK: 1st place, 2nd place… last place
  • LEVEL OF AGREEMENT: No, Maybe, Yes
  • POLITICAL ORIENTATION: Left, Center, Right

Ordinal Time of Day - indicates direction or order of occurrence; spacing between is uneven


3rd Method - interval scale variables - values are placed in meaningful order and the distances between each unit are equal. 

Examples: 

  • TIME OF DAY on a 12-hour clock
  • POLITICAL ORIENTATION: Score on standardized scale of political orientation
  • OTHER scales constructed so as to possess equal intervals

Interval Time of Day - equal intervals; analog (12-hr.) clock, difference between 1 and 2 pm is same as difference between 11 and 12 am


4th Method - ratio variables -- In addition to possessing the qualities of nominal, ordinal, and interval scales, a ratio scale has an absolute zero (a point where none of the quality being measured exists).

Examples: 

  • RULER: inches or centimeters    
  • INCOME: money earned last year             
  • NUMBER of children
  • GPA: grade point average
  • YEARS of work experience

Ratio - 24-hr. time has an absolute 0 (midnight); 14 o'clock is twice as far from midnight as 7 o'clock.
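The practical difference between the four scales is which summary statistics are meaningful. This small Python sketch, using hypothetical survey data, shows that nominal data supports only the mode, ordinal data adds the median, and interval/ratio data supports means and (for ratio) meaningful ratios:

```python
import statistics

# Hypothetical responses at different measurement scales
nominal = ["breakfast", "dinner", "lunch", "dinner", "dinner"]  # labels only
ordinal = [1, 2, 2, 3, 3]            # agreement: 1 = No, 2 = Maybe, 3 = Yes
ratio = [0.0, 2.5, 5.0, 7.5, 10.0]   # years of work experience (true zero)

# Nominal: counting is all that is valid, so the mode is the only average
print(statistics.mode(nominal))

# Ordinal: order is meaningful, so the median is also valid
print(statistics.median(ordinal))

# Ratio: equal intervals and an absolute zero, so means and ratios apply
print(statistics.mean(ratio))
print(ratio[2] / ratio[1])  # "twice as much experience" is meaningful
```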

What is Descriptive Statistics?  

Descriptive statistics, a subset of statistics, helps researchers and readers understand the information of data collected through organization, summarization, and visualization. It allows readers, patients, and healthcare providers to interpret and make sense of data derived through research, and if appropriate, implement findings into practice.  

Descriptive statistics are broken down into measures of central tendency and measures of variability (spread). Measures of central tendency include the mean, median, and mode, while measures of variability include the range (the difference between the maximum and minimum observations), variance, standard deviation, kurtosis, and skewness. Below is a summarized explanation of the most commonly used descriptive statistics in health publications. 
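These summary measures can be computed with Python's standard `statistics` module. The blood-pressure readings below are hypothetical:

```python
import statistics

# Hypothetical systolic blood pressure readings (mmHg) from a small sample
readings = [118, 122, 130, 118, 125, 140, 118, 127]

mean = statistics.mean(readings)        # central tendency: arithmetic average
median = statistics.median(readings)    # middle value of the sorted data
mode = statistics.mode(readings)        # most frequently occurring value
spread = max(readings) - min(readings)  # range: maximum minus minimum
sd = statistics.stdev(readings)         # sample standard deviation

print(f"mean={mean}, median={median}, mode={mode}, range={spread}, sd={sd:.2f}")
```

Note how the mean (124.75) sits above the median (123.5): the single high reading of 140 pulls the mean upward, a small example of skew.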

Shape and Normality

Symmetry: when a distribution has the same shape on either side of the median. A symmetric, bell-shaped distribution is called a normal distribution. 

Kurtosis: the extent to which a frequency distribution is peaked or flat.

 

Skew: a measure of symmetry. If one tail is longer than another, the distribution is skewed. Also called asymmetric or asymmetrical distributions. 

 

 

Central Tendency

Finding Mean, Median, and Mode: Descriptive Statistics: Probability and Statistics (3:54)

by Khan Academy 


Dispersion or Variation

Range, variance and standard deviation as measures of dispersion  (12:33 min)

by Khan Academy 

 

Reliability and Validity  

Reliability refers to the consistency of a measure. Researchers consider three types of consistency: over time (test-retest reliability), across items (internal consistency), and across different researchers (inter-rater reliability).
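Test-retest reliability is commonly quantified as the correlation between two administrations of the same measure. This is a minimal pure-Python sketch; the self-esteem scores are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical self-esteem scores from the same 6 people, two weeks apart
time1 = [30, 25, 41, 35, 28, 38]
time2 = [31, 24, 40, 36, 29, 37]
r = pearson_r(time1, time2)
print(f"test-retest r = {r:.3f}")  # values near 1 indicate high consistency
```

A coefficient near 1 suggests the measure is consistent over time, though, as the finger-length example below illustrates, high reliability alone does not establish validity.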

Validity is the extent to which the scores from a measure represent the variable they are intended to. When a measure has good test-retest reliability and internal consistency, researchers should be more confident that the scores represent what they are supposed to. There has to be more to it, however, because a measure can be extremely reliable but have no validity whatsoever. As an absurd example, imagine someone who believes that people’s index finger length reflects their self-esteem and therefore tries to measure self-esteem by holding a ruler up to people’s index fingers. Although this measure would have extremely good test-retest reliability, it would have absolutely no validity. The fact that one person’s index finger is a centimeter longer than another’s would indicate nothing about which one had higher self-esteem.

Three types of validity tests are:  

  • Face validity is the extent to which a measurement method appears “on its face” to measure the construct of interest. 
  • Content validity is the extent to which a measure “covers” the construct of interest. 
  • Criterion validity is the extent to which people’s scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with. 

 

Reliability and Validity (8:18 min)

by ChrisFlipp

Sensitivity and Specificity

Sensitivity and specificity are statistical measures of test sensitivity; test sensitivity is the ability of a test to correctly identify those with the disease (true positive rate), whereas test specificity is the ability of the test to correctly identify those without the disease (true negative rate).

Sensitivity is essentially how good a test is at finding something if it's there. It is a measure of how often the test will correctly identify a positive among all true positives as determined by the gold standard test. For example, a blood test for a virus may have a sensitivity as high as 99% or more, meaning that for every 100 infected people tested, 99 or more of them will be correctly identified. This is a good figure to take note of, but it doesn't necessarily reflect a test's true effectiveness, as will become apparent.

Specificity is a measure of how accurate a test is against false positives. A sniffer dog looking for drugs would have a low specificity if it is often led astray by things that aren't drugs (cosmetics or food, for example). Specificity can be considered as the percentage of times a test will correctly identify a negative result. Again, this can be 99% or more for good tests, although a particularly unruly and easily distracted sniffer dog would score much, much lower.
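Both measures can be computed from a simple tally of true/false positives and negatives. This short Python sketch uses hypothetical screening results:

```python
def sensitivity_specificity(results):
    """Compute sensitivity and specificity from (test_positive, has_disease) pairs.
    Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)."""
    tp = sum(1 for test, disease in results if test and disease)
    fn = sum(1 for test, disease in results if not test and disease)
    tn = sum(1 for test, disease in results if not test and not disease)
    fp = sum(1 for test, disease in results if test and not disease)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening outcomes: (tested positive?, actually diseased?)
results = [(True, True)] * 99 + [(False, True)] * 1 \
        + [(False, False)] * 95 + [(True, False)] * 5

sens, spec = sensitivity_specificity(results)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Here 99 of 100 diseased people test positive (sensitivity 0.99) and 95 of 100 healthy people test negative (specificity 0.95).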

Sensitivity and Specificity (4:43 min)

by  Medmastery - The clinical skills academy
