Fundamental Concepts of Research Methodology

Dr. V.K.Maheshwari, M.A(Socio, Phil) B.Sc. M. Ed, Ph.D

Former Principal, K.L.D.A.V.(P.G) College, Roorkee, India

Some people consider research a movement, a movement from the known to the unknown. It is actually a voyage of discovery. In fact, research is an art of scientific investigation. The Advanced Learner’s Dictionary of Current English lays down the meaning of research as “a careful investigation or inquiry specially through search for new facts in any branch of knowledge.” Research in common parlance refers to a search for knowledge. One can also define research as a scientific and systematic search for pertinent information on a specific topic. Redman and Mory define research as a “systematized effort to gain new knowledge.”

In the broadest sense of the word, the definition of research includes any gathering of data, information and facts for the advancement of knowledge.

According to Creswell – “Research is a process of steps used to collect and analyze information to increase our understanding of a topic or issue”. It consists of three steps:

  • Pose a question;
  • Collect data to answer the question;
  • Present an answer to the question.

Aims and Objectives of Research

The main aim of research is to find out the truth which is hidden and which has not been discovered as yet. The purpose of research is to discover answers to questions through the application of scientific procedures.   Research objectives can be placed in the following broad groupings:

  • To gain familiarity with a phenomenon or to achieve new insights into it (studies with this object in view are known as exploratory or formulative research studies);
  • To portray accurately the characteristics of a particular individual, situation or group (studies with this object in view are known as descriptive research studies);
  • To determine the frequency with which something occurs or with which it is associated with something else (studies with this object in view are known as diagnostic research studies);
  • To test a hypothesis of a causal relationship between variables (studies with this object in view are known as hypothesis-testing research studies).

Characteristics of Good Research

  • Good research is systematic.
  • Good research is logical.
  • Good research is empirical.
  • Good research is replicable.

Educational Research

Educational research refers to research conducted by educationists that follows a systematic plan. Educational research methods generally vary along a quantitative/qualitative dimension. Quantitative designs approach educational phenomena through quantifiable evidence, and often rely on statistical analysis of many cases (or across intentionally designed treatments in an experiment) to create valid and reliable general claims. Qualitative designs emphasize understanding of educational phenomena through direct observation, communication with participants, or analysis of texts, and may stress contextual and subjective accuracy over generality. Educational research also examines attitudes, assumptions, beliefs, trends, stratifications and rules. Its scope can be small or large, ranging from the self or a single individual to an entire race or country. Educational research determines the relationship between one or more variables.

Educational research may be defined as a scientific undertaking which, by means of logical and systematized techniques, aims to discover new facts or verify and test old facts, analyze their sequence, interrelationships and causal explanations derived within an appropriate theoretical frame of reference, and develop new scientific tools, concepts and theories which would facilitate reliable and valid study of human behavior. A researcher’s primary goal, distant and immediate, is to explore and gain an understanding of human behavior and thereby gain greater control over it.

Objectives of Educational Research

Educational research is a scientific approach to adding to the knowledge about education and educational phenomena. Knowledge, to be meaningful, should have a definite purpose and direction. The growth of knowledge is closely linked to the methods and approaches used in research investigation. Hence educational research must be guided by certain laid-down objectives, enumerated below:

Development of Knowledge:

Educational research helps us to obtain and add to the knowledge of educational phenomena. This is one of its most important objectives.

Scientific Study of Social Life:

Social research is an attempt to acquire knowledge about social phenomena. Man being a part of society, social research studies the human being as an individual and human behavior, collects data about various aspects of the social life of man, and formulates laws in this regard.

Welfare of Humanity:

The ultimate objective of educational study is often, and indeed always, to enhance the welfare of humanity. No scientific research is undertaken merely for the sake of study; the welfare of humanity is the most common objective in education.

Classification of facts:

Educational research aims to classify facts. The classification of facts plays an important role in any scientific research.

The ultimate objective of many research undertakings is to make it possible to modify the behavior of particular types of individuals under specified conditions. In educational research we generally study educational phenomena and events, and the factors that govern and guide them.

Criteria for Selecting Research Problem

The following points may be observed by a researcher in selecting a research problem or a subject for research:

  • A controversial subject should not become the choice of an average researcher;
  • A subject which is overdone should not normally be chosen, for it will be a difficult task to throw any new light on such a case;
  • The subject selected for research should be familiar and feasible, so that the related research material or sources of research are within one’s reach;
  • Too narrow or too vague problems should be avoided.

Characteristics of Good Research Title

  • Avoid ambiguous words
  • Avoid words with dual meanings
  • Catch the reader’s attention and interest
  • Describe the content of the paper
  • Be simple, sharp and short


Abstract

An abstract is a brief summary of a research article, thesis, review, conference proceeding or any in-depth analysis of a particular subject or discipline, and is often used to help the reader quickly ascertain the paper’s purpose.

Structure of Abstract

An academic abstract typically outlines four elements relevant to the completed work:

  • The research focus (i.e. statement of the problem(s)/research issue(s) addressed);
  • The research methods used (Method/Nature/Sampling/Population/Study Area.);
  • The results/findings of the research;
  • The main conclusions and recommendations

Background of the Study

Background research refers to accessing the collection of previously published and unpublished information about a site, region, or particular topic of interest. It is the first step of any good investigation, and of the writing of any kind of research paper.

Statement of the Problem

Defining a research problem properly and clearly is a crucial part of a research study and must in no case be accomplished hurriedly. The problem to be investigated must be defined unambiguously, for that will help to discriminate relevant data from the irrelevant.

A proper definition of the research problem will enable the researcher to stay on track, whereas an ill-defined problem may create hurdles. In practice, however, this is frequently overlooked, which causes many problems later on. Hence, the research problem should be defined in a systematic manner, giving due weightage to all related points. The technique for this purpose involves undertaking the following steps, generally one after the other:

  • Statement of the problem in a general way;
  • Understanding the nature of the problem;
  • Surveying the available literature;
  • Developing the ideas through discussions; and
  • Rephrasing the research problem into a working proposition.

Literature Review

Review of existing literature related to the research is an important part of any research paper, and essential to put the research work in overall perspective, connect it with earlier research work and build upon the collective intelligence and wisdom already accumulated by earlier researchers. It significantly enhances the value of any research paper.

A literature review surveys scholarly articles, books, dissertations, conference proceedings and other resources which are relevant to a particular issue, area of research, or theory and provides context for a dissertation by identifying past research. Research tells a story and the existing literature helps us identify where we are in the story currently. It is up to those writing a dissertation to continue that story with new research and new perspectives but they must first be familiar with the story before they can move forward.


Characteristics of Good Literature Review

A good literature review:

  • Demonstrates the researcher’s familiarity with the body of knowledge by providing a good synthesis of what is and is not known about the subject in question, while also identifying areas of controversy and debate, or limitations in the literature, sharing different perspectives.

  • Identifies the most important authors engaged in similar work.
  • Indicates the theoretical framework that the researcher is working with
  • Offers an explanation of how the researcher can contribute toward the existing body of scholarship by pursuing their own thesis or research question
  • Is organized around issues, themes, factors, or variables that are related directly to the thesis or research question.
  • Places the formation of research questions in their historical and disciplinary context

Aims and Objective of the Study

The aim of the work, i.e. the overall purpose of the study, should be clearly and concisely defined. Aims:

  • Are broad statements of desired outcomes, or the general intentions of the research, which ‘paint a picture’ of your research project;
  • Emphasize what is to be accomplished (not how it is to be accomplished);
  • Address the long-term project outcomes, i.e. they should reflect the aspirations and expectations of the research topic.

Once aims have been established, the next task is to formulate the objectives. Generally, a project should have no more than two or three aim statements, while it may include a number of objectives consistent with them.


Objectives are subsidiary to aims: they are the steps you are going to take to answer your research questions, or a specific list of tasks needed to accomplish the goals of the project. Objectives:

  • Address the more immediate project outcomes;
  • Emphasize how aims are to be accomplished;
  • Make accurate use of concepts;
  • Must be highly focused and feasible;
  • Must be sensible and precisely described;
  • Should read as an ‘individual’ statement to convey your intentions.

Aims and Objectives should:

  • Access your chosen subjects, respondents, units, goods or services.
  • Approach the literature and theoretical issues related to your project.
  • Be concise and brief.
  • Be interrelated; the aim is what you want to achieve, and the objective describes how you are going to achieve that aim.
  • Be realistic about what you can accomplish in the duration of the project and the other commitments you have
  • Deal with ethical and practical problems in your research.
  • Develop a sampling frame and strategy or a rationale for their selection.
  • Develop a strategy and design for data collection and analysis.

Aims and Objectives should not:

  • Be too vague, ambitious or broad in scope.
  • Contradict your methods – i.e. they should not imply methodological goals or standards of measurement, proof or generalisability of findings that the methods cannot sustain.
  • Just be a list of things related to your research topic.
  • Just repeat each other in different terms.

Objectives must always be set after having formulated a good research question. After all, they are to explain the way in which such question is going to be answered.

Hypothesis of the Study

A hypothesis is a logical supposition, a reasonable guess, an educated conjecture. It provides a tentative explanation for a phenomenon under investigation (Leedy and Ormrod, 2001).

A research hypothesis is the statement created by researchers when they speculate upon the outcome of a study or experiment. A hypothesis is a tentative conjecture explaining an observation, phenomenon, or scientific problem that can be tested by further observation, investigation, or experimentation. Hypotheses are testable explanations of a problem, phenomenon, or observation. Both quantitative and qualitative research involve formulating a hypothesis to address the research problem. Hypotheses that suggest a causal relationship involve at least one independent variable and at least one dependent variable; in other words, one variable is presumed to affect the other.

A hypothesis is important because it guides the research. An investigator may refer to the hypothesis to direct his or her thought process toward the solution of the research problem or subproblems. The hypothesis helps an investigator to collect the right kinds of data needed for the investigation. Hypotheses are also important because they help an investigator to locate information needed to resolve the research problem or subproblems (Leedy and Ormrod, 2001)

Type of Hypothesis

Below are some of the important types of hypothesis:

1. Simple Hypothesis
2. Complex Hypothesis
3. Empirical Hypothesis
4. Null Hypothesis
5. Alternative Hypothesis
6. Logical Hypothesis
7. Statistical Hypothesis

Simple Hypothesis

A simple hypothesis is one in which a relationship exists between two variables: one is the independent variable, or cause, and the other is the dependent variable, or effect.

Complex Hypothesis

A complex hypothesis is one in which a relationship exists among several variables; in this type there is more than one dependent and/or independent variable.

Empirical Hypothesis

An empirical, or working, hypothesis is one which is applied to a field. During formulation it is an assumption only, but when it is put to a test it becomes an empirical or working hypothesis.

Null Hypothesis

The null hypothesis is contrary to the positive statement of a working hypothesis. According to the null hypothesis, there is no relationship between the dependent and independent variable. It is denoted by H0.

Alternative Hypothesis

First, many hypotheses are formulated; from among them, the one which is most workable and efficient is selected. This hypothesis is introduced later, owing to changes in the originally formulated hypothesis. It is denoted by H1.

Logical Hypothesis

This is the type in which the hypothesis is verified logically. J.S. Mill gave four canons for such hypotheses: agreement, disagreement, difference and residue.

Statistical Hypothesis

A hypothesis which can be verified statistically is called a statistical hypothesis. The statement may be logical or illogical, but if statistics can verify it, it is a statistical hypothesis.
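As a rough sketch of what statistical verification of a hypothesis looks like in practice, the following example is entirely hypothetical: we assume an invented study observed 60 heads in 100 coin tosses and test the null hypothesis H0 ("the coin is fair") against the alternative H1 ("the coin is biased") by simulation.

```python
import random

# Hypothetical illustration: H0 says "the coin is fair (p = 0.5)",
# H1 says "the coin is biased". The observed count is invented.
random.seed(42)

n_tosses = 100
observed_heads = 60  # assumed observation from a made-up study

# Simulate the sampling distribution of the head count under H0.
null_counts = [sum(random.random() < 0.5 for _ in range(n_tosses))
               for _ in range(10_000)]

# Two-sided p-value: how often does a fair coin stray this far from 50?
deviation = abs(observed_heads - n_tosses / 2)
p_value = sum(abs(c - n_tosses / 2) >= deviation
              for c in null_counts) / len(null_counts)

print(f"p-value = {p_value:.3f}")
# H0 is rejected at the 5% level only if p_value < 0.05.
```

Note that the decision rule (the 5% significance level) is chosen before the data are examined; the statistic does not prove H1, it only measures how surprising the observation would be if H0 were true.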

Characteristics of Hypothesis

  • A hypothesis should state the expected pattern, relationship or difference between two or more variables;
  • A hypothesis should be testable;
  • A hypothesis should offer a tentative explanation based on theories or previous research;
  • A hypothesis should be concise and lucid.


Variables of the Study

A variable is either a result of some force or it is itself the force that causes a change in another variable. In experiments, these are called dependent and independent variables respectively.

The purpose of all research is to describe and explain variance in the world. Variance is simply the difference; that is, variation that occurs naturally in the world or change that we create as a result of a manipulation. Variables are names that are given to the variance we wish to explain.


When a researcher gives an active medication to one group of people and a placebo, or inactive medication, to another group of people, the independent variable is the medication treatment. Each person’s response to the active medication or placebo is called the dependent variable.

This could be many things depending upon what the medication is for, such as high blood pressure or muscle pain. Therefore, in experiments, a researcher manipulates an independent variable to determine if it causes a change in the dependent variable.

As we learned earlier, in a descriptive study variables are not manipulated. They are observed as they naturally occur, and then associations between variables are studied. In a way, all the variables in descriptive studies are dependent variables, because they are studied in relation to all the other variables that exist in the setting where the research is taking place. However, in descriptive studies, variables are not discussed using the terms “independent” or “dependent.” Instead, the names of the variables are used when discussing the study.

Conceptual Framework

A conceptual framework is a written or visual presentation that explains, either graphically or in narrative form, the main things to be studied: the key factors, concepts or variables and the presumed relationships among them. The conceptual framework identifies the research tools and methods that may be used to carry out the research effectively. The main objective in forming a conceptual framework is to help the researcher give direction to the research.

Theoretical Framework

The theoretical framework enhances the overall clarity of the research. It also helps the researcher get through the research faster, as he has to look only for information within the theoretical framework and need not follow up any other information he finds on the topic. The objective of forming a theoretical framework is to define a broad framework within which a researcher may work.

Difference between the Conceptual and the Theoretical Framework

  • A conceptual framework is the researcher’s idea on how the research problem will have to be explored. This is founded on the theoretical framework, which lies on much broader scale of resolution. The theoretical framework dwells on time tested theories that embody the findings of numerous investigations on how phenomena occur.
  • The theoretical framework provides a general representation of relationships between things in a given phenomenon. The conceptual framework, on the other hand, embodies the specific direction by which the research will have to be undertaken. Statistically speaking, the conceptual framework describes the relationship between specific variables identified in the study. It also outlines the input, process and output of the whole investigation. The conceptual framework is also called the research paradigm.
  • The theoretical framework looks at time-tested theories in relation to any research topic. The conceptual framework is the researcher’s idea on how the research problem will be explored, keeping in mind the theories put forth in the theoretical framework.
  • The theoretical framework looks at the general relationship of things in a phenomenon, while conceptual framework puts forth the methods to study the relationship between the specific variables identified in the research topic
  • Conceptual framework gives a direction to the research that is missing in theoretical framework by helping decide on tools and methods that may be employed in the research.

Research Methodology

Research methodology is the complete plan of attack on the central research problem. It provides the overall structure for the procedures that the researcher follows, the data that the researcher collects, and the data analyses that the researcher conducts; it thus involves planning. It is a plan with the central goal of solving the research problem in mind. Research methodology describes how the study was conducted; it includes the research design, study population, sample and sample size, methods of data collection, and methods of data analysis. Research methodology refers to a philosophy of the research process. It includes the assumptions and values that serve as a rationale for research and the standards or criteria the researcher uses for collecting and interpreting data and reaching conclusions (Martin and Amin, 2005:63). In other words, research methodology determines such factors as how to write a hypothesis and what level of evidence is necessary to decide whether to accept or reject the hypothesis.

Research Method

1. Survey Method:

The survey method is the technique of gathering data by asking questions of people who are thought to have the desired information. Surveys involve collecting information, usually from fairly large groups of people, by means of questionnaires, but other techniques such as interviews or telephoning may also be used. There are different types of survey. Surveys are effective for producing information on socio-economic characteristics, attitudes, opinions, motives, etc., and for gathering information for planning product features, advertising media, sales promotion, channels of distribution and other marketing variables.

2. Experiment Method:

Experimental research is guided by an educated guess about the result of the experiment, and the experiment is conducted to provide evidence for this experimental hypothesis. Experimental research, although very demanding of time and resources, often produces the soundest evidence concerning hypothesized cause-effect relationships.

3. Case Study Method:

Case study research involves an in-depth study of an individual or group of individuals. Case studies often lead to testable hypotheses and allow us to study rare phenomena. Case studies should not be used to determine cause and effect, and they have limited use for making accurate predictions. Case study research is generally organized around six steps:

  • Determine and define the research questions
  • Select the cases and determine data gathering and analysis techniques
  • Prepare to collect the data
  • Collect data in the field
  • Evaluate and analyze the data
  • Prepare the report

4. Observation Method:

The observation method involves human or mechanical observation of what people actually do or what events take place during a consumption situation; information is collected by observing processes at work.

Observational trials study issues in large groups of people, but in natural settings. Studies which involve observing people can be divided into two main categories, namely participant observation and non-participant observation.

A) In participant observation studies, the researcher becomes (or already is) part of the group to be observed. This involves fitting in, gaining the trust of members of the group and, at the same time, remaining sufficiently detached to be able to carry out the observation.

B) In non-participant observation studies, the researcher is not part of the group being studied. The researcher decides in advance precisely what kind of behavior is relevant to the study and can realistically and ethically be observed. The observation can be carried out in a few different ways.

Research Type or Nature of the Research

1. Descriptive Research:

Descriptive research is also called statistical research. The main goal of this type of research is to describe the data and characteristics of what is being studied; the idea is to study frequencies, averages, and other statistical calculations. Although this research is highly accurate, it does not uncover the causes behind a situation. Descriptive research is mainly done when a researcher wants to gain a better understanding of a topic, exploring existing phenomena whose details are not yet fully known. Descriptive research attempts to describe systematically a situation, problem, phenomenon, service or programme, or provides information about, say, the living conditions of a community, or describes attitudes towards an issue.

2. Explanatory Research:

Explanatory research attempts to clarify why and how there is a relationship between two or more aspects of a situation or phenomenon. Explanatory research is conducted in order to explain behaviour in the market. It can be done through questionnaires, group discussions, interviews, random sampling, etc.

3. Exploratory Research:

Exploratory research is conducted into an issue or problem where there are few or no earlier studies to refer to. Exploratory research is undertaken to explore an area where little is known or to investigate the possibilities of undertaking a particular research study (feasibility study/ pilot study).

The focus is on gaining insights and familiarity for later investigation. Descriptive research, by contrast, describes phenomena as they exist; there data are often quantitative, statistics are applied to the particular situation, and the aim may be to generalise from an analysis by predicting certain phenomena on the basis of hypothesised general relationships. Exploratory research design is used to determine the best research design, the selection of subjects and the data collection method; it does not provide final and conclusive answers to the research questions.

4. Quantitative Research:

Quantitative research involves the analysis of numerical data. The emphasis of quantitative research is on collecting and analyzing numerical data; it concentrates on measuring the scale, range, frequency, etc. of phenomena. This type of research, although harder to design initially, is usually highly detailed and structured, and results can be easily collated and presented statistically. Quantitative data refers to the numeric quantities of the results.

5. Qualitative Research:

Qualitative research involves the analysis of data such as words (e.g., from interviews), pictures (e.g., video), or objects (e.g., an artifact). Qualitative data refers to the qualities of the results in observation.

Qualitative research is more subjective in nature than quantitative research and involves examining and reflecting on the less tangible aspects of a research subject, e.g. values, attitudes and perceptions. Although this type of research can be easier to start, it can often be difficult to interpret and present the findings, and the findings can also be challenged more easily.

Unit of Analysis

One of the most important ideas in a research project is the unit of analysis. The unit of analysis is the major entity that you are analyzing in your study. For instance, any of the following could be a unit of analysis in a study:

  • individuals
  • groups
  • artifacts (books, photos, newspapers)
  • geographical units (town, census tract, state)
  • social interactions (dyadic relations, divorces, arrests)

The unit of analysis is the major entity that you are analyzing in your study. It is the ‘what ‘or ‘who’ that is being studied. Units of analysis are essentially the things we examine in order to create summary descriptions of them and explain differences among them. Units of analysis that are commonly used in social science research include individuals, groups, organizations, social artifacts, and social interactions.

Population of the Study

A population is a well-defined group of people or objects that share common characteristics. A population in a research study is the group of individuals, persons, objects, or items from which samples are taken for measurement; it is the group about which some information is sought.

A research population is also known as a well-defined collection of individuals or objects known to have similar characteristics. All individuals or objects within a certain population usually have a common, binding characteristic or trait.

Usually, the description of the population and the common binding characteristic of its members are the same. A population is any group of individuals that has one or more characteristics in common and that are of interest to the researcher. As we describe below, there are various ways to configure a population depending on the characteristics of interest.

The population chosen for study must be specific enough to give readers a clear understanding of the applicability of the study to their particular situation and of the population itself.


Sample of the Study

A sample is a subset of the population being studied. It represents the larger population and is used to draw inferences about that population. Sampling is a research technique widely used in the social sciences as a way to gather information about a population without having to measure the entire population.

Broadly speaking, there are two groups of sampling techniques: probability sampling techniques and non-probability sampling techniques.

Probability sampling techniques

In probability sampling, every individual in the population has an equal chance of being selected as a subject for the research.

This method guarantees that the selection process is completely randomized and without bias.

Probability sampling techniques include simple random sampling, systematic random sampling, stratified random sampling and cluster sampling.

Random Sampling

The random sample is the purest form of probability sampling. Each member of the population has an equal and known chance of being selected. When there are very large populations, it is often difficult or impossible to identify every member of the population, so the pool of available subjects becomes biased.
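As a minimal sketch, simple random sampling is exactly what Python's standard-library `random.sample` performs; the 500-member population below is invented purely for illustration.

```python
import random

# A minimal sketch of simple random sampling: every member of the
# (hypothetical) population has an equal, known chance of selection.
random.seed(1)
population = [f"student_{i}" for i in range(500)]

sample = random.sample(population, k=50)  # 50 members, drawn without replacement

print(len(sample))
```

Because the draw is without replacement, no member can appear in the sample twice, and each of the 500 members has the same 50/500 chance of selection.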

Stratified Sampling

Stratified random sampling is a method of sampling that involves the division of a population into smaller groups known as strata. In stratified random sampling, the strata are formed based on members’ shared attributes or characteristics. A random sample from each stratum is taken in a number proportional to the stratum’s size when compared to the population. These subsets of the strata are then pooled to form a random sample.
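The proportional-allocation step described above can be sketched as follows; the three year-group strata and their sizes are assumptions made only for this example.

```python
import random

# Sketch of proportional stratified random sampling over hypothetical strata.
random.seed(7)
strata = {
    "first_year":  [f"fy_{i}" for i in range(600)],
    "second_year": [f"sy_{i}" for i in range(300)],
    "third_year":  [f"ty_{i}" for i in range(100)],
}
total = sum(len(members) for members in strata.values())  # population of 1000
sample_size = 100

sample = []
for name, members in strata.items():
    # Each stratum contributes in proportion to its share of the population.
    k = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, k))  # random draw within the stratum

print(len(sample))  # the stratum subsets pooled into one sample
```

Here the first-, second- and third-year strata contribute 60, 30 and 10 subjects respectively, mirroring their 60/30/10 shares of the population.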

Cluster Sampling

A cluster sample is obtained by selecting clusters from the population on the basis of simple random sampling. The sample comprises a census of each random cluster selected. Cluster sampling is a method used to enable random sampling to occur while limiting the time and costs that would otherwise be required to sample from either a very large population or one that is geographically diverse.
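A rough sketch of the two stages, with 20 hypothetical schools of 40 pupils each standing in for the clusters:

```python
import random

# Cluster sampling sketch: randomly select whole clusters, then take a
# census of each selected cluster. Schools and pupils are hypothetical.
random.seed(3)
clusters = {f"school_{c}": [f"s{c}_pupil_{i}" for i in range(40)]
            for c in range(20)}

chosen = random.sample(sorted(clusters), k=4)      # stage 1: random clusters
sample = [p for c in chosen for p in clusters[c]]  # stage 2: census of each

print(len(chosen), len(sample))
```

Note the contrast with stratified sampling: there, a few members are drawn from every stratum; here, every member is taken from a few clusters.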

Non-probability sampling techniques

In this type of population sampling, members of the population do not have an equal chance of being selected. Because of this, it is not safe to assume that the sample fully represents the target population. It is also possible that the researcher deliberately chose the individuals that will participate in the study.

Convenience Sampling

Convenience sampling is a non-probability sampling technique where subjects are selected because of their convenient accessibility and proximity to the researcher.

The subjects are selected simply because they are the easiest to recruit for the study; the researcher does not consider whether they are representative of the entire population.

Consecutive Sampling

Consecutive sampling is a stricter version of convenience sampling in which every available subject is selected, i.e., the complete accessible population is studied. It is the best choice among the non-probability sampling techniques, since by studying everybody available, a good representation of the overall population is possible within a reasonable period of time.

Judgmental sampling

Judgmental sampling is a non-probability sampling technique where the researcher selects units to be sampled based on their knowledge and professional judgment.

Judgmental sampling is used in cases where the expertise of an authority can select a more representative sample, bringing more accurate results than other sampling techniques would. The process involves purposely handpicking individuals from the population based on the authority's or the researcher's knowledge and judgment.

Quota Sampling-

Quota sampling is the non-probability equivalent of stratified sampling. Like stratified sampling, the researcher first identifies the strata and their proportions as they are represented in the population. Then convenience or judgment sampling is used to select the required number of subjects from each stratum. This differs from stratified sampling, where the strata are filled by random sampling. Quota sampling is thus a non-probability technique used to ensure equal representation of subjects in each layer of a stratified sample grouping.
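The two-step logic above (fix quotas from known proportions, then fill them non-randomly) can be sketched as follows; the names and quotas are hypothetical:

```python
def quota_sample(arrivals, quotas):
    """Fill each stratum's quota with the first available subjects
    (convenience selection within each stratum), not by random draw."""
    filled = {group: [] for group in quotas}
    for person, group in arrivals:
        if group in filled and len(filled[group]) < quotas[group]:
            filled[group].append(person)
    return filled

# Hypothetical stream of volunteers tagged with their stratum
arrivals = [("A", "male"), ("B", "male"), ("C", "female"),
            ("D", "male"), ("E", "female"), ("F", "female")]
sample = quota_sample(arrivals, {"male": 2, "female": 2})
```

Subject "D" is skipped because the male quota is already full by the time he arrives; that first-come filling is exactly what makes the method non-random.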

Sequential Sampling

Sequential sampling is a non-probability sampling technique wherein the researcher picks a single subject or a group of subjects in a given time interval, conducts the study, analyzes the results, then picks another group of subjects if needed, and so on.

Systematic Sampling

Systematic sampling is a type of probability sampling method in which sample members from a larger population are selected according to a random starting point and a fixed periodic interval. This interval, called the sampling interval, is calculated by dividing the population size by the desired sample size. Although the sample members are determined in advance, systematic sampling is still considered random if the periodic interval is fixed beforehand and the starting point is random.
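The interval arithmetic described above can be sketched as follows (the population and seed are hypothetical):

```python
import random

def systematic_sample(population, n, seed=None):
    """Pick a random starting point, then every k-th member thereafter."""
    k = len(population) // n                  # the sampling interval
    start = random.Random(seed).randrange(k)  # random start within first interval
    return [population[start + i * k] for i in range(n)]

population = list(range(100))
sample = systematic_sample(population, 10, seed=3)
```

For a population of 100 and a sample of 10, the interval is k = 10, so members are taken at a constant spacing of 10 from the random start.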

Snowball or Chain Sampling

Snowball sampling is a special non-probability method used when the desired sample characteristic is rare, so that it may be extremely difficult or cost-prohibitive to locate respondents. Snowball sampling relies on referrals from initial subjects to generate additional subjects. While this technique can dramatically lower search costs, it comes at the expense of introducing bias, because the technique itself reduces the likelihood that the sample will represent a good cross-section of the population.

A Pre-Test

A pre-test usually refers to a small-scale trial of particular research components.

Before planning a pilot census, the conduct of a series of pre-test surveys is highly desirable. The objective of the pre-test surveys should be confined mainly to the formulation of concepts and definitions, census questionnaires, instruction manuals, etc., and the evaluation of alternative methodologies and data collection techniques.

A Pilot Study

A pilot study is a mini-version of a full-scale study, or a trial run done in preparation for the complete study; it is also called a ‘feasibility’ study. It can also be a specific pre-testing of research instruments, including questionnaires or interview schedules.

Methods of Data Collection

1. Focus Groups:

Focus Group Discussion (FGD) is a method of data collection which is frequently used to collect in-depth qualitative data in various descriptive studies (such as case studies, phenomenological and naturalistic studies). The main goal of a focus group discussion is to provide an opportunity for the participants to talk to one another about a specific area of study; the facilitator is there to guide the discussion. A focus group discussion allows a group of 8–12 informants to freely discuss a certain subject with the guidance of a facilitator or reporter. Focus group discussions can be used to:

  • Develop appropriate messages for health education programmes and later evaluate the messages for clarity
  • Explore controversial topics
  • Focus research and develop relevant research hypotheses by exploring in greater depth the problem to be investigated and its possible causes
  • Formulate appropriate questions for more structured, larger scale surveys
  • Help understand and solve unexpected problems in interventions


2. Interviews:

The interview is one of the popular methods of research data collection. The term interview can be dissected into two parts: ‘inter’ and ‘view’. The essence of an interview is that one mind tries to read the other: the interviewer tries to assess the interviewee in terms of the aspects studied or issues analyzed. Interviews are a good approach for gathering in-depth attitudes, beliefs, and anecdotal data from individual participants. Personal contact with participants may elicit richer and more detailed responses, and provides an excellent opportunity to probe and explore questions.


3. Observation:

Observation is a technique that involves systematically selecting, watching and recording the behaviour and characteristics of living beings, objects or phenomena. Observation of human behaviour is a much-used data collection technique. It can be undertaken in different ways:

  • Participant observation: The observer takes part in the situation he or she observes. (For example, a doctor hospitalized with a broken hip, who now observes hospital procedures ‘from within’.)
  • Non-participant observation: The observer watches the situation, openly or concealed, but does not participate.

Observations can be overt (everyone knows they are being observed) or covert (no one knows they are being observed and the observer is concealed). The benefit of covert observation is that people are more likely to behave naturally if they do not know they are being observed. However, you will typically need to conduct overt observations because of the ethical problems related to concealing your observation.


4. Surveys:

Surveys are best for gathering brief written responses on attitudes and beliefs, for example regarding library programs. They can include both close-ended and open-ended questions, and can be administered in written form or online. Personal contact with the participants is not required, and staff and facilities requirements are minimal, since one employee can easily manage the distribution and collection of surveys, and issues such as privacy and quiet areas are typically not concerns. However, responses are limited to the questions included in the survey, and participants need to be able to read and write to respond. Therefore, surveys may not be the best initial data collection tool.

Tools of Data Collection

1. Interview Schedule (Open-ended/Close-ended):

This method of data collection is very much like the collection of data through a questionnaire, with the small difference that the schedules (proforma containing a set of questions) are filled in by enumerators who are specially appointed for the purpose. These enumerators, along with the schedules, go to the respondents, put to them the questions from the proforma in the order the questions are listed, and record the replies in the space meant for them in the proforma.

2. Questionnaire (Open-ended Question):

This method of data collection is quite popular, particularly in case of big enquiries. It is being adopted by private individuals, research workers, private and public organizations and even by governments. In this method a questionnaire is sent (usually by post) to the persons concerned with a request to answer the questions and return the questionnaire. A questionnaire consists of a number of questions printed or typed in a definite order on a form or set of forms. The questionnaire is mailed to respondents who are expected to read and understand the questions and write down the reply in the space meant for the purpose in the questionnaire itself. The respondents have to answer the questions on their own.


3. Checklist:

Checklists structure a person’s observation or evaluation of a performance or artifact. They can be simple lists of criteria that can be marked as present or absent, or can provide space for observer comments. These tools can provide consistency over time or between observers. Checklists can be used for the case study method.

4. Dichotomous Scales:

The response options for each question in your survey may include a dichotomous, a three-point, a five-point, a seven-point or a semantic differential scale. Each of these response scales has its own advantages and disadvantages, but the rule of thumb is that the best response scale to use is the one which can be easily understood by respondents and interpreted by the researcher.

A dichotomous scale is a two-point scale which presents options that are absolute opposites of each other. This type of response scale does not give the respondent an opportunity to be neutral in answering a question. Examples include:


  • Yes- No
  • True – False
  • Fair – Unfair
  • Agree – Disagree

Rating Scales

This is a recording form used for measuring an individual’s attitudes, aspirations and other psychological and behavioural aspects, as well as group behaviour.

Three-point, five-point, and seven-point scales all fall under the umbrella term “rating scale”. A rating scale provides more than two options, allowing the respondent to answer neutrally on the question being asked.


1. Three-point Scales

  • Good – Fair – Poor
  • Agree – Undecided – Disagree
  • Extremely- Moderately – Not at all
  • Too much – About right – Too little

2. Five-point Scales (e.g. Likert Scale)

  • Strongly Agree – Agree – Undecided / Neutral – Disagree – Strongly Disagree
  • Always – Often – Sometimes – Seldom – Never
  • Extremely – Very – Moderately – Slightly – Not at all
  • Excellent – Above Average – Average – Below Average – Very Poor

3. Seven-point Scales

  • Exceptional – Excellent – Very Good – Good – Fair – Poor – Very Poor
  • Very satisfied – Moderately satisfied – Slightly satisfied – Neutral – Slightly dissatisfied – Moderately dissatisfied – Very dissatisfied

Semantic Differential Scales

A semantic differential scale is used in specialist surveys to gather data and interpret them based on the connotative meaning of the respondent’s answer. It uses a pair of clearly opposite words, and can be either marked or unmarked.

Data Processing

Data processing is an intermediary stage of work between data collection and data analysis. The completed instruments of data collection, such as interview schedules, questionnaires, data sheets and field notes, contain a vast mass of data which cannot straightaway provide answers to research questions. Like raw materials, they need processing. Data processing involves the classification and summarisation of data in order to make them amenable to analysis. It consists of a number of closely related operations:

(1) editing,

(2) classification and coding,

(3) transcription and

(4) tabulation.


Editing

The first step in the processing of data is the editing of completed schedules/questionnaires. Editing of data is a process of examining the collected raw data (especially in surveys) to detect errors and omissions and to correct these when possible.

Data editing is defined as the process involving the review and adjustment of collected survey data. The purpose is to control the quality of the collected data. Data editing can be performed manually, with the assistance of a computer or a combination of both.

Editing methods

Interactive editing

The term interactive editing is commonly used for modern computer-assisted manual editing. Most interactive data editing tools applied at National Statistical Institutes (NSIs) allow one to check the specified edits during or after data entry and, if necessary, to correct erroneous data immediately.

Interactive editing is a standard way to edit data. It can be used to edit both categorical and continuous data. Interactive editing reduces the time frame needed to complete the cyclical process of review and adjustment.

Selective editing

Selective editing is an umbrella term for several methods of identifying influential errors and outliers. Selective editing techniques aim to apply interactive editing to a well-chosen subset of the records, such that the limited time and resources available for interactive editing are allocated to those records where editing has the most effect on the quality of the final published estimates. In selective editing, data are split into two streams:

the critical stream and the non-critical stream.

The critical stream consists of records that are more likely to contain influential errors; these records are edited in a traditional interactive manner. The records in the non-critical stream, which are unlikely to contain influential errors, are not edited interactively.

Macro editing

There are two methods of macro editing:

Aggregation method

This method is followed in almost every statistical agency before publication: verifying whether the figures to be published seem plausible. This is accomplished by comparing the quantities in publication tables with the same quantities in previous publications.
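A hedged sketch of such a plausibility check, assuming a 20% relative tolerance and hypothetical publication figures (both are illustrative choices, not from the text):

```python
def flag_implausible(current, previous, tolerance=0.20):
    """Flag publication figures that moved more than an assumed 20% relative
    tolerance since the previous publication; flagged cells get re-checked."""
    return [key for key, value in current.items()
            if key in previous and previous[key] != 0
            and abs(value - previous[key]) / abs(previous[key]) > tolerance]

previous = {"employment": 1000, "turnover": 500}
current = {"employment": 1020, "turnover": 900}  # turnover jumped 80%
flags = flag_implausible(current, previous)
```

Only the figure with the implausibly large year-on-year movement is flagged for inspection.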

Distribution method

The available data are used to characterize the distribution of the variables. Then all individual values are compared with the distribution. Records containing values that could be considered uncommon (given the distribution) are candidates for further inspection and possibly for editing.
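One simple way to realize the distribution method is a z-score screen; the cutoff and data below are illustrative assumptions, not prescribed by the text:

```python
import statistics

def distribution_outliers(values, z_cutoff=2.0):
    """Flag values that lie unusually far (an assumed 2 standard deviations)
    from the mean of the observed distribution."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / sd > z_cutoff]

reported_incomes = [10, 11, 9, 10, 12, 10, 11, 100]  # one suspicious record
candidates = distribution_outliers(reported_incomes)
```

The flagged values are only candidates for inspection; a human or a rule still decides whether they are genuine errors.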

Automatic editing

In automatic editing, records are edited by a computer without human intervention. Prior knowledge about the values of a single variable or a combination of variables can be formulated as a set of edit rules which specify or constrain the admissible values.
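A sketch of edit rules applied automatically; the rules and records are hypothetical examples of the kind of constraints described above:

```python
# Hypothetical edit rules: each one constrains the admissible values of a record.
edit_rules = [
    lambda r: r["age"] >= 0,                        # age cannot be negative
    lambda r: r["age"] >= 15 or not r["employed"],  # young children are not employed
]

def automatic_edit(records):
    """Accept or reject each record by machine, without human intervention."""
    accepted = [r for r in records if all(rule(r) for rule in edit_rules)]
    rejected = [r for r in records if not all(rule(r) for rule in edit_rules)]
    return accepted, rejected

records = [{"age": 34, "employed": True},
           {"age": -2, "employed": False},
           {"age": 8, "employed": True}]
accepted, rejected = automatic_edit(records)
```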


Coding

Coding refers to the process of assigning numerals or other symbols to answers so that responses can be put into a limited number of categories or classes. Such classes should be appropriate to the research problem under consideration. They must also possess the characteristic of exhaustiveness (i.e., there must be a class for every data item) and of mutual exclusivity, which means that a specific answer can be placed in one and only one cell in a given category set. Another rule to be observed is unidimensionality, by which is meant that every class is defined in terms of only one concept.
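A minimal sketch of a code book that is exhaustive and mutually exclusive (the answer categories and code numbers are hypothetical):

```python
# A hypothetical code book: mutually exclusive (one code per answer) and
# exhaustive (a catch-all "Other" class covers every remaining answer).
code_book = {"Agree": 1, "Undecided": 2, "Disagree": 3}
OTHER = 9

def code_responses(responses):
    """Assign a numeral to each answer so responses fall into a
    limited number of classes."""
    return [code_book.get(answer, OTHER) for answer in responses]

codes = code_responses(["Agree", "Disagree", "No opinion", "Agree"])
```

The catch-all class keeps the scheme exhaustive; the dictionary lookup guarantees each answer maps to exactly one code.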


Tabulation

After the transcription of data is over, the data are summarized and arranged in a compact form for further analysis. This process is called tabulation. Thus, tabulation is the process of summarizing raw data and displaying them in compact statistical tables for further analysis. It involves counting the number of cases falling into each of several categories.
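Tabulation as described, counting the cases per category, can be sketched with the standard library (the coded responses are hypothetical):

```python
from collections import Counter

def tabulate(coded_responses):
    """Count the number of cases falling into each category."""
    return Counter(coded_responses)

table = tabulate([1, 3, 1, 2, 1, 3])
```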


Classification

Most research studies result in a large volume of raw data which must be reduced into homogeneous groups if we are to get meaningful relationships. This necessitates the classification of data, which is the process of arranging data in groups or classes on the basis of common characteristics. Data having a common characteristic are placed in one class, and in this way the entire data set is divided into a number of groups or classes.

Analysis of Data

Data analysis can take the form of simple descriptive statistics or more sophisticated statistical inference. Data analysis techniques include univariate analysis (such as analysis of single-variable distributions), bivariate analysis, and, more generally, multivariate analysis. Multivariate analysis, broadly speaking, refers to all statistical methods that simultaneously analyze multiple measurements on each individual or object under investigation; as such, many multivariate techniques are extensions of univariate and bivariate analysis.

Descriptive Statistics:

Descriptive statistics provide a simple quantitative summary of a collected data set. They help us understand the experiment or data set in detail and supply the details that put the data in perspective.

In descriptive statistics, we simply state what the data show. Interpreting the results and trends beyond this involves inferential statistics, which is a separate branch altogether.

The data that are collected can be represented in several ways. Descriptive statistics includes the statistical procedures we use to describe the population we are studying. The data may be collected from either a sample or a population, but the results help us organize and describe them. Descriptive statistics can only be used to describe the group that is being studied; that is, the results cannot be generalized to any larger group.

Inferential Statistics:

While descriptive statistics tell us basic information about the population or data set under study, inferential statistics are produced by more complex mathematical calculations, and allow us to infer trends about a larger population based on a study of a sample taken from it. We use inferential statistics to examine the relationships between variables within a sample, and then make generalizations or predictions about how those variables will relate within a larger population.

Most quantitative social science operates using inferential statistics because it is typically too costly or time-consuming to study an entire population of people. Using a statistically valid sample and inferential statistics, we can conduct research that otherwise would not be possible.

Techniques that social scientists use to examine the relationships between variables, and thereby to create inferential statistics, include but are not limited to: linear regression analyses, logistic regression analyses, ANOVA, correlation analyses, structural equation modeling, and survival analysis.

When conducting research using inferential statistics, it is important to conduct tests of significance in order to know whether you can generalize your results to a larger population. Common tests of significance include the chi-square test and the t-test. These tell us the probability that the results of our analysis of the sample are representative of the population the sample was drawn from.

Interpretation of Data

Interpretation refers to the task of drawing inferences from the collected facts after an analytical and/or experimental study. In fact, it is a search for broader meaning of research findings. The task of interpretation has two major aspects viz.,

(i) the effort to establish continuity in research through linking the results of a given study with those of another, and

(ii) the establishment of some explanatory concepts.

Necessity of Data Interpretation

Interpretation leads to the establishment of explanatory concepts that can serve as a guide for future research studies; it opens new avenues of intellectual adventure and stimulates the quest for more knowledge.

It is through interpretation that the researcher can well understand the abstract principle that works beneath his findings. Through this he can link up his findings with those of other studies, having the same abstract principle, and thereby can predict about the concrete world of events. Fresh inquiries can test these predictions later on. This way the continuity in research can be maintained.

It is only through interpretation that the researcher can appreciate why his findings are what they are, and can make others understand the real significance of his research findings.

The interpretation of the findings of an exploratory research study often results in hypotheses for experimental research, and as such interpretation is involved in the transition from exploratory to experimental research. Since an exploratory study does not start with a hypothesis, its findings have to be interpreted on a post-factum basis, in which case the interpretation is technically described as ‘post factum’ interpretation.

Test of Hypothesis

Hypothesis testing helps us decide, on the basis of sample data, whether a hypothesis about the population is likely to be true or false. Statisticians have developed several tests of hypotheses (also known as tests of significance), which can be classified as:

(a) Parametric tests or standard tests of hypotheses;

(b) Non-parametric tests or distribution-free tests of hypotheses.

Parametric Test:

Parametric tests usually assume certain properties of the parent population from which we draw samples: for example, that the observations come from a normal population, that the sample size is large, and that assumptions about population parameters such as the mean and variance hold good. These assumptions must be satisfied before parametric tests can be used, but there are situations when the researcher cannot or does not want to make them. (Examples: the t-test and the z-test.)

Non-parametric Test:

Such tests do not depend on any assumption about the parameters of the parent population. Besides, most non-parametric tests assume only nominal or ordinal data, whereas parametric tests require measurement equivalent to at least an interval scale (χ²-test and F-test).


Chi-square Test:

Chi-square is a statistical test commonly used to compare observed data with the data we would expect to obtain according to a specific hypothesis. The chi-square test is always testing what scientists call the null hypothesis, which states that there is no significant difference between the expected and observed results.

It is an important non-parametric test: no rigid assumptions are necessary regarding the type of population, no parameter values are needed, and relatively few mathematical details are involved. It is based on frequencies, not on parameters such as the mean or standard deviation.

It can also be applied to a complex contingency table with several classes, and as such is a very useful test in research work. It is useful for testing a hypothesis, not for estimation. χ² should not be calculated if the expected frequency in any category is less than 5.
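The χ² computation and the expected-frequency rule above can be sketched as follows (the observed and expected counts are hypothetical):

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all categories."""
    if any(e < 5 for e in expected):
        raise ValueError("chi-square is unreliable when any expected frequency < 5")
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: observed frequencies vs. those expected under the null
statistic = chi_square([50, 30, 20], [40, 40, 20])
```

Here the statistic is 100/40 + 100/40 + 0 = 5.0; it would then be compared against the χ² distribution with the appropriate degrees of freedom.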

Degrees of Freedom

In statistics, the number of degrees of freedom (d.o.f.) is the number of independent pieces of data being used to make a calculation. It is a measure of how certain we are that our sample is representative of the entire population.

The d.o.f. can be viewed as the number of independent parameters available to fit a model to data. Generally, the more parameters you have, the more accurate your fit will be. However, for each estimate made in a calculation, you remove one degree of freedom. This is because each assumption or approximation you make puts one more restriction on how many parameters are used to generate the model.

Measurement Scales

The “levels of measurement”, or scales of measure, are expressions that typically refer to the theory of scale types developed by the psychologist Stanley Smith Stevens. Stevens claimed that all measurement in science was conducted using four different types of scales that he called “nominal”, “ordinal”, “interval” and “ratio”, unifying both qualitative measurement (described by his “nominal” scale) and quantitative measurement (to a different degree, all the rest of his scales).


Nominal Scale

A scale that measures data by name only.


Ordinal Scale

A scale that measures by rank order only. Other than rough order, no precise measurement is possible.


Interval Scale

A scale that measures using equal intervals, so you can compare differences between pairs of values.

Ratio Scale

A scale with equal intervals and a true zero point, so you can also compare ratios of values.


Univariate Data:

  • Involving a single variable.
  • Does not deal with causes or relationships.
    • Central tendency – mean, mode, median.
    • Dispersion – range, variance, max, min, quartiles, standard deviation.
    • Frequency distributions.
    • Bar graph, histogram, pie chart, line graph, box-and-whisker plot
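The univariate measures listed above can be computed with Python's standard statistics module (the data values are hypothetical):

```python
import statistics

data = [4, 8, 6, 5, 3, 8, 7]  # hypothetical scores for a single variable

summary = {
    "mean": statistics.mean(data),            # central tendency
    "median": statistics.median(data),
    "mode": statistics.mode(data),
    "range": max(data) - min(data),           # dispersion
    "stdev": statistics.stdev(data),
}
```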

Bivariate Data:

  • Involving two variables
  • Deals with causes or relationships
  • The major purpose of bivariate analysis is to explain
    • Analysis of two variables simultaneously
    • Correlations
    • Comparisons, relationships, causes, explanations
    • Tables where one variable is contingent on the values of the other variable.
    • Independent and dependent variables
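As a sketch of a basic bivariate analysis, Pearson's correlation coefficient can be computed directly from its definition (the data pairs are illustrative):

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two variables."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A perfectly linear (hypothetical) relationship gives r = 1
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

Note that correlation quantifies association between the two variables; by itself it does not establish which variable causes the other.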

Graph, Chart and Figure

  • A pie chart is circular and represents 100% of a category; each segment of the pie is a percentage of the whole, much as each bar in a bar chart represents a share of the data.
  • A figure can be any picture that goes with the text of what someone is writing
  • A graph shows related values so that comparisons can be made, such as between the tallest and the shortest.

Raw Data

The term raw data is used most commonly to refer to information that is gathered for a research study before that information has been transformed or analyzed in any way. The term can apply to the data as soon as they are gathered or after they have been cleaned, but not in any way further transformed or analyzed.

Characteristics of Conclusion

Every basic conclusion must share several key elements, but there are also several tactics you can play around with to craft a more effective conclusion and several you should avoid in order to prevent yourself from weakening your paper’s conclusion. Here are some writing tips to keep in mind when creating the conclusion for your next research paper.

  • Restate the topic
  • Summarize the main points
  • Add the points up
  • Make a call to action when appropriate

Confidence Interval and Fact

A confidence interval gives an estimated range of values which is likely to include an unknown population parameter, the estimated range being calculated from a given set of sample data.

If independent samples are taken repeatedly from the same population, and a confidence interval calculated for each sample, then a certain percentage (confidence level) of the intervals will include the unknown population parameter. Confidence intervals are usually calculated so that this percentage is 95%, but we can produce 90%, 99%, 99.9% (or whatever) confidence intervals for the unknown parameter.
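A hedged sketch of a 95% confidence interval for a mean, using the normal approximation (the z = 1.96 multiplier) on hypothetical sample data:

```python
import math
import statistics

def confidence_interval(sample, z=1.96):
    """Approximate 95% CI for the population mean, using the normal
    approximation (z = 1.96) and the sample standard error."""
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return mean - z * se, mean + z * se

sample = [12, 14, 11, 13, 15, 12, 14, 13]  # hypothetical measurements
low, high = confidence_interval(sample)
```

For small samples, a t-multiplier would normally replace the fixed z = 1.96; the normal approximation is used here purely to keep the sketch simple.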

A fact is something that has really occurred or is actually the case. The usual test for statement of fact is verifiability, that is whether it can be proven to correspond to experience.


