Buddhism – The Rebel Child of Hinduism

Dr. V.K. Maheshwari, M.A. (Socio, Phil), B.Sc., M.Ed., Ph.D.

Former Principal, K.L.D.A.V.(P.G) College, Roorkee, India

Buddhism is a religion based on the teachings of Siddhartha Gautama. He came to be called “the Buddha,” which means “awakened one,” after he experienced a profound realization of the nature of life, death and existence. The Buddha (c. 500s B.C.E.), also known as Gotama Buddha, Siddhārtha Gautama, and Buddha Śākyamuni, was born in Lumbini, in the Nepalese region of Terai, near the Indian border. The Buddha’s teaching formed the foundation for Buddhist philosophy, which developed first in South Asia and later in the rest of Asia.

Instead of teaching doctrines to be memorized and believed, the Buddha taught how we can realize truth for ourselves. The focus of Buddhism is on practice rather than belief. The major outline of Buddhist practice is the Eightfold Path.

The Buddha discouraged his followers from indulging in intellectual disputation for its own sake, which he regarded as fruitless and as a distraction from true awakening. Nevertheless, the recorded sayings of the Buddha contain a philosophical component, in their teachings on the working of the mind and their criticisms of the philosophies of his contemporaries.

According to the scriptures, during his lifetime the Buddha remained silent when asked several metaphysical questions. These regarded issues such as whether the universe is eternal or non-eternal (or whether it is finite or infinite), the unity or separation of the body and the self, the complete non-existence of a person after Nirvana and death, and others.

Buddhism and Buddhist philosophy now have a global following. While the Buddha’s view of the spiritual path is traditionally described as a middle way between the extremes of self-indulgence and self-mortification, the Buddha’s epistemology can be interpreted as a middle way between the extremes of dogmatism and skepticism.

Higher Knowledge

Contemplative experiences are of two main types:

meditative absorptions or abstractions (jhāna), and higher or direct knowledge (abhiññā).

Stages of the Eight Jhānas (Meditative Absorption)

EIGHT JHĀNAS – In the Pāli canon the Buddha describes eight progressive states of absorption meditation, or jhāna. Four are considered meditations of form (rūpa jhāna) and four are formless meditations (arūpa jhāna). The first four jhānas are said by the Buddha to be conducive to a pleasant abiding and freedom from suffering.[10] The jhānas are states of meditation in which the mind is free from the five hindrances — craving, aversion, sloth, agitation and doubt — and (from the second jhāna onwards) incapable of discursive thinking. The deeper jhānas can last for many hours. Jhāna empowers a meditator’s mind, making it able to penetrate the deepest truths of existence.

There are four deeper states of meditative absorption called “the immaterial attainments.” Sometimes these are also referred to as the “formless” jhānas (arūpa jhānas) in distinction from the first four jhānas (rūpa jhānas). In the Buddhist canonical texts, the word “jhāna” is never explicitly used to denote them, but they are always mentioned in sequence after the first four jhānas. The enlightenment of complete dwelling in emptiness is reached when the eighth jhāna is transcended.

The Rupa Jhānas

There are four stages of deep collectedness which are called the Rupa Jhāna (Fine-material Jhāna):

  1. First Jhāna – In the first jhana there are – “directed thought, evaluation, rapture, pleasure, unification of mind, contact, feeling, perception, intention, consciousness, desire, decision, persistence, mindfulness, equanimity & attention”
  2. Second Jhāna – In the second jhana there are – “internal assurance, rapture, pleasure, unification of mind, contact, feeling, perception, intention, consciousness, desire, decision, persistence, mindfulness, equanimity, & attention”
  3. Third Jhāna – In the third jhana, there are – “equanimity-pleasure, unification of mind, contact, feeling, perception, intention, consciousness, desire, decision, persistence, mindfulness, equanimity & attention”
  4. Fourth Jhāna – In the fourth jhana there are – “a feeling of equanimity, neither pleasure nor pain; an unconcern due to serenity of awareness; unification of mind, contact, feeling, perception, intention, consciousness, desire, decision, persistence, mindfulness, equanimity & attention”.[11]

The Arupa Jhānas

Beyond the four jhānas lie four attainments, referred to in the early texts as aruppas. These are also referred to in commentarial literature as immaterial/the formless jhānas (arūpajhānas), also translated as The Formless Dimensions:

  1. Dimension of Infinite Space – In the dimension of infinite space there are – “the perception of the dimension of the infinitude of space, unification of mind, contact, feeling, perception, intention, consciousness, desire, decision, persistence, mindfulness, equanimity, & attention”
  2. Dimension of Infinite Consciousness – In the Dimension of infinite consciousness there are – “the perception of the dimension of the infinitude of consciousness, unification of mind, contact, feeling, perception, intention, consciousness, desire, decision, persistence, mindfulness, equanimity, & attention”
  3. Dimension of Nothingness – In the dimension of nothingness, there are – “the perception of the dimension of nothingness, singleness of mind, contact, feeling, perception, intention, consciousness, desire, decision, persistence, mindfulness, equanimity, & attention”
  4. Dimension of Neither Perception nor Non-Perception – About the role of this jhana it is said: “He emerged mindfully from that attainment. On emerging mindfully from that attainment, he regarded the past qualities that had ceased & changed: ‘So this is how these qualities, not having been, come into play. Having been, they vanish.’ He remained unattracted & unrepelled with regard to those qualities, independent, detached, released, dissociated, with an awareness rid of barriers. He discerned that ‘There is a further escape,’ and pursuing it there really was for him.”

The four Arūpajhāna

While the rupajhanas differ in their characteristics, the arupajhanas differ in their object, which is determined by the level of the jhana:

  • fifth jhāna: infinite space,
  • sixth jhāna: infinite consciousness,
  • seventh jhāna: infinite nothingness,
  • eighth jhāna: neither perception nor non-perception.

This has to be understood: in the fourth rupajhana there is already upekkhā (equanimity) and ekaggatā (concentration), but the mind is still focused on a “material” object, such as a color.

  • In the fifth jhana, the meditator discovers that there is no object, but only an infinite, empty space. This perception motivates the pursuit of the arupajhanas.
  • In the sixth jhana, it becomes obvious that space has no existence. There is only infinite consciousness.
  • In the seventh jhana there appears the feeling that there is no consciousness, but nothingness.
  • The eighth jhana consists of the subtlest possible state of mind, which justifies the name “neither perception nor non-perception”.

These “explanations” do not refer to any intellectual or philosophical comprehension, which disappears from the second jhana onwards. They attempt to describe the mental process. The arūpajhānas are part of the kammatthanas, and are referred to as the four “formless states”.

The two elements of Arūpajhāna

Some Tipitaka texts identify arūpajhānas as a part of the fourth rūpajhāna, as they include two elements: upekkhā (Sanskrit: upekṣā) and ekaggatā (Skt: ekāgratā).

Upekkha

Upekkhā is a Pali word meaning equanimity. The opposition between comfortable sensations and uncomfortable ones disappears. More importantly, it is one of the fourth Jhāna’s factors, present only in this Jhāna.

Ekaggatā

Ekaggatā or “singlepointedness”, as a Jhāna’s factor, simply means a very deep concentration, which includes the ceasing of stimuli from the exterior world. It is the only jhānic factor present in each Jhāna.

There are six classes of higher or direct knowledge:

The first refers to a variety of supernatural powers, including levitation and walking on water; in this sense, it is better understood as a know-how type of knowledge.

The second higher knowledge is literally called “divine ear element” or clairaudience.

The third higher knowledge is usually translated as telepathy, though it means simply the ability to know the underlying mental state of others, not the reading of their minds and thoughts.

The next three types of higher knowledge are especially important because they were experienced by the Buddha the night of his enlightenment, and because they are the Buddhist counterparts to the triple knowledge of the Vedas.

The fourth higher knowledge is retrocognition or knowledge of past lives, which entails a direct experience of the process of rebirth.

The fifth is the divine eye or clairvoyance; that is, direct experience of the process of karma, or as the texts put it, the passing away and reappearing of beings in accordance with their past actions. The sixth is knowledge of the destruction of taints, which implies experiential knowledge of the four noble truths and the process of liberation.

The Four Noble Truths

The Four Noble Truths are thus:

1. Life means suffering.

2. The origin of suffering is attachment.

3. The cessation of suffering is attainable.

4. There is a path to the cessation of suffering.

The First Truth: Dukkhaṃ

In the Sutta on the Turning of the Wheel of Dhamma, the Buddha explained the first truth at greater length than at other points in the Pali canon:

This, monks, is the noble truth that is pain. Birth is pain, old age is pain, illness is pain, death is pain, sorrow and grief, physical and mental suffering, and disturbance are pain. Association with things not liked is pain, separation from desired things is pain, not getting what one wants is pain; in short, the five aggregates of grasping are pain.

This is the most expansive description of the first truth of the noble ones: the truth that is pain is birth, old age, illness, death, and so on. An important distinction should be made here: the first truth is not pain in and of itself, but rather the pain that is associated with all of the following conditions: birth is pain, death is pain, not getting what we want is pain, and so on. All of these conditions are characteristic of human life, and thus the first truth is often understood to mean that Buddhism claims that human life is associated with pain, or to use a term from the Abrahamic religions, human life is suffering.

“The one who acts is the one who experiences [the result of the act]” amounts to the eternalist statement, “Existing from the very beginning, stress is self-made.” “The one who acts is someone other than the one who experiences” amounts to the annihilationist statement, “For one existing harassed by feeling, stress is other-made.” Avoiding these two extremes, the Tathagata teaches the dhamma via the middle.

To be born into this world means to suffer. That’s Buddha’s first Noble Truth. This is because human life isn’t perfect and neither are our surroundings. Our life in this world is subject to suffering and physical pain due to sickness, old age, disease, injury and death. We undergo mental suffering and pain due to sadness, disappointment, poverty, lust, love, fear, frustration, greed, injustice and depression.

The Second Truth: Samudayo (Arising)

The second truth of the four noble truths is samudayo, or arising. In the Sutta on the Turning of the Dhamma Wheel, it reads: “This, monks, is the noble truth that is the arising of pain. This is craving that leads to rebirth, is connected with pleasure and passion and finds pleasure in this or that; that is, craving for desire, craving for existence, and craving for existence to fade away.” This second truth is most often understood as laying out the causation of pain, the first truth.

The origin of suffering is attachment to impermanent things that are perceived to bring us happiness. This is the second Noble Truth. The transient illusions (wealth, lust, power, beauty) condition our mindset into believing in their permanence, thus preventing our mind from overcoming ignorance. We suffer because of our desire, passion, greed, pursuit of wealth and status, and striving for fame and acceptance; in other words, due to craving and attachment.

The Third Truth: Nirodho (Ending)

The third truth is nirodho, or ending. It is explained in the Sutta on the Turning of the Dhamma Wheel: “This, monks is the noble truth that is the ending of pain. This is the complete fading away and ending of that very craving, giving it up, renouncing it, releasing it, and letting go.” This is a natural movement in the sequence of the truths thus far: the first is to recognize the truth “this is pain” or “this is suffering.” The second step is to know why “this is pain.” The three types of thirst or craving lead to things that cause us pain in this life. We stop that pain, we stop that hurting or suffering by stopping craving or thirst: “the complete fading away and ending of that very craving, giving it up, renouncing it, releasing it, and letting it go.” This truth is just a simple fact: to end things that cause us pain, we need to end their arising.

The Buddha explicitly stated that attaining dispassion will eliminate suffering. Nirodha eliminates all forms of craving and attachment thus setting us off on our long journey towards ultimate salvation from suffering. The meaning of Nirodha is elimination of sensual craving and worldly attachment.

The Fourth Truth: Paṭipadā (Way)

The fourth truth, the way, according to the Sutta on the Turning of the Dhamma Wheel, reads:

This, monks, is the noble truth that is the way leading to the ending of pain. This is the eightfold path of the noble ones: right view, right intention, right speech, right action, right livelihood, right effort, right mindfulness, and right concentration.

The fourth truth, again, follows logically after the first three: pain, arising, ending, and the way—or the how. The eightfold path is always found as the explanation of the fourth truth, and is often taken as the Buddha’s teaching of “the” path to enlightenment.

The Noble Eightfold Path (Ariya Ashtanga Marga)

The Noble Eightfold Path (Ariya Ashtanga Marga) explains the gradual path of self-improvement towards the cessation of rebirth and its resultant suffering. Lord Buddha described the Eightfold Path as the Middle Path, as it avoids the extremes of self-indulgence (such as hedonism) and excessive self-mortification (asceticism). This is the path which leads to the end of Samsara, the cycle of rebirth.

Eightfold Path

The Buddhist system of ethics can be summed up in the eightfold path:

The way of practice leading to the cessation of suffering is precisely the Noble Eightfold Path: right view, right intention, right speech, right action, right livelihood, right effort, right mindfulness, and right concentration.

The purpose of living an ethical life is to escape the suffering inherent in samsara. Skillful actions condition the mind in a positive way and lead to future happiness, while the opposite is true for unskillful actions. Ethical discipline also provides the mental stability and freedom to embark upon mental cultivation via meditation.

The part of the Noble Eightfold path that covers morality/ethics is right speech, right action and right livelihood. The other parts cover concentration and wisdom, with wisdom being covered by right view and right intention and the remaining three belonging to concentration.

The three aggregates are not included under the noble eightfold path, friend Visakha, but the noble eightfold path is included under the three aggregates. Right speech, right action, & right livelihood come under the aggregate of virtue. Right effort, right mindfulness, & right concentration come under the aggregate of concentration. Right view & right resolve come under the aggregate of discernment.

The Path

1. Samma-Ditthi — Complete or Perfect Vision, also translated as right view or understanding. Vision of the nature of reality and the path of transformation.

2. Samma-Sankappa — Perfected Emotion or Aspiration, also translated as right thought or attitude. Liberating emotional intelligence in your life and acting from love and compassion. An informed heart and feeling mind that are free to practice letting go.

3. Samma-Vaca — Perfected or whole Speech. Also called right speech. Clear, truthful, uplifting and non-harmful communication.

4. Samma-Kammanta — Integral Action. Also called right action. An ethical foundation for life based on the principle of non-exploitation of oneself and others. The five precepts.

5. Samma-Ajiva — Proper Livelihood. Also called right livelihood. This is a livelihood based on correct action and the ethical principle of non-exploitation. The basis of an ideal society.

6. Samma-Vayama — Complete or Full Effort, Energy or Vitality. Also called right effort or diligence. Consciously directing our life energy to the transformative path of creative and healing action that fosters wholeness. Conscious evolution.

7. Samma-Sati — Complete or Thorough Awareness. Also called “right mindfulness”. Developing awareness: “if you hold yourself dear, watch yourself well”. Levels of awareness and mindfulness – of things, oneself, feelings, thought, people and Reality.

8. Samma-Samadhi — Full, Integral or Holistic Samadhi. This is often translated as concentration, meditation, absorption or one-pointedness of mind. None of these translations is adequate. Samadhi literally means to be fixed, absorbed in or established at one point, thus the first level of meaning is concentration when the mind is fixed on a single object. The second level of meaning goes further and represents the establishment, not just of the mind, but also of the whole being in various levels or modes of consciousness and awareness. This is Samadhi in the sense of enlightenment or Buddhahood.

The word Samma means ‘proper’, ‘whole’, ‘thorough’, ‘integral’, ‘complete’, and ‘perfect’ (related to English ‘summit’). It does not necessarily mean ‘right’ as opposed to ‘wrong’.

Early Buddhist schools

The main early Buddhist philosophical schools are the Abhidharma schools, particularly Sarvāstivāda and Theravāda.

Sarvastivadin realism

Early Buddhist philosophers and exegetes of the Sarvāstivādins created a pluralist metaphysical and phenomenological system, in which all experiences of people, things and events can be broken down into smaller and smaller perceptual or perceptual-ontological units called “dharmas”.

Texts of the Sarvastivada Abhidharma

The Sarvāstivāda Abhidharma consists of seven texts.

Theravada

Theravada promotes the concept of vibhajjavada (Pāli, literally “teaching of analysis”). This doctrine says that insight must come from the aspirant’s experience, critical investigation, and reasoning rather than from blind faith. As the Buddha said, according to the canonical scriptures:

Do not accept anything by mere tradition … Do not accept anything just because it accords with your scriptures … Do not accept anything merely because it agrees with your pre-conceived notions … But when you know for yourselves—these things are moral, these things are blameless, these things are praised by the wise, these things, when performed and undertaken, conduce to well-being and happiness—then do you live acting accordingly.

Theravada accepts only the Pali Tipitaka as scripture. A large number of other sutras venerated by the Mahayana are not accepted by Theravada as legitimate.

Buddhism is divided into two sects: Mahayana and Hinayana. Mahayana literature is written in Sanskrit and Hinayana literature in Pali.

Mahayana

Mahayana often adopts a pragmatic concept of truth:[16] doctrines are regarded as conditionally “true” in the sense of being spiritually beneficial. In modern Chinese Buddhism, all doctrinal traditions are regarded as equally valid.[17]

The main Mahayana philosophical schools and traditions include the Prajñāpāramitā, Madhyamaka, Tathāgatagarbha, and Yogācāra schools.

Prajnaparamita

The Prajñāpāramitā sutras emphasize the emptiness of the five skandhas.

Madhyamaka

The Mahāyānist Nāgārjuna asserted a direct connection between, even the identity of, dependent origination, selflessness (anatta), and emptiness (śūnyatā). He pointed out that implicit in the early Buddhist concept of dependent origination is the lack of any substantial being (anatta) underlying the participants in origination, so that they have no independent existence, a state identified as emptiness (śūnyatā), or emptiness of a nature or essence (svabhāva).

Tathagatagarbha

The tathāgatagarbha sutras, in a departure from mainstream Buddhist language, insist that the potential for awakening is inherent in every sentient being. They marked a shift from a largely apophatic (negative) philosophical trend within Buddhism to a decidedly more cataphatic (positive) mode.

Yogacara

The Yogacara school tries to explain the arising of suffering by explaining the workings of our mind. It takes up the concepts of the five skandhas and the six consciousnesses to explain how manas creates vijñapti, the concepts to which we cling.

Chinese Buddhism

The schools of Buddhism that had existed in China prior to the emergence of the Tiantai are generally believed to represent direct transplantations from India, with little modification to their basic doctrines and methods. However, Tiantai grew and flourished as a natively Chinese Buddhist school under the 4th patriarch, Zhiyi, who developed a hierarchy of Buddhist sutras that asserted the Lotus Sutra as the supreme teaching, as well as a system of meditation and practices around it.

The principal schools of Buddhism which flourished in China were:

1. The Vinaya School (Lu-tsung)

2. The Realistic School (Chu-she)

3. The Three Treatises School (San-lun)

4. The Idealist School (Fa-hsiang)

5. The Mantra or Tantric School (Mi-tsung or Chen-yen)

6. The Avatamsaka or Flower Adornment School (Hua-yen)

7. The T’ien-t’ai or White Lotus School (Fa-hua)

8. The Pure Land School (Ching t’u)

9. The Dhyana School (Ch’an)

Huayan school

The Huayan developed the doctrine of “interpenetration” or “coalescence” (Wylie: zung-’jug; Sanskrit: yuganaddha),[24][25] based on the Avataṃsaka Sūtra, a Mahāyāna scripture. It holds that all phenomena (Sanskrit: dharmas) are intimately connected and mutually arising. The doctrine of interpenetration influenced the Japanese monk Kūkai, who founded the Shingon school of Buddhism. Interpenetration and essence-function are mutually informing in the East Asian Buddhist traditions, especially the Korean Buddhist tradition.

The founding of the school is traditionally attributed to a series of five “patriarchs” who were instrumental in developing the school’s doctrines. These five are (Wade-Giles in brackets):

  1. Dushun (Tu-shun), 杜順, responsible for the establishment of Huayan studies as a distinct field;
  2. Zhiyan (Chih-yen), 智儼, considered to have established the basic doctrines of the sect;
  3. Fazang (Fa-tsang), 法藏, considered to have rationalized the doctrine for greater acceptance by society;
  4. Chengguan (Ch’eng-kuan), 澄觀, who, together with Zongmi, is understood to have further developed and transformed the teachings;
  5. Zongmi (Tsung-mi), 宗密, who was simultaneously a patriarch of the Chan tradition.

Tibetan Buddhism

The Tibetan tantra entitled the “All-Creating King” (Kunjed Gyalpo Tantra) also emphasizes how Buddhist realization lies beyond the range of discursive/verbal thought and is ultimately mysterious. The Tibetan expression of Buddhism (sometimes called Lamaism) is the form of Vajrayana Buddhism that developed in Tibet and the surrounding Himalayan region beginning in the 7th century CE.

Tibetan Buddhism incorporates Madhyamika and Yogacara philosophy, Tantric symbolic rituals, Theravadin monastic discipline and the shamanistic features of the local Tibetan religion Bön. Among its most unique characteristics are its system of reincarnating lamas and the vast number of deities in its pantheon.

The most famous Tibetan Buddhist text is the Bardo Thodol (“liberation through hearing in the intermediate state”), popularly known as the Tibetan Book of the Dead. The Bardo Thodol is a funerary text that describes the experiences of the soul during the interval between death and rebirth called bardo. It is recited by lamas over a dying or recently deceased person, or sometimes over an effigy of the deceased. It has been suggested that it is a sign of the influence of shamanism on Tibetan Buddhism.

 

 

 


Jainism – A Religion of Purely Human Origin

Dr. V.K. Maheshwari, M.A. (Socio, Phil), B.Sc., M.Ed., Ph.D.

Former Principal, K.L.D.A.V.(P.G) College, Roorkee, India

In ancient times Jainism was known by many names, such as the Saman tradition, the religion of Nirgantha, or the religion of Jin. A Jin is one who has conquered the inner enemies of worldly passions, such as desire, hatred, anger, ego, deceit and greed, by personal effort. By definition, a Jin is a human being like one of us, not a supernatural immortal or an incarnation of an almighty God. Jins are popularly viewed as Gods in Jainism. An infinite number of Jins have existed in the past. The Jins who established the religious order and revived the Jain philosophy at various times in the history of mankind are known as Tirthankars. The ascetic sage Rishabhadev was the first Tirthankar and Mahavir was the last Tirthankar of the spiritual lineage of the twenty-four Tirthankars in the current era.

Though it is widely believed that Vardhamana Mahavira (? 599 B.C. – 527 B.C. ?) founded Jainism, the Jain tradition maintains that he was the 24th Tirthankara of Jainism. Rishabhadeva was the first Tirthankara and Parshvanatha was the 23rd. Jainism, founded about the 6th century B.C. by Vardhamana Mahavira, whose line of great teachers is known either as Tirthankaras (Saviours) or as Jinas (Conquerors), rejects the idea of God as the creator of the world but teaches the perfectibility of humanity, to be accomplished through a strictly moral and ascetic life.

The two main sects of Jainism are:

(1)    Digambara

(2)     Shwetambara.

The Digambaras believe that a monk must give up all property, including clothes, and that only then can he attain moksha. They also deny that women can attain moksha.

Jainism is both a philosophy and a religion. It is a heterodox philosophy in the sense that it does not uphold the authority of the Vedas.  It is atheist and does not accept the existence of God. Jainism rejects the concept of a Supreme Being or the Brahman as the creator of the world. The Tirthankaras are the liberated souls. The followers offer prayers to the Tirthankaras.

Jains believe in the philosophy of karma, reincarnation of the worldly soul, hell and heaven as a punishment or reward for one’s deeds, and liberation (Nirvän or Moksha) of the self from life’s misery of birth and death, in a way similar to Hindu and Buddhist beliefs. The Jain philosophy holds that the universe and all its entities, such as soul and matter, are eternal (there is no beginning or end); no one has created them and no one can destroy them.

Jains do not believe that there is a supernatural power who does favor to us if we please him. Jains rely a great deal on self-efforts and self-initiative, for both – their worldly requirements and their salvation.

Jains believe that from eternity the soul is bound by karma and is ignorant of its true nature. It is due to karma that the soul migrates from one life cycle to another, and the ignorant soul continues to attract and bind itself with new karma.

To overcome the sufferings, Jainism addresses the path of liberation in a rational way.   It states that the proper Knowledge of reality, when combined with right Faith and right Conduct leads the worldly soul to liberation (Moksha or Nirvän).

With regards to truth, the Jain philosophy firmly states that the whole truth cannot be observed from a single viewpoint.  To understand the true nature of reality, it is essential to acknowledge the multiple perspectives of each entity, situation or idea. This concept is called Anekäntväd.

The concept of universal interdependence underpins the Jain theory of knowledge, known as Anekäntaväd or the doctrine of many aspects. In this ever-changing universe an infinite number of viewpoints exist. These viewpoints depend on the time, place, circumstances, and nature of individuals. Anekäntaväd means acceptance of all viewpoints, which are positive in nature. This is known as non-absolutism.

Seven-valued logic

As a consequence of their metaphysical liberalism, the Jain logicians developed a unique theory of seven-valued logic, according to which the three primary truth values are “true,” “false,” and “indefinite” and the other four values are “true and false,” “true and indefinite,” “false and indefinite,” and “true, false, and indefinite.” Every statement is regarded as having these seven values, considered from different standpoints.
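Read formally, the seven values are exactly the non-empty combinations of the three primary values “true,” “false,” and “indefinite.” A minimal Python sketch (illustrative only; the variable names are mine, not Jain terminology) makes the count explicit:

```python
from itertools import combinations

# The three primary truth values of the Jain seven-valued scheme.
PRIMARY = ("true", "false", "indefinite")

# Each of the seven values is a non-empty combination of the primary
# values, each component asserted from a different standpoint.
seven_values = [
    frozenset(combo)
    for size in range(1, len(PRIMARY) + 1)
    for combo in combinations(PRIMARY, size)
]

assert len(seven_values) == 7  # 3 singletons + 3 pairs + 1 triple
for value in seven_values:
    print(" and ".join(sorted(value)))
```

Three singletons, three pairs, and one triple: this is why the scheme has exactly seven values, no more and no fewer.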

This leads to the doctrine of Syädväd or relativity, which states that expression of truth is relative to different viewpoints (Nayas).  What is true from one point of view is open to question from another.  Absolute truth cannot be grasped from any particular viewpoint.  Absolute truth is the total sum of individual (partial) truths from many different viewpoints, even if they seem to contradict each other.

The ultimate goal of Jainism is for the soul to achieve liberation through understanding and realization. This is accomplished through the supreme ideals in the Jain religion of nonviolence, equal kindness, reverence for all forms of life, nonpossessiveness, and through the philosophy of non-absolutism (Anekäntväd).

In essence, Jainism addresses the true nature of reality.  Mahavir explained that all souls are equal in their potential for perfect knowledge, perfect vision, perfect conduct, unlimited energy and unobstructed bliss.

One can detach from karma and attain liberation by following the path of:

  • Right Faith (Samyak-darshan),
  • Right Knowledge (Samyak-jnän),
  • Right Conduct (Samyak-chäritra)

Jainism states that the universe is without a beginning or an end, and is everlasting and eternal.

Six fundamental entities (known as Dravya) constitute the universe.  Although all six entities are eternal, they continuously undergo countless changes (known as Paryäy). In these transformations nothing is lost or destroyed.

Lord Mahavir explained these phenomena in his Three Pronouncements known as Tripadi and proclaimed that Existence or Reality (also known as Sat) is a combination of appearance (Utpäda), disappearance (Vyaya), and persistence (Dhrauvya).

The Six Universal Substances or Entities (Dravyas) are as follows:

Jiva- The soul is the only living substance, which is consciousness and possesses knowledge. Similar to energy, the soul is invisible.  An infinite number of souls exist in the universe.  In its pure form each soul possesses infinite knowledge, infinite vision, perfect conduct, unobstructed bliss, and unlimited energy.

Pudgal- Matter is a nonliving substance, and possesses the characteristics such as touch, taste, smell, and color. Karma is considered matter in Jainism.  Extremely minute particles constitute karma.

Dharma- The medium of motion helps the soul and matter to migrate from one place to another in the universe.

Adharma- The medium of rest helps the soul and matter to come to rest.

Akash- Space provides room to all the other substances. Space is divided into two parts: Lokäkäsh and Alokäkäsh.

Kal-Time measures the changes in soul and matter.  The wheel of time incessantly rolls on in a circular fashion.

The Doctrine of karma

The doctrine of karma occupies a significant position in Jain philosophy. It provides a rational explanation to the apparently inexplicable phenomena of birth and death, happiness and misery, inequalities in mental and physical attainments, and the existence of different species of living beings. It explains that the principle governing the successions of life is karma.

The seven or nine tattvas or fundamentals are the single most important subject of Jain philosophy. They deal with the theory of karma, which provides the basis for the path of liberation.

The Seven or Nine Tattvas (Fundamentals) are:

1) Jiva

2) Ajiva

3) Äsrava

4) Bandha

5) Punya

6) Päpa

7) Samvara

8) Nirjarä

9) Moksha

In Jainism, Ahimsä supersedes all concepts, ideologies, rules, customs and practices, traditional or modern, eastern or western, political or economical, self-centered or social.

Ahimsä (non-violence), Anekäntväd (multiplicity of views) and Aparigraha (non-possessiveness) are the cardinal principles of Jainism

Aparigraha plays a significant role in stopping the physical form of violence, and the proper application of Anekäntväd stops the violence of thought and speech. Anekäntväd is also called the intelligent expression of Ahimsä. Non-violence at the center is guarded by truthfulness, non-stealing, celibacy and non-possessiveness.

Jainism is the first religion that has made vegetarianism a fundamental necessity for transforming consciousness. And they are right. Killing just to eat makes your consciousness heavy, insensitive; and you need a very sensitive consciousness – very light, very loving, very compassionate. It is difficult for a non-vegetarian to be compassionate; and without being compassionate and loving you will be hindering your own progress.

Rajneesh

 

 


Causal-comparative Research

Dr. V.K. Maheshwari, Former Principal

K.L.D.A.V (P. G) College, Roorkee, India

Causal-comparative research is an attempt to identify a causative relationship between an independent variable and a dependent variable. The relationship between the independent variable and the dependent variable is usually a suggested relationship (not proven) because the researcher does not have complete control over the independent variable.

The causal-comparative method seeks to establish causal relationships between events and circumstances. In other words, it finds out the causes of certain occurrences or non-occurrences. This is achieved by comparing the circumstances associated with observed effects and by noting the factors present in the instances where a given effect occurs and where it does not occur. The method is based on Mill's canons of agreement and difference, which state that the causes of a given observed effect may be ascertained by noting elements which are invariably present when the result is present and which are invariably absent when the result is absent.

Causal-comparative research scrutinizes the relationship among variables in studies in which the independent variable has already occurred, thus making the study descriptive rather than experimental in nature. Because the independent variable (the variable for which the researcher wants to suggest causation) has already occurred (e.g., two reading methods used by a school), the researcher has no control over it. That is, the researcher cannot assign subjects or teachers, determine the means of implementation, or even verify proper implementation.

Sometimes the variable either cannot be manipulated (e.g., gender) or should not be manipulated (e.g., who smokes cigarettes or how many they smoke). Still, the effect of the independent variable on one or more dependent variables is measured, and implications of possible causation are used to draw conclusions about the results.

Causal-comparative research is also known as "ex post facto" research (Latin for "after the fact"), since both the effect and the alleged cause have already occurred and must be studied in retrospect. In this type of research investigators attempt to determine the cause or consequences of differences that already exist between or among groups of individuals.

This method is used particularly in the behavioral sciences. In education, because it is impossible, impracticable, or unethical to manipulate such variables as aptitude, intelligence, personality traits, cultural deprivation, and teacher competence, or variables that might present an unacceptable threat to human beings, this method will continue to be used.

Causal-Comparative Research Facts

  • The independent variable is not manipulated by the researcher.
  • Does not definitively establish cause-effect relationships.
  • Generally includes two or more groups and at least one dependent variable.
  • The independent variable in causal-comparative studies is often referred to as the grouping variable.
  • The independent variable has already occurred or the groups are already formed.

The Nature of Causal-Comparative Research

Causal-comparative research, a common design in educational research studies, seeks to identify associations among variables. Relationships can be identified in a causal-comparative study, but causation cannot be fully established.

It attempts to determine cause and effect, but it is not as powerful as an experimental design. Causal-comparative research attempts to determine the cause or consequences of differences that already exist between or among groups of individuals.

Alleged cause and effect have already occurred and are being examined after the fact. The basic causal-comparative approach is to begin with a noted difference between two groups and then to look for possible causes for, or consequences of, this difference.

It is used when independent variables cannot or should not be examined using controlled experiments. When an experiment would take a considerable length of time and be quite costly to conduct, a causal-comparative study is sometimes used as an alternative.

The main purposes of causal-comparative research are:

  • Exploration of Effects
  • Exploration of Causes
  • Exploration of Consequences

Basic Characteristics of Causal-comparative research

In short, the basic characteristics of causal-comparative research can be summarized as follows:

  • Causal-comparative research attempts to determine reasons, or causes, for the existing condition.
  • Causal-comparative studies are also called ex post facto because the investigator has no control over the exogenous variable; whatever happened occurred before the researcher arrived.
  • Causal-comparative research is sometimes treated as a type of descriptive research since it describes conditions that already exist.
  • Causal-comparative studies attempt to identify cause-effect relationships; correlational studies do not.
  • Causal-comparative studies involve comparison; correlational studies involve relationship.
  • Causal-comparative studies typically involve two (or more) groups and one independent variable, whereas correlational studies typically involve two or more variables and one group.
  • In causal-comparative research the researcher attempts to determine the cause, or reason, for preexisting differences in groups of individuals.
  • It involves comparison of two or more groups on a single endogenous variable.
  • The characteristic that differentiates the groups is the exogenous variable.
  • The basic causal-comparative approach involves starting with an effect and seeking possible causes; this is sometimes referred to as retrospective causal-comparative research.
  • The variation that starts with causes and investigates effects is known as prospective causal-comparative research.
  • Retrospective causal-comparative studies are far more common in educational research.
  • We can never know with certainty that the two groups were exactly equal before the difference occurred.

Three important aspects of Causal Comparative method are:

1- Gathering data on factors invariably present in cases where the given result occurs, and discarding those elements which are not universally present.

2- Gathering data on factors invariably present in cases where the given effect does not occur.

3- Comparing the two sets of data, or in effect subtracting one from the other, to get at the causes responsible for the occurrence or otherwise of the effect.
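
As a sketch of that comparison logic (with hypothetical factor names), the canon of agreement and difference amounts to a set operation: keep the factors present in every case where the effect occurs and absent from every case where it does not.

```python
# Sketch of Mill's agreement/difference logic: find factors invariably
# present when the effect occurs and invariably absent when it does not.
# The case data below is hypothetical.
effect_cases = [
    {"small_class", "trained_teacher", "urban"},
    {"small_class", "trained_teacher", "rural"},
]
non_effect_cases = [
    {"large_class", "trained_teacher", "urban"},
]

always_present = set.intersection(*effect_cases)       # in every effect case
present_without_effect = set.union(*non_effect_cases)  # seen when effect absent
candidate_causes = always_present - present_without_effect

print(candidate_causes)  # {'small_class'}
```

Real studies, of course, work with messy evidence rather than clean sets; the sketch only illustrates the subtraction described in step 3.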

Examples of variables investigated in Causal-Comparative Research

  • Ability variables (achievement)
  • Family-related variables (SES)
  • Organismic variables (age, ethnicity, sex)
  • Personality variables (self-concept)
  • School-related variables (type of school, size of school)

Causal Comparative Research Procedure

Experimental, quasi-experimental, and causal-comparative research methods are frequently studied together because they all try to show cause-and-effect relationships among two or more variables. To conduct cause-and-effect research, one variable (or set of variables) is considered the causal or independent variable and the other the effect or dependent variable.

Causal-comparative research attempts to attribute a change in the effect variable(s) to the causal variable(s) when the causal variable(s) cannot be manipulated.

For example, if you wanted to study the effect of socioeconomic variables such as sex, race, ethnicity, or income on academic achievement, you might identify two existing groups of students: one group of high achievers and a second group of low achievers. You would then study the differences between the two groups as related to socioeconomic variables that already occurred or exist as the reason for the difference in achievement between the two groups. To establish a cause-effect relationship in this type of research you have to build a strongly persuasive logical argument. Because it deals with variables that have already occurred or exist, causal-comparative research is also referred to as ex post facto research.

The most common statistical techniques used in causal comparative research are analysis of variance and t-tests wherein significant differences in the means of some measure (i.e. achievement) are compared between or among two or more groups.
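
As an illustration of the t-test comparison described above (all scores are hypothetical), a minimal independent-samples t statistic can be computed directly:

```python
import statistics as st

# Hypothetical achievement scores for two existing groups
high = [85, 90, 88, 92, 87]
low = [70, 75, 72, 78, 74]

def t_statistic(a, b):
    """Independent-samples t statistic using a pooled variance estimate."""
    na, nb = len(a), len(b)
    va, vb = st.variance(a), st.variance(b)  # sample variances
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = (pooled * (1 / na + 1 / nb)) ** 0.5  # standard error of the difference
    return (st.mean(a) - st.mean(b)) / se

t = t_statistic(high, low)
print(round(t, 2))  # ≈ 8.04
```

A t value this large would be compared against a t distribution with n1 + n2 − 2 degrees of freedom to judge whether the group means differ significantly.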

Data Sources

  • Raw scores such as test scores
  • Measures such as grade point averages
  • Judgments and other assessments made of the subjects involved

Research Tools

  • Standardized tests
  • Surveys
  • Structured interviews

Procedural Considerations

  • The most important procedural consideration in doing causal-comparative research is to identify two or more groups which are demonstrably different in an educationally important way, such as high academic achievement versus low academic achievement. An attempt is then made to identify the cause which resulted in the differences in the effect (i.e. academic achievement). The cause (i.e. race, sex, income, etc.) has already had its effect and cannot be manipulated, changed or altered. In selecting subjects for causal-comparative research, it is most important that they be as identical as possible except for the difference (i.e. independent variable – race, sex, income) which may have caused the demonstrated effect (i.e. dependent variable – academic achievement).
  • Hypotheses are generally used
  • Statistics are extensively used in causal-comparative research and include tests of significance such as t-tests, chi-square, and analysis of variance, as well as measures of relationship such as the Pearson product-moment coefficient, the Spearman rank-order coefficient, the phi correlation coefficient, and regression.


Report Presentation

  • Reports tend to rely on quantitative presentations
  • Statistical data is almost always provided and supports the overall cause-effect argument.

CONDUCTING A CAUSAL-COMPARATIVE STUDY

  • Although the independent variable is not manipulated, there are control procedures that can be exercised to improve interpretation of results.

Design & Procedure

The researcher selects two groups of participants, often called the experimental and control groups but more accurately referred to as comparison groups.

Groups may differ in two ways:

  • One group possesses a characteristic that the other does not.
  • Each group has the characteristic, but to differing degrees or amounts.

Definition and selection of the comparison groups are very important parts of the causal-comparative procedure.

  • The independent variable differentiating the groups must be clearly and operationally defined, since each group represents a different population.
  • In causal-comparative research the random sample is selected from two already existing populations, not from a single population as in experimental research.
  • As in experimental studies, the goal is to have groups that are as similar as possible on all relevant variables except the independent variable.

The more similar the two groups are on such variables, the more homogeneous they are on everything but the independent variable.

CONTROL PROCEDURES

Lack of randomization, manipulation, and control are all sources of weakness in a causal-comparative study.

Random assignment is probably the single best way to try to ensure equality of the groups.

A problem is the possibility that the groups are different on some other important variable (e.g. gender, experience, or age) besides the identified independent variable.

MATCHING

  • Matching is another control technique.
  • If a researcher has identified a variable likely to influence performance on the dependent variable, the researcher may control for that variable by pair-wise matching of participants.
  • For each participant in one group, the researcher finds a participant in the other group with the same or very similar score on the control variable.
  • If a participant in either group does not have a suitable match, the participant is eliminated from the study.
  • The resulting matched groups are identical or very similar with respect to the identified extraneous variable.
  • The problem becomes serious when the researcher attempts to simultaneously match participants on two or more variables.
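
A minimal sketch of this pair-wise matching procedure, assuming hypothetical participants matched on an IQ-like control variable:

```python
# Pair-wise matching sketch (hypothetical data): for each participant in
# group A, find an unused group B participant with a similar control score;
# unmatched participants are eliminated from the study.
group_a = [("a1", 100), ("a2", 115), ("a3", 90)]
group_b = [("b1", 101), ("b2", 130), ("b3", 114)]

def match_pairs(a, b, tolerance=2):
    remaining = list(b)
    pairs = []
    for name_a, score_a in a:
        # closest still-available partner, accepted only if within tolerance
        best = min(remaining, key=lambda p: abs(p[1] - score_a), default=None)
        if best is not None and abs(best[1] - score_a) <= tolerance:
            pairs.append((name_a, best[0]))
            remaining.remove(best)
    return pairs

print(match_pairs(group_a, group_b))  # [('a1', 'b1'), ('a2', 'b3')]
```

Participants without a suitable match (here, within ±2 points) are dropped, which is exactly why matching can shrink the sample.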

COMPARING HOMOGENEOUS GROUPS OR SUBGROUPS

  • To control extraneous variables, compare groups that are homogeneous with respect to the extraneous variable.
  • This procedure may lower the number of participants and limits the generalizability of the findings.
  • A similar but more satisfactory approach is to form subgroups within each group that represent all levels of the control variable.
  • Each group might be divided into high, average, and low IQ subgroups.
  • The existence of comparable subgroups in each group controls for IQ.
  • In addition to controlling for the variable, this approach also permits the researcher to determine whether the independent variable affects the dependent variable differently at different levels of the control variable.
  • The best approach is to build the control variable right into the research design and analyze the results with a statistical technique called factorial analysis of variance.
  • A factorial analysis allows the researcher to determine the effect of the independent variable and the control variable on the dependent variable both separately and in combination.
  • It permits determination of whether there is interaction between the independent variable and the control variable such that the independent variable operates differently at different levels of the control variable.

ANALYSIS OF COVARIANCE

  • Analysis of covariance is used to adjust for initial group differences on variables used in causal-comparative and experimental research studies.
  • It adjusts scores on a dependent variable for initial differences on some other variable related to performance on the dependent variable.
  • Suppose we were doing a study to compare two methods, X and Y, of teaching fifth graders to solve math problems, and one group began with an initial advantage on a related variable.
  • Covariance analysis statistically adjusts the scores to remove that initial advantage, so that the results at the end of the study can be fairly compared as if the two groups had started equally.
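
A simplified sketch of this adjustment, using hypothetical pretest/posttest scores (a full ANCOVA pools the within-group slopes through sums of squares rather than averaging them):

```python
# Covariance-adjustment sketch (hypothetical scores): posttest means are
# adjusted for pretest (covariate) differences using a within-group
# regression slope, so groups are compared at the grand pretest mean.
pre_x, post_x = [60, 65, 70, 75], [70, 74, 79, 83]  # method X group
pre_y, post_y = [50, 55, 60, 65], [62, 66, 70, 75]  # method Y group

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

b = (slope(pre_x, post_x) + slope(pre_y, post_y)) / 2  # simplified pooled slope
grand_pre = (sum(pre_x) + sum(pre_y)) / (len(pre_x) + len(pre_y))

def adjusted_mean(pre, post):
    # shift the group's posttest mean to its expected value at the grand pretest mean
    return sum(post) / len(post) - b * (sum(pre) / len(pre) - grand_pre)

print(round(adjusted_mean(pre_x, post_x), 2))  # ≈ 72.15
print(round(adjusted_mean(pre_y, post_y), 2))  # ≈ 72.6
```

After adjustment the two posttest means are nearly equal, reflecting a comparison as if both groups had started at the same pretest level.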

DATA ANALYSIS AND INTERPRETATION

  • Analysis of data involves a variety of descriptive and inferential statistics.

-The most commonly used descriptive statistics are

(a)  the mean, which indicates the average performance of a group on some measure of a variable, and

(b)  the standard deviation, which indicates how spread out a set of scores is around the mean, that is, whether the scores are relatively homogeneous or heterogeneous around the mean.

-The most commonly used inferential statistics are

(a) the t test, used to determine whether the means of two groups are statistically different from one another;

(b) analysis of variance, used to determine if there is significant difference among the means of three or more groups; and

(c)  chi square, used to compare group frequencies, or to see if an event occurs more frequently in one group than another.
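
The chi-square comparison of group frequencies can be sketched as follows (the counts are hypothetical):

```python
# Chi-square sketch (hypothetical counts): does an event (dropout) occur
# more frequently in one group than in the other?
observed = {
    "group_a": {"dropout": 30, "stayed": 70},
    "group_b": {"dropout": 10, "stayed": 90},
}

def chi_square(table):
    """Chi-square statistic for a two-way frequency table."""
    rows = list(table.values())
    cols = list(rows[0].keys())
    total = sum(sum(r.values()) for r in rows)
    col_totals = {c: sum(r[c] for r in rows) for c in cols}
    chi2 = 0.0
    for r in rows:
        row_total = sum(r.values())
        for c in cols:
            expected = row_total * col_totals[c] / total  # under independence
            chi2 += (r[c] - expected) ** 2 / expected
    return chi2

print(round(chi_square(observed), 2))  # 12.5
```

The statistic is then compared against a chi-square distribution (here with 1 degree of freedom) to judge whether the frequency difference is significant.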

-Lack of randomization, manipulation, and control factors make it difficult to establish cause-effect relationships with any degree of confidence.

  • Reversed causality is also plausible and should be investigated.
  • For example, it is equally plausible that achievement affects self-concept as it is that self-concept affects achievement.

-The way to determine the correct order of causality-which variable caused which- is to determine which one occurred first.

  • The possibility of a third, common explanation is plausible in many causal-comparative situations.
  • One way to control for a potential common cause is to equate groups on that variable.
  • To investigate or control for alternative hypotheses, the researcher must be aware of them and must present evidence that they are not in fact the true explanation for the behavioral differences being investigated.

Types of Causal-Comparative Research Designs

There are two types of causal-comparative research designs:

Retrospective causal-comparative research

Retrospective causal-comparative research requires that a researcher begin investigating a particular question when the effects have already occurred, attempting to determine whether one variable may have influenced another.

Prospective causal-comparative research

Prospective causal-comparative research occurs when a researcher initiates a study beginning with the causes and sets out to investigate their effects. By far, retrospective causal-comparative designs are much more common than prospective designs.

Basic approach of causal- comparative research

The researcher observes that two groups differ on some variable (e.g., teaching style) and then attempts to find the reason for (or the results of) this difference.

1- Causal-comparative studies attempt to identify cause-effect relationships.

2- Causal-comparative studies typically involve two (or more) groups and one independent variable.

3- Causal-comparative studies involve comparison.

4- The basic causal-comparative approach involves starting with an effect and seeking possible causes (retrospective).

5- Retrospective causal-comparative studies are far more common in educational research.

Steps for conducting a Causal-comparative research

STEP ONE- Select a topic

To determine the problem, the researcher needs to focus on the question he or she intends to study. Researchers not only need to find a problem; they also need to determine, analyse and define the problem they will be dealing with.

Topics studied with causal-comparative designs typically catch a researcher's attention based on experiences or situations that have occurred in the real world.

The first step in formulating a problem in causal-comparative research is usually to identify and define the particular phenomena of interest, and then to consider possible causes for, or consequences of, these phenomena.

There are no limits to the kinds of instruments that can be used in a causal-comparative study.

The basic causal-comparative design involves selecting two groups that differ on a particular variable of interest and then comparing them on another variable or variables.

STEP TWO -Review of literature

Before trying to predict causal relationships, the researcher needs to study all the related or similar literature and relevant studies, which may help in further analysis, prediction and conclusions about the causal relationship between the variables under study.

Reviewing published literature on a specific topic of interest is especially important when conducting causal-comparative research, as such a review can assist a researcher in determining which extraneous variables may exist in the situations they are considering studying.

STEP THREE- Develop a Research hypothesis

The third step of the research is to propose the possible solutions or alternatives that might have led to the effect, listing the assumptions which will be the basis of the hypothesis and the procedure of the research. A hypothesis developed for causal-comparative research should identify the independent and dependent variables and describe the expected impact of the independent variable on the dependent variable.

STEP FOUR-Select participants

The important thing in selecting a sample for a causal-comparative study is to define carefully the characteristic to be studied and then to select groups that differ in this characteristic.

In causal-comparative research participants are already organized in groups. The researcher selects two groups of participants, often called the experimental and control groups but more accurately referred to as comparison groups, because one group does not possess a characteristic or experience possessed by the second group, or the two groups differ in the degree of a characteristic they share. The independent variable differentiating the groups must be clearly and operationally defined, since each group represents a different population.

STEP FIVE- Select instruments to measure  variables and collecting data

As with all types of quantitative research, causal-comparative research requires that the researcher select instruments that are reliable and allow valid conclusions to be drawn. The researcher also needs to select the scale or construct the instrument for collecting the required information. After a reliable and valid instrument has been selected, data for the study can be collected.

Causal Comparative: Data Collection

■ You select two groups that differ on the (exogenous) variable of interest.

■ Next, compare the two groups by looking at an endogenous variable that you think might be influenced by the exogenous variable.

■ Define clearly and operationally the exogenous variable.

■ Be sure the groups are similar on all other important variables.

Causal Comparative: Equating groups

■ Use subject matching

■ Use change scores; i.e., each subject as own control

■ Compare homogeneous groups

■ Use analysis of covariance
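
The change-score option above, which uses each subject as his or her own control, simply analyzes post-minus-pre differences (the scores here are hypothetical):

```python
# Change-score sketch (hypothetical data): each subject serves as their
# own control by analyzing post - pre differences instead of raw scores.
pre  = [55, 60, 62, 58]
post = [61, 64, 70, 63]

change = [b - a for a, b in zip(pre, post)]
print(change)                     # [6, 4, 8, 5]
print(sum(change) / len(change))  # mean change: 5.75
```

Analyzing the change scores rather than the raw posttest scores removes each subject's baseline level from the comparison.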

STEP SIX- Analyze and interpret results

Finally, the researcher needs to analyse, evaluate and interpret the information collected. It is on the basis of this step that the researcher selects the most plausible cause that might have led the effect to occur.

Typically in causal-comparative studies, data are reported as a mean or frequency for each group. Inferential statistics are then used to determine whether the means for the groups differ significantly from each other. Since causal-comparative research cannot definitively determine that one variable has caused something to occur, researchers should instead report the findings as a possible effect or possible cause of an event or occurrence.

Similarly, Jacobs et al. (1992: 81) also proposed that the following steps are involved in conducting an ex-post facto-research:

First Step: The first step should be to state the problem.

Second Step: Following this is the determination of the group to be investigated. Two groups of the population that differ with regard to the variable, should be selected in a proportional manner for the test sample.

Third step: The next step refers to the process of collection of data. Techniques like questionnaires, interviews, literature search etc. are used to collect the relevant information.

Fourth Step: The last step is the interpretation of the findings and the results. Based on the conclusions, the hypothesis is either accepted or rejected. It must be remembered that even though ex post facto research is a valid method for collecting information regarding an event that has already occurred, this type of research has shortcomings, and only partial control is possible.

Validity of the research

The researcher needs to validate the significance of their research. They need to be cautious regarding the extent to which their findings would be valid and significant and helpful in interpreting and drawing inferences from the obtained results.

Threats to Internal Validity in Causal-Comparative Research

Two weaknesses in causal-comparative research are lack of randomization and inability to manipulate an independent variable.

A major threat to the internal validity of a causal-comparative study is the possibility of a subject selection bias. The chief procedures that a researcher can use to reduce this threat include matching subjects on a related variable or creating homogeneous subgroups, and the technique of statistical matching.

Other threats to internal validity in causal-comparative studies include location, instrumentation, and loss of subjects. In addition, type 3 studies are subject to implementation, history, maturation, attitude of subjects, regression, and testing threats.

In short, the threats to internal validity in causal-comparative research can be summarized as follows:

  • Subject characteristics: the possibility exists that the groups are not equivalent on one or more important variables.
  • One way to control for an extraneous variable is to match subjects from the comparison groups on that variable.
  • Creating or finding homogeneous subgroups is another way to control for an extraneous variable.
  • A third way to control for an extraneous variable is to use the technique of statistical matching.

Other Threats

  • Attitude
  • Data collector bias
  • History
  • Instrument decay
  • Instrumentation
  • Location
  • Loss of subjects
  • Maturation
  • Pre-test/treatment interaction effect
  • Regression

Evaluating Threats to Internal Validity in Causal-Comparative Studies

Evaluating threats involves three steps, as shown below:

– Step 1: What specific factors are known to affect the variable on which groups are being compared, or may logically be expected to affect this variable?

– Step 2: What is the likelihood of the comparison groups differing on each of these factors?

– Step 3: Evaluate the threats on the basis of how likely they are to have an effect and plan to control for them.

Data Analysis

1- In a causal-comparative study, the first step is to construct frequency polygons.

2- Means and standard deviations are usually calculated if the variables involved are quantitative.

3- The most commonly used inference test is a t test for differences between means.


Limitations of use

1- The independent variable is a pre-existing attribute, like years of study, gender, or age.

2- There is no active variable, that is, a variable which the researcher can manipulate, like the length and number of study sessions.

3- Lack of randomization, manipulation and control factors make it difficult to establish cause-effect relationships with any degree of confidence.

Causal Comparative: Conclusions

■ Researchers often infer cause and effect relationships based on such studies.

■ Conditions necessary, but not necessarily sufficient, to infer a causal relationship:

• A statistical relationship exists that is unlikely to be attributable to chance variation.

• You have reason to believe the supposed exogenous variable preceded the endogenous.

• You can, with some degree of certainty, rule out other possible explanations.

Comparison of Causal-comparative method and Experimental method

-Neither method provides researchers with true experimental data

  • Causal-comparative studies help to identify variables worthy of experimental investigation.
  • Causal-comparative and experimental research both attempt to establish cause-effect relationships, and both involve comparisons.
  • Ethical considerations often prevent manipulation of a variable that could be manipulated but should not be: if the nature of the independent variable is such that it may cause physical or mental harm to participants, the ethics of research dictate that it should not be manipulated.
  • In experimental research the independent variable is manipulated by the researcher, whereas in causal-comparative research the groups are already formed and already differ on the independent variable.
  • Experimental studies are costly in more ways than one and should only be conducted when there is good reason to believe the effort will be fruitful.
  • In an experimental study the researcher selects a random sample, randomly divides the sample into two or more groups, assigns the groups to treatments, and carries out the study.
  • Independent variables in causal-comparative research cannot be manipulated, should not be manipulated, or simply are not manipulated but could be.
  • Individuals are not randomly assigned to treatment groups because they already were in groups before the research began.
  • It is not possible to manipulate organismic variables such as age or gender.
  • For example, students with high anxiety could be compared to students with low anxiety on attention span, or the difference in achievement between first graders who attended preschool and first graders who did not could be examined.

Despite many key advantages, causal comparative research does have some serious limitations that should also be kept in mind

Since both the independent and dependent variables have already occurred, it is not possible to determine which came first. It is also possible that some third variable, such as parental attitude, might be the main influence on both self-concept and achievement.

  • Causal-comparative studies do permit investigation of variables that cannot or should not be investigated experimentally, facilitate decision making, provide guidance for experimental studies, and are less costly on all dimensions.
  • Caution must be applied in interpreting results.
  • Caution must be exercised in attributing cause-effect relationships based on causal-comparative research.
  • In causal-comparative research the researcher cannot assign participants to treatment groups because they are already in those groups.
  • Only in experimental research does the researcher randomly assign participants to treatment groups.
  • Only in experimental research is the degree of control sufficient to establish cause-effect relationships.
  • Since the independent variable has already occurred, the same kinds of controls cannot be exercised as in an experimental study.
  • The alleged cause of an observed effect may in fact be the effect itself, or there may be a third variable at work.
  • A causal conclusion would not be warranted when it is not possible to establish whether, for example, self-concept precedes achievement or vice versa.

Differences and Similarities between Causal and Correlational Research

Causal-comparative research involves comparing (thus the “comparative” aspect) two groups in order to explain existing differences between them on some variable or variables of interest. Correlational research, on the other hand, does not look at differences between groups; rather, it looks for relationships within a single group. This is a big difference: with correlational data one is only entitled to conclude that a relationship of some sort exists, not that variable A caused some variation in variable B. In sum, causal-comparative research does allow one to make reasonable inferences about causation; correlational research does not.

Although some consider causal and correlational research as similar in nature, there exists a clear difference between these two types of research. Causal research is aimed at identifying the causal relationships among variables. Correlational research, on the other hand, is aimed at identifying whether an association exists or not.

Causal-comparative and correlational designs can be compared as follows:

  • Neither is experimental, and neither involves manipulation of a treatment variable.
  • Relationships are studied in both: the basic similarity between causal-comparative and correlational studies is that both seek to explore relationships among variables.
  • Both require caution in interpreting results, since causation is difficult to infer from either.
  • Both can support subsequent experimental research: when relationships are identified through causal-comparative (or correlational) research, they often are studied at a later time by means of experimental research.
  • They differ in focus: correlational studies examine the magnitude and direction of a relationship within a single group, while causal-comparative studies examine the difference between two groups.

The key difference between causal and correlational research is that while causal research can predict causality, correlational research cannot. Through this article let us examine the differences between causal and correlational research further.

Difference in meaning

Correlational research attempts to identify associations among variables. The key difference between correlational research and causal research is that correlational research cannot establish causality, although it can identify associations. Another difference between the two methods is that in correlational research the researcher does not attempt to manipulate the variables; he merely observes.

  • In terms of objective: Causal research aims at identifying causality among variables. This highlights that it allows the researcher to find the cause of a certain variable.
  • In terms of prediction: In causal research, the researcher usually measures the impact each variable has before predicting the causality. It is very important to pay attention to the variables because, in most cases, a lack of control over variables can lead to false predictions. This is why most researchers manipulate the research environment. In the social sciences especially, it is very difficult to conduct causal research because the environment can contain many variables that influence the causality and can go unnoticed.
  • In terms of definition: Causal research aims at identifying causality among variables; correlational research attempts to identify associations among variables.
  • In terms of nature: In causal research, the researcher identifies a cause and an effect; in correlational research, the researcher identifies an association.
  • In terms of manipulation: In causal research, the researcher manipulates the environment; in correlational research, the researcher does not manipulate the environment.
  • In terms of causality: Causal research can identify causality; correlational research cannot identify causality among variables.
  • In terms of subjects: In correlational research, subjects are not assigned to groups; usually there is only one group of subjects, and subjects are ideally selected at random for participation. In causal research there are control and experimental groups, but subjects are not randomly assigned to them because random assignment is not logistically possible; if possible, they should still be randomly selected for participation.
  • In terms of variables: An important difference between causal-comparative and correlational research is that causal-comparative studies involve two or more groups and one independent variable, while correlational studies involve two or more variables and one group. In correlational research, two variables (X and Y) are measured and the strength and direction of the relationship is determined. In causal research, subjects are in pre-formed groups but, unlike in correlational and differential research, an independent variable is manipulated and the groups are measured and compared on a dependent variable.
  • In terms of statistics: Correlational research typically uses the Pearson product-moment correlation (Pearson’s r); causal research uses chi-square, t-tests, and ANOVA.
  • In terms of conclusions: In correlational research, variable X co-varies with variable Y (i.e., there is a relationship between the two variables), but cause and effect cannot be proven. In causal research, while we may be able to draw some causal conclusions, we cannot do so with as much confidence as if we had used a true experimental design.
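The statistics contrast above can be illustrated for the correlational side. A minimal sketch, using hypothetical paired scores measured on a single group (both the variable names and the values are invented), of the Pearson product-moment correlation a correlational study would report:

```python
# Correlational sketch (hypothetical data): two variables, X and Y, are
# measured on one group, and the strength and direction of their
# relationship is summarized with Pearson's product-moment r.
from statistics import mean

study_hours = [2, 4, 6, 8, 10]      # X, measured on one group
exam_scores = [55, 60, 70, 75, 85]  # Y, measured on the same group

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # co-variation of X and Y
    sx = sum((a - mx) ** 2 for a in x) ** 0.5             # spread of X
    sy = sum((b - my) ** 2 for b in y) ** 0.5             # spread of Y
    return cov / (sx * sy)

r = pearson_r(study_hours, exam_scores)
print(f"Pearson's r: {r:.3f}")  # near +1.0: a strong positive association
# r only shows that X co-varies with Y; it does not show that studying
# longer *causes* higher scores.
```

The coefficient ranges from -1 (perfect negative relationship) through 0 (no relationship) to +1 (perfect positive relationship), which is exactly the "magnitude and direction" that distinguishes the correlational focus from the group-difference focus of causal-comparative work.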

Strengths and Limitations of Causal-comparative Research

No research method is perfect in itself. All methods have their strengths as well as weaknesses, and the same is true of ex-post facto research. The chief strength of ex-post facto research is that it is a very relevant method in those behavioural researches where the variables cannot be manipulated or altered.

Causal-Comparative Research has its limitations which should be recognized:

1. The independent variables cannot be manipulated. Subjects cannot be randomly, or otherwise, assigned to treatment groups.

2. Causes are often multiple and complex rather than single and simple.

For these reasons scientists are reluctant to use the expression cause and effect in studies in which the variables have not been carefully manipulated.

They prefer to observe that when variable A appears, variable B is consistently associated, possibly for reasons not completely understood or explained.

Strengths of Causal-comparative Research

Causal-comparative research is less time-consuming as well as economical. It gives the researcher a chance to analyse on the basis of his own judgement and then come out with the best possible conclusion. The weaknesses and limitations of ex-post facto research are: as discussed earlier, in causal-comparative research the researcher cannot manipulate the independent variables; the researcher cannot randomly assign the subjects to different groups; and the researcher may not be able to provide a reasonable explanation for the relationship between the independent and dependent variables under study.

While predicting causal relationships between variables, the researcher may fall prey to the bias called the post hoc fallacy. The post hoc fallacy is the human tendency to conclude, when two factors go together, that one is the cause and the other the effect. Because delinquency and parenthood go together, we may conclude that delinquency is the effect and parenthood the cause, whereas in reality the peer group to which the child belongs may be the actual reason.

It can therefore be concluded that ex-post facto research holds a very good position in the field of behavioural sciences. It is the only method which is retrospective in nature; that is, with its help one can trace history in order to analyse the cause/reason/action behind an effect/behaviour/event that has already occurred. Although it is a very significant method, it has certain limitations as well. The researcher cannot manipulate the cause in order to see the alterations in its effect, which again raises a question about the validity of the findings. Equally, the researcher cannot randomly assign the subjects to groups and has no control over the variables. Yet it is one of the very useful methods, as it has several implications in the field.

 

 

 

 


Literature Review in Behavioral Research

 

Dr. V.K. Maheshwari, Former Principal

K.L.D.A.V (P. G) College, Roorkee, India

Man is the only animal that can take advantage of the knowledge which has accumulated through the centuries. This fact is of particular importance in research, which operates as a continuous function of ever-closer approximation of the truth. The investigator can be sure that his problem does not exist in a vacuum, and that considerable work has already been done on topics which are directly related to his proposed investigation. The success of his efforts will depend in no small measure on the extent to which he capitalizes on the advances, both empirical and theoretical, made by previous researchers.

A literature review is an evaluative report of information found in the literature related to your selected area of study. The review should describe, summarise, evaluate and clarify this literature. It should give a theoretical base for the research and help you (the author) determine the nature of your research. Works which are irrelevant should be discarded and those which are peripheral should be looked at critically.

“In writing the literature review, the purpose is to convey to the reader what knowledge and ideas have been established on a topic, and what their strengths and weaknesses are. The literature review must be defined by a guiding concept (e.g. your research objective, the problem or issue you are discussing, or your argumentative thesis). It is not just a descriptive list of the material available, or a set of summaries.”

Many students are instructed, as part of their research program, to perform a literature review, without always understanding what a literature review is. Most are aware that it is a process of gathering information from other sources and documenting it, but few have any idea of how to evaluate the information, or how to present it.

A literature review discusses published information in a particular subject area, and sometimes information in a particular subject area within a certain time period.

A review may be a self-contained unit — an end in itself — or a preface to and rationale for engaging in primary research. A review is a required part of grant and research proposals and often a chapter in theses and dissertations.

A literature review is a critical and in depth evaluation of previous research. It is a summary and synopsis of a particular area of research, allowing anybody reading the paper to establish why you are pursuing this particular research program. A good literature review expands upon the reasons behind selecting a particular research question.

A literature review is the effective evaluation of selected documents on a research topic. A review may form an essential part of the research process or may constitute a research project in itself.

A literature review surveys books, scholarly articles, and any other sources relevant to a particular issue, area of research, or theory, and by so doing, provides a description, summary, and critical evaluation of these works in relation to the research problem being investigated. Literature reviews are designed to provide an overview of sources you have explored while researching a particular topic and to demonstrate to your readers how your research fits within a larger field of study.

A literature review surveys scholarly articles, books, dissertations, conference proceedings and other resources which are relevant to a particular issue, area of research, or theory and provides context for a dissertation by identifying past research. Research tells a story and the existing literature helps us identify where we are in the story currently. It is up to those writing a dissertation to continue that story with new research and new perspectives but they must first be familiar with the story before they can move forward.

A literature review is:

  • An integrated synthesis drawing upon a select list of academic sources (mainly journal articles) with a strong relation to the topic in question. It is a paper that includes a description AND a critical evaluation of past research.
  • Focused on a particular question or area of research.
  • The literature review is not merely a list of every item and resource with any possible relation to your topic, no matter how tenuous. It focuses on those resources and materials that are directly relevant to the addressing of your topic, and as such, is highly selective.
  • The literature review is not a widespread, comprehensive list of all materials pertaining to a particular discipline or field of inquiry. Rather, it’s narrowly focused to concentrate only on truly relevant materials.
  • The literature review is not a summary of available materials without any critical description or component, nor an annotated bibliography.

The differences between an annotated bibliography and a literature review:

  • Differences in PURPOSE:
    • A literature review makes a case for further investigation and research, highlighting gaps in knowledge and asking questions that need to be answered for the betterment of the discipline; as such, its contents are selected to make the case.
    • An annotated bibliography is a list of what’s available in a given field, accompanied by a short description. While it may feature a critical component, the criticism is generally directed at the quality of the work, rather than at its value in answering a particular question or buttressing an argument.
  • Differences in FORMAT:
    • A literature review is a prose document similar to a journal article or essay, not a list of citations and descriptions. It often has subsections that highlight themes within the literature review.
    • An annotated bibliography is simply that: a bibliography (a list of works or resources), accompanied by annotations. The annotations are usually short descriptions and a brief critical assessment of each work.

To avoid confusion, it should be clear that the literature review is not a chronological catalog of all of the sources, but an evaluation, integrating the previous research together, and also explaining how it integrates into the proposed research program. All sides of an argument must be clearly explained, to avoid bias, and areas of agreement and disagreement should be highlighted.

It is not a collection of quotes and paraphrasing from other sources. A good literature review should also have some evaluation of the quality and findings of the research.

A good literature review should avoid the temptation to oversell the importance of a particular research program. The fact that a researcher is undertaking the program speaks for its importance, and an educated reader may well be insulted if not allowed to judge that importance for themselves. Readers want to be reassured that it is a serious paper, not a pseudo-scientific sales advertisement.

Characteristics of Literature Review

A ‘good’ literature review…..

  • ….. has appropriate breadth and depth
  • ….. has clarity and conciseness
  • ….. is a critical evaluation
  • ….. is a synthesis of available research
  • ….. uses rigorous and consistent methods

 

A ‘poor’ literature review is…..

  • ….. confined to description
  • ….. confusing and longwinded
  • ….. constructed in an arbitrary way
  • ….. narrow and shallow
  • …..an annotated bibliography

Purposes of a Literature Review

Every piece of ongoing research needs to be connected with the work already done, to attain an overall relevance and purpose. The review of literature thus becomes a link between the research proposed and the studies already done. It tells the reader about aspects that have been already established or concluded by other authors, and also gives a chance to the reader to appreciate the evidence that has already been collected by previous research, and thus projects the current research work in the proper perspective.

Review of existing literature related to the research is an important part of any research paper, and essential to put the research work in overall perspective, connect it with earlier research work and build upon the collective intelligence and wisdom already accumulated by earlier researchers. It significantly enhances the value of any research paper.

Literature reviews provide you with a handy guide to a particular topic. If you have limited time to conduct research, literature reviews can give you an overview or act as a stepping stone. For professionals, they are useful reports that keep them up to date with what is current in the field. For scholars, the depth and breadth of the literature review emphasizes the credibility of the writer in his or her field. Literature reviews also provide a solid background for a research paper’s investigation. Comprehensive knowledge of the literature of the field is essential to most research papers.

Doing a careful and thorough literature review is essential when you write about research at any level. It is basic homework that is assumed to have been done vigilantly, and a given fact in all research papers. By providing one, usually offered in your introduction before you reach your thesis statement, you are telling your reader that you have not neglected the basics of research.

It not only surveys what research has been done in the past on your topic, but it also appraises, encapsulates, compares and contrasts, and correlates various scholarly books, research articles, and other relevant sources that are directly related to your current research. Given the fundamental nature of providing one, your research paper will not be considered seriously if it lacks a literature review at the beginning.

A literature review helps you create a sense of rapport with your audience or readers so they can trust that you have done your homework. As a result, they can give you credit for your due diligence: you have done your fact-finding and fact-checking mission, one of the initial steps of any research writing.

As a student, you may not be an expert in a given field; however, by listing a thorough review in your research paper, you are telling the audience, in essence, that you know what you are talking about. As a result, the more books, articles, and other sources you can list in the literature review, the more trustworthy your scholarship and expertise will be. Depending on the nature of your research paper, each entry can be long or short. For example, if you are writing a doctoral dissertation or master’s thesis, the entries can be longer than the ones in a term paper. The key is to stick to the gist of the sources as you synthesize the source in the review: its thesis, research methods, findings, issues, and further discussions mentioned in the source.

It helps you avoid incidental plagiarism. Imagine this scenario: you have written a research paper, an original paper in your area of specialization, without a literature review. When you are about to publish the paper, you learn that someone has already published a paper on a topic very similar to yours. Of course, you have not plagiarized anything from that publication; however, if and when you publish your work, people will be suspicious of its authenticity. They will question the significance of repeating similar research. In short, you could have spent the time, money, and other resources used on your research on something else. Had you prepared a literature review at the onset of your research, you could easily have avoided such a mishap. During the compilation of your review, you would have noticed that someone else had done similar research on your topic. Knowing this, you can tailor or tweak your own research so that it is not a mere rehashing of someone else’s original or old idea.

It sharpens your research focus. As you assemble outside sources, you will condense, evaluate, synthesize, and paraphrase the gist of those sources in your own words. Through this process of winnowing, you will be able to place the relevance of your research in the larger context of what other researchers have already done on your topic in the past.

The literature review will help you compare and contrast what you are doing in the historical context of the research, as well as how your research differs from or is original compared with what others have done, helping you rationalize why you need to do this particular research.

Perhaps you are using a new or different research method which has not been available before, allowing you to collect the data more accurately or conduct an experiment that is more precise and exact thanks to many innovations of modern technology. Thus, it is essential in helping you shape and guide your research in the direction you may not have thought of by offering insights and different perspectives on the research topic.

Structure of a Literature Review

Generally, a literature review consists of the aim, body, conclusion and references. In some scenarios, a literature review may be integrated into a research proposal. If this is the case, the sections of hypotheses and methods will be included. The sections of aim, hypothesis, and method should be approximately 10% of the length of the literature review.

Aim: The objective of the study; a short explanation of the study being undertaken

Body: Provide a critical review of the context of the research or project topic; an evaluation and analysis of existing knowledge; an outline of the theoretical framework; any areas of controversy; limitations of the literature; and the reasons and purpose of the study being undertaken

Hypothesis: Assumptions or theories that are going to be tested (This section is for the case when a literature review is integrated into a research proposal)

Method: Approaches for data collection and analysis (This section is for the case when a literature review is integrated into a research proposal)

Conclusion: A short paragraph to conclude some key points and arguments

The structure of a literature review should include the following:

  • An overview of the subject, issue, or theory under consideration, along with the objectives of the literature review,
  • Division of works under review into themes or categories [e.g. works that support a particular position, those against, and those offering alternative approaches entirely],
  • An explanation of how each work is similar to and how it varies from the others,
  • Conclusions as to which pieces are best considered in their argument, are most convincing of their opinions, and make the greatest contribution to the understanding and development of their area of research.

Fields of Knowledge for Literature Review

It is important to think of knowledge in a given field as consisting of three layers.

  • First, there are the primary studies that researchers conduct and publish.
  • Second, are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the primary studies.
  • Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally and become part of the lore of the field.

In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as “true” even though it often has only a loose relationship to the primary studies and secondary literature reviews. Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are a number of approaches you could adopt depending upon the type of analysis underpinning your study.

The evaluation of each work included in the literature review should consider:

Methodology – were the techniques used to identify, gather, and analyze the data appropriate to addressing the research problem? Was the sample size appropriate? Were the results effectively interpreted and reported?

Objectivity – is the author’s perspective even-handed or prejudicial? Is contrary data considered or is certain pertinent information ignored to prove the author’s point?

Persuasiveness – which of the author’s theses are most convincing or least convincing?

Provenance — what are the author’s credentials? Are the author’s arguments supported by evidence [e.g. primary historical material, case studies, narratives, statistics, recent scientific findings]?

Value — are the author’s arguments and conclusions convincing? Does the work ultimately contribute in any significant way to an understanding of the subject?

Development of the Literature Review

The development of the literature review can be done in four stages:

1.  Problem formulation — which topic or field is being examined and what are its component issues?

2.  Literature search — finding materials relevant to the subject being explored.

3.  Data evaluation – determining which literature makes a significant contribution to the understanding of the topic.

4.  Analysis and interpretation – discussing the findings and conclusions of pertinent literature.

Consider the following issues before writing the literature review:

Find Models

Use the exercise of reviewing the literature to examine how authors in your discipline or area of interest have composed their literature review sections. Read them to get a sense of the types of themes you might want to look for in your own research or to identify ways to organize your final review. The bibliography or reference sections of sources you’ve already read are also excellent entry points into your own research.

Narrow the Topic

The narrower your topic, the easier it will be to limit the number of sources you need to read in order to obtain a good survey of relevant resources. Your professor will probably not expect you to read everything that’s available about the topic, but you’ll make your job easier if you first limit the scope of the research problem. A good strategy is to begin by searching the HOMER catalog for books about the topic and review the table of contents for chapters that focus on specific issues. You can also review the indexes of books to find references to specific issues that can serve as the focus of your research.

Consider Whether Your Sources are Current

Some disciplines require that you use information that is as current as possible. This is particularly true in medicine and the sciences, where research becomes obsolete very quickly as new discoveries are made. However, when writing a review in the social sciences, a survey of the history of the literature may be required. In other words, a complete understanding of the research problem requires you to deliberately examine how knowledge and perspectives have changed over time. Sort through other current bibliographies or literature reviews in the field to get a sense of what your discipline expects. You can also use this method to explore what is considered by scholars to be a “hot topic” and what is not.

Ways to Organize a Literature Review

Chronology of Events: If your review follows the chronological method, you could write about the materials according to when they were published. This approach should be followed only if a clear path of research building on previous research can be identified and these trends follow a clear chronological order of development.

By Publication

Order your sources by publication chronology only if the order demonstrates a more important trend.

Thematic [“conceptual categories”]

Thematic reviews of literature are organized around a topic or issue, rather than the progression of time. However, progression of time may still be an important factor in a thematic review.

Methodological

A methodological approach focuses on the methods utilized by the researcher. A methodological scope will influence either the types of documents in the review or the way in which these documents are discussed.

Other Sections of Your Literature Review

Once you’ve decided on the organizational method for your literature review, the sections you need to include in the paper should be easy to figure out because they arise from your organizational strategy. In other words, a chronological review would have subsections for each vital time period; a thematic review would have subtopics based upon factors that relate to the theme or issue. However, sometimes you may need to add additional sections that are necessary for your study, but do not fit in the organizational strategy of the body. What other sections you include in the body is up to you but include only what is necessary for the reader to locate your study within the larger scholarship framework.

Here are examples of sections you may need to include depending on the type of review you write:

  • Current Situation: information necessary to understand the topic or focus of the literature review.
  • History: the chronological progression of the field, the literature, or an idea that is necessary to understand the literature review, if the body of the literature review is not already a chronology.
  • Questions for Further Research: What questions about the field has the review sparked? How will you further your research as a result of the review?
  • Selection Methods: the criteria you used to select (and perhaps exclude) sources in your literature review. For instance, you might explain that your review includes only peer-reviewed articles and journals.
  • Standards: the way in which you present your information.


Types of Literature Reviews

A few important types of literature review are:

  • Historical Review
  • Theoretical Review
  • Argumentative Review
  • Integrative Review
  • Methodological Review
  • Systematic Review

Historical Review

Few things rest in isolation from historical precedent. Historical literature reviews focus on examining research throughout a period of time, often starting with the first time an issue, concept, theory, or phenomenon emerged in the literature, and then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context, to show familiarity with state-of-the-art developments, and to identify likely directions for future research.

Theoretical Review

The purpose of this form is to examine the corpus of theory that has accumulated in regard to an issue, concept, theory, or phenomenon. The theoretical literature review helps to establish what theories already exist, the relationships between them, and to what degree the existing theories have been investigated, and to develop new hypotheses to be tested. Often this form is used to establish a lack of appropriate theories, or to reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or on a whole theory or framework.

Argumentative Review

This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews [see below].

Integrative Review

Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses or research problems. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication. This is the most common form of review in the social sciences.

Methodological Review

A review does not always focus on what someone said [findings], but on how they came to say it [method of analysis]. Reviewing methods of analysis provides a framework for understanding at different levels [i.e., theory, substantive fields, research approaches, and data collection and analysis techniques], and shows how researchers draw upon a wide variety of knowledge, ranging from the conceptual level to practical documents for use in fieldwork, in areas such as ontological and epistemological considerations, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis.

Systematic Review

This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyze data from the studies that are included in the review. The goal is to deliberately document, critically evaluate, and summarize scientifically all of the research about a clearly defined research problem. Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form.

Writing a Literature Review

Tips on Writing (Hart 1998)

  • Consistent Grammar- Use sentences and paragraphs with appropriate use of commas, colons and semi-colons. Incorrect use of punctuation can affect the meaning.
  • Paragraphs- Group sentences that express and develop one aspect of your topic. Use a new paragraph for another aspect or another topic.       
  • Sentences-Express one idea in a sentence. Ensure that all your sentences have a subject, verb and object.           
  • Transition Words- Use words that link paragraphs and which show contrast and development to your argument e.g. ‘hence’, ‘therefore’, ‘but’, ‘thus’, ‘as a result’, ‘in contrast’.

Pitfalls

  • Insufficient information
  • Irrelevant material
  • Limited range
  • Omission of contrasting view
  • Omission of recent work
  • Vagueness due to too much or inappropriate generalisations

Once you’ve settled on how to organize your literature review, you’re ready to write each section. When writing your review, keep in mind these issues.

Find a Focus

A literature review, like a term paper, is usually organized around ideas, not around the sources themselves, as an annotated bibliography would be. This means that you will not simply list your sources and go into detail about each one, one at a time. Instead, as you read widely but selectively in your topic area, consider what themes or issues connect your sources together.

Consider readers’ expectations

A literature review may not have a traditional thesis statement (one that makes an argument), but you do need to tell readers what to expect. Try writing a simple statement that lets the reader know your main organizing principle.

Be Selective

Select only the most important points in each source to highlight in the review. The type of information you choose to mention should relate directly to the research problem, whether it is thematic, methodological, or chronological. Related items that provide additional information but that are not key to understanding the research problem can be included in a list of further readings.

Use Evidence

A literature review section is, in this sense, just like any other academic research paper. Your interpretation of the available sources must be backed up with evidence [citations] that demonstrates that what you are saying is valid.

Keep Your Own Voice

While the literature review presents others’ ideas, your voice [the writer's] should remain front and center. Weave references to other sources into what you are writing but maintain your own voice by starting and ending the paragraph with your own ideas and wording.

Use Quotes Sparingly

Some short quotes are okay if you want to emphasize a point, or if what an author stated cannot be easily paraphrased. Sometimes you may need to quote certain terminology that was coined by the author, not common knowledge, or taken directly from the study. Do not use extensive quotes as a substitute for your own summary and interpretation of the literature.

Use Caution When Paraphrasing

When paraphrasing a source that is not your own, be sure to represent the author’s information or opinions accurately and in your own words. Even when paraphrasing an author’s work, you still must provide a citation to that work.

Summarize and Synthesize

Remember to summarize and synthesize your sources within each thematic paragraph as well as throughout the review. Recapitulate important features of a research study, but then synthesize it by rephrasing the study’s significance and relating it to your own work.

General Suggestions for Writing a Literature Review

Writing the introduction

In the introduction, you should:

  • Define or identify the general topic, issue, or area of concern, thus providing an appropriate context for reviewing the literature.
  • Establish the writer’s reason (point of view) for reviewing the literature; explain the criteria to be used in analyzing and comparing literature and the organization of the review (sequence); and, when necessary, state why certain literature is or is not included (scope).
  • Point out overall trends in what has been published about the topic; or conflicts in theory, methodology, evidence, and conclusions; or gaps in research and scholarship; or a single problem or new perspective of immediate interest.

Writing the body

In the body, you should:

  • Group research studies and other types of literature (reviews, theoretical articles, case studies, etc.) according to common denominators such as qualitative versus quantitative approaches, conclusions of authors, specific purpose or objective, chronology, etc.
  • Provide the reader with strong “umbrella” sentences at beginnings of paragraphs, “signposts” throughout, and brief “so what” summary sentences at intermediate points in the review to aid in understanding comparisons and analyses.
  • Summarize individual studies or articles with as much or as little detail as each merits according to its comparative importance in the literature, remembering that space (length) denotes significance.

Writing the conclusion

In the conclusion, you should:

  • Conclude by providing some insight into the relationship between the central topic of the literature review and a larger area of study such as a discipline, a scientific endeavor, or a profession.
  • Evaluate the current “state of the art” for the body of knowledge reviewed, pointing out major methodological flaws or gaps in research, inconsistencies in theory and findings, and areas or issues pertinent to future study.
  • Summarize major contributions of significant studies and articles to the body of knowledge under review, maintaining the focus established in the introduction.

These are the most common mistakes made in reviewing social science research literature.

  • Failing to describe the search procedures that were used in identifying the literature to review.
  • Including only research that validates assumptions, and ignoring contrary findings and alternative interpretations found in the literature.
  • Relying exclusively on secondary analytical sources rather than including relevant primary research studies or data.
  • Reporting isolated statistical results rather than synthesizing them with chi-square or meta-analytic methods.
  • Citing sources that do not clearly relate to the research problem.
  • Uncritically accepting another researcher’s findings and interpretations as valid, rather than critically examining all aspects of the research design and analysis.
  • Failing to take sufficient time to define and identify the most relevant sources related to the research problem.

Tips for Conducting a Literature Review

  • As a general rule, certainly for a longer review, each paragraph should address one point, and present and evaluate all of the evidence, from all of the differing points of view.
  • Evaluating the credibility of sources is one of the most difficult aspects, especially with the ease of finding information on the internet.
  • The only real way to evaluate is through experience, but there are a few tricks for evaluating information quickly, yet accurately.
  • There is such a thing as ‘too much information,’ and Google does not distinguish or judge the quality of results, only how search-engine-friendly a paper is. This is why it is still good practice to begin research in an academic library. Any journal found there can be regarded as safe and credible.
  • It is very difficult to judge the credibility of an online paper. The main thing is to structure the internet research as if it were on paper. Bookmark papers, which may be relevant, in one folder and make another subfolder for a ‘shortlist.’
  • The easiest way is to scan the work, using the abstract and introduction as guides. This helps to eliminate the non-relevant work and also some of the lower quality research. If it sets off alarm bells, there may be something wrong, and the paper is probably of a low quality.
  • Be very careful not to fall into the trap of rejecting research just because it conflicts with your hypothesis. Doing so will completely invalidate the literature review and potentially undermine the research project. Any research that may be relevant should be moved to the shortlist folder.
  • Critically evaluate the paper and decide if the research is of sufficient quality. The temptation is to include as many sources as possible, because it is easy to fall into the trap of thinking that a long bibliography equates to a good paper. A smaller number of quality sources is far preferable to a long list of irrelevant ones.
  • Check into the credentials of any source upon which you rely heavily for the literature review. The reputation of the University or organization is a factor, as is the experience of the researcher. If their name keeps cropping up, and they have written many papers, the source is usually OK.
  • Look for agreements. Good research should have been replicated by other independent researchers, with similar results, showing that the information is usually fairly safe to use.
  • Conducting a good literature review is a matter of experience, and even the best scientists have fallen into the trap of using poor evidence. This is not a problem, and is part of the scientific process; if a research program is well constructed, it will not affect the results.

How to design a good Literature Review Assessment?

  • Decide the length of the literature review.
  • Ensure clear assessment criteria and a marking scheme, covering grammar, spelling and other issues, are provided to the students.
  • Ensure students understand the meaning of plagiarism and how to reference a piece of work.
  • Ensure the students know the primary objective of a literature review.
  • Ensure the students understand that a literature review is not simply a summary
  • Literature reviews require practice; it is recommended that teachers provide the opportunities. Students may begin with small literature reviews on a narrower topic and build from it. Providing examples will be helpful.
  • Teachers have to decide if they would assess the quality of the resources/literatures chosen by students for the literature review

Steps in reviewing the literature

Reviewing the literature is an important part of the research process. Systematic steps to follow when evaluating the literature include critiquing the following: title, abstract, problem, purpose, theoretical or conceptual framework or model, implications for nursing, review of literature, hypotheses or research questions, variables, instruments, subjects, ethical concerns, research designs, results, conclusions, recommendations, and future studies.

Writing a literature review is often the most daunting part of writing an article, book, thesis, or dissertation. “The literature” seems (and often is) massive. Here is an efficient and effective way of writing a literature review.

1. Choose a topic. Define your research question.

Your literature review should be guided by a central research question.  Remember, it is not a collection of loosely related studies in a field but instead represents background and research developments related to a specific research question, interpreted and analyzed by you in a synthesized way.

Tips:

  • Begin writing down terms that are related to your question. These will be useful for searches later.
  • If you have the opportunity, discuss your topic with your professor.
  • Make sure your research question is not too broad or too narrow.  Is it manageable?

2. Decide on the scope of your review.

How many studies do you need to look at? How comprehensive should it be? How many years should it cover?

Tip:      This may depend on your assignment.  How many sources does the assignment require?

3. Select the databases you will use to conduct your searches.

Make a list of the databases you will search. Remember to include comprehensive databases.

Where to find databases:

Databases categorized by discipline

Librarians create research guides for all of the disciplines on campus! Take advantage of their expertise and see what discipline-specific search strategies they recommend!

4. Conduct your searches and find the literature. Keep track of your searches!

  • Ask your professor or a scholar in the field if you are missing any key works in the field.
  • Review the abstracts of research studies carefully. This will save you time.
  • Use the bibliographies and references of research studies you find to locate others.
  • Write down the searches you conduct in each database so that you may duplicate them later if you need to.
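
A simple way to keep track of your searches is a small log file. The sketch below is only an illustration (the databases, dates, queries and result counts are invented examples); it records each search so that the exact same queries can be re-run later:

```python
import csv
import os
import tempfile

# Hypothetical search log: each row records one database search so it can be duplicated later.
searches = [
    {"date": "2024-03-01", "database": "ERIC", "query": "literature review AND assessment", "results": "142"},
    {"date": "2024-03-01", "database": "PsycINFO", "query": "rating scale reliability", "results": "87"},
]

log_path = os.path.join(tempfile.gettempdir(), "search_log.csv")

with open(log_path, "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "database", "query", "results"])
    writer.writeheader()
    writer.writerows(searches)

# Reading the log back gives you the exact queries to re-run in each database.
with open(log_path, newline="") as f:
    for row in csv.DictReader(f):
        print(row["database"], "->", row["query"])
```

A spreadsheet or notebook serves the same purpose; what matters is that date, database, and exact query string are recorded together.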

5. Review the literature.

Some questions to help you analyze the research:

  • What was the research question of the study you are reviewing? What were the authors trying to discover?
  • Was the research funded by a source that could influence the findings?
  • What were the research methodologies? Analyze its literature review, the samples and variables used, the results, and the conclusions. Does the research seem to be complete? Could it have been conducted more soundly? What further questions does it raise?
  • If there are conflicting studies, why do you think that is?
  • How are the authors viewed in the field? Has this study been cited? If so, how has it been analyzed?

Tips:

  • Again, review the abstracts carefully.
  • Keep careful notes so that you may track your thought processes during the research process.

It is not necessary that a research paper reviews only the work that has led to established norms and principles. The literature that deserves to be included in the review may include opposing conclusions, parallel thinking, or even work that was done primarily for other purposes but which throws light on, or provides useful insights into, the current research area.

Key outcomes of Conducting a Literature Review

While there might be many reasons for conducting a literature review, following are four key outcomes of doing the review.

  • Assessment of the current state of research on a topic. This is probably the most obvious value of the literature review. Once a researcher has determined an area to work with for a research project, a search of relevant information sources will help determine what is already known about the topic and how extensively the topic has already been researched.
  • Identification of the experts on a particular topic. One of the additional benefits derived from doing the literature review is that it will quickly reveal which researchers have written the most on a particular topic and are, therefore, probably the experts on the topic. Someone who has written twenty articles on a topic or on related topics is more than likely more knowledgeable than someone who has written a single article. This same writer will likely turn up as a reference in most of the other articles written on the same topic. From the number of articles written by the author and the number of times the writer has been cited by other authors, a researcher will be able to assume that the particular author is an expert in the area and, thus, a key resource for consultation in the current research to be undertaken.
  • Identification of key questions about a topic that need further research. In many cases a researcher may discover new angles that need further exploration by reviewing what has already been written on a topic. For example, research may suggest that listening to music while studying might lead to better retention of ideas, but the research might not have assessed whether a particular style of music is more beneficial than another. A researcher who is interested in pursuing this topic would then do well to follow up existing studies with a new study, based on previous research, that tries to identify which styles of music are most beneficial to retention.
  • Determination of methodologies used in past studies of the same or similar topics. It is often useful to review the types of studies that previous researchers have launched as a means of determining what approaches might be of most benefit in further developing a topic. By the same token, a review of previously conducted studies might lend itself to researchers determining a new angle for approaching research.

Advantages of Literature Reviews

  • Literature reviews assess different cognitive levels and enhance analytical skills by requiring students to identify differences between previous work and their own.
  • Literature reviews encourage deep learning, and provide an efficient way to assess students on their knowledge and understanding of a particular topic.
  • Literature reviews give a conceptual framework for research or project planning, because students gain a clear idea of what has already been done in the field. This helps students build new research topics on the basis of the existing literature.
  • Searching for resources (e.g. through online databases) is time- and cost-efficient.
  • With proper supervision and practice, graduate attributes such as project management and life-long learning can be learnt and assessed.

Disadvantages of Literature Reviews

  • It is time consuming for the teachers to correct and provide feedback.
  • Literature reviews require good supervision from teachers particularly for students who are inexperienced in this type of assessment.
  • Sometimes, students may not have access to certain information. They may spend unnecessary time and resources on searching for the reviews.

Upon completion of the literature review, a researcher should have a solid foundation of knowledge in the area and a good feel for the direction any new research should take. Should any additional questions arise during the course of the research, the researcher will know which experts to consult in order to quickly clear up those questions.

Rating Scales in Behavioral Research

Dr. V.K. Maheshwari, Former Principal

K.L.D.A.V (P. G) College, Roorkee, India

Surveys are consistently used to measure quality. For example, surveys might be used to gauge customer perception of product quality or quality performance in service delivery.

Statisticians have generally grouped data collected from these surveys into a hierarchy of four levels of measurement:

1. Nominal data: The weakest level of measurement, representing categories without numerical representation.

2. Ordinal data: Data in which an ordering or ranking of responses is possible, but no measure of distance is possible.

3. Interval data: Generally integer data in which both ordering and distance measurement are possible.

4. Ratio data: Data in which meaningful ordering, distance, decimals and fractions between variables are possible.
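
The distinction matters in practice because each level supports different summary statistics. A minimal Python sketch (the data values are invented for illustration):

```python
from statistics import mean, median

nominal  = ["red", "blue", "red"]   # categories only: counts and the mode make sense
ordinal  = [1, 3, 2, 3]             # ranks: the median respects order without assuming distance
interval = [20, 25, 30]             # e.g. temperature in Celsius: differences are meaningful
ratio    = [1.5, 3.0, 4.5]          # e.g. weight in kg: a true zero makes ratios meaningful

print(max(set(nominal), key=nominal.count))  # mode: "red"
print(median(ordinal))                       # 2.5
print(mean(interval))                        # 25: the mean assumes equal spacing
print(ratio[1] / ratio[0])                   # 2.0: "twice as heavy" only works for ratio data
```

Each statistic is legitimate at its own level and above, but not below: a mean of nominal codes, for instance, is meaningless.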

Data analyses using nominal, interval and ratio data are generally straightforward and transparent. Analyses of ordinal data, particularly as they relate to Likert or other scales in surveys, are not. This is not a new issue: the adequacy of treating ordinal data as interval data has long been controversial in survey analyses across a variety of applied fields.

An underlying reason for analyzing ordinal data as interval data might be the contention that parametric statistical tests (based on the central limit theorem) are more powerful than nonparametric alternatives. Also, conclusions and interpretations of parametric tests might be considered easier to interpret and provide more information than nonparametric alternatives.

However, treating ordinal data as interval (or even ratio) data without examining the values of the dataset and the objectives of the analysis can both mislead and misrepresent the findings of a survey. To examine the appropriate analyses of scalar data, and when it is preferable to treat ordinal data as interval data, we will concentrate on Likert scales.
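
A small Python sketch (with invented responses) shows one way that treating Likert codes as interval data can hide real differences between groups:

```python
from collections import Counter
from statistics import mean

# Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
polarized = [1, 1, 1, 5, 5, 5]   # respondents split between the two extremes
neutral   = [3, 3, 3, 3, 3, 3]   # respondents uniformly neutral

# Treated as interval data, the two groups look identical:
print(mean(polarized), mean(neutral))   # both means are 3.0

# The ordinal view (the full response distribution) reveals the difference:
print(Counter(polarized))
print(Counter(neutral))
```

Identical means here conceal completely different opinion structures, which is exactly the kind of misrepresentation the paragraph above warns about.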

What is a rating scale?

A rating scale is a tool used for assessing the performance of tasks, skill levels, procedures, processes, qualities, quantities, or end products, such as reports, drawings, and computer programs. These are judged at a defined level within a stated range. Rating scales are similar to checklists except that they indicate the degree of accomplishment rather than just yes or no.

Rating scales list performance statements in one column and the range of accomplishment in descriptive words, with or without numbers, in other columns. These other columns form “the scale” and can indicate a range of achievement, such as from poor to excellent, never to always, beginning to exemplary, or strongly disagree to strongly agree.

What’s the definition of Likert scale?

According to Wikipedia: “A rating scale is a set of categories designed to elicit information about a quantitative or a qualitative attribute. In the social sciences, common examples are the Likert scale and 1-10 rating scales in which a person selects the number which is considered to reflect the perceived quality of a product.”

Rating scales are used quite frequently in survey research, and there are many different kinds of rating scales. A typical rating scale asks subjects to choose one response category from several arranged in hierarchical order. Either each response category is labeled, or else only the two endpoints of the scale are “anchored.”

By definition, rating scales are survey questions that offer a range of answer options — from one extreme attitude to another, like “extremely likely” to “not at all likely.” Typically, they include a moderate or neutral midpoint.

Likert scales (named after their creator, American social scientist Rensis Likert) are quite popular because they are one of the most reliable ways to measure opinions, perceptions, and behaviors.

Compared to binary questions, which give you only two answer options, Likert-type questions will get you more granular feedback about whether your product was just “good enough” or (hopefully) “excellent.” They can help decide whether a recent company outing left employees feeling “very satisfied,” “somewhat dissatisfied,” or maybe just neutral.

This method will let you uncover degrees of opinion that could make a real difference in understanding the feedback you’re getting. And it can also pinpoint the areas where you might want to improve your service or product.

Characteristics of rating scales

Rating scales should:

  • have criteria for success based on expected outcomes
  • have clearly defined, detailed statements; this gives more reliable results. For assessing end products, it can sometimes help to have a set of photographs or real samples that show the different levels of achievement, so students can visually compare their work to the standards provided.
  • have statements that are chunked into logical sections or flow sequentially
  • include clear wording with numbers when a number scale is used. As an example, when the performance statement describes a behaviour or quality, “1 = poor” through to “5 = excellent” is better than “1 = lowest” through to “5 = highest”, or simply 1 through 5. The range of numbers should be the same for all rows within a section (such as all being from 1 to 5), and should always increase or always decrease; for example, if the last number is the highest achievement in one section, the last number should be the highest achievement in the other sections.
  • have specific, clearly distinguishable terms. Using “good” then “excellent” is better than “good” then “very good”, because it is hard to distinguish between “good” and “very good”. Some terms, such as “often” or “sometimes”, are less clear than numbers, such as “80% of the time”.
  • be reviewed by other instructors
  • be short enough to be practical
  • have space for other information such as the student’s name, date, course, examiner, and overall result
  • highlight critical tasks or skills
  • indicate levels of success required before proceeding further, if applicable
  • sometimes have a column or space for providing additional feedback

Basic goals for Scale points and their labels

Survey data are only as good as the questions asked and the way we ask them. To that end, let’s talk rating scales.

To get started, let’s outline five basic goals for scale points and their labels:

  • It should be easy to interpret the meaning of each scale point
  • Responses to the scale should be reliable, meaning that if we asked the same question again, each respondent should provide the same answer
  • The meaning of each scale point should be interpreted identically by all respondents
  • The scale should include enough points to differentiate respondents from one another as much as validly possible
  • The scale’s points should map as closely as possible to the underlying idea (construct) of the scale

Number of scale points to include

The number of scale points depends on what sort of question you’re asking. If you’re dealing with an idea or construct that ranges from positive to negative – think satisfaction levels (these are known as bipolar constructs) – then you’re going to want a 7-point scale that includes a middle or neutral point. In practice, this means the response options for a satisfaction question should look like this:

For example: Completely dissatisfied, Mostly dissatisfied, Somewhat dissatisfied, Neither satisfied nor dissatisfied, Somewhat satisfied, Mostly satisfied, Completely satisfied

If you’re dealing with an idea or construct that ranges from zero to positive – think effectiveness (these are known as unipolar constructs) – then you’ll go with a 5-point scale. The response options for this kind of question would look like this:

For example: Not at all effective, Slightly effective, Moderately effective, Very effective, Extremely effective

Since it doesn’t make sense to have negative effectiveness, this kind of five-point scale is the best practice.

Always measure bipolar constructs with bipolar scales and unipolar constructs with unipolar scales.

In short, the goal is to make sure respondents can answer in a way that allows them to differentiate themselves as much as is validly possible, without providing so many points that the measure becomes noisy or unreliable. Even on an 11-point (0-10) scale, respondents start to have difficulty reliably placing themselves: 3 isn’t so different from 4, and 6 isn’t so different from 7.
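
These rules can be captured as reusable scale templates rather than ad-hoc scales. A minimal sketch (the labels are drawn from the common scales listed further down; the helper function is illustrative):

```python
# Bipolar constructs get 7 points with a neutral midpoint;
# unipolar constructs get 5 points running from zero to maximum.
SATISFACTION_7 = [
    "Completely dissatisfied", "Mostly dissatisfied", "Somewhat dissatisfied",
    "Neither satisfied nor dissatisfied",
    "Somewhat satisfied", "Mostly satisfied", "Completely satisfied",
]
IMPORTANCE_5 = [
    "Not at all important", "Slightly important", "Moderately important",
    "Very important", "Extremely important",
]

def midpoint(scale):
    """Return the middle label of an odd-length scale."""
    return scale[len(scale) // 2]

print(len(SATISFACTION_7), "-", midpoint(SATISFACTION_7))
print(len(IMPORTANCE_5), "-", midpoint(IMPORTANCE_5))
```

Reusing fixed templates like these keeps every question on the survey consistent with the bipolar-7 / unipolar-5 rule above.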

Middle Alternative

There is a general complaint that including middle alternatives allows respondents to avoid taking a position. Some even mistakenly assume that midpoint responses are disguised “Don’t knows,” or that respondents are satisficing when they provide midpoint responses.

However, research suggests that midpoint responses do not necessarily mean that respondents don’t know or are avoiding making a choice. In fact, research indicates that if respondents who select the midpoint were forced to choose a side, they would not necessarily answer the question in the same way as other respondents who opted to choose a side.

This suggests that middle alternatives should be provided and that they may be validly and reliably chosen by respondents. Forcing respondents to take a side may introduce unwanted variance or bias to the data.

Labeling Response options

Some people prefer to only label the end-points. Others will also label the midpoint. Some people label with words and others label numerically. What’s right?

The most accurate surveys will have a clear and specific label that indicates exactly what each point means. Going back to the goals of scale points and their labels, we want all respondents to easily interpret the meaning of each scale point and for there to be no room for different interpretations between respondents. Labels are key to avoiding ambiguity and respondent confusion.

This means that partially labeled scales may not perform as well as fully labeled scales, and that numbers should only be used for scales collecting numeric data (not rating scales).

Common Rating Scales

One of the most frequent mistakes seen when reviewing questionnaires is a poorly written scale. Novice survey authors often create their own scale rather than using an appropriate common scale. It is hard to write a good scale; you are usually better off rewording your question slightly so that you can use one of the following.

  • Acceptability Not at all acceptable, Slightly acceptable, Moderately acceptable, Very acceptable, Completely acceptable
  • Agreement Completely disagree, Disagree, Somewhat disagree, Neither agree nor disagree, Somewhat agree, Agree, Completely agree
  • Appropriateness Absolutely inappropriate, Inappropriate, Slightly inappropriate, Neutral, Slightly appropriate, Appropriate, Absolutely appropriate
  • Awareness Not at all aware, Slightly aware, Moderately aware, Very aware, Extremely aware
  • Beliefs Not at all true of what I believe, Slightly true of what I believe, Moderately true of what I believe, Very true of what I believe, Completely true of what I believe
  • Concern Not at all concerned, Slightly concerned, Moderately concerned, Very concerned, Extremely concerned
  • Familiarity Not at all familiar, Slightly familiar, Moderately familiar, Very familiar, Extremely familiar
  • Frequency Never, Rarely, Sometimes, Often, Always
  • Importance Not at all important, Slightly important, Moderately important, Very important, Extremely important
  • Influence Not at all influential, Slightly influential, Moderately influential, Very influential, Extremely influential
  • Likelihood Not at all likely, Slightly likely, Moderately likely, Very likely, Completely likely
  • Priority Not a priority, Low priority, Medium priority, High priority, Essential
  • Probability Not at all probable, Slightly probable, Moderately probable, Very probable, Completely probable
  • Quality Very poor, Poor, Fair, Good, Excellent
  • Reflect Me Not at all true of me, Slightly true of me, Moderately true of me, Very true of me, Completely true of me
  • Satisfaction (bipolar) Completely dissatisfied, Mostly dissatisfied, Somewhat dissatisfied, Neither satisfied nor dissatisfied, Somewhat satisfied, Mostly satisfied, Completely satisfied
  • Satisfaction (unipolar) Not at all satisfied, Slightly satisfied, Moderately satisfied, Very satisfied, Completely satisfied

 

This list follows Krosnick’s advice to use 5-point unipolar scales and 7-point bipolar scales.

 

Types of rating scales

Sometimes more than one rating-scale question is required to measure an attitude or perception, due to the requirement for statistical comparisons between the categories in the polytomous Rasch model for ordered categories. In terms of classical test theory, more than one question is required to obtain an index of internal reliability such as Cronbach’s alpha, which is a basic criterion for assessing the effectiveness of a rating scale and, more generally, a psychometric instrument.

All rating scales can be classified into one of three types:

  1. Numeric rating scale
  2. Graphic rating scale
  3. Descriptive graphic rating scale

Considerations for numeric rating scales

If you assign numbers to each column for marks, consider the following:

•             What should the first number be? If 0, does the student deserve 0%? If 1, does the student deserve 20% (assuming 5 is the top mark) even if he/she has done extremely poorly?

•             What should the second number be? If 2 (assuming 5 is the top mark), does the person really deserve a failing mark (40%)? This would mean that the first two or three columns represent different degrees of failure.

•             Consider variations in the value of each column. Assuming 5 is the top mark, the columns could be valued at 0, 2.5, 3, 4, and 5.

•             Consider the weighting for each row. For example, for rating a student’s report, should the introduction, main body, and summary be proportionally rated the same? Perhaps, the main body should be valued at five times the amount of the introduction and summary. A multiplier or weight can be put in another column for calculating a total mark in the last column.
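The weighting idea above can be sketched in code. This is a minimal illustration, assuming a hypothetical rubric where each row gets a 0–5 rating and a weight; the row names and values are invented for the example:

```python
# Hypothetical weighted rubric: each row has a 0-5 rating and a weight.
rubric = {
    "introduction": {"rating": 4, "weight": 1},
    "main body":    {"rating": 3, "weight": 5},  # main body worth 5x the others
    "summary":      {"rating": 5, "weight": 1},
}

def total_mark(rubric, top=5):
    """Percentage mark: weighted ratings divided by the weighted maximum."""
    earned = sum(row["rating"] * row["weight"] for row in rubric.values())
    possible = sum(top * row["weight"] for row in rubric.values())
    return 100 * earned / possible

print(round(total_mark(rubric), 1))  # (4*1 + 3*5 + 5*1) / (5*7) * 100 = 68.6
```

Changing the weights (or the per-column values) changes the total without touching the ratings themselves, which is exactly the trade-off the bullet points above ask you to consider.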

Consider having students create the rating scale. This can get them to think deeply about the content.

Graphic Rating Scale

Graphic Rating Scale is a type of performance appraisal method. In this method, traits or behaviours that are important for effective performance are listed out, and each person is rated against these traits.

The method is easy to understand and user friendly. It standardizes the comparison criteria, and because behaviours are quantified, the appraisal process becomes easier.

Ratings are usually on a scale of 1-5, 1 being Non-existent, 2 being Average, 3 being Good, 4 being Very Good and 5 being Excellent.

Characteristics of a good Graphic Rating scale are:

• Performance evaluation measures against which an employee has to be rated must be well defined.

• Scales should be behaviorally based.

• Ambiguous behaviour definitions, such as loyalty, honesty, etc., should be avoided.

• Ratings should be relevant to the behaviour being measured.

But in this scale, rating behaviours may or may not be accurate, as the perception of behaviours can vary between judges. Rating against labels like “excellent” and “poor” is difficult, at times even tricky, because the scale does not exemplify the ideal behaviours required for achieving a rating. Perception errors such as the halo effect, the recency effect, and stereotyping can cause incorrect ratings.

 

Some data are measured at the ordinal level. Numbers indicate the relative position of items, but not the magnitude of difference. Attitude and opinion scales are usually ordinal; one example is a Likert response scale.

Some data are measured at the interval level. Numbers indicate the magnitude of difference between items, but there is no absolute zero point. A good example is a Fahrenheit/Celsius temperature scale where the differences between numbers matter, but placement of zero does not.

Some data are measured at the ratio level. Numbers indicate magnitude of difference and there is a fixed zero point. Ratios can be calculated. Examples include age, income, price, costs, sales revenue, sales volume and market share.

Likert scales are a common ratings format for surveys. Respondents rank quality from high to low or best to worst using five or seven levels.

The Likert Scale

A Likert scale (pronounced /ˈlɪkərt/, also /ˈlaɪkərt/) is a psychometric scale commonly used in questionnaires, and is the most widely used scale in survey research, such that the term is often used interchangeably with rating scale even though the two are not synonymous. When responding to a Likert questionnaire item, respondents specify their level of agreement to a statement. The scale is named after its inventor, psychologist Rensis Likert.

Rensis Likert, the developer of the scale, pronounced his name ‘lick-urt’ with a short “i” sound. It has been claimed that Likert’s name “is among the most mispronounced in [the] field.” Although many people use the long “i” variant (‘lie-kurt’), those who attempt to stay true to Dr. Likert’s pronunciation use the short “i” pronunciation (‘lick-urt’).

Sample question presented using a five-point Likert item

An important distinction must be made between a Likert scale and a Likert item. The Likert scale is the sum of responses on several Likert items. Because Likert items are often accompanied by a visual analog scale (e.g., a horizontal line, on which a subject indicates his or her response by circling or checking tick-marks), the items are sometimes called scales themselves. This is the source of much confusion; it is better, therefore, to reserve the term Likert scale to apply to the summated scale, and Likert item to refer to an individual item.

A Likert item is simply a statement which the respondent is asked to evaluate according to any kind of subjective or objective criteria; generally the level of agreement or disagreement is measured. Often five ordered response levels are used, although many psychometricians advocate using seven or nine levels; a recent empirical study found that a 5- or 7-point scale may produce slightly higher mean scores relative to the highest possible attainable score, compared to those produced from a 10-point scale, and this difference was statistically significant. In terms of the other data characteristics, there was very little difference among the scale formats in terms of variation about the mean, skewness or kurtosis.

The format of a typical five-level Likert item is:

1. Strongly disagree

2. Disagree

3. Neither agree nor disagree

4. Agree

5. Strongly agree

Likert scaling is a bipolar scaling method, measuring either positive or negative response to a statement. Sometimes a four-point scale is used; this is a forced choice method, since the middle option of “Neither agree nor disagree” is not available.

Likert scales may be subject to distortion from several causes. Respondents may avoid using extreme response categories (central tendency bias); agree with statements as presented (acquiescence bias); or try to portray themselves or their organization in a more favorable light (social desirability bias). Designing a scale with balanced keying (an equal number of positive and negative statements) can obviate the problem of acquiescence bias, since acquiescence on positively keyed items will balance acquiescence on negatively keyed items, but central tendency and social desirability are somewhat more problematic.

 

How to write Likert scale survey questions

Be accurate. Likert-type questions must be phrased correctly in order to avoid confusion and increase their effectiveness. If you ask about satisfaction with the service at a restaurant, do you mean the service from valets, the waiters, or the host? All of the above? Are you asking whether the customer was satisfied with the speed of service, the courteousness of the attendants, or the quality of the food and drinks? The more specific you are, the better your data will be.

Be careful with adjectives. When you’re using words to ask about concepts in your survey, you need to be sure people will understand exactly what you mean. Your response options need to include descriptive words that are easily understandable. There should be no confusion about which grade is higher or bigger than the next: Is “pretty much” more than “quite a bit”? It’s advisable to start from the extremes (“extremely,” “not at all”), set the midpoint of your scale to represent moderation (“moderately”) or neutrality (“neither agree nor disagree”), and then use very clear terms, such as “very” and “slightly,” for the rest of the options.

Bipolar or unipolar? Do you want a question where attitudes can fall on two sides of neutrality–“love” vs. “hate”– or one where the range of possible answers goes from “none” to the maximum? The latter, a unipolar scale, is preferable in most cases. For example, it’s better to use a scale that ranges from “extremely brave” to “not at all brave,” rather than a scale that ranges from “extremely brave” to “extremely shy.” Unipolar scales are just easier for people to think about, and you can be sure that one end is the exact opposite of the other, which makes it methodologically more sound as well.

Better to ask. Statements carry an implicit risk: Most people will tend to agree rather than disagree with them because humans are mostly nice and respectful. (This phenomenon is called acquiescence response bias.) It’s more effective, then, to ask a question than to make a statement.

5 extra tips on how to use Likert scales

  1. Keep it labeled. Numbered scales that only use numbers instead of words as response options may give survey respondents trouble, since they might not know which end of the range is positive or negative.
  2. Keep it odd. Scales with an odd number of values will have a midpoint. How many options should you give people? Respondents have difficulty defining their point of view on a scale greater than seven. If you provide more than seven response choices, people are likely to start picking an answer randomly, which can make your data meaningless. Our methodologists recommend five scale points for a unipolar scale, and seven scale points if you need to use a bipolar scale.
  3. Keep it continuous. Response options in a scale should be equally spaced from each other. This can be tricky when using word labels instead of numbers, so make sure you know what your words mean.
  4. Keep it inclusive. Scales should span the entire range of responses. If a question asks how quick your waiter was and the answers range from “extremely quick” to “moderately quick,” respondents who think the waiter was slow won’t know what answer to choose.
  5. Keep it logical. Add skip logic to save your survey takers some time. For example, let’s say you want to ask how much your patron enjoyed your restaurant, but you only want more details if they were unhappy with something. Use question logic so that only those who are unhappy skip to a question asking for improvement suggestions.

You have probably known Likert-scale questions for a long time, even if you didn’t know their unique name. Now you also know how to create effective ones that can bring a greater degree of nuance to the key questions in your surveys.

Basics of Likert Scales

Likert scales were developed in 1932 as the familiar five-point bipolar response format that most people know today [3]. These scales range from a group of categories—least to most—asking people to indicate how much they agree or disagree, approve or disapprove, or believe to be true or false. There’s really no wrong way to build a Likert scale. The most important consideration is to include at least five response categories. Some examples of category groups appear in Table 1.

 

The ends of the scale often are increased to create a seven-point scale by adding “very” to the respective top and bottom of the five-point scales. The seven-point scale has been shown to reach the upper limits of the scale’s reliability [4]. As a general rule, Likert and others recommend that it is best to use as wide a scale as possible. You can always collapse the responses into condensed categories, if appropriate, for analysis.

With that in mind, scales are sometimes truncated to an even number of categories (typically four) to eliminate the “neutral” option in a “forced choice” survey scale. Rensis Likert’s original paper clearly identifies that there might be an underlying continuous variable whose value characterizes the respondents’ opinions or attitudes, and this underlying variable is interval level, at best [5].

Analysis, Generalization to Continuous Indexes

As a general rule, mean and standard deviation are invalid parameters for descriptive statistics whenever data are on ordinal scales, as are any parametric analyses based on the normal distribution. Nonparametric procedures—based on the rank, median or range—are appropriate for analyzing these data, as are distribution free methods such as tabulations, frequencies, contingency tables and chi-squared statistics.

Kruskal-Wallis models can provide the same type of results as an analysis of variance, but based on the ranks and not the means of the responses. Given these scales are representative of an underlying continuous measure, one recommendation is to analyze them as interval data as a pilot prior to gathering the continuous measure.

Table 2 includes an example of misleading conclusions, showing the results from the annual Alfred P. Sloan Foundation survey of the quality and extent of online learning in the United States. Respondents used a Likert scale to evaluate the quality of online learning compared to face-to-face learning.

 

While 60%-plus of the respondents perceived online learning as equal to or better than face-to-face, there is a persistent minority that perceived online learning as at least somewhat inferior. If these data were analyzed using means, with a scale from 1 to 5 from inferior to superior, this separation would be lost, giving means of 2.7, 2.6 and 2.7 for these three years, respectively. This would indicate a slightly lower than average agreement rather than the actual distribution of the responses.

A more extreme example would be to place all the respondents at the extremes of the scale, yielding a mean of “same” but a completely different interpretation from the actual responses.
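That extreme case is easy to demonstrate. In this sketch (with made-up response counts, not the Sloan data), a fully polarized group and a uniformly neutral group produce exactly the same mean on a 1–5 scale:

```python
from statistics import mean

polarized = [1] * 50 + [5] * 50   # everyone at the extremes
neutral   = [3] * 100             # everyone at the midpoint

# Identical means, completely different distributions and interpretations.
print(mean(polarized), mean(neutral))  # both 3
```

A frequency table or bar chart of the two groups would look nothing alike, which is why the distribution, not the mean, should be reported for ordinal data.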

Under what circumstances might Likert scales be used with interval procedures? Suppose the rank data included a survey of income measuring $0, $25,000, $50,000, $75,000 or $100,000 exactly, and these were measured as “low,” “medium” and “high.”

The “intervalness” here is an attribute of the data, not of the labels. Also, the scale should have at least five and preferably seven categories.

Another example of analyzing Likert scales as interval values is when the sets of Likert items can be combined to form indexes. However, there is a strong caveat to this approach: Most researchers insist such combinations of scales pass the Cronbach’s alpha or the Kappa test of intercorrelation and validity.

Also, the combination of scales to form an interval level index assumes this combination forms an underlying characteristic or variable.

Steps to Developing a Likert Scale

1.            Define the focus: what is it you are trying to measure? Your topic should be one-dimensional. For example “Customer Service” or “This Website.”

2.            Generate the Likert Scale items. The items should be able to be rated on some kind of scale. The image at the top of this page has some suggestions. For example, polite/rude could be rated as “very polite”, “polite”, “not polite” or “very impolite.” Politeness could also be rated on a scale of 1 to 10, where 1 is not polite at all and 10 is extremely polite.

3.            Rate the Likert Scale items. You want to be sure your focus is good, so pick a team of people to go through the items in step 2 above and rate them as favorable/neutral/unfavorable to your focus. Weed out the items that are mostly seen as unfavorable.

4.            Administer your Likert Scale test.

Hypothesis Tests on Likert Scales

If you know that you’re going to be performing analysis on Likert scale data, it’s easier to tailor your questions in the development stage rather than to collect your data and then make a decision about analysis. What analysis you run depends on the format of your questionnaire.

There is some disagreement in education and research about whether you should run parametric tests like the t-test or non-parametric hypothesis tests like the Mann-Whitney on Likert-scale data. de Winter and Dodou (2010) researched this issue, with the following results:

“In conclusion, the t test and [Mann-Whitney] generally have equivalent power, except for skewed, peaked, or multimodal distributions for which strong power differences between the two tests occurred. The Type I error rate of both methods was never more than 3% above the nominal rate of 5%, even not when sample sizes were highly unequal.”

In other words, there seems to be no real difference between the results for parametric and non-parametric tests, except for skewed, peaked, or multimodal distributions. Which avenue you take is up to you, your department, and perhaps the journal you are submitting to (if any). The most important step at the decision stage is deciding if you want to treat your data as ordinal or interval data.

General guidelines:

•             For a series of individual questions with Likert responses, treat the data as ordinal variables.

•             For a series of Likert questions that together describe a single construct (personality trait or attitude), treat the data as interval variables.

Two Options

Most Likert scales are classified as ordinal variables. If you are 100% sure that the distance between variables is constant, then they can be treated as interval variables for testing purposes. In most cases, your data will be ordinal, as it’s impossible to tell the difference between, say, “strongly agree” and “agree” vs. “agree” and “neutral.”

Ordinal Scale Data

With most variable types (interval, ratio, nominal), you can find the mean. This is not true for Likert scale data. The mean in a Likert scale can’t be found because you don’t know the “distance” between the data items. In other words, while you can find an average of 1, 2, and 3, you can’t find an average of “agree”, “disagree”, and “neutral.”

“The average of ‘fair’ and ‘good’ is not ‘fair‐and‐a‐half’; which is true even when one assigns integers to represent ‘fair’ and ‘good’!” – Susan Jamieson paraphrasing Kuzon Jr et al. (Jamieson, 2004)

Statistics Choices

Statistics you can use are:

•             The mode: the most common response.

•             The median: the “middle” response when all items are placed in order.

•             The range and interquartile range: to show variability.

•             A bar chart or frequency table: to show a table of results. Do not make a histogram, as the data is not continuous.
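All of the statistics listed above can be computed without treating the codes as interval data. A minimal sketch using the Python standard library, with hypothetical coded responses (1 = strongly disagree … 5 = strongly agree):

```python
from collections import Counter
from statistics import median, quantiles

responses = [2, 4, 4, 5, 3, 4, 2, 4, 3, 4]  # hypothetical coded Likert responses

freq = Counter(responses)                 # frequency table for a bar chart
mode = freq.most_common(1)[0][0]          # most common response
med = median(responses)                   # middle response
q1, _, q3 = quantiles(responses, n=4)     # quartiles, for the interquartile range

print(f"mode={mode} median={med} IQR={q3 - q1} "
      f"range={max(responses) - min(responses)}")
```

Note that nothing here assumes equal spacing between categories; the same code works whether the labels behind the codes are “agree”/“disagree” or anything else ordered.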

Hypothesis Testing

In hypothesis testing for Likert scales, the independent variable represents the groups and the dependent variable represents the construct you are measuring. For example, if you survey nursing students to measure their level of compassion, the independent variable is the groups of nursing students and the dependent variable is the level of compassion.

Types of test you can run:

•             Kruskal-Wallis: determines whether the medians of two or more groups differ.

•             Mann-Whitney U Test: determines if the medians for two groups are different.
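As a sketch of what the Mann-Whitney U test computes, here is a small pure-Python version of the U statistic (ties receive averaged ranks); the two sample groups are hypothetical Likert codes. In practice you would use a statistics package, which also reports the p-value:

```python
def mann_whitney_u(x, y):
    """U statistic for two independent samples, with average ranks for ties."""
    pooled = sorted(x + y)
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    r1 = sum(rank[v] for v in x)            # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    return min(u1, len(x) * len(y) - u1)    # conventionally report the smaller U

group_a = [4, 5, 4, 3]  # hypothetical Likert codes, group A
group_b = [2, 3, 2, 1]  # hypothetical Likert codes, group B
print(mann_whitney_u(group_a, group_b))  # 0.5
```

Because the statistic is built entirely from ranks, it never assumes the distance between “agree” and “strongly agree” equals the distance between any other pair of categories.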

More Options for Two Categories

If you combine your responses into two categories, for example, agree and disagree, more test options open up to you.

•             Chi-square: The test is designed for multinomial experiments, where the outcomes are counts placed into categories.

•             McNemar test: Tests if responses to categories are the same for two groups/conditions.

•             Cochran’s Q test: An extension of McNemar that tests if responses to categories are the same for three or more groups/conditions.

•             Friedman Test: for finding differences in treatments across multiple attempts.

Measures of Association

Sometimes you want to know if one group of people responds differently (higher or lower) than another group of people to a certain Likert scale item. To answer this question, you would use a measure of association instead of a test for differences (like those listed above).

If your groups are ordinal (i.e. ordered) in some way, like age-groups, you can use:

•             Kendall’s tau coefficient or variants of tau (e.g., gamma coefficient; Somers’ D).

•             Spearman rank correlation.

If your groups aren’t ordinal, then use one of these:

•             Phi coefficient.

•             Contingency coefficient.

•             Cramer’s V.

Interval Scale Data

Statistics that are suitable for interval scale Likert data:

•             Mean.

•             Standard deviation.

Hypothesis Tests suitable for interval scale Likert data:

•             T-test.

•             ANOVA.

•             Regression analysis (either ordered logistic regression or multinomial logistic regression). If you can combine your dependent variables into two responses (e.g. agree or disagree), run binary logistic regression.

Scoring and analysis

After the questionnaire is completed, each item may be analyzed separately or in some cases item responses may be summed to create a score for a group of items. Hence, Likert scales are often called summative scales.

Whether individual Likert items can be considered as interval-level data, or whether they should be considered merely ordered-categorical data is the subject of disagreement. Many regard such items only as ordinal data, because, especially when using only five levels, one cannot assume that respondents perceive all pairs of adjacent levels as equidistant. On the other hand, often (as in the example above) the wording of response levels clearly implies a symmetry of response levels about a middle category; at the very least, such an item would fall between ordinal- and interval-level measurement; to treat it as merely ordinal would lose information. Further, if the item is accompanied by a visual analog scale, where equal spacing of response levels is clearly indicated, the argument for treating it as interval-level data is even stronger.

When treated as ordinal data, Likert responses can be collated into bar charts, central tendency summarised by the median or the mode (but some would say not the mean), dispersion summarised by the range across quartiles (but some would say not the standard deviation), or analyzed using non-parametric tests, e.g. chi-square test, Mann–Whitney test, Wilcoxon signed-rank test, or Kruskal–Wallis test. Parametric analysis of ordinary averages of Likert scale data is also justifiable by the Central Limit Theorem, although some would disagree that ordinary averages should be used for Likert scale data.

Responses to several Likert questions may be summed, providing that all questions use the same Likert scale and that the scale is a defendable approximation to an interval scale, in which case they may be treated as interval data measuring a latent variable. If the summed responses fulfill these assumptions, parametric statistical tests such as the analysis of variance can be applied; as a rule of thumb, this is done only when more than five Likert questions are summed.
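Summing items is straightforward once negatively keyed items are reverse-scored (see the note on balanced keying earlier). A sketch, assuming a 5-point scale and a hypothetical item set where the flagged items are negatively worded:

```python
# Each tuple: (coded response 1-5, negatively_keyed?) - hypothetical data.
items = [(4, False), (2, True), (5, False), (1, True), (4, False)]

def summated_score(items, top=5):
    """Sum responses, reverse-scoring negatively keyed items (x -> top + 1 - x)."""
    return sum((top + 1 - r) if neg else r for r, neg in items)

print(summated_score(items))  # 4 + (6-2) + 5 + (6-1) + 4 = 22
```

Reverse-scoring before summing ensures that a high total consistently indicates a high level of the latent trait, regardless of how each statement was worded.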

Data from Likert scales are sometimes reduced to the nominal level by combining all agree and disagree responses into two categories of “accept” and “reject”. The chi-square, Cochran Q, or McNemar test are common statistical procedures used after this transformation.

Consensus-based assessment (CBA) can be used to create an objective standard for Likert scales in domains where no generally accepted or objective standard exists, and to refine or even validate generally accepted standards.

Level of measurement

The five response categories are often believed to represent an interval level of measurement. But this can only be the case if the intervals between the scale points correspond to empirical observations in a metric sense. In fact, phenomena may appear that call even the ordinal scale level into question. For example, in a set of items A, B, C rated with a Likert scale, circular relations like A>B, B>C and C>A can appear. This violates the axiom of transitivity for the ordinal scale.

Rasch model

Likert scale data can, in principle, be used as a basis for obtaining interval level estimates on a continuum by applying the polytomous Rasch model, when data can be obtained that fit this model. In addition, the polytomous Rasch model permits testing of the hypothesis that the statements reflect increasing levels of an attitude or trait, as intended. For example, application of the model often indicates that the neutral category does not represent a level of attitude or trait between the disagree and agree categories.

Again, not every set of Likert scaled items can be used for Rasch measurement. The data has to be thoroughly checked to fulfill the strict formal axioms of the model.

The Likert scale is commonly used in survey research. It is often used to measure respondents’ attitudes by asking the extent to which they agree or disagree with a particular question or statement. A typical scale might be “strongly agree, agree, not sure/undecided, disagree, strongly disagree.” On the surface, survey data using the Likert scale may seem easy to analyze, but there are important issues for a data analyst to consider.

General Instructions

1. Get your data ready for analysis by coding the responses. For example, let’s say you have a survey that asks respondents whether they agree or disagree with a set of positions in a political party’s platform. Each position is one survey question, and the scale uses the following responses: Strongly agree, agree, neutral, disagree, strongly disagree. In this example, we’ll code the responses accordingly: Strongly disagree = 1, disagree = 2, neutral = 3, agree = 4, strongly agree = 5.
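The coding in step 1 amounts to a simple lookup. A minimal sketch; the raw answers below are hypothetical:

```python
# Coding scheme from step 1: strongly disagree = 1 ... strongly agree = 5.
CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

raw = ["Agree", "strongly agree", "Neutral", "disagree"]  # hypothetical answers
coded = [CODES[answer.lower()] for answer in raw]         # normalize, then map
print(coded)  # [4, 5, 3, 2]
```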

2. Remember to differentiate between ordinal and interval data, as the two types require different analytical approaches. If the data are ordinal, we can say that one score is higher than another. We cannot say how much higher, as we can with interval data, which tell you the distance between two points. Here is the pitfall with the Likert scale: many researchers will treat it as an interval scale. This assumes that the differences between each response are equal in distance. The truth is that the Likert scale does not tell us that. In our example here, it only tells us that the people with higher-numbered responses are more in agreement with the party’s positions than those with the lower-numbered responses.

3. Begin analyzing your Likert scale data with descriptive statistics. Although it may be tempting, resist the urge to take the numeric responses and compute a mean. Adding a response of “strongly agree” (5) to two responses of “disagree” (2) would give us a mean of 3, but what is the significance of that number? Fortunately, there are other measures of central tendency we can use besides the mean. With Likert scale data, the best measure to use is the mode, or the most frequent response. This makes the survey results much easier for the analyst (not to mention the audience for your presentation or report) to interpret. You also can display the distribution of responses (percentages that agree, disagree, etc.) in a graphic, such as a bar chart, with one bar for each response category.

4. Proceed next to inferential techniques, which test hypotheses posed by researchers. There are many approaches available, and the best one depends on the nature of your study and the questions you are trying to answer. A popular approach is to analyze responses using nonparametric analogues of analysis of variance, such as the Mann-Whitney or Kruskal-Wallis test. Suppose in our example we wanted to analyze responses to questions on foreign policy positions with ethnicity as the independent variable. Let’s say our data include responses from Anglo, African-American, and Hispanic respondents, so we could analyze responses among the three groups of respondents using the Kruskal-Wallis test.

5. Simplify your survey data further by combining the four response categories (e.g., strongly agree, agree, disagree, strongly disagree) into two nominal categories, such as agree/disagree or accept/reject. This offers other analysis possibilities. The chi-square test is one approach for analyzing the data in this way.
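Step 5's collapse-and-test approach can be sketched by hand. The 2x2 counts below are invented; the expected count for each cell comes from its row and column totals:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table (no correction)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # (row total * col total) / N
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: group A, group B; columns: agree, disagree (hypothetical counts).
table = [[10, 20], [20, 10]]
print(round(chi_square_2x2(table), 3))  # 6.667
```

The resulting statistic is compared against a chi-square distribution with 1 degree of freedom; a statistics package will report the corresponding p-value directly.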

 

Considerations for numeric rating scales

If you assign numbers to each column for marks, consider the following:

• What should the first number be? If 0, does the student deserve 0%? If 1, does the student deserve 20% (assuming 5 is the top mark) even if he/she has done extremely poorly?

• What should the second number be? If 2 (assuming 5 is the top mark), does the person really deserve a failing mark (40%)? This would mean that the first two or three columns represent different degrees of failure.

• Consider variations in the value of each column. Assuming 5 is the top mark, the columns could be valued at 0, 2.5, 3, 4, and 5.

• Consider the weighting for each row. For example, for rating a student’s report, should the introduction, main body, and summary be proportionally rated the same? Perhaps, the main body should be valued at five times the amount of the introduction and summary. A multiplier or weight can be put in another column for calculating a total mark in the last column.
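The weighting idea above can be illustrated with a short calculation; the marks are hypothetical, using the 1/5/1 weights suggested for the introduction, main body, and summary:

```python
# Hypothetical report rubric: (section, mark out of 5, weight)
rows = [
    ("Introduction", 4, 1),
    ("Main body",    3, 5),
    ("Summary",      5, 1),
]

total = sum(mark * weight for _, mark, weight in rows)
maximum = sum(5 * weight for _, _, weight in rows)
print(total, "/", maximum, f"= {100 * total / maximum:.1f}%")
```

Note how the weighted main body dominates the total, as intended.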

Consider having students create the rating scale. This can get them to think deeply about the content.

Rating scale example: Practicum performance assessment

Expected learning outcome: The student will demonstrate professionalism and high-quality work during the practicum.

Criteria for success: A maximum of one item is rated as “Needs improvement” in each section.

 

Performance area                    Needs improvement   Average   Above average   Comments
A. Attitude
• Punctual
• Respectful of equipment
• Uses supplies conscientiously
B. Quality of work done
• …

 

Above average = Performance is above the expectations stated in the outcomes.

Average = Performance meets the expectations stated in the outcomes.

Needs improvement = Performance does not meet the expectations stated in the outcomes.

Rating scale example: Written report assessment

Expected learning outcome: The student will write a report that recommends one piece of equipment over another based on the pros and cons of each.

Criteria for success: All items must be rated as “Weak” or above.

 

Report        | Unacceptable 0 | Weak 2.5 | Average 3 | Good 4 | Excellent 5 | Weight | Score
Introduction  |                |          |           |        |             |   1    |
Main Body     |                |          |           |        |             |   5    |
Summary       |                |          |           |        |             |   1    |
Total         |                |          |           |        |             |        |

 

Easy Ways to Calculate Rating Scales!

If you’re creating a personality quiz or a similar survey, rating scales (aka Likert scales) are one of the best methods for collecting a broad range of opinions and behaviors.

In Cognito Forms, you can easily add either a predefined (Satisfied/Unsatisfied, Agree/Disagree, etc.) or completely custom rating scale to your form. Every question in a rating scale has an internal numerical value based on the number of rating options; for example, on a Good/Poor rating scale, Very Poor has a value of 1, while Very Good has a value of 5. You can reference these values to calculate scores, percentages, and more!

1. Total score

The easiest way to calculate a rating scale is to simply add up the total score. To do this, start by adding a Calculation field to your form, and make sure that it’s set to internal view only.

 

Next, target your individual rating scale questions by entering the name of your rating scale, the rating scale question, and “_Rating”:

=RatingScale.Question1_Rating + RatingScale.Question2_Rating + RatingScale.Question3_Rating

And that’s it! Now, the value of each question will be summed up:

 

If you want to display the total to your users, just insert the Calculation field into your form’s confirmation message or confirmation email using the Insert Field option:

 

2. Weighted score

Calculating a total score is simple enough, but there are a ton of other functions you can perform using rating scale values. For example, divide the rating scale total by the number of questions to calculate the average:

=(RatingScale.Question1_Rating + RatingScale.Question2_Rating + RatingScale.Question3_Rating) / 3

Or, if you have multiple rating scales, you can average each one and add them together:

=((RatingScale.Question1_Rating + RatingScale.Question2_Rating + RatingScale.Question3_Rating) / 3) + ((RatingScale2.Question1_Rating + RatingScale2.Question2_Rating + RatingScale2.Question3_Rating) / 3)

If you want the average of one rating scale to weigh more than another, just multiply each rating scale average by the percentage that it’s worth (in this case, 40%):

=(((RatingScale.Question1_Rating + RatingScale.Question2_Rating + RatingScale.Question3_Rating) / 3) *.4)

3. Percentages

Rather than displaying a total in points, you could also calculate a total percentage. To do this, add a Calculation field to your form, and set it to the Percent type. Next, write an expression that calculates the average of the rating scale divided by the number of possible options. For example, if your rating scale has three questions, and five options to choose from:

=((RatingScale.Question1_Rating + RatingScale.Question2_Rating + RatingScale.Question3_Rating) / 3) /5

Now, the total percentage will be calculated based on a 100 point scale:
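The three calculations above (total, average, and percentage) amount to the following arithmetic, sketched here in Python with hypothetical ratings on a five-option scale:

```python
# Hypothetical ratings: Question1_Rating, Question2_Rating, Question3_Rating
ratings = [4, 3, 5]
options = 5                              # number of rating options on the scale

total = sum(ratings)                     # 1. total score
average = total / len(ratings)           # 2. average (divide by number of questions)
percent = average / options              # 3. fraction of the maximum possible

print(total, average, f"{percent:.0%}")
```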

 

Checklist for developing a rating scale

In developing your rating scale, use the following checklist:

  • Arrange the skills in a logical order, if you can.
  • Ask for feedback from other instructors before using it with students.
  • Clearly describe each skill.
  • Determine the scale to use (words or words with numbers) to represent the levels of success.
  • Highlight the critical steps, checkpoints, or indicators of success.
  • List the categories of performance to be assessed, as needed.
  • Review the learning outcome and associated criteria for success.
  • Review the rating scale for details and clarity. Format the scale.
  • Write a description for the meaning of each point on the scale, as needed.
  • Write clear instructions for the observer.

The Likert Scale: Advantages and Disadvantages

The Likert Scale is an ordinal psychometric measurement of attitudes, beliefs and opinions. In each question, a statement is presented in which a respondent must indicate a degree of agreement or disagreement in a multiple choice type format.

 

On the advantageous side, the Likert Scale is the most universal method for survey collection, and it is therefore easily understood. The responses are easily quantifiable and amenable to mathematical analysis. Since it does not require the participant to provide a simple and concrete yes or no answer, it does not force the participant to take a stand on a particular topic, but allows them to respond with a degree of agreement; this makes question answering easier on the respondent. Also, the response options accommodate neutral or undecided feelings of participants. These responses are very easy to code when accumulating data, since a single number represents the participant’s response. Likert surveys are also quick, efficient and inexpensive methods for data collection. They have high versatility and can be sent out through mail, over the internet, or given in person.

 

Attitudes of the population on a particular item in reality exist on a vast, multi-dimensional continuum. However, the Likert Scale is uni-dimensional and only gives 5-7 options of choice, and the space between each choice cannot possibly be equidistant. Therefore, it fails to measure the true attitudes of respondents. Also, it is not unlikely that people’s answers will be influenced by previous questions, or will heavily concentrate on one response side (agree/disagree). Frequently, people avoid choosing the “extreme” options on the scale because of the negative implications involved with “extremists”, even if an extreme choice would be the most accurate.

Critical Evaluation

Likert Scales have the advantage that they do not expect a simple yes / no answer from the respondent, but rather allow for degrees of opinion, and even no opinion at all.  Therefore quantitative data is obtained, which means that the data can be analyzed with relative ease.

However, like all surveys, the validity of Likert Scale attitude measurement can be compromised due to social desirability.  This means that individuals may lie to put themselves in a positive light.

Offering anonymity on self-administered questionnaires should further reduce social pressure, and thus may likewise reduce social desirability bias.  Paulhus (1984) found that more desirable personality characteristics were reported when people were asked to write their names, addresses and telephone numbers on their questionnaire than when they were told not to put identifying information on the questionnaire.


Experimental Research in Education

 

Dr. V.K. Maheshwari, Former Principal

K.L.D.A.V (P. G) College, Roorkee, India

Experimental research is a method in which the researcher manipulates one variable and controls the rest. A process, treatment, or program is introduced, and the outcome is observed.

Commonly used in sciences such as sociology, psychology, physics, chemistry, biology and medicine, experimental research is a collection of research designs which make use of manipulation and controlled testing in order to understand causal processes. To determine the effect on a dependent variable, one or more variables need to be manipulated.

Experimental research is a systematic and scientific approach to research in which the researcher manipulates one or more variables, and controls and measures any change in other variables.

The aim of experimental research is to predict phenomena. In most cases, an experiment is constructed so that some kind of causation can be explained. Experimental research is helpful for society as it helps improve everyday life.

Experimental research describes the process that a researcher undergoes of controlling certain variables and manipulating others to observe if the results of the experiment reflect that the manipulations directly caused the particular outcome.

Experimental researchers test an idea (or practice or procedure) to determine its effect on an outcome. Researchers decide on an idea with which to “experiment,” assign individuals to experience it (and have some individuals experience something different), and then determine whether those who experienced the idea or practice performed better on some outcome than those who did not experience it.

Experimental research is used where:

  • there is time priority in a causal relationship (the cause precedes the effect);
  • there is consistency in a causal relationship;
  • the magnitude of the correlation is great.

Key Characteristics of Experimental Research

Today, several key characteristics help us understand and read experimental research.

  • Experimental researchers randomly assign participants to groups or other units.
  • They provide control over extraneous variables to isolate the effects of the independent variable on the outcomes.
  • They physically manipulate the treatment conditions for one or more groups.
  • They then measure the outcomes for the groups to determine if the experimental treatment had a different effect than the non-experimental treatment.
  • This is accomplished by statistically comparing the groups.
  • Overall, they design an experiment to reduce the threats to internal validity and external validity.

Unique Features of Experimental Method

“The best method — indeed the only fully compelling method — of establishing causation is to conduct a carefully designed experiment in which the effects of possible lurking variables are controlled. To experiment means to actively change x and to observe the response in y.”

“The experimental method is the only method of research that can truly test hypotheses concerning cause-and-effect relationships. It represents the most valid approach to the solution of educational problems, both practical and theoretical, and to the advancement of education as a science.”

  • After treatment, performance of subjects (dependent variable) in both groups is compared.
  • Empirical observations based on experiments provide the strongest argument for cause-effect relationships.
  • Extraneous variables are controlled by 3 & 4 and other procedures if needed.
  • Problem statement ⇒ theory ⇒ constructs ⇒ operational definitions ⇒ variables ⇒ hypotheses.
  • Random assignment of subjects to treatment and control (comparison) groups (ensures equivalency of groups; i.e., unknown variables that may influence the outcome are equally distributed across groups).
  • Random sampling of subjects from the population (ensures the sample is representative of the population).
  • The investigator manipulates a variable directly (the independent variable).
  • The research question (hypothesis) is often stated as the alternative hypothesis to the null hypothesis, which is used to interpret differences in the empirical data.

Key Components of Experimental Research Design

The Manipulation of Predictor Variables

In an experiment, the researcher manipulates the factor that is hypothesized to affect the outcome of interest. The factor that is being manipulated is typically referred to as the treatment or intervention. The researcher may manipulate whether research subjects receive a treatment.

Random Assignment

  • Study participants are randomly assigned to different treatment groups
  • All participants have the same chance of being in a given condition

Random assignment neutralizes factors other than the independent and dependent variables, making it possible to directly infer cause and effect.

Random Sampling

Traditionally, experimental researchers have used convenience sampling to select study participants. However, as research methods have become more rigorous, and the problems with generalizing from a convenience sample to the larger population have become more apparent, experimental researchers are increasingly turning to random sampling. In experimental policy research studies, participants are often randomly selected from program administrative databases and randomly assigned to the control or treatment groups.
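The sampling-then-assignment procedure described above can be sketched as follows; the participant database is a hypothetical list of IDs, and the fixed seed is only there to make the example reproducible:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical program administrative database of participant IDs
database = [f"participant_{i}" for i in range(1000)]

sample = random.sample(database, 100)   # random selection from the population
random.shuffle(sample)
treatment = sample[:50]                 # random assignment to conditions
control = sample[50:]

print(len(treatment), len(control))
```

Shuffling and splitting guarantees equal group sizes, while each sampled participant still has the same chance of landing in either condition.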

Validity of Results

The two types of validity of experiments are internal and external. It is often difficult to achieve both in social science research experiments.

Internal Validity

  • When an experiment is internally valid, we are certain that the independent variable (e.g., child care subsidies) caused the outcome of the study (e.g., maternal employment).
  • When subjects are randomly assigned to treatment or control groups, we can assume that the independent variable caused the observed outcomes, because the two groups should not have differed from one another at the outset of the experiment.

One potential threat to internal validity in experiments occurs when participants either drop out of the study or refuse to participate in the study. If particular types of individuals drop out or refuse to participate more often than individuals with other characteristics, this is called differential attrition.

External Validity

  • External validity is also of particular concern in social science experiments
  • It can be very difficult to generalize experimental results to groups that were not included in the study
  • Studies that randomly select participants from the most diverse and representative populations are more likely to have external validity
  • The use of random sampling techniques makes it easier to generalize the results of studies to other groups

Ethical Issues in Experimental Research

Ethical issues in conducting experiments relate to withholding the experimental treatment from some individuals who might benefit from receiving it, and to the disadvantages that might accrue from randomly assigning individuals to groups. This assignment overlooks the potential need of some individuals for beneficial treatment. Ethical issues also arise as to when to conclude an experiment, whether the experiment will provide the best answers to a problem, and considerations about the stakes involved in conducting the experiment.

It is particularly important in experimental research to follow ethical guidelines.

The basic ethical principles:

  • Respect for persons — requires that research subjects are not coerced into participating in a study and requires the protection of research subjects who have diminished autonomy
  • Beneficence — requires that experiments do not harm research subjects, and that researchers minimize the risks for subjects while maximizing the benefits for them.

Validity Threats in Experimental Research

By validity “threat,” we mean only that a factor has the potential to bias results. In 1963, Campbell and Stanley identified different classes of such threats.

  • Instrumentation. Inconsistent use is made of testing instruments or testing conditions, or the pre-test and post- test are uneven in difficulty, suggesting a gain or decline in performance that is not real.
  • Testing. Exposure to a pre-test or intervening assessment influences performance on a post-test.
  • History. This validity threat is present when events, other than the treatments, occurring during the experimental period can influence results.
  • Maturation. During the experimental period, physical or psychological changes take place within the subjects.
  • Selection. There is a systematic difference in subjects’ abilities or characteristics between the treatment groups being compared.
  • Diffusion of Treatments. The implementation of a particular treatment influences subjects in the comparison treatment.
  • Experimental Mortality. The loss of subjects from one or more treatments during the period of the study may bias the results.

In many instances, validity threats cannot be avoided. The presence of a validity threat should not be taken to mean that experimental findings are inaccurate or misleading. Knowing about validity threats gives the experimenter a framework for evaluating the particular situation and making a judgment about its severity. Such knowledge may also permit actions to be taken to limit the influences of the validity threat in question.

Planning a Comparative Experiment in Educational Settings

Educational researchers in many disciplines are faced with the task of exploring how students learn and are correspondingly addressing the issue of how to best help students do so. Often, educational researchers are interested in determining the effectiveness of some technology or pedagogical technique for use in the classroom. Their ability to do so depends on the quality of the research methodologies used to investigate these treatments.

 

 
Types of experimental research designs

There are three basic types of experimental research designs. These include:

1)      True experimental designs

2)      Pre-experimental designs

3)      Quasi-experimental designs

The degree to which the researcher assigns subjects to conditions and groups distinguishes the type of experimental design.

True Experimental Designs

True experimental designs are characterized by the random selection of participants and the random assignment of the participants to groups in the study. The researcher also has complete control over the extraneous variables. Therefore, it can be confidently determined that the effect on the dependent variable is directly due to the manipulation of the independent variable. For these reasons, true experimental designs are often considered the best type of research design.

A true experiment is thought to be the most accurate experimental research design, because it supports or refutes a hypothesis using statistical analysis. It is also thought to be the only experimental design that can establish cause-and-effect relationships.

Types of true experimental designs

There are several types of true experimental designs and they are as follows:


Post-test Only Design – This type of design has two randomly assigned groups: an experimental group and a control group. Neither group is pretested before the implementation of the treatment. The treatment is applied to the experimental group and the post-test is carried out on both groups to assess the effect of the treatment or manipulation. This type of design is common when it is not possible to pretest the subjects.

Pretest-Posttest Design –

The subjects are again randomly assigned to either the experimental or the control group. Both groups are pretested on the dependent variable. The experimental group receives the treatment and both groups are post-tested to examine the effects of manipulating the independent variable on the dependent variable.


Solomon Four Group Design – Subjects are randomly assigned into one of four groups. There are two experimental groups and two control groups. Only two groups are pretested. One pretested group and one unpretested group receive the treatment. All four groups receive the post-test. The effects of the dependent variable originally observed are then compared to the effects of the independent variable on the dependent variable as seen in the post-test results. This method is really a combination of the previous two methods and is used to eliminate potential sources of error.

Factorial Design

The researcher manipulates two or more independent variables (factors) simultaneously to observe their effects on the dependent variable. This design allows for the testing of two or more hypotheses in a single project.
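As an illustration of a factorial layout, the sketch below computes the main effect of each factor in a hypothetical 2x2 design (teaching method by class size), using invented cell means for the outcome:

```python
# Hypothetical 2x2 factorial design: (method, class size) -> mean test score
cell_means = {
    ("lecture", "small"): 70, ("lecture", "large"): 60,
    ("active",  "small"): 85, ("active",  "large"): 75,
}

def marginal(factor_index, level):
    """Marginal mean of the outcome for one level of one factor."""
    vals = [m for key, m in cell_means.items() if key[factor_index] == level]
    return sum(vals) / len(vals)

# Main effect = difference between a factor's marginal means
method_effect = marginal(0, "active") - marginal(0, "lecture")
size_effect = marginal(1, "small") - marginal(1, "large")
print(method_effect, size_effect)
```

Both hypotheses (about teaching method and about class size) are tested from the same set of observations, which is the economy a factorial design offers.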

Randomized Block Design

This design is used when there are inherent differences between subjects and possible differences in experimental conditions. If there are a large number of experimental groups, the randomized block design may be used to bring some homogeneity to each group.

Crossover Design (also known as Repeat Measures Design)

Subjects in this design are exposed to more than one treatment and the subjects are randomly assigned to different orders of the treatment. The groups compared have an equal distribution of characteristics and there is a high level of similarity among subjects that are exposed to different conditions. Crossover designs are excellent research tools, however, there is some concern that the response to the second treatment or condition will be influenced by their experience with the first treatment. In this type of design, the subjects serve as their own control groups.

Criteria of true experiment

True experimental designs employ both a control group and a means to measure the change that occurs in both groups.  In this sense, we attempt to control for all confounding variables, or at least consider their impact, while attempting to determine whether the treatment is what truly caused the change.  The true experiment is often thought of as the only research method that can adequately measure the cause-and-effect relationship.

There are three criteria that must be met in a true experiment

  1. Control group and experimental group
  2. Researcher-manipulated variable
  3. Random assignment

Control Group and Experimental Group

True experiments must have a control group, which is a group of research participants that resemble the experimental group but do not receive the experimental treatment. The control group provides reliable baseline data to which you can compare the experimental results.

The experimental group is the group of research participants who receive the experimental treatment. True experiments must have at least one control group and one experimental group, though it is possible to have more than one experimental group.

Researcher-Manipulated Variable

In true experiments, the researcher has to change or manipulate the variable that is hypothesized to affect the outcome variable that is being studied. The variable that the researcher has control over is called the independent variable. The independent variable is also called the predictor variable because it is the presumed cause of the differences in the outcome variable.

The outcome or effect that the research is studying is called the dependent variable; it is also known as the outcome variable. The researcher does not manipulate the dependent variable.

Random Assignment

Research participants have to be randomly assigned to either the control or experimental group. In other words, each research participant must have an equal chance of being assigned to each sample group. Random assignment is useful in that it ensures that any initial differences between the groups are due to chance.

Elements of true experimental research

Once the design has been determined, there are four elements of true experimental research that must be considered:

  • Manipulation: The researcher will purposefully change or manipulate the independent variable, which is the treatment or condition that will be applied to the experimental groups. It is important to establish clear procedural guidelines for application of the treatment to promote consistency and to ensure that it is the manipulation itself that affects the dependent variable.

  • Control: Control is used to prevent the influence of outside factors (extraneous variables) from influencing the outcome of the study. This ensures that outcome is caused by the manipulation of the independent variable. Therefore, a critical piece of experimental design is keeping all other potential variables constant.
  • Random Assignment: A key feature of true experimental design is the random assignment of subjects into groups. Participants should have an equal chance of being assigned into any group in the experiment. This further ensures that the outcome of the study is due to the manipulation of the independent variable and is not influenced by the composition of the test groups. Subjects can be randomly assigned in many ways, some of which are relatively easy, including flipping a coin, drawing names, using a random table, or using computer-assisted random sequencing.
  • Random selection: In addition to randomly assigning the test subjects in groups, it is also important to randomly select the test subjects from a larger target audience. This ensures that the sample population provides an accurate cross-sectional representation of the larger population including different socioeconomic backgrounds, races, intelligence levels, and so forth.
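The coin-flip approach mentioned in the list above can be simulated directly; the subject IDs are hypothetical. Note that per-subject coin flips do not guarantee equal group sizes, which is why shuffling a list and splitting it in half is often preferred:

```python
import random

random.seed(7)  # fixed seed so the example is reproducible

# Hypothetical subject IDs
subjects = ["s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08"]

groups = {"treatment": [], "control": []}
for s in subjects:
    # Simulated coin flip: each subject has an equal chance of either group
    groups["treatment" if random.random() < 0.5 else "control"].append(s)

print({k: len(v) for k, v in groups.items()})
```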

Pre-experimental Design

Pre-experimental design is a research format in which some basic experimental attributes are used while some are not. This factor causes an experiment to not qualify as truly experimental. This type of design is commonly used as a cost effective way to conduct exploratory research.

Pre-experimental designs are so named because they follow basic experimental steps but fail to include a control group.  In other words, a single group is often studied, but no comparison with an equivalent non-treatment group is made.

Pre-experiments are the simplest form of research design. In a pre-experiment either a single group or multiple groups are observed subsequent to some agent or treatment presumed to cause change.

Types of Pre-Experimental Design

  • One-shot case study design
  • One-group pretest-posttest design
  • Static-group comparison

One-shot case study design

A single group is studied at a single point in time after some treatment that is presumed to have caused change. The carefully studied single instance is compared to general expectations of what the case would have looked like had the treatment not occurred and to other events casually observed. No control or comparison group is employed.

In one-shot case study we expose a group to a treatment X and measure the outcome Y. It lacks a pre-test Y and a control group. It has no basis for comparing groups, or pre- and post-tests

Used to measure an outcome after an intervention is implemented; often to measure use of a new program or service

  • One group receives the intervention
  • Data gathered at one time point after the intervention
  • Design weakness: does not prove there is a cause and effect relationship between the intervention and outcomes

One-group pretest-posttest design

A single case is observed at two time points, one before the treatment and one after the treatment. Changes in the outcome of interest are presumed to be the result of the intervention or treatment. No control or comparison group is employed.

In one-group pre-test/post-test design we include the measurement of Y before and after treatment X. It has no control group, so no group comparisons

  • Used to measure change in an outcome before and after an intervention is implemented
  • One group receives the intervention
  • Data gathered at 2+ time points
  • Design weakness: shows that change occurred, but does not account for an event, maturation, or altered survey methods that could occur between the two time points

Static-group comparison

In static-group comparison we have an experimental and a control group, but no pre-test. It allows for comparisons among groups, but no pre- and post-tests.

A group that has experienced some treatment is compared with one that has not. Observed differences between the two groups are assumed to be a result of the treatment.

Two non-randomly assigned groups, one that received the intervention and one that did not (control)

  • Data gathered at one time point after the intervention
  • Design weakness: shows that change occurred, but participant selection could result in groups that differ on relevant variables

Validity of Results in Pre-experimental designs

An important drawback of pre-experimental designs is that they are subject to numerous threats to their validity. Consequently, it is often difficult or impossible to dismiss rival hypotheses or explanations.

One reason that it is often difficult to assess the validity of studies that employ a pre-experimental design is that they often do not include any control or comparison group. Without something to compare it to, it is difficult to assess the significance of an observed change in the case.

Even when pre-experimental designs identify a comparison group, it is still difficult to dismiss rival hypotheses for the observed change. This is because there is no formal way to determine whether the two groups would have been the same if it had not been for the treatment. If the treatment group and the comparison group differ after the treatment, this might be a reflection of differences in the initial recruitment to the groups or differential mortality in the experiment.

Advantages in Pre-experimental designs

  • Apply only in situations in which it is impossible to manipulate more than one condition.
  • Are useful in the applied field, having emerged as a response to the problems of experimentation in education.
  • As exploratory approaches, pre-experiments can be a cost-effective way to discern whether a potential explanation is worthy of further investigation.
  • Do not control threats to internal validity, so they are of limited use in building scientific knowledge.
  • Meet the minimum condition of an experiment.
  • Produce results that are always debatable.

Disadvantages in Pre-experimental designs

Pre-experiments offer few advantages, since it is often difficult or impossible to rule out alternative explanations. The nearly insurmountable threats to their validity are clearly the most important disadvantage of pre-experimental research designs.

By contrast, because of the strict conditions and control of a true experiment, the experimenter can set up the experiment again and repeat or “check” the results. Replication is very important, as obtaining similar results gives greater confidence in the findings. True experiments also offer these strengths, which pre-experimental designs lack:

  • Control over extraneous variables is usually greater than in other research methods.
  • Experimental design involves manipulating the independent variable to observe the effect on the dependent variable. This makes it possible to determine a cause-and-effect relationship.

Quantitative observational designs, for their part, allow variables to be investigated that would be unethical, impossible, or too costly to study under an experimental design, but they and pre-experimental designs share the following weaknesses:

  • They cannot support as strong a cause-and-effect inference, because there is a greater chance of other variables affecting the results. This is due to the lack of random assignment to groups.
  • Findings cannot be replicated, as the same situation will not occur naturally again.
  • The situation studied may not relate to the real world; some kinds of behaviour can only be observed in a naturalistic setting.
  • It may be unethical or impossible to randomly assign people to groups.
  • Observer bias may influence the results.
  • Quantitative observational designs do not allow generalisation of findings to the general population.
  • Elimination of extraneous variables is not always possible.

Quasi-experimental designs

Quasi-experimental designs help researchers test for causal relationships in a variety of situations where the classical design is difficult or inappropriate. They are called quasi because they are variations of the classical experimental design. In general, the researcher has less control over the independent variable than in the classical design.

Main points of Quasi-experimental research designs

Quasi-experimental research designs, like experimental designs, test causal hypotheses.

  • A quasi-experimental design by definition lacks random assignment.
  • Quasi-experimental designs identify a comparison group that is as similar as possible to the treatment group in terms of baseline (pre-intervention) characteristics.
  • There are different techniques for creating a valid comparison group, such as regression discontinuity design (RDD) and propensity score matching (PSM).
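The intuition behind matching techniques such as PSM can be sketched in a few lines. The example below is not real PSM (which matches on a predicted probability of treatment estimated from many covariates); it is a simplified nearest-neighbour match on a single hypothetical baseline covariate, with all data and function names invented for illustration.

```python
# Simplified illustration of the matching idea behind techniques like PSM:
# for each treated unit, find the comparison-pool unit closest on a
# baseline covariate and compare outcomes pair-wise (hypothetical data).
treated = [(62, 75), (70, 82), (55, 68)]            # (baseline score, outcome)
comparison_pool = [(60, 70), (71, 78), (54, 66), (80, 85)]

def matched_effect(treated, pool):
    diffs = []
    for base_t, out_t in treated:
        # nearest neighbour on the baseline covariate
        base_c, out_c = min(pool, key=lambda p: abs(p[0] - base_t))
        diffs.append(out_t - out_c)
    # average treated-minus-matched-comparison outcome difference
    return sum(diffs) / len(diffs)

print(f"matched estimate of effect = {matched_effect(treated, comparison_pool):.2f}")
```

The design choice is the same as in full PSM: build a comparison group that resembles the treatment group at baseline, so that post-treatment differences are more plausibly due to the treatment.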

Types of Quasi-Experimental Designs

1. Two-Group Posttest-Only Design

a. This is identical to the static-group comparison, with one exception: the groups are randomly assigned. It has all the parts of the classical design except a pretest. The random assignment reduces the chance that the groups differed before the treatment, but without a pretest a researcher cannot be as certain that the groups began the same on the dependent variable.

2. Interrupted Time Series

a. In an interrupted time series design, a researcher uses one group and makes multiple measures before and after the treatment.

3. Equivalent Time Series

a. An equivalent time series is another one-group design that extends over a time period. Instead of one treatment, it has a pretest, then a treatment and posttest, then treatment and posttest, then treatment and posttest, and so on.
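The logic of these one-group time-series designs can be sketched with hypothetical data: several observations before the treatment, several after, and the shift in the series' level as the simplest summary of the treatment effect.

```python
# Interrupted time series sketch (hypothetical data): the treatment is
# introduced between the 5th and 6th observations.
pre  = [20, 21, 20, 22, 21]   # observations before the treatment
post = [27, 28, 27, 29, 28]   # observations after the treatment

shift = sum(post) / len(post) - sum(pre) / len(pre)
print(f"estimated level shift = {shift:.1f}")
```

A real analysis would also model the pre-treatment trend, since an upward drift that predates the treatment (maturation) could masquerade as a level shift.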

Other Quasi-Experimental Designs

There are many different types of quasi-experimental designs that have a variety of applications in specific contexts.

The Proxy Pretest Design

The proxy pretest design looks like a standard pre-post design. But there’s an important difference. The pretest in this design is collected after the program is given. The recollection proxy pretest would be a sensible way to assess participants’ perceived gain or change.

The Separate Pre-Post Samples Design

The basic idea in this design (and its variations) is that the people you use for the pretest are not the same as the people you use for the posttest.

The Double Pretest Design

The double pretest design is a very strong quasi-experimental design with respect to internal validity. Why? The design includes two measures prior to the program, so any selection-maturation trend shows up in the interval between the two pretests. Therefore, this design explicitly controls for selection-maturation threats. The design is also sometimes referred to as a “dry run” quasi-experimental design because the double pretests simulate what would happen in the null case.

The Switching Replications Design

The Switching Replications quasi-experimental design is also very strong with respect to internal validity. The design has two groups and three waves of measurement. In the first phase of the design, both groups are pretested, one is given the program, and both are posttested. In the second phase of the design, the original comparison group is given the program while the original program group serves as the “control.”

The Nonequivalent Dependent Variables (NEDV) Design

The Nonequivalent Dependent Variables (NEDV) Design is a deceptive one. In its simple form, it is an extremely weak design with respect to internal validity. But in its pattern matching variations, it opens the door to an entirely different approach to causal assessment that is extremely powerful.

The idea in this design is that you have a program designed to change a specific outcome.

The Pattern Matching NEDV Design. Although the two-variable NEDV design is quite weak, we can make it considerably stronger by adding multiple outcome variables. In this variation, we need many outcome variables and a theory that tells how much each variable will be affected (from most to least) by the program.

Depending on the circumstances, the Pattern Matching NEDV design can be quite strong with respect to internal validity. In general, the design is stronger if you have a larger set of variables and you find that your expectation pattern matches well with the observed results
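One way to express how well the expectation pattern matches the observed results is a simple correlation between the theory-predicted ordering of the outcome variables and their observed gains. The numbers and the helper function below are hypothetical illustrations.

```python
# Pattern-matching NEDV sketch: correlate theory-predicted impact ranks
# with observed pre-post gains on the same outcome variables (hypothetical).
from statistics import mean

expected = [5, 4, 3, 2, 1]             # theory: most to least affected
observed = [4.8, 4.1, 2.9, 2.2, 0.9]   # observed gains per variable

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

print(f"pattern match r = {pearson_r(expected, observed):.3f}")
```

A correlation near 1 says the program moved exactly the variables the theory said it should, in roughly the predicted order, which is what gives the pattern-matching variation its strength.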

The Regression Point Displacement (RPD) Design

The RPD design attempts to enhance the single program unit situation by comparing the performance on that single unit with the performance of a large set of comparison units. In community research, we would compare the pre-post results for the intervention community with a large set of other communities.
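A minimal sketch of the RPD comparison, with invented community data: the single intervention unit's pre-to-post gain is set against the distribution of gains in the comparison units.

```python
# Regression point displacement sketch (hypothetical community data):
# pre-to-post gains for many comparison communities vs. one intervention
# community.
comparison_gains = [1.0, 0.5, 1.2, 0.8, -0.2, 0.9, 0.4, 1.1, 0.6, 0.7]
intervention_gain = 3.1

# how many comparison communities gained at least as much?
n_at_least = sum(g >= intervention_gain for g in comparison_gains)
print(f"{n_at_least} of {len(comparison_gains)} comparison communities "
      f"gained as much as the intervention community")
```

The full design regresses post scores on pre scores for the comparison units and tests how far the intervention unit is displaced from that regression line; the count above is only the crudest version of that idea.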

Advantages in Quasi-experimental designs

  • Since quasi-experimental designs are used when randomization is impractical and/or unethical, they are typically easier to set up than true experimental designs, which require random assignment of subjects.
  • Additionally, utilizing quasi-experimental designs minimizes threats to ecological validity, as natural environments do not suffer the same problems of artificiality as a well-controlled laboratory setting.
  • Since quasi-experiments are natural experiments, findings in one may be applied to other subjects and settings, allowing for some generalizations to be made about the population.
  • This experimentation method is efficient in longitudinal research that involves longer time periods and can be followed up in different environments.
  • In natural experiments, researchers do not impose manipulations of their own; they must let manipulations occur on their own, with no control over them whatsoever.
  • Using self-selected groups in quasi-experiments also reduces the chance of ethical concerns arising while conducting the study.

Disadvantages of quasi-experimental designs

  • Quasi-experimental estimates of impact are subject to contamination by confounding variables.
  • The lack of random assignment in the quasi-experimental design method may allow studies to be more feasible, but this also poses many challenges for the investigator in terms of internal validity. This deficiency in randomization makes it harder to rule out confounding variables and introduces new threats to internal validity.
  • Because randomization is absent, some knowledge about the data can be approximated, but conclusions of causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment.
  • Moreover, even if these threats to internal validity are assessed, causation still cannot be fully established because the experimenter does not have total control over extraneous variables.
  • The study groups may provide weaker evidence because of the lack of randomness. Randomness brings a lot of useful information to a study because it broadens results and therefore gives a better representation of the population as a whole.
  • Using unequal groups can also be a threat to internal validity. If groups are not equal, which is sometimes the case in quasi-experiments, the experimenter cannot be certain what caused the observed results.

Experimental Research in Educational Technology

Here is a sequence of logical steps for planning and conducting research

Step 1. Select a Topic. This step is self-explanatory and usually not a problem, except for those who are “required” to do research as opposed to initiating it on their own. The step simply involves identifying a general area that is of personal interest and then narrowing the focus to a researchable problem.

Step 2. Identify the Research Problem. Given the general topic area, what specific problems are of interest? In many cases, the researcher already knows the problems. In others, a trip to the library to read background literature and examine previous studies is probably needed. A key concern is the importance of the problem to the field. Conducting research requires too much time and effort to be examining trivial questions that do not expand existing knowledge.

Step 3. Conduct a Literature Search. With the research topic and problem identified, it is now time to conduct a more intensive literature search. Of importance is determining what relevant studies have been performed; the designs, instruments, and procedures employed in those studies; and, most critically, the findings. Based on the review, direction will be provided for (a) how to extend or complement the existing literature base, (b) possible research orientations to use, and (c) specific research questions to address.

Step 4. State the Research Questions (or Hypotheses). This step is probably the most critical part of the planning process. Once stated, the research questions or hypotheses provide the basis for planning all other parts of the study: design, materials, and data analysis. In particular, this step will guide the researcher’s decision as to whether an experimental design or some other orientation is the best choice.

Step 5. Determine the Research Design. The next consideration is whether an experimental design is feasible. If not, the researcher will need to consider alternative approaches, recognizing that the original research question may not be answerable as a result.

Step 6. Determine Methods. Methods of the study include (a) subjects, (b) materials and data collection instruments, and (c) procedures. In determining these components, the researcher must continually use the research questions and/or hypotheses as reference points. A good place to start is with subjects or participants. What kind and how many participants does the research design require?

Next consider materials and instrumentation. When the needed resources are not obvious, a good strategy is to construct a listing of data collection instruments needed to answer each question (e.g., attitude survey, achievement test, observation form).

An experiment does not require having access to instruments that are already developed. Particularly in research with new technologies, the creation of novel measures of affect or performance may be implied. From an efficiency standpoint, however, the researcher’s first step should be to conduct a thorough search of existing instruments to determine if any can be used in their original form or adapted to present needs. If none is found, it would usually be far more advisable to construct a new instrument rather than “force fit” an existing one. New instruments will need to be pilot tested and validated; standard test and measurement texts provide useful guidance for this requirement. The experimental procedure, then, will be dictated by the research questions and the available resources. Piloting the methodology is essential to ensure that materials and methods work as planned.

Step 7. Determine Data Analysis Techniques.

Whereas statistical analysis procedures vary widely in complexity, the appropriate options for a particular experiment will be defined by two factors: the research questions and the type of data.

Reporting and Publishing Experimental Studies

Obviously, for experimental studies to have impact on theory and practice in educational technology, their findings need to be disseminated to the field.

Introduction. The introduction to reports of experimental studies accomplishes several functions: (a) identifying the general area of the problem, (b) creating a rationale to learn more about the problem, (c) reviewing relevant literature, and (d) stating the specific purposes of the study. Hypotheses and/or research questions should directly follow from the preceding discussion and generally be stated explicitly, even though they may be obvious from the literature review. In basic research experiments, usage of hypotheses is usually expected, as a theory or principle is typically being tested. In applied research experiments, hypotheses would be used where there is a logical or empirical basis for expecting a certain result.

Method. The Method section of an experiment describes the participants or subjects, materials, and procedures. The usual convention is to start with subjects (or participants) by clearly describing the population concerned (e.g., age or grade level, background) and the sampling procedure. In reading about an experiment, it is extremely important to know if subjects were randomly assigned to treatments or if intact groups were employed. It is also important to know if participation was voluntary or required and whether the level of performance on the experimental task was consequential to the subjects. Learner motivation and task investment are critical in educational technology research, because such variables are likely to impact directly on subjects’ usage of media attributes and instructional strategies.

Results. This major section describes the analyses and the findings. Typically, it should be organized such that the most important dependent measures are reported first. Tables and/or figures should be used judiciously to supplement (not repeat) the text.

Statistical significance vs. practical importance. Traditionally, researchers followed the convention of determining the “importance” of findings based on statistical significance. Simply put, if the experimental group’s mean of 85% on the posttest was found to be significantly higher (say, at p < .01) than the control group’s mean of 80%, then the “effect” was regarded as having theoretical or practical value. If the result was not significant (i.e., the null hypothesis could not be rejected), the effect was dismissed as not reliable or important.

In recent years, however, considerable attention has been given to the benefits of distinguishing between “statistical significance” and “practical importance”. Statistical significance indicates whether an effect can be considered attributable to factors other than chance. But a significant effect does not necessarily mean a “large” effect.
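The distinction can be made concrete with a standardized effect size such as Cohen's d, reported alongside the raw mean difference. The scores and the helper function below are hypothetical illustrations.

```python
# Significance vs. practical importance sketch: report an effect size
# (Cohen's d) alongside the mean difference (hypothetical post-test scores).
from statistics import mean, variance

experimental = [85, 75, 95, 80, 90, 85, 78, 92]
control      = [80, 70, 90, 75, 85, 80, 73, 87]

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((len(a) - 1) * variance(a) + (len(b) - 1) * variance(b)) \
                 / (len(a) + len(b) - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

print(f"mean difference = {mean(experimental) - mean(control):.1f} points")
print(f"Cohen's d = {cohens_d(experimental, control):.2f}")
```

A p-value alone would not reveal whether the 5-point gap is large relative to the spread of scores; d expresses the gap in standard-deviation units.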

Discussion. To conclude the report, the discussion section explains and interprets the findings relative to the hypotheses or research questions, previous studies, and relevant theory and practice. Where appropriate, weaknesses in procedures that may have impacted results should be identified. Other conventional features of a discussion may include suggestions for further research and conclusions regarding the research hypotheses/ questions. For educational technology experiments, drawing implications for practice in the area concerned is highly desirable.

Advantages of Experimental Research

1. Variables Are Controlled
With experimental research, those conducting the study have a very high level of control over their variables. By isolating the variables of interest and determining exactly what they are looking for, they can obtain more valid and accurate results. Experimental research also aims to remove extraneous and unwanted variables; control over irrelevant variables is higher than in other research types or methods.

2. Determine Cause and Effect
The experimental design of this type of research includes manipulating independent variables to determine the cause-and-effect relationship. This is highly valuable for any type of research being done.

3. Easily Replicated
In many cases multiple studies must be performed to gain truly accurate results and draw valid conclusions. Experimental research designs can be repeated easily, and since the researcher has full control over the variables, each replication can be made nearly identical to the one before it. There is also a very wide variety of experimental designs; each can provide different benefits, depending on what is being explored. The investigator can tailor the experiment to a unique situation while still remaining within the bounds of a valid experimental research design.

4. Best Results
Having control over the entire experiment and being able to provide an in-depth analysis of the hypothesis and the data collected makes experimental research one of the best options. The conclusions that are reached are deemed highly valid, and the experiment can be repeated again and again to confirm that validity. Due to the control set up by the experimenter and the strict conditions, better results can be achieved, which gives the researcher greater confidence in the findings.

5. Can Span Across Nearly All Fields Of Research
Another great benefit of this type of research design is that it can be used in many different types of situations. Just like pharmaceutical companies can utilize it, so can teachers who want to test a new method of teaching. It is a basic, but efficient type of research.

6. Clear Cut Conclusions
Since there is such a high level of control, and only one specific variable is being tested at a time, the results are much more relevant than those of some other forms of research. You can clearly see the success, failure, or effects when analyzing the data collected.
7. Greater Transferability
Experimental research supports gaining insights into instructional methods, performing experiments and combining methods for rigour, determining what works best for the population, and providing greater transferability of findings.

Limitations in Experimental Design

Failure to Do the Experiment
One of the disadvantages of experimental research is that you sometimes cannot do experiments at all, because you cannot manipulate independent variables for ethical or practical reasons. For instance, if you are interested in the effect of an individual’s culture on the tendency to help strangers, you cannot do the experiment, simply because you are not capable of manipulating the individual’s culture.

External Validity

A limitation of both experiments and well-identified quasi-experiments is whether the estimated impact would be similar if the program were replicated in another location, at a different time, or targeting a different group of students. Researchers often do little or nothing to address this point and should likely do more.

Another limitation of experiments is that they are generally best at uncovering partial equilibrium effects. The impacts can be quite different when parents, teachers, and students have a chance to optimize their behavior in light of the program.

Hawthorne Effects

Another limitation of experiments is that it is possible that the experience of being observed may change one’s behavior—so-called Hawthorne effects. For example, participants may exert extra effort because they know their outcomes will be measured. As a result, it may be this extra effort and not the underlying program being studied that affects student outcomes.

Cost

Experimental evaluations can be expensive to implement well. Researchers must collect a wide variety of mediating and outcome variables . It is sometimes expensive to follow the control group, which may become geographically dispersed over time or may be less likely to cooperate in the research process. The costs of experts’ time and incentives for participants also threaten to add up quickly. Given a tight budget constraint, sometimes the best approach may be to run a relatively small experimental study.

Violations of Experimental Assumptions

Another limitation of experiments is that it is perhaps too easy to mine the data. If one slices and dices the data in enough ways, there is a good chance that some spurious results will emerge. This is a great temptation to researchers, especially if they are facing pressure from funders who have a stake in the results. Here, too, there are ways to minimize the problem.

Subject to Human Error

Researchers are human too and they can commit mistakes. However, whether the error was made by machine or man, one thing remains certain: it will affect the results of a study.

Other issues cited as disadvantages include personal biases, unreliable samples, results that can only be applied in one situation and the difficulty in measuring the human experience.

Experimental designs are frequently contrived scenarios that do not often mimic the things that happen in the real world. The degree to which results can be generalized across situations and real-world applications is limited.

Can Create Artificial Situations
Experimental research also means controlling irrelevant variables on certain occasions. As such, this creates a situation that is somewhat artificial. By having such deep control over the variables being tested, it is very possible that the data can be skewed or corrupted to fit whatever outcome the researcher needs. This is especially true if it is being done for a business or market study.

Can take an Extensive Amount of Time
With experimental testing, individual experiments have to be done in order to fully research each variable. This can cause the testing to take a very long time and to use large amounts of resources and finances. These costs could transfer onto the company, which could inflate costs for consumers.

Participants can be influenced by environment
Those who participate in trials may be influenced by the environment around them. As such, they might give answers not based on how they truly feel but on what they think the researcher wants to hear. Rather than thinking through what they feel and think about a subject, a participant may just go along with what they believe the researcher is trying to achieve.

Manipulation of variables isn’t seen as completely objective
Experimental research mainly involves the manipulation of variables, a practice that isn’t seen as being completely objective. As mentioned earlier, researchers are actively trying to influence variables so that they can observe the consequences.

Limited Behaviors
When people are part of an experiment, especially one where variables are controlled so precisely, the subjects of the experiment may not give the most accurate reactions. Their normal behaviors are limited because of the experiment environment.

It’s Impossible to Control It All
While the majority of the variables in an experimental research design are controlled by the researchers, it is absolutely impossible to control each and every one. Things from mood, events that happened in the subject’s earlier day, and many other things can affect the outcome and results of the experiment.

In short, it can be said that when researchers decide on a topic of interest, they try to define the research problem, which narrows the research area and allows it to be studied more appropriately. Once the research problem is defined, a researcher formulates a research hypothesis, which is then tested against the null hypothesis.

Experimental research is guided by educated guesses (hypotheses) that predict the result of the experiment. An experiment is conducted to provide evidence for this experimental hypothesis. Experimental research, although very demanding of time and resources, often produces the soundest evidence concerning hypothesized cause-effect relationships.


The Survey method – Technique of gathering data

Dr. V.K. Maheshwari, Former Principal

K.L.D.A.V (P. G) College, Roorkee, India

The Survey method is the technique of gathering data by asking questions of people who are thought to have the desired information. A formal list of questions, a questionnaire, is prepared. Generally a non-disguised approach is used. The respondents are asked questions about their demographics, interests, and opinions.

The survey is a non-experimental, descriptive research method. Surveys can be useful when a researcher wants to collect data on phenomena that cannot be directly observed (such as opinions on library services). Surveys are used extensively in library and information science to assess attitudes and characteristics of a wide range of subjects, from the quality of user-system interfaces to library users’ reading habits. In a survey, researchers sample a population. Busha and Harter (1980) state that “a population is any set of persons or objects that possesses at least one common characteristic.”

Survey research is one of the most important areas of measurement in applied social research. The broad area of survey research encompasses any measurement procedures that involve asking questions of respondents. A “survey” can be anything from a short paper-and-pencil feedback form to an intensive one-on-one in-depth interview.

Types of Survey

Different types of surveys are actually composed of several research techniques, developed by a variety of disciplines.

Data are usually collected through the use of questionnaires, although sometimes researchers directly interview subjects. Surveys can use qualitative (e.g. ask open-ended questions) or quantitative (e.g. use forced-choice questions) measures. There are two basic types of surveys: cross-sectional surveys and longitudinal surveys. Much of the following information was taken from an excellent book on the subject, called Survey Research Methods, by Earl R. Babbie.

Cross-Sectional Surveys

Cross-sectional surveys are used to gather information on a population at a single point in time

Longitudinal Surveys

Longitudinal surveys gather data over a period of time. The researcher may then analyze changes in the population and attempt to describe and/or explain them. The three main types of longitudinal surveys are trend studies, cohort studies, and panel studies.

Trend Studies

Trend studies focus on a particular population, which is sampled and scrutinized repeatedly. While samples are of the same population, they are typically not composed of the same people. Trend studies, since they may be conducted over a long period of time, do not have to be conducted by just one researcher or research project. A researcher may combine data from several studies of the same population in order to show a trend.

Cohort Studies

Cohort studies also focus on a particular population, sampled and studied more than once. But cohort studies have a different focus.

Panel Studies

Panel studies allow the researcher to find out why changes in the population are occurring, since they use the same sample of people every time. That sample is called a panel.

Techniques of Survey Method

There are mainly 4 methods by which we can collect data through the Survey Method

  1. Telephonic Interview
  2. Personal Interview
  3. Mail Survey
  4. Electronic Interview

1. Telephonic Interview

Telephone interviewing stands out as the best method for quickly gathering needed information. Responses are collected from the respondents by the researcher over the telephone.

Advantages of Telephonic Interview

  1. It is a very fast method of data collection.
  2. It has the advantage over the mail questionnaire of permitting the interviewer to talk to one or more persons and to clarify questions that are not understood.
  3. The response rate of telephone interviewing seems to be a little better than that of mail questionnaires.
  4. The quality of information is better.
  5. It is a less costly method and there are fewer administration problems.

Disadvantages of Telephonic Interview

    1. They cannot handle interviews which need props.
    2. They cannot handle unstructured interviews.
    3. They cannot be used for questions which require long descriptive answers.
    4. Respondents cannot be observed.
    5. People are reluctant to disclose personal information on the telephone.
    6. People who don’t have a telephone cannot be approached.

2. Personal Interview

It is the most versatile of all the methods. It is used when props are required along with the verbal response, and non-verbal responses can also be observed.

Advantages of Personal Interview

    1. The interviewer can ask more questions and can supplement the interview with personal observation.
    2. They are more flexible; the order of questions can be changed.
    3. Knowledge of past and future is possible.
    4. In-depth research is possible.
    5. Verification of data from other sources is possible.
    6. The information obtained is very reliable and dependable, and helps in establishing cause-and-effect relationships very early.

Disadvantages of Personal Interview

    1. It requires much more technical and administrative planning and supervision.
    2. It is more expensive.
    3. It is time consuming.
    4. The accuracy of the data is influenced by the interviewer.
    5. A number of call-backs may be required.
    6. Some people are not approachable.

3. Mail Survey

Questionnaires are sent to the respondents, who fill them in and send them back.

Advantages of Mail Survey

    1. It can reach all types of people.
    2. Response rate can be improved by offering certain incentives.

Disadvantages of Mail Survey

    1. It cannot be used for unstructured studies.
    2. It is costly.
    3. It requires an established mailing list.
    4. It is time consuming.
    5. There are problems in the case of complex questions.

4. Electronic Interview

Electronic interviewing is a process of recognizing and noting people, objects, and occurrences electronically rather than asking for information. For example, when you go to a store, you can notice which products people like to use. The Universal Product Code (UPC) is also a method of observing what people are buying.

Advantages of Electronic Interview

    1. There is no reliance on the willingness or ability of respondents.
    2. The data are more accurate and objective.

Disadvantages of Electronic Interview

    1. Attitudes cannot be observed.
    2. Events which are of long duration cannot be observed.
    3. There is observer bias; it is not purely objective.
    4. If the respondents know that they are being observed, their responses can be biased.
    5. It is a costly method.

Observation Method

The observation method involves human or mechanical observation of what people actually do or what events take place during a buying or consumption situation: information is collected by observing a process at work. The following are a few situations:

  1. Service stations: pose as a customer, go to a service station and observe.
  2. To evaluate the effectiveness of a display of Dunlop pillow cushions in a departmental store, the observer notes: a) how many pass by; b) how many stop to look at the display; c) how many decide to buy.
  3. Supermarket: which is the best location on the shelf? Hidden cameras are used.
  4. To determine the typical sales arrangement and the sales enthusiasm shown by various salesmen. Normally this is done by an investigator using a concealed tape recorder.

Advantages of Observation Method

  1. If the researcher observes and records events, it is not necessary to rely on the willingness and ability of respondents to report accurately.
  2. The biasing effect of interviewers is either eliminated or reduced. Data collected by observation are, thus, more objective and generally more accurate.

Disadvantages of Observation Method

  1. The most limiting factor in the use of the observation method is the inability to observe such things as attitudes, motivations, customers’/consumers’ states of mind, their buying motives and their images.
  2. It also takes time for the investigator to wait for a particular action to take place.
  3. Personal and intimate activities, such as watching television late at night, are more easily discussed with questionnaires than they are observed.
  4. Cost is the final disadvantage of the observation method. Under most circumstances, observational data are more expensive to obtain than other survey data: the observer has to wait, doing nothing, between events to be observed, and this unproductive time is an increased cost.

Categories of Surveys

Surveys can be divided into two broad categories: the questionnaire and the interview. Questionnaires are usually paper-and-pencil instruments that the respondent completes. Interviews are completed by the interviewer based on what the respondent says.

Questionnaires

When most people think of questionnaires, they think of the mail survey. There are many advantages to mail surveys. They are relatively inexpensive to administer. You can send the exact same instrument to a wide number of people. They allow the respondent to fill it out at their own convenience. But there are some disadvantages as well. Response rates from mail surveys are often very low. And, mail questionnaires are not the best vehicles for asking for detailed written responses.

A second type is the group administered questionnaire. A sample of respondents is brought together and asked to respond to a structured sequence of questions. Traditionally, questionnaires were administered in group settings for convenience. The researcher could give the questionnaire to those who were present and be fairly sure that there would be a high response rate. If the respondents were unclear about the meaning of a question they could ask for clarification. And, there were often organizational settings where it was relatively easy to assemble the group.

A less familiar type of questionnaire is the household drop-off survey. In this approach, a researcher goes to the respondent’s home or business and hands the respondent the instrument. In some cases, the respondent is asked to mail it back, or the interviewer returns to pick it up. This approach attempts to blend the advantages of the mail survey and the group administered questionnaire. Like the mail survey, the respondent can work on the instrument in private, when it’s convenient. Like the group administered questionnaire, the interviewer makes personal contact with the respondent — they don’t just send an impersonal survey instrument. And, the respondent can ask questions about the study and get clarification on what is to be done. Generally, this would be expected to increase the percentage of people who are willing to respond.

Interviews

Interviews are a far more personal form of research than questionnaires. In the personal interview, the interviewer works directly with the respondent. Unlike with mail surveys, the interviewer has the opportunity to probe or ask follow-up questions. And, interviews are generally easier for the respondent, especially if what is sought is opinions or impressions. Interviews can be very time consuming and they are resource intensive.

Telephone interviews enable a researcher to gather information rapidly. Most of the major public opinion polls that are reported are based on telephone interviews. Like personal interviews, they allow for some personal contact between the interviewer and the respondent. And, they allow the interviewer to ask follow-up questions.

Selecting the Survey Method

Selecting the type of survey you are going to use is one of the most critical decisions in many research contexts.

Population Issues

The first set of considerations has to do with the population and its accessibility.

  • Can the population be enumerated?

For some populations, you have a complete listing of the units that will be sampled. For others, such a list is difficult or impossible to compile.

  • Is the population literate?

Questionnaires require that your respondents can read. While this might initially seem like a reasonable assumption for many adult populations, we know from recent research that the incidence of adult illiteracy is alarmingly high. Young children would not be good targets for questionnaires.

  • Are there language issues?

We live in a multilingual world. Virtually every society has members who speak a language other than the predominant one.

  • Will the population cooperate?

People who do research on immigration issues have a difficult methodological problem. Why would we expect those respondents to cooperate? Although the researcher may mean no harm, the respondents are at considerable risk legally if information they divulge should get into the hands of the authorities.

  • What are the geographic restrictions?

Is your population of interest dispersed over too broad a geographic range for you to study feasibly with a personal interview?

Sampling Issues

The sample is the actual group you will have to contact in some way. There are several important sampling issues you need to consider when doing survey research.

  • What data is available?

What information do you have about your sample? Do you know their current addresses? Their current phone numbers? Are your contact lists up to date?

  • Can respondents be found?

Can your respondents be located? Some people are very busy. Some travel a lot. Some work the night shift. Even if you have an accurate phone or address, you may not be able to locate or make contact with your sample.

  • Can all members of the population be sampled?

If you have an incomplete list of the population (i.e., sampling frame) you may not be able to sample every member of the population. Lists of various groups are extremely hard to keep up to date. People move or change their names. Even though they are on your sampling frame listing, you may not be able to get to them.

  • Are response rates likely to be a problem?

Even if you are able to solve all of the other population and sampling problems, you still have to deal with the issue of response rates. Some members of your sample will simply refuse to respond. Others have the best of intentions, but can’t seem to find the time to send in your questionnaire by the due date. Still others misplace the instrument or forget about the appointment for an interview. Low response rates are among the most difficult of problems in survey research.

Content Issues

The content of your study can also pose challenges for the different survey types you might utilize.

  • Can the respondents be expected to know about the issue?

If the respondent does not keep up with the news (e.g., by reading the newspaper, watching television news, or talking with others), they may not even know about the issue you want to ask them about.

  • Will respondent need to consult records?

Even if the respondent understands what you’re asking about, you may need to allow them to consult their records in order to get an accurate answer.

Administrative Issues

Last, but certainly not least, you have to consider the feasibility of the survey method for your study.

  • Costs

Cost is often the major determining factor in selecting survey type. You might prefer to do personal interviews, but can’t justify the high cost of training and paying for the interviewers.

  • Facilities

Do you have the facilities (or access to them) to process and manage your study? In phone interviews, do you have well-equipped phone surveying facilities? For focus groups, do you have a comfortable and accessible room to host the group? Do you have the equipment needed to record and transcribe responses?

  • Time

Some types of surveys take longer than others. Have you allowed for enough time to get enough personal interviews to justify that approach?

  • Personnel

Different types of surveys make different demands of personnel. Interviews require interviewers who are motivated and well-trained. Group administered surveys require people who are trained in group facilitation. Some studies may be in a technical area that requires some degree of expertise in the interviewer.

Question Issues

Sometimes the nature of what you want to ask respondents will determine the type of survey you select.

  • What types of questions can be asked?

Are you going to be asking personal questions? Are you going to need to get lots of detail in the responses? Can you anticipate the most frequent or important types of responses and develop reasonable closed-ended questions?

  • How complex will the questions be?

Sometimes you are dealing with a complex subject or topic. The questions you want to ask are going to have multiple parts. You may need to branch to sub-questions.

  • Will screening questions be needed?

A screening question may be needed to determine whether the respondent is qualified to answer your question of interest. For instance, you wouldn’t want to ask someone their opinions about a specific computer program without first “screening” them to find out whether they have any experience using the program.

  • Can question sequence be controlled?

Is your survey one where you can construct in advance a reasonable sequence of questions? Or, are you doing an initial exploratory study where you may need to ask lots of follow-up questions that you can’t easily anticipate?

  • Will lengthy questions be asked?

If your subject matter is complicated, you may need to give the respondent some detailed background for a question. Can you reasonably expect your respondent to sit still long enough in a phone interview to ask your question?

  • Will long response scales be used?

If you are asking people about the different computer equipment they use, you may have to have a lengthy response list (CD-ROM drive, floppy drive, mouse, touch pad, modem, network connection, external speakers, etc.). Clearly, it may be difficult to ask about each of these in a short phone interview.

Bias Issues

People come to the research endeavor with their own sets of biases and prejudices. Sometimes, these biases will be less of a problem with certain types of survey approaches.

  • Can social desirability be avoided?

Respondents generally want to “look good” in the eyes of others. None of us likes to look like we don’t know an answer. We don’t want to say anything that would be embarrassing. If you ask people about information that may put them in this kind of position, they may not tell you the truth, or they may “spin” the response so that it makes them look better.

  • Can interviewer distortion and subversion be controlled?

Interviewers may distort an interview as well. They may not ask questions that make them uncomfortable. They may not listen carefully to respondents on topics for which they have strong opinions.

  • Can false respondents be avoided?

With mail surveys it may be difficult to know who actually responded. Is the person you’re speaking with on the phone actually who they say they are? At least with personal interviews, you have a reasonable chance of knowing who you are speaking with. In mail surveys or phone interviews, this may not be the case.

Constructing the Survey

Constructing a survey instrument is an art in itself. There are numerous small decisions that must be made — about content, wording, format, placement — that can have important consequences for your entire study.

We have already distinguished the two major types of surveys, the questionnaire and the interview, and the varieties of each. There are three areas involved in writing a question:

  • choosing the response format that you use for collecting information from the respondent
  • determining the question content, scope and purpose
  • figuring out how to word the question to get at the issue of interest

Finally, once you have your questions written, there is the issue of how best to place them in your survey.

Types Of Questions

Survey questions can be divided into two broad types:

  1. Structured
  2. Unstructured

From an instrument design point of view, the structured questions pose the greater difficulties (see Decisions About the Response Format). From a content perspective, it may actually be more difficult to write good unstructured questions.

Dichotomous Questions

When a question has two possible responses, we consider it dichotomous. Surveys often use dichotomous questions that ask for a Yes/No, True/False or Agree/Disagree response.

Questions Based on Level of Measurement

We can also classify questions in terms of their level of measurement.

We can also construct survey questions that attempt to measure on an interval level. One of the most common of these types is the traditional 1-to-5 rating (or 1-to-7, or 1-to-9, etc.). This is sometimes referred to as a Likert response scale. Here, we see how we might ask an opinion question on a 1-to-5 bipolar scale (it’s called bipolar because there is a neutral point and the two ends of the scale are at opposite positions of the opinion):

Another interval question uses an approach called the semantic differential. Here, an object is assessed by the respondent on a set of bipolar adjective pairs (using a 5-point rating scale):

Finally, we can also get at interval measures by using what is called a cumulative or Guttman scale. Here, the respondent checks each item with which they agree. The items themselves are constructed so that they are cumulative — if you agree to one, you probably agree to all of the ones above it in the list:
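The cumulative logic of a Guttman scale can be made concrete with a short sketch. The items and the scoring rule below are hypothetical illustrations, not part of any standard instrument:

```python
# Illustrative sketch of scoring a Guttman (cumulative) scale.
# The scale items are hypothetical examples.

ITEMS = [  # ordered so that agreeing with a later item implies agreeing with the earlier ones
    "I am willing to live in the same city as X",
    "I am willing to work alongside X",
    "I am willing to have X as a neighbour",
    "I am willing to have X marry into my family",
]

def guttman_score(agreements):
    """Scale score = number of items agreed with (one boolean per item)."""
    return sum(agreements)

def is_cumulative(agreements):
    """True if the pattern fits the Guttman model: no agreement with a
    harder item after disagreement with an easier one."""
    seen_disagree = False
    for agreed in agreements:
        if agreed and seen_disagree:
            return False
        if not agreed:
            seen_disagree = True
    return True

print(guttman_score([True, True, True, False]))   # 3
print(is_cumulative([True, True, True, False]))   # True: a perfect scale pattern
print(is_cumulative([True, False, True, False]))  # False: a "gap" in the pattern
```

In practice, real response data rarely fit the cumulative model perfectly; patterns flagged as non-cumulative are what scale-construction procedures try to minimize.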

Filter or Contingency Questions

Sometimes you have to ask the respondent one question in order to determine if they are qualified or experienced enough to answer a subsequent one. This requires using a filter or contingency question.

Filter questions can get very complex. Sometimes, you have to have multiple filter questions in order to direct your respondents to the correct subsequent questions. There are a few conventions you should keep in mind when using filters:

  • Try to avoid having more than three levels (two jumps) for any question.

Too many jumps will confuse the respondent and may discourage them from continuing with the survey.

  • If there are only two levels, use a graphic to signal the jump (e.g., an arrow and box).
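The branching behavior of filter questions can be sketched as a simple “skip map.” The question IDs, wording, and routing below are hypothetical:

```python
# Minimal sketch of filter/contingency ("skip") logic for a survey.
# Question IDs, wording, and routing are hypothetical examples.

QUESTIONS = {
    "Q1": {"text": "Do you own a car?", "skip": {"No": "Q4"}, "next": "Q2"},
    "Q2": {"text": "Is it insured?", "skip": {}, "next": "Q3"},
    "Q3": {"text": "Which company insures it?", "skip": {}, "next": "Q4"},
    "Q4": {"text": "Do you use public transport?", "skip": {}, "next": None},
}

def route(answers):
    """Return the sequence of question IDs a respondent sees,
    given a dict mapping question IDs to their answers."""
    path, qid = [], "Q1"
    while qid is not None:
        path.append(qid)
        q = QUESTIONS[qid]
        # a matching answer triggers the jump; otherwise fall through to "next"
        qid = q["skip"].get(answers.get(qid), q["next"])
    return path

# A respondent without a car jumps straight from Q1 to Q4 (one jump):
print(route({"Q1": "No", "Q4": "Yes"}))   # ['Q1', 'Q4']
print(route({"Q1": "Yes", "Q2": "Yes"}))  # ['Q1', 'Q2', 'Q3', 'Q4']
```

Keeping the routing in one table like this also makes the “no more than two jumps” rule easy to audit.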

Question Content

For each question in your survey, you should ask yourself how well it addresses the content you are trying to get at.

Do Respondents Have the Needed Information?

Look at each question in your survey to see whether the respondent is likely to have the necessary information to be able to answer the question.

Does the Question Need to be More Specific?

Sometimes we ask our questions too generally and the information we obtain is more difficult to interpret.

Is Question Sufficiently General?

You can err in the other direction as well by being too specific.

Is Question Biased or Loaded?

One danger in question-writing is that your own biases and blind-spots may affect the wording.

Will Respondent Answer Truthfully?

For each question on your survey, ask yourself whether the respondent will have any difficulty answering the question truthfully. If there is some reason why they may not, consider rewording the question.

Response Format

The response format is how you collect the answer from the respondent.

Structured Response Formats

Structured formats help the respondent to respond more easily and help the researcher to accumulate and summarize responses more efficiently. But, they can also constrain the respondent and limit the researcher’s ability to understand what the respondent really means. There are many different structured response formats, each with its own strengths and weaknesses. We’ll review the major ones here.

Fill-In-The-Blank. One of the simplest response formats is a blank line. A blank line can be used for a number of different response types

Check the Answer. The respondent places a check next to the response(s). The simplest form would be the example given above where we ask the person to indicate their gender.

Circle the Answer. Sometimes the respondent is asked to circle an item to indicate their response. Usually we are asking them to circle a number

Unstructured Response Formats

While there is a wide variety of structured response formats, there are relatively few unstructured ones. Generally, the unstructured response is written text. If the respondent (or interviewer) writes down text as the response, you’ve got an unstructured response format. Almost every short questionnaire contains one or more short text-field questions.

Question Wording

One of the major difficulties in writing good survey questions is getting the wording right. Even slight wording differences can confuse the respondent or lead to incorrect interpretations of the question.

  • Can the Question be Misunderstood?
  • How personal is the wording?
  • Is the time frame specified?
  • Are some terms too vague to be useful?
  • What Assumptions Does the Question Make?

Other Wording Issues

The nuances of language guarantee that the task of the question writer will be endlessly complex. Without trying to generate an exhaustive list, here are a few other questions to keep in mind:

  • Does the question contain difficult or unclear terminology?
  • Does the question make each alternative explicit?
  • Is the wording loaded or slanted?
  • Is the wording objectionable?

Decisions About Placement

One of the most difficult tasks facing the survey designer involves the ordering of questions.

Whenever you think about question placement, consider the following questions:

  • Does the question come too early or too late to arouse interest?
  • Does the question receive sufficient attention?
  • Is the answer influenced by prior questions?

The Opening Questions

Just as in other aspects of life, first impressions are important in survey work. The first few questions you ask will determine the tone for the survey, and can help put your respondent at ease. You should never begin your survey with sensitive or threatening questions.

The Golden Rule

You are imposing in the life of your respondent. You are asking for their time, their attention, their trust, and often, for personal information. Therefore, you should always keep in mind the “golden rule” of survey research:

Do unto your respondents as you would have them do unto you!

To put this in more practical terms:

  • Thank the respondent at the beginning for allowing you to conduct your study
  • Keep your survey as short as possible — only include what is absolutely necessary
  • Be sensitive to the needs of the respondent
  • Be alert for any sign that the respondent is uncomfortable
  • Thank the respondent at the end for participating
  • Assure the respondent that you will send a copy of the final results

Interviews

Interviews are among the most challenging and rewarding forms of measurement. They require a personal sensitivity and adaptability as well as the ability to stay within the bounds of the designed protocol.

Preparation

The Role of the Interviewer

The interviewer is really the “jack-of-all-trades” in survey research. The interviewer’s role is complex and multifaceted. It includes the following tasks:

  • Locate and enlist cooperation of respondents. The interviewer has to find the respondent. In door-to-door surveys, this means being able to locate specific addresses.
  • Motivate respondents to do a good job. The interviewer has to be motivated and has to be able to communicate that motivation to the respondent. Often, this means that the interviewer has to be convinced of the importance of the research.
  • Clarify any confusion/concerns. Respondents may raise objections or concerns that were not anticipated. The interviewer has to be able to respond candidly and informatively.
  • Observe quality of responses. Whether the interview is personal or over the phone, the interviewer is in the best position to judge the quality of the information that is being received.
  • Conduct a good interview. Last, and certainly not least, the interviewer has to conduct a good interview.

Training the Interviewers

One of the most important aspects of any interview study is the training of the interviewers themselves. Here are some of the major topics that should be included in interviewer training:

  • Describe the entire study. Interviewers need to learn about the background for the study, previous work that has been done, and why the study is important.
  • State who is the sponsor of the research.
  • Teach enough about survey research.
  • Explain the sampling logic and process.
  • Explain interviewer bias.
  • Walk through the interview. When you first introduce the interview, it’s a good idea to walk through the entire protocol so the interviewers can get an idea of the various parts or phases and how they interrelate.
  • Explain respondent selection procedures, including reading maps and identifying respondents.
  • Rehearse the interview.
  • Explain supervision.
  • Explain scheduling.

The Interviewer’s Kit

Usually, you will want to assemble an interviewer kit that can be easily carried and includes all of the important materials such as:

  • a “professional-looking” 3-ring notebook (this might even have the logo of the organization conducting the interviews)
  • a cover letter from the Principal Investigator or Sponsor
  • a phone number the respondent can call to verify the interviewer’s authenticity
  • maps
  • official identification (preferably a picture ID)
  • sufficient copies of the survey instrument

The Interview

Every interview includes some common components. There’s the opening, where the interviewer gains entry and establishes the rapport and tone for what follows. There’s the middle game, the heart of the process, that consists of the protocol of questions and the improvisations of the probe. And finally, there’s the endgame, the wrap-up, where the interviewer and respondent establish a sense of closure. Whether it’s a two-minute phone interview or a personal interview that spans hours, the interview is a bit of theater, a mini-drama that involves real lives in real time.

Opening Remarks

In many ways, the interviewer has the same initial problem that a salesperson has. You have to get the respondent’s attention initially for a long enough period that you can sell them on the idea of participating in the study.

Gaining entry

The first thing the interviewer must do is gain entry. Several factors can enhance the prospects. Probably the most important factor is your initial appearance. The interviewer needs to dress professionally and in a manner that will be comfortable to the respondent. The way the interviewer appears initially to the respondent has to communicate some simple messages: that you’re trustworthy, honest, and non-threatening.

Asking the Questions

  • Use the questionnaire carefully, but informally. The questionnaire is your friend; it was developed with a lot of care and thoughtfulness.
  • Ask questions exactly as written.
  • Follow the order given.
  • Ask every question.
  • Don’t finish sentences.

Obtaining Adequate Responses – The Probe

  • Silent probe: the most effective way to encourage someone to elaborate is to do nothing at all, just pause and wait.
  • Overt encouragement: this could be as simple as saying “Uh-huh” or “OK” after the respondent completes a thought.
  • Elaboration: you can encourage more information by asking for elaboration.
  • Ask for clarification: sometimes you can elicit greater detail by asking the respondent to clarify something that was said earlier.
  • Repetition: this is the old psychotherapist trick, saying something without really saying anything new.

Recording the Response

Although we have the capability to record a respondent in audio and/or video, most interview methodologists don’t think it’s a good idea. In general, personal interviews are still best when recorded by the interviewer using pen and paper. Here, I assume the paper-and-pencil approach.

  • Record responses immediately. The interviewer should record responses as they are being stated.
  • Include all probes.
  • Use abbreviations where possible.

Concluding the Interview

When you’ve gone through the entire interview, you need to bring the interview to closure. Some important things to remember:

  • Thank the respondent. Don’t forget to do this. Even if the respondent was troublesome or uninformative, it is important for you to be polite and thank them for their time.
  • Tell them when you expect to send results. It’s common practice to prepare a short, readable, jargon-free summary of interviews that you can send to the respondents.
  • Don’t be brusque or hasty. Allow for a few minutes of winding-down conversation. The respondent may want to know a little bit about you or how much you like doing this kind of work. They may be interested in how the results will be used. Use these kinds of interests as a way to wrap up the conversation; you have to find a way to politely cut off the conversation and make your exit.
  • Immediately after leaving, write down any notes about how the interview went.

Analyzing Survey Results

After creating and conducting your survey, you must now process and analyze the results. These steps require strict attention to detail and, in some cases, knowledge of statistics and computer software packages.

Processing the Results

It is clearly important to keep careful records of survey data in order to do effective work. Most researchers recommend using a computer to help sort and organize the data. Additionally, Glastonbury and MacKean point out that once the data has been filtered through the computer, it is possible to do an unlimited amount of analysis.

Jolliffe (1986) believes that editing should be the first step in processing this data. He writes, “The obvious reason for this is to ensure that the data analyzed are correct and complete. At the same time, editing can reduce the bias, increase the precision and achieve consistency between the tables.” Of course, editing may not always be necessary, if, for example, you are doing a qualitative analysis of open-ended questions, or the survey is part of a larger project and gets distributed to other agencies for analysis. However, editing could be as simple as checking the information input into the computer.
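Editing of this kind can be as simple as automated completeness and range checks on each record. The field names and valid ranges in this sketch are hypothetical:

```python
# Illustrative editing checks for survey records before analysis.
# Field names and valid ranges are hypothetical.

VALID_RANGES = {"age": (18, 99), "satisfaction": (1, 5)}
REQUIRED = ["id", "age", "satisfaction"]

def edit_record(record):
    """Return a list of problems found in one survey record."""
    problems = []
    for field in REQUIRED:
        if record.get(field) is None:
            problems.append(f"missing {field}")
    for field, (lo, hi) in VALID_RANGES.items():
        value = record.get(field)
        if value is not None and not lo <= value <= hi:
            problems.append(f"{field}={value} out of range {lo}-{hi}")
    return problems

print(edit_record({"id": 7, "age": 34, "satisfaction": 4}))   # [] — record is clean
print(edit_record({"id": 8, "age": 12, "satisfaction": None}))
# ['missing satisfaction', 'age=12 out of range 18-99']
```

Records that fail these checks can then be corrected, queried back to the respondent, or excluded, depending on the study's editing rules.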

All of this information should be used to test for statistical significance. Information may be recorded in any number of ways. Charts and graphs are clear, visual ways to record findings in many cases. For instance, in a mail-out survey where response rate is an issue, you might use a response rate graph to make the process easier. The day the surveys are mailed out should be recorded first. Then, every day thereafter, the number of returned questionnaires should be logged on the graph. Be sure to record both the number returned each day, and the cumulative number, or percentage. Also, as each completed questionnaire is returned, each should be opened, scanned and assigned an identification number.
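The response-rate log described above can be sketched in a few lines; the mailing size and daily return counts here are hypothetical:

```python
# Sketch of a response-rate log for a mail-out survey.
# The mailing size and daily return counts are hypothetical.

def cumulative_rates(surveys_mailed, returns_per_day):
    """Return the cumulative percentage of surveys returned after each day."""
    rates, total = [], 0
    for returned in returns_per_day:
        total += returned
        rates.append(100 * total / surveys_mailed)
    return rates

# Day 0 is the mailing day; returns start arriving afterwards.
for day, rate in enumerate(cumulative_rates(200, [0, 12, 25, 18, 9, 6])):
    print(f"day {day}: {rate:.1f}% returned")
```

Plotting both the daily count and this cumulative percentage makes it easy to see when returns have tapered off and a follow-up mailing is warranted.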

Analyzing the Results

Before actually beginning the survey the researcher should know how they want to analyze the data. If you are collecting quantifiable data, a code book is needed for interpreting your data and should be established prior to collecting the survey data. This is important because many different formulas are needed in order to properly analyze the survey research and obtain statistical significance. Since computer programs have made the process of analyzing data vastly easier than it once was, it would be sensible to choose this route.

After the survey is conducted and the data collected, the results must be assembled in some usable format that allows comparison within the survey group, between groups, or both. The results could be analyzed in a number of ways. A t-test may be used to determine whether the scores of two groups differ on a single variable — whether writing ability differs among students in two classrooms, for instance. Correlation measurements could also be constructed to compare the results of two interacting variables within the data set.
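As a sketch of the t-test comparison just described, here is Welch's two-sample t statistic computed with only the standard library. The classroom scores are hypothetical, and the resulting statistic would still need to be compared against a t distribution to assess significance:

```python
import math
from statistics import mean, variance

# Hypothetical writing-ability scores for students in two classrooms.
class_a = [72, 85, 78, 90, 66, 81, 75, 88]
class_b = [64, 70, 68, 75, 61, 73, 69, 66]

def welch_t(x, y):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    return (mean(x) - mean(y)) / math.sqrt(
        variance(x) / len(x) + variance(y) / len(y)
    )

t = welch_t(class_a, class_b)
print(f"t = {t:.2f}")  # compare against a t table for significance
```

In practice a statistics package would also report degrees of freedom and a p-value; the point here is only that the group comparison reduces to a simple, reproducible calculation.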

Reliability and Validity

Surveys tend to be weak on validity and strong on reliability. The artificiality of the survey format puts a strain on validity. Since people’s real feelings are hard to grasp in terms of such dichotomies as “agree/disagree,” “support/oppose,” “like/dislike,” etc., these are only approximate indicators of what we have in mind when we create the questions. Reliability, on the other hand, is a clearer matter. Survey research presents all subjects with a standardized stimulus, and so goes a long way toward eliminating unreliability in the researcher’s observations. Careful wording, format, content, etc. can reduce significantly the subject’s own unreliability.

 

Strengths and Weaknesses of Surveys

Strengths:

  • Surveys are relatively inexpensive (especially self-administered surveys).
  • They can be administered from remote locations using mail, email or telephone. Consequently, very large samples are feasible, making the results statistically significant even when analyzing multiple variables.
  • Many questions can be asked about a given topic, giving considerable flexibility to the analysis.
  • There is flexibility at the creation phase in deciding how the questions will be administered: as face-to-face interviews, by telephone, as group administered written or oral survey, or by electronic means.
  • Standardized questions make measurement more precise by enforcing uniform definitions upon the participants.
  • Standardization ensures that similar data can be collected from groups then interpreted comparatively (between-group study).
  • Usually, high reliability is easy to obtain: by presenting all subjects with a standardized stimulus, observer subjectivity is greatly eliminated.
  • Surveys are useful in describing the characteristics of a large population. No other method of observation can provide this general capability.

Weaknesses:

  • A methodology relying on standardization forces the researcher to develop questions general enough to be minimally appropriate for all respondents, possibly missing what is most appropriate to many respondents.
  • As opposed to direct observation, survey research (excluding some interview approaches) can seldom deal with “context.”
  • It may be hard for participants to recall information or to tell the truth about a controversial question.
  • Surveys are inflexible in that they require the initial study design (the tool and administration of the tool) to remain unchanged throughout the data collection.
  • The researcher must ensure that a large number of the selected sample will reply.

Survey methodology as a scientific field seeks to identify principles of sample design, data-collection instruments, statistical adjustment of data, data processing, and final analysis that bear on systematic and random survey errors. Survey errors are sometimes analyzed in connection with survey cost: the goal can be framed either as improving quality within cost constraints or as reducing cost for a fixed level of quality. Survey methodology is both a scientific field and a profession; some professionals in the field study survey errors empirically, while others design surveys to reduce them. For survey designers, the task involves making a large set of decisions about thousands of individual features of a survey in order to improve it.

 

 

Introduction to Sociometry

Dr. V.K. Maheshwari, Former Principal

K.L.D.A.V(P.G) College, Roorkee, India


Sociometry is “a method for discovering, describing and evaluating social status, structure, and development through measuring the extent of acceptance or rejection between individuals in groups.” Franz defines sociometry as “a method used for the discovery and manipulation of social configurations by measuring the attractions and repulsions between individuals in a group.” It is a means for studying the choice, communication and interaction patterns of individuals in a group. It is concerned with attractions and repulsions between individuals in a group. In this method, a person is asked to choose one or more persons according to specified criteria, in order to find out the person or persons with whom he would like to associate.

The term sociometry relates to its Latin etymology, socius meaning companion, and metrum meaning measure.    As these roots imply, sociometry is a way of measuring the degree of relatedness among people.  Measurement of relatedness can be useful not only in the assessment of behavior within groups, but also for interventions to bring about positive change and for determining the extent of change.

J.L. Moreno was a psychiatrist born in 1889 in Romania.  He is credited with the development of group therapy, sociodrama and psychodrama.  One of his basic contributions is sociometry. Sociometry is the measurement of social choice, meaning the decisions, both conscious and unconscious, that are made regarding inter-personal affiliation. These measurement tools can be used to facilitate change.    At its most basic level, sociometry addresses the various aspects of human connection.  We are constantly making choices about with whom we choose to affiliate. Sociometry is a method that can be used to concretize and explore these choices.

In developing sociometry, Moreno sought to create a scientific methodology and set of intervention tools that could be used to study and change basic human feelings such as acceptance and rejection, particularly as they apply to the group process. He also sought to study, and where appropriate intervene in, the ways in which groups organize, using various sociometric measures such as pairs, clusters, triangles, cleavages and other group formations. This includes an assessment of those to whom we are drawn and those by whom we feel repelled. Moreno wanted to create a method to explore this.

Jacob Levy Moreno coined the term sociometry and conducted the first long-range sociometric study from 1932-38 at the New York State Training School for Girls in Hudson, New York. Jacob Moreno defined sociometry as “the inquiry into the evolution and organization of groups and the position of individuals within them.” He goes on to write: “As the … science of group organization - it attacks the problem not from the outer structure of the group, the group surface, but from the inner structure.” Sociometric explorations reveal the hidden structures that give a group its form: the alliances, the subgroups, the hidden beliefs, the forbidden agendas, the ideological agreements, and the ‘stars’ of the show.

Sociometry is the study of human connectedness. Moreno viewed society as composed of units made up of each individual and the essential persons in his or her life. Moreno called this smallest unit of measurement the social atom, comprising all the significant figures, real or fantasized, past and present.

Sociometry is based on the fact that people make choices in interpersonal relationships. Whenever people gather, they make choices–where to sit or stand; choices about who is perceived as friendly and who not, who is central to the group, who is rejected, who is isolated.  As Moreno says, “Choices are fundamental facts in all ongoing human relations, choices of people and choices of things.  It is immaterial whether the motivations are known to the chooser or not; it is immaterial whether [the choices] are inarticulate or highly expressive, whether rational or irrational.  They do not require any special justification as long as they are spontaneous and true to the self of the chooser.  They are facts of the first existential order.”

Sociometry means ‘companion measure’. Moreno developed it as a measure, a new systematic effort. He wanted to create a society in which all humans achieve their potential to love, to share and to face their truth. By making choices overt and active, he hoped individuals would become more spontaneous and authentic, and that organisations and group structures would become fresh, clear and lively. Sociometry enables us to know about interpersonal choices, attractions and rejections, and their effects. It provides methods for displaying these choices and assists in exploring and improving the dynamics of relationships.

Purpose of Sociometry

Moreno developed sociometry within the new sciences, although its ultimate purpose is transcendence and not science. ‘By making choices based on criteria, overt and energetic, Moreno hoped that individuals would be more spontaneous, and organizations and group structures would become fresh, clear and lively’.

A useful working definition of sociometry is that it is a methodology for tracking the energy vectors of interpersonal relationships in a group.   It shows the patterns of how individuals associate with each other when acting as a group toward a specified end or goal.

Moreno himself defined sociometry as “the mathematical study of psychological properties of populations, the experimental technique of and the results obtained by application of quantitative methods”.

The purpose of sociometry is to facilitate group task effectiveness and satisfaction of participants by bringing about greater degrees of mutuality amongst people and greater authenticity in relationships.

Sociometry enables us to intervene in organizational systems with both formal and informal research data, and, with those involved, to identify interventions that release the creativity, leadership and innovation residing within informal networks, giving greater satisfaction to group members and better results.

For sociometric interventions to be successful, participants are asked to account for the choices they make in their interactions, to better understand the motivation for each choice and the underlying feelings of attraction and repulsion (choosing and not choosing). Because these choices can be made visible, they are measurable and observable, enabling group members to recognise the structures their combined choice-making creates. Individuals and group members can then evaluate those structures and make any changes they wish.

Naturally, revealing and hearing personal motivations and reasons for choosing or not choosing is uncomfortable for some. Mostly this is offset by the value of change and the refreshing of relationships. Many people are relieved to hear the actual reasons for being chosen or, especially, not chosen, in place of the reasons they may have imagined. When these processes are facilitated respectfully, group members gain a great deal of satisfaction from the shared information, and creativity and spontaneity are released.

The Underlying Philosophy

Moreno’s sociometric principles and approach are set forth in the editions of Who Shall Survive? Foundations of Sociometry, Group Psychotherapy and Sociodrama (1934, 1953, 1978). He states that he “suffered” from an idée fixe that each person inherits a primordial nature which is immortal and sacred. It is from this nature that arises the capacity for creativity, a creativity which must be directed toward preserving life, that all may survive. To organize a universe of varied cultures, beliefs, and ways of interacting for this supreme task requires a system of sufficient complexity to investigate existing interrelating, and sufficient heart to motivate persons to value one another, actively.

Sociometric methods result in heightened consciousness; perceptions are identified, corrected and eventually sharpened. Group members may then make informed choices with an awareness of collective choice-making and of the role their individual choices play in the group as a whole.

According to Tian Dayton, a trainer, educator and practitioner of psychodrama, “feeling chosen, unchosen, rejected, invisible, isolated or having star status are issues that emerge naturally in groups and throughout sociometric investigation.  As such, sociometry offers a way to study groups in their concrete form.”

Sociometry is useful for examining and measuring the choices made by the group as well as by the individual, and it can be viewed from the perspective of either. When using sociometry in a group, there are four basic positions that individuals occupy:

  • Positive star
  • Rejection star
  • Isolate
  • Star of incongruity

All of these positions have benefits and liabilities. Individuals who are highly selected, that is, who receive the most choices, are called “sociometric stars.” Because choices can be either positive or negative, so can the sociometric title: a positive sociometric star receives the most positive choices, while the sociometric rejection star is the individual receiving the greatest number of negative choices.
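
As a concrete illustration, identifying these positions amounts to tallying who is chosen and who is rejected. The following is a minimal sketch in Python; the group members and their choices are hypothetical.

```python
from collections import Counter

# Each person lists whom they choose (positive) and whom they reject (negative).
positive = {
    "Ann":  ["Ben", "Cara"],
    "Ben":  ["Ann", "Cara"],
    "Cara": ["Ann"],
    "Dev":  ["Cara"],
}
negative = {
    "Ann":  ["Dev"],
    "Ben":  ["Dev"],
    "Cara": ["Dev"],
    "Dev":  ["Ben"],
}

def tally(choices):
    """Count how many times each person is chosen."""
    return Counter(name for picks in choices.values() for name in picks)

pos_counts = tally(positive)
neg_counts = tally(negative)

# The positive star receives the most positive choices,
# the rejection star the most negative choices.
positive_star = pos_counts.most_common(1)[0][0]
rejection_star = neg_counts.most_common(1)[0][0]

# An isolate is neither chosen nor rejected by anyone.
isolates = [p for p in positive if p not in pos_counts and p not in neg_counts]

print(positive_star)   # Cara
print(rejection_star)  # Dev
```

In a real sociometric test the two tallies would come from the group's answers to a specific criterion question rather than from fixed lists.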

Branches of sociometry

Sociometry has two main branches:

Research sociometry – Research sociometry is action research with groups exploring the socio-emotional networks of relationships using specified criteria, e.g. Who in this group do you want to sit beside you at work? Who in the group do you go to for advice on a work problem? Who in the group do you see providing satisfying leadership in the pending project? Sometimes called network explorations, research sociometry is concerned with relational patterns in small (individual and small group) and larger populations, such as organizations and neighborhoods.

Applied sociometry – Applied sociometrists utilize a range of methods to assist people and groups to review, expand and develop their existing psycho-social networks of relationships. Both branches of sociometry exist to produce, through their application, greater spontaneity and creativity in both individuals and groups.

Applications of Sociometry

Sociometric methods have been applied in:

  • Business and industry, particularly in organization development
  • Children’s camps
  • Congregation revitalization
  • Counselling psychotherapy patients
  • Education, in classrooms and the training of teachers
  • Family therapy
  • Military services
  • Political campaigns
  • Town planning and community building

Basis of Sociometric Methods

Group Building

Sociometry also includes a large number of exercises and activities designed to enhance belonging, cooperation, cohesion, openness and access to roles. Every structured warm up activity is a sociometric event. Each time a leader asks group members to pick a partner, a sociometric event is taking place. The study of sociometry gives attention to the design and underlying principles of those activities.

The Social Atom

The student of sociometry becomes both participant and observer of his/her own life, exploring on paper and in action that nucleus of persons to whom he/she is connected. What is observed and measured is the nearness and distance which exists (or is desired) and accounting for inclusion in or exclusion from one’s circle. The group explored may be a public group or a private circle of friends.

The Sociometric Test

A group explores the collective impact of their choices upon one another and upon the whole. A criterion on which to base choices is selected, and group members identify on paper or in action the range of choices (to choose, to not choose, to remain neutral) and the degree of positive and negative feeling underlying their choices. At times, group members may make perceptual guesses about the choices others may have made for him or her. Following disclosure in pairs, the results are depicted in a sociogram, which is drawn or enacted in ways to highlight several factors: nearness and distance, level of choice (highly overchosen to highly underchosen), level of reciprocity, and the existence of subgroups. The group then discusses the results and ways to enhance its construction in order to sustain its purposes and goals.
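
One of these factors, the level of reciprocity, can be computed directly from the choice data: a choice is reciprocated when both people choose each other. A minimal sketch in Python, with hypothetical names and choices:

```python
# Each person's set of choices on some criterion.
choices = {
    "Ann":  {"Ben", "Cara"},
    "Ben":  {"Ann"},
    "Cara": {"Dev"},
    "Dev":  {"Cara"},
}

# A pair is mutual when each member chooses the other.
mutual_pairs = {
    frozenset((a, b))
    for a, picks in choices.items()
    for b in picks
    if a in choices.get(b, set())
}

# Level of reciprocity: reciprocated choices as a share of all choices made.
total_choices = sum(len(p) for p in choices.values())
reciprocity = 2 * len(mutual_pairs) / total_choices

print(sorted(tuple(sorted(p)) for p in mutual_pairs))  # [('Ann', 'Ben'), ('Cara', 'Dev')]
print(reciprocity)  # 0.8
```

Here Ann and Ben choose each other, as do Cara and Dev, so four of the five choices made are reciprocated.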

The Role Diagram

Pairs, triads and small groups may investigate their responses to one another by identifying the roles in which they interact and the feeling responses they have toward one another in the role. A list of role names is charted and a system of notation employed for use in identifying feelings, or changes in feelings.

The Encounter

Essential to the sociometrist is the capacity to facilitate exploration of the results of the investigations, principally through the reciprocal process of role reversal. Persons engaged in sociometric procedures will be unable to accurately reveal to themselves or another their true choices if they cannot rely upon skilled treatment of differences as they arise. The sociometrist facilitates conflicts, impasses, and the meeting of persons who are beginning to know one another.

Concept of Sociogram

Sociometry is a theoretical and methodological approach which seeks to analyze relations between individuals in small group situations. Sociometry is a form of network analysis. Moreno introduced the idea of a sociogram, which is a diagram representing the relationships between individuals.

The sociogram is a graphic representation which serves to reveal and analyse the relationships of a person with their family or social circle, or to visualise the relationships within the family, or of certain members of the family with their external environment such as health and education services, leisure-time activities, work, friends or place in the extended family.

One of Moreno’s innovations in sociometry was the development of the sociogram, a systematic method for graphically representing individuals as points/nodes and the relationships between them as lines/arcs. Moreno, who wrote extensively of his thinking, applications and findings, also founded a journal entitled Sociometry.
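
This points/nodes and lines/arcs representation maps naturally onto standard graph formats. The sketch below renders hypothetical choice data as Graphviz DOT text, with each person as a node and each choice as a directed arc; actually drawing the diagram would require the Graphviz tool.

```python
# Each person's choices on some criterion; Dev is an isolate.
choices = {
    "Ann":  ["Ben"],
    "Ben":  ["Ann", "Cara"],
    "Cara": ["Ann"],
    "Dev":  [],
}

lines = ["digraph sociogram {"]
for person in choices:
    lines.append(f'  "{person}";')          # every member appears as a node
for person, picks in choices.items():
    for other in picks:
        lines.append(f'  "{person}" -> "{other}";')  # each choice is an arc
lines.append("}")

dot = "\n".join(lines)
print(dot)
```

Mutual choices show up as arcs in both directions (Ann and Ben here), and isolates as nodes with no arcs at all, so the positions discussed above are visible at a glance once the graph is drawn.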

Understanding the Sociogram

An ecomap (or, in this case, a community sociogram) is a structural diagram of important relationships with or between people, groups, and organizations. Social workers use the sociogram to show the coalition of resources that seem likely to affect clients’ efforts to meet goals. Below is a sociogram for a town. The interaction matrix is a textual description of these data.

Objectives of the Sociogram

This sociological approach is used in several ways and in pursuing various objectives. The first is to demonstrate the group dynamic surrounding the individual observed, whether it be their immediate family, or others in their surroundings such as their belonging group, their reference group, their functional group or their affinity group.

A human being is born within a family, with parents who protect him, a family circle and a specific environment. This is known as the belonging group, with which, in one way or another, the person maintains a lifelong tie.

A human group, whatever its nature, always presents a particular character, with specific values, distinct cultural tastes, a dynamic and an ideology which make it unique. The persons, groups or organisations which serve as role models for the person’s moral, religious or political conduct is the reference group.

The sociogram can target either one or the other of these groups. The functional group has as its primary objective a professional function, such as worker, nurse, teacher or student; whereas an affinity group is made up of persons who associate by choice.

Another objective of the sociogram is to reveal in a concrete and specific manner the type of relationship which a person has with each family member and with the different groups to which they belong. This enables us to understand the strengths and weaknesses of their support network, which is very important for their care, or if we want to know what the relationship is between a worker and his supervisor or work colleagues.

A sociogram is a charting of the inter-relationships within a group. Its purpose is to discover group structure: i.e., the basic “network” of friendship patterns and sub-group organization. The relations of any one child to the group as a whole are another type of information which can be derived from a sociogram. A sociogram’s value to a teacher is in its potential for developing greater understanding of group behavior so that he/she may operate more wisely in group management and curriculum development.

A sociogram is an important tool for teachers: it is the chart used to actually apply sociometry in the classroom. Its value to the teacher lies in its potential for developing greater understanding of group behaviour so that he may operate more wisely in group management and curriculum development. This shows the positive nature of sociometry, and its use is important for understanding the relationships within classrooms. Once these relationships are understood by the teacher, group work can be better facilitated for greater learning to occur.

Applications to the Classroom

When working with students who tend to socially withdraw or isolate themselves, a sociometric activity can be conducted with the class to determine the peer(s) who would most like to interact with the targeted students. These results can then be used when assigning groups and arranging seating. The use of sociometry has since expanded into other fields such as psychology, sociology and anthropology, and it is now used for education and classroom purposes. The use of sociometry in the classroom is to find the best relationships between students and to see how children see themselves within the social construct of education.

“Every teacher knows that the group of children with which he works is more than an aggregation of individuals. He knows that the group has form and structure; that there are patterns of sub-groups, cliques, and friendships. Some individuals are more accepted by the group than others. Some are more rejected. These factors play an important role in determining how the group will react to learning situations and to various types of group management employed by the teacher” (5).

This quote is a very nice summary of the necessity of sociometry in the classroom. It also highlights what sociometrists are trying to accomplish by studying groups in social settings. They are trying to see how people get along in groups and what this means in the context of learning and developing within the classroom.

For group work, sociometry can be a powerful tool for reducing conflict and improving communication because it allows the group to see itself objectively and to analyze its own dynamics. It is also a powerful tool for assessing dynamics and development in groups devoted to therapy or training.

Sociometric Criteria for Making Choices

Choices are always made on some basis or criterion.  The criterion may be subjective, such as an intuitive feeling of liking or disliking a person on first impression.  The criterion may be more objective and conscious, such as knowing that a person does or does not have certain skills needed for the group task.

Criterion Selection

The selection of the appropriate criterion makes or breaks the sociometric intervention.   As in all data-collection in the social sciences, the answers you get depend on the questions you ask.  Any question will elicit information but unless the right question is asked, the information may be confusing or distracting or irrelevant to the intervention’s objective.

A good criterion should present a meaningful choice to the person in as simple a format as possible. Other rules are the rule of adequate motivation: “Every participant should feel about the experiment that it is in his (or her) own cause . . . that it is an opportunity for him (or her) to become an active agent in matters concerning his (or her) life situation,” and the rule of “gradual” inclusion of all extraneous criteria. Moreno speaks here of “the slow dialectic process of the sociometric experiment.”

The criterion must be like a surgeon’s knife: most effective when it cleanly isolates the material of interest.  In responding to the question, each person will choose based on an individual interpretation of the criterion.  These interpretations, or sub-criteria, for this particular question could include: do I want a person who works hard, who is a power-broker, who is amiable, a minority, etc.  A clear statement of the criterion will tend to reduce the number of interpretations and will therefore increase the reliability of the data.

Principles of Criterion Selection

The criterion should be as simply stated and as straightforward as possible.

The respondents should have some actual experience in reference to the criterion, whether ex post facto or present (in Moreno’s language, they are still “warmed up” to it); otherwise the questions will not arouse any significant response.

The criterion should be specific rather than general or vague.  Vaguely defined criteria evoke vague responses.

When possible, the criterion should be actual rather than hypothetical.

A criterion is more powerful if it is one that has a potential for being acted upon.  For example, for incoming college freshmen the question “Whom would you choose as a roommate for the year?” has more potential of being acted upon than the question “Whom do you trust?”

Moreno points out that the ideal criterion is one that helps further the life-goal of the subject.  “If the test procedure is identical with a life-goal of the subject he can never feel himself to have been victimized or abused.  Yet the same series of acts performed of the subject’s own volition may be a ‘test’ in the mind of the tester”  (Moreno, p. 105).  Helping a college freshman select an appropriate roommate is an example of a sociometric test that is in accord with the life-goal of the subject.

“It is easy to gain the cooperation of the people tested as soon as they come to think of the test as an instrument to bring their wills to a wider realization, that it is not only an instrument for exploring the status of a population, but primarily an instrument to bring the population to a collective self-expression in respect to the fundamental activities in which it is or is about to be involved.” (Moreno, 1953, pp. 680-681).

As a general rule questions should be future oriented, imply how the results are to be used, and specify the boundaries of the group (Hale, 1985).  And last, but not least, the criteria should be designed to keep the level of risk for the group appropriate to the group’s cohesion and stage of development.

Sociometric assessment techniques/ Methods

There are a variety of what can be referred to as classic sociometric assessment techniques derived from the work of the 1930s, including peer nomination, peer rankings, and sociometric rankings. In the peer nomination technique, children in a social group or school classroom anonymously identify social preferences for their classmates. For example, children may be asked to provide a list of the three classmates with whom they would most like to play and the three with whom they would least like to play. Another peer nomination technique (see Figure 1) is to provide a list of the names of the children in a classroom along with social acceptance items (e.g., “Who do you like to play with?” “Who is most likely to be alone during recess?” “Who gets into trouble the most?”). The children are asked to identify perhaps one to three classmates who they perceive best fit the item description.

An alternative peer nomination method for early readers is to use photographs with an adult reading the items aloud in either an individual or classroom setting while the children provide a nomination for a child, perhaps by assigning a smiling or frowning face to the photograph that applies. Another variation of the peer nomination method is the class play. In this procedure children cast their peers in positive and negative roles in an imaginary play. The class play has the potential advantage of being more acceptable in school settings because the positive and negative role assignments may be perceived as a more discreet method for identifying children’s social standing. For each of the methods described, the nominations may be summed for each child and the results are used to identify those children who are perceived as most socially positive or negative by their peers.

Two other sociometric techniques can be described as peer ratings and sociometric rankings. Peer ratings are conducted by providing a list of children’s names in the social group or classroom along with a rating for social acceptance items such as “The most fun to play with,” “The least fun to play with,” and “Has the most friends.” The rating methods that are used may vary, typically ranging from three- to five-point Likert-type responses (e.g., Agree, Neutral, Disagree). In contrast to peer nominations and ratings, sociometric rankings are completed by an adult, most often the classroom teacher who has had the opportunity to observe the children in multiple social settings such as the classroom, playground, and cafeteria. In this method, teachers rank the children on social dimensions similar to those provided by peers.
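
Summarizing a peer-rating item usually means averaging each child's received scores across raters. A minimal sketch in Python, with hypothetical raters, ratees and scores on a three-point item (1 = Disagree, 2 = Neutral, 3 = Agree):

```python
from collections import defaultdict

# rater -> {ratee: score} on the item "The most fun to play with".
ratings = {
    "Ann":  {"Ben": 3, "Cara": 2, "Dev": 1},
    "Ben":  {"Ann": 3, "Cara": 3, "Dev": 2},
    "Cara": {"Ann": 2, "Ben": 3, "Dev": 1},
}

# Collect every score each child received.
received = defaultdict(list)
for rater, given in ratings.items():
    for ratee, score in given.items():
        received[ratee].append(score)

# Average received score per child, e.g. Ben: (3 + 3) / 2 = 3.0.
averages = {ratee: sum(s) / len(s) for ratee, s in received.items()}
print(averages)
```

Because every child is rated by every rater rather than only nominated by a few, averages of this kind tend to be more stable than nomination counts, which is one reason peer ratings are considered more reliable.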

Each of these sociometric assessment methods has strengths and limitations. Researchers have found that each method appears to be valid for identifying children’s social standing. Peer ratings and adult rankings appear to provide the most reliable or stable measurements and, as such, may be more useful than the peer nomination method. A major issue that arises with each of these methods is the concept of social validity, which refers to the acceptance, usefulness, and potential harm of an assessment procedure. The applications of sociometric assessment methods have resulted in controversy and ethical concerns regarding their use. These concerns center on the use of negative peer nominations and the possibility that children will compare responses which may result in negative social and emotional consequences for children who are not positively perceived by their peers. These concerns contributed to the decline in the acceptance and use of sociometric assessment methods, particularly in school settings. However, researchers have found no strong evidence that negative consequences occur for either the children who are rating or those being rated; therefore, sociometric assessment continues to be used as a research tool for understanding children’s social relationships.

Related Assessment Methods

Although the term sociometrics has been most often applied to the assessment methods described above, in a broader context the term can be applied to related assessment measures of social functioning. These methods tend to focus on children’s social competencies and skills rather than measuring only social standing or peer acceptance. Because these methods are more often used in practical applications in school settings, they are briefly described here.

Social Behavior Rating Scales. Social behavior rating scales represent one of the most frequently used measures of social competence. These rating scales are designed for gathering data on the frequency of occurrence of specific skills or behaviors. Some rating scales focus on social problem behaviors and others are designed specifically to assess children’s social skills. For example, a social skills rating scale may contain items such as “Appropriately invites friends to play” or “Controls temper in conflicts with adults” which are rated on a frequency scale (e.g., Never, Sometimes, Always). Depending on the measure, ratings can be gathered from parents or parent surrogates, teachers, and when appropriate from the children themselves. Rating scales in essence provide summary observations of a child’s social behavior. Gathering data from these multiple sources can facilitate understanding different perspectives regarding a child’s social skills in home and school settings. Well designed social skills rating scales have been found to be reliable and valid measures.

Observation Methods. Observation methods are used to gather information about a child’s social skills in natural settings, such as in the classroom, in the cafeteria, and on the playground. Observation methods can be highly structured wherein defined behaviors are measured for frequency of occurrence or measured for occurrence during specified time periods or intervals. For example, a child’s play behavior may be observed during recess by a school psychologist who records every 30 seconds whether the child was playing alone or with others. Other observation methods are less structured and rely on a narrative approach for describing a child’s social interactions.
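The 30-second recess example above can be sketched as a simple interval record; the observations below are hypothetical.

```python
# Illustrative sketch of 30-second interval recording during recess.
# Each entry records whether the child was playing alone or with others.

intervals = ["alone", "with_others", "with_others", "alone",
             "with_others", "with_others", "with_others", "alone"]

def proportion_social(observations):
    """Fraction of observed intervals spent playing with others."""
    return observations.count("with_others") / len(observations)

print(proportion_social(intervals))  # fraction of intervals with peers
```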

Observation methods often include focus on the environmental variables that may increase or decrease a child’s social skills, such as the reactions of peers and adults to a child’s attempts at initiating conversation. Observations also can be conducted in what is known as analogue assessment, which involves having a child role-play social scenarios and observing the child’s performance. Whereas rating scales provide summary measures that rely on some level of recall, observations have the advantage of directly sampling a child’s behavior in actual social contexts or settings, thereby increasing the validity of the assessment.

The limitations of observations are that multiple observers are required to ensure reliable assessment (interobserver agreement) and observations are more time intensive. Thus in applied settings they may provide limited information due to time constraints.

Interview Methods. Interview methods are used to gather information about a child’s social skill strengths and weaknesses, and to aid in the identification of specific skill deficits for intervention. Interviews can be used separately with children, parents or parent surrogates, and teachers, or conjointly with multiple sources. Interviews can be structured, with a focus on the identification and treatment of specific social skills, or interviews can be less structured, with a greater focus on feelings and perceptions about a child’s social skills. As with rating scales, interview data can be viewed as summary recall information which should be validated with direct observation.

The assessment methods described often are combined in a comprehensive social skills assessment that may include rating scales, observations, and interviews. Using multiple methods of assessment is considered best practice because the use of more than one assessment method increases the likelihood that the behaviors which are targeted for classification or intervention are valid, and that specific social skills strengths and deficits are clearly defined. It is also important to use multiple assessment methods to monitor a child’s progress and to assess the effectiveness of an intervention.

Implications of Sociometric Assessment for Educational Practices

In educational practice, sociometric assessment most often is used to determine eligibility for special education and for intervention for adaptive behaviors or socio-emotional problems. Children identified with special education needs, such as learning problems, mental retardation, attention deficit disorders, and autism spectrum disorders, including Asperger’s syndrome, may benefit from assessment and intervention toward enhancing their social skills. In the general education population, children may benefit who are shy, rejected, or engage in bullying or aggressive behaviors or who simply have limited social skills. Most of the classic sociometric assessment methods are not used in educational practice, partly due to issues with acceptability. Furthermore, although these methods have been found to be useful in research, they may not be viewed as being useful in school settings because they do not lead to specific classification for special education nor do they provide specific data that can directly assist in the intervention process. Related sociometric assessment measures such as rating scales often are used because these methods provide more specific information that can be linked to classification and intervention.

One classic sociometric assessment method that has been shown to be effective in educational practice is sociometric rankings. In this procedure teachers rank the children in their classroom who the teacher views as having social behavior problems, sometimes in relation to internalizing and externalizing problem behaviors. (Internalizing behaviors refer to problems such as depression, anxiety, and social withdrawal; externalizing behaviors refer to problems such as aggression, conduct problems, and hyperactivity.) The use of teacher rankings serves as an initial screening device for identifying children who may need additional assessment and intervention. Once identified, the children are screened further with a rating scale or related method to determine the extent of their social difficulties. Those children who are found to have problems are then referred for more assessment intended to specify their problems and provide an intervention, such as social skills training. Researchers have found this method of assessment, known as a multiple gating procedure, to be acceptable and effective in applied settings.
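The multiple gating procedure described above can be sketched in a few lines. The thresholds, scores, and names below are hypothetical assumptions, not values from any published screening instrument.

```python
# Hedged sketch of a multiple gating screening procedure.
# Ranks, cutoffs, and scores are hypothetical; rank 1 is assumed
# to mean "most problematic" in the teacher's ranking.

def multiple_gating(classroom, top_n=3, rating_cutoff=60):
    # Gate 1: teacher ranks the class; the top-ranked children go on.
    ranked = sorted(classroom, key=lambda c: c["teacher_rank"])
    gate1 = ranked[:top_n]
    # Gate 2: those children get a rating scale; high problem scores pass.
    gate2 = [c for c in gate1 if c["rating_scale_score"] >= rating_cutoff]
    # Children passing both gates are referred for full assessment.
    return [c["name"] for c in gate2]

classroom = [
    {"name": "A", "teacher_rank": 1, "rating_scale_score": 72},
    {"name": "B", "teacher_rank": 2, "rating_scale_score": 40},
    {"name": "C", "teacher_rank": 3, "rating_scale_score": 65},
    {"name": "D", "teacher_rank": 4, "rating_scale_score": 80},
]

print(multiple_gating(classroom))  # ['A', 'C']
```

Note how each gate narrows the pool: D scores high on the rating scale but was not ranked among the top three, so the second gate never sees that child.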

Assessing and understanding children’s and adolescents’ peer relations is important in educational settings for several reasons. From a developmental standpoint, it is important to understand how children develop social skills as they mature. Researchers have found that sociometric assessment can be useful in identifying children’s social standing and predicting positive or negative social outcomes for children. The establishment of friendships and positive social interactions is important for children’s social development and for interacting in the social world, including the school setting. Children with poor peer and adult relationships often experience negative social and emotional consequences that can continue through adulthood. These negative consequences can include lower academic achievement, higher rates of school dropout, depression, anxiety, low self-esteem, poor self-concept, social withdrawal, fewer positive employment opportunities, and anti-social behaviors such as aggression and criminality. Researchers have estimated that at least 10 percent of children, or one in ten, experience consistent negative peer relationships. A large number of children with inadequate social relationships may therefore be at risk for developing behavioral and emotional difficulties. Children with poor or limited social skills also are at risk of becoming victims of bullying and other aggressive behaviors. Children with disabilities often have social skills deficits and negative peer perceptions that put them at heightened risk.

Given these potentially negative outcomes, social skills assessment is important in educational settings. In research, the identification of the development of social standing and social skills can facilitate understanding the behaviors of socially successful and unsuccessful children. In research settings, both classic sociometric assessment and social skills assessment methods are used to achieve better understanding of social types and behaviors. These behaviors can in turn be used to understand children’s and adolescents’ social skill deficits and can aid in the design and study of social skills assessments and interventions.

Sociometry Test: An Example

The basic technique in sociometry is the “sociometric test.” This is a test in which each member of a group is asked to choose, from all other members, those with whom he prefers to associate in a specific situation. The situation must be a real one to the group under study, e.g., ‘group study’, ‘play’, or ‘classroom seating’ for students of a public school.

The typical process for a sociometric intervention follows these basic steps:

(1) Identify the group to be studied

(2) Develop the criterion,

(3) Establish rapport / warm-up,

(4) Gather sociometric data,

(5) Analyze and interpret data,

(6) Feed back data, either: (a) to individuals, or (b) in a group setting,

(7) Develop and implement action plans,

A specific number of choices to be allowed, say two or three, is determined with reference to the size of the group, and different levels of preference are designated for each choice.

Suppose we desire to find out the likes and dislikes of persons in a work group consisting of 8 persons. Each person is asked to select, in order of preference, 3 persons with whom he would like to work on a group assignment. The levels of choice are designated as follows: the first choice by the number 1, the second by 2, and the third by 3.
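The tallying of such ranked choices can be sketched as follows. The choices shown are hypothetical, and the 3/2/1 weighting used to summarize scores is an illustrative assumption (the test itself only records the preference levels 1, 2, and 3).

```python
# Sketch of tallying ranked sociometric choices for a work group.
# The choices are hypothetical; the 3/2/1 weighting is an
# illustrative convention, not part of the test itself.

choices = {  # chooser -> [1st, 2nd, 3rd choice]
    "P1": ["P2", "P3", "P4"],
    "P2": ["P1", "P3", "P5"],
    "P3": ["P2", "P1", "P6"],
    "P4": ["P2", "P5", "P1"],
}

WEIGHTS = [3, 2, 1]  # first choice counts most

def choice_scores(choices):
    """Total weighted score received by each person."""
    scores = {}
    for chooser, ranked in choices.items():
        for level, chosen in enumerate(ranked):
            scores[chosen] = scores.get(chosen, 0) + WEIGHTS[level]
    return scores

print(choice_scores(choices))
```

With these hypothetical data, P2 receives three first choices and ends up with the highest score, making P2 the most-preferred group member under this criterion.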

For example, suppose you are with a group of 10 kids. Everyone is asked to choose one person to sit next to: “Show your choice by placing your right hand on the shoulder of the person you choose. Move about the room as you need to make your choice. There are only two requirements: (1) you may choose only one person and (2) you must choose someone.” Typically the kids will make their choices after only a little hesitation.

This exercise may be repeated several times in the space of just a few minutes, using different criteria each time. The exercise graphically illustrates not only the social reality of choice-making, but also the fact that different criteria evoke different patterns of choices.

Regardless of the criterion, the person who receives the most hands on his or her shoulder is what is known as the sociometric star for that specific criterion. Other sociometric relationships which may be observed are mutuals, where two people choose each other; chains, where person A chooses person B, who chooses person C, who chooses person D, and so on; and gaps or cleavages, where clusters of people have chosen each other but no one in any cluster has chosen anyone in any other cluster.

This “hands-on” exercise can be very helpful for teaching a group about sociometry and about the reality of the informal organization.  While the group is in each pattern, the consultant can ask the group to describe the pattern, how the pattern reflects “real life”, and what the group would need to do to close up any cleavages. Participants learn very quickly and concretely about the informal organization underlying their formal organization.  As one participant said, “It shows how we really feel, but we don’t say it very often.”

Constructing a sociomatrix for a small group like this one is a simple task, but when the number of people in the group is more than about five or six, the clerical work and calculations become quite tedious and open to error. With a large matrix, the identification of mutuals begins to resemble a migraine headache. Fortunately there are computers. Software exists to automate all the tedious calculations involved in creating a sociomatrix of up to 60 people. The software produces not only the sociomatrix itself but also several useful group and individual reports.
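As a rough sketch of what such software automates, the snippet below builds a small sociomatrix and reads off the star and the mutuals. The names and choices are hypothetical.

```python
# Minimal sketch of sociomatrix bookkeeping: a 0/1 matrix of who
# chose whom, from which the star (most chosen) and the mutual
# pairs fall out directly. Names and choices are hypothetical.

people = ["Ann", "Ben", "Cal", "Dee"]
chose = {  # chooser -> set of chosen
    "Ann": {"Ben"},
    "Ben": {"Ann", "Cal"},
    "Cal": {"Ben"},
    "Dee": {"Ben"},
}

# Sociomatrix: rows = choosers, columns = chosen (1 = chose)
matrix = [[1 if q in chose[p] else 0 for q in people] for p in people]

# Star: the person receiving the most choices (column sums)
received = {q: sum(row[j] for row in matrix) for j, q in enumerate(people)}
star = max(received, key=received.get)

# Mutuals: pairs who chose each other
mutuals = [(p, q) for i, p in enumerate(people) for q in people[i + 1:]
           if q in chose[p] and p in chose[q]]

print(star)     # Ben, chosen by three others
print(mutuals)  # [('Ann', 'Ben'), ('Ben', 'Cal')]
```

Here Dee, who chose Ben but was chosen by no one, would show up as an isolate in the full analysis; Ann–Ben–Cal form a chain through Ben, the star.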

Validity of Sociometry

Does sociometry really measure something useful? Jane Mouton, Robert Blake and Benjamin Fruchter reviewed the early applications of sociometry and concluded that the number of sociometric choices does tend to predict such performance criteria as productivity, combat effectiveness, training ability, and leadership. An inverse relationship also holds: the number of sociometric choices received is negatively correlated with undesirable aspects of behavior such as accident-proneness and frequency of disciplinary charges. The more frequently you are chosen, the less likely you are to exhibit the undesirable behavior.

Limitations of sociometry

To quote Moreno: “there is a deep discrepancy between the official and the secret behavior of members”. Moreno advocates that before any “social program” can be proposed, the sociometrist has to “take into account the actual constitution of the group.”

Sociometry is rarely used in the classroom because it usually cannot be effectively reproduced by teachers in their classrooms. However, studies of aggression and school violence show how and why sociograms should be used.

Research has also pointed out a tendency to use esoteric terms which are intelligible only to the initiated and which create barriers to communication.

Sociometric assessment can be defined as the measurement of interpersonal relationships in a social group. Sociometric measurement or assessment methods provide information about an individual’s social competence and standing within a peer group. School-based sociometric assessment often focuses on a child’s relationships with regard to social popularity, peer acceptance, peer rejection, and reputation.

Some sociometric assessment methods derive information on social relationships by assessing children’s positive and negative social perceptions of one another, whereas other methods involve adult (teacher, parent) and self perceptions of children’s social competencies or standing. Sociometric assessment methods were introduced in the 1930s and advanced in the journal Sociometry. In the 1950s, several books were published on the topic and sociometric measurements often were part of research and school-based assessments of social relationships. The use of classic sociometric procedures declined in the following decades, due to the advancement of social behavior rating scales and ethical concerns regarding the use of peer nomination methods with children.


Fundamental Concepts of Research Methodology

Dr. V.K.Maheshwari, M.A(Socio, Phil) B.Sc. M. Ed, Ph.D

Former Principal, K.L.D.A.V.(P.G) College, Roorkee, India

Some people consider research as a movement, a movement from the known to the unknown. It is actually a voyage of discovery. In fact, research is an art of scientific investigation. The Advanced Learner’s Dictionary of Current English lays down the meaning of research as “a careful investigation or inquiry specially through search for new facts in any branch of knowledge.” Research in common parlance refers to a search for knowledge. One can also define research as a scientific and systematic search for pertinent information on a specific topic. Redman and Mory define research as a “systematized effort to gain new knowledge.”

In the broadest sense of the word, the definition of research includes any gathering of data, information and facts for the advancement of knowledge.

According to Creswell – “Research is a process of steps used to collect and analyze information to increase our understanding of a topic or issue”. It consists of three steps:

  • Pose a question,
  • collect data to answer the question,
  • present an answer to the question.

Aims and Objectives of Research

The main aim of research is to find out the truth which is hidden and which has not been discovered as yet. The purpose of research is to discover answers to questions through the application of scientific procedures.   Research objectives can be placed in the following broad groupings:

  • To gain familiarity with a phenomenon or to achieve new insights into it (studies with this object in view are termed exploratory or formulative research studies);
  • To portray accurately the characteristics of a particular individual, situation or group (studies with this object in view are known as descriptive research studies);
  • To determine the frequency with which something occurs or with which it is associated with something else (studies with this object in view are known as diagnostic research studies);
  • To test a hypothesis of a causal relationship between variables (such studies are known as hypothesis-testing research studies).

Characteristics of Good Research

  • Good research is systematic.
  • Good research is logical.
  • Good research is empirical.
  • Good research is replicable.

Educational Research

Educational research refers to research conducted by educationists which follows a systematic plan. Educational research methods can generally vary along a quantitative/qualitative dimension. Quantitative designs approach educational phenomena through quantifiable evidence, and often rely on statistical analysis of many cases (or across intentionally designed treatments in an experiment) to create valid and reliable general claims. Qualitative designs emphasize understanding of educational phenomena through direct observation, communication with participants, or analysis of texts, and may stress contextual and subjective accuracy over generality. More specifically, educational research examines a society’s attitudes, assumptions, beliefs, trends, stratifications and rules as they bear on education. The scope of educational research can be small or large, ranging from a single individual to an entire race or country. Educational research determines the relationship between one or more variables.

Educational research may be defined as a scientific undertaking which, by means of logical and systematized techniques, aims to discover new facts or verify and test old facts, analyze their sequence, interrelationships and causal explanations within an appropriate theoretical frame of reference, and develop new scientific tools, concepts and theories which would facilitate reliable and valid study of human behavior. A researcher’s primary goal, distant and immediate, is to explore and gain an understanding of human behavior and thereby gain greater control over it.

Objectives of Educational Research

Educational research is a scientific approach to adding to the knowledge about education and educational phenomena. Knowledge, to be meaningful, should have a definite purpose and direction. The growth of knowledge is closely linked to the methods and approaches used in research investigation. Hence educational research must be guided by certain laid-down objectives, enumerated below:

Development of Knowledge:

Educational research helps us to obtain and add to the knowledge of educational phenomena. This is one of the most important objectives of educational research.

Scientific Study of Social Life:

Social research is an attempt to acquire knowledge about social phenomena. Man being part of society, social research studies human beings as individuals and their behavior, collects data about various aspects of the social life of man, and formulates laws in this regard.

Welfare of Humanity:

The ultimate objective of educational study is always to enhance the welfare of humanity. No scientific research is undertaken merely for the sake of study. The welfare of humanity is the most common objective in education.

Classification of facts:

Educational research aims to classify facts. The classification of facts plays an important role in any scientific research.

The ultimate objective of many research undertakings is to make it possible to modify the behavior of particular types of individuals under specified conditions. In educational research we generally study educational phenomena, events and the factors that govern and guide them.

Criteria for Selecting Research Problem

The following points may be observed by a researcher in selecting a research problem or a subject for research:

  • Controversial subjects should not become the choice of an average researcher.
  • A subject which is overdone should not normally be chosen, for it will be a difficult task to throw any new light on such a case.
  • The subject selected for research should be familiar and feasible so that the related research material or sources of research are within one’s reach.
  • Too narrow or too vague problems should be avoided.

Characteristics of Good Research Title

  • Avoid ambiguous words
  • Avoid dual-meaning words
  • Catch the reader’s attention and interest
  • Describe the content of the paper
  • Be simple, sharp and short

Abstract

An abstract  is a brief summary of a research article, thesis, review, conference  proceeding or any in-depth analysis of a particular subject or discipline, and is often used to help the reader quickly ascertain the paper’s purpose.

Structure of Abstract

An academic abstract typically outlines four elements relevant to the completed work:

  • The research focus (i.e. statement of the problem(s)/research issue(s) addressed);
  • The research methods used (Method/Nature/Sampling/Population/Study Area.);
  • The results/findings of the research;
  • The main conclusions and recommendations

Background of the Study

Background research refers to accessing the collection of previously published and unpublished information about a site, region, or particular topic of interest. It is the first step of any good research investigation, and of the work of all writers of any kind of research paper.

Statement of the Problem

Defining a research problem properly and clearly is a crucial part of a research study and must in no case be accomplished hurriedly. The problem to be investigated must be defined unambiguously, for that will help to discriminate relevant data from the irrelevant.

A proper definition of the research problem will enable the researcher to stay on track, whereas an ill-defined problem may create hurdles. However, in practice this is frequently overlooked, which causes a lot of problems later on. Hence, the research problem should be defined in a systematic manner, giving due weightage to all related points. The technique for this purpose involves undertaking the following steps, generally one after the other:

  • Statement of the problem in a general way;
  • Understanding the nature of the problem;
  • Surveying the available literature;
  • Developing the ideas through discussions; and
  • Rephrasing the research problem into a working proposition.

Literature Review

Review of existing literature related to the research is an important part of any research paper, and essential to put the research work in overall perspective, connect it with earlier research work and build upon the collective intelligence and wisdom already accumulated by earlier researchers. It significantly enhances the value of any research paper.

A literature review surveys scholarly articles, books, dissertations, conference proceedings and other resources which are relevant to a particular issue, area of research, or theory and provides context for a dissertation by identifying past research. Research tells a story and the existing literature helps us identify where we are in the story currently. It is up to those writing a dissertation to continue that story with new research and new perspectives but they must first be familiar with the story before they can move forward.


Characteristics of Good Literature Review

A good literature review demonstrates the researcher’s familiarity with the body of knowledge by providing a good synthesis of what is and is not known about the subject in question, while also identifying areas of controversy and debate, or limitations in the literature, sharing different perspectives. A good literature review also:

  • Identifies the most important authors engaged in similar work.
  • Indicates the theoretical framework that the researcher is working with
  • Offers an explanation of how the researcher can contribute toward the existing body of scholarship by pursuing their own thesis or research question
  • Organized around issues, themes, factors, or variables that are related directly to the thesis or research question.
  • Places the formation of research questions in their historical and disciplinary context

Aims and Objective of the Study

The aim of the work, i.e. the overall purpose of the study, should be clearly and concisely defined.

Aims:

  • Are broad statements of desired outcomes, or the general intentions of the research, which ‘paint a picture’ of your research project
  • Emphasize what is to be accomplished (not how it is to be accomplished)
  • Address the long-term project outcomes, i.e. they should reflect the aspirations and expectations of the research topic.
Generally, a project should have no more than two or three aim statements, while it may include a number of objectives consistent with them. Once aims have been established, the next task is to formulate the objectives.

Objectives

Objectives are subsidiary to aims and are the steps you are going to take to answer your research questions, or a specific list of tasks needed to accomplish the goals of the project. Objectives:

  • Address the more immediate project outcomes
  • Emphasize how aims are to be accomplished
  • Make accurate use of concepts
  • Must be highly focused and feasible
  • Must be sensible and precisely described
  • Should read as ‘individual’ statements to convey your intentions
Aims and Objectives should:

  • Access your chosen subjects, respondents, units, goods or services.
  • Approach the literature and theoretical issues related to your project.
  • Be concise and brief.
  • Be interrelated; the aim is what you want to achieve, and the objective describes how you are going to achieve that aim.
  • Be realistic about what you can accomplish in the duration of the project and the other commitments you have
  • Deal with ethical and practical problems in your research.
  • Develop a sampling frame and strategy or a rationale for their selection.
  • Develop a strategy and design for data collection and analysis.

Aims and Objectives should not:

  • Be too vague, ambitious or broad in scope.
  • Contradict your methods – i.e. they should not imply methodological goals or standards of measurement, proof or generalisability of findings that the methods cannot sustain.
  • Just be a list of things related to your research topic.
  • Just repeat each other in different terms.

Objectives must always be set after having formulated a good research question. After all, they are to explain the way in which such a question is going to be answered.

Hypothesis of the Study

“A hypothesis is a logical supposition, a reasonable guess, an educated conjecture. It provides a tentative explanation for a phenomenon under investigation” (Leedy and Ormrod, 2001).

A research hypothesis is the statement created by researchers when they speculate upon the outcome of a research study or experiment. A hypothesis is a tentative conjecture explaining an observation, phenomenon, or scientific problem that can be tested by further observation, investigation, or experimentation. Hypotheses are testable explanations of a problem, phenomenon, or observation. Both quantitative and qualitative research involve formulating a hypothesis to address the research problem. Hypotheses that suggest a causal relationship involve at least one independent variable and at least one dependent variable; in other words, one variable which is presumed to affect the other.

A hypothesis is important because it guides the research. An investigator may refer to the hypothesis to direct his or her thought process toward the solution of the research problem or subproblems. The hypothesis helps an investigator to collect the right kinds of data needed for the investigation. Hypotheses are also important because they help an investigator to locate information needed to resolve the research problem or subproblems (Leedy and Ormrod, 2001)

Type of Hypothesis

Below are some of the important types of hypothesis:

1. Simple Hypothesis
2. Complex Hypothesis
3. Empirical Hypothesis
4. Null Hypothesis
5. Alternative Hypothesis
6. Logical Hypothesis
7. Statistical Hypothesis

Simple Hypothesis

A simple hypothesis is one in which there exists a relationship between two variables: one is called the independent variable, or cause, and the other the dependent variable, or effect.

Complex Hypothesis

A complex hypothesis is one in which a relationship exists among more than two variables; that is, it involves more than one dependent and/or independent variable.

Empirical Hypothesis

A working hypothesis is one which is applied to a field. During formulation it is an assumption only, but when it is put to a test it becomes an empirical or working hypothesis.

Null Hypothesis

The null hypothesis is contrary to the positive statement of a working hypothesis. According to the null hypothesis, there is no relationship between the dependent and independent variables. It is denoted by “H0”.

Alternative Hypothesis

First, many hypotheses are selected; then, from among them, the one which is most workable and most efficient is chosen. That hypothesis is introduced later on, owing to changes in the originally formulated hypothesis. It is denoted by “H1”.

Logical Hypothesis

This is the type in which the hypothesis is verified logically. J.S. Mill gave four canons for such hypotheses, e.g. agreement, disagreement, difference and residue.

Statistical Hypothesis

A hypothesis which can be verified statistically is called a statistical hypothesis. The statement may be logical or illogical, but if statistics verify it, it is a statistical hypothesis.
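As a minimal sketch of statistical verification, the snippet below computes a one-sample t statistic against the null hypothesis that a population mean equals 100. The data values are hypothetical.

```python
# Illustrative sketch of verifying a hypothesis statistically:
# a one-sample t statistic against H0 "the population mean is 100".
# The sample values are made up for illustration.
import math
import statistics

sample = [104, 99, 110, 102, 98, 107, 103, 101]
h0_mean = 100

n = len(sample)
t = (statistics.mean(sample) - h0_mean) / (statistics.stdev(sample) / math.sqrt(n))

# Compare |t| with the critical value for n-1 degrees of freedom;
# if |t| exceeds it, reject the null hypothesis H0 in favor of H1.
print(round(t, 2))  # 2.12
```

Whether H0 is rejected then depends on the chosen significance level and the t distribution with n−1 degrees of freedom.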

Characteristics of Hypothesis

  • A hypothesis should state the expected pattern, relationship or difference between two or more variables;
  • A hypothesis should be testable;
  • A hypothesis should offer a tentative explanation based on theories or previous research;
  • A hypothesis should be concise and lucid.

 

Variables of the Study

A variable is either a result of some force or it is the force that causes a change in another variable. In experiments, these are called dependent and independent variables respectively

The purpose of all research is to describe and explain variance in the world. Variance is simply the difference; that is, variation that occurs naturally in the world or change that we create as a result of a manipulation. Variables are names that are given to the variance we wish to explain.


When a researcher gives an active medication to one group of people and a placebo, or inactive medication, to another group of people, the independent variable is the medication treatment. Each person’s response to the active medication or placebo is called the dependent variable.

This could be many things depending upon what the medication is for, such as high blood pressure or muscle pain. Therefore, in experiments, a researcher manipulates an independent variable to determine if it causes a change in the dependent variable.
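The medication example above can be sketched as follows; all numbers are made up for illustration.

```python
# Hedged sketch of the medication example: the treatment assignment
# is the independent variable; each person's response is the
# dependent variable. All values are hypothetical.
import statistics

responses = {                        # independent variable: group
    "medication": [12, 15, 14, 13],  # dependent variable: symptom reduction
    "placebo":    [5, 6, 4, 7],
}

# The researcher manipulates the independent variable (which group a
# person is in) and observes the effect on the dependent variable.
effect = statistics.mean(responses["medication"]) - statistics.mean(responses["placebo"])
print(effect)  # difference in mean response between the two groups
```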

As noted earlier, in a descriptive study variables are not manipulated. They are observed as they naturally occur and then associations between variables are studied. In a way, all the variables in descriptive studies are dependent variables, because they are studied in relation to all the other variables that exist in the setting where the research is taking place. However, in descriptive studies, variables are not discussed using the terms “independent” or “dependent.” Instead, the names of the variables are used when discussing the study.

Conceptual Framework

A conceptual framework is a written or visual presentation that explains, either graphically or in narrative form, the main things to be studied: the key factors, concepts or variables and the presumed relationships among them. The conceptual framework identifies the research tools and methods that may be used to carry out the research effectively. The main objective in forming a conceptual framework is to help the researcher give direction to the research.

Theoretical Framework

The theoretical framework enhances the overall clarity of the research. It also helps the researcher get through the research faster, as he has to look only for information within the theoretical framework rather than follow up every other piece of information he finds on the topic. The objective of forming a theoretical framework is to define a broad framework within which the researcher may work.

Difference between the Conceptual and the Theoretical Framework

  • A conceptual framework is the researcher’s idea of how the research problem will have to be explored. It is founded on the theoretical framework, which operates at a much broader level of generality. The theoretical framework dwells on time-tested theories that embody the findings of numerous investigations of how phenomena occur.
  • The theoretical framework provides a general representation of relationships between things in a given phenomenon. The conceptual framework, on the other hand, embodies the specific direction by which the research will have to be undertaken. Statistically speaking, the conceptual framework describes the relationship between specific variables identified in the study. It also outlines the input, process and output of the whole investigation. The conceptual framework is also called the research paradigm.
  • The theoretical framework looks at time-tested theories in relation to any research topic. The conceptual framework is the researcher’s idea on how the research problem will be explored, keeping in mind the theories put forth in the theoretical framework.
  • The theoretical framework looks at the general relationship of things in a phenomenon, while the conceptual framework puts forth the methods to study the relationship between the specific variables identified in the research topic.
  • The conceptual framework gives a direction to the research that is missing in the theoretical framework, by helping to decide on the tools and methods that may be employed in the research.

Research Methodology

Research methodology is the complete plan of attack on the central research problem. It provides the overall structure for the procedures that the researcher follows, the data that the researcher collects, and the data analyses that the researcher conducts, and thus involves planning. It is a plan with the central goal of solving the research problem in mind. Research methodology describes how the study was conducted; it includes the research design, study population, sample and sample size, methods of data collection, methods of data analysis, and the anticipated limitations of the study. Research methodology refers to a philosophy of the research process. It includes the assumptions and values that serve as a rationale for research and the standards or criteria the researcher uses for collecting and interpreting data and reaching conclusions (Martin and Amin, 2005:63). In other words, research methodology determines such factors as how to write the hypothesis and what level of evidence is necessary to decide whether to accept or reject it.

Research Method

1. Survey Method:

The survey method is the technique of gathering data by asking questions of people who are thought to have the desired information. Surveys involve collecting information, usually from fairly large groups of people, by means of questionnaires, though other techniques such as interviews or telephoning may also be used. There are different types of survey. Surveys are effective for producing information on socio-economic characteristics, attitudes, opinions, motives, etc., and for gathering information for planning product features, advertising media, sales promotion, channels of distribution and other marketing variables.

2. Experimental Method:

Experimental research is guided by hypotheses: educated guesses that predict the result of the experiment. An experiment is conducted to provide evidence for or against this experimental hypothesis. Experimental research, although very demanding of time and resources, often produces the soundest evidence concerning hypothesized cause-and-effect relationships.

3. Case Study Method:

Case study research involves an in-depth study of an individual or group of individuals. Case studies often lead to testable hypotheses and allow us to study rare phenomena. Case studies should not be used to determine cause and effect, and they have limited use for making accurate predictions. Case study research typically follows six steps:

  • Determine and define the research questions
  • Select the cases and determine data gathering and analysis techniques
  • Prepare to collect the data
  • Collect data in the field
  • Evaluate and analyze the data
  • Prepare the report

4. Observation Method:

The observation method involves human or mechanical observation of what people actually do or what events take place during a consumption situation. Information is collected by observing processes at work.

Observational trials study issues in large groups of people, but in natural settings. Studies which involve observing people can be divided into two main categories, namely participant observation and non-participant observation.

A) In participant observation studies, the researcher becomes (or already is) part of the group to be observed. This involves fitting in, gaining the trust of members of the group and at the same time remaining sufficiently detached to be able to carry out the observation.

B) In non-participant observation studies, the researcher is not part of the group being studied. The researcher decides in advance precisely what kind of behavior is relevant to the study and can be realistically and ethically observed. The observation can be carried out in a few different ways.

Research Type or Nature of the Research

1. Descriptive Research:

Descriptive research is also called statistical research. The main goal of this type of research is to describe the data and characteristics of what is being studied. The idea behind this type of research is to study frequencies, averages, and other statistical calculations. Although this research is highly accurate, it does not establish the causes behind a situation. Descriptive research is mainly done when a researcher wants to gain a better understanding of a topic: it is the exploration of existing phenomena whose details are not yet fully known. Descriptive research attempts to describe systematically a situation, problem, phenomenon, service or programme, or provides information about, say, the living conditions of a community, or describes attitudes towards an issue.

2. Explanatory Research:

Explanatory research attempts to clarify why and how there is a relationship between two or more aspects of a situation or phenomenon. It is conducted in order to explain behaviour, for example in a market, and can be done using questionnaires, group discussions, interviews, random sampling, etc.

3. Exploratory Research:

Exploratory research is conducted into an issue or problem where there are few or no earlier studies to refer to. Exploratory research is undertaken to explore an area where little is known or to investigate the possibilities of undertaking a particular research study (feasibility study/ pilot study).

The focus is on gaining insights and familiarity for later investigation. Descriptive research, by contrast, describes phenomena as they exist; there the data are often quantitative, and statistics are applied, with the aim of generalizing from an analysis by predicting certain phenomena on the basis of hypothesized general relationships. Exploratory research design is used to determine the best research design, selection of subjects and collection method. This design of research does not usually provide final and conclusive answers to the research questions.

4. Quantitative Research:

Quantitative research involves the analysis of numerical data. The emphasis of quantitative research is on collecting and analyzing numerical data; it concentrates on measuring the scale, range, frequency, etc. of phenomena. This type of research, although harder to design initially, is usually highly detailed and structured, and results can be easily collated and presented statistically. Quantitative data refers to the numeric quantities of the results.

5. Qualitative Research:

Qualitative research involves the analysis of data such as words (e.g., from interviews), pictures (e.g., video), or objects (e.g., an artifact). Qualitative data refers to the qualities of the results in observation.

Qualitative research is more subjective in nature than quantitative research and involves examining and reflecting on the less tangible aspects of a research subject, e.g. values, attitudes, perceptions. Although this type of research can be easier to start, it can often be difficult to interpret and present the findings, and the findings can also be challenged more easily.

Unit of Analysis

One of the most important ideas in a research project is the unit of analysis. The unit of analysis is the major entity that you are analyzing in your study. For instance, any of the following could be a unit of analysis in a study:

  • individuals
  • groups
  • artifacts (books, photos, newspapers)
  • geographical units (town, census tract, state)
  • social interactions (dyadic relations, divorces, arrests)

The unit of analysis is the ‘what’ or ‘who’ that is being studied. Units of analysis are essentially the things we examine in order to create summary descriptions of them and explain differences among them. Units of analysis commonly used in social science research include individuals, groups, organizations, social artifacts, and social interactions.

Population of the Study

A population is a well-defined group of people or objects that share common characteristics. A population in a research study is a group of individual persons, objects, or items from which samples are taken for measurement; it is a group about which some information is sought.

A research population is also known as a well-defined collection of individuals or objects known to have similar characteristics. All individuals or objects within a certain population usually have a common, binding characteristic or trait.

Usually, the description of the population and the common binding characteristic of its members are the same. A population is any group of individuals that has one or more characteristics in common and that are of interest to the researcher. As we describe below, there are various ways to configure a population depending on the characteristics of interest.

The population for a study must be specified clearly enough to give readers an accurate understanding of the applicability of the study to their own particular situation.

Sampling

A sample is a subset of the population being studied. It represents the larger population and is used to draw inferences about that population. Sampling is a research technique widely used in the social sciences as a way to gather information about a population without having to measure the entire population.

Broadly speaking, there are two groups of sampling techniques: probability sampling techniques and non-probability sampling techniques.

Probability sampling techniques

In probability sampling, every individual in the population has an equal chance of being selected as a subject for the research.

This method guarantees that the selection process is completely randomized and without bias.

Probability sampling techniques include simple random sampling, systematic random sampling, stratified random sampling and cluster sampling.

Random sampling

The random sample is the purest form of probability sampling. Each member of the population has an equal and known chance of being selected. When there are very large populations, it is often difficult or impossible to identify every member of the population, so the pool of available subjects becomes biased.
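When a full sampling frame exists, simple random sampling is a one-liner in code. This Python sketch uses an invented frame of 1,000 numbered units:

```python
import random

random.seed(1)
population = list(range(1, 1001))       # hypothetical sampling frame of 1000 units
sample = random.sample(population, 50)  # each unit has an equal, known chance: 50/1000

print(len(sample), len(set(sample)))    # 50 distinct units, drawn without replacement
```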

A Stratified Sample

Stratified random sampling is a method of sampling that involves the division of a population into smaller groups known as strata. In stratified random sampling, the strata are formed based on members’ shared attributes or characteristics. A random sample from each stratum is taken in a number proportional to the stratum’s size when compared to the population. These subsets of the strata are then pooled to form a random sample.
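Proportional allocation across strata can be sketched as follows; the strata, their sizes, and the member labels are invented for illustration:

```python
import random

random.seed(2)
# Two hypothetical strata: 60% urban, 40% rural members.
strata = {
    "urban": [f"u{i}" for i in range(600)],
    "rural": [f"r{i}" for i in range(400)],
}
total = sum(len(m) for m in strata.values())
sample_size = 100

sample = []
for name, members in strata.items():
    # Draw from each stratum in proportion to its share of the population.
    n = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, n))

print(len(sample))  # 60 urban + 40 rural = 100
```

The per-stratum draws are then pooled into one sample, mirroring the description above.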

A Cluster Sample

A cluster sample is obtained by selecting clusters from the population on the basis of simple random sampling. The sample comprises a census of each random cluster selected. Cluster sampling is a method used to enable random sampling to occur while limiting the time and costs that would otherwise be required to sample from either a very large population or one that is geographically diverse.
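A minimal sketch, with invented villages standing in for clusters: whole clusters are drawn at random, then every member of each selected cluster is included (a census of the cluster):

```python
import random

random.seed(3)
# Ten hypothetical clusters (villages) of 20 members each.
clusters = {f"village_{i}": [f"v{i}_p{j}" for j in range(20)] for i in range(10)}

chosen = random.sample(sorted(clusters), 3)        # simple random sample of clusters
sample = [p for c in chosen for p in clusters[c]]  # census of each chosen cluster

print(len(chosen), len(sample))  # 3 clusters, 3 x 20 = 60 members
```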

Non-probability sampling techniques

In this type of population sampling, members of the population do not have an equal chance of being selected. Because of this, it is not safe to assume that the sample fully represents the target population. It is also possible that the researcher deliberately chooses the individuals who will participate in the study.

Convenience Sampling

Convenience sampling is a non-probability sampling technique where subjects are selected because of their convenient accessibility and proximity to the researcher.

The subjects are selected simply because they are the easiest to recruit for the study; the researcher does not consider whether they are representative of the entire population.

Consecutive Sampling

Consecutive sampling is a strict version of convenience sampling in which every available subject is selected, i.e., the complete accessible population is studied. It is the best choice among the non-probability sampling techniques since, by studying everybody available, a good representation of the overall population is possible within a reasonable period of time.

Judgmental sampling

Judgmental sampling is a non-probability sampling technique where the researcher selects units to be sampled based on their knowledge and professional judgment.

Judgmental sampling is used in cases where the speciality of an authority can select a more representative sample, bringing more accurate results than other sampling techniques would. The process involves purposely handpicking individuals from the population based on the authority’s or the researcher’s knowledge and judgment.

Quota Sampling

Quota sampling is the non-probability equivalent of stratified sampling. Like stratified sampling, the researcher first identifies the strata and their proportions as they are represented in the population. Then convenience or judgment sampling is used to select the required number of subjects from each stratum. This differs from stratified sampling, where the strata are filled by random sampling. Quota sampling is thus a non-probability technique used to ensure equal representation of subjects in each layer of a stratified sample grouping.

Sequential Sampling

Sequential sampling is a non-probability sampling technique wherein the researcher picks a single subject or a group of subjects in a given time interval, conducts his study, analyzes the results, then picks another group of subjects if needed, and so on.

Systematic Sampling

Systematic sampling is a type of probability sampling method in which sample members from a larger population are selected according to a random starting point and a fixed periodic interval. This interval, called the sampling interval, is calculated by dividing the population size by the desired sample size. Despite the sample population being selected in advance, systematic sampling is still thought of as being random if the periodic interval is determined beforehand and the starting point is random.

Snowball or Chain Sampling

Snowball sampling is a special non-probability method used when the desired sample characteristic is rare. It may be extremely difficult or cost-prohibitive to locate respondents in these situations. Snowball sampling relies on referrals from initial subjects to generate additional subjects. While this technique can dramatically lower search costs, it comes at the expense of introducing bias, because the technique itself reduces the likelihood that the sample will represent a good cross-section of the population.

A Pre-Test

A pre-test usually refers to a small-scale trial of particular research components.

Before planning a pilot census, the conduct of a series of pre-test surveys is highly desirable. The objective of the pre-test surveys should be confined mainly to the formulation of concepts and definitions, census questionnaires, instruction manuals, etc., and the evaluation of alternative methodologies and data collection techniques.

A Pilot Study

A pilot study is a mini-version of a full-scale study, or a trial run done in preparation for the complete study; it is also called a ‘feasibility’ study. It can also be a specific pre-testing of research instruments, including questionnaires or interview schedules.

Methods of Data Collection

1. Focus Groups:

Focus group discussion (FGD) is a method of data collection which is frequently used to collect in-depth qualitative data in various descriptive studies (such as case studies, phenomenological and naturalistic studies). The main goal of a focus group discussion is to provide an opportunity for the participants to talk to one another about a specific area of study; the facilitator is there to guide the discussion. A focus group discussion allows a group of 8–12 informants to freely discuss a certain subject with the guidance of a facilitator or reporter. Focus group discussions can be used to:

  • Develop appropriate messages for health education programmes and later evaluate the messages for clarity
  • Explore controversial topic
  • Focus research and develop relevant research hypotheses by exploring in greater depth the problem to be investigated and its possible causes
  • Formulate appropriate questions for more structured, larger scale surveys
  • Help understand and solve unexpected problems in interventions

2. Interviews:

The interview is one of the most popular methods of research data collection. The term can be dissected into two parts: ‘inter’ and ‘view’. The essence of the interview is that one mind tries to read the other: the interviewer tries to assess the interviewee in terms of the aspects studied or issues analyzed. Interviews are a good approach for gathering in-depth attitudes, beliefs, and anecdotal data from individual patrons. Personal contact with participants may elicit richer and more detailed responses, and the interview provides an excellent opportunity to probe and explore questions.

3. Observation:

Observation is a technique that involves systematically selecting, watching and recording the behavior and characteristics of living beings, objects or phenomena. Observation of human behavior is a much-used data collection technique. It can be undertaken in different ways. In participant observation, the observer takes part in the situation he or she observes (for example, a doctor hospitalized with a broken hip who now observes hospital procedures ‘from within’). In non-participant observation, the observer watches the situation, openly or concealed, but does not participate. Observations can be overt (everyone knows they are being observed) or covert (no one knows they are being observed and the observer is concealed). The benefit of covert observation is that people are more likely to behave naturally if they do not know they are being observed. However, you will typically need to conduct overt observations because of the ethical problems involved in concealing your observation.

4. Surveys:

Surveys are best for gathering brief written responses on attitudes and beliefs, for example regarding library programs. They can include both close-ended and open-ended questions, and can be administered in written form or online. Personal contact with the participants is not required, and staff and facilities requirements are minimal, since one employee can easily manage the distribution and collection of surveys, and issues such as privacy or the need for quiet areas are typically not concerns. On the other hand, responses are limited to the questions included in the survey, and participants need to be able to read and write to respond. Therefore, surveys may not always be the best initial data collection tool.

Tools of Data Collection

1. Interview Schedule (Open-ended/Close-ended):

This method of data collection is very much like collecting data through a questionnaire, with the difference that the schedules (proformas containing a set of questions) are filled in by enumerators who are specially appointed for the purpose. These enumerators go to the respondents with the schedules, put to them the questions from the proforma in the order the questions are listed, and record the replies in the space meant for them in the proforma.

2. Questionnaire (Open-ended Question):

This method of data collection is quite popular, particularly in the case of big enquiries. It is adopted by private individuals, research workers, private and public organizations and even by governments. In this method a questionnaire is sent (usually by post) to the persons concerned with a request to answer the questions and return the questionnaire. A questionnaire consists of a number of questions printed or typed in a definite order on a form or set of forms. The questionnaire is mailed to respondents, who are expected to read and understand the questions and write down their replies in the spaces provided in the questionnaire itself. The respondents have to answer the questions on their own.

3. Checklist:

Checklists structure a person’s observation or evaluation of a performance or artifact. They can be simple lists of criteria that can be marked as present or absent, or can provide space for observer comments. These tools can provide consistency over time or between observers. Checklists can be used for Case Study method.

4. Dichotomous Scales

The response options for each question in your survey may include a dichotomous, a three-point, a five-point, a seven-point or a semantic differential scale. Each of these response scales has its own advantages and disadvantages, but the rule of thumb is that the best response scale to use is the one which can be easily understood by respondents and interpreted by the researcher.

A dichotomous scale is a two-point scale which presents options that are absolute opposites of each other. This type of response scale does not give the respondent an opportunity to be neutral in his answer to a question.

Examples:

  • Yes- No
  • True – False
  • Fair – Unfair
  • Agree – Disagree

Rating Scales

This is a recording form used for measuring individuals’ attitudes, aspirations and other psychological and behavioral aspects, as well as group behavior.

Three-point, five-point, and seven-point scales are all included under the umbrella term “rating scale”. A rating scale provides more than two options, allowing the respondent to remain neutral on the question being asked.

Examples:

1. Three-point Scales

  • Good – Fair – Poor
  • Agree – Undecided – Disagree
  • Extremely- Moderately – Not at all
  • Too much – About right – Too little

2. Five-point Scales (e.g. Likert Scale)

  • Strongly Agree – Agree – Undecided / Neutral – Disagree – Strongly Disagree
  • Always – Often – Sometimes – Seldom – Never
  • Extremely – Very – Moderately – Slightly – Not at all
  • Excellent – Above Average – Average – Below Average – Very Poor

3. Seven-point Scales

  • Exceptional – Excellent – Very Good – Good – Fair – Poor – Very Poor
  • Very satisfied – Moderately satisfied – Slightly satisfied – Neutral – Slightly dissatisfied – Moderately dissatisfied – Very dissatisfied
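For analysis, rating-scale responses are usually coded numerically. A sketch for a five-point Likert item, using the common (but not mandatory) convention Strongly Agree = 5 down to Strongly Disagree = 1, with invented responses:

```python
# Numeric codes for a five-point Likert item (an assumed convention).
LIKERT = {"Strongly Agree": 5, "Agree": 4, "Undecided": 3,
          "Disagree": 2, "Strongly Disagree": 1}

responses = ["Agree", "Strongly Agree", "Undecided", "Agree", "Disagree"]
scores = [LIKERT[r] for r in responses]

mean_score = sum(scores) / len(scores)
print(mean_score)  # (4 + 5 + 3 + 4 + 2) / 5 = 3.6
```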

Semantic Differential Scales

A semantic differential scale is used mainly in specialist surveys to gather data and interpret it on the basis of the connotative meaning of the respondent’s answer. It uses a pair of clearly opposite words, and can be either marked or unmarked.

Data Processing

Data processing is an intermediary stage of work between data collection and data analysis. The completed instruments of data collection (interview schedules, questionnaires, data sheets, field notes) contain a vast mass of data. They cannot straightaway provide answers to research questions; like raw materials, they need processing. Data processing involves the classification and summarisation of data in order to make them amenable to analysis. It consists of a number of closely related operations:

(1) editing,

(2) classification and coding,

(3) transcription and

(4) tabulation.

1. Editing:

The first step in the processing of data is the editing of completed schedules/questionnaires. Editing of data is the process of examining the collected raw data (especially in surveys) to detect errors and omissions and to correct these when possible.

Data editing is defined as the process involving the review and adjustment of collected survey data. The purpose is to control the quality of the collected data. Data editing can be performed manually, with the assistance of a computer or a combination of both.

Editing methods

Interactive editing

The term interactive editing is commonly used for modern computer-assisted manual editing. Most interactive data editing tools applied at National Statistical Institutes (NSIs) allow one to check the specified edits during or after data entry and, if necessary, to correct erroneous data immediately. Several approaches can be followed to correct erroneous data.

Interactive editing is a standard way to edit data. It can be used to edit both categorical and continuous data. Interactive editing reduces the time frame needed to complete the cyclical process of review and adjustment.

Selective editing

Selective editing is an umbrella term for several methods of identifying influential errors and outliers. Selective editing techniques aim to apply interactive editing to a well-chosen subset of the records, so that the limited time and resources available for interactive editing are allocated to those records where editing has the most effect on the quality of the final published estimates. In selective editing, data are split into two streams:

  • the critical stream
  • the non-critical stream

The critical stream consists of records that are more likely to contain influential errors; these critical records are edited in a traditional interactive manner. The records in the non-critical stream, which are unlikely to contain influential errors, are not edited in a computer-assisted manner.

Macro editing

There are two methods of macro editing:

Aggregation method

This method is followed in almost every statistical agency before publication: verifying whether the figures to be published seem plausible. This is accomplished by comparing the quantities in the publication tables with the same quantities in previous publications.

Distribution method

The available data are used to characterize the distribution of the variables. Then all individual values are compared with that distribution. Records containing values that could be considered uncommon, given the distribution, are candidates for further inspection and possibly for editing.
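One way to sketch the distribution method is with a robust outlier rule; the modified z-score, its conventional threshold of 3.5, and the data below are assumptions for illustration, not from the text:

```python
import statistics

# Compare each value with a robust summary of the distribution (median and
# median absolute deviation) and flag uncommon values for inspection.
values = [102, 98, 101, 99, 97, 103, 100, 96, 104, 980]  # 980 looks suspect

med = statistics.median(values)
mad = statistics.median(abs(v - med) for v in values)

# Modified z-score rule: flag values whose robust z-score exceeds 3.5.
suspects = [v for v in values if 0.6745 * abs(v - med) / mad > 3.5]
print(suspects)  # [980]
```

A median-based rule is used here because a single extreme value can inflate the mean and standard deviation enough to mask itself.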

Automatic editing

In automatic editing, records are edited by a computer without human intervention. Prior knowledge of the values of a single variable or a combination of variables can be formulated as a set of edit rules which specify or constrain the admissible values.

2. Coding:

Coding refers to the process of assigning numerals or other symbols to answers so that responses can be put into a limited number of categories or classes. Such classes should be appropriate to the research problem under consideration. They must also possess the characteristic of exhaustiveness (there must be a class for every data item) and that of mutual exclusivity, which means that a specific answer can be placed in one and only one cell of a given category set. Another rule to be observed is that of unidimensionality, by which is meant that every class is defined in terms of only one concept.
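A coding sketch in Python; the codebook and the catch-all code 9 are invented for illustration. The catch-all keeps the class set exhaustive, and the dictionary lookup guarantees each answer lands in exactly one class:

```python
# Hypothetical codebook mapping answers to numeric codes.
CODEBOOK = {"Yes": 1, "No": 2, "Don't know": 3}
OTHER = 9  # catch-all code, so every data item has a class (exhaustiveness)

answers = ["Yes", "No", "Yes", "Don't know", "Refused"]
codes = [CODEBOOK.get(a, OTHER) for a in answers]
print(codes)  # [1, 2, 1, 3, 9]
```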

3. Tabulation:

After the transcription of data is over, the data are summarized and arranged in a compact form for further analysis. This process is called tabulation. Thus, tabulation is the process of summarizing raw data and displaying them in compact statistical tables for further analysis. It involves counting the number of cases falling into each of several categories.
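Counting the cases falling into each category is exactly what a frequency table does; a sketch with invented responses:

```python
from collections import Counter

# Tabulate invented coded responses into a simple frequency table.
responses = ["Agree", "Disagree", "Agree", "Undecided", "Agree", "Disagree"]
table = Counter(responses)

for category, count in table.most_common():
    print(f"{category:<10} {count}")
```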

4. Classification:

Most research studies result in a large volume of raw data, which must be reduced into homogeneous groups if we are to get meaningful relationships. This necessitates the classification of data, which is the process of arranging data in groups or classes on the basis of common characteristics. Data having a common characteristic are placed in one class, and in this way the entire data set is divided into a number of groups or classes.

Analysis of Data

Data analysis can take the form of simple descriptive statistics or more sophisticated statistical inference. Data analysis techniques include univariate analysis (such as the analysis of single-variable distributions), bivariate analysis and, more generally, multivariate analysis. Multivariate analysis, broadly speaking, refers to all statistical methods that simultaneously analyze multiple measurements on each individual or object under investigation; as such, many multivariate techniques are extensions of univariate and bivariate analysis.

Descriptive Statistics:

Descriptive statistics implies a simple quantitative summary of a data set that has been collected. It helps us understand the experiment or data set in detail and tells us all about the required details that help put the data in perspective.

In descriptive statistics, we simply state what the data show. Interpreting the results and trends beyond this involves inferential statistics, which is a separate branch altogether.

The data that are collected can be represented in several ways. Descriptive statistics includes the statistical procedures that we use to describe the population we are studying. The data could be collected from either a sample or a population, but the results help us organize and describe the data. Descriptive statistics can only be used to describe the group that is being studied; that is, the results cannot be generalized to any larger group.
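A minimal sketch of descriptive summaries for a small invented sample; note that nothing here generalizes beyond these seven scores:

```python
import statistics

# Invented sample of seven test scores; summarize without generalizing.
scores = [12, 15, 11, 14, 18, 15, 13]

print("mean  :", statistics.mean(scores))            # 14
print("median:", statistics.median(scores))          # 14
print("stdev :", round(statistics.stdev(scores), 2)) # 2.31
```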

Inferential Statistics:

While descriptive statistics tell us basic information about the population or data set under study, inferential statistics are produced by more complex mathematical calculations, and allow us to infer trends about a larger population based on a study of a sample taken from it. We use inferential statistics to examine the relationships between variables within a sample, and then make generalizations or predictions about how those variables will relate within a larger population.

Most quantitative social science operates using inferential statistics, because it is typically too costly or time-consuming to study an entire population of people. Using a statistically valid sample and inferential statistics, we can conduct research that otherwise would not be possible.

Techniques that social scientists use to examine the relationships between variables, and thereby to create inferential statistics, include but are not limited to: linear regression analyses, logistic regression analyses, ANOVA, correlation analyses, structural equation modeling, and survival analysis.

When conducting research using inferential statistics, it is important and necessary to conduct tests of significance in order to know whether you can generalize your results to a larger population. Common tests of significance include the chi-square test and the t-test. These tell us the probability that the results of our analysis of the sample are representative of the population from which the sample was drawn.
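
As a hedged sketch of one such test (the two samples are invented), a two-sample t-statistic can be computed directly and compared against a t-table value; with 8 degrees of freedom, the two-tailed 5% critical value is about 2.306:

```python
import math
import statistics

# Hypothetical scores from two independent samples.
group_a = [80, 85, 90, 95, 100]
group_b = [70, 75, 80, 85, 90]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)

# t-statistic: difference of means divided by its standard error.
t = (mean_a - mean_b) / math.sqrt(var_a / len(group_a) + var_b / len(group_b))
print(t)
```

Here t works out to 2.0, which falls short of the ~2.306 critical value, so on these invented data the difference would not be declared significant at the 5% level.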

Interpretation of Data

Interpretation refers to the task of drawing inferences from the collected facts after an analytical and/or experimental study. In fact, it is a search for broader meaning of research findings. The task of interpretation has two major aspects viz.,

(i)            The effort to establish continuity in research through linking the results of a given study with those of another,

(ii)           The establishment of some explanatory concepts.

Necessity of Data Interpretation

Interpretation leads to the establishment of explanatory concepts that can serve as a guide for future research studies; it opens new avenues of intellectual adventure and stimulates the quest for more knowledge.

It is through interpretation that the researcher can understand the abstract principle that works beneath his findings. Through this he can link his findings with those of other studies having the same abstract principle, and thereby can make predictions about the concrete world of events. Fresh inquiries can test these predictions later on. In this way, continuity in research can be maintained.

Only through interpretation can the researcher appreciate why his findings are what they are, and make others understand the real significance of his research findings.

The interpretation of the findings of an exploratory research study often results in hypotheses for experimental research, and as such interpretation is involved in the transition from exploratory to experimental research. Since an exploratory study does not start with a hypothesis, the findings of such a study have to be interpreted on a post-factum basis, in which case the interpretation is technically described as ‘post factum’ interpretation.

Test of Hypothesis

Hypothesis testing helps us decide, on the basis of sample data, whether a hypothesis about the population is likely to be true or false. Statisticians have developed several tests of hypotheses (also known as tests of significance), which can be classified as:

(a) Parametric tests or standard tests of hypotheses;

(b) Non-parametric tests or distribution-free test of hypotheses.

Parametric Test:

Parametric tests usually assume certain properties of the parent population from which we draw samples. Assumptions such as that the observations come from a normal population, that the sample size is large, and that population parameters like the mean and variance take particular values must hold good before parametric tests can be used. But there are situations when the researcher cannot, or does not want to, make such assumptions. (Examples: the t-test and the z-test.)

Non-parametric Test:

Such tests do not depend on any assumption about the parameters of the parent population. Besides, most non-parametric tests assume only nominal or ordinal data, whereas parametric tests require measurement equivalent to at least an interval scale (e.g., the χ²-test).

Chi-square

Chi-square is a statistical test commonly used to compare observed data with data we would expect to obtain according to a specific hypothesis. The chi-square test is always testing what scientists call the null hypothesis, which states that there is no significant difference between the expected and observed results.

It is an important non-parametric test: no rigid assumptions are necessary regarding the type of population, no parameter values are needed, and relatively little mathematical detail is involved. It is based on frequencies, not on parameters like the mean or standard deviation.

It can also be applied to a complex contingency table with several classes, and as such is a very useful test in research work. It is useful for testing hypotheses, not for estimation. χ² should not be calculated if the expected value in any category is less than 5.
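
The χ² computation itself is a one-line sum over observed and expected frequencies; in this hypothetical sketch every expected count exceeds 5, as required above:

```python
# Chi-square = sum of (observed - expected)^2 / expected over all categories.
observed = [48, 35, 17]   # hypothetical survey counts in three categories
expected = [40, 40, 20]   # counts predicted by the null hypothesis

chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_square, 3))
```

The result is 2.675; with 2 degrees of freedom the 5% critical value is 5.991, so on these invented counts the null hypothesis would not be rejected.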

Degrees of Freedom

In statistics, the number of degrees of freedom (d.o.f.) is the number of independent pieces of data being used to make a calculation. The number of degrees of freedom is a measure of how certain we are that our sample is representative of the entire population.

The d.o.f. can be viewed as the number of independent parameters available to fit a model to data. Generally, the more parameters you have, the more accurate your fit will be. However, for each estimate made in a calculation, you remove one degree of freedom. This is because each assumption or approximation you make puts one more restriction on how many parameters are used to generate the model.
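
Two standard illustrations of this counting rule (the numbers are arbitrary):

```python
# Sample variance: one d.o.f. is spent estimating the mean.
n = 10
dof_variance = n - 1

# Contingency table for a chi-square test: fixing the row and column
# totals removes one row and one column of freedom.
rows, cols = 3, 4
dof_table = (rows - 1) * (cols - 1)
print(dof_variance, dof_table)
```

So a sample of 10 observations leaves 9 degrees of freedom for the variance, and a 3-by-4 table leaves 6 for the χ² statistic.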

Measurement Scales

The “levels of measurement”, or scales of measure, are expressions that typically refer to the theory of scale types developed by the psychologist Stanley Smith Stevens. Stevens claimed that all measurement in science was conducted using four different types of scales, which he called “nominal”, “ordinal”, “interval” and “ratio”, unifying both qualitative data (described by his “nominal” scale) and quantitative data (to a different degree, all the rest of his scales).

1.Nominal:

A scale that measures data by name only.

2.Ordinal:

A scale that measures by rank order only. Other than rough order, no precise measurement is possible.

3.Interval:

A scale that measures by using equal intervals. Here you can compare differences between pairs of values.

4.Ratio:

A scale that measures using equal intervals and a true zero point, so that ratios between values are meaningful.

Univariate Data:

  • Involving a single variable.
  • Does not deal with causes or relationships.
    • Central tendency – mean, mode, median.
    • Dispersion – range, variance, max, min, quartiles, standard deviation.
    • Frequency distributions.
    • Bar graph, histogram, pie chart, line graph, box-and-whisker plot
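
A frequency distribution for a single variable can be sketched with Python's standard collections.Counter (the letter grades are made up):

```python
from collections import Counter

# Hypothetical letter grades: one variable, so this is univariate data.
grades = ["B", "A", "C", "B", "B", "A", "D", "C", "B"]

freq = Counter(grades)
for grade in sorted(freq):
    # Repeating "#" gives a crude text bar graph of the distribution.
    print(grade, freq[grade], "#" * freq[grade])
```

The printout is the frequency table and bar graph named in the list above, built from nothing but counts.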

Bivariate Data:

  • Involving two variables
  • Deals with causes or relationships
  • The major purpose of bivariate analysis is to explain
    • Analysis of two variables simultaneously
    • Correlations
    • Comparisons, relationships, causes, explanations
    • Tables where one variable is contingent on the values of the other variable.
    • Independent and dependent variables
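
As a sketch of bivariate analysis (the paired data are hypothetical), Pearson's correlation coefficient can be computed from first principles:

```python
import math

# Hypothetical pairs: hours studied (x) versus exam score (y).
x = [1, 2, 3, 4, 5]
y = [52, 60, 65, 71, 82]

mean_x = sum(x) / len(x)
mean_y = sum(y) / len(y)

# Pearson's r: co-deviation of the pairs scaled by the two spreads.
num = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
den = math.sqrt(sum((a - mean_x) ** 2 for a in x)
                * sum((b - mean_y) ** 2 for b in y))
r = num / den
print(round(r, 2))
```

On these invented pairs r comes out at about 0.99, a strong positive relationship; note that correlation alone does not establish which variable, if either, is the cause.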

Graph, Chart and Figure

  • A pie chart is circular and represents 100% of a category; each segment is a percentage of the whole.
  • A figure can be any picture that accompanies the text of what someone is writing.
  • A graph shows related values so that comparisons can be made, such as between the tallest and the shortest.

Raw Data

The term raw data is used most commonly to refer to information that is gathered for a research study before that information has been transformed or analyzed in any way. The term can apply to the data as soon as they are gathered or after they have been cleaned, but not in any way further transformed or analyzed.

Characteristics of Conclusion

Every basic conclusion must share several key elements, but there are also several tactics you can play around with to craft a more effective conclusion and several you should avoid in order to prevent yourself from weakening your paper’s conclusion. Here are some writing tips to keep in mind when creating the conclusion for your next research paper.

  • Restate the topic
  • Summarize the main points
  • Add the points up
  • Make a call to action when appropriate

Confidence Interval and Fact

A confidence interval gives an estimated range of values which is likely to include an unknown population parameter, the estimated range being calculated from a given set of sample data.

If independent samples are taken repeatedly from the same population, and a confidence interval calculated for each sample, then a certain percentage (confidence level) of the intervals will include the unknown population parameter. Confidence intervals are usually calculated so that this percentage is 95%, but we can produce 90%, 99%, 99.9% (or whatever) confidence intervals for the unknown parameter.
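
A minimal sketch of a 95% confidence interval for a mean, assuming the large-sample z value of 1.96 (applied here to a small hypothetical sample purely for illustration):

```python
import math
import statistics

sample = [23, 25, 21, 27, 24, 26, 22, 28, 25, 24]  # hypothetical measurements

mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error

lower = mean - 1.96 * se   # 95% interval: mean plus or minus 1.96 SE
upper = mean + 1.96 * se
print(round(lower, 2), round(upper, 2))
```

The interval (about 23.15 to 25.85 here) is a statement about the procedure, not a fact: roughly 95% of intervals built this way from repeated samples would cover the true population mean.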

A fact is something that has really occurred or is actually the case. The usual test for a statement of fact is verifiability, that is, whether it can be proven to correspond to experience.

Sampling Techniques in Quantitative Research

 



When we are interested in a population, we typically study a sample of that population rather than attempt to study the whole population. The purpose of sampling techniques is to help you select the units to be included in your sample.

Broadly speaking, there are two groups of sampling techniques: probability sampling techniques and non-probability sampling techniques.

Probability sampling techniques

Probability sampling techniques use random selection to help you select units from your sampling frame to be included in your sample. These procedures are very clearly defined, making it easy to follow them.

In probability samples, each member of the population has a known non-zero probability of being selected. Probability sampling includes simple random sampling, systematic sampling, stratified sampling, cluster sampling and disproportional sampling. The advantage of probability sampling is that sampling error can be calculated.

In probability sampling, every individual in the population has an equal chance of being selected as a subject for the research.

This method guarantees that the selection process is completely randomized and without bias.

The most basic example of probability sampling is listing all the names of the individuals in the population on separate slips of paper, and then drawing a number of slips one by one from the complete collection of names.
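
The paper-lottery idea can be sketched with Python's standard random module; the names are placeholders and the seed is fixed only to make the example draw reproducible:

```python
import random

random.seed(42)  # fixed seed so the example draw is repeatable

population = [f"person_{i}" for i in range(1, 101)]  # 100 hypothetical names
sample = random.sample(population, 10)               # 10 draws, no replacement
print(sample)
```

Every name has the same 10-in-100 chance of appearing, which is exactly the property that makes the resulting sampling error calculable.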

The advantage of using probability sampling is the accuracy of the statistical methods after the experiment. It can also be used to estimate the population parameters since it is representative of the entire population. It is also a reliable method to eliminate sampling bias.

The types of probability sampling techniques include simple random sampling, systematic random sampling, stratified random sampling and cluster sampling.

Random sampling

The random sample is the purest form of probability sampling. Each member of the population has an equal and known chance of being selected. When there are very large populations, it is often difficult or impossible to identify every member of the population, so the pool of available subjects becomes biased.

This may be the most important type of sample. A random sample allows a known probability that each elementary unit will be chosen. This is the type of sampling that is used in lotteries and raffles.

Types of random Samples

A Simple Random Sample-A simple random sample is obtained by choosing elementary units in such a way that each unit in the population has an equal chance of being selected. A simple random sample is free from sampling bias. However, using a random number table to choose the elementary units can be cumbersome. If the sample is to be collected by a person untrained in statistics, then instructions may be misinterpreted and selections may be made improperly.

A systematic random sample-Systematic sampling is often used instead of random sampling. It is also called an Nth name selection technique. After the required sample size has been calculated, every Nth record is selected from a list of population members. As long as the list does not contain any hidden order, this sampling method is as good as the random sampling method. Its only advantage over the random sampling technique is simplicity. Systematic sampling is frequently used to select a specified number of records from a computer file.

A systematic random sample is obtained by selecting one unit on a random basis and choosing additional elementary units at evenly spaced intervals until the desired number of units is obtained.
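
A sketch of the Nth-record selection just described (the record IDs are hypothetical): pick a random start within the first interval, then take every Nth record after it:

```python
import random

population = list(range(1, 101))            # 100 hypothetical record IDs
sample_size = 10
interval = len(population) // sample_size   # N = 10

random.seed(1)                              # reproducible example
start = random.randrange(interval)          # random start in first interval
sample = population[start::interval]        # every Nth record thereafter
print(sample)
```

Only the starting point is random; the evenly spaced picks that follow are why a hidden order in the list would bias this design.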

Concept of Randomization

Contrary to popular opinion, samples are not selected haphazardly. Rather they are chosen in a systematically random way so that chance or the operation of probability is utilized. Where random selection is not possible, other systematic means are used.

Randomization is a sampling method used in scientific experiments. It is commonly used in randomized controlled trials in experimental research.

The concept of randomness has been basic to scientific observation and research. It is based on the assumption that while individual events cannot be predicted with accuracy, aggregate events can. For instance, although one may not be able to predict an individual’s academic achievement with great accuracy, one can accurately predict the average academic performance of a group.

In randomized controlled trials, the research participants are assigned by chance, rather than by choice, to either the experimental group or the control group.

Randomization reduces bias as much as possible; it is designed to “control” (reduce or, if possible, eliminate) bias.

The fundamental goal of randomization is to ensure that each treatment is equally likely to be assigned to any given experimental unit.

Randomization has two important applications in research:

1. Selecting a group of individuals for observation who are representative of the population about which the researcher wishes to generalize, or

2. Equating experimental and control groups in an experiment. Assigning individuals by random assignment (each individual in the sample has an equal and independent chance of being assigned to each of the groups) is the best method of providing for their equivalence.

Randomization- Actually Working

Well, there are different options used by researchers to perform randomization. It can be achieved by using the random number tables given in most statistical textbooks, or computers can be used to generate random numbers for us.

If neither of these is available, you can devise your own plan to perform randomization; for example, you can use the last digit of phone numbers given in a telephone directory. Suppose you have different varieties of rice grown in 10 small plots in a greenhouse and you want to evaluate a certain fertilizer on 9 varieties of rice plants, keeping one plot as a control.

You can number the small plots from 1 up to 9 and then use a series of random numbers like 8 6 3 1 6 2 9 3 5 6 7 5 5 3 1 and so on.

You can then allocate the three doses of fertilizer (call them doses A, B and C) by reading along the series: apply dose A to plot 8, dose B to plot 6, and dose C to plot 3. Continue with dose A to plot 1, skip the repeated 6 because plot 6 is already used, apply dose B to plot 2, and so on.
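
The plot example can be sketched in code; here random.shuffle stands in for reading off the printed digit series, and the plot numbers and dose names follow the example above:

```python
import random

random.seed(7)  # reproducible example

plots = list(range(1, 10))   # plots 1..9; plot 10 stays as the control
random.shuffle(plots)        # random order replaces the digit series

# Assign three plots to each of the three doses A, B and C.
assignment = {}
for dose, chunk in zip("ABC", (plots[0:3], plots[3:6], plots[6:9])):
    for plot in chunk:
        assignment[plot] = dose
print(assignment)
```

Because the ordering is random, every plot is equally likely to receive any dose, which is the fundamental goal of randomization stated earlier.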

A Stratified Sample-Stratified sampling is a commonly used probability method that is superior to random sampling because it reduces sampling error. A stratum is a subset of the population that shares at least one common characteristic. The researcher first identifies the relevant strata and their actual representation in the population. Random sampling is then used to select a sufficient number of subjects from each stratum. “Sufficient” refers to a sample size large enough for us to be reasonably confident that the stratum represents the population. Stratified sampling is often used when one or more of the strata in the population have a low incidence relative to the others.

A stratified sample is obtained by independently selecting a separate simple random sample from each population stratum. A population can be divided into different groups based on some characteristic or variable.

Stratified sampling is a probability sampling technique wherein the researcher divides the entire population into different subgroups or strata, then randomly selects the final subjects proportionally from the different strata.

Stratified random sampling is a method of sampling that involves the division of a population into smaller groups known as strata. In stratified random sampling, the strata are formed based on members’ shared attributes or characteristics. A random sample from each stratum is taken in a number proportional to the stratum’s size when compared to the population. These subsets of the strata are then pooled to form a random sample.

The main advantage with stratified sampling is how it captures key population characteristics in the sample. Similar to a weighted average, this method of sampling produces characteristics in the sample that are proportional to the overall population. Stratified sampling works well for populations with a variety of attributes, but is otherwise ineffective, as subgroups cannot be formed.

It is important to note that a random sample is not necessarily an identical representation of the population. Characteristics of successive random samples drawn from the same population may differ to some degree, but it is possible to estimate their variation from the population characteristics and from each other. The variation, known as sampling error, does not suggest that a mistake has been made in the sampling process. Rather, sampling error refers to the chance variations that occur in sampling; with randomization these variations are predictable and taken into account in data-analysis techniques.

It is important to note that the strata must be non-overlapping. Having overlapping subgroups will grant some individuals higher chances of being selected as subject. This completely negates the concept of stratified sampling as a type of probability sampling.

Equally important is the fact that the researcher must use simple probability sampling within the different strata.

The most common strata used in stratified random sampling are age, gender, socioeconomic status, religion, nationality and educational attainment.

Stratified random sampling is used when the researcher wants to highlight a specific subgroup within the population. This technique is useful in such researches because it ensures the presence of the key subgroup within the sample.

Researchers also employ stratified random sampling when they want to observe existing relationships between two or more subgroups. With a simple random sampling technique, the researcher is not sure whether the subgroups that he wants to observe are represented equally or proportionately within the sample.

With stratified sampling, the researcher can representatively sample even the smallest and most inaccessible subgroups in the population. This allows the researcher to sample the rare extremes of the given population.

With this technique, you have a higher statistical precision compared to simple random sampling. This is because the variability within the subgroups is lower compared to the variations when dealing with the entire population.

Because this technique has high statistical precision, it also means that it requires a small sample size which can save a lot of time, money and effort of the researchers.

Types of Stratified Sampling

A-Proportionate Stratified Random Sampling

The sample size of each stratum in this technique is proportionate to the population size of the stratum when viewed against the entire population. This means that each stratum has the same sampling fraction.

For example, suppose you have 3 strata with population sizes of 100, 200 and 300 respectively, and the researcher chooses a sampling fraction of ½. Then the researcher must randomly sample 50, 100 and 150 subjects from the respective strata.

Stratum                A      B      C
Population Size        100    200    300
Sampling Fraction      ½      ½      ½
Final Sample Size      50     100    150

The important thing to remember in this technique is to use the same sampling fraction for each stratum regardless of the differences in population size of the strata. It is much like assembling a smaller population that is specific to the relative proportions of the subgroups within the population.
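
The worked table above reduces to one computation: apply the same sampling fraction to every stratum.

```python
# Strata sizes and the common sampling fraction from the example above.
strata = {"A": 100, "B": 200, "C": 300}
fraction = 0.5

sample_sizes = {name: round(size * fraction) for name, size in strata.items()}
print(sample_sizes)  # {'A': 50, 'B': 100, 'C': 150}
```

Within each stratum, the 50, 100 or 150 subjects would then be chosen by simple random sampling, as required for the design to remain a probability sample.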

B- Disproportionate Stratified Random Sampling

The only difference between proportionate and disproportionate stratified random sampling is their sampling fractions. With disproportionate sampling, the different strata have different sampling fractions.

The precision of this design is highly dependent on the researcher’s allocation of sampling fractions. If the researcher makes mistakes in allotting sampling fractions, a stratum may be either overrepresented or underrepresented, which will result in skewed results.

A Cluster Sample

A cluster sample is obtained by selecting clusters from the population on the basis of simple random sampling. The sample comprises a census of each random cluster selected. Cluster sampling is a method used to enable random sampling to occur while limiting the time and costs that would otherwise be required to sample from either a very large population or one that is geographically diverse. Using this method, a one- or two-level randomization process is used; the important element in this process is that each cluster has an equal opportunity of being chosen, with no researcher or facility bias.

The area or cluster sample is a variation of the simple random sample that is particularly appropriate when the population of interest is infinite, when a list of the members of the population does not exist, or when the geographic distribution of the individuals is widely scattered.
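
A two-stage sketch of cluster sampling (the schools and students are hypothetical): stage one randomly selects whole clusters, and stage two takes a census of every unit inside each selected cluster:

```python
import random

random.seed(3)  # reproducible example

clusters = {                        # hypothetical geographic clusters
    "school_1": ["s1", "s2", "s3"],
    "school_2": ["s4", "s5"],
    "school_3": ["s6", "s7", "s8"],
    "school_4": ["s9", "s10"],
}

chosen = random.sample(list(clusters), 2)                # stage 1: clusters
sample = [s for name in chosen for s in clusters[name]]  # stage 2: census
print(chosen, sample)
```

Only two sites need visiting instead of four, which is the cost saving the text describes; no student list for the whole population is ever required.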

Non-probability sampling techniques

In this type of population sampling, members of the population do not have an equal chance of being selected. Because of this, it is not safe to assume that the sample fully represents the target population. It is also possible that the researcher deliberately chose the individuals who will participate in the study.

Non-probability sampling techniques rely on the subjective judgement of the researcher when selecting units from the population to be included in the sample. For some of the different types of non-probability sampling techniques, the procedures for selecting units to be included in the sample are very clearly defined, just as for probability sampling techniques. However, in others (e.g., purposive sampling), the subjective judgement required to select units from the population, which involves a combination of theory, experience and insight from the research process, makes selecting units more complicated. Overall, the types of non-probability sampling technique include quota sampling, purposive sampling, convenience sampling, snowball sampling and self-selection sampling.

Non-probability population sampling method is useful for pilot studies, case studies, qualitative research, and for hypothesis development.

In non-probability sampling, members are selected from the population in some nonrandom manner. Non-probability sampling includes convenience sampling, consecutive sampling, judgmental sampling, quota sampling and snowball sampling. In non-probability sampling, the degree to which the sample differs from the population remains unknown.

Convenience Sampling

In all forms of research, it would be ideal to test the entire population, but in most cases the population is just too large for it to be possible to include every individual. This is the reason why most researchers rely on sampling techniques like convenience sampling, the most common of all sampling techniques. Many researchers prefer this sampling technique because it is fast, inexpensive and easy, and the subjects are readily available.

Convenience sampling is a non-probability sampling technique where subjects are selected because of their convenient accessibility and proximity to the researcher.

The subjects are selected just because they are easiest to recruit for the study and the researcher did not consider selecting subjects that are representative of the entire population.

Convenience sampling is probably the most commonly used technique in research today. With convenience sampling, subjects are selected because of their convenient accessibility to the researcher. These subjects are chosen simply because they are the easiest to obtain for the study. This technique is easy, fast and usually the least expensive and least troublesome. A convenience sample results when the more convenient elementary units are chosen from a population for observation. Convenience sampling is used in exploratory research where the researcher is interested in getting an inexpensive approximation of the truth. As the name implies, the sample is selected because its members are convenient. This non-probability method is often used during preliminary research efforts to get a gross estimate of the results, without incurring the cost or time required to select a random sample.

Researchers use convenience sampling not just because it is easy to use, but because it also has other research advantages.

In pilot studies, convenience sample is usually used because it allows the researcher to obtain basic data and trends regarding his study without the complications of using a randomized sample.

This sampling technique is also useful in documenting that a particular quality of a substance or phenomenon occurs within a given sample. Such studies are also very useful for detecting relationships among different phenomena.

The most obvious criticism about convenience sampling is sampling bias and that the sample is not representative of the entire population. This may be the biggest disadvantage when using a convenience sample because it leads to more problems and criticisms.

Systematic bias stems from sampling bias. It refers to a constant difference between the results from the sample and the theoretical results from the entire population. It is not rare for the results from a study that uses a convenience sample to differ significantly from the results for the entire population. A consequence of having systematic bias is obtaining skewed results.

Another significant criticism of using a convenience sample is the limitation in generalization and inference-making about the entire population. Since the sample is not representative of the population, the results of the study cannot speak for the entire population. This results in low external validity for the study.

When using convenience sampling, it is necessary to describe how your sample would differ from an ideal sample that was randomly selected. It is also necessary to describe the individuals who might be left out during the selection process or the individuals who are overrepresented in the sample.

Consecutive Sampling

Consecutive Sampling is a strict version of convenience sampling where every available subject is selected, i.e., the complete accessible population is studied. This is the best choice of the Non-probability sampling techniques since by studying everybody available, a good representation of the overall population is possible in a reasonable period of time.

Consecutive Sampling is very similar to convenience sampling except that it seeks to include ALL accessible subjects as part of the sample. This non-probability sampling technique can be considered as the best of all non-probability samples because it includes all subjects that are available that makes the sample a better representation of the entire population.

Consecutive sampling is a sampling technique in which every subject meeting the criteria of inclusion is selected until the required sample size is achieved. The technique involves selecting all individuals who agree to participate, provided they meet pre-established criteria, until the number of subjects desired has been recruited.

For example, the author of this text once conducted a study of the verbal memory of adult dyslexics who were recruited by means of several techniques, including appeals through newspaper and radio advertising. In order to qualify as subjects, several criteria had to be satisfied with respect to age, IQ level, educational achievement, history of remediation, mental and physical status, and scores on standardized tests of reading ability, among other factors.

Consecutive sampling can be highly useful when the available subject pool is limited or when selection criteria are so stringent as to reduce the number of subjects to a point that threatens the generality of findings. Although consecutive sampling methods are typically stronger than other nonprobability methods in controlling sampling bias, such confounding influence cannot be ruled out. Response rate, the proportion of those selected who are willing to participate, may also influence the validity of inferences. For instance, subjects who agree to participate may have different motivations or life circumstances than those who do not.

Judgmental sampling

Judgmental sampling is a non-probability sampling technique where the researcher selects units to be sampled based on their knowledge and professional judgment.

The Judgment Sample-Judgmental sampling, also called purposive sampling or authoritative sampling, is another form of convenience sampling where subjects are handpicked from the accessible population. Subjects usually are selected using judgmental sampling because the researcher believes that certain subjects are likely to benefit or be more compliant. A judgement sample is obtained according to the discretion of someone who is familiar with the relevant characteristics of the population. It is a common non-probability method in which the researcher selects the sample based on judgment.

Judgemental sampling is used in cases where the specialty of an authority can select a more representative sample, bringing more accurate results than would be obtained by using probability sampling techniques. The process involves nothing but purposely handpicking individuals from the population based on the authority’s or the researcher’s knowledge and judgment.

Judgmental sampling design is usually used when a limited number of individuals possess the trait of interest. It is the only viable sampling technique in obtaining information from a very specific group of people. It is also possible to use judgmental sampling if the researcher knows a reliable professional or authority that he thinks is capable of assembling a representative sample.

The two main weaknesses of authoritative sampling lie with the authority and with the sampling process, both of which pertain to the reliability of, and the bias that accompanies, the sampling technique.

Unfortunately, there is usually no way to evaluate the reliability of the expert or the authority. The best way to avoid sampling error brought by the expert is to choose the best and most experienced authority in the field of interest.

When it comes to the sampling process, it is usually biased, since no randomization was used in obtaining the sample. It is also worth noting that the members of the population did not have equal chances of being selected. The consequence of this is misrepresentation of the entire population, which will then limit generalizations from the results of the study. Purposeful sampling is often used in qualitative research studies.

Quota Sampling-

Quota sampling is the non-probability equivalent of stratified sampling. Like stratified sampling, the researcher first identifies the strata and their proportions as they are represented in the population. Then convenience or judgment sampling is used to select the required number of subjects from each stratum. This differs from stratified sampling, where the strata are filled by random sampling. Quota sampling is thus a non-probability technique used to ensure equal representation of subjects in each layer of a stratified sample grouping.

It is a technique wherein the assembled sample has the same proportions of individuals as the entire population with respect to known characteristics, traits or focused phenomenon.

In addition to this, the researcher must make sure that the composition of the final sample to be used in the study meets the research’s quota criteria.

The first step in non-probability quota sampling is to divide the population into exclusive subgroups.

Then, the researcher must identify the proportions of these subgroups in the population; this same proportion will be applied in the sampling process.

Finally, the researcher selects subjects from the various subgroups while taking into consideration the proportions noted in the previous step.

The final step ensures that the sample is representative of the entire population. It also allows the researcher to study traits and characteristics that are noted for each subgroup.
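The three steps above can be sketched in Python. This is a minimal illustration, not a full design: the population, the single "gender" trait, and the 60/40 split are hypothetical, and within-stratum selection is simulated by taking members in encounter order, standing in for convenience selection:

```python
from collections import Counter

def quota_sample(population, key, sample_size):
    """Quota sampling sketch: subgroup quotas mirror population
    proportions, but members within each subgroup are taken
    non-randomly (here: in encounter order, as a stand-in for
    convenience or judgment selection)."""
    # Steps 1-2: identify subgroups and their population proportions.
    counts = Counter(key(p) for p in population)
    quotas = {g: round(sample_size * n / len(population))
              for g, n in counts.items()}
    # Step 3: fill each quota (rounding may leave quotas not summing
    # exactly to sample_size for awkward proportions).
    sample, filled = [], Counter()
    for person in population:
        g = key(person)
        if filled[g] < quotas[g]:
            sample.append(person)
            filled[g] += 1
    return sample

# Hypothetical population: 60% female, 40% male.
pop = [{"id": i, "gender": "F" if i % 5 < 3 else "M"} for i in range(100)]
s = quota_sample(pop, key=lambda p: p["gender"], sample_size=10)
```

The resulting sample of 10 contains 6 "F" and 4 "M" subjects, matching the population's 60/40 proportions; the bias risk lies entirely in how each quota is filled.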

In a study wherein the researcher would like to compare the academic performance of the different high school class levels and its relationship with gender and socioeconomic status, the researcher first identifies the subgroups.

Usually, the subgroups are the characteristics or variables of the study. The researcher divides the entire population into class levels, intersected with gender and socioeconomic status. Then, he takes note of the proportions of these subgroups in the entire population and then samples each subgroup accordingly.

The main reason why researchers choose quota samples is that it allows the researchers to sample a subgroup that is of great interest to the study. If a study aims to investigate a trait or a characteristic of a certain subgroup, this type of sampling is the ideal technique.

Quota sampling also allows the researchers to observe relationships between subgroups. In some studies, traits of a certain subgroup interact with other traits of another subgroup. In such cases, it is also necessary for the researcher to use this type of sampling technique.

It may appear that this type of sampling technique is totally representative of the population. In some cases it is not. Keep in mind that only the selected traits of the population were taken into account in forming the subgroups.

In the process of sampling these subgroups, other traits in the sample may be overrepresented. In a study that considers gender, socioeconomic status and religion as the basis of the subgroups, the final sample may have skewed representation of age, race, educational attainment, marital status and a lot more.

Sequential Sampling

Sequential sampling is a non-probability sampling technique wherein the researcher picks a single or a group of subjects in a given time interval, conducts his study, analyzes the results then picks another group of subjects if needed and so on.

The sequential sampling technique was initially developed as a tool for product quality control.  The sample size, n, is not fixed in advance, nor is the timeframe of data collection.  The process begins with the sampling of a single observation or a group of observations.  These are then tested to see whether or not the null hypothesis can be rejected.  If the null is not rejected, another observation or group of observations is sampled and the test is run again.  In this way the test continues until the researcher is confident in his or her results.

For survey sampling applications, the term sequential sampling describes any method of sampling that reads an ordered frame of N sampling units and selects the sample with specified probabilities or specified expectations. Sequential sampling methods are particularly well suited when applied with computers. They can also be applied for selecting samples of a population resulting from some other process: for example, cars coming off an assembly line, patients arriving at a clinic, or voters exiting the polls. Examples of sequential sampling schemes discussed in this entry include simple random sampling, systematic sampling, and probability proportional to size (PPS) sequential sampling.

This technique can reduce sampling costs by reducing the number of observations needed.  If a whole batch of light bulbs is defective, sequential sampling can allow us to learn this much more quickly and inexpensively than simple random sampling.  However, it is not a random sample and has other issues with making statistical inference.
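The light-bulb stopping rule can be sketched with a Wald-style sequential probability ratio test. This is a minimal illustration under assumed parameters: the "acceptable" and "unacceptable" defect rates (`p0`, `p1`) and the error levels (`alpha`, `beta`) are chosen purely for demonstration:

```python
import math

def sprt_defect_test(batch, p0=0.05, p1=0.30, alpha=0.05, beta=0.05):
    """Sequential probability ratio test sketch: inspect items one at
    a time, accumulating a log-likelihood ratio, and stop as soon as
    the evidence favours either the 'good batch' rate p0 (H0) or the
    'bad batch' rate p1 (H1)."""
    upper = math.log((1 - beta) / alpha)   # cross this: reject H0
    lower = math.log(beta / (1 - alpha))   # cross this: accept H0
    llr = 0.0
    for n, defective in enumerate(batch, start=1):
        if defective:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "reject H0 (batch defective)", n
        if llr <= lower:
            return "accept H0 (batch acceptable)", n
    return "undecided", len(batch)

# A wholly defective batch: every inspected bulb is defective.
decision, n_used = sprt_defect_test([True] * 1000)
```

With these parameters the wholly defective batch is rejected after only a couple of inspections, illustrating how sequential sampling can be far cheaper than inspecting a fixed-size random sample.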

This sampling technique gives the researcher limitless chances of fine-tuning his research methods and gaining vital insight into the study that he is currently pursuing.

If we consider all the other sampling techniques in research, the experiment and the data analysis boil down to either accepting the null hypothesis or rejecting it in favor of the alternative hypothesis.

In the sequential sampling technique, there exists a third option. The researcher can accept the null hypothesis, accept the alternative hypothesis, or select another pool of subjects and conduct the experiment once again. This means the researcher can obtain an unlimited number of subjects before finally deciding whether to accept the null or the alternative hypothesis.

The researcher has limitless options when it comes to sample size and sampling schedule. The sample size can be relatively small or excessively large, depending on the researcher's decision making. The sampling schedule is also completely dependent on the researcher, since a second group of samples can only be obtained after the experiment has been conducted on the initial group.

As mentioned above, this sampling technique enables the researcher to fine-tune his research methods and results analysis. Due to the repetitive nature of this sampling method, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.

There is very little effort on the part of the researcher when performing this sampling technique. It is not expensive, not time-consuming, and not workforce-intensive.

This sampling method is hardly representative of the entire population. Its only hope of approaching representativeness is when the researcher chooses a sample size large enough to represent a substantial fraction of the entire population.

The sampling technique is also hardly randomized, which contributes to its low degree of representativeness.

Due to the aforementioned disadvantages, results from this sampling technique cannot be used to create conclusions and interpretations pertaining to the entire population.

Be sure to understand the limitations of the technique.  Sequential sampling is not a probabilistic sampling option.  It can lead to valid statistical conclusions, but the means by which these are obtained are separate from probabilistic sampling techniques.

Systematic Sampling

Systematic sampling is a random sampling technique which is frequently chosen by researchers for its simplicity and its periodic quality.

Systematic sampling is a type of probability sampling method in which sample members from a larger population are selected according to a random starting point and a fixed periodic interval. This interval, called the sampling interval, is calculated by dividing the population size by the desired sample size. Despite the sample population being selected in advance, systematic sampling is still thought of as being random if the periodic interval is determined beforehand and the starting point is random.

In systematic random sampling, the researcher first randomly picks the first item or subject from the population. Then, the researcher selects every nth subject from the list.

The procedure involved in systematic random sampling is very easy and can be done manually. The results are representative of the population unless certain characteristics of the population repeat for every nth individual, which is highly unlikely.

Since simple random sampling of a population can be inefficient and time-consuming, statisticians turn to other methods, such as systematic sampling. Choosing a sample through a systematic approach can be done quickly. Once a fixed starting point has been identified, a constant interval is selected to facilitate participant selection.

For example, if you wanted to select a random group of 1,000 people from a population of 50,000 using systematic sampling, all of the potential participants must be placed in a list and a starting point would be selected. Once the list is formed, every 50th person on the list, starting the count at the selected starting point, would be chosen as a participant, since 50,000/1,000 = 50. For example, if the selected starting point was 20, the 70th person on the list would be chosen followed by the 120th, and so on. Once the end of the list was reached, if additional participants are required, the count loops to the beginning of the list to finish the count.
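The selection rule in this example can be sketched in a few lines of Python; the wrap-around at the end of the list is included for completeness, even though it is not triggered with these particular numbers:

```python
def systematic_sample(population_size, sample_size, start):
    """Systematic sampling sketch: a random starting position plus a
    fixed interval (population_size // sample_size), wrapping to the
    beginning of the list if the end is reached."""
    interval = population_size // sample_size
    return [(start + i * interval) % population_size
            for i in range(sample_size)]

# 1,000 participants from a population of 50,000, starting at 20:
positions = systematic_sample(50_000, 1_000, start=20)
# positions begins 20, 70, 120, ...
```

In practice `start` would be drawn at random from the first interval (here, 0 to 49), which is what makes the procedure a probability sampling method.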

Within systematic sampling, as with other sampling methods, a target population must be selected prior to selecting participants. A population can be identified based on any number of desired characteristics that suit the purpose of the study being conducted. Some selection criteria may include age, gender, race, location, education level and/or profession.

The process of obtaining the systematic sample is much like an arithmetic progression.

1. Starting number:
The researcher selects an integer that must be less than the total number of individuals in the population. This integer will correspond to the first subject.

2. Interval:
The researcher picks another integer which will serve as the constant difference between any two consecutive numbers in the progression.

The interval is typically selected so that the researcher obtains the correct sample size.

For example, suppose the researcher has a population of 100 individuals and needs 12 subjects. He first picks his starting number, 5.

Then the researcher picks his interval, 8. The members of his sample will be individuals 5, 13, 21, 29, 37, 45, 53, 61, 69, 77, 85, 93.
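This arithmetic progression is easy to reproduce programmatically, a two-line sketch of the example above:

```python
# Systematic sample as an arithmetic progression:
# start 5, constant interval 8, 12 subjects from a population of 100.
start, interval, n_subjects = 5, 8, 12
members = [start + i * interval for i in range(n_subjects)]
# members == [5, 13, 21, 29, 37, 45, 53, 61, 69, 77, 85, 93]
```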

Other researchers use a modified systematic random sampling technique wherein they first identify the needed sample size. Then, they divide the total population size by the sample size to obtain the sampling fraction. The sampling fraction is then used as the constant difference between subjects.

Risks Associated with Systematic Sampling

One risk that statisticians must take into account when conducting systematic sampling involves how the list used with the sampling interval is organized. If the population placed on the list is organized in a cyclical pattern that matches the sampling interval, the selected sample may be biased. For example, a company’s human resources department wants to pick a sample of employees and ask how they feel about company policies. Employees are grouped in teams of 20, with each team headed by a manager. If the list used to pick the sample size is organized with teams clustered together, the statistician risks picking only managers (or no managers at all) depending on the sampling interval.

Advantage and Disadvantage of systematic sampling

  • The main advantage of using systematic sampling over simple random sampling is its simplicity. It allows the researcher to add a degree of system or process to the random selection of subjects.
  • Another advantage of systematic random sampling over simple random sampling is the assurance that the population will be evenly sampled. Simple random sampling allows the possibility of a clustered selection of subjects; this is systematically eliminated in systematic sampling.
  • The main disadvantage is that the process of selection can interact with a hidden periodic trait within the population. If the sampling interval coincides with the periodicity of the trait, the sampling is no longer random and the representativeness of the sample is compromised.

Since systematic random sampling is a type of probability sampling, the researcher must ensure that all the members of the population have equal chances of being selected as the starting point or the initial subject.

The researcher must be certain that the chosen constant interval between subjects does not reflect a pattern of traits present in the population. If such a pattern exists and coincides with the interval set by the researcher, the randomness of the sampling technique is compromised.

Snowball or Chain Sampling

This technique identifies cases of interest through people who know people who know which cases are information-rich, that is, good examples for study and good interview subjects. It is commonly used in studies of hard-to-reach groups such as homeless households: the researcher locates one subject, who points to others, who in turn point to still more, and the chain continues. Snowball sampling is a special non-probability method used when the desired sample characteristic is rare and it may be extremely difficult or cost-prohibitive to locate respondents. It relies on referrals from initial subjects to generate additional subjects. While this technique can dramatically lower search costs, it comes at the expense of introducing bias, because the technique itself reduces the likelihood that the sample will represent a good cross-section of the population.

Snowball sampling is a non-probability sampling technique that is used by researchers to identify potential subjects in studies where subjects are hard to locate.

Researchers use this sampling method if the sample for the study is very rare or is limited to a very small subgroup of the population. This type of sampling technique works like chain referral. After observing the initial subject, the researcher asks for assistance from the subject to help identify people with a similar trait of interest.

The process of snowball sampling is much like asking your subjects to nominate another person with the same trait as your next subject. The researcher then observes the nominated subjects and continues in the same way until obtaining a sufficient number of subjects.

For example, in a study of a rare disease, the researcher may opt to use snowball sampling since subjects will be difficult to obtain otherwise. Patients with the same disease may also belong to a support group; observing one of its members as the initial subject will then lead to more subjects for the study.
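The chain-referral process can be sketched as a breadth-first walk over a referral network. Everything here is hypothetical: the `referrals` mapping stands in for "who each subject nominates", and the names are invented:

```python
import random

def snowball_sample(referrals, seeds, target_size, rng=random.Random(42)):
    """Snowball sampling sketch: starting from seed subjects, each
    observed subject nominates acquaintances with the same trait
    (taken from a hypothetical `referrals` mapping) until enough
    subjects have been obtained."""
    sample, queue = [], list(seeds)
    seen = set(seeds)
    while queue and len(sample) < target_size:
        subject = queue.pop(0)
        sample.append(subject)           # observe this subject
        nominees = [p for p in referrals.get(subject, []) if p not in seen]
        rng.shuffle(nominees)            # nomination order is arbitrary
        queue.extend(nominees)
        seen.update(nominees)
    return sample

# Hypothetical support-group network for a rare condition.
net = {"ana": ["ben", "cara"], "ben": ["dev"],
       "cara": ["ben", "eli"], "dev": [], "eli": []}
group = snowball_sample(net, seeds=["ana"], target_size=4)
```

Note how every subject after the seed is reachable only through earlier subjects' referrals, which is precisely why the sample tends to cluster around the seed's social circle.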

Types of Snowball Sampling

  • Linear Snowball Sampling

  • Exponential Non-Discriminative Snowball Sampling

  • Exponential Discriminative Snowball Sampling

Advantages and Disadvantages of Snowball Sampling

  • The chain referral process allows the researcher to reach populations that are difficult to sample when using other sampling methods.
  • The process is cheap, simple and cost-efficient.
  • This sampling technique needs little planning and a smaller workforce compared to other sampling techniques.
  • The researcher has little control over the sampling method. The subjects that the researcher can obtain rely mainly on the previous subjects that were observed.
  • Representativeness of the sample is not guaranteed. The researcher has no idea of the true distribution of the population and of the sample.
  • Sampling bias is also a fear of researchers when using this sampling technique. Initial subjects tend to nominate people that they know well. Because of this, it is highly possible that the subjects share the same traits and characteristics, thus, it is possible that the sample that the researcher will obtain is only a small subgroup of the entire population.
