
A Sociology of Alzheimer’s Disease: Questioning the Etiology

DOI: 10.31038/ASMHS.2022612

Abstract

Even though French sociology has long been interested in Alzheimer’s disease, most studies have been carried out “for” or “in support of” disease treatment, with the aim of analyzing the impact of the disease on the life of patients. This article offers some elements for the sociological study “of” Alzheimer’s disease. Based on a literature analysis centered on the information file on Alzheimer’s disease published by INSERM and on scientific articles and communications addressing the etiology of the disease, this paper aims to show how the entity “Alzheimer’s disease” is constructed today. After examining the way the figures on the disease have been produced, it will show how the etiology is constituted by advances in “diagnostic techniques” and research protocols.

Keywords

Sociology, Alzheimer, Etiology

Introduction

It is now nearly 30 years since the first studies on Alzheimer’s disease referring to sociology or drawing on its methods were published. Apart from the work of [1], these early articles were often written by doctors working in gerontology and public health [2,3], using methodologies derived from the social sciences (e.g. focus groups). They focused, among other things, on the way in which the disease affects the patient’s life course and social network and on the way in which he or she learns to cope with the illness. In France, the first sociological publications dealing specifically with Alzheimer’s disease date from the early 2000s [4,5]. After the disease was placed on the political agenda following the 2001 Girard report [6], and with incentives being given to conduct multi- or even inter-disciplinary research, numerous sociological research projects developed on the “big issues” model, reflecting the influence of the Anglo-Saxon model on French research.

To obtain funding, sociologists have thus been invited to submit proposals to calls from large associations such as Médéric Alzheimer or France Alzheimer, or from public funders such as the Caisse Nationale de Solidarité pour l’Autonomie (CNSA – National Solidarity Fund for Autonomy) and the Fondation de Coopération Scientifique pour le plan Alzheimer (Foundation for Scientific Cooperation for the Alzheimer Plan). They have also taken part in programmes led by biomedical science or clinical research laboratories. Patient associations have been largely instrumental in ensuring that research should not confine itself to providing medical answers but should also address the specific social needs of patients. The third (2004-2007) and fourth (2008-2012) Alzheimer Plans resulted in calls for even more inter-disciplinary research.

In such a context, French sociological research has mostly worked “for” or “in support of” disease treatment rather than undertaking the sociological study “of” the disease: while sociological work “in support of” Alzheimer’s disease treatment mainly aims to shed light on the experience of the patient (which is often poorly understood by medical professionals), a sociology “of” the disease needs to examine the historical, social and scientific construction of what is called Alzheimer’s disease. Despite their theoretical and methodological differences, the former studies shared the common objective of furthering understanding of the disease. They have challenged strictly medical interpretations by showing, among other things, the way in which social context and social status (gender, age, family or professional status) affect the announcement and reception of the diagnosis, adherence to treatment and, more broadly, the strategies devised by patients and their relatives to cope with their condition. They have also shed light on the experience of caregivers and patients, which had hitherto been an overlooked area of study [7]. This type of sociological work could also be described as the sociology of sickness or illness, with “sickness” referring to the social role of sick people as defined by their relatives and professional colleagues and “illness” to the subjective experience of patients.

This paper, on the other hand, undertakes a sociological analysis of Alzheimer’s disease in the sense that it aims to question the way “Alzheimer’s disease” – a disease with biological and/or clinical specificities – has been constituted. Its approach is inspired by the sociology of science as practised by [8] and Lock [9]. Based on the idea that “the facts of science are made, constructed, modeled and refined to produce data and a stable meaning” ([8], p. 182) and that the sociologist’s role is to describe and decode them (Pestre, ibid.), I wish here to examine a few elements related to the etiology of the disease. In order to do so, I use the information file on Alzheimer’s disease published by the Institut National de la Santé et de la Recherche Médicale (INSERM – National Institute of Health and Medical Research). This file synthesizes the scientific knowledge on the disease and is mainly, though not exclusively, based on French research. As it is produced by the leading health research centre in France and signed by prominent French specialists in the field, it carries scientific authority [10]. The aim of this paper is to examine the information presented in this report in the light of scientific publications and presentations and to show what data and hypotheses it rests on. I will first look at the figures (the prevalence) of the disease and their links with age. This will shed light on the way the figures have been “constructed” and on the way age has been used as one of the “explanatory” factors of the disease. I will then discuss the issues raised, among other things, by advances in “diagnostic techniques” as to the etiology of the disease. Finally, I will suggest that some of the methodological limitations in the clinical investigation of sporadic forms result in the development of scientific protocols that ultimately reinforce the idea of biological and genetic causality.

Age and the Number of Patients

Age is very often used in the literature on Alzheimer’s disease to account for the prevalence of the disease among different age groups as well as to generate hypotheses as to its etiology.

Estimated Prevalence of the Disease by Age

After a short introduction, the section “understanding the disease” in the INSERM file opens with the following paragraph:

Rare before the age of 65, Alzheimer’s disease begins with memory loss, followed over the years by more general and disabling cognitive disorders (…) After 65, the frequency of the disease is 2 to 4% of the general population, rising rapidly to reach 15% of the population at age 80. About 900,000 people suffer from Alzheimer’s disease in France today. The number should reach 1.3 million in 2020, given the increase in life expectancy.

It should first be observed that there are no explanations given for the two age limits chosen – 65 and 80 – which merely seem to refer to the distinction that is commonly made in everyday language between senior citizens and elderly dependents. Age is only considered here in a chronological way. This understanding of age thus seems to derive from convention rather than scientific results or hypotheses about biological deterioration or the effects of social or psychological aging.

As for prevalence, a reading of the scientific papers helps trace the way the figures were established. The most widely cited paper [11] (cited about 150 times) estimated the proportion of sick people at 17.8% among people aged 75 or more in 2003, which amounted to 769,000 people. A later paper [12], also much cited (47 times), indicated that there were about 850,000 cases of Alzheimer’s disease and related syndromes at the time. To support these figures, the second paper referred to the same source as the first, a survey entitled Personnes âgées quid (Paquid). This population-based cohort study, initiated in 1988, targeted 3,777 people aged 65 or more in towns and villages of the Gironde and Dordogne departments and consisted of an epidemiological study of cognitive and functional aging. Dementia and its level of severity were measured using a clinical test, the Mini Mental State (MMS). The number of sick people given by the Paquid survey was then estimated through a projection by age onto the general population. The 900,000 cases announced in the INSERM report thus do not correspond to diagnosed cases – I will come back to the meaning of this term below – but to estimates based on a clinical test carried out on a limited sample, as pointed out by [13].
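To make the projection step concrete, here is a minimal sketch in Python of how age-specific prevalence rates measured in a cohort can be extrapolated to a national case count. The age bands, prevalence rates and population counts below are illustrative placeholders, not the actual Paquid or census figures; only the mechanics of the calculation are of interest.

```python
# A minimal sketch of the age-projection step described above. The prevalence
# rates and population counts are illustrative placeholders, NOT the actual
# Paquid or INSEE figures.

# hypothetical prevalence of dementia estimated in the cohort, by age band
prevalence_by_age = {
    "65-74": 0.02,   # 2% (assumed)
    "75-84": 0.12,   # 12% (assumed)
    "85+":   0.30,   # 30% (assumed)
}

# hypothetical number of people in the general population, by the same bands
population_by_age = {
    "65-74": 6_000_000,
    "75-84": 4_000_000,
    "85+":   1_500_000,
}

# projection: apply each band's sample-based rate to the national population
estimated_cases = sum(
    prevalence_by_age[band] * population_by_age[band]
    for band in prevalence_by_age
)

print(f"Estimated national cases: {estimated_cases:,.0f}")
# -> 1,050,000 with these made-up inputs; the point is that the headline
#    figure is an extrapolation from a clinical test on a limited sample,
#    not a count of diagnosed patients.
```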

Population and Diagnostic Differences Behind Age

As Ankri pointed out, the epidemiological studies giving the incidence and prevalence rates of the disease by age present strong methodological limitations. Apart from the fact that it is difficult to constitute representative samples, they are based on very different patient populations. Ankri explains that the estimated rates often result from the collection of data from surveys of populations affected by very different physio-pathological types of dementia. Moreover, the diagnoses and measurement tools differ depending on the protocols used:

Estimates are most frequently based on nonrepresentative samples and case identification procedures vary with the evolution of diagnostic criteria and the availability of imaging or biological markers. Moreover, whether studies of mild or severe forms of dementia or residents of institutions are included or not can have a strong impact on the results (Ankri, op.cit. p. 458)

While age groups are treated as uniform categories, they include people suffering from different degrees and sometimes even different types of dementia. Under chronological age are subsumed very different cases, as if people from the same age group were “medically comparable”, even though a great number of different risk factors can be associated with different types of dementia (vascular, Lewy body, Alzheimer’s). Moreover, from a methodological point of view, the motivations for taking part (or refusing to take part) in a survey are known to be diverse and to have an impact on the results of clinical tests. When age – viewed merely in its chronological aspect – is used to constitute groupings, researchers lose sight of its social dimension and of its potential influence on the data collected.

Age: A Mere Variable or a Risk Factor?

When these epidemiological data are used, chronological age is considered a risk factor since the incidence rate seems to increase with age. In statistical terms, age even appears to be the main risk factor. Let us look more closely at the findings of epidemiology [14]. Studies based on cohort analysis confirm that age is the main risk factor and add that incidence doubles “practically for every five-year age group after 65” (ibid., p. 738). They also confirm that the incidence rate is higher among women, while indicating that “In the Paquid survey, the incidence of Alzheimer’s disease was higher among men than women before 80, whereas the reverse was true after 80” (ibid., p. 739). The difference between men and women is accounted for in the following way:

Life expectancy, which is higher for women than for men, might explain these results, assuming that the men who reach advanced ages are more resistant to neurodegenerative diseases. It can be observed that in some countries, like the United States, where the gap between men’s and women’s life expectancy is smaller, there is no gender difference in the incidence of Alzheimer’s disease. (Ibid., p. 739)

This is a classical hypothesis in longevity research, which suggests that the selection effect might be stronger for men and that, as a result, only the most physically and cognitively robust men reach advanced ages [15].

Moreover, interpreting these age-related findings is a complex task, because a risk factor denotes a notable frequency of joint occurrence of two variables – here, age and a negative result in a clinical and/or neuropsychological test such as the MMS. A risk factor is measured for a population and implies no causality at the individual level. In order to interpret this correlation as causality, other aspects of age must be considered besides its mere chronological reality.
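As a purely numerical illustration of the doubling claim cited earlier in this subsection, the following Python sketch models an incidence rate that doubles with every five-year age group after 65. The baseline rate is an arbitrary assumption, not a published figure; the point is only to show how quickly such a curve rises, which is why the scarcity of data on the very elderly, discussed below, matters.

```python
# Illustration of the epidemiological claim quoted above: if incidence roughly
# doubles with every five-year age group after 65, the age-specific incidence
# follows an approximately exponential curve. The baseline rate below is an
# arbitrary placeholder, not a published figure.

baseline_incidence_at_65 = 0.005   # assumed rate per person-year at age 65

for age in range(65, 96, 5):
    doublings = (age - 65) / 5
    incidence = baseline_incidence_at_65 * 2 ** doublings
    print(f"age {age}: ~{incidence:.3f} per person-year")

# With this arbitrary 0.5% baseline, the modelled rate reaches ~32% per
# person-year at 95, so whether the curve keeps rising exponentially or
# flattens at the highest ages cannot be settled without data on the very
# elderly, who are largely absent from the cohorts discussed here.
```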

Age as a Cause of the Disease?

Since the literature on Alzheimer’s disease is mostly based on medical and biological research, age is viewed from a physiological point of view. The passage of time is considered responsible for physiological wear and tear and for biogenetic damage and alterations of the human body and brain. It is believed, then, that alterations multiply with age, which is why the prevalence of dementia is understood to increase with chronological age. According to some researchers [16], most of the clinical studies that have investigated the cognitive capacities of centenarians have concluded that 50 to 75% of them suffered from “cognitive impairments”. Although they are a rapidly growing population, centenarians have so far rarely been considered as subjects for the study of Alzheimer’s disease. There are many reasons for this. First, they are considered to be statistically too few in number and to have too short a life expectancy for cohort analysis; second, they seem difficult to study. Ankri [13] drew the following conclusion:

Finally, because of the increase of prevalence and incidence with age, another source of uncertainty lies in the low representation of the very elderly (over 90 years old) in epidemiological studies, which makes estimation of prevalence and incidence at the most advanced ages uncertain. (…) the lack of data about the very elderly leaves two questions open: either there is an exponential increase of incidence in dementia with age, which means for some that it is an aging-related phenomenon rather than a disease; or the decrease of incidence beyond a certain age, after quasi-exponential growth, shows that it is rather an age-related disease.

The quasi-linear increase of dementia prevalence with age remains a major focus of reflection, since it raises questions about the very essence of what is called “Alzheimer’s disease”. According to some researchers [17], what is called Alzheimer’s disease is not in fact a disease (i.e. a clearly defined pathology with a proven etiology) but rather a syndrome, i.e. a set of more or less unified symptoms grouped under the same generic term. These symptoms might then simply be an effect of senescence and manifest themselves with great interindividual variability. This hypothesis is all the more plausible if the diagnosis – and the subsequent labelling process [18] – rests on clinical tests in which failure is correlated with senescence. In line with [19], this hypothesis questions the social and political construction of the disease, which is based on a distinction between Alzheimer’s disease, senility and senescence.

In this context, diagnosis is a crucial stage in distinguishing between “Alzheimer’s disease” and other possible causes of dementia. Yet diagnostic procedures are intrinsically linked to the etiology of the disease: each depends on the other.

The Etiology and Diagnosis of the Disease

To understand “diagnostic procedures” and the etiological issues they raise, it is useful to trace the history of the way the disease was defined.

Senile or Presenile Dementia?

One of the oldest and most famous debates about the etiology of Alzheimer’s disease also has to do with the link between age and disease. In his history of Alzheimer’s disease, [20] charts the process of construction of what is called “Alzheimer’s disease” and points out that it was first considered as “presenile dementia”. There were many reasons for this. Continuing the work of Aloïs Alzheimer on the “first patient”, Auguste D, Perusini observed correspondences as well as morphological (cerebral modifications) and symptomatic differences with senile dementia. Yet this is not what really motivated the distinction. Berrios (1989, quoted by Gzil, op. cit.) points out that the anatomopathological features (amyloid plaques and neurofibrillary tangles) that Aloïs Alzheimer considered to be possible specificities had already been identified by Fischer, who considered them to be relatively frequent occurrences in dementia in elderly people. Fischer therefore proposed the name “presbyophrenic dementia” for all types of senile and presenile dementia in which plaques and sometimes fibrillary alterations could be observed. The reasons why Alzheimer’s disease was distinguished from senile dementia lie first in the fact that Aloïs Alzheimer had no occasion to conduct histological examinations of elderly patients (as he himself recognized). Another reason was the then popular conception of mental illness, inherited from Kahlbaum (Kraepelin’s mentor, Kraepelin being himself Alzheimer’s mentor), according to which there were specific diseases for every stage of life. As Gzil points out, in the 19th century many psychiatrists believed that mental disease was related to age.

The table presented by [20] listing the cases of Alzheimer’s disease published between 1907 and 1914 can provide further insights. The table lists 22 cases, with the youngest patient having been diagnosed at 32 and the oldest at 63. The average age at diagnosis was 57 and, apart from 3 cases, all were diagnosed after 48. Today most of these people would be considered young patients, but what did these ages mean in the early 20th century in biological, demographic and social terms? In demographic terms, with life expectancy at birth being about 50 at the time, it is debatable whether these patients could be described as young. The average age at diagnosis was thus about 7 years above the life expectancy at birth of the period; an average age at diagnosis standing 7 years above today’s life expectancy at birth would be 87. Would people diagnosed at that age be considered young patients? Moreover, biologically (wear and tear) and sociologically (status and role in society) speaking, were these people young? It is quite difficult to answer this question, which in turn raises the issue of how to define old age [21].

Whether Alzheimer’s disease was a form of senile or presenile dementia was thus not decided on the basis of age but on the basis of the anatomopathological features of the disease. While clinicians believed there were two separate diseases, anatomopathologists justified, at the end of the 1960s, the “merging” of the two on account of their shared biological manifestations. [19] showed that community and pharmaceutical lobbying also supported this classification under a single label. Today, the only age-related distinction is based on genetic arguments and establishes a separation between autosomal (genetic) and sporadic forms.

Biological Markers: The Causes of the Disease?

The features that Aloïs Alzheimer identified in Auguste D (and Fischer in other patients), i.e. amyloid plaques and neurofibrillary degeneration, are still considered today as the hallmarks of Alzheimer’s disease. The INSERM file indicates that:

Study of the brains of patients with Alzheimer’s disease shows the presence of two types of lesions which make diagnosis of Alzheimer’s disease a certainty: amyloid plaques and neurofibrillary degeneration.

It is important to insist on the fact that these biological features are what makes the diagnosis certain, because diagnosing the disease is not an easy task, as Pr. Philippe Amouyel, one of the French experts on the disease, explained: “Today, we are used to referring to any memory disorder as Alzheimer’s disease while in reality, it takes a very long, very complex work to make a diagnosis” [22]. While in Aloïs Alzheimer’s time such “alterations” (amyloid plaques and neurofibrillary tangles) could only be identified post mortem, new medical techniques have been developed to trace the lesions held to be at the root of the cognitive disorders identified through clinical tests. Two main types of “diagnostic techniques” can be distinguished: the identification of biological and/or genetic markers through lumbar puncture, and medical imaging. These examinations are performed on living subjects, either subjects experiencing clinically assessed health problems or healthy subjects being tested for research purposes. As underlined by some publications [23], the possibilities offered by these technical advances have reinforced a biological understanding of the disease, in which biomarkers are considered both as signs and as causes of the disease. This so-called improvement in diagnostic certainty actually results in enhancing the biological aspects of “Alzheimer’s disease” and supporting an etiology based on the “amyloid cascade” hypothesis. This hypothesis posits that the deposition of amyloid-beta peptide in the brain leads to brain disorders. Although this hypothesis is sometimes debated [24], the causal process it describes constitutes the focus of most research today. The INSERM file specifies that:

Amyloid beta protein, naturally present in the brain, accumulates over the years under the influence of various genetic and environmental factors, until it forms amyloid plaques (also called “senile plaques”). According to the “amyloid cascade” hypothesis, it would seem that the accumulation of this amyloid peptide induces toxicity in nerve cells, resulting in increased phosphorylation. (…) Hyperphosphorylation of tau protein leads to a disorganization of neuron structure and so-called “neurofibrillary” degeneration which will itself lead, in the long run, to the death of the nerve cell.

While a few years ago diagnosis was based on the clinical signs of the disease, clinical-biological criteria are used today, leading to the ATN classification system: amyloid deposition (A), Tau protein (T) and neurodegeneration (N, i.e. cerebral modifications) are considered both as biomarkers and as causes of the disease. Medical neuroimaging (magnetic resonance imaging and positron emission tomography) makes it possible to visualize cerebral atrophy and hypometabolism, which are considered as signs of neuronal and synaptic dysfunction [25].
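The following Python sketch is a deliberately simplified illustration of the kind of profile-based classification the ATN system implies: three binary biomarker axes combined into a label. The category names are a rough paraphrase of the research framework described above, not clinical criteria, and the thresholds that would make each axis “positive” are deliberately left out.

```python
# A simplified, illustrative sketch of ATN-style biomarker profiling, loosely
# based on the classification logic described above. The category labels are
# a rough paraphrase of the research framework, not clinical criteria.

from dataclasses import dataclass

@dataclass
class BiomarkerProfile:
    amyloid: bool            # A: abnormal amyloid deposition (e.g., CSF or PET)
    tau: bool                # T: abnormal tau (e.g., phosphorylated tau in CSF)
    neurodegeneration: bool  # N: atrophy or hypometabolism on imaging

def atn_label(p: BiomarkerProfile) -> str:
    """Return a coarse ATN-style category for a biomarker profile."""
    if not p.amyloid and not p.tau and not p.neurodegeneration:
        return "normal biomarkers"
    if p.amyloid and p.tau:
        return "Alzheimer's disease (biological definition)"
    if p.amyloid:
        return "Alzheimer's pathologic change"
    return "suspected non-Alzheimer's pathologic change"

# A person with amyloid and tau positivity but no symptoms would still be
# classified as 'Alzheimer's disease' under a purely biological definition,
# which is precisely the etiological shift discussed in this section.
print(atn_label(BiomarkerProfile(amyloid=True, tau=True, neurodegeneration=False)))
```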

From a clinical point of view, it is important to detect the biomarkers at an early stage in order to identify “the people who have these biomarkers and are worried about their memory and to offer them, long before they decline, long before they enter the clinical disease stage, strategies to avoid cognitive decline” (Dr. Audrey Gabelle, Pr. of neurology and neuroscience, University of Montpellier, 01/04/2021).

Biological Lesions and Clinical Disorders: An Etiological Paradox?

The significance of early detection rests on the theory that there is a prodromal stage of Alzheimer’s disease in which biological signs are present in the brains of the “patients” even though they do not experience any problem or present any clinically identifiable symptom. Yet some studies have suggested that there is no such clear link between biomarkers and clinically assessed disorders:

Several studies have shown that the extent of neuropathological changes and the degree of cognitive impairment were poorly related in the very elderly. In examinations conducted on centenarians it has been shown that several subjects did not present any cognitive impairment despite extensive neuropathological abnormalities and conversely, that several subjects who presented significant cognitive impairment did not have neuropathological abnormalities. In this context, even beyond the issue of correct interpretation of the epidemiological data, some have raised the conceptual question of whether dementia should be considered as an age-related phenomenon (generally occurring around a specific age) or a normal consequence of aging [26].

On this point, the “Nun Study” sparked considerable discussion in the scientific literature, especially the case of Sister Mary [27]. The study was based on a population of 678 nuns aged from 75 to 103. It focused on nuns in order to better control for the environmental (social status) and behavioral (tobacco and alcohol consumption) factors that can have an impact on cognitive impairment. Sister Mary died at 101. Until her death, she had high scores in cognitive tests and appeared to be “cognitively intact”. Yet the autopsy of her brain (at a time when the currently used “diagnostic techniques” were not as advanced as they are now) revealed large numbers of neurofibrillary tangles and amyloid plaques. Sister Mary is not, in fact, an isolated case. Several studies based on post mortem anatomopathological data have shown that in a significant number of cases there is no link between the presence or absence of amyloid plaques and neurofibrillary degeneration and the presence or absence of cognitive disorders. The study conducted by Zekri et al. (2005) on 209 autopsied subjects (100 demented and 109 non-demented subjects) indicated that “even more surprising were the observations made in 109 non-demented subjects: in 33% of the cases, the density of neurofibrillary degeneration of the isocortex was equivalent to that of demented subjects” (p. 253). In this study, the brains of a third of the subjects with no clinical sign of dementia showed the same biophysiological markers as those of subjects with Alzheimer’s disease.

While the idea that there is a pre-symptomatic stage of the disease has been challenged by these studies, on account of the mismatch between the number of lesions and the presence of cognitive disorder, some suggest that this paradox might be explained through the notions of brain plasticity and cognitive reserve. They believe that some brains have the ability to offset or stave off lesions and continue to function “in a normal way”.

Towards a “Geneticization” of Alzheimer’s Disease?

Another explanation is also used to account for this paradox, one whose effect is to redefine the etiology and to reinforce the idea that the disease might have genetic origins.

Genetic Causes for the Appearance of Biological Lesions?

In their analysis of the origins of Alzheimer’s disease, some researchers [28] underline the fact that while there is no correlation between the presence of amyloid peptide and the existence of symptoms, the symptoms are correlated with neuronal death, which they believe is caused by an abnormal amount of Tau protein. This leads to a rather different causal pattern. In this perspective, genetic factors – particularly the APOE gene [29] – and environmental factors are believed to be responsible for the amyloid cascade and the abnormal production of amyloid peptide affecting Tau protein and leading to neuronal death. This gives rise to a much clearer causal pattern with the following successive, rather than concomitant, stages: genetic (and environmental) factors → amyloid → Tau → neuronal death → clinical symptoms. It should be said that this causal pattern is a matter of debate among researchers, for several reasons. First, some studies [30] point out that the causal succession of amyloid plaques and Tau phosphorylation must be reexamined, since Tau protein can appear before the plaques do. Moreover, accumulated Tau protein can also be found in “the brains of elderly and cognitively healthy subjects but in relatively moderate quantities” (Wallon, op. cit.). Yet these observations do not call into question the idea that there is a pre-symptomatic stage during which the disease develops in invisible ways. Much-cited hypotheses and models [31] have been proposed to describe this development process, but researchers do not yet have sufficient longitudinal data to confirm them.

From Genetic Models to Sporadic Forms

Faced with this methodological problem, which makes it difficult to confirm or refute the hypotheses and models being discussed, some researchers have turned to genetic models. The first genetic model is based on autosomal Alzheimer’s disease. According to the INSERM file: “Hereditary forms of Alzheimer’s disease account for 1.5% to 2% of the cases. They almost always occur before 65, often around 45 years old. In half of the cases, rare mutations have been identified as the root of the disease”. Researchers have been able to follow the evolution of the disease in these patients carrying a rare genetic marker causing the development of lesions (amyloid and Tau), leading them to think that the pathology might begin 15 years before the clinical signs appear. On this basis, the genetic forms of the disease have been treated as a model for approaching sporadic forms. Yet this approach can be questioned, since in the general population 50% of study subjects with biomarkers (amyloid and Tau) of the disease did not develop any symptom over a ten-year period [32].

The other genetic model used is an animal model. Several studies of Alzheimer’s disease, including those which gave rise to the amyloid cascade hypothesis [33], are based on experiments conducted on mice or on other animals such as mouse lemurs (a small primate). They rely on the assumption that the results obtained from mouse brains can be “transferred” to the human brain. Yet comparing the two is by no means easy, since mice do not “naturally” develop Alzheimer’s disease as it is defined today, and it is debatable whether the clinical tests performed on animals can be assimilated to those used to make a diagnosis in human subjects. The mice used in laboratories are “models”, i.e. they have been genetically modified so as to develop an Alzheimer’s-like pathology. The immunotherapy studies carried out by [34] make this point very clear. The mice used, APPswe/PS1ΔE9 models, overexpressed mutated forms of the human APP gene and the human PSEN1 gene and were compared to so-called “wild-type” mice from the Jackson Laboratory.

In addition to the fact that this model appears to be far removed from the reality of sporadic Alzheimer’s disease, it is also questionable whether its results can be used because it completely overlooks “environmental” risk factors in order to promote an exclusively genetic explanation. The limitations inherent in investigations of human and sporadic forms of the disease thus result in the construction of models which are based on comparison and end up eliminating one of the factors that was initially considered as responsible for the disease. This paper suggests that the development of such models is to be understood within a broader movement towards defining the elderly as biologically specific individuals.

Conclusion

Through analysis of the etiological construction of Alzheimer’s disease, this paper provides some insights for a sociological study of Alzheimer’s disease, following previous work in anthropology [35] and sociological studies of other biomedical subjects such as procreation [36]. This approach reveals that natural sciences – however hard they may be considered to be – also construct their research subjects on the basis of technical advances and out of the necessity of bypassing existing methodological obstacles.

This paper has shown that the way age is understood and used in research on Alzheimer’s disease can result in shortcuts, whereby statistical correlations are transformed into causal links, and in classification of the patients into falsely unifying categories. It has also questioned the boundaries between early dementia, late dementia and senescence by showing that the difficult interpretation of chronological age and the almost total lack of data about certain age groups are barriers to reflection and raise questions as to the very nature of what we call “Alzheimer’s disease”.

The medicalization of society, long observed by health sociologists, is today compounded, in the case of Alzheimer’s disease (but not only there), by increasingly biological [37] and genetic interpretations of the human being. Yet, while the study of the biomarkers triggering the amyloid cascade can yield helpful results, this line of research needs to be carefully scrutinized, just as research seeking to identify prognostic biomarkers for psychiatric disorders in children has been [38]. Individuals in the asymptomatic phase are not, clinically speaking, sick. The desire to prevent the development of the disease should not blind researchers to the possible social and human consequences. Similarly, advances in genetics should not lead to unquestioning acceptance of genomic medicine [39-42] and its probabilistic interpretations of individual fates. I believe that, even before looking at the possible social and political effects of biomedical paradigms and practices on society and individuals, a sociology of Alzheimer’s disease should focus its attention on the research being conducted and show its historicity, its constructions and its controversial issues, as a way to shed light on modern forms of biopower. Yet this type of work does not appear to be in line with funders’ and research institutes’ demand for interdisciplinary research. Multi-disciplinary research, which means looking at the same subject from different points of view based on specific epistemological principles, certainly needs to be pursued. Inter-disciplinary research, on the other hand, which means orienting different types of disciplinary research in the same direction, appears to me to be highly counter-productive, while trans-disciplinarity (which blurs or erases historical and epistemological differences between disciplines) can be considered dystopian.

References

  1. Bury M (1988) Arguments about ageing: long life and its consequences in N. WELLS, C. FREER (dir.), The Ageing Population, London, Palgrave 17-31.
  2. Barnes RF, Raskind MA, Scott M, Murphy C (1981) Problems of families caring for Alzheimer patients: Use of a support group. Journal of the American Geriatrics Society 29: 80-85. [crossref]
  3. Lazarus LW, Stafford B, Cooper K, Cohler B, Dysken M (1981) A pilot study of an Alzheimer patients’ relatives discussion group. The Gerontologist 4: 353-358.
  4. Soun E (1999) Des trajectoires de maladie d’Alzheimer, Thèse de doctorat en sociologie, Brest, Université de Bretagne.
  5. Ngatcha-Ribert L (2007) La sortie de l’oubli: la maladie d’Alzheimer comme nouveau problème public. Sciences, discours et politiques, Thèse de doctorat de sociologie, Paris, Université Paris-Descartes.
  6. Ngatcha-Ribert L (2012) Alzheimer : la construction sociale d’une maladie, Paris, Dunod.
  7. Chamahian A, Caradec V (2014) Vivre « avec » la maladie d’Alzheimer : des expériences en rupture avec les représentations usuelles de la maladie. Retraite et Société 3: 17-37.
  8. Pestre D (2001) Études sociales des sciences, politique et retour sur soi éléments. Revue du MAUSS 1: 180-196.
  9. Lock M, Gordon D (2012) Biomedicine examined, New York, Springer Science & Business Media.
  10. Bourdieu P (1975) La spécificité du champ scientifique et les conditions sociales du progrès de la raison. Sociologie et sociétés 7 : 91-118.
  11. Ramaroson H, Helmer C, Barberger-Gateau P, Letenneur L, Dartigues JF (2003) Prévalence de la démence et de la maladie d’Alzheimer chez les personnes de 75 ans et plus: données réactualisées de la cohorte Paquid. Revue Neurologique 159: 405-411.
  12. Helmer C, Pasquier F, Dartigues JF (2003) Épidémiologie de la maladie d’Alzheimer et des syndromes apparentés. Médecine/sciences 22: 288-296.
  13. Ankri J (2016) Maladie d’Alzheimer, l’enjeu des données épidémiologiques. Bulletin Épidémiologique Hebdomadaire 458-459.
  14. Dartigues JF, Berr C, Helmer C, Letenneur L (2002) Épidémiologie de la maladie d’Alzheimer. Médecine/sciences 18 : 737-743.
  15. Balard F (2013) Des hommes chênes et des femmes roseaux : hypothèse de recherche pour expliquer le paradoxe du genre au grand âge, in I. VOLERY, M. LEGRAND (dir.), Genre et parcours de vie, vers une nouvelle police des corps et des âges 100-106.
  16. Poon LW, Jazwinski M, Green RC, Woodard JL, Martin P, et al. (2007) Methodological considerations in studying centenarians: lessons learned from the Georgia centenarian studies. Annual review of gerontology & geriatrics 27: 231-264.
  17. Whitehouse PJ, George D, Van Der Linden ACJ, Vander Linden M (2009) Le mythe de la maladie d’Alzheimer : ce qu’on ne vous dit pas sur ce diagnostic tant redouté, Louvain la Neuve, Éditions Solal.
  18. Ehrenberg A (2004) Remarques pour éclaircir le concept de santé mentale. Revue française des affaires sociales 1: 77-88.
  19. Fox P (1989) From senility to Alzheimer’s disease: The rise of the Alzheimer’s disease movement. The Milbank Quarterly 67: 58-102. [crossref]
  20. Gzil F (2009) La maladie d’Alzheimer : problèmes philosophiques, Paris, Presses universitaires de France.
  21. Bourdelais P (1993) L’Âge de la vieillesse, Paris, Odile Jacob.
  22. Amouyel P (2020) Avons-nous les outils pour faire un diagnostic dès les premiers signes de la maladie d’Alzheimer ? Troisième conférence de la fondation Alzheimer, le 01/04/2021.
  23. Burnham SC, Colona PM, Li QX, Collins S, Savage G, et al. (2019) Application of the NIA-AA research framework: towards a biological definition of Alzheimer’s disease using cerebrospinal fluid biomarkers in the AIBL study. The journal of prevention of Alzheimer’s disease 6: 248-255. [crossref]
  24. Chételat G (2013) Reply: The amyloid cascade is not the only pathway to AD. Nature Reviews Neurology 9: 356. [crossref]
  25. Chételat G, Arbizu J, Barthel H, Garibotto V, Law I, et al. (2020) Amyloid-PET and 18F-FDG-PET in the diagnostic investigation of Alzheimer’s disease and other dementias. The Lancet Neurology 19: 951-962. [crossref]
  26. Ankri J (2006) Epidémiologie des démences et de la maladie d’Alzheimer. La santé des personnes âgées 42: 42-44.
  27. Snowdon DA (1997) Aging and Alzheimer’s disease: lessons from the Nun Study. The Gerontologist 37: 150-156. [crossref]
  28. Wallon D (2020) Avons-nous les outils pour faire un diagnostic dès les premiers signes de la maladie d’Alzheimer? Troisième conférence de la fondation Alzheimer, le 01/04/2021.
  29. Genin E, Hannequin D, Wallon D, Sleegers K, Hiltunen M, et al. (2011) APOE and Alzheimer disease: a major gene with semi-dominant inheritance. Molecular psychiatry 16: 903-907. [crossref]
  30. Morris GP, Clark IA, Vissel B (2018) Questions concerning the role of amyloid-β in the definition, aetiology and diagnosis of Alzheimer’s disease. Acta neuropathologica 136: 663-689. [crossref]
  31. Jack CR Jr, Knopman DS, Jagust WJ, Shaw LM, Aisen PS, et al. (2010) Hypothetical model of dynamic biomarkers of the Alzheimer’s pathological cascade. The Lancet Neurology 9: 119-128. [crossref]
  32. Stomrud E, Minthon L, Zetterberg H, Blennow K, Hansson O (2015) Longitudinal cerebrospinal fluid biomarker measurements in preclinical sporadic Alzheimer’s disease: A prospective 9-year study. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring 1: 403-411. [crossref]
  33. Janus C, Pearson J, McLaurin J, Mathews PM, Jiang Y, et al. (2000) Aβ peptide immunization reduces behavioural impairment and plaques in a model of Alzheimer’s disease. Nature 408: 979-982. [crossref]
  34. Alves S, Churlaud G, Audrain M, Michaelsen-Preusse K, Fol R, et al. (2017) Interleukin-2 improves amyloid pathology, synaptic failure and memory in Alzheimer’s disease mice. Brain 140: 826-842. [crossref]
  35. Droz Mendelzweig M (2009) Constructing the Alzheimer patient: Bridging the gap between symptomatology and diagnosis. Science & Technology Studies 2: 55-79.
  36. Déchaux JH (2019) L’individualisme génétique: marché du test génétique, biotechnologies et transhumanisme. Revue française de sociologie 60 : 103-115.
  37. Rose N (2013) The human sciences in a biological age. Theory, culture & society 30: 31-34.
  38. Singh I, Rose N (2009) Biomarkers in psychiatry. Nature 460: 202-207.
  39. Déchaux JH (2018) Le gène à l’assaut de la parenté ? Revue des politiques sociales et familiales 126: 35-47.
  40. Bateman RJ, Xiong C, Benzinger TL, Fagan AM, Goate A, et al. (2012) Clinical and biomarker changes in dominantly inherited Alzheimer’s disease. N Engl J Med 367: 795-804. [crossref]
  41. Gabelle A (2020) Avons-nous les outils pour faire un diagnostic dès les premiers signes de la maladie d’Alzheimer ? Troisième conférence de la fondation Alzheimer, le 01/04/2021.
  42. Tremblay MA (1990) L’anthropologie de la clinique dans le domaine de la santé mentale au Québec. Quelques repères historiques et leurs cadres institutionnels, 1950-1990. Anthropologie et sociétés 14: 125-146.

Radiation Risk Communication by Nurses

DOI: 10.31038/IJNM.2022312

Abstract

Risk communication is defined by the National Research Council as an interactive process of exchange of information and opinion among individuals, groups, and institutions. Experts do not push risk information onto the people involved; rather, the expert’s role is to present all the options to those involved, carefully explain the advantages and disadvantages of each option, and then discuss them on that basis. After the Fukushima Daiichi Nuclear Power Station disaster, radiation risk communication initiatives were launched using the risk communication approach. Many residents were anxious not only about radiation health risks but also about their overall health, including mental illness and lifestyle-related diseases. Nurses therefore play an important role as radiation risk communicators because they can practice radiation risk communication as part of a health consultation. However, nurses in Japan have received little education about radiation and are themselves often anxious about it. To respond to consultations from people with radiation anxiety, nurses must have at least a minimum knowledge of radiation. In addition, the education of specialists in the field of radiation risk communication is essential and urgent.

What is Risk Communication?

Risk communication is defined by the National Research Council as an interactive process of exchange of information and opinion among individuals, groups, and institutions [1]. “Interactive” does not refer to one-way communication from experts in central and/or municipal governments, companies, and science to the public, but rather to the many individuals, affiliates, and institutions involved discussing issues and opinions about risk, i.e., exchanging risk information and coming to a decision together [2]. The most important component of risk communication is not to impose an opinion but to discuss among the various individuals involved and then use various measures to arrive at the best decision. Thus, the expert assumes the role of presenting all the options to those involved, carefully explaining their advantages and disadvantages, and then discussing them based on that explanation. In general, risk communication has several phases: “raising awareness about the problem,” “providing and sharing information,” “discussing and co-considering,” “building trust,” “stimulating behavioral change,” and “building consensus” [3-5] (Figure 1).


Figure 1: Phases of risk communication

Specifically, in the “raising awareness about the problem” and “providing and sharing information” phases, the goal is to get information to those involved through lectures and printed materials. Recently, there have also been reports on the effectiveness of risk communication through lectures using web meeting systems [5] and through quartet games and other classroom games used to acquire knowledge [6]. However, if the audience is not interested in the information in the first place, there is a high possibility that it will not reach them. The “discussing and co-considering” phase should bring about greater educational effects by allowing discussion and interaction with those involved while considering the issues together. Furthermore, if repeated dialogues lead to the “building trust” phase, those involved come to trust the communicator and the communicator comes to trust them, and this mutual understanding and trust should further stimulate risk communication discussions. From this phase, it becomes possible to move to the “stimulating behavioral change” and “building consensus” phases. Risk communication is established through these phases and through the processes of dialogue, co-consideration, and collaboration. It is therefore important to emphasize and practice “individuality” and “trust” [7].

What is Radiation Risk Communication?

Since the 1986 Chernobyl nuclear power plant accident and the 2011 accident at TEPCO’s Fukushima Daiichi Nuclear Power Station, radiation risk communication has received special attention [8,9]. Radiation risk communication had traditionally targeted patients undergoing medical radiotherapy and examinations; since the Fukushima Daiichi accident, however, it has been increasingly used in the field of public health. Specifically, after the Fukushima Daiichi accident in 2011, the government announced a policy on radiation risk communication [10], and it is now being practiced more actively. Until then, however, radiation experts had little knowledge about risk communication, creating a gap between the experts and the people involved [11].

Radiation Risk Communication after the Fukushima Disaster for Fukushima Residents

Immediately after the accident, the International Commission on Radiological Protection (ICRP), which had gained experience from the Chernobyl disaster, launched dialogues with local residents [12], and specialists from universities that had long been conducting radiation research practiced risk communication [13-15]. Thereafter, Japan’s Ministry of the Environment created a facility called the “Radiation Risk Communication Counselor Support Center,” and a system to support local government officials in dealing with residents was established [16]. Through the implementation of radiation risk communication with local residents by international organizations, universities, research institutes, and central government agencies, the perception of radiation risk among people in Fukushima Prefecture has reportedly been improving [17], and we believe that certain results have been achieved. Ten years after the accident, many residents have gained knowledge about radiation and seem to have overcome their radiation anxiety; however, latent anxiety remains and may manifest itself when the topic of radiation is raised. For example, in the aftermath of Typhoon Hagibis in 2019, anxiety rose around concerns that radioactive materials that had adhered to the soil might have migrated into living spaces [18]. Ten years on, the degree and causes of anxiety differ for each individual, and a more individualized approach is becoming necessary. In addition, as each individual’s views grow more fixed and complicated, it is necessary to build a relationship of trust in order to approach them and to continue to respond to them over a long period.

Radiation Risk Communication after the Fukushima Disaster for Evacuees Living Outside of Fukushima Prefecture

As of December 2021, the number of evacuees from Fukushima Prefecture was reported to be about 27,000 nationwide, and many Fukushima residents are still living outside the prefecture [19]. Eleven years will soon have passed since the accident, and although many residents have moved from “evacuation” to “migration,” there are also those living outside Fukushima Prefecture who retain strong attachments to their hometowns. It is estimated that people living outside Fukushima Prefecture have less information about radiation than those living in it, and that there has been no improvement in radiation risk perception based on correct knowledge: many people still misperceive radiation risks. For instance, evacuees outside the prefecture have often evacuated multiple times, moving from one place to another within the prefecture and then evacuating to the Kanto region, making it difficult for the local governments where they lived before the accident to keep track of them. As a result, these evacuees have not been approached, and residents who want to return to their hometowns often find themselves isolated. Such evacuees form communities with fellow evacuees, and psychologists and other professionals support these communities, but radiation specialists rarely intervene. Thus, when a nurse specializing in radiation practiced risk communication, evacuees raised questions about the situation in Fukushima Prefecture based on misperceptions, suggesting that information was not reaching them and that their perceptions had become fixed (Figure 2).


Figure 2: Radiation risk communication with evacuees by specialists in the field of radiation risk communication

Do Nurses Play a Role as Risk Communicators after Nuclear/Radiation Disaster?

Previous reports have suggested that nurses are the most appropriate professionals to lead radiation risk communication [20,21]. This is because nurses, who look after the whole person’s health, are able to assess each person individually and provide the necessary information. Since the nuclear accident, it has become clear that the rates of mental illness and lifestyle-related diseases among Fukushima residents are increasing [17,22], and nurses have the advantage of being able to implement radiation risk communication as part of health counseling. However, nurses in Japan are not educated about the health effects of radiation during their nursing studies. As a result, reports indicate that many nurses have little knowledge of radiation, and it has been shown that nurses themselves are anxious about radiation [23]. It is therefore necessary to provide radiation education as part of nurses’ in-service education and to equip them with the knowledge and skills needed to practice radiation risk communication. Furthermore, along with the dissemination of knowledge on radiation and education on risk communication for general nurses, there is an urgent need to train nurses who can respond in a more specialized manner. In Japan, the education of certified nurse specialists (CNS) in radiological nursing has begun [24], and it is hoped that these nurses will have a high level of knowledge on radiation, deal with more difficult cases, and be available to consult with general nurses about radiation risk communication.

According to a study by the Mitsubishi Research Institute (MRI), about half of Tokyo residents believe that the Fukushima accident will cause delayed effects, such as cancer, in people living in Fukushima Prefecture, and/or that there will be hereditary effects on their children and grandchildren [25]. Many people misunderstand the radiation health risks and the situation in Fukushima Prefecture after the nuclear disaster. Since such misperceptions may lead to discrimination and prejudice, nurses need to play a role in providing individualized risk communication to those who are concerned about radiation.

Conclusion

Risk communication has several phases, and its effect differs by phase. It is thus necessary to plan and implement risk communication by considering the content in light of the target audience and the purpose of the communication. After a nuclear disaster, radiation risk communication plays an important role in reassuring those affected and reducing radiation health anxiety. In the wake of the Fukushima Daiichi nuclear disaster, many people were anxious not only about the health effects of radiation but also about their overall health. Thus, nurses, who are able to consult on general health as well as radiation health effects, play an important role as risk communicators. Nuclear disasters are extremely rare, but it is hoped that all nurses will acquire the minimum knowledge necessary on radiation health effects, given their potential role as risk communicators. It is also necessary to educate not only generalists but also specialist nurses.

References

  1. National Research Council (1989) Improving Risk Communication. Washington, DC: The National Academies Press.
  2. World Health Organization. Risk communications.
  3. International Risk Governance Council. IRGC risk governance framework.
  4. Consumer Affairs Agency (2016) Japan. Effectiveness of Risk Communication Provided by Dr. Kanagawa.
  5. Yamaguchi T, Sekijima H, Naruta S, Ebara M, Tanaka M, et al. Radiation Risk Communication for Nursing Students – The learning effects of an online lecture. The Journal of Radiological Nursing Society of Japan.
  6. Yamaguchi T, Horiguchi I (2021) Radiation risk communication initiatives using the “Quartet Game” among elementary school children living in Fukushima Prefecture. Japanese Journal of Health and Human Ecology 87: 274-285.
  7. World Health Organization (2017) Communicating risk in public health emergencies: A WHO guideline for emergency risk communication (ERC) policy and practice.
  8. Lochard J (2007) Rehabilitation of living conditions in territories contaminated by the Chernobyl accident: The ETHOS project. Health Physics 93: 522-526. [crossref]
  9. Yamaguchi I, Shimura T, Terada H, Svendsen ER, Kunugita N (2018) Lessons learned from radiation risk communication activities regarding the Fukushima nuclear accident. Journal of the National Institute of Public Health 67: 93-102.
  10. Reconstruction Agency (2017) Japan. Strategies for Dispelling Rumors and Strengthening Risk Communication.
  11. Kanda R (2014) Risk communication in the field of radiation. Journal of Disaster Research 9: 608-618.
  12. International Commission on Radiological Protection ICRP and Fukushima.
  13. Takamura N, Orita M, Taira Y, Fukushima Y, Yamashita S (2018) Recovery from nuclear disaster in Fukushima: Collaboration model. Radiation Protection Dosimetry 182: 49-52. [crossref]
  14. Tokonami S, Miura T, Akata N, Tazoe H, Hosoda M, Chutima K, et al. (2021) Support activities in Namie Town, Fukushima undertaken by Hirosaki University. Annals of the ICRP 50: 102-108.
  15. Murakami M, Sato A, Matsui S, Goto A, Kumagai A, Tsubokura M, et al. (2017) Communicating with residents about risks following the Fukushima nuclear accident. Asia-Pacific Journal of Public Health 29: 74S-89S. [crossref]
  16. Ministry of the Environment (2015) Japan 5.1 Status of Implementation of Decontamination Projects.
  17. Ministry of the Environment Japan. BOOKLET to Provide Basic Information Regarding Health Effects of Radiation (1st edition).
  18. Taira Y, Matsuo M, Yamaguchi T, Yamada Y, Orita M, et al. (2020) Radiocesium levels in contaminated forests has remained stable, even after heavy rains due to typhoons and localized downpours. Scientific Reports 10. [crossref]
  19. Fukushima Prefecture. The number of evacuees from Fukushima Prefecture.
  20. Sato Y, Hayashida N, Orita M, Urata H, Shinkawa T, et al. (2015) Factors associated with nurses’ intention to leave their jobs after the Fukushima Daiichi Nuclear power plant accident. PLOS ONE 10.
  21. Yamaguchi T, Orita M, Urata H, Shinkawa T, Taira Y, et al. (2018) Factors affecting public health nurses’ satisfaction with the preparedness and response of disaster relief operations at nuclear emergencies. Journal of Radiation Research 59: 240-241.
  22. Takahashi A, Ohira T, Okazaki K, Yasumura S, Sakai A, et al. (2020) Effects of psychological and lifestyle factors on metabolic syndrome following the Fukushima Daiichi nuclear power plant accident: The Fukushima health management survey. Journal of Atherosclerosis and Thrombosis 27: 1010-1018. [crossref]
  23. Nagatomi M, Yamaguchi T, Shinkawa T, Taira Y, Urata H, Orita M et al. (2019) Radiation education for nurses working at middle-sized hospitals in Japan. Journal of Radiation Research 60: 717-718. [crossref]
  24. Nishizawa Y, Noto Y, Ichinohe T, Urata H, Matsunari Y, Itaki C et al. (2015) The framework and future prospects of radiological nursing as advanced practice nursing care. The Journal of Radiological Nursing Society of Japan 3: 2-9.
  25. Mitsubishi Research Institute, Inc. Fukushima reconstruction: Current status and radiation health risks.

Creating Mindsets for a Carpet Product – Thoughts on the Practical Effects of Clustering Method

DOI: 10.31038/PSYJ.2022415

Abstract

927 respondents each rated purchase interest for each of 48 vignettes about a carpeting product, each vignette comprising 3-4 phrases from a set of 36 phrases and specified by an underlying experimental design. The results suggest that using terms written by advertising copywriters produces strong-performing elements, leading to the conclusion that both the ideas in the study and the quality of the writing make a difference. Two clustering analyses were done, the first using the data from all 36 elements (FULL), the second using six orthogonal factors generated to replace the original 36 elements (FACTOR). The FULL clusters were more intuitive and easier to interpret, suggesting that, despite the attractiveness of using orthogonal variables in clustering, it may be better at a practical level to use the original data.
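For readers who want to see the two clustering strategies side by side, the sketch below (Python, scikit-learn) contrasts k-means on the full matrix of element coefficients (FULL) with k-means on a handful of orthogonal components (FACTOR). The data are randomly generated and PCA stands in for the factor analysis used in the study, so the sketch shows only the mechanics, not the paper’s results.

```python
# Illustrative comparison of the two clustering strategies described in the
# abstract: clustering respondents on all 36 element coefficients (FULL)
# versus clustering on a handful of orthogonal components (FACTOR).
# Synthetic data; PCA stands in for the factor analysis used in the paper.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_respondents, n_elements = 927, 36

# stand-in for the respondent-by-element coefficient matrix
coefficients = rng.normal(size=(n_respondents, n_elements))

# FULL: cluster directly on the 36 original element coefficients
full_clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coefficients)

# FACTOR: first reduce to 6 orthogonal components, then cluster
factor_scores = PCA(n_components=6).fit_transform(coefficients)
factor_clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factor_scores)

# The practical point made in the abstract: FULL clusters stay expressed in
# the original 36 elements, so their centroids can be read directly as strong
# or weak phrases, whereas FACTOR clusters must be translated back through
# the factor loadings before they can be interpreted.
print(np.bincount(full_clusters), np.bincount(factor_clusters))
```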

Introduction

The world of business operates on the recognition that people differ from each other. These differences can emerge from who the people ARE, what the people DO, how the people THINK, and so forth. The discovery of meaningful differences across people, and of differences in how they will behave, comprises one of the basic tenets of science, important both at the level of theory and the level of application. One need only look at the Greek philosopher Plato to discover the importance of differences among people in the nature of their ‘rulers’ [1], or at Aristotle’s scientific oeuvre [2], which based itself on classification as the first step.

The importance of differences among people found its key business use in the world of marketing. Consumer researchers, tasked with ‘understanding the market’, would instruct respondents to profile themselves on a variety of characteristics, ranging from geo-demographics (who they are), to behavior (what they do, e.g., internet search and purchase), to what they believe.

Since the 1960s consumer researchers have formally recognized the emerging discipline of psychographics, the method of dividing people by how they think about the world. The early efforts in psychographics assumed that the divisions among people provided a strong new way to think about marketing [3,4]. This belief in major divisions would lead to books such as The Nine Nations of North America [5] and, at the most complex, the dozens of different groupings of people in PRIZM, offered by Claritas [6]. The recognition that people differ as much or more by their proclivities, by how they think rather than by who they are, is to be applauded, even if the massive divisions of people into groups do not predict the precise language to which each group will be attracted when a particular product is offered.

The efforts of consumer researchers to find ‘basic groups’ in the population was driven not so much by science as by the effort to find the ‘magic key’ to a product. It was clear in concept tests (about new products), in product tests (how well did a product perform), and in tracking studies (attitudes and practices) that people who bought the same type of product, or even the same product, often differed in terms of who they were. That difference was an obstacle to even better product performance, because the marketer and the product developer were left with two or more groups wanting the product but wanting substantially different variations. If the researcher could discover the nature of the different physical products, and the communication, desired by a group of targeted consumers, it would be possible to create the best product for each group and communicate what each group needed to hear. Such would be the opportunity for better market performance, especially when talented product designers and talented advertising agencies could work together after understanding the range of preferences in a population for the same product. Knowledge of the specifics of these different ‘mind-sets’ has practical consequences of a positive nature in the business world.

Mind Genomics and the Focus on the Everyday

The discovery of mind-sets for a product or service has been a long, expensive research task, one which deals with high level issues, then brought to the level of the individual product or service through subsequent smaller scale research building off these large studies. One consequence of the size and expense of the studies is that they are buried in the corporate archives, used well or poorly for business purposes, to guide advertising/marketing, and even new product development. The result ends up being little knowledge about these mind-sets that a non-businessperson can access.

Mind Genomics, the emerging science of the everyday, has as its focus the study of what is relevant in terms of the specifics of everyday experience, as well as the discovery of mind-sets revolving around that experience. The approach differs dramatically from the conventional efforts. Conventional efforts, reflected in big studies, attempt to divide the minds of the consuming public in a grand way, to establish basic groups applicable to many aspects of a person’s behavior. The goal is to find a few mind-sets which are relevant across many different but related topics, such as mind-sets of house decorating, mind-sets of the automobile experience, mind-sets of the financial experience, and so forth. In contrast, Mind Genomics works in the opposite way, from the bottom up, in the style of a pointillist painter. For the Mind Genomics researcher, the focus is the basics, the specifics of a situation, and the existence of mind-sets relevant to that situation.

A great deal has appeared on Mind Genomics, especially since 2006 [7,8]. The topic of this paper is in the spirit of ‘methodology,’ specifically the study of methods. The essential output of Mind Genomics is the reduction of the population of different people into a set of non-overlapping groups, these groups emerging from the pattern of responses to a set of stimuli in a choice experiment. We will use the templated approach, considering two topics: the nature of mind-sets emerging when the clustering method generates 2, 3 or 4 mind-sets, and the type of information and usefulness of the results when one pre-processes the data to make the inputs more statistically robust.

The Mind Genomics science traces its origins to methods known collectively as conjoint measurement. Originally an effort in mathematical psychology to create a better form of measurement (Luce & Tukey, 1964), conjoint measurement would go on to spur a great deal of creative work, both in method and in application, spearheaded first by the late Professor Paul Green of the Wharton School of Business at the University of Pennsylvania, and carried out and expanded by his colleagues at Wharton and later at other universities around the world [9-13].

The Mind Genomics Process to Understand What to Communicate, and to Whom

At the level of execution, the process is templated, and straightforward. The remaining sections of this paper will deal with the issue of understanding the nature of what is learned, when the research extracts different numbers of mind-sets from the same data (viz., 2 vs 3 vs 4 mind-sets), and when the research pre-processes data to produce what might be thought of as a more tractable set of variables (viz., six orthogonal factors vs 36 original coefficients as inputs for clustering).

1. Choose the Topic

The researcher chooses a topic. Typically, the topics of Mind Genomics are of limited scope. The limited scope comes from the conscious decision to create a science from specifics, hypothesis generating rather than hypothesis testing. The limited topic, something from the everyday, is not typically of interest to the researcher trying to understand a broad topic such as human decision making under stress, but is rather a topic that is often overlooked, such as decision making about the purchase of a flooring item. That topic, usually relegated to the world of business, and often simply overlooked by scientists as irrelevant to the larger proscenium arch of behavior, happens to be an important part, or at least a relevant part, of the real world in which people live and behave. The topic of floor coverings has been studied by academics and business because it is so important in daily life, because it has business implications for sales, and because the topics it touches range from ecology, to choice, to the fascination with the mind of the do-it-yourself amateur [14-17].

2. Create the Raw Material, following a Template:

Mind Genomics prescribes a set of inputs, following a template. The templated design selects a certain number of variables (called questions or dimensions). The variables or questions ‘tell a story’. The questions never appear in the study. The questions are used only to guide the researcher, who must provide answers to the questions. Sometimes, as is the case with this study, the questions or dimensions are simply bookkeeping tools to make sure that mutually contradictory elements can never appear together in a vignette. For this study, the researchers selected the so-called 6×6 design, as shown in Table 1. The elements are stand-alone phrases, painting a word picture.

Table 1: The six questions and the six elements (answers) for each question. The structure is only a bookkeeping device to ensure that mutually contradictory elements will not appear together in a vignette

table 1

3. Use an Experimental Design to Specify the Combinations

Mind Genomics works by presenting the individual with a large set of vignettes created by the experimental design. The design prescribes the precise combinations, doing so in a way which makes each element appear equally often, appear statistically independently of every other element, and in a manner such that the combinations evaluated by one person differ from the combinations evaluated by another person. This approach, permuted experimental design [18], ensures that the study covers a great deal of the possible combinations. The experimental design combined the 36 elements into 48 vignettes, with the property that 36 of the 48 vignettes comprised four elements (at most one element or answer from a question), whereas the remaining 12 of the 48 vignettes comprised three elements (again, at most one element from a question).
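
As a rough illustration of the structural constraints just described (not the proprietary permuted design itself, which additionally balances how often each element appears), the sketch below builds vignettes as 0/1 rows over the 36 elements, with 3-4 elements per vignette and at most one element per question. The function name and layout are hypothetical.

```python
import random

# Hypothetical bookkeeping: 6 questions (A-F), each with 6 answers (elements),
# giving 36 elements indexed 0..35. Element e belongs to question e // 6.
N_QUESTIONS, N_ANSWERS = 6, 6
N_ELEMENTS = N_QUESTIONS * N_ANSWERS  # 36

def make_vignette(n_elements):
    """Return a 0/1 row of length 36 with n_elements elements,
    drawing at most one answer from any question."""
    questions = random.sample(range(N_QUESTIONS), n_elements)  # distinct questions
    row = [0] * N_ELEMENTS
    for q in questions:
        answer = random.randrange(N_ANSWERS)   # one answer from that question
        row[q * N_ANSWERS + answer] = 1
    return row

# One respondent's 48 vignettes: 36 with four elements, 12 with three,
# mirroring the structure described in the text (order shuffled).
design = [make_vignette(4) for _ in range(36)] + [make_vignette(3) for _ in range(12)]
random.shuffle(design)
```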

4. The Mind Genomics System Creates the Test Stimuli to be Evaluated by the Respondents

Figure 1 shows one of the vignettes. The vignette seems a haphazard collection of elements, presented in a strange, centered format without connectives. To professional marketers this type of format may be disconcerting. The reality, however, is that the format is exactly what is needed to present the relevant information. The respondent cannot ‘guess’ the right answer. Shortly after the start of the evaluation of 48 vignettes, the respondent stops trying to ‘be right’, and simply responds at an intuitive, gut level. It is precisely this gut response which best matches the ordinary behavior of individuals faced with the task of selecting a product. Despite the feeling of marketers that their ‘offering’ is special and engages the customer, and despite the best efforts of advertising agencies and their ‘creatives’, these mundane situations are generally met with indifference. It is decision making within this world of indifference that must be understood, not the decision making that occurs when a mundane situation receives an unusual amount of attention; ordinarily something is simply considered, a decision is made, and the person moves on.

fig 1

Figure 1: Example of a vignette comprising four elements. The rating scale appears below, showing a 9-point purchase scale (1=Not likely to purchase… 9=Very likely to purchase)

5. Orient the Respondents

The respondents were oriented by a screen which provided just enough background information to alert the respondent to the nature of the product whose messages were being tested with Mind Genomics. For the purposes of this paper on method, it is not necessary to identify the manufacturer, but it was identified in the actual study. Figure 2 shows the orientation page.

fig 2

Figure 2: The orientation screen

Analytics

6. Transform the Responses to a More Tractable Form

Our first step of analysis is to consider whether we will keep the 9-point scale, or whether we will change the scale to something more tractable. Most researchers familiar with the 9-point Likert scale, or indeed with any category scale or ratio scale, will wonder why the change is needed. It is easier to begin with a good scale, with good anchors, and stay with that scale. At the level of science, the suggestion is correct. At the level of the manager working with the data, nothing could be further from reality. Managers are interested in what the scale numbers mean. The statistical tractability of the 9-point scale is a matter of passing interest. It is the meaning, the usefulness of the data as an aid to decision making, which is important.

The conventional approach in consumer research is to transform the data so that the data become a binary scale, yes/no. The manager is more familiar with, and more comfortable with, yes/no decisions. There is no issue of ‘what do the numbers mean’. In the spirit of this ease of use of binary scales, the data were transformed. Ratings of 1-6 were transformed to ‘0’ to denote ‘no’, the different gradations of not purchasing. Ratings of 7-9 were transformed to ‘100’ to denote ‘yes’. To each of these transformed numbers was added a vanishingly small random number (<10^-5). That action is prophylactic, preventing any individual respondent from generating all 0’s or all 100’s across the 48 vignettes evaluated by that individual. If the respondent were to rate all the vignettes 1-6, even while showing variation, the transformation would bring these to 0, and the regression analysis to follow would ‘crash.’ In the same way, were the respondent to rate all vignettes 7-9, the transformation would bring these to 100. In the actual data, 12 respondents generated all 0’s, but 284 respondents generated all 100’s because they found enough appealing in each vignette to assign a rating of 7-9.
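
A minimal sketch of this transformation, assuming the raw 9-point ratings sit in a numpy array; the cut-off (1-6 → 0, 7-9 → 100) and the vanishingly small random addition follow the description above. The function name is illustrative.

```python
import numpy as np

def to_top3(ratings, rng=np.random.default_rng(0)):
    """Transform 9-point ratings to 0/100 (TOP3) and add a vanishingly small
    random number so no respondent's responses are perfectly constant."""
    ratings = np.asarray(ratings, dtype=float)
    top3 = np.where(ratings >= 7, 100.0, 0.0)       # 7-9 -> 100 ('yes'), 1-6 -> 0 ('no')
    jitter = rng.uniform(0, 1e-5, size=top3.shape)  # prophylactic against zero variance
    return top3 + jitter

# Example: one respondent's 48 ratings
example = to_top3(np.random.default_rng(1).integers(1, 10, size=48))
```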

7. Relate the Response (TOP3) to the Presence/Absence of the Elements Using ALL the Data

Mind Genomics uses so-called dummy variable regression, a variation of OLS (ordinary least squares) regression (Hutcheson, 2019). The analysis is first done at the level of the total panel. The independent variables are all 36 elements. Each respondent generates 48 rows of data, each row corresponding to one of the 48 vignettes the respondent evaluated. The data matrix for each respondent comprises 36 columns, one column for each element. The cell for a particular vignette has the number ‘0’ when the element is absent from the vignette, and the number ‘1’ when the element is present. There is no interest in the meaning of the element. It is simply a case of the element being present (1) or absent (0). The objective of the analysis is to determine the ‘weights’ or coefficients of the 36 elements, from the total panel.

The data are now ready for the first pass, viz., combining all the data into one database comprising 48 rows for each of the 927 respondents. The equation is: TOP3 = k0 + k1(A1) + k2(A2) + … + k36(F6). Although the respondent evaluated combinations comprising three or four elements in a vignette, the OLS regression is easily able to pull out the part-worth contributions, the coefficients. The first estimated parameter, k0, is the so-called additive constant. The remaining estimated parameters, coefficients k1-k36, are the weights for the respective elements.
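
A sketch of how the total-panel model could be estimated, assuming a stacked 0/1 design matrix X (48 rows per respondent × 927 respondents, 36 columns) and the transformed TOP3 values y; ordinary least squares with a prepended column of ones recovers the additive constant k0 and the 36 coefficients. Variable names are hypothetical.

```python
import numpy as np

def fit_top3_model(X, y):
    """OLS of TOP3 on 36 presence/absence dummies, with an intercept.
    Returns (additive_constant, coefficients[36])."""
    X = np.asarray(X, dtype=float)               # shape (total_vignettes, 36)
    y = np.asarray(y, dtype=float)               # shape (total_vignettes,)
    X1 = np.column_stack([np.ones(len(X)), X])   # prepend column of 1s for k0
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[0], beta[1:]

# k0, k = fit_top3_model(X, y)   # X, y assembled from all 927 respondents
```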

The coefficients are additive, viz., they can be added to the additive constant. The combination (additive constant + sum of elements in the vignette) provides a measure of how well the vignette is expected to perform. The only requirement is that the vignette comprises 3-4 elements.

8. Interpret the Results from the First Modeling

Table 2 shows the coefficients for the 36 elements as well as for the additive constant. Note that this will be the only time that the full set of 36 coefficients and the additive constant will be shown, to give a sense of the impact of each element. The Mind Genomics process produces what could become an overwhelming volume of data, the sheer wall of numbers disguising the strong performing elements.

Table 2: Coefficients of the 36 elements, sorted by value in descending order. The three strongest performing elements are shown as shaded cells

table 2

The additive constant, 49, is the estimated proportion of times people would rate a vignette 7-9 (likely to purchase or very likely to purchase) in the absence of elements. The additive constant is a purely estimated parameter, because by design all vignettes comprised 3-4 elements. Nonetheless, the additive constant gives a sense of the predisposition to buy. The value 49 means that about 49%, viz., about half the people, are likely to say they will buy, even in the absence of elements which provide information. An additive constant around 50 is typical for a commercial product of moderate interest. As reference points, the additive constant for credit cards is around 10, and the additive constant for pizza is about 65. Our first conclusion is that there is a moderate basic interest in the carpet design squares. The elements will have to do a fair amount of work to drive interest. The ‘work’ comprises the discovery of strong elements.

The coefficients in Table 2 may initially disappoint the researcher, because out of 36 elements only three perform strongly for the total panel of 927 individuals. There might be at least two things going on to produce such poorly performing elements. The first is that the messages are simply mediocre, despite the best effort of copywriters and professionals to offer what they believe to be good messages. In such a case there is no option but to return to the drawing board and start again. The problem is in which direction, and how? The second is that we are dealing with groups of people in the population, mind-sets, who pay attention to different messages. The poor performance may emerge because we mix these people together, and their patterns of preferred elements cancel each other, like streams colliding, preventing each other from continuing on their respective paths. In other words, the poor performance for the total panel may emerge from mutual cancellation of what would otherwise be strong performance of some elements.

Clustering the Respondents into Two, Three, and Four Mind-sets

9. Create 927 Individual-level Models to Prepare for Clustering into Mind-sets

The permuted experimental design is set up so that each respondent evaluated precisely the types of combinations needed to run the OLS regression on the data of that individual. Thus, by running 927 regressions, one per respondent, one gets a signature of each respondent in terms of the respondent’s mind-set regarding the product. The next step in the analysis runs the 927 different OLS regressions and stores the coefficients in a single matrix, along with the self-profiling classification that the respondent completed at the end of the evaluations.

10. Clustering the Respondents

Clustering is a popular technique to divide ‘things’ by the features that they have. Things, e.g., respondents, can be defined by the pattern of their 36 coefficients. Respondents with similar patterns belong in the same cluster, which will be called a ‘mind-set’ because the clusters show what the respondents feel to be important for this flooring product. The respondents may not be similar in any other way, but they are similar in their pattern of responses in this study.

11. Use K-Means Clustering (Likas et al., 2003)

K-Means measures the distance between two respondents based upon the similarity of their 36 coefficients. K-Means clustering tries to maximize the ‘distance’ between the cluster centroids (each a set of 36 numbers computed on the respondents in the cluster), while at the same time minimizing the sum of the pairwise distances within a cluster. ‘Distance’ between two respondents based upon the 36 coefficients was operationally defined as the quantity (1 − Pearson correlation). The Pearson correlation takes on the value +1 when the two sets of 36 coefficients are perfectly linearly related to each other, making the distance (1 − R) = 0. The Pearson correlation takes on the value −1 when the two sets of 36 coefficients are perfectly inversely related to each other, making the distance (1 − R) = 2 (i.e., 1 − (−1) = 2).
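
The exact clustering software used in the study is not specified; the following is a simple illustrative k-means-style loop on the 927 × 36 coefficient matrix that uses the distance (1 − Pearson correlation) described above. It is a sketch, not the study's implementation, and it does not handle edge cases such as empty clusters.

```python
import numpy as np

def correlation_kmeans(coeffs, k, n_iter=100, seed=0):
    """Toy k-means-style clustering of a (respondents x 36) coefficient matrix,
    using distance = 1 - Pearson correlation between a respondent's coefficients
    and a cluster centroid. Illustrative only; empty clusters are not handled."""
    rng = np.random.default_rng(seed)
    X = np.asarray(coeffs, dtype=float)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # start from k respondents
    labels = np.full(len(X), -1)
    for _ in range(n_iter):
        # distance of every respondent to every centroid
        d = np.array([[1 - np.corrcoef(x, c)[0, 1] for c in centroids] for x in X])
        new_labels = d.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# mind_set_labels, _ = correlation_kmeans(coefficient_matrix, k=3)
```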

12. Interpret the Data

Table 3 shows the strong performing elements from three segmentation exercises: breaking the data into two mind-sets (clusters), breaking the data into three mind-sets, and breaking the data into four mind-sets. There is an abundance of strong performing elements within each cluster. We have created an artificial cutoff, with coefficients of 16 or higher considered strong, and coefficients of 15 or lower considered less relevant. The reality of the product, and the nature of the respondents presented with a real product, show the strong performance of elements, performance hard to obtain with theory-based ideas. For this study, the elements in the table are selling points of real products, relevant to everyday life, not theory-based ideas lacking the life-giving power of reality and everyday importance.

Table 3: Strong performing elements for two, three, and four mind-sets emerging from clustering the original 36 coefficients

table 3

It is important to recognize that the mind-sets are easy to name. The strongly performing coefficients share some ideas in common. Based upon the strong performing elements one gets a sense of the respondents’ way of thinking in each mind-set. It is also important to note that there is no ‘one correct’ number of mind-sets. The mind-sets tend to repeat, but increasingly finer distinctions emerge between and among mind-sets as the number goes from two to four.

From 36 Down to 6 – Can We Improve the Clustering by Creating Fewer but Uncorrelated Predictors?

13. Hypothesis Based Upon the Efforts to Find ‘Primaries’

Although the 36 elements were put together in a way which makes their appearances statistically independent of each other, the reality is that the elements might be skewed to one or another aspect, such as fewer elements in one topic area and many more elements in another topic area. The Mind Genomics system tries to instill a balance in the nature of the elements used by forcing an equal number of elements or answers for each question. That strategy works for academic subjects but may not be appropriate when the businessperson is trying to understand the mind of the customer.

With 36 elements, it may be advantageous to reduce the number of elements to a smaller set of ‘pseudo-elements,’ mathematical entities called factors which are uncorrelated with each other [19]. The application of principal components factor analysis to these data, with a moderate but not severe criterion for extracting a factor (eigenvalue > 2), produced a set of six uncorrelated ‘pseudo-elements,’ the factors. The six emergent factors were uncorrelated with each other by the process of factor analysis. The factor structure was further simplified by rotating the six factors to a simple form, using Quartimax rotation. Finally, each of the 927 respondents becomes a point in this new six-dimensional space, where the rotated factors become the new ‘elements’, hence the name pseudo-elements.
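
A plain-numpy sketch of the reduction step under the criterion described above: principal components of the correlation matrix of the respondent-by-element coefficients, keeping components with eigenvalue > 2 and computing one score per respondent per retained component. The Quartimax rotation mentioned in the text would be applied to the loadings as an additional step (e.g., with a dedicated factor-analysis package); it is omitted here for brevity.

```python
import numpy as np

def principal_factor_scores(coeffs, eigen_cutoff=2.0):
    """Principal-components extraction on the correlation matrix of the
    respondent-by-element coefficients; keep components with eigenvalue > cutoff
    and return one score per respondent per retained component (unrotated)."""
    X = np.asarray(coeffs, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize the 36 columns
    R = np.corrcoef(Z, rowvar=False)                   # 36 x 36 correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    keep = eigvals > eigen_cutoff                      # retain 'strong' components
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])   # unrotated loadings
    scores = Z @ eigvecs[:, keep]                      # one score per respondent per component
    return scores, loadings

# factor_scores, _ = principal_factor_scores(coefficient_matrix)
# ...then cluster on factor_scores instead of the 36 coefficients
```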

A Technical Note

The method of reducing the 36 elements to uncorrelated factors involves a great number of alternative choices, as does the method for creating the clusters of mind-sets. This paper simply chooses one way for exploratory purposes. Other factor analysis decisions might lead to different clusters, and to different conclusions. As stated before, this exploration is simply looking at a possible way to improve our knowledge emerging from the experiment, not at a method for the ultimate discovery of the ‘one correct array of mind-sets.’

14. Interpret the Data

Table 4 recreates the two, three, and four mind-sets, this time basing the clustering on the six factor scores of each of the 927 respondents, rather than on the original 36 coefficients for each respondent. The results at first look promising in terms of many more elements emerging. We see several interesting departures from what we saw in Table 3, which showed the same clustering but with the full set of 36 coefficients. Returning to Table 4, we see that one of the additive constants is always high, suggesting that there is one mind-set which is strongly predisposed to the items. Generally, this group will respond to most elements because their basic interest is high. The other one, two or three mind-sets show much lower constants, but many strong performing elements. The second observation is that these mind-sets created after factor analysis are harder to name, because they comprise many more elements. The greater number of viable elements may have emerged because the additive constants are low, however.

Table 4: Strong performing elements for two, three, and four mind-sets emerging from clustering the six factor scores derived from the 36 coefficients

table 4

15. Basic Composition of Mind-sets, Gender and Age

Mind Genomics continues to reveal that there is no simple relation between who a person IS and the mind-set to which a person belongs. Table 5 shows the composition of the mind-sets, by gender and by age. The patterns which emerge from Table 5 can be augmented by much more in-depth tabulations, beginning with more details about WHO the person is, what the person DOES at home regarding home decor, attitudes and behavior regarding SHOPPING, and so forth. The important point is that by knowing more in-depth information about the respondents, as well as their mind-set membership, it might be possible to assign a new person to one of the segments.

Table 5: Composition of the mind-sets by gender and by age, respectively

table 5

Discussion

16. The thrust of this paper is methodology, the study of method in the true sense of the word. The effort to understand method began with a simple question: how many clusters or mind-sets to extract? It evolved into two questions of the same sort, one dealing with extracting mind-sets with the elements as is, and the other with extracting mind-sets after the elements have been reduced to orthogonality through factor analysis. And finally, there is a third, not directly stated question: why do the elements score so highly in this study, whereas in most Mind Genomics studies the elements rarely score this highly?

17. Question 1: Why do These Elements Score So Well, When in Most Mind Genomics Studies the Elements Score Poorly?

The answer comes from two aspects and is best considered as conjecture. The topic of floor coverings is interesting, comprising interesting stand-alone elements which educate and intrigue people. In contrast, most of the topics worked on by Mind Genomics are more generic, deal with topics that are not so interesting, and fail to incorporate engaging information to present to the respondent. So the conjecture is that we are dealing with an interested population and with a topic which can provide interesting information, rather than with a topic whose ideas are usually watered down so that they provide little ‘juicy’ information to think about. In other words, it may be that conventional studies are simply bland.

18. Question 2: How Many Mind-sets to Extract

When we look at Tables 3 and 4, the results from the clustering, our issue is that we just don’t know whether we should name the mind-set by the most prevalent type of element in the mind-set, or whether we should accept the mind-set as comprising a mélange of different meanings. This problem of a mélange of different meanings will stop being a problem only when we end up allowing six, seven, eight or more clusters.

19. In the words of Harvard’s eminent psychologist and founder of modern-day psychophysics, S.S. Stevens (d. 1973), ‘Validity is a matter of opinion.’ In Stevens’ words, as long as the experiments are performed correctly the answers are valid. All four solutions, Total, two, three and four mind-sets, would be equally valid if one were dealing with stimuli having no cognitive richness. The clustering algorithm does not pay attention to the underlying nuanced meanings of the elements. If we were to assume that the elements are in some unknown language, and we extract two, three and four mind-sets, which solution would be correct? All would be equally valid in mathematical terms.

20. The issue is quite different when we work with elements. These elements have a great deal of meaning, cognitive richness. When we extract the clusters, we can look at the meaning of the elements, and from the meaning decide upon the nature of the cluster. Based upon Table 3, the best strategy is to work with four mind-sets, if these mind-sets can be identified. Each mind-set focuses on a different aspect of floor materials.

21. Question 3: Do Orthogonal Variables, Presumably Balancing Out Different Ideas, Produce More Interpretable, Tighter Clusters or Mind-sets?

Is it better to work with the original set of elements when creating mind-sets, or should we reduce the elements to a set of mathematically independent variables, such as our six factors? Table 4 suggests that it was difficult to find a simple guiding theme for each cluster or mind-set, despite the emergence of high positive coefficients. As a result, it is probably better to work with the original set of elements, and not perform the factor analysis to produce a smaller group. In the end, we want to make sure that the mind-sets we identify are real and meaningful, and that the combinations generated from these mind-sets make sense and score as high as possible.

References

  1. Kamtekar R (2013) Plato: Philosopher-Rulers. In: Routledge Companion to Ancient Philosophy (pp. 229-242). Routledge.
  2. Bayer G (1998) Classification and explanation in Aristotle’s theory of definition. Journal of the History of Philosophy 36: 487-505.
  3. Gajanova L, Nadanyiova M, Moravcikova D (2019) The use of demographic and psychographic segmentation to creating marketing strategy of brand loyalty. Scientific Annals of Economics and Business 66: 65-84.
  4. Wells WD (1975) Psychographics: A critical review. Journal of Marketing Research 12: 196-213.
  5. Garreau J (1981) The Nine Nations of North America. Avon Books.
  6. Webber R, Sleight P (1999) Fusion of market research and database marketing. Interactive Marketing 1: 9-22.
  7. Moskowitz HR, Gofman A, Beckley J, Ashman H (2006) Founding a new science: Mind Genomics. Journal of Sensory Studies 21: 266-307.
  8. Moskowitz HR, Silcher M (2006) The applications of conjoint analysis and their possible uses in Sensometrics. Food Quality and Preference 17: 145-165.
  9. Carroll JD, Green PE (1995) Psychometric methods in marketing research: Part I, conjoint analysis. Journal of Marketing Research 32: 385-391.
  10. Gofman A, Moskowitz HR (2010a) Improving customers targeting with short intervention testing. International Journal of Innovation Management 14: 435-448.
  11. Goldberg SM, Green PE, Wind Y (1984) Conjoint analysis of price premiums for hotel amenities. Journal of Business S111-S132.
  12. Green PE, Krieger AM, Wind Y (2001) Thirty years of conjoint analysis: Reflections and prospects. Interfaces 31: S56-S73.
  13. Wind J, Green PE, Shifflet D, Scarbrough M (1989) Courtyard by Marriott: Designing a hotel facility with consumer-based marketing models. Interfaces 19: 25-47.
  14. Laparra-Hernández J, Belda-Lois JM, Medina E, Campos N, Poveda R (2009) EMG and GSR signals for evaluating user’s perception of different types of ceramic flooring. International Journal of Industrial Ergonomics 39: 326-332.
  15. Macias N, Knowles C (2011) Examining the effect of environmental certification, wood source, and price on architects’ preferences of hardwood flooring. Silva Fennica 45: 97-109.
  16. Roos A, Hugosson M (2008) Consumer preferences for wooden and laminate flooring. Wood Material Science and Engineering 3: 29-37.
  17. Zamora T, Alcántara E, Artacho MA, Cloquell V (2008) Influence of pavement design parameters in safety perception in the elderly. International Journal of Industrial Ergonomics 38: 992-998.
  18. Gofman A, Moskowitz HR (2010b) Isomorphic permuted experimental designs and their application in conjoint analysis. Journal of Sensory Studies 25: 127-145.
  19. Cureton EE, D’Agostino RB (2013) Factor Analysis: An Applied Approach. Psychology Press.

Negotiating to Buy an Economy Car, KIA: A Mind Genomics Cartography of Sales Messages and Dealer Concessions

DOI: 10.31038/MGSPE.2022213

Abstract

Respondents evaluated systematically varied vignettes describing an automobile from brand KIA. The elements, component messages, presented stand-alone information about the product, performance, service, etc. Each respondent evaluated 48 unique vignettes, rating each vignette on purchase intent, and on the monetary concession that the dealer would have to provide to generate a rating of ‘definitely buy’ for that particular vignette. As the respondent proceeded through the sequential evaluation, the average rating of purchase intent decreased, but so did the average dollar concession requested from the dealer. Deconstruction of the ratings into the part-worth contributions of each element revealed two mind-sets of equal size when the mind-sets were derived from purchase intent (MS1 – Focus on car; MS2 – Focus on driver & situation), and two other mind-sets when derived from price concession (MS3 – Focus on the driving feeling of good product, good experience, good interaction with dealer; MS4 – Responds to deferential dealer, and boast-worthy car). A Mind Genomics cartography of a conventional scenario, e.g., a person buying a car, can provide additional, easy-to-develop understanding of how the respondent negotiates, as well as reveal the specific messages which drive a respondent to say YES, MAYBE, or NO.

Introduction

With today’s improvements in technology, new opportunities are emerging to improve the skills of negotiation, ranging from courses on negotiation to electronic-based negotiation [1-3], as well as approaches based on artificial intelligence. It should come as no surprise that, along with the developments in the world of sales capabilities, a great deal of research has been published on the mind of the car buyer. The volume of information should not be surprising for the simple reason that cars are so important to the economy of the world. Next to a house and the education of one’s children, the car is often the most expensive discretionary purchase. It should be no wonder that there has been much published [4]. A Google(r) search for ‘buying an automobile’ generates 907,000 hits on Google Scholar(r) and an astonishing 157 million hits on Google(r), both as of January 9, 2022.

This paper approached the issue of car buying from the point of view of one car brand, KIA. The objective was to understand, from a general population, which messages would be most compelling, both in terms of their ability to drive purchase intent and, in a novel twist, their ability to create motivating price concessions from dealers [5]. Rather than qualifying a respondent ahead of time as interested or not interested in buying a KIA (pre-study screening based on one qualifying question), the study worked with a cross-sectional group of respondents, selecting in the end only those respondents (approximately one in four) who, when shown vignettes about KIA, rated at least one vignette ‘9’ (definitely buy) and at least one vignette ‘1’ (definitely not buy).

How Mind Genomics Works, and Differs from Conventional Attitude Research

Mind Genomics studies present respondents with combinations of messages, so-called vignettes, acquire the respondent’s reactions to these vignettes, and show the link between each element in the study and the response which it engenders. Side analyses are also feasible and often illuminating, especially when the respondent assigns two types of ratings to the same vignette. In this study the respondent rated both purchase intent and the amount of monetary concession from the dealer required to drive the rating of the vignette to ‘definitely buy.’ The Mind Genomics study is really an ‘experiment’, although couched in the form of an online research study, almost a survey, although quite different from classical surveys. The approach has been successfully implemented to create landing pages and marketing messages for museums [6,7]. The approach provides a general way to understand the different points of view in a negotiation [8]. The overarching world-view of Mind Genomics is to create a usable, searchable, and scalable database about a topic that would seem ordinary, often under-explored, but in actuality reflects a relevant and often important aspect of daily life [9].

The Mind Genomics Method Applied to a Situation – Presenting Information about Brand KIA

The easiest way to understand the study is to follow the study process step by step. The study introduced some departures from the standard Mind Genomics process, departures stemming from the initial commercial focus of the study, and from the realization that one had to work with respondents who could be persuaded to change their minds, rating at least one vignette 1 (definitely not buy) and at least one vignette 9 (definitely buy). If a respondent could not be swayed in both directions, the study would not allow us to assume we were dealing with an individual who could be persuaded. The criterion of at least one rating of ‘1’ (definitely not buy) along with at least one rating of ‘9’ (definitely buy) reduced the set of respondents from 251 to 63. Thus, we can look at the larger study as the ‘screener’ from which we take only respondents who behaviorally could be swayed at least once in each direction. The observation that we discard 75% of the data is tempered by the fact that the remaining data are more relevant to KIA because of this criterion.

Step 1: Create the Raw Materials for the Experiment

Select the topic, create six questions relevant to the topic, and for each question provide six answers. Mind Genomics takes this raw material, the answers (not the questions), combines the raw material into vignettes, small combinations of messages, presents the combinations to the respondent, and obtains a rating. Table 1 shows the raw material, put into the form of a table, comprising the six questions and the six answers (also called elements) to each question.

Table 1: The elements for the KIA study

table 1

It is important to keep in mind that the format of question and answer helps to drive the creation of the answers, viz., the raw material that will be shown to the respondent. When Mind Genomics was first introduced in the 1990s, some thirty years ago, the request by users was to create a system which could handle many alternatives, while at the same time ensuring that a test stimulus, the so-called vignette or combination of elements, would never present mutually contradictory elements. By putting all mutually contradictory elements into a single question, and by ensuring that a vignette would comprise at most one answer to a question, it was certain that the mutually contradictory elements would not appear together.

The second reason for the question-and-answer format is that it made creating the elements easier. Rather than having to think about the topic in the abstract, the evolving Mind Genomics applications began to feature a template, allowing the researcher to create a story. The researcher had to fill in the questions for the story (different aspects of the same topic), and the answers (elements) for each question. The process was easier because the researcher was given a structure within which to work (Table 1).

Step 2: Create Vignettes, Combining Messages, These Vignettes to be Evaluated by Respondents

Rather than instructing the respondent to rate each message one at a time, of course in random order to reduce bias, Mind Genomics works with combinations of messages, the vignettes. The vignettes are prescribed by an underlying experimental design, a recipe book, specifically created for Mind Genomics. Rather than creating the vignettes by randomly combining the elements, the underlying experimental design ensures that each element appears equally often, that the combinations of elements allow for analysis at the level of the respondent, and that the actual vignettes evaluated by each respondent differ from the vignettes evaluated by the other respondents. In this way the experimental design investigates much of the space of possible combinations (space filling), increasing the chances of discovery by testing more of the design space, albeit with less precision, instead of a small part of the design space with more precision (Gofman & Moskowitz, 2011). Mind Genomics is best suited for finding out what really works in a simulated real-world situation where the test stimuli are compound, as they are in nature.

The experimental design prescribed by Mind Genomics for the array of six questions and six answers (elements) per question requires 48 different vignettes. The 48 vignettes comprise 36 vignettes having four elements and 12 vignettes having three elements. No question contributed more than one element to a vignette. The experimental design prescribed 36 vignettes in which two of the six questions did not contribute an element, and 12 vignettes in which three questions did not contribute an element. The specific elements absent from a combination were dictated by the underlying experimental design, making the entire process straightforward, creatable by a template.

The benefit of the design as described above, viz. 3-4 elements per vignette, is that the design allows the researcher to estimate the absolute value of the coefficients, simply because the elements are not collinear. The issue may seem purely ‘theoretical’ until one realizes that many managers demand that their vignettes be complete, incorporating exactly one element from each question (in our case vignettes each comprising six elements), not realizing that this demand reduces the power of the analysis. Fortunately, Mind Genomics avoids the collinearity issue entirely.

Figure 1A shows an example of a vignette, instructing the respondent to rate the vignette on the Likert scale of likelihood to buy. The scale is anchored at both ends, but not in the middle. The respondent reads the vignette as a single offering, and rates the vignette on the 9-point scale. The effort is easy because the respondent is presented with a vignette, a combination of elements. It makes ‘sense’ to rate the combination. One does not need a lot of information to rate the combination; it suffices simply to have a sense that this could be a real offering. It should be kept in mind that the scale below presents the two ends of the scale, not the middle. The rating ‘9’ (Definitely Will Buy, also called TOP1) will play a featured role in the analyses.

fig 1a

Figure 1A: Example of a four element vignette, with the instructions to rate the vignette

Another aspect of the Mind Genomics effort is the introduction of economics into the study, in this case through price as a rating scale. There are many ways to incorporate price, such as price as one of the elements, as in Table 1. When price becomes an element (or really, several prices become several elements), the objective is to discover how price drives the interest in buying the car. In such a case the typical observation is that people are less interested in buying the car, assigning lower ratings on the 9-point scale, when the same car is offered at a higher price.

Another way to incorporate price is to ask a respondent how much she or he would pay for the car. Experience with price as a rating scale in Mind Genomics suggests that the price a person is willing to pay for a car is positively related to liking of the car, but the range of economic ratings is far more constrained than the range of emotional ratings. That is, people may love the vignette describing the car (a response of their emotional or hedonic mind, homo emotionalis), but they are not willing to pay a lot. Emotion is one thing, money is another.

The world of selling and buying presents us with a different problem, more of the type ‘how much of a discount does one have to give to a person for that person to seriously consider buying the product’. We need only look at the signs which feature price discounts, or go to an automobile sales office, to see the negotiation in real life. The salesperson is trained to reduce the price until the buyer agrees to buy the car, walks out, or the process stops because the buyer and the salesperson cannot agree upon a price acceptable to both parties. This study attempted to replicate the give and take by asking the second question: ‘If you could get these valuable offerings for less, what monthly savings (if any) would entice you to buy this car over a competitor’s car?’

Figure 1B shows the same vignette, this time with the second question replacing the first. The rationale for presenting the two questions, one after the other, is to reduce the effort required of the respondent, who finds the 48 vignettes sufficiently taxing to evaluate, and who is compensated for the effort. Doubling the number of stimuli is simply infeasible.

fig 1b

Figure 1B: The same vignette, this time with the price question

Step 3: Create the Orientation Page

The Mind Genomics interview comprises two parts, one of which is the evaluation of the systematically varied vignettes (Figures 1A and 1B), and the second is the completion of the self-profiling questionnaire. The respondent who participates usually does not know the reason for the study, and probably has never done this type of study (or experiment) before. The orientation, viz. the first screen that the respondent reads, presents information about the study.

Figure 2 shows the orientation screen. The screen presents just enough information to tell the respondent about the topic, but little more. It is the job of the elements shown in Table 1 to drive the judgment. Thus, the screen is simply a list of expectations that the respondent should have, such as the meaning of the scales, and the requirement that the respondent ‘mentally integrate’ the information into one idea, something which comes naturally to people. No effort is made to tell the respondent anything else. One recent practice, not done here, is to tell the respondent to give their immediate response, a practice emerging from post-study discussions with respondents who worried that they were not giving the ‘right answers.’ In this study, with the name KIA featured in the elements and in the rating scale, it was deemed better to let the respondent evaluate the information in the way she or he ordinarily evaluates information when buying a car.

fig 2

Figure 2: Orientation page for the study

Step 4: Obtain Respondents, Orient the Respondent, and Collect the Data

The respondents were provided by an on-line panel provider, Turk Prime, Inc., located in the metro New York area, with respondents from across the entire United States. A total of 251 respondents agreed to participate and completed the study, the entire process taking about three days, as different waves of invitations were dispatched. The only requirement was that the respondents had to be older than 21 years of age. No effort was made to match the sample to any target. The information about the respondents was obtained by the self-profiling classification, whose questions are shown in Table 2.

Table 2: Self profiling questions

table 2

Step 5 – Identify the ‘Discriminators’ Who Could be Swayed

The typical Mind Genomics study focuses on issues of ‘how people think about the topic.’ This study dealt with responses to a specific car brand, KIA. The objective was to identify the relevant elements which would convince a prospective customer to say YES, viz. to say ‘I will definitely purchase this KIA car’ when confronted with at least one vignette, and who would also say ‘I will definitely NOT purchase this KIA car’ when confronted with another vignette. This criterion, viz., at least one vignette driving a rating of ‘9’ and another vignette driving a rating of ‘1’, reduced the 251 respondents to 63 respondents whose ratings showed that they could be swayed strongly, both positively (assigning at least one rating of 9, Definitely Buy) and negatively (assigning at least one rating of 1, Definitely NOT Buy). Table 3 shows the distribution of these 63 respondents across the key groups.

Table 3: Base sizes of key groups of the 63 respondents whose data are analyzed

table 3

Step 6: Is There a Pattern of Covariation between Interest in Purchasing the KIA and Price Concession?

The question now concerns the pattern, if any, between the rating of purchase intent (rows in Table 4) and the desired concession from the dealer (columns in Table 4). We might think that a respondent who is ready to purchase the car would require less of a concession from the dealer, because the basic presentation of the car in the vignette is already attractive. The dealer concession would be a ‘sweetener’, but not the major driver, since the respondent has already said that she or he would buy the car (viz., a rating of 9, 8 or 7, respectively).

Table 4: Cross tabulation of the percent of respondents selecting a specific dealer concession for each level of rating assigned by the respondents. The rows add up to 100%.

table 4

The pattern which emerges from Table 4 is not what we expected.

  1. There is a linear relation between rated purchase intent and the amount desired to close the deal, but paradoxically, the relation goes in the opposite direction from what might be expected.
  2. Those vignettes rated 9 (Definitely Buy) are overwhelmingly associated with a dealer incentive of $450. The dealer incentive is not to change the interest but to close the deal.
  3. For those vignettes rated 1 (Definitely not buy), there is no incentive that will get the respondent to change her or his mind. 63% of the vignettes rated ‘1’ (definitely not buy) are associated with ‘no dealer concession can change my mind’.
  4. The pattern of dealer concessions is thus unexpected, somewhat paradoxical. People who like something (as shown by their higher purchase intent ratings) also rate the vignette ‘higher’ on the price question, viz., want a greater price concession from the dealer.

Step 7: Percent of Respondents Choosing Definitely Buy When Offered a $100 Dealer Concession

Each respondent profiled himself or herself on who the respondent is (e.g., male or female), how the person shops (frugal vs. deal seeker vs. occasional splurger), and the importance of six different factors considered when purchasing a car. Three of these were sources of information (Consumer Reports, ratings by JD Power, word of mouth from friends). The other three were aspects of the car (fuel efficiency, safety, and service).

To review first: each respondent rated 48 different vignettes on a 9-point rating scale. The scale point ‘9’ was transformed to the value 100 to denote definitely buy. The remaining ratings, 1-8, were transformed to the value ‘0’ to denote ‘not definitely buy.’ In turn, the dealer concession scale (rating #2) was converted to the actual dollar amounts. This set of transformations produces metric numbers to be used in a regression analysis, with the regressions each estimated at the level of the individual respondent. To prepare for the regression analysis, a vanishingly small random number (<10^-5) was added to each transformed number to ensure a minimum level of variation for the regression, but at a level that would not affect the coefficients of the regression model.

The final analysis estimated the relation between definitely buy and the concession price. The equation was: (Definitely Buy) = k1 (Dealer Price Concession). The coefficient k1, when scaled to a $100 concession, tells us the additional percent of definitely buy responses gained per $100 of dealer price concession.
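
A sketch of that per-respondent estimation, assuming the respondent's 48 transformed Definitely Buy values (0/100) and the 48 dealer-concession dollar amounts are available as arrays; the no-intercept slope has a simple closed form. Names are hypothetical.

```python
import numpy as np

def buy_per_100(def_buy, concession):
    """No-intercept least squares of DEF BUY (0/100) on the dealer concession
    (in dollars) for one respondent; returns the extra percent of 'definitely
    buy' responses associated with each $100 of concession."""
    y = np.asarray(def_buy, dtype=float)     # 48 values, 0 or 100 (plus jitter)
    x = np.asarray(concession, dtype=float)  # 48 dollar values
    k1 = (x @ y) / (x @ x)                   # closed-form slope through the origin
    return k1 * 100                          # scale from per-dollar to per-$100

# slope = buy_per_100(respondent_def_buy, respondent_concession)
```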

The equation was estimated for each respondent. Each respondent generates a different value of k1. Figure 3 shows the distribution of the individual coefficients. This is where the $100 goes the furthest, keeping in mind that we are looking at the distribution for each subgroup of people. The groups which are likely to be most responsive to offers are: females, deal seekers, readers of Consumer Reports, those who prize fuel efficiency, and those who prize safety.

fig 3

Figure 3: Distribution of a person’s Definitely Buy (TOP1) votes gained when a dealer gives a monthly price concession of $100. Each filled circle corresponds to a respondent. Each of the 12 key groups comprises a separate analysis. The abscissa shows percentages (0-10% additional definitely buy ratings).

Step 8 – The Effect of Repeated Exposures to Offers across the 48 Evaluations

One of the structural foundations of Mind Genomics is that each respondent is to be exposed to the right combination of vignettes, that ‘right combination’ structured by the underlying experimental design. Depending upon the specific design, the Mind Genomics study might comprise as many as 60 vignettes evaluated by a respondent (the 4×9 design: 4 questions, 9 answers or elements), or 48 vignettes (the 6×6 design used here), or 24 vignettes (the 4×4 design). Since 2019 the 4×4 design has been used increasingly frequently, the reason being the practical goal of making the respondent’s task easier. The last three years have witnessed massive oversampling by parties who want ‘feedback’ on services, and so forth.

As respondents move through their 48 ratings, do they change their criteria? It is impossible to answer this question by the simple method of repeating the same stimulus again and again, because this strategy would entirely disrupt the Mind Genomics protocol. The respondent would either assign the same rating each time or, more likely, soon terminate the experiment with irritation.

Recognizing that each respondent evaluates a unique set of vignettes, another way to answer the question about a changing criterion looks at averages at each test point, averages computed across all the respondents. For the study here we divided the vignettes into eight sequences of six vignettes each, defined as vignettes 1-6, 7-12, … 43-48. Within a single sequence we average the ratings for question #1 (purchase intent), and then average the ratings for question #2 (amount of dealer concession needed to get the respondent to say ‘buy’). Thus, each respondent generates 16 new numbers, rather than 96. We then plot the average ratings of purchase and concession on separate graphs, side by side, to show how the average rating changes as the respondent moves through the evaluations.
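
A sketch of this bookkeeping, assuming the ratings sit in a pandas DataFrame with hypothetical columns respondent, position (1-48), purchase, and concession; each respondent then contributes eight block averages per rating question, i.e., the 16 numbers mentioned above.

```python
import pandas as pd

def sequence_averages(df):
    """Average purchase-intent and concession ratings within blocks of six
    vignettes (positions 1-6, 7-12, ..., 43-48) for each respondent.
    Assumes columns: respondent, position (1-48), purchase, concession."""
    df = df.copy()
    df["block"] = (df["position"] - 1) // 6 + 1   # block index 1..8
    return (df.groupby(["respondent", "block"])[["purchase", "concession"]]
              .mean()
              .reset_index())

# means = sequence_averages(ratings_df)   # 8 rows (blocks) per respondent
```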

Figure 4 shows these scatterplots of order by rating, for the total panel, and for respondents divided by WHO they are (left panel) and by what they say is most important (right panel). The ordinate is labelled ‘new order’ to show that it comprises averages across sets of six vignettes.

fig 4

Figure 4: How purchase intent (left scatterplot) and desired price concession from the dealer (right scatterplot) change as the evaluation of the 48 vignettes proceeds. Each point is the average of 6 sequential ratings (viz., vignettes 1-6, 7-12, 13-18, etc.). The groups at the left are standard geo-demographics. The groups at the right are those who feel that the feature or benefit is extremely important.

For the most part, the curves are parallel. The key departures are:

  1. Most of the curves show decreasing interest in purchase with repeated exposure, and decreasing magnitude of desired dealer concession with repeated exposure.
  2. With repeated exposures, high income respondents defy the pattern, and show a flatter slope for dealer concession.
  3. Those who say brand is most important show no reduction in purchase intent with increasing exposure, whereas every other group does show the drop in purchase intent with repeated exposure.
  4. Those who say that warranty period is the most important show a strange pattern of increasing purchase intent and increasing requested dealer concession.

Step 9 – How Messages Drive Ratings for the Total Panel and for Pairs of Emergent Mind-sets

Our final analysis goes deeply into the messaging. A key benefit of Mind Genomics is the ability to estimate the power of individual messages, even without instructing the respondent to provide a judgment of how impactful each message might be. It is likely that the respondent would have an idea of what is very important, such as safety, price, warranty, etc., or at least so the industry, its marketers and its researchers, as well as the advertising agencies, would like to believe. Whether one is really cognizant of what is important, including the respondent herself or himself, remains an ongoing issue, not solved even after a century.

The benefits of Mind Genomics emerge when we consider that importance need not be stated, but can be statistically inferred from the ability of an element to ‘drive’ a response, whether the response be the rating of interest in buying the car based on the vignette, or the dollar value of dealer concession that the element would command. We assume that in the case of DEF BUY, a high value associated with the element means that the element is a powerful driver of purchase. In contrast, in the case of PRICE, we assume that a high value associated with the element means that if the message were to include that element, the dealer had better be ready to give a bigger concession. In other words, with DEF BUY, bigger is better; with PRICE, smaller is better.

Our final analyses relate the presence/absence of the 36 elements to Top1, at the level of the individual respondent: DEF BUY = k1(A1) + k2(A2) + … + k36(F6). Each of our 63 respondents generates an individual equation, made possible by the underlying experimental design associated with the data of each separate respondent. Unlike previous studies, which included an additive constant, the individual-level (and subsequent group-level) modeling does not include an additive constant. The constant was not estimated so that the estimated coefficients for DEF BUY could be compared directly with the estimated coefficients for PRICE. To do so, we run the same type of linear modeling for PRICE versus the elements, first at the level of the individual, and then at the level of the group.
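
A minimal sketch of this individual-level modeling follows, assuming for each respondent a 48 × 36 matrix of 0/1 element codes and a vector of 48 transformed ratings; the variable names are illustrative, and the fit deliberately omits the additive constant, as described above.

```python
# Minimal sketch (numpy), assuming per-respondent arrays: X is the 48 x 36 design matrix
# of 0/1 element codes (A1..F6) and y is the 48 transformed ratings (DEF BUY or PRICE).
# Names are illustrative, not taken from the original study files.
import numpy as np

def fit_without_constant(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Least-squares fit with no additive constant, so DEF BUY and PRICE coefficients
    # share the same origin and can be compared directly.
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs  # one coefficient per element, k1..k36

# One equation per respondent; stack the 63 coefficient vectors into a 63 x 36 matrix:
# coef_matrix = np.vstack([fit_without_constant(X_r, y_r) for X_r, y_r in respondents])
```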

The starting database for each variable (DEF BUY and PRICE, respectively) comprised 63 rows of data, one row per respondent. For each dependent variable, in turn, a cluster analysis divided the 63 respondents into two groups, based upon the pattern of coefficients. The clustering, k-means clustering [10], used the quantity (1 - Pearson correlation) to estimate the 'distance' between every pair of individuals. The k-means clustering then put the 63 individuals into two non-overlapping sets, attempting to make the individuals within a cluster similar in the pattern of their coefficients (low distance between people), while at the same time making the distance between the centroids of the clusters, viz., the average coefficient for each of the 36 elements in each of the two clusters, as high as possible.
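
The following is a minimal sketch of this clustering step. Standard k-means libraries work with Euclidean distance, so the sketch row-standardizes each respondent's coefficient vector, which makes squared Euclidean distance proportional to (1 - Pearson correlation); this is an approximation of the procedure described, not the program actually used in the study, and `coef_matrix` is the 63 × 36 array of per-respondent coefficients assumed above.

```python
# Minimal sketch (numpy/scikit-learn); coef_matrix is assumed to be the 63 x 36 array
# of per-respondent coefficients. Row-standardizing makes Euclidean k-means track the
# (1 - Pearson correlation) distance between respondents.
import numpy as np
from sklearn.cluster import KMeans

def cluster_respondents(coef_matrix: np.ndarray, n_clusters: int = 2, seed: int = 0):
    # Z-score each respondent's coefficient vector (row-wise).
    z = (coef_matrix - coef_matrix.mean(axis=1, keepdims=True)) / coef_matrix.std(axis=1, keepdims=True)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(z)
    return km.labels_  # cluster assignment (mind-set) for each respondent

# labels_defbuy = cluster_respondents(defbuy_coefs)   # Mind-Set 1 vs Mind-Set 2
# labels_price  = cluster_respondents(price_coefs)    # Mind-Set 3 vs Mind-Set 4
```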

Clustering is purely formal and mathematical, attempting to satisfy mathematical criteria. Clustering is only a heuristic; many different methods exist for clustering, and many different measures of pairwise distance exist within each method. The choice of k-means clustering and of the distance measure (1 - Pearson correlation) is simply one choice, with many other choices equally valid. Good research practice extracts as few clusters as possible (parsimony) while at the same time ensuring that each cluster 'tells a story' (interpretability). Parsimony is very important; one could tell better and better stories with more and smaller clusters, but the power of clustering to reduce the data to a manageable set would decrease, and general insights would be obscured by a wall of numbers.

Once the clustering is complete, the clustering program assigns each respondent to one of the two clusters for DEF BUY (called Mind-Set 1 and Mind-Set 2, respectively). A second run of the clustering program, based on PRICE, assigns the same respondents to one of two other clusters (called Mind-Set 3 and Mind-Set 4, respectively).

Table 5A shows the total panel and MS1 and MS2, the two emergent mind-sets (clusters) for DEF BUY. Table 5B shows the total panel and MS3 and MS4, the two other emergent mind-sets, for PRICE. All coefficients are shown for the Total Panel, strong and weak performers alike. For the mind-sets, however, weak coefficients are simply deleted to make the patterns emerge more clearly. We call Table 5A homo emotionalis, because we consider the respondents to assign their ratings based upon their inner feelings about buying. We call Table 5B homo economicus, because the concession data invoke economics, and a presumably more rational way of thinking.

Table 5A: Clustering based on DEF BUY coefficients (purchase intent; homo emotionalis). Elements sorted by coefficients for MS1 and then MS2

table 5a

Table 5B: Clustering based on PRICE coefficients (homo economicus). Elements sorted by coefficients for MS3 and then MS4

table 5b

DEF BUY MS1 – Focus on the car

DEF BUY MS2 – Focus on the driver and the situation

PRICE MS3 – Focus on the driving feeling of a good product, good experience, good interaction with the dealer

PRICE MS4 – Responds to a deferential dealer and a boast-worthy car

The clustering approach, doable as a short intervention in the marketing process, ahead of the messaging efforts, enables the company to increase the likely fit between the buyer and the salesperson. The potential exists for developing a knowledge-base of messaging (viz., a ‘wiki’ of the mind) for the topic of sales negotiations [11]. The results shown here suggest that such a wiki could be created rapidly, inexpensively, and scaled across different topics in the automobile category, and across countries. Simply knowing that people are different, and having a sense of ‘what works’ in the negotiation, available both to buyers and sellers, might produce a new dynamic in the world of marketing and sales.

An Update on the Purchasing of Cars – Changes Occurring Since the Study was Run

The authors wish to note that the data analyzed for this study were collected prior to the coronavirus pandemic, which began in March 2020. During the pandemic and up to the time of publication, a lack of critical computer chips, a decline in new supply, and high demand for both new and used vehicles conspired to create a temporary situation in which demand outstripped supply. With vehicles of any type scarce, pricing for any car is at historic levels. Recent used cars, for example, are selling at or near their original selling prices, and new cars are being sold at premiums over MSRP. For these reasons, our findings should be seen as reflecting the pre-pandemic market. We expect that after the shortages ease, the market will return to its historical dynamics and that our findings will hold.

Design for an ‘Updatable’ Mind Wiki of the World of Automobile Purchasing

We might say that Mind Genomics is a disciplined hypothesis-generating method which, even if it does not emerge with hypotheses about the way a specific part of the world works, nonetheless provides a solid, archival database of the world of the mind, for a common behavior, in a known society, at a defined time, under specific circumstances. The fact that these Mind Genomics studies are easy to do, inexpensive, and rapid puts the creation of a database of the mind, a 'Wiki of the mind of everyday situations', well within the reach of virtually every serious researcher.

What might this wiki look like, what would be its time and cost to develop, and, most of all, what might this wiki add to the knowledge of people? If we move away from the world of the hypothetico-deductive and move to the systematic collection of data, such as the features of a KIA, we might lay out the wiki as follows:

  1. Basic design of a simple study = 4 questions, 4 answers per question, one rating scale (relevant for the situation)
  2. Number of situations = 7 (e.g., thinking about a car, searching for information about a car, visiting a dealer, sitting down with the dealer, reading information about cars, closing the deal, specifying the financial arrangement, specifying service for after-purchase). For each situation, an in-depth set of, say, 16 elements
  3. Number of brands = 10 (for each brand the same information, but the study is totally brand specific, including 'no brand at all' as a brand)
  4. Number of countries = 10 (the study is replicated in precisely the same way in each of 10 different countries, with the same car brand, or a matching car brand if necessary)
  5. Number of respondents per study = 100
  6. Estimated time using Mind Genomics (BimiLeap.com) = six months (assuming a team of individuals does the studies)
  7. Published costs (assuming respondents are easy to find) – $6/respondent, or $600/study
  8. Number of studies and cost per country – 70 studies x $600 = $42,000
  9. Number of countries – 10, or 700 studies x $600 = $420,000 for the entire wiki (plus time). The number of respondents can be increased by half, to 150 per study, for an additional $210,000 (see the arithmetic sketch below)
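
To keep the arithmetic of the list transparent, the following minimal sketch reproduces the cost figures above from the stated assumptions; nothing beyond the numbers in the list is assumed.

```python
# Minimal sketch of the wiki cost arithmetic; all figures come from the list above.
situations, brands, countries = 7, 10, 10
respondents_per_study, cost_per_respondent = 100, 6

studies_per_country = situations * brands                      # 70 studies
cost_per_study = respondents_per_study * cost_per_respondent   # $600
cost_per_country = studies_per_country * cost_per_study        # $42,000
total_studies = studies_per_country * countries                # 700 studies
total_cost = total_studies * cost_per_study                    # $420,000

# Increasing the base size by half (100 -> 150 respondents per study):
extra_cost = total_studies * 50 * cost_per_respondent          # $210,000
print(cost_per_country, total_cost, extra_cost)
```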

Discussion and Conclusions

Mind Genomics provides a tool by which to study the psychology of the everyday, in a way that might be called 'from the inside out.' The different analyses presented here are meant as a vade mecum, a guide to what might be learned in a simple Mind Genomics cartography. The cartography is exactly what it says, the act of mapping. There is no hypothesis testing in a Mind Genomics study, at least no formal hypothesis testing. Rather, the study, indeed the experiment, is set up to observe everyday behavior, but in a situation where one can easily uncover relationships among behaviors and link behavior (or at least verbal judgments) to the nature of the test stimuli [12,13].

With the foregoing as a postscript, what then can we say we have learned, or, more profoundly, what are the types of information that Mind Genomics has provided which allow us to claim it as a valid method for science? It is certainly not in the tradition of the hypothetico-deductive system, which observes nature, creates a hypothesis about what might be happening, sets up the experiment, and through the experiment confirms or disconfirms that hypothesis. The hypothetico-deductive system is the most prevalent, popular way to advance science, building one block at a time, fitting that block into the 'wall of knowledge', and creating an understanding of the world. The foregoing is hypothesis testing.

When we look at the sequence of analyses presented here, we might see a different pattern. The pattern would not be one of offering hypotheses about the way the world works, even the world of automobile negotiation. We might create an experiment on negotiation to prove a point, such as the conjecture that a person who is ready to say YES wants more of a price concession than a person who is not ready to say yes. That would be the hypothesis, perhaps buttressed by reasons ‘why’.

References

  1. Beenen G, Barbuto JE Jr (2014) Let’s make a deal: A dynamic exercise for practicing negotiation skills. Journal of Education for Business 89: 149-155.
  2. Page D, Mukherjee A (2007) Promoting critical-thinking skills by using negotiation exercises. Journal of Education for Business 82: 251-257.
  3. Huang SL, Lin FR (2007). The design and evaluation of an intelligent sales agent for online persuasion and negotiation. Electronic Commerce Research and Applications 6: 285-296.
  4. Wu WY, Liao YK, Chatwuthikrai A (2014) Applying conjoint analysis to evaluate consumer preferences toward subcompact cars. Expert Systems with Applications 41: 2782-2792.
  5. Kolvenbach C, Krieg S, Felten C (2003) Evaluating brand value: A conjoint measurement application for the automotive industry. In: Conjoint Measurement. Springer, Berlin, Heidelberg, pp: 523-540.
  6. Gofman A (2011) Consumer driven innovation in website design: Structured experimentation in landing page optimization. International Journal of Technology Marketing 6: 72-84.
  7. Gofman A, Moskowitz HR, Mets T (2011). Marketing museums and exhibitions: What drives the interest of young people. Journal of Hospitality Marketing & Management 20: 601-618.
  8. Moskowitz HR, Gere A (2020) Selling to the ‘mind’ of the insurance prospect: A Mind Genomics cartography of insurance for home project contracts.
  9. Moskowitz HR, Gofman A (2007) Selling Blue Elephants: How to Make Great Products that People Want Before They Even Know They Want Them. Pearson Education.
  10. Likas A, Vlassis N, Verbeek JJ (2003) The global k-means clustering algorithm. Pattern Recognition 36: 451-461.
  11. Gofman A, Moskowitz HR (2010B) Improving customers targeting with short intervention testing. International Journal of Innovation Management 14: 435-448.
  12. Moskowitz H, Baum E, Rappaport S, Gere A. (2020) Estimated stock price based on company communications: Mind Genomics and Cognitive Economics as knowledge-creations tools for Behavioral Finance.
  13. Gofman A, Moskowitz H (2010A) Isomorphic permuted experimental designs and their application in conjoint analysis. Journal of Sensory Studies 25: 127-145.

The Mind of the Reader: Mind Genomics Cartographies of E-Readers versus ‘New’ Magazines

DOI: 10.31038/PSYJ.2022414

Abstract

In two separate experiments, groups of 50 respondents evaluated vignettes comprising systematically varied combinations of elements, experiment 1 dealing with the content of magazines and experiment 2 dealing with the features of an e-book reader. The vignettes were evaluated on 9-point Likert scales. Equations relating the presence or absence of the 36 elements in each experiment to the ratings revealed unusually high coefficients. Clustering the patterns of coefficients revealed two mind-sets for the magazine contents and three mind-sets for the e-book reader. The mind-sets were not diametrically opposite, in the way the clustering would show for most products. Rather, the mind-sets suggested different patterns of preference, instead of preference/rejection. The argument is made that for many products with positive features, mind-set segmentation will reveal groups differing in the order of preference, with most features liked, rather than revealing the more typical finding that the mind-sets exhibit strong and opposite patterns of acceptance/rejection.

Introduction

The 21st century abounds in media, formerly just printed and broadcast, now electronic. Over the past decades readers have been introduced to the benefits of e-readers, essentially small computers created for the presentation of written material of many sorts, from books presented as searchable files to pictures, presentations, audio books, and the like. At the same time, the 21st century abounds in the printed word, in traditional media such as newspapers, magazines, books, and so forth.

The focus of the two studies reported here was on the response to magazines (study #1) and to e-book readers (study #2), from the point of view of first- and second-year college students entering the world of higher education. The idea was to find out what features they thought would be relevant to people, and in turn, how people felt about combinations of these features in small vignettes (descriptions of offerings) evaluated by respondents.

The academic literature as well as the business literature focuses on who reads magazines [1] and who uses e-book readers, and the reasons [2-6]. The studies on media give one a sense of looking from the outside in, from the point of view of a third-party observer trying to make sense of a situation and reporting on the various features of the situation. The observer is describing what she or he sees, and the potential organizing patterns which might be emerging, based on what is observed and on the intuition of the observer. There is a sense of the 'inside of the mind', but not a feeling of immediacy, the type of immediacy when one reads a description of a product or service and feels an excitement, a sense of 'that's just what I want.'

Rationales for the Two Studies Reported Here

The original studies were conducted as part of a set of studies at Queens College (CUNY, NY), by students turned experimenters. The focus was on exploring the world of the everyday. One remarkable result emerged from the two studies: the magazines in the study were perceived by many of the respondents as fairly boring. Many of the elements were simply uninteresting, and in fact 22 of the respondents did not end up liking anything that was being offered. In contrast, all the elements in the e-book reader study were considered interesting. Thus, it was of interest to compare the two.

The Mind Genomics process turns what was a typical questionnaire into an experiment. The questionnaire and the experiment both try to uncover what respondents feel to be important. The questionnaire works by presenting the respondent with a single set of stimuli, messages or elements presenting different ideas, and analyzing the ratings. The stimuli may be of the same type, presenting alternatives of a single idea, or the stimuli may be of entirely different categories of messages. In contrast, Mind Genomics can be said to be an experiment in which the respondent rates combinations of messages, simulating a typical reality [7-9].

The approach is illustrated by a series of steps, each step comparing the two studies.

Step 1: Select the Topic, the Questions, and the Answers (elements)

Mind Genomics works with the experience of the everyday. It is critical, therefore, to select a delimited topic, and to create a story framed by questions, in the manner that a story might be related by a person. The questions provide the structure to move the story forward. The story need not be the type of story with a plot; rather, it merely needs to provide a set of smaller 'sub-topics', aspects of the main topic, but aspects that can be dealt with by simple stand-alone phrases which 'describe.' The topic is introduced to the respondent, so the respondent knows to what the test stimuli pertain. The questions are never shown to the respondent, but simply serve as an aid to creating the answers, the elements, which will be shown to the respondent in test combinations.

Table 1A shows the structure of topic, questions, and answers for the magazine, something with which people were very familiar at the time of the study, in 2012. The topic was particularized to a subscription to the magazine, rather than interest in general in the magazine. The elements would be looked at in the light of a call to action, to subscribe or not to subscribe to the magazine.

Table 1A: Questions and answers (elements) for the magazine

table 1A(1)

table 1A(2)

Table 1B shows the structure of topic, questions, and answers for the E-book reader. At the time of the study, E-readers were coming into vogue. Amazon had introduced the Kindle series of E-book readers, so the product idea was becoming better known. Technology was evolving quickly. The focus of the study was the features and capabilities of the product.

Table 1B: Questions and answers (elements) for the E-Book Reader.

table 1b(1)

table 1b(2)

Allowing people to collaborate, especially students who are as yet unfettered by the cynicism of adults, generates ideas which run the gamut. The elements shown in Tables 1A and 1B emerged from students, not from professional copywriters, and not from professional 'creatives' whose job it is to come up with winning ideas. The Mind Genomics system encourages the exploration of new ideas, often ideas in the minds of young people. It will be interesting to measure how well these ideas perform; they are certainly different from many of the tried-and-true ideas proffered as the output of professionally moderated creative sessions. The performance is measured empirically below.

Step 2: Combine the Elements into Short, Easy to Read Vignettes, Using Experimental Design

Traditional efforts to teach the 'scientific method' are founded on the belief that a variable must be isolated and studied, but only after all of the possible variation, the 'noise' around the variable, has been eliminated, either by suppressing the noise (testing the element by itself in the simplest form) and/or by averaging out the noise (e.g., testing with dozens or even hundreds of people, so that the individual variation averages out).

Mind Genomics was founded on the basic tenet that the judgment of real-world stimuli should be done in a way which best resembles the real world, namely mixtures: identify the variables to be tested, then combine them in a way which resembles the type of compound stimulus one encounters in nature. By combining the different elements in a structured way, using an experimental design which mixes and matches the different independent variables, one presents the respondents with more realistic test stimuli. We encounter mixtures all the time and react to them. Thus, the mixtures tested by the respondents are more similar to what the person would face. The key difference is that the experimental design permits the researcher to deconstruct the reaction to this 'combination' into the contributions of the components, the variables of interest.

The requirements for a Mind Genomics experimental design are that the elements should appear equally often, that the vignettes be 'incomplete' (viz., some vignettes lack elements, or answers, from a question), that the elements be statistically independent of each other, and that the experimental design be valid down to a base size of one respondent. Finally, the experimental design must be permutable, so that by permuting the elements or answers within a single question new combinations emerge, based upon the same design structure [10].

It is important to note that with the foregoing approach, each respondent evaluates a different set of 48 vignettes, prescribed by the underlying experimental design (called the 6×6 design: six questions, six answers or elements per question). With 50 or so respondents, there are 50 × 48, or 2,400, vignettes evaluated by the respondents, most of which are different from each other. In that way the Mind Genomics system is metaphorically like the MRI machine, which takes pictures of the same tissue from different angles and combines these pictures by computer to arrive at a single three-dimensional image of the underlying tissue.
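
A minimal sketch of the permutation idea behind such designs follows, assuming a hypothetical `base_design` matrix of 48 vignettes × 36 binary element codes that already satisfies the requirements above; the matrix itself and the published permutation algorithm [10] are not reproduced here.

```python
# Minimal sketch (numpy): permute element labels within each question so every
# respondent receives a structurally identical design but different combinations.
import numpy as np

def permuted_design(base_design: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # base_design: 48 x 36 matrix of 0/1 codes, six questions x six elements per question.
    design = base_design.copy()
    for q in range(6):
        cols = np.arange(q * 6, q * 6 + 6)
        # Shuffle the six columns belonging to question q; balance, incompleteness and
        # independence are preserved because only the labels move, not the structure.
        design[:, cols] = design[:, rng.permutation(cols)]
    return design

rng = np.random.default_rng(0)
# respondent_designs = [permuted_design(base_design, rng) for _ in range(50)]
```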

The output of the experimental design appears in Figure 1A, showing a vignette for the magazine, and Figure 1B showing a vignette for the e-book reader.

fig 1a

Figure 1A: Example of a four-element vignette for the magazine study

fig 1b

Figure 1B: Example of a 3-element vignette for the e-book reader

Step 3: Execute the Mind Genomics Study on the Internet

Beginning in the late 1990s, a great deal of consumer research migrated to the web. Companies found that the data generated by web-based interviews seemed to be just as valid as data generated by in-person interviews and mailed-out paper questionnaires. Establishing web interviews as a valid, and indeed far less expensive, way to obtain data gave a boost to interviews which need technology embedded in their backbone. Mind Genomics is one of the approaches which benefited, because each respondent was to evaluate a unique set of elements. The only practical way was to have a computer combine the elements in 'real time', following the underlying experimental design. The process became streamlined over time. The respondent would log in, following a link, be presented with an orientation screen, and then a set of systematically varied combinations, created 'in real time', at the respondent's computer.

Figure 2A shows the orientation screen for the magazine study; Figure 2B shows the orientation screen for the e-reader study. The respondents were recruited by an online panel provider, Turk Prime, Inc., which provided respondents in the United States. The compensation to the respondents was set by Turk Prime, Inc. as part of its internal policies. These policies, as well as the identities of the respondents, were not available through the service. The only guarantee was that the respondents were vetted by Open Venue Ltd. as part of its panel.

fig 2a

Figure 2A: Orientation for the magazine study

fig 2b

Figure 2B: Orientation screen for the e-reader study

Figures 2A and 2B show the orientation screens. Very little information is given regarding the purpose of the study or the rationale for selecting the elements; just the topic is given. The rest of the screen provides information about the number of questions (two) and their type (a scalar Likert scale for Question 1, presented here; selection of an emotion for each vignette for Question 2, not presented here).

The orientation screen goes out of its way to reassure the respondent that all the screens are different from each other, and that the study will take 10-15 minutes. These two reassurances were put in after early experience on the Internet, when respondents kept saying upon exit that the concepts they evaluated seemed to have many repeats (not possible with the design), and that they wanted to know how long the interview would be. Rather than giving a precise time, it was deemed better to give them a reasonable range of 10-15 minutes. Most respondents finished earlier.

Observations of respondents doing these types of studies in a central location revealed that the respondents often begin by trying to 'outsmart' the research, trying to figure out the appropriate answer. With single elements rated, this outsmarting or gaming of the system is possible. With 48 different combinations, however, it is impossible for the respondent to game the system. The respondent may begin with an effort to outsmart the system, but almost universally the respondent relaxes, and simply answers in what the respondent feels is a disinterested way, barely paying attention. That is precisely the right state for the respondent, because in that state the answers come from the heart, without being edited to be politically correct.

Step 4 – Acquire the Data and Prepare It for Analysis

Each respondent evaluated 48 different vignettes, constructed according to an experimental design. The respondent first rated the vignette in terms of interest using a 9-point category or Likert scale (subscribe, for the magazine; purchase, for the e-book reader). The respondent then rated the vignette in terms of the emotion experienced after reading the vignette. Those data are not presented here.

The foundations of Mind Genomics lie in the fields of experimental psychology, consumer research, and statistics, respectively. Experimental psychologists do not usually convert the data from Likert scales, preferring the granularity, which allows statistical analysis to uncover more statistically significant effects using tests of difference. In contrast, users of Mind Genomics data, typically managers, want to use the data for decision making (e.g., use/do not use; go/no go). It is important for them to interpret the data to make their decision. All too often, a manager presented with averages across people from a project using Likert scales will begin the interaction by asking a question like 'what does a 6.9 average on the rating scale mean, and what should I do?'

The tradition in consumer research and in Mind Genomics, followed here, transforms the Likert scale to a two-point scale, 0 and 1 or 0 and 100, respectively. The two transformed scales, 0-1 and 0-100, are different expressions of the same data, presenting the data either with decimals (0-1) or without decimals (0-100). We chose the 0/100 transformation. Ratings of 1-6 were coded 0, ratings of 7-9 were coded 100, and a vanishingly small random number (<10^-5) was added to ensure that the transformed rating would always show variation across the 48 vignettes for a single respondent. This prophylactic measure ensures that one can use regression modeling at the level of the individual respondent, even in those cases where the respondent confined the ratings to one region, viz., 1-6 or 7-9, respectively.
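
A minimal sketch of this transformation, with illustrative variable names; the cut point (7-9 coded as 100) and the vanishingly small random addition follow the description above.

```python
# Minimal sketch (numpy) of the 0/100 transformation of the 9-point Likert ratings.
import numpy as np

def transform_ratings(likert: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # Ratings 1-6 -> 0, ratings 7-9 -> 100 ('top-3 box' on a 0-100 scale).
    binary = np.where(likert >= 7, 100.0, 0.0)
    # Add a vanishingly small random number so a respondent's 48 transformed ratings
    # are never constant, which would make the individual-level regression degenerate.
    return binary + rng.uniform(0, 1e-5, size=binary.shape)

rng = np.random.default_rng(0)
# transformed = transform_ratings(np.array([2, 7, 9, 5, 6, 8]), rng)
```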

Step 5 – Create Individual Level Models, through Regression, Relating the Presence/Absence of the Elements to the Transformed Response

It is at Step 5 that the real analysis begins, an analysis which is virtually mechanical in nature, yet which repeatedly shows how the consumer mind makes decisions. The data were prepared at Step 4. Step 5 uses OLS (ordinary least squares) multiple regression to relate the presence/absence of the 36 elements to the transformed rating. The equation is expressed as:

Binary Transformed Rating = k0 + k1(A1) + k2(A2) + … + k36(F6)

For those respondents whose ratings were all between 1 and 6, the coefficients were all near 0 and the additive constant was around 0 as well. For those respondents whose ratings were all between 7 and 9, the coefficients again were all near 0, and the additive constant was around 100. Out of 52 respondents, 22 showed this pattern for the magazine; none showed this pattern for the e-book reader. The data from these 22 respondents were eliminated from the database, leaving only respondents who showed variation in their transformed binary response.
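
A minimal sketch of the individual-level model with the additive constant, together with the screen for respondents whose ratings never left one region of the scale; matrix and variable names are illustrative (X_r is the 48 × 36 design matrix and y_r the 48 transformed ratings for one respondent).

```python
# Minimal sketch (numpy): per-respondent OLS with an additive constant, plus the
# screen for 'flat' respondents described above.
import numpy as np

def fit_with_constant(X_r: np.ndarray, y_r: np.ndarray):
    # Prepend a column of ones so k0 (the additive constant) is estimated.
    X_aug = np.column_stack([np.ones(len(y_r)), X_r])
    coefs, *_ = np.linalg.lstsq(X_aug, y_r, rcond=None)
    return coefs[0], coefs[1:]          # additive constant k0, element coefficients k1..k36

def keep_respondent(y_r: np.ndarray) -> bool:
    # Drop respondents whose binary ratings were all 0 or all 100; their coefficients
    # carry no information about the elements.
    rounded = np.round(y_r)             # strip the tiny random additive noise
    return not (np.all(rounded == 0) or np.all(rounded == 100))
```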

Step 6 – Cluster the Respondents into Either Two Groups (Magazine) or Three Groups (e-Book Reader)

Step 6 attempts to divide the respondents in a study into clusters, so that the respondents within a cluster are 'similar' to each other, while at the same time the patterns of the 36 average coefficients are very different between the two clusters, or across the three clusters. The process can be done very easily using k-means clustering [11]. The clustering program returns the assignment of each respondent to exactly one of the two clusters (for the magazine), or one of the three clusters (for the e-book reader). Afterwards, one equation is run for all the respondents in a study, along with two separate equations for the respondents in each of the two mind-sets (magazine), or three separate equations for the respondents in each of the three mind-sets (e-book reader).

The clustering procedures are mathematics-based, attempting to bring some definable order into what might otherwise be a blooming, buzzing confusion, in the words of noted Harvard psychologist, William James. The clusters themselves do not have any concrete reality, but simply represent intuitively reasonable ways to divide objects. Clustering can be done on anything, as long as the measure(s) are comparable across the different objects.

When we look at the clusters, recognizing that we are dealing with a mathematically based system, our judgment should be based on at least two criteria. The first criterion is parsimony. We know that we will get perfect clustering if each of our respondents becomes her or his own cluster. That would defeat the purpose. The idea is to create as few clusters as possible, to be as parsimonious as possible, even at the cost of some 'noise' in the system which makes the clustering far less than perfect. Thus, the first rule is: the fewer the number of clusters, the better. The second criterion is interpretability, that the clusters should each tell a story. One may want the story to be tight, meaning more clusters and less parsimony. Or one may allow the story to be less tight, with more open issues, but with more parsimony, viz., fewer clusters. It is always a trade-off: more parsimony versus more interpretability. There is no right answer. In this study, the effort is towards parsimony, given the range of possible elements that can fit either in a magazine or an e-book reader [12].

One last issue remains to be mentioned: the nature of the variables (elements) considered in the clustering. The traditional approach in Mind Genomics has been to use the coefficients of all of the elements, but not the additive constant. There is always the potential that the clustering might be unduly affected by the nature of the elements selected. With 36 elements, one would hope that the elements deal with different aspects in equal ways. But what happens, for example, if most of the elements deal with usage, and only a few elements deal with product features? Would the clustering generate the same clusters were the elements to be configured differently, with only a few elements dealing with usage, and most elements dealing with product features? In other words, is the mind-set segmentation affected by the distribution of the topics dealt with in the study?

To answer the foregoing question about the nature of the variables used in clustering, each study was analyzed twice, AFTER the respondents with all coefficients around the value 0 were eliminated from the data. The first clustering was done with the original 36 coefficients. Both studies comprised six sets of six elements each, so the clustering was comparable.

The second analysis reduced the dimensionality of the 36 elements using principal components factor analysis [13]. Even though the 36 elements were statistically independent of each other by design, the pattern of 36 coefficients shows substantial co-variation, simply because similar elements generated similar patterns. The PCA isolated eight factors for the magazine subscription and 15 factors for the e-book reader. The nature of the factors is not important; what matters is that the factors are statistically independent of each other. The factors were rotated by Quartimax to make the data matrix as simple as possible. Each respondent was then located in the 8-dimensional factor space for the magazine, or the 15-dimensional factor space for the e-book reader. After the factor spaces were created, the clustering was done again, with two mind-sets extracted for the magazine and three for the e-book reader.
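
A minimal sketch of the factor-based clustering follows, assuming the same per-respondent coefficient matrices as before; PCA stands in for principal components factor analysis, and the Quartimax rotation step is omitted for brevity, so this is an approximation of the analysis described rather than a reproduction of it.

```python
# Minimal sketch (scikit-learn): locate respondents in a reduced factor space, then
# cluster on the factor scores rather than on the raw 36 coefficients.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_on_factors(coef_matrix: np.ndarray, n_factors: int, n_clusters: int, seed: int = 0):
    # Project each respondent onto n_factors orthogonal components (8 for the magazine,
    # 15 for the e-book reader in the text), then run k-means on those scores.
    scores = PCA(n_components=n_factors, random_state=seed).fit_transform(coef_matrix)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(scores)
    return km.labels_

# magazine_mindsets = cluster_on_factors(magazine_coefs, n_factors=8, n_clusters=2)
# ereader_mindsets  = cluster_on_factors(ereader_coefs, n_factors=15, n_clusters=3)
```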

Step 7: Interpreting the Results – Magazine

Table 2A shows the results for the magazine based upon the clustering into two groups. Three groups did not produce any clearer result. The "Total Panel" data show all coefficients, positive and negative. For the mind-sets, we show only the very strong positive coefficients, 15 or higher.

Table 2A: Coefficients for the magazine, for total and two mind-sets, based on using all 36 elements for clustering.

table 2a(1)

table 2a(2)

The three groups, Total, Mind-Set 1, and Mind-Set 2, generate similar, low values for the additive constant, 16-20. The additive constant is the conditional probability of a person wanting to subscribe to the magazine in the absence of elements. The underlying experimental design ensured that each vignette would comprise 3-4 elements, never zero elements. The additive constant is thus a convenient parameter, estimating the intercept, the likely score in terms of 'top 3' that would be obtained in the impossible case of a vignette with no elements.

Table 2B shows the same type of analysis, this time based on the locations of the respondents on eight independent factors (dimensions), rather than on the 36 elements. Comparing the two types of segmentation, the first based on all 36 elements and the second based on the elements after factor analysis, it is clear that the clustering generates clearer results when the original data are used, confirming the insights of others focused on the practical uses, opportunities, and pitfalls encountered in clustering [14,15]. In reality, Table 2A, based on all 36 elements, suggests one major mind-set, those interested in experiences. The other mind-set barely enters the picture, with only one element, which scores near the bottom cutoff. Table 2B shows the same pattern as well [16].

Table 2B: Coefficients for the magazine, for total and two mind-sets, based on using all eight factors for clustering, factors derived from the 36 elements.

table 2b(1)

table 2b(2)

The final noteworthy finding in this study of magazine content is the unusually large number of very strong elements: nine of thirty-six, one quarter, have coefficients of +15 or higher. This is an unusual finding, and may well be attributed to the creative abilities of younger people, ages 17-23, focusing on what is important to them. What is important is the specific, the concrete, the focused feature, not the grand abstraction that a marketer or 'creative' in an agency would propose as a coherent, summarizing theme. The respondents want specifics.

Step 8: Interpreting the Results – E-Book Readers

Table 3A shows the results for the e-book reader based upon the clustering into three groups. Unlike the findings for the magazine, the three mind-sets for the e-book reader made sense. Once again we see low additive constants. When we divide the respondents into mind-sets, based either upon the original 36 elements or upon the 15 emergent factors, we see two very low additive constants and a third additive constant around 27.

Table 3A: Coefficients for the e-book reader, for total and three mind-sets, based on using all 36 elements for clustering.

table 3a(1)

table 3a(2)

Table 3B: Coefficients for the e-book reader, for total and three mind-sets, based on using all 15 factors for clustering, factors derived from the 36 elements.

table 3b(1)

table 3b(2)

Like the results for the magazine in Tables 2A and 2B, we find that some coefficients are quite high, some of the highest ever recorded for a Mind Genomics study. The hypothesis proffered in the previous section may still hold, viz., that having young, college-age students create the elements is the secret to strong-performing elements. It may be that the students think in a more concrete, feature-oriented way, a way which generates a great deal more interest than professional creatives, who may think of 'grand solutions' rather than of specific features. It may also be that the topic of e-book readers is by its nature simply far more interesting and au courant.

Discussion and Conclusion

Why High Coefficients?

The most surprising outcome from these two studies is the emergence of elements with exceptionally high coefficients. The studies were run in 2012, a decade ago, but that does not provide an explanation for the strong positive coefficients. Hypotheses abound in the absence of fact. We have only two examples. What is common about them is that the elements were provided by young people (ages 18-21) rather than by professionals, viz., the so-called highly paid 'creatives' in the marketing companies and advertising agencies, and that the topics speak to presentations of information, capabilities given to the reader or the user. That is, the elements are fundamentally 'interesting' to the reader, not simply recitations of what is. There is a sense of 'excitement', perhaps because we are talking about items with clearly interesting, people-oriented features. There are no elements dealing with 'good practices', elements that might be necessary in an offering but which really do not convince.

The notion that the topic is interesting certainly has merit in the world of Mind Genomics. Most Mind Genomics studies deal with social or medical issues, issues that are not 'interesting', nor issues that people would pay for. Social problems and medical problems are issues about which one gathers information. The elements in this study, in contrast, are used to excite a buyer to buy the product. There is no sense of elements put in because they are legally necessary, or for completeness as one of the recommended best practices.

Polarized versus Non-polarized Mind-sets

As noted above, the unusually high coefficients emerging from the total panel for some elements, and the exceptionally high coefficients emerging several times from the separate mind-sets, suggest that we are dealing with a new type of preference pattern, not frequently seen in Mind Genomics, but one easy to recognize. We are dealing with what one might call the 'pizza phenomenon'. Most people love pizza. It is the toppings which differentiate people. For most people it is a matter of order of preference, which varies from person to person. The result is that the total panel generates strong liking of the pizza, with the differentiator being the rank order of preference of the toppings. There are people who actively dislike certain toppings, but for the most part the mind-sets that would emerge from a study of pizza are those representing different rank orders of items already liked.

In contrast to the pizza phenomenon, where the mind-sets are simply different patterns of liking of the same elements, there are situations where the person likes one element but hates another. This pattern is very different from the pizza pattern; it is more similar to the pattern of likes and dislikes of flavors. Flavors themselves strongly polarize people. Some people love a certain flavor, whereas others hate that flavor. One hears those words again and again.

Let’s move this analogizing to the topics of e-book readers and magazine subscriptions. For the most part the coefficients are positive. There are relatively few elements which are strongly negative, and there are no moderately negative elements for the e-book reader. Here are the most negative elements for the magazine:

C4   Sneak Previews of the upcoming year in music and entertainment   (-6)

A3   Executives read it… Uneducated ones look at the pictures   (-6)

D5   Social network pages with up to the hour updates that can be discussed with friends   (-7)

A2   Rockers read it. Pop Stars read it.   (-9)

The pattern emerging for both the magazine (less so) and the e-book reader (more so) is that the creation of a product, especially one with electronic features (the e-book reader), is most likely to generate higher coefficients than, for example, a study on shopping for, using, or servicing the product.

Developing a Culture of Iteration

There is a culture in business which promotes experimentation, but does not prescribe what the experiment should be. The data presented here, from students rather than from experts, show much greater 'success' in early-stage experimentation. We see a great number of strong-performing elements, yet many elements are still moderate performers. The results give hope that the number of strong positives can increase; with repeated efforts there should be more strong-performing elements.

In business the process would be different. In most businesses the unspoken norm is to 'manage for appearances.' That is, in business, people all too often manage each other, rather than managing for the best results. Bringing that observation to the world of Mind Genomics, the typical business approach would be to spend a long time preparing for the study, making sure that the elements are 'just right', and conducting the Mind Genomics experiment with several hundred people, to ensure that 'the results are solid.' This approach of 'letting the perfect be the enemy of the good' ends up generating one well-prepared Mind Genomics study. The effort is expended in the wrong way. The effort should be on iterating, with small Mind Genomics experiments, each with 50 respondents, each done in the space of no more than 24 hours. The study here, run by students, relative amateurs in the world of business, shows the power of 'just doing it.'

Appendix

The effort to create this system generated a patented approach (REF), available now worldwide, on an automated basis, for a reduced study size (4 questions, 4 answers or elements per question). The system is essentially free, except for minor processing charges on a per-respondent basis to defray maintenance. The website is www.BimiLeap.com.

References

  1. Witepski L (2006) When is a magazine not a magazine? Journal of Marketing 2: 34-37.
  2. Behler A, Lush B (2010) Are you ready for e-readers? The Reference Librarian 52: 75-87.
  3. Griffey J (2012) E-readers now, e-readers forever! Library Technology Reports 43: 14-20.
  4. Massis BE (2010) E-book readers and college students. New Library World 111: 347-350.
  5. Thayer A, Lee CP, Hwang LH, Sales H, Sen P, et al. (2011) The imposition and superimposition of digital reading technology: the academic potential of e-readers. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 29: 17-26.
  6. Williams MD, Slade EL, Dwivedi YK (2014) Consumers’ intentions to use e-readers. Journal of Computer Information Systems 54: 66-76.
  7. Milutinovic V, Salom J (2016) Introduction to basic concepts of Mind Genomics. In: Mind Genomics. Springer, Cham 1-29.
  8. Moskowitz H, Rappaport S, Moskowitz D, Porretta S, Velema B, et al. (2017) Product design for bread through mind genomics and cognitive economics. In: Developing New Functional Food and Nutraceutical Products. Academic Press 249-278.
  9. Moskowitz HR, Gofman A (2007) Selling Blue Elephants: How to Make Great Products that People Want Before They Even Know They Want Them. Pearson Education.
  10. Gofman A, Moskowitz H (2010) Isomorphic permuted experimental designs and their application in conjoint analysis. Journal of Sensory Studies 25: 127-145.
  11. Kanungo T, Mount DM, Netanyahu NS, Piatko CD, Silverman R, et al. (2002) An efficient k-means clustering algorithm: Analysis and implementation. IEEE Transactions on Pattern Analysis and Machine Intelligence 24: 881-892.
  12. Gere A, Moskowitz H (2021) Assigning people to empirically uncovered mind-sets: A new horizon to understand the minds and behaviors of people. In Consumer-based New Product Development for the Food Industry. Royal Society of Chemistry 132-149.
  13. Widaman KF (1993) Common factor analysis versus principal component analysis: Differential bias in representing model parameters?. Multivariate Behavioral Research 28: 263-311.
  14. Aldenderfer MS, Blashfield RK (1984) Cluster Analysis, Sage Publications, Beverly Hills, CA.
  15. Fiedler J, McDonald JJ (1993) Market figmentation: Clustering on factor scores versus individual variables. Paper Presented to the AMA Advanced Research Techniques Forum.
  16. Huhmann BA, Brotherton TP (1997) A content analysis of guilt appeals in popular magazine advertisements. Journal of Advertising 26: 35-45.

Proportion of High Risk Mothers Attending Antenatal Clinic (ANC), PGIMER, Chandigarh 2018-20

DOI: 10.31038/IJNM.2022311

Abstract

Introduction: Pregnancy with high risk conditions threatens the life of the mother as well as the fetus. Each year, 529,000 women and girls die globally due to complications associated with pregnancy. Most of these complications are preventable. Therefore, all pregnant mothers should be evaluated for high risk factors. This study assessed the proportion of high risk mothers attending the antenatal clinic OPD, PGIMER, Chandigarh.

Aim: To assess the proportion of high risk mothers.

Material and method: A pre-experimental design was used, in which a total of 200 antenatal mothers were enrolled by purposive sampling. Data were collected using an interview schedule in the period from July to December 2019. An assessment proforma was used to assess antenatal mothers with high risk conditions with regard to maternal and fetal outcome.

Results: The findings of the study show that the mean age of the high risk women was 28.6 years, and that they attained menarche at a mean age of 13 years. The majority (63%) of the mothers belonged to Hindu families. More than 60% of the high risk mothers had anemia, followed by hypothyroidism (57.5%), gestational diabetes mellitus (28.5%), gestational hypertension (15%), previous history of caesarean section (14.5%), age ≥35 years (8.5%), Rh negative status (5.5%), and height <145 cm (3.5%).

Conclusion: It is concluded that the highest percentage of antenatal women (63%) had anemia, followed by 57.5% with hypothyroidism.

Keywords

Gestational diabetes mellitus, Gestational hypertension, High risk mothers

Introduction

Pregnancy is an inimitable, stirring, and joyful time in a woman’s life, as it expresses the woman’s incredible, innovative, and fostering powers while providing a link to the future. It brings a new sense to the idea of beauty, and it is a time a woman cherishes with enormous joy and anticipation. The emotion of carrying a little soul within her is glorious; a baby fills a place in the mother’s heart that she never knew was empty [1]. Each week of pregnancy brings new changes and thoughts that may require some explanation and support for the pregnant woman. Pregnancy is the period during which the baby is in the mother’s womb, for about 280 days, and a progression of both physiological and psychological changes occurs during this time [2]. As a pregnant woman passes through pregnancy, labour, and the puerperium, it is important to provide antenatal, intranatal, and postnatal care.

The period 2016-2030 is covered by the Sustainable Development Goals, under which the target is to reduce the maternal mortality ratio (MMR) to less than 70 per 100,000 live births globally [7]. According to one study, 20-30% of pregnancies in India are high risk, and these account for 75% of perinatal mortality and morbidity; therefore, to reduce maternal mortality, it is necessary to detect high risk pregnancies and manage them at an early stage [8].

High risk factors include obstetric factors: grand multiparity; age less than 18 years or more than 35 years; height less than 145 cm; multiparity with a bad obstetric history (loss of a baby, caesarean section, hypertension in a previous pregnancy, recurrent premature labour and abortion, intrauterine growth retardation); disproportion; malpresentation; and multiple pregnancy. Obstetric complications include haemorrhage during pregnancy (threatened abortion, antepartum haemorrhage), pregnancy induced hypertension (preeclampsia, eclampsia), and a high risk fetus (premature labour, Rh incompatibility, post-maturity, intrauterine growth retardation). Medical factors include anemia and malnutrition, cardiac diseases, pulmonary tuberculosis, hepatitis, syphilis, psychiatric disorders, thyroid disorders, and others. Social factors include unwed pregnancy, no or fewer than three antenatal checkups, and low socioeconomic status. In Western countries the incidence of high risk pregnancy comes to about one third of all pregnancies; the incidence can be at least double that in other settings, because of anemia, undernutrition, poor social factors, and parity [3].

Each pregnancy has three trimesters. The first trimester is the first 12 weeks of pregnancy, the second trimester extends from 13 to 28 weeks, and the third trimester from 29 to 40 weeks. The first trimester is the most essential for the development of the fetus: a woman’s body goes through many changes during the first 12 weeks of gestation, the body structure and organ systems of the baby develop during this period, and most miscarriages and birth defects occur during this period [4]. During the second trimester, nausea and vomiting usually resolve, although complications such as pregnancy induced hypertension, diabetes mellitus, oligohydramnios, polyhydramnios, anemia, cardiac diseases, and abortion can occur. During the third trimester, various complications can arise, such as gestational diabetes, preeclampsia, preterm labour, premature rupture of membranes, intrauterine growth retardation, and malpresentation [5]. A high risk pregnancy is one in which complications are faced by the mother and her unborn child, affecting the lives of both mother and baby.
Nesbitt (1969) scored high risk pregnancy on eight factors based on the initial history and the physical and laboratory examinations at the time of booking. These factors were: age of the mother; race and marital status; parity; past obstetric history (abortions, premature births, fetal death, neonatal death, and congenital anomaly); medical and obstetric history and nutrition (systemic illness, specific infections, and diabetes); Rh problem; social and economic history; and emotional survey. Each factor was assigned penalty points of 0, 5, 10, 20, or 30. The total penalty score across all eight categories was subtracted from a potential ideal score of 100; a score at or below 70 indicated high risk, and a score above 70 low to moderate risk. Poor pregnancy outcomes, in terms of abortion, premature birth, low birth weight, prenatal complications, labour complications, perinatal mortality, and neonatal morbidity, were identified with high frequency among those with high risk scores. However, this score did not include risks developing during the ongoing pregnancy and delivery; currently, comprehensive risk scoring combines the initial score, a continuing pregnancy and labour risk score, and postpartum, maternal, and neonatal risk monitoring [3]. Pregnancy checkups are necessary at least ten times for high risk pregnant women and five times for normal pregnancies [6]. Prenatal assessment and screening of high risk cases, through antenatal assessment, review of lab orders and investigations, obtaining ultrasonography reports, identification of high risk, and follow up, prevent the complications of high risk pregnancy.
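
To make the arithmetic of this scoring scheme concrete, the following is a minimal sketch in Python; the factor names and the penalty values assigned in the example are placeholders, not the published Nesbitt weighting tables.

```python
# Minimal illustrative sketch of the Nesbitt-style scoring described above; the
# example penalties are hypothetical, not the published weights.
def nesbitt_style_score(penalties: dict) -> tuple:
    """penalties: factor name -> penalty points (0, 5, 10, 20 or 30)."""
    assert len(penalties) == 8, "eight factor categories are scored"
    score = 100 - sum(penalties.values())      # subtract total penalties from ideal 100
    risk = "high" if score <= 70 else "low to moderate"
    return score, risk

# Hypothetical penalties assigned at booking for the eight categories:
example = {
    "age": 10, "race_marital_status": 0, "parity": 5, "past_obstetric_history": 10,
    "medical_history_nutrition": 5, "rh_problem": 0, "social_economic": 5, "emotional": 0,
}
print(nesbitt_style_score(example))   # (65, 'high')
```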

Objective

To assess the prevalence of high risk mothers in Antenatal clinic OPD PGIMER Chandigarh.

Methodology

The study design was pre-experimental. The sample was selected using a purposive sampling technique. Data were collected using an interview schedule in the period from July to December 2019. Antenatal women with high risk conditions were approached during their clinical visits in the antenatal clinic outpatient department (OPD). The women were informed about the aim of the study, and written consent was obtained. A structured interview schedule was used to gather identification data. An assessment proforma was used to assess antenatal mothers with high risk conditions with regard to maternal and fetal outcome. Content validity of the tool and protocols was confirmed for completeness, content, and language clarity by the guide, co-guides, and experts from the National Institute of Nursing Education (NINE) and the Department of Obstetrics and Gynaecology. Ethical approval was obtained from the Institute Ethics Committee, PGIMER, Chandigarh, vide no. NK/5163/Msc/10. Written informed consent was obtained from the participants. Data were analyzed using descriptive statistics.

Results

Table 1a depicts the sociodemographic profile of antenatal women with high risk conditions. The majority of the women with high risk conditions were in the age group of 26-30 years, the mean age being 28.65 ± 4.28 years. The majority of antenatal mothers were educated up to secondary level. More than 60% of the antenatal women were Hindu, belonged to joint families, and lived in urban areas. Most of the antenatal women were vegetarian, and the most common per capita income range was Rs 3504-7007.

Table 1a: Sociodemographic profile of antenatal mothers with high risk conditions. Values are f (%) for antenatal mothers with high risk conditions (N=200).

Age (years): 20-25, 51 (26); 26-30, 87 (44); 31-34, 45 (22); ≥35, 17 (8)

Educational status: Primary, 7 (3); Secondary, 92 (46); Graduate, 48 (24); Postgraduate, 53 (27)

Religion: Hindu, 126 (63.0); Muslim, 9 (4); Sikh, 65 (33)

Per capita income (Rs): <1050, 2 (1.0); 1051-2101, 31 (15); 2102-3503, 55 (28); 3504-7007, 60 (30); 7008 and above, 52 (26)

Type of family: Nuclear, 72 (36.0); Joint, 128 (64.0)

Habitat: Urban, 126 (63.0); Rural, 74 (37.0)

Dietary habits: Vegetarian, 155 (77.5); Non-vegetarian, 45 (22.5)

Age: Mean ± SD = 28.65 ± 4.28; Range = 20-45.
Per capita income: Mean ± SD = 5514.75 ± 4133.48; Range = 1000-25000.

Table 1b shows the menstrual and obstetric profile of antenatal mothers with high risk conditions. The majority of women attained menarche at the age of 13 years, had regular menstrual periods, and had a duration of menstruation of more than 3 days. The majority of the women married between the ages of 18 and 27 years, and 71.5% had a duration of marriage of 5 years or less. The majority of antenatal women were primigravida; among those with previous births, most had a history of one live birth. 77% of the antenatal women had a gestation of 29-40 weeks, and 23% a gestation of 13-28 weeks. Two out of 200 antenatal women had a history of postpartum haemorrhage (PPH) in a previous pregnancy.

Table 1b: Menstrual and obstetric profile of antenatal mothers with high risk conditions. Values are f (%) for antenatal mothers with high risk conditions (N=200).

Age at menarche (years): 12, 37 (18.5); 13, 153 (76.5); 14, 10 (5.0)

Menstrual pattern: Regular, 177 (88.5); Irregular, 23 (11.5)

Duration of menstruation (days): ≤3 days, 66 (33.0); >3 days, 134 (67)

Age at marriage (years): <18, 11 (5.5); 18-27, 152 (76.0); 28-35, 37 (18.5)

Duration of marriage (years): ≤5, 143 (71.5); 6-10, 40 (20.0); 11-15, 12 (6.0); >15, 5 (2.5)

Gravida: Primigravida, 115 (57.5); Multigravida, 85 (42.5)

Live births: 1, 34 (17.0); 2, 7 (4)

Period of gestation: 13-28 weeks, 46 (23.0); 29-40 weeks, 154 (77.0)

Previous history of PPH: 2 (1.0)

Age at marriage: Mean ± SD = 23.98 ± 3.582; Range = 16-35.

Table 1c depicts the clinical profile of antenatal women with high risk conditions. More than 50% of the antenatal mothers had a haemoglobin (Hb) level of less than 11 g/dl and a TSH level above normal. Less than 8% of the antenatal women were Rh negative, had a blood pressure of more than 140/90 mmHg, or had albumin or ketones present in the urine. Only three percent of the antenatal mothers had an HbA1c above normal. Nearly one third of the antenatal women had a fasting blood sugar level of 95 mg/dl or more and a post-prandial level of 126 mg/dl or more. The table further shows that, based on the ultrasound findings, 31% or fewer had pyelectasis, ventricular septal defect, ventriculomegaly, choroid plexus cyst, or fetal growth restriction.

Table 1c: Clinical profile of antenatal mothers with high risk conditions. Values are f (%) for antenatal mothers with high risk conditions (N=200), unless otherwise noted.

Blood group: Rh +ve, 189 (94.5); Rh -ve, 11 (5.5)

Blood pressure: Systolic <140 mmHg, 195 (97.5); Systolic >140 mmHg, 5 (2.5); Diastolic <90 mmHg, 186 (93); Diastolic >90 mmHg, 14 (7)

Hb: <11 g/dl, 126 (63.0); >11 g/dl, 74 (37.0)

Blood sugar level: FBS <95 mg/dl, 140 (70); FBS ≥95 mg/dl, 60 (30); PPBS <126 mg/dl, 151 (75.5); PPBS ≥126 mg/dl, 49 (24.5)

HbA1c: Normal (<5.6), 51 (25.5); Abnormal (≥5.6), 6 (3.0); Not done, 143 (71.5)

Urine testing: Presence of albumin, 4 (2.0); Presence of ketone, 10 (5.0)

TSH level: Normal, 40 (35.0); Abnormal, 75 (65.0)

Based on ultrasound findings (N=13): Ventriculomegaly, 2 (15); Ventricular septal defect, 1 (8); Hydronephrosis, 2 (15); Choroid plexus cyst, 3 (23); Pyelectasis, 4 (31); Fetal growth restriction and oligohydramnios, 1 (8)

Table 2 illustrates the proportion of antenatal mothers with each high-risk condition. 63% of the antenatal women had anaemia, followed by hypothyroidism (57.5%), previous history of abortion (30%), gestational diabetes mellitus (28.5%), gestational hypertension (15%), previous caesarean section (14.5%), age ≥35 years (8.5%), Rh-negative status (5.5%), previous history of a preterm baby (5%), height <145 cm (3.5%), oligohydramnios (3%), placenta previa (2%) and polyhydramnios (1%).

Table 2: Proportion of antenatal mothers with high-risk conditions (N=200); values are f (%)*.

Height <145 cm: 7 (3.5)
Age ≥35 years: 17 (8.5)
Rh-negative mothers: 11 (5.5)
Previous history of pre-term baby: 10 (5.0)
Previous history of abortion: 60 (30.0)
Previous history of LSCS: 29 (14.5)
Anaemia: 126 (63.0)
Gestational hypertension: 30 (15.0)
Gestational diabetes mellitus (GDM): 57 (28.5)
Hypothyroidism: 115 (57.5)
Placenta previa: 4 (2.0)
Oligohydramnios: 5 (3)
Polyhydramnios: 1 (1)
GDM with anaemia: 21 (10.5)
Hypothyroidism with GDM with anaemia: 12 (6)
Hypothyroidism with polyhydramnios with anaemia: 1 (0.5)
Hypertension with placenta previa: 1 (0.5)
Hypertension with anaemia: 8 (4)
Hypothyroidism with anaemia: 41 (20.5)
Hypothyroidism with gestational hypertension with anaemia: 3 (1.5)
Hypothyroidism with gestational hypertension: 1 (0.5)
Hypothyroidism with oligohydramnios with anaemia: 1 (0.5)
Hypertension with oligohydramnios: 1 (0.5)
Hypothyroidism with GDM: 5 (2.5)
Hypothyroidism with gestational hypertension with GDM: 3 (1.5)
Hypothyroidism with GDM with gestational hypertension with placenta previa with anaemia: 1 (0.5)
Hypothyroidism with GDM with gestational hypertension with anaemia: 4 (2)
Gestational hypertension with oligohydramnios with placenta previa with anaemia: 1 (0.5)

*Numbers add to more than 200 because some women had more than one high-risk condition.

Discussion

A high-risk pregnancy can affect the health of the mother or the baby, and complications are faced by both the mother and her unborn child. Early detection and effective management of high-risk pregnancy can considerably help to reduce maternal and neonatal mortality and morbidity. The present study was conducted with the objective of assessing the proportion of high-risk mothers. Two hundred women who fulfilled the inclusion criteria were recruited from the antenatal OPD of the Obstetrics and Gynaecology department of PGIMER, Chandigarh, between July and August 2019. The collected data were analyzed using SPSS; descriptive statistics were used for analyzing the data. The present study shows that 30% of the mothers had a history of abortion, 14.5% a history of caesarean section, and 8.5% were elderly gravidae. These findings are broadly similar to those of Jaideep et al. [7]; Kabamba Nzaji Michel et al. [8] found high-risk mothers with a history of abortion (27%), age ≥35 years (5.5%) and history of caesarean section (13.6%), and recommended careful monitoring of high-risk women to avoid maternal mortality. Our study shows that the most common high-risk conditions were anaemia, followed by hypothyroidism, gestational diabetes mellitus, gestational hypertension, previous caesarean section, age ≥35 years, Rh-negative status and height <145 cm. Kabamba Nzaji Michel et al. found that the main high-risk factors were a history of maternal infection (18.5%) and a previous unexplained fetal or neonatal death (12.4%) [8]. Jaideep et al. also identified high-risk factors: 59.8% had a bad obstetric history, 4% had pregnancy-induced hypertension and 3.2% were Rh negative [7].

References

  1. Introduction to Pregnancy – Pregnancy [Internet]. [cited 2019 Feb 3].
  2. High-risk pregnancy. In: Wikipedia [Internet]. 2019 [cited 2019 Feb 3].
  3. Dawn CS (1986) Rule of Ten MCH care and education, uterine maturity score, textbook of obstetrics current edition Calcutta.
  4. What are symptoms of complications during the first trimester of pregnancy? | 1st Trimester Of Pregnancy [Internet]. Sharecare. [cited 2019 Feb 10].
  5. The Second Trimester of Pregnancy: Complications [Internet]. [cited 2019 Feb 10].
  6. Maternal mortality [Internet]. [cited 2019 Feb 6].
  7. Jaideep KC, Prashant D, Girija A (2017) Prevalence of high risk among pregnant women attending antenatal clinic in rural field practice area of Jawaharlal Nehru Medical College, Belgavi, Karnataka, India. International Journal of Community Medicine And Public Health. 28: 1257-1259.
  8. Michel KN, Ilunga BC, Astrid KM, Blaise IK, Mariette KK, et al. (2016) Epidemiological Profile of High-Risk Pregnancies in Lubumbashi: Case of the Provincial Hospital Janson Sendwe. Open Access Library Journal. 3:1.

Consequences of the COVID-19 Pandemic: A Study from India

DOI: 10.31038/PSYJ.2022413

Abstract

A study was carried out in India on the consequences people faced due to the first wave of the COVID-19 pandemic in 2020. Data were collected online through a questionnaire using the snowball sampling technique from 400 respondents across 13 States of India. The questionnaire contained 17 negative and positive items related to the consequences/outcomes of the pandemic, which could also psychologically influence people unfavourably and favourably. The responses were scored to work out a total consequences score. The data were analyzed using factor analysis and the odds ratio test and interpreted as proportions and scores. The results of the consequences score show that the majority of respondents faced a medium level of consequences, while some faced only low consequences. Negative consequences observed in the study include mental stress, income/job loss, less social interaction, an increase in health problems, unrest or quarrels in the family, effects on transportation, recreation, the capability of old people to support themselves and health care for medical problems, work from home not being helpful, and only a small reduction in family expenses during the pandemic. Positive consequences of the pandemic, such as reduced pollution and better environmental conditions due to the lockdown, lockdown time used for learning agriculture/fisheries, and an increase in time spent with family, are also evident. Factor analysis shows that the age, education and number of family members of the respondents explain 69.9% of the variability in their total consequences score. Odds ratios reveal that people aged more than 40 years, with PG and Degree qualifications, and having more than 4 family members faced fewer COVID-related consequences. This is also substantiated by the comparatively higher proportion of people under these categories of the three characteristics giving favourable responses to the positive and negative consequences items in the study.

Keywords

COVID-19, Pandemic, Consequences, Consequences score

Introduction

COVID-19 (Coronavirus Disease 2019) was first identified in China on 17 November 2019 [1]. From there, it spread very rapidly to other countries, and hence WHO declared the disease a pandemic. The first case of COVID-19 in India was reported on 30 January 2020 [2]. The disease spreads mainly through respiratory droplets, and the symptoms range from cough, throat infection, fever and body pain to the death of an individual. Older people are considered more prone to COVID-19 owing to their weaker immune systems [3].

The emergence of COVID-19 came as a shock to the entire world, since the disease was spreading rapidly, and most nations declared lockdown measures to contain the spread of the virus. This resulted in large-scale economic disruption, as most firms shut down production and business houses were closed. Many people lost their jobs and experienced difficulties in their lives due to the pandemic.

This study was carried out taking into consideration the consequences, which could have been faced by people due to the COVID-19 pandemic.

Methods and Materials

The study was conducted during the first wave of the COVID-19 pandemic in 2020 in India. Data were collected online through a questionnaire survey using the snowball sampling technique. The questionnaire was initially sent to some people through WhatsApp/email, with a request to forward it to more people. Accordingly, responses were obtained from 412 respondents from the States of Kerala, Karnataka, Tamil Nadu, Andhra Pradesh, Telangana, Maharashtra, Gujarat, Haryana, Rajasthan, Odisha, Bihar, West Bengal and UP in India. After removing random and incomplete entries, 400 samples were considered for analysis.

The questionnaire contained 17 items related to the consequences/outcome of the pandemic. Both negative and positive consequences items were considered, which could also psychologically influence people unfavourably and favourably respectively. They were selected based on media reports, review of literature etc. The negative items relate to the direct psychological consequence of the pandemic such as mental stress and those which could indirectly affect people psychologically such as loss of income, job etc. The positive items relate to aspects such as reduced pollution and better environmental conditions due to the lock down, lock down time used for learning agriculture/fisheries etc.

The five-point continuum for the items on how much the respondents were affected due to the first wave of the pandemic was: Very much, Moderately, Less, Very less, and Not at all. Responses to the negative consequences items were scored from 1 to 5 (Very much = 1 through Not at all = 5) and reverse-scored for the positive items. The total score over all items was taken as the total COVID consequences score; a higher score indicates fewer consequences faced by the respondent and vice versa. The level of higher consequences faced due to the pandemic, in relation to the benchmark level of "No consequences" faced (as considered in this study), was calculated by subtracting the respondent's total consequences score from the maximum possible score of 85 (which would be obtained by a respondent who faced "No consequences" at all), dividing by 85 and expressing the result as a percentage.
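A minimal sketch of this scoring scheme in Python (hypothetical item lists and variable names; the study does not publish its data or code) is given below:

    # Scoring sketch for the 17-item consequences questionnaire described above.
    RESPONSES = ["Very much", "Moderately", "Less", "Very less", "Not at all"]
    NEGATIVE_SCORE = dict(zip(RESPONSES, [1, 2, 3, 4, 5]))   # more affected -> lower score
    POSITIVE_SCORE = dict(zip(RESPONSES, [5, 4, 3, 2, 1]))   # positive items are reverse-scored

    def total_consequences_score(negative_responses, positive_responses):
        """Sum of item scores; a higher total means fewer consequences faced."""
        return (sum(NEGATIVE_SCORE[r] for r in negative_responses)
                + sum(POSITIVE_SCORE[r] for r in positive_responses))

    def level_of_higher_consequences(total_score, n_items=17, max_per_item=5):
        """% of more consequences faced relative to the 'No consequences' benchmark score of 85."""
        max_score = n_items * max_per_item
        return 100.0 * (max_score - total_score) / max_score

    # Illustration: a respondent answering 'Moderately' to 12 negative items and
    # 'Very much' to 5 positive items scores 12*2 + 5*5 = 49, i.e. about 42.4%
    # more consequences than the 'No consequences' benchmark.
    score = total_consequences_score(["Moderately"] * 12, ["Very much"] * 5)
    print(score, round(level_of_higher_consequences(score), 1))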

The characteristics of the respondents such as sex, age, education, marital status and no. of family members were also included in the questionnaire. Data was analyzed using statistical techniques such as Factor Analysis and Odds ratio test and interpreted as proportion and scores.

Results

COVID Related Consequences Faced

Since both negative and positive consequences of the COVID-19 pandemic are analyzed in this study, the terms consequences as well as outcome have been used in Table 1. 17.5% of respondents were of the opinion that the COVID-19 pandemic of 2020 affected their lives very much, while it affected 42.5% moderately. 16.5% and 10.5% mentioned that it affected them only less and very less respectively, while the pandemic did not affect 13% of respondents at all (Table 1). It can be made out from Table 1 that 27.5% of respondents experienced very much and 45% moderate mental stress due to the pandemic. The income of only 51% of respondents was affected either very much or moderately due to the pandemic, while, regarding loss of job, 34.5% report being not at all affected, 12% very less and 23.5% less affected (Table 1). With respect to health care for existing/new medical problems, only 8% are very much and 25.5% moderately affected. Similarly, the proportion of respondents affected very much or moderately through an increase in health problems is smaller than the proportion reporting less, very less or not at all affected.

Table 1: Consequences/outcome of the COVID-19 pandemic 2020. Respondents (%) reporting each extent of consequence/outcome: Very much / Moderately / Less / Very less / Not at all / Total.

1. Mental stress: 27.5 / 45.0 / 13.5 / 5.5 / 8.5 / 100
2. Affected income: 20.5 / 30.5 / 17.5 / 11.0 / 20.5 / 100
3. Affected due to loss of job: 15.5 / 14.5 / 23.5 / 12.0 / 34.5 / 100
4. Affected health care for existing/new medical problems: 8.0 / 25.5 / 23.5 / 20.0 / 23.0 / 100
5. By remaining more at home, unrest/quarrel in the family increased: 4.0 / 12.5 / 22.0 / 15.5 / 46.0 / 100
6. Social interaction affected: 42.5 / 31.0 / 12.5 / 6.5 / 7.5 / 100
7. Affected freedom of movement: 59.0 / 26.0 / 6.0 / 4.5 / 4.5 / 100
8. Transportation affected: 55.0 / 27.5 / 9.5 / 2.0 / 6.0 / 100
9. Other health problems increased: 2.5 / 17.5 / 24.0 / 20.5 / 35.5 / 100
10. Leisure/recreation activities affected: 37.5 / 31.0 / 13.5 / 9.5 / 8.5 / 100
11. School closure increased load on parents*: 32.1 / 28.5 / 15.0 / 10.8 / 13.6 / 100
12. Affected the capacity of old persons to support themselves**: 23.3 / 40.7 / 17.4 / 7.6 / 11.0 / 100
13. Lockdown reduced pollution and created better environmental conditions: 63.5 / 25.5 / 4.5 / 3.5 / 3.0 / 100
14. Lockdown time was used for learning agriculture/fisheries & other hobbies: 26.0 / 37.0 / 13.0 / 8.5 / 15.5 / 100
15. Time spent with family increased: 55.0 / 28.0 / 7.0 / 3.0 / 7.0 / 100
16. Working from home helped me and my family: 13.0 / 12.5 / 19.5 / 31.5 / 23.5 / 100
17. Family expenses reduced: 13.5 / 11.5 / 21.5 / 42.5 / 11.0 / 100

*Among those who have children.
**Among those having old persons in their house.

Unrest/quarrel in the family did not increase at all through remaining more at home for 46% of respondents, while 15.5% and 22% report a very less and less increase respectively. 42.5% and 31% respectively reported that social interaction was affected very much and moderately due to the pandemic. 59% and 26% are of the opinion that freedom of movement was affected very much and moderately respectively, while similar proportions mention that transportation was affected very much and moderately.

37.5% and 31% of respondents report that their leisure/recreation activities were affected very much and moderately respectively. A total of 64% of respondents report that the pandemic affected the capability of old persons to support themselves either very much or moderately. Work from home during the pandemic period was less or very less helpful for 51% of respondents, while it did not help 23.5% of respondents at all.

A total of 64% of respondents report only a less or very less reduction in family expenses during the pandemic period.

63.5% and 25.5% of respondents are of the opinion that the pandemic-induced lockdown very much and moderately reduced pollution and created better environmental conditions respectively. Similarly, the lockdown time was used for learning agriculture/fisheries & other hobbies by a total of 63% of respondents very much or moderately. Time spent with their families increased very much during the pandemic period for 55% and moderately for 28% of respondents, even though the level of social interaction with other people was restricted very much for 42.5% and moderately for 31% of respondents.

COVID Consequences Score

Table 2 shows the total COVID consequences scores of the respondents, categorised using the quartile method. A high score indicates that the respondents faced low consequences, and vice versa for a low score. The largest group (44.5%) of respondents in the study faced medium COVID-related consequences, while 27.5% faced only low consequences. It may be made out from Table 3 that for 77.5% of respondents, the level of more consequences faced (in relation to the condition of "No consequences faced") is in the range of 57.6% to 35.3%. It is in the lowest range of 34.1% to 14.1% for only 13.7% of respondents.

Table 2: Categories of total COVID consequences score.

Total consequences score category*: Mean score / Minimum score / Maximum score / Respondents (%)
High**: 57.14 / 52 / 73 / 27.5
Low***: 36.17 / 16 / 41 / 28.0
Medium: 46.26 / 42 / 51 / 44.5
Total: 46.43 / 16 / 73 / 100

*Based on the quartile method.
**Low consequences faced.
***High consequences faced.

Table 3: Range of total COVID consequences score.

Range of total consequences score: Range (%) of more consequences faced^a / Respondents (%)
16-35: 81.2-58.8 / 8.8
36-55: 57.6-35.3 / 77.5
56-73: 34.1-14.1 / 13.7
Total: 100

^a In relation to the condition of "No consequences faced".
The lower the score, the higher the consequences faced.
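As a rough illustration of the quartile-based categorization used for Table 2 (a sketch on placeholder scores, not the authors' analysis), the cut-offs can be obtained as follows:

    # Quartile-based grouping of total consequences scores into score categories.
    import numpy as np

    scores = np.random.default_rng(1).integers(16, 74, size=400)  # placeholder scores in the observed 16-73 range
    q1, q3 = np.percentile(scores, [25, 75])                      # lower and upper quartile cut-offs

    def score_category(score):
        """High total score = low consequences faced; low total score = high consequences faced."""
        if score <= q1:
            return "Low score (high consequences)"
        if score >= q3:
            return "High score (low consequences)"
        return "Medium"

    for s in (30, 47, 60):
        print(s, score_category(s))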

Characteristics Contributing to the Consequences Score

Factor analysis was carried out to determine the major characteristics of the respondents contributing to the total COVID consequences score. The results are presented in Table 4, which shows that the first four factors have significant eigenvalues (>1) and explain 69.92% of the variability in the total score of the respondents. Among the characteristics, age, education and no. of family members contribute significantly (factor loading > 0.50) to the factor components observed in the total consequences score.

Table 4: Factor analysis of total COVID consequences score (factor loadings).

Characteristic: Factor 1 / Factor 2 / Factor 3 / Factor 4
Age: 0.77 / -0.02 / 0.64 / 0.00
Sex: 0.29 / 0.10 / 0.25 / -0.34
Education: -0.31 / 0.90 / 0.30 / 0.00
Marital status: 0.37 / 0.02 / 0.19 / 0.55
No. of family members: -0.69 / -0.44 / 0.58 / 0.00
Family members less than 10 years of age: -0.32 / -0.10 / 0.11 / 0.30
Marital status: -0.02 / -0.10 / 0.44 / -0.07
Income: -0.03 / 0.39 / 0.21 / 0.15
Initial eigenvalues: 1.78 / 1.44 / 1.29 / 1.06
Variance (%): 22.36 / 18.11 / 16.12 / 13.32
Cumulative %: 22.36 / 40.47 / 56.60 / 69.92
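The eigenvalue-and-loading logic summarized in Table 4 can be sketched as follows (random placeholder data and an unrotated principal-factor extraction; this is not the authors' analysis):

    # Retain factors with eigenvalue > 1 and inspect which characteristics load above 0.5.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 8))            # placeholder: 400 respondents x 8 characteristics

    R = np.corrcoef(X, rowvar=False)         # correlation matrix of the characteristics
    eigvals, eigvecs = np.linalg.eigh(R)     # eigen-decomposition (returned in ascending order)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    keep = eigvals > 1.0                                   # Kaiser criterion (eigenvalue > 1)
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])   # unrotated factor loadings
    explained = 100 * eigvals[keep] / eigvals.sum()        # % variance explained per retained factor

    print(np.round(explained, 2), round(explained.sum(), 2))
    print(np.round(loadings, 2))             # characteristics with |loading| > 0.5 dominate a factor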

Chances to Obtain High Total COVID Consequences Score for People with Different Age, Education and No. of Family Members

Table 5 shows the results of the odds ratio test for a high total consequences score (less consequences faced) with respect to age, education and no. of family members, the characteristics that showed high factor loadings (Table 4). It can be made out from Table 5 that respondents with more than 4 family members have 37% more chance (odds ratio 1.37) of obtaining a high score (indicating less consequences) than those with less than 4 family members. Similarly, respondents aged more than 40 years have 79% more chance (odds ratio 1.79) of obtaining a high score than those aged less than 40. However, PhD holders have 33% less chance (odds ratio 0.67) of obtaining a high score than those with PG and Degree qualifications.

Table 5: Odds ratios of personal characteristics on total COVID consequences score.

Characteristic: Category / Odds ratio*
Age: >40 vs. <40 / 1.79
No. of family members: >4 vs. <4 / 1.37
Education: PhD vs. PG and Degree / 0.67

*Indicating the chances of respondents having a high total score (less consequences faced).
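For readers unfamiliar with the odds ratio test used here, a minimal sketch with made-up counts (the paper reports only the ratios, not the underlying cross-tabulations):

    # Odds of a high total consequences score in group 1 relative to group 2.
    def odds_ratio(high_1, not_high_1, high_2, not_high_2):
        return (high_1 / not_high_1) / (high_2 / not_high_2)

    # Hypothetical 2x2 counts (high score, not high score) for respondents aged >40 vs. up to 40.
    print(round(odds_ratio(50, 50, 100, 200), 2))   # 2.0 with these made-up counts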

Considering the 13.7% of respondents shown in Table 3 who have the highest range of total score, 56 to 73 (which implies that they faced only 34.1% to 14.1% more consequences than the condition of "No consequences faced"), 63.6% of these respondents are found to have a total score of 60 and above. A total consequences score of 60 and above implies that the higher consequences faced in relation to the condition of "No consequences faced" is only 29.4% or less, since (85 - 60)/85 × 100 ≈ 29.4%.

Hence, based on the results of factor analysis (Table 4) and odds ratio (Table 5), the proportion of respondents under different categories of age, education and no. of family members (the characteristics considered in working out the odds ratio) was worked out for those getting a total consequence score of 60 and above. The results are shown in Table 6.

Table 6: Age, number of family members and education of respondents having a high total COVID consequences score.

Respondents (%) with a total consequences score of 60 and above:
Age: Up to 40: 26.0; >40: 74.0
No. of family members: Up to 4: 40.7; >4: 59.3
Education: PhD: 26.0; PG and Degree: 74.0

The maximum total score of respondents in the study was 73.

It can be made out from Table 6 that while 74% respondents aged more than 40 years have total consequences score of 60 and above, only 26% below 40 years of age have this score. This could be the reason for the odds ratio of 1.79 for age (Table 5), which implies that respondents aged more than 40 years have 79% more chance of obtaining high score (less consequences) than those aged less than 40.

Similarly, while 59.3% of respondents with more than 4 family members have a total consequences score of 60 and above, the figure is only 40.7% for those with up to 4 members (Table 6). This could explain the odds ratio of 1.37 for no. of family members (Table 5), indicating that respondents with more than 4 family members have 37% more chance of obtaining a high score (less consequences) than those with less than 4 family members.

However, with regard to education, while 74% of respondents with PG and Degree qualifications have a total consequences score of 60 and above, only 26% of those with a PhD have this score. The odds ratio for education was 0.67 (Table 5), which means that PhD holders have 33% less chance of obtaining a high score (less consequences) than those with PG and Degree qualifications.

For better interpretation of the influence of age, education and no. of family members (family size) on the total COVID consequences score (whose effects were observed in the odds ratio test), the variation in the proportion of responses to the different consequences items was worked out for these characteristics. Only responses showing perceptible differences between the categories of each characteristic have been included in the tables that follow.

Age-wise responses to the different consequences items are shown in Table 7. With respect to the negative consequence item "income affected due to the COVID-19 pandemic", while 31.7% of respondents up to 40 years of age were very much affected, only 10.5% of those aged more than 40 report in this manner. Further, 27% of those aged more than 40 report that income was not at all affected due to the pandemic, compared to only 12.3% of those under 40 years of age (Table 7). While 19% of respondents up to the age of 40 were affected very much due to loss of a job, the figure for respondents above 40 is only 6%. 19.5% of respondents aged more than 40 were less affected due to job loss, compared with only 15.5% of people up to 40 years of age (Table 7).

Table 7: Age-wise responses to consequences items. Respondents (%) reporting: Very much / Moderately / Less / Very less / Not at all (NA*: data not shown).

1. Income affected
   Up to 40: 31.7 / NA / NA / NA / 12.3
   >40: 10.5 / NA / NA / NA / 27.0
2. Job loss
   Up to 40: 19.0 / NA / 15.5 / NA / NA
   >40: 6.0 / NA / 19.5 / NA / NA
3. Time spent with the family increased
   Up to 40: NA / NA / 7.8 / NA / 7.2
   >40: NA / NA / 4.8 / NA / 5.4
4. Due to lockdown, quarrel/unrest in the family increased
   Up to 40: 6.1 / 17.8 / NA / NA / 34.0
   >40: 2.5 / 7.6 / NA / NA / 51.2
5. Affected freedom of movement
   Up to 40: 61.1 / NA / NA / 3.9 / 3.9
   >40: 56.7 / NA / NA / 5.4 / 7.4
6. Transportation affected
   Up to 40: 62.8 / NA / 8.3 / 0.6 / NA
   >40: 48.0 / NA / 10.4 / 3.7 / NA
7. Stress level including fear of virus infection increased
   Up to 40: 34.4 / NA / NA / NA / 7.8
   >40: 18.8 / NA / NA / NA / 12.8
8. Other diseases/health problems increased
   Up to 40: NA / 21.7 / NA / 18.3 / 26.7
   >40: NA / 14.4 / NA / 20.9 / 37.2
9. Health care for existing/new medical problems increased
   Up to 40: 11.1 / NA / 18.9 / 15.7 / NA
   >40: 6.0 / NA / 21.3 / 18.4 / NA
10. Leisure/recreation activities affected
   Up to 40: 37.8 / NA / NA / 7.2 / 11.6
   >40: 34.2 / NA / NA / 9.5 / 13.8
11. School closure increased pressure/load on children and parents
   Up to 40: 33.9 / 25.0 / 7.8 / 6.1 / NA
   >40: 11.0 / 14.9 / 12.6 / 12.6 / NA
12. Affected the capacity of older people to support themselves
   Up to 40: 26.2 / NA / 14.4 / 3.3 / 7.2
   >40: 13.8 / NA / 16.3 / 9.1 / 12.9
13. Lockdown time was used in learning/doing agriculture/fisheries etc.
   Up to 40: NA / 31.1 / 16.2 / NA / NA
   >40: NA / 41.6 / 10.1 / NA / NA

*Data not shown since no perceptible difference was observed in these responses for the consequences items.

Now, considering a positive consequence item, namely increased time spent with family during the pandemic period, Table 7 shows that while a higher proportion (7.8%) of respondents aged up to 40 report that time spent with the family increased only to a small extent ("less"), only 4.8% of respondents aged more than 40 report so. Further, while 7.2% of those up to 40 report not spending more time with the family at all, only 5.4% of people aged more than 40 report in this manner.

Similarly, considering the other consequences items shown in Table 7, it can be inferred that a comparatively lower proportion of respondents above the age of 40 report being affected very much/moderately by the negative consequences items than those up to 40 years of age, while a higher proportion of respondents above the age of 40 report being affected less/very less/not at all by the negative consequences items. Likewise, for the positive consequences items in Table 7, a comparatively higher proportion of respondents above the age of 40 report experiencing them very much/moderately than those up to 40 years of age, while a lower proportion of respondents above the age of 40 report less/very less/not at all for the positive items.

These trends indicate that people aged more than 40 years faced comparatively fewer consequences than those aged less than 40 years. This also substantiates the odds ratio of 1.79 for the age of the respondents (Table 5), which implies that respondents aged more than 40 years have 79% more chance of obtaining a high score (facing less consequences) than those aged less than 40.

As in the case of age, it can be inferred from the data presented in Table 8 that a comparatively lower proportion of respondents with PG and Degree qualifications report being affected very much/moderately by the negative consequences items than those with a PhD, while a higher proportion of respondents with PG and Degree report being affected less/very less/not at all by the negative consequences items. Similarly, for the positive consequences items, a comparatively higher proportion of respondents with PG and Degree report experiencing them very much/moderately than those with a PhD, and a lower proportion of PG and Degree respondents report less/very less/not at all for the positive items.

Table 8: Education-wise responses to consequences items. Respondents (%) reporting: Very much / Moderately / Less / Very less / Not at all (NA*: data not shown).

1. Income affected
   PG and Degree: NA / 28.0 / NA / NA / 21.4
   PhD: NA / 36.2 / NA / NA / 16.2
2. Loss of job
   PG and Degree: NA / NA / 19.4 / NA / 28.2
   PhD: NA / NA / 15.0 / NA / 23.7
4. Due to lockdown, quarrel/unrest in the family increased
   PG and Degree: NA / 12.9 / NA / 13.6 / 44.4
   PhD: NA / 15.0 / NA / 10.0 / 32.5
5. Social interaction and cohesion affected
   PG and Degree: NA / 27.8 / 12.3 / 7.3 / NA
   PhD: NA / 40.0 / 8.8 / 2.5 / NA
6. Affected freedom of movement
   PG and Degree: 57.0 / NA / NA / NA / 5.4
   PhD: 60.0 / NA / NA / NA / 2.5
7. Transportation affected
   PG and Degree: 51.0 / NA / 9.7 / NA / 7.4
   PhD: 62.4 / NA / 8.8 / NA / 3.8
8. Stress level including fear of virus infection increased
   PG and Degree: NA / 43.4 / 14.4 / 6.0 / 10.0
   PhD: NA / 57.4 / 8.8 / 3.8 / 7.5
9. Other diseases/health problems increased
   PG and Degree: NA / 15.7 / NA / 33.7 / 4.4
   PhD: NA / 25.0 / NA / 27.5 / Nil
10. Health care for existing/new medical problems increased
   PG and Degree: NA / 15.7 / NA / 23.0 / 33.7
   PhD: NA / 25.0 / NA / 11.3 / 27.5
11. School closure increased pressure/load on children and parents
   PG and Degree: 20.2 / NA / NA / 8.7 / NA
   PhD: 25.0 / NA / NA / 5.0 / NA
12. Affected the capacity of older people to support themselves
   PG and Degree: NA / 35.1 / NA / 11.0 / NA
   PhD: NA / 37.5 / NA / 7.5 / NA

*Data not shown since no perceptible difference was observed in these responses for the consequences items.

These findings indicate that people with PG and Degree qualifications have faced comparatively less consequences than those having PhD, which would also support the result of odds ratio of 0.67 for Education (Table 5), which implies that PhD holders have 33% less chance of obtaining high score/facing less consequences than those with PG and Degree qualifications.

It can be made out from Table 9 that a comparatively lower proportion of respondents with more than 4 family members report being affected very much/moderately by the negative consequences items than those with a family size of up to 4, while a higher proportion of respondents with more than 4 family members report being affected less/very less/not at all by the negative consequences items. Similarly, for the positive consequences items, a comparatively higher proportion of respondents with more than 4 family members report experiencing them very much/moderately than those with up to 4 members, and a lower proportion of those with more than 4 family members report less/very less/not at all for the positive items.

Table 9: Family-size-wise responses to consequences items. Respondents (%) reporting: Very much / Moderately / Less / Very less / Not at all (NA*: data not shown).

1. Work from home helped me/my family
   Up to 4: 7.9 / 30.9 / NA / NA / NA
   >4: 11.1 / 45.3 / NA / NA / NA
2. Social interaction and cohesion affected
   Up to 4: NA / 43.2 / 9.6 / 5.3 / 4.7
   >4: NA / 18.7 / 17.9 / 9.3 / 13.4
3. Affected freedom of movement
   Up to 4: NA / 38.0 / 6.1 / 2.1 / 1.2
   >4: NA / 19.3 / 9.5 / 10.6 / 8.4
4. Affected the capacity of older people to support themselves
   Up to 4: NA / 36.6 / 14.3 / 4.7 / NA
   >4: NA / 25.6 / 22.6 / 14.0 / NA
5. Lockdown reduced pollution and created better environmental conditions
   Up to 4: 52.2 / NA / 11.0 / NA / NA
   >4: 74.9 / NA / 1.2 / NA / NA
6. Lockdown time was used in learning/doing agriculture/fisheries etc.
   Up to 4: 24.9 / NA / NA / 14.6 / 11.5
   >4: 38.4 / NA / NA / 4.5 / 5.9

*Data not shown since no perceptible difference was observed in these responses for the consequences items.

Similar to age and education, these results substantiate the odds ratio of 1.37 for the characteristic, namely, no. of family members (family size), which indicates 37% more chance for respondents with a family size of more than 4 members to get a high COVID consequences score/face less consequence than those having a family size of less than 4.

Discussion

The study shows that a high proportion of respondents representing various States of India experienced very much or moderate mental stress due to the pandemic. WHO has warned of a "massive increase in mental health conditions" arising from the pandemic, and mental health experts in Mumbai have observed an increase in feelings of anger, frustration and helplessness [4]. However, in a study conducted in the Kerala State of India by WEDO (an NGO), the majority of respondents did not experience a high level of negative feelings about the COVID pandemic, while most of them experienced the positive feelings well [5].

A survey found that 77% of economically active adults in India had lost income due to the pandemic (https://www.hindustantimes.com/india-news/77-indian-adults-lost-income-due-to-covid-19-pandemic-survey/story-QjCVwkt4xNmJwcHw4I5wMP.html, retrieved 22 August 2021). According to WHO, the COVID-19 pandemic has decimated jobs and left many without the means to earn an income or access quality health care during the pandemic-induced lockdown (Source: Impact of COVID-19 on people's livelihoods, their health and our food systems, joint statement by ILO, FAO, IFAD and WHO, October 2020; https://www.who.int/news/item/13-10-2020-impact-of-covid-19-on-people’s-livelihoods-their-health-and-our-food-systems, retrieved 22 August 2021). Health is defined by WHO as the "state of complete physical, mental and social well-being and not merely the absence of disease or infirmity" (World Health Organization (WHO). Naming the coronavirus disease (COVID-19) and the virus that causes it. https://www.who.int/emergencies/diseases/novelcoronavirus-2019/technical-guidance/naming-the-coronavirus-disease-(covid2019)-and-the-virus-that-causes-it, retrieved 1 November 2021). However, in the present study, the income of only about 50% of the respondents was affected either very much or moderately due to the pandemic, while 70% of respondents report being not at all, very less or only less affected with respect to their jobs. Health care for existing/new medical problems was very much or moderately affected on account of the pandemic for only some respondents. Similarly, the proportion affected very much or moderately through an increase in health problems is smaller than the total proportion reporting less, very less or not at all affected.

Not only is the infection with COVID-19 disease a risk, but people are limiting their social interactions with others, working from home, and avoiding unnecessary gatherings. In this study also, social interaction was affected very much and moderately due to the pandemic for a very high proportion of respondents.

While overcoming the COVID-19 pandemic relies on an efficient strategy that involves the whole population, the elderly people are disproportionately affected by this disease [6]. In this study also a good proportion mention that the pandemic affected the capability of old persons to support themselves either very much or moderately.

The advantages of working from home include reduced commuting time, avoiding office politics, using less office space, increased motivation, improved gender diversity (e.g. women and careers), healthier workforces with less absenteeism and turnover, higher talent retention, job satisfaction, and better productivity [7,8]. However, the present study has shown that work from home during the pandemic period was not at all, very less and less helpful for a very high proportion of people.

A slowdown in spending by Indian households is reported to have saved an additional $200 billion during the Covid pandemic and lockdowns (https://economictimes.indiatimes.com/news/economy/indicators/indians-saved-additional-200-billion-during-covid-pandemic-and-lockdowns/articleshow/80386426.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst, retrieved 24 August 2021). However, in the present study, a high proportion of respondents representing various States of India reported only a less or very less reduction in family expenses during the pandemic period.

The study has also shown some positive outcomes during the pandemic period. Very high proportion of respondents report that the COVID 19 induced lock down reduced pollution and created better environmental conditions very much and moderately. Similarly, the lock down time was used for learning agriculture/fisheries and other hobbies very much and moderately by a high proportion of respondents. Even though the level of social interaction outside the family was significantly restricted, time spent with their families increased very much and moderately during the pandemic period for many respondents. Unlike the past, the onset of the COVID pandemic and the resultant lockdown has given families across India and the world a new lease of familial bonding that was otherwise hard to come by. For the first time in a long time, many parents and kids and even grandparents are all under the same roof round-the-clock. This enforced togetherness can deepen relationships for years to come. According to Brad Wilcox, a professor of sociology and director of the National Marriage Project at the University of Virginia, people and families when faced with a global crisis, and especially one of this scale, tend to respond by orienting themselves in a less self-centred way and in a more family-centric way (https://timesofindia.indiatimes.com/life-style/spotlight/how-the-lockdown-is-cementing-relationships-and-bringing-families-together/articleshow/75731732.cms- retrieved 23rd August 2021).

The results reveal that majority of the respondents have faced medium to low COVID related consequences only. Further, people aged more than 40 years, with PG and Degree qualifications, and having more than 4 family members have faced less COVID related consequences only. This is substantiated by the comparatively higher proportion of people under these categories of age, education and no. of family members giving favourable responses for positive and negative consequences items. These findings also support the odds ratio values observed for these categories of the characteristics, which indicate the chances for people falling under the particular categories to face less COVID consequences.

To conclude, majority of the respondents under the study have faced medium level of COVID-19 related consequences, while some of them faced low consequences only. Negative consequences include mental stress, income/job loss, less social interaction, increase in health problems, unrest or quarrel in the family, social interaction/transportation/recreation/capability of old people to support themselves/health care for medical problems being affected, work from home not helpful, and less reduction in family expenses during the pandemic. Positive consequences of the pandemic such as reduced pollution and better environmental conditions due to lock down, lock down time used for learning agriculture/fisheries, and increase in time spent with family are also observed in the study. Age, education, and no. of family members of the respondents explain 69.9% of the variability in their total consequences score. People aged more than 40 years, those with PG and Degree qualifications, and people having more than 4 family members are found to have faced less consequences only. This is also substantiated by the comparatively higher proportion of people under these categories of age, education and no. of family members giving favourable responses for positive and negative consequences items under the study.

It would be worthwhile if studies on the consequences of the COVID-19 pandemic occurring during different periods are carried out in various parts of the affected countries in order to facilitate the health and other field level workers to introduce location specific measures/strategies to address the problems faced by people. The development of useful information through such studies appears to be essential in the days to come for the policy makers also, keeping in mind the fact that the pandemic is continuing in time, space and severity in different parts of the world even now.

References

  1. Balkhi F, Nasir A, Zehra A, Riaz R (2020) Psychological and behavioral response to the coronavirus (COVID-19) pandemic. Cureus 12: 5. [crossref]
  2. Annamuthu P, Shenbagavadivu T, Arthi S (2020) A study on the perception and precautionary measures taken by the general public amidst COVID-19. Int J Modern Trends Sci Technol 6: 169-174.
  3. Mikaberidze A (2020) Letter to the Editor: "Letter to the Editor." International Journal of Phytoremediation 20: 135-136.
  4. Bakioğlu F, Korkmaz O, Ercan H (2020) Fear of COVID-19 and Positivity: Mediating Role of Intolerance of Uncertainty, Depression, Anxiety, and Stress. Int J Ment Health Addict 28: 1-14. [crossref]
  5. Madhava Chandran K, Naveena K, Valsan T, Sreevallabhan S (2021) Analysis of the Mental State of People on COVID-19 Pandemic. International Journal of Indian Psychology 9: 839-845.
  6. Daoust J-F (2020) Elderly people and responses to COVID-19 in 27 Countries. [crossref]
  7. Mello JA (2007) Managing Telework Programs Effectively. Employee Responsibilities and Rights Journal 19: 247-261.
  8. Robertson MM, Maynard WS, McDevitt JR (2003) Telecommuting: Managing the Safety of Workers in Home Office Environments. Professional Safety 48: 30-36.

Maladjustment of Pressure Settings of Programmable Shunt Valves by Weak Magnetic Fields – A Case Report

DOI: 10.31038/PSYJ.2022412

Abstract

Introduction: Hydrocephalus is caused by the progressive accumulation of cerebrospinal fluid (CSF) within the intracranial space, resulting in an abnormal expansion of the cerebral ventricles and, consequently, in brain damage. The standard treatment of hydrocephalus in children and adults is the implantation of a shunt valve (e.g. the Codman-Hakim shunt valve from Johnson & Johnson). This study shows how easily a Codman-Hakim programmable valve can be maladjusted, even by magnetic field strengths occurring in daily life.

Methods and Materials: The Codman-Hakim valve is a programmable CSF shunt valve with an opening pressure between 30 and 200 mm H2O. The valve relies on a special ball-in-cone system. A spherical ruby ball is pressed against a conical valve seat by a stainless-steel spring. The spring is attached to a spiral cam. If the pressure difference across the valve exceeds a preset pressure adjustment, the ball rises from the seat and vents CSF. To provide a larger valve orifice, the ball moves further away from the seat once the flow rate through the valve increases.

Findings and Outlook: The electromagnetic locking mechanisms of common hospital doors employ magnetic field amplitudes strong enough to unintentionally change a patient's shunt settings. We experimentally verified that even weak (5-25 mT) magnetic fields can lead to significant changes in the spiral cam setting of Codman-Hakim shunt valves. That fields of up to 25 mT are sufficient suggests that the shunt valve might even interfere with household objects brought into close proximity (e.g. refrigerator magnets). As everyday life involves ever more electronic and technological devices, the number of potentially interfering devices is likely to increase. A systematic characterization of various shunt valves with respect to everyday objects might therefore be of significant importance to prevent "artificially" created psychiatric symptoms.

Keywords

Codman-Hakim programmable shunt valves, Maladjustment, Case report, Hydrocephalus

Introduction

Hydrocephalus is caused by a progressive accumulation of cerebral spinal fluid (CSF) within the intracranial space resulting in an abnormal expansion of cerebral ventricles and, consequently, in brain damage.

Implantation of ventriculo-peritoneal shunts (VP-shunts) is the standard treatment of hydrocephalus in children and adults. Most of the currently used shunt systems involve a valve to control pressure and drain CSF if needed [1-3].

In the last few years, malfunctions of programmable VP-shunts have been reported in cases in which patients have encountered powerful electromagnetic fields, e. g. Magnetic Resonance Imaging (MRI) [4,5]. However, the effects of small magnetic fields on VP-shunts are not well known.

In this study we present a case from Forensic Psychiatry in which pressure settings of an implanted Codman-Hakim programmable valve were changed when using electromagnetically controlled doors in a hospital ward.

Case Report

The patient is a 53-year-old man with triventricular hydrocephalus due to stenosis of the aqueductus cerebri, diagnosed in January 2013 and discovered incidentally via MRI performed because of a newly developed gait insecurity without Hakim's triad. Increasing psychomotor slowing and affective flattening were also described. Treatment with a left ventriculoperitoneal programmable Codman-Hakim valve and a Miethke shunt assistant was selected.

The pressure of the Codman-Hakim programmable valve was set to 60 mm H2O after the patient developed a hygroma as a sign of overdrainage in June 2018.

In September 2018 the patient's behavior changed slightly. He showed increasing affective flattening and changes in psychopathology such as dismissive behavior; a loss of motivation and discouraged answering were often noticed.

A skull X-ray revealed that the preset pressure had changed from 60 to 50 mm H2O. Considering the observed ventricular size, the patient's previous history of overdrainage and the maladjusted pressure setting of 50 mm H2O, the valve pressure was changed to 40 mm H2O. One day after the pressure setting was changed, the patient felt better and the described symptoms lessened.

In mid-January the same symptoms recurred. A skull X-ray revealed a pressure setting of 50 instead of the preset 40 mm H2O and excluded a shunt disconnection. Again, maladjustment of the pressure setting was thought to have caused the behavioral changes, and the valve pressure was subsequently reprogrammed to 40 mm H2O. Again, the patient improved clinically. Given the strict absence of mobile phones or any other external electromagnetic equipment, the valve's pressure setting had to have been changed by some device present in Forensic Psychiatry, namely the magnetic closure mechanism of the doors.

Methods

The Codman Hakim valve (Codman, Johnson & Johnson Company) is a programmable CSF shunt with an opening pressure between 30 and 200 mm H2O. The valve relies on a special ball-in-cone system. A spherical ruby ball is biased against a conical valve seat by a stainless-steel spring. Atop the spring sits a rotating spiral cam that contains a stepper motor. If the pressure difference across the valve exceeds a predefined popping pressure the ball rises from the seat to vent CSF. To provide a larger valve orifice the ball moves further away from the seat if the flow rate through the valve increases. Therefore, the pressure drop across the orifice never rises much above the predefined popping pressure.

To adjust a particular opening pressure, an external handheld programming device is placed over the valve so that the programmer's four coils enclose the spiral cam concentrically (Figure 1). The coils generate an electrically induced alternating magnetic field, so that individual magnets are attracted by one coil or another; by switching the electric current on and off, the spiral cam is rotated step by step. This allows the opening pressure to be set non-invasively across 18 steps of 10 mm H2O each.
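For illustration, the 18-step range described above corresponds to the following discrete opening-pressure settings (a sketch based only on the figures given in the text):

    # 18 settings from 30 to 200 mm H2O in steps of 10 mm H2O.
    settings_mm_h2o = [30 + 10 * step for step in range(18)]
    print(settings_mm_h2o)   # [30, 40, 50, ..., 190, 200]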


Figure 1: Sketch of a Codman-Hakim shunt valve

In addition to the described case, our internal testing in Forensic Psychiatry also showed changes in the valve's pressure settings. To evaluate interactions between the Codman-Hakim valve and the doors, a field experiment was conducted: a similar, unused Codman-Hakim shunt valve was held at the patient's face level while walking through different doors in the hospital ward. Before and after passing each door, the angle of the spiral cam was measured using an optical microscope (Figure 2).


Figure 2: Rotating spiral cam. Before and after passing a door, the angle of the spiral cam was measured using an optical microscope.

Conclusion

The described case and our internal testing suggest that even weak magnetic fields below 80 mT may lead to significant changes in the cam setting of Codman-Hakim shunt valves. Therefore, even common household items may interfere with Codman-Hakim shunt valves. In fact, any item that creates a magnetic field with a corresponding trajectory of movement, even devices in the healthcare environment, could potentially influence pressure settings. Because our everyday life involves more and more electronic and technological advances, the number of potentially interfering devices is very likely to increase. Both low-intensity and strong magnetic fields carry the risk of interacting with the pressure settings of shunt valves, a problem that both patients and medical professionals should be made aware of.

Even though the validation and reproducibility of our tests may have been somewhat limited, our results underline the sensitivity of Codman-Hakim shunt valves to even weak magnetic fields and pave the way for safer medical devices [6,7].

References

  1. Akbar M, Aschoff A, Georgi JC, Nennig E, Heiland S, et al. (2010) Adjustable Cerebrospinal Fluid Shunt Valves in 3.0-Tesla MRI: a Phantom Study using Explanted Devices. Rofo 182: 594-602. [crossref]
  2. Kahle KT, Kulkarni AV, Limbrick DD, Warf BC (2016) Hydrocephalus in children. Lancet 387: 788-799. [crossref]
  3. Mirzayan MJ, Klinge PM, Samii M, Goetz F, Krauss JK (2012) MRI safety of a programmable shunt assistant at 3 and 7 Tesla. Br J Neurosurg 26: 397-400. [crossref]
  4. Okazaki T, Oki S, Migita K, Kurisu K (2005) A rare case of shunt malfunction attributable to a broken Codman-Hakim programmable shunt valve after a blow to the head. Pediatr Neurosurg 41: 241-243. [crossref]
  5. Portillo Medina SA, Franco JVA, Ciapponi A, Garotte V, Vietto V (2017) Ventriculo-peritoneal shunting devices for hydrocephalus. Cochrane Database Syst Rev. [crossref]
  6. PROCEDURE GUIDE: Codman Hakim Programmable Valve System for Hydrocephalus.
  7. Schneider T, Knauff U, Nitsch J (2002) Electromagnetic field hazards involving adjustable shunt valves in hydrocephalus. J Neurosurg 96: 331-334. [crossref]

Home is Where the Heart is, but Where is “Home”?

DOI: 10.31038/PSYJ.2022411

 

Due to constant political and financial instability, many young adults are leaving Argentina and moving to various places around the world in search of a more promising future. This emigration has been rising steadily for the last few years…

Those of us who have been living abroad for some time know that living abroad is not easy and that finding a new place to call home takes time. One of the first questions you get when you meet someone is: where are you from? Of course, the answer to that question is easy. Later, when they get to know you, comes a second and sometimes tricky question: where is home for you?

Where is Home for Me

I was brought up in a family that moved from one country to another. Take into account that the internet and "family-based technology" are younger than me, so staying connected was hard… It was my dad's job; we all just followed along. Every three or four years we would come back to our "homeland", Argentina, but for me that was not home: no friends, no school, no familiar neighborhood… Home for me was where my parents lived; no special land, no matter the country, just that place where I could be myself. I was from "my family", and that was when I discovered that, for me, home was where my heart was.

I grew up and discovered I had the "moving bee" inside. I just went on traveling and moving from one place to another. While studying animal behavior, I saw that animals try to take possession of the place they live in. Usually they mark it with their smell just to make it theirs, to make it home for themselves and their family; this way they also let the rest know that the place is theirs. Well, I guess humans, or at least I, do the same in some way. We decorate our places, we do the lawn. We make it home.

Attachment to Home

There is a connection, a cognitive-emotional bond, between us humans and our settings. This attachment to what we call home is a common human experience; that is why moving isn't as easy as it might sound. Ignoring this fact or minimizing its effects can make the emigration process harder.

It is no secret that people develop a strong attachment to what we call home. It is not necessarily tied to a specific place; it is related to a sense of control, predictability and stability.

Home is Where My Heart is

As I see home as a part of my self-definition, I made a home of every place I lived. I considered each of those places my home at one time or another; whether it was for months or years, I made it mine. Home, then, was where I was, where my heart is. I myself was my home.

So if you have decided to emigrate, if you have chosen to move to another country, just remember to allow yourself the time to make the place you choose your home. This does not mean you regret what you left; it means your home is where your heart is.

Familiarity with Caspian Kutum (Rutilus kutum)

DOI: 10.31038/AFS.2022412

Abstract

Caspian kutum is one of the valuable and economically important species of the Caspian Sea basin, accounting in most years of exploitation for half of the bony fish catch. It has two forms, autumn and spring, with the spring form making up most of the stocks of this fish. These stocks have decreased for various reasons, such as irresponsible fishing, changes in the water level of the Caspian Sea and the construction of dams, and for this reason artificial reproduction of this fish has been used to compensate for the decline. Caspian kutum is anadromous: it migrates to rivers to reproduce when it reaches sexual maturity and then returns to the sea. After spawning and returning to the sea, Caspian kutum feed on the shallow shores of the Caspian Sea, an area rich in benthic animals, for the remainder of spring and summer. In late summer, due to the very high temperature, they leave the shallow shores and live in deeper waters, and when temperatures drop in autumn they return to the shallow coastal areas, at depths of less than 20 meters, to feed.

Keywords

Kutum, Caspian Sea, Anadromous

Introduction

The Caspian Sea is the largest lake in the world and the unique and major habitat of Caspian kutum [1-4]. Caspian kutum is a bony fish belonging to the family Cyprinidae, genus Rutilus, with the scientific name Rutilus kutum, and is native to the Caspian Sea. Caspian kutum are migratory, spending most of their lives in the salty waters of the sea and migrating every year in spring (mid-March to the end of May) to the fresh water of rivers for spawning and reproduction [5].

The diet of Caspian kutum is very diverse; in fact, Caspian kutum is omnivorous and voracious. The intensity of feeding varies at different times: for example, during the reproductive period, when they migrate to the river to lay eggs, the intestines of these fish are often found thick and empty, and in late winter, with decreasing temperature, this feeding index also decreases sharply [6].

Sexual Maturity of Caspian kutum

Sexual maturity in fish is affected by various environmental factors, such as temperature, day length (photoperiod) and water salinity. Changes in these factors can have adverse effects on fish reproduction [7].

Sexual Maturation Consists of Six Stages

Stage 1 – Immature

Very small sexual organs close to the spine; testicles and ovaries transparent and grayish in color; eggs invisible to the naked eye (oogonia).

Stage 2 – Immature

The testicles and ovaries are semi-transparent and gray, occupying half or slightly more than half the length of the abdominal cavity; the eggs are solitary and visible only with a magnifying glass. Spawned (resting) fish are also placed in this class (primary eggs).

Stage 3 – Developing

The testicles and ovaries are dark and reddish with blood capillaries and occupy half of the abdomen; the eggs are visible to the naked eye as copper-colored grains (hollow eggs).

Stage 4 – Preparation for Spawning

The genitals fill the abdominal area and the testicles are white; seminal fluid is shed under pressure and the eggs are completely round, some of them semi-transparent.

Stage 5 – Spawning

Eggs and sperm are released under slight pressure; most of the eggs are translucent, with a number of clear eggs.

Stage 6 – Spent (Post-spawning)

The ovaries are loose and wrinkled, the abdomen is completely empty and the eggs are spent [8].

Migration of Spring and Autumn forms of Caspian kutum

The maximum age of Caspian kutum is 9 to 10 years and its maximum weight is 5 to 6 kg. Male Caspian kutum mature at three years old and females at four years old. Caspian kutum spawn on aquatic plants as well as on bedrock, rocks and pebbles. The spawning peak of the spring form occurs in April and May, when the water temperature is between 13 and 15 degrees Celsius.

After migrating to the sea, Caspian kutum spend their feeding and growth stages there and, after reaching sexual maturity, enter the fresh water of the Anzali wetland and the rivers flowing into the Caspian Sea for natural reproduction.

If conditions are suitable, autumn-migrating Caspian kutum usually enter the wetland from the sea through the canal from early October, the males first and then the females. They overwinter in the deep areas of the eastern lagoon, such as the Shijan region, and then, as the weather warms in late winter, migrate to rivers lined with marginal vegetation such as reeds and cattails, on which they spawn; for this reason this form of Caspian kutum is called phytophilic. Today, however, the main population of Caspian kutum in the Caspian Sea belongs to the spring form, which accounts for more than 98% of the stock [9].

Artificial Reproduction

The annual catch of Caspian kutum from 1980 to 2006 ranged between 8 and 11 thousand tons per year. Comparison of release and catch figures shows that over the last 30 years most of the Caspian kutum stock has been supplied through artificial reproduction, and the available evidence indicates that during this period the conditions for natural reproduction have become less suitable each year, so that the share of natural reproduction in the existing Caspian Sea stock has declined to a very small fraction. Caspian kutum feed and grow in the sea, and after reaching sexual maturity only a very few rivers remain as the main sites for spawning and for the artificial reproduction of this species. Stock rebuilding involves capturing part of the population, reproducing it in captivity, and releasing the offspring into the wild. In this method, broodstock are caught in the rivers of the Caspian Sea basin; after artificial reproduction, the fertilized eggs are transferred to the breeding center, and finally fingerlings weighing about 2 g are released into the sea. In this way roughly 200 million fingerlings are produced annually through artificial reproduction, and this release plays a key role in restoring the stocks of this species [10-12].
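
As a rough illustration of the scale of the stocking programme, the figures quoted above (about 200 million fingerlings of roughly 2 g each, against an annual catch of 8-11 thousand tons) can be put side by side in a short calculation. The variable names below are our own, and the comparison is only indicative, since released fingerlings grow for several years before recruiting to the fishery.

# Back-of-the-envelope check of the stocking figures quoted in the text
# (illustrative only; assumes the 2 g release weight and 200 million count above).
RELEASED_FINGERLINGS = 200_000_000        # fingerlings released per year
FINGERLING_WEIGHT_G = 2.0                 # average weight at release, grams
ANNUAL_CATCH_TONNES = (8_000, 11_000)     # reported annual catch range, 1980-2006

# Convert grams to tonnes (1 tonne = 1,000,000 g).
released_biomass_tonnes = RELEASED_FINGERLINGS * FINGERLING_WEIGHT_G / 1_000_000
print(f"Released biomass: {released_biomass_tonnes:.0f} tonnes/year")   # about 400 tonnes

for catch in ANNUAL_CATCH_TONNES:
    ratio = catch / released_biomass_tonnes
    print(f"Annual catch of {catch} t is about {ratio:.0f}x the biomass released")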

References

  1. Kouchesfahani NE, Vajargah MF (2021) A short review on the biological characteristics of the species Esox lucius, Linnaeus, 1758 in Caspian Sea basin (Iran). Transylvanian Review of Systematical & Ecological Research 23: 73-80.
  2. Forouhar Vajargah M, Sattari M, Imanpour Namin J, Bibak M (2021) Evaluation of trace elements contaminations in skin tissue of Rutilus kutum Kamensky 1901 from the south of the Caspian Sea. Journal of Advances in Environmental Health Research 9: 139-148.
  3. Forouhar Vajargah M, Sattari M, Imanpour Namin J, Bibak M (2020) Length-weight, length-length relationships and condition factor of Rutilus kutum (Actinopterygii: Cyprinidae) from the southern Caspian Sea, Iran. Journal of Animal Diversity 2: 56-61.
  4. Vajargah MF, Sattari M, Namin JI, Bibak M (2021) Predicting the Trace Element Levels in Caspian Kutum (Rutilus kutum) from south of the Caspian Sea Based on Locality, Season and Fish Tissue. Biological Trace Element Research 200: 354-363. [crossref]
  5. Vajargah MF, Mohsenpour R, Yalsuyi AM, Galangash MM, Faggio C (2021) Evaluation of Histopathological Effect of Roach (Rutilus rutilus caspicus) in Exposure to Sub-Lethal Concentrations of Abamectin. Water, Air, & Soil Pollution 232: 1-8.
  6. Sattari M, Vajargah MF, Bibak M, Bakhshalizadeh S (2020) Relationship between Trace Element Content in the Brain of Bony Fish Species and Their Food Items in the Southwest of the Caspian Sea Due to Anthropogenic Activities. Avicenna Journal of Environmental Health Engineering 7: 78-85.
  7. Forouhar Vajargah M, Sattari M, Imanpour J, Bibak M (2020) Length-weight relationship and some growth parameters of Rutilus kutum (Kamensky 1901) in the South Caspian Sea. Experimental Animal Biology 9: 11-20.
  8. Sattari M, Namin JI, Bibak M, Vajargah MF, Hedayati A, et al. (2019) Morphological comparison of western and eastern populations of Caspian kutum, Rutilus kutum (Kamensky, 1901) (Cyprinidae) in the southern Caspian Sea. International Journal of Aquatic Biology 6: 242-247.
  9. Sattari M, Imanpour Namin J, Bibak M, Forouhar Vajargah M, Bakhshalizadeh S, et al. (2020) Determination of trace element accumulation in gonads of Rutilus kutum (Kamensky, 1901) from the south Caspian Sea trace element contaminations in gonads. Proceedings of the National Academy of Sciences, India Section B: Biological Sciences 90: 777-784.
  10. Vajargah MF, Hedayati A, Yalsuyi AM, Abarghoei S, Gerami MH, et al. (2014) Acute toxicity of Butachlor to Caspian Kutum (Rutilus frisii Kutum Kamensky, 1991). Journal of Environmental Treatment Techniques 2: 155-157.
  11. Sattari M, Bibak M, Forouhar Vajargah M (2020) Evaluation of trace elements contaminations in muscles of Rutilus kutum (Pisces: Cyprinidae) from the Southern shores of the Caspian Sea. Environmental Health Engineering and Management Journal 7: 89-96.
  12. Forouhar Vajargah M, Bibak M (2021) Pollution zoning on the southern shores of the Caspian Sea by measuring metals in Rutilus kutum. Biological Trace Element Research 1-11. [crossref]