Association of Pre-Pregnancy BMI and Gestational Weight Gain with Neonatal Body Size: A Cross-Sectional Study

DOI: 10.31038/IGOJ.2022513

Abstract

Background: Pre-pregnancy Body Mass Index (BMI) and Gestational Weight Gain (GWG) partially reflect maternal nutrition. This study aimed to explore the effects of pre-pregnancy BMI and GWG on the body size of neonates at birth.

Methods: A total of 546 mothers and their babies were enrolled from August 2017 to April 2018 at the Obstetrical Department of the 3rd Affiliated Hospital of Zhengzhou University. The levels of leptin and adiponectin in cord blood were measured, the placenta was weighed, and placental volume was estimated from its dimensions. The mothers were classified into low (BMI<18.5 kg/m²), normal (18.5≤BMI<25.0 kg/m²), and overweight/obese (BMI≥25.0 kg/m²) groups by pre-pregnancy BMI, and into low, normal, and high GWG groups according to GWG guidelines. The neonates were classified as small (SGA), large (LGA), or appropriate (AGA) for gestational age based on birth weight and gestational week.

Results: The incidence of SGA was higher in the low pre-pregnancy weight group than in the normal and overweight/obese groups (both P<0.05). The incidence of LGA was higher in the high GWG group than in the normal and low GWG groups (both P<0.05). Correlation analysis showed that neonatal birth weight (BW), body length (BL), head circumference (HC), and Ponderal Index (PI) were positively correlated with pre-pregnancy BMI and GWG (P<0.05, P<0.01). Neonatal BW, BL, HC, PI, placental weight, and placental volume were each positively correlated with cord-blood adiponectin and leptin levels (P<0.05).

Conclusions: Pre-pregnancy BMI and GWG are positively correlated with full-term neonatal size. Maintaining appropriate body weight before and during pregnancy is crucial for neonatal physical development. The positive correlation of cord-blood adiponectin and leptin with neonatal physical development suggests that both play an important role in regulating fetal growth and development.

Keywords

Gestational weight gain, LGA, BMI, SGA

Introduction

For newborns, Birth Weight (BW), Body Length (BL), and Head Circumference (HC) are the most intuitive indicators of physical development. Abnormal physical development not only increases the risk of neonatal illness or death, but also significantly affects the occurrence of several chronic diseases in childhood and adulthood [1]. Adequate maternal nutrition plays a crucial role in providing a nourishing uterine environment for fetal development, and inadequate or deficient maternal nutrition is associated with disruption of fetoplacental exchange. Maternal nutrition not only impacts neonatal development, but also has a long-term influence on health into adulthood [2]. Several epidemiological investigations and experimental studies have confirmed that malnutrition during pregnancy can cause neonatal organ dysplasia, endocrine disorders, and even chronic diseases in adulthood [3-6]. Optimal nutrition in early life, especially during the fetal period, is of lifelong importance. Studies have indicated that nutrition at an early stage of life influences the development of chronic non-communicable diseases in adulthood, such as obesity, diabetes, gout, hypertension, and coronary heart disease [7,8].

As a critical parameter of prenatal care, Gestational Weight Gain (GWG), which comprises the growth of the fetus, placenta, amniotic fluid, maternal adipose tissue, and breast tissue, reflects the health and nutritional condition of pregnant women. Other studies have shown that maternal malnutrition can impair neonatal development and increase the incidence of low BW, while maternal overnutrition and excessive GWG also increase the risk of adverse birth outcomes [9-11].

Reasonable dietary intake during gestation is important for appropriate neonatal growth and also helps prevent chronic diseases in adulthood. To date, data regarding the effect of pre-pregnancy Body Mass Index (BMI) and GWG on neonatal physical development are limited. Therefore, this study explored the association of pre-pregnancy BMI and GWG with neonatal development.

Methods

Subject Inclusion and Information Collection

A cross-sectional study was conducted. The subjects were pregnant women who planned to deliver at the Obstetrical Department of the 3rd Affiliated Hospital of Zhengzhou University from August 2017 to April 2018. The inclusion criteria were singleton pregnancy and full-term delivery with no significant disease in either mother or baby. The exclusion criteria were multiple gestation, premature delivery, pregnancy complications (such as gestational diabetes mellitus, hypertension, and pre-eclampsia), and other severe concomitant diseases. Basic information was collected through medical records and questionnaires, and informed consent was obtained.

This research was conducted in accordance with the Declaration of Helsinki and was approved by the Zhengzhou University Life Science Ethics Review Board (ZZUIRB 2021-139).

Grouping of the Subjects

According to the BMI standard for Asian adults [12], the mothers were divided into three groups by pre-pregnancy BMI: low weight (BMI<18.5 kg/m²), normal weight (18.5≤BMI<25.0 kg/m²), and overweight/obese (BMI≥25.0 kg/m²). Because few obese women were enrolled, the overweight and obese subjects were combined into a single overweight/obese group.

The GWG ranges recommended by the Institute of Medicine guideline (IOM, 2009) are 12.5-18 kg, 11.5-16 kg, 7-11.5 kg, and 5-9 kg for women who are low weight, normal weight, overweight, and obese before pregnancy, respectively [13]. Based on the IOM recommendation, the subjects were divided into low (below the IOM range), normal (within the IOM range), and high (above the IOM range) GWG groups.
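As an illustration of this two-step grouping, here is a minimal Python sketch. The BMI cutoffs and IOM ranges are those quoted above; the function names are ours, and applying the IOM overweight range to the merged overweight/obese group is an assumption, since the IOM distinguishes overweight from obese.

```python
# Minimal sketch (Python) of the two-step grouping described above.
# BMI cutoffs follow the Asian-adult standard; GWG ranges are the IOM (2009)
# recommendations quoted in the text. Names are illustrative, and applying
# the overweight range to the merged overweight/obese group is an assumption.

IOM_GWG_KG = {
    "low weight": (12.5, 18.0),
    "normal weight": (11.5, 16.0),
    "overweight/obese": (7.0, 11.5),  # IOM gives 5-9 kg separately for obese
}

def bmi_group(pre_pregnancy_bmi: float) -> str:
    """Classify pre-pregnancy BMI (kg/m^2)."""
    if pre_pregnancy_bmi < 18.5:
        return "low weight"
    if pre_pregnancy_bmi < 25.0:
        return "normal weight"
    return "overweight/obese"

def gwg_group(pre_pregnancy_bmi: float, gwg_kg: float) -> str:
    """Classify total GWG as low/normal/high relative to the IOM range."""
    low, high = IOM_GWG_KG[bmi_group(pre_pregnancy_bmi)]
    if gwg_kg < low:
        return "low"
    if gwg_kg > high:
        return "high"
    return "normal"

# Example with the cohort means (BMI 21.2 kg/m^2, GWG 17.2 kg) -> "high"
print(bmi_group(21.2), gwg_group(21.2, 17.2))
```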

The Neonatal Body Size

The body measurements included BW, BL, and HC of the neonates. The newborns were weighed on an electronic scale accurate to 0.01 kg, and BL and HC were measured with a tape accurate to 0.1 cm. The Ponderal Index (PI) [14], an index for estimating the nutritional condition of neonates, was calculated from BW and BL [PI = 100 × weight (g)/length (cm)³].
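As a quick check of the PI formula, a minimal sketch (names illustrative):

```python
def ponderal_index(weight_g: float, length_cm: float) -> float:
    """Ponderal Index: PI = 100 * weight (g) / length (cm)^3."""
    return 100.0 * weight_g / length_cm ** 3

# Example with the cohort means (BW 3.4 kg, BL 51.1 cm) -> ~2.55 g/cm^3
print(round(ponderal_index(3400, 51.1), 2))
```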

Based on the BW and gestational age, the neonates were divided into three groups [15]: (1) Small for gestational age (SGA): BW below the 10th percentile for the corresponding gestational age; (2) Large for gestational age (LGA): BW above the 90th percentile for the corresponding gestational age; (3) Appropriate for gestational age (AGA): BW between the 10th and 90th percentile for the corresponding gestational age.
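A minimal sketch of this percentile-based classification; the 10th and 90th percentile values must come from a reference birth-weight chart for the corresponding gestational week (the week-39 numbers in the example are hypothetical):

```python
def classify_for_gestational_age(bw_kg: float, p10_kg: float, p90_kg: float) -> str:
    """Classify BW against the 10th/90th percentiles for gestational age.

    p10_kg and p90_kg come from a reference birth-weight chart for the
    corresponding gestational week (chart lookup not shown here).
    """
    if bw_kg < p10_kg:
        return "SGA"
    if bw_kg > p90_kg:
        return "LGA"
    return "AGA"

# Example with hypothetical week-39 percentiles:
print(classify_for_gestational_age(3.4, p10_kg=2.9, p90_kg=4.0))  # AGA
```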

Measurement of Adiponectin and Leptin in Umbilical Cord Blood

After fetal delivery, 10 ml of umbilical venous blood was drawn immediately before delivery of the placenta. Serum was separated after centrifugation at 3000 rpm for 10 min and stored at -80°C until testing. Serum leptin and adiponectin levels were determined by enzyme-linked immunosorbent assay (ELISA) according to the kit instructions (Shanghai Fusheng Industrial Co., Ltd., China).

After delivery, the placenta was weighed and its volume was estimated, treating the placental surface as approximately elliptical: placental volume (cm³) = π/4 × long diameter (cm) × short diameter (cm) × thickness (cm).
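A minimal sketch of this volume estimate (the dimensions in the example are hypothetical):

```python
import math

def placental_volume_cm3(long_d_cm: float, short_d_cm: float, thickness_cm: float) -> float:
    """V = pi/4 * long diameter * short diameter * thickness (elliptical surface)."""
    return math.pi / 4.0 * long_d_cm * short_d_cm * thickness_cm

# Example with hypothetical dimensions (18 x 16 x 2.5 cm) -> ~565.5 cm^3
print(round(placental_volume_cm3(18.0, 16.0, 2.5), 1))
```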

Statistical Analysis

The database was established using EpiData 3.1, and SPSS 21.0 was used for data analysis. Continuous variables were expressed as mean ± SD (x̄ ± s), and categorical variables were presented as frequencies and percentages. The chi-square test, t-test, analysis of variance, and bivariate correlation analysis were used to analyze the data. The significance level was set at α=0.05.
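The analyses were run in SPSS; for readers who prefer code, an equivalent sketch of the same tests with SciPy (synthetic data, illustrative only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bw_low, bw_normal, bw_high = rng.normal(3.4, 0.4, (3, 50))  # BW (kg) by GWG group

# One-way ANOVA across the three groups
f, p_anova = stats.f_oneway(bw_low, bw_normal, bw_high)

# Two-sample t-test between two groups
t, p_t = stats.ttest_ind(bw_low, bw_high)

# Bivariate (Pearson) correlation between paired continuous variables
gwg = rng.normal(17.2, 4.9, 50)
bw = 3.0 + 0.02 * gwg + rng.normal(0.0, 0.3, 50)  # synthetic paired outcome
r, p_corr = stats.pearsonr(gwg, bw)

print(f"ANOVA F={f:.2f} (P={p_anova:.3f}); t={t:.2f} (P={p_t:.3f}); r={r:.2f} (P={p_corr:.3f})")
```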

Results

General Information of the Subjects

A total of 546 mothers and their newborns were included in the study. The average age of the mothers was 29.5 ± 4.4 years, and the means of pre-pregnancy BMI and GWG were 21.2 ± 2.7 kg/m² and 17.2 ± 4.9 kg, respectively. By pre-pregnancy BMI, 374 women (68.5%) were in the normal weight group, 89 (16.3%) in the low weight group, and 83 (15.2%) in the overweight/obese group. By GWG, 180 (33.0%) were in the normal, 50 (9.2%) in the low, and 316 (57.9%) in the high GWG group. The average BW, BL, and HC of neonates were 3.4 ± 0.4 kg, 51.1 ± 1.9 cm, and 34.8 ± 1.2 cm, respectively. Among the 546 neonates, 25 (4.6%) were SGA, 356 (65.2%) AGA, and 165 (30.2%) LGA (Table 1).

Table 1: General information of the pregnant women and newborns (n=546)

 

| Characteristic | n (%) | x̄ ± s |
|---|---|---|
| Mothers | | |
| Age (y) | | 29.5 ± 4.4 |
| Educational level | | |
| – Middle school or lower | 53 (9.7) | |
| – High school | 110 (20.1) | |
| – College and above | 383 (70.1) | |
| Parity | | |
| – 1 | 371 (67.9) | |
| – ≥2 | 175 (32.1) | |
| Delivery pattern | | |
| – Vaginal delivery | 244 (44.7) | |
| – Cesarean section | 302 (55.3) | |
| Gestational weeks | | 39.3 ± 1.2 |
| Pre-pregnancy BMI (kg/m²) | | 21.2 ± 2.7 |
| GWG (kg) | | 17.2 ± 4.9 |
| Pre-pregnancy BMI group | | |
| – Low weight | 89 (16.3) | |
| – Normal weight | 374 (68.5) | |
| – Overweight/Obese | 83 (15.2) | |
| GWG group | | |
| – Low | 50 (9.2) | |
| – Normal | 180 (33.0) | |
| – High | 316 (57.9) | |
| Newborns | | |
| BW (kg) | | 3.4 ± 0.4 |
| BL (cm) | | 51.1 ± 1.9 |
| HC (cm) | | 34.8 ± 1.2 |
| SGA | 25 (4.6) | |
| AGA | 356 (65.2) | |
| LGA | 165 (30.2) | |

Note: BMI: Body Mass Index; GWG: Gestational Weight Gain; BW: Birth Weight; BL: Body Length; HC: Head Circumference; SGA: Small for Gestational Age; AGA: Appropriate for Gestational Age; LGA: Large for Gestational Age

Relationship between Pre-pregnancy BMI and GWG

Noticeably, the highest GWG occurred in the pre-pregnancy normal weight group and the lowest in the overweight/obese group, but the differences among the three groups were not significant (P>0.05) (Table 2).

Table 2: Association of GWG with pre-pregnancy BMI (x̄ ± s)

| Pre-pregnancy BMI | n (%) | GWG (kg) |
|---|---|---|
| Low weight | 89 (16.3) | 16.83 ± 4.52 |
| Normal weight | 374 (68.5) | 17.50 ± 4.89 |
| Overweight/Obese | 83 (15.2) | 16.38 ± 5.46 |
| F | | 2.112 |
| P | | 0.122 |

Note: BMI: body mass index; Low weight: pre-pregnancy BMI<18.5 kg/m²; Normal weight: 18.5 kg/m²≤pre-pregnancy BMI<25.0 kg/m²; Overweight/Obese: pre-pregnancy BMI≥25.0 kg/m². GWG: gestational weight gain

Effect of Pre-pregnancy BMI on Neonatal Size

The frequency distribution of newborn birth weight differed among women with different pre-pregnancy BMI (χ²=17.625, P<0.01). In pairwise comparisons (Bonferroni-corrected α=0.05/3), the distribution in the low pre-pregnancy weight group differed distinctly from that in the normal weight and overweight/obese groups (χ²=11.224, P<0.01; χ²=15.404, P<0.01). Further analysis showed that the incidence of SGA was significantly higher, and the incidence of LGA lower, in the low pre-pregnancy weight group than in the normal weight and overweight/obese groups (both P<0.05) (Table 3).

Table 3: The distribution of neonatal body size in different pre-pregnancy BMI groups

| Pre-pregnancy BMI | n | SGA n (%) | AGA n (%) | LGA n (%) | BW (kg) | BL (cm) | HC (cm) | PI (g/cm³) |
|---|---|---|---|---|---|---|---|---|
| Low weight | 89 | 8 (9.0) | 67 (75.3) | 14 (15.7) | 3.2 ± 0.5 | 50.7 ± 2.1 | 34.5 ± 1.24 | 2.5 ± 0.2 |
| Normal weight | 374 | 14 (4.0) | 244 (65.0) | 116 (31.0) | 3.4 ± 0.4* | 51.2 ± 1.8* | 34.8 ± 1.2* | 2.5 ± 0.2 |
| Overweight/Obese | 83 | 3 (3.6) | 45 (54.2) | 35 (42.2) | 3.5 ± 0.5*# | 51.4 ± 1.8* | 35.0 ± 1.1* | 2.6 ± 0.3* |
| χ²/F | | 17.625 | | | 7.95 | 3.688 | 3.233 | 3.109 |
| P | | 0.001 | | | <0.001 | 0.026 | 0.040 | 0.045 |

Note: BMI: Body Mass Index; Low weight: pre-pregnancy BMI<18.5 kg/m²; Normal weight: 18.5 kg/m²≤pre-pregnancy BMI<25.0 kg/m²; Overweight/Obese: pre-pregnancy BMI≥25.0 kg/m²; SGA: small for gestational age; AGA: appropriate for gestational age; LGA: large for gestational age. Compared with low weight group, *P<0.05; Compared with normal weight group, #P<0.05
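For illustration, the Bonferroni-corrected pairwise comparisons described above can be reproduced from the Table 3 frequencies; a sketch with SciPy (not the original SPSS procedure):

```python
from itertools import combinations
import numpy as np
from scipy import stats

# SGA, AGA, LGA counts from Table 3
groups = {
    "low weight": [8, 67, 14],
    "normal weight": [14, 244, 116],
    "overweight/obese": [3, 45, 35],
}

alpha = 0.05 / 3  # Bonferroni correction for three pairwise tests
for a, b in combinations(groups, 2):
    chi2, p, _, _ = stats.chi2_contingency(np.array([groups[a], groups[b]]))
    print(f"{a} vs {b}: chi2={chi2:.3f}, P={p:.4f}, "
          f"{'significant' if p < alpha else 'not significant'}")
```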

The effect of pre-pregnancy BMI on neonatal BW, BL, HC, and PI was significant (P<0.01, P<0.05, P<0.05, P<0.05). In pairwise comparisons, the average BW of neonates in the normal weight and overweight/obese groups was significantly higher than in the low weight group (both P<0.01), and higher in the overweight/obese group than in the normal weight group (P<0.05). Neonatal BL (P<0.05, P<0.01) and HC (both P<0.05) were greater in the normal weight and overweight/obese groups than in the low weight group, and neonatal PI was significantly higher in the overweight/obese group than in the low weight group (P<0.05) (Table 3). Correlation analysis demonstrated that neonatal BW, BL, HC, and PI were positively correlated with pre-pregnancy BMI (P<0.01, P<0.05, P<0.01, P<0.05).

To investigate the effect of pre-pregnancy BMI on neonatal BW, BL, HC, and PI at different GWG levels, we examined these associations separately in the low, normal, and high GWG groups. In the normal GWG group, the average BW and BL of neonates were significantly higher in the normal weight than in the low weight pre-pregnancy BMI group (both P<0.05); in the high GWG group, PI was higher in the overweight/obese than in the low weight group. Within the same GWG group, neonatal BW, BL, HC, and PI tended to increase with pre-pregnancy BMI, but the differences were otherwise not significant (P>0.05) (Table 4).

Table 4: Association of pre-pregnancy BMI with neonatal body size in different GWG groups (x̄ ± s)

| GWG | Pre-pregnancy BMI | n | BW (kg) | BL (cm) | HC (cm) | PI (g/cm³) |
|---|---|---|---|---|---|---|
| Low | Low weight | 11 | 3.1 ± 0.4 | 49.7 ± 2.5 | 34.0 ± 1.1 | 2.5 ± 0.2 |
| Low | Normal weight | 37 | 3.1 ± 0.4 | 50.5 ± 2.1 | 34.2 ± 1.4 | 2.4 ± 0.2 |
| Normal | Low weight | 46 | 3.1 ± 0.5 | 50.2 ± 1.6 | 34.3 ± 1.3 | 2.5 ± 0.2 |
| Normal | Normal weight | 121 | 3.3 ± 0.4* | 50.8 ± 1.8* | 34.7 ± 1.2 | 2.5 ± 0.2 |
| Normal | Overweight/Obese | 13 | 3.3 ± 0.5 | 50.8 ± 2.4 | 34.5 ± 0.9 | 2.5 ± 0.2 |
| High | Low weight | 32 | 3.4 ± 0.4 | 51.7 ± 2.2 | 34.9 ± 1.1 | 2.5 ± 0.2 |
| High | Normal weight | 216 | 3.4 ± 0.4 | 51.5 ± 1.8 | 35.0 ± 1.1 | 2.5 ± 0.2 |
| High | Overweight/Obese | 68 | 3.5 ± 0.5 | 51.6 ± 1.7 | 35.1 ± 1.1 | 2.6 ± 0.3* |

Note: GWG: gestational weight gain; Low: GWG below IOM guideline; Normal: GWG within IOM guideline; High: GWG above IOM guideline; BMI: body mass index; Low weight: pre-pregnancy BMI<18.5 kg/m²; Normal weight: 18.5 kg/m²≤pre-pregnancy BMI<25.0 kg/m²; Overweight/Obese: pre-pregnancy BMI≥25.0 kg/m²; BW: birth weight; BL: body length; HC: head circumference; PI: Ponderal index; Compared with low weight group in same GWG group, *P<0.05

Effect of GWG on Neonatal Physical Development

The frequency distribution of newborn birth weight differed among the GWG groups (χ²=36.274, P<0.01). In pairwise comparisons (Bonferroni-corrected α=0.05/3), the distribution in the high GWG group differed distinctly from that in the normal and low GWG groups (χ²=18.629, P<0.01; χ²=25.248, P<0.01). Further analysis showed that the incidence of LGA was higher, and the incidence of SGA lower, in the high GWG group than in the low and normal GWG groups (all P<0.05) (Table 5).

Table 5: The distribution of neonatal body size in different GWG groups

| GWG | n | SGA n (%) | AGA n (%) | LGA n (%) | BW (kg) | BL (cm) | HC (cm) | PI (g/cm³) |
|---|---|---|---|---|---|---|---|---|
| Low | 50 | 6 (12.0) | 40 (80.0) | 4 (8.0) | 3.1 ± 0.4 | 50.3 ± 2.1 | 34.1 ± 1.3 | 2.4 ± 0.2 |
| Normal | 180 | 11 (6.1) | 131 (72.8) | 38 (21.1) | 3.3 ± 0.4* | 50.6 ± 1.8 | 34.6 ± 1.2* | 2.5 ± 0.2* |
| High | 316 | 8 (2.5) | 185 (58.5) | 123 (38.9) | 3.5 ± 0.4*# | 51.5 ± 1.8*# | 35.0 ± 1.1*# | 2.5 ± 0.2* |
| χ²/F | | 36.274 | | | 22.089 | 18.043 | 15.41 | 3.696 |
| P | | <0.001 | | | <0.001 | <0.001 | <0.001 | 0.025 |

Note: GWG: gestational weight gain; Low: GWG below IOM guideline; Normal: GWG within IOM guideline; High: GWG above IOM guideline; SGA: small for gestational age; AGA: appropriate for gestational age; LGA: large for gestational age. Compared with low GWG group, *P<0.05; Compared with normal GWG group, #P<0.05

Neonatal BW, BL, HC, and PI differed significantly among the three GWG groups (P<0.01, P<0.01, P<0.01, P<0.05). The average neonatal BW in the normal and high GWG groups was significantly higher than in the low GWG group (P<0.05, P<0.01), and BW was notably higher in the high than in the normal GWG group (P<0.01). Neonatal BL was significantly longer in the high GWG group than in the low and normal GWG groups (both P<0.01). Neonatal HC was significantly larger in the high and normal GWG groups than in the low GWG group (both P<0.01), and larger in the high than in the normal GWG group (P<0.01). Neonatal PI was significantly higher in the normal and high GWG groups than in the low GWG group (P<0.05, P<0.01) (Table 5). Correlation analysis showed that neonatal BW, BL, HC, and PI were positively correlated with GWG (all P<0.01).

After stratification by pre-pregnancy BMI, in the low pre-pregnancy BMI group, neonatal BW, BL, and HC were significantly higher in the high GWG group than in the low (P<0.05, P<0.01, P<0.05) and normal GWG groups (P<0.01, P<0.01, P<0.05). In the normal pre-pregnancy BMI group, neonatal BW, HC, and PI were significantly higher in the normal GWG group than in the low GWG group (P<0.05, P<0.05, P<0.01); BW and BL were significantly higher in the high GWG group than in the low (both P<0.01) and normal GWG groups (both P<0.01); and HC and PI were significantly higher in the high GWG group than in the low GWG group (both P<0.01). Within the overweight/obese group, neonatal BW, BL, HC, and PI did not differ significantly among the GWG groups (P>0.05) (Table 6).

Table 6: Association of GWG with neonatal body size in different pre-pregnancy BMI groups (x̄ ± s)

| Pre-pregnancy BMI | GWG | n | BW (kg) | BL (cm) | HC (cm) | PI (g/cm³) |
|---|---|---|---|---|---|---|
| Low weight | Low | 11 | 3.1 ± 0.4 | 49.7 ± 2.5 | 34.0 ± 1.1 | 2.5 ± 0.2 |
| Low weight | Normal | 46 | 3.1 ± 0.5 | 50.2 ± 1.6 | 34.3 ± 1.3 | 2.5 ± 0.2 |
| Low weight | High | 32 | 3.4 ± 0.4*## | 51.7 ± 2.2**## | 34.9 ± 1.1*# | 2.5 ± 0.2 |
| Normal weight | Low | 37 | 3.1 ± 0.4 | 50.5 ± 2.1 | 34.2 ± 1.4 | 2.4 ± 0.2 |
| Normal weight | Normal | 121 | 3.3 ± 0.4* | 50.8 ± 1.8 | 34.7 ± 1.2* | 2.5 ± 0.2** |
| Normal weight | High | 216 | 3.4 ± 0.4**## | 51.5 ± 1.8**## | 35.0 ± 1.1** | 2.5 ± 0.2** |
| Overweight/Obese | Normal | 13 | 3.3 ± 0.5 | 50.8 ± 2.4 | 34.5 ± 0.9 | 2.5 ± 0.2 |
| Overweight/Obese | High | 68 | 3.5 ± 0.5 | 51.6 ± 1.7 | 35.1 ± 1.1 | 2.6 ± 0.3 |

Note: BMI: body mass index; Low weight: pre-pregnancy BMI<18.5 kg/m²; Normal weight: 18.5 kg/m²≤pre-pregnancy BMI<25.0 kg/m²; Overweight/Obese: pre-pregnancy BMI≥25.0 kg/m²; GWG: gestational weight gain; Low: GWG below IOM guideline; Normal: GWG within IOM guideline; High: GWG above IOM guideline; BW: birth weight; BL: body length; HC: head circumference; PI: Ponderal index; Compared with low GWG group, *P<0.05, **P<0.01. Compared with normal GWG group, #P<0.05, ##P<0.01

Comparison of Leptin and Adiponectin in Umbilical Cord Blood across Pre-pregnancy BMI, GWG, and Neonatal Birth-weight Groups

The levels of leptin and adiponectin in cord blood did not differ significantly among the pre-pregnancy BMI groups or among the GWG groups (both P>0.05) (Table 7).

Table 7: Serum leptin and adiponectin levels of cord blood in different pre-pregnancy BMI, GWG and neonatal birth-weight groups (x̄ ± s)

| Group | n | Leptin (μg/L) | F | P | Adiponectin (pg/ml) | F | P |
|---|---|---|---|---|---|---|---|
| Pre-pregnancy BMI | | | 0.461 | 0.631 | | 1.192 | 0.307 |
| – Low | 21 | 14.2 ± 6.0 | | | 2081.9 ± 866.1 | | |
| – Normal | 106 | 13.5 ± 4.2 | | | 1769.1 ± 648.5 | | |
| – Overweight/Obese | 37 | 12.8 ± 4.1 | | | 1836.9 ± 629.9 | | |
| GWG | | | 1.138 | 0.324 | | 0.160 | 0.853 |
| – Low | 19 | 12.5 ± 4.5 | | | 1766.3 ± 676.9 | | |
| – Normal | 53 | 12.8 ± 4.2 | | | 1851.9 ± 634.1 | | |
| – High | 92 | 13.8 ± 4.5 | | | 1786.0 ± 683.3 | | |
| Newborns | | | 6.102 | 0.003 | | 5.096 | 0.007 |
| – SGA | 6 | 11.2 ± 4.6 | | | 1539.9 ± 488.4 | | |
| – AGA | 108 | 12.4 ± 3.7 | | | 1696.6 ± 605.2 | | |
| – LGA | 50 | 14.9 ± 5.1*## | | | 2043.8 ± 746.3## | | |
Note: BMI: body mass index; Low: pre-pregnancy BMI<18.5 kg/m²; Normal: 18.5 kg/m²≤pre-pregnancy BMI<25.0 kg/m²; Overweight/Obese: pre-pregnancy BMI≥25.0 kg/m². GWG: gestational weight gain; Low: GWG below IOM guideline; Normal: GWG within IOM guideline; High: GWG above IOM guideline. SGA: small for gestational age; AGA: appropriate for gestational age; LGA: large for gestational age. Compared with SGA group, *P<0.05; Compared with AGA group, ##P<0.01

The levels of leptin and adiponectin in umbilical cord blood differed significantly among the neonatal birth-weight groups (P<0.01). The serum leptin level was higher in the LGA group than in the SGA and AGA groups (P<0.05, P<0.01), and the adiponectin level was higher in the LGA group than in the AGA group (P<0.01) (Table 7).

Relationship between Serum Leptin, Adiponectin of Cord Blood and Neonatal Body Size

The levels of leptin and adiponectin in cord blood were positively correlated with neonatal BW, BL, HC, PI, placental volume, and placental weight (P<0.05) (Table 8).

Table 8: Relationship between serum leptin, adiponectin of cord blood and neonatal body size

| Index | | BW (kg) | BL (cm) | HC (cm) | PI (g/cm³) | PV (cm³) | PW (g) |
|---|---|---|---|---|---|---|---|
| Leptin (μg/L) | r | 0.309 | 0.254 | 0.213 | 0.174 | 0.179 | 0.222 |
| | P | 0.000 | 0.002 | 0.010 | 0.035 | 0.032 | 0.038 |
| Adiponectin (pg/ml) | r | 0.273 | 0.198 | 0.175 | 0.178 | 0.195 | 0.213 |
| | P | 0.001 | 0.016 | 0.037 | 0.030 | 0.019 | 0.011 |

Note: BW: birth weight; BL: body length; HC: head circumference; PI: Ponderal index; PV: Placental volume; PW: Placental weight

Discussion

Maintaining optimal GWG and pre-pregnancy BMI is essential for the health and well-being of both mother and child. This study investigated the effects of GWG and pre-pregnancy BMI on neonatal size. BW is a key index for evaluating neonatal health and predicting some adult chronic diseases; too low or too high BW can increase the risk of neonatal diseases [16-19].

In our study, the means of pre-pregnancy BMI and GWG were 21.2 ± 2.7 kg/m² and 17.2 ± 4.9 kg, respectively. The percentages of pre-pregnancy low weight and overweight were 16.3% and 15.2%, respectively, consistent with another study in China [20]. Low pre-pregnancy BMI is associated with an increased risk of preterm delivery and of having an SGA infant [21], and infants born small or SGA have higher neonatal morbidity and mortality than those of normal birth weight [22]. In addition, pre-pregnancy overweight may increase the risk of adverse neonatal outcomes: the incidences of macrosomia and dystocia rise with increasing pre-pregnancy BMI [23]. Pre-pregnancy BMI is therefore an important predictor of fetal growth. Our study showed that the percentages of low, normal, and high GWG were 9.2%, 33.0%, and 57.9%, respectively, meaning that more than half of the women gained more weight than recommended, especially in the pre-pregnancy normal weight and overweight groups. Misleading beliefs, such as the idea that greater food intake, particularly high protein intake, is good for pregnancy, may contribute to excessive GWG [24,25]. There is an evident need for scientific and reasonable guidance on pre-pregnancy BMI and GWG.

The present study showed that the incidences of SGA, AGA, and LGA were 4.6%, 65.2%, and 30.2%, respectively, which differs from one cohort study [24] but is similar to the MINA cohort study in Lebanon and Qatar [25]. The 4.6% proportion of SGA in our study was slightly lower than the 6.7% among MINA participants [25], while the 30.2% proportion of LGA was slightly higher than the 24.6% recently reported from the MINA cohort [25]. Some reports indicate that the incidences of LGA and macrosomia are higher in obese women than in normal weight women [26-29]. In our study, the incidence of SGA was lower, and the incidence of LGA higher, in the pre-pregnancy normal weight and overweight groups than in the low weight group. Moreover, the incidence of LGA was higher in the high GWG group than in the low and normal GWG groups, while the incidence of SGA was higher in the low GWG group than in the other groups, similar to other studies [20,31,32]. Excessive GWG and pre-pregnancy overweight imply that pregnant women have more fat deposits and even a potential risk of dyslipidemia [33], which could increase energy flow to the fetus through the placenta [34].

In the present study, the average BW, BL, and HC of newborns were 3.4 ± 0.4 kg, 51.1 ± 1.9 cm, and 34.8 ± 1.2 cm, respectively, similar to other studies [25,35,36]. These three parameters, plus PI, were positively correlated with pre-pregnancy BMI and GWG; in other words, neonatal BW, BL, HC, and PI increased with pre-pregnancy BMI and GWG. These findings accord with the study reported by Stamnes Koepp et al. [37]. However, after adjustment for GWG, the association of pre-pregnancy BMI with neonatal BW, BL, HC, and PI was no longer apparent, implying that the effect of pre-pregnancy BMI on these measures may not necessarily operate through GWG, or reflecting the small sample size of each group after stratification. Nevertheless, either too low or too high a pre-pregnancy BMI is not conducive to the health of mother and child, and women who are underweight, overweight, or obese should try to achieve a healthy weight before pregnancy for a better pregnancy outcome. After stratification by pre-pregnancy BMI, neonatal BW, BL, HC, and PI increased with GWG in the low and normal pre-pregnancy weight groups, indicating that the influence of GWG on these measures holds regardless of pre-pregnancy BMI. Nutritional plans should be personalized based on pre-pregnancy BMI, and the importance of appropriate GWG should be emphasized for optimal fetal growth [38-40].

Leptin is the protein product of the obesity (ob) gene. As an intermediary molecule linking the fetal neuroendocrine system and adipose tissue, leptin participates in the regulation of fetal body mass growth throughout gestation, especially in the 2nd and 3rd trimesters [41]. Adiponectin is mainly secreted by adipocytes and plays important roles in insulin sensitivity, anti-inflammation, anti-atherosclerosis, and the maintenance of metabolic and energy balance. One study found that changes in serum adiponectin levels could reflect early postnatal weight gain in newborns [42]. Our research found that serum leptin and adiponectin levels in umbilical cord blood did not differ significantly among pre-pregnancy BMI or GWG groups. Theoretically, substances with molecular weights above 500 Da cannot pass through the placental barrier [43], and the molecular weights of leptin and adiponectin are 16 kDa and 30 kDa, respectively [44,45]. Therefore, maternal serum leptin and adiponectin could not contribute to the leptin and adiponectin levels in the fetal circulation. Our study also found that serum leptin and adiponectin levels in umbilical cord blood were higher in the LGA group than in the SGA and AGA groups and were significantly positively correlated with neonatal BW and placental weight, suggesting that the placenta and fetal adipose tissue, rather than maternal production, may be the main sources of leptin and adiponectin. This finding is consistent with previous reports [46,47]. Moreover, the significant correlation between cord-blood leptin and adiponectin levels and neonatal body size may imply that they participate in the growth and development of the fetus.

Several limitations of this study should be addressed. First, the sample size is relatively small, and the results need to be confirmed in larger or prospective cohort studies. Second, the study focused only on the effects of pre-pregnancy BMI and GWG on neonatal BW, BL, HC, PI, placental volume, and placental weight, without considering the effects of heredity and ethnicity.

Conclusion

The present study indicated that both pre-pregnancy BMI and GWG are positively associated with the physical development of neonates. Low pre-pregnancy weight is strongly associated with the incidence of SGA, and excessive GWG might increase the risk of LGA. Therefore, both pre-pregnancy body weight and GWG should be considered for optimal neonatal physical development, which requires appropriate nutritional guidance for child-bearing women. Moreover, the positive correlation between cord-blood leptin and adiponectin and neonatal physical development suggests that these hormones might be involved in the regulation of fetal growth and development.

Acknowledgments

We would like to thank the obstetrical department of the 3rd Affiliated Hospital of Zhengzhou University for their support during the study. We are grateful to all the participants in this study.

Funding

This study was supported by a Grant for Key Research Items (project number: 201203063) in Medical science and Technology Project of Henan Province from Henan Provincial Health Bureau. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  1. Zhang Q, Wu Y, Zhuang Y, Cao J, et al. (2016) Neurodevelopmental outcomes of extremely low birth weight and very low birth weight infants and related influencing factors. Chinese Journal of Contemporary Pediatrics 18: 683-687. [crossref]
  2. Ramakrishnan U, Grant F, Goldenberg T, Zongrone A, et al. (2012) Effect of women’s nutrition before and during early pregnancy on maternal and infant outcomes: a systematic review. Paediatr Perinat Epidemiol 26: 285-301. [crossref]
  3. Fall CH (2013) Fetal malnutrition and long-term outcomes. Nestle Nutr Inst Workshop Ser 74: 11-25. [crossref]
  4. Karchmer S, Aguilar Guerrero JA, Cinco Arenas JE, Chávez Auela J, et al. (1967) Influence of maternal malnutrition on pregnancy, puerperium and on the newborn. Gac Med Mex 97: 1310-1326. [crossref]
  5. Yan X, Zhao X, Li J, He L, et al. (2018) Effects of early-life malnutrition on neurodevelopment and neuropsychiatric disorders and the potential mechanisms. Prog Neuropsychopharmacol Biol Psychiatry 83: 64-75.
  6. Ramakrishnan U, Imhoff-Kunsch B, Martorell R (2014) Maternal nutrition interventions to improve maternal, newborn, and child health outcomes. Nestle Nutr Inst Workshop Ser 78: 71-80. [crossref]
  7. Alderman H, Fernald L (2017) The nexus between nutrition and early childhood development. Annu Rev Nutr 37: 447-476. [crossref]
  8. Moreno Villares JM (2016) Nutrition in early life and the programming of adult disease: the first 1000 days. Nutr Hosp 33: 8-11. [crossref]
  9. Kaur S, Ng CM, Badon SE, Jalil RA, et al. (2019) Risk factors for low birth weight among rural and urban Malaysian women. BMC Public Health 19: 539. [crossref]
  10. Ben Naftali Y, Chermesh I, Solt I, Friedrich Y (2018) Achieving the recommended gestational weight gain in high-risk versus low-risk pregnancies. Isr Med Assoc J 20: 411-414. [crossref]
  11. Haby K, Berg M, Gyllensten H, Hanas R (2018) Mighty Mums – a lifestyle intervention at primary care level reduces gestational weight gain in women with obesity. BMC Obes 5: 16. [crossref]
  12. Sun C (2017) Nutrition and Food Hygiene. 8th edition. Beijing: People’s Medical Publishing House 215.
  13. Rasmussen KM, Yaktine AL, Institute of Medicine (US) and National Research Council (US) Committee to Reexamine IOM Pregnancy Weight Guidelines (Eds.) (2009) Weight Gain During Pregnancy: Reexamining the Guidelines. Washington (DC): National Academies Press (US). doi: 10.17226/12584.
  14. Nawal M Nour (2017) Obstetrics and gynecology in low-resource settings: a practical guide. Cambridge, MA: Harvard University Press.
  15. Xue X (2013) Pediatrics. 2nd edition. Beijing: People’s Medical Publishing House 100.
  16. Barker DJ, Gelow J, Thornburg K, Osmond C, et al. (2010) The early origins of chronic heart failure: impaired placental growth and initiation of insulin resistance in childhood. Eur J Heart Fail 12: 819-825. [crossref]
  17. McGuire SF (2017) Understanding the implications of birth weight. Nurs Womens Health 21: 45-49. [crossref]
  18. Wang J, Moore D, Subramanian A, Cheng KK, et al. (2018) Gestational dyslipidaemia and adverse birthweight outcomes: a systematic review and meta-analysis. Obes Rev 19: 1256-1268. [crossref]
  19. Li C, Zeng L, Wang D, Dang S, et al. (2019) Effect of maternal pre-pregnancy BMI and weekly gestational weight gain on the development of infants. Nutr J 18: 6.
  20. Zhao R, Xu L, Wu ML, Huang SH, et al. (2018) Maternal pre-pregnancy body mass index, gestational weight gain influence birth weight. Women Birth 31: e20-25. [crossref]
  21. Watanabe H, Inoue K, Doi M, Matsumoto M, et al. (2010) Risk factors for term small for gestational age infants in women with low prepregnancy body mass index. J Obstet Gynaecol Res 36: 506-512. [crossref]
  22. McIntire CD, Bloom SL, Casey BM, Leveno KJ (1999) Birth weight in relation to morbidity and mortality among newborn infants. N Engl J Med 340: 1234-1238. [crossref]
  23. Wang F, Chen Q, Yang L, Cai X, et al. (2020) Effect of pre-pregnancy weight and gestational weight gain on neonatal birth weight: a prospective cohort study in Chongqing City. Wei Sheng Yan Jiu 49: 705-710. [crossref]
  24. Horng HC, Huang BS, Lu YF, Chang WH, et al. (2018) Avoiding excessive pregnancy weight gain to obtain better pregnancy outcomes in Taiwan. Medicine (Baltimore) 97: e9711. [crossref]
  25. Arora P, Tamber Aeri B (2019) Gestational weight gain among healthy pregnant women from Asia in comparison with Institute of Medicine (IOM) Guidelines-2009: a systematic review. J Pregnancy 2019: 3849596. [crossref]
  26. Kurtoğlu S, Hatipoğlu N, Mazıcıoğlu MM, Akın MA, et al. (2012) Body weight, length and head circumference at birth in a cohort of Turkish newborns. J Clin Res Pediatr Endocrinol 4: 132-139. [crossref]
  27. Abdulmalik MA, Ayoub JJ, Mahmoud A, Nasreddine L, et al. (2019) Pre-pregnancy BMI, gestational weight gain and birth outcomes in Lebanon and Qatar: Results of the MINA cohort. PLoS One 14: e0219248. [crossref]
  28. Athukorala C, Rumbold AR, Willson KJ, Crowther CA (2010) The risk of adverse pregnancy outcomes in women who are overweight or obese. BMC Pregnancy Childbirth 10: 56. [crossref]
  29. Nowak M, Kalwa M, Oleksy P, Marszalek K, et al. (2019) The relationship between pre-pregnancy BMI, gestational weight gain and neonatal birth weight: a retrospective cohort study. Ginekol Pol 90: 50-54. [crossref]
  30. Life Cycle Project-Maternal Obesity and Childhood Outcomes Study Group, Voerman E, Santos S, Inskip H, Amiano P, et al. (2019) Association of gestational weight gain with adverse maternal and infant outcomes. JAMA 321: 1702-1715. [crossref]
  31. Morisaki N, Nagata C, Jwa SC, Sago H, et al. (2017) Pre-pregnancy BMI specific optimal gestational weight gain for women in Japan. J Epidemiol 27: 492-498. [crossref]
  32. Abreu LRS, Shirley MK, Castro NP, Euclydes VV, et al. (2019) Gestational diabetes mellitus, pre-pregnancy body mass index, and gestational weight gain as risk factors for increased fat mass in Brazilian newborns. PLoS One 14: e0221971. [crossref]
  33. Nelson SM, Matthews P, Poston L (2010) Maternal metabolism and obesity: Modifiable determinants of pregnancy outcome. Hum Reprod Update 16: 255-275. [crossref]
  34. Alfaradhi MZ, Ozanne SE (2011) Developmental programming in response to maternal over nutrition. Front Genet 2: 27. [crossref]
  35. Chen Y, Wu L, Zou L, Li G, et al. (2017) Update on the birth weight standard and its diagnostic value in Small for Gestational Age (SGA) infants in China. J Matern Fetal Neonatal Med 30: 801-807. [crossref]
  36. Davis SM, Kaar JL, Ringham BM, Hockett CW, et al. (2019) Sex differences in infant body composition emerge in the first 5 months of life. J Pediatr Endocrinol Metab 32: 1235-1239. [crossref]
  37. Stamnes Koepp UM, Frost Andersen L, Dahl-Joergensen K, Stigum H, et al. (2012) Maternal pre-pregnant body mass index, maternal weight change and offspring birthweight. Acta Obstet Gynecol Scand 91: 243-249. [crossref]
  38. Goldstein RF, Abell SK, Ranasinha S, Misso M, et al. (2017) Association of Gestational Weight Gain With Maternal and Infant Outcomes: A Systematic Review and Meta-analysis. JAMA 317: 2207-2225. [crossref]
  39. Goldstein RF, Abell SK, Ranasinha S, Misso ML, et al. (2018) Gestational weight gain across continents and ethnicity: systematic review and meta-analysis of maternal and infant outcomes in more than one million women. BMC Med 16: 153. [crossref]
  40. Shi X, Yue J, Lyu M, Wang L, et al. (2019) Influence of pre-pregnancy parental body mass index, maternal weight gain during pregnancy, and their interaction on neonatal birth weight. Zhongguo Dang Dai Er Ke Za Zhi 21:783-788. [crossref]
  41. Raghavan R, Zuckerman B, Hong X, Wang G, et al. (2018) Fetal and Infancy Growth Pattern, Cord and Early Childhood Plasma Leptin, and Development of Autism Spectrum Disorder in the Boston Birth Cohort. Autism Res 11: 1416-1431. [crossref]
  42. Li J, Tang W, Zheng H, Lu X (2015) Correlation study of adiponectin in umbilical cord blood of severe pre-eclampsia with the neonatal outcomes. Jiangxi Medical Journal 50: 19-22.
  43. Cunningham FG, Gant NF, Leveno KJ, Gilstrap LC, et al. (2006) Williams Obstetrics, 21st edition. China, Shandong Science & Technology Press.
  44. Zhu D (2009) Physiology. Beijing: People’s Medical Publishing House 374-375.
  45. Kishore U, Reid KB (1999) Modular organization of proteins containing C1q-like globular domain. Immunopharmacology 42: 15-21. [crossref]
  46. Chan TF, Yuan SS, Chen HS, Guu CF, et al. (2004) Correlations between umbilical and maternal serum adiponectin levels and neonatal birth weights. Acta Obstet Gynecol Scand 83: 165-169. [crossref]
  47. Ma W, Xu N (2007) Research progress of neonatal cord blood leptin. Journal of Baotou Medical College: 209-213.

Are Ingested B. anthracis Spores a Contribution to Anthrax Disease Progression in the Mouse Aerosol Challenge Model?

DOI: 10.31038/IDT.2022313

Abstract

Balb/c mice were challenged orally with increasing amounts of either B. anthracis Sterne or Ames spores in order to determine lethal gastrointestinal dose levels. Only a single animal succumbed, at the 10^10 spore challenge dose for Sterne. The oral LD50 for Ames was 10^8 spores, with 100% survival at a challenge dose of 10^5. Re-challenge of the surviving 10^9 and 10^10 Sterne and 10^6 and 10^7 Ames oral-challenge animals with a lethal aerosol dose of Ames resulted in all animals succumbing, with no increase in mean time to death, indicating that no lasting immunological response was elicited by survival of the oral spore challenge.

Keywords

Anthrax, Mouse, Oral challenge, Spores

Introduction

The murine anthrax aerosol challenge model has become a proof-of-concept standard in the evaluation and development of therapeutics for the treatment of B. anthracis infections [1-6]. Because the model relies on whole-body exposure, concerns have been raised that murine ingestion of anthrax spores through daily grooming after challenge may lead to gastrointestinal infection via the oral route, complicating interpretation of study results. Additionally, post-therapy survival could be enhanced by elicitation of an immune response through ingestion of anthrax spores [7,8].

Materials and Methods

B. anthracis Ames and Sterne spores were prepared according to the method of Leighton and Doi and were maintained in sterile water for injection [9]. Spores were diluted in sterile water to concentrations ranging from 10^2 to 10^11 CFU/ml so that challenge doses of 10 to 10^10 CFU/mouse could be delivered by oral gavage in a 0.1 ml volume to female Balb/c mice (6-8 weeks old). To verify final bacterial concentrations and exposure doses, colonies were enumerated after serial dilution and plating on sheep blood agar (SBA) plates incubated at 35°C. Animals were observed 4 times per day and deaths recorded. All analyses were performed using stratified Kaplan-Meier analysis with a log-rank test as implemented in Prism Version 5 (GraphPad Software). Research was conducted under an IACUC-approved protocol in compliance with the Animal Welfare Act, PHS Policy, and other Federal statutes and regulations relating to animals and experiments involving animals. The facility where this research was conducted is accredited by the Association for Assessment and Accreditation of Laboratory Animal Care, International, and adheres to principles stated in the 8th Edition of the Guide for the Care and Use of Laboratory Animals, National Research Council, 2011.

Surviving animals from the 10^9 and 10^10 CFU Sterne and the 10^6 and 10^7 CFU Ames oral challenge groups were re-challenged two months later with an inhaled dose of 50-75 LD50 (LD50 = 3.4 × 10^4 CFU) of B. anthracis Ames strain spores by whole-body aerosol [1]. Aerosol was generated using a three-jet Collison nebulizer [10]. All aerosol procedures were controlled and monitored using the Automated Bioaerosol Exposure system operating with a whole-body rodent exposure chamber [11]. Integrated air samples were obtained from the chamber during each exposure using an all-glass impinger (AGI). Aerosol spore concentrations were determined from the AGIs by serial dilution and plating on SBA, as described above. The inhaled dose (CFU/mouse) of B. anthracis was estimated using Guyton's formula [12].
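For context, Guyton's formula estimates respiratory minute volume allometrically from body weight; below is a minimal sketch of the dose estimate under the commonly used form of that formula (2.10 × W^0.75 ml/min, with W in grams). The example values are hypothetical, not taken from this study.

```python
def guyton_minute_volume_ml(body_weight_g: float) -> float:
    """Guyton (1947): respiratory minute volume ~ 2.10 * W^0.75 ml/min (W in g).
    This constant is the commonly used form; treat it as an assumption here."""
    return 2.10 * body_weight_g ** 0.75

def inhaled_dose_cfu(aerosol_cfu_per_l: float, body_weight_g: float, minutes: float) -> float:
    """Inhaled dose = aerosol concentration x minute volume x exposure time."""
    return aerosol_cfu_per_l * (guyton_minute_volume_ml(body_weight_g) / 1000.0) * minutes

# Example with hypothetical values: 20 g mouse, 10 min exposure, 1e5 CFU/L air
print(f"{inhaled_dose_cfu(1e5, 20.0, 10.0):.2e}")  # ~2e4 CFU
```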

Results

Survival results for the Ames spore oral challenge are shown in Figure 1. No deaths occurred in mice receiving oral doses of 10^5 CFU or below, and all animals remained active without clinical signs of infection. Notably, the oral dose required to produce clinical signs of illness or death was two orders of magnitude above the aerosol LD50 of 3.4 × 10^4 CFU for whole-body exposure [1]. Animals challenged with spores of the Sterne strain were unaffected, with only a single death observed at the highest dose of 10^10 CFU; the remaining animals appeared healthy and active throughout the post-challenge period. The lack of protection, as measured by survival (Figure 2), when mice previously given oral challenge with either Ames or Sterne received a lethal aerosol dose of Ames spores indicates that orally delivered spores convey no long-term immunity. In addition, there was no shift in the calculated mean time to death of 48 hrs between any of these groups and the control group, further evidence of a lack of protection by orally delivered spores.


Figure 1: Female Balb/c mice (6-8 weeks old) in groups of 10 animals were challenged with oral doses of spores prepared from the B. anthracis Ames strain. Challenge amounts ranged from 10 to 10^10 spores per mouse in 0.1 ml. Animals were observed and deaths recorded. The 10-10^5 challenge doses resulted in no deaths. A similar experiment was performed with spores of the Sterne strain, resulting in only a single death at the 10^10 CFU challenge dose (data not shown)


Figure 2: Surviving animals from the oral-LD50 studies were challenged two months after initial oral challenge with multiple LD50s of aerosolized B. anthracis Ames spores

Discussion

The oral LD50 for the Ames strain, at 10^8 CFU, is well above any theoretical ingestion possibility in the aerosol model. Even if one were to assume that an entire aerosol dose were deposited only on the fur of all the caged mice and that one animal groomed itself and all nine cage mates, the maximum theoretical oral dose possible would be 10^5 CFU, which would still be well below the LD50. Considering a realistic distribution of spores during an aerosol challenge experiment, the maximum potential ingested dose would be one or more orders of magnitude below this predicted 10^5 CFU limit. In addition, the observation of only a single death, and only at the 10^10 CFU challenge dose for the Sterne strain, again indicates the importance of the capsule for virulence in any murine challenge model. These results are also consistent with previously described gastrointestinal models, which used the Sterne-susceptible mouse strains A/J [13] or DBA/2 [14] and required >10^7 CFU/mouse in combination with antacid to achieve an LD50. The data therefore indicate that potential ingestion of anthrax spores following whole-body aerosol challenge does not affect the currently understood inhalational disease progression observed in the Balb/c mouse [1]. Additional evidence is the lack of any pathology associated with the digestive tract following aerosol challenge [1,15] (D. Fritz, personal communication). The lack of any increase in mean time to death would also seem to reduce the possibility that orally ingested spores would affect therapeutic results and their interpretation. These results do not rule out short-term stimulation of an innate immune response after aerosol challenge resulting from ingestion of spores; however, based on the results of this study, the oral dose would be so low that it seems unlikely to invoke a meaningful immunologic response.

Conclusion

In conclusion, potential oral ingestion of anthrax spores after whole-body aerosol challenge is highly unlikely to have any effect on mortality, disease progression, immunity or therapeutic outcomes.

Funding

This research was funded by a Joint Science and Technology Office – Defense Threat Reduction Agency – Chemical Biological Defense grant: CB3848 (CBCALL12-THRFDA1-2-0209) PPE3.

References

  1. Heine HS, Bassett J, Miller L, Hartings JM, Ivins BE, et al. (2007) Determination of Antibiotic Efficacy against Bacillus anthracis In a Mouse Aerosol Challenge Model. Antimicrob Agents Chemother 51: 1373-1379. [crossref]
  2. Heine HS, Bassett J, Miller L, Purcell BK, Byrne WR (2010) Efficacy of Daptomycin Against Bacillus anthracis in a Murine Model of Anthrax-Spore Inhalation. Antimicrob Agents Chemother 54: 4471-4473. [crossref]
  3. Heine HS, Purcell BK, Bassett J, Miller L, Goldstein BP (2010) Activity of Dalbavacin against Bacillus anthracis In Vitro and in a mouse Inhalation Anthrax Model. Antimicrob Agents Chemother 54: 991-996. [crossref]
  4. Gill SC, Rubino CM, Bassett J, Miller L, Ambrose PG, et al. (2010) Pharmacokinetic-Pharmacodynamic Assessment of Faropenem in a Lethal Murine Bacillus anthracis Inhalation Postexposure Prophylaxis Model Antimicrob Agents Chemother 54: 1678-1683. [crossref]
  5. Heine HS, Bassett J, Miller L, Bassett A, Ivins BE, et al. (2008) Efficacy Oritavancin in a Murine Model of Bacillus anthracis Spore Inhalation Anthrax. Antimicrob Agents Chemother 52: 3350-3357. [crossref]
  6. Heine HS, Shadomy SV, Boyer AE, Chuvala L, Riggins R, et al. (2017) Evaluation of combination drug therapy for treatment of antibiotic-resistant inhalational anthrax in a murine model. Antimicrob Agents Chemother 61: e00788-17. [crossref]
  7. Kathania M, Zadeh M, Lightfoot YL, Roman RM, Sahay B, et al. (2013) Colonic Immune Stimulation by Targeted Oral Vaccine. Plos One 8: e55143. [crossref]
  8. Glomski IJ, Pris-Gimenez A, Huerre M, Mock M, Goossens PL (2007) Primary involvement of pharynx and peyer’s patch in inhalational and intestinal anthrax. Plos Path 3: e76. [crossref]
  9. Leighton TJ, Doi RH (1971) The stability of messenger ribonucleic acid during sporulation in Bacillus subtilis. J Biol Chem 246: 3189-3195. [crossref]
  10. May KR (1973) The Collison nebulizer description, performance and applications. J. Aerosol Sci 4:235-243.
  11. Hartings JM, Roy CJ (2004) The automated bioaerosol exposure system: preclinical platform development and a respiratory dosimetry application with nonhuman primates. J Pharm and Toxicol Meth 49:39-55.
  12. Guyton AC (1947) Measurement of the respiratory volumes of laboratory animals. Am J Physiol 150:70-77. [crossref]
  13. Xie T, Sun C, Uslu K, Auth RD, Fang H, et al. (2013) A new murine model for gastrointestinal anthrax infection. PLOS one 8: e66943.
  14. Tonry JH, Popov SG, Narayanan A, Kashanchi F, Hakami RM, et al. (2013). In vivo murine and in vitro M-like cell models of gastrointestinal anthrax. Microbes and Infection 15:37-44. [crossref]
  15. Lyons CR, Lovchik J, Hutt J, Lipscomb MF, Wang E, et al. (2004) Murine model of pulmonary anthrax: kinetics of dissemination, histopathology, and mouse strain susceptibility. Infect. Immun 72:4801-4809. [crossref]

Future in Physiological Pacing? State of the Art

DOI: 10.31038/IMROJ.2022711

 

I have recently read some papers about sophisticated methods for cardiac pacing [1]. Here are some comments and current perspectives.

Physiological pacing as a new paradigm has been the subject of papers by many authors over the years. I have personally witnessed discussions within the global electrophysiology community about the best pacing site for physiological pacing and the future of cardiac resynchronization therapy [2].

It is well known that different physiological pacing modalities are in use: selective His bundle pacing (S-HBP), non-selective His bundle pacing or NS-HBP (which we prefer to call para-Hisian pacing), and, more recently, left bundle branch pacing (LBBP).

The reasons for choosing among pacing techniques and electrode sites continue to evolve, long-term safety among them. In my opinion, S-HBP is neither the safest nor the most effective option in patients with conduction disturbances. The technique is losing favor, which explains the wider use of LBBP as a way to avoid the known difficulties of S-HBP. However, there is not yet enough experience with LBBP.

The third option is NS-HBP, or para-Hisian pacing. There is still some resistance to using it because of the lack of a specific reference for the optimal pacing site. In our group we use so-called Synchromax mapping for para-Hisian pacing, a simple and effective technique for achieving the best lead location [3].

The paper commented on here [1] uses three-dimensional mapping for optimal lead placement; others locate the His bundle using a recording from the catheter. The very existence of so many techniques shows that none is generally accepted.

In our South American region we use the above-mentioned system, based on the ECG without the need for special tools, sheaths, or navigators; this matters here given the limited resources of the healthcare system. This noninvasive method also allows an important reduction in implant time, which in turn improves safety by reducing infection risk.

My purpose is to acknowledge the initiative shown in this paper and to contribute with new tools.

Time will tell which the best pacing technique is, but in general I believe the future will be para-Hisian pacing aided by the most convenient mapping method, suitable for all patients, including those with conduction disturbances or heart failure.

References

  1. Bastian D, Gregorio C, Buia V (2022) His bundle pacing guided by automated intrinsic morphology matching is feasible in patients with narrow QRS complexes. Sci Rep 12: 3606.
  2. Ortega DF (2019) Is Traditional Resynchronization Therapy Obsolete? Is Para-Hisian Pacing the New Paradigm? Editorial. Rev Electro y Arritmias 11: 38-40.
  3. Ortega D, Logarzo E, Paolucci A, Mangani N, Bonomini MP (2020) Novel implant technique for septal pacing. A noninvasive approach to nonselective His bundle pacing. Journal of Electrocardiology 63: 35-40.

Advancing Ubiquitous Collaboration for Telehealth – A Framework to Evaluate Technology-mediated Collaborative Workflow for Telehealth, Hypertension Exam Workflow Study

DOI: 10.31038/JPPR.2022513

Introduction

Healthcare systems are under siege globally regarding technology adoption; the recent pandemic has only magnified the issues. Providers and patients alike look to new enabling technologies to establish real-time connectivity and capability for a growing range of remote telehealth solutions. The migration to new technology is not as seamless as clinicians and patients would like, since new workflows pose new responsibilities and barriers to adoption across the telehealth ecosystem. Technology-mediated workflows (integrated software and personal medical devices) are increasingly important in patient-centered healthcare; software-intense systems will become integral to prescribed treatment plans [1]. My research explored the path to ubiquitous adoption of technology-mediated workflows, from historic roots in the CSCW domain to an expanded method for evaluating collaborative workflows. This new approach to workflow evaluation, the Collaborative Space – Analysis Framework (CS-AF), was then deployed in a telehealth empirical study of a hypertension exam workflow to evaluate the gains and gaps associated with technology-mediated workflow enhancements. My findings indicate that technology alone is not the solution; rather, an integrated approach is needed that establishes "relative advantage" for patients in their personal healthcare plans. The results support wider use of the CS-AF for future technology-mediated workflow evaluations in telehealth and other technology-rich domains.

Need for a Collaborative Evaluation Framework

The adoption of new technology has permeated every aspect of our personal and professional lives with the promise of performing work processes more efficiently and with greater capability. In 1984, the term "computer-supported cooperative work" (CSCW) was coined by Grudin [2:19] to focus on the "understanding of the way people work in groups with enabling technologies," i.e., technology-mediated workflows. My research built on the core CSCW mission with an updated context that includes the seamless integration of three key elements (infrastructure, interaction [i.e., collaboration], and informatics) into a system aimed at improved efficiency and expanded capability. New technologies impact the way we function in our daily lives, both personally as consumers and professionally as knowledge workers. The integration of new technology into collaborative workflows introduces many variables of great concern to companies, organizations, and individuals (e.g., costs of development, switching costs associated with migrating from the current workflow to a new technology-mediated workflow, and details of how the new workflow functions compared to the current workflow). What processes should be avoided? What should be retained? What should be revised? How is user behavior associated with adoption of the new technology? Organizations have a difficult time determining the scope of new technology initiatives, including how the capability and complexity of new technology will provide measurable benefit (i.e., relative advantage), quantified or qualified in some way, compared to the existing workflow (Figure 1).


Figure 1: Cross-disciplinary domains incorporated into the CS-AF [3]

A need is apparent for a cross-disciplinary, generalizable approach to evaluating a collaborative technology-mediated workflow that focuses on a specific task within a specific workflow – a model that compares the current approach with the enhanced approach resulting from the new technology. My research incorporated collaborative evaluation metrics from the Computer Science/Human-Computer Interaction (CSCW/HCI), Behavioral Sciences, Organizational Management, and Industrial Engineering (IE) domains to formulate an evaluation model and methodology (the Collaborative Space – Analysis Framework, CS-AF) and tested this framework in a comprehensive empirical study of a hypertension exam workflow.

Collaborative Workflow Evaluation – Related Works

CSCW strives to incorporate a wide terrain of interdisciplinary interests, thus establishing a single generalizable model to evaluate “collaborative activities and their coordination” [4] has been difficult. Historically, CSCW tends to focus on qualitative research guided by frameworks with varying degrees of flexibility. Neal et al. suggest that there are three types of CSCW frameworks that emerge from CSCW research: methodology-oriented, conceptual, and concept-oriented. Each CSCW framework type has a valuable focus, but no single framework addresses the full range of CSCW needs [5,6]. To this day, CSCW and HCI continue with heightened interest to understand the obstacles and opportunities associated with integrating technology-mediated enhancements into existing workflows in order to promote a better collaborative experience [1]. Two important perspectives emerge: the evaluation and measurement of the impact that technology-mediated enhancements have on humans, both individually and collaboratively, and the impact that new technology has on the organization, which ultimately equates to a financial impact. The primary contributions of Weiser, one of the original authors of “ubiquitous computing,” is the promotion for ethnomethodologically-oriented ethnography, which “ … reveal[s] that it is not the setting of action that is the important element in design, but uncovering what people do in the setting and how they organize what they do” [7:399]. Goulden et al. posit the importance of ethnographic research in computer science [8]. Conducting ethnographic work practices research with a scientific methodology to observe the user of a workflow in the natural state, while incorporating the principles of reflexivity, was a complementary element of my research. This important contribution from the social sciences domain fortifies the methodology and goals of this research towards a generalizable model to observe and to analyze collaborative workflows in multiple domains [9]. The integration of reflexivity into ethnographic practice enables a closed-loop process for semi-structured field engagement, based on theoretical process that iteratively informs the next field engagement [10]. Peneff suggests that ethnographic researchers need to cope with the ad hoc nature of field settings by “formalizing tasks in a manner naturalistic enough that the human participant might engage as if it was a conversation with a trusted acquaintance” [11:520]. Computing systems from their inception purport a value proposition of efficiency, expanded capability, and collaborative integration for the benefit of both humans and the organization. Carroll defines the mission of HCI as “… understanding and creating software and other technology that people will want to use, will be able to use, and will find effective when used…We (CSCW) will most likely need to develop new concepts to help us understand collaboration in complex organizations” [12:514]. Weiseth et al. posit that organizations must “take action and make it possible for people to collaborate in effective ways” [13:242]. The researchers suggest that organizations must provide collaborative support in the form of organizational measures (collaborative best practices), services (collaborative process), and tools (collaborative methods) to enable technology-mediated workflow enhancements. Weiseth et al. 
introduced the Wheel of Collaboration Tools as a typology of collaborative functions in an effort to illuminate the important connection between the subtle day-to-day collaborative activities of workers and the integration of the “system” (infrastructure, content [information/informatics], and human interface) for collaborative gain [13]. Neale, Carroll, and Rosson introduced the “Activity Awareness Model” and identified three historic issues associated with evaluating collaborative workflows: the logistics of remote locations, the complex number of variables, and the need to validate the re-engineered, future-state workflow [5]. “Few methods have been developed with creating engineering solutions in mind. It is possible, but researchers must be continually cognizant about how data collection and analysis methods will translate into design solutions” [5:114]. The re-engineered workflow needs to be examined in its natural setting in order to understand the collaborative impact of the technology-mediated enhancements; this examination is the “central priority in CSCW evaluation.” In order to accomplish the goals of ubiquitous computing and deliver collaborative human-computer interactive systems, a comparative evaluation of the incremental improvements made through each technology-mediated transformation is important [14]. Kellogg et al. posit that success in HCI comes from “immersive understanding of the ever-evolving tasks and artifacts” [15:84]. Millen states that “understanding the context of the user environment and interaction is increasingly recognized as a key to new product/service innovation and good product design” [16:285]. The need, then, is for a generalizable approach to evaluate a collaborative technology-mediated workflow that focuses on a specific task to be done in a specific workflow – a model that compares the current approach with the enhanced approach that results from the new technology. Arias et al. suggest that a shift in focus to the intended use or intended work, rather than the computing system, is necessary [17]. Baeza-Yates posits that future work should focus on the research method, the data collection, the data analysis, and the domain of study [18]. Plowman, Rogers, and Ramage add that designers might attend to the “work” of the setting, as well as the interactional methods or practices of the members as the work is being performed. The “job of work” in the “work of a setting” comprises the actions and interactions that inhabit and animate the work setting [19,20]. CSCW and HCI involve the integration of many unique disciplines; therefore, accurately framing the environment and conditions associated with the targeted cooperative work is necessary for a precise evaluation [16,21]. CSCW and HCI conceptual models help researchers formulate a framework to describe a particular context in focus [22]. Neale et al. posit activity awareness as an overarching concept to describe a comprehensive view of collaboration from the activity perspective [5,6]. Their research attempts to identify the relationships between important collaboration variables; contextual factors are foundational, and work coupling is assessed from loosely to tightly coupled, depending on the distributed nature of the work. 
The research posits that the more tightly coupled the work, the more cooperative and collaborative it needs to be in order to be effective, and is intended as a “step in the direction of better approaches for evaluation of collaborative technologies” [5,6]. The Model of Coordinated Action (MoCA) is another conceptual model developed for framing the context of complex collaborative situations [23]. A model is needed beyond the focus on work or technology, one that includes the rapidly increasing diversity of socio-technical configurations. The MoCA ties together the significant contextual dimensions that have been covered in the CSCW and HCI literature into one integrated contextual model, and it provides a way to tie up many loose threads. It provides “conceptual parity to dimensions of coordinated action that are particularly salient for mapping profoundly socially dispersed and frequently changing coordinated actions” [23:184]. Lee and Paine suggest that this model provides a “common reference” for defining contextual settings, “similar to GPS coordinates” [23:191].

The primary appeal of Davis’s Technology Acceptance Model (TAM), and the reason for its wide-scale use, is its parsimonious focus on two primary vectors used to evaluate adoption: Ease-of-Use (EU) and Perceived Usefulness (PU) [24]. At the most basic level, humans look for two resonating value propositions from new technology: an easier and more efficient way to perform an existing task, and/or opportunities for new features previously unavailable to them [24]. Davis et al. state that the “goal of the TAM is to be capable of explaining user behavior across a broad range of end-user computing technologies and user populations, while at the same time being both parsimonious and theoretically justified” [24:985]. The TAM is easy to understand and deploy, and it has been adapted by other researchers to include additional attributes that deliver complementary determinants [24]. The first modified version of the TAM was proposed in 2000, also by Davis and Venkatesh, to address two primary areas: (1) to introduce new determinants that uncover social influences and “cognitive instrumental processes,” and (2) to provide a view at specific time intervals that were meaningful to users associated with determining technology acceptance [25:187]. The notion of conducting a time view at key intervals of adoption has been a particular interest of mine. In TAM 2, Davis and Venkatesh evaluate three time intervals (pre-implementation, one-month post-implementation, and three-month post-implementation); this approach provides a valid snapshot, yet it does not go far enough to establish a detailed quantitative baseline measure that can be easily compared, in a complementary sense, with the qualitative survey questions. It is my belief that there is an opportunity to improve the TAM with a more rigorous time-interval evaluation using the Industrial Engineering (IE) technique of Value Stream Mapping (VSM). VSM, combined with the TAM and other components, will address limitations expressed with the TAM approach and introduce a much-needed task orientation to the evaluation. Specifically, this research integrated the VSM approach used in Industrial Engineering to complement the evaluation breadth of the TAM. VSM incorporates quantitative time-series data into the analysis of the workflow at the task level, which fortifies weaknesses identified with the TAM and other less rigorous approaches. The TAM can also be extended to include the USE questionnaire developed by Lund (2001) [26] to uncover the relationship among Ease-of-Use, Perceived Usefulness, Satisfaction, and Ease of Learning. The USE questionnaire is used to gauge the user’s confidence in the system. The results of the USE analysis are represented in a four-quadrant radar chart, where the percentage of positive reactions is reported relative to the maximum possible positive feedback from the user experience. When the USE questionnaire is combined with traditional TAM questions and other evaluation metrics, such as the Net Promoter Score™ [27], a more comprehensive view of each user’s perspective toward the new technology can be identified and analyzed.

Health Information Technology (HIT) Related Works

The HIT domain, like many other collaborative workflow domains, is charged with the complex task of vetting the emerging needs of users (i.e., patients and practitioners) and of assessing opportunities for new technologies that might be integrated to deliver better efficiency, new capability, or both. The patient-centered healthcare approach assumes expanded participation and collaboration by doctors and patients, yet it is riddled with gaps in the processes, technology, and human-computer interaction (HCI) necessary for optimum workflow. Technology adoption opportunities in this space are complicated by the collision of consumer electronics technology with HIT. Wide-scale adoption of micro-health devices and Web surfing for health and wellness information are mainstream consumer-patient activities. Simultaneously, hospitals and practitioners strive for improved connectivity through patient portals enabled through Electronic Health Records (EHR), integration of high-tech equipment, and mining of big data as means to advance services while making them more patient-centered. The HIT domain is a complex domain with tremendous needs for constant evaluation and advancement with new technology. Patients actively seek more information on medical conditions, lifestyle information, treatment protocols, and natural versus prescription options. Websites such as WebMD provide rich content that patients actively seek in an effort to reconcile various healthcare information options. Pew Research found that “53% of internet users 18-29 years old, and 71% of users 50-64 years old have gone online for health information” [28]. Further integration complexity is introduced for patients by the growing number of personalized microsensor devices available. Real-time patient data from non-clinical sources, such as microdevices, has the potential to enhance patient-centered care, yet clinicians are not inclined to reference that data, since there is no standardization of the data or the interface. Estrin states that we need to capture and record our small data: “Systems capture data reported by clinicians and about clinical treatment (EHR), not patients’ day-to-day activities” [29:33]. The microdata from daily activities can be leveraged with other data to provide a 360-degree patient view. Winbladh et al. state that “patient-centered healthcare puts responsibility for important aspects of self-care and monitoring in patients’ hands, along with the tools and support they need to carry out that responsibility” [1:1]. Patients armed with rich content pose a unique collaborative problem for practitioners, who must now deal with the reconciliation of non-doctor-vetted content with patients. Research conducted by Dr. Helft at Indiana University found that “when a patient brings online health information to an appointment, the doctor spends about 10 extra minutes discussing it with them” [30]. Neel Chokshi, MD, Director of the Sports Cardiology and Fitness Program at Penn Medicine, observed: “we haven’t really told doctors how to use this information. Doctors weren’t trained on this in medical school” [31,32:2]. Collaboration is the fulcrum point for enabling optimized workflow in HIT systems. A complete understanding of collaboration is essential in order to refine the aspects of the workflow that affect a streamlined process. Weir et al. 
provide a functional definition of collaboration as “the planned or spontaneous engagements that takes place between individuals or among teams of individuals, whether in-person or mediated by technology, where information is exchanged in some way (explicitly, i.e., verbally/written; or implicitly, i.e., through shared understanding of gestures, emotions, etc.), and often occur across different roles (i.e., physician and nurse) to deliver patient care” [33:64]. Skeels and Tan found that more collaborative communications across the “care setting” can provide a large impact on the quality of services for patients [34]. Successful integration of personalized health data with other meaningful data sources is an important HCI requirement for end-to-end HIT solutions. Eikey et al.’s systematic review of the role of collaboration in HIT over the past 25 years comprised a list of 943 articles with HIT collaboration references; the compilation was refined to 224 articles that were reviewed, analyzed, and categorized [35]. Their study summarizes a composite view of the key elements that affect collaboration in HIT with their Collaborative Space Model (CSM) (Figure 2).


Figure 2: Eikey et al.’s HIT Collaborative Space Model [35]

The CSM illustrates a foundational view, summarized by the researchers as a starting place for future investigation into the critical dynamics of collaboration in HIT. Although the CSM is a useful reference model for categorizing the various aspects of collaboration based on a systematic HIT literature review, the model was not field tested and does not cover attitude and behavior perspectives. Eikey et al. suggest that future research should “focus on the expanded context of collaboration to include patients and clinicians, and collaborative features required for HIT systems” [35:274]. This research builds on the observations of Eikey and others in the HIT domain with the introduction of a cross-disciplinary evaluation framework (CS-AF) and field engagement methodology. Prior to conducting this hypertension exam workflow study, a complete pilot study was conducted in the graphic arts domain to test the CS-AF approach [36]. Increased focus and demand in telehealth has heightened the need for continuous monitoring and improvement of doctor-patient collaborative workflows in telehealth. Piwek et al. posit that “moving forward, practitioners and researchers should try to work together and open a constructive dialogue on how to approach and accommodate these technological advances in a way that ensures wearable technology can become a valuable asset for health care in the 21st century” [37]. In their research on consumers’ adoption of wearable technology, Kalantari et al. suggest that future research should test “demonstrability” (i.e., whether the outcome of using the device can be observed and communicated), mobility, and the experience of flow and immersion when using these devices [38]. The objective of this research was to utilize the CS-AF and its methodology to evaluate the doctor-patient collaborative workflow for hypertension, using a blood pressure device and a smartphone app common to doctors and, most importantly, incorporating doctors and their patients in this empirical study. This research and empirical study included the documentation and analysis of the current hypertension workflow for a set of patients and two medical doctors using the CS-AF, the development and integration of a technology-mediated workflow introduced to the same set of users, and the analysis of both the current and technology-mediated workflows using the CS-AF.

Current-state Workflow: Hypertension (Blood Pressure) Exam

The current, or baseline, hypertension (i.e., blood pressure) exam workflow incorporates a clinician and outpatients needing their blood pressure (BP) measured. One dilemma associated with hypertension treatment is obtaining timely and accurate patient BP readings. The current workflow requires patients to visit their doctor’s office for a BP reading. This current-state workflow process is time-consuming and riddled with issues affecting the accuracy of readings (time-of-day fluctuations, “white-coat hypertension”, food consumption, or hours of sleep) [39]. From a doctor’s perspective, there is currently no way to view and analyze patient-introduced microdevice BP data in the context of their standard practice and workflow. Their only way of collecting patient BP data is an office visit, a time-consuming and prohibitive practice when hypertension patients require close monitoring on a more frequent basis. The American Heart Association’s protocol is: take two BP readings first thing in the morning (before food or medication), one minute apart, then averaged, followed by two readings at the end of the day (before bed), one minute apart, then averaged; the a.m. and p.m. averages are then averaged for the daily BP reading [40,41]. This protocol would be impossible to follow in an in-office setting. Patient-collected BP data, while extremely valuable (i.e., timely and accurate) when compared to in-office BP data, is not well integrated within the doctors’ standard workflow, nor does it provide real-time visibility or opportunities for doctors to collaborate with patients. This research included an empirical study of 50 hypertension patients, assigned as “matched pairs” by gender and age band. The matched pairs were evaluated on the current-state BP exam workflow for hypertension, then introduced to one of two alternative workflows: “technology-mediated” or “manual” (control group). A second evaluation to determine the gains and gaps between the pre- and post-hypertension exam workflows was also conducted. This research introduced the Collaborative Space – Analysis Framework (CS-AF) and methodology as a means to measure and evaluate the alternative workflows (technology-mediated and manual), compared with a baseline workflow, through a cross-disciplinary set of evaluation metrics. The technology-mediated workflow designed for this study attempts to address the problems identified in the current-state workflow with the development of a custom-designed Apple/Android smartphone app (Wise&Well) integrated with the Omron BP monitor to facilitate a remote, asynchronous hypertension exam telehealth workflow.
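
As a concrete illustration of this daily-averaging arithmetic, the minimal Python sketch below computes a daily BP value from two a.m. and two p.m. (systolic, diastolic) readings; the function names and sample values are illustrative only, not part of the study software:

    def average(readings):
        # Element-wise mean of (systolic, diastolic) pairs.
        sys = sum(r[0] for r in readings) / len(readings)
        dia = sum(r[1] for r in readings) / len(readings)
        return (sys, dia)

    def daily_bp(am_readings, pm_readings):
        # Average the two a.m. readings and the two p.m. readings,
        # then average those two results for the daily value.
        return average([average(am_readings), average(pm_readings)])

    # Two morning and two evening readings, one minute apart:
    print(daily_bp([(132, 84), (128, 82)], [(138, 88), (134, 86)]))  # (133.0, 85.0)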

Collaborative Space – Analysis Framework (CS-AF) Model and Methodology

Collaborative Space – Analysis Framework

The CS-AF methodology is utilized onsite where work gets done.

It comprises a carefully integrated set of cross-disciplinary components, purposefully selected to enhance the view that any single approach provides on its own and to integrate the complementary attributes that each of these best-in-class models generates. The CS-AF’s five areas of investigation are Context, Process, Technology, Attitude and Behavior, and Outcomes.

CS-AF: Context Determinants

The Model of Coordinated Action (MoCA) was developed for framing the context of complex collaborative situations [42]. The seven dimensions of the MoCA (Synchronicity, Distribution, Scale, Number of Communities of Practice, Nascence, Planned Permanence, and Turnover) provide researchers, developers, and designers with “a vocabulary and range of concepts that can be used to tease apart the aspects of a coordinated action that make them easy or hard to design for” [42:191]. Using the MoCA as a standard component of the CS-AF fortifies the overall framework with a practical and structured approach to capturing the workflow context.
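
To make this capture step concrete, the hypothetical Python sketch below shows how a CS-AF study might record the seven MoCA determinants for one workflow; the field names follow the MoCA dimensions listed above, and the values are illustrative only:

    from dataclasses import dataclass

    @dataclass
    class MoCAContext:
        synchronicity: str             # synchronous .. asynchronous
        distribution: str              # co-located .. remote
        scale: int                     # number of participants
        communities_of_practice: int   # number of distinct communities
        nascence: str                  # established .. developing
        planned_permanence: str        # long-term .. short-term
        turnover: str                  # low .. high

    # Illustrative encoding of an in-office exam context:
    baseline_exam = MoCAContext("synchronous", "co-located", 3, 2,
                                "established", "long-term", "low")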

CS-AF: Process Determinants

The IE workflow-analysis method of Value Stream Mapping (VSM) has been incorporated into the CS-AF [43-45]. VSM incorporates a hierarchical task-analysis technique to uncover a quantitative view of the workflow from a cycle-time perspective (by task) and qualitative measures of the information quality at each workflow juncture.

For the empirical study conducted for this research, logical workflow steps were defined, and the research engaged users with semi-structured observation and with structured and unstructured questions associated with each step in the workflow and with the overall workflow experience [45-50].
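
For illustration, the sketch below rolls up VSM-style cycle and wait times for the five workflow stages used later in this study; all times are invented for the example, in minutes:

    stages = [
        ("Pre-Visit",    {"cycle": 10, "wait": 2880}),  # scheduling plus days of lag
        ("Registration", {"cycle": 8,  "wait": 20}),
        ("Exam",         {"cycle": 6,  "wait": 10}),
        ("Treatment",    {"cycle": 12, "wait": 5}),
        ("Post-Visit",   {"cycle": 9,  "wait": 0}),
    ]

    cycle_total = sum(t["cycle"] for _, t in stages)              # task time
    lead_total = cycle_total + sum(t["wait"] for _, t in stages)  # elapsed time
    print(f"cycle: {cycle_total} min, lead: {lead_total} min, "
          f"value-added ratio: {cycle_total / lead_total:.1%}")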

CS-AF: Technology Determinants

The Technology Acceptance Model (TAM) introduces two crucial constructs aimed at uncovering user perspectives related to the adoption of technology: does the technology enhance the workflow and deliver a more useful and easier-to-use solution? Davis et al. believed that the two determinants, Perceived Usefulness (PU – enhancement of performance) and Perceived Ease of Use (PEU – freedom from effort), are the essential elements of technology acceptance and, when coupled with a view of the user’s attitude toward using the technology, provide a parsimonious and functional model that can deliver a meaningful evaluation of technology adoption [51]. The survey approach used in empirical studies for the original TAM can be complemented with Lund’s USE questionnaire [52]. When TAM survey questions surrounding PU and PEU are complemented with two other determinants (Satisfaction and Ease-of-Learning), a more comprehensive evaluation of the collaborative experience can be collected, analyzed, and compared. The CS-AF therefore integrates the TAM approach with the USE questionnaire, represented in a 4-facet radar chart that provides the researcher with a visual representation of each facet simultaneously [52].
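
A minimal sketch of how such a 4-facet summary might be computed and plotted is shown below, assuming mean 7-point Likert scores per facet; the scores are hypothetical, and matplotlib’s polar axes stand in for the CS-AF chart:

    import numpy as np
    import matplotlib.pyplot as plt

    facets = ["Usefulness", "Ease of Use", "Ease of Learning", "Satisfaction"]
    scores = [4.2, 5.1, 3.8, 4.6]   # hypothetical mean Likert score per facet

    angles = np.linspace(0, 2 * np.pi, len(facets), endpoint=False).tolist()
    angles += angles[:1]            # close the polygon
    values = scores + scores[:1]

    ax = plt.subplot(polar=True)
    ax.plot(angles, values)
    ax.fill(angles, values, alpha=0.25)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(facets)
    ax.set_ylim(0, 7)               # 7-point Likert scale
    plt.show()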

CS-AF: Attitude & Behavior Determinants

Establishing a baseline view of the workflow from several vantage points, then capturing an updated view of the same workflow using the same metrics for the new technology-mediated improvements, enables a meaningful comparison and respects the research principles suggested by Ajzen et al. [53]. They establish four different elements from which attitudinal and behavioral entities may be evaluated: “the action (work task), the target at which the action is directed, the context in which the action is performed, and the time at which it is performed” [emphasis theirs] [53,54]. These four elements have been incorporated into the CS-AF. The original TAM includes evaluation of the Attitude Toward Using and Behavioral Intent to Use determinants adapted from Ajzen et al. [53,54]. In order to collect an expanded assessment of the user’s perspective toward the workflow, the baseline TAM attitude and behavior constructs are complemented in the CS-AF by additional semi-structured qualitative questions. The CS-AF also incorporates the Net Promoter Score™ (NPS) [55] in an attempt to further understand the Attitude determinant [51]; the NPS measures how likely users are to promote the product to others in their circle of influence.
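
For reference, the sketch below implements the standard NPS calculation, assuming the usual 0-10 “likelihood to recommend” item (promoters score 9-10, detractors 0-6, and NPS = %promoters minus %detractors); the sample ratings are invented:

    def net_promoter_score(ratings):
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100 * (promoters - detractors) / len(ratings)

    # Negative values mark a Detractor state; near zero, Passive.
    print(net_promoter_score([10, 9, 7, 6, 4, 8, 9, 3]))  # 0.0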

CS-AF: Outcomes Determinants

Critics of the TAM believe that it puts too much weight on external variables and behavioral intentions, and not enough on user goals, in the acceptance and adoption of technology [56,57]. The CS-AF therefore incorporates a provision to evaluate user goals, leveraging the CSCW/HCI concepts of awareness and goal setting established in the Activity Awareness Model [56,58]. The five elements of the CS-AF (Context, Process, Technology, Attitude and Behavior, and Outcomes) are integrated with a field survey and statistical evaluation methodology for empirical studies of collaborative workflows (Figure 3).


Figure 3: Collaborative Space – Analysis Framework [3]

CS-AF Field-Engagement Methodology

All information was collected on-site through detailed workflow audits and semi-structured interviews following the CS-AF survey instrument with the participants in the workflow. The research also requires a development and implementation phase whereby the technology-mediated enhancements are integrated into the workflow. Following the transformation of the collaborative workflow, the same participants are re-evaluated using the same CS-AF survey instrument and procedures. When all the data for both the current-state and technology-mediated collaborative workflows are collected, the two workflow scenarios are evaluated and analyzed, and a summary perspective is derived. The CS-AF methodology includes five sequential steps [36] (Figure 4).


Figure 4: Bondy’s CS-AF Field Study Methodology [3]

Field Trial Step 1

Immersive discovery in the target domain: ethnographic analysis of the target workflow, including contextual inquiry, work-task analysis, and use-case modeling, was conducted to determine the specific workflow steps and existing user requirements. From this immersive discovery, the CS-AF survey instrument was adjusted to represent the specific steps of the targeted workflow. The hypertension exam workflow included five workflow steps (Pre-Visit, Registration, Exam, Treatment, and Post-Visit).

Field Trial Step 2

Baseline evaluation (all 50 test participants) using the CS-AF survey instrument for the current-state in-office BP exam workflow.

Field Trial Step 3

Participants were randomly assigned to two groups incorporating the alternate workflows to be evaluated.

Group 1: Manual BP exam workflow (control group)

Group 2: Technology-mediated BP exam workflow

Field Trial Step 4

All test participants (both Group 1 and Group 2) conducted a second CS-AF evaluation survey using the same CS-AF survey instrument as was used for the baseline.

Field Trial Step 5

Systematic analysis of the survey data recorded from the two surveys, including between-group and within-group comparisons across each of the determinants.

CS-AF Statistical Analysis Methodology

The CS-AF survey instrument is an integrated set of qualitative statements ranked by participants on a 7-point Likert scale (from 1 – Extremely Easy through 7 – Extremely Difficult) for the five major areas of investigation (Context, Process, Technology, Attitudes & Behaviors, and Outcomes). The survey instrument incorporates single-response statements such as “How easy-to-use is the technology that is incorporated in each step of the ‘at home’ manual BP exam workflow to you?” For this research, after validation of a normal distribution, a parametric repeated-measures ANOVA (rANOVA) was run across the five workflow stages for each group. When the rANOVA within- and between-groups analysis generated significant p-values (<0.05), a subsequent two-sample matched-pairs t-test was used to analyze whether there was statistical evidence that the mean difference between paired observations on a particular outcome differed significantly from zero, for specific group-to-group analysis at the determinant (dependent-variable) level.
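
A hedged sketch of this pipeline on synthetic data is shown below; the subjects, stages, and scores are simulated for illustration, not study data:

    import numpy as np
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(7)
    stages = ["PreVisit", "Register", "Exam", "Treat", "PostVisit"]
    data = pd.DataFrame([
        {"subject": s, "stage": st, "score": rng.normal(4 + i * 0.3, 1)}
        for s in range(25) for i, st in enumerate(stages)
    ])

    # Repeated-measures ANOVA: does the mean score differ across stages?
    print(AnovaRM(data, depvar="score", subject="subject", within=["stage"]).fit())

    # If significant, follow up with a matched-pairs t-test between two stages.
    pre = data.loc[data.stage == "PreVisit", "score"].to_numpy()
    post = data.loc[data.stage == "PostVisit", "score"].to_numpy()
    t, p = stats.ttest_rel(pre, post)
    print(f"paired t = {t:.2f}, p = {p:.4f}")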

CS-AF Statistical Basis and Analysis Procedure

The CS-AF survey data was collected for both the pre- and post-workflow trials for Group 1 and Group 2, and the following analysis (as shown in Figure 4 and described in more detail in Section 3.1.1) was conducted using the CS-AF survey data (Figure 5).


Figure 5: CS-AF statistical analysis process [3]

Empirical Study: Pre-Post-Hypertension Exam Workflow

The baseline (current-state) workflow analysis of 50 hypertension test participants (selected by age/gender) was conducted using the CS-AF survey instrument, followed by random assignment of one participant from each pair to the manual workflow (control group) and the other to the technology-mediated workflow. The field engagement was completed via a second survey of all participants, enabling a thorough evaluation, comparison, and analysis of the current-state workflow against the alternative workflows using the CS-AF survey instrument (baseline workflow vs. the manual and technology-mediated workflows).

CS-AF Field Methodology (Survey Instrument and Test Protocol)

The CS-AF survey instrument incorporated 104 (7-point) Likert-scale questions, 20 quantitative time-series questions, and 15 subjective questions across the five components of the CS-AF. The CS-AF survey questions are revised for any empirical study to reflect the unique steps in the workflow; the exact same survey is used for the pre-/post-surveys. All participants were trained on the survey and associated workflow technology via remote video sessions for each group, and responded to the CS-AF surveys via an online digital survey platform.

The target sample size was 50 participants – 25 matched pairs, matched on gender and 1 of 6 age bands. Of the 80 participants who were recruited, 50 were selected; all 50 participants completed the study. The hypertension exam workflow study included a baseline evaluation and survey of the current in-doctor’s-office blood pressure (BP) exam by all 50 test participants. Participants were randomly divided into two groups based on their specific matched pairs (described above). The participants in the manual workflow group (Group 1 – control group) were assigned a wrist-cuff blood pressure device. Those in the technology group (Group 2) were assigned a Bluetooth wireless bicep-cuff blood pressure device and a blood pressure app (iOS/Android) developed specifically for this study. The clinician team involved in the study participated with patients directly during the baseline BP exam workflow, remotely through the app (BP alerts and doctor push messages) for the technology-mediated workflow, and with limited interaction for the manual wrist-cuff workflow. All test participants attended a training session on the specific test protocol and operational use of the systems they were provided. All 50 test participants conducted twice-daily BP readings per the American Heart Association’s BP reading protocol [41]: two in the a.m. (1 minute apart) and two in the p.m. (1 minute apart). All BP data was averaged for each day based on those four BP readings. Participants from Groups 1 and 2 completed a second CS-AF survey (identical to the first) following a three-week trial period. The CS-AF survey data was analyzed within groups and between groups. The hypertension exam workflow survey dataset comprised 10,400 Likert-scale responses, the time-series data, and 1,500 subjective responses.

Sample Size and Participants

The sample size for the two-sample matched-pairs t-test was estimated from the following parameters, resulting in a sample size of approximately 25 pairs (a sketch of the computation follows the list).

  • Type I error rate alpha = 0.05 (the default value in most studies)
  • The minimum power of the test to be achieved (70%)
  • Effect size (here, for example, 0.5; a pilot study can be used to estimate the effect size)
  • Standard deviation of the change in the outcome (for example, 1; a pilot study can be used to estimate this parameter).
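
A minimal sketch of this computation, assuming statsmodels’ power solver for a paired t-test (a one-sample test on the within-pair differences), is:

    from statsmodels.stats.power import TTestPower

    # Effect size 0.5 = mean change / SD of change, per the parameters above.
    n_pairs = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.70)
    print(n_pairs)  # roughly 26-27 pairs, in line with the ~25 targeted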

To conduct a matched-pairs t-test based on age and gender, 25 pairs of male and female patients were needed. A minimum of four male and four female hypertension patients from each of the six age bands were selected for this study, yielding a minimum of 25 pairs, or 50 patient-participants. Within each pair, subjects were randomly assigned to the two groups (Group 1: manual workflow; Group 2: technology-mediated workflow). Based on the data, a paired test could be performed to compare response values between the baseline workflow and each group’s respective alternative workflow (manual vs. technology-mediated). The hypothesis examined the difference in observation means between the two groups. If the assumption of a normal distribution of the differences was unjustified, a non-parametric paired two-sample test (the Wilcoxon matched-pairs signed-ranks test) would be performed [59-64]. Following the initial data collection for the current-state BP exam workflow using the CS-AF survey instrument, and training on the manual or technology-mediated workflows, respectively, test participants conducted twice-daily readings (two per interval) for a three-week period following a consistent BP measurement procedure. The three-week test period was chosen to adequately accommodate a complete technology adoption cycle (introduction, highly motivated use, through acceptance, and tailing-off of use) [65,66].
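
A sketch of that fallback logic on synthetic paired data might look as follows; the simulated distributions and the 0.05 normality cutoff are illustrative:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    baseline = rng.normal(4.0, 1.0, 25)               # 25 pairs of Likert means
    alternative = baseline + rng.normal(0.4, 0.8, 25)
    diff = alternative - baseline

    _, p_norm = stats.shapiro(diff)                   # normality of the differences
    if p_norm > 0.05:
        stat, p = stats.ttest_rel(alternative, baseline)
        test = "paired t-test"
    else:
        stat, p = stats.wilcoxon(alternative, baseline)
        test = "Wilcoxon signed-ranks"
    print(f"{test}: statistic = {stat:.2f}, p = {p:.4f}")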

Baseline – Current-state Hypertension (BP) Exam Workflow

For the current-state (in-office) hypertension workflow, the preliminary field work involved shadowing the workflow and recording its specific sequential steps as a silent observer. Care was taken in this preliminary analysis to observe the natural setting and hypertension reading process in an unobtrusive manner, with no interactions with the administrative staff, patient, or clinician. The discrete workflow steps identified for the hypertension exam workflow were defined as a result of this initial field analysis and were reviewed for completeness with the doctors participating in this study.

This current-state hypertension exam workflow process established for this empirical study followed these steps:

  • Pre-Visit: The patient or doctor determines the need for an in-office BP reading and schedules the appointment with the administrative staff.
  • Registration: For the appointment, the patient arrives at the doctor’s office and checks in at the registration desk. Following check-in, the patient waits for a clinician to conduct the BP exam.
  • Exam: The clinician leads the patient to the examination room and conducts the BP exam. After completing the BP exam, the clinician advises the doctor that the exam is complete.
  • Treatment: The doctor enters the examination room, greets the patient, reviews the BP exam results, and discusses the results and possible follow-up treatment plan with the patient.
  • Post-Visit: The doctor updates the patient’s electronic health record, and the patient checks out with the administrative staff, leaves the office, and completes any follow-up treatment prescribed by the doctor (e.g., self-treatment; follow-up visits with the doctor, lab, or specialists) (Figure 6).


Figure 6: Current-state (baseline) Hypertension Exam Workflow

Manual Workflow (Control Group)

The manual hypertension BP exam workflow was used to establish the control group for the field trial (Group 1). Patients enrolled into the manual BP workflow group received a personal wrist-cuff BP monitor device, along with instructions and a daily BP log form for manually recording daily BP readings. Test participants enrolled into the manual BP exam workflow followed a daily BP exam workflow; all BP readings performed on the wrist-cuff BP monitor were recorded manually on the log form provided to each participant. Test participants conducted two a.m. BP readings, averaged the two values, and wrote that a.m. average on the form; participants completed the exact same procedure for the two p.m. BP readings. Manual BP test participants (Group 1) received an online video training session, accompanied by a printed instructional manual describing the daily procedure to be followed for the manual BP workflow process (Figure 7).


Figure 7: Manual BP Exam Workflow (Group 1)

Technology-Mediated Workflow

The development goal for the technology-mediated BP exam workflow was to enable a more streamlined and collaborative workflow that addresses the needs of the doctor and those of the patient together in an integrated experience. The Wise & Well Blood Pressure Monitor (WW-BPM) was designed to facilitate timely and accurate BP readings, and the communication of patient BP data in real time to the patient’s doctor, in a collaborative application that enables doctor-patient interaction. The WW-BPM user interface allows users to monitor the statistics of their BP readings. To provide a more accurate representation of the patient’s true BP, the readings are averaged daily. The application also delivers this BP data, along with notices to the doctors when a patient’s BP readings are elevated beyond an acceptable range. Based on their specific health profile, patients also received wellness data associated with hypertension accelerators (e.g., smoking, salt intake, diet, exercise, weight, and alcohol consumption). To facilitate future informatics portraying the functional use of the system, the application incorporated a database of transactions that can be further monitored and analyzed. The technology introduced in this research (the Omron BP monitor and the Wise & Well BP Monitor (WW-BPM), integrated with the patient’s doctor) is reflected in the technology-mediated workflow that follows. A complete Design Verification test and Usability Test was conducted for the technology-mediated workflow prior to formal engagement with test participants (Figure 8).
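
As a hypothetical illustration of the elevated-reading notice (a sketch, not the WW-BPM implementation), the core check might reduce to something like the following, with invented thresholds:

    ALERT_THRESHOLD = (140, 90)   # illustrative systolic/diastolic cutoffs, mmHg

    def needs_doctor_notice(daily_avg):
        # Flag the daily average for the doctor if either value is elevated.
        sys, dia = daily_avg
        return sys >= ALERT_THRESHOLD[0] or dia >= ALERT_THRESHOLD[1]

    if needs_doctor_notice((143.5, 88.0)):
        print("notify doctor: elevated daily average")  # push-message stand-in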


Figure 8: Technology-Mediated BP Exam Workflow (Group 2)

Results and Analysis

The CS-AF Summary Scorecard incorporates summary ratings of each workflow evaluated with metrics from the CS-AF, including a color-coded visualization of the progress of each key metric toward the ultimate goal of a solution highly adopted by participants across all facets of the CS-AF (Context, Process, Technology, Attitude and Behavior, and Outcomes). The rANOVA was incorporated to compare mean values for each CS-AF determinant within and between groups. When a statistically significant change in mean values occurred (p-value <0.05), further pair-wise t-test analysis was conducted to compare means at the workflow-stage level; positive and negative changes in mean values were recorded as a method for evaluating the gains and gaps between the workflows tested. This statistical approach proved to be a valid and replicable method for evaluating the workflows studied. Through subjective questions across the five sections of the survey, participants expressed further details regarding each CS-AF aspect in question. Responses were collected and analyzed to determine significant themes that might complement or contradict the statistical findings from the Likert-scale survey mean data previously analyzed via rANOVA and paired t-test.

Within Group 1 Summary Analysis

The Context for the manual BP exam workflow, compared with the respective baseline, indicates an expected shift to a remote asynchronous workflow, which is indicative of a self-exam context. The manual workflow transformed to become more distributed across more locations, with fewer participants and communities of practice, somewhat more developing and short-term in nature, and with less turnover than the baseline workflow. There were no surprises in these results; Group 1 responded as predicted. The CS-AF reveals a marked improvement in the Process times of the manual workflow, compared with the baseline, as participants recorded dramatic time reductions and overall workflow optimization. Enabling participants in the manual workflow to conduct the BP exam at home, on their own, was the primary reason for the time optimization. However, the manual solution required recording BP data by hand and involved no contact with clinicians, which translated to minimal impact on the relevance and importance of the BP information obtained versus the baseline. From a technology adoption perspective, participants did not view the manual BP exam process (device and procedure) as particularly “useful” or “easy to use”; in fact, participants felt the process was less useful and easy to use than the traditional in-office BP exam. Further exploration using the USE model did show participants to be more satisfied with the manual BP workflow, yet they felt that the workflow was not as easy to learn, compared with the baseline. Attitude and Behavior proved to be difficult metrics to advance for the manual workflow; in every instance, all responses (other than the NPS metric) decreased from the already low level recorded for the baseline workflow. The results indicate a serious need for a much more comprehensive solution that motivates participants’ “attitude toward use” of, and “intent to use,” the manual workflow, both of which are required for successful adoption. The NPS advanced from a negative state (Detractor) to a neutral state (Passive), which was a significant advance, yet more opportunity for improvement exists here. Group 1 participants also felt that there was less “awareness” of their goals amongst clinicians in the manual workflow, compared with the baseline, and “information quality” was enhanced only by their own efforts to record manual BP readings. These factors underlie Group 1 participants’ view that goal alignment decreased: they believed they were isolated with their BP data, with no collaborative exchange with clinicians during the process.

Within Group 2 Summary Analysis

The Context for the technology-mediated BP exam workflow, compared with the baseline, indicates a shift to a remote asynchronous workflow, as hypothesized, which is indicative of a self-exam context. The technology-mediated workflow transformed to become more distributed across more locations, with fewer participants and communities of practice, somewhat more developing and short-term in nature, and with less turnover than the baseline workflow. There were no surprises in these results; Group 2 responded as predicted. The CS-AF reveals a marked improvement in the Process times of the technology-mediated workflow, compared with the baseline, as participants recorded dramatic time reductions and overall workflow optimization, as hypothesized. The fact that the technology-mediated workflow enabled participants to conduct the BP exam at home, on their own, was the primary reason. The technology-mediated solution automated the recording of BP data and gave clinicians real-time visibility of all participants’ BP data. Clinicians also had the option to send personal notes to participants, and all participants received a series of time-sequenced infographics, segmented for relevance to their specific profile, in the form of proactive push notifications. These features translated to only a slight positive movement in the relevance and importance of the BP information obtained for the technology-mediated workflow versus the baseline. From a technology adoption perspective, participants did not view the technology-mediated BP exam workflow (Wise&Well app and Omron device) as significantly more “useful” or “easy to use” than the baseline. Group 2 participants recorded a slight improvement in all areas of the workflow except Stage 3 (the BP exam), which was rated less useful and easy to use than the traditional in-office BP exam. Further exploration using the USE model did show participants to be more satisfied with the technology-mediated BP workflow, yet they felt that the workflow was not as easy to learn, compared with the baseline. Similar to the results from Group 1, Attitude and Behavior also proved to be difficult metrics to advance for the technology-mediated workflow; all responses (other than the NPS metric) decreased from the already low level recorded for the baseline workflow for Group 2. The results indicate a serious need for a much more comprehensive solution that motivates participants’ “attitude toward use” of, and “intent to use,” the technology-mediated workflow for successful adoption. The NPS advanced from a negative state (Detractor) to a neutral state (Passive); this was a significant advance, yet more opportunity exists for improvement in the promotability of the solution. Group 2 participants also felt that there was less “awareness” of their goals amongst clinicians for the first three stages of the technology-mediated workflow, compared with the baseline. There was, however, a slight increase in awareness, information quality, and goal alignment for Stages 4 and 5, including a significant increase in goal alignment for Stage 4 of the tech-mediated workflow. The data reflect an improvement in the areas of treatment and post-exam, indicating that Group 2 participants felt more empowered and informed regarding their BP than did the participants in the baseline workflow. 
This is a small move in the positive direction, yet a large gap remains in the front-end of the workflow and in the exam itself to more tightly integrate the collaborative efforts of patients with clinicians. Telehealth technologists will need to investigate ways to improve the collaborative workflow between patients and clinicians during remote self-care exams in order to positively impact patients’ goal alignment and produce more beneficial outcomes.

Between Group 1 and Group 2 Summary Analysis

Analysis between Group 1 (manual workflow) and Group 2 (technology-mediated workflow) participants indicates similar results. Both workflows proved successful regarding process times; in fact, Group 1’s manual workflow was the most optimized in all stages of the workflow except Stage 3 (the BP exam). The data reflect the simplicity of the manual wrist-cuff workflow, which was more optimized for all stages except the BP exam, where BP data had to be recorded manually, in comparison to the more automated readings of the technology-mediated workflow. Group 1 participants did not have any complex technology to contend with, other than the simple wrist-cuff device itself. The tech-mediated workflow scored better in the areas of information relevance and importance than did the manual workflow, indicating that the graph-plots of real-time BP information, infographics, alerts, and doctor messages slightly improved the quality of the information over the manual workflow. Technology adoption determinants rated lower than hypothesized for both workflows; yet the technology-mediated solution proved slightly more “useful” than the manual solution for the first three stages of the workflow, while the results flipped for Stages 4 and 5. Participants from both groups indicated that technology could improve usefulness; however, the lowest rating for this variable was in Stage 3, indicating participants’ perspective that technology could be more impactful in the front- and back-ends of the respective workflows. Group 1 participants rated the manual workflow “easier to use” than Group 2 participants rated their respective workflow. The manual solution was reported to be easier to use than the tech-mediated solution; however, Group 2 participants reported a higher rating for technology’s ability to improve ease of use, most significantly in the front-end process (Stages 1 and 2). Both groups agreed that the BP exam workflow would benefit from automation of the registration and appointment-scheduling aspects of the workflow. Group 1 participants were overall more satisfied with the manual workflow than Group 2 participants were with the tech-mediated workflow. Both groups found the “ease of learning” for their alternative workflow to be difficult, with a surprising slight advantage in ease-of-learning for Group 2. Both groups rated the Attitude and Behavior variables for the alternative workflows as low overall for all stages. Group 2 scored slightly higher for all stages but Stage 5 on “attitude toward using,” and slightly higher than Group 1 for all stages but Stage 2 on “intent to use.” This data indicates a slightly better attitude and behavioral intent among Group 2 participants toward the technology-mediated workflow than toward the manual workflow. However, of all the metrics incorporated in the CS-AF, the attitude and behavior determinants were the lowest scores reported overall. This underscores the tremendous importance of attitude and behavior for adoption in collaborative workflows, and marks a target area for further discussion. The comparison of Outcomes between groups indicated a similar reaction by participants for “awareness” and “information quality,” with lower scores than their respective baseline workflows in Stages 1, 2, and 3, and some minor improvements in Stages 4 and 5. These low scores indicate a lack of collaborative connection with clinicians in the alternative workflows. 
Participants stated that they would like more interaction with, and access to, clinicians during the exam process to ask real-time questions and obtain support as needed. Regarding “goal alignment,” Group 1 reported lower scores for the first four stages of the manual workflow and a slight increase in Stage 5. Group 2 reported a slight increase in goal alignment for Stages 1, 4, and 5, with the Stage 4 increase being significant, compared with the baseline. Both groups reported that the problem areas in the workflow associated with goal alignment are primarily in the front-end process (Pre-Visit, Registration). This data confirms other CS-AF data and subjective comments from participants that clinicians seemed detached from their specific goals in the baseline workflow; the theme extends further in the alternate workflows, since being remote adds a further disconnect from clinicians to a relationship that is already problematic. Further effort is needed on goal alignment and communication for patients to be satisfied with the remote nature of telehealth self-exams.

Discussion

The hypertension exam study (the collaborative BP exam workflow) proved to be valuable for testing the capability of the CS-AF and its expanded analysis methodology to investigate collaborative technology-mediated workflows. A variety of themes emerged from the study regarding the learnings and limitations derived from the CS-AF approach and the data that was analyzed.

Theme 1: Capture the Context

The context of the workflow in its current state is an essential reference point for anchoring future evaluations and comparisons. Barrett et al. posit that understanding the context for telehealth is an essential aspect of evidence-based research and is critical to refinement of the applications in this space [39]. The CS-AF integrates “context determinants” from the MoCA (Synchronicity, Physical Distribution, Participants, Communities of Practice, Nascence, Planned Permanence, Turnover) because it ties together the context-centric construct from Ajzen with the significant contextual dimensions from the CSCW and HCI literature into one integrated contextual model. The MoCA provides a way to tie up many loose threads related to context. More specifically, the researchers posit that the model provides “conceptual parity to dimensions of coordinated action that are particularly salient for mapping profoundly socially dispersed and frequently changing coordinated actions” [42:184]. Lee and Paine suggest that this model provides a “common reference” for defining contextual settings, “similar to GPS coordinates” [42:191] (Figures 9 and 10).


Figure 9: CS-AF Context Scorecard [3]


Figure 10: CS-AF–MoCA Context determinants [3]

Theme 2: A Holistic “Task-focused” View is Needed

This study underscored the importance of an end-to-end view of the workflow and of participants’ perspectives at each workflow stage. Early examples of the TAM in field research incorporated data-point intervals at various times pre- and post-technology-mediated implementation; however, in most instances, the TAM approach lacks the task-level pre- and post-implementation view necessary to pinpoint where in the workflow the gains and gaps exist. Yousafzai et al. posit that the “lack of task-focus in evaluating technology” with the TAM has led to some mixed results. They further suggest that incorporating usage models into the TAM may strengthen predictability, yet caution is needed to manage model complexity [67,68]. The CS-AF approach leads the evaluation effort toward a holistic view of the workflow, taking into account all five aspects of the CS-AF for the entire workflow experience. The CS-AF integrates the practice of Value Stream Mapping (VSM) into the evaluation to collect and analyze quantitative time data for each step of the targeted workflow, an aspect that is weakly defined in the TAM [67,68]. Incorporating VSM into the CS-AF established a common language and procedural methodology for characterizing the BP exam workflow in a quantitative manner; each step in the workflow was measured for both the baseline and alternative workflows. By identifying each significant step in the workflow, and collecting time and quality data, a value stream map was created, indicating the cycle/lag time for the workflow and identifying all quality issues throughout the BP exam process. This approach confirms the important role of “task and technology” stated by adoption experts Brown, Dennis, and Venkatesh [69]. Incorporating VSM within the CS-AF proved to be a valuable guiding focus for this study and was instrumental in uncovering specific gains and gaps for the workflow evaluated, with formal measurement and analysis at the task level that is often invisible to developers (Figure 11).


Figure 11: CS-AF Scorecard Process determinants [3]

Theme 3: Time Equals Money, but is not the Only Answer

Further value of collecting and analyzing task data using the CS-AF approach is evidenced in the potential use of process times for financial analysis of technology adoption. Although financial analysis is outside the scope of this research, collection of the task-time data enables further cost-effectiveness analysis (CEA), if necessary. Woertman et al. posit that CEA is an integral part of technology adoption assessments in health care globally [70]. Their research underscores the importance of calculating the cost associated with a current process and evaluating the financial benefit of the new innovation. Most management metrics associated with CEA are derived from process times and are calculated as efficiency gains or gaps. This research identified specific time comparisons between the baseline and alternative workflows at the task level. Participants across the board were pleased with the optimization of the alternative workflows; however, even with a marked improvement in time, participants did not feel the solutions were more “useful,” and their attitude and behavioral “intent to use” actually decreased, compared with the baseline workflows. The data underscore the importance of process time and indicate that, although time optimization is crucial, it is far from being the only key to collaborative workflow adoption. It is essential that technology solution providers realize that time optimization is just the beginning of creating a successful collaborative workflow (Figure 12).


Figure 12: CS-AF Process Times: VSM time series analysis [3]

Theme 4: Technology is not a Substitute for 1:1 Communication

The CS-AF captured an important assessment of information quality across the stages in the workflows evaluated. The data showed a large gap in participants’ expectations regarding communication with clinicians during the telehealth experience. Group 2 participants were exposed to a variety of “automated” communications options in the technology-mediated workflow, including graph-plots of real-time BP information, infographics, alerts, and doctor messages; yet these technology enhancements showed only a slight improvement in the quality of the information over the baseline and manual workflows. The collaborative information flow is under-supported for telehealth. Practitioners are not trained or equipped to support a growing network of remote asynchronous patients, and the technology is not designed for real-time in-app support and communications. As growth in telehealth continues, expanded capability and resources are needed in the area of patient facilitators. In a study of the role of patient-site facilitators in tele-audiology, Coco et al. identified gaps in the number of facilitators available to support the growing telehealth demand and in the associated training needed to equip these individuals with the knowledge to successfully support remote telehealth patients [71] (Figure 13).


Figure 13: CS-AF Scorecard – Technology determinants [3]

Telehealth patients also bear some responsibility for the connection and flow of quality information in the workflow. Juin-Ming Tsai et al., in their research on the acceptance and resistance of telehealth, suggest that “… individuals should establish the concept of healthy self-management and disease prevention. Only when the public is more aware of self-health management can they fully benefit from telehealth services” [72:9]. The migration to self-health requires added commitment from patients toward the information and processes associated with telehealth. Until patients’ attitudes and behaviors are accepting of this added responsibility, telehealth adoption will be challenged, regardless of the technology available and the support of patient-site facilitators. The distinct requirement for quality information exchange across telehealth workflows puts further demands on both providers and patients for timely communications, monitoring, and support.

Theme 5: Technology that is Easy to Use is not Always Adopted

The integration of the TAM determinants for “usefulness” and “ease of use” within the CS-AF uncovered interesting results associated with collaborative workflow adoption in telehealth. This research reveals the complexity of technology-mediated innovation and of synchronizing features with users’ propensity to adopt. Adoption researchers have shown that Perceived Usefulness has a significant impact on technology adoption, while Ease of Use is less of a determinant for adoption (Juin-Ming et al., 2019; Chen & Hsiao, 2012; Cheng, 2012; Cresswell & Sheikh, 2012; Despont-Gros et al., 2005; Kim & Chang, 2006; King & He, 2006; McGinn et al., 2011; Melas et al., 2011; Morton & Wiedenbeck, 2009; Yusof et al., 2008). Juin-Ming et al.’s research states, “Telehealth has a close connection with individual health. Therefore, a user-friendly interface is not the first priority. In other words, as long as telehealth can improve users’ quality of life and provide better healthcare service, users will be more likely to try the functions that it provides” [72:7]. They further state that developers should focus on Perceived Usefulness to help patients find a practical integration path for incorporating the technology-mediated solution into their health management plans: “Therefore, individuals should establish the concept of healthy self-management and disease prevention” [72:9]. Developing an easy-to-understand user experience is an important aspect of the solution; however, the research shows the solution needs to be deemed useful and viable, with practical use on a daily basis, for patients to increase their intention to use it. Obviously, there is also a direct connection between users’ attitudes and behavior and their perception that the technology-mediated workflow will be a useful experience. The important point verified in this study is that user perceptions of Ease of Use and Perceived Usefulness both scored lower than hypothesized; the reason was not necessarily the user interface, but likely the misalignment of the complete solution with the integrated way that users would like to experience telehealth. Both provider facilitation and personal health management come into play as adoption enablers (Figure 14).


Figure 14: CS-AF – USE (Lund) Technology Acceptance determinants [3]
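A note on how such scorecard comparisons can be tested: the survey items behind these determinants are Likert-type, and the statistical references cited in this article's reference list (Wilcoxon; Mann-Whitney) favor rank-based tests for such data. The following is a minimal, illustrative sketch only; the group labels mirror the study design, but the scores are hypothetical, not the study's data.

```python
# Illustrative sketch: rank-based comparison of Likert-scale scores between
# two independent groups (hypothetical data, not the study's responses).
from scipy.stats import mannwhitneyu

# Hypothetical 7-point Likert scores for a "Perceived Usefulness" item
group1_manual = [5, 6, 4, 5, 7, 6, 5, 4, 6, 5]     # baseline (manual) workflow
group2_mediated = [4, 5, 3, 4, 5, 4, 6, 3, 4, 5]   # technology-mediated workflow

# Two-sided Mann-Whitney U test, appropriate for ordinal Likert data
stat, p = mannwhitneyu(group1_manual, group2_mediated, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```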

Theme 6: Relative Advantage Drives Attitude and Behavior to Adopt

Ajzen et al.'s research found a high correlation between attitude and behavior, specifically when there was direct correspondence between the attitude and the behavior measured [53]. Attitude and behavior measures were a key omission of Eikey et al.'s theoretical Collaborative Space Model (CSM) for health information technology [35]. The researchers suggest that "to predict behavior from attitude, the investigator has to ensure high correspondence between at least the target and action elements of the measures he employs" [54:188]. The CS-AF evaluates both behavior and attitude across the five stages of the BP exam workflow. The data reveal a more negative "attitude towards" and "behavioral intent to use" the alternative workflows compared with the baseline workflows measured. Participants were not convinced that the alternate solution provided enough of a relative advantage to deem it "useful" enough to shift their beliefs (Figure 15).


Figure 15: CS-AF Attitude and Behavior Scorecard [3]

This is an important understanding also uncovered by other researchers in telehealth technology adoption. Zanaboni and Wootton's research [73] builds on Rogers' Diffusion of Innovations research to investigate how adoption occurs in telehealth. The research finds that, of the five Rogers attributes for adoption (relative advantage, compatibility, trialability, observability, and complexity), relative advantage is the key determinant affecting attitude and behavior to adopt in telehealth [73:2]. Helping users identify the "advantages" of the technology-mediated workflow is the crucial determinant of the speed of technology adoption in healthcare, as reported by Greenhalgh et al. [74] and Scott et al. [75].

Theme 7: Goal Alignment Requires Group Alignment

As large populations shift to telehealth, "awareness" and "common ground", instinctive in the face-to-face setting, may be overlooked in remote asynchronous telehealth workflows. Reddy et al. posit that "awareness" is not as natural, and breakdowns occur, in technology-mediated telehealth workflows [76:269]. Furthermore, technology-mediated telehealth solutions can disrupt the traditional approach healthcare providers take toward establishing common ground, or shared goals, with their patients [77] (Figure 16).


Figure 16: CS-AF Outcomes Scorecard [3]

The CS-AF incorporates determinants for evaluating both awareness and goal alignment across the stages of the BP exam workflow. The analysis showed slight positive movement in goal alignment and awareness with the technology-mediated solutions, yet progress in this area was still not acceptable. Much more emphasis is needed on delivering holistic telehealth solutions that allow patients to feel as connected to their goals in a remote context as they do in the face-to-face setting. Eikey et al. state that "HIT needs to be designed to support specific processes of collaborative care delivery and integrate the collaborative workflows of different healthcare professionals" [35:270]. Whitten and Mackert suggest that providers have an integral role in the deployment of telehealth solutions, including the use of project managers and remote-care facilitators to demonstrate overall provider awareness and to establish dependable common ground with remote patients, if telehealth is to be adopted at wide scale [78:517-521].

Limitations

Incorporating more participants for a longer period of time, with perhaps multiple checkpoints, would provide a longer-term view and potentially more information. Because of the COVID pandemic, all semi-structured sessions were conducted via video conference, creating something of a communication barrier relative to the typical interactivity of a face-to-face setting. Self-reporting of BP exam timing could introduce some inconsistency; however, the baseline data were similar between the two independent groups for BP exam timings. In retrospect, there were too many subjective questions (15 in total) for 50 participants across 2 surveys (1,500 responses). The analysis was cumbersome and time-consuming, yet the themes extracted were complementary to the statistical analysis of the survey questions. Expanded support from the clinician team for the alternate workflow experiences would have been more beneficial to participants. Support for the alternative workflow was delivered by this researcher and, although responsive, may not have been accepted as well as support coming from the participants' own clinical team.

Implications for Healthcare Providers

For the provider-clinician community to be successful with telehealth, it must be viewed as an entirely new implementation paradigm, complementary to the on-site care system yet with a different set of objectives, leadership, and sponsorship. Practitioners need to understand that technologies are moving faster than the medical system's ability to incorporate new capability into its operations. The pace of technology will not slow; it is more likely to accelerate. Practitioners must establish permanent operational processes for continuous technology adoption, ensuring that a pipeline of new technologies at various stages of maturity is properly vetted, prototyped, and integrated into the telehealth system. Practitioners incorporating telehealth services must learn to redefine the context of a "patient" and the support mechanisms that will empower patients to be successful in their remote and asynchronous environments. Clinicians will need to establish new teams, including remote-care facilitators, project managers, and technical support specialists, properly trained and assigned to the charter of telehealth delivery [79].

Proper protocols and technology infrastructure are needed so that telehealth solutions can be led by a structured deployment system that anticipates all possible threats. Sanders et al.'s research on barriers to participation found that some telehealth patients expressed concern about being "dependent" on technology [78]. Greenhalgh et al. reported that telecare users had concerns about security and a "perception of surveillance" [74]. Practitioners will need to understand that many telehealth users are elderly and may have sight, hearing, and dexterity issues, alongside the anxiety typically evidenced in this demographic's perception of new technology [72,80].

Implications for Patients

Telehealth users have a responsibility to establish their own health plan in a manner that improves their attitude toward using, and then adopting, telehealth solutions, and to advocate for their specific healthcare plan with the practitioner community. Telehealth users should take the time to define a formal healthcare plan in a manner that removes ambiguity for themselves and provides a formal reference for providers to better understand their specific healthcare needs. Equally important is the need for future telehealth users to have a technology-adoption mindset. Patients need to know that there is a learning curve associated with technology, assume that there will be start-up difficulty, and work to overcome these barriers with a mindset that the upside of the technology far outweighs the hurdles of establishing a new norm. Bem's research in self-perception theory states that when individuals rely on their past behavior as a guiding force toward new adoption, they position themselves to poorly perceive the relative advantage of the new technology [80]. Davis, the originator of the TAM, states that individuals accept a technology to the extent that they believe it will meet their needs; when users shift their mindset to include the cost of adoption, they are more accepting of a delay in relative advantage to accommodate the learning curve [51].

Implications for Developers

Developers of telehealth technology can benefit from this research by shifting attention to the functional use of the technology in the field with real patients, through iterative agile development involving lead users. Since the telehealth ecosystem is only now forming, real insight into the unmet needs of patients will be found by working directly with patients who have an interest in adopting telehealth; they can be spokespeople for their community's needs [81,82]. Developers need to understand the findings in this study associated with the subtle migration of non-adopters to adopters, and realize that the primary motivator is a relative advantage that triggers attitude toward use and behavioral intent to use, which in turn feeds perceived usefulness of the technology-mediated solution for new telehealth users [73-75]. Developers will also need to explore the technology's future space and contemplate new systems-design platforms that integrate a variety of telehealth solutions into a common patient dashboard, so that patients can quickly habituate to a single user-experience paradigm. This approach allows patients to gain additional relative advantage by adding telehealth capability into an already familiar framework they are comfortable with [43,83]. Developers will need to explore new ways to collaborate with the practitioner community during each stage of the product development lifecycle. Yen and Bakken advocate an extended development lifecycle, iterative in nature with lead users and with emphasis on the front end of the process [83,84]. The telehealth development community is not as established as other sectors, such as consumer electronics and business software. Developers should investigate best practices in more mature sectors and incorporate those development lifecycle practices into their standard operating procedures to ensure predictability [85,86].

Implications for Researchers

This research builds on historic CSCW research in collaborative workflows to introduce the CS-AF as a replicable approach for evaluating workflows with the aim of workflow improvement. It expands on the future research directives suggested by Eikey et al.'s comprehensive review of collaboration in HIT, extending their summary view of the space and the need for "field investigation methods", including the key omission of attitude and behavior measures [35]. The research successfully incorporated a select set of cross-disciplinary elements to obtain a comprehensive view of collaborative workflows. The research objectives of the CS-AF addressed not only those identified by Eikey, but also directives from a host of HCI/CSCW researchers, such as Grudin and Weiser, amongst others, who challenge researchers to continue refining approaches for immersive discovery of the specific tasks at the point where work is done: "We (CSCW) will most likely need to develop new concepts to help us understand collaboration in complex organizations" [58:514]. Rojas et al. conducted a literature review of process evaluation techniques in healthcare (examining 74 papers) to determine recurring approaches; they concluded that "Efforts should be made to ensure that there are tools or solutions in place which are straightforward to apply, without the need of detailed knowledge of the tools, algorithms or techniques relating to the process mining field. In addition, new methodologies should emerge, which use reference models and be able to consider the most frequently posed questions by healthcare experts" [86:234]. Bringing the expertise of CSCW researchers to the telehealth domain, in a collaborative effort with HIT professionals and with the use of the CS-AF, will facilitate a comprehensive view of the workflow. The CS-AF field engagement methodology and cross-disciplinary survey instrument provide a functional methodology for researchers to design, conduct, and statistically evaluate subsequent collaborative workflows, enabling clear visibility into the gains and gaps of each workflow iteration.

Keywords

Telehealth, Ubiquitous collaboration, Workflow, Technology-mediated, Adoption

CCS Concepts: Ubiquitous Computing, Telehealth, Doctor-Patient Collaboration, Human-Centered Computing, Applied Computing, Health Informatics, Health Information Technology

References

  1. Winbladh K, Ziv H, Richardson DJ (2011) Evolving requirements in patient-centered software, in Proceedings of the 3rd Workshop on Software Engineering in Health Care, Honolulu, HI, May 22-23.
  2. Grudin J (1994) Computer-Supported cooperative work: History and focus. Computer 27 (5): 19-26.
  3. Bondy, Christopher (2021) A Framework for Evaluating Technology-Mediated Collaborative Workflow. Thesis. Rochester Institute of Technology.
  4. Ackerman M (2000) The intellectual challenge of CSCW: The gap between social requirements and technical feasibility. Human-Computer Interaction 15 (2): 179-203.
  5. Neale DC, Carroll JM, Rosson MB (2004) Evaluating computer-supported cooperative work: Models and frameworks, in Proceedings of Conference on Computer Supported Cooperative Work.
  6. Neale DC, Hobby L, Carroll JM, Rosson MB (2004) “A laboratory method for studying activity awareness,” in Proceedings of 3rd Nordic Conference on Human-Computer Interaction 1: 313-322.
  7. Weiser M (1996) Ubiquitous Computing.
  8. Goulden M, Greiffenhagen C, Crowcroft J, McAuley D, Mortier R, et al. (2017) Wild interdisciplinarity: Ethnography and computer science. International Journal of Social Research Methodology 20(2): 137-150.
  9. Blomberg J, Karasti H (2013) Reflections on 25 Years of Ethnography in CSCW. The Journal of Collaborative Computing and Work Practices 22: 373-423.
  10. Clough PT (1992) Poststructuralism and postmodernism: The desire for criticism. Theory and Society 21: 543-552.
  11. Peneff J (1988) The observers observed: French survey researchers at work. Social Problems 35: 520-535.
  12. Carroll J (2002) Human-Computer Interaction in the New Millennium. New York: ACM Press.
  13. Weiseth PE, Munkvold BE, Tvedte B, Larsen S (2006) “The wheel of collaboration tools: A typology for analysis within a holistic framework,” in Proceedings of the 20th Anniversary Conference of Computer Supported Cooperative Work. pg: 239-248.
  14. Norman DA (2011) Living with Complexity. Cambridge, Mass: MIT Press.
  15. Carroll JM, Kellogg WA, Rosson MB (1991) “The Task-Artifact Cycle” In Carroll JM (eds) Designing Interaction: Psychology at the Human-Computer Interface. New York: Cambridge University Press.
  16. Millen D (2000) Strategies for HCI Field Research. Red Bank, NJ: AT&T Labs-Research.
  17. Arias E, Eden H, Fischer G, Gorman A (2000) Transcending the individual human mind: Creating shared understanding through collaborative design. Transactions on Computer-Human Interaction 7(1): 84-113.
  18. Baeza-Yates R, Pino JA (1997) A first step to formally evaluate collaborative work. In Proceedings of the International ACM SIGGROUP Conference on Supporting Group Work: The Integration Challenge, Phoenix, Arizona.
  19. Plowman L, Rogers Y, Ramage M (1996) "What are workplace studies for?" in Proceedings of the Fourth European Conference on Computer Supported Cooperative Work, Stockholm, Sweden, April.
  20. Workman M (2004) Performance and perceived effectiveness in computer-based and computer-aided education: Do cognitive styles make a difference. Computers in Human Behavior 20(4): 517-534.
  21. Baecker RM, eds (1995) Readings in Human-Computer Interaction: Toward the Year 2000. Burlington, Massachusetts: Morgan Kaufmann Publishers.
  22. Interaction Design Foundation (2009) Human computer interaction: Brief introduction. In The Encyclopedia of Human-Computer Interaction (2nd ed.).
  23. Lee CP, Paine D (2015) From the matrix to a model of coordinated action (MoCA). In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work and Social Computing.
  24. Davis RF (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13.
  25. Venkatesh V, Morris M, Davis G, Davis F (2003) User acceptance of information technology: Toward a unified view. MIS Quarterly 27(3): 425-478.
  26. Lund AM (2001) Measuring usability with the USE questionnaire. Usability Interface. 8(2): 3-6.
  27. Reichheld FF (2006) The microeconomics of customer relationships. MIT Sloan Management Review 47: 73-78.
  28. Fox S, Rainie L (2002) Vital Decisions: A Pew Internet Health Report. Pew Research Center: Internet and Technology 22.
  29. Estrin D (2014) Small data, where n = me. Communications of the ACM 57(4): 32-34.
  30. Helft PR, Hlubocky F, Daugherty CK (2003) American oncologists' views of internet use by cancer patients: A mail survey of American Society of Clinical Oncology members. Journal of Clinical Oncology 21. [crossref]
  31. Chokshi NP (2018) Loss‐Framed financial incentives and personalized goal‐setting to increase physical activity among ischemic heart disease patients using wearable devices: The ACTIVE REWARD randomized trial. The Journal of the American Heart Association 13. [crossref]
  32. Brown D (2019) Doctors say most metrics provided by your Apple Watch, Fitbit aren’t helpful to them. USA TODAY.
  33. Weir CR, Hammond KW, Embi PJ, Efthimiadis EN, Thielke SM, et al. (2011) An exploration of the impact of computerized patient documentation on clinical collaboration. International Journal of Medical Informatics 80 (8): e62–e71. [crossref]
  34. Skeels MM, Tan D (2010) Identifying opportunities for inpatient-centric technology. Presented at the International Health Informatics Symposium (IHI'10), Arlington, Virginia, November 11-12.
  35. Eikey E, Reddy M, Kuziemsky C (2015) Examining the role of collaboration in studies of health information technologies in biomedical informatics: A systematic review of 25 years of research. Journal of Biomedical Informatics 57: 263-277. [crossref]
  36. Bondy C (2018) Exploring the association between current state and future state technology-mediated collaborative workflow: Graphic communications workflow. Presented at the Annual TAGA Convention (Technical Association of the Graphic Arts), Washington, D.C., March 19.
  37. Piwek L, Ellis D, Andrews S, Joinson A (2016) The rise of consumer health wearables: Promises and barriers. PLOS Medicine 13(2). [crossref]
  38. Kalantari M (2016) Consumers adoption of wearable technologies: Literature review, synthesis, and future research agenda. International Journal of Technology Marketing. 12(3): 274-307.
  39. Rothwell P (2010) Limitations of the usual blood-pressure hypothesis and importance of variability, instability, and episodic hypertension. The Lancet 375(9718): 938-948. [crossref]
  40. Ogedegbe G, Pickering T (2010) Principles and techniques of blood pressure measurement. Cardiology Clinics 28(4): 571-586. [crossref]
  41. American Heart Association recommended blood pressure levels (2018) 501.
  42. Lee CP, Paine D (2015) From the matrix to a model of coordinated action (MoCA). In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work and Social Computing.
  43. Bondy C (2017) “Understanding critical barriers that impact collaborative doctor-patient workflow,” presented at The 2017 IEEE International Conference on Biomedical and Health Informatics, Orlando, Florida.
  44. Musat D, Rodríguez P (2010) Value stream mapping integration in software product lines. In PROFES '10, Copenhagen, Denmark.
  45. Rother M (2009) Learning to See: Value Stream Mapping to Add Value and Eliminate MUDA. Boston, MA: Lean Enterprise Institute, Inc.
  46. Shoua W, Wang J, Wu P, Wang X, Chong HY (2017) A cross-sector review on the use of value stream mapping. International Journal of Production Research 55(13): 3906-3928.
  47. Snee RD (2010) Lean Six Sigma: Getting better all the time. International Journal of Lean Six Sigma 1 (1): 9-29.
  48. Haizatul S, Ramian R (2015) Patient Process Flow Improvement: Value Stream Mapping. Journal of Management Research 7(2).
  49. Roth N, Franchetti M (2010) Process improvement for printing operations through the DMAIC Lean Six Sigma approach: A case study from Northwest Ohio, USA. International Journal of Lean Six Sigma 1(2): 119-133.
  50. Acharyulu GVRK (2014) Supply chain management practices in printing industry. Operations and Supply Chain Management 7(2): 39-45.
  51. Davis RF (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13(3).
  52. Lund AM (2001) Measuring usability with the USE questionnaire. Usability Interface 8(2): 3-6.
  53. Ajzen I (1991) The Theory of Planned Behavior. Organizational Behavior and Human Decision Processes 50(2): 179-211.
  54. Ajzen J, Fishbein M (1977) Attitude-Behavior relations: A theoretical analysis and review of empirical research. Psychological Bulletin 84(5): 888-918.
  55. Reichheld FF (2006) The microeconomics of customer relationships. MIT Sloan Management Review 47: 73-78.
  56. Oliveira T, Oliveira Martins (2010) Literature review of information technology adoption models at firm level. Revista de Administração 45(1): 110-121.
  57. Samaradiwakara GDMN (2014) Comparison of existing technology acceptance theories and models to suggest a well improved theory/model. International Technical Sciences Journal 1(1).
  58. Neale DC, Carroll JM, Rosson MB (2004) “Evaluating computer-supported cooperative work: Models and frameworks,” in Proceedings of Conference on Computer Supported Cooperative Work.
  59. Divine G, Norton HJ, Hunt R, Dienemann J (2013) Review of analysis and sample size calculation considerations for Wilcoxon tests. Anesthesia and Analgesia 117(3): 699-710.
  60. Minitab Blog Editor, “Best way to analyze Likert item data: Two sample t-test versus Mann-Whitney” The Minitab Blog.
  61. Wilcoxon F (1945) Individual comparisons by ranking methods. Biometrics Bulletin 1(6): 80-83.
  62. Meek GE, Ozgur C, Kenneth D (2007) Comparison of the t vs. Wilcoxon Signed-Rank Test for Likert scale data and small samples. Journal of Modern Applied Statistical Methods 6(1).
  63. Salkind NJ (2020) Repeated measures design. SAGE Research Methods.
  64. Glen S. ANOVA test: Definition, types, examples. StatisticsHowTo.com: Elementary Statistics for the Rest of Us!
  65. Brandao ARD (2016) Factors influencing long-term adoption of wearable activity trackers. RIT Scholar Works.
  66. Barrett D, Thorpe J, Goodwin N (2015) Examining perspectives on telecare: Factors influencing adoption, implementation, and usage. Smart Homecare Technology and TeleHealth 3: 1-8.
  67. Yousafzai SY, Foxall GR, Pallister JG (2007) Technology acceptance: A meta-analysis of the TAM: Part 1. Journal of Modelling in Management 2(3): 251-280.
  68. Yousafzai SY, Foxall GR, Pallister JG (2007) Technology acceptance: A meta-analysis of the TAM: Part 2. Journal of Modelling in Management 2(3): 281-304.
  69. Brown SA, Dennis AR, Venkatesh V (2010) Predicting collaboration technology use: Integrating technology adoption and collaboration research. Journal of Management Information Systems 27(2): 9-53.
  70. Woertman WH, Van De Wetering G, Adang EMM (2014) Cost-effectiveness on a local level: Whether and when to adopt a new technology. Medical Decision Making 34(3): 379-386. [crossref]
  71. Coco L, Davidson A, Marrone N (2020) The role of patient-site facilitators in tele-audiology: A scoping review. American Journal of Audiology 29: 661-675. [crossref]
  72. Tsai JM, Cheng MJ, Tsai HH, Hung SW, Chen YK, et al., (2019) Acceptance and resistance of telehealth: The perspective of dual-factor concepts in technology adoption. International Journal of Information Management 49: 34-44.
  73. Zanaboni P, Wootton R (2012) Adoption of telemedicine: From pilot stage to routine delivery. BMC Medical Informatics and Decision Making 12(1). [crossref]
  74. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O (2004) Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Quarterly 82(4): 581-629. [crossref]
  75. Scott SD, Plotnikoff RC, Karunamuni N, Bize R, Rodgers W (2008) Factors influencing the adoption of an innovation: An examination of the uptake of the Canadian Heart Health Kit (HHK). Implementation Science 3(41).
  76. Reddy MC, Shabot MM, Bradner E (2008) Evaluating collaborative features of critical care systems: A methodological study of information technology in surgical intensive care units. Journal of Biomedical Informatics 41: 479-487. [crossref]
  77. Weir CR, Hammond KW, Embi PJ, Efthimiadis EN, Thielke SM, et al. (2011) An exploration of the impact of computerized patient documentation on clinical collaboration. International Journal of Medical Informatics. 80: e62–e71. [crossref]
  78. Whitten PS, Mackert MS (2005) Addressing telehealth's foremost barrier: Provider as initial gatekeeper. International Journal of Technology Assessment in Health Care 21(4): 517-521.
  79. Abdullah F, Ward R (2016) Developing a General Extended Technology Acceptance Model for E-Learning (GETAMEL) by analysing commonly used external factors. Computers in Human Behavior 56.
  80. Bem DJ (1972) Self-perception theory. In Berkowitz L (ed) Advances in Experimental Social Psychology. New York: Academic Press.
  81. von Hippel E, Thomke S, Sonnack M (1999) Creating breakthroughs at 3M. Harvard Business Review 79(5): 3-9.
  82. Jalil S, Myers T, Atkinson I, Soden M (2019) Complementing a clinical trial with human-computer interaction: Patients’ user experience with telehealth. JMIR Human Factors 6(2).
  83. Yen PY, Bakken S (2012) Review of health information technology usability study methodologies. Journal of the American Medical Informatics Association 19(3): 413-422.
  84. Bondy C, Rahill J, Povio ML (2007) Immersion & iteration: Leading edge approaches for early stage product planning. Master's Project, Product Development, Rochester Institute of Technology, Rochester, NY.
  85. Dorsey E, Topol E (2016) State of Telehealth. The New England Journal of Medicine 375(2): 154-161.
  86. Rojas E, Munoz-Gama J, Sepúlveda M, Capurro D (2016) Process mining in healthcare: A literature review. Journal of Biomedical Informatics 61: 224-236. [crossref]

Rumination Behavior and Its Association with Milk Yield and Composition of Dairy Cows Fed Partial Mixed Ration Based on Corn Silage

DOI: 10.31038/IJVB.2022611

Abstract

The objective of this study was to characterize the variation in rumination time and its association with milk yield and composition in dairy Holstein Friesian cows fed corn silage. Rumination time was recorded 24 h/day using direct visual observation. Cows were divided into 2 groups to facilitate visual observation and to ensure similar parities, days in milk (DIM) and milk yields between groups. All cows were fed a partial mixed ration (PMR) based on corn silage. Rumination was defined as the time a cow spends chewing a regurgitated bolus until it is swallowed back. Each cow was recorded continuously for periods of 2 hours at a time to complete a full 24 hours (12 values per day). Cows were assigned to three groups based on individual average daily rumination time: low rumination cows up to 451 min/day (L = up to the 25th percentile), medium rumination cows from 451 to 566 min/day (M = between the 25th and 75th percentiles) and high rumination cows above 566 min/day (H = above the 75th percentile). Across all groups (H, M, L), cows ruminated approximately 497.5 min/day, ranging from 311 to 594 min/day. High rumination cows (mean 581 min/day) produced 4.05% more energy-corrected milk (ECM) than L rumination cows (mean 403 min/day). Rumination time was found to be positively associated with the milk yield of cows fed a PMR based on corn silage.

Keywords

Corn silage, Holstein Friesian cow, Milk production, Partial mixed ration, Rumination behavior

Introduction

Dairy producers, animal nutritionists and veterinarians have long recognized the importance of rumination as an indicator of dairy cattle health and performance. The rumination process allows dairy cattle to utilize forages that non-ruminant animals cannot.

The mechanics of eating and ruminating in cattle are well understood [1]. During eating, the lips, teeth, and tongue of the cow are used to move feed into the mouth, where it is chewed. Feed is chewed by lateral movements of the mandible, resulting in a grinding action that shears, rather than cuts, the feed. The feed is chewed by the molar teeth on one side of the mouth at a given time [1]. A large amount of saliva is secreted during eating to enable a bolus to be formed and swallowed [2].

Rumination is a unique, defining characteristic of ruminants. During rumination, digesta from the rumen is regurgitated, remasticated and reswallowed [3]. This cyclical process is influenced by several primary factors, including dietary and forage-fiber characteristics, health status, stress and the cow's management environment [4,5]. Rumination is controlled by the internal environment of the rumen and the external environment of the cow, i.e. the management environment.

Rumination facilitates digestion, particle size reduction and subsequent passage from the rumen, thereby influencing dry matter intake. Rumination also stimulates salivary secretion and improves ruminal buffering function [6]. Rumination is positively related to feeding time and dry matter intake (DMI). Following periods of high feed intake, cows spend more time ruminating. Restricting feed intake reduces rumination: a 1-kg decrease in DMI has been associated with a 44 min/day reduction in rumination [7].

Rumination activity has been consistently associated with intake of physically effective NDF (peNDF), which combines dietary particle length and dietary Neutral Detergent Fibre (NDF) content and is directly related to chewing activity and rumination [8]. As the level of peNDF in the diet increases, the cow is stimulated to ruminate more [9]. Under acute and chronic stress, rumination is depressed. Several key components of the management environment may reduce the cow's expected rumination response to dietary peNDF, fiber digestibility or fiber fragility: heat stress (-10 to -22%), overcrowding (-10 to -20%), excessive headlock time (-14%) and mixed-parity pens (-15%) [10].
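To make the peNDF idea concrete: a common operationalization (assumed here; this paper does not give its own formula) multiplies dietary NDF content by a physical effectiveness factor (pef), the fraction of particles retained on sieves of 1.18 mm or larger. A minimal sketch, using for illustration the ration values reported later in this paper's methods (348 g NDF/kg DM; 6% + 48% + 40.5% = 94.5% of particles retained above 1.18 mm):

```python
# Hedged sketch of a common peNDF calculation: peNDF = pef x NDF,
# where pef = fraction of DM retained on sieves >= 1.18 mm.
def pe_ndf(ndf_g_per_kg_dm: float, pef: float) -> float:
    """Physically effective NDF (g/kg DM) from NDF content and pef."""
    return ndf_g_per_kg_dm * pef

# Illustration with this paper's reported ration values:
print(pe_ndf(348, 0.945))  # ~328.9 g peNDF/kg DM
```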

Under ideal conditions, mature cows will spend 480 to 540 min/day ruminating [11]. If rumination is depressed by 10 to 20% due to poor management, we can reasonably predict compromised ruminal function and greater risk of associated problems such as sub-acute rumen acidosis, poor digestive efficiency, lameness and lower milk fat and protein output [10]. Dominance hierarchy also affects rumination activity: lower-ranked cows ruminated 35% less than higher-ranked cows [12]. The effect of social interactions on rumination needs to be considered in grouping strategies for a farm; primiparous cows ruminate and lie down less when they are mixed with mature cows. Grant (2012) [13] measured up to a 40% reduction in rumination activity for primiparous cows when they were resting in stalls known to be preferred by dominant cows within a pen.

Cows prefer to ruminate while lying down [14,15]. Most rumination occurs at night and during the afternoon. When ruminating, whether lying or standing, cows are quiet and relaxed, with heads down and eyelids lowered. The cow's favorite resting posture is sternal recumbency with left-side laterality (55-60% left-side preference). The left-side laterality and upright posture are thought to optimize positioning of the rumen within the body for the most efficient rumination [16,17]. Rumination activity also increases with advancing age, as do the number of boli and the time spent chewing each bolus [10]. Total ruminative chewing increases linearly from 2 years of age onward [18].

A decrease in rumination time is a good sign that something is affecting ruminal function and cow well-being. Rumination often responds to a stressor 12 to 24 hours sooner than traditionally observed measures such as elevated body temperature, depressed feed intake or reduced milk yield [19]. Changes in rumination time for a variety of management routines and biological processes have been reported based on accumulated on-farm observations with diverse monitoring systems, such as visual observation (V.O.), automated systems (transducers that transform jaw movements into electrical signals), pressure sensors, pneumatic systems or microphone-based monitoring systems [20]. Deviations in rumination from a baseline provide useful management information.

Cows ruminate for approximately 500-550 minutes per day, and reported deviations in rumination include: calving, -255 min/d; estrus, -75 min/d; hoof trimming, -39 min/d; heat stress, -20 to -70 min/d; and mastitis, -63 min/d [20]. The target for making management decisions would be a deviation in rumination of greater than 30 to 50 min/d for either an individual cow or a group of cows [10]. Often, changes in rumination measured on-farm reflect changes in feed or feed management, cow grouping or cow movement, and overall cow comfort. What matters most is not the absolute time spent ruminating each day, but the change in rumination time from day to day.
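The 30-50 min/d decision rule above translates directly into a simple monitoring check. A minimal sketch with hypothetical data; the baseline definition used here (a mean over recent days) is our assumption, not a prescription from the cited work:

```python
# Sketch of a day-to-day rumination alert: flag a cow when today's rumination
# time deviates from her recent baseline by more than a chosen threshold.
import numpy as np

def flag_deviation(history_min_per_day, today_min, threshold=40):
    """True when |today - baseline| exceeds the threshold (min/day).

    The baseline here is simply the mean of recent daily rumination times."""
    baseline = float(np.mean(history_min_per_day))
    return abs(today_min - baseline) > threshold

week = [510, 495, 520, 505, 515, 500, 498]  # min/day over the past week
print(flag_deviation(week, today_min=430))  # True: ~76 min below baseline
```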

Currently, several companies produce commercially available rumination monitoring systems. The rumination sensors are usually integrated into activity monitoring devices, ear tags or neck collars. Some rumination monitoring systems use a bolus placed in the rumen of the animal or a pressure sensor located on a nose band. Numerous independent research studies have validated the accuracy and precision of some systems on the market ([21,22] for CowManager SensOor ear tags and [23,25] for SCR Hi-Tag neck collars).

In recent years, there has been an increase in research on using rumination as an indicator of changes in animal performance and welfare. Activity and rumination monitoring systems are growing in popularity, but their on-farm applications are mostly focused on management of reproduction and health [25].

The objective of this study was to characterize the variation in rumination time and its influence on milk, fat and protein production in dairy Holstein Friesian cows.

Material and Methods

Animals

Dairy cows used in this experiment were located at the Agriculture Research Development Station (ARDS) Simnic – Craiova, Romania. The experiment was performed in compliance with European Union Directive 86/609/EC, on Holstein Friesian dairy cattle belonging to a long-running, large-scale genetic improvement program. The dairy farm has a 140-cow Holstein Friesian milking herd. Six trials were conducted during 2018, 2019 and 2020.

Trial 1 (January 2018): Six multiparous milking cows were selected and balanced for days in milk (DIM: mean ± SD 101.5 ± 4.3 days), milk production (9219.3 ± 279.7 kg) and lactation number (L = 3). The cows were then allocated to 2 different groups: group 1 (G1), DIM 97.6 ± 1.7 d and milk production 9024.6 kg, and group 2 (G2), DIM 105.3 ± 3.0 d and milk production 9414.0 ± 200 kg, with 3 cows in each group. Each group was housed (loose housing) in contiguous pens that share identical characteristics: feed and water trough area, rest area with straw (5 m2/cow). Cows were fed a partial mixed ration (PMR): corn silage 60% (fresh-weight PMR proportion), alfalfa hay 3%, concentrate mix 30% and fodder beet 7%, with additional concentrate fed to yield in the house. Water was supplied ad libitum. The cows were milked twice daily at 06:00 and 17:00.

Trial 2 (November 2018): Six multiparous milking cows were selected and balanced for DIM (103.3 ± 2.2 d), milk production (9011.6 ± 106.3 kg) and lactation number (L = 3). The cows were then allocated to 2 different groups: group 1 (G1), DIM 104 ± 2 d, milk production 8923.3 ± 66.6 kg and lactation number L = 3, and G2, DIM 102.6 ± 2.5 d, milk production 9100 ± 20 kg and lactation number L = 3, with 3 cows in each group, housed (loose housing) in contiguous pens that share identical characteristics: feed and water trough area, rest area with straw and exercise area. Cows were fed a partial mixed ration (PMR): corn silage 58% (fresh-weight PMR proportion), alfalfa hay 3%, concentrate mix 32% and fodder beet 7%, with additional concentrate fed to yield in the house. Water was supplied ad libitum, and cows were milked twice daily at 06:00 and 17:00.

Trial 3 (February 2019): Six multiparous milking cows were selected and balanced for DIM (97.5 d), milk production (8933.4 ± 189.2 kg) and lactation number (L = 4). The cows were allocated to 2 different groups: G1, DIM 97.3 ± 1.5 d, milk production 8806.7 ± 162.9 kg and lactation number L = 4, and G2, DIM 97.6 ± 2.5 d, milk production 9060 ± 12.6 kg and lactation number L = 4, with 3 cows in each group. Each group was housed in the same pens as in trial 2. Cows were fed a PMR: corn silage 56% (fresh-weight PMR proportion), alfalfa hay 4%, concentrate mix 30% and fodder beet 10%, with additional concentrate fed to yield in the house. Cows were milked twice daily at 06:00 and 17:00.

Trial 4 (December 2019): Eight multiparous milking cows were selected and balanced for DIM (109.6 ± 2.6 days), milk production (8978.7 ± 135 kg) and lactation number (L = 3). The cows were allocated to 2 different groups: G1, DIM 108 ± 1.8 d, milk production 8895 ± 147.3 kg and lactation number L = 3, and G2, DIM 111.2 ± 2.5 d and lactation number L = 3, with 4 cows in each group. Each group was housed (loose housing) in contiguous pens that share identical characteristics: feed and water trough area, rest area (5 m2/cow) with straw and exercise area (5 m2/cow). Cows were fed a PMR: corn silage 60% (fresh-weight PMR proportion), alfalfa hay 4%, concentrate mix 28% and fodder beet 8%, with additional concentrate fed to milk yield in the house. Water was supplied ad libitum. The cows were milked twice daily at 06:00 and 17:00.

Trial 5 (February 2020): Six multiparous milking cows were selected and balanced for DIM (119.8 ± 5.4 d), milk production (8866.6 ± 169 kg) and lactation number (L = 4). The cows were allocated to 2 different groups: G1, DIM 116 ± 4 d, milk production 8960 ± 158.7 kg and lactation number L = 4, and G2, DIM 123.6 ± 3.8 d, milk production 8773.3 ± 141.9 kg and lactation number L = 4, with 3 cows in each group. Each group was housed (loose housing) in contiguous pens as in trial 2.

Cows were fed a PMR: corn silage 57% (fresh-weight PMR proportion), alfalfa hay 4%, concentrate mix 30% and fodder beet 9%, with additional concentrate fed to milk yield in the house. Cows were milked twice daily at 06:00 and 17:00. Trial 6 (November 2020): Eight multiparous milking cows were selected and balanced for DIM (116.3 ± 6.1 d), milk production (8620 ± 141.7 kg) and lactation number (L = 4). The cows were allocated to 2 different groups: G1, DIM 111.3 ± 3 d, milk production 8575 ± 121.5 kg and lactation number L = 4, and G2, DIM 121.3 ± 3 d, milk production 8665 ± 163.4 kg and lactation number L = 4, with 4 cows in each group. Each group was housed in the same pens as in trial 4. Cows were fed a PMR: corn silage 56% (fresh-weight PMR proportion), alfalfa hay 5%, concentrate mix 29% and fodder beet 10%, with additional concentrate fed to milk yield in the house. Water was supplied ad libitum and cows were milked twice daily at 06:00 and 17:00.

In this experiment, cows were divided into 2 groups to facilitate visual observation and to ensure similar parities, DIM and milk yields between groups. All cows were identified with a unique number applied by color spray. After milking, cows received a minimum of 0.5 kg and a maximum of 5 kg of concentrate per cow per day. Cows were given 2 weeks to adapt to the diet and housing, and the measurements were taken in the third week.

Data Collection

Visual observation is the standard and most reliable method to measure rumination [24]. This can be done either through direct observation or by analysis of video recordings.

In this experiment we used direct observation by trained research personnel: one observer for G1 (observer 1) and one for G2 (observer 2).

All cows were housed indoors. The observers stood in places in the house where all the behaviors of a specific cow could easily be recorded and where the observer's presence had no effect on the cow's routine and behavior [24]. Behaviors (eating, drinking, idling, and ruminating) were recorded according to the ethogram ([24], Table 1). Rumination was defined as the time a cow spends chewing a regurgitated bolus until it is swallowed back. Each cow was recorded continuously for periods of 2 hours at a time to complete a full 24-hour period per week.

Table 1: Behavioral ethogram used in trials 1 to 6

Behavior Definition
Eating Cow head over or in the feed trough
Drinking Cow head over or in the water trough
Ruminating Time the cow spends chewing a regurgitated bolus until swallowing back
Idling No ruminating, eating or drinking behaviour

Daily milk production was obtained from the farm management system (DeLaval 2×5), and fat and protein content was analysed in the laboratory with Ekomilk ultrasonic milk analysers (Bulteh 2000 Ltd). Fat and protein contents were used to calculate energy-corrected milk (ECM). The ECM was calculated according to Reist et al. (2002) [26] as (0.038 × g crude fat + 0.024 × g crude protein + 0.017 × g lactose) × kg milk / 3.14.
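As a worked example of the ECM formula (a sketch only; the lactose content is an assumption, as it is not reported here): with the H-group means from Table 3 (27.67 kg milk, 3.61% fat, 3.15% protein) and lactose taken at 4.8%, the formula returns approximately 25.94 kg ECM, in line with the H-group ECM in Table 3.

```python
# Sketch of the ECM formula quoted above (Reist et al., 2002); fat, protein
# and lactose enter as g per kg of milk. Lactose at 4.8% is an assumption.
def energy_corrected_milk(milk_kg: float, fat_pct: float,
                          protein_pct: float, lactose_pct: float = 4.8) -> float:
    # 1% composition corresponds to 10 g per kg of milk
    fat_g, protein_g, lactose_g = 10 * fat_pct, 10 * protein_pct, 10 * lactose_pct
    return (0.038 * fat_g + 0.024 * protein_g + 0.017 * lactose_g) * milk_kg / 3.14

# H-group means from Table 3:
print(round(energy_corrected_milk(27.67, 3.61, 3.15), 2))  # ~25.94 kg ECM
```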

Representative samples of forage, concentrate and PMR were collected for analysis using wet chemistry. The particle size distribution of PMR samples was determined using the Penn State Particle Separator with 3 sieves (19 mm, 8 mm, 1.18 mm and a bottom pan) [27]. The mean retention of particles was: 6% > 19 mm, 48% 8-19 mm, 40.5% 1.18-8 mm, and 5.5% < 1.18 mm. PMR and concentrate ingredients and nutritional values are shown in Table 2.

Table 2: Average ingredients and nutrient composition of PMR and concentrates

PMR ingredients (fresh-weight PMR proportion, %):
  Corn silage      57.8
  Alfalfa hay       3.8
  Concentrate mix  29.8
  Fodder beet       8.5

Nutritional value of the PMR:
  Net energy lactation        1.51 Mcal/kg DM
  Crude protein               148 g/kg DM
  Rumen undegradable protein  33%
  Neutral detergent fiber     348 g/kg DM
  Acid detergent fiber        228 g/kg DM
  Non-fiber carbohydrates     380 g/kg DM

Concentrate mix:
  Net energy lactation        1.7 Mcal/kg DM
  Crude protein               230 g/kg DM

Additional concentrate:
  Net energy lactation        1.9 Mcal/kg DM
  Crude protein               260 g/kg DM

The concentrate mix and additional concentrate were based on soybean meal, sunflower, corn, wheat and barley grains, plus minerals, vitamins and feed additives.

Statistical Analysis

The data were entered into Microsoft Excel 2007, and STATA version 14 was used to summarize them; descriptive statistics were used to express the results. The p-values for differences between the estimated means of the rumination groups were adjusted using Tukey's method.
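A minimal sketch of the Tukey adjustment described above, using statsmodels rather than STATA, with hypothetical cow-level values in place of the study's raw data:

```python
# Illustrative Tukey HSD comparison of ECM across the L/M/H rumination groups.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ecm = np.array([24.7, 25.1, 24.9, 25.5, 25.7, 25.8, 25.9, 26.0, 26.1])  # hypothetical
group = np.array(["L", "L", "L", "M", "M", "M", "H", "H", "H"])

# All pairwise group comparisons with family-wise error held at alpha = 0.05
print(pairwise_tukeyhsd(ecm, group, alpha=0.05))
```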

Results and Discussion

Rumination behavior was recorded in 480 2-hour periods from all cows (n = 40), and all were used in the analysis to determine the influence of rumination on the milk performance of Holstein Friesian cows. Cows were assigned to three groups based on individual average daily rumination time: low rumination cows up to 451 min/day (L = up to the 25th percentile), medium rumination cows from 451 to 566 min/day (M = between the 25th and 75th percentiles) and high rumination cows above 566 min/day (H = above the 75th percentile). Each observer recorded rumination data in 2-hour intervals (i.e., 12 values per day), with rumination time measured in minutes within each 2-hour interval. The daily rumination time of each cow was calculated by adding the 12 measurements of the day.
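The grouping rule is easy to express in code. A minimal sketch, assuming simulated 2-hour observations in place of the recorded ones (in the study the cut-offs came out at roughly 451 and 566 min/day):

```python
# Sketch: daily rumination time as the sum of twelve 2-hour observations,
# then percentile-based binning into L/M/H groups. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
two_hour_minutes = rng.integers(20, 60, size=(40, 12))  # 40 cows x 12 intervals
daily_rt = two_hour_minutes.sum(axis=1)                 # min/day per cow

p25, p75 = np.percentile(daily_rt, [25, 75])
groups = np.where(daily_rt <= p25, "L",
                  np.where(daily_rt > p75, "H", "M"))
print(p25, p75, dict(zip(*np.unique(groups, return_counts=True))))
```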

Differences in rumination time were observed among all three groups: L (402.7 ± 28.4 min/day), M (508.8 ± 31.6 min/day) and H (581.1 ± 9.2 min/day).

The daily pattern of rumination time, expressed in minutes per 2-hour interval for all three groups of cows, is presented in Figure 1. The mean rumination times for the L, M and H groups were 33.6, 42.4 and 48.4 minutes per 2 hours, respectively. Most rumination activity occurred at night (Figure 1). The system used in our trials to measure rumination time (RT) allowed us to record the pattern of RT during both daytime and nighttime.


Figure 1: Daily pattern of rumination time expressed in minutes per 2-hour interval for all three groups of cows (L = green bars; M = blue bars; H = yellow bars).
* mean in minutes per 2 hours

High rumination cows had a mean milk production of 27.67 kg, compared with 27.5 kg and 27.2 kg for the M and L groups, respectively (Table 3). Low rumination cows had a mean milk fat percentage of 3.51%, compared with 3.58% and 3.61% for the M and H groups, respectively. High rumination cows had a mean milk protein percentage of 3.15%, compared with 3.11% and 3.04% for the M and L groups, respectively.

The fat-to-protein ratio was higher in high rumination cows (1.16) than in the low (1.15) and medium (1.15) rumination cows (Table 3). High rumination cows produced 1.7% more milk than low rumination cows. High rumination cows also produced 4.05% more ECM than low rumination cows and 1.17% more ECM than medium rumination cows (Table 3). Medium rumination cows produced 2.85% more ECM than low rumination cows.

Table 3: Mean rumination time and milk production of low (L), medium (M) and high (H) rumination cows

Parameter                   L              M               H
Rumination time (min/day)   402.7 ± 28.4a  508.8 ± 31.64b  581.1 ± 9.24c
Milk (kg/day)               27.200a        27.500b         27.670b
ECM (kg/day)                24.950a        25.660b         25.960c
Fat (%)                     3.51a          3.58b           3.61b
Protein (%)                 3.04a          3.11b           3.15c
Fat:protein                 1.15a          1.15a           1.16b

The means within a row with different superscripts differ (p < 0.05)

The mean fat percentage of high rumination cows was 3.61%, compared with 3.58% and 3.51% for medium and low rumination cows, respectively. The mean fat-to-protein ratio of high rumination cows was 1.16, compared with 1.15 for medium and low rumination cows. Across all groups (H, M and L), cows ruminated approximately 497.5 min/day, ranging from 311 to 594 min/day.

White et al. (2017) [28] reported a mean rumination time of 436 min/day, ranging from 236 to 610 min/day. Zetouni et al. (2018) [29] recorded 443 min/day in Danish Holstein cows. A positive relationship between rumination time and milk production in early lactation was reported by Soriani et al. (2013) [30]. The main factors affecting rumination time are connected with the chemical and physical characteristics of the diet. Beauchemin [31] described a positive relationship between rumination time and dry matter intake in dairy cows.

An increase in rumination time should be directly connected with better rumen homeostasis, greater microbial fiber degradation and an increase in fat percentage [9].

Rumination time had a slight effect on milk protein percentage (3.15% for high rumination cows compared with 3.11% and 3.04% for medium and low rumination cows, respectively). Kaufman et al. [32] found no association between milk protein and rumination time in dairy cows in early lactation.

Conclusion

Measurements of RT obtained by direct visual observation proved to be acceptable for the conditions of this study, when cows were housed inside the shed. Rumination time was found to be positively associated with the milk yield of dairy cows fed a PMR based on corn silage. Further research is needed to support the use of RT as a predictor of milk yield under different conditions.

References

  1. Hofmann RR (1988) Anatomy of the gastrointestinal tract. In: The Ruminant Animal: Digestive Physiology and Nutrition. D.C. Church (ed.). Prentice Hall, Englewood Cliffs New Jersey. pp. 14-43.
  2. Church DC (1975) Ingestion and mastication of feed. In Church DC (ed) The Ruminant Animal: Digestive Physiology and Nutrition of Ruminants. Vol. 1, Digestive Physiology (2nd ed.). O&B Books, Corvallis, OR. pp. 46-60.
  3. Ruckebusch Y (1988) Motility of the gastro intestinal tract. D.C. Church (Ed.) The Ruminant Animal: Digestive Physiology and Nutrition. Prentice Hall, Englewood Cliffs NJ pp: 64-107.
  4. Grant RJ, JL Albright (2000) Feeding behavior, in Farm Animal Metabolism and Nutrition. J.P.F. D’Mello, ed. CABI International, Wallingford, UK, Pg No: 365-382.
  5. Calamari L, Soriani N, Panella G, Petrera F, Minuti A, et al. (2014) Rumination time around calving: An early signal to detect cows at greater risk of disease. Dairy Sci 97: 1-13.
  6. Beauchemin KA (1991) Ingestion and mastication of feed by dairy cattle. Clin. North Am. Food Anim. Pract. 7: 439-463.
  7. Metz JHM (1975) Time patterns of feeding and rumination in domestic cattle. Ph. D. Dissertation Agricultural University, Wageningen, The Netherlands.
  8. Yang WZ, KA Beauchemin (2006) Increasing the physically effective fiber content of dairy cow diets may lower efficiency of feed use. Dairy Sci 89: 2694-2704.
  9. Zebeli Q, JR Aschenbach, M Tafaj, J Boguhn, BN Ametaj, et al. (2012) Invited review: Role of physically effective fiber and estimation of dietary fiber adequacy in high-producing dairy cattle. Dairy Sci 95: 1041-1056.
  10. Grant RJ, HM Dann (2015) Biological Importance of Rumination and Its Use On-Farm. Presented at the 2015 Cornell Nutrition Conference for Feed Manufactures. Department of Animal Science in the College of Agriculture and Life Sciences at Cornell University.
  11. Van Soest PJ (1994) Nutritional ecology of the ruminant. Cornell University Press, Ithaca, N.Y., USA.
  12. Ungerfeld R, C Cajarville, MI Rosas, JL Repetto (2014) Time budget differences of high- and low-social rank grazing dairy cows. New Zealand J. Agric. Res 57: 122-127.
  13. Grant RJ (2012) Economic benefits of improved cow comfort. NOVUS int. St. Charles, MO. https://www.dairychallenge.org/pdfs/2015_National/resources/NOVUS.Economic_Benefits.
  14. Cooper MD, DR Arney, CJ Phillips (2007) Two-or four-hour lying deprivation on the behavior of lactating dairy cows. Dairy Sci 90: 1149-1158.
  15. Schirmann K, N Chapinal, DM Weary, W Heuwieser, MAG von Keyserlingk (2012) Rumination and its relationship to feeding and lying behavior in Holstein dairy cows. Dairy Sci 95: 3212-3217.
  16. Grant RJ, VF Colenbrander, JL Albright (1990) Effect of particle size of forage and rumen cannulation upon chewing activity and laterality in dairy cows. Dairy Sci 73: 3158-3164.
  17. Albright JL, CW Arave (1997) The behavior of cattle. CAB International, New York, NY, USA.
  18. Gregorini P, B DelaRue, M Pourau, CB Glassey, JG Jago (2013) A note on rumination behavior of dairy cows under intensive grazing system. Prod. Sci 158: 151-156.
  19. Bar D, R Solomon (2010) Rumination collars: What can they tell us? Pages 214-215 in Proc. First North Am. Conf. Precision Dairy Management, Toronto, Canada.
  20. SCR (2013) Rumination monitoring white paper. SCR Engineers, Ltd. SCR Israel, Netanya, Israel.
  21. Bikker JP, H van Laar, P Rump, J Doorenbos, K van Meurs (2014) Technical note: Evaluation of an ear-attached movement sensor to record cow feeding behavior and activity. Dairy Sci 97: 2974-2979.
  22. Borchers MR, YM Chang, IC Tsai, BA Wadsworth, JM Bewley (2016) A validation of technologies monitoring dairy cow feeding, ruminating and lying behaviors. Dairy Sci 99: 7458-7466.
  23. Schirmann K, MA von Keyserlingk, DM Weary, DM Veira, W Heuwieser (2009) Technical note: Validation of a system for monitoring rumination in dairy cows. Dairy Sci 92: 6052-6055.
  24. Ambriz-Vilchis V, NS Jessop, RH Fawcett, DJ Shaw, AI Macrae (2015) Comparison of rumination activity measured using rumination collars against direct visual observation and analysis of video recordings of dairy cows in commercial farm environments. Dairy Sci 98: 1750-1758.
  25. Sjostrom LS, BJ Heins, MI Endres, RD Moon, JC Paulson (2016) Short communication: Relationship of activity and rumination to abundance of pest flies among organically certified cows fed 3 levels of concentrate. Dairy Sci 99: 9942-9948.
  26. Reist M, D Erdin, D von Euw, K Tschuemperlin, H Leuenberger (2002) Estimation of energy balance at the individual and herd level using blood and milk traits in high-yielding dairy cows. Dairy Sci 85: 3314-3327.
  27. Kononoff PJ, AJ Heinrichs, DR Buckmaster (2003) Modification of the Penn State Forage and Total Mixed Ration Particle separator and the Effects of Moisture Content on its Measurements. Dairy Sci 86: 1858-1863.
  28. White RR, MB Hall, JL Firkins, PJ Kononoff (2017) Physically adjusted neutral detergent fiber system for lactating dairy cow rations. II: Development of feeding recommendations. Dairy Sci 100: 9569-9584.
  29. Zetouni L, GF Difford, J Lassen, MV Byskov, E Norberg (2018) Is rumination time an indicator of methane production in dairy cows? Dairy Sci 101: 11074-11085.
  30. Soriani N, G Panella, L Calamari (2013) Rumination time during the summer season and its relationship with metabolic conditions and milk production. Dairy Sci 96: 5082-5094.
  31. Beauchemin KA (2018) Invited review: Current perspectives on eating and rumination activity in dairy cows. Dairy Sci 101: 4762-4784.
  32. Kaufman EL, VH Asselstine, SJ LeBlanc, TF Duffield, TJ DeVries (2017) Association of rumination time and health status with milk yield and composition in early-lactation dairy cows. Dairy Sci 101: 462-471.

Incidence of COVID-19 Infection in Advanced Lung Cancer Patients Treated with Chemotherapy and or Immunotherapy

DOI: 10.31038/JCRM.2022522

Abstract

Background: The standard treatment for advanced non-small cell lung cancer without driver mutations is chemotherapy and/or immunotherapy. Few data are available on the incidence of Coronavirus Disease 2019 (COVID-19) in these patients compared to the general population, and it is not known whether this incidence is higher among patients receiving chemotherapy rather than immunotherapy.

Methods: We retrospectively collected data from advanced lung cancer patients treated consecutively with chemotherapy and/or immune-checkpoint inhibitors from 1st April 2020 to 31st December 2020. We performed an oral-nasopharyngeal swab within 48 hours of the start of treatment and repeated it every other cycle. A swab was also required in case of the appearance of symptoms suggestive of COVID-19. In the present work, we evaluated both the correlation between COVID-19 and the type of anticancer treatment and the incidence of positive swabs in patients with lung cancer and in the general population of our province.

Results: The rate of COVID-19 in our patients with lung cancer was 8.4% (4 out of 43). In the same period, the percentage of positive swabs in the resident population of our province was 1.3% (range 0.08-3.2). All but one of the lung cancer patients recovered without specific therapy and without need for hospitalization. The molecular swab became negative after a median period of 36 days (range 21-46). One chemotherapy-treated patient died of COVID-19 at home. We grouped cancer patients into two categories: those receiving chemotherapy only and those treated with chemotherapy plus an immune-checkpoint inhibitor or an immune-checkpoint inhibitor alone. We observed no statistical difference in the incidence of COVID-19.

Conclusion: Our data suggest that patients with advanced lung cancer were at higher risk of COVID-19 compared to the general population, and that there was no difference in the incidence of infection between patients treated with chemotherapy and those receiving immunotherapy.

Keywords

COVID-19, Advanced NSCLC, Immune-checkpoint inhibitors

Introduction

Lung cancer is the leading cause of cancer-related deaths in Western countries. Non-Small-Cell Lung Cancer (NSCLC) accounts for more than 85% of primary lung cancers and approximately two-thirds of NSCLC patients are diagnosed at an advanced stage and their prognosis remains poor [1].

The discovery of driver oncogene alterations, such as Epidermal Growth Factor Receptor (EGFR) mutations and Anaplastic Lymphoma Kinase (ALK) rearrangements, and the identification of their targeted inhibitors have dramatically improved outcomes in highly selected patients [2,3]. In parallel, improvements in the understanding of cancer immunoediting and the discovery of immune-checkpoint inhibitors have provided important new treatment opportunities for driver-mutation-negative NSCLC [4,5]. Immune-checkpoint inhibitors (administered alone or in combination with chemotherapy) have become the standard of care in metastatic disease and have gained a role as maintenance therapy after chemo-radiation in locally advanced disease [6]. A recent report highlights a mortality reduction in patients with advanced driver-mutation-negative NSCLC, probably due to the introduction of these new strategies into daily clinical practice [7]. Chemotherapy alone remains the treatment reserved for patients without a driver mutation who have specific contraindications to immunotherapy (e.g. autoimmune disorders).

COVID-19, a respiratory tract infectious disease caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), has been spreading worldwide since late 2019 [8]. The rapid circulation of the virus, and the hypothesis that patients with cancer could be particularly at risk if infected, led many scientific societies to recommend, on the one hand, minimizing hospital admissions (to prevent infection) and, on the other, devising strategies to maintain the therapeutic standard for patients with cancer [9,10]. Evidence that patients with a history of cancer have a higher mortality rate due to COVID-19 compared with the general population has accumulated over time [11-15]. Patients with lung cancer may be more susceptible to infection by SARS-CoV-2. This susceptibility is probably multifactorial: systemic immunosuppression caused by the tumour itself or by anticancer treatments, the older age of lung cancer patients compared with patients with other types of cancer, and the higher prevalence of chronic lung diseases, cardiovascular comorbidities and smoking exposure in this population [16].

We paid particular attention to immune-checkpoint inhibitors, whose pulmonary adverse events were thought, at least theoretically, to potentially complicate and/or mask a SARS-CoV-2 infection, even in the absence of scientific evidence [17].

The purpose of the present study is twofold: to evaluate whether the incidence of SARS-CoV-2 infection in patients with NSCLC is higher than in the general population of our province (Lucca), and to compare the incidence of SARS-CoV-2 infection in patients receiving immunotherapy with that in patients treated with chemotherapy.

Patients and Methods

This is a retrospective study carried out at the Medical Oncology Unit of Lucca, Tuscany region, Italy. Each investigator identified patients through a database. The eligibility criteria were: documented stage IV NSCLC, Eastern Cooperative Oncology Group performance status (ECOG PS) <2 and treatment with an immune-checkpoint inhibitor, chemotherapy or both, as indicated in daily clinical practice. An adequate bone marrow reserve and good liver and renal function were required. The exclusion criteria were: EGFR, ALK or ROS-1 aberrations; active or suspected autoimmune disease requiring systemic steroid administration (>10 mg daily, prednisone-equivalent) or other immunosuppressive medications; medical history of active hepatitis B or C; a positive Human Immunodeficiency Virus (HIV) test. We included all eligible patients treated consecutively in the period from 1st April 2020 to 31st December 2020. Patient data were collected retrospectively from medical records and included demographics, histological and molecular characteristics, number of metastatic sites, and number and type of comorbidities. Data on the SARS-CoV-2 positivity rate in our province were collected from Health Ministry reports [18] and are summarized in Tables 1 and 2.

Table 1: Distribution of oral-nasopharyngeal swabs and COVID-positive cases in Lucca province

Day | April (ONS/NP) | May | June | July | August | September | October | November | December
1 | 771/42 | 1289/13 | 1362/1 | 1351/0 | 1385/1 | 1536/0 | 1832/11 | 4650/177 | 10480/73
2 | 802/31 | 1295/6 | 1363/1 | 1351/0 | 1388/3 | 1538/2 | 1874/42 | 4763/113 | 10548/68
3 | 843/41 | 1304/9 | 1364/1 | 1351/0 | 1391/3 | 1551/13 | 1893/19 | 4871/108 | 10640/92
4 | 855/12 | 1308/4 | 1364/0 | 1351/0 | 1392/1 | 1559/8 | 1911/18 | 5038/167 | 10776/136
5 | 872/17 | 1310/2 | 1364/0 | 1351/0 | 1398/6 | 1572/13 | 1937/26 | 5281/243 | 10860/84
6 | 888/16 | 1314/4 | 1364/0 | 1351/0 | 1402/4 | 1583/11 | 1957/20 | 5505/224 | 10949/89
7 | 920/32 | 1316/2 | 1364/0 | 1362/11 | 1405/3 | 1589/6 | 1988/31 | 5739/234 | 11049/100
8 | 954/34 | 1319/3 | 1364/0 | 1362/0 | 1407/2 | 1595/6 | 2015/27 | 6001/262 | 11072/23
9 | 979/25 | 1324/5 | 1364/0 | 1362/0 | 1414/7 | 1596/1 | 2078/63 | 6157/156 | 11130/58
10 | 988/9 | 1328/4 | 1365/1 | 1362/0 | 1417/3 | 1602/6 | 2141/63 | 6328/171 | 11200/70
11 | 1006/18 | 1329/1 | 1366/1 | 1362/0 | 1419/2 | 1621/19 | 2194/53 | 6585/257 | 11285/85
12 | 1020/14 | 1329/0 | 1366/0 | 1362/0 | 1423/4 | 1636/15 | 2235/41 | 6739/154 | 11349/64
13 | 1060/40 | 1331/2 | 1366/0 | 1364/2 | 1429/6 | 1642/6 | 2268/33 | 6944/205 | 11424/75
14 | 1061/1 | 1335/4 | 1366/0 | 1365/1 | 1430/1 | 1642/0 | 2305/37 | 7178/234 | 11484/60
15 | 1073/12 | 1336/1 | 1366/0 | 1365/0 | 1440/10 | 1644/2 | 2362/57 | 7397/219 | 11532/48
16 | 1134/61 | 1338/2 | 1366/0 | 1366/1 | 1444/4 | 1645/1 | 2464/102 | 7707/310 | 11612/80
17 | 1158/24 | 1348/10 | 1367/1 | 1367/1 | 1451/7 | 1668/23 | 2531/67 | 8061/354 | 11692/80
18 | 1165/7 | 1352/4 | 1367/0 | 1367/0 | 1455/4 | 1678/10 | 2609/78 | 8490/429 | 11756/64
19 | 1197/32 | 1352/0 | 1369/2 | 1367/0 | 1457/2 | 1697/19 | 2649/40 | 8674/184 | 11835/79
20 | 1213/16 | 1352/0 | 1369/0 | 1367/0 | 1472/15 | 1721/24 | 2679/30 | 8914/240 | 11902/67
21 | 1215/2 | 1352/0 | 1369/0 | 1371/4 | 1479/7 | 1732/11 | 2746/67 | 9143/229 | 11973/71
22 | 1221/6 | 1356/4 | 1369/0 | 1371/0 | 1485/6 | 1745/13 | 2841/95 | 9376/233 | 12011/38
23 | 1225/4 | 1357/1 | 1370/1 | 1371/0 | 1490/5 | 1752/7 | 3022/181 | 9501/125 | 12074/63
24 | 1230/5 | 1360/3 | 1351/0 | 1374/3 | 1496/6 | 1773/21 | 3212/190 | 9577/76 | 12149/75
25 | 1244/14 | 1360/0 | 1351/147 | 1376/2 | 1497/1 | 1784/11 | 3402/190 | 9710/133 | 12229/80
26 | 1256/12 | 1360/0 | 1351/0 | 1377/1 | 1498/1 | 1788/4 | 3583/181 | 9866/156 | 12307/78
27 | 1265/9 | 1361/1 | 1351/0 | 1380/3 | 1504/6 | 1794/6 | 3693/110 | 10024/158 | 12325/18
28 | 1269/4 | 1361/0 | 1351/0 | 1380/0 | 1509/5 | 1811/17 | 3835/142 | 10172/148 | 12351/26
29 | 1273/4 | 1361/0 | 1351/0 | 1381/1 | 1513/4 | 1812/1 | 4007/172 | 10300/128 | 12408/57
30 | 1276/3 | 1361/0 | 1351/0 | 1384/3 | 1525/12 | 1821/9 | 4271/264 | 10407/107 | 12465/57
31 | / | 1361/0 | / | 1384/0 | 1536/11 | / | 4473/202 | / | 12546/81

ONS: Oral-nasopharyngeal swab; NP: Number of COVID-positive cases

Table 2: Distribution by month of the COVID positivity rate in Lucca province

Month | Total number of ONS | Total NP | Positivity rate (%)
April | 32,433 | 547 | 1.68
May | 41,459 | 85 | 0.20
June | 40,871 | 156 | 0.37
July | 42,355 | 33 | 0.08
August | 44,951 | 152 | 0.34
September | 50,127 | 285 | 0.57
October | 83,007 | 2,652 | 3.20
November | 229,098 | 5,934 | 2.60
December | 359,413 | 2,139 | 0.59
Total | 923,714 | 11,983 | 1.30

ONS: Oral-nasopharyngeal swab; NP: Number of COVID-positive cases
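
As a quick arithmetic check, each rate in Table 2 is simply the month's positive swabs divided by the swabs performed, times 100. Below is a minimal Python sketch; the tuples are copied from Table 2, the variable names are ours, and small differences against the printed table are due to rounding.

```python
# Recompute the positivity rates of Table 2 from its own totals
# (values copied from Table 2; small deviations are rounding).
table2 = [
    ("April", 32_433, 547), ("May", 41_459, 85), ("June", 40_871, 156),
    ("July", 42_355, 33), ("August", 44_951, 152), ("September", 50_127, 285),
    ("October", 83_007, 2_652), ("November", 229_098, 5_934),
    ("December", 359_413, 2_139),
]

for month, ons, positives in table2:
    print(f"{month:<10} {100 * positives / ons:.2f}%")

total_ons = sum(row[1] for row in table2)
total_pos = sum(row[2] for row in table2)
print(f"{'Total':<10} {100 * total_pos / total_ons:.2f}%")  # ~1.30%, as in Table 2
```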

The study was approved by the local ethics committee with Protocol Number 20412 and conducted according to the Good Clinical Practice Guidelines and to the World Medical Association Helsinki Declaration.

Evaluation Criteria

Pre-treatment evaluation included medical history, physical examination, complete blood-cell count with routine chemistry and a Computed Tomography (CT) scan of the chest and abdomen. All patients were asymptomatic at baseline; an oral-nasopharyngeal swab (PCR test) was performed within 48 hours before the start of treatment and repeated before each subsequent cycle of therapy. The oral-nasopharyngeal swab was also required in the presence of symptoms suspicious for COVID-19.

Statistical Analysis

This is a descriptive observational study for which a calculation of the population sample to be included is not necessary. We divided patients into two groups: those who received chemotherapy only and those who received chemotherapy plus immunotherapy or immunotherapy alone. We assessed the correlation between the incidence of positive swabs in treated patients and in the general population, as well as the correlation between positive swabs and patient group, using Fisher's exact test with the significance level set at 0.05.
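
For readers who want to reproduce these two tests, here is a minimal Python sketch using SciPy's fisher_exact. The 2x2 counts are our reading of the Results section and Table 4 (19 chemotherapy-only patients and 22 patients on an immune-checkpoint inhibitor with or without chemotherapy, 2 positive swabs in each group; the 2 clinical-trial patients are left out), and the population comparison contrasts 4/43 patients with 11,983/923,714 swabs, which is our literal reading of the text. Treat it as an illustration under those assumptions, not the authors' actual computation.

```python
# Sketch of the two Fisher's exact tests described above (illustrative;
# group counts are inferred from the Results section and Table 4).
from scipy.stats import fisher_exact

# Lung cancer cohort vs. swabs performed in the resident population:
# each row is [positive, negative].
cohort = [4, 43 - 4]                    # 4 positive patients out of 43
province = [11_983, 923_714 - 11_983]   # positive vs. negative swabs in Lucca
_, p_pop = fisher_exact([cohort, province])
print(f"cohort vs province: P = {p_pop:.4f}")  # small P, in line with the reported 0.0055

# Chemotherapy-only vs. immunotherapy-containing treatment (2 positives each;
# 19 and 22 patients are our inference, excluding the 2 trial patients).
chemo_only = [2, 19 - 2]
immuno_based = [2, 22 - 2]
_, p_grp = fisher_exact([chemo_only, immuno_based])
print(f"chemo vs immuno: P = {p_grp:.2f}")     # P = 1, no difference, as reported
```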

Results

From 1st April 2020 to 31st December 2020, we treated 43 patients who met the previously specified inclusion criteria. All patients tested negative on the molecular swab performed at baseline. Most were men (65.1%), with good performance status (ECOG-PS = 0 in 51.2%) and with adenocarcinoma (55.8%). The clinical characteristics of the patients are listed in Table 3. Eleven patients out of 31 did not have any comorbidity, 9 out of 31 presented one comorbidity and 11 patients had 2 or more comorbidities. The most frequent comorbidities were cardiovascular disease and chronic lung disease.

Table 3: Clinical characteristics

Characteristic | Patients, n (%)
No. of patients | 43
Age, median yrs (range) | 71 (39-84)
Sex
  Male | 28 (65.1)
  Female | 15 (34.9)
ECOG-PS
  0 | 22 (51.2)
  1 | 21 (48.8)
Histology
  Adenocarcinoma | 24 (55.8)
  Epidermoid | 14 (32.6)
  Large cells | 2 (4.6)
  SCLC | 3 (7.0)
Smoking status
  Never | 3 (7.0)
  Former | 29 (67.5)
  Current | 9 (20.9)
  ND | 2 (4.6)
Stage
  IIB | 2 (4.6)
  III | 9 (21.0)
  IV | 32 (74.4)
Number of comorbidities
  0 | 13 (30.2)
  1 | 13 (30.2)
  2 | 14 (32.5)
  >3 | 3 (7.1)
Type of comorbidity, n
  Cardiovascular or cerebrovascular disease | 17
  Lung diseases | 13
  Diabetes and other endocrine disorders | 9
  Chronic kidney failure | 1
  Other malignancies | 2
  Depressive syndrome | 3

ECOG: Eastern Cooperative Oncology Group; PS: Performance Status; SCLC: Small-cell Lung Cancer; ND: Not Declared

Most of our patients were treated in the first line (64.1%): 18 patients received platinum-based chemotherapy alone, 1 received gemcitabine only, 4 received platinum-based chemotherapy plus immune-checkpoint inhibitors (all of these received Platinum-Pemetrexed-Pembrolizumab), and 18 patients were treated with immune-checkpoint inhibitors alone (pembrolizumab, durvalumab, atezolizumab or nivolumab). Finally, 2 patients were included in clinical trials. Only one patient received platinum-based chemotherapy associated with radiotherapy for locally advanced inoperable disease (Table 4). Symptoms suggestive of COVID-19 infection occurred in 5 patients, but only one tested positive on the molecular swab. On the contrary, 3 asymptomatic patients tested positive on screening swabs, for a total of 4 positive patients. Two of them were receiving immune-checkpoint inhibitors and 2 chemotherapy alone.

Table 4: Treatments

Treatment line | N (%)
1st line* | 25 (64.1)
2nd line* | 11 (28.2)
≥3rd line* | 3 (7.7)
*For metastatic disease: 39 patients.

Treatment type | N (%)
Platinum-based chemotherapy | 18 (42)
Monochemotherapy | 1 (2)
Chemotherapy + immune-checkpoint inhibitors | 4 (9.5)
Immune-checkpoint inhibitors | 18 (42)
Clinical trials | 2 (4.5)

Regimen | N
Platinum-Gemcitabine | 6
Platinum-Pemetrexed | 4
Platinum-Paclitaxel | 4
Platinum-Vinorelbine | 2
Platinum-Etoposide | 2
Gemcitabine | 1
Platinum-Pemetrexed-Pembrolizumab | 4
Pembrolizumab | 7
Nivolumab | 7
Durvalumab | 3
Atezolizumab | 1

We observed no correlation between number or type of comorbidities and incidence of COVID-19.

Overall, in the period from 1st April 2020 to 31st December 2020 the rate of SARS-CoV-2 infection in our population of NSCLC patients was 8.4% (4 out of 43). No patient received any specific therapy for COVID-19; the molecular swab became negative after a median period of 36 days (range 21-46). One patient undergoing chemotherapy died of COVID-19 at home. We did not observe any statistical difference in the incidence of COVID-19 infection between patients receiving chemotherapy only and those treated with chemotherapy plus an immune-checkpoint inhibitor or an immune-checkpoint inhibitor alone (Fisher's exact test; P=1). In the same period, as reported by Health Ministry data [18], a total of 923,714 oral-nasopharyngeal swabs (PCR tests) were performed in our province (387,876 inhabitants) and the total number of positive tests was 11,983, for a positivity rate of 1.3% (range 0.08-3.2) (Tables 1 and 2).

The incidence of COVID-19 infection among our lung cancer patients was significantly higher than in the general population (Fisher's exact test, P=0.0055).

Discussion

Maintaining cancer care during the pandemic has represented a challenge that has required new flexible strategies and a careful weighing between the COVID-19 risk and the optimal oncological therapeutic standard.

To date, there has been no standard-of-care approach for treating patients with lung cancer during the pandemic. Several organizations and groups of experts have shared general recommendations for the management of cancer patients [19-21]. The European Society for Medical Oncology (ESMO), for example, recommended prioritizing outpatient visits for patients with a new diagnosis of lung cancer, in order to keep the standard work-up without undue delay [22,23].

However, early in the pandemic it was clear that patients with chronic diseases, including cancer patients, presented a greater risk of severe COVID-19, with high mortality [24-29]. Moreover, patients with lung cancer seemed to be particularly vulnerable to lung infections compared with those with other cancers or with the general population [30]. This observation agrees with our data. In fact, in our series of lung cancer patients, for whom the baseline molecular swab and the subsequent periodic screening tests were mandatory, we registered a positivity rate of 8.4% over 9 months. This figure was significantly higher than that observed in the resident population in the same period, which was 1.3% (P=0.0055) [18].

The higher rate of positivity could be partially explained by the median age of our patients at diagnosis (71 years) and by the fact that 65% of them had two or more comorbidities in addition to metastatic lung cancer. The report from Memorial Sloan Kettering Cancer Center (MSKCC) suggested that several baseline clinical features were associated with increased COVID-19 severity, including age, obesity, smoking history, chronic lung disease, hypertension and congestive heart failure. On the contrary, cancer-specific features, such as the presence of active/metastatic lung cancer, a history of prior thoracic radiation or thoracic surgery, or PD-L1 immunohistochemistry, did not appear to impact the severity of COVID-19. The report concluded that patient-specific features, rather than cancer-specific characteristics and type of treatment, are the most significant determinants of COVID-19 severity [31]. However, the multivariate analysis of the TERAVOLT study showed that smoking history was the only feature associated with COVID-19 death in lung cancer patients [32].

Although our sample size is too small to draw definitive conclusions, in our series the number and severity of comorbidities did not impact on COVID 19 severity.

One of the 4 COVID-positive lung cancer patients died, a mortality rate of 25%.

Our data seem to agree with those available in other reports [16,31-33] and suggest that patients with thoracic cancer have a higher risk of death than those with other types of cancer and than the general population. In addition, Spanish data showed that the mortality rate might be higher in lung cancer patients (32.7%) [16], in agreement with the meta-analyses of Saini and colleagues [28] and Tagliamento and colleagues [34]. Similar results were reported by the TERAVOLT registry in patients with thoracic malignancies [32] and by the UK Coronavirus Cancer Monitoring Project (UKCCMP) [35]. On the contrary, in a Chinese meta-analysis the authors did not show a significant difference in mortality between lung cancer patients and those with other types of tumors [36].

One of our patients died without being admitted to an intensive care unit; as the life expectancy of patients with advanced lung cancer has increased with the introduction of new treatment options, their early access to the intensive care unit should be taken into account and decided by a multidisciplinary team [37]. In the COVID-19 era, many lung cancer-related symptoms such as cough, fever and asthenia, as well as some treatment-related adverse events, can be misinterpreted and might complicate daily clinical management. In addition, the pulmonary adverse events of immunotherapy need careful evaluation so as not to be confused with SARS-CoV-2 pneumonia. Moreover, radiographic findings of COVID-19 may be indistinguishable from pneumonitis caused by lung cancer treatment, including immunotherapy [38]. We observed that programmed death 1 (PD-1) blockade exposure was not associated with increased risk or severity of COVID-19; in fact, we did not find any difference in COVID-19 infection rate between treatment with immune-checkpoint inhibitors and chemotherapy. We can hypothesize that immunotherapy neither increases susceptibility to COVID-19 infection nor increases mortality. Luo and colleagues [39] and Trapani and colleagues [40] suggested that there was no significant difference in COVID-19 severity regardless of PD-1 blockade exposure. The TERAVOLT [32] and CCC19 [26] studies reached the same conclusions.

The main limitation of our study is the sample size, which affects the ability to perform adjustments for multiple potential confounding factors. Moreover, a control group of non-cancer patients or other-cancer patients is missing. Larger studies are needed in order to generalize these results.

Conclusion

We suggest that patients with advanced lung cancer are very fragile and seem to be at higher risk of SARS-CoV-2 infection and COVID-19 mortality compared with the general population. Moreover, we observed no difference in the incidence of COVID-19 between patients treated with chemotherapy and those receiving immunotherapy. Finally, in the management of these fragile patients the risk-benefit ratio of anticancer therapy must be carefully evaluated, and early and prompt COVID-19 treatment should be considered in case of infection.

References

  1. Jemal A, Bray F, Center MM, Ferlay J, Ward E, et al. (2011) Global cancer statistics. CA Cancer J Clin 61: 69-90. [crossref]
  2. Lee JK, Hahn S, Kim DW, Suh KJ, Keam B, et al. (2014) Epidermal growth factor receptor tyrosine kinase inhibitors vs conventional chemotherapy in non-small cell lung cancer harboring wild-type epidermal growth factor receptor: a meta-analysis. JAMA 311: 1430-1437. [crossref]
  3. Shaw AT, Kim DW, Nakagawa K, Seto T, Crinó L, et al. (2013) Crizotinib versus chemotherapy in advanced ALK-positive lung cancer. N Engl J Med 368: 2385-2394. [crossref]
  4. Keir ME, Butte MJ, Freeman GJ, Sharpe AH (2008) PD-1 and its ligands in tolerance and immunity. Ann Rev Immunol 26: 677-704. [crossref]
  5. Chen DS, Mellman I (2013) Oncology meets immunology: the cancer-immunity cycle. Immunity 39: 1-10. [crossref]
  6. Russo A, McCusker MG, Scilla KA, Arensmeyer KE, Mehra R, et al. (2020) Immunotherapy in Lung Cancer: From a Minor God to the Olympus. Adv Exp Med Biol 1244: 69-92. [crossref]
  7. Siegel RL, Miller KD, Jemal A (2020) Cancer Statistics, 2020. CA Cancer J Clinic 70: 7-30.
  8. Chen N, Zhou M, Dong X, Qu J, Gong F, et al. (2020) Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet 395: 507-513. [crossref]
  9. Indini A, Aschele C, Cavanna L, Clerico M, Daniele B, et al. (2020) Reorganisation of medical oncology departments during the novel coronavirus disease-19 pandemic: a nationwide Italian survey. Eur J Cancer 132: 17-23. [crossref]
  10. Blais N, Bouchard M, Chinas M, Lizotte H, Morneau M, et al. (2020) Consensus statement: summary of the Quebec Lung Cancer Network recommendations for prioritizing patients with thoracic cancers in the context of the COVID-19 pandemic. Curr Oncol 27: e313-e317. [crossref]
  11. Docherty AB, Harrison EM, Green CA, Hardwick HE, Pius R, et al. (2020) Features of 20 133 UK patients in hospital with covid-19 using the ISARIC WHO Clinical Characterisation Protocol: prospective observational cohort study. BMJ 369: m1985. [crossref]
  12. Pinato DJ, Lee AJX, Biello F, Seguí E, Aguilar-Company J, et al. (2020) Presenting features and early mortality from SARS-CoV-2 infection in cancer patients during the initial stage of the COVID19 pandemic in Europe. Cancers (Basel) 12: 1841. [crossref]
  13. Lievre A, Turpin A, Ray-Coquard I, Le Malicot K, Thariat J, et al. (2020) Risk factors for Coronavirus Disease 2019 (COVID-19) severity and mortality among solid cancer patients and impact of the disease on anticancer treatment: a French nationwide cohort study (GCO-002 CACOVID-19). Eur J Cancer 62-81. [crossref]
  14. Kuderer NM, Choueiri TK, Shah DP, Shyr Y, Rubinstein SM, et al. (2020) Clinical impact of COVID-19 on patients with cancer (CCC19): a cohort study. Lancet 395: 1907-1918. [crossref]
  15. Lee AJX, Purshouse K (2021) COVID-19 and cancer registries: learning from the first peak of the SARS-CoV-2 pandemic. Br J Cancer 124: 1777-1784.
  16. Provencio M, Mazarico Gallego JM, Calles A, Antoñanzas M, Pangua C, et al. (2021) Lung cancer patients with COVID-19 in Spain: GRAVID study. Lung Cancer 157: 109-115. [crossref]
  17. Robilotti EV, Babady NE, Mead PA, et al. (2020) Determinants of COVID-19 disease severity in patients with cancer. Nat Med 26: 1218-1223.
  18. https://www.salute.gov.it/portale/home
  19. Dingemans AC, Soo RA, Jazieh AR, Rice SJ, Kim YT, et al. (2020) Treatment Guidance for Patients with Lung Cancer during the Coronavirus 2019 Pandemic. J Thorac Oncol 15: 1119-1136. [crossref]
  20. Singh AP, Berman AT, Marmarelis ME, et al. (2020) Management of Lung Cancer during the COVID-19 Pandemic. JCO Oncol Pract 16: 579-586. [crossref]
  21. Lambertini M, Toss A, Passaro A, et al. (2020) Cancer care during the spread of coronavirus disease 2019 (COVID-19) in Italy: young oncologists’ perspective. ESMO Open 5: e000759. [crossref]
  22. Passaro A, Addeo A, Von Garnier C, Blackhall F, Planchard D, et al. (2020) ESMO Management and treatment adapted recommendations in the COVID-19 era: Lung cancer ESMO Open 5. [crossref]
  23. Curigliano G, Banerjee S, Cervantes A, Garassino MC, Garrido P, et al. (2020) Managing cancer patients during the COVID-19 pandemic: an ESMO multidisciplinary expert consensus. Ann Oncol 31: 1320-1335. [crossref]
  24. Zhou F, Yu T, Du R, et al. (2020) Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet 395: 1054-1062.
  25. Zhou Y, Yang Q, Yeet J, Wu X, Hou X, et al. (2021) Clinical features and death risk factors in COVID-19 patients with cancer: a retrospective study. BMC Infect Dis 21: 760. [crossref]
  26. Grivas P, Khaki AR, Wise-DraperTM, French B, Hennessy C, et al. (2021) Association of clinical factors and recent anticancer therapy with COVID-19 severity among patients with cancer: a report from the COVID-19 and Cancer Consortium. Ann Oncol 32: 787-800. [crossref]
  27. Tian J, Yuan X, Xiao J, Zhong Q, Yang C, et al. (2020) Clinical characteristics and risk factors associated with COVID-19 disease severity in patients with cancer in Wuhan, China: a multicentre, retrospective, cohort study. Lancet Oncol 21: 893-903. [crossref]
  28. Saini KS, Tagliamento M, Lambertini M, McNally R, Romano M, et al. (2020) Mortality in patients with cancer and coronavirus disease 2019: a systematic review and pooled analysis of 52 studies. Eur J Cancer 139: 43-50. [crossref]
  29. Sharafeldin N, Bates B, Song Q, Madhira V, Yan Y, et al. (2021) Outcomes of COVID-19 in Patients with Cancer: Report From the National COVID Cohort Collaborative (N3C). J Clin Oncol 39: 2232-2246. [crossref]
  30. Rogado J, Pangua C, Serrano-Montero G, Obispo B, Marino AM, et al. (2020) Covid-19 and lung cancer: A greater fatality rate? Lung Cancer 146: 19-22. [crossref]
  31. Luo J, Rizvi H, Preeshagul IR, Egger JV, et al. (2020) COVID-19 in patients with lung cancer. Ann Oncol 10: 1386-1396. [crossref]
  32. Garassino MG, Whisenant JG, Huang L-C, et al. (2020) COVID-19 in patients with thoracic malignancies (TERAVOLT): first results of an international, registry-based, cohort study. Lancet Oncol 21: 914-922.
  33. Piper-Vallillo AJ, Mooradian MJ, Meador CB, Yeap BY, Peterson J, et al. (2021) Coronavirus Disease 2019 Infection in a Patient Population with Lung Cancer: Incidence, Presentation, and Alternative Diagnostic Considerations. JTO Clin Res Rep 2: 100124. [crossref]
  34. Tagliamento M, Agostinetto E, Bruzzone M, Ceppi M, Saini KS, et al. (2021) Mortality in adult patients with solid or hematological malignancies and SARS-CoV-2 infection with a specific focus on lung and breast cancers: A systematic review and meta-analysis. Crit Rev Oncol Hematol 163: 103365. [crossref]
  35. Lee LYW, Cazier JB, Starkey T, Briggs SEW, Arnold R, et al. (2020) COVID19 prevalence and mortality in patients with cancer and the effect of primary tumour subtype and patient demographics: a prospective cohort study. Lancet Oncol 21: 1309-1316. [crossref]
  36. Lei H, Yang Y, Zhou W, Zhang M, Shen Y, et al. (2021) Higher mortality in lung cancer patients with COVID-19? A systematic review and meta-analysis. Lung Cancer 157: 60-66. [crossref]
  37. Nadkarni AR, Vijayakumaran SC, Gupta S, Divatia JV (2021) Mortality in Cancer Patients With COVID-19 Who Are Admitted to an ICU or Who Have Severe COVID-19: A Systematic Review and Meta-Analysis. JCO Glob Oncol 7: 1286-1305. [crossref]
  38. Calabro’ L, Peters S, Soria JC, Di Giacomo AM, Barlesi F, et al. (2020) Challenges in lung cancer therapy during the COVID-19 pandemic. Lancet Respir Med 8: 542-544. [crossref]
  39. Luo J, Rizvi H, Egger JV, Preeshagul IR, Wolchok JD, et al. (2020) Impact of PD-1 Blockade on Severity of COVID-19 in Patients With Lung Cancers. Cancer Discov 10: 1121-1128. [crossref]
  40. Trapani D, Marra A, Curigliano G (2020) The experience on coronavirus disease 2019 and cancer from an oncology hub institution in Milan, Lombardy Region. Eur J Cancer 132: 199-206. [crossref]

Reporting and Journalistic Ethics as an Ecological Issue in Contemporary Times

DOI: 10.31038/CST.2022724

Summary

The report is a narration of the present. It distinguishes itself from literature by its commitment to informative objectivity. At the same time, cyberjournalism, by presenting new possibilities of "storytelling", ends up transforming the practice of print journalism and its genres. In this scenario, the ethics of the press, journalistic work and the journalist's function become a lifelong field of study and investigation, especially when we consider how this function shapes and influences social life, the well-being of man, the oikos. It is implicitly investigated in what sense the cyberjournalist is still a journalist, indispensable to society, or has become a "virtual gossiper". In this context, we ask how journalistic information and journalistic ethics operate: how much of the facts we narrate actually reaches the population as facts and sources of real information and self-care, ecologically present in the life of society? Since ecology, as we understand it today, is much more than thinking about the environment, encompassing all life in society, we believe the journalistic function is essential to ecological well-being. Ethics, necessarily present in all our doing, is what makes life, and this good living, ecologically active. Without ethics there is not even the idea of ecology.

Keywords

Journalism, Desertification, Transport, Energy, Technology, Water

First of all, we must explain that the understanding today of what ecology is has expanded enormously: what was previously thought of as the study of the environment, of the felling of trees, of fires, of the ozone layer and, as a consequence, of a super-invasion of UV rays, has become the broad study of living in society, and beyond that, a care of the self, a search for good living in a broad sense. Thus, we revisit some concepts of journalism and its present-day expansion, cyberjournalism, fake news and the discomfort caused by the misuse of information, or even flawed and inaccurate information, that reaches a large part of the population.

Journalism has always lived in constant transformation. Sometimes in terms of form, sometimes in terms of content, journalistic narrative founded styles, influenced literature, disseminated facts, informed, formed public opinion, provoked controversies, incited disputes, and transformed the world by transforming itself. The report – where the news is told, narrated – is a privileged journalistic genre. It is a narrative – with characters, dramatic action and description of the environment – separated from literature by a commitment to informative objectivity. This mandatory link with objective information reminds us that, whatever the type of report, the "pure direct style" is imposed on the writer: narration without comments, without subjectivations. The exemption from subjectivity, or supposed neutrality, is increasingly utopian, unattainable, almost impossible.

Cyberjournalism, with exponentially expanded platforms, unlimited space and the possibilities of hypertext, has transformed journalism and, consequently, journalistic narrative. After all, cyberjournalism is not just a transposition of printed texts and images to the Internet environment. Journalism ends up different, with singularities and particularities specific to this new "storytelling". Cyberjournalism, in "competition" for the reader, thus also modifies print journalism and its genres, especially news and reporting. In this sense, reflecting on journalistic ethics, the ethics of the press, is an indispensable task for everyone enrolled in this field of work, a task that is reflected in the very practice of the profession. The analysis of the journalist's role, and of the formation of this professional, is therefore relevant in this scenario.

Given the transformations of journalistic work – ever clearer in the face of changes in the media, since each medium ends up imposing ways of operating on the others, especially after the emergence of cyberjournalism – what is the role of the journalist today, and therefore his ethics: does he remain indispensable to society, or has he become just a "virtual gossiper", in a world where reality ends up submerged in a "subreality" of "facts", in which the real is only what is conveyed by the media, distorted or even invented?

If the 17th and 18th centuries were those of publicist journalism, and the 19th century that of educational and sensationalist journalism, the 20th century was that of testimonial journalism [1]. This does not mean that everyone (citizens, journalists, press entrepreneurs) understood them that way. Social representations endure beyond the conditions that gave rise to them: the publicist vision of journalism survived, as did the sensationalist and educational visions, along with the journalistic practices that fall into each of these categories.

The fact, according to Lage (2003), however, is that information is no longer just or mainly a factor of cultural enrichment or recreation; it has become essential to people's lives.

Information thus becomes a fundamental raw material, and the journalist becomes a translator of speeches. In short, the reporter, in addition to translating, must confront different perspectives and select facts and versions that allow the reader to orient himself in the face of reality. The public's right to information is a fundamental rule for journalists, though not for many of their interlocutors, even liberal ones. It is also (cf. Lage, 2003) the basis of any ethics acceptable to journalists: "however, what is informed to the public is what is of their real interest, not always of their curiosity" (p.94-95). As far as sources are concerned, ethically, their right is to have the content (not the form) of what they reveal preserved. This means respecting not only the semantic value of what is informed, but also the inferences that result from comparing what was informed with the context of the information. It is up to the journalist to pursue the truth of the facts in order to inform the public well. In this sense, journalistic activity fulfills a social function before being a business. [2] also adds that objectivity and balance are values that underpin good reporting. The discussion of journalistic work, based on applied ethics, press ethics or journalist ethics, is thus essential to the practice of news and reporting.

Corroborating this idea of journalistic practice and ethics, take an example of an ecological question reported back in the 20th century, more precisely in 1957, when the magazine Seleções carried an extensive report on lithium, an abundant metal on Earth, just as studies on its exploitation and use were beginning: "Having recently appeared on the industrial scene, hundreds of applications will be able to help in the operation of the thermonuclear generating plants of the future". The report goes on to say that in 1817 Johan August Arfwedson identified the silvery-white metal and that, 125 years later, it was still considered useless: common textbooks of industrial chemistry did not even mention it. The metal is found throughout the crust of the globe; every shovel of earth we dig up in our garden contains traces of it. No one knew what to do with a substance of such singular characteristics: if not kept immersed in oil or in an airtight container, a solid piece of the metal decomposes. The metal is now used in many thermonuclear materials and as a fuel in intercontinental rockets.

Let's talk a little about the metal: lithium is the lightest solid element in existence. It is third among the lightest elements by atomic weight in the universe; only hydrogen and helium, both gases, have a lower atomic weight. Lithium floats in gasoline. When a match touches it, it burns with an intense white flame and melts. A knife cuts it like cheese. It has an insatiable appetite for water and air: immersed in water, it effervesces like soda. Before World War II, it was used as an ingredient in the Edison accumulator, used in mine locomotives and submarines, where it contributed to providing a constant surge of energy. Because it releases hydrogen when combined with water, packs of lithium hydride were placed in rescue kits to inflate balloons carrying radio antennas, marking the place where pilots of downed planes and life rafts were [4-22].

Lithium compounds were used in submarines to purify the air, absorbing carbon dioxide and other noxious gases, and to de-ice aircraft wings, as lithium's freezing point is low and it has an affinity for water. LiOH is also used in the manufacture of lubricants for hotter, colder and wetter climates, where other greases melt, freeze or become saturated with water.

In 1948, there was a new surge in the use of and studies on lithium, with a view to the manufacture of air conditioning and refrigeration; the lithium present in these devices absorbs moisture like a sponge. Bathrooms, refrigerators and enamelware of all kinds are manufactured with lithium. Lithium compounds are also used in the production of synthetic vitamin A and antihistamines. Added to skin creams, lithium keeps them solid in the heat and soft in the cold. It is also present in the manufacture of items such as optical lenses, phonograph records and blackboards: with it, the chalk slides without noise. Added to oil, it acts as a detergent, cleaning the engine while lubricating it. In 1952, a shortage of the material arose, with large orders for LiOH from the US Atomic Energy Commission – a mystery at the time, as lithium is neither radioactive nor fissile. Lithium reserves in the Earth's crust and in ocean waters are practically inexhaustible. Lithium compounds also serve as high-energy fuel for rockets and guided projectiles. There are lithium deposits all over the globe, but the biggest reserve is in North America. Today lithium is widely used in medicines to combat the disease of the century, depression, and the old manic-depressive psychosis (today, bipolar disorder) and, amazingly, in computer and cell phone batteries (in all the technological components of the so-called "wonderful future" that presents itself to us).

Let us take up, from the example above, how important and absolutely necessary journalistic ethics is. In the discussion of ethics and the press, Bucci (2000) cites Paul Johnson, an influential thinker in contemporary liberal thought. A historian, essayist and journalist, Johnson is the author of articles in the British magazine The Spectator which have served as a reference for the debate on ethics in the press around the world – not for what they pontificate, but for the problems they point out. He proposes an analysis grid for the most frequent errors in journalism: he listed seven deadly sins and, as antidotes, ten commandments.

The first of the seven deadly sins he points out is "distortion, deliberate or inadvertent", perhaps the most crass, followed by the cult of false images: "When journalism moves more than it informs, there is an ethical problem, which is the negation of its function of promoting the debate of ideas in the public space" (BUCCI, 2000, p.144-145). One of the main ethical functions of the press – whose obligation is to critically report events – has become to criticize the cult of false images, a function it rarely takes care of (p.147). Still on the list of deadly sins, Johnson includes the invasion of privacy, the murder of reputation, the overexploitation of sex, the poisoning of children's minds and the abuse of power.

Against these ills and failures, Johnson lists ten commandments: 1. a dominant desire to discover the truth; 2. thinking about the consequences of what is published; 3. telling the truth is not enough – it can be dangerous without informed judgment; 4. an impulse to educate; 5. distinguishing public opinion from popular opinion; 6. willingness to lead; 7. courage; 8. willingness to admit one's mistakes; 9. general equity; 10. respect for and honoring of words. Lists like this one, by Paul Johnson, are present in studies of ethics and the press, grounded in other ways, drawn from other references, or even reformulated. Marcelo Leite, former Folha de São Paulo ombudsman, and Ciro Marcondes Filho, in A Saga dos Cães Perdidos (The Saga of the Lost Dogs), for example, created other lists aiming to guide journalists in their work.

Bucci (2000) emphasizes that it is the right of access to information (and culture) that democratically justifies the existence of all forms of social communication. Ethics is present in every decision that seeks quality information. Openly debating ethical issues, in the light of real events, is a public service: it educates the critical spirit of citizens and helps to improve the press. Bucci (2000) also recalls the importance of differentiating what is in the public interest from what is the perverse curiosity of the public (which demands scandal, hurt whomever it may). Undoubtedly, no one can draw a universal boundary between one and the other: "there is no abstract recipe that is valid for all situations, but the simple reminder of this caution already brings more elements to a good decision on the concrete cases that present themselves" (p.155).

These issues can be better analyzed, interpreted and explained with practical examples, seeking to identify, from these references and others (philosophical, sociological, psychoanalytic), how ethics applied to the press is involved. Good examples can be found in what is conventionally called “report-books”, more extensive, contextualized reports that allow the reader, sometimes, to analyze not only the explicit content, but also the journalistic practice itself, like Truman Capote, in “In Cold Blood” [3].

Lage (2003) states that "what happens to celebrities and type-characters draws the attention not only of journalists, but of anyone" (p.97). Ecology, literally, is the study of the oikos: the home, the house. What can we say about our Earth/house/home? Based on what we have just presented as journalistic information, journalistic ethics and their actions, how much of the facts that we are now narrating actually reaches the population as facts and sources of real information and self-care, ecologically present in the life of society?

Let us cite more examples to corroborate our ecologically posed questions. For many years, large cattle producers, in order to obtain a better response from the land, used to burn extensions of land so that the grass would grow back more showy (including to better feed the cattle). Many earth-science scholars said that, in the short term, the response to fires really was a more fertile soil, but that over a slightly longer period what you got (and get) is soil erosion and, therefore, desertification. Some even said that the Sahara is the result of such practices. Even today, coverage of the fires in Brazil, extensively criticized by international bodies and the press, relies only on information about the ozone layer. What, effectively, do we do and inform about all the factors involved in the burning of land, trees, soil and its components?

It is known that the use of railways, in addition to being "romantically" more pleasant, pollutes less and brings more savings to the environment. Even so, in a country the size of Brazil, more was invested in highways and road transport – more pollution, more spending on roads and almost total dependence on oil, since, in addition to gasoline, diesel (highways) and kerosene (aviation), we still use oil-company waste for paving (the holes in the highways, and the corruption among the "builders" who "work" the highways, are just a consequence!). We have reached the absurdity of, in a certain city, cementing over train tracks instead of reactivating them, at least for the transport of grain production.

The latest news from this pandemic period is that Europe is experiencing a lack of manpower, including for the road transport of food, causing shortages for the population. Investments in solar and wind energy, particularly in Brazil, would bring us world leadership in terms of economy and use of clean energy, in addition to climate benefits, since we have sunshine practically all year round and winds viable for wind power. But note: what should be researched is the direct use of these energies, not just their use as a "bridge" into electrical grids (as this leads us to diversions and more corruption). Cutting down trees and planting new ones "in place" of the many felled has already become commonplace. But nobody says that the tree is a living being, and that by cutting it down, we kill. When someone dies, does another replace that one?

It appears that lithium batteries are being replaced by sodium-ion batteries, now produced by the Chinese giant CATL, on the grounds of lithium shortages. Will they be? Sodium batteries have a lower energy density, but they allow fast charging and are more resistant to low temperatures. We are entering a serious water crisis, even in regions with large extensions of forest. Europe has been suffering from water problems for a long time (see its large perfume industry). Lithium does not "just" suck water from water bodies, but from the environment, reducing the water we so desperately need for survival. Lithium has also long been used by the pharmaceutical industry as a drug to treat what is now called bipolar disorder, but even the doctors who prescribe it cannot effectively explain how it works in the body.

What are we actually doing with life on Earth? Why does no one think, or even open dialogue, about these issues? Why are these discussions and themes never even brought up in meetings on the climate, on the environment? Do the economic gains obtained outweigh the damage caused?

References

  1. LAGE, Nelson (2003) The report: theory and technique of interview and journalistic research.
  2. BUCCI, Eugênio (2000) On ethics and the press. São Paulo: Companhia das Letras.
  3. SODRÉ, Muniz; FERRARI, Maria Helena (1986) Reporting technique: notes on journalistic narrative.
  4. ARISTOTLE (1985) Nicomachean Ethics.
  5. D’AGOSTINI, Franca (1999) Analytics and Continentals: A Guide to the Philosophy of the Last Thirty Years.
  6. DELEUZE, Gilles; GUATTARI, Félix (1966) The Anti-Oedipus: Capitalism and Schizophrenia.
  7. DERRIDA, Jacques (2001) States-of-the-soul of psychoanalysis.
  8. HABERMAS, Jürgen (1991) Comments on the Ethics of Discourse.
  9. JURANVILLE, Alain (1987) Lacan and Philosophy.
  10. KEHL, Maria Rita (2002) On ethics and psychoanalysis.
  11. LACAN, Jacques (1998) Seminar 7: the ethics of psychoanalysis.
  12. MARCONDES FILHO, Ciro (2000) The saga of the lost dogs.
  13. MARCONDES FILHO, Ciro. The social production of madness.
  14. NOVAES, Adauto (org.) (1992) Ethics. São Paulo: Companhia das Letras.
  15. OLIVEIRA, Manfredo (2000) Fundamental currents of contemporary ethics.
  16. PENA, Felipe (2013) Journalism Theory.
  17. PLATO (1991) The Banquet (The Thinkers Collection).
  18. RAJCHMAN, John (1993) Eros and Truth – Lacan, Foucault and the question of ethics.
  19. RINALDI, Doris (1996) The ethics of difference: a debate between psychoanalysis and anthropology.
  20. Lithium, the magic metal. Seleções (Reader’s Digest), São Paulo, May 1957.
  21. SODRÉ, Muniz; FERRARI, Maria Helena (1986) Reporting technique: notes on journalistic narrative.

Genital Cancer in Men and Women: A Review

DOI: 10.31038/IGOJ.2022511

Abstract

Considering the medical, economic and social importance of genital cancer, which occurs in men and women with a worldwide distribution, our objective in this manuscript is to contribute to the knowledge of the factors that constitute a risk for genital cancer and of its principal consequences in affected persons.

Keywords

Genital cancer, Carcinoma, Sarcoma, Metastasis, Breast and fallopian tubes cancer, Gynecology

Introduction

Cancer is any “malignant” tumor, including carcinoma and sarcoma. It arises from abnormal and uncontrolled division of cells that then invade and destroy the surrounding tissues. Spread of cancer cells (metastasis) may occur via the bloodstream, via the lymphatic channels, or across body cavities such as the pleural and peritoneal spaces, thus setting up secondary tumors at sites distant from the original tumor. Each primary tumor has its own pattern of local behavior and metastasis; for example, bone metastasis is very common in breast cancer, but very rare in cancer of the ovary.

There are, probably, many causative factors, some of which are known; for example, cigarette smoking is associated with lung cancer, and radiation with some bone sarcomas and leukemias. Some tumors, such as retinoblastoma, are inherited.

Treatment of cancer depends on the type of tumor, the site of the primary tumor, and the extent of spread. In a general context, according to [1], “cancer is a leading cause of death worldwide, accounting for nearly 10 million deaths in 2020. The most common in 2020 (in terms of new cases of cancer) were: 1. Breast (2.26 million cases); 2. Lung (2.21 million cases); 3. Colon and rectum (1.93 million cases); 4. Prostate (1.41 million cases); 5. Skin (non-melanoma)”.

Among the cancer risk factors, some can be changed, while others, like age or family history, cannot.

For this manuscript we have selected: prostate cancer, ovarian cancer, breast cancer and fallopian tube cancer.

(A) Prostate cancer has strong effects on the health of men; its risk factors include:

Age

About 6 in 10 cases of prostate cancer are found in men older than 65. On the other hand, it is rare in men younger than 40, but the chance of having prostate cancer rises rapidly after age 50;

Geography

The reasons why prostate cancer is most common in North America, northwestern Europe, Australia, and on Caribbean islands, and less common in Asia, Africa, Central America, and South America, are not clear. More intensive screening for prostate cancer in some developed countries may account for at least part of this difference, but other factors such as lifestyle differences (diet, etc.) are likely to be important as well.

Family History

Prostate cancer seems to run in some families, which suggests that in some cases there may be an inherited or genetic factor. Still, most prostate cancers occur in men without a family history of the disease.

In a general context, it has been observed that having a father or brother with prostate cancer more than doubles a man’s risk of developing this disease. (The risk is higher for men who have a brother with the disease than for those who have a father with it.) The risk is much higher for men with several affected relatives, particularly if those relatives were young when the cancer was found.

Gene Changes

Several inherited gene changes (mutations) seem to raise prostate cancer risk, but they probably account for only a small percentage of cases overall. For example:

Inherited mutations of the BRCA1 or BRCA2 genes, which are linked to an increased risk of breast and ovarian cancers in some families, can also increase prostate cancer risk in men (especially mutations in BRCA2).

Men with Lynch syndrome (also known as hereditary non-polyposis colorectal cancer, or HNPCC), a condition caused by inherited gene changes, have an increased risk for a number of cancers, including prostate cancer.

According to [2] the most important known risk factors for prostate cancer are age, ethnicity, and inherited genetic variants.

(B) Ovarian cancer, fallopian tube cancer and peritoneal cancer

Concerning genital cancer in women, we emphasize ovarian epithelial cancer, peritoneal cancer and fallopian tube cancer. Ovarian epithelial cancer, fallopian tube cancer, and primary peritoneal cancer are diseases in which malignant (cancer) cells form in the tissue covering the ovary (a pair of organs in the female reproductive system, located in the pelvis, one on each side of the uterus, the hollow, pear-shaped organ where a fetus grows) or lining the fallopian tube.

  1. Ovarian epithelial cancer, fallopian tube cancer, and primary peritoneal cancer form in the same type of tissue and are treated the same way.
  2. Women who have a family history of ovarian cancer are at an increased risk of ovarian cancer.
  3. Some ovarian, fallopian tube and primary peritoneal cancers are caused by inherited gene mutations (changes).
  4. Cancer sometimes begins at the end of the fallopian tube near the ovary and spreads to the ovary.
  5. The peritoneum is the tissue that lines the abdominal wall and covers organs in the abdomen. Primary peritoneal cancer is cancer that forms in the peritoneum and has not spread there from another part of the body. Cancer sometimes begins in the peritoneum and spreads to the ovary.
  6. The fallopian tubes are a pair of long, slender tubes, one on each side of the uterus. Eggs pass from the ovaries, through the fallopian tubes, to the uterus.

As referred to above, inherited mutations of the BRCA1 or BRCA2 genes are linked to an increased risk of breast and ovarian cancers in some families.

Conclusions

  1. We think it has been demonstrated here that genital cancer has an impact on the health of women and men.
  2. We hope that the diminished attention given to this disease during the fight against COVID-19 will be corrected in the short/medium term.
  3. To combat cancer, it is necessary:
  • to have persons specialized in the different forms of combating it;
  • more research in this area;
  • to give the general public information on prevention and on the importance of knowing the risk factors for cancer, a disease that can affect the human population at all ages.

References

  1. https://www.who.int/news-room/fact-sheets/detail/cancer
  2. Cheng HH, Nelson PS. Genetic risk factors for prostate cancer. UpToDate. 2019. Accessed at https://www.uptodate.com/contents/genetic-risk-factors-for-prostate-cancer on March 19, 2019. National Cancer Institute. Physician Data Query (PDQ).

Peripartum Cardiomyopathy (PPCM): Epidemiological, Clinical, Therapeutic, Evolutionary and Prognostic Aspects at the Amirou Boubacar Diallo National Teaching Hospital in Niamey (HNABD)

DOI: 10.31038/JCCP.2022514

Abstract

Title: Epidemiological, clinical, therapeutic, evolutionary and prognostic aspects of PPCM in the Internal Medicine and Cardiology department of the Amirou Boubacar Diallo National Hospital: a retrospective and prospective descriptive study of 64 cases.

Introduction: PPCM is a heart failure secondary to left ventricular systolic dysfunction, with an LVEF <45% or a fractional shortening <30%, occurring towards the end of pregnancy or in the months following childbirth (mainly the month after delivery), without any other identifiable cardiac cause. It is a worldwide disease whose epidemiology varies considerably and whose etiology is multifactorial. The true incidence or prevalence of PPCM in Africa and many other populations remains unknown. The objective of the study was to describe the epidemiological, clinical, therapeutic, evolutionary and prognostic aspects of PPCM.

Methodology: This is a retrospective and prospective descriptive and analytical study of PPCM in the internal medicine and cardiology department of the HNABD, covering January 1, 2017 to December 31, 2019 for the retrospective part and January 1, 2020 to December 31, 2020 for the prospective part.

Results: The prevalence of PPCM was 8.68% among heart failure cases, 3.82% among all heart disease cases and 2.06% among all admissions. The average age was 28.2 years, with extremes of 15 and 5 years. The clinical presentation was essentially that of global heart failure, in 81.3% of cases. An alteration of the left ventricular ejection fraction was found in all patients (100%): mild in none, moderate in 53.1% and severe in 46.9% of cases. The treatment of PPCM in our series is that of heart failure. Four therapeutic measures formed the basis of symptomatic treatment in hospitalized patients: dietary measures in 100% of them, ACE inhibitors in 95.3%, beta-blockers in 73.4% of patients outside the acute phase of the disease, and anticoagulants in 27 patients (42.1%). Digoxin was used in 16 patients (25%) and antiplatelet drugs in 54.7% of patients. SGLT2 inhibitors were used in 32.8% (21 patients) and bromocriptine in 9.4%. Thromboembolic complications were the most frequent, namely pulmonary embolism in 10.9% of cases and deep vein thrombosis in 4.7%. Only one case of arrhythmia was found. Other complications, such as pneumonia, pleurisy and severe anemia, were also found. The mortality rate was 17.2%, related to cardiogenic shock in 36.36% of cases. The prognostic factors found were young age and primiparity: 7 of the 11 deceased patients were between 15 and 20 years old and primiparous (63.6%), with no statistically significant link (P=0.09 and P=0.18, respectively). 63.6% of deceased patients came from poor and underprivileged rural areas, without any statistically significant link (P=0.66). 81.8% of deceased patients had a low socioeconomic level, with no statistically significant association between SES and death (P=0.21). Nine (9) of the eleven (11) deceased patients had a severe alteration of LVEF on admission (81.8%), with a very significant link (P=0.005).

Conclusion: Knowledge of the epidemiological aspects of PPCM is necessary to optimize patient care. SGLT2 inhibitors and bromocriptine seem effective in these cases.

Keywords

PPCM, Epidemiology, Clinical, Therapeutic, SGLT2 inhibitors, Bromocriptine, Niger

Introduction

Peripartum cardiomyopathy (PPCM) is a rare, idiopathic cardiomyopathy presenting with heart failure secondary to left ventricular systolic dysfunction, affecting women in late pregnancy or in the months following childbirth [1-3]. It is a diagnosis of exclusion.

Its incidence is estimated at 1/3000-4000 births [4]. The highest incidence in Africa is in the Sudano-Sahelian zone, with a prevalence of 2.7 per 1000 pregnancies [5,6]. It is responsible for 10% of female heart disease in Niamey (Niger) [6]. Several factors seem to play a role, notably hormonal changes around childbirth (the fall in cardioprotective fetal estrogen levels and the synthesis of the cardiotoxic 16 kDa prolactin fragment) [7]. The classic picture is that of heart failure (HF), whose signs generally mimic those of a normal pregnancy, often leading to delayed diagnosis and avoidable complications [8]. Transthoracic echocardiography is the key examination, making it possible to confirm the diagnosis, eliminate differential diagnoses and monitor progress [7]. Medical management is similar to that of heart failure with reduced left ventricular ejection fraction of other etiologies, but adjustments during pregnancy are necessary to ensure fetal safety [8]. It is a serious pathology whose course can be extremely rapid and totally unpredictable, with the possibility of sudden refractory cardiogenic shock in the first 24 to 48 hours, justifying treatment in a center with specialized cardiovascular resuscitation [2,7]. Complete recovery from PPCM is possible in half of the patients, while the other half will retain a dilated cardiomyopathy responsible for more or less severe chronic heart failure [7]. A subsequent pregnancy carries a substantial risk of relapse, and even death, in the event of incomplete myocardial recovery [8].

Given its frequency, its high morbidity, the absence of a known etiology, and the multiplicity of contributing factors, we undertook this update.

Methods

This was a descriptive and analytical study, retrospective and prospective, conducted over 4 years (January 1, 2017 to December 31, 2019 for the retrospective arm; January 1, 2020 to December 31, 2020 for the prospective arm) in the cardiology department of the Amirou Boubacar Diallo National Hospital (HNABD) in Niamey.

We included women, regardless of age or ethnicity, who presented with heart failure (HF) between the eighth month of pregnancy and the fifth month postpartum without an identifiable etiology, and in whom dilated cardiomyopathy (DCM) was diagnosed on echocardiography. We excluded women whose heart failure began before the eighth month of pregnancy or after the fifth month postpartum, as well as women with known heart disease or any other cause of heart failure.

Data were collected from hospitalization records and the consultation register using a survey form covering the epidemiological, clinical, paraclinical, therapeutic, and evolutionary aspects during the study period. The variables studied included demographic data and prenatal consultation records, cardiovascular risk factors, the mode of onset or decompensation of heart failure, the mode and course of delivery, the chronology of HF signs in relation to childbirth, and the findings of physical and paraclinical examinations. Certain examinations were requested systematically, such as chest X-ray, electrocardiogram, Doppler echocardiography, blood count, blood glucose, and creatinine. D-dimers, cardiac enzymes, and chest CT angiography were requested depending on the clinical and electrical context.

In addition to the absence of another cause of heart failure, the following echocardiographic criteria were required to retain the diagnosis of PPCM: dilation of at least the left ventricle (end-diastolic diameter, LVEDD > 52 mm) associated with left ventricular systolic dysfunction, i.e. a left ventricular ejection fraction (LVEF) below 0.50 and/or a shortening fraction < 30%.

Definition of Variables

Estimation of Socioeconomic Status

In our study, we arbitrarily estimated the socioeconomic status (SES) of our patients on the basis of three parameters: diet, physical work during pregnancy, and level of education, according to the following scale:

Affluent

Patient regularly eating a rich and varied diet; exempted from intense physical work during pregnancy; secondary or higher education.

Average

Patient regularly eating to satiety; not exempted from intense physical work during pregnancy; primary education or illiterate.

Low

Patient not regularly having enough to eat; subjected to intense physical work during pregnancy; illiterate.

Classification of LVEF According to the ESC 2018 Guidelines [7]

Preserved LVEF

This is an LVEF greater than 50%.

Moderately reduced LVEF

This is an LVEF between 40% and 49%.

Low LVEF

This is an LVEF below 40%.

Severely Altered LVEF

This is an LVEF below 30%.

High PRVG (Elevated LV Filling Pressures)

Reflected by an E/A ratio > 2.

Normal PRVG

Reflected by an E/A ratio < 1.
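For clarity, these echocardiographic bands can be expressed as a small classification helper. The sketch below is illustrative only, in Python, and makes two assumptions not stated in the study: LVEF values of 30-39% are labeled "low" so that the "low" and "severely altered" bands do not overlap, and E/A ratios between 1 and 2 are returned as indeterminate.

```python
def classify_lvef(lvef: float) -> str:
    """Map an LVEF percentage onto the bands defined above.

    Assumption: 30-39% is treated as "low" so the bands are mutually
    exclusive (as written, "low" (<40%) overlaps "severely altered" (<30%)).
    """
    if lvef > 50:
        return "preserved"
    if lvef >= 40:
        return "moderately reduced"
    if lvef >= 30:
        return "low"
    return "severely altered"


def classify_prvg(e_over_a: float) -> str:
    """Map a mitral E/A ratio onto the filling-pressure categories above.

    Assumption: ratios between 1 and 2 are not classified in the text
    and are returned as "indeterminate" here.
    """
    if e_over_a > 2:
        return "high"
    if e_over_a < 1:
        return "normal"
    return "indeterminate"


# Example: the study's mean LVEF of 30.99% falls in the "low" band.
print(classify_lvef(30.99))  # -> low
```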

Favorable Evolution

Favorable evolution was defined as remission of symptoms and discharge of the patient.

Death

All patients with PPCM who died, regardless of the cause of death.

We identified 91 suspected cases of PPCM and excluded 27 patients who had not undergone cardiac ultrasound, leaving a sample of 64 patients who met all our criteria.

Data Analysis

Data were entered, processed, and analyzed with IBM SPSS Statistics version 20 after creation of an input mask. Results were presented as tables and graphs using the Office 2016 suite (Word and Excel). The statistical test used in this study was the chi-squared test, with significance set at P < 0.05.
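As an illustration of this test, the reported association between severe LVEF alteration on admission and in-hospital death can be checked as a 2x2 chi-squared test. The following is a minimal sketch in Python (scipy) rather than SPSS; the counts are reconstructed from the reported figures (30/64 patients with severe LVEF alteration, 9 of the 11 deaths in that group), so the survivor split is an assumption.

```python
# Minimal sketch of the study's chi-squared test with reconstructed counts:
# rows = severe vs. non-severe LVEF alteration, columns = died vs. survived.
from scipy.stats import chi2_contingency

table = [
    [9, 21],   # severe LVEF: 9 deaths, 21 survivors (assumed split)
    [2, 32],   # non-severe LVEF: 2 deaths, 32 survivors (assumed split)
]

chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")  # significant if P < 0.05
```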

Limits of the Study

– Some patients were seen after the acute phase, explaining the absence of certain signs or their low frequency;

– Financial difficulties prevented distant patients from attending consultation and/or undergoing certain complementary examinations;

– Some files lacked information.

Results

Epidemiological Aspects

Sixty-four cases of peripartum cardiomyopathy (PPCM) were recorded over 4 years, an average of 16 cases per year. The average age was 28.2 years (range 15 to 55 years). The 15-20-year age group was the largest (31.30%) (Figure 1). PPCM represented 1 case per 356 births; its hospital prevalence was 2.06% of admissions (64 PPCM/3094 patients admitted), 3.82% of heart disease (64/1672), and 8.68% of all heart failure (64/737). Unemployed women made up 90.6% of the sample. The majority of patients had unfavorable socioeconomic conditions (65.6%) (Figure 2). Multiparous women (including grand multiparas) were the most represented, with 38.80% of cases (Table 1). The average parity was 3.07 (range 1 to 6).

Figure 1: Distribution of patients by age group

Figure 2: Distribution of patients according to socio-economic status

Table 1: Data on epidemiological aspects

Occupation: housewife 58 (90.60%); civil servant 3 (4.70%); pupil/student 2 (3.10%); shopkeeper/seamstress 1 (1.60%).

Education level: no schooling 44 (68.80%); university 2 (3.10%); high school 3 (4.70%); middle school 10 (15.60%); primary 5 (7.80%).

Personal history: HBP 6 (9.40%); surgical 10 (15.60%).

Gyneco-obstetric history, number of children: 1 child 24 (37.5%); 2-3 children 15 (23.4%); 4-5 children 11 (17.2%); 6 or more 14 (21.9%).

Abortions: none 60 (93.7%); one 3 (4.7%); two 0 (0%); three 1 (1.6%).

Parity: grand multiparity 12 (18.80%); multiparity 13 (20%); pauciparity 16 (25%); primiparity 23 (35.90%).

Twin pregnancy: 7 (10.9%).

PNC: 19 (30%).

Type of delivery: vaginal 58 (90.60%); caesarean section 6 (9.40%).

NB prognosis: living children 53 (88.20%); deceased children 14 (21.9%); breastfeeding 48 (75%).

Risk factors: high-sodium diet 48 (75%); hot baths 64 (100%); medication during pregnancy 3 (5%); intense physical work during pregnancy 38 (60%).

Family history: HBP 12 (18.80%); diabetes 6 (9.40%); heart disease 1 (1.60%).

PNC: prenatal consultation; HBP: high blood pressure; NB: newborn

Clinical Aspects

In 85.9% of cases, symptoms appeared postpartum (Figure 3). There was a delay in consultation in all patients, with an average delay of 30 days (range: 6 to 120 days). Symptoms were dominated by exertional dyspnea (100%) (Table 2). Decompensation occurred as isolated left heart failure in 18.8% of cases and as global heart failure in 81.2% of cases. Table 2 summarizes the clinical examination data.

Figure 3: Distribution of patients according to time to onset in relation to childbirth

Table 2: Clinical data of the patients

Functional signs: dyspnea 64 (100%); cough 43 (89.10%); chest pain 42 (65.52%); hemoptysis 12 (34.4%).

General condition: altered 29 (45.3%); fair 25 (39.1%); good 10 (15.6%).

Conjunctivae: well colored 32 (50%); poorly colored 15 (23.4%); pale 17 (26.6%).

Blood pressure: normal 33 (51.60%); low 21 (32.80%); high 10 (15.60%).

Temperature: febrile 22 (34.4%); afebrile 42 (65.5%).

Physical signs: jugular venous distension 10 (15.60%); ascites 24 (37%); TI murmur 3 (4.7%); MI murmur 35 (56.30%); peripheral edema 61 (95.30%); hepato-jugular reflux 52 (81.30%); hepatomegaly 57 (89.10%); crackling rales 53 (82.80%); displaced apex beat 45 (70.30%); gallop rhythm 29 (45.30%); tachycardia 59 (92.20%).

Left heart failure: 12 (18.80%). Global heart failure: 52 (81.30%).

Time to consultation: one week 19 (30%); three weeks 7 (10%); one month 22 (35%); two months 10 (15%); three months 3 (5%); four months 3 (5%).

MI: mitral insufficiency; TI: tricuspid insufficiency

X-ray and Electrocardiographic Signs

Cardiomegaly was found in all patients (100%), with an average cardiothoracic index of 0.70 (range 0.57 to 0.86).

Sinus tachycardia and left ventricular hypertrophy were noted in 87.5% and 59.4% of cases, respectively. Thirty-three patients (51.6%) had left atrial hypertrophy. Two patients (3.10%) had a conduction disorder. In 29 cases (28.1%) there were nonspecific repolarization disorders associated with ventricular hypertrophy.

Echocardiographic Data

LV dilation was noted in all patients, with a mean end-diastolic diameter of 61.93 mm (range 54 to 76 mm). The left atrium was dilated in 43 patients, with an average diameter of 46.17 mm (range 23.60 to 59 mm). The right ventricle was dilated in 7 patients (10.93%). The mean ejection fraction was 30.99% (range 14% to 44.60%), and the mean shortening fraction was 12.85%. LV systolic dysfunction was severe in 43.60% of cases. All patients presented with global parietal hypokinesia. Doppler echocardiography revealed a left intraventricular thrombus in 14 patients (21.9%). LV filling pressures were increased in 58 patients (90.6%). Pulmonary arterial hypertension was significant in 59.37% of cases, with an average pressure of 50.43 mmHg (range: 23 to 74 mmHg). Pericardial effusion was noted in 19 patients (29.7%).

Treatment

Lifestyle and dietary measures were recommended in 100% of our patients. Diuretics were the most used drugs, followed by ACE inhibitors, beta-blockers, and digitalis, in 100%, 95.3%, 73.4%, and 25% of cases, respectively. Antiplatelet agents were used in 54.7% of cases, anticoagulants in 42.2%, and vitamin K antagonists in 21.9%. SGLT2 inhibitors were used in 21 cases (32.8%). Bromocriptine was used in only 9.4% of cases.

Evolution in Hospitalization

Complications occurred in 15 patients (23.4%): 10 cases of thromboembolic disease, comprising 7 cases (10.9%) of pulmonary embolism (PE) and 3 cases (4.7%) of deep vein thrombosis (DVT); 4 cases of cardiogenic shock (6.3%); and 1 case of arrhythmia (1.6%). The evolution was favorable in 82.8% of our patients, and death occurred in 17.2% of cases.

The average duration of hospitalization was 10 days (range 7 to 33 days). Fifty-three patients (82.8%) achieved clinical recovery with disappearance of signs. We recorded 11 deaths (17.2%). Of the deceased patients, 81.80% had severe LVEF impairment, a statistically significant association (P=0.034). In multivariate analysis by logistic regression, only severe alteration of LVEF, low socioeconomic status, and primiparity were statistically associated with an unfavorable course (P=0.034, P=0.044, and P=0.025, respectively).
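The multivariate step can be reproduced in outline with any standard logistic regression routine. The sketch below, in Python with statsmodels, uses synthetic patient-level data because the raw dataset is not published; the variable names and the simulated outcome are our assumptions, so the printed coefficients and P-values are illustrative only.

```python
# Sketch of the multivariate step: logistic regression of in-hospital
# death on the three factors retained in the study (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 64  # study sample size
df = pd.DataFrame({
    "severe_lvef": rng.integers(0, 2, n),
    "low_ses": rng.integers(0, 2, n),
    "primiparity": rng.integers(0, 2, n),
})
# Simulated outcome loosely tied to the predictors, for illustration only.
xb = -2.5 + 1.5 * df["severe_lvef"] + 1.0 * df["low_ses"] + 1.0 * df["primiparity"]
df["death"] = (rng.random(n) < 1 / (1 + np.exp(-xb))).astype(int)

X = sm.add_constant(df[["severe_lvef", "low_ses", "primiparity"]].astype(float))
result = sm.Logit(df["death"], X).fit(disp=0)
print(result.summary2())
print(np.exp(result.params))  # odds ratios
```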

Discussion

The hospital prevalence of PPCM was 2.06% in our study. In Africa, the prevalence of PPCM varies across studies [9,10]. In the West, the disease appears to be less frequent, with an incidence of 1/3000 to 1/15000 [11]. These results support the observation that PPCM is more prevalent in women of Black origin [12]. Another associated factor is advanced maternal age [13]. In this study, the average parity was 3.07 (range: 1 to 6), with 82.93% multiparous women, which testifies to the frequency of this condition in multiparous women [13]. In addition, 93.75% of our patients came from low socioeconomic conditions. We agree with other authors that low socioeconomic status is also a risk factor for PPCM [14].

In sum, maternal age over 30 years, multiparity, and unfavorable socioeconomic conditions were the risk factors for PPCM identified in this study. Other factors, such as chronic hypertension, prolonged use of tocolytics, and twin pregnancy, could not be formally confirmed in this study [13].

We observed a great delay in consultation among our patients. Dyspnea on exertion was the main symptom, already at an advanced NYHA stage. The same observations have been reported in the African literature [10,14]. In reality, symptoms begin earlier in these patients, but lack of awareness and poverty delay consultation, so most patients present with a picture of global heart failure and anasarca. The women regarded limb edema as a normal feature of pregnancy, and it was the increasing intensity of dyspnea that prompted most of them to consult. The other symptoms found were precordialgia and cough. Precordial pain ranged from simple precordial tingling to angina-like pain with chest tightness; its frequency varies among authors [10,11,14]. These chest pains associated with cough pose a real diagnostic problem, because they can raise the suspicion of pulmonary embolism. In all such cases, patients were placed on curative-dose anticoagulation. Tachycardia with a gallop rhythm, a systolic murmur of mitral insufficiency, and crackles were the most frequent auscultatory findings in our patients. Several authors have reported the same physical examination findings, but at widely varying rates [10,14]. These statistical disparities are explained by the subjective nature of the clinical examination, hence the need for paraclinical examinations. Cardiomegaly was noted in 100% of cases in this study. Cardiomegaly is constant in heart failure but remains non-specific [4]. It is nonetheless an essential element in our regions, where cardiac ultrasound is scarce and inaccessible to much of the population.

On the EKG, serious arrhythmias have been reported in the literature: Ferrière, in a series of 11 observations, noted 1 case of ventricular tachycardia, detected by Holter ECG recording [15]. Sinus tachycardia, LVH, and nonspecific repolarization disorders are the most frequently found electrical abnormalities [10,14].

In a recently delivered woman complaining of dyspnea, the discovery of cardiomegaly associated with sinus tachycardia and LVH should lead to PPCM being suspected until proven otherwise. Cardiac ultrasound then confirms the diagnosis and assesses its impact and complications.

Echocardiographic signs are among the criteria defining PPCM, and global parietal hypokinesia is the constant finding [15]. Cavitary dilatation and LV systolic dysfunction were severe in our patients, consequences of the delays in consultation and diagnosis. PPCM is a highly emboligenic pathology [11,16]. The reasons given are multiple: blood hypercoagulability during pregnancy [17], the dilated cardiomyopathy appearing around a recent childbirth, reduced maternal mobility during the last months of pregnancy, and compression of the inferior vena cava by the fetus. All these reasons justified curative anticoagulant treatment in our patients.

The evolution of PPCM is unpredictable [18]. The interval before a subsequent pregnancy depends on the time taken for systolic function to return to normal. When heart failure persists beyond the sixth month after delivery, mortality is 28% at one year and 85% at 5 years [19]. Forms resistant to medical treatment represent 10% of cases [18]. Even when PPCM is cured, the risk of recurrence in a subsequent pregnancy cannot be excluded, so the advice given to patients must be adapted to each case. Some elements are considered markers of poor prognosis: African origin, age greater than 30 years, a delay in the appearance of symptoms greater than 3 months, persistence of clinical signs 6 months after disease onset, a cardiothoracic index greater than 0.6, and the characteristics of the left ventricle at the time of diagnosis: marked dilation (LVEDD > 55-60 mm), an ejection fraction < 30%, and a shortening fraction < 20% [17,20]. By these criteria, all of our patients had a poor prognosis.

The prognosis of the disease is unpredictable. Many patients die despite treatment, while others progress quite favorably, with complete recovery after 6 to 12 months of treatment [2,17,20,21]. Between recovery and death, the evolution is that of chronic heart failure with DCM [22]. The obstetrical prognosis is poor: heart failure occurs in 50 to 80% of subsequent pregnancies, with mortality that can reach 60% [21,23]. Given this very high mortality during subsequent pregnancies, we agreed with our multiparous patients to definitively contraindicate further pregnancy. Primiparas wishing to have another pregnancy are monitored, and decisions will be made on a case-by-case basis.

Conclusion

Peripartum cardiomyopathy is a serious cardiac complication of pregnancy. It is common in Niger, as in other black African countries, and occurs preferentially in the postpartum period. The risk factors were maternal age over 30, multiparity, and unfavorable socioeconomic conditions. There was a significant delay in diagnosis. The clinical picture was that of global heart failure with significant dilation of the heart chambers and severe alteration of myocardial performance. SGLT2 inhibitors and bromocriptine added to classical HF therapy appear effective.

References

1. Sliwa K, Petrie MC, van der Meer P, Mebazaa A, Hilfiker-Kleiner D, et al. (2020) Clinical presentation, management, and 6-month outcomes in women with peripartum cardiomyopathy: an ESC EORP registry. European Heart Journal 1-10.
2. Koenig T, Hilfiker-Kleiner D, Bauersachs J (2018) Peripartum cardiomyopathy. Department of Cardiology and Angiology, Hannover Medical School, Hannover, Germany.
3. Gibelin P (2020) Cardiomyopathie du péripartum. Presse Med Form.
4. Moioli M, Valenzano Menada M, Bentivoglio G, Ferrero S (2010) Peripartum cardiomyopathy. Arch Gynecol Obstet 281: 183-188.
5. Seronde MF (2018) La cardiomyopathie du péri-partum. Arch Mal Coeur Vaiss Prat.
6. Cenac A, Moumio OM, Develoux M, Soumane I, Lamothe F, Gaulier Y, et al. (1985) Cardiopathie de l'adulte à Niamey (Niger): enquête épidémiologique à propos de 162 cas. Cardiol Trop 11: 125-133.
7. Diarra A (1983) La myocardiopathie du post-partum (syndrome de Meadows). Thèse Méd, Bamako 4: 93.
8. Vanzetto G, Martin A, Bouvais H, Marliere S, Durand MJ (2018) Cardiovascular diseases during pregnancy. European Heart Journal.
9. Davis MB, Arany Z, McNamara DM, et al. (2020) Peripartum cardiomyopathy: JACC state-of-the-art review. Journal of the American College of Cardiology.
10. Niakara A, Belemwire S, Nebie L, Drabo Y (2000) Cardiomyopathie du post-partum de la femme noire africaine: aspects épidémiologiques, cliniques et évolutifs de 32 cas. Cardiol Trop 26: 69-73.
11. Mielniczuk LM, Williams L, Davis DR, et al. (2006) Frequency of peripartum cardiomyopathy. Am J Cardiol 97: 1765-1768. [crossref]
12. Gentry MB, Dias JK, Luis A, Patel R, Thornton J, Reed GL (2010) African-American women have a higher risk for developing peripartum cardiomyopathy. Journal of the American College of Cardiology 55: 654-659. [crossref]
13. Heider AL, Kuller JA, Strauss RA, Wells SR (1999) Peripartum cardiomyopathy: a review of the literature. Obstetrical and Gynecological Survey 54: 526-531. [crossref]
14. Katibi I (2003) Peripartum cardiomyopathy in Nigeria. Hosp Med 64: 249.
15. Ferriere M, Sacrez A, Bouhour J, et al. (1990) La myocardiopathie du péripartum: aspects actuels. Étude multicentrique: 11 observations. Arch Mal Cœur 83: 1563-1569.
16. Bennani S, Loubaris M, Lahlou I, Badidi M, Haddour N, Bouhouch R, Cheti M, Arharbi M (2003) Cardiomyopathie du péripartum révélée par l'ischémie aiguë d'un membre inférieur. Annales de Cardiologie et d'Angéiologie 52: 382-385.
17. Walkira S, Carnério de Carvalho M, Cleide K, Rossi E (2002) Pregnancy and peripartum cardiomyopathy: a comparative and prospective study. Arq Bras Cardiol 79: 489-493. [crossref]
18. Burban M (2002) Pregnancy and dilated or hypertrophic cardiomyopathies. Arch Mal Cœur Vaiss 95: 287-291.
19. Sliwa K, Skudicky D, Candy G, Bergemann A, Hopley M, Sareli P (2002) The addition of pentoxifylline to conventional therapy improves outcome in patients with peripartum cardiomyopathy. The European Journal of Heart Failure 4: 305-309.
20. Chapa JB, Heiberger HB, Weinert L, DeCara J, Lang R, Hibbard U (2005) Prognostic value of echocardiography in peripartum cardiomyopathy. Obstet Gynecol 105: 1303-1308. [crossref]
21. Goland S, Modi K, Bitar F, et al. (2009) Clinical profile and predictors of complications in peripartum cardiomyopathy. J Card Fail 15: 645-650. [crossref]
22. Rifat K (1995) La cardiomyopathie du péripartum (CMPP). Méd Hyg 53: 2548-2555.
23. Hamdoum L, Mouelhi C, Kouka H, et al. (1993) Cardiomyopathie du péripartum: analyse de trois cas et revue de la littérature. Rev Fr Gynécol Obstét 88: 273-275.

A Case of Chronic Schizophrenia with Emergent Dementia: Successful Medication Reduction and its Explication

DOI: 10.31038/JCRM.2022521

Abstract

The emergence of dementia during the treatment of schizophrenia is a problem that occurs in many patients. Despite strong cautionary language on the product labels of antipsychotic drugs advising clinicians to avoid these medications in the setting of dementia, there is no specific guidance for the management of dementia in schizophrenia. Here, we report a case of successful antipsychotic drug reduction in a 63-year-old male with paranoia and severely impaired cognition, and we discuss possible explanations and the implications of this result.

Introduction

Throughout the history of medicine, there has been great confusion about the characterization and management of psychotic features in dementia, and of cognitive deficits in psychosis. In 1893, the term dementia praecox was introduced as a specific disease entity by the German psychiatrist Emil Kraepelin. Kraepelin's label, which emphasized deficits in attention and memory in the context of delusions (paranoia), abnormal movement (catatonia), or disorganized thought (hebephrenia), predicted a chronic and progressively deteriorating course [1,2]. When, in the 1950s, dementia praecox was eventually replaced by the term schizophrenia, this expectation of cognitive decline persisted.

Following the identification of premature mortality in elderly demented patients who participated in randomized controlled trials of antipsychotic drugs (a problem later confirmed by naturalistic and observational studies), the U.S. Food and Drug Administration attached Black Box Warnings to the labels of dopamine blocking drugs [3-5]. However, no guidance was issued with respect to the management of dementia which emerges in the course of treating chronic or recurring psychosis. The purpose of this case report is to present an example of successful medication reduction in the latter scenario, and to briefly consider the treatment implications of this result.

Case Report

Prior to our involvement with this case, a 63-year-old male with a longstanding history of schizophrenia and remitted alcoholism had undergone two recent medical admissions to the hospital: first, for the stabilization of starvation ketosis caused by the delusion that his food and medications were being poisoned; second, for acute delirium and ataxia which were likely precipitated by antipsychotic drug treatment [6-8]. A workup for dementia in the previous admission had included an MRI of the brain which displayed cortical atrophy, moderate ventriculomegaly, and (by our review) marked deep white matter hyperintensities, consistent with Fazekas stage three changes. The patient was discharged to his residential care home on one psychotropic medication (olanzapine 5 mg bid).

Four weeks later, the patient presented to the emergency room with the recurrence of failing hygiene and the belief that his food, beverages, and medication were being poisoned. He was admitted to the psychiatric unit, where initial examination was notable for poor grooming and thin body habitus. The patient was edentulous and appeared fifteen years older than his chronological age. The initial treatment team prescribed olanzapine 5 mg bid and added Vitamin D3 1000 IU qd. The patient initially declined all medications and demonstrated limited acceptance of fluids and food.

By hospital day #3, members of the nursing team noticed deficits in short-term and long-term memory, including an inability to recall the names of staff members; an inability to recall his age or birthdate; and an inability to recall what he had eaten for breakfast. Though he remained alert and calm, he was disoriented to date, identifying the year as 1934.

On hospital day #4, the authors of this case report assumed care of the patient concurrent with a rotation in treatment teams. We administered the Saint Louis University Mental Status (SLUMS) examination, on which the patient scored 3 of 30 possible points (oriented to date and year, but unable to perform any other element of this screening test). This reflected severe deficits in attention, memory, spatial orientation (apraxia), language (aphasia), and executive functioning.

Based upon a reported history of several months of steady cognitive decline, severe enough to impact baseline social functioning, the authors prioritized a working diagnosis of dementia. Consultations were requested from physical therapy (recommending close supervision as the patient’s “path” would deviate due to poor attention) and speech therapy (recommending soft foods due to mild dysphagia). The nursing team attended to the patient’s hygienic needs: trimming nails; shampooing hair; assisting with clean clothing.

A revised dementia workup was undertaken, ruling out syphilis (RPR was negative), anemia (iron and ferritin levels were within normal limits), and nutritional deficiencies (normal levels of B12, B6, B1, folate, zinc, and copper; Vitamin D 25-oh was 38 ng/mL).

Olanzapine was discontinued due to its anticholinergic effects. Risperidone (1 mg bid) was prescribed to prevent neuroleptic withdrawal symptoms. Other treatments were revised to include memantine (5 mg at bedtime), B12 (1 mg daily), Vitamin D3 (increased to 4000 IU daily), selenium sulfide shampoo (for dry scalp) and chlorhexidine gluconate mouth wash (for halitosis/gum health).

By hospital day #5, the patient remained vague, confused, and hypersomnolent. However, his paranoid delusions subsided. He continued to be free of auditory or visual hallucinations, thought blocking, or other features of psychosis.

By day #8, he demonstrated consistent compliance with medical treatments. He was eating well and participated regularly in scheduled activities on the unit. The patient was oriented to self, date, and place, but was unaware of his cognitive limitations (anosognosia). Deficits remained in the domains of memory (inability to identify his diagnoses, recite the events which had led to the admission, identify his treatments, or recall the names of his doctors); speech (illogical mumbling, poor verbal fluency); and executive functioning (inability to organize instrumental activities of daily living, inability to attend to hygiene without supervision). We believe that the severity and persistence of these problems, which continued despite the resolution of paranoia, supported dementia as the primary diagnosis in this case.

With the consent of his conservator, the patient was discharged back to his residential care home due to his continuing inability to independently coordinate food, clothing, shelter, medical care, or finances. Follow-up was planned with psychiatry, primary care, and neurology – the latter, to confirm our working diagnosis of dementia due to multiple etiologies. Discharge medications included: risperidone 1 mg po twice a day, Vitamin D3 4000 IU po daily, B12 1 mg po daily, and memantine 5 mg po at bedtime.

Discussion

Upon admission, the patient exhibited signs of poor grooming and hygiene, as well as paranoia regarding his food and medications. As we were not convinced that his delusions were entirely attributable to the historical diagnosis of schizophrenia, we considered a broad differential etiology of cognitive and psychotic symptoms.

Past diagnoses had included alcohol dependence, from which the patient had been entirely in remission for at least one year. This history raised the specter of Wernicke-Korsakoff syndrome as a contributing problem. Thiamine levels were within normal limits during the previous and recent admissions, ruling out Wernicke encephalopathy. The presence of Korsakoff dementia with psychotic features remained a possibility. Creutzfeldt-Jakob disease, which may emerge with or after Wernicke-Korsakoff syndrome, was considered but ruled out, as our patient lacked ataxia, hallucinations, or myoclonus [9]. A recent brain scan via MRI had demonstrated ventriculomegaly, cortical atrophy, and deep white matter hyperintensities: the latter, consistent with small vessel disease. These findings suggested a strong component of vascular dementia. Risk factors in our patient included a history of smoking, alcoholism, and years of exposure to psychotropic medications.

With respect to schizophrenia, our patient had been placed under the conservatorship of a relative for approximately two years prior to our encounter, and concurrent with his placement into a residential care home. However, based upon collateral information, severe cognitive deficits had emerged only within recent months and had not been typical of the patient’s presentation.

Several epidemiological investigations have highlighted an increased risk of dementia in patients diagnosed with schizophrenia [10-12]. Like others, though, we are not convinced that cognitive decline is a necessary component of the schizophrenic condition [13,14]. Neither are we convinced that the syndrome known as schizophrenia, or the dementia which may appear in its course, is correctly diagnosed in many patients. It is far from clear how often past cases of dementia praecox, or modern cases of schizophrenia, have reflected undiagnosed manifestations of infections affecting the central nervous system (such as viral encephalitis, tuberculosis, neuroborreliosis, neurosyphilis, or the viral hepatitides); nutritional deficiencies; genetic anomalies; endocrine imbalances; autoimmune conditions (including paraneoplastic or non-paraneoplastic limbic encephalitis); seizure disorders; or unrecognized toxidromes [15-19].

Based upon similar environmental and behavioral risk factors, the organic precursors of cognitive decline in schizophrenia appear to be no different than those which occur in the general population [20]. A notable exception occurs with respect to the anatomic and physiologic effects of dopamine blocking drugs. A strong line of research evidence, involving autopsy and biomarker studies, links the old and new antipsychotic drugs to the neurodegenerative changes associated with Alzheimer’s disease [21-27]. Research has also implicated the same pharmaceuticals in cerebrovascular disease [28,29]. Causal mechanisms may be inferred from studies in lab animals and humans in which investigators have detected drug-induced mitochondrial disruptions, enhancement of oxidative stress, perturbations of the blood brain barrier, disturbances of metallochemistry, alterations in tau phosphorylation, dysregulation of microglia, and induction of insulin resistance [30-36].

We believe that the iatrogenic risk of drug-induced dementia is often overlooked in psychiatric patients, but particularly among those who have been diagnosed with schizophrenia. The present case demonstrates the benefits of realigning diagnosis and treatment in a middle-aged man with cognitive decline. We are mindful of the fact that other professionals have published positive results in which pharmaceutical dose reductions have benefitted patients with similar histories [37,38].

By continuously reorienting our patient; by attending to his physical and nutritional needs; by establishing a warm, caring rapport; and by reducing antipsychotic medication, we were able to facilitate substantial clinical improvement. In the USA, governmental drug product labels advocate cautious use of antipsychotics in the setting of dementia; our case implies that it is equally prudent to heed this advice when dementia emerges during the treatment of schizophrenia.

References

  1. GE Berrios, R Luque, JM Villagran (2003) Schizophrenia: A conceptual history. International Journal of Psychology and Psychological Therapy 3: 111-140.
  2. CR Bowie, PD Harvey (2006) Cognitive deficits and functional outcome in schizophrenia. Neuropsychiatry and Disease Treatment 2: 531-536. [crossref]
  3. LS Schneider, KS Dagerman, P Insel (2005) Risk of death with atypical antipsychotic drug treatment for dementia: a meta-analysis of randomized placebo-controlled trials. JAMA 294: 1934-1943. [crossref]
  4. S Schneeweis, S Setoguchi, A Brookhart, et al. (2007) Risk of death with the use of conventional versus atypical antipsychotic drugs among elderly patients. CMAJ 176: 627-632. [crossref]
  5. J Yan (2018) FDA extends black box warning to all antipsychotics. Psychiatric News.
  6. D MacKintosh (2017) Olanzapine induced delirium – a ‘probable’ adverse drug reaction. Annals of Palliative Medicine 6: S257-S259. [crossref]
  7. JI Park (2017) Delirium associated with olanzapine use in the elderly. Psychogeriatrics 17: 142-143. [crossref]
  8. RC Sharma, A Aggarwal (2010) Delirium associated with olanzapine therapy in an elderly male with bipolar affective disorder. Psychiatry Investigation 7: 153-154. [crossref]
  9. T Nagashima, M Okawa, T Kitamoto, et al. (1999) Wernicke encephalopathy-like symptoms as an early manifestation of Creutzfeldt-Jakob disease in a chronic alcoholic. Journal of Neurological Sciences 163: 192-198. [crossref]
  10. Nicolas G, Beherec L, Hannequin D, Opolczynski G, Rotharmel M, et al. (2014) Dementia in middle-aged patients with schizophrenia. Journal of Alzheimer’s Disease 39: 809-822. [crossref]
  11. Ribe AR, Laursen TM, Charles M, Katon W, Fenger-Gron M, Davydow D, et al. (2015) Long-term risk of dementia in persons with schizophrenia: a Danish population-based cohort study. JAMA Psychiatry 72: 1095-1101. [crossref]
  12. Ku H, Lee EK, Lee MY, Kwon JW (2016) Higher prevalence of dementia in patients with schizophrenia: A nationwide population-based study. Asia-Pacific Psychiatry 8: 145-153. [crossref]
  13. Hyde TM, Nawroz S, Goldberg TE, Bigelow LB, Strong D, et al. (1994) Is there cognitive decline in schizophrenia? A cross-sectional study. British Journal of Psychiatry 164: 494-500. [crossref]
  14. Shah JN, Qureshi SU, Jawaid A, Schulz PE (2012) Is there evidence for late cognitive decline in chronic schizophrenia?. The Psychiatric Quarterly 83: 127-144. [crossref]
  15. J Castro, S Billick (2013) Psychiatric presentations/manifestations of medical illnesses. Psychiatric Quarterly 84: 351-362. [crossref]
  16. KA Welch, AJ Carson (2018) When psychiatric symptoms reflect medical conditions. Clinical Medicine 18: 80-87. [crossref]
  17. M Boyle (1990) Is schizophrenia what it was? A re-analysis of Kraepelin’s and Bleuler’s population. Journal of the History of the Behavioral Sciences 26: 323-333. [crossref]
  18. L Cai, YH Yang, L He, Chou KC (2016) Modulation of Cytokine Network in the Comorbidity of Schizophrenia and Tuberculosis. Current Topics in Medicinal Chemistry 16: 655-665. [crossref]
  19. AM Doherty, J Kelly, C McDonald, O’Dywer AM, Keane J, et al. (2013) A review of the interplay between tuberculosis and mental health. General Hospital Psychiatry 35: 398-406. [crossref]
  20. B Kirkpatrick, M Johnson, K McGuire, Fletcher RH (1986) Confounding and the dementia of schizophrenia. Psychiatry Research 19: 225-231. [crossref]
  21. MA Rapp, M Schnaider-Beeri, DP Purohit, Reichenberg A, et al. (2010) Cortical neuritic plaques and hippocampal neurofibrillary tangles are related to dementia severity in elderly schizophrenia patients. Schizophrenia Research 16: 2010. [crossref]
  22. NA Clarke, T Hartmann, EL Jones, Ballard CG, Francis PT, et al. (2011) Antipsychotic medication is associated with selective alterations in ventricular cerebrospinal fluid AB40 and tau in patients with intractable unipolar depression. International Journal of Geriatric Psychiatry 26: 1283-1291. [crossref]
  23. GB Frisoni, A Prestia, C Geroldi, Adorni A, Ghidoni R, et al. (2011) Alzheimer’s CSF markers in older schizophrenia patients. International Journal of Geriatric Psychiatry 26: 640-648. [crossref]
  24. V Albertini, L Benussi, A Paterlini, Glionna M, Prestia A, et al. (2012) Distinct cerebrospinal fluid amyloid-beta peptide signatures in cognitive decline associated with Alzheimer’s disease and schizophrenia. Electrophoresis 33: 3738-3744. [crossref]
  25. V Bloniecki, D Aarsland, K Blennow, Cummings J, Falahati F, et al. (2017) Effects of risperidone and galantamine treatment on Alzheimer’s disease biomarker levels in cerebrospinal fluid. Journal of Alzheimer’s Disease 57: 387-393. [crossref]
  26. S Mehta, ML Johnson, H Chen, Aparasu RR (2010) Risk of cerebrovascular adverse events in older adults using antipsychotic agents: a propensity-matched retrospective cohort study. Journal of Clinical Psychiatry 71: 689-698. [crossref]
  27. PH Hsieh, FY Hsiao, SS Gau, Gau CS (2013) Use of antipsychotics and risk of cerebrovascular events in schizophrenic patients: a nested case-control study. Journal of Clinical Psychopharmacology 33: 299-305. [crossref]
  28. MC Cotel, EM Lenartowicz, S Natesan, Modo MM, Cooper JD, et al. (2015) Microglial activation in the rat brain following chronic antipsychotic treatment at clinically relevant doses. European Neuropsychopharmacology 25: 2098-2107. [crossref]
  29. LE Laskaris, MA DiBiase, I Everall, Chana G, Christopoulos A, et al. (2016) Microglial activation and progressive brain changes in schizophrenia. British Journal of Pharmacology 173: 666-680. [crossref]
  30. CX Gong, S Shaikh, I Grundke-Iqbal, Iqbal K (1996) Inhibition of protein phosphatase-2B (calcineurin) activity towards Alzheimer abnormally phosphorylated tau by neuroleptics. Brain Research 741: 95-102. [crossref]
  31. C Burkhardt, JP Kelly, YH Lim, Filley CM, Parker WD Jr (1993) Neuroleptic medications inhibit complex I of the electron transport chain. Annals of Neurology 33: 512-517. [crossref]
  32. D Ben-Shachar, E Livne, I Spanier, Leenders KL, Youdim MB (1994) Typical and atypical neuroleptics induce alteration in blood-brain barrier and brain 59FeCl3 uptake. Journal of Neurochemistry 62: 1112-1118. [crossref]
  33. KJ Burghardt, B Seyoum, A Mallisho, Burghardt PR, Kowluru RA, et al. (2018) Atypical antipsychotics, insulin resistance, and weight; a meta-analysis of healthy volunteer studies. Progress in Neuropsychopharmacology and Biological Psychiatry 83: 55-63. [crossref]
  34. E Elmorsy, PA Smith (2015) Bioenergetic disruption of human micro-vascular endothelial cells by antipsychotics. Biochemical and Biophysical Research Communications 460: 857-862. [crossref]
  35. T Suzuki, H Uchida, KF Tanaka, Tomita M, Tsunoda K, et al. (2013) Reducing the dose of antipsychotic medications for those who had been treated with high-dose antipsychotic polypharmacy: an open study of dose reduction for chronic schizophrenia. International Clinical Psychopharmacology 18: 323-329. [crossref]
  36. T Suzuki, H Uchida, H Takeuchi, Nomura K, Tanabe A, et al. (2005) Simplifying psychotropic medication regimen into a single night dosage and reducing the dose for patients with chronic schizophrenia. Psychopharmacology 181: 566-575. [crossref]
  37. H Takeuchi, T Suzuki, G Remington, Bies RR, Abe T, et al. (2013) Effects of risperidone and olanzapine dose reduction on cognitive function in stable patients with schizophrenia: an open-label, randomized, controlled, pilot study. Schizophrenia Bulletin 39: 993-998. [crossref]
  38. Y Zhou, G Li, D Li, Cui H, Ning Y (2018) Dose reduction of risperidone and olanzapine can improve cognitive function and negative symptoms in stable schizophrenic patients: A single-blinded, 52-week randomized controlled study. Journal of Psychopharmacology 32: 524-532. [crossref]