
Functional Impact of Osteosuture in Medial Bilateral Clavicular Physeal Fracture in Teenagers

DOI: 10.31038/JNNC.2020341

Abstract

Proximal physeal fracture of the medial clavicle is a rare injury specific to the immature skeleton. Several studies describe unilateral cases with posterior or anterior displacement and the complications that follow (vascular and mediastinal compression). Immediate diagnosis and management are necessary to avoid complications. The clinical diagnosis may be obvious or difficult: pain and swelling in the sternoclavicular joint area, sometimes a deformity, and focal tenderness. A chest X-ray may help, and a three-dimensional reconstructed computed tomography scan must be performed to evaluate the lesions before surgery. Imaging is useful to confirm and specify the diagnosis and the displacement. After reviewing the literature on unilateral clavicular physeal fracture, we conclude that the ideal management of these injuries has not been well described. We present 4 cases of proximal physeal fracture of the medial clavicle in 2 male teenagers with bilateral displacement, posterior in one and asymmetric in the other. Open reduction with osteosuture using non-resorbable suture was performed. At one-year follow-up, both patients had fully recovered without any functional impairment or complaints. According to our cases and the literature, this management of proximal physeal fracture of the medial clavicle in children shows excellent results. The purpose of this study is to evaluate the functional impact of osteosuture in medial bilateral clavicular physeal fracture in teenagers after a 1-year follow-up.

Introduction

Clavicular physeal fracture is an uncommon pediatric fracture [1-9], especially bilateral. The lesion is not the same as in adults, who sustain a sternoclavicular disjunction. In children it is a physeal fracture, included in the Salter and Harris classification [5,7,10]. The diagnosis, management and treatment differ entirely from those in adults. These injuries mostly occur during sports activities [8], with high-energy trauma, due to a direct force applied to the medial clavicle or an indirect force on the shoulder [1,3,6,9]. The clinical diagnosis may be obvious or difficult [8,10]: pain and swelling in the sternoclavicular joint area [1], sometimes a deformity, focal tenderness and clinical instability [2]. Therefore X-ray and CT scan must be performed to confirm the diagnosis [1,3-6,10]. Three-dimensional reconstructed computed tomography scans may help to evaluate the lesions before surgery [2,3]. Several complications [8,10], such as tracheal or vascular compression [11,12], revealed by dyspnea, dysphagia or odynophagia [1,5], may be observed. Because of the risk of complications in retrosternal displacement of the medial clavicular metaphysis [8,9], surgical treatment has to be performed [4,5,10,13]. The purpose of this treatment differs from that in adults: the primary instability caused by the fracture resolves with the osteosuture and bone healing, whereas in adults an arthrodesis is necessary to achieve definitive stability of the sternoclavicular joint. The aim of this report is to assess the functional impact of medial bilateral clavicular physeal fracture after a 1-year follow-up. This report describes 4 cases of proximal physeal fracture of the medial clavicle in two teenage boys treated surgically: one suffered a bilateral posterior displacement, and the other an asymmetric displacement.

Materials and Methods

In our center, we received 4 cases of proximal physeal fracture of the medial clavicle in two teenage boys. The first, a 13-year-old skier, presented a bilateral proximal physeal fracture of the medial clavicle with posterior displacement. He complained of dysphonia, dysphagia and dizziness. Initial radiography of the right clavicle was suspicious. A CT scan was performed, finding a left physeal fracture with posterior displacement (Salter and Harris I) and a right physeal fracture with posterior displacement (Salter and Harris II) associated with a 4-cm hematoma. The second, a 15-year-old boy who sustained a high-energy trauma, presented a bilateral proximal physeal fracture of the medial clavicle with asymmetric displacement. A chest X-ray was performed and was not contributory (Figure 1). The CT scan showed a left physeal fracture with posterior displacement (Salter and Harris I) and a right physeal fracture with anterior displacement (Salter and Harris I). The diagnosis of the right injury was uncertain but was confirmed intraoperatively by the clinical instability (Figure 2).

Figure 1: Chest X-Ray.

Figure 2: Frontal and three-dimensional reconstructed computed tomography scans confirm the diagnosis of a left physeal fracture (Salter and Harris I) and a right physeal fracture (Salter and Harris II) of the medial clavicular physis.

Surgical Technique: Surgical treatment of these injuries was performed under general anesthesia after consent of the patients and their legal representatives. The patient lay in the supine position, arms stretched along the body. After asepsis, draping covering both clavicles was carried out (Figure 3). An arcuate incision centered on the right sternoclavicular joint was performed first. The incision was then continued over the defect to the physis and then to the joint while hemostasis was achieved. Once the lesions were exposed after dissection, the right physeal fracture with posterior displacement was confirmed as seen on the tomography scan (Figure 2). The clavicle was reduced using a bone hook. Stability was then reassessed and the periosteal incision was closed with three non-resorbable simple stitches of Mersuture® 1 to perform the osteosuture (ORIF: Open Reduction and Internal Fixation) (Figure 4). A drain was necessary and closure was achieved with 5-0 rapidly resorbable sutures. The same procedure was carried out on the other side. No vascular complication was noticed during the intervention. After surgery, the patient was kept in a shoulder immobilizer for approximately 24 hours, only as post-operative care to reduce pain. The Disabilities of the Arm, Shoulder and Hand (DASH) score can be used to assess progress during recovery [14]. In the DASH score [14], three modules are evaluated (a 30-question module, a sports-activities module and a musical-activities module), graded from 0 (disability) to 100 (full recovery). To assess the functional impact of medial bilateral clavicular physeal fracture after a 1-year follow-up, we used the DASH score [14], the resumption of sports activities and the healing of the incisions (Figures 5-7).

Figure 3: Drawing of the incision and intraoperative views showing the different steps.

Figure 4: Schematic view of the osteosuture.

Figure 5: Non-contributory chest X-ray.

Figure 6: Preoperative three-dimensional reconstructed computed tomography scans show a left physeal fracture (Salter and Harris I) and a right anterior physeal fracture (Salter and Harris I) of the medial clavicular physis.

Figure 7: Three-dimensional reconstructed computed tomography scans at 6 weeks post-operative.

Results

At one-year follow-up, both patients had fully recovered without any functional impairment or complaints. The DASH score (French National Authority for Health version) reached 100/100 [14]. The interruption of sports activities lasted at least 3 months. The scars on the clavicles were thin and caused no problems (Figure 8).

Figure 8: Scars at 4 months post-operative.

Discussion

The primary purpose of this study is to evaluate the functional prognosis of a displaced physeal clavicular fracture after surgical treatment. Clavicular physeal fracture concerns the pediatric population, mostly teenagers, with a mean age of 13 years (range 0-23) (Table 1). It usually occurs during sports activities after a direct fall on the shoulder [2-5]. Initially, swelling and pain in the area of the sternoclavicular joint may be noticed, with limitation of shoulder movements and a protective attitude of the injured upper limb [6,7].

Table 1: Review of the literature.

| Author, year | n | Age | Displacement | Treatment | Complications |
|---|---|---|---|---|---|
| Gobet et al. 2004 | 3 | 6-10 | Ant | 3 ORIF (Osteosuture) | Dysphagia (2) |
| | 3 | 8-15 | Post | 2 closed reduction + 1 ORIF (Osteosuture) | Dysphagia (2) |
| Laffosse et al. 2010 | 13 | 15-20 | Post | 13 ORIF (5 failures of closed reduction; different techniques) | Dysphagia (3) |
| Tennent et al. 2012 | 7 | 14-19 | Post | 7 ORIF (Osteosuture) | Dysphagia (2) / Dyspnea (6) |
| Garg et al. 2012 | 1 | 12 | Post | 1 ORIF (Osteosuture) | X |
| Gil-Albarova et al. 2012 | 3 | 11-13 | Ant | 2 ORIF (Osteosuture) + 1 Gilchrist | X |
| | 1 | 11 | Post | 1 closed reduction | X |
| Lee et al. 2014 | 20 | 13-18 | Post | 2 closed reduction + 18 ORIF (Osteosuture) (2 failures of closed reduction) | Mediastinal compression (6) (dysphagia, odynophagia) |
| Ozer et al. 2014 | 1 | 16 | Post | 1 closed reduction | Dyspnea; left brachiocephalic vein compression |
| Tepolt et al. 2014 | 6 | 7-17 | Post | 6 ORIF (Osteosuture) (2 failures) | Dysphagia + dyspnea |
| Kassé et al. 2016 | 3 | 0-17 | Ant | 1 ORIF (Osteosuture) + 2 orthopedic | X |
| | 3 | 16-19 | Post | 3 ORIF (1 crossed pins + 1 excision of internal 1/3 of clavicle with osteosuture + 1 osteosuture) | Odynophagia (1); vascular compression (1) |
| Beckmann et al. 2016 | 1 | 15 | Post | 1 ORIF (1 failure of closed reduction) | X |
| Elmekkaoui et al. 2011 | 1 | 16 | Ant (Salter II) | 1 ORIF (Osteosuture + 1 pin) | X |
| Deganello et al. 2012 | 1 | 13 | Post | 1 ORIF (Osteosuture) | X |
| Emms et al. 2002 | 1 | 23 | Post (Salter II) | Excision of the first rib | Subclavian vein compression |

According to the literature, many patients presented immediate complications such as dysphagia [7,12], dyspnea or vascular compression [9,13,15,16]. One of our patients complained of dyspnea and dysphonia, which resolved after surgical treatment. To avoid complications involving the retrosternal structures, caused either by the unstable fragment or by callus formation, immediate surgical treatment has to be attempted [1,11]. In adults as in children, the risk of complications is the same, arising from the displacement and the compression.

As several authors have described [7,11,12], this kind of injury can sometimes be missed initially. Recurrence of pain, a subtle initial clinical examination and inadequate imaging can lead to a delayed diagnosis [12]. A shortening of the acromioclavicular distance may help to diagnose the fracture but is less specific when both clavicles are injured. The position of the clavicular epiphysis has to be identified to specify the diagnosis. Clinical instability can reveal a reduced medial clavicular physeal fracture despite a normal tomography scan. Despite the delayed diagnosis, none of their patients presented any functional complication [7,12]. Intraoperatively, the diagnosis can be more accurate than the initial imaging could reveal [7]. The difference between dislocation and physeal fracture can be determined intraoperatively [7], and the type of fracture in the Salter and Harris classification [2,5,7] can be adjusted during surgery. Surgery is the only way to confirm the definitive diagnosis and allows the most appropriate treatment for the patient. Many unilateral cases are described in the literature with diverse treatments: non-operative [15], closed reduction [6,7,13] or ORIF with osteosuture [2-5,7]. Siebenmann published a case series and literature review on the management of epiphysiolysis type Salter and Harris I of the medial clavicle with posterior displacement [1]. He recommends open reduction and internal fixation (ORIF) of injuries with posterior displacement. The term "epiphysiolysis" [1] is ill-suited to characterize the lesion, because there is no lysis of the physis but a traumatic injury of the physis, as described by Salter and Harris. In our cases, open reduction and osteosuture were performed bilaterally, even though in one of them the displacement was anterior. This is an unstable lesion, so orthopedic (non-operative) treatment is inadequate. This technique was used to avoid an asymmetric result and to obtain a good esthetic result. This treatment is easy to perform, affordable (mostly surgical suture material) and gives excellent results. It appears important to evaluate the functional prognosis of this injury at follow-up [7,13]. Our patients reached a total of 100/100 on the DASH score after 1-year follow-up [14]. Most patients return to sports activities between 3 months and 1 year of recovery [1,3,4] without any reported complication. Esthetic complications such as hypertrophic or keloid scars can occur, for which surgical revision can be useful [7,15]. None of our patients complained of their scars.

Conclusion

This study and literature review demonstrate that prompt surgical treatment of bilateral clavicular physeal fracture with anterior or posterior displacement has to be performed. We highly recommend ORIF with osteosuture using non-resorbable sutures to avoid sequelae and to allow full recovery with resumption of sports activities. A thoracic three-dimensional reconstructed computed tomography scan has to be performed to define the lesion. This diagnosis is underreported because the fracture can be occult on imaging. Therefore, all skeletally immature patients with suspected sternoclavicular joint injury have to be examined carefully, especially for signs of complications such as vascular or mediastinal compression. The management of physeal fracture of the medial clavicle differs entirely from the management of sternoclavicular disjunction in the adult population: the diagnosis, treatment and recovery are specific. The primary instability caused by the fracture resolves with the osteosuture and bone healing. In adults, an arthrodesis is necessary to achieve definitive stability of the sternoclavicular joint.

References

  1. Siebenmann C, Ramadani F, Barbier G, Gautier E, Vial P (2018) Epiphysiolysis Type Salter I of the Medial Clavicle with Posterior Displacement: Case Series and Review of the Literature. Case Rep Orthop [crossref]
  2. Beckmann N, Crawford L (2016) Posterior sternoclavicular Salter-Harris fracture-dislocation in a patient with unossified medial clavicle epiphysis. Skeletal Radiol 45: 1123-7. [crossref]
  3. Deganello A, Meacock L, Tavakkolizadeh A, Sinha J, Elias DA (2012) The value of ultrasound in assessing displacement of a medial clavicular physeal separation in an adolescent. Skeletal Radiol [crossref]
  4. El Mekkaoui MJ, Sekkach N, Bazeli A, Faustin JM (2011) Proximal clavicle physeal fracture -separation mimicking an anterior sterno-clavicular dislocation. Orthop Traumatol Surg Res 97: 349-352.
  5. Garg S, Alshameeri ZA, Wallace WA (2012) Posterior sternoclavicular joint dislocation in a child: a case report with review of literature. J Shoulder Elbow Surg 21: 11-16. [crossref]
  6. Gil-Albarova J, Rebollo-González S, Gómez-Palacio VE, Herrera A (2013) Management of sternoclavicular dislocation in young children: considerations about diagnosis and treatment of four cases. Musculoskelet Surg 97: 137-143. [crossref]
  7. Gobet R, Meuli M, Altermatt S, Jenni V, Willi UV (2004) Medial clavicular epiphysiolysis in children: the so-called sterno-clavicular dislocation. Emerg Radiol 10: 252-255. [crossref]
  8. Laffosse J-M, Espié A, Bonnevialle N, Mansat P, Tricoire J-L, et al. (2010) Posterior dislocation of the sternoclavicular joint and epiphyseal disruption of the medial clavicle with posterior displacement in sports participants. J Bone Joint Surg Br 92: 103-109. [crossref]
  9. Tepolt F, Carry PM, Heyn PC, Miller NH (2014) Posterior sternoclavicular joint injuries in the adolescent population: a meta-analysis. Am J Sports Med 42: 2517-2524.
  10. Chaudhry S (2015) Pediatric Posterior Sternoclavicular Joint Injuries. J Am Acad Orthop Surg 23: 468-475. [crossref]
  11. Lee JT, Nasreddine AY, Black EM, Bae DS, Kocher MS (2014) Posterior Sternoclavicular Joint Injuries in Skeletally Immature Patients: J Pediatr Orthop 34: 369-375. [crossref]
  12. Özer UE, Yalçin MB, Kanberoglu K, Bagatur AE (2014) Retrosternal displacement of the clavicle after medial physeal fracture in an adolescent: MRI. J Pediatr Orthop B 23: 375-378. [crossref]
  13. Tennent TD, Pearse EO, Eastwood DM (2012) A new technique for stabilizing adolescent posteriorly displaced physeal medial clavicular fractures. J Shoulder Elbow Surg 21: 1734-1739. [crossref]
  14. DASH Score. 2000. Available at: https://www.s-f-t-s.org/images/stories/documentations/EPAULE_SCORE_DASH.pdf. Accessed November 15, 2019.
  15. Kassé AN, Mohamed Limam SO, Diao S, Sané JC, Thiam B, et al. (2016) [Fracture-separation of the medial clavicular epiphysis: about 6 cases and review of the literature] Pan Afr Med J 25: 19 [crossref]
  16. Emms NW, Morris AD, Kaye JC, Blair SD (2002) Subclavian vein obstruction caused by an unreduced type II Salter Harris injury of the medial clavicular physis. J Shoulder Elbow Surg 11: 271-273.

A Critical Mathematical Review on Protein Sequence Comparison Using Physio-Chemical Properties of Amino Acids

DOI: 10.31038/JMG.2020332

Abstract

This review tries to list the maximum number of physical and chemical properties of amino acids that are used, directly or indirectly, in protein sequence comparison. Next it sums up the different types of methodologies used so far in protein sequence comparison based on physio-chemical properties of amino acids. It also examines all the methods critically, with mathematical precision. Finally, it points out how to modify the methods in case they are not sound, and suggests some possible open problems.

Purpose

The purpose of the review is threefold: first, to highlight the different types of methodologies used so far in protein sequence comparison based on physio-chemical properties of amino acids; second, to find out whether there are any mathematical discrepancies in any of the methodologies and, if so, to suggest a proper way to make them sound and workable; lastly, to suggest some novel methods for protein sequence comparison based on physio-chemical properties of amino acids.

Pre-Requisites

To begin with, it may be mentioned that, in the case of genome sequences, owing to bio-chemical properties the nucleotides are classified into the following groups: (R/Y) [Purine/Pyrimidine], (M/K) [Amino/Keto] and (W/S) [Weak/Strong H-bonds], where R = (A, G) and Y = (C, T); M = (A, C) and K = (G, T); W = (A, T) and S = (C, G). Representations based on such classified groups are obtained for genome sequences, and methodologies for their comparison are developed accordingly. But there is no method of comparison using the bio-chemical properties directly, because they are not sufficient for the purpose.
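A minimal Python sketch of this recoding (the helper name is ours, not from any cited method):

```python
# The three biochemical pairings of nucleotides described above.
RY = {"A": "R", "G": "R", "C": "Y", "T": "Y"}  # purine / pyrimidine
MK = {"A": "M", "C": "M", "G": "K", "T": "K"}  # amino / keto
WS = {"A": "W", "T": "W", "C": "S", "G": "S"}  # weak / strong H-bonds

def classify(seq, mapping):
    """Recode a nucleotide sequence into a two-letter classified alphabet."""
    return "".join(mapping[base] for base in seq)

print(classify("ATGC", RY))  # -> RYRY
```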

To discuss similar aspects of protein sequences, we must first understand what we actually mean by a protein sequence. In fact, by a protein sequence we mean the primary structure of a protein. Externally, a protein's primary structure is a sequence drawn from the 20 amino acids (peptide residues): a polypeptide chain. The residues are linked together, like the compartments of a train, by what is called a peptide bond. As sequences, proteins' primary structures differ externally in the number and relative positions of the residues in the chain. So to understand protein sequences, we must first understand the structure of amino acids, given below:

Structure of Amino Acids


The α carbon is joined on the left by an amino group and on the right by a carboxyl (acid) group; this justifies the name amino acid. On top it is connected to an R group, which gives the side chain, and below it is joined to an H atom. This is a three-dimensional structure formed by the four vertices of a tetrahedron. The structure without R is called the backbone structure of the amino acid. By a protein sequence we mean the sequence of the backbone structures of its amino acids.

Mechanism of the Process


Individual amino acids are linked together, one after another. An -OH group is removed from the first amino acid and an H is removed from the next one linked to the first; as a whole, a water molecule is removed. The exposed broken bonds on the two amino acids are then attached together, producing a linkage called a peptide bond. It is a covalent bond. All amino acids have the same backbone structure; they differ only in having different R groups (side chains), as given below:

Amino Acids with Hydrocarbon R-groups (Six)


Amino Acids with Neutral R-Groups (Seven)

Seven of the twenty amino acids that make up proteins have neutral R-groups:


Amino Acids with Basic or Acidic R-Groups (Seven)

Of the twenty amino acids that make up proteins, six have acidic or basic R-groups; glycine may be taken in this group also, making seven.


Alternatively, glycine may be taken along with the six members of the first group.

Amino acids can be broadly classified into two general groups based on the properties of the "R" group in each amino acid: polar or non-polar. Polar amino acids have "R" groups that are hydrophilic, meaning that they seek contact with aqueous solutions; they may be positively charged (basic) or negatively charged (acidic). Non-polar amino acids are the opposite (hydrophobic), meaning that they avoid contact with liquid. Aliphatic amino acids are those non-polar amino acids that contain an aliphatic side chain. Aromatic amino acids have an aromatic ring in the side chain. Details of the classifications are shown in the following Venn diagram:

[Venn diagram: polar/non-polar, acidic/basic, aliphatic and aromatic classifications of the amino acids]

List of values of some of the Physical properties of amino acids:

| Amino Acid | Abb. | Sym. | Relative Dist. (RD) | Side-chain Mass | Specific Volume | Residue Volume | Residue Wt | Mole Vol |
|---|---|---|---|---|---|---|---|---|
| Alanine | Ala | A | .2227 | 15 | .64 | 43.5 | 71.08 | 31 |
| Cysteine | Cys | C | 1.000 | 47 | .74 | 60.6 | 103.14 | 55 |
| Methionine | Met | M | .1882 | 75 | .70 | 77.1 | 131.19 | 105 |
| Proline | Pro | P | .2513 | 41 | .63 | 60.8 | 97.12 | 32.5 |
| Valine | Val | V | .1119 | 43 | .76 | 81 | 99.13 | 84 |
| Phenylalanine | Phe | F | .2370 | 91 | .86 | 91.3 | 147.17 | 132 |
| Isoleucine | Ile | I | .1569 | 57 | .90 | 107.5 | 113.16 | 111 |
| Leucine | Leu | L | .1872 | 57 | .90 | 107.5 | 113.16 | 111 |
| Tryptophan | Trp | W | .4496 | 130 | .75 | 105.1 | 186.21 | 170 |
| Tyrosine | Tyr | Y | .1686 | 107 | .77 | 121.3 | 163.18 | 136 |
| Aspartic acid | Asp | D | .3924 | 59 | .71 | 123.6 | 115.09 | 54 |
| Lysine | Lys | K | .1739 | 72 | .68 | 144.1 | 128.17 | 119 |
| Asparagine | Asn | N | .2513 | 58 | .62 | 78.0 | 114.10 | 56 |
| Arginine | Arg | R | .0366 | 100 | .66 | 90.4 | 156.19 | 124 |
| Serine | Ser | S | .2815 | 31 | .60 | 74.1 | 87.08 | 32 |
| Glutamic acid | Glu | E | .1819 | 73 | .67 | 93.9 | 129.12 | 83 |
| Glycine | Gly | G | .3229 | 1 | .82 | 108.5 | 57.05 | 3 |
| Histidine | His | H | .0201 | 81 | .70 | 111.5 | 137.14 | 96 |
| Glutamine | Gln | Q | .0366 | 72 | .67 | 99.3 | 128.13 | 85 |
| Threonine | Thr | T | 0 | 45 | | 72.5 | 101.11 | 61 |

List of values of some Chemical properties of amino acids:

| Amino Acid | Abb. | Sym. | pKa (-COOH) | pKa (-NH3+) | Hydropathy Index h | Hydrophobicity | Hydrophilicity | Isoelectric Point pI | Polar Requirement |
|---|---|---|---|---|---|---|---|---|---|
| Alanine | Ala | A | 2.34 | 9.69 | 1.8 | -0.4 | 1.8 | 6.01 | 7.0 |
| Cysteine | Cys | C | 1.71 | 9.69 | 2.5 | 1.8 | -4.5 | 5.07 | 4.8 |
| Methionine | Met | M | 2.18 | 9.21 | 1.9 | -0.7 | -3.5 | 5.74 | 5.3 |
| Proline | Pro | P | 1.41 | 10.60 | -1.6 | -0.8 | -3.5 | 6.48 | 6.6 |
| Valine | Val | V | 2.32 | 9.62 | 4.2 | -1.6 | 2.5 | 5.97 | 5.6 |
| Phenylalanine | Phe | F | 1.83 | 9.13 | 2.8 | -4.2 | -3.5 | 5.48 | 5.0 |
| Isoleucine | Ile | I | 2.36 | 9.60 | 4.5 | 3.8 | -3.5 | 6.02 | 4.9 |
| Leucine | Leu | L | 2.36 | 9.60 | 3.8 | 4.5 | -3.5 | 5.98 | 4.9 |
| Tryptophan | Trp | W | 2.38 | 9.39 | -0.9 | 1.9 | -0.4 | 5.89 | 5.2 |
| Tyrosine | Tyr | Y | 2.20 | 9.11 | -1.3 | 2.8 | 3.2 | 5.66 | 20.5 |
| Aspartic acid | Asp | D | 2.09 | 9.82 | -3.5 | -1.3 | 4.5 | 2.77 | 13 |
| Lysine | Lys | K | 2.18 | 8.95 | -3.9 | -0.9 | 3.9 | 9.74 | 10.1 |
| Asparagine | Asn | N | 2.02 | 8.80 | -3.5 | -3.5 | 1.9 | 5.41 | 10 |
| Arginine | Arg | R | 2.17 | 9.04 | -4.5 | -3.5 | 2.8 | 10.76 | 9.1 |
| Serine | Ser | S | 2.19 | 9.15 | -0.8 | -3.5 | -1.6 | 5.68 | 7.5 |
| Glutamic acid | Glu | E | 2.19 | 9.67 | -3.5 | -3.5 | -0.8 | 3.22 | 12.5 |
| Glycine | Gly | G | 2.34 | 9.60 | -0.4 | -3.9 | -0.7 | 5.97 | 7.9 |
| Histidine | His | H | 1.82 | 9.17 | -3.2 | -4.5 | -0.9 | 7.59 | 8.4 |
| Glutamine | Gln | Q | 2.17 | 9.13 | -3.5 | -3.2 | -1.3 | 5.65 | 8.6 |
| Threonine | Thr | T | 2.63 | 10.43 | -0.7 | 2.5 | 4.2 | 5.87 | 6.6 |
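The methods reviewed below typically start by mapping each residue to one or more of these property values, turning a protein sequence into a numeric series. A minimal sketch, abbreviating the hydropathy dictionary to a few residues whose values appear in the table above:

```python
# Hydropathy index values for a few residues, taken from the table above;
# a full implementation would cover all 20 amino acids and more properties.
HYDROPATHY = {"A": 1.8, "C": 2.5, "M": 1.9, "V": 4.2, "G": -0.4, "L": 3.8}

def encode(seq, prop):
    """Map a protein sequence to the numeric series of a chosen property."""
    return [prop[aa] for aa in seq]

print(encode("AVGL", HYDROPATHY))  # [1.8, 4.2, -0.4, 3.8]
```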

Introduction

We consider protein sequence comparison based on physio-chemical properties of amino acids sequentially, as follows. First we consider comparison based on classified groups of amino acids. The main classified groups of amino acids based on physio-chemical properties that have been used so far in protein sequence comparison are the following:

(i)(a) 3-group classification [1]: dextrorotatory E, A, I, K, V; levorotatory N, C, H, L, M, F, P, S; irrotational G, Y, R, D, Q.

(i)(b) 3-group classification [1]: hydrophobic amino acids H = {C, M, F, I, L, V, W, Y}; hydrophilic amino acids P = {N, Q, D, E, R, K, H}; neutral amino acids N = {A, G, T, P, S}.

(ii)(a) 4-group classification [2]: strongly hydrophilic (POL) R, D, E, N, Q, K, H; strongly hydrophobic (HPO) L, I, V, A, M, F; weakly hydrophilic or weakly hydrophobic (Ambi) S, T, Y, W; special (none) C, G, P.

(ii)(b) 4-group classification [3]: hydrophobic (H) non-polar A, I, L, M, F, P, W, V; negative polar D, E; uncharged polar N, C, Q, G, S, T, Y; positive polar R, H, K.

(iii) 5-group classification [4]: I = {C, M, F, I, L, V, W, Y}; A = {A, T, H}; G = {G, P}; E = {D, E}; K = {S, N, Q, R}.

(iv)(a) 6-group biological classification based on side-chain properties: aliphatic side chain G, A, V, L, I; organic-acid side chain D, E, N, Q; sulphur-containing side chain M, C; alcohol side chain S, T, Y; organic-base side chain R, K, H; aromatic side chain F, W, P.

(iv)(b) 6-group theoretical classification [5]: I = {I}; L = {L, R}; A = {V, A, G, P, T}; E = {F, C, Y, Q, N, H, E, D, K}; M = {M, W}; S = {S}.
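As an illustration of how such classifications are used, here is a minimal Python sketch (function and variable names are ours, not from the cited papers) that reduces a protein sequence to the 3-group alphabet of classification (i)(b):

```python
# 3-group classification (i)(b): hydrophobic (H), hydrophilic (P), neutral (N).
GROUPS_3 = {
    "H": set("CMFILVWY"),
    "P": set("NQDERKH"),
    "N": set("AGTPS"),
}

def reduce_sequence(seq, groups):
    """Replace each residue by the label of its classified group."""
    lookup = {aa: label for label, members in groups.items() for aa in members}
    return "".join(lookup[aa] for aa in seq)

print(reduce_sequence("MKVLAG", GROUPS_3))  # -> HPHHNN
```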

Use of Classified Groups in Protein Sequence Comparison

Representations based on such classified groups of amino acids of different cardinalities, and corresponding methodologies, have been tried in several papers [6-8]. Obviously there was a need to develop a unified method of comparison of protein sequences based on classified groups of all cardinalities; this has been done in [9].

Next we consider protein sequence comparison based on pairs of classified groups of different cardinalities. Such comparison based on a pair of classified groups of cardinality three is found in [10]. The classifications are those given in (i)(a) and (i)(b). While the classified group (i)(a) of order three, based on the chirality property, is clear, the one based on the hydrophobic and hydrophilic properties, (i)(b), is doubtful. In fact, if we compare (i)(b) with (ii)(a), it is seen that S, T, Y, W belong to the ambiguous class; nothing definite can be said about their positions in POL or HPO. Again, C certainly belongs to neither of the classes POL and HPO, but in that paper C is placed in the HPO class. Also, no sufficient reference is given in support of class (i)(b); it should be changed accordingly. Further, the methodology is not sound; it is a mere trial-and-error policy and has to be improved. A proper methodology may be that given in [11], or 2D FFT [12] under the ICD method modified accordingly.
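A rough sketch of the ICD idea (our simplification; the windowing and length normalization of [12] are omitted):

```python
import numpy as np

def icd_descriptor(signal):
    """Differences between adjacent DFT power-spectrum coefficients."""
    power = np.abs(np.fft.fft(signal)) ** 2
    return np.diff(power)

def icd_distance(x, y):
    """Euclidean distance between ICD descriptors (crude truncation to
    equal length; real implementations handle length more carefully)."""
    n = min(len(x), len(y))
    return np.linalg.norm(icd_descriptor(x[:n]) - icd_descriptor(y[:n]))
```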

Lastly we consider protein sequence comparison based directly on physio-chemical properties of amino acids. There are several papers based on representations under physio-chemical properties of amino acids. In the article [13], the authors first outline a 2D graphical representation on a unit circle based on the physicochemical properties of amino acids, mainly the hydrophobicity properties. This gives the two-dimensional coordinates (x, y) on the circle. Next they consider relative weight, a physical property of amino acids; based on these values they determine the z-coordinates, and a 3D representation of amino acids is obtained, consisting of 20 distinct three-dimensional points. With the help of tensors of moments of inertia the protein sequences are compared. It may be noted that the 3D representation is a degenerate representation; it would be better if the corresponding non-degenerate representation could be made before calculating the descriptors.

The paper [14] is based on purely chemical properties of amino acids: pKa (-COOH) and pKa (-NH3+). The pKa values for the terminal amino acid groups COOH and NH3+ give two complementary properties of amino acids. These are of major importance in biochemistry, as they may be used to construct a protein map and to determine the activity of enzymes.

In the paper [15], two indices of physicochemical properties of the 20 amino acids, hydrophobicity value and isoelectric point, are considered for a graphical representation of protein sequences. The graphical representation has no degeneracy. The descriptor is calculated from the ratio between the distance and the cosine of the correlation angle of the two vectors corresponding to two curves. Similarities/dissimilarities of 9 different ND5 proteins are obtained, and the results are compared with those obtained under ClustalW using correlation and significance analysis; the present results show improvements.

A novel position-feature-based model [16] for protein sequences is developed based on physicochemical properties of the 20 amino acids and the measure of graph energy. The physio-chemical properties considered give the pI and pKa values of the amino acids. The method obtains a characteristic B-vector. Afterwards, the relative entropy of the B-vectors representing the sequences is applied to measure their similarity/dissimilarity. The numerical results obtained in this study show that the proposed method leads to meaningful results compared with competitors such as ClustalW.

Side-chain mass and hydrophobicity of the 20 native amino acids are used to obtain coordinates in a 2D Cartesian frame [17]. The graphic curve is called the "2D-MH" curve, where "M" stands for the side-chain mass of each constituent amino acid and "H" for its hydrophobicity value. The graphic curve thus generated is a one-to-one correspondence without circuit or degeneracy. A metric is used for the "evolutionary distance" of a protein from one species to another. It is anticipated that the presented graphic method may become a useful tool for large-scale analysis of protein sequences.

A 2D graphical representation of proteins is outlined based on a 2D map of amino acids [18]. The x and y coordinates are taken as the pKa (-COOH) and pKa (-NH3+) values respectively. The plot of the difference between the (x, y) coordinates of two graphical representations of proteins allows visual inspection of protein alignment. The approach is explained on segments of a protein of the yeast Saccharomyces cerevisiae.

A 2D graphical representation of protein sequences based on six physicochemical properties of the 20 amino acids is obtained in [19]; the properties are relative molecular weight, volume, surface area, specific volume, pKa (-COOH) and pKa (-NH3+), and the relationships among them. Moreover, a specific vector can be obtained from the graphical curve of a protein sequence to calculate the distance between two sequences. This approach avoids considering differences in the lengths of protein sequences. Finally, using this method the similarities/dissimilarities of ND5 and 36 PDs are obtained. The analysis shows better results compared with ClustalX2.

In the article [20], a new mapping method for protein sequences is developed by considering 12 major physicochemical properties of amino acids: p1, chemical composition of the side chain; p2, polar requirement; p3, hydropathy index; p4, isoelectric point; p5, molecular volume; p6, polarity; p7, aromaticity; p8, aliphaticity; p9, hydrogenation; p10, hydroxythiolation; p11, pK1 (-COOH); p12, pK2 (-NH3+). By applying PCA, the percentages of amino acids along the 12 principal axes are obtained. Accordingly, a simple 2D representation of the protein sequences is derived, and a 20D vector is obtained for each sequence as its descriptor. The method is first validated with nine ND6 proteins; the similarity/dissimilarity matrix for the nine ND6 proteins correctly reveals their evolutionary relationship. Next, another application is made to the cluster analysis of HA genes of influenza A (H1N1) isolates; the results are consistent with the known evolution of the H1N1 virus.

A 2D graphical representation of protein sequences based on six physicochemical properties of amino acids is outlined in [21]. The properties are relative molecular mass (Mr), pI, solubility [g/100 g, 25°C], specific rotation [α]D25 (5N HCl), hydropathy index, and melting point (°C). The numerical characterization of the protein graphs is given as the descriptor. It is useful for the comparative study of proteins and also encodes innate information about the structure of proteins. The coefficient of determination is taken as a new similarity/dissimilarity measure. Finally, the result is tested with the ND6 proteins of eight different species. The results show that the approach is convenient, fast, and efficient.

A powerful tool for protein classification is obtained in the form of a protein map [22]. It considers phylogenetic factors arising from amino acid mutations and also provides computational efficiency for huge amounts of data. Ten different amino acid physico-chemical properties are used: the chemical composition of the side chain, two polarity measures, hydropathy, isoelectric point, volume, aromaticity, aliphaticity, hydrogenation, and hydroxythiolation. The proposed method gives protein classification greater evolutionary significance at the amino acid sequence level.

In [23], a protein sequence is first converted into a 23-dimensional vector by considering three physicochemical property indices, pI, FH and Hp. Finally, based on the Euclidean distance, the similarities of the ND5 proteins of nine species are obtained. Also, to check the utility of the method, correlation analysis is provided to compare its results, and results based on other graphical representations, with ClustalW's.

A novel family of iterated function systems (IFS) is introduced using different physicochemical properties of amino acids, namely pK1, h, pK2 and pI [24]. This gives rise to a 2D graphical representation of protein sequences; a mathematical descriptor is then suggested to compare the similarities and dissimilarities of protein sequences from their 2D curves. Similarities/dissimilarities are obtained among sequences of the ND5 proteins of nine different species, as well as sequences of eight ND6 proteins. The phylogenetic tree of the nine ND5 proteins is constructed using fuzzy cluster analysis. By correlation analysis, the ClustalW results are compared with these results and other graphical-representation results to demonstrate the effectiveness of the approach.

A novel method to analyze the similarity/dissimilarity of protein sequences based on Principal Component Analysis-Fast Fourier Transformation (PCA-FFT) is proposed in [25]. The nine physio-chemical properties of amino acids considered are mW, hI, pK1, pK2, pI, S, cN, F(%) and vR. PCA is applied to transform protein sequences into time series, which are then changed to the frequency domain by applying the FFT. Comparison is done on the frequencies, expressed as complex numbers. The similarity/dissimilarity of 16 different ND5 protein sequences and 29 different spike protein sequences is studied, and correlation analysis is presented for comparison with other methods. It may be noted that, when comparing two complex sequences, the authors use the sum of the absolute values of the complex numbers. But this is mathematically wrong: there is no ordering in complex numbers, and a smaller absolute difference does not imply that the two sequences are nearer. Naturally, such a measure fails to contribute anything to phylogeny analysis. This could be avoided by applying the ICD (inter-coefficient distance) method, which has been used earlier in such situations. So the paper needs modification.

Lastly we mention work where a complex representation based on physio-chemical properties of amino acids is used. It may be mentioned that, based on the two properties volume and polarity, a multiple sequence alignment program, MAFFT, was developed [26], but no attempt was made to use the FFT in protein sequence comparison. A complex representation based on the properties of hydrophobicity and residue volume was given in [27], but no protein sequence comparison based on this representation was considered. A complex representation of amino acids based on the properties of hydrophilicity and residue volume is used in [28]; the representation is not the same as the earlier one [27]. In this paper, the represented sequence is transferred to the frequency domain by the Fourier transform. But the transformation is somewhat special, as the original sequence under consideration is a complex sequence, not a real one. The ICD method is modified accordingly for such a transformation, and with a suitable descriptor, protein sequence comparison is carried out using the Euclidean norm as the distance measure. Interestingly, the protein sequences are compared for both types of representations given in [27,28]. It is found that in the latter case the result is better. It proves that the property of hydrophilicity (polarity) is a better choice than hydrophobicity for protein sequence comparison.
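To make the general recipe of these graphical-representation methods concrete, here is a rough sketch loosely patterned on the hydropathy/isoelectric-point curve of [15]; the property dictionaries are abbreviated to a few residues from the tables above, and the descriptor is a simplification of the distance-over-cosine measure described there:

```python
import numpy as np

# Property values from the tables above, abbreviated for the sketch.
HYDROPATHY = {"A": 1.8, "G": -0.4, "V": 4.2, "L": 3.8, "K": -3.9}
PI         = {"A": 6.01, "G": 5.97, "V": 5.97, "L": 5.98, "K": 9.74}

def curve(seq):
    """Cumulative 2D curve: each residue contributes (hydropathy, pI)."""
    pts = np.array([[HYDROPATHY[a], PI[a]] for a in seq])
    return np.cumsum(pts, axis=0)

def dissimilarity(seq1, seq2):
    """Compare mean vectors of the two curves: distance / cos(angle)."""
    v1, v2 = curve(seq1).mean(axis=0), curve(seq2).mean(axis=0)
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.linalg.norm(v1 - v2) / cos

print(dissimilarity("AVGKL", "AVGLL"))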

Some Open Problems

a. Can the results of [27,28] be developed while avoiding complex representations? Are the conclusions the same?

b. Can the results be developed by taking only the hydrophobicity property and only the hydrophilicity property separately? Does the conclusion remain the same?

c. Is it possible to ascertain biologically the minimum number of physio-chemical properties that are most important? If so, a methodology should be developed using only those properties.

Conclusion

Protein sequence comparison based directly on the physio-chemical properties of amino acids is still an open area of research.

References

  1. Yu-hua Yao, Fen Kong, Qi Dai, Ping-an He (2013) A Sequence-Segmented Method Applied to the Similarity Analysis of Long Protein Sequence. MATCH Commun Math Comput Chem 70: 431-450.
  2. Wang J, Wang W (1999) A computational approach to simplifying the protein folding problem: Nat Struct Biol 6: 1033-1038. [crossref]
  3. Zu-Guo Yu, Vo Anh, Ka-Sing Lau (2004) Chaos game representation of protein sequences based on the detailed HP model and their multi-fractal and correlation analysis. Journal of Theoretical Biology 226: 341-348. [crossref]
  4. Chun Li, Lili Xing, Xin Wang (2007) 2-D graphical representation of protein sequences and its application to corona virus phylogeny. BMB Reports 41: 217-222. [crossref]
  5. Soumen Ghosh, Jayanta Pal, Bhattacharya DK (2014) Classification of Amino Acids of a Protein on the basis of Fuzzy set theory. International Journal of Modern Sciences and Engineering Technology (IJMSET) 1: 30-35.
  6. Chun Li, Lili Xing, Xin Wang (2007) 2-D graphical representation of protein sequences and its application to corona virus phylogeny. BMB Reports 41: 217-222. [crossref]
  7. Yusen Zhang, Xiangtian Yu (2010) Analysis of Protein Sequence similarity.
  8. Ghosh Pal SJ, Das S, Bhattacharya DK (2015) Differentiation of Protein Sequence Comparison Based on Biological and Theoretical Classification of Amino Acids in Six Groups. International Journal of Advanced Research in Computer Science and Software Engineering 5: 695-698.
  9. Soumen Ghosh, Jayanta Pal, Bansibadan Maji, Dilip Kumar Bhattacharya (2018) A sequential development towards a unified approach to protein sequence comparison based on classified groups of amino acids. International Journal of Engineering & Technology 7: 678-681.
  10. Yu-hua Yao, Fen Kong, Qi Dai, Ping-an He (2013) A Sequence-Segmented Method Applied to the Similarity Analysis of Long Protein Sequence. MATCH Commun Math Comput Chem 70: 431-450.
  11. Yu-hua Yao, Xu-ying Nan, Tian-ming Wang (2006) A new 2D graphical representation—Classification curve and the analysis of similarity/dissimilarity of DNA sequences. Journal of Molecular Structure: THEOCHEM 764: 101-108.
  12. Brian R King, Maurice Aburdene, Alex Thompson, Zach Warres (2014) Application of discrete Fourier inter-coefficient difference for assessing genetic sequence similarity. EURASIP Journal on Bioinformatics and Systems Biology 2014: 8. [crossref]
  13. Wenbing Hou, Qiuhui Pan, Mingfeng He (2016) A new graphical representation of protein sequences and its applications. Physica A 444: 996-1002.
  14. Jia Wen, Yu Yan Zhang (2009) A 2D graphical representation of protein sequence and its numerical characterization. Chemical Physics Letters 476: 281-286.
  15. Yuxin Liu, Dan Li, Kebo Lu, Yandong Jiao, Ping-An He (2013) P-H Curve, a Graphical Representation of Protein Sequences for Similarities Analysis. MATCH Commun Math Comput Chem 70: 451-466.
  16. Lulu Yu, Yusen Zhang, Ivan Gutman, Yongtang Shi, Matthias Dehmer (2017) Protein Sequence Comparison Based on Physicochemical Properties and the Position-Feature Energy Matrix. Sci Rep 7: 46237. [crossref]
  17. Zhi-Cheng Wu, Xuan Xiao, Kuo-Chen Chou (2010) 2D-MH: A web-server for generating graphic representation of protein sequences based on the physicochemical properties of their constituent amino acids –Journal of Theoretical Biology 267: 29-34.
  18. Milan Randic (2007) 2-D graphical representation of proteins based on physico-chemical properties of amino acids. Chemical Physics Letters 444: 176-180.
  19. Dandan Sun, Chunrui Xu, Yusen Zhang (2016) A Novel Method of 2D Graphical Representation for Proteins and Its Application. MATCH Commun Math Comput Chem 75: 431-446.
  20. Zhao-Hui Qi, Meng-Zhe Jin, Su-Li Li, Jun Feng (2015) A protein mapping method based on physicochemical properties and dimension reduction. Computers in Biology and Medicine 57: 1-7. [crossref]
  21. Yu-Hua Yao, Qi Dai, Ling Li, Xu-Ying Nan, Ping-An He, et al. (2009) Similarity/Dissimilarity Studies of Protein Sequences Based on a New 2D Graphical Representation.
  22. Chenglong Yu, Shiu-Yuen Cheng, Rong L He, Stephen ST Yau (2011) Protein map: An alignment-free sequence comparison method based on various properties of amino acids. Gene 486: 110-118. [crossref]
  23. Yan-ping Zhang, Ji-shuo Ruan, Ping-an He (2013) Analyzes of the similarities of protein sequences based on the pseudo amino acid composition. Chemical Physics Letters 590: 239-244.
  24. Tingting Ma, Yuxin Liu, Qi Dai, Yuhua Yao, Ping-an He (2014) A graphical representation of protein based on a novel iterated function system. Physica A 403: 21-28.
  25. Pengyao Ping, Xianyou Zhu, Lei Wang (2017) Similarities/dissimilarities analysis of protein sequences based on PCA-FFT. Journal of Biological Systems 25: 29-45.
  26. Katoh K, Misawa K, Kuma K, Miyata T (2002) MAFFT: a novel method for rapid multiple sequence alignment based on fast Fourier transform. Nucleic Acids Res 30: 3059-3066. [crossref]
  27. Changchuan Yin, Stephen ST Yau (2020) Numerical representation of DNA sequences based on genetic code context and its applications in periodicity analysis of genomes.
  28. Pal J, Maji B, Bhattacharya DK (2018) Protein sequence comparison under a new complex representation of amino acids based on their physio-chemical properties. International Journal of Engineering & Technology 7: 181-184.

Albedo Changes Drive 4.9 to 9.4°C Global Warming by 2400

DOI: 10.31038/ESCC.2020212

Abstract

This study ties increasing climate feedbacks to projected warming consistent with temperatures when Earth last had this much CO2 in the air. The relationship between CO2 and temperature in a Vostok ice core is used to extrapolate temperature effects of today’s CO2 levels. The results suggest long-run equilibrium global surface temperatures (GSTs) 5.1°C warmer than immediately “pre-industrial” (1880). The relationship derived holds well for warmer conditions 4 and 14 million years ago (Mya). Adding CH4 data from Vostok yields 8.5°C warming due to today’s CO2 and CH4 levels. Long-run climate sensitivity to doubled CO2, given Earth’s current ice state, is estimated to be 8.2°C: 1.8° directly from CO2 and 6.4° from albedo effects. Based on the Vostok equation using CO2 only, holding ∆GST to 2°C requires 318 ppm CO2. This means Earth’s remaining carbon budget for +2°C is estimated to be negative 313 billion tonnes. Meeting this target will require very large-scale CO2 removal. Lagged warming of 4.0°C (or 7.4°C when CH4 is included), starting from today’s 1.1°C ∆GST, comes mostly from albedo changes. Their effects are estimated here for ice, snow, sulfates, and cloud cover. This study estimates magnitudes for sulfates and for future snow changes. Magnitudes for ice, cloud cover, and past snow changes are drawn from the literature. Albedo changes, plus their water vapor multiplier, caused an estimated 39% of observed GST warming over 1975-2016. Estimated warming effects on GST by water vapor; ocean heat; and net natural carbon emissions (from permafrost, etc.), all drawn from the literature, are included in projections alongside ice, snow, sulfates, and clouds. Six scenarios embody these effects. Projected ∆GSTs on land by 2400 range from 2.4 to 9.4°C. Phasing out fossil fuels by 2050 yields 7.1°C. Ending fossil fuel use immediately yields 4.9°C, similar to the 5.1°C inferred from paleoclimate studies for current CO2 levels. Phase-out by 2050 coupled with removing 71% of CO2 emitted to date yields 2.4°C. At the other extreme, postponing peak fossil fuel use to 2035 yields +9.4°C GST, with more warming after 2400.

Introduction

The December 2015 Paris climate pact set a target of limiting global surface temperature (GST) warming to 2°C above "pre-industrial" (1750 or 1880) levels. However, study of past climates indicates that this will not be feasible unless greenhouse gas (GHG) levels, led by carbon dioxide (CO2) and methane (CH4), are reduced dramatically. Already, global air temperature at the land surface (GLST) has warmed 1.6°C since the 1880 start of NASA's record [1]. (Temperatures in this study are 5-year moving averages from NASA, Goddard Institute for Space Studies, in °C. The baseline is 1880 unless otherwise noted.) The GST has warmed by 2.5°C per century since 2000. Meanwhile, global sea surface temperature (GSST = (GST - 0.29 * GLST)/0.71) has warmed by 0.9°C since 1880 [2].
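Written out, the sea-surface series is back-calculated from the global and land series with a 71/29 ocean/land split (a direct transcription of the formula above):

```python
def sea_surface_anomaly(gst, glst):
    """GSST = (GST - 0.29 * GLST) / 0.71, per the formula above."""
    return (gst - 0.29 * glst) / 0.71

# With ~1.1 degC global and 1.6 degC land warming since 1880 (values used
# in this study), the implied sea-surface warming is ~0.9 degC:
print(round(sea_surface_anomaly(1.1, 1.6), 2))  # ~0.9
```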

The paleoclimate record can inform expectations of future warming from current GHG levels. This study examines conditions during ice ages and during the most recent (warmer) epochs when GHG levels were roughly this high, some lower and some higher. It strives to connect future warming derived from paleoclimate records with physical processes, mostly from albedo changes, that produce the indicated GST and GLST values.

The Temperature Record section examines Earth's temperature record over eons. Paleoclimate data from a Vostok ice core covering 430,000 years (430 ky) are examined. The relations among changes in GST relative to 1880 (hereafter "∆°C") and CO2 and CH4 levels in this era, colder than now, are estimated. These relations are quite consistent with the ∆°C-to-CO2 relation in eras warmer than now, 4 and 14 Mya. Overall climate sensitivity is estimated from them. Earth's remaining carbon budget to keep warming below 2°C is calculated next, based on the equations relating ∆°C to CO2 and CH4 levels in the Vostok ice core. That budget is far less than zero: it requires returning to the CO2 levels of 60 years ago.

The Feedback Pathways section discusses the major factors that lead from our present GST to the “equilibrium” GST implied by the paleoclimate data, including a case with no further human carbon emissions. This path is governed by lag effects deriving mainly from albedo changes and their feedbacks. Following an overview, eight major factors are examined and modeled to estimate warming quantities and time scales due to each. These are (1) loss of sulfates (SO4) from ending coal use; (2) snow cover loss; (3) loss of northern and southern sea ice; (4) loss of land ice in Antarctica, Greenland and elsewhere; (5) cloud cover changes; (6) water vapor increases due to warming; (7) net emissions from permafrost and other natural carbon reservoirs; and (8) warming of the deep ocean.

Particular attention is paid to the role that anthropogenic and other sulfates have played in modulating the GST increase in the thermometer record. Loss of SO4 and northern sea ice in the daylight season will likely be complete not long after 2050. Losses of snow cover, southern sea ice, land ice grounded below sea level, and permafrost carbon, plus warming the deep oceans, should happen later and/or more slowly. Loss of other polar land ice should happen still more slowly. But changes in cloud cover and atmospheric water vapor can provide immediate feedbacks to warming from any source.

In the Results section, these eight factors, plus anthropogenic CO2 emissions, are modeled in six emission scenarios. The spreadsheet model has decadal resolution with no spatial resolution. It projects CO2 levels, GSTs, and sea level rise (SLR) out to 2400. In all scenarios, GLST passes 2°C before 2040; it has already passed 1.5°C. The Discussion section lays out the implications of Earth's GST paths to 2400, implicit both in the paleoclimate data and in the development of specific feedbacks identified for quantity and time-path estimation. These, combined with a carbon emissions budget to hold GST to 2°C, highlight how crucial CO2 removal (CDR) is. CDR is required to go beyond what emissions reduction alone can achieve. Fifteen CDR methods are enumerated. A short overview of solar radiation management follows; it may be required to supplement ending fossil fuel use and large-scale CDR.

The Temperature Record

In a first approach, temperature records from the past are examined for clues to the future. Like causes (notably CO2 levels) should produce like effects, even when comparing eras hundreds of thousands or millions of years apart. As shown in Figure 1, Earth’s surface can grow far warmer than now, even 13°C warmer, as occurred some 50 Mya. Over the last 2 million years, with more ice, temperature swings are wider, since albedo changes – from more ice to less ice and back – are larger. For GSTs 8°C or warmer than now, ice is rare. Temperature spikes around 55 and 41 Mya show that the current one is not quite unique.

fig 1

Figure 1: Temperatures and Ice Levels over 65 Million Years [3].

Some 93% of global warming goes to heating Earth's oceans [4], which show a strong warming trend. Ocean heat absorption has accelerated from near zero in 1960: 4 zettajoules (ZJ) per year from 1967 to 1990, 7 from 1991 to 2005, and 10 from 2010 to 2016 [5]. 10 ZJ corresponds to 100 years of US energy use. The oceans now gain 2/3 as much heat per year as cumulative human energy use, enough to supply US energy use for 100 years [6] or the world's for 17 years. By 2011, Earth was absorbing 0.25% more energy than it emits, a 300 (±75) million MW heat gain [7]. Hansen deduced in 2011 that Earth's surface must warm enough to emit another 0.6 W m-2 of heat to balance absorption; the required warming is 0.2°C. The imbalance has probably increased since 2011 and is likely to increase further with more GHG emissions. Over the last 100 years (since 1919), GSTs have risen 1.27°C, including 1.45°C for the land surface (GLST) alone [1]. The GST warming rate from 2000 to 2020 was 0.24°C per decade, but 0.35°C per decade over the most recent decade [1,2]. At this rate, warming will exceed 2°C in 2058 for GST and in 2043 for GLST alone.

Paleoclimate Analysis

Atmospheric CO2 levels have risen 47% since 1750, including 40% since 1880 when NASA’s temperature records begin [8]. CH4 levels have risen 114% since 1880. CO2 levels of 415 parts per million (ppm) in 2020 are the highest since 14.1 to 14.5 Mya, when they ranged from 430 to 465 ppm [9]. The deep ocean then (over 400 ky) ranged around 5.6°C±1.0°C warmer [10] and seas were 25-40 meters higher [9]. CO2 levels were almost as high (357 to 405 ppm) 4.0 to 4.2 Mya [11,12]. SSTs then were around 4°C±0.9°C warmer and seas were 20-35 meters higher [11,12].

The higher sea levels in these two earlier eras tell us that ice then was gone from almost all of the Greenland (GIS) and West Antarctic (WAIS) ice sheets, which hold an estimated 10 meters (7 and 3.2 modeled) of SLR between them [13,14]. Other glaciers (chiefly in Arctic islands, the Himalayas, Canada, Alaska, and Siberia) hold perhaps 25 cm of SLR [15]. Ocean thermal expansion (OTE), currently ~1 mm/year [5], is another factor in SLR; this corresponds to the world ocean (to the bottom) currently warming by ~0.002 K yr-1. The higher sea levels 4 and 14 Mya indicate 10-30 meters of SLR that could only have come from the East Antarctic ice sheet (EAIS). This is 17-50% of the current EAIS volume. Two-thirds of the WAIS is grounded below sea level, as is 1/3 of the EAIS [16]. Those very areas (which are larger in the EAIS than in the WAIS) include the part of East Antarctica most likely to be subject to ice loss over the next few centuries [17]. Sediments from millions of years ago show that the EAIS had then retreated hundreds of kilometers inland [18].

CO2 levels now are somewhat higher than they were 4 Mya, based on the current 415 ppm. This raises the possibility that current CO2 levels will warm Earth’s surface 4.5 to 5.0°C, best estimate 4.9°, over 1880 levels. (This is 3.4 to 3.9°C warmer than the current 1.1°C.) Consider Vostok ice core data that covers 430 ky [19]. Removing the time variable and scatter-plotting ∆°C against CO2 levels as blue dots (the same can be done for CH4), gives Figure 2. Its observations span the last 430 ky, at 10 ky resolution starting 10 kya.

fig 2

Figure 2: Temperature to Greenhouse Gas Relationship in the Past.

Superimposed on Figure 2 are trend lines from two linear regression equations, using logarithms, for temperatures at Vostok (left-hand scale): one for CO2 (in ppm) alone and one for both CO2 and CH4 (ppb). The purple trend line in Figure 2, from Equation (1) for Vostok, uses only CO2. 95% confidence intervals in this study are shown in parentheses with ±.

(1) ∆°C = -107.1 (±17.7) + 19.1054 (±3.26) ln(CO2).

The t-ratios are -11.21 and 11.83 for the intercept and CO2 concentration, while R2 is 0.773 and adjusted R2 is 0.768. The F statistic is 139.9. All are highly significant. This corresponds to a climate sensitivity of 13.2°C at Vostok [19.1054 * ln (2)] for doubled CO2, within the range of 180 to 465 ppm CO2. As shown below, most of this is due to albedo changes and other amplifying feedbacks. Therefore, climate sensitivity will decline as ice and snow become scarce and Earth’s albedo stabilizes. The green trend line in Figure 2, from Equation (2) for Vostok, adds a CH4 variable.

(2) ∆°C = -110.7 (±14.8) +11.23 (±4.55) ln(CO2) + 7.504 (±3.48) ln(CH4).

The t-ratios are -15.05, 4.98, and 4.36 for the intercept, CO2, and CH4. R2 is 0.846 and adjusted R2 is 0.839. The F statistic of 110.2 is highly significant. To translate temperature changes at the Vostok surface (left-hand axis) over 430 ky to changes in GST (right-hand axis), the ratio of polar change to global over the past 2 million years is used, from Snyder [20]. Snyder examined temperature data from many sedimentary sites around the world over 2 My. Her results yield a ratio for polar to global warming: 0.618. This relates the left- and right-hand scales in Figure 2. The GST equations, global instead of Vostok local, corresponding to Equations (1) and (2) for Vostok, but using the right-hand scale for global temperature, are:

(3) ∆°C = -66.19 + 11.807 ln(CO2) and

(4) ∆°C = -68.42 + 6.94 ln(CO2) + 4.637 ln(CH4).

(The coefficients of Equations (3) and (4) are those of Equations (1) and (2) multiplied by Snyder's 0.618 ratio; for example, 19.1054 × 0.618 ≈ 11.81.)

Both equations yield good fits for 14.1 to 14.5 Mya and 4.0 to 4.2 Mya. Equation (3) yields a GST climate sensitivity estimate of 8.2°C (±1.4) for doubled CO2. Table 1 below shows the corresponding GSTs for various CO2 and CH4 levels. CO2 levels range from 180 ppm, the lowest recorded during the past four ice ages, to twice the immediately "pre-industrial" level of 280 ppm. Columns D, I and N add 0.13°C to their preceding columns, the difference between the 1880 GST and the 1951-80 mean GST used for the ice cores. Rows are included for CO2 levels corresponding to 1.5 and 2°C warmer than 1880, using the two equations, and for the 2020 CO2 level of 415 ppm. The CH4 levels (in ppb) in column F are taken from observations or extrapolated. The CH4 levels in column K approximate the CH4 levels around 1880, before human activity raised them much, through some mixture of fossil fuel extraction and leaks, landfills, flooded rice paddies, and large herds of cattle.
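A direct transcription of Equations (3) and (4) reproduces the figures quoted in this section (a sketch; CO2 in ppm, CH4 in ppb, output relative to the 1951-80 baseline):

```python
import math

def dgst_co2(co2_ppm):
    """Equation (3): equilibrium dGST from CO2 alone."""
    return -66.19 + 11.807 * math.log(co2_ppm)

def dgst_co2_ch4(co2_ppm, ch4_ppb):
    """Equation (4): equilibrium dGST from CO2 and CH4."""
    return -68.42 + 6.94 * math.log(co2_ppm) + 4.637 * math.log(ch4_ppb)

print(round(dgst_co2(415), 2))                   # ~4.99, as cited below
print(round(11.807 * math.log(2), 1))            # ~8.2 degC per CO2 doubling
print(round(dgst_co2_ch4(415, 1870) + 0.13, 1))  # ~8.5 degC relative to 1880
```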

Other GHGs (e.g., N2O and some not present in the Vostok ice cores, such as CFCs) are omitted in this discussion and in modeling future changes. The implicit simplifying assumption is that the weighted rate of change of other GHGs averages the same as CO2’s.

Implications

Applying Equation (3) using only CO2, now at 415 ppm, yields a future GST 4.99°C warmer than the 1951-80 baseline. This translates to 5.12°C warmer than 1880, or 3.99°C warmer than 2018-2020 [2]. This is consistent not only with the Vostok ice core records, but also with warmer Pliocene and Miocene records using ocean sediments from 4 and 14 Mya. However, when today’s CH4 levels, ~1870 ppb, are used in Equation (4), the indicated equilibrium GST is 8.5°C warmer than 1880. Earth’s GST is currently far from equilibrium.

Consider the levels of CO2 and CH4 required to meet Paris goals. To hold GST warming to 2°C requires reducing atmospheric CO2 levels to 318 ppm, using Equation (3), as shown in Table 1. This requires CO2 removal (CDR), at first cut, of (415-318)/(415-280) = 72% of human CO2 emissions to date, plus any future ones. Equation (3) also indicates that holding warming to 1.5°C requires reducing CO2 levels to 305 ppm, equivalent to 81% CDR. Using Equation (4) with pre-industrial CH4 levels of 700 ppb, consistent with 1750, yields 2°C GST warming for CO2 at 314 ppm and 1.5°C for 292 ppm CO2. Human carbon emissions from fossil fuels from 1900 through 2020 were about 1,600 gigatonnes (GT) of CO2, or about 435 GT of carbon [21]. Thus, Equation (3) yields an estimated remaining carbon budget, to hold GST warming to 2°C, of negative 313 (±54) GT of carbon, or ~72% of fossil fuel CO2 emissions to date. This is only the minimum CDR required. First, removal of other GHGs may be required. Second, any further human emissions make the remaining carbon budget even more negative and require even more CDR. Natural carbon emissions, led by permafrost ones, will increase. Albedo feedbacks will continue, warming Earth further. Both will require still more CDR. So, the true remaining carbon budget may actually be in the negative 400-500 GT range, and most certainly not hundreds of GT greater than zero.
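A minimal sketch of this Paris-goal arithmetic, assuming Equation (3) and the 0.13°C baseline shift: it inverts the equation for a target warming over 1880, then expresses the required drawdown as a share of the rise above the 280 ppm pre-industrial level. Small differences from the text are rounding.

```python
import math

def co2_for_target(dt_vs_1880):
    """Invert Equation (3): the CO2 (ppm) whose equilibrium GST equals the target."""
    return math.exp((dt_vs_1880 - 0.13 + 66.19) / 11.807)

for target in (2.0, 1.5):
    ppm = co2_for_target(target)
    share = (415 - ppm) / (415 - 280)   # fraction of the human-era CO2 rise
    print(f"{target}°C over 1880 -> {ppm:.0f} ppm; CDR ~ {share:.0%} of emissions to date")
# 2.0°C -> 319 ppm, ~71% (text: 318 ppm, 72%); 1.5°C -> 305 ppm, ~81%
```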

Table 1: Projected Equilibrium Warming across Earth’s Surface from Vostok Ice Core Analysis (1951-80 Baseline).


The difference between current GSTs and the equilibrium GSTs of 5.1 and 8.5°C stems from lag effects. The lag effects come mostly from albedo changes and their feedbacks. Most albedo changes and feedbacks play out over days to decades to centuries. Those due to land ice and vegetation changes can continue over longer timescales, while cloud cover and water vapor changes happen over minutes to hours. The specifics (except vegetation, not examined or modelled) are detailed in the Feedback Pathways section below.

However, the bottom two lines of Table 1 probably overestimate the temperature effects of 500 and 560 ppm of CO2, as discussed further below. This is because albedo feedbacks from ice and snow, which in large measure underlie the derivations from the ice core, decline with higher temperatures outside the CO2 range (180-465 ppm) used to derive and validate Equations (1) through (4).

Feedback Pathways to Warming Indicated by Paleoclimate Analysis

To hold warming to 2°C or even 1.5°, large-scale CDR is required, in addition to rapid reductions of CO2 and CH4 emissions to almost zero. As we consider the speed of our required response, this study examines: (1) the physical factors that account for this much warming and (2) the possible speed of the warming. As the following sections show, continued emissions speed up amplifying feedback processes, making “equilibrium” GSTs still higher. So, rapid emission reductions are the necessary foundation. But even an immediate end to human carbon emissions will be far from enough to hold warming to 2°C.

The first approach to projecting our climate future, in the Temperature Record section above, drew lessons from the past. The second approach, in the Feedback Pathways section here and below, examines the physical factors that account for the warming. Albedo effects, where Earth reflects less sunlight, will grow more important over the coming decades, in part because human emissions will decline. The albedo effects include sulfate loss from ending coal burning, plus reduced extent of snow, sea ice, land-based ice, and cloud cover. Another key factor is added water vapor, a powerful GHG, as the air heats up from albedo changes. Another factor is lagged surface warming, since the deeper ocean heats up more slowly than the surface. It will slowly release heat to the atmosphere, as El Niños do.

A second group of physical factors, more prominent late this century and beyond, are natural carbon emissions due to more warming. Unlike albedo changes, they alter CO2 levels in the atmosphere. The most prominent is from permafrost. Other major sources are increased microbial respiration in soils currently not frozen; carbon evolved from warmer seas; release of seabed CH4 hydrates; and any net decreased biomass in forests, oceans, and elsewhere.

This study estimates rough magnitudes and speeds of 13 factors: 9 albedo changes (including two for sea ice and four for land ice); changes in atmospheric water vapor and other ocean-warming effects; human carbon emissions; and natural emissions – from permafrost, plus a multiplier for the other natural carbon emissions. Characteristic time scales for these changes to play out range from decades for sulfates, northern and southern sea ice, human carbon emissions, and non-polar land ice; to centuries for snow, permafrost, ocean heat content, and land ice grounded below sea level; to millennia for other land ice. Cloud cover and water vapor respond in hours to days, but never disappear. The model also includes normal rock weathering, which removes about 1 GT of CO2 per year [22], or about 3% of human emissions.

Anthropogenic sulfur loss and northern sea ice loss will be complete by 2100 and likely more than half so by 2050, depending on future coal use. Snow cover and cloud cover feedbacks, which respond quickly to temperature change, will continue. Emissions from permafrost are modeled as ramping up in an S-curve through 2300, with small amounts thereafter. Those from seabed CH4 hydrates and other natural sources are assumed to ramp up proportionately with permafrost: jointly, by half as much. Ice loss from the GIS and WAIS grounded below sea level is expected to span many decades in the hottest scenarios, to a few centuries in the coolest ones. Partial ice loss from the EAIS, led by the 1/3 that is grounded below sea level, will happen a bit more slowly. Other polar ice loss should happen still more slowly. Warming the deep oceans, to reestablish equilibrium at the top of the atmosphere, should continue for at least a millennium, the time for a circuit of the world thermohaline ocean circulation.

This analysis and model do not include changes in (a) black carbon; (b) mean vegetation color, as albedo effects of grass replacing forests at lower latitudes may outweigh forests replacing tundra and ice at higher latitudes; (c) oceanic and atmospheric circulation; (d) anthropogenic land use; (e) Earth’s orbit and tilt; or (f) solar output.

Sulfate Effects

SO4 in the air intercepts incoming sunlight before it arrives at Earth’s surface, both directly and indirectly via formation of cloud condensation nuclei. It then re-radiates some of that energy upward, for a net cooling effect at Earth’s surface. Mostly, sulfur impurities in coal are oxidized to SO2 in burning. SO2 is converted to SO4 by chemical reactions in the troposphere. Residence times are measured in days. Including cooling from atmospheric SO4 concentrations explains a great deal of the variation between the steady rise in CO2 concentrations and the variability of GLST rise since 1880. Human SO2 emissions rose from 8 Megatonnes (MT) in 1880 to 36 MT in 1920, 49 in 1940, and 91 in 1960. They peaked at 134 MT in 1973 and 1979, before falling to 103-110 during 2009-16 [23]. Corresponding estimated atmospheric SO4 concentrations rose from 41 parts per billion (ppb) in 1880 (and a modestly lower amount before then), to 90 in 1920, 85 in 1940, and 119 in 1960, before reaching peaks of 172-178 during 1973-80 [24] and falling to 130-136 over 2009-16. Some atmospheric SO4 is from natural sources, notably dimethyl sulfides from some ocean plankton, some 30 ppb. Volcanoes are also an important source of atmospheric sulfates, but only episodically (mean 8 ppb) and chiefly in the stratosphere (from large eruptions), with a typical residence time there of many months.

Figure 3 shows the results of a linear regression analysis, in blue, of ∆°C from the thermometer record and concentrations of CO2, CH4, and SO4. SO4 concentrations between the dates referenced above are interpolated from human emissions, added to SO4 levels when human emissions were very small (1880). All variables shown are 5-year moving averages and SO4 is lagged by 1 year. CO2, CH4, and SO4 are measured in ppm, ppb and ppb, respectively. The near absence of an upward trend in GST from 1940 to 1975 happened at a time when human SO2 emissions rose 170% from 1940 to 1973 [23]. This large SO4 cooling effect offset the increased GHG warming effect, as shown in Figure 3. The analysis shown in Equation (5) excludes the years influenced by the substantial volcanic eruptions shown. It also excludes the 2 years before and 2-4 years after the years of volcanic eruptions that reached the stratosphere, since 5-year moving temperature averages are used. In particular, it excludes data from the years surrounding eruptions labeled in Figure 3, plus smaller but substantial eruptions in 1886, 1901-02, 1913, 1932-33, 1957, 1979-80, 1991 and 2011. This leaves 70 observations in all.
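The data preparation just described can be sketched as follows. This is a hedged illustration, not the study’s code: the file name, column names, and the eruption list below are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("glst_ghg_so4.csv")   # hypothetical columns: year, dT, co2, ch4, so4
cols = ["dT", "co2", "ch4", "so4"]
df[cols] = df[cols].rolling(5, center=True).mean()   # 5-year moving averages
df["so4"] = df["so4"].shift(1)                       # lag SO4 by one year

# Exclude 2 years before through 4 years after stratospheric eruptions (illustrative list)
eruptions = [1883, 1886, 1902, 1913, 1932, 1957, 1963, 1980, 1982, 1991, 2011]
excluded = {y + k for y in eruptions for k in range(-2, 5)}
df = df[~df["year"].isin(excluded)].dropna()         # ~70 observations remain

X = sm.add_constant(np.column_stack([np.log(df.co2), np.log(df.ch4), df.so4]))
print(sm.OLS(df.dT, X).fit().summary())              # coefficients as in Equation (5)
```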


Figure 3: Land Surface Temperatures, Influenced by Sulfate Cooling.

Equation (5)’s predicted GLSTs are shown in blue, next to actual GLSTs in red.

(5) ∆°C = -20.48 (±1.57) + 2.09 (±0.65) ln(CO2) + 1.25 (±0.33) ln(CH4) – 0.00393 (±0.00091) SO4

R2 is 0.9835 and adjusted R2 is 0.9828. The F-statistic is 1,312, highly significant. The t-ratios for CO2, CH4, and SO4 respectively are 7.10, 7.68, and -8.68. This indicates that CO2, CH4, and SO4 are all important determinants of GLSTs. The coefficient for SO4 indicates that reducing SO4 by 1 ppb will increase GLST by 0.00393°C. Deleting the remaining 95 ppb of human-added SO4, as coal for power is phased out, would raise GLST by 0.37°C.

Snow

Some 99% of Earth’s snow cover, outside of Greenland and Antarctica, is in the northern hemisphere (NH). This study estimates the current albedo effect of snow cover in three steps: area, albedo effect to date, and future rate of snow shrinkage with rising temperatures. NH snow cover averages some 25 million km2 annually [25,26]. 82% of month-km2 coverage is during November through April. 25 million km2 is 2.5 times the 10 million km2 mean annual NH sea ice cover [27]. Estimated NH snow cover declined about 9%, about 2.2 million km2, from 1967 to 2018 [26]. Chen et al. [28] estimated that NH snow cover decreased by 890,000 km2 per decade for May to August over 1982 to 2013, but increased by 650,000 km2 per decade for November to February. Annual mean snow cover fell 9% over this period, as snow cover began earlier but also ended earlier: 1.91 days per decade [28]. These changes resulted in weakened snow radiative forcing of 0.12 (±0.003) W m-2 [28]. Chen estimated the NH snow timing feedback as 0.21 (±0.005) W m-2 K-1 in melting season, from 1982 to 2013 [28].

Future Snow Shrinkage

However, as GST warms further, annual mean snow cover will decline substantially: it falls sharply with GST 5°C warmer and almost vanishes with 10°. This study considers analog cities for snow cover in warmer places and analyzes data for them, then applies three latitude and precipitation adjustments. The effects of changes in the timing of when snow is on the ground (Chen) are much smaller than the effects of how many days snow is on the ground (see the analog cities analysis below). So, Chen’s analysis is of modest use for longer time horizons.

NH snow-covered area is not as concentrated near the pole as sea ice. Thus, sun angle leads to a larger effect by snow on Earth’s reflectivity. The mean latitude of northern snow cover, weighted over the year, is about 57°N [29], while the corresponding mean latitude of NH sea ice is 77 to 78°N. The sine of the mean sun angle on snow (33°), 0.5446, is 2.52 times that for NH sea ice (12.5° and 0.2164). The area coverage (2.5) times the sun angle effect (2.52) suggests a cooling effect of NH snow cover (outside Greenland) about 6.3 times that for NH sea ice. [At high sun angles, water under ice is darker (~95% absorbed, or 5% reflected, when the sun is overhead at 0°) than rock, grass, shrubs, and trees under snow. This suggests a greater albedo contrast for losing sea ice than for losing snow. However, at the low sun angles that characterize snow latitudes, water reflects more sunlight (40% at 77° and 20% at 57°), leaving much less albedo contrast with white snow or ice than rocks and vegetation have. So, no darkness adjustment is modeled in this study.] Using Hudson’s 2011 estimate [30] for Arctic sea ice (see below) of 0.6 W m-2 in future radiative forcing, compared to 0.1 to date for NH sea ice’s current cooling effect, indicates that the current cooling effect of northern snow cover is about 6.3 times 0.6 W m-2 = 3.8 W m-2. This is 31 times the effect of snow cover timing changes from Chen’s analysis.

To model evolution of future snow cover as the NH warms, analog locations are used for changes in snow cover’s cooling effect as Earth’s surface warms. This cross-sectional approach uses longitudinal transects: days of snow cover at different latitudes along roughly the same longitude. For the NH in general (especially as adjusted for altitude and distance from the ocean), temperatures increase as one proceeds southward, while annual days of snow cover decrease. Three transects in the northern US and southern Canada are especially useful, because their increases in annual precipitation with warmer January temperatures somewhat approximate the 7% more water vapor in the air per 1°C of warming (see the “In the Air” section for water vapor). The transects shown in Table 2 are (1) Winnipeg, Fargo, Sioux Falls, Omaha, Kansas City; (2) Toronto, Buffalo, Pittsburgh, Charleston WV, Knoxville; and (3) Lansing, Detroit, Cincinnati, Nashville. Pooled data from these 3 transects, shown at the bottom of Table 2, indicate 61% as many days as now with snow cover ≥ 1 inch [31] with 3°C local warming, 42% with 5°C, and 24% with 7°C. However, these degrees of local warming correspond to less GST warming, since Earth’s land surface has warmed faster than the sea surface and observed warming is generally greater as one proceeds from the equator toward the poles [1,2,32]; the gradient is 1.5 times the global mean for 44-64°N and 2.0 times for 64-90°N [32]. These latitude adjustments for local to global warming pair 61% as many snow cover days with 2°C GLST warming, 42% with 3°C, and 24% with 4°C. This translates to approximately a 19% decrease in days of snow cover per 1°C warming.

Table 2: Snow Cover Days for Transects with ~7% More Precipitation per °C. Annual Mean # of Days with ≥ 1 inch of Snow on Ground.


This study makes three adjustments to the 19%. First, the three transects feature precipitation increasing only 4.43% (1.58°C) per 1°C warming. This is 63% of the 7% increase in global precipitation per 1°C warming. So, warming may bring more snowfall than the analogs indicate directly. Therefore the 19% decrease in days of snow cover per 1°C warming of GLST is multiplied by 63%, for a preliminary 12% decrease in global snow cover for each 1°C GLST warming. Second, transects (4) Edmonton to Albuquerque and (5) Quebec to Wilmington NC, not shown, lack clear precipitation increases with warming. But they yield similar 62%, 42%, and 26% as many days of snow cover for 2, 3, and 4°C increases in GST. Since the global mean latitude of NH snow cover is about 57°, the southern Canada figure should be more globally representative than the 19% figure derived from the more southern US analysis. Use of Canadian cities only (Edmonton, Calgary, Winnipeg, Sault Ste. Marie, Toronto, and Quebec, with mean latitude 48.6°N) yields 73%, 58%, and 41% of current snow cover with roughly 2, 3, and 4°C warming. This translates to a 15% decrease in days of snow cover in southern Canada per 1°C warming of GLST. 63% of this, for the precipitation adjustment, yields 9.5% fewer days of snow cover per 1°C warming of GLST. Third, the southern Canada (48.6°N) figure of 9.5% warrants a further adjustment to represent an average Canadian and snow latitude (57°N). Multiplying by sin(48.6°)/sin(57°) yields 8.5%. The story is likely similar in Siberia, Russia, north China, and Scandinavia. So, final modeled snow cover decreases by 8.5% (not 19, 12 or 9.5%) of current amounts for each 1°C rise in GLST. In this way, modeled snow cover vanishes completely at 11.8°C warmer than 1880, similar to the Paleocene-Eocene Thermal Maximum (PETM) GSTs 55 Mya [3].
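A minimal sketch of this three-step adjustment chain, using the values quoted in the text:

```python
import math

canada_per_degC = 0.15   # snow-day loss per °C GLST, Canadian-cities transect data
precip_adj = 0.63        # transect precipitation gain is 63% of the global 7%/°C
lat_adj = math.sin(math.radians(48.6)) / math.sin(math.radians(57.0))

loss_per_degC = canada_per_degC * precip_adj * lat_adj
print(f"snow-cover days lost per °C: {loss_per_degC:.1%}")   # ~8.5%
print(f"snow gone at ~{1 / loss_per_degC:.1f}°C over 1880")  # ~11.8°C
```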

Ice

Six ice albedo changes are calculated separately: for NH and Antarctic (SH) sea ice, and for land ice in the GIS, WAIS, EAIS, and elsewhere (e.g., Himalayas). Ice loss in the latter four leads to SLR. This study considers each in turn.

Sea Ice

Arctic sea ice area has shown a shrinking trend since satellite coverage began in 1979. Annual minimum ice area fell 53% over the most recent 37 years [33]. However, annual minimum ice volume shrank faster, as the ice also thinned. Estimated annual minimum ice volume fell 73% over the same 37 years, including 51% in the most recent 10 years [34]. Trends in Arctic sea ice volume [34] are shown in Figure 4, with their corresponding R2, for four months. One set of trend lines (small dots) is based on data since 1980, while a second, steeper set (large dots) uses data since 2000. (Only four months are shown, since July ice volume is like November’s and June ice volume is like January’s). The graph suggests sea ice will vanish from the Arctic from June through December by 2050. Moreover, NH sea ice may vanish totally by 2085 in April, the minimum ice volume month. That is, current volume trends yield an ice-free Arctic Ocean about 2085.


Figure 4: Arctic Sea Ice Volume by Month and Year, Past and Future.
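The extrapolation behind Figure 4 amounts to fitting linear trends to monthly ice-volume series and solving for the zero crossing. A minimal sketch with synthetic April-like data (the real series is PIOMAS [34]; the numbers below are illustrative):

```python
import numpy as np

years = np.arange(2000, 2021)
rng = np.random.default_rng(0)
volume = 25.0 - 0.28 * (years - 2000) + rng.normal(0.0, 1.0, years.size)  # 1000 km^3

slope, intercept = np.polyfit(years, volume, 1)   # linear trend since 2000
print(f"April ice-free around {-intercept / slope:.0f}")  # text projects ~2085 for April
```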

Hudson estimated that loss of Arctic sea ice would increase radiative forcing in the Arctic by an amount equivalent to 0.7 W m-2, spread over the entire planet, of which 0.1 W m-2 had already occurred [30]. That leaves 0.6 W m-2 of radiative forcing still to come, as of 2011. This translates to 0.31°C warming yet to come (as of 2011) from NH sea ice loss. Trends in Antarctic sea ice are unclear. After three record high winter sea ice years in 2013-15, record low Antarctic sea ice was recorded in 2017-19 and 2020 is below average [27]. If GSTs rise enough, eventually Antarctic land ice and sea ice areas should shrink. Roughly 2/3 of Antarctic sea ice is associated with West Antarctica [35]. Therefore, 2/3 of modeled SH sea ice loss corresponds to WAIS ice volume loss and 1/3 to EAIS. However, to estimate sea ice area, change in estimated ice volume is raised to the 1.5 power (using the ratio of 3 dimensions of volume to 2 of area). This recognizes that sea ice area will diminish more quickly than the adjacent land ice volume of the far thicker WAIS (including the Antarctic Peninsula) and the EAIS.

Land Ice

Paleoclimate studies have estimated that global sea levels were 20 to 35 meters higher than today from 4.0 to 4.2 Mya [13,14]. This indicates that a large fraction of Earth’s polar ice had vanished then. Earth’s GST then was estimated to be 3.3 to 5.0°C above the 1951-80 mean, for CO2 levels of 357-405 ppm. Another study estimated that global sea levels were 25-40 meters higher than today’s from 14.1 to 14.5 Mya [11]. This suggests 5 meters more of SLR from vanished polar ice. The deep ocean then was estimated to be 5.6±1.0°C warmer than in 1951-80, in response to still higher CO2 levels of 430-465 ppm [11,12]. Analysis of sediment cores by Cook [18] shows that East Antarctic ice retreated hundreds of kilometers inland in that period. Together, these data indicate large polar ice volume losses and SLR in response to temperatures expected before 2400. This tells us about total amounts, but not about rates of ice loss.

This study estimates the albedo effect of Antarctic ice loss as follows. The area covered by Antarctic land ice is 1.4 times the annual mean area covered by NH sea ice: 1.15 for the EAIS and 0.25 for the WAIS. The mean latitudes are not very different. Thus, the effect of total Antarctic land ice area loss on Earth’s albedo should be about 1.4 times that 0.7 Wm-2 calculated by Hudson for NH sea ice, or about 1.0 Wm-2. The model partitions this into 0.82 Wm-2 for the EAIS and 0.18 Wm-2 for the WAIS. Modeled ice mass loss proceeds more quickly (in % and GT) for the WAIS than for the EAIS. Shepherd et al. [36] calculated that Antarctica’s net ice volume loss rate almost doubled, from the period centered on 1996 to that on 2007. That came from the WAIS, with a compound ice mass loss of 12% per year from 1996 to 2007, as ice volume was estimated to grow slightly in the EAIS [36,37] over this period. From 1997 to 2012, Antarctic land ice loss tripled [36]. Since then, Antarctic land ice loss has continued to increase by a compound rate of 12% per year [37]. This study models Antarctic land ice losses over time using S-curves. The curve for the WAIS starts rising at 12% per year, consistent with the rate observed over the past 15 years, starting from 0.4 mm per year in 2010, and peaks in the 2100s. Except in CDR scenarios, remaining WAIS ice is negligible by 2400. Modeled EAIS ice loss increases from a base of 0.002 mm per year in 2010. It is under 0.1% in all scenarios until after 2100, peaks from 2145 to 2365 depending on scenario, and remains under 10% by 2400 in the three slowest-warming scenarios.

The GIS area is 17.4% of the annual average NH sea ice coverage [27,38], but Greenland experiences (on average) a higher sun angle than the Arctic Ocean. This suggests that total GIS ice loss could have an albedo effect of 0.174 * cos (72°)/cos (77.5°) = 0.248 times that of total NH sea ice loss. This is the initial albedo ratio in the model. The modeled GIS ice mass loss rate decreases from 12% per year too, based on Shepherd’s GIS findings for 1996 to 2017 [37]. Robinson’s [39] analysis indicated that the GIS cannot be sustained at temperatures warmer than 1.6°C above baseline. That threshold has already been exceeded locally for Greenland. So it is reasonable to expect near total ice loss in the GIS if temperatures stay high enough for long enough. Modeled GIS ice loss peaks in the 2100s. It exceeds 80% by 2400 in scenarios lacking CDR and is near total by then if fossil fuel use continues past 2050.

The albedo effects of land ice loss, as for Antarctic sea ice, are modeled as proportional to the 1.5 power of ice volume loss. This assumes that area losses will be concentrated around the thin edges rather than where the ice is thickest, far from the edges. That is, modeled ice-covered area declines faster than ice volume for the GIS, WAIS, and EAIS. Ice loss from other glaciers, chiefly in Arctic islands, Canada, Alaska, Russia, and the Himalayas, is also modeled by S-curves. Modeled “other glaciers” ice volume loss in the 6 scenarios ranges from almost half to almost total, depending on the scenario. Corresponding SLR by 2400 ranges from 12 to 25 cm, 89% or more of it by 2100.
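A minimal sketch of the S-curve ice-loss ramps and the 1.5-power volume-to-area link described above; the midpoint and steepness below are illustrative, not the study’s calibrated values:

```python
import numpy as np

def fraction_lost(year, midpoint=2150.0, steepness=0.025):
    """Logistic (S-curve) cumulative fraction of ice volume lost."""
    return 1.0 / (1.0 + np.exp(-steepness * (year - midpoint)))

for year in (2100, 2200, 2300, 2400):
    volume_left = 1.0 - fraction_lost(year)
    area_left = volume_left ** 1.5   # ice area (hence albedo) falls faster than volume
    print(f"{year}: volume left {volume_left:.2f}, area left {area_left:.2f}")
```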

In the Air: Clouds and Water Vapor

As calculated by Equation (5), using 70 years without significant volcanic eruptions, GLST will rise about 0.37°C as human sulfur emissions are phased out. Clouds cover roughly half of Earth’s surface and reflect about 20% [40] of incoming solar radiation (341 W m–2 mean for Earth’s surface). This yields mean reflection of about 68 W m–2, or roughly 20 times the combined warming effect of GHGs [41]. Thus, small changes in cloud cover can have large effects. Detecting cloud cover trends is difficult, so the error bar around estimates for forcing from cloud cover changes is large: 0.6±0.8 Wm–2K–1 [42]. This includes zero as a possibility. Nevertheless, the estimated cloud feedback is “likely positive”. Zelinka [42] estimates the total cloud effect at 0.46 (±0.26) W m–2K–1. This comprises 0.33 for less cloud cover area, 0.20 from more high-altitude clouds and fewer low-altitude ones, -0.09 for increased opacity (thicker or darker clouds with warming), and 0.02 for other factors. His overall cloud feedback estimate is used for modeling the 6 scenarios shown in the Results section. This cloud effect applies both to albedo changes from less ice and snow and to relative changes in GHG (CO2) concentrations. It is already implicit in estimates for SO4 effects.

1°C warmer air contains 7% more water vapor, on average [43]. That increases radiative forcing by 1.5 W m–2 [43]. This feedback is 89% as much as from the CO2 emitted from 1750 to 2011 [41]. Water vapor acts as a warming multiplier, whether the initial warming is from human GHG emissions, natural emissions, or albedo changes. The model treats water vapor and cloud feedbacks as multipliers, as does Table 3 below.

Table 3: Observed GST Warming from Albedo Changes, 1975-2016.


Albedo Feedback Warming, 1975-2016, Informs Climate Sensitivities

Amplifying feedbacks, from albedo changes and natural carbon emissions, are more prominent in future warming than direct GHG effects. Albedo feedbacks to date, summarized in Table 3, produced an estimated 39% of GST warming from 1975 to 2016. This came chiefly from SO4 reductions, plus some from snow cover changes and Arctic sea ice loss, with their multipliers from added water vapor and cloud cover changes. On the top line of Table 3 below, the SO4 decrease, from 177.3 ppb in 1975 to 130.1 in 2016, is multiplied by 0.00393°C/ppb SO4 from Equation (5). On the second line, in the second column, Arctic sea ice loss is from Hudson [30], updated from 0.10 to 0.11 W m–2 to cover NH sea ice loss from 2010 to 2016. The snow cover timing change effect of 0.12 W m–2 over 1982-2013 is from Chen [28]. But the snow cover data is adjusted to 1975-2016, for another 0.08 W m-2 in snow timing forcing, using Chen’s formula for W m-2 per °C warming [28] and extra 0.36°C warming over 1975-82 plus 2013-16. The amount of the land ice area loss effect is based on SLR to date from the GIS, WAIS, and non-polar glaciers. It corresponds to about 10,000 km2, less than 0.1% of the land ice area.

For the third column of Table 3, cloud feedback is taken from Zelinka [42] as 0.46 W m–2K–1. Water-vapor feedback is taken from Wadhams [43], as 1.5 W m–2K–1. The combined cloud and water-vapor feedback of 1.96 W m–2K–1 modeled here amounts to 68.8% of the 2.85 W m-2 total forcing from GHGs as of 2011 [41]. Multiplying column 2 by 68.8% yields the numbers in column 3. Conversion to ∆°C in column 4 divides the 0.774°C warming from 1880 to 2011 [2] by the total forcing of 2.85 W m-2 from 1880 to 2011 [41]. This yields a conversion factor of 0.2716°C W-1m2, applied to the sum of columns 2 and 3, to calculate column 4. Error bars are shown in column 5. In summary, estimated GST warming over 1975-2016 from albedo changes, both direct (from sulfate, ice, and snow changes) and indirect (from cloud and water-vapor changes due to direct ones), totals 0.330°C. Total GST warming over 1975-2016 was 0.839°C [2]. (This is more than the 0.774°C [2] warming from 1880 to 2011, because the increase from 2011 to 2016 was greater than the increase from 1880 to 1975.) So, the ∆GST estimated for albedo changes over 1975-2016, direct and indirect, comes to 0.330/0.839 = 39.3% of the observed warming.
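A minimal sketch of the Table 3 arithmetic as described. Two assumptions to note: the land ice forcing below is a small illustrative value (the text says only that it is under 0.1% of land ice area), and the SO4 term is taken as already in °C, since the Equation (5) coefficient embeds fast feedbacks.

```python
so4_dC = (177.3 - 130.1) * 0.00393          # ~0.19°C from the SO4 decline
forcings_wm2 = {"NH sea ice": 0.11, "snow timing": 0.20, "land ice": 0.01}

multiplier = 1.96 / 2.85     # ~0.688: cloud + water-vapor share of total GHG forcing
degC_per_wm2 = 0.774 / 2.85  # ~0.2716°C per W m-2, 1880-2011 calibration

albedo_dC = so4_dC + sum(f * (1 + multiplier) * degC_per_wm2
                         for f in forcings_wm2.values())
print(f"~{albedo_dC:.2f}°C of 0.839°C = {albedo_dC / 0.839:.0%}")
# ~0.33°C, ~40% here; the text's exact components give 0.330°C and 39.3%.
```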

1975-2016 Warming Not from Albedo Effects

The remaining 0.509°C of warming over 1975-2016 corresponds to an atmospheric CO2 increase from 331 to 404 ppm [44], or 22%. This 0.509°C warming is attributed in the model to CO2, consistent with Equations (3) and (1), using the simplification that the summed effect of other GHGs changes at the same rate as CO2’s. It includes feedbacks from H2O vapor and cloud cover changes, estimated, per above, as 0.686/(1+0.686) of 0.509°C, which is 0.207°C, or 24.7% of the total 0.839°C warming over 1975-2016. This leaves 0.302°C of warming for the estimated direct effect of CO2 and other factors, including other GHGs and factors not modeled, such as black carbon and vegetation changes, over this period.

Partitioning Climate Sensitivity

With the 22% increase in CO2 over 1975-2016, we can estimate the change due to a doubling of CO2 by noting that 1.22 [= 404/331] raised to the power 3.5 yields 2.0. This suggests that a doubling of CO2 levels – apart from surface albedo changes and their feedbacks – leads to about 3.5 times 0.509°C = 1.78°C of warming due to CO2 (and other GHGs and other factors, with their H2O and cloud feedbacks), starting from a range of 331-404 ppm CO2. In the model, projected warming due to CO2 in a particular year is 0.509°C multiplied by the natural logarithm of (that year’s CO2 concentration/331 ppm in 1975) and divided by the natural logarithm of (404 ppm/331 ppm), that is, by 0.1993. This yields estimated warming due to CO2 (plus, implicitly, other non-H2O GHGs) in any particular year, again apart from surface albedo changes and their feedbacks, and apart from the factors noted as not modelled in this study.
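A minimal sketch of this interpolation rule:

```python
import math

def co2_warming(co2_ppm):
    """Delta-T (°C) attributed to CO2 (and co-varying GHGs), excluding surface-albedo
    feedbacks, scaled from the 0.509°C observed over 1975-2016."""
    return 0.509 * math.log(co2_ppm / 331.0) / math.log(404.0 / 331.0)

print(f"{co2_warming(404):.3f}")  # 0.509 by construction
print(f"{co2_warming(662):.2f}")  # doubling from 331 ppm: ~1.77°C (text rounds to 1.78)
```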

Using Equation (3), warming associated with doubled CO2 over the past 14.5 million years is 11.807 x ln(2.00), or 8.184°C per CO2 doubling. The difference between 8.18°C and the 1.78°C from CO2 and non-H2O GHGs is 6.40°C. This 6.40°C of climate sensitivity includes the effect of albedo changes and the consequent H2O vapor concentration. Loss of tropospheric SO4 and Arctic sea ice are the first of these to occur, with immediate water vapor and cloud feedbacks. Loss of snow and Antarctic sea ice follows over decades to centuries. Loss of much land ice, especially where grounded above sea level, happens more slowly.

Stated another way, there are two climate sensitivities: one for the direct effect of GHGs and one for amplifying feedbacks, led by albedo changes. The first is estimated as 1.8°C. The second is estimated as 6.4°C in epochs, like ours, when snow and ice are abundant. In periods with little or no ice and snow, this latter sensitivity shrinks to near zero, except for clouds. As a result, climate is much more stable to perturbations (notably cyclic changes in Earth’s tilt and orbit) when there is little snow or ice. However, climate is subject to wide temperature swings when there is lots of snow and ice (notably the past 2 million years, as seen in Figure 1).

In the Oceans

Ocean Heat Gain: In 2011, Hansen [7] estimated that Earth is absorbing 0.65 Wm-2 more than it emits. As noted above, ocean heat gain averaged 4 ZJ per year over 1967-1990, 7 over 1991-2005, and 10 over 2006-16. Ocean heat gain accelerated while GSTs increased. Therefore, ocean heat gain and Earth’s energy imbalance seem likely to continue rising as GSTs increase, and this study models the situation that way. Oceans would need to warm enough to regain thermal equilibrium with the air above. While oceans are gaining heat (every 3 years, now roughly twice humanity’s cumulative energy use), they are out of equilibrium. The ocean thermohaline circuit takes about 1,000 years. So, if human GHG emissions ended today, this study assumes that it could take Earth’s oceans 1,000 years to thermally re-equilibrate with the atmosphere. The model spreads the bulk of that over 400 years, in an exponential decay shape. The rate peaks during 2130 to 2170, depending on the scenario. The modeled effect is about 5% of total GST warming. Ocean thermal expansion (OTE), currently about 0.8 mm/year [5], is another factor in SLR. Its future values are modeled as proportional to future temperature change.

Land Ice Mass Loss, Its Albedo Effect, and Sea Level Rise: Modeled SLR derives mostly from modeled ice sheet losses, whose S-curves were introduced above. The amount and rate parameters are informed by past SLR. Sea levels have varied by almost 200 meters over the past 65 My. They were almost 125 meters lower than now during recent Ice Ages [3]. Sea levels reached some 70 meters higher in ice-free warm periods more than 10 Mya, especially more than 35 Mya [3]. From Figure 1, Earth was largely ice-free when deep ocean temperature (DOT) was 7°C or more, for SLR of about 73 meters from current levels, when DOT is < 2°C. This yields a SLR estimate of 15 meters/°C of DOT in warm eras. Over the most recent 110-120 ky, 110 meters of SLR is associated with 4 to 6°C of GST warming (Figure 2), or 19-28 meters/°C GST in a cold era. The 15:28 warm/cold era ratio for SLR rate shows that the amount of remaining ice is a key SLR variable. However, this study projects SLR by 2400 of only 1.5 to 4 meters per °C of GST warming, though still rising then. The WAIS and GIS together hold 10-12 meters of SLR [15,16]. So, 25-40 meters of SLR during 14.1-14.5 Mya suggests that the EAIS lost about 1/3 to 1/2 of its current ice volume (20 to 30 meters of SLR, out of almost 60 today in the EAIS [45]) when CO2 levels were last at 430-465 ppm and DOTs were 5.6±1.0°C [11,12]. This is consistent with this study’s two scenarios with human CO2 emissions after 2050 and even 2100: 13 and 21 meters of SLR from the EAIS by 2400, with ∆GLSTs of 8.2 and 9.4°C. DeConto [17] suggested that sections of the EAIS grounded below sea level would lose all ice if we continue emissions at the current rate, for 13.6 or even 15 meters of SLR by 2500. This model’s two scenarios with intermediate GLST rise yield SLR closest to his projections. SLR is even higher in the two warmest scenarios. Modeled SLR rates are informed by the most recent 19,000 years of data ([46,47], chart by Robert A. Rohde). They include a SLR rate of 3 meters/century during Meltwater Pulse 1A, for 8 centuries around 14 kya. They also include 1.5 meters/century over the 70 centuries from 15 kya to 8 kya. The DOT rose 3.3°C over 10,000 years, an average rate of 0.033°C per century. However, the current SST warming rate is 2.0°C per century [1,2], about 60 times as great. Although only 33-40% as much ice (73 meters of SLR/(73+125)) is left to melt, this suggests that rates of SLR will be substantially higher, at current rates of warming, than the 1.5 to 3 meters per century coming out of the most recent ice age. In the four scenarios without CDR, mean rates of modeled SLR from 2100 to 2400 range from 4 to 11 meters per century.
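A worked version of the warm-era vs cold-era comparison above, using the figures quoted in the text:

```python
warm_era = 73 / (7 - 2)          # ~15 m of SLR per °C of DOT warming, warm eras
cold_era = (110 / 6, 110 / 4)    # ~18-28 m per °C of GST, last deglaciation
print(f"warm era ~{warm_era:.0f} m/°C; "
      f"cold era ~{cold_era[0]:.0f}-{cold_era[1]:.0f} m/°C")  # text rounds to 19-28
```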

Summary of Factors in Warming to 2400

Table 4 summarizes the expected future warming effects from feedbacks (to 2400), based on the analyses above.

Table 4: Projected GST Warming from Feedbacks, to 2400.


The 3.5°C of warming indicated, added to the 1.1°C of warming since 1880, totals 4.6°C, which is 0.5°C less than the 5.1°C of warming based on Equation (3) from the paleoclimate analysis. This gap suggests four overlapping possibilities. First, underestimations (perhaps sea ice and clouds) may exceed overestimations (perhaps snow) for the processes shown in Table 4. Underestimation of cloud feedbacks, and their consequent warming, is quite possible. Using Zelinka’s 0.46 Wm–2K–1 in this study, instead of the IPCC central estimate of 0.6, is one possibility. Moreover, recent research suggests that cloud feedbacks may be appreciably stronger than 0.6 Wm–2K–1 [48]. Second, changes in the eight factors not modelled (black carbon, vegetation and land use, ocean and air circulation, Earth’s orbit and tilt, and solar output) may provide feedbacks that, on balance, are more warming than cooling. Third, temperatures used here for 4 and 14 Mya may be overestimated or should not be used unadjusted. Notably, the joining of North and South America about 3 Mya rearranged ocean circulation and may have resulted in cooling that led to ice periodically covering much of North America [49]. Globally, Figure 1 above suggests this cooling effect may be 1.0-1.6°C. In contrast, solar output increases as our sun ages, by 7% per billion years [50], so that solar forcing is now 1.4 W m–2 more than 14 Mya and 0.4 more than 4 Mya. A brighter sun indicates that, for the same GHG and albedo levels, GST would now be 0.7°C warmer than it would have been 14 Mya and 0.2°C warmer than 4 Mya. Fourth, nothing (net) may be amiss. Underestimated warming (perhaps permafrost, clouds, sea ice, black carbon) may balance overestimated warming (perhaps snow, land ice, vegetation). The gap would then be due to an albedo climate sensitivity lower than 6.4°C, as discussed above using data for 1975-2016, because all sea ice and much snow vanish by 2400.

Natural Carbon Emissions

Permafrost: One estimate of the amount of carbon stored in permafrost is 1,894 GT [51]. This is about 4 times the carbon that humans have emitted by burning fossil fuels, and about twice as much as is in Earth’s atmosphere. More permafrost may lie under Antarctic ice and the GIS. DeConto [52] proposed that the PETM’s large carbon and temperature (5-6°C) excursions 55 Mya are explained by “orbitally triggered decomposition of soil organic carbon in circum-Arctic and Antarctic terrestrial permafrost. This massive carbon reservoir had the potential to repeatedly release thousands of [GT] of carbon to the atmosphere-ocean system”. Permafrost area in the Northern Hemisphere shrank 7% from 1900 to 2000 [53]. It may shrink 75-88% more by 2100 [54]. Carbon emissions from permafrost are expected to accelerate as the ground in which that carbon is embedded warms. In general, near-surface air temperatures have been warming twice as fast in the Arctic as across the globe as a whole [32]. More research is needed to estimate rates of permafrost warming at depth and the consequent carbon emissions. Already in 2010, Arctic permafrost emitted about as much carbon as all US vehicles [55]. Part of the carbon emerges as CH4, where surface water prevents the carbon under it from being oxidized; that CH4 changes to CO2 in the air over several years. This study accounts for the effects of CO2 derived from permafrost. MacDougall et al. estimated that thawing permafrost can add up to ~100 ppm of CO2 to the air by 2100 and up to 300 more by 2300, depending on the four RCP emissions scenarios [56]. This is 200 GT of carbon by 2100 plus 600 GT more by 2300. The direct driver of such emissions is local temperature near the air-soil interface, not human carbon emissions. Since warming is driven not just by emissions, but also by albedo changes and their multipliers, permafrost carbon losses from thawing may proceed faster than MacDougall estimated. Moreover, MacDougall estimated only 1,000 GT of carbon in permafrost [56], less than more recent estimates. On the other hand, a larger fraction of carbon may stay in permafrost soil than MacDougall assumed, leaving deep soil rich in carbon, similar to that left by “recent” glaciers in Iowa.
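A minimal sketch of the ppm-to-carbon conversion behind these figures, using the standard factor of ~2.13 GT of carbon per ppm of atmospheric CO2:

```python
GT_CARBON_PER_PPM = 2.13   # ~2.13 GT of carbon raises atmospheric CO2 by 1 ppm

for ppm in (100, 300):
    print(f"{ppm} ppm CO2 ~ {ppm * GT_CARBON_PER_PPM:.0f} GT carbon")
# 100 ppm ~ 213 GT and 300 ppm ~ 639 GT; the text rounds to 200 and 600.
```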

Other Natural Carbon Emissions

Seabed CH4 hydrates may hold a similar amount of carbon to permafrost or somewhat less, but the total amount is very difficult to measure. By 2011, subsea CH4 hydrates were releasing 20-30% as much carbon as permafrost was [57]. This all suggests that eventual carbon emissions from permafrost and CH4 hydrates may be half to four times what MacDougall estimated. Also, the earlier portion of those emissions may happen faster than MacDougall estimated. In all, this study’s modeled permafrost carbon emissions range from 35 to 70 ppm CO2 by 2100 and from 54 to 441 ppm CO2 by 2400, depending on the scenario. As stated earlier, this model simply assumes that other natural carbon reservoirs will add half as much carbon to the air as permafrost does, on the same time path. These sources include outgassing from soils now unfrozen year-round, the warming upper ocean, seabed CH4 hydrates, and any net decrease in worldwide biomass.

Results

The Six Scenarios

  1. “2035 Peak”. Fossil-fuel emissions are reduced 94% by 2100, from a peak about 2035, and phased out entirely by 2160. Phase-out accelerates to 2070, when CO2 emissions are 25% of 2017 levels, then decelerates. Permafrost carbon emissions overtake human ones about 2080. Natural CO2 removal (CDR) mostly further acidifies the oceans. But it includes 1 GT per year of CO2 by rock weathering.
  2. “2015 Peak”. Fossil-fuel emissions are reduced 95% by 2100, from a peak about 2015, and phased out entirely by 2140. Phase-out accelerates to 2060, when CO2 emissions are 40% of 2017 levels, then decelerates. Compared to a 2035 peak, natural carbon emissions are 25% lower and natural CDR is similar.
  3. “x Fossil Fuels by 2050”, or “x FF 2050”. Peak is about 2015, but emissions are cut in half by 2040 and end by 2050. Natural CDR is the same as for the 2015 Peak, but lower to 2050, since human CO2 emissions are less. This path has a higher GST from 2025 to 2084, as earlier warming from less SO4 outweighs reduced warming from GHGs.
  4. “Cold Turkey”. Emissions end at once after 2015. Natural CDR is only by rock weathering, since no new human CO2 emissions push carbon into the ocean. After 2060, cooling from ending CO2 emissions earlier outweighs warming from ending SO2 emissions.
  5. “x FF 2050, CDR”. Emissions are the same as for “x FF 2050”, as is natural CDR. But human CDR ramps up in an S-curve, from less than 1% of emissions in 2015 to 25% of 2015 emissions over the 2055 to 2085 period. It then ramps down in a reverse S-curve, to current levels in 2155 and 0 by 2200.
  6. “x FF 2050, 2xCDR” is like “x FF 2050, CDR”, but CDR ramps up to 52% of 2015 emissions over 2070 to 2100. From 2090, it ramps down to current levels in 2155 and 0 by 2190. CDR = 71% of CO2 emissions to 2017 or 229% of soil carbon lost since farming began [58], almost enough to cut CO2 in the air to 313 ppm, for 2°C warming.

Projections to 2400

The results for the six scenarios shown in Figure 5 spread ocean warming over 1,000 years, more than half of it by 2400. They use the factors discussed above for sea level, water vapor, and albedo effects of reduced SO4, snow, ice, and clouds. Permafrost emissions are based on MacDougall’s work, adjusted upward for a larger amount of permafrost, but also downward and to a greater degree, assuming much of the permafrost carbon stays as carbon-rich soil as in Iowa. As first stated in the introduction to Feedback Pathways, the model sets other natural carbon emissions to half of permafrost emissions. At 2100, net human CO2 emissions range from -15 GT/year to +2 GT/year, depending on the scenario. By 2100, CO2 concentrations range from 350 to 570 ppm, GLST warming from 2.9 to 4.5°C, and SLR from 1.6 to 2.5 meters. CO2 levels after 2100 are determined mostly by natural carbon emissions, driven ultimately by GST changes, shown in the lower left panel of Figure 5. They come from permafrost, CH4 hydrates, unfrozen soils, warming upper ocean, and biomass loss.


Figure 5: Scenarios for CO2 Emissions and Levels, Temperatures and Sea Level.

Comparing temperatures to CO2 levels allows estimates of long-run climate sensitivity to doubled CO2. Sensitivity is estimated as ln(2)/ln(ppm/280) * ∆T. By scenario, this yields > 4.61° (probably ~5.13° many decades after 2400) for 2035 Peak, > 4.68° (probably ~5.15°) for 2015 Peak, > 5.22° (probably 5.26°) for “x FF by 2050”, and 8.07° for Cold Turkey. Sensitivities of 5.13, 5.15 and 5.26° are much less than the 8.18° derived from the Vostok ice core. This embodies the statement above, in the Partitioning Climate Sensitivity section, that in periods with little or no ice and snow [here, ∆T of 7°C or more – the 2035 and 2015 Peaks and x FF by 2050 scenarios], this albedo-related sensitivity shrinks to 3.3-3.4°. Meanwhile, the Cold Turkey scenario (with a good bit more snow and a little more ice) matches well the relationship from the ice core (and validated to 465 ppm CO2, in the range for Cold Turkey: 4 and 14 Mya). Another perspective is the climate sensitivity starting from a base not of 280 ppm CO2, but from a higher level: 415 ppm, the current level and the 2400 level in the Cold Turkey case. Doubling CO2 from 415 to 830 ppm, according to the calculations underlying Figure 5, yields a temperature in 2400 between the x FF by 2050 and the 2015 Peak cases, about 7.6°C and rising, to perhaps 8.0°C after 1-2 centuries. This yields a climate sensitivity of 8.0 – 4.9 = 3.1°C in the 415-830 ppm range. The GHG portion of that remains near 1.8° (see Partitioning Climate Sensitivity above). But the albedo feedbacks portion shrinks further, from 6.4°, past 3.3° to 1.3°, as thin ice and most snow are gone, as noted above, plus all SO4 from fossil fuels, leaving mostly thick ice and feedbacks from clouds and water vapor.
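A minimal sketch of the sensitivity formula used here; the Cold Turkey ∆T below is back-derived for illustration from the quoted 8.07° and the 415 ppm 2400 level, not a figure stated in the text.

```python
import math

def sensitivity(ppm_2400, dT_2400):
    """Long-run climate sensitivity per CO2 doubling: ln(2)/ln(ppm/280) * delta-T."""
    return math.log(2.0) / math.log(ppm_2400 / 280.0) * dT_2400

print(f"{sensitivity(415, 4.58):.2f}°C per doubling")   # ~8.07, Cold Turkey case
```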

Table 5 summarizes the estimated temperature effects of 16 factors in the 6 scenarios to 2400. Peaking emissions now instead of in 2035 keeps eventual warming 1.1°C lower. Phasing out fossil fuels by 2050 keeps it another 1.2°C lower. Ending fossil fuel use immediately gains another 2.2°C, and also removing 2/3 of CO2 emissions to date gains another 2.4°C. Eventual warming in the higher-emissions scenarios is a good bit lower than would be inferred from the 8.2°C climate sensitivity based on an epoch rich in ice and snow. This is because the albedo portion of that climate sensitivity (currently 6.4°) is greatly reduced as ice and snow disappear. More human carbon emissions (in the first three scenarios especially) warm GSTs further, especially through less snow and cloud cover, more water vapor, and more natural carbon emissions. These in turn accelerate ice loss. All further amplify warming.

Table 5: Factors in Projected Global Surface Warming, 2010-2400 (°C).


Carbon release from permafrost and other reservoirs is lower in scenarios where GSTs do not rise as much. GSTs grow to the end of the study period, 2400, except in the CDR cases. Over 99% of warming after 2100 is due to amplifying feedbacks from human emissions during 1750-2100. These feedbacks amount to 1.5 to 5°C after 2100 in the scenarios without CDR. Projected mean warming rates with continued human emissions are similar to the current rate of 2.5°C per century over 2000-2020 [2]. Over the 21st century, they range from 62 to 127% of the rate over the most recent 20 years; the mean across the 6 scenarios is 100%, higher in the 3 warmest scenarios. Warming slows in later centuries. The key to peak warming rates is disappearing northern sea ice and human SO4, mostly by 2050. Peak warming rates per decade in all 6 scenarios occur this century. They are fastest not for the 2035 Peak scenario (0.38°C), but for Cold Turkey (0.80°C, when our SO2 emissions stop suddenly) and x FF 2050 (0.48°C, as SO2 emissions phase out by 2050). Due to SO4 changes, peak warming in the x FF 2050 scenario, from 2030 to 2060, is 80% faster than over the past 20 years, while for the 2035 Peak it is only 40% faster. Projected SLR from ocean thermal expansion (OTE) by 2400 ranges from 3.9 meters in the 2035 Peak scenario to 1.5 meters in the x FF 2050 2xCDR case. The maximum rate of projected SLR is 15 meters per century, from 2300 to 2400 in the 2035 Peak scenario. That is 5 times the peak 8-century rate 14 kya. However, the mean SLR rate over 2010-2400 is less than the historical 3 meters per century (from 14 kya) in the CDR scenarios, and barely faster for Cold Turkey. The rate of SLR peaks from 2130 to 2360 for the 4 scenarios without CDR. In the two CDR scenarios, projected SLR comes mostly from the GIS, OTE, and the WAIS. But the EAIS is the biggest contributor in the three fastest-warming scenarios.

Perspectives

The results show that the GST is far from equilibrium: barely more than 20% of the 5.12°C of equilibrium warming has occurred. However, the feedback processes that warm Earth’s climate to equilibrium will be mostly complete by 2400. Some snow melting will continue. So will melting of more East Antarctic and (in some scenarios) Greenland ice, natural carbon emissions, cloud cover and water vapor feedbacks, plus warming of the deep ocean. But all of these are tapering off by 2400 in all scenarios. Two benchmarks are useful to consider: 2°C and 5°C above 1880 levels. The 2015 Paris climate pact’s target is for GST warming not to exceed 2°C. However, projected GST warming exceeds 2°C by 2047 in all six scenarios. A focus on GLSTs recognizes that people live on land. Projected GLST warming exceeds 2°C by 2033 in all six scenarios. 5° is the greatest warming specifically considered in Britain’s Stern Review in 2006 [59]. For just 4°, Stern suggested a 15-35% drop in crop yields in Africa, while parts of Australia cease agriculture altogether [59]. Rind et al. projected that major U.S. crop yields would fall 30% with 4.2°C warming and 50% with 4.5°C warming [60]. According to Stern, 5° of warming would disrupt marine ecosystems, while more than 5° would lead to major disruption and large-scale population movements that could be catastrophic [59]. Projected GLST warming passes 5°C in 2117, 2131, and 2153 for the three warmest scenarios, but never does in the other three. With 5° of GLST warming, Kansas, until recently the “breadbasket of the world”, would become as hot in summer as Las Vegas is now. Most of the U.S. warms faster than Earth’s land surface in general [32]. Parts of the U.S. Southeast, including most of Georgia, would become that hot, but much more humid. Effects would be similar elsewhere.

Discussion

Climate models need to account for all these factors and their interactions. They should also reproduce conditions for previous eras when Earth had this much CO2 in the air, using current levels of CO2 and other GHGs. This study may underestimate warming due to permafrost and other natural emissions. It may also overestimate how fast seas will rise in a much warmer world. Ice grounded below sea level (by area, ~2/3 of the WAIS, 2/5 of the EAIS, and 1/6 of the GIS) can melt quickly (decades to centuries). But other ice can take many centuries or millennia to melt. Continued research is needed, including separate treatment of ice grounded below sea level or not. This study’s simplifying assumptions, that lump other GHGs with CO2 and other natural carbon emissions proportionately with permafrost, could be improved with modeling for the individual factors lumped here. More research is needed to better quantify the 12 factors modeled (Table 5) and the four modeled only as a multiplier (line 10 in Table 5). For example, producing a better estimate for snow cover, similar to Hudson’s for Arctic sea ice, would be useful. So would other projections, besides MacDougall’s, of permafrost emissions to 2400. More work on other natural emissions and the albedo effects of clouds with warming would be useful.

This analysis demonstrates that reducing CO2 emissions rapidly to zero will be woefully insufficient to keep GST less than 2°C above 1750 or 1880 levels. Policies and decisions which assume that merely ending emissions will be enough will be too little, too late: catastrophic. Lag effects, mostly from albedo changes, will dominate future warming for centuries. Absent CDR, civilization degrades, as food supplies fall steeply and human population shrinks dramatically. More emissions, absent CDR, will lead to the collapse of civilization and shrink population still more, even to a small remnant.

Earth’s remaining carbon budget to hold warming to 2°C requires removing more than 70% of our CO2 emissions to date, any future emissions, and all our CH4 emissions. Removing tens of GT of CO2 per year will be required to return GST warming to 2°C or less. CDR must be scaled up rapidly, while CO2 emissions are rapidly reduced to almost zero, to achieve negative net emissions before 2050. CDR should continue strong thereafter.

The leading economists in the USA and the world say that the most efficient policy to cut CO2 emissions is to enact a worldwide price on them [61]. It should start at a modest fraction of damages, but rise briskly for years thereafter, to the rising marginal damage rate. Carbon fee and dividend would gain political support and protect low-income people. Restoring GST to 0° to 0.5°C above 1880 levels calls for creativity and dedication to CDR. Restoring the healthy climate on which civilization was built is a worthwhile goal. We, our parents and our grandparents enjoyed it. A CO2 removal price should be enacted, equal to the CO2 emission price. CDR might be paid for at first by a carbon tax, then later by a climate defense budget, as CO2 emissions wind down.

Over 1-4 decades of research and scaling up, CDR technology prices may drop far. Sale of products using waste CO2, such as concrete, may ease the transition. CDR techniques are at various stages of development and price. Climate Advisers provides one 2018 summary of eight CDR approaches, including for each: potential GT of CO2 removed per year, mean US$/ton CO2, readiness, and co-benefits [62]. The commonest biological CDR method now is organic farming, in particular no-till and cover cropping. Others include several methods of fertilizing or farming the ocean; planting trees; biochar; fast-rotation grazing; and bioenergy with CO2 capture. Non-biological ones include direct air capture with CO2 storage underground in carbonate-poor rocks such as basalts. Another increases the surface area of such rocks by grinding them to gravel, or to dust spread from airplanes, so that they react with the weak carbonic acid in rain. Another adds small carbonate-poor gravel to agricultural soil.

CH4 removal should be a priority, to quickly drive CH4 levels down to 1880 levels. With a half-life of roughly 7 years in Earth’s atmosphere, CH4 levels might be cut back that far within 30 years. It could happen by ending CH4 releases from fossil fuel extraction and distribution leaks, untapped landfills, cattle not fed Asparagopsis taxiformis, and flooded rice paddies. Solar radiation management (SRM) might play an important supporting role. Due to the loss of Arctic sea ice and human SO4, even removing all human GHGs (a scenario not shown) would likely not bring GLST back below 2°C by 2400. SRM could offset these two soonest major albedo changes in the coming decades. The best-known SRM techniques are (1) putting SO4 or calcites in the stratosphere and (2) refreezing the Arctic Ocean. Marine cloud brightening could also play a role. SRM cannot substitute for ending our CO2 emissions or for vast CDR, both of them soon. We may need all three approaches working together.

In summary, the paleoclimate record shows that today’s CO2 level entails a GST roughly 5.1°C warmer than 1880. Most of the increase from today’s GST will be due to amplification by albedo changes and other factors. Warming gets much worse with continued emissions. Amplifying feedbacks will add more GHGs to the air even if we end our GHG emissions now, and those further GHGs will warm Earth’s surface, oceans and air even more, in some cases much more. The impacts will be many: steeply reduced crop yields (and widespread crop failures), many places sometimes too hot to survive, widespread civil wars, billions of refugees, and many meters of SLR. Decarbonization of civilization by 2050 is required, but far from enough. Massive CO2 removal is required as soon as possible, perhaps supplemented by decades of SRM, all enabled by a rising price on CO2.

List of Acronyms


References

  1. https://data.giss.nasa.gov/gistemp/tabledata_v3/
  2. https://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
  3. Hansen J, Sato M (2011) Paleoclimate Implications for Human-Made Climate Change in Berger A, Mesinger F, Šijački D (eds.) Climate Change: Inferences from Paleoclimate and Regional Aspects. Springer, pp: 21-48.
  4. Levitus S, Antonov J, Boyer T (2005) Warming of the world ocean, 1955-2003. Geophysical Research Letters
  5. https://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/
  6. https://www.eia.gov/totalenergy/data/monthly/pdf/sec1_3.pdf
  7. Hansen J, Sato M, Kharecha P, von Schuckmann K (2011) Earth’s energy imbalance and implications. Atmos Chem Phys 11: 13421-13449.
  8. https://www.eia.gov/energyexplained/index.php?page=environment_how_ghg_affect_climate
  9. Tripati AK, Roberts CD, Eagle RA (2009) Coupling of CO2 and ice sheet stability over major climate transitions of the last 20 million years. Science 326: 1394-1397. [crossref]
  10. Shevenell AE, Kennett JP, Lea DW (2008) Middle Miocene ice sheet dynamics, deep-sea temperatures, and carbon cycling: a Southern Ocean perspective. Geochemistry Geophysics Geosystems 9:2.
  11. Csank AZ, Tripati AK, Patterson WP, Eagle RA, Rybczynski N, et al. (2011) Estimates of Arctic land surface temperatures during the early Pliocene from two novel proxies. Earth and Planetary Science Letters 344: 291-299.
  12. Pagani M, Liu Z, LaRiviere J, Ravelo AC (2009) High Earth-system climate sensitivity determined from Pliocene carbon dioxide concentrations. Nature Geoscience 3: 27-30.
  13. Wikipedia – https://en.wikipedia.org/wiki/Greenland_ice_sheet
  14. Bamber JL, Riva REM, Vermeersen BLA, Le Brocq AM (2009) Reassessment of the potential sea-level rise from a collapse of the West Antarctic Ice Sheet. Science 324: 901-903.
  15. https://nsidc.org/cryosphere/glaciers/questions/located.html
  16. https://commons.wikimedia.org/wiki/File:AntarcticBedrock.jpg
  17. DeConto RM, Pollard D (2016) Contribution of Antarctica to past and future sea-level rise. Nature 531: 591-597.
  18. Cook CP, van de Flierdt T, Williams T, Hemming SR, Iwai M, et al. (2013) Dynamic behaviour of the East Antarctic ice sheet during Pliocene warmth. Nature Geoscience 6: 765-769.
  19. Vimeux F, Cuffey KM, Jouzel J (2002) New insights into Southern Hemisphere temperature changes from Vostok ice cores using deuterium excess correction. Earth and Planetary Science Letters 203: 829-843.
  20. Snyder CW (2016) Evolution of global temperature over the past two million years. Nature 538: 226-228.
  21. https://www.wri.org/blog/2013/11/carbon-dioxide-emissions-fossil-fuels-and-cement-reach-highest-point-human-history
  22. https://phys.org/news/2012-03-weathering-impacts-climate.html
  23. Smith SJ, van Aardenne J, Klimont Z, Andres RJ, Volke A, et al. (2011) Anthropogenic sulfur dioxide emissions: 1850-2005. Atmospheric Chemistry and Physics 11: 1101-1116.
  24. Figure SPM-2 in S Solomon, D Qin, M Manning, Z Chen, M. Marquis, et al. (eds.) IPCC, 2007: Summary for Policymakers. in Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the 4th Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK and New York, USA.
  25. https://www.ncdc.noaa.gov/snow-and-ice/extent/snow-cover/nhland/0
  26. https://nsidc.org/cryosphere/sotc/snow_extent.html
  27. ftp://sidads.colorado.edu/DATASETS/NOAA/G02135/
  28. Chen X, Liang S, Cao Y (2016) Satellite observed changes in the Northern Hemisphere snow cover phenology and the associated radiative forcing and feedback between 1982 and 2013. Environmental Research Letters 11:8.
  29. https://earthobservatory.nasa.gov/global-maps/MOD10C1_M_SNOW
  30. Hudson SR (2011) Estimating the global radiative impact of the sea ice-albedo feedback in the Arctic. Journal of Geophysical Research: Atmospheres 116:D16102.
  31. https://www.currentresults.com/Weather/Canada/Manitoba/Places/winnipeg-snowfall-totals-snow-accumulation-averages.php
  32. https://data.giss.nasa.gov/gistemp/tabledata_v3/ZonAnn.Ts+dSST.txt
  33. https://neven1.typepad.com/blog/2011/09/historical-minimum-in-sea-ice-extent.html
  34. https://14adebb0-a-62cb3a1a-s-sites.googlegroups.com/site/arctischepinguin/home/piomas/grf/piomas-trnd2.png
  35. https://www.earthobservatory.nasa.gov/features/SeaIce/page4.php
  36. Shepherd A, Ivins ER, Geruo A, Valentina RB, Mike JB, et al. (2012) A reconciled estimate of ice-sheet mass balance. Science 338: 1183-1189.
  37. Shepherd A, Ivins E, Rignot E, Smith B, et al. (2018) Mass balance of the Antarctic Ice Sheet from 1992 to 2017. Nature 558: 219-222.
  38. https://en.wikipedia.org/wiki/Greenland_ice_sheet
  39. Robinson A, Calov R, Ganopolski A (2012) Multistability and critical thresholds of the Greenland ice sheet. Nature Climate Change 2: 429-431.
  40. https://earthobservatory.nasa.gov/features/CloudsInBalance
  41. Figures TS-6 and TS-7 in TF Stocker, D Qin, GK Plattner, M Tignor, SK Allen, J Boschung, et al. (eds.). IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK and New York, NY, USA.
  42. Zelinka MD, Zhou C, Klein SA (2016) Insights from a refined decomposition of cloud feedbacks. Geophysical Research Letters 43: 9259-9269.
  43. Wadhams P (2016) A Farewell to Ice, Penguin / Random House, UK.
  44. https://scripps.ucsd.edu/programs/keelingcurve/wp-content/plugins/sio-bluemoon/graphs/mlo_full_record.png
  45. Fretwell P, Pritchard HD, Vaughan DG, Bamber JL, Barrand NE, et al. (2013) Bedmap2: improved ice bed, surface and thickness datasets for Antarctica. The Cryosphere 7: 375-393.
  46. Fairbanks RG (1989) A 17,000 year glacio-eustatic sea-level record: Influence of glacial melting rates on the Younger Dryas event and deep-ocean circulation. Nature 342: 637-642.
  47. https://en.wikipedia.org/wiki/Sea_level_rise#/media/File:Post-Glacial_Sea_Level.png
  48. Zelinka MD, Myers TA, McCoy DT, Po-Chedley S, Caldwell PM, et al. (2020) Causes of Higher Climate Sensitivity in CMIP6 Models. Geophysical Research Letters 47.
  49. https://earthobservatory.nasa.gov/images/4073/panama-isthmus-that-changed-the-world
  50. https://sunearthday.nasa.gov/2007/locations/ttt_cradlegrave.php
  51. Hugelius G, Strauss J, Zubrzycki S, Harden JW, Schuur EAG, et al. (2014) Improved estimates show large circumpolar stocks of permafrost carbon while quantifying substantial uncertainty ranges and identifying remaining data gaps. Biogeosciences Discuss 11: 4771-4822.
  52. DeConto RM, Galeotti S, Pagani M, Tracy D, Schaefer K, et al. (2012) Past extreme warming events linked to massive carbon release from thawing permafrost. Nature 484: 87-92.
  53. Figure SPM-2 in IPCC 2007: Summary for Policymakers. In: Climate Change 2007: The Physical Science Basis.
  54. Figure 22.5 in Chapter 22 (F.S. Chapin III and S. F. Trainor, lead convening authors) of draft 3rd National Climate Assessment: Global Climate Change Impacts in the United States. Jan 12, 2013.
  55. Dorrepaal E, Toet S, van Logtestijn RSP, Swart E, van der Weg MJ, et al. (2009) Carbon respiration from subsurface peat accelerated by climate warming in the subarctic. Nature 460: 616-619.
  56. MacDougall AH, Avis CA, Weaver AJ (2012) Significant contribution to climate warming from the permafrost carbon feedback. Nature Geoscience 5:719-721.
  57. Shakhova N, Semiletov I, Leifer I, Valentin S, Anatoly S, et al. (2014) Ebullition and storm-induced methane release from the East Siberian Arctic Shelf. Nature Geoscience 7: 64-70.
  58. Sanderman J, Hengl T, Fiske GJ (2018) Soil carbon debt of 12,000 years of human land use. PNAS 114(36): 9575-9580, with correction in 115(7).
  59. Stern N (2007) The Economics of Climate Change: The Stern Review. Cambridge University Press, Cambridge UK.
  60. Rind D, Goldberg R, Hansen J, Rosenzweig C, Ruedy R (1990) Potential evapotranspiration and the likelihood of future droughts. Journal of Geophysical Research. 95: 9983-10004.
  61. https://www.wsj.com/articles/economists-statement-on-carbon-dividends-11547682910
  62. https://www.climateadvisers.com/creating-negative-emissions-the-role-of-natural-and-technological-carbon-dioxide-removal-strategies/

Marking with a Dye to Visualize the Limits of Resection in Breast Conserving Surgery

DOI: 10.31038/CST.2020544

Abstract

Objective: To study the technical aspects of the marking of the resection margins with a dye in breast-conserving surgery and to evaluate its aesthetic and oncological impacts.

Methods: Injection of methylene blue along a perpendicular path, at a controlled distance from a tumor previously located by ultrasound or palpation. We then studied the oncological and aesthetic results.

Results: Over a period of 4 years we operated on 36 patients. The average age was 43. Large or medium-sized breasts were found in the majority of cases. Tumor sizes were dominated by T2 and T3 tumors, and tumors were mostly located in the upper outer quadrant. The most frequently encountered histological type was invasive carcinoma of non-specific type. The incisions were classic in more than 80% of cases and sometimes oncoplastic. The aesthetic results were satisfactory in 78% of cases. The oncological results were marked by invaded margins in 3% of patients.

Conclusion: The results of the methylene blue injection technique to secure the excision margins in breast-conserving surgery are satisfactory from both the aesthetic and the oncological points of view.

Keywords

Marking; Methylene blue; Aesthetic results; Oncological results.

Introduction

Breast cancer surgery is basically a total mastectomy or breast-conserving surgery (BCS), a partial removal of the gland that takes the tumor and a margin of healthy tissue. This resection is associated with a sentinel lymph node biopsy or an axillary dissection. For non-palpable lesions, localization by medical imaging with the placement of a hook-wire helps guide the excision. For larger lesions, guidance and assessment of the margins is done by ultrasound or palpation of the mass [1]. Intraoperative frozen-section examination of lumpectomy specimens shows invasion of the margins and a need for further resection in 25% of cases [2]. In daily practice at the Dakar Cancer Institute, we aim for larger margins in BCS and breast cancer oncoplasty. For this we have introduced into our practice the peritumoral injection of methylene blue, together with oncoplastic techniques for conservation. The objective of this work was to study the technical aspects of dye marking in BCS and to assess its aesthetic and oncological impacts.

Materials and Methods

Patients had to present with a tumor smaller than 4 cm, either initially or after chemotherapy. Pure methylene blue, a 10 cc syringe and a spinal anesthesia needle were used (Figure 1). The dye was injected around the tumor, more than 3 cm from its edges, after localization by palpation or ultrasound (Figure 2), from the skin down to the pectoralis major aponeurosis. The excision was done along the blue paths (Figure 3). We used the Clough classification to assess the aesthetic results.

Figure 1: Injection Equipment.

Figure 2: Infiltration of methylene blue over 3 cm.

Figure 3: Resection on the blue path.

Results

Over a period of 4 years we operated on 36 patients. The average age was 43, with extremes of 25 and 62. Large breasts were found in 27% of cases, medium-sized breasts in 56% of cases and small breasts in 17% of cases. At the tumor level, 6 patients (16%) were classified as T1, 16 patients (44%) as T2 and 14 patients (40%) as T3. The tumor was located in the upper outer quadrant in 20 patients, i.e. 56% of cases. The histologic types were distributed as follows: 27 cases of Invasive Carcinoma of Non-Specific Type (ICNST) (75%), 2 cases of In Situ Carcinoma (ISC) (6%), 2 cases of atypical ductal hyperplasia (ADH) (6%), 1 case of relapsed grade 2 phyllodes tumor (3%), 1 case of mammary lymphoma (3%), 1 case of ADH and ISC association (3%), 1 case of ICNST and ISC association (3%), and 1 case of ICNST and lobular carcinoma association (3%). The predominant incision was the orange-quarter incision (14 patients, 39% of cases). The other types of incision were periareolar (8 patients, 22% of cases), triangular (2 patients, 6% of cases), and “batwing” and “hemibatwing” types (10 patients, 28% of cases). All patients had clear margins on macroscopic examination of the specimen. At the microscopic level, margins were invaded in 1 patient (3% of cases) by an extensive ISC with foci of ADH, giving rise to the indication of a mastectomy. One patient (3% of cases) presented close margins of 1 mm; simple monitoring following radiotherapy was decided, and the 1-year MRI was normal. We found 28 satisfactory aesthetic results (78% of cases), 6 average results and 2 bad results. Four patients (11% of cases) presented breast lymphatic drainage disorders with chronic pain, including 1 episode of acute lymphangitis. After 4 years of follow-up we found 1 recurrence (2.7%).

Discussion

Young women benefit more from BCS than older women [3]. The breast shape of the young woman is a better guarantee of a good breast-conserving technique, and the injection of methylene blue is all the easier because the breast is less flabby. It is the same with the size of the breasts: the concern for a good margin is less in large breasts, and larger breast size is an argument of choice in BCS [4]. The tumor size and an extensive in situ component, especially in combination with foci of atypical hyperplasia, are risk factors for local recurrence. The size of the tumor is a risk factor for local recurrence and distant dissemination; tumors larger than 5 cm N+ have an 84% 5-year survival rate [1]. The tumor site did not change the injection technique. Upper outer quadrant (UOQ) tumors are more accessible to conventional techniques because the gland is more developed there and the axillary dissection is done through the same incision. Lumpectomy of the UOQ offers more possibilities for simple surgery without recourse to oncoplastic reconstruction techniques [4]. Local recurrence and invaded margins were correlated with histological type. The combination of ductal carcinoma and extensive carcinoma in situ has been implicated, as have large lesion size, nuclear pleomorphism, absence of cellular polarisation and extensive necrosis [5]. The surgery of phyllodes tumors owes its success to a good excision passing through clear margins. Lymphoma is a rare tumor of the breast; its treatment is BCS plus treatment of micrometastatic disease. In case of microscopic residue, if margins are not demarcated the risk of recurrence increases. Neo-adjuvant chemotherapy tends to reduce the risk of local recurrence despite the role of advanced nodal involvement at diagnosis, residual tumor larger than 2 cm, multifocal residual disease, and lymphovascular space invasion [6].

The type of incision depends on the tumor location, the breast and tumor sizes, and the breast shape. The decision-making factors for the type of incision are the proximity of the tumor to the skin, the tumor site, the size of the breast, the possible conversion to mastectomy after the definitive histological result (with the possibility of immediate reconstruction), the choice expressed by the patient, and the need to perform breast reduction or symmetrization at the same time. Oncoplastic techniques using upper and lower pedicles, inverted T, pure vertical techniques or round-block can be used independently of the location; increasingly, tumor size does not matter in the decision [7]. Symmetrization by glandular or skin excision may be an aesthetic imperative in carefully selected patients [8]. Aesthetic sequelae, which occur in 20 to 30% of cases, combine breast deformities, areola malposition and skin damage [4], and are aggravated by radiotherapy of the breast. This led to the development of localized radiotherapy techniques, in particular Accelerated Partial Breast Irradiation (APBI). This radiotherapy modality, which includes interstitial brachytherapy, intraoperative radiotherapy and hypofractionation, although not very widespread, has been validated as a safe alternative because it gives recurrence rates almost identical to whole-breast external radiotherapy with fewer chronic sequelae for the breast and critical organs, including the heart and lungs [9,10]. The oncological results emerge progressively over time: the recurrence and death rates are low, and BCS does not increase mortality. The choice of technique must obey technical and oncological requirements.

Conclusion

Identifying the margins in BCS with a dye, to target the cutting path, is a simple and well-tolerated technique. It is an important contribution to securing the margins and to the management of large tumors. The aesthetic and oncological results are satisfactory.

References

  1. Kapoor MM, Patel MM, Scoggins ME (2019) The Wire and Beyond: Recent Advances in Breast Imaging Preoperative Needle Localization. Radiographics 39:1886-1906.
  2. Rubio IT, Ahmed M, Kovacs T, Marco V (2016) Margins in breast conserving surgery: A practice-changing process. Eur J Surg Oncol 42: 631-640. [crossref]
  3. Lazow SP, Riba L, Alapati A, James TA (2019) Comparison of breast-conserving therapy vs mastectomy in women under age 40: National trends and potential survival implications. Breast J 25: 578-584. [crossref]
  4. Bertozzi N, Pesce M, Santi PL, Raposio E (2017) Oncoplastic breast surgery: comprehensive review. Eur Rev Med Pharmacol Sci 21: 2572-2585. [crossref]
  5. Provenzano E, Hopper JL, Giles GG, Marr G, Venter DJ, et al. (2004) Histological markers that predict clinical recurrence in ductal carcinoma in situ of the breast: an Australian population-based study. Pathology 36: 221-229.
  6. Chen AM, Meric-Bernstam F, Hunt KK, Thames HD, Oswald MJ, et al. (2004) Breast conservation after neoadjuvant chemotherapy: the MD Anderson cancer center experience. J Clin Oncol 22: 2303-2312. [crossref]
  7. Walcott-Sapp S, Srour MK, Lee M, Luu M, Amersi F, et al. (2020) Can Radiologic Tumor Size Following Neoadjuvant Therapy Reliably Guide Tissue Resection in Breast Conserving Surgery in Patients with Invasive Breast Cancer?. Am Surg 86: 1248-1253. [crossref]
  8. Deigni OA, Baumann DP, Adamson KA, Garvey PB, Selber JC, et al. (2020) Immediate Contralateral Mastopexy/Breast Reduction for Symmetry Can Be Performed Safely in Oncoplastic Breast-Conserving Surgery. Plast Reconstr Surg 145: 1134-1142. [crossref]
  9. Romero D (2020) APBI is an alternative to WBI. Nat Rev Clin Oncol [crossref]
  10. Veronesi U, Cascinelli N, Mariani L, Greco M, Saccozzi R, et al. (2002) Twenty-year follow-up of a randomized study comparing breast-conserving surgery with radical mastectomy for early breast cancer. N Engl J Med 347: 1227-1232. [crossref]

Smaller and Small: Strategies to Iterate to Knowledge about the Granular Aspects of Donations

DOI: 10.31038/CST.2020543

Abstract

The paper presents the use of an emerging science, Mind Genomics, to understand a practical aspect of daily life: what motivates a person to donate to a specific charity. Beyond the knowledge of specific messages which are deemed to be potentially effective as a stimulus to donation, the paper shows how knowledge of a specific end-use can inform us about the mind of a person for a more general problem—how understanding the messages for donation drives a deeper understanding of human motivation. The paper moves from inexpensive pilot tests, through an affordable experiment, and onto the creation of a tool to assign new people worldwide to the proper groups, so they can receive the appropriately targeted messages.

Introduction

Knowing What to Say to Donors to Encourage Giving

In today’s world, departments of development for various organizations have become increasingly important and active. One is inundated daily by requests for donations for all sorts of causes, ranging from simple letters from individuals to sophisticated outreach including brochures and other presentations with information intended to tap one’s emotions and open one’s wallets. Most appeals from organizations appear to be ‘on point,’ with the proper phrases, the proper images, and so forth [1-3].

Approaches to Science – Idiographic versus Nomothetic

Today’s culture of science drives research towards large samples and well-defined stimuli. Although a great deal of science is exploratory, the majority of published studies would have us believe that they follow the hallowed dicta of philosopher of science Karl Popper, invoking the hypothetico-deductive system: creating a hypothesis and then attempting to falsify it. The editors of major journals look for breakthrough work, combining a robust mix of novelty and familiarity. Such work is not common, although it occasionally surfaces. The evolving culture of science focuses on extensions of today’s state of knowledge as represented in the existing scientific literature; the typical phrase is ‘plugging holes in the literature’ or ‘answering a call from the literature.’ Scientific rigor is as much rigorous statistics as rigorous thinking. The published work must convince by virtue of statistical differences, not by daring challenges which advance science. Whatever is promoted as scientific ‘doctrine,’ today’s scientific world frowns upon such new directions, as a detailed look at the content of journals and the reactions of reviewers shows. A quandary arises when the research is meant to explore a topic rigorously, with a good underlying design but with affordable samples, with the goal of being used for practical ends while truly adding to knowledge of a topic. Can this effort be called science? Typically, these problems emerge in the social and behavioral sciences, less frequently in the harder sciences.

The focus of this paper is how one can quickly, inexpensively, and rigorously uncover the nature of the donor’s mind for a specific end recipient, that recipient being Children’s Cancer Center (name disguised to preserve confidentiality). The objective is to support children with cancer by addressing their medical, social, and psychological needs, as well as their family’s challenges. The problem is to discover what type of messages are likely to drive a person to donate. The problem is a practical one with a limited scope, specifically Children’s Cancer Center’s donations, but the learning which emerges from the study is relevant to an understanding of other communications driving support for a given charity. The empirical part of this paper shows the two steps followed to discover what to say to potential donors about Children’s Cancer Center. The combination of the two studies may be viewed as a discussion of ‘method,’ so-called methodological research. The specific findings of the second study, which is larger, but still small in terms of general practice, show what can be discovered for practical use.

About Children’s Cancer Center

Data from the World Health Organization (WHO) and the National Cancer Institute reveal that, in the United States, cancer is the leading cause of death by disease past infancy and will lead to the deaths of approximately 1,190 children in the U.S. in 2021. Further, as of January 2015, the most recent data readily available, the National Cancer Institute reports that there are 429,000 survivors of childhood and adolescent cancer (diagnosed at ages 0 to 19 years) alive in the United States, and these survivors face serious medical problems during and after the acute phase of their disease (National Cancer Institute, 2018, 2020) [4,5].

Childhood cancer is a global issue. According to St. Jude Children’s Research Hospital’s website, cancer is diagnosed each year in about 175,000 children ages 0-14. The World Health Organization reports that more than 300,000 new cases are diagnosed annually in children ages 0-19. The number of actual cases is probably greater, because children in low-income countries are not likely to be included in the count. As Alex’s Lemonade website points out, “globally, cancer stole 11.5 million years of healthy life away from children in 2017.” This is because of the life years taken away from children who die, as opposed to a 90-year-old adult who dies of cancer with very few life years left (Alex’s Lemonade, 2020) [6].

Despite the global prevalence of childhood cancer and the death rates associated with it (survival rates are now 80% for US children but only 20% globally), only 4% of US government funding in the cancer sector is directed towards pediatric cancers. This has been challenged by pediatric cancer activists for years. A 2015 article by Kristin Connor in the Washington Examiner sheds light on the logic behind why our government doesn’t increase funding for childhood cancer:

“…cancer research funds are driven by the number of people — of any age — who have the disease. And, of course, adults, with decades of exposures and behaviors, experience cancer in much greater numbers than young children. This approach therefore seems like the “democratic” way to distribute federal money. Yet it doesn’t do much for the more than 15,700 children diagnosed each year with cancer, and the more than 40,000 children undergoing cancer treatment each year all across the United States. But instead of looking at the number of annual diagnoses, perhaps we should consider the number of life-years potentially saved. For each child with cancer, on average, as many as 71 potential life years might be saved. That’s an important factor that is not being considered when funding allocation decisions are made.” [7].

Despite great progress in US survival rates (84% of children diagnosed with cancer are alive at least five years after diagnosis), 16% are still dying, and those who do survive for five years are not necessarily cured; many of them suffer from long-term side effects of their illness and the associated treatments. According to Alex’s Lemonade, “Children who were treated for cancer are twice as likely to suffer chronic health conditions later in life versus children without a history of cancer.”

Some reason for optimism comes from the WHO: “most childhood cancers can be cured with generic medicines and other forms of treatments including surgery and radiotherapy. Treatment of childhood cancer can be cost-effective in all income settings.” Early intervention is critical to improving pediatric cancer outcomes. The WHO also calls for childhood cancer data systems, which are “needed to drive continuous improvements in the quality of care, and to drive policy decisions.” Donating to organizations like Children’s Cancer Center supports those factors that will lead to improved outcomes around the world. With the WHO’s statement that “the most effective strategy to reduce the burden of cancer in children is to focus on a prompt, correct diagnosis followed by effective therapy,” supporting an organization like Children’s Cancer Center is critical to reducing death rates of children worldwide [8].

The Mind Genomics Approach

Mind Genomics is an emerging science with roots in experimental psychology, sociology, consumer research, and statistics. The objective of a Mind Genomics study is to understand the messages for a topic which drive a specific response, such as ‘Dislike/Like,’ ‘Not interested/Interested,’ ‘Will not donate/Will donate,’ ‘Will pay a certain amount,’ ‘Expect to feel a certain way,’ and so forth. The purview of Mind Genomics is everyday life and the expected decisions that people make when they are presented with messages about a specific, granular situation of the type that confronts them daily. The process of Mind Genomics, its intellectual underpinnings, statistics, and business-relevant patents have been documented extensively and need not be repeated in their specifics; the reader is directed to a representative list [9-11]. Mind Genomics grew out of the need to create a new vision of science, one studying the behavior of the everyday from the viewpoint of experimentation rather than observation. Anthropology already studies individual cultures and behaviors in depth, with recent efforts attempting to move from the purely descriptive to the quantitative [12]. Sociology already studies everyday behavior but does not conduct experiments, and looks for general rules in everyday behavior, rules which are ‘nomothetic,’ dealing with generalities. Social psychology moves more closely into the world of the mind but again deals with issues of nomos. Social psychology is not experimental in this sense, and while it may deal with ordinary daily behavior, it attempts to provide a broad sweep of the behavior of people rather than focusing on the topic itself; the topic of the study is only a means to understand the person. In the above disciplines, researchers focus on the person, using the normal situation to understand the person.

In contrast to other disciplines, Mind Genomics focuses on the specifics of the situation, using the person and the rules of judgment to understand it. Thus, the learning is about the specifics of daily life, and less about the person himself or herself. Indeed, one might use the metaphor that Mind Genomics focuses on the situation, with the situation ‘illuminated’ through the lights of different sources; these ‘lights,’ these different forms of illumination, are the people. The ultimate objective of Mind Genomics is to create a ‘Wiki of daily experience,’ a virtual encyclopedia of daily life and its different aspects, dimensionalized into specifics, with the data being the aspect and numbers representing the way the ordinary person feels about that specific, on some type of scale. The problem for the ‘project of science’ is this: what type of information is acceptable as science? The project discussed here has a specific objective; does the fact that there is such an objective invalidate the science, simply because the results pertain to a specific end-user, the Children’s Cancer Center charity? Furthermore, are the results not ‘valid’ because the base size is low? Finally, what is the status of the preparatory study, a small preliminary study to identify whether there are messages which resonate? Do preparatory studies deserve a place in the research report, because they illustrate the way towards the larger discovery, through one or a set of small ‘trial’ experiments?

Illustrating the Process of Mind Genomics Applied to Donations

Mind Genomics has already been used to study the nature of effective communication for donations [10,13]. The objective of the study is to understand the most productive and effective way to communicate with a prospective donor to Children’s Cancer Center. The relevance of the topic, donation, and the relevance of Children’s Cancer Center in the world of charity organizations for children with cancer will become obvious from the review of today’s information about children and cancer. Thus, anything helping to understand WHAT to communicate, and to WHOM, can play a major role in the world of health care and fundraising. The ordinary process for understanding what to communicate does not invoke science, nor does it invoke foundational experiments bridging the world of science and application. The ordinary process might be either to select previous messages that ‘worked’ to drive donations, or perhaps to classify the prospective donors into different groups, based upon WHO they are, WHAT they have done in the past, or how they THINK about general topics. The short case presented here shows how a rigorous scientific approach to understanding the mind of the donor can be applied to situations where guidance is needed, rather than where one wishes to establish a scientific proposition with reasonable certainty. The underlying world view is that even within the world of application, one can create knowledge which informs the greater science. In the case study presented here, we show how a small pair of studies, one with ten respondents and a succeeding one with 50 respondents, informs the world of charitable donations, establishing patterns that can be used later on as springboards either for more application or for theory building. We now move to the science of Mind Genomics, following the process not so much to establish general rules as to investigate a specific, defined situation: donation to Children’s Cancer Center Hospital. We follow a series of steps, whether the Mind Genomics study is designed to understand charitable donations in general or donations to a specific cause.

Our presentation of the process shows the results from two iterations. The first iteration, with the very small base size of ten respondents (n=10), will show how Mind Genomics extracts information at virtually the level of one or a few individuals, in a manner similar to the way the anthropologist or the consumer researcher extracts information from in-depth interviews with one or two people or from focus groups of three or more people. The second iteration moves on to a more quantitative study of the responses from 50 individuals, after building on the learning from the first iteration, changing some of the material, and then testing. It is important to note that the process need not be restricted to one small study followed by one larger one, but might comprise several small studies, until the sequence of ‘iterations’ provides the information which seems most appropriate to answer the applied question, and provides the structured knowledge for a ‘wiki of the mind’ with respect to the topic.

Step 1: Choose a Topic

This step sounds simple, but it requires the researcher to focus on a specific topic. Choosing the specific topic is the start of critical thinking required by Mind Genomics, whether the topic is a general one of daily behavior (what makes a person donate to a charity?) or a specific one (what makes a person want to donate to Children’s Cancer Center Hospital?).

Step 2: Create Four Questions Which ‘Tell a Story,’ Pertaining to the Topic

The iterative nature of Mind Genomics ensures that the researcher need not worry that the questions are correct. Indeed, part of the underlying world view of Mind Genomics is that science should be exploratory.

Step 3: Create Four Answers to Each Question

Again, these answers need not be the correct answers. The ability to iterate, to run a number of these small experiments, generates data which guide the researcher to better questions and better answers.

Step 4: Select a Rating Scale

The rating scale can be 5, 7 or 9 points. The actual number of scale points, like the wording of the scale, is left to the discretion of the researcher; there is no right or wrong scale. The topic of questions and scales has been a focus of researchers for a century. The pragmatic side of Mind Genomics is that the scale should be simple. The scale for this type of question (not donate vs. donate) should be simple to understand and anchored at both ends. An odd number of scale points is easier to work with when there is the possibility of a neutral point.

Step 5: Launch the Study and Get the Results Fully Analyzed within 90 Minutes

The process obtains respondents through a panel service (Luc.id), with the Mind Genomics platform automatically analyzing the data and returning a complete report, the entire process typically taking less than one to one and a half hours.

Step 6: Present the Appropriate Vignettes to the Respondent, Vignettes Created for That Respondent by the Permuted Experimental Design

Record the rating on the anchored 1-9 scale, and record the response time (consideration time), operationally defined as the number of seconds from the appearance of the vignette on the screen to the actual rating assigned by the respondent. Each respondent evaluates an appropriate set of vignettes constituting a full experimental design, allowing subsequent powerful analyses. Each respondent evaluates a unique set of combinations of messages, so that across the set of respondents the evaluations cover many of the possible combinations, rather than covering only a few combinations with high precision. The learning will be in the stimuli, not in the precision of the measurement.

Step 7: Obtain Data Analyzable Both at the Level of the Individual and at the Level of the Group, Respectively

Each respondent evaluates a full experimental design, analyzable at the level of the individual respondent. For the design comprising four questions and four answers (elements), the design prescribes 24 combinations (vignettes). Each vignette comprises 2-4 elements, with no more than one element, or answer, from any question. The design ensures that each element appears 5x, uncorrelated with any other element. The experimental design is maintained for each respondent, but the combinations are changed according to a permutation scheme [14,15]. Thus, the combinations cover more of the ‘design space’ than the usual approach to experimental design. The underlying rationale is that it is more productive to test many possible combinations with underlying variability (noise) in each measurement than to limit oneself to a few combinations, measuring each point in the design with many replicate measures to average out the variation. In short, the argument of Mind Genomics is that knowledge emerges from scope with modest precision at each point (the big pattern emerges), rather than from precision with narrow scope. This is the key tenet of Mind Genomics: scope is better than precision, at least in the early explorations of a topic.
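
To make the combinatorics concrete, here is a minimal Python sketch of such a design (an illustration of the constraints described above, not the patented permutation scheme of [14,15]). It deals the 16 elements x 5 appearances = 80 slots into 24 vignettes, here of 3 or 4 elements (a mix that sums to 80; the text allows 2-4), with at most one answer from any question per vignette, and re-deals with a different seed for each respondent.

  import random

  QUESTIONS = "ABCD"                                              # four questions
  ELEMENTS = [q + str(a) for q in QUESTIONS for a in range(1, 5)] # A1..D4, 16 elements

  def make_design(seed, appearances=5, max_tries=1000):
      rng = random.Random(seed)
      sizes = [4] * 8 + [3] * 16                  # 8*4 + 16*3 = 80 element slots
      for _ in range(max_tries):
          pool = ELEMENTS * appearances           # each element must appear 5x
          rng.shuffle(pool)
          rng.shuffle(sizes)
          vignettes = []
          for size in sizes:
              vignette = []
              for element in list(pool):
                  if len(vignette) == size:
                      break
                  # at most one answer from any question in a vignette
                  if all(element[0] != e[0] for e in vignette):
                      vignette.append(element)
                      pool.remove(element)
              vignettes.append(vignette)
          if not pool and all(len(v) == s for v, s in zip(vignettes, sizes)):
              return vignettes
      raise RuntimeError("no valid deal found for this seed")

  design_for_respondent_7 = make_design(seed=7)   # a differently permuted deal per respondent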

Step 8: Convert the Rating Scale in Two Ways

The first transformation is ‘Top3,’ defined as a transformed value of 0 when the original rating was 1-6 and 100 when the original rating was 7-9. The first transformation focuses on what ‘drives’ a person to select ‘donate.’ The second transformation is ‘Bot3,’ defined as a transformed value of 0 when the original rating was 4-9 and 100 when the original rating was 1-3. The second transformation focuses on what ‘drives’ a person to select ‘will not donate.’ To all transformed ratings a small random number (<10^-5) was added to ensure variation in the transformed rating, and thus to ensure that the OLS (ordinary least-squares) regression will ‘work’ and not ‘crash’; OLS regression requires variation in the dependent variable. The added random number is small enough to ensure that variation without materially affecting the results.
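
A minimal Python sketch of the two transformations (the cut-points and the 10^-5 jitter are those described above; the example ratings are invented):

  import random

  def top3(rating):                 # 'will donate': 7-9 -> 100, else 0
      return 100 if rating >= 7 else 0

  def bot3(rating):                 # 'will not donate': 1-3 -> 100, else 0
      return 100 if rating <= 3 else 0

  def jitter(value):                # tiny random addend so OLS never sees a constant column
      return value + random.random() * 1e-5

  ratings = [2, 5, 9, 7, 1]
  top3_scores = [jitter(top3(r)) for r in ratings]   # ~ [0, 0, 100, 100, 0]
  bot3_scores = [jitter(bot3(r)) for r in ratings]   # ~ [100, 0, 0, 0, 100]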

Step 9: Cluster the Individual Respondents Based Upon the Pattern of Their 16 Coefficients for Top3

The clustering is done using k-means clustering, with the measure of distance being (1 - Pearson correlation), viz. [16]. The Pearson correlation coefficient shows the strength of a linear relation between two sets of measures. When the relation is perfectly linear, increases in one measure correspond to precise increases in the other measure; there is no scatter, the Pearson correlation is +1, and the distance is 0 (1-1=0). In contrast, when the relation is perfectly inverse, increases in one measure correspond to precise decreases in the other measure; again there is no scatter, the Pearson correlation is -1, and the distance is 2 (1-(-1)=2). The clustering program generates two and then three groups, called mind-sets because the clusters represent groups who attend to the elements or messages in different ways. We select the cluster solution (the array of mind-sets) which tells the most interpretable story and which comprises the smallest number of segments (mind-sets). For the data in this study, the three-mind-set solution was easier to understand.
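
A compact illustration of k-means with this distance, written from scratch in Python with numpy (the study itself credits the techniques of [16]; the data here are random stand-ins for each respondent's 16 Top3 coefficients):

  import numpy as np

  def corr_distance(a, b):
      # 1 - Pearson correlation: 0 for aligned profiles, 2 for inverted ones
      return 1.0 - np.corrcoef(a, b)[0, 1]

  def kmeans_corr(X, k, iters=100, seed=0):
      rng = np.random.default_rng(seed)
      centers = X[rng.choice(len(X), size=k, replace=False)]
      for _ in range(iters):
          # assign each respondent to the nearest center under 1 - r
          labels = np.array([min(range(k), key=lambda j: corr_distance(x, centers[j]))
                             for x in X])
          # recompute centers; keep the old center if a cluster empties out
          new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centers[j] for j in range(k)])
          if np.allclose(new_centers, centers):
              break
          centers = new_centers
      return labels, centers

  X = np.random.default_rng(1).normal(size=(48, 16))   # 48 respondents x 16 coefficients
  mindsets, centers = kmeans_corr(X, k=3)              # three mind-sets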

Step 10: Create the Model for All Appropriate Data from the Respondents from Each Key Subgroup

Each group (Total, three Mind-Sets) generates three models or equations: Top3 (drivers of positive response), Bot3 (drivers of negative response), and RT (response time or consideration time, a measure of engagement with the material, whether the response to the element was positive or negative).

The model is a simple weighted, linear equation of the form:

Top3 (or Bot3) = k0 + k1(A1) + k2(A2) + … + k16(D4)

Response Time = k1(A1) + k2(A2) + … + k16(D4)

The additive constant, k0, shows the estimated Top3 (or Bot3) response in the absence of elements. It can be thought of as a baseline response: the underlying, fundamental likelihood of the respondent to ‘donate’ (Top3) or ‘not donate’ (Bot3). The additive constant is not meaningful for the response time model, since in the absence of elements there is nothing to which one can respond.
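
The deconstruction itself is ordinary dummy-variable regression; a minimal sketch in Python (the presence matrix and ratings below are random stand-ins for one respondent's 24 vignettes):

  import numpy as np

  rng = np.random.default_rng(2)
  presence = rng.integers(0, 2, size=(24, 16)).astype(float)  # 1 if element in vignette
  top3 = rng.choice([0.0, 100.0], size=24)                    # transformed ratings

  X = np.column_stack([np.ones(24), presence])   # leading column estimates k0
  coefs, *_ = np.linalg.lstsq(X, top3, rcond=None)
  k0, element_coefs = coefs[0], coefs[1:]        # additive constant, then k1..k16
  # for the response-time model, drop the ones column (no additive constant)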

Step 11: Assign a New Person to One of the Mind-sets by Means of a Short Questionnaire, the PVI, Personal Viewpoint Identifier

The PVI assigns a NEW person to one of the mind-sets, and by so doing expands the scope of the small-scale studies to practical use, whether to create a more effective campaign (application), or to understand the distribution and possibly nature of the people in the different mind-sets. This adds to our general knowledge of the minds of people regarding messages relevant to donations (science).

Results

The Two Studies

To illustrate the value of small studies and what can be learned with a sequential approach requiring 2-3 days, we present the results of two studies designed to understand what messages may work for a campaign. The project deals with messaging to drive donations for Children’s Cancer Center, a hospital devoted to pediatric cancer (name of the actual hospital disguised to maintain confidentiality). To make the topic general, the study was conducted among the general population, to uncover the messages which would appeal to the general population, not simply to previous donors to Children’s Cancer Center.

The knowledge development was done in two phases. The first phase, or experiment, can be considered a pilot study with 10 respondents, sufficient to provide deep insights. The key difference between a pilot study of 10 respondents and a larger scale study of 40-50 respondents, or even a much larger scale study of 100-200 respondents, is simply the ability to identify different groups in the population and study the pattern of their responses.

Study 1: Preliminary Learning through a ‘Mini-Study’

As noted above, the Mind Genomics project begins with the topic (donating to Children’s Cancer Center specifically, or a cancer hospital for children in general). The next step requires the researcher to formulate the four questions that ‘tell a story.’ The questions emerging from the initial discussion tell such a story. They may not be the only questions, but in an exploratory study the objective is to learn just ‘what works.’ The four questions are:

A. Question A: What is it like to be a pediatric cancer patient?

B. Question B: Why is it important to support Children’s Cancer Center?

C. Question C: What are the outcomes for children when you donate?

D. Question D: How do you give your donation?

The study was executed with 10 respondents chosen from a large group of panel participants recruited by Luc.id, Inc., based in Louisiana, a provider of panel participants for on-line studies. Each respondent evaluated a different set of 24 vignettes, constructed according to Step 6 above. With as few as one respondent the Mind Genomics study generates meaningful data, readable at the level of that single respondent. With 10 respondents, and occasionally even as few as three or four, patterns rapidly emerge, patterns which are relevant for the respondents, but which may or may not be projectible to the population at large.

Table 1 shows the coefficients for the response time, the positive coefficients for the positive responses (I will donate), and the positive coefficients for the negative responses (I will not donate).

Table 1: The four questions, the four answers to each question, and the coefficients from the grand model relating the 16 elements to the binary transformed rating. To help the underlying patterns emerge, only the positive, non-zero coefficients are shown for Top3 and for Bot3. Response times were estimated for all elements, but only response times of 1.1 seconds or longer are shown, those driving ‘engagement’.

The response time gives a sense of the elements which most engage the respondent. Even with a small base size of 10 respondents, the deconstruction of the ratings into the contributions of the elements gives a sense of the elements that effectively engage: those elements talk about the children, and about survival. The elements may not ‘drive expected donations,’ but they do engage, as shown by the long response time (consideration time). Moving on to the ratings, or more specifically the transformed ratings, Table 1 suggests that the additive constant, the proclivity to donate or not to donate, ranges between 40 and 50. In the absence of elements which provide specific information, there is no dramatic drive to donate or not to donate, at least for these randomly chosen respondents. What is more important, however, is that among the 16 elements, or messages, only two reach significance (coefficients of 8 or higher):

You can donate online with the click of your mouse!

Children’s Cancer Center freely shares their research and treatment protocols with hospitals around the world.

The two elements have little in common, suggesting that there is probably no single strong message. It is the nature of researchers to continue looking. These data suggest no ‘magic bullet.’ They do suggest that if there are any ‘magic bullets,’ they may be found in different mind-sets in the population, if such mind-sets can be identified. It is at this point that one can begin to formulate hypotheses about the psychology of donations. The hypothesis emerging here is that ‘painting a graphic word picture’ of the child will engage attention. The science of Mind Genomics has now enriched our thinking about the psychology of donations and generosity, suggesting that a graphic depiction of the recipient is something to consider. The data suggest a further opportunity to understand the nature of the portrait being painted.

Study 2: Identifying the Underlying Structure of What Works for Donating to the Hospital

Study 1 constituted the first foray into the topic, executed with 10 respondents. Although 10 respondents are not often considered sufficient to establish results, a base of 10 respondents, the size of one or two focus groups, is acceptable for an exploratory step. Thus, we considered Study 1 to be exploratory, providing information in a disciplined way, but with simply too few respondents.

Here again are the questions from Study 1

Question A: What is it like to be a pediatric cancer patient?

Question B: Why is it important to support Children’s Cancer Center?

Question C: What are the outcomes for children when you donate?

Question D: How do you give your donation?

Based upon the patterns of responses, here are the four revised questions. The key change is Question D, and of course the text of the elements themselves.

Question A: What is it like to be a pediatric cancer patient (why would you want to help)?

Question B: Why is it important to support Children’s Cancer Center?

Question C: What are the outcomes for children when you donate?

Question D: What would inspire you to give?

The second study was conducted with 50 respondents, of whom the data from 48 were retained. The remaining two respondents did not provide age or gender, and so their data were eliminated.

Table 2 shows the same type of data as Table 1, this time for the new set of respondents and the new set of elements. Once again, each respondent evaluated a unique set of 24 vignettes, constructed by experimental design, so that the data can be analyzed down to the level of the individual respondent. This strategy, a so-called ‘within-subjects design,’ ensures that the data can be further deconstructed into subgroups called ‘mind-sets,’ based upon the patterns of the coefficients for each respondent. At first glance, the data reaffirm the finding from Study #1 that no elements strongly drive expected donations when the topic is associated with Children’s Cancer Center. One might consider this a failure when the objective is to discover the so-called ‘magic bullet,’ the message that will work for everyone. The result might be the continued search for this ‘magic bullet’ in successive efforts, only to realize in the end that there is no ‘magic bullet,’ or that if there is, no one has any idea what specifically it is and how to express it. At the practical level, the effort will be seen to have been wasted. There will be no science of communication about charitable contributions and how people feel. The unsatisfactory conclusion, soon to be discarded, is ‘no business results, no contributions to the science of people, no additional knowledge for the science of charity communications.’

Table 2: Results from the total panel from Study #2.

The picture changes, and significant learning for practical application and for foundational knowledge emerges, when one dives more deeply into Mind Genomics and discovers mind-sets. In Mind Genomics, the continuing data suggest the existence of what could be called mind-sets: different patterns of ideas which are interpretable in the form of a ‘story,’ patterns that seem to attach themselves to people. In the world of Mind Genomics, which considers people simply as ‘protoplasm which responds,’ there emerge groups of ideas which separately drive strong responses. People are the carriers of these ideas; people allow these groups of ideas to emerge. A person typically falls into a specific mind-set for a topic and not into the other mind-sets for the same topic. When we look at the data through the lens of mind-sets, using the computational process outlined in Steps 9 and 10 above and doing the computation on the Top3 (positive responses), we emerge with three new-to-the-world mind-sets, shown in Table 3 for Top3 (positive response – likely to donate), and in Table 4 for Bot3 (negative response – not likely to donate).

Table 3: Drivers of positive responses to messages, showing total panel, gender, and the three emergent mind-sets based on clustering using Top3.

To strengthen the scientific aspect of the results—the learning which is meant to be foundational rather than simply a direction for messaging to raise money—we include gender, as well, comparing the responses of males and females. Table 3 suggests three mind-sets. It is in the mind-sets that the strong elements emerge, elements with coefficients of +8 or higher.

a. There are some positive elements by gender, but no strong performers at all. Both male and female respondents are modestly interested in donating (additive constant=49)

b. Mind-Set 1 – Describe the professional services (modestly interested in donating at a basic level, additive constant=40)

c. Mind-Set 2 – Describe the person helped (modestly interested in donating at a basic level, additive constant=39)

d. Mind-Set 3 – Describe the institution’s performance (strongly interested in donating at a basic level, additive constant=69).

Table 4 shows the messages that should be avoided, the messages driving the response of ‘Not Donate.’ The coefficients emerge from considering only ratings of 1 and 2 as relevant on the 5-point scale, with ratings of 3-5 (neutral, will donate) treated as not relevant and coded as 0. The additive constant is far lower for Not Donate than it is for Donate, meaning that people are more inclined to say that they will donate. Several messages that might be used by the Center are likely to ‘backfire,’ driving donors away. Mind-Set 2 is especially sensitive to the wrong messages.

Table 4: Drivers of negative responses to messages, showing total panel, gender, and the three emergent mind-sets based on clustering using Top3.

Finding these Respondents in the Population

A continuing topic in Mind Genomics is the value of a ‘next step’ beyond the already-important discovery of mind-sets. Mind-sets themselves provide a way of understanding daily life, through a new focus on the everyday and the way people differ in their typical behaviors. Yet, beyond the scientific contribution of knowledge lies the remarkable potential of expanding the value of the learning, moving beyond the respondents tested in the study to the entire world. An analogy is the colorimeter used to quantify the colors of objects. The science of color can be developed in any location with any material. The real value of the science for the ‘world outside’ is to measure the colors of new objects, not by repeating the study in which the colors were discovered, but rather by measuring the colors of the new objects using a machine in which the science has already been programmed [17-23]. The approach used in Mind Genomics is called the PVI, the personal viewpoint identifier. The objective is to use the data from Table 3 (MS1, MS2, MS3) to create a short questionnaire (six questions) on a simple-to-use scale. The questions come from the actual study. The pattern of responses assigns a NEW person to one of the three mind-sets; there are 64 possible patterns, each pattern mapping to one of the three mind-sets. Figure 1 shows the PVI in two parts. The left part is a short introduction, to introduce the person to the task and to obtain optional background information. The right part is the actual PVI, including some basic questions about attitudes towards ‘giving’ and the six-question PVI. The results are forwarded to a database and can be sent to the respondents as well.
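
As an illustration of the mechanics only (the actual PVI questions, signatures and assignment rule are not published here, so everything below is hypothetical), one can imagine each of the six questions answered on a 2-point scale, giving 2^6 = 64 patterns, each mapped to the nearest of three mind-set ‘signature’ patterns:

  import numpy as np
  from itertools import product

  # hypothetical signature answers (0/1 per question) for the three mind-sets
  SIGNATURES = {
      "MS1 professional services":     np.array([1, 1, 0, 0, 1, 0]),
      "MS2 person helped":             np.array([0, 1, 1, 1, 0, 0]),
      "MS3 institution's performance": np.array([1, 0, 0, 1, 1, 1]),
  }

  def assign(pattern):
      # nearest signature by Hamming distance; ties go to the first listed mind-set
      pattern = np.asarray(pattern)
      return min(SIGNATURES, key=lambda name: int(np.sum(SIGNATURES[name] != pattern)))

  lookup = {p: assign(p) for p in product((0, 1), repeat=6)}   # all 64 patterns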

Figure 1: The PVI, personal viewpoint identifier, based upon the second study. The PVI is located at: https://www.pvi360.com/TypingToolPage.aspx?projectid=1261&userid=2018.

Discussion and Conclusion

The empirical results are simple to discover, just by looking at the table of elements and how the elements or messages drive interest in donating. The message used in requesting a donation should be straightforward and focus on how the organization saves and changes lives for the better. The outcome of the organization’s work, and not the process, should be the main message. One should avoid focusing directly on needs or tax breaks. Although a minority of prospective donors will care about needs or tax advantages, most people say that they will contribute when they are suitably convinced by the cause and mission of the effort and by a vision detailing how their contribution can help. It is not important to point out major ‘learnings’ from the results, learnings which confirm or disconfirm what is known in the literature, or what is hypothesized to be the case for the psychology of donors or the psychology of children with cancer. That information is, of course, important for science. What is more important, however, is the ability to have at one’s disposal a tool for small-scale, iterative experimentation: Mind Genomics, a tool which returns rich information even with remarkably small base sizes, such as n=10 or even fewer. In the world of science, Mind Genomics becomes a tool bridging the gap between the idiographic (the individual) and the nomothetic (the general). Just as important, Mind Genomics becomes both a practical tool to increase donations and a tool for the development of systematized knowledge, both for the current generation and for those to come: a ‘wiki of the mind.’ Finally, viewed from the grand proscenium arch of civilization, Mind Genomics provides a record of how people of a certain time, in a specific environment, and faced with known needs, think about topics, a record of inestimable value to philosophy, psychology, history, sociology, anthropology, and economics, to name just a few disciplines where knowledge of the granular is important.

References

  1. Cao X, Jia L (2017) The effects of the facial expression of beneficiaries in charity appeals and psychological involvement on donation intentions: evidence from an online experiment. Nonprofit Management and Leadership 27: 457-473.
  2. Das N, Guha A, Biswas A, Krishnan B (2016) How product–cause fit and donation quantifier interact in cause-related marketing (CRM) settings: Evidence of the cue congruency effect. Marketing Letters 27: 295-308.
  3. Mejova Y, Garimella VRK, Weber I, Dougal MC (2014) Giving is caring: understanding donation behavior through email. In: Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, pg: 1297-1307.
  4. The National Cancer Institute (2018) Childhood Cancer Survivor Study: An Overview. https://www.cancer.gov/types/childhood-cancers/ccss
  5. The National Cancer Institute (2020) Childhood Cancer. https://www.cancer.gov/types/childhood-cancers
  6. Alex's Lemonade (2020) Global Childhood Cancer Facts: By the Numbers. https://bit.ly/3a49Cpl
  7. Connor K (2015) Why childhood cancer research gets shortchanged. Washington Examiner. https://www.washingtonexaminer.com/why-childhood-cancer-research-gets-shortchanged
  8. St. Jude Children's Research Hospital (2020) Childhood Cancer Facts. https://www.stjude.org/treatment/pediatric-oncology/childhood-cancer-facts.html
  9. Gabay G, Moskowitz HR (2012) The algebra of health concerns: implications of consumer perception of health loss, illness and the breakdown of the health system on anxiety. International Journal of Consumer Studies 36: 635-646.
  10. Galanter E, Moskowitz H, Silcher M (2011) The Price of Grace: Donations, Charities, and the Mind. In: People, Preferences and Prices: Sequencing the Economic Genome of the Consumer Mind, pg: 103.
  11. Moskowitz HR, Gofman A (2007) Selling blue elephants: How to make great products that people want before they even know they want them. Pearson Education.
  12. Williams LL, Quave K (2019) Quantitative Anthropology, A workbook. Elsevier.
  13. Gabay G, Moskowitz H, Gere A (2019) Understanding the donating mind & optimizing messaging – public hospitals. In: 12th Annual Conference of the EuroMed Academy of Business.
  14. Gofman A, Moskowitz H (2010a) Isomorphic permuted experimental designs and their application in conjoint analysis. Journal of Sensory Studies 25: 127-145.
  15. Moskowitz H, Gofman A, iNovation Inc (2003) System and method for content optimization. U.S. Patent 6,662,215.
  16. Ramasubbareddy S, Srinivas TAS, Govinda K, Manivannan SS (2020) Comparative Study of Clustering Techniques in Market Segmentation. In Innovations in Computer Science and Engineering, pg: 117-125, Springer, Singapore.
  17. Gofman A, Moskowitz HR (2010b) Improving customers targeting with short intervention testing. International Journal of Innovation Management 14: 435-448.
  18. World Health Organization (2020) Global Initiative for Childhood Cancer. https://www.who.int/cancer/childhood-cancer/en/
  19. Health Affairs (2020) https://www.healthaffairs.org/doi/full/10.1377/hlthaff.25.2.541
  20. Importance of CEO involvement in creating a culture of philanthropy in hospitals.
  21. Moskowitz HR, Gofman A, iNovation Inc (2011) System and method for performing conjoint analysis. U.S. Patent 7,941,335.
  22. Notaro S (2020) https://bit.ly/3b7wBQi.
  23. Use of social media by US children's hospitals (2020) ScienceDirect. https://www.sciencedirect.com/science/article/abs/pii/S2213076415300051

Differences between 5-Minute and 15-Minute Measurement Time Intervals of the CGM Sensor Glucose Device Using GH-Method: Math-Physical Medicine (No. 281)

Introduction

This paper presents research results from comparing the glucose data of a Continuous Glucose Monitor (CGM) sensor device collected at 5-minute (5-min) and 15-minute (15-min) intervals during a period of 125 days, from 2/19/2020 to 6/23/2020, using the GH-Method: math-physical medicine approach. The purposes of this study are to compare the measurement differences and to uncover any useful information attributable to the different time intervals of glucose collection.

Methods

Since 1/1/2012, the author has measured his glucose values using the finger-piercing method: once for fasting plasma glucose (FPG) and three times for postprandial plasma glucose (PPG) each day. On 5/5/2018, he applied a CGM sensor device (brand name: Libre) on his upper arm and checked his glucose measurements every 15 minutes, a total of ~80 times each day. After the first bite of each meal, he measured his PPG level every 15 minutes for a total of 3 hours (180 minutes). He maintained the same measurement pattern during all of his waking hours; however, during his sleeping hours (00:00-07:00), he measured his FPG at one-hour intervals.

With his academic background in mathematics, physics, computer science, and engineering, along with his working experience in the semiconductor high-tech industry, he was intrigued by the existence of a "high-frequency glucose component," defined as glucose fluctuations of lower amplitude that occur more frequently (i.e., at higher frequency). In addition, he was interested in identifying the energies associated with these higher-frequency glucose components and the degree to which they contribute to the damage of human organs, i.e., to the various diabetes complications. For example, there are 13 data points in a 15-minute PPG waveform, while there are 37 data points in a 5-minute PPG waveform. These 24 additional data points provide more information about the higher-frequency PPG components.
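The data-point counts follow directly from the 3-hour window arithmetic: one reading at the first bite (t=0) plus one reading per interval. A minimal sketch, with the interval lengths as the only inputs:

```python
# Count the readings in a 180-minute PPG window, including t = 0.
def ppg_points(window_min=180, interval_min=15):
    return window_min // interval_min + 1

print(ppg_points(interval_min=15))                               # 13
print(ppg_points(interval_min=5))                                # 37
print(ppg_points(interval_min=5) - ppg_points(interval_min=15))  # 24 extra
```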

Starting on 2/19/2020, he utilized a hardware device based on Bluetooth technology, embedded with customized application software, to automatically transmit all of his CGM-collected glucose data from the Libre sensor directly into his customized research program, the eclaireMD system, with a shorter time period for each data transfer. On the same day, he decided to transmit his glucose data at 5-minute intervals continuously throughout the day; he is therefore able to collect ~240 glucose readings within 24 hours.

He chose the past four months, from 2/19/2020 to 6/23/2020, as his investigation period for analyzing the glucose situation. The comparison study included the average glucose, high glucose, low glucose, waveforms (i.e., curves), correlation coefficients (similarity of curve patterns), and ADA-defined TAR/TIR/TBR analyses. This is his second research report on the 5-minute glucose data; his first paper focused on the most rudimentary comparisons [1].
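As a concrete illustration of two of these metrics, the sketch below computes the correlation coefficient between two glucose curves and the TAR/TIR/TBR percentages, assuming the commonly used ADA consensus band of 70-180 mg/dL. The glucose arrays are placeholders, not the author's data.

```python
import numpy as np

def tar_tir_tbr(glucose, low=70.0, high=180.0):
    """Percent of readings above, in, and below the 70-180 mg/dL range."""
    g = np.asarray(glucose, dtype=float)
    tar = np.mean(g > high) * 100
    tbr = np.mean(g < low) * 100
    return tar, 100 - tar - tbr, tbr

g15 = np.array([102, 110, 126, 135, 120, 98])   # placeholder 15-min curve
g05 = np.array([105, 108, 125, 134, 119, 99])   # placeholder 5-min curve

r = np.corrcoef(g15, g05)[0, 1]                 # similarity of curve patterns
print(f"correlation: {r:.0%}")
print("TAR/TIR/TBR:", tar_tir_tbr(g05))
```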

References 2 through 4 describe examples of research using his GH-Method: math-physical medicine approach [2-4].

Results

The top diagram of Figure 1 shows that, for the 125 days from 2/19/2020 to 6/23/2020, he has an average of 259 glucose measurements per day using 5-minute intervals and an average of 85 measurements per day using 15-minute intervals. Owing to the signal stability of the Bluetooth connection, the 5-min collection actually captured 259 readings per day instead of the expected ~240.


Figure 1: Daily glucose and the 30-day & 90-day moving-average glucose for both 15-minute and 5-minute intervals.

The middle diagram of Figure 1 illustrates the 30-day moving average of the same dataset as the "daily" glucose curve. After ignoring the curves during the first 30 days, we can focus on the remaining three months and detect the trend of glucose movement more easily than in the "daily" glucose chart. Two facts can be observed from this middle diagram. First, the gap between 5-min and 15-min is wider in the second month, while the gap becomes smaller during the third and fourth months; this means that the 5-min results are converging with the 15-min results. Secondly, both the 5-min and 15-min curves are much higher than the finger glucose (blue line), indicating that the Libre sensor provides a higher glucose reading than finger glucose. From the data listed below (a sketch of the moving-average computation follows the listed data), the CGM sensor daily average glucoses are about 8% to 10% higher than the finger glucose.

5-min sensor: 118 mg/dL (108%)

15-min sensor: 120 mg/dL (110%)

Finger glucose: 109 mg/dL (100%).
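A minimal sketch of the moving-average computation, assuming a pandas Series of daily average glucose indexed by date; the synthetic values below are illustrative only:

```python
import numpy as np
import pandas as pd

dates = pd.date_range("2020-02-19", "2020-06-23", freq="D")  # ~125 days
rng = np.random.default_rng(0)
daily = pd.Series(118 + rng.normal(0, 8, len(dates)), index=dates)

ma30 = daily.rolling(window=30).mean()  # defined from day 30 onward
ma90 = daily.rolling(window=90).mean()  # needs roughly three months of data
print(ma30.dropna().head(3))
```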

The bottom diagram of Figure 1 is the 90-day moving average glucose. Unfortunately, his present dataset covers only 4 months due to the late start of collecting 5-min data; however, the data trend of the last month, from 5/19/2020 to 6/23/2020, can still provide a meaningful indication. As additional data continue to be collected, the 90-day moving trend of his 5-min glucose will become clearer.

Figure 2 shows the synthesized views of his daily glucose, PPG, and FPG. Here, "synthesized" is defined as the average of the data over the 125 days; for example, the PPG curve is calculated from his 125×3=375 meals (a sketch of this averaging step follows the values listed below). Listed below is a summary of his primary glucose data (mg/dL) in the format "average glucose/extreme glucose." Extreme means either maximum or minimum: the maximum is reported for daily glucose and PPG because of his concern about hyperglycemia, and the minimum for FPG because of his concern about insulin shock. The percentage in parentheses is the correlation coefficient between the 15-min and 5-min curves.

Daily (24 hours): 15-min vs. 5-min

117/143 vs. 119/144 (99%)

PPG (3 hours): 15-min vs. 5-min

126/135 vs. 125/134 (98%)

FPG (7 hours): 15-min vs. 5-min

102/95 vs. 105/99 (89%).
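A minimal sketch of the "synthesizing" step, averaging the reading at each time-of-day slot across all days of the period; the datetime-indexed series is synthetic and illustrative only:

```python
import numpy as np
import pandas as pd

def synthesize(readings: pd.Series) -> pd.Series:
    """Average a datetime-indexed glucose series by time of day."""
    return readings.groupby(readings.index.time).mean()

idx = pd.date_range("2020-02-19", periods=125 * 96, freq="15min")
rng = np.random.default_rng(1)
sensor15 = pd.Series(117 + rng.normal(0, 10, len(idx)), index=idx)

curve15 = synthesize(sensor15)   # one averaged value per 15-minute slot
print(curve15.head())
```

The synthesized 15-min and 5-min curves can then be compared point by point with a correlation coefficient, as reported above.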

These primary glucose values are close to each other between 15-min and 5-min in all glucose categories. It is evident that the author's diabetes conditions were well under control during these 4 months. However, looking at Figure 2 and the three correlation coefficients, daily glucose and PPG show higher similarity of curve patterns (high correlation coefficients of 99% and 98%) between 15-min and 5-min, while the FPG curves show a higher degree of mismatch (a lower correlation coefficient of 89%). This signifies that his FPG values during sleeping hours differ more between 15-min and 5-min.


Figure 2: Synthesized daily glucose, PPG, and FPG for both 15-minute and 5-minute intervals.

Figure 3 shows the results using the candlestick model [4,5]. The top diagram is the 15-min candlestick chart and the bottom diagram is the 5-min candlestick chart. A candlestick chart, also known as a K-line chart, includes five primary glucose values during a particular time period; "day" is the period used in this study. These five primary glucose values are:

Start: beginning of the day.

Close: end of the day.

Minimum: lowest glucose.

Maximum: highest glucose.

Average: average for the day.

Listed below are the five primary glucose values (start/close/minimum/maximum/average) for both 15-min and 5-min; a sketch of computing these five values from an intraday series follows the listing.

15-min: 108/116/86/170/120.

5-min: 111/116/84/173/118.
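A minimal sketch of extracting the five candlestick values per day from an intraday glucose series; the data below are synthetic placeholders:

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2020-02-19", periods=4 * 288, freq="5min")  # 4 days
rng = np.random.default_rng(2)
glucose = pd.Series(118 + rng.normal(0, 12, len(idx)), index=idx)

daily = glucose.resample("D").agg(["first", "last", "min", "max", "mean"])
daily.columns = ["start", "close", "minimum", "maximum", "average"]
print(daily.round(0))
```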


Figure 3: Candlestick charts for both 15-minute and 5-minute intervals.

Setting aside the first two values, start and close, let us focus on the last three: minimum, maximum, and average. The 5-min method has a lower minimum and a higher maximum than the 15-min method. This is because the 5-min method captures more glucose data, making it easier to catch the lowest and highest glucoses of the day. The difference of 2 mg/dL between the 15-min average of 120 mg/dL and the 5-min average of 118 mg/dL is only a negligible 1.7%.

Again, it is also obvious from these candlestick charts that the author's diabetes conditions were well under control during these 4 months.

Conclusion

In summary, the glucose differences between 5-min and 15-min intervals, based on simple arithmetic and statistical calculations, are not significant enough to draw any conclusion or to suggest which measurement time interval is more suitable. However, the author will continue this investigation of the energy associated with higher-frequency glucose components in order to determine that energy's impact or damage on human organs (i.e., diabetes complications).

The author has read many medical papers about diabetes. The majority of them relate to the effects of medication on controlling glucose symptoms, not so much to investigating and understanding "glucose" itself. This situation is similar to taming and training a horse without a good understanding of the animal's temperament and behaviors; medication is like giving the horse a tranquilizer to calm it down. Without a deep understanding of glucose behaviors, how can we truly address the root cause of diabetes by only managing the symptoms of hyperglycemia?

References

  1. Hsu GC (2020) Analyzing CGM sensor glucoses at 5-minute intervals using GH-Method: math-physical medicine (No. 278). eclaireMD Foundation, USA.
  2. Hsu GC (2020) Predicting finger PPG by using sensor PPG waveform and data via regression analysis with three different methods using GH-Method: math-physical medicine (No. 249). eclaireMD Foundation, USA.
  3. Hsu GC (2019) Applying segmentation pattern analysis to investigate postprandial plasma glucose characteristics and behaviors of the carbs/sugar intake amounts in different eating places using GH-Method: math-physical medicine (No. 150). eclaireMD Foundation, USA.
  4. Hsu GC (2019) A case study of the impact on glucose, particularly postprandial plasma glucose, based on the 14-day sensor device reliability using GH-Method: math-physical medicine (No. 124). eclaireMD Foundation, USA.
  5. Hsu GC Comparison study of PPG characteristics from candlestick model using GH-Method: math-physical medicine (No. 261). eclaireMD Foundation, USA.

Incivility from Undergraduate Nursing Students in the United Kingdom

DOI: 10.31038/IJNM.2020111

Abstract

There is growing evidence that undergraduate nursing students are demonstrating inappropriate and uncivil behaviour towards academics, which is also reported as harassment and contra-power harassment. Harassment is unwanted behaviour which an individual finds offensive or which makes them feel intimidated or humiliated; unwanted behaviours include verbal or written abuse such as offensive emails, comments on social media network sites, stalking, and sexually motivated behaviours. Contra-power harassment is defined as the harassment of individuals in formal positions of power and authority by those who are not. One of the most cited reasons for inappropriate behaviour by undergraduate students relates to the grading of course work and course progression, but literature on the extent to which this is occurring towards nurse academics is nominal.

Aim: The aim of this study is to understand the extent to which nursing academics experience inappropriate, uncivil or harassing behaviour from students.

Method: Nursing academics in universities in the United Kingdom which provided undergraduate nursing programmes were invited to complete an online questionnaire; an introductory letter and participant information sheet were provided. A 41-item Likert scale (strongly agree to strongly disagree) was used to elicit academics' experiences of contra-power harassment and their views regarding possible contributing factors.

Results: The responses from UK academics indicated that students were disrespectful and demanding in their written communications, that they challenged academic integrity, and that they expected to be coached more to gain a higher degree classification. This mirrored the Australian responses [1], which indicated that inappropriate behaviour was related to the consumerism of higher education and a sense of entitlement from students because they paid for a degree, and that academics experienced the highest levels of student harassment in relation to assessment grades.

Conclusions: Incivility and poor, demanding behaviour are becoming more commonplace in higher education, causing academics to question their own interactions, their understanding of student psychology and culture, and the need to develop coping strategies. Appreciation of the risk factors for poor behaviour can aid academics in ensuring not only that there is an appropriate harassment prevention policy but also that appropriate prevention strategies are implemented.

Highlights

  • Students harass academics to try to gain a higher grade in their academic work.
  • Students demonstrate poor language skills in electronic communications.
  • Undergraduate students’ uncivil behaviour is affecting academics.

Keywords

Harassment, Incivility, Contra-power, Student nurses

Introduction

Research shows that violence in society is increasing and can cause suffering and ruin lives. Socially aggressive behaviour can occur across the life span and involves individuals being irritable, impulsive, angry and violent; individuals may become more aggressive owing to developmental transitions or a range of medical and/or psychiatric diagnoses [2]. Whilst not everyone may be subjected to violent behaviour, literature suggesting that unsociable behaviour is increasing, and that it is shaped by society, led the City University of London to establish The Violence and Society Centre (2019) [3], whose aim is to 'produce the evidence to build the theory needed to inform policy, politics and practice to move towards zero violence'. One can therefore suggest that students entering higher education may have been on the receiving end of socially aggressive behaviour and as such have the potential to demonstrate violent behaviour at university. They may also demonstrate unacceptable behaviour because they are vulnerable to the newness of the university environment and/or have a new, stressful living environment, and because the social pressures of belonging and the need to achieve are great [4]. Research exploring the behaviour of nursing students shows that they are behaving in an uncivil, aggressive and harassing manner toward academics. Lee [5] and Christensen [1] suggest that poor behaviour is a result of the commercialisation of higher education, with it being seen as an economic investment, pay-as-you-go access to a university education. Kopp and Finney [6] discuss how perceptions of academic entitlement have been theoretically linked with uncivil student behaviour. However, entitlement and the reality of higher education are too often incompatible, as the effort needed to obtain a degree and the demands of the course are too high for some students; non-achievement is then a threat to their investment, which manifests itself in poor, uncivil behaviour [1,5,7].

Background

Inappropriate, uncivil, contra-power harassing behaviour towards academics by students is becoming more commonplace, and research has focused on its different types and potential causes. Several studies [7-9] identified that contra-power harassment was characterised by verbal, task, personal and isolationist attack. Verbal abuse is reported as being the most common form of incivility and consists of shouting, swearing, inappropriate or verbally aggressive language, name calling or heckling. A study of nursing student incivility in the USA identified that the three major disruptive behaviours were inattentiveness in class, attendance problems and lateness [10]; over 40% (n: 409) of respondents identified that they had been subjected to verbal abuse, and over 23% had been subjected to offensive physical contact/violence, which included hitting or slapping. White's [7] UK research also found that malicious rumour mongering was rife; this is identified as social and emotional abuse. Blizard [8] discusses isolationist attack, which can involve students using mobile phones or talking during lessons, or individual students using the collective voice to air their displeasure and harass. DeSouza and Fansler's [9] work on contra-power sexual harassment found that personal attack manifested in comments of a sexual nature written in unit/module evaluations, stalking and, in some cases, sexual harassment. They found that over 50% of academics had experienced some form of sexual harassment or unwanted sexual attention from students. White's [10] study described how female academics were the targets of sexual innuendo or were seen as sexual objects by male students, and how male academics were offered sexually explicit picture texts as bribery for favourable assessment results. Lashley and DeMeneses [10-13] identified that incivility in nursing students was demonstrated by lateness, inattention, absenteeism, academic dishonesty, verbal abuse and aggressive behaviours (including use of mobile technology). Task harassment was also identified; this included contacting academics outside normal working hours, allegations of biased marking, fabricating evidence against an academic and character assassination on social media. Other literature focusing on non-nursing students found similarities [13-18]. Despite workplace bullying and harassment being unlawful in the UK (UK Equality Act 2010), it still occurs. This is mirrored in the USA by the Workplace Bullying Institute [19], which estimates that one in three employees has been bullied. Lampman [9] found that women in academia reported significantly more negative outcomes as a result of harassment than men, as they were more likely to receive threats and episodes of intimidation or bullying from students. It should be noted that there is a prevalence of females in nurse academia because nursing in the UK is predominantly a female profession (in 2016 only 11.4% of nurses were male). Nurses are regulated by the Nursing and Midwifery Council (NMC) [20] and must abide by the NMC Code of professional conduct (NMC 2018). This states that nurses must 'treat people with kindness, respect and compassion', 'recognise diversity' and 'respect and uphold people's human rights'; as such, nurses need exceptional communication and interpersonal skills and an empathetic disposition. However, national and international scrutiny of healthcare has suggested that nurses, especially nursing students, do not hold the disposition necessary to be a nurse [21].
Phillips [22] and Rosser's [23] longitudinal studies, however, showed that student nurses did hold caring values, and Scammell [24] identified that higher education recruitment strategies in the UK upheld values-based selection and admission criteria. Yet Watson [25] suggests that service users, and their families and carers, are dissatisfied with healthcare and that worldwide political influences impacting on healthcare provision [26-29] are causing discontent. The literature suggests that incivility towards academics is becoming a commonly occurring phenomenon and is causing academics concern. Kopp and Finney [6], Lampman [9] and Christensen [1] suggest that part of the reason for growing incivility is a growing sense of entitlement and a shifting of cultural norms in the present generation of students accessing higher education. Alarmingly, Christensen [1] found that students were neither concerned about nor cared about the consequences of harassing academics. In her UK work, Lee [5] highlighted how there is a power imbalance in favour of the university student. Indeed, Keashly and Neuman [30] noted that many academics caught in the 'cycle of abuse' had very little recourse and feared repercussions or not being believed if they spoke out, leaving them powerless. Academics being bullied by students is also being reported in the national press, along with how the abuse is making staff extremely anxious [31].

Aim

The aim of this study is to better understand the extent to which nursing academics experience contra-power harassment from undergraduate nursing students.

Method, Setting and Sample

A convenience sample of 19 universities across the UK was invited to take part. Heads of nursing departments/deans of faculties were asked to disseminate an online survey. A participant information sheet outlining the aim of the study, the study protocol, ethical approval and what participation entailed, together with a link to the study, was emailed to the heads/deans. Anonymity of the universities and respondents was emphasised (Table 1).

Table 1: Participant Demographics (n=17).

Age: 36-40 (2); 41-45 (1); 46-50 (5); 51-55 (4); 56-60 (4); >60 (1)
Gender: Female (12); Male (5)
University Faculty: Health (6); Science & Engineering (2); Business (4); Arts & Humanities (1); Other (3)
Academic Level: Associate Lecturer (1); Lecturer (2); Senior Lecturer (10); Principal Lecturer (3); Associate Professor (1)
Years' Experience: 2-5 (1); 6-10 (1); 11-15 (8); 16-20 (4); 21-25 (1); 26-30 (2)
Current Work Status: Full-Time (15); Part-Time (2)
Teaching Space: Undergraduate (13); Post Graduate (4)

[NB: 1 participant did not follow through with completing the survey].

Ethics

Ethics approval was sought and granted by the ethics committees of the authors' universities (Western Sydney University & Bournemouth University).

Data Collection

Data were collected from November 2018 to May 2019. The Likert scale statements were developed from the literature. For validity, a draft survey was sent to five experienced, research-active nursing academics, after which refinements were made until consensus was reached. The survey had three sections: 1) demographics, 2) experiences of contra-power harassment and 3) possible contributing factors. Demographic data asked for age, gender, years of academic experience, majority of teaching practice (undergraduate or postgraduate), and academic level. A total of 41 Likert scale statements were included in sections two and three. Section two used a five-point 'never–always' scale and contained contra-power harassment statements (Table 2). Section three used a five-point 'strongly disagree–strongly agree' scale with pre-worded statements focusing on perceptions of contributing factors (Table 3).

Table 2: Academics Experiences of Contra-Power Harassment (n=16).

Scoring: Never (1) – Always (5) | Sometimes N (%) | Often N (%) | Always N (%) | Median (Mean) | Std. Dev
Q1 I feel that when a student complains, their word is believed, whereas I have to justify my actions 3 (18) 5 (31) 3 (18) 3 (3.31) 1.25
Q2 I receive criticism about my student feedback, that is not constructive 4 (25) 4 (25) 2 (2.56) 1.09
Q3 I feel my role is less about educating students, and more about me being a provider of marks/grades 6 (37) 5 (31) 2 (12) 3 (3.25) 1.18
Q4 I have had experiences of students being aggressive and disrespectful to me in their response to their marks and grades 10 (62) 2 (12) 3 (2.81) .75
Q5 Students do not take responsibility for their learning, and then insist it’s my fault for not teaching them well enough 8 (50) 6 (37) 3 (3.19) .83
Q6 I feel like retaliating against a student who has been unfairly critical of me, on a personal level 7 (43) 1 (6) 2 (2.25) 1.00
Q7 I find students challenge my authority, my experience and my expertise 5 (31) 3 (18) 2 (2.56) .96
Q8 I notice that some students’ expectations of their academic ability are too high or unachievable, and this is reflected in how they communicate with me 5 (31) 8 (50) 3 (3.25) .93
Q9 In my experience, as student expectations of their academic ability increase, so do complaints 5 (31) 7 (43) 1 (6) 3 (3.38) .89
Q10 I feel powerless to discipline a student who is harassing me 4 (25) 3 (18) 3 (18) 3 (3.00) 1.41
Q11 I have been ‘stalked’ by students when outside of the university physically and/or electronically 3 (18) 1 (1.56) .81
Q12 I have had students repeatedly contact me when outside of the normal classroom times, by email or phone messages 3 (18) 6 (37) 3 (2.69) 1.25
Q13 I have had students criticise the marks and /or feedback other academics have given them 10 (62) 6 (37) 3 (3.38) .50
Q14 I feel that the student harassment I experience is because students behave unprofessionally with university academics 4 (25) 7 (43) 1 (6) 3 (3.19) 1.17
Q15 I have had students argue about their marks simply because they want a higher grade 4 (25) 7 (43) 3 (3.13) .89
Q16 I have had students complaining about their mark when they have compared their work with other students because they want a higher grade 10 (62) 4 (25) 3 (3.13) .62
Q17 I feel I am being perceived by students not as a knowledgeable expert, but as one who provides a service 6 (37) 4 (25) 1 (6) 3 (2.94) 1.12
Q18 I have been the centre of unfounded student accusations of impropriety of a sexual nature
Q19 I sometimes engage in displaced aggression against other individuals as a result of student harassment 1 (6) 1 (1.38) .619
Q20 I feel angry when students harass me unnecessarily 5 (31) 2 (12) 3 (18) 3 (2.94) 1.39
Q21 I feel scared and fear for my physical safety when a student is verbally aggressive 3 (18) 1 (6) 0 2 (1.88) .96
Q22 I feel helpless and powerless when students personally attack me on social media 1 (6) 1 (6) 3 (18) 1 (2.27) 1.67
Q23 I am irritated when students actively engage with their electronic devices (e.g. mobile phones, tablets, laptops) in the lesson I’m teaching 4 (25) 5 (31) 2 (12) 3 (3.19) 1.17
Q24 I have been accused of being racist because students are not happy with the mark they have been awarded or don’t feel supported as they would expect 1 (6) 1 (6) 1 (1.44) .89
Q25 I am concerned for my professional reputation when I respond to a student who has harassed me 4 (25) 2 (12) 2 (2.31) 1.30

Note: Std Dev – Standard Deviation.

Table 3: Academics attitudes to the contributing factors associated with Contra-Power Harassment.

Scoring: Strongly Disagree (1) – Strongly Agree (5) | Percentage % (n=16) | Median (Mean) | Std. Dev
Q1 There is a lot of pressure on academics to answer emails from students quickly 75 (12) 4 (4.19) .98
Q2 Some students write emails that can be misconstrued as abusive and disrespectful because they have poor written language skills 68 (11) 4 (3.63) .96
Q3 I am distressed when student emails attack me personally and when they are demanding or confrontational 75 (12) 4 (3.75) 1.07
Q4 I believe that consumerism in higher education leads some students to believe that they hold a greater balance of power than the academics 75 (12) 5 (4.31) 1.13
Q5 Sometimes, I am not sure whether it is in my best interests to report student harassment of me to the University 31 (5) 3 (2.88) 1.26
Q6 I feel that students harass academics because students do not have the ability to cope with academic and personal stressors 62 (10) 4 (3.75) 1.00
Q7 Sometimes I feel I have not received support from the University when I report a student’s harassment 24 (3) 3 (2.69) 1.19
Q8 It is usually when assignments or exams are due that I get the most unacceptable behaviour from students 55 (9) 4 (3.38) 1.09
Q9 I believe widening participation has led to increased levels of student harassment of academics 30 (5) 2 (2.81) 1.22
Q10 I believe students hold the view that academics owe them something because they are paying for their degree 81 (13) 5 (4.31) .94
Q11 The commercialisation of higher education has led to some students being self- absorbed and self-centred, and as a result they are quick to blame others rather than accept responsibility 81 (13) 4 (4.19) 1.05
Q12 The diversity of the student cohort has led to me being harassed more frequently 18 (3) 2 (2.31) 1.20
Q13 When students are unclear or unsure of the programme and/or university requirements, they display more aggressive and unacceptable behaviours 68 (11) 4 (3.88) .72
Q14 Students today use aggression to exert power over academics 43 (7) 3 (3.25) 1.07
Q15 I believe that there is often a cultural clash when students behave aggressively or inappropriately towards me 62 (10) 3 (2.75) 1.13
Q16 The way some students communicate with me is belittling 37 (6) 2 (2.81) 1.05

Note: The higher the mean the more negatively nursing academics responded; Percentage indicates those that responded either “Agree” or “Strongly Agree”; Std Dev=Standard Deviation.

Data Analysis

Non-parametric testing using the Mann-Whitney U test was used to analyse the demographic data and the experiences of, and contributing factors associated with, contra-power harassment. Cronbach's alpha was performed to assess the internal consistency of the Likert scale statements, and measures of central tendency were calculated. Inductive content analysis was used to identify patterns in the four open-ended questions, and generic themes were identified.
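A minimal sketch of the Cronbach's alpha calculation on a Likert response matrix (rows = respondents, columns = statements), with a Mann-Whitney U comparison via SciPy; the response matrix is illustrative, not the study data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def cronbach_alpha(items):
    """items: 2-D array of Likert scores, shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

demo = np.array([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]])
print(round(cronbach_alpha(demo), 2))            # 0.9 for this toy matrix

# Non-parametric comparison of scores between two illustrative groups:
u, p = mannwhitneyu([3, 4, 5, 4], [2, 3, 2, 3])
print(u, p)
```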

Findings

There were 16 respondents, more females than males. Respondents were at lecturer and senior lecturer grades with between 6 and 9 years' experience of teaching undergraduate students, predominantly in Southern England.

Responses to the questions focusing on nursing academics' experiences of contra-power harassment clearly showed that respondents had experienced harassment from nursing students. Analysis showed three main themes: entitlement, desire for a higher grade, and societal culture. One of the main forms of harassment related to language skills, in the form of poorly written and/or demanding emails from nursing students, supported by harassing emails from their parents.

'People sometimes forget to say please and thank you before and after a request and this makes the request read like a demand'; 'Students use words/comments such as "unfair" or "I am displeased with my mark". On their own they don't sound particularly abusive but when it is part of a longer email it all starts to build to feel more threatening'.

‘high achieving parents expect much from their children which can result in the children behaving in unacceptable ways due to the pressure and their parents undertake some bullying behaviour’.

‘More and more parents are getting involved and there can be some very bullying tones’

Another form of harassment related to teaching credibility and challenging academic judgement.

'They [students] lash out, insult my credibility and teaching content'; 'I've had students challenge my academic judgement (at the time feedback and marks are released) but my feedback is comprehensive and specific'; 'stating they have not received help or guidance when what is required is covered in lectures, seminars, drop in sessions and 1:1 meetings, but these students are the ones who have not attended'.

‘A group of low performing students pursued a systematic but completely spurious complaint in a very rude and obnoxious manner; one male student was particularly aggressive’.

Responses to the questions asking nursing academics about the contributing factors associated with contra-power harassment clearly showed that most unacceptable behaviour occurred around assessments and the students' desire to have a high-class degree.

‘I WANT A BETTER MARK’ (capitals denoting shouting) or ‘I want a first’.

‘They are paying therefore they expect to get good marks’.

‘Students pay a lot of money and some believe they are buying a degree’.

'We always had 60% as a trigger point, e.g. below 60% students were likely to challenge, but this has now, over the last 5 years or so, moved to being 70%, so now we get challenged if students aren't given 70%+'.

'When academic judgement is overturned it makes it appear that, despite regulations, the student will win'.

‘Grade grabbing has increased and the uni appeals procedure encourages personal attacks’.

The question asking whether widening participation had increased harassment tended to show that academics disagreed, although academics perceived that struggling students expected more help and that school attainment had not prepared students for the independent study needed at university.

'Schools let students resubmit work until they get a good mark. We don't. They are frustrated by the lack of a second chance which they are used to'.

‘Students seek coaching rather than guidance’.

‘Often these students appear to have less social skills to cope with criticism – they take it personally and not about the piece of work submitted’.

Other comments indicating societal expectations included sexism and racism:

'As a female I do feel that sometimes students from the Middle East do not always respect female academics'.

'As a female and international academic, the wider cohort is more condemning and sceptical of my ability compared to a "white British male" teaching the exact same content'.

‘Respect for academics seems to have gone out of the window with students swearing at academics telling them to ‘F’ off. This seems to happen to the much younger academics where the age differences are small’.

Discussion

The results from this study suggest that undergraduate student nurses are being uncivil towards academics and that this takes form in a variety of ways. It is suggested that incivility is due to a societal culture in which students feel entitled to more help and expect a higher grade. Findings from this study, like others, show that students harass academics to give them higher grades. Indeed, UK universities have been exploring potential grade inflation. Statistics show that the increase has been part of a long-term trend: in the early 1990s about 8% of students achieved a first-class degree, whereas in 2018 it was 26%, a rise from 18% in 2012-2013 (Higher Education Statistics Agency 2018) [32], and internal audits now require academics to justify the grades given. Research also highlights that students are trying to increase their grades through what is now called 'contract cheating'. It is suggested that, as a university education is treated as a commodity rather than a development of thinking, learning and reasoning, students are buying essays or being dishonest in their essay writing (i.e., parents are writing the essays), and they do not feel this is wrong [33-36]. The results from this study are not too dissimilar to other research highlighting that student aggression and contra-power harassment are exhibited in a variety of ways. What the research does not show, however, is that UK academics are subjected to constant assessment themselves, and one could suggest that their tolerance of uncivil behaviour is therefore lessened. In recent years, university managers, leaders and academics have been expected to be responsive to diverse student needs and expectations, a decline in funding, and a competitive research environment, together with an increase in fiscal accountability. Houston et al. [37] state that 'meeting challenges to deliver outputs and outcomes is a complex balancing act', as academics are not only required to balance teaching commitments, generate income, and meet research outputs and publishing requirements, but are also constantly subjected to internal and external accountability and a number of national measurements. There are three such measurements. One is the National Student Survey (NSS), which was introduced in 2005 and is managed by the Office for Students (the independent regulator of higher education in England). This survey assesses undergraduate students' opinions of the quality of their degree programmes, and whilst the results have made institutions take student feedback seriously, they have also been used by university managers to discipline staff if scores are low. Another tool is the Research Excellence Framework (REF), introduced in 2008 by the Higher Education Funding Council for England (initially called the Research Assessment Exercise and replaced by the REF in 2014). Its aim is to produce UK-wide indicators of research excellence, providing a quality international benchmark to drive funding, and it assesses the impact of academics' research. The third tool, introduced in 2017, is the Teaching Excellence Framework (TEF), which measures excellence in three areas: teaching quality, learning environment, and the educational and professional outcomes achieved by students. Consequently, academics are being assessed by internal and external measures; these are key metrics and important considerations for academics applying for promotion and career progression. Positive student feedback in the NSS and high scores in the TEF and REF are also important in the mandate for supporting university funding.
At the same time that academics are being assessed via these national frameworks, they are being subjected to excessive demands from students. Student expectations are high, and the consumer identity that students increasingly recognise in themselves is making them demand more from the university [38-40]. Not only is there a growing body of research showing that academics are being harassed by undergraduates, but there is also a growing body of research showing that horizontal violence (an umbrella term used to describe a range of aggressive behaviours between colleagues) is rife between nurses [41-48]. Student nurses in the UK spend half the duration of their programme in practice (2,300 hours over three years), and one could suggest that if they are subjected to horizontal violence, or witness it, they may assume it is 'normal' behaviour. For example, research identifies that nurses tolerate low-level incivilities, such as a condescending tone, gossip or eye-rolling; consequently, student nurses are socialized to accept these behaviours as part of the job, and one could suggest that they may transpose them into the university setting by being uncivil to academics. There is also evidence that academics, despite universities having anti-bullying policies, are being bullied by their employers and that victims pay a high price (such as job loss). The outcome of bullying is often hidden from the public, and The Guardian [49], a renowned British newspaper, reported that in two years UK universities spent nearly £90m on payoffs to staff, with as many as 4,000 settlements, some of which are thought to relate to allegations of bullying; however, it reported that these payoffs came with 'gagging orders'. The British Broadcasting Corporation also undertook an independent survey and identified that 'dozens of academics were made to sign Non-Disclosure Agreements after being "harassed" out of their jobs following the raising of complaints' (BBC 2019) [50]. Reports such as this raise fear, stress and reduced motivation for work, and one could suggest that this, together with the constant inspection of their work, is preventing staff from achieving high levels of performance. Khan et al.'s [51-53] systematic review clearly identified that academics exposed to excessive work demands are subject to burnout, resulting in physical and psychological issues; a consequence of this is that universities are less productive due to poorly performing academics with a lower sense of commitment.

Conclusion

There is no doubt that undergraduate students are demonstrating uncivil behaviour and that this is having an effect on academics; many studies have looked at the potential causes of this behaviour and its effects. This study has added to the body of knowledge because it specifically relates to undergraduate student nurses and their behaviour towards nurse academics. It shows that nurse academics are experiencing harassment driven by students' demands for higher grades, and that when students have not achieved, they appear to have fewer social skills with which to cope with the feedback. What has also been discussed is a controversial issue: that academics are less able to manage student behaviour because they are facing constant assessment themselves from internal and external forces. This study has also suggested that incivility in the nursing profession is acting as a role model for students and that this is manifesting itself in the university.

Study Limitations

This survey was originally sent to academics in the UK, Australia and New Zealand. However, the UK responses were very few in comparison to Australia (n=82) [1], and although the overall findings were not too dissimilar, one questions why responses might have been so few. One might suggest that recent discourse in UK universities has left academics fearful of completing the questionnaire, or that general disharmony with working life is causing anxiety and fatigue, and that high workloads do not allow time for participating in research such as this. They may also be suffering from survey fatigue, as they are expected to complete returns for the REF and TEF and respond to NSS feedback.

References

  1. Christensen M, White S, Dobbs S, Craft J, Palmer C (2019) Contra-power harassment of nursing academics. Nurse Education Today 74: 94-96.
  2. Liu J, Lewis G, Evans L (2013) Understanding Aggressive Behaviour Across the Life Span. Journal of Psychiatric Mental Health Nursing 20: 156-168. [crossref]
  3. City University of London (2019) Violence and Society Centre. https://www.city.ac.uk/about/schools/interdisciplinary-city/violence-and-society
  4. Rith-Najarian LR, Boustani MM, Chorpita BF (2019) A systematic review of prevention programs targeting depression, anxiety, and stress in university students. Journal of Affective Disorders 257: 568-584. [crossref]
  5. Lee D (2006) University Students Behaving Badly. Trentham Books, Stoke on Trent.
  6. Kopp JP, Finney SJ (2013) Linking academic entitlement and student incivility using latent means modelling. Journal of Experimental Education 81: 322-336.
  7. White SJ (2010) Upward harassment: harassment of academics in post-1992 English universities. Unpublished PhD Thesis. University of Wales, Cardiff.
  8. Blizard LM (2014) Faculty members’ Perceived Experiences and Impact of Cyber bullying from Students at a Canadian University: A Mixed Methods Study. Unpublished PhD Thesis. Simon Fraser University.
  9. Lampman C, Crew EC, Lowery SD, Tompkins K (2016) Women faculty distressed: descriptions and consequences of academic contra-power harassment. NASPA Journal about Women in Higher Education 9: 169-189. [crossref]
  10. Lashley FR, DeMeneses M (2001) Student civility in nursing programs: a national survey. Journal of Professional Nursing 17: 81-86. [crossref]
  11. DeSouza E, Fansler A (2003) Contrapower sexual harassment: a survey of students and faculty members. Sex Roles 48: 519-542.
  12. White SJ (2013) Student nurses harassing academics. Nurse Education Today 33: 41-45. [crossref]
  13. Ibrahim SAEA, Qalawa SA (2016) Factors affecting nursing students’ incivility: as perceived by students and faculty staff. Nurse Education Today 36: 118-123.
  14. Ziefle K (2018) Incivility in nursing education: generational differences. Teaching and Learning in Nursing 13: 27-30.
  15. Kolanko K, Clark C, Heinrich K, Olive D, Serembus J, et al. (2006) Academic dishonesty, bullying, incivility ad violence: difficult challenges facing nursing education. Nurse Education Perspectives 19: 34-43. [crossref]
  16. Riech S, Crouch L (2007) Connectiveness and civility in online learning. Nurse Education in Practice 7: 425-432.
  17. Workplace Bullying Institute (2012) https://www.workplacebullying.org/
  18. Clark C, Farnsworth J, Landrum E (2009) Development and description of the incivility in nursing education survey. The Journal of Theory Construction and Testing 13: 7-15.
  19. Nursing and Midwifery Council (2018) The Code: Professional standards of practice and behaviour for nurses, midwives and nursing associates. https://www.nmc.org.uk/globalassets/sitedocuments/nmc-publications/nmc-code.pdf
  20. McCabe D (2009) Academic dishonesty in nursing schools: an empirical investigation. Journal of Nurse Education 48: 614-623. [crossref]
  21. Mc Crink A (2010) Academic misconduct in nursing students: behaviours, attitudes, rationalisations and cultural identity. Journal of Nurse Education 49: 653-659. [crossref]
  22. Willis Commission (2012) Quality with compassion: the future of nursing education. Royal College of Nursing. London.
  23. Phillips J, Cooper K, Rosser E, Scammell J, Heaslip V, et al. (2015) An exploration of the perceptions of caring held by students entering nursing programmes in the United Kingdom: A longitudinal qualitative study phase 1. Nurse Education in Practice 15: 403-408. [crossref]
  24. Rosser EA, Scammell J, Heaslip V, White SJ, Phillips J, et al. (2019) Caring values in undergraduate nurse students: A qualitative longitudinal study. Nurse Education Today 77: 65-70. [crossref]
  25. Scammell J, Tait D, White SJ (2017) Challenging nurse student selection policy: using a lifeworld approach to explore the link between care experience and student values. Nursing Open 4: 218-229. [crossref]
  26. Watson J (2009) Caring Science and Human caring theory: transforming personal and professional practices of nursing and health care. Journal of Health and Human Services Administration 31: 466-482. [crossref]
  27. Francis R (2010) Robert Francis Inquiry report into Mid-Staffordshire NHS Foundation Trust. The Stationery Office, London.
  28. Carlisle D (2012) Crunch time for US nurses. Nursing Standard 27: 16-18.
  29. Lindebaum D (2019) When (and why) students bully academics. Times Higher Education. April. https://dirklindebaum.eu/wp-content/uploads/2019/05/When-and-why-students-bully-academics-Times-Higher-Education-THE.pdf
  30. Francis R (2013) Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry: Executive summary. The Stationery Office, London.
  31. Ward-Smith P (2013) The Affordable Care Act: Can we all achieve Presidential Health?. Urologic Nursing 33: 201-202.
  32. Keashly L, Neuman JH (2008) Final Report: Workplace Behaviour (Bullying) Survey. Minnesota State University, Mankato.
  33. Morrish L (2019) Higher Education Policy Institute: The University has become an anxiety machine. https://www.hepi.ac.uk/2019/05/23/the-university-has-become-an-anxiety-machine/
  34. Walker M, Townley C (2012) Contract Cheating: A new Challenge for Academic honesty?. Journal of Academic Ethics 10: 27- 44.
  35. Sattler S, Wiegel C, van Veen F (2017) The use Frequency of 10 different methods for Preventing and Detecting Academic Dishonesty and the Factors Influencing their use. Studiers in Higher Education 42: 1126-1144.
  36. Harper R, Bretag T, Ellis C, Newton P, Rozenberg P, Saddiqui S, et al. (2018) Contract cheating: a survey of Australian University Staff. Studies in Higher Education 44: 1857- 1873.
  37. Ross J (2019) Mums and dads ‘bigger problem’ than essay mills. Times Higher Education. 29th
  38. Houston D, Meyer LH, Paewai S (2007) Academic staff workloads and job satisfaction: Expectations and values in academe. Journal of Higher Education Policy and Management 28: 17-30.
  39. Kandiko CB, Mawer M (2013) Student Expectations and Perceptions of Higher Education. London: Kings Institute.
  40. Tomlinson M (2016) Students’ Perception of Themselves as ‘Consumers’ of Higher Education. British Journal of Sociology of Education 38: 450-467.
  41. Bunce L, Baird A, Jones SE (2017) The student-as-consumer approach in higher education and its effects on academic performance. Studies in Higher Education 42: 1958-1978.
  42. Vessey JA, DeMarco RF, Gaffney DA, Budin WC (2009) Bullying of staff registered nurses in the workplace: a Preliminary study for developing personal and organisational strategies for the transformation of hostile to healthy workplace Environments. Journal of Professional Nursing 25: 299-306. [crossref]
  43. Purpora C, Blegen MA, Stotts NA (2014) Horizontal violence among hospital staff nurses related to oppressed self or oppressed group. Journal of Professional Nursing 20: 24-30.
  44. Lachman VD (2014) Ethical Issues in the Disruptive Behaviours of Incivility, Bullying, and Horizontal / Lateral Violence. Ethics, Law and Policy 23: 56-60.
  45. Lachman VD (2015) Ethical Issues in the Disruptive Behaviours of Incivility, Bullying, and Horizontal / Lateral Violence. Urologic Nursing 35: 39-42. [crossref]
  46. Taylor RA, Taylor SS (2017) Enactors of horizontal violence: The pathological bully, the self-justified bully and the unprofessional co-worker. Journal of Advanced Nursing 73: 3111-3118. [crossref]
  47. Bloom EM (2019) Horizontal violence among nurses: Experiences, responses and job performance. Nursing Forum 54: 77-83. [crossref]
  48. Lewis-Pierre L, Anglade D, Saber S, Gattamorta K, Piehl D (2019) Evaluating Horizontal Violence and Bullying in the Nursing Workforce of an Oncology Academic Medical Centre. Journal of Nursing Management 27: 1005-1010.
  49. Thompson R (2019) What if you’re the bully?. American Nurse Today 14: 22-25
  50. The Guardian (2019) Bullying of Academics in Higher Education. https://www.theguardian.com/education/2019/apr/17/uk-universities-pay-out-90m-on-staff-gagging-orders-in-past-two-years
  51. BBC (2019) UK universities face 'gagging order' criticism. https://www.bbc.co.uk/news/education-47936662
  52. Khan A,Ud Din S,Anwar M (2019) Sources and Adverse Effects of Burnout Among Academic Staff: A Systematic Review. City University Research Journal (CURJ) 9: 350-363.
  53. United Kingdom Government (2010) Equality Act. https://www.gov.uk/workplace-bullying-and-harassment

Unusual Sella Mass: Pituitary Abscess (PA)

Abstract

Pituitary abscess (PA) caused by an infectious process is a rare cause of a sellar mass. The clinical features and radiological appearance of a PA as an intra- or supra-sellar mass are similar to those of many other pituitary lesions, so they are often misdiagnosed as pituitary tumors.

70% of cases occur in a previously healthy pituitary gland. These are classified as primary pituitary abscesses, presumably secondary to either hematogenous spread or extension from an adjacent infective focus such as meningitis, sphenoid sinusitis, cavernous sinus thrombophlebitis or a contaminated cerebrospinal fluid (CSF) fistula.

The rest are secondary abscesses, which arise from pre-existing lesions such as an adenoma, apoplexy in a tumor, a craniopharyngioma, a complicated Rathke's cleft cyst or a lymphoma. The risk factors for PA are immunosuppression and previous irradiation or surgical procedures to the pituitary gland [1].

In almost 50% of cases, the pathogenic microorganism causing the infection is not isolated. A history of recent meningitis, sinusitis or head surgery can point to the source [2].

Correct diagnosis before surgery is difficult, and the diagnosis is usually confirmed intra- or post-operatively. Early surgical intervention allows appropriate antibiotic therapy and hormone replacement, resulting in reduced mortality and morbidity. Long-term follow-up is recommended because of the high risk of recurrence and of postoperative hormone deficiencies.

Keywords

Pituitary abscess, Papilledema, Panhypopituitarism, Rathke’s cleft cyst, Propionibacterium acnes

Introduction

A pituitary abscess (PA) represents 0.2%-0.6% of all pituitary lesions and can be life-threatening, with a potentially prolonged disease course. The first case was reported by Heslop in 1848, and so far fewer than 300 cases have been reported worldwide [3]. It is an infectious process that presents as a mass in the sella. The clinical features and radiological appearance of a PA as an intra- or supra-sellar mass are similar to those of many other pituitary lesions, so it is often misdiagnosed as a cystic pituitary tumor, craniopharyngioma or Rathke's cleft cyst. It can be life-threatening if not appropriately diagnosed or treated, and the outcome is difficult to predict; fortunately, the majority of cases have a chronic course. The disease has a higher prevalence in females, with reported ages ranging from 12 to 76 years. The average period from the onset of symptoms to diagnosis is around 8 years.

PA can occur as a primary disease or can be secondary to infection, caused by either hematogenous spread or extension from adjacent infected tissue such as meningitis, sphenoid sinusitis, cavernous sinus thrombophlebitis or a contaminated cerebrospinal fluid fistula. 70% of cases occur in a previously healthy pituitary gland and are classified as primary pituitary abscesses; the rest are secondary abscesses that arise from pre-existing lesions, such as an adenoma, apoplexy in a tumor, a craniopharyngioma, a complicated Rathke's cleft cyst or a lymphoma [4].

In almost 50% of cases, the pathogenic microorganism causing the infection cannot be isolated. A history of recent meningitis, sinusitis or head surgery can point to the source [2].

Correct diagnosis before surgery is difficult, and the diagnosis is usually confirmed intra- or post-operatively. Early surgical intervention allows appropriate antibiotic therapy and hormone replacement, resulting in reduced mortality and morbidity. Long-term follow-up is recommended because of the high risk of recurrence and postoperative hormone deficiencies.

We present 2 cases of pituitary abscess in young women: one presented with bilateral papilledema and the other with panhypopituitarism. Both had a sellar mass on an MRI scan, and the diagnosis was made intra-operatively. Microbiological culture in both cases was positive for Propionibacterium acnes (P. acnes). P. acnes is a gram-positive organism and part of the normal skin microbiota. After Staphylococcus aureus and Staphylococcus epidermidis, it is the organism most commonly isolated from wounds following craniotomies. Low-grade infections can manifest between 3 and 36 months.

Case 1

A 14-year-old South Asian girl presented with a one-month history of worsening frontal headaches that occurred daily, associated with vomiting, nausea, lethargy, photophobia, and sleep disturbance. Aside from well-controlled asthma, she had been previously healthy. There was no recent travel history or infectious contact. On examination, she appeared alert and active. She had bilateral papilledema, suggesting raised intracranial pressure (ICP). She was apyrexial and systemically well. Her cerebral magnetic resonance imaging (MRI) scan revealed a soft tissue mass in the pituitary fossa extending up towards the optic chiasm, with mild edema in the optic nerve and tracts. The scan also showed an enlarged pituitary gland and a thickened stalk. The findings suggested an inflammatory process such as hypophysitis, particularly Langerhans cell histiocytosis (LCH) given her age. There were no other features of LCH. She had a normal liver ultrasound and skeletal survey. She had no symptoms of diabetes insipidus. Her pituitary hormones were normal, including the stimulated cortisol. Her prolactin was elevated. Her serum sodium and osmolality were normal. Her ESR was slightly raised, but autoantibodies, serum tumor markers, ACE, and the QuantiFERON tuberculosis test were negative. Her IgG4 subclass was normal (Table 1). A formal ophthalmology review did not show evidence of bilateral papilledema. Her symptoms improved with oral analgesics, and steroid treatment was not initiated.

Table 1: Results at initial presentation.

Short Synacthen test

Time | T=0 | T=30 | T=60
Cortisol (nmol/L) | 186 | 452 | 594

Baseline tests

Test | Result | Normal range
IGF-1 (nmol/L) | 47.9 | 18.3-63.5
TSH (mU/L) | 1.62 | 0.51-4.3
T4 (pmol/L) | 13.3 | 10.8-19
LH (IU/L) | 4.3 | Follicular phase 2-13; mid-cycle 14-96; luteal phase 1-11
FSH (IU/L) | 1.8 | Follicular phase 4-13; mid-cycle 5-22; luteal phase 2-8; postmenopause >25
ACTH (ng/L) | <3 | 0-50
Prolactin (mU/L) | 806 | 102-496
Serum Na (mmol/L) | 144 | 133-146
Serum osmolality (mOsm/kg) | 293 | 282-300
Random urine osmolality (mOsm/kg) | 475 | 100-1400
LDH (U/L) | 188 | 120-300
HCG (IU/L) | <1 | 0-1
Alpha-fetoprotein (kU/L) | 1 | 0-10
C-reactive protein (mg/L) | 1.5 | 0-5
ESR (mm/h) | 28 | 1-12
Complement C3 (g/L) | 1.1 | 0.75-1.65
Complement C4 (g/L) | 0.29 | 0.14-0.54
Antinuclear antibodies | Negative |
Angiotensin-converting enzyme (U/L) | 42 | 16-85
IgG4 (g/L) | 0.04 | 0-1.3

A repeat MRI scan 3 months later, discussed in a multidisciplinary meeting, was reported to suggest a Rathke's cleft cyst abscess/pituitary abscess (Figures 1 and 2). She underwent a trans-sphenoidal endoscopic pituitary biopsy for diagnosis. The appearances suggested a Rathke's cleft cyst and a pituitary abscess. Immunostaining for ACTH, FSH, LH, growth hormone, TSH, prolactin, chromogranin, synaptophysin, and collagen IV was consistent with anterior pituitary tissue. Microbiological culture on prolonged incubation was positive for Propionibacterium, with no acid-fast bacilli growth; TB culture was also negative. She received a 6-week course of antibiotics: 2 weeks of intravenous ceftriaxone and oral metronidazole followed by 4 weeks of oral co-amoxiclav. Her headaches and vomiting worsened after the biopsy, with a peak CRP of 218 mg/L, but resolved with medical treatment. MRI and baseline pituitary function blood tests repeated after the 6-week course to assess the effectiveness of management showed normal results. The patient reported resolution of her headaches and was able to resume full-time schooling.


Figure 1: MRI at presentation.


Figure 2: MRI 3 months after transsphenoidal surgery.

Case 2

A 29-year-old Caucasian fine arts student presented to the emergency department with fever, headaches, profuse sweating, tiredness, and blurring of vision. Her symptoms, particularly the headaches, had worsened over the preceding 12 months. She had noticed polydipsia and polyuria and had been amenorrhoeic for twelve months. She had been treated at her local hospital twice in the preceding 3 years with symptoms of headaches, fever, weight loss, and vomiting, and had undergone lumbar puncture 3 times to rule out central nervous system infection. On both admissions, she was discharged home after empirical treatment with antibiotics for suspected meningitis. There was no other past medical history and no recent travel history or infectious contacts. She was not on any regular medications.

The initial pituitary MRI and contrast-enhanced MRI scan revealed the absence of the posterior pituitary bright spot and a thickened pituitary stalk with deviation of the infundibulum to the right. There was a homogeneous hyperintense area within the pituitary gland with no discernible pituitary tissue; this area was hypointense on T2 (Figures 3-5). The differential diagnosis was apoplexy, hypophysitis, or a proteinaceous cystic lesion replacing or compressing the pituitary gland. The optic nerves and chiasm appeared normal. Her investigations confirmed hypopituitarism with diabetes insipidus (Table 2). Her lumbar puncture showed no CSF abnormality, and her tumor markers and Quantiferon test for tuberculosis were negative. The case was discussed in a multidisciplinary team (MDT) meeting, and with an empirical diagnosis of hypophysitis she was started on prednisolone with replacement of the deficient hormones, including desmopressin. She showed no improvement in her clinical symptoms. A 3-month interval scan showed an increase in the size of the pituitary gland, with further thickening of the stalk and the optic chiasm displaced superiorly. After a second MDT discussion, she had a pituitary biopsy. During surgery, soft yellow-white pus-like material was drained after the dural incision. Microscopy showed necrotic material with a small amount of compressed anterior pituitary gland and chronic inflammation; no evidence of adenoma, granuloma, or giant cells was found. No acid-fast bacilli or other organisms were seen on Gram staining, and the culture for TB was negative. There was scanty growth of Propionibacterium acnes. Her interval scan 3 months later showed complete resolution of the non-enhancing T1 hyperintense pituitary tissue with a further decrease in the size of the pituitary gland. She remains on full hormone replacement. An insulin tolerance test confirmed growth hormone deficiency, and she is now on growth hormone replacement. She remains on hydrocortisone, thyroxine, female hormone replacement, and desmopressin.


Figure 3: MRI at presentation.


Figure 4: MRI 3 months later.


Figure 5: MRI post-surgery.

Table 2: Results at initial presentation.

Short Synacthen test

Time | T=0 | T=30
Cortisol (nmol/L) | 148 | 169

Baseline tests

Test | Result | Normal range
IGF-1 (nmol/L) | 12.7 | 11.9-40.7
TSH (mU/L) | 1.35 | 0.27-4.20
T4 (pmol/L) | 5.3 | 10.8-25.5
LH (IU/L) | 3.1 | Follicular phase 2-13; mid-cycle 14-96; luteal phase 1-11
FSH (IU/L) | 5.1 | Follicular phase 4-13; mid-cycle 5-22; luteal phase 2-8; postmenopause >25
Oestradiol (pmol/L) | <92 | 92-1462
Prolactin (mU/L) | 577 | 102-496
Serum Na (mmol/L) | 142 | 133-146
Serum osmolality (mOsm/kg) | 301 | 275-295
Random urine osmolality (mOsm/kg) | 154 | 100-1400
CSF beta-HCG (IU/L) | <2 | <2
CSF alpha-fetoprotein (µg/L) | <1 | <1
C-reactive protein (mg/L) | 1.4 | 0-5
ESR (mm/h) | 3 | 1-12
Antinuclear antibodies | Negative |
IgG4 (g/L) | <0.01 | 0-1.3

Discussion

A pituitary abscess is an infectious process characterized by the accumulation of purulent material in the sella turcica. It is rare and can be life-threatening unless promptly diagnosed and treated. We report 2 cases of secondary pituitary abscess in young women. The first case was due to an abscess within a Rathke's cleft cyst (RCC), and the second was a pituitary gland abscess in a patient with a history of otitis media and repeated lumbar punctures for presumed meningitis.

The clinical presentation of PA is nonspecific, with headaches, pituitary hypofunction, and visual disturbances, while signs of infection can be subtle and inconstant [5,6]. Symptoms can be acute, subacute, or chronic, which explains the late diagnosis in some cases. Visual disturbance, including hemianopia, can be present in 50% of cases. Headache without a particular pattern is a regular feature (70-90%) and can be debilitating. Anterior pituitary hypofunction due to destruction and necrosis of the gland is the commonest presentation, resulting in fatigue and amenorrhoea (54-85%); in one series, 28 of 33 patients had anterior pituitary hypofunction. Pituitary hormone deficiencies persist in the majority of patients following treatment. Up to 70% of patients with PA can have central diabetes insipidus. In contrast, fever with signs of meningeal irritation is reported in 25% of cases [5].

MRI is the imaging modality of choice for pituitary lesions. PA can present as a suprasellar mass (65%) or as an intrasellar mass (35%). A typical PA appears as a single cystic or partially cystic mass that is hypointense on T1-weighted images and hyperintense on T2-weighted images, and it can show rim enhancement after gadolinium contrast. The posterior pituitary bright spot is absent in the majority of cases (Wang et al. [6]). The lesion's signal depends on its protein, water, and lipid content, and on whether there is hemorrhage. Imaging can also show invasion of adjacent anatomical structures, peripheral meningeal enhancement, thickening of the pituitary stalk, and paranasal sinus enhancement [6].

Diffusion-weighted magnetic resonance imaging (DWI) is widely used to differentiate cerebral abscess from other necrotic masses. Brain abscesses typically show high intensity on DWI with a decreased apparent diffusion coefficient (ADC) value in their central region. The high intensity on DWI is useful but not specific to PA, because pituitary apoplexy can also exhibit high intensity on DWI [7]. The accuracy of DWI in PA remains controversial; in the Wang et al. case series, PA was misdiagnosed in one-third of the cases [6]. The radiological differential diagnosis includes Rathke's cleft cyst, cystic pituitary adenoma, arachnoid and dermoid cysts, metastases, glioblastoma multiforme, chronic hematoma, and multiple sclerosis [8]. Rathke's cleft cyst in particular can mimic a pituitary abscess [9]. RCC is the second most common incidentaloma after adenomas and accounts for 20% of incidental pituitary lesions at autopsy. The incidence of RCCs in children was reported to be much lower than in adults; however, the prevalence is now believed to be much higher, especially among those with endocrine-related disorders [10]. Güneş et al. reported the radiological appearance of RCC on MRI in 13.5% of children who underwent MRI for the investigation of endocrine-related disorders. Patients with RCC are usually asymptomatic, but symptomatic RCC is more common in females in both adult and pediatric populations [11]. RCC can cause significant morbidity such as headache, visual disturbances, chemical meningitis, and endocrine dysfunction (hypothyroidism, menstrual abnormalities, diabetes insipidus, adrenal dysfunction, and very rarely apoplexy). Short stature, growth deceleration, and delayed puberty are also reported in children and adolescents.

In most cases, the diagnosis of PA can only be confirmed after surgical exploration, because its clinical signs, symptoms, imaging, and laboratory findings overlap with those of other sellar lesions. Signs of inflammation are present in fewer than a third of patients. PA should be included in the differential diagnosis of patients with headaches or signs of pituitary dysfunction, and of patients with a pituitary mass who develop signs of meningeal inflammation.

The main treatment for PA in patients with mass effect is transsphenoidal surgery (TSS) with decompression of the sella, together with antibiotic therapy. This can resolve visual abnormalities, and treatment is effective for typical symptoms such as fever, headache, and visual changes. Patients with a shorter duration of symptoms and those with a primary abscess show better improvement in their pituitary dysfunction, but the majority of patients are left with pituitary dysfunction even after treatment.

Antibiotic therapy should be started promptly, even in patients who are awaiting microbiological and histological confirmation, and continued for about 4-6 weeks [1,12]. Empirical treatment with ceftriaxone is indicated until the results are available. Hormone replacement is commenced depending on the hormone deficits, including stress-dose glucocorticoid therapy. Hypocortisolemia should be recognized in patients presenting with sellar masses, as early diagnosis and treatment improve survival and endocrinological outcome. Patients who suffer from a pituitary abscess may eventually have a good quality of life if they are diagnosed and treated early. A craniotomy is reserved for larger lesions with suprasellar extension or where transsphenoidal surgery is ineffective [13]. In a published series of 66 patients, 81.8% of patients recovered completely, 12.1% had at least one reoperation for recurrence, and only one patient died [14].

A wide range of pathogenic microorganisms is found in these abscesses, including Gram-positive bacteria, Gram-negative bacteria, anaerobes, and fungi [8,11]. Streptococcus and Staphylococcus are the predominant Gram-positive bacteria, whereas Escherichia coli, Mycobacterium, and Neisseria have also been reported [3,10,11]. Aspergillus fumigatus is mostly isolated in cases of secondary PA, while immunosuppressed patients mostly have Candida and Histoplasma. Cultures are positive in only 50% of cases; therefore, broad-spectrum antibiotics are given as empirical treatment. Pathogen identification is important for therapeutic management [15].

Both patients had cultures positive for Propionibacterium acnes (P. acnes). This organism resides deep in the pilosebaceous glands, mainly in the scalp and face. It is a slow-growing, pleomorphic, non-spore-forming, gram-positive anaerobic bacillus that is a universal component of the normal skin microbiota. It is usually considered a contaminant of blood cultures but occasionally can cause serious infections, including postoperative central nervous system (CNS) infections. P. acnes is the most commonly isolated organism following craniotomies after Staphylococcus aureus and Staphylococcus epidermidis. In the presence of heavy infiltrates, the Gram stain is not reliable; it is positive in only about 10.5% of clinically significant infections with moderate growth. P. acnes behaves less aggressively than other postsurgical organisms and accounts for only a small fraction of CNS infections [16]. P. acnes abscesses typically follow craniotomy, shunts, access to reservoirs, trauma, and foreign bodies. Granulomatous responses have been documented in the CNS following P. acnes infections.

P. acnes grows slowly in the laboratory. This can cause a delayed or missed diagnosis, or a delay in treatment, if specimens are not cultured for an extended period. Cultures may not grow for as long as 14 days, so samples should be held beyond the usual 5 to 7 days. Gram stain may not be a reliable technique for the rapid diagnosis of P. acnes infections; when there is evidence of an abundant inflammatory response in the Gram-stained smear, a more careful evaluation of cultures must be performed. Polymerase chain reaction for the 16S rRNA gene or mass spectrometry can be a useful tool for rapid identification and typing of P. acnes following recovery in culture. Propionibacterium is susceptible to antibiotics used for the treatment of anaerobic infections, including penicillin, erythromycin, lincomycin, and clindamycin, but not to metronidazole, which is notably ineffective against P. acnes [17].

Patients with PA should be followed up with serial MRI of the pituitary, hormonal profiles, and visual fields at 3, 6, and 12 months after surgery. The recurrence rate is variable and depends on the nature of the abscess (primary or secondary). The majority of relapses are associated with either an immunological defect or previous pituitary surgery [12,18].

Conclusion

We presented 2 cases of an unusual sellar mass due to abscesses caused by P. acnes in an adolescent and a young adult; both responded well to treatment.

Pituitary abscess should be included in the differential diagnosis of patients with a sellar or suprasellar mass, headaches, pituitary dysfunction, or meningeal inflammation.

The diagnosis is difficult before surgery because clinical signs and radiological and laboratory findings overlap with those of other sellar lesions.

Broad-spectrum antibiotics should be started empirically even before the culture results are available.

Culture is positive in only 50% of cases; with unusual bacteria such as P. acnes, extended culture is required to confirm the diagnosis.

Pituitary dysfunction should be recognized and appropriately treated, particularly with glucocorticoid replacement.

Transsphenoidal surgery is the treatment of choice, followed by a prolonged course (4-6 weeks) of broad-spectrum antibiotic therapy.

Early and efficient surgical and medical management results in lower mortality and higher recovery of pituitary hormone function.

Patients should be followed up with MRI, review of hormone replacement as required, and visual field assessment, because of the chance of recurrence.

References

    1. Lin Y, Lin F, Liang Q, Li Y, Wang Z, et al. (2017) Pituitary abscess: report of two cases and review of the literature. Neuropsychiatr Dis Treat 13: 1521-1526. [crossref]
    2. Furnica RM, Lelotte J, Duprez T, Maiter D, Alexopoulou O, et al. (2018) Recurrent pituitary abscess: case report and review of the literature. Endocrinol Diabetes Metab Case Rep 17-0162. [crossref]
    3. Kummaraganti S, Bachuwar R, Hundia V, et al. (2013) Pituitary abscess: a rare cause of pituitary mass lesion. Endocrine Abstracts 31: 1. [crossref]
    4. Al Salman JM, Al Agha RAMB, Helmy M, et al. (2017) Pituitary abscess. BMJ Case Rep 2016-217912. [crossref]
    5. Nordjoe YE, Igombe SRA, Laamrani FZ, Jroundi L, et al. (2019) Pituitary abscess: two case reports. J Med Case Rep 13: 342.
    6. Wang Z, Gao L, Zhou X, Guo X, Wang Q, Lian W, Wang R, Xing B (2018) Magnetic resonance imaging characteristics of pituitary abscess: a review of 51 cases. World Neurosurg 114: e900-e902. [crossref]
    7. Xu XX, Li B, Yang HF, Du Y, Li Y, Wang WX, et al. (2014) Can diffusion-weighted imaging be used to differentiate brain abscess from other ring-enhancing brain lesions? A meta-analysis. Clin Radiol 69: 909-915. [crossref]
    8. Corsello SM, Paragliola RM (2017) Differential diagnosis of pituitary masses at magnetic resonance imaging. Endocrine 58: 1-2.
    9. Coulter IC, Mahmood S, Scoones D, Bradey N, Kane PJ, et al. (2014) Abscess formation within a Rathke's cleft cyst. J Surg Case Rep 11: 105. [crossref]
    10. Vasilev V, Rostomyan L, Daly AF, Potorac J, Zacharieva S, Bonneville JF, Beckers A (2016) Pituitary 'incidentaloma': neuroradiological assessment and differential diagnosis. European Journal of Endocrinology 175: R171-R184. [crossref]
    11. Güneş A, Güneş SO (2020) The neuroimaging features of Rathke's cleft cysts in children with endocrine-related diseases. Diagn Interv Radiol 1: 61-67. [crossref]
    12. Vates GE, Berger MS, Wilson CB (2001) Diagnosis and management of pituitary abscess: a review of twenty-four cases. J Neurosurg 95: 233-241. [crossref]
    13. Karagiannis AKA, Dimitropoulou F, Papatheodorou A, Lyra S, Seretis A, Vryonidou A (2016) Pituitary abscess: a case report and review of the literature. Endocrinol Diabetes Metab Case Rep. [crossref]
    14. Ling X, Zhu T, Luo Z, Zhang Y, Chen Y, Zhao P, Si Y (2017) A review of pituitary abscess: our experience with surgical resection and nursing care. Transl Cancer Res 6: 852-859.
    15. Achermann Y, Goldstein EJC, Coenye T, Shirtliff ME (2014) Propionibacterium acnes: from commensal to opportunistic biofilm-associated implant pathogen. Clin Microbiol Rev 27: 419-440. [crossref]
    16. Chung S, Kim JS, Seo SW, Ra EK, Joo SI, Kim SY, Park SS, Kim EC (2011) A case of brain abscess caused by Propionibacterium acnes 13 months after neurosurgery and confirmed by 16S rRNA gene sequencing. Korean J Lab Med 31: 122-126. [crossref]
    17. Yacoub AT, Khwaja S, Daniel L, et al. (2015) Propionibacterium acnes causing central nervous system infections: a case report and review of literature. Infectious Diseases in Clinical Practice 23: 60-65. [crossref]
    18. Batool SM, Mubarak F, Enam SA, et al. (2019) Diffusion-weighted magnetic resonance imaging may be useful in differentiating fungal abscess from malignant intracranial lesion: case report. Surg Neurol Int 10: 13. [crossref]

A Safety Signal’s Significance with the COVID-19 Coronavirus

Introduction

The global pandemic involving COVID-19 (coronavirus) has produced unprecedented challenges for medicine, healthcare providers, and our world community. The World Health Organization (WHO 2020) declared COVID-19 a pandemic, pointing to the numerous cases of the coronavirus illness in over a hundred countries and territories around the world and the sustained risk of further global spread [1,2]. The term pandemic is most often applied to new influenza strains, and the Centers for Disease Control and Prevention (CDC) use it to refer to strains of virus that are able to infect people easily and spread from person to person in an efficient and sustained manner. Such a declaration refers to the spread of a disease rather than the severity of the illness it causes. A pandemic declaration can result in increased levels of stress, anxiety, panic, and functional depression for some individuals [3]. These unusual circumstances create significant uncertainty and unease in the professional and personal lives of health care professionals and their patients.

Definition of a Safety Signal

“Safety signals” are learned cues that predict the nonoccurrence of an aversive event. As such, safety signals are potent inhibitors of fear and stress responses. Investigations of safety signal learning have increased over the last few years due in part to the finding that traumatized persons are unable to use safety cues to inhibit fear, making it a clinically relevant phenotype.

The coronavirus has traumatized some individuals, producing a state of heightened fear or anxiety in environments globally. This symptom has been conceptualized as a generalization of the fear conditioned during the traumatic experience that becomes resistant to extinction. As opposed to danger learning, where a cue is paired with aversive stimulation, safety learning involves associating distinct environmental stimuli, also known as safety signals, that can be applied when aversive events occur, as in a global pandemic.

During periods of high stress, such as the COVID-19 pandemic, fear often permeates the lives of many because of the unknown nature of the illness. This occurs in the absence of a learned safety signal. Such safety signals can inhibit fear responses to cues in the environment; they are learned only when the subject expects danger that does not necessarily occur. More fundamental to the clinical importance of a safety signal is the distinction between safe and dangerous circumstances. Thus, identifying the mechanisms of safety learning represents a significant goal for basic neuroscience that should inform future prevention and treatment of trauma and other anxiety disorders.

With the COVID-19 global pandemic, the World Health Organization (2020) continues to ask countries to "take urgent and aggressive action." World leaders continue holding international teleconferences with health officials to address the most effective way to protect the public and to develop public health policy for the coronavirus, which has caused multiple illnesses and deaths worldwide.

Transitioning the Pandemic

The urgency has created stressful life experiences for all ages that, for some, result in disabling fear, a hallmark of anxiety and stress-related disorders [4]. Researchers at Yale University and Weill Cornell Medicine report a novel way that could help combat the anxiety experienced at times like these, when life events such as the spread of COVID-19 trigger excessive fear in the absence of a safety signal. In humans, a symbol or a sound that is never associated with adverse events can relieve anxiety through an entirely different brain network than that activated by fear and worry. Each individual must find their own "safety signal," whether that is a mantra, a song, a person, or even an item like a stuffed animal that represents the presence of safety and security.

The Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO), and other reputable agencies have issued guidance on how to address the coronavirus: wash hands frequently, avoid sharing personal items, and maintain social distance from others beyond the immediate family.

While it is still unclear exactly how much of the current coronavirus outbreak has been fueled by asymptomatic, mildly symptomatic, or pre-symptomatic individuals, the risk of contagion exists. A yet-to-be-published article in the CDC journal Emerging Infectious Diseases (CDC 2020) reports that the time between cases in a chain of transmission is less than a week, with more than 10% of patients being infected by someone who has the virus but does not yet have symptoms, according to Dr. Lauren Meyers, a professor of integrative biology at UT Austin, who was part of a team of scientists from the United States, France, China, and Hong Kong examining this viral threat.

Earlier this year, researchers in China published a research letter in the Journal of the American Medical Association outlining the case of an asymptomatic woman in Wuhan, China who reportedly spread the virus to five family members while traveling to Anyang, China, all of whom developed COVID-19 pneumonia. The sequence of events suggests that the coronavirus may have been transmitted by the asymptomatic carrier [5].

Prevention Interventions

Coordinated regional efforts are underway under the direction of the Centers for Disease Control and Prevention (CDC), which provides guidelines aimed at prevention intervention. Each individual should make the effort to create one's own "safety signal" by following the recommendations of the CDC (2020). Know how the virus spreads, and that there is currently no vaccine to prevent coronavirus disease (COVID-19); critical for prevention is avoiding exposure to the virus. The virus is thought to spread mainly from person to person, between people who are in close contact with one another, through respiratory droplets produced when an infected person coughs or sneezes. These droplets can land in the mouths or noses of people who are nearby or possibly be inhaled into the lungs.

Disinfect by washing hands often with soap and water for at least twenty seconds, especially after you have been in a public place or after blowing your nose, coughing, or sneezing. If soap and water are not readily available, use a hand sanitizer that contains at least 60% alcohol: cover all surfaces of your hands and rub them together until they feel dry. Avoid touching the eyes, nose, and mouth with unwashed hands. Put distance between yourself and other people if COVID-19 is spreading in your community. This is especially important for people who are at higher risk of severe illness, including the immunocompromised.

Health care calls for "sheltering in place" are an effort to provide primary prevention: it is important to stay home to slow the spread of COVID-19, and if you must go out, practice personal quarantine. While we stay home, we should not let fear and anxiety about the COVID-19 pandemic become overwhelming. Managing mental health can be aided by taking breaks from watching, reading, or listening to news stories and social media. It remains important to take the time to connect with others; networking with friends and loved ones over the phone or via video chat about the thoughts and feelings experienced during this pandemic is very important to maintaining mental health during these times. Employ mindful meditation, eat healthy meals, exercise regularly, and get plenty of sleep.

Take steps to protect yourself and others. Stay sheltered in place, especially when you are sick. Shelter in place means to seek safety within the building one already occupies rather than to evacuate the area or seek a community emergency shelter. The American Red Cross notes that such warnings are issued for chemical, biological, or radiological contaminants, which would include exposure to the coronavirus.

Efforts must be made to cover one's mouth and nose with a tissue when coughing or sneezing, or to use the inside of the elbow. Throw used tissues in the trash and immediately wash your hands with soap and water for at least 20 seconds. If soap and water are not readily available, clean your hands with a hand sanitizer that contains at least 60% alcohol.

It is important to wear a facemask for your own health as well as the health of others. Everyone should wear a facemask when around other people (e.g., sharing a room or vehicle) and before entering a healthcare provider's office. If someone is not able to wear a facemask due to breathing difficulties, then that person should cover all coughs and sneezes, and people who are caring for them should wear a facemask when they enter the room. Wear a facemask when caring for someone who is showing any signs or symptoms of respiratory infection and fever.

When considering the anxiety and apprehension individuals may experience with the vulnerabilities of the present pandemic and future epidemics of this proportion, patient medical education can provide a buffer. Prevention interventions include cleaning and disinfecting objects and surfaces that are touched regularly: tables, doorknobs, light switches, countertops, handles, desks, phones, keyboards, toilets, faucets, and sinks. If surfaces are dirty, clean them with detergent or soap and water prior to disinfection. At the first signs of symptoms, take advantage of virtual care in an effort to minimize unnecessary visits to an emergency room or health care provider's office, which can also decrease the spread of illness and/or infection of many conditions, including COVID-19. Finally, each individual is encouraged to establish one's own "safety signal" by adhering to the multiple precautions in the guidelines developed and promoted by the World Health Organization and the Centers for Disease Control and Prevention (CDC 2020).

References

  1. Centers for Disease Control (2020) Coronavirus Disease 2019 (COVID-19).
  2. World Health Organization (2020) Coronavirus disease 2019 (COVID-19): Situation Report-38.
  3. Miller TW (2015) Problem Epidemics in Recent Times. Health & Wellness. Lexington Kentucky: Rock point Publisher Incorporated.
  4. Miller TW (2010) Handbook of Stressful Transitions across the Life Span. New York: Springer Publishers Incorporated.
  5. Huang C, Wang Y, Li X, et al. (2020) Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet 395: 497-506.

Understanding the Algebra of the Restaurant Patron: A Cartography Using Cognitive Economics and Mind Genomics

DOI: 10.31038/NRFSJ.2020311

Abstract

The studies reported here extend the range of Mind Genomics beyond how people feel about a situation (homo emotionalis) to the economic impact that the situation would occasion (homo economicus). The topic is the familiar experience of observing the behavior of restaurant staff with each other and with the customer. Respondents rated the expected price of the check using a relative scale (25% less vs. 25% more). Shifting the focus to economic considerations revealed fewer strong-performing messages and fewer, less clearly defined mind-sets based upon the patterns of individual respondents. Confirming previous unpublished observations, the data from the three studies suggest that shifting the attention of the respondent to economics rather than emotions forces the respondent into a conservative stance. Studies on pricing must take this emergent conservatism into account when attempting to understand how people actually 'feel' about a situation.

Introduction – From The Outside Looking In

Today's social sciences for the most part deal with normative behavior, behavior that is typical in situations. A term for this behavior is nomothetic, from the Greek word nomos, meaning general or normative rule. The focus on the nomothetic can be seen in studies of how patrons think of restaurants in a general sort of way, and in the omnipresence of customer satisfaction surveys focusing on the food, the service, the décor, and so forth [1-4]. Customer satisfaction is a growing business, and the hospitality industry is one of its biggest users, seeking to understand the experience, from what happens to how it happens. For the most part, researchers use a set number of questions about the experience, breaking the questions down into responses about the décor, the server's attitudes, the food, and so forth. In a typical survey the objective is to obtain a quick measure of the subjective impression of the restaurant, an impression which is tallied with many others to generate a profile of performance, or a set of composite scores [1]. The end result is knowledge about what is important to the customer, information relevant for journals and for science, as well as how a specific establishment performed on a certain day, information important for business. The typical questions focus on the person's feelings, attempting to link feelings to economic implications, such as the increase or decrease of the business.

When researchers try to understand a situation, they can avail themselves of a variety of techniques. Anthropological observation, depth interviews, focus groups, and surveys are the major tools. Most of these tools are used within the context of understanding the business as a social entity (anthropology, sociology), or as a money-making enterprise that can be analyzed and fine-tuned to increase revenues and profitability, as well as both employee and customer satisfaction. The world of the restaurant is of continuing interest to researchers. The restaurant is a microcosm, of interest to businesspeople, organizational psychologists, those in the world of food service, and so forth. There is no lack of papers and journals devoted to the world of restaurants in general, and to the world of food service in particular. Most of the papers look at the restaurant from the 'outside,' observing behavior or asking the customer to evaluate the experience. There have been some papers looking at the mind of the restaurant consumer in some depth, moving beyond the standard surface questions [5,6]. Most of these deeper-focused papers deal with the topic from the point of view of the hospitality profession, and not from the point of view of psychology.

The Contribution of Mind Genomics to Understanding the Perception of the Restaurant Experience

Mind Genomics is a newly developing science dealing with how we make decisions in our daily lives. Rather than focusing on unusual and artificial situations to prove or disprove a hypothesis, Mind Genomics is better considered a cartography, a study of the landscape, with the goal of uncovering patterns in everyday life, specifically patterns in the way people take in information and make decisions. Mind Genomics differs from social psychology, which observes behavior and hypothesizes inner structures of the mind, and from experimental psychology, which sets up artificial situations, measures responses, and develops hypotheses about mental processes. In contrast, Mind Genomics creates mixtures of communication elements about the specifics of a topic, measures the responses to meaningful combinations of messages created according to an experimental design, and deduces the 'algebra' of the mind, regarding how the person weighs the information. Mind Genomics thus combines the methods of market research (concept evaluation), statistical design (systematic variation of combinations of messages), and experimental psychology (evaluating and deconstructing the patterns of response of respondents, viz., 'subjects' who participate in an experiment disguised as a simple survey).

In previous studies using the methods of Mind Genomics, the focus has been on the emotional or affective response to the test messages. These responses may be ratings (e.g., dislike/like, not buy/buy, not believe/believe), the selection of a usage occasion, or even the selection of an emotion [7]. The approach of instructing a respondent to give an opinion may be described as investigating 'homo emotionalis,' emotional man. In recent years, researchers have begun to consider economic aspects. In concept testing and in conjoint measurement, for example, researchers have mixed price with other features, instructing the respondent to select the preferred combination of price + features (pairwise trade-off) or to rate interest in a selling proposition about a product or a service, with price being one of the features in the proposition (concept testing). During the past two decades, author Moskowitz has occasionally explored the potential of using price as a dependent variable. The respondent is instructed to read a test concept and, instead of (or in addition to) rating the product on liking, to select a price. The analysis re-codes the rating, replacing each rating by the price attached to it. The prices may be presented in irregular order so that the respondent has to search for the appropriate price in a set of prices; that approach ensures that the price is not simply used as a Likert scale of magnitude [8].

The integrated set of three studies here, dealing with the responses of customers observing the behavior of managers and servers in a restaurant, extends the use of Mind Genomics into economics. Author Rappaport has coined the term 'cognitive economics' for this extension, where economic considerations, rather than ratings of emotions, serve as the dependent variable [9].

Attribution Instead of Rating

The new direction in Mind Genomics, attribution, follows the approach pioneered with the direct estimation of price. In those studies, where price was the rating variable, the respondent evaluated different combinations of product features and benefits, selecting a price that might be appropriate for a product or service described by the concept or test vignette (the terms vignette, concept, and test combination are used interchangeably). The analysis by OLS (ordinary least-squares) regression revealed the part-worth value of each benefit or feature, or even brand name and tag line. Since the rating was expressed in terms of dollars and cents, the equation uncovered the dollar value of each element. The OLS equation was expressed as: Dollar Value = k1(A1) + k2(A2) + … + k16(D4), as an example. The equation shows the dollar value selected by the respondent deconstructed into 16 smaller dollar values, k1-k16, for 16 elements (features, benefits, brand names, tag lines, etc.). Attribution in Mind Genomics moves the focus from the evaluation of price for a specific item whose components are known, to the estimated price that would be paid for a situation as described, where there are no features, but rather actions. One might call this the 'dollar value of a smile.' The undergirding hypothesis is that one can present vignettes about situations, such as staff behavior in a restaurant, and ask respondents to judge the relative magnitude of the check for a meal, from more expensive, to the same, to less expensive.
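To make the deconstruction concrete, here is a minimal sketch under assumptions of ours, not the authors' software: a binary design matrix (one column per element, 1 = present in the vignette), simulated dollar ratings, and OLS without an intercept, matching the equation above. The element names, the number of vignettes, and the simulated part-worths are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: 16 elements (A1..D4), 24 vignettes per respondent,
# each vignette holding 2-4 elements, at most one per question.
elements = [f"{q}{i}" for q in "ABCD" for i in range(1, 5)]
n_vignettes = 24

def random_vignette():
    """One row of the binary design matrix: at most one answer per question."""
    row = np.zeros(len(elements))
    questions = rng.choice(4, size=rng.integers(2, 5), replace=False)
    for q in questions:
        row[q * 4 + rng.integers(4)] = 1
    return row

X = np.array([random_vignette() for _ in range(n_vignettes)])

# Hypothetical 'true' dollar part-worths used to simulate the ratings.
true_k = rng.uniform(-3, 5, size=len(elements))
dollar_rating = X @ true_k + rng.normal(0, 1, n_vignettes)

# OLS without intercept: Dollar Value = k1(A1) + ... + k16(D4)
k, *_ = np.linalg.lstsq(X, dollar_rating, rcond=None)

for name, coef in zip(elements, k):
    print(f"{name}: {coef:+.2f} dollars")
```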

The notion of attribution is new, without any exploratory data to be found. There is a well-developed science for the dollar value of product and service features, but that dollar value pertains to what is being purchased. There is an expectation that the dollar value will change with different features; we are accustomed to paying more or less for certain benefits, features, and even brands. The act of judging is straightforward, at least at a subjective level, and whether the judgments are correct or not can be determined through experiment. In contrast, attribution explores a potentially tenuous relation, if any, between money and the perception of behavior, in a world where the two may not be linked at all. The process of measuring this variable we call 'attribution' will become clear as we move through three studies dealing with the estimated size of the 'check' for a meal, based upon a description of the behavior of the server and the manager. Each experiment begins with four questions about the situation, and four answers to each question. The role of the question is to set up the structure of information and to create a structure for the answers. The respondent never sees the questions, but rather sees only combinations of answers. The respondent evaluates 24 different combinations of answers, viz., 24 'vignettes' or test concepts, and rates each vignette on the expected size of the check for the meal. The study does not ask the respondents what they would pay for the meal, but rather instructs them to guess the size of the check to be delivered by the server. There is no direct cue about price, since the source of the size of the check is unknown; the respondent is simply told that the check is delivered.

The origin of these studies lies in ongoing discussions about the lack of knowledge about the mind of the customer, beyond the sociological and market research studies of the type cited above. That is, little is known about the everyday formation of impressions by customers who walk into a restaurant, are seated, and observe what is going on. The information of interest to most people is the restaurant itself and the criteria for judging whether one wants to return. The standard knowledge emerging from such surveys is surface-level. It should be noted that this set of three exploratory studies is both novel and routine. The novelty is the use of pricing as a dependent measure to assess a subjective impression. The dollar value of an experience is not new [9,10], just as the dollar value of product quality is not new [11]. What is new is the use of a seemingly unrelated measure, the dollar value of the check or bill for the meal. There is no clear, necessary, or 'right' relationship between the dollar value of the check and the description of the restaurant.

The Three Experiments – Mind Genomics Applied to Cognitive Economic Attribution

Mind Genomics works according to a systematized process, following a user-friendly path. The software makes the set-up straightforward. The set-up system is simple, as shown by Figure 1, which represents the different steps the researcher follows, typing the relevant information into a computerized form at each point. Figure 1 is meant to be schematic, showing an actual sequence of completed forms in the order presented to the user. The user is led through a series of forms to complete. The process is virtually self-explanatory, albeit without bells and whistles. The format is simple and to the point, and guides the researcher through the process step by step, beginning with the selection of the topic, the requirement to create four questions and four answers for each question, and finishing with the introduction to the experiment, the rating scale, and the anchor points for the rating scale (highest and lowest).


Figure 1: The set-up system for the Mind Genomics project.

It is worth noting that the 'difficulties' encountered in these studies come not from the study itself, but typically from the fact that people think in an undisciplined fashion. The form in Figure 1 forces the researcher to think in a systematic fashion, beginning with the topic, then proceeding to the questions, and finally creating four answers for each question. After the first one or two experiences, the thinking of the typical researcher changes, as he or she begins to follow the disciplined path demanded by the computer program. We illustrate the set-up with the first of the three studies, traits of the server. We deal with those results in detail, and then follow with a cursory analysis of the key findings for the other two studies: the interaction among the staff (Study 2) and the interaction with the customer (Study 3).

Step 1 (Panel A)

Select the name of the topic. This first step requires the researcher to give a name to the project. As simple and direct as it sounds, Step 1 requires the researcher to focus on the topic as a coherent 'whole,' rather than thinking about the topic in a diffuse way. The researcher then records the name on the proper screen. The study here is Traits of Servers.

Step 2 (Panel B)

Select four questions which tell a story about the topic. It is at Step 2 that the topic should crystallize in the mind of the researcher. The text is typed onto the computer form, one question after another. The questions are never seen directly by the respondent, but simply serve as an aid to generate the four answers to each question. It is relevant to note that Step 2 is the most difficult step in the entire process; most people do not approach problems and knowledge acquisition in a structured, disciplined fashion, although two or three experiences suffice to learn it.

Step 3 (Panels C1-C4)

Repeat each question (done automatically by the computer), and type in the four different answers to each question in exactly the language and format that the respondent will see. It is straightforward here to copy text in other languages and alphabets and paste it into the computer form. Each of the four panels corresponds to one of the questions. Table 1 shows the four questions and the four answers to each question.

Table 1: The four questions and the four answers to each question for Study #1.

Study 1: Traits of server and manager
Question A: what personality traits does a server possess?
A1 the server’s personality: consistent smile; high energy; competent in customer service
A2 the server’s personality: insensitive to people with different personalities who show up
A3 the server’s personality: easily communicates with people similar to themselves regarding their specific needs like special dietary requests
A4 the server’s personality: sensitive to customer’s/coworker’s cultural differences; understands we are all different
Question B: what personality traits does a manager possess?
B1 the manager’s personality: stern disposition; takes on an authoritative role
B2 the manager’s personality: knows their customers’ likes and dislikes
B3 the manager’s personality: knows their staff’s strengths and weaknesses
B4 the manager’s personality: knows their staff’s weaknesses and strengths
Question C: how does a server assist his/her manager?
C1 server assists manager: shows up to work on time on a consistent basis
C2 server assists manager: shows up with a can-do, team player attitude
C3 server assists manager: friendly to coworkers and customers alike
C4 server assists manager: shows up late
Question D: how does a manager assist his/her wait staff?
D1 the manager assists: stern disposition; takes on an authoritative role
D2 the manager assists: knowledge in all aspects of restaurant tasks
D3 the manager assists: deals with confrontations between staff and customers in a biased manner
D4 the manager assists: shows favoritism amongst staff and customers; generally disrespectful

The rationale for four questions and four answers per question comes from the researchers' vision of an easy-to-use system to answer questions about specific topics, such as products, political candidates, and social situations. The original goal was to make the number of possible messages about a topic virtually unlimited. With repeated experience, it became clear that most issues could be handled with 36 elements, such as four questions with nine answers each (36 elements, 60 vignettes), or six questions with six answers each (36 elements, 48 vignettes). Over time, the design comprising four questions with four answers each (16 elements, 24 vignettes) emerged as the most practical. Note that elements B3 and B4 are the same, except for a reversal of the order of the ideas: B3 begins with strengths and finishes with weaknesses; B4 begins with weaknesses and finishes with strengths. The Mind Genomics process lets us explore such side issues of order, and study different ways of expressing the same idea, whether these be minor differences (e.g., order of ideas) or major differences (different tonality of language).

Step 4 (Panels D1 and D2)

Orient the respondent (D1) and then create the rating question, selecting the number of scale points and the rating scale (D2). There are three sequential steps to create the rating scale: the text, the number of scale points, and the anchor points for the low and high ends of the scale. Only two scale anchors are allowed in the current version; for other formats, the actual scale points and their anchors are typed out. For this study, the rating scale is:

Please read the vignette below. How much would you expect the price of your meal to be?
1 = 25% lower … 9 = 25% higher

Step 5 (Panel E)

Show the actual vignette. This is not part of the set-up, but shows what the vignette looks like on a computer tablet or a PC. There is a slightly different 'look' for a smartphone, due to the difference in screen size and dimensions. Each respondent evaluates a unique set of 24 vignettes, comprising either two, three, or four elements, viz., answers. A vignette can contain at most one answer from each question, never two or more. This simple bookkeeping device ensures that the respondent will never be presented with a vignette comprising mutually contradictory elements, at least contradictory in the sense of presenting different alternatives to the same question.

Each respondent evaluated a different set of 24 vignettes, created by permuting the basic experimental design [12]. This strategy maintains the power of an experimental design even at the level of the individual respondent, while ensuring that each respondent evaluated a unique set of 24 vignettes. Two benefits emerge: the first is the ability to analyze the data by creating a model at the level of the individual (important for clustering), and the second is that the study covers a wide range of possible combinations and thus needs absolutely no prior knowledge about the most promising combinations to test.
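A minimal sketch of this permutation strategy, under our own simplifying assumptions (a randomly generated stand-in for the published base design [12]): each respondent's set is produced by relabeling the four answers within each question with a random permutation, so every respondent sees a structurally identical but combinatorially distinct set of 24 vignettes.

```python
import random

random.seed(1)
questions = ["A", "B", "C", "D"]

def random_vignette():
    """A vignette as (question, answer-index) pairs: 2-4 questions, one answer each."""
    qs = random.sample(questions, k=random.randint(2, 4))
    return tuple(sorted((q, random.randrange(4)) for q in qs))

# Base design: 24 distinct vignettes (a hypothetical stand-in for the published design).
base_design = set()
while len(base_design) < 24:
    base_design.add(random_vignette())
base_design = [dict(v) for v in base_design]

def permuted_design(base):
    """Relabel the four answers within each question by a random permutation,
    giving each respondent a structurally identical but unique vignette set."""
    perm = {q: random.sample(range(4), 4) for q in questions}
    return [{q: perm[q][a] for q, a in vignette.items()} for vignette in base]

respondent_design = permuted_design(base_design)
print(respondent_design[0])   # e.g. {'A': 2, 'C': 0, 'D': 3}
```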

Step 6

Create the database (Table 2). The project generates 24 rows of data for each respondent. An example of the database appears in Table 2, with the table transposed for presentation purposes. The data are set up for immediate statistical analysis.

Table 2: Example of the database prepared for analysis. The actual matrix format for data analysis is transposed 90 degrees.

Panelist | Each respondent has a unique identification number (UID) | 1 2 3 4 5 6
Row in database | The 30 respondents generate 24 rows of data each, one row per vignette | 246 407 478 583 642 678
Gender | Male or female, obtained from an up-front classification question | Fem Male Fem Male Fem Male
Age | The respondent gives year of birth | 36 23 19 24 23 20
Age group | After-the-fact grouping into two ages | Old Old Young Old Old Young
Self-profiling | Who do you relate to most in a restaurant setting? (1 = wait staff (food); 2 = owner; 3 = bus (drinks, setup, cleanup); 4 = cashier/host) | 1 1 1 1 3 1
Test order | The computer records the order of trial | 12 23 17 6 18 22
A1 | Each element is coded 1 when appearing in the vignette, 0 when absent (likewise A2-D4 below) | 0 0 0 0 0 0
A2 |  | 0 1 0 0 0 0
A3 |  | 0 0 0 0 0 0
A4 |  | 0 0 0 0 0 1
B1 |  | 0 0 1 0 0 0
B2 |  | 0 0 0 0 0 0
B3 |  | 1 0 0 1 1 0
B4 |  | 0 0 0 0 0 1
C1 |  | 0 0 0 0 0 1
C2 |  | 1 0 0 0 1 0
C3 |  | 0 0 1 1 0 0
C4 |  | 0 0 0 0 0 0
D1 |  | 0 1 1 0 0 0
D2 |  | 0 0 0 0 0 0
D3 |  | 1 0 0 0 1 0
D4 |  | 0 0 0 1 0 0
Rating | The 9-point rating scale anchored at 1 (25% lower) and 9 (25% higher) | 8 4 8 6 7 2
Price | The rating transformed to percentage departure from 0 (see Step 7) | 19 -6 19 6 13 -19
Rtseconds | Response time to the vignette, to the nearest 10th of a second | 1.2 0.8 5.0 4.9 1.1 4.0
Clusters2 | Membership in one of the two clusters | 1 2 1 1 2 2
Clusters3 | Membership in one of the three clusters | 1 3 1 3 2 2

Step 7: Convert the Data to Percent

The nine ratings of price are transformed to relative price, with a rating of 9 transformed to +25 (25% higher), a rating of 5 transformed to 0 (same expected price), and a rating of 1 transformed to -25 (25% lower).
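The transform is linear in the rating. Here is a short sketch of our reading of this mapping (the paper does not print a formula), which reproduces the Price row of Table 2:

```python
import math

def rating_to_percent(rating: int) -> float:
    """Map the 9-point scale linearly to percent departure: 1 -> -25, 5 -> 0, 9 -> +25."""
    return 6.25 * (rating - 5)

# Reproduces the Price row of Table 2 (half-up rounding to whole percent):
percents = [math.floor(rating_to_percent(r) + 0.5) for r in (8, 4, 8, 6, 7, 2)]
print(percents)  # [19, -6, 19, 6, 13, -19]
```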

Step 8

Build separate equations for the predefined groups (total, gender, age). The data from each of these self-defined groups were analyzed to create an equation of the form:

Percent Departure of Check from Typical (+25 to -25) = k1(A1) + k2(A2) + … + k16(D4)

The coefficient for an element is the relative change (percent) in the size of the check when the element is inserted into the vignette: an increase in the expected check when positive, a decrease when negative.

Table 3 shows the coefficients for the different groups. We highlight only those elements which generate positive or negative changes of 8% or more in the check. The interesting finding from this first group of respondents is that no element strongly drives the value of the check up or down: no coefficient reaches 8 or higher, viz., no element can be said to be a major driver of the check price.

Step 9

Create new-to-the-world mind-sets by dividing the respondents into groups based upon the patterns of their coefficients. Each respondent generates an equation with 16 coefficients, relating the presence/absence of the 16 elements to the percent change expected for the check (a 25% increase in the check is shown as +25, a 25% decrease as -25). The pattern of coefficients allows the use of k-means clustering [7]. The clustering program computes a measure of 'distance' between pairs of respondents, defined as D = 1 - R, where R is the Pearson correlation. The Pearson correlation measures the strength of a linear relation between two variables, here based upon the 16 coefficients per respondent. It varies from a high of +1 when two variables are perfectly linearly related, to 0 when they are unrelated, to -1 when they are perfectly inversely related.
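A sketch of this clustering step, under our own implementation choices rather than the authors' software: per-respondent coefficients are estimated by OLS, each 16-coefficient vector is z-scored, and k-means is run on the standardized vectors. For z-scored vectors, squared Euclidean distance is proportional to 1 - R, so this reproduces the correlation-based distance described above. The respondent data here are simulated.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_respondents, n_elements, n_vignettes = 30, 16, 24

coefficients = []
for _ in range(n_respondents):
    # Simulated per-respondent design matrix and percent ratings (stand-ins).
    X = (rng.random((n_vignettes, n_elements)) < 0.2).astype(float)
    y = rng.uniform(-25, 25, n_vignettes)
    k, *_ = np.linalg.lstsq(X, y, rcond=None)   # 16 coefficients per respondent
    coefficients.append(k)
coefficients = np.array(coefficients)

# z-score each respondent's 16 coefficients; for z-scored rows,
# squared Euclidean distance = 2 * 16 * (1 - Pearson R), so k-means on
# these rows clusters by the (1 - R) distance described in the text.
z = (coefficients - coefficients.mean(axis=1, keepdims=True)) \
    / coefficients.std(axis=1, keepdims=True)

mindsets = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)
print(mindsets)   # mind-set assignment (0, 1, or 2) for each respondent
```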

Step 10

Create the models for the two- and three-cluster solutions emerging from the clustering. The segmentation or clustering knows nothing about the 'meaning' of the elements; it simply works with the coefficients and the distance values. The clustering yields five new models: two for the two mind-sets and three for the three mind-sets. It is the task of the researcher to name these mind-sets, based upon the pattern of strong-performing positive elements; a small sketch of tabulating those elements follows. We present only the results for the three mind-sets.
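To aid the naming step, one can tabulate, for each mind-set, the elements whose mean coefficients clear the ±8 percent threshold used in the results below. A hypothetical sketch, continuing from the previous block:

```python
# Continuing from the previous sketch: average the coefficients within each
# mind-set, then list the elements that clear the +/-8 percent threshold.
element_names = [f"{q}{i}" for q in "ABCD" for i in range(1, 5)]

for m in range(3):
    mean_k = coefficients[mindsets == m].mean(axis=0)
    strong = [(n, round(c, 1)) for n, c in zip(element_names, mean_k) if abs(c) >= 8]
    print(f"Mind-Set {m + 1}: {strong or 'no strong elements'}")
```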

Results – Study #1 (Traits of Servers and Manager)

The first analysis comprises the deconstruction of the relative price based on the traits of the staff. Table 3 shows that nearly all elements increase the expected bill, but each only to a small degree. No element stands out as a strong contributor to the magnitude of the check, at least when respondents are classified by gender or by age.

Table 3: Study #1: How the traits and behaviors of the server and the manager drive the relative size of the check. Numbers in the cells are the increment or decrement of the size of the check, expressed as percent, attributable to the element.


The respondent who is instructed to assign monetary value to a situation (so-called homo economicus) is often more conservative than the respondent who is instructed to assign a rating of a feeling, and these data suggest such a conservative response. For the Total Panel, the highest contribution to the check is only 3.6% (server assists manager: shows up to work on time on a consistent basis), and the lowest contribution is -1.9% (server assists manager: shows up with a can-do, team player attitude). The contributions are similarly small for the subgroups defined by gender and by age. At least for the total panel and for these key subgroups, there is no clear relation between the positive behavior of the staff, their interaction, and the price of the check. We see a clearer set of contributions when we divide the respondents into 'mind-sets' based upon the pattern of their coefficients for the relative price, rather than by who they are (mind-sets versus conventional geo-demographic subgroups). Yet, as Table 3 shows for all the data, and Table 4 shows for the strong-performing elements by mind-set, there are still very few elements which drive an expectation of a large increment or decrement of the check.

Table 4: Study #1: How the traits of the server and the manager drive the relative size of the check. Data from the strongest elements for the three mind-sets.


The division of respondents into three mind-sets suggests that:

Mind-Set 1

No clear elements drive change in the size of the check.

Mind-Set 2

Associates warm service with a higher check, associates manager involvement with a lower check. It may be that these respondents feel that any focus on the server’s personality will increase the check.

Mind-Set 3

Expects to pay more for a server who does the job. Expects to pay less for a server who is friendly, and with whom the customer identifies.

Study 1, on the traits of the server and manager, suggests that, in contrast to homo emotionalis, who can be shown to have expansive feelings once the mind-sets are separated, homo economicus still shows a constrained range of feelings, even when the different mind-sets are identified by the same clustering method, k-means.

Study #2 (Behavior of Staff as the Customer Enters the Restaurant)

The second study moves to what the customer might observe when walking into the restaurant, but before the customer has been seated. We see no clear relation between the incremental or decremental size of the check and staff behavior at the entrance to the restaurant (Table 5).

Table 5: Study #2: How the behavior of the staff at the time of customer entrance to the restaurant drives the relative size of the check. Numbers in the cells are the increment or decrement of the size of the check, expressed as percent, attributable to the element.

table 5

The key differences which emerge come from the three mind-sets (Table 6).

Table 6: Study #2: How the behavior of the staff at the time of customer entrance to the restaurant drives the relative size of the check. Data from the strongest elements for the three mind-sets.

table 6

Mind-Set 1 appears to expect to pay more for staff who look busy, whether they are harmoniously busy or not. Mind-Set 1 appears to expect to pay less for staff seemingly eager to wait on the customer.

Mind-Set 2 expects to pay more when the staff look busy.

Mind-Set 3 expects to pay more when the staff look competent and resolve a problem. Mind-Set 3 expects to pay less for incompetent service.

Study #2 reaffirms that when the respondent is asked to use economics, specifically money, as a measure of something not usually appraised in economic terms, viz., behavior and service, homo economicus takes over and forces the respondent into a conservative, judgmental stance. No elements emerge as dramatically strong drivers of the magnitude of the check.

Study #3 – Dollar Value of Description of the Interaction Between Server and Customer

Study #3 was run exactly as Studies 1 and 2. This time, however, the topic was the interaction of the server with the customer. Once again, no patterns emerge for the total panel or for the key subgroups of gender and age (Table 7). The key results emerge for the mind-sets (Table 8).

Table 7: Study #3: How the interaction of the server with the customer drives the relative size of the check. Numbers in the cells are the increment or decrement of the size of the check, expressed as percent, attributable to the element.

table 7

Table 8: Study #3: How the interaction of the server with the customer drives the relative size of the check. Data by mind-set.

table 8

Mind-Set 1

Focus on the customer generates an expectation of a higher check. Focus on the server generates an expectation of a lower check.

Mind-Set 2

Weak effects; no strong expectations in either direction.

Mind-Set 3

Focus on incompetence drives the expectation of a slightly higher check.

Again, in contrast to homo emotionalis, we see that homo economicus is far more conservative, especially when there is attribution without clear linkage, rather than evaluation with clear linkage. An example of evaluation with clear linkage would be the expectation of the price of the check when the messages deal with the actual food, rather than the service.

Beyond Cognitive Responses of Homo Economicus to a Focus on Engagement Time (Response Time)

The second aspect of the analysis involves the amount of time that a respondent spends making a decision. The data from the deconstruction suggest that the respondent is conservative, at least at a conscious level. At the level of the unconscious, however, can we discover anything more about homo economicus and attribution? That is, if we are able to measure the time needed to make a decision, do we learn anything more? Or, in fact, is attribution more elusive? One of the features of the Mind Genomics system is the ability to measure response times, defined as the number of seconds between the time the vignette appears on the screen and the time that the respondent assigns a rating. The response time shortens and reaches a steady state after 2-3 experiences with the task. Since each respondent evaluated all of the elements in different combinations, and each element appeared many times in each position, one need not eliminate the first 1-3 vignettes. They can simply be included, because the slow responses should distribute themselves approximately equally across all respondents and all elements.

The respondents could not have known their own response times for each element, for three reasons:

  1. The respondent was not aware that the response time was being measured
  2. There was too much to do when evaluating 24 vignettes
  3. Each vignette comprised 2-4 elements.

The response time is measured for the vignette as a whole. Any response time of 9 seconds or longer was defined as 9 seconds. The permuted randomization of the experimental designs made it unlikely that the vignettes requiring 9 seconds or longer would systematically comprise the same elements.

Figure 2 shows the distribution of the response times for the vignettes in each study. The three histograms are plotted vertically, allowing the eye to compare their shapes. The response times tend to be longest when the task is to attribute the relative price of the check to the traits of the server and the manager, and shortest when the task is to attribute the relative price of the check to the interaction of the server with the customer. These patterns make intuitive sense: the respondent can identify with the situation of the server interacting with the customer, so there is little to think about, and the reaction is quick because the situation is familiar.

fig 2

Figure 2: Histograms of the frequencies of the response times for the vignettes. Each graph pertains to one study.

A deeper look into the data reveals the number of seconds that can be ascribed to each element. The analysis is similar to the previous analysis linking the presence/absence of the elements to the relative magnitude of the check (1 = 25% less to 9 = 25% more). This time, the dependent variable is the response time, to the nearest tenth of a second. The equation showing the deconstruction of the response time once again has no additive constant: Response Time (seconds) = k1(A1) + k2(A2) + … + k16(D4)
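The sketch below illustrates this deconstruction with placeholder data: response times are capped at 9 seconds, as described above, and regressed against the presence/absence of the 16 elements with no additive constant, i.e., least squares through the origin. The design matrix, noise level, and coefficient values are all hypothetical.

    # Minimal sketch with hypothetical data: deconstruct capped response
    # times into per-element seconds, with no additive constant, i.e.,
    # Response Time = k1(A1) + k2(A2) + ... + k16(D4).
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.integers(0, 2, size=(24, 16)).astype(float)  # presence/absence of A1..D4 in 24 vignettes
    true_k = rng.uniform(0.2, 1.5, 16)                   # placeholder per-element times
    rt = np.minimum(X @ true_k + rng.normal(0, 0.3, 24), 9.0)  # cap at 9 seconds

    # Least squares through the origin: no intercept term in the model.
    k, *_ = np.linalg.lstsq(X, rt, rcond=None)
    print(np.round(k, 2))  # estimated seconds attributable to each element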

Table 9 shows the combinations of element and subgroup for elements defined as ‘engaging the respondent.’ In this study, engagement is operationally defined as an element whose deconstructed response time is 1.4 seconds or longer. The threshold of 1.4 seconds is an operational definition of engagement, emerging from the analysis of hundreds of studies of this type. Typical deconstructed response times for elements are generally 0.3 to 0.7 seconds, although they vary with the seriousness of the topic. Thus 1.4 seconds is a safe estimate for an element which engages, albeit an estimate of convenience, since there is no agreed-upon threshold separating engagement from mere response time.
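Continuing the sketch above, one would then flag as ‘engaging’ any element whose deconstructed response time reaches the 1.4-second threshold. The estimates below are placeholders, not values from the study.

    # Hypothetical sketch: keep only elements whose estimated response time
    # meets the 1.4-second operational threshold for engagement.
    import numpy as np

    k = np.array([0.4, 1.6, 0.7, 1.4, 0.5, 0.3, 0.9, 1.1,
                  0.2, 0.6, 1.5, 0.8, 0.4, 0.7, 0.3, 1.2])   # placeholder estimates (seconds)
    names = [f"{g}{i}" for g in "ABCD" for i in range(1, 5)]  # elements A1..D4
    engaging = {n: t for n, t in zip(names, k) if t >= 1.4}
    print(engaging)  # {'A2': 1.6, 'A4': 1.4, 'C3': 1.5}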

Table 9: Response times (engagement) to individual elements by respondents in key subgroups. Only those elements generating response times of 1.4 seconds or more are shown in the table.

table 9

Table 9 suggests that there are some elements which engage the respondent for dramatically longer times.

The total panel shows no long engagement times.

Males engage with the elements about assisting, whether the server assists the manager or the manager assists the server. In contrast, females engage with the element talking about a negative end to the meal.

Younger respondents engage with assistance as well, whether positive or negative. They also respond to elements talking about the nature of the service. Older respondents do not engage with any element.

Mind-Set 1 engages with all types of elements, positive and negative, and at all stages of the staff-customer interaction.

Mind-Set 2 engages with speed of service (‘beeline’).

Mind-Set 3 engages most with the staff being busy.

The use of response time reveals a somewhat more detailed story, suggesting that the attribution of dollar value to staff behavior may not reveal itself as much in the conscious evaluation of ‘how much money will change hands’ but rather in the unconscious variation in engagement time (response time to individual elements).

Discussion and Conclusion

The emerging science of Mind Genomics has been previously used to understand how people respond in an emotional fashion to the description of features and attributes of products and situations [13], as well as to understand the dollar value of features and products [10,11]. The approach here moves from the evaluation of concrete descriptions of products and situations to the attribution of value to situations which have no intrinsic value in and of themselves. The introductory studies here address the atmosphere and behavior of the service and managerial staff in a restaurant, and the value attributed to such service through one economic indicator, the magnitude of the check.

The data suggest that it is difficult to link economics (e.g., the value of the check) to behavior which is not directly related to the product. The Mind Genomics experiment works, at least in practice. What emerges, however, is a greatly constricted pattern, a conservatism which does not show itself when one is rating the concrete situation based on feelings, or when one is rating the dollar value of a tangible item or a clearly defined service for which one will pay. The implications of this study are great. We live in an economic society where the focus is on customer satisfaction, and on the expected economic returns of customer satisfaction. These data suggest that such efforts may be more difficult than one might think. It is all well and good to measure the satisfaction of customers, but just how does that translate into what people will pay? The data from this study suggest that the results of a Mind Genomics study might not be very clear, whether the study deals with the evaluation of a situation without a customer (Study #1: Traits of Server and Manager), the evaluation of a situation where the customer is being introduced into the situation (Study #2: Staff Behavior as Customer Walks In), or even the evaluation of a situation describing the interaction with the staff (Study #3: Interaction of Server and Customer). Or, to summarize: how then do we measure the dollar value of customer satisfaction? What have we missed?

References

  1. Han H, Ryu K (2009) The Roles of the Physical Environment, Price Perception, and Customer Satisfaction in Determining Customer Loyalty in the Restaurant Industry. Journal of Hospitality & Tourism Research 33: 487-510.
  2. Namkung Y, Jang S (2007) Does Food Quality Really Matter in Restaurants? Its Impact On Customer Satisfaction and Behavioral Intentions. Journal of Hospitality & Tourism Research 31: 387-409.
  3. Qin H, Prybutok VR (2009) Service quality, customer satisfaction, and behavioral intentions in fast-food restaurants. International Journal of Quality and Service Sciences 1: 78-95.
  4. Ryu K, Han H (2010) Influence of the Quality of Food, Service, and Physical Environment on Customer Satisfaction and Behavioral Intention in Quick-Casual Restaurants: Moderating Role of Perceived Price. Journal of Hospitality & Tourism Research 34: 310-329.
  5. Jang S, Liu Y, Namkung Y (2011) Effects of authentic atmospherics in ethnic restaurants: investigating Chinese restaurants. International Journal of Contemporary Hospitality Management 23: 662-680.
  6. Teng CC (2011) Commercial hospitality in restaurants and tourist accommodation: Perspectives from international consumer experience in Scotland. International Journal of Hospitality Management 30: 866-874.
  7. Zemel R, Choudhuri SG, Gere A, Upreti H, Deite Y, et al. (2019) Mind, consumers, and dairy: Applying artificial intelligence, mind genomics, and predictive viewpoint typing.
  8. Moskowitz H, Baum E, Rappaport S, Gere A (2019) Estimated Stock Price Based on Company Communications: Mind Genomics and Cognitive Economics as Knowledge-Creation Tools for Behavioral Finance. Edelweiss Applied Science and Technology 4: 60-69.
  9. Moskowitz H, Rappaport S, Moskowitz D, Porretta S, Velema B, et al. (2017) Chapter 14 – Product design for bread through mind genomics and cognitive economics. In D. Bagchi & S. Nair (Eds.), Developing New Functional Food and Nutraceutical Products 249-278.
  10. Moskowitz HR (2012) ‘Mind genomics’: the experimental, inductive science of the ordinary, and its application to aspects of food and feeding. Physiol Behav 107: 606-613.
  11. Moskowitz HR (1995) The dollar value of product quality: The effect of pricing versus overall liking on consumer stated purchase intent for pizza. Journal of Sensory Studies 10: 239-247.
  12. Gofman A, Moskowitz H (2010) Isomorphic Permuted Experimental Designs and Their Application in Conjoint Analysis. Journal of Sensory Studies 25: 127-145.
  13. Gere A, Harizi A, Bellissimo N, Roberts D, Moskowitz H (2020) Creating a mind genomics wiki for non-meat analogs. Sustainability 12: 5352.