ABC News Watch
An Honest Climate Debate
And Then There's Physics
Australian Climate Madness
Bishop Hill
Bob Tisdale - Climate Observations
C3 Headlines
CACA
CFACT
Chew The Fat
Climate Audit
Climate Change Dispatch
Climate Common Sense
Climate Conferences - Heartland
Climate Conversation Group
Climate Depot
Climate Edinburgh
Climate Etc.
Climate Lessons
Climate of Sophistry
Climate Physics
Climate Realists
Climate Resistance
Climate Sanity
Climate Science: Roger Pielke Sr.
Climate Skeptic
climatefraudwatcher
Climategate2009
Climatesense-norpag
Clive Best
Co2 Insanity
CO2 Science
Co2sceptics.com
Deep Climate
Dr. Roy Spencer, PhD
Dr. Tim Ball
ecomyths
Enthusiasm, Scepticism and Science
Errors in IPCC climate science
Geoffchambers's Blog
Global Climate Scam
Global Warming Hoax
Global Warming or is it Global Cooling?
Global Warming Science
GlobalWarming.org
Gore Lied
Green Hell Blog
Greenhouse Bullcrap
Greenie Watch
Grumpy Denier
Gust Of Hot Air
Harmless Sky
Heliogenic Climate Change
Ice Age Now
James Taylor - Forbes
Jennifer Marohasy
Jo Nova
Junk Science
Kiwi Thinker
Klimazwiebel
Maribo
MasterResource
Minnesotans For Global Warming
Musings from the Chiefio
News Busters
No Cap And Trade
No Frakking Consensus
No Tricks Zone
NOAA/ESRL Trends in CO2
Not A Lot Of People Know That
Notes on a Scandal
Omnologos
pindanpost
Planet Gore
Plants Need CO2
Polar Bear Science
Policlimate
Principia Scientific International
Real Science
Really Real Climate
RhymeAfterRhyme
Roger Pielke Jr.'s Blog
ScottishSceptic
Talking About the Weather
Tallbloke's Talkshop
The Air Vent
The Australian Climate Sceptics
The Big Green Lie
The Carbon Sense Coalition
The Cosmic Tusk
The Global Warming Challenge
The Global Warming Policy Foundation
The Hockey Schtick
The Inconvenient Skeptic
The IPCC Report
The Next Grand Minimum
The Reference Frame
The Resilient Earth
The Science of Doom
The SPPI Blog
The View From Here
Tom Nelson
Tory Aardvark
Troy's Scratchpad
Trust, yet verify
VK3BBRs Blog
Watts Up With That

1000frolly
Albertkallal
Bob Tisdale
Bushvision
Camguy58
Cato Institute
CFACT
Climate Central
Climate Resistance
Climate Review
Climate Scam
ClimateGateExposed
Climatism
CO2 Is Green
CO2 Science
Coyote Blog
Dr David Evans
Friends of Science
Galileo Movement
Global Stewardship
I Love Carbondioxide
LibertyInOurTime
MagicJava TV
Michael Coffman
Minnesotans For Global Warming
No Cap And Trade Group
Not Evil Just Wrong
Plants Need CO2
Question The Hype
ShinyChuck
SkepticsSpeakOut
Stefan Molyneux
Steve Goreham
Taxing Air
The GWPF
The Heartland Institute
The Independent Institute
Tom Harris ICSC
wakeup2thelies
Weather Action TV

ICCC1
ICCC2
ICCC3
ICCC4
ICCC5
ICCC6
ICCC7
ICCC8
Richard Lindzen, PhD
Roy Spencer, PhD

The Hockey Schtick

If you can't explain the 'pause', you can't explain the cause...

Physicist Dr. Fred Singer: The Sea Is Rising, but Not Because of Climate Change
Publish: Tue 15 May 2018 - 11:46 PM
Website: THE HOCKEY SCHTICK
Twitter: @hockeyschtick1
Source: View Original

There is nothing we can do about it, except to build dikes and sea walls a little bit higher.


By Fred Singer
Of all known and imagined consequences of climate change, many people fear sea-level rise most. But efforts to determine what causes seas to rise are marred by poor data and disagreements about methodology. The noted oceanographer Walter Munk referred to sea-level rise as an “enigma”; it has also been called a riddle and a puzzle.
It is generally thought that sea-level rise accelerates mainly by thermal expansion of sea water, the so-called steric component. But by studying a very short time interval, it is possible to sidestep most of the complications, like “isostatic adjustment” of the shoreline (as continents rise after the overlying ice has melted) and “subsidence” of the shoreline (as ground water and minerals are extracted).
I chose to assess the sea-level trend from 1915-45, when a genuine, independently confirmed warming of approximately 0.5 degree Celsius occurred. I note particularly that sea-level rise is not affected by the warming; it continues at the same rate, 1.8 millimeters a year, according to a 1990 review by Andrew S. Trupin and John Wahr. I therefore conclude—contrary to the general wisdom—that the temperature of sea water has no direct effect on sea-level rise. That means neither does the atmospheric content of carbon dioxide.
This conclusion is worth highlighting: It shows that sea-level rise does not depend on the use of fossil fuels. The evidence should allay fear that the release of additional CO2 will increase sea-level rise.
But there is also good data showing sea levels are in fact rising at a constant rate. The trend has been measured by a network of tidal gauges, many of which have been collecting data for over a century.
The cause of the trend is a puzzle. Physics demands that water expand as its temperature increases. But to keep the rate of rise constant, as observed, expansion of sea water evidently must be offset by something else. What could that be? I conclude that it must be ice accumulation, through evaporation of ocean water, and subsequent precipitation turning into ice. Evidence suggests that accumulation of ice on the Antarctic continent has been offsetting the steric effect for at least several centuries.
It is difficult to explain why evaporation of seawater produces approximately 100% cancellation of expansion. My method of analysis considers two related physical phenomena: thermal expansion of water and evaporation of water molecules. But if evaporation offsets thermal expansion, the net effect is of course close to zero. What then is the real cause of sea-level rise of 1 to 2 millimeters a year?
Melting of glaciers and ice sheets adds water to the ocean and causes sea levels to rise. (Recall though that the melting of floating sea ice adds no water to the oceans, and hence does not affect the sea level.) After the rapid melting away of northern ice sheets, the slow melting of Antarctic ice at the periphery of the continent may be the main cause of current sea-level rise.
All this, because it is much warmer now than 12,000 years ago, at the end of the most recent glaciation. Yet there is little heat available in the Antarctic to support melting.
We can see melting happening right now at the Ross Ice Shelf of the West Antarctic Ice Sheet. Geologists have tracked Ross’s slow disappearance, and glaciologist Robert Bindschadler predicts the ice shelf will melt completely within about 7,000 years, gradually raising the sea level as it goes.
Of course, a lot can happen in 7,000 years. The onset of a new glaciation could cause the sea level to stop rising. It could even fall 400 feet, to the level at the last glaciation maximum 18,000 years ago.
Currently, sea-level rise does not seem to depend on ocean temperature, and certainly not on CO2. We can expect the sea to continue rising at about the present rate for the foreseeable future. By 2100 the seas will rise another 6 inches or so—a far cry from Al Gore’s alarming numbers. There is nothing we can do about rising sea levels in the meantime. We’d better build dikes and sea walls a little bit higher.
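For readers who want to check the closing figure, the arithmetic is straightforward; the short Python sketch below is ours (not part of the op-ed) and simply extrapolates the 1.8 millimeters-a-year rate quoted above out to 2100.

    # Extrapolate the constant 1.8 mm/yr rate quoted in the article to 2100.
    RATE_MM_PER_YR = 1.8          # rate cited from the Trupin and Wahr (1990) review
    YEARS = 2100 - 2018           # years remaining from the article's publication
    MM_PER_INCH = 25.4

    rise_mm = RATE_MM_PER_YR * YEARS
    print(f"{rise_mm:.0f} mm, or {rise_mm / MM_PER_INCH:.1f} inches, by 2100")
    # about 148 mm, i.e. roughly 6 inches, consistent with the article's estimate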
Mr. Singer is a professor emeritus of environmental science at the University of Virginia. He founded the Science and Environmental Policy Project and the Nongovernmental International Panel on Climate Change.
Appeared in the May 16, 2018, print edition.

New Insights on the Physical Nature of the Atmospheric Greenhouse Effect Deduced from an Empirical Planetary Temperature Model
Publish: Sun 24 Sep 2017 - 2:34 PM
Website: THE HOCKEY SCHTICK
Twitter: @hockeyschtick1
Source: View Original

New Insights on the Physical Nature of the Atmospheric Greenhouse Effect Deduced from an Empirical Planetary Temperature Model

Ned Nikolov* and Karl Zeller
Ksubz LLC, 9401 Shoofly Lane, Wellington CO 80549, USA
Corresponding Author:
Ned Nikolov
Ksubz LLC, 9401 Shoofly Lane
Wellington CO 80549, USA
Tel: 970-980-3303, 970-206-0700
E-mail: ntconsulting@comcast.net
Received date: November 11, 2016; Accepted date: February 06, 2017; Published date: February 13, 2017
Citation: Nikolov N, Zeller K (2017) New Insights on the Physical Nature of the Atmospheric Greenhouse Effect Deduced from an Empirical Planetary Temperature Model. Environ Pollut Climate Change 1:112.
Copyright: © 2017 Nikolov N, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

A recent study has revealed that the Earth’s natural atmospheric greenhouse effect is around 90 K or about 2.7 times stronger than assumed for the past 40 years. A thermal enhancement of such a magnitude cannot be explained with the observed amount of outgoing infrared long-wave radiation absorbed by the atmosphere (i.e. ≈ 158 W m-2), thus requiring a re-examination of the underlying Greenhouse theory. We present here a new investigation into the physical nature of the atmospheric thermal effect using a novel empirical approach toward predicting the Global Mean Annual near-surface equilibrium Temperature (GMAT) of rocky planets with diverse atmospheres. Our method utilizes Dimensional Analysis (DA) applied to a vetted set of observed data from six celestial bodies representing a broad range of physical environments in our Solar System, i.e. Venus, Earth, the Moon, Mars, Titan (a moon of Saturn), and Triton (a moon of Neptune). Twelve relationships (models) suggested by DA are explored via non-linear regression analyses that involve dimensionless products comprised of solar irradiance, greenhouse-gas partial pressure/density and total atmospheric pressure/density as forcing variables, and two temperature ratios as dependent variables. One non-linear regression model is found to statistically outperform the rest by a wide margin. Our analysis revealed that GMATs of rocky planets with tangible atmospheres and a negligible geothermal surface heating can accurately be predicted over a broad range of conditions using only two forcing variables: top-of-the-atmosphere solar irradiance and total surface atmospheric pressure. The hereto discovered interplanetary pressure-temperature relationship is shown to be statistically robust while describing a smooth physical continuum without climatic tipping points. This continuum fully explains the recently discovered 90 K thermal effect of Earth’s atmosphere. The new model displays characteristics of an emergent macro-level thermodynamic relationship heretofore unbeknown to science that has important theoretical implications. A key entailment from the model is that the atmospheric ‘greenhouse effect’ currently viewed as a radiative phenomenon is in fact an adiabatic (pressure-induced) thermal enhancement analogous to compression heating and independent of atmospheric composition. Consequently, the global down-welling long-wave flux presently assumed to drive Earth’s surface warming appears to be a product of the air temperature set by solar heating and atmospheric pressure. In other words, the so-called ‘greenhouse back radiation’ is globally a result of the atmospheric thermal effect rather than a cause for it. Our empirical model has also fundamental implications for the role of oceans, water vapour, and planetary albedo in global climate. Since produced by a rigorous attempt to describe planetary temperatures in the context of a cosmic continuum using an objective analysis of vetted observations from across the Solar System, these findings call for a paradigm shift in our understanding of the atmospheric ‘greenhouse effect’ as a fundamental property of climate.

Keywords

Greenhouse effect; Emergent model; Planetary temperature; Atmospheric pressure; Greenhouse gas; Mars temperature

Introduction

In a recent study Volokin et al. [1] demonstrated that the strength of Earth’s atmospheric Greenhouse Effect (GE) is about 90 K instead of 33 K as presently assumed by most researchers, e.g. [2-7]. The new estimate corrected a long-standing mathematical error in the application of the Stefan–Boltzmann (SB) radiation law to a sphere pertaining to Hölder’s inequality between integrals. Since the current greenhouse theory strives to explain GE solely through a retention (trapping) of outgoing long-wavelength (LW) radiation by atmospheric gases [2,5,7-10], a thermal enhancement of 90 K creates a logical conundrum, since satellite observations constrain the global atmospheric LW absorption to 155–158 W m-2 [11-13]. Such a flux might only explain a surface warming up to 35 K. Hence, more than 60% of Earth’s 90 K atmospheric effect appears to remain inexplicable in the context of the current theory. Furthermore, satellite- and surface-based radiation measurements have shown [12-14] that the lower troposphere emits 42-44% more radiation towards the surface (i.e., 341-346 W m-2) than the net shortwave flux delivered to the Earth-atmosphere system by the Sun (i.e., 240 W m-2). In other words, the lower troposphere contains significantly more kinetic energy than expected from solar heating alone, a conclusion also supported by the new 90 K GE estimate. A similar but more extreme situation is observed on Venus as well, where the atmospheric downwelling LW radiation near the surface (>15,000 W m-2) exceeds the total absorbed solar flux (65–150 W m-2) by a factor of 100 or more [6]. The radiative greenhouse theory cannot explain this apparent paradox considering the fact that infrared-absorbing gases such as CO2, water vapor and methane only re-radiate available LW emissions and do not constitute significant heat storage or a net source of additional energy to the system. This raises a fundamental question about the origin of the observed energy surplus in the lower troposphere of terrestrial planets with respect to the solar input. The above inconsistencies between theory and observations prompted us to take a new look at the mechanisms controlling the atmospheric thermal effect.
We began our study with the premise that processes controlling the Global Mean Annual near-surface Temperature (GMAT) of Earth are also responsible for creating the observed pattern of planetary temperatures across the Solar System. Thus, our working hypothesis was that a general physical model should exist, which accurately describes GMATs of planets using a common set of drivers. If so, then such a model would also reveal the forcing behind the atmospheric thermal effect.
Instead of examining existing mechanistic models such as 3-D GCMs, we decided to try an empirical approach not constrained by a particular physical theory. An important reason for this was the fact that current process-oriented climate models rely on numerous theoretical assumptions while utilizing planet-specific parameterizations of key processes such as vertical convection and cloud nucleation in order to simulate the surface thermal regime over a range of planetary environments [15]. These empirical parameterizations oftentimes depend on detailed observations that are not typically available for planetary bodies other than Earth. Hence, our goal was to develop a simple yet robust planetary temperature model of high predictive power that does not require case-specific parameter adjustments while successfully describing the observed range of planetary temperatures across the Solar System.

Methods and Data

In our model development we employed a ‘top-down’ empirical approach based on Dimensional Analysis (DA) of observed data from our Solar System. We chose DA as an analytic tool because of its ubiquitous past successes in solving complex problems of physics, engineering, mathematical biology, and biophysics [16-21]. To our knowledge DA has not previously been applied to constructing predictive models of macro-level properties such as the average global temperature of a planet; thus, the following overview of this technique is warranted.
Dimensional analysis background
DA is a method for extracting physically meaningful relationships from empirical data [22-24]. The goal of DA is to restructure a set of original variables deemed critical to describing a physical phenomenon into a smaller set of independent dimensionless products that may be combined into a dimensionally homogeneous model with predictive power. Dimensional homogeneity is a prerequisite for any robust physical relationship such as natural laws. DA distinguishes between measurement units and physical dimensions. For example, mass is a physical dimension that can be measured in gram, pound, metric ton etc.; time is another dimension measurable in seconds (s), hour (h), years, etc. While the physical dimension of a variable does not change, the units quantifying that variable may vary depending on the adopted measurement system.
Many physical variables and constants can be described in terms of four fundamental dimensions, i.e., mass [M], length [L], time [T], and absolute temperature [Θ]. For example, an energy flux commonly measured in W m-2 has a physical dimension [M T-3] since 1 W m-2=1 J s-1 m-2=1 (kg m2 s-2) s-1 m-2=1 kg s-3. Pressure may be reported in units of Pascal, bar, atm., PSI or Torr, but its physical dimension is always [M L-1 T-2] because 1 Pa=1 N m-2=1 (kg m s-2) m-2=1 kg m-1 s-2. Thinking in terms of physical dimensions rather than measurement units fosters a deeper understanding of the underlying physical reality. For instance, a comparison between the physical dimensions of energy flux and pressure reveals that a flux is simply the product of pressure and the speed of moving particles [L T-1], i.e., [M T-3]=[M L-1 T-2] [L T-1]. Thus, a radiative flux FR (W m-2) can be expressed in terms of photon pressure Pph (Pa) and the speed of light c (m s-1) as FR = c Pph. Since c is constant within a medium, varying the intensity of electromagnetic radiation in a given medium effectively means altering the pressure of photons. Thus, the solar radiation reaching Earth’s upper atmosphere exerts a pressure (force) of sufficient magnitude to perturb the orbits of communication satellites over time [25,26].
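To make the flux-as-pressure decomposition concrete, the following Python sketch (our illustration, not part of the paper) converts the TOA solar irradiance used later in the study into the equivalent photon pressure via Pph = FR/c.

    # Illustrative only: a radiative flux equals photon pressure times the speed
    # of light, F_R = c * P_ph, so P_ph = F_R / c.
    C = 2.998e8        # speed of light, m s^-1
    S_EARTH = 1360.9   # TOA solar irradiance at 1 AU, W m^-2 (value adopted in this study)

    photon_pressure = S_EARTH / C
    print(f"Photon pressure at 1 AU: {photon_pressure:.2e} Pa")
    # ~4.5e-6 Pa -- small, yet sufficient to perturb satellite orbits over time,
    # as noted in the text.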
The simplifying power of DA in model development stems from the Buckingham Pi Theorem [27], which states that a problem involving n dimensioned xi variables, i.e.,
f(x1, x2, ..., xn) = 0
can be reformulated into a simpler relationship of (n-m) dimensionless πi products derived from xi, i.e.,
φ(π1, π2, ..., πn-m) = 0
where m is the number of fundamental dimensions comprising the original variables. This theorem determines the number of nondimensional πi variables to be found in a set of products, but it does not prescribe the number of sets that could be generated from the original variables defining a particular problem. In other words, there might be, and oftentimes is more than one set of (n-m) dimensionless products to analyze. DA provides an objective method for constructing the sets of πi variables employing simultaneous equations solved via either matrix inversion or substitution [22].
The second step of DA (after the construction of dimensionless products) is to search for a functional relationship between the πi variables of each set using regression analysis. DA does not disclose the best function capable of describing the empirical data. It is the investigator’s responsibility to identify a suitable regression model based on prior knowledge of the phenomenon and a general expertise in the subject area. DA only guarantees that the final model (whatever its functional form) will be dimensionally homogeneous, hence it may qualify as a physically meaningful relationship provided that it (a) is not based on a simple polynomial fit; (b) has a small standard error; (c) displays high predictive skill over a broad range of input data; and (d) is statistically robust. The regression coefficients of the final model will also be dimensionless, and may reveal true constants of Nature by virtue of being independent of the units utilized to measure the forcing variables.
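As a concrete illustration of the dimensional bookkeeping (our sketch, not the authors' code), the Python snippet below verifies that the product Px^3/(ρx S^2) used later in the analysis is dimensionless, by summing exponent vectors over the fundamental dimensions [M, L, T, Θ] listed in Table 1.

    import numpy as np

    # Exponents of the fundamental dimensions [M, L, T, Theta] from Table 1:
    # pressure [M L^-1 T^-2], density [M L^-3], energy flux [M T^-3].
    DIMS = {
        "P":   np.array([1, -1, -2, 0]),   # pressure Px
        "rho": np.array([1, -3,  0, 0]),   # density rho_x
        "S":   np.array([1,  0, -3, 0]),   # stellar irradiance S
    }

    # Dimensional exponents of the product Px^3 / (rho_x * S^2):
    product = 3 * DIMS["P"] - DIMS["rho"] - 2 * DIMS["S"]
    print(product)  # [0 0 0 0] -> the combination is dimensionless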
Selection of model variables
A planet’s GMAT depends on many factors. In this study, we focused on drivers that are remotely measurable and/or theoretically estimable. Based on the current state of knowledge we identified seven physical variables of potential relevance to the global surface temperature: 1) top-of-the-atmosphere (TOA) solar irradiance (S); 2) mean planetary surface temperature in the absence of atmospheric greenhouse effect, hereto called a reference temperature (Tr); 3) near-surface partial pressure of atmospheric greenhouse gases (Pgh); 4) near-surface mass density of atmospheric greenhouse gases (ρgh); 5) total surface atmospheric pressure (P); 6) total surface atmospheric density (ρ); and 7) minimum air pressure required for the existence of a liquid solvent at the surface, hereto called a reference pressure (Pr). Table 1 lists the above variables along with their SI units and physical dimensions. Note that, in order to simplify the derivation of dimensionless products, pressure and density are represented in Table 1 by the generic variables Px and ρx, respectively. As explained below, the regression analysis following the construction of πi variables explicitly distinguished between models involving partial pressure/density of greenhouse gases and those employing total atmospheric pressure/density at the surface. The planetary Bond albedo (αp) was omitted as a forcing variable in our DA despite its known effect on the surface energy budget, because it is already dimensionless and also partakes in the calculation of reference temperatures discussed below.
Planetary Variable | Symbol | SI Units | Physical Dimension
Global mean annual near-surface temperature (GMAT), the dependent variable | Ts | K | [Θ]
Stellar irradiance (average shortwave flux incident on a plane perpendicular to the stellar rays at the top of a planet’s atmosphere) | S | W m-2 | [M T-3]
Reference temperature (the planet’s mean surface temperature in the absence of an atmosphere or an atmospheric greenhouse effect) | Tr | K | [Θ]
Average near-surface gas pressure representing either partial pressure of greenhouse gases or total atmospheric pressure | Px | Pa | [M L-1 T-2]
Average near-surface gas density representing either greenhouse-gas density or total atmospheric density | ρx | kg m-3 | [M L-3]
Reference pressure (the minimum atmospheric pressure required for a liquid solvent to exist at the surface) | Pr | Pa | [M L-1 T-2]
Table 1: Variables employed in the Dimensional Analysis aimed at deriving a general planetary temperature model. The variables are comprised of 4 fundamental physical dimensions: mass [M], length [L], time [T] and absolute temperature [Θ].
Appendix A details the procedure employed to construct the πi variables. DA yielded two sets of πi products, each one consisting of two dimensionless variables, i.e.,

Set 1: π1 = Ts/Tr and π2 = Px^3/(ρx S^2)

and

Set 2: π1 = Ts/Tr and π2 = Px/Pr

This implies an investigation of two types of dimensionally homogeneous functions (relationships):

Ts/Tr = f[Px^3/(ρx S^2)]   (1)

and

Ts/Tr = f(Px/Pr)   (2)
Note that π1=Ts/Tr occurs as a dependent variable in both relationships, since it contains the sought temperature Ts. Upon replacing the generic pressure/density variables Px and ρx in functions (1) and (2) with either partial pressure/density of greenhouse gases (Pgh and ρgh) or total atmospheric pressure/density (P and ρ), one arrives at six prospective regression models. Further, as explained below, we employed two distinct kinds of reference temperature computed from different formulas, i.e., an effective radiating equilibrium temperature (Te) and a mean ‘no-atmosphere’ spherical surface temperature (Tna) (Table 1). This doubled the πi instances in the regression analysis, bringing the total number of potential models for investigation to twelve.
Reference temperatures and reference pressure
A reference temperature (Tr) characterizes the average thermal environment at the surface of a planetary body in the absence of atmospheric greenhouse effect; hence, Tr is different for each body and depends on solar irradiance and surface albedo. The purpose of Tr is to provide a baseline for quantifying the thermal effect of planetary atmospheres. Indeed, the Ts/Tr ratio produced by DA can physically be interpreted as a Relative Atmospheric Thermal Enhancement (RATE) ideally expected to be equal to or greater than 1.0. Expressing the thermal effect of a planetary atmosphere as a non-dimensional quotient instead of an absolute temperature difference (as done in the past) allows for an unbiased comparison of the greenhouse effects of celestial bodies orbiting at different distances from the Sun. This is because the absolute strength of the greenhouse effect (measured in K) depends on both solar insolation and atmospheric properties, while RATE being a radiation-normalized quantity is expected to only be a function of a planet’s atmospheric environment. To our knowledge, RATE has not previously been employed to measure the thermal effect of planetary atmospheres.
Two methods have been proposed thus far for estimating the average surface temperature of a planetary body without the greenhouse effect, both based on the SB radiation law. The first and most popular approach uses the planet’s global energy budget to calculate a single radiating equilibrium temperature Te (also known as an effective emission temperature) from the average absorbed solar flux [6,9,28], i.e.,
Te = [S(1 - αp)/(4εσ)]^0.25   (3)

Here, S is the solar irradiance (W m-2) defined as the TOA shortwave flux incident on a plane perpendicular to the incoming rays, αp is the planetary Bond albedo (decimal fraction), ε is the planet’s LW emissivity (typically 0.9 ≤ ε < 1.0; in this study we assume ε = 0.98 based on lunar regolith measurements reported by Vasavada et al. [29]), and σ=5.6704 × 10-8 W m-2 K-4 is the SB constant. The term S(1 - αp)/4 represents a globally averaged shortwave flux absorbed by the planet-atmosphere system. The rationale behind Eq. (3) is that the TOA energy balance presumably defines a baseline temperature at a certain height in the free atmosphere (around 5 km for Earth), which is related to the planet’s mean surface temperature via the infrared optical depth of the atmosphere [9,10]. Equation (3) was introduced to planetary science in the early 1960s [30,31] and has been widely utilized ever since to calculate the average surface temperatures of airless (or nearly airless) bodies such as Mercury, the Moon and Mars [32] as well as to quantify the strength of the greenhouse effect of planetary atmospheres [2-4,6,9,28]. However, Volokin et al. [1] showed that, due to Hölder’s inequality between integrals [33], Te is a non-physical temperature for spheres and lacks a meaningful relationship to the planet’s Ts.
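As a numerical check (our sketch, using the irradiance and Bond albedo values listed later in Table 2 together with ε = 0.98), Eq. (3) reproduces the Te values reported in Table 4:

    # Effective radiating equilibrium temperature, Eq. (3):
    # Te = [S * (1 - albedo) / (4 * eps * sigma)]^0.25
    SIGMA, EPS = 5.6704e-8, 0.98

    def t_e(S, albedo):
        return (S * (1.0 - albedo) / (4.0 * EPS * SIGMA)) ** 0.25

    print(round(t_e(1360.9, 0.294), 1))  # Earth -> 256.4 K (Table 4)
    print(round(t_e(2601.3, 0.900), 1))  # Venus -> 185.0 K (Table 4)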
The second method attempts to estimate the average surface temperature of a planet (Tna) in the complete absence of an atmosphere using an explicit spatial integration of the SB law over a sphere. Instead of calculating a single bulk temperature from the average absorbed shortwave flux as done in Eq. (3), this alternative approach first computes the equilibrium temperature at every point on the surface of an airless planet from the local absorbed shortwave flux using the SB relation, and then spherically integrates the resulting temperature field to produce a global temperature mean. While algorithmically opposite to Eq. (3), this method mimics well the procedure for calculating Earth’s global temperature as an area-weighted average of surface observations.
Rubincam [34] proposed an analytic solution to the spherical integration of the SB law (his Eq. 15) assuming no heat storage by the regolith and zero thermal inertia of the ground. Volokin et al. [1] improved upon Rubincam’s formulation by deriving a closed-form integral expression that explicitly accounts for the effect of subterranean heat storage, cosmic microwave background radiation (CMBR) and geothermal heating on the average global surface temperature of airless bodies. The complete form of their analytic Spherical Airless-Temperature (SAT) model reads:
equation (4a)
where αe is the effective shortwave albedo of the surface, ηe is the effective ground heat storage coefficient in a vacuum, Rc = σ(2.725)^4 = 3.13 × 10-6 W m-2 is the CMBR [35], and Rg is the spatially averaged geothermal flux (W m-2) emanating from the subsurface. The heat storage term ηe is defined as a fraction of the absorbed shortwave flux conducted into the subsurface during daylight hours and subsequently released as heat at night.
Since the effect of CMBR on Tna is negligible for S>0.15 W m-2 [1] and the geothermal contribution to surface temperatures is insignificant for most planetary bodies, one can simplify Eq. (4a) by substituting Rc=Rg=0. This produces:
Tna = (2/5) [(1 - αe)S/(εσ)]^0.25 [(1 - ηe)^0.25 + 0.932 ηe^0.25]   (4b)
where 0.932 = 0.754^0.25. The complete formula (4a) must only be used if S ≤ 0.15 W m-2 and/or the magnitude of Rg is significantly greater than zero. For comparison, in the Solar System, the threshold S ≤ 0.15 W m-2 is encountered beyond 95 astronomical units (AU) in the region of the inner Oort cloud. Volokin et al. [1] verified Equations (4a) and (4b) against Moon temperature data provided by the NASA Diviner Lunar Radiometer Experiment [29,36]. These authors also showed that accounting for the subterranean heat storage (ηe) markedly improves the physical realism and accuracy of the SAT model compared to the original formulation by Rubincam [34].
The conceptual difference between Equations (3) and (4b) is that Τe represents the equilibrium temperature of a blackbody disk orthogonally illuminated by shortwave radiation with an intensity equal to the average solar flux absorbed by a sphere having a Bond albedo αp, while Τna is the area-weighted average temperature of a thermally heterogeneous airless sphere [1,37]. In other words, for spherical objects, Τe is an abstract mathematical temperature, while Tna is the average kinetic temperature of an airless surface. Due to Hölder’s inequality between integrals, one always finds Τe>>Τna when using equivalent values of stellar irradiance and surface albedo in Equations (3) and (4b) [1].
To calculate the Tna temperatures for planetary bodies with tangible atmospheres, we assumed that the airless equivalents of such objects would be covered with a regolith whose optical and thermo-physical properties are similar to those of the Moon’s surface. This is based on the premise that, in the absence of a protective atmosphere, the open cosmic environment would erode and pulverize exposed surfaces of rocky planets over time in a similar manner [1]. Also, the properties of the Moon’s surface are the best studied among all airless bodies in the Solar System. Hence, one could further simplify Eq. (4b) by combining the albedo, the heat storage fraction and the emissivity parameter into a single constant using applicable values for the Moon, i.e., αe=0.132, ηe=0.00971 and ε=0.98 [1,29]. This produces:
Tna = 32.44 S^0.25   (4c)
Equation (4c) was employed to estimate the ‘no-atmosphere’ reference temperatures of all planetary bodies participating in our analysis and discussed below.
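A minimal numerical sketch of Eq. (4c) (ours), using only the TOA irradiances from Table 2, reproduces the airless reference temperatures listed in Table 4:

    # 'No-atmosphere' reference temperature, Eq. (4c): Tna = 32.44 * S^0.25,
    # assuming Moon-like airless surface properties (alpha_e = 0.132, eta_e = 0.00971).
    def t_na(S):
        return 32.44 * S ** 0.25

    for body, S in [("Venus", 2601.3), ("Earth", 1360.9), ("Mars", 586.2),
                    ("Titan", 14.8), ("Triton", 1.5)]:
        print(f"{body}: {t_na(S):.1f} K")
    # Venus 231.7, Earth 197.0, Mars 159.6, Titan 63.6, Triton 35.9 K (cf. Table 4)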
For a reference pressure, we used the gas-liquid-solid triple point of water, i.e., Pr = 611.73 Pa [38], defining a baric threshold below which water can only exist in a solid/vapor phase and not in a liquid form. The results of our analysis are not sensitive to the particular choice of a reference-pressure value; hence, the selection of Pr is a matter of convention.
Regression analysis
Finding the best function to describe the observed variation of GMAT among celestial bodies requires that the πi variables generated by DA be subjected to regression analyses. As explained in Appendix A, twelve pairs of πi variables, hereto called Models, were investigated. In order to ease the curve fitting and simplify the visualization of results, we utilized natural logarithms of the constructed πi variables rather than their absolute values, i.e., we modeled the relationship ln(π1) = f(ln(π2)) instead of π1 = f(π2). In doing so we focused on monotonic functions of conservative shapes such as exponential, sigmoidal, hyperbolic, and logarithmic, since their fitting coefficients might be interpretable in physically meaningful terms. A key advantage of this type of function (provided the existence of a good fit, of course) is that they also tend to yield reliable results outside the data range used to determine their coefficients. We specifically avoided non-monotonic functions such as polynomials because of their ability to accurately fit almost any dataset given a sufficiently large number of regression coefficients while at the same time showing poor predictive skills beyond the calibration data range. Due to their highly flexible shape, polynomials can easily fit random noise in a dataset, an outcome we particularly tried to avoid.
The following four-parameter exponential-growth function was found to best meet our criteria:
y = a exp(b x) + c exp(d x)   (5)
where x = ln π2 and y = ln π1 are the independent and dependent variables, respectively, while a, b, c and d are regression coefficients. This function has a rigid shape that can only describe specific exponential patterns found in our data. Equation (5) was fitted to each one of the 12 planetary data sets of logarithmic πi pairs suggested by DA using the standard method of least squares. The skills of the resulting regression models were evaluated via three statistical criteria: coefficient of determination (R2), adjusted R2, and standard error of the estimate (σest) [39,40]. All calculations were performed with the SigmaPlot 13 graphing and analysis software.
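The fitting step can be sketched as follows (our reconstruction in Python with scipy, not the authors' SigmaPlot workflow), using the ln(P/Pr) and ln(Ts/Tna) pairs from Table 4 that define Model 12; the initial guesses are arbitrary placeholders, and convergence of a four-parameter double exponential on six points is sensitive to them.

    import numpy as np
    from scipy.optimize import curve_fit

    # ln(P/Pr) and ln(Ts/Tna) for Venus, Earth, Moon, Mars, Titan, Triton (Table 4)
    x = np.array([9.6292, 5.0820, -28.3570, 0.1137, 5.4799, -5.0300])
    y = np.array([1.1573, 0.3775, 0.00159, 0.1772, 0.3870, 0.0828])

    def model(x, a, b, c, d):
        # Four-parameter exponential-growth function, Eq. (5): y = a*e^(b*x) + c*e^(d*x)
        return a * np.exp(b * x) + c * np.exp(d * x)

    p0 = [0.1, 0.2, 1e-5, 1.0]                  # illustrative starting values only
    params, _ = curve_fit(model, x, y, p0=p0, maxfev=20000)
    print("fitted a, b, c, d:", params)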
Planetary data
To ensure proper application of the DA methodology we compiled a dataset of diverse planetary environments in the Solar System using the best information available. Celestial bodies were selected for the analysis based on three criteria: (a) presence of a solid surface; (b) availability of reliable data on near-surface temperature, atmospheric composition, and total air pressure/density preferably from direct observations; and (c) representation of a broad range of physical environments defined in terms of TOA solar irradiance and atmospheric properties. This resulted in the selection of three planets: Venus, Earth, and Mars; and three natural satellites: Moon of Earth, Titan of Saturn, and Triton of Neptune.
Each celestial body was described by nine parameters shown in Table 2 with data sources listed in Table 3. In an effort to minimize the effect of unforced (internal) climate variability on the derivation of our temperature model, we tried to assemble a dataset of means representing an observational period of 30 years, i.e., from 1981 to 2010. Thus, Voyager measurements of Titan from the early 1980s suggested an average surface temperature of 94 ± 0.7 K [41]. Subsequent observations by the Cassini mission between 2005 and 2010 indicated a mean global temperature of 93.4 ± 0.6 K for that moon [42,43]. Since Saturn’s orbital period equals 29.45 Earth years, we averaged the above global temperature values to arrive at 93.7 ± 0.6 K as an estimate of Titan’s 30-year GMAT. Similarly, data gathered in the late 1970s by the Viking Landers on Mars were combined with more recent Curiosity Rover surface measurements and 1999-2005 remote observations by the Mars Global Surveyor (MGS) spacecraft to derive representative estimates of GMAT and atmospheric surface pressure for the Red Planet (Table 2). Some parameter values reported in the literature did not meet our criteria for global representativeness and/or physical plausibility and were recalculated using available observations as described below (Table 3).
Parameter | Venus | Earth | Moon | Mars | Titan | Triton
Average distance to the Sun (AU) | 0.7233 | 1.0 | 1.0 | 1.5237 | 9.582 | 30.07
Average TOA solar irradiance (W m-2) | 2,601.3 | 1,360.9 | 1,360.9 | 586.2 | 14.8 | 1.5
Bond albedo (decimal fraction) | 0.900 | 0.294 | 0.136 | 0.235 | 0.265 | 0.650
Average absorbed shortwave radiation (W m-2) | 65.0 | 240.2 | 294.0 | 112.1 | 2.72 | 0.13
Global average surface atmospheric pressure (Pa) | 9,300,000.0 ± 100,000 | 98,550.0 ± 6.5 | 2.96 × 10-10 ± 10-10 | 685.4 ± 14.2 | 146,700.0 ± 100 | 4.0 ± 1.2
Global average surface atmospheric density (kg m-3) | 65.868 ± 0.44 | 1.193 ± 0.002 | 2.81 × 10-15 ± 9.4 × 10-15 | 0.019 ± 3.2 × 10-4 | 5.161 ± 0.03 | 3.45 × 10-4 ± 9.2 × 10-5
Chemical composition of the lower atmosphere (% of volume) | 96.5 CO2, 3.48 N2, 0.02 SO2 | 77.89 N2, 20.89 O2, 0.932 Ar, 0.248 H2O, 0.040 CO2 | 26.7 4He, 26.7 20Ne, 23.3 H2, 20.0 40Ar, 3.3 22Ne | 95.32 CO2, 2.70 N2, 1.60 Ar, 0.13 O2, 0.08 CO, 0.021 H2O | 95.1 N2, 4.9 CH4 | 99.91 N2, 0.060 CO, 0.024 CH4
Molar mass of the lower atmosphere (kg mol-1) | 0.0434 | 0.0289 | 0.0156 | 0.0434 | 0.0274 | 0.0280
GMAT (K) | 737.0 ± 3.0 | 287.4 ± 0.5 | 197.35 ± 0.9 | 190.56 ± 0.7 | 93.7 ± 0.6 | 39.0 ± 1.0
Table 2: Planetary data set used in the Dimensional Analysis compiled from sources listed in Table 3. The estimation of Mars’ GMAT and the average surface atmospheric pressure are discussed in Appendix B. See Section 2.5 for details about the computational methods employed for some parameters.
Planetary Body | Information Sources
Venus | [32-48]
Earth | [12,13,32,49-55]
Moon | [1,29,32,48,56-59]
Mars | [32,48,60-63]
Titan | [32,41-43,64-72]
Triton | [48,73-75]
Table 3: Literature sources of the planetary data presented in Table 2.
The mean solar irradiances of all bodies were calculated as S = SE rau^-2, where rau is the body’s average distance (semi-major axis) to the Sun (AU) and SE=1,360.9 W m-2 is the Earth’s new lower irradiance at 1 AU according to recent satellite observations reported by Kopp and Lean [49]. Due to a design flaw in earlier spectrometers, the solar irradiance at Earth’s distance has been overestimated by ≈ 5 W m-2 prior to 2003 [49]. Consequently, our calculations yielded slightly lower irradiances for bodies such as Venus and Mars compared to previously published data. Our decision to recalculate S was based on the assumption that the orbital distances of planets are known with much greater accuracy than TOA solar irradiances. Hence, a correction made to Earth’s irradiance requires adjusting the ‘solar constants’ of all other planets as well.
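The rescaling is a one-line computation; the sketch below (ours) reproduces the Table 2 irradiances from the semi-major axes:

    # TOA irradiance scaled by distance: S = S_E * r_au^-2
    S_E = 1360.9  # W m^-2 at 1 AU (Kopp and Lean [49])

    for body, r_au in [("Venus", 0.7233), ("Mars", 1.5237),
                       ("Titan", 9.582), ("Triton", 30.07)]:
        print(f"{body}: {S_E / r_au**2:.1f} W m^-2")
    # Venus 2601.3, Mars 586.2, Titan 14.8, Triton 1.5 W m^-2, as in Table 2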
We found that quoted values for the mean global temperature and surface atmospheric pressure of Mars were either improbable or too uncertain to be useful to our analysis. Thus, studies published in the last 15 years report Mars’ GMAT being anywhere between 200 K and 240 K with the most frequently quoted values in the range 210–220 K [6,32,76-81]. However, in-situ measurements by Viking Lander 1 suggest that the average surface air temperature at a low-elevation site in the Martian subtropics does not exceed 207 K during the summer-fall season (Appendix B). Therefore, the Red Planet’s GMAT must be lower than 207 K. The Viking records also indicate that average diurnal temperatures above 210 K can only occur on Mars during summertime. Hence, all such values must be significantly higher than the actual mean annual temperature at any Martian latitude. This is also supported by results from a 3-D global circulation model of the Red Planet obtained by Fenton et al. [82]. The surface atmospheric pressure on Mars varies appreciably with season and location. Its global average value has previously been reported between 600 Pa and 700 Pa [6,32,78,80,83,84], a range that was too broad for the target precision of our study. Hence our decision to calculate new annual global means of near-surface temperature and air pressure for Mars via a thorough analysis of available data from remote-sensing and in-situ observations. Appendix B details our computational procedure with the results presented in Table 2. It is noteworthy that our independent estimate of Mars’ GMAT (190.56 ± 0.7 K), while significantly lower than values quoted in recent years, is in perfect agreement with spherically integrated brightness temperatures of the Red Planet derived from remote microwave measurements in the late 1960s and early 1970s [85-87].
Moon’s GMAT was also not readily extractable from the published literature. Although lunar temperatures have been measured for more than 50 years both remotely and in situ [36], most studies focus on observed temperature extremes across the lunar surface [56] and rarely discuss the Moon’s average global temperature. Current GMAT estimates for the Moon cluster around two narrow ranges: 250–255 K and 269–271 K [32]. A careful examination of the published data reveals that the 250–255 K range is based on subterranean heat-flow measurements conducted at depths between 80 and 140 cm at the Apollo 15 and 17 landing sites located at 26° N, 3.6° E and 20° N, 30.6° E, respectively [88]. Due to a strong temperature dependence of the lunar regolith thermal conductivity in the topmost 1-2 cm of soil, the Moon’s average diurnal temperature increases steadily with depth. According to Apollo measurements, the mean daily temperature at 35 cm belowground is 40–45 K higher than that at the lunar surface [88]. The diurnal temperature fluctuations completely vanish below a depth of 80 cm. At 100 cm depth, the temperature of the lunar regolith ranged from 250.7 K to 252.5 K at the Apollo 15 site and between 254.5 K and 255.5 K at the Apollo 17 site [88]. Hence, reported Moon average temperatures in the range 250-255 K do not describe surface conditions. Moreover, since measured in the lunar subtropics, such temperatures do not likely even represent Moon’s global thermal environment at these depths. On the other hand, frequently quoted Moon global temperatures of ~270 K are actually calculated from Eq. (3) and not based on surface measurements. However, as demonstrated by Volokin et al. [1], Eq. (3) overestimates the mean global surface temperature of spheres by about 37%. In this study, we employed the spherical estimate of Moon’s GMAT (197.35 K) obtained by Volokin et al. [1] using output from a NASA thermo-physical model validated against Diviner observations [29].
Surprisingly, many publications report incorrect values even for Earth’s mean global temperature. Studies of terrestrial climate typically focus on temperature anomalies, and if Earth’s GMAT is ever mentioned, it is often loosely quoted as 15 °C (~288 K) [2-4,6]. However, observations archived in the HadCRUT4 dataset of the UK Met Office’s Hadley Centre [50,89] and in the Global Historical Climatology Network [51,52,90,91] indicate that, between 1981 and 2010, Earth’s mean annual surface air temperature was 287.4 K (14.3 °C) ± 0.5 K. Some recent studies acknowledge this more accurate lower value of Earth’s absolute global temperature [92]. For Earth’s mean surface atmospheric pressure we adopted the estimate by Trenberth et al. [53] (98.55 kPa), which takes into account the average elevation of continental landmasses above sea level; hence, it is slightly lower than the typical sea-level pressure of ≈ 101.3 kPa.
The average near-surface atmospheric densities (ρ, kg m-3) of planetary bodies were calculated from reported means of total atmospheric pressure (P), molar mass (M, kg mol-1) and temperature (Ts) using the Ideal Gas Law, i.e.,
ρ = P M/(R Ts)   (6)
where R=8.31446 J mol-1 K-1 is the universal gas constant. This calculation was intended to make atmospheric densities physically consistent with independent data on pressure and temperature utilized in our study. The resulting ρ values were similar to previously published data for individual bodies. Standard errors of the air-density estimates were calculated from the reported errors of P and Ts for each body using Eq. (6).
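A quick check of Eq. (6) for Earth (our sketch, with the Table 2 inputs) recovers the listed air density to within rounding:

    # Near-surface air density from the Ideal Gas Law, Eq. (6): rho = P*M/(R*Ts)
    R = 8.31446  # universal gas constant, J mol^-1 K^-1

    def density(P, M, Ts):
        return P * M / (R * Ts)

    print(round(density(98550.0, 0.0289, 287.4), 3))
    # Earth: ~1.192 kg m^-3, versus the 1.193 kg m^-3 listed in Table 2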
Data in Table 2 were harnessed to compute several intermediate variables and all dimensionless πi products necessary for the regression analyses. The results from these computations are shown in Table 4. Greenhouse gases in planetary atmospheres, represented by the major constituents carbon dioxide (CO2), methane (CH4) and water vapor (H2O), were collectively quantified via three bulk parameters: average molar mass (Mgh, kg mol-1), combined partial pressure (Pgh, Pa) and combined partial density (ρgh, kg m-3). These parameters were estimated from reported volumetric concentrations of individual greenhouse gases (Cx, %) and data on total atmospheric pressure and density in Table 2 using the formulas (Table 4):
Intermediate Variable or Dimensionless Product | Venus | Earth | Moon | Mars | Titan | Triton
Average molar mass of greenhouse gases (kg mol-1) (Eq. 7) | 0.0440 | 0.0216 | 0.0 | 0.0440 | 0.0160 | 0.0160
Near-surface partial pressure of greenhouse gases (Pa) (Eq. 8) | 8,974,500.0 ± 96,500 | 283.8 ± 0.02 | 0.0 | 667.7 ± 13.8 | 7,188.3 ± 4.9 | 9.6 × 10-4 ± 2.9 × 10-4
Near-surface density of greenhouse gases (kg m-3) (Eq. 9) | 64.441 ± 0.429 | 2.57 × 10-3 ± 4.3 × 10-6 | 0.0 | 0.018 ± 3.1 × 10-4 | 0.148 ± 8.4 × 10-4 | 4.74 × 10-8 ± 1.3 × 10-8
Radiating equilibrium temperature (K) (Eq. 3) | 185.0 | 256.4 | 269.7 | 211.9 | 83.6 | 39.2
Average airless spherical temperature (K) (Eq. 4c) | 231.7 | 197.0 | 197.0 | 159.6 | 63.6 | 35.9
Ts/Te | 3.985 ± 0.016 | 1.121 ± 0.002 | 0.732 ± 0.003 | 0.899 ± 0.003 | 1.120 ± 0.008 | 0.994 ± 0.026
Ts/Tna | 3.181 ± 0.013 | 1.459 ± 0.002 | 1.002 ± 0.004 | 1.194 ± 0.004 | 1.473 ± 0.011 | 1.086 ± 0.028
ln(Ts/Te) | 1.3825 ± 0.0041 | 0.1141 ± 0.0017 | -0.3123 ± 0.0046 | -0.1063 ± 0.0037 | 0.1136 ± 0.0075 | -5.2 × 10-3 ± 0.0256
ln(Ts/Tna) | 1.1573 ± 0.0041 | 0.3775 ± 0.0017 | 1.59 × 10-3 ± 0.0046 | 0.1772 ± 0.0037 | 0.3870 ± 0.0075 | 0.0828 ± 0.0256
ln[Pgh^3/(ρgh S^2)] | 28.1364 | 8.4784 | Undefined | 10.7520 | 23.1644 | -4.7981
ln[P^3/(ρgh S^2)] | 28.2433 | 26.0283 | +∞ | 10.8304 | 32.2122 | 20.2065
ln[Pgh^3/(ρ S^2)] | 28.1145 | 2.3370 | Undefined | 10.7396 | 19.6102 | -13.6926
ln[Pgh/Pr] | 9.5936 | -0.7679 | Undefined | 0.0876 | 2.4639 | -13.3649
ln[P^3/(ρ S^2)] | 28.2214 | 19.8869 | -46.7497 | 10.8180 | 28.6580 | 11.3120
ln[P/Pr] | 9.6292 ± 0.0108 | 5.0820 ± 6.6 × 10-5 | -28.3570 ± 0.3516 | 0.1137 ± 0.0207 | 5.4799 ± 6.8 × 10-4 | -5.0300 ± 0.3095
Table 4: Intermediate variables and dimensionless products required for the regression analyses and calculated from data in Table 2. Equations used to compute intermediate variables are shown in parentheses. The reference pressure is set to the barometric triple point of water, i.e., Pr=611.73 Pa.
Mgh = (CCO2 MCO2 + CCH4 MCH4 + CH2O MH2O)/Cgh   (7)
Pgh = P Cgh/100   (8)
ρgh = ρ (Cgh/100)(Mgh/M)   (9)
where Cgh=CCO2+CCH4+CH2O is the total volumetric concentration of major greenhouse gases (%). The reference temperatures Τe and Τna were calculated from Equations (3) and (4c), respectively.
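Using the Earth entries of Table 2, a short check (ours) of Eqs. (7)-(9) as written above reproduces the corresponding Table 4 values:

    # Bulk greenhouse-gas parameters for Earth, Eqs. (7)-(9).
    # Concentrations (% of volume) and molar masses (kg/mol) of the listed GHGs:
    ghg = {"H2O": (0.248, 0.018), "CO2": (0.040, 0.044)}  # no CH4 listed for Earth
    P, rho, M = 98550.0, 1.193, 0.0289                    # Table 2 values for Earth

    C_gh = sum(c for c, _ in ghg.values())                # total GHG concentration, %
    M_gh = sum(c * m for c, m in ghg.values()) / C_gh     # Eq. (7)
    P_gh = P * C_gh / 100.0                               # Eq. (8)
    rho_gh = rho * (C_gh / 100.0) * (M_gh / M)            # Eq. (9)
    print(round(M_gh, 4), round(P_gh, 1), f"{rho_gh:.2e}")
    # 0.0216 kg/mol, 283.8 Pa, ~2.57e-03 kg m^-3 -- the Earth column of Table 4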

Results

Function (5) was fitted to each one of the 12 sets of logarithmic πi pairs generated by Equations (1) and (2) and shown in Table 4. Figures 1 and 2 display the resulting curves of individual regression models with planetary data plotted in the background for reference. Table 5 lists the statistical scores of each non-linear regression. Model 12, depicted in Figure 2f, had the highest R2=0.9999 and the lowest standard error σest=0.0078 among all regressions. Model 1 (Figure 1a) provided the second best fit with R2=0.9844 and σest=0.1529. Notably, Model 1 shows an almost 20-times larger standard error on the logarithmic scale than Model 12. Figure 3 illustrates the difference in predictive skills between the two top-performing Models 1 and 12 upon conversion of vertical axes to a linear scale. Taking an antilogarithm weakens the relationship of Model 1 to the point of becoming immaterial and highlights the superiority of Model 12. The statistical results shown in Table 5 indicate that the explanatory power and descriptive accuracy of Model 12 surpass those of all other models by a wide margin.
Figure 1: The relative atmospheric thermal enhancement (Ts/Tr) as a function of various dimensionless forcing variables generated by DA using data on solar irradiance, near-surface partial pressure/density of greenhouse gases, and total atmospheric pressure/density from Table 4. Panels a through f depict six regression models suggested by DA with the underlying celestial bodies plotted in the background for reference. Each pair of horizontal graphs represents different reference temperatures (Tr) defined as either Tr = Te (left) or Tr = Tna (right).
Figure 2: The same as in Figure 1 but for six additional regression models (panels a through f).
Figure 3: Comparison of the two best-performing regression models according to statistical scores listed in Table 5. Vertical axes use linear scales to better illustrate the difference in skills between the models.
No. | Functional Model | Coefficient of Determination (R2) | Adjusted R2 | Standard Error
1 | equation | 0.9844 | 0.9375 | 0.1529
2 | equation | 0.9562 | 0.8249 | 0.1773
3 | equation | 0.1372 | -2.4511 | 1.1360
4 | equation | 0.2450 | -2.0200 | 0.7365
5 | equation | 0.9835 | 0.9339 | 0.1572
6 | equation | 0.9467 | 0.7866 | 0.1957
7 | equation | 0.9818 | 0.9274 | 0.1648
8 | equation | 0.9649 | 0.8598 | 0.1587
9 | equation | 0.4488 | -0.3780 | 0.7060
10 | equation | 0.6256 | 0.0639 | 0.4049
11 | equation | 0.9396 | 0.8489 | 0.2338
12 | equation | 0.9999 | 0.9997 | 0.0078
Table 5: Performance statistics of the twelve regression models suggested by DA. Statistical scores refer to the model logarithmic forms shown in Figures 1 and 2.
Since Titan and Earth nearly overlap on the logarithmic scale of Figure 2f, we decided to experiment with an alternative regression for Model 12, which excludes Titan from the input dataset. This new curve had R2=1.0 and σest=0.0009. Although the two regression equations yield similar results over most of the relevant pressure range, we chose the one without Titan as final for Model 12 based on the assumption that Earth’s GMAT is likely known with a much greater accuracy than Titan’s mean annual temperature. Taking an antilogarithm of the final regression equation, which excluded Titan, yields the following expression for Model 12:
equation (10a)
The regression coefficients in Eq. (10a) are intentionally shown in full precision to allow an accurate calculation of RATE (i.e., the Ts/Tna ratios) given the strong non-linearity of the relationship (Figures 1-3 and Table 5) and to facilitate a successful replication of our results by other researchers. Figure 4 depicts Eq. (10a) as a dependence of RATE on the average surface air pressure. Superimposed on this graph are the six planetary bodies from Table 4 along with their uncertainty ranges.
Figure 4: The relative atmospheric thermal enhancement (Ts/Tna ratio) as a function of the average surface air pressure according to Eq. (10a) derived from data representing a broad range of planetary environments in the Solar System. Saturn’s moon Titan has been excluded from the regression analysis leading to Eq. (10a). Error bars of some bodies are not clearly visible due to their small size relative to the scale of the axes. See Table 2 for the actual error estimates.
Equation (10a) implies that GMATs of rocky planets can be calculated as a product of two quantities: the planet’s average surface temperature in the absence of an atmosphere (Tna, K) and a nondimensional factor (Ea ≥ 1.0) quantifying the relative thermal effect of the atmosphere, i.e.,
Ts = Tna Ea(P)   (10b)
where Τna is obtained from the SAT model (Eq. 4a) and Ea is a function of total pressure (P) given by:
equation (11)
Note that, as P approaches 0 in Eq. (11), Ea approaches the physically realistic limit of 1.0. Other physical aspects of this equation are discussed below.
For bodies with tangible atmospheres (such as Venus, Earth, Mars, Titan and Triton), one must calculate Tna using αe=0.132 and ηe=0.00971, which assumes a Moon-like airless reference surface in accordance with our pre-analysis premise. For bodies with tenuous atmospheres (such as Mercury, the Moon, Callisto and Europa), Tna should be calculated from Eq. (4a) (or Eq. 4b respectively if S>0.15 W m-2 and/or Rg ≈ 0 W m-2) using the body’s observed values of Bond albedo αe and ground heat storage fraction ηe. In the context of this model, a tangible atmosphere is defined as one that has significantly modified the optical and thermo-physical properties of a planet’s surface compared to an airless environment and/or noticeably impacted the overall planetary albedo by enabling the formation of clouds and haze. A tenuous atmosphere, on the other hand, is one that has not had a measurable influence on the surface albedo and regolith thermo-physical properties and is completely transparent to shortwave radiation. The need for such delineation of atmospheric masses when calculating Tna arises from the fact that Eq. (10a) accurately describes RATEs of planetary bodies with tangible atmospheres over a wide range of conditions without explicitly accounting for the observed large differences in albedos (i.e., from 0.235 to 0.90) while assuming constant values of αe and ηe for the airless equivalent of these bodies. One possible explanation for this counterintuitive empirical result is that atmospheric pressure alters the planetary albedo and heat storage properties of the surface in a way that transforms these parameters from independent controllers of the global temperature in airless bodies to intrinsic byproducts of the climate system itself in worlds with appreciable atmospheres. In other words, once atmospheric pressure rises above a certain level, the effects of albedo and ground heat storage on GMAT become implicitly accounted for by Eq. (11). Although this hypothesis requires an investigation beyond the scope of the present study, one finds initial support for it in the observation that, according to data in Table 2, GMATs of bodies with tangible atmospheres do not show a physically meaningful relationship with the amounts of absorbed shortwave radiation determined by albedos. Our discovery of the need to utilize different albedos and heat storage coefficients between airless worlds and worlds with tangible atmospheres is not unique as a methodological approach. In many areas of science and engineering, it is sometimes necessary to use disparate model parameterizations to successfully describe different aspects of the same phenomenon. An example is the distinction made in fluid mechanics between laminar and turbulent flow, where the nondimensional Reynolds number is employed to separate the two regimes that are subjected to different mathematical treatments.
We do not currently have sufficient data to precisely define the limit between tangible and tenuous atmospheres in terms of total pressure for the purpose of this model. However, considering that an atmospheric pressure of 1.0 Pa on Pluto causes the formation of layered haze [93], we surmise that this limit likely lies significantly below 1.0 Pa. In this study, we use 0.01 Pa as a tentative threshold value. Thus, in the context of Eq. (10b), we recommend computing Tna from Eq. (4c) if P > 10^-2 Pa, and from Eq. (4a) (or Eq. 4b, respectively) using observed values of αe and ηe if P ≤ 10^-2 Pa. Equation (4a) should also be employed in cases where a significant geothermal flux Rg >> 0 exists, such as on the Galilean moons of Jupiter due to tidal heating, and/or if S ≤ 0.15 W m-2. Hence, the 30-year mean global equilibrium surface temperature of rocky planets depends in general on five factors: TOA stellar irradiance (S), a reference airless surface albedo (αe), a reference airless ground heat storage fraction (ηe), the average geothermal flux reaching the surface (Rg), and the total surface atmospheric pressure (P). For planets with tangible atmospheres (P > 10^-2 Pa) and a negligible geothermal heating of the surface (Rg ≈ 0), the equilibrium GMAT becomes only a function of two factors: S and P, i.e., Ts = 32.44 S^0.25 Ea(P). The final model (Eq. 10b) can also be cast in terms of Ts as a function of a planet’s distance to the Sun (rau, AU) by replacing S in Equations (4a), (4b) or (4c) with 1,360.9 rau^-2.
Environmental scope and numerical accuracy of the new model
Figure 5 portrays the residuals between modeled and observed absolute planetary temperatures. For celestial bodies participating in the regression analysis (i.e., Venus, Earth, Moon, Mars and Triton), the maximum model error does not exceed 0.17 K and is well within the uncertainty of observations. The error for Titan, an independent data point, is 1.45 K or 1.5% of that moon’s current best-known GMAT (93.7 K). Equation (10b) produces 95.18 K for Titan at Saturn’s semi-major axis (9.582 AU), corresponding to a solar irradiance S=14.8 W m-2. This estimate is virtually identical to the 95 K average surface temperature reported for that moon by the NASA JPL Voyager Mission website [94]. The Voyager 1 and 2 spacecraft reached Saturn and its moons in November 1980 and August 1981, respectively, when the gas giant was at a distance between 9.52 AU and 9.60 AU from the Sun, corresponding approximately to Saturn’s semi-major axis [95].
Data acquired by Voyager 1 suggested an average surface temperature of 94 ± 0.7 K for Titan, while Voyager 2 indicated a temperature close to 95 K [41]. Measurements obtained between 2005 and 2010 by the Cassini-Huygens mission revealed Ts ≈ 93.4 ± 0.6 K [42,43]. Using Saturn's perihelion (9.023 AU) and aphelion (10.05 AU) one can compute Titan's TOA solar irradiance at the closest and furthest approach to the Sun, i.e., 16.7 W m-2 and 13.47 W m-2, respectively. Inserting these values into Eq. (10b) produces the expected upper and lower limits of Titan's mean global surface temperature according to our model, i.e., 92.9 K ≤ Ts ≤ 98.1 K. Notably, this range encompasses all current observation-based estimates of Titan's GMAT. Since both the Voyager and Cassini missions covered periods shorter than a single Titan season (Saturn's orbital period is 29.45 Earth years), the available measurements may not represent that moon's annual thermal cycle well. In addition, due to thermal inertia, Titan's average surface temperature likely lags variations in the TOA solar irradiance caused by Saturn's orbital eccentricity. Thus, the observed 1.45 K discrepancy between our independent model prediction and Titan's current best-known GMAT seems to be within the range of plausible global temperature fluctuations on that moon. Hence, further observations are needed to more precisely constrain Titan's long-term GMAT.
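The irradiance figures above follow directly from the inverse-square relation S = 1360.9 rau⁻² given earlier; a one-line check in Python (illustrative only):

for r_au in (9.023, 9.582, 10.05):           # Saturn's perihelion, semi-major axis, aphelion (AU)
    print(r_au, round(1360.9 / r_au**2, 2))  # -> 16.72, 14.82, 13.47 W m-2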
Measurements conducted by the Voyager spacecraft in 1989 indicated a global mean temperature of 38 ± 1.0 K and an average atmospheric pressure of 1.4 Pa at the surface of Triton [73]. Even though Eq. (10a) is based on slightly different data for Triton (i.e., Ts =39 ±1.0 K and P=4.0 Pa) obtained by more recent stellar occultation measurements [73], employing the Voyager-reported pressure in Eq. (10b) produces Ts=38.5 K for Triton’s GMAT, a value well within the uncertainty of the 1989 temperature measurements (Figure 5).
Figure 5: Absolute differences between modeled average global temperatures by Eq. (10b) and observed GMATs (from Table 2) for the studied celestial bodies. Saturn’s moon Titan represents an independent data point, since it was excluded from the regression analysis leading to Eq. (10a).
The above comparisons indicate that Eq. (10b) rather accurately describes the observed variation of the mean surface temperature across a wide range of planetary environments in terms of solar irradiance (from 1.5 W m-2 to 2,602 W m-2), total atmospheric pressure (from near vacuum to 9,300 kPa) and greenhouse-gas concentrations (from 0.0% to over 96% per volume). While it is true that Eq. (10a) is based on data from only 6 celestial objects, one should keep in mind that these constitute virtually all bodies in the Solar System meeting our criteria for availability and quality of measured data. Although function (5) has 4 free parameters estimated from just 5-6 data points, there are no signs of model overfitting in this case because (a) Eq. (5) represents a monotonic function of a rigid shape that can only describe well a certain exponential pattern, as evident from Figures 1 and 2 and the statistical scores in Table 5; (b) a simple scatter plot of ln(P/Pr) vs. ln(Ts/Tna) visibly reveals the presence of an exponential relationship free of data noise; and (c) no polynomial can fit the data points in Figure 2f as accurately as Eq. (5) while also producing a physically meaningful response curve similar to known pressure-temperature relationships in other systems. These facts indicate that Eq. (5) is not too complicated to cause an overfitting but is just right for describing the data at hand.
The fact that only one of the twelve investigated non-linear regressions yielded a tight relationship suggests that Model 12 describes a macro-level thermodynamic property of planetary atmospheres heretofore unknown to science. A function of such predictive power spanning the entire breadth of the Solar System cannot be just a result of chance. Indeed, complex natural systems consisting of myriad interacting agents have been known to sometimes exhibit emergent responses at higher levels of hierarchical organization that are amenable to accurate modeling using top-down statistical approaches [96]. Equation (10a) also displays several other characteristics discussed below that lend further support to the above notion.
Model robustness
Model robustness defines the degree to which a statistical relationship would hold when recalculated using a different dataset. To test the robustness of Eq. (10a) we performed an alternative regression analysis, which excluded Earth and Titan from the input data and only utilized logarithmic pairs of Ts/Tna and P/Pr for Venus, the Moon, Mars and Triton from Table 4. The goal was to evaluate how well the resulting new regression equation would predict the observed mean surface temperatures of Earth and Titan. Since these two bodies occupy a highly non-linear region in Model 12 (Figure 2f), eliminating them from the regression analysis would leave a key portion of the curve poorly defined. As in all previous cases, function (5) was fitted to the incomplete dataset (omitting Earth and Titan), which yielded the following expression:
equation (12a)
Substituting the reference temperature Tna in Eq. (12a) with its equivalent from Eq. (4c) and solving for Ts produces
equation (12b)
It is evident that the regression coefficients in the first exponent term of Eq. (12a) are nearly identical to those in Eq. (10a). This term dominates the Ts–P relationship over the pressure range 0–400 kPa, accounting for more than 97.5% of the predicted temperature magnitudes. The regression coefficients of the second exponent differ somewhat between the two formulas, causing a divergence of calculated RATE values over the pressure interval 400–9,100 kPa. The models converge again between 9,000 kPa and 9,300 kPa. Figure 6 illustrates the similarity of responses between Equations (10a) and (12a) over the pressure range 0–300 kPa with Earth and Titan plotted in the foreground for reference.
Figure 6: Demonstration of the robustness of Model 12. The solid black curve depicts Eq. (10a) based on data from 5 celestial bodies (i.e., Venus, Earth, Moon, Mars and Triton). The dashed grey curve portrays Eq. (12a) derived from data of only 4 bodies (i.e., Venus, Moon, Mars and Triton) while excluding Earth and Titan from the regression analysis. The alternative Eq. (12b) predicts the observed GMATs of Earth and Titan with accuracy greater than 99% indicating that Model 12 is statistically robust.
Equation (12b) reproduces the observed global surface temperature of Earth with an error of 0.4% (-1.0 K) and that of Titan with an error of 1.0% (+0.9 K). For Titan, the error of the new Eq. (12b) is even slightly smaller than that of the original model (Eq. 10b). The ability of Model 12 to predict Earth's GMAT with an accuracy of 99.6% using a relationship inferred from disparate environments such as those found on Venus, the Moon, Mars and Triton indicates that (a) this model is statistically robust, and (b) Earth's temperature is a part of a cosmic thermodynamic continuum well described by Eq. (10b). The apparent smoothness of this continuum for bodies with tangible atmospheres (illustrated in Figure 4) suggests that planetary climates are well buffered and have no 'tipping points' in reality, i.e., states enabling rapid and irreversible changes in the global equilibrium temperature as a result of destabilizing positive feedbacks assumed to operate within climate systems. This robustness test also serves as a cross-validation, suggesting that the new model has a universal nature and is not a product of overfitting.
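The leave-two-out test described above can be emulated in a few lines of Python; this is a sketch only, assuming scipy is installed and using stand-in (P/Pr, ln(Ts/Tna)) pairs generated from the reconstructed fit of the earlier sketch, since the actual Table 4 values are not reproduced in this excerpt:

import numpy as np
from scipy.optimize import curve_fit

def ln_rate(x, a, b, c, d):
    # Functional form of Eq. (5): ln(Ts/Tna) = a*(P/Pr)**b + c*(P/Pr)**d
    return a * x**b + c * x**d

# Stand-in pressure ratios roughly ordered Moon, Triton, Mars, Earth, Titan, Venus
x_all = np.array([5.0e-13, 6.5e-3, 1.12, 161.1, 239.8, 1.52e4])
y_all = ln_rate(x_all, 0.174205, 0.150263, 1.83121e-5, 1.04193)    # synthetic "observations"

keep = np.array([True, True, True, False, False, True])            # drop the Earth and Titan analogues
popt, _ = curve_fit(ln_rate, x_all[keep], y_all[keep],
                    p0=[0.2, 0.15, 1.0e-5, 1.0], maxfev=20000)

# Predict the two held-out bodies with the reduced fit (the Eq. 12a vs. Eq. 10a comparison)
print(np.exp(ln_rate(x_all[~keep], *popt)))                        # predicted RATEs for "Earth" and "Titan"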
The above characteristics of Eq. (10a), including dimensional homogeneity, high predictive accuracy, a broad environmental scope of validity and statistical robustness, indicate that it represents an emergent macro-physical model of potential theoretical significance deserving further investigation. This conclusion is also supported by the physical meaningfulness of the response curve described by Eq. (10a).

Discussion

Given the high statistical scores of the new model (Eq. 10b) discussed above, it is important to address its physical significance, potential limitations, and broad implications for the current climate theory.
Similarity of the new model to Poisson’s formula and the SB radiation law
The functional response of Eq. (10a) portrayed in Figure 4 closely resembles the shape of the dry adiabatic temperature curve in Figure 7a described by the Poisson formula and derived from the First Law of Thermodynamics and the Ideal Gas Law [4], i.e.,
Figure 7: Known pressure-temperature kinetic relations: (a) Dry adiabatic response of the air/surface temperature ratio to pressure changes in a free dry atmosphere according to Poisson’s formula (Eq. 13) with a reference pressure set to po=100 kPa; (b) The SB radiation law expressed as a response of a blackbody temperature ratio to variations in photon pressure (Eq. 14). Note the qualitative striking similarity of shapes between these curves and the one portrayed in Figure 4 depicting the new planetary temperature model (Eq. 10a).
T/To = (p/po)^(R/cp)    (13)
Here, To and po are reference values for temperature and pressure typically measured at the surface, while T and p are the corresponding scalars in the free atmosphere, R is the molar gas constant, and cp is the molar heat capacity of air (J mol⁻¹ K⁻¹). For the Earth's atmosphere, R/cp=0.286. Equation (13) essentially describes the direct effect of pressure p on gas temperature (T) in the absence of any heat exchange with the surrounding environment.
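As a quick illustration of Eq. (13), the following snippet (with an arbitrary surface reference of To = 288 K at po = 100 kPa) computes the dry adiabatic temperature at a few pressure levels:

To, po = 288.0, 100.0                            # reference temperature (K) and pressure (kPa)
for p in (100.0, 70.0, 50.0):                    # pressure levels in the free atmosphere, kPa
    print(p, round(To * (p / po) ** 0.286, 1))   # -> 288.0, 260.1, 236.2 K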
Equation (10a) is structurally similar to Eq. (13) in the sense that both expressions relate a temperature ratio to a pressure ratio or, more precisely, a relative thermal enhancement to a ratio of physical forces. However, while the Poisson formula typically produces 0 ≤ T/To ≤ 1.0, Eq. (10a) always yields Ts/Tna ≥ 1.0. The key difference between the two models stems from the fact that Eq. (13) describes vertical temperature changes in a free and dry atmosphere induced by a gravity-controlled pressure gradient, while Eq. (10a) predicts the equilibrium response of a planet's global surface air temperature to variations in total atmospheric pressure. In essence, Eq. (10b) could be viewed as a predictor of the reference temperature To in the Poisson formula. Thus, while qualitatively similar, Equations (10a) and (13) are quantitatively rather different. Both functions describe effects of pressure on temperature but in the context of disparate physical systems. Therefore, estimates obtained from Eq. (10a) should not be confused with results inferred from the Poisson formula. For example, Eq. (10b) cannot be expected to predict the temperature lapse rate and/or vertical temperature profiles within a planetary atmosphere as could be done using Eq. (13). Furthermore, Eq. (10a) represents a top-down empirical model that implicitly accounts for a plethora of thermodynamic and radiative processes and feedbacks operating in real climate systems, while the Poisson formula (derived from the Ideal Gas Law) only describes pressure-induced temperature changes in a simple mixture of dry gases without any implicit or explicit consideration of planetary-scale mechanisms such as latent heat transport and cloud radiative forcing (Figure 7).
Equation (10a) also shows remarkable similarity to the SB law relating the equilibrium skin temperature of an isothermal blackbody (Tb, K) to the electromagnetic radiative flux (I, W m-2) absorbed/emitted by the body's surface, i.e., Tb = (I/σ)^0.25. Dividing each side of this fundamental relationship by the irreducible temperature of deep Space, Tc = 2.725 K, and its causative CMBR, Rc = 3.13 × 10⁻⁶ W m-2, respectively, yields Tb/Tc = (I/Rc)^0.25. Further, expressing the radiative fluxes I and Rc on the right-hand side as products of photon pressure and the speed of light (c, m s-1) in a vacuum, i.e., I = c Pph and Rc = c Pc, leads to the following alternative form of the SB law:
Tb/Tc = (Pph/Pc)^0.25    (14)
where Pc = 1.043 × 10⁻¹⁴ Pa is the photon pressure of the CMBR. Clearly, Eq. (10a) is analogous to Eq. (14), while the latter is structurally identical to the Poisson formula (13). Figure 7b depicts Eq. (14) as a dependence of the Tb/Tc ratio on photon pressure Pph.
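A short numerical check (illustrative, with an arbitrary flux of 342 W m-2) confirms that the photon-pressure form (Eq. 14) and the standard SB form give the same blackbody temperature:

SIGMA, C_LIGHT = 5.670e-8, 2.998e8   # SB constant (W m-2 K-4) and speed of light (m s-1)
Tc, Pc = 2.725, 1.043e-14            # CMBR temperature (K) and photon pressure (Pa)
I = 342.0                            # arbitrary absorbed/emitted flux, W m-2
Pph = I / C_LIGHT                    # photon pressure of that flux, Pa
print(Tc * (Pph / Pc) ** 0.25)       # Eq. (14): ~278.7 K
print((I / SIGMA) ** 0.25)           # standard SB form: ~278.7 K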
It is evident from Figures 4 and 7 that formulas (10a), (13) and (14) describe qualitatively very similar responses in quantitatively vastly different systems. The presence of such similar relations in otherwise disparate physical systems can fundamentally be explained by the fact that pressure as a force per unit area represents a key component of the internal kinetic energy (defined as a product of gas volume and pressure), while temperature is merely a physical manifestation of this energy. Adding a force such as gas pressure to a physical system inevitably boosts the internal kinetic energy and raises its temperature, a process known in thermodynamics as compression heating. The direct effect of pressure on a system’s temperature is thermodynamically described by adiabatic processes. The pressure-induced thermal enhancement on a planetary level portrayed in Figure 4 and accurately quantified by Eq. (10a or 11) is analogous to a compression heating, but not fully identical to an adiabatic process. The latter is usually characterized by a limited duration and oftentimes only applies to finite-size parcels of air moving vertically through the atmosphere. Equation (11), on the other hand, describes a surface thermal effect that is global in scope and permanent in nature as long as an atmospheric mass is present within the planet’s gravitational field. Hence, the planetary RATE (Ts/Tna ratio) could be understood as a net result of countless simultaneous adiabatic processes continuously operating in the free atmosphere. Figures 4 and 7 also suggest that the pressure control of temperature is a universal thermodynamic principle applicable to systems ranging in complexity from a simple isothermal blackbody absorbing a homogeneous flux of electromagnetic radiation to diverse planetary atmospheres governed by complex non-linear process interactions and cloud-radiative feedbacks. To our knowledge, this cross-scale similarity among various pressure-temperature relationships has not previously been identified and may provide a valuable new perspective on the working of planetary climates.
Nevertheless, important differences exist between Eq. (10a) and these other simpler pressure-temperature relations. Thus, while the Poisson formula and the SB radiation law can mathematically be derived from ‘first principles’ and experimentally tested in a laboratory, Eq. (10a) could neither be analytically deduced from known physical laws nor accurately simulated in a small-scale experiment. This is because Eq. (10a) describes an emergent macro-level property of planetary atmospheres representing the net result of myriad process interactions within real climate systems that are not readily computable using mechanistic (bottom-up) approaches adopted in climate models or fully reproducible in a laboratory setting.
Potential limitations of the planetary temperature model
Equation (10b) describes the long-term (30-year) equilibrium GMATs of planetary bodies and does not predict inter-annual global temperature variations caused by intrinsic fluctuations of cloud albedo and/or ocean heat uptake. Thus, the observed 0.82 K rise of Earth's global temperature since 1880 is not captured by our model, since this warming was likely not the result of an increased atmospheric pressure. Recent analyses of observed dimming and brightening periods worldwide [97-99] suggest that the warming over the past 130 years might have been caused by a decrease in global cloud cover and a subsequent increase in the absorption of solar radiation by the surface. Similarly, the mega-shift of Earth's climate from a 'hothouse' to an 'icehouse' evident in the sedimentary archives over the past 51 My cannot be explained by Eq. (10b) unless caused by a large loss of atmospheric mass and a corresponding significant drop in surface air pressure since the early Eocene. Pleistocene fluctuations of global temperature on the order of 3.0–8.0 K during the last 2 My revealed by multiple proxies [100] are also not predictable by Eq. (10b) if due to factors other than changes in total atmospheric pressure and/or TOA solar irradiance.
The current prevailing view, based mostly on theoretical considerations and results from climate models, is that the Pleistocene glacial-interglacial cycles have been caused by a combination of three forcing agents: Milankovitch orbital variations, changes in atmospheric concentrations of greenhouse gases, and a hypothesized positive ice-albedo feedback [101,102]. However, recent studies have shown that orbital forcing and the ice-albedo feedback cannot explain key features of the glacial-interglacial oscillations such as the observed magnitudes of global temperature changes, the skewness of the temperature response (i.e., slow glaciations followed by rapid meltdowns), and the mid-Pleistocene transition from a 41 Ky to a 100 Ky cycle length [103-107]. The only significant forcing remaining in the present paleo-climatological toolbox to explicate the Pleistocene cycles is variations in greenhouse-gas concentrations. Hence, it is difficult to explain, from the standpoint of the current climate theory, the high accuracy of Eq. (11) in describing the relative thermal effect of diverse planetary atmospheres without any consideration of greenhouse gases. If presumed forcing agents such as greenhouse-gas concentrations and the planetary albedo were indeed responsible for the observed past temperature dynamics on Earth, why did these agents not show up as predictors of contemporary planetary temperatures in our analysis as well? Could it be because these agents have not really been driving Earth's climate on geological time scales? We address the potential role of greenhouse gases in more detail below. Since the relationship portrayed in Figure 4 is undoubtedly real, our model results point toward the need to reexamine some fundamental climate processes thought to be well understood for decades. For example, we are currently testing a hypothesis that the Pleistocene glacial cycles might have been caused by variations in Earth's total atmospheric mass and surface air pressure. Preliminary results based on the ability of an extended version of our planetary model (simulating meridional temperature gradients) to predict the observed polar amplification during the Last Glacial Maximum indicate that such a hypothesis is not unreasonable. However, conclusive findings from this research will be discussed elsewhere.
According to the present understanding, Earth's atmospheric pressure has remained nearly invariant during the Cenozoic era (i.e., the last 65.5 My). However, this notion is primarily based on theoretical analyses [106], since there are currently no known geochemical proxies permitting a reliable reconstruction of past pressure changes in a manner similar to that provided by various temperature proxies such as isotopic oxygen 18, alkenones and TEX86 in sediments, and Ar-N isotope ratios and deuterium concentrations in ice. The lack of independent pressure proxies makes the assumption of a constant atmospheric mass throughout the Cenozoic a priori and thus questionable. Although this topic is beyond the scope of our study, allowing for the possibility that atmospheric pressure on Earth might have varied significantly over the past 65.5 My could open exciting new research avenues in Earth sciences in general and paleoclimatology in particular.
Role of greenhouse gases from the perspective of the new model
Our analysis revealed a poor relationship between GMAT and the amount of greenhouse gases in planetary atmospheres across a broad range of environments in the Solar System (Figures 1-3 and Table 5). This is a surprising result from the standpoint of the current Greenhouse theory, which assumes that an atmosphere warms the surface of a planet (or moon) via trapping of radiant heat by certain gases controlling the atmospheric infrared optical depth [4,9,10]. The atmospheric opacity to LW radiation depends on air density and gas absorptivity, which in turn are functions of total pressure, temperature, and greenhouse-gas concentrations [9]. Pressure also controls the broadening of infrared absorption lines in individual gases. Therefore, the higher the pressure, the larger the infrared optical depth of an atmosphere, and the stronger the expected greenhouse effect would be. According to the present climate theory, pressure only indirectly affects global surface temperature through the atmospheric infrared opacity and its presumed constraint on the planet’s LW emission to Space [9,107].
There are four plausible explanations for the apparent lack of a close relationship between GMAT and atmospheric greenhouse gases in our results: 1) the amounts of greenhouse gases considered in our analysis only refer to near-surface atmospheric compositions and do not describe the infrared optical depth of the entire atmospheric column; 2) the analysis lumped all greenhouse gases together and did not take into account differences in the infrared spectral absorptivity of individual gases; 3) the effect of atmospheric pressure on broadening the infrared gas absorption lines might be stronger in reality than simulated by current radiative-transfer models, so that total pressure overrides the effect of a varying atmospheric composition across a wide range of planetary environments; and 4) pressure, as a force per unit area, directly impacts the internal kinetic energy and temperature of a system in accordance with thermodynamic principles inferred from the Gas Law; hence, air pressure might be the actual physical causative factor controlling a planet's surface temperature rather than the atmospheric infrared optical depth, which merely correlates with temperature due to its co-dependence on pressure.
Based on this evidence, we argue that option #4 is the most likely reason for the poor predictive skill of greenhouse gases with respect to planetary GMATs revealed in our study (Figures 1-3). By definition, the infrared optical depth of an atmosphere is a dimensionless quantity that carries no units of force or energy [3,4,9]. Therefore, it is difficult to fathom, from a fundamental physics standpoint, how this non-dimensional parameter could increase the kinetic energy (and temperature) of the lower troposphere in the presence of free convection, provided that the latter dominates the heat transport in gaseous systems. Pressure, on the other hand, has the dimension of force per unit area and as such is intimately related to the internal kinetic energy of an atmosphere E (J), defined as the product of gas pressure (P, Pa) and gas volume (V, m3), i.e., E (J) = PV. Hence, the direct effect of pressure on a system's internal energy and temperature follows straight from fundamental parameter definitions in classical thermodynamics. Generally speaking, kinetic energy cannot exist without a pressure force. Even electromagnetic radiation has pressure.
In climate models, the effect of infrared optical depth on surface temperature is simulated by mathematically decoupling radiative transfer from convective heat exchange. Specifically, the LW radiative transfer is computed in these models without simultaneous consideration of sensible- and latent-heat fluxes in the solution matrix. Radiative transfer modules compute the so-called heating rates (K/day) strictly as a function of atmospheric infrared opacity, which under constant-pressure conditions solely depends on greenhouse-gas concentrations. These heating rates are subsequently added to the thermodynamic portion of climate models and distributed throughout the atmosphere. In this manner, the surface warming becomes a function of an increasing atmospheric infrared opacity. This approach to modeling radiative-convective energy transport rests on the principle of superposition, which is only applicable to linear systems, where the overall solution can be obtained as a sum of the solutions to individual system components. However, the integral heat transport within a free atmosphere is inherently nonlinear with respect to temperature. This is because, in the energy balance equation, radiant heat transfer is contingent upon power gradients of absolute temperatures, while convective cooling/heating depends on linear temperature differences in the case of sensible heat flux and on simple vapor pressure gradients in the case of latent heat flux [4]. The latent heat transport is in turn a function of a solvent's saturation vapor pressure, which increases exponentially with temperature [3]. Thus, the superposition principle cannot be employed in energy budget calculations. The artificial decoupling between radiative and convective heat-transfer processes adopted in climate models leads to mathematically and physically incorrect solutions with regard to surface temperature. The LW radiative transfer in a real climate system is intimately intertwined with turbulent convection/advection, as both transport mechanisms occur simultaneously. Since convection (and especially the moist one) is orders of magnitude more efficient in transferring energy than LW radiation [3,4], and because heat preferentially travels along the path of least resistance, a properly coupled radiative-convective algorithm of energy exchange will produce quantitatively and qualitatively different temperature solutions in response to a changing atmospheric composition than the ones obtained by current climate models. Specifically, a correctly coupled convective-radiative system will render the surface temperature insensitive to variations in the atmospheric infrared optical depth, a result indirectly supported by our analysis as well. This topic requires further investigation beyond the scope of the present study.
The direct effect of atmospheric pressure on the global surface temperature has received virtually no attention in climate science thus far. However, the results from our empirical data analysis suggest that it deserves a serious consideration in the future.
Theoretical implications of the new interplanetary relationship
The pressure-temperature relationship discovered herein, quantified by Eq. (10a) and depicted in Figure 4, has broad theoretical implications that can be summarized as follows:
Physical nature of the atmospheric 'greenhouse effect': According to Eq. (10b), the heating mechanism of planetary atmospheres is analogous to a gravity-controlled adiabatic compression acting upon the entire surface. This means that the atmosphere does not function as an insulator reducing the rate of the planet's infrared cooling to space as presently assumed [9,10], but instead adiabatically boosts the kinetic energy of the lower troposphere beyond the level of solar input through gas compression. Hence, the physical nature of the atmospheric 'greenhouse effect' is a pressure-induced thermal enhancement independent of atmospheric composition. This mechanism is fundamentally different from the hypothesized 'trapping' of LW radiation by atmospheric trace gases first proposed in the 19th century and presently forming the core of the Greenhouse climate theory. However, a radiant-heat trapping by freely convective gases has never been demonstrated experimentally. We should point out that the herein deduced adiabatic (pressure-controlled) nature of the atmospheric thermal effect rests on an objective analysis of vetted planetary observations from across the Solar System and is backed by proven thermodynamic principles, while the 'trapping' of LW radiation by an unconstrained atmosphere surmised by Fourier, Tyndall and Arrhenius in the 1800s was based on a theoretical conjecture. The latter was later coded into algorithms that describe the surface temperature as a function of atmospheric infrared optical depth (instead of pressure) by artificially decoupling radiative transfer from convective heat exchange. Note also that the Ideal Gas Law (PV=nRT) forming the basis of atmospheric physics is indifferent to the gas chemical composition.
Effect of pressure on temperature: Atmospheric pressure provides in and of itself only a relative thermal enhancement (RATE) to the surface quantified by Eq. (11). The absolute thermal effect of an atmosphere depends on both pressure and the TOA solar irradiance. For example, at a total air pressure of 98.55 kPa, Earth’s RATE is 1.459, which keeps our planet 90.4 K warmer in its present orbit than it would be in the absence of an atmosphere. Hence, our model fully explains the new ~90 K estimate of Earth’s atmospheric thermal effect derived by Volokin et al. [1] using a different line of reasoning. If one moves Earth to the orbit of Titan (located at ~9.6 AU from the Sun) without changing the overall pressure, our planet’s RATE will remain the same, but the absolute thermal effect of the atmosphere would drop to about 29.2 K due to a vastly reduced solar flux. In other words, the absolute effect of pressure on a system’s temperature depends on the background energy level of the environment. This implies that the absolute temperature of a gas may not follow variations of pressure if the gas energy absorption changes in opposite direction to that of pressure. For instance, the temperature of Earth’s stratosphere increases with altitude above the tropopause despite a falling air pressure, because the absorption of UV radiation by ozone steeply increases with height, thus offsetting the effect of a dropping pressure. If the UV absorption were constant throughout the stratosphere, the air temperature would decrease with altitude.
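The Earth-at-Titan's-orbit example above is simple arithmetic once the RATE and the Tna scaling are fixed; a hedged sketch using the numbers quoted in this paragraph:

RATE = 1.459                                          # Earth's relative enhancement at 98.55 kPa (from the text)
S_EARTH, R_TITAN = 1360.9, 9.6                        # TOA irradiance (W m-2) and Titan's orbital distance (AU)
t_na_here = 32.44 * S_EARTH ** 0.25                   # airless reference temperature in Earth's present orbit, ~197 K
t_na_there = 32.44 * (S_EARTH / R_TITAN**2) ** 0.25   # same planet moved to ~9.6 AU
print(round((RATE - 1.0) * t_na_here, 1))             # absolute atmospheric effect now: ~90.4 K
print(round((RATE - 1.0) * t_na_there, 1))            # absolute effect at 9.6 AU: ~29.2 K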
Atmospheric back radiation and surface temperature: Since (according to Eq. 10b) the equilibrium GMAT of a planet is largely determined by the TOA solar irradiance and surface atmospheric pressure, the down-welling LW radiation appears to be globally a product of the air temperature rather than a driver of the surface warming. In other words, on a planetary scale, the so-called back radiation is a consequence of the atmospheric thermal effect rather than a cause for it. This explains the broad variation in the size of the observed down-welling LW flux among celestial bodies irrespective of the amount of absorbed solar radiation. Therefore, a change in this thermal flux brought about by a shift in atmospheric LW emissivity cannot be expected to impact the global surface temperature. Any variation in the global infrared back radiation caused by a change in atmospheric composition would be compensated for by a corresponding opposite shift in the intensity of the vertical convective heat transport. Such a balance between changes in atmospheric infrared heating and the upward convective cooling at the surface is required by the First Law of Thermodynamics. However, current climate models do not simulate this compensatory effect of sensible and latent heat fluxes due to an improper decoupling between radiative transfer and turbulent convection in the estimation of total energy exchange.
Role of planetary albedos: The fact that Eq. (10b) accurately describes planetary GMATs without explicitly accounting for the observed broad range of albedos, i.e., from 0.136 to 0.9 (Table 2), indicates that the shortwave reflectivity of planetary atmospheres is mostly an intrinsic property (a byproduct) of the climate system itself rather than an independent driver of climate as currently believed. In other words, it is the internal energy of the atmosphere maintained by solar irradiance and air pressure that controls the bulk of the albedo. An indirect support for this unorthodox conclusion is provided by the observation that the amounts of absorbed shortwave radiation determined by albedos show no physically meaningful relationship with planetary GMATs. For example, data in Table 2 indicate that Venus absorbs 3.7 times less solar energy per unit area than Earth, yet its surface is about 450 K hotter than that of Earth; the Moon receives on average 54 W m-2 more net solar radiation than Earth, but it is about 90 K cooler on average than our planet. The hereto proposed passive nature of planetary albedos does not imply that the global cloud cover could not be influenced by an external forcing such as solar wind, galactic cosmic rays, and/or gravitational fields of other celestial objects. Empirical evidence strongly suggests that it can [108-113], but the magnitude of such influences is expected to be small compared to the total albedo due to the presence of stabilizing negative feedbacks within the system. We also anticipate that the sensitivity of GMATs to an albedo change will greatly vary among planetary bodies. Viewing the atmospheric reflectivity as a byproduct of the available internal energy rather than a driver of climate can also help explain the observed remarkable stability of Earth’s albedo [54,114].
Climate stability: Our semi-empirical model (Equations 4a, 10b and 11) suggests that, as long as the mean annual TOA solar flux and the total atmospheric mass of a planet are stationary, the equilibrium GMAT will remain stable. Inter-annual and decadal variations of global temperature forced by fluctuations of cloud cover, for example, are expected to be small compared to the magnitude of the background atmospheric warming because of strong negative feedbacks limiting the albedo changes. This implies a relatively stable climate for a planet such as Earth absent significant shifts in the total atmospheric mass and the planet’s orbital distance to the Sun. Hence, planetary climates appear to be free of tipping points, i.e., functional states fostering rapid and irreversible changes in the global temperature as a result of hypothesized positive feedbacks thought to operate within the system. In other words, our results suggest that the Earth’s climate is well buffered against sudden changes.
Effect of oceans and water vapor on global temperature: The new model shows that the Earth's global equilibrium temperature is a part of a cosmic thermodynamic continuum controlled by atmospheric pressure and total solar irradiance. Since our planet is the only one among the studied celestial bodies harboring a large quantity of liquid water on the surface, Eq. (10b) implies that the oceans play virtually no role in determining Earth's GMAT. This finding may sound inexplicable from the standpoint of the radiative Greenhouse theory, but it follows logically from the new paradigm of a pressure-induced atmospheric warming. The presence of liquid water on the surface of a planet requires an air pressure greater than 612 Pa and an ambient temperature above 273.2 K. These conditions are provided by the planet's size and gravity, its distance to the Sun, and the mass of the atmosphere. Hence, the water oceans on Earth seem to be a thermodynamic consequence of particular physical conditions set by cosmic arrangements rather than an active controller of the global climate. Similarly, the hydrocarbon lakes on the surface of Titan [115,116] are the result of a high atmospheric pressure and an extremely cold environment found on that moon. Thus, our analysis did not reveal evidence for the existence of a feedback between planetary GMAT and a precipitable liquid solvent on the surface as predicted by the current climate theory. Consequently, the hypothesized runaway greenhouse, which requires a net positive feedback between global surface temperature and the atmospheric LW opacity controlled by water vapor [117], appears to be a model artifact rather than an actual physical possibility. Indeed, as illustrated in Figure 4, the hot surface temperature of Venus, often cited as a product of a 'runaway greenhouse' scenario [117,118], fits perfectly within the pressure-dependent climate continuum described by Equations (10b) and (11).

Model Application and Validation

Encouraged by the high predictive skill and broad scope of validity of Model 12 (Figure 2f), we decided to apply Eq. (10b) to four celestial bodies spanning the breadth of the Solar System, i.e., Mercury, Europa, Callisto and Pluto, whose global surface temperatures are not currently known with certainty. Each body is the target of either ongoing or planned robotic exploration missions scheduled to provide surface thermal data among other observations, thus offering an opportunity to validate our planetary temperature model against independent measurements.
The MESSENGER spacecraft launched in 2004 completed the first comprehensive mapping of Mercury in March 2013 (https://messenger.jhuapl.edu/). Among other things, the spacecraft also took infrared measurements of the planet's surface using a dedicated spectrometer [119]; these data should soon become available. The New Horizons spacecraft launched in January 2006 [120] reached Pluto in July of 2015 and performed a thermal scan of the dwarf planet during a flyby. The complete dataset from this flyby was received on Earth in October of 2016 and is currently being analyzed. A proposed joint Europa-Jupiter System Mission by NASA and the European Space Agency is planned to study the Jovian moons after 2020. It envisions exploring Europa's physical and thermal environments both remotely via a NASA Orbiter and in situ by a Europa Lander [121].
All four celestial bodies have somewhat eccentric orbits around the Sun. However, while Mercury's orbital period is only 88 Earth days, Europa and Callisto circumnavigate the Sun once every 11.9 Earth years, while Pluto takes 248 Earth years. The atmospheric pressure on Pluto is believed to vary between 1.0 and 4.0 Pa over the course of its orbital period as a function of insolation-driven sublimation of nitrogen and methane ices on the surface [122]. Each body's temperature was evaluated at three orbital distances from the Sun: aphelion, perihelion, and the semi-major axis. Since Mercury, Europa and Callisto harbor tenuous atmospheres (P << 10⁻² Pa), the reference temperature Tna in Eq. (10b) must be calculated from Eq. (4a), which requires knowledge of the actual values of αe, ηe, and Rg. We assumed that Mercury had Rg=0.0 W m-2, αe=0.068 [123] and Moon-like thermo-physical properties of the regolith (ηe=0.00971). Input data for Europa and Callisto were obtained from Spencer et al. [124] and Moore et al. [125], respectively. Specifically, to calculate ηe and Rg for these moons we utilized equatorial temperature data provided by Spencer et al. [124] in their Figure 1 and by Moore et al. [125] in a figure of their study, along with a theoretical formula for computing the average nighttime surface temperature T at the equator based on the SB law, i.e.,
T = [ (S(1−α)ηe + Rg) / (εσ) ]^0.25    (15)
where S(1−α)ηe is the absorbed solar flux (W m-2) stored as heat in the subsurface, ε is the surface LW emissivity, and σ is the SB constant. The geothermal heat flux on Europa is poorly known. However, based on thermal observations of Io reported by Veeder et al. [126], we assumed Rg=2.0 W m-2 for Europa. Using S=50.3 W m-2, an observed nighttime equatorial temperature T=90.9 K and an observed average night-side albedo α=0.58 [124], we solved Eq. (15) for the surface heat storage fraction to obtain ηe=0.085 for Europa. A similar computational procedure was employed for Callisto using α=0.11 and equatorial surface temperature data in Moore et al. [125]. This produced Rg=0.5 W m-2 and ηe=0.057. Using these values in Eq. (15) correctly reproduced Callisto's nighttime equatorial surface temperature of ≈ 86.0 K. The much higher ηe estimates for Europa and Callisto compared to ηe=0.00971 for the Moon can be explained by the large water-ice content on the surface of these Galilean moons. Europa is almost completely covered by a thick layer of water ice, which has a much higher thermal conductivity than dry regolith. Also, sunlight penetrates deeper into ice than it does into powdered regolith. All this enables a much larger fraction of the absorbed solar radiation to be stored in the subsurface as heat and later released at night, boosting the nighttime surface temperatures of these moons. Volokin et al. [1] showed that the GMAT of airless bodies is highly sensitive to ηe.
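The inversion of Eq. (15) for ηe described above can be written in a few lines of Python; the surface LW emissivity ε is not quoted in this excerpt, so the value of 0.98 used below is an assumption that happens to reproduce the reported ηe figures:

SIGMA = 5.670373e-8                             # SB constant, W m-2 K-4

def eta_e(T_night, S, albedo, Rg, eps=0.98):    # eps = assumed surface LW emissivity
    """Invert Eq. (15): ηe = (ε σ T^4 - Rg) / (S (1 - α))."""
    return (eps * SIGMA * T_night**4 - Rg) / (S * (1.0 - albedo))

print(round(eta_e(90.9, 50.3, 0.58, 2.0), 3))   # Europa   -> 0.085
print(round(eta_e(86.0, 50.3, 0.11, 0.5), 3))   # Callisto -> 0.057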
Table 6 lists the average global surface temperatures of the four celestial bodies predicted by Eq. (10b) along with the employed input data. According to our model, Mercury is about 117 K cooler on average than NASA’s current estimate of 440 K [32], which is based on Eq. (3) and does not represent a spherically averaged surface temperature [1]. Our prediction of Europa’s GMAT, 99.4 K, agrees well with the ≈ 100 K estimate reported for this moon by Sotin et al. [127]. Our estimate of Pluto’s average surface temperature at perihelion (38.6 K) is similar to the mean temperature computed for that dwarf planet by Olkin et al. [124] using a mechanistic model of nitrogen ice volatilization at the surface. Stern et al. [128] and Gladstone et al. [94] reported initial results from the flyby observations of Pluto taken by the Radio Experiment (REX) instrument aboard the New Horizons spacecraft in July 2015, when the dwarf planet was approximately at 32.9 AU from the Sun. Using the observed surface pressure of 1.05 ± 0.1 Pa (10.5 ± 1 μbar) our model predicts an average global temperature of 36.7 K for Pluto. Stern et al. [128] reported a near-surface temperature of ≈ 38 K. However, this value was calculated from pre-flyby global brightness measurements rather than derived via spherical integration of spatially resolved surface temperatures (Stern, personal communication). Since global brightness temperatures tend to be higher than spherically averaged kinetic surface temperatures [1], our model prediction may well be within the uncertainty of Pluto’s true global temperature. We will know more about this in 2017 when spatially resolved thermal measurements obtained by New Horizons become available.
Predicted average global surface temperature at specific orbital distances from the Sun (last three columns):

Body       Surface pressure (Pa)   αe       ηe        Rg (W m-2)   Aphelion               Semi-major axis        Perihelion
Mercury    5 × 10⁻¹⁰               0.068    0.00971   0.0          296.8 K (0.459 AU)     323.3 K (0.387 AU)     359.5 K (0.313 AU)
Europa     10⁻⁷                    0.62     0.085     2.0          98.1 K (5.455 AU)      99.4 K (5.203 AU)      100.7 K (4.951 AU)
Callisto   7.5 × 10⁻⁷              0.11     0.057     0.5          101.2 K (5.455 AU)     103.2 K (5.203 AU)     105.4 K (4.951 AU)
Pluto      1.05                    0.132    0.00971   0.0          30.0 K (49.310 AU)     33.5 K (39.482 AU)     38.6 K (29.667 AU)
Table 6: Average global surface temperatures predicted by Eq. (10b) for Mercury, Europa, Callisto and Pluto. Input data on orbital distances (AU) and total atmospheric pressure (Pa) were obtained from the NASA Solar System Exploration [48] website, the NASA Planetary Factsheet [32] and Gladstone et al. [94]. Solar irradiances required by Eq. (10b) were calculated from the reported orbital distances as explained in the text. Values of αe, ηe and Rg for Europa and Callisto were estimated from observed data by Spencer et al. [124] and Moore et al. [125], respectively (see text for details).
One should use caution when comparing results from Eq. (10b) to remotely sensed ‘average temperatures’ commonly quoted for celestial bodies with tenuous atmospheres such as the moons of Jupiter and Neptune. Studies oftentimes report the so-called ‘brightness temperatures’ retrieved at specific wavelengths that have not been subjected to a proper spherical integration. As pointed out by Volokin et al. [1], due to Hölder’s inequality between integrals, calculated brightness temperatures of spherical objects can be significantly higher than actual mean kinetic temperatures of the surface. Since Eq. (10b) yields spherically averaged temperatures, its predictions for airless bodies are expected to be lower than the disk-integrated brightness temperatures typically quoted in the literature.
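A toy example illustrates the Hölder effect mentioned above: for a sphere with hypothetical 300 K and 100 K hemispheres, the disk-integrated (brightness-like) temperature exceeds the true mean kinetic temperature by more than 50 K.

day, night = 300.0, 100.0                         # hypothetical hemisphere temperatures, K
mean_kinetic = (day + night) / 2.0                # spherically averaged kinetic temperature: 200 K
brightness = ((day**4 + night**4) / 2.0) ** 0.25  # flux-weighted (brightness-like) temperature
print(mean_kinetic, round(brightness, 1))         # -> 200.0 vs. ~253.0 K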

Conclusion

For 190 years the atmosphere has been thought to warm Earth by absorbing a portion of the outgoing LW infrared radiation and reemitting it back toward the surface, thus augmenting the incident solar flux. This conceptualized continuous absorption and downward reemission of thermal radiation, enabled by certain trace gases known to be transparent to solar rays but opaque to long electromagnetic wavelengths, has been likened to the trapping of heat by glass greenhouses, hence the term 'atmospheric greenhouse effect'. Of course, we now know that real greenhouses preserve warmth not by trapping infrared radiation but by physically obstructing the convective heat exchange between a greenhouse interior and the exterior environment. Nevertheless, the term 'greenhouse effect' stuck in science.
The hypothesis that a freely convective atmosphere could retain (trap) radiant heat due to its opacity has remained undisputed since its introduction in the early 1800s even though it was based on a theoretical conjecture that has never been proven experimentally. It is important to note in this regard that the well-documented enhanced absorption of thermal radiation by certain gases does not imply an ability of such gases to trap heat in an open atmospheric environment. This is because, in gaseous systems, heat is primarily transferred (dissipated) by convection (i.e., through fluid motion) rather than radiative exchange. If gases of high LW absorptivity/emissivity such as CO2, methane and water vapor were indeed capable of trapping radiant heat, they could be used as insulators. However, practical experience has taught us that thermal radiation losses can only be reduced by using materials of very low IR absorptivity/emissivity and correspondingly high thermal reflectivity such as aluminum foil. These materials are known among engineers at NASA and in the construction industry as radiant barriers [129]. It is also known that high-emissivity materials promote radiative cooling. Yet, all climate models proposed since the 1800s were built on the premise that the atmosphere warms Earth by limiting the radiant heat losses of the surface through the action of IR-absorbing gases aloft.
If a trapping of radiant heat occurred in Earth’s atmosphere, the same mechanism should also be expected to operate in the atmospheres of other planetary bodies. Thus, the Greenhouse concept should be able to mathematically describe the observed variation of average planetary surface temperatures across the Solar System as a continuous function of the atmospheric infrared optical depth and solar insolation. However, to our knowledge, such a continuous description (model) does not exist. Furthermore, measured magnitudes of the global down-welling LW flux on planets with thick atmospheres such as Earth and Venus indicate that the lower troposphere of these bodies contains internal kinetic energy far exceeding the solar input [6,12,14]. This fact cannot be explained via re-radiation of absorbed outgoing thermal emissions by gases known to supply no additional energy to the system. The desire to explicate the sizable energy surplus evident in the tropospheres of some terrestrial planets provided the original impetus for this research.
We combined high-quality planetary data from the last three decades with the classical method of dimensional analysis to search for an empirical model that might accurately and meaningfully describe the observed variation of global surface temperatures throughout the Solar System while also providing a new perspective on the nature of the atmospheric thermal effect. Our analysis revealed that the equilibrium global surface temperatures of rocky planets with tangible atmospheres and a negligible geothermal surface heating can reliably be estimated across a wide range of atmospheric compositions and radiative regimes using only two forcing variables: TOA stellar irradiance and total surface atmospheric pressure (Eq. 10b with Tna computed from Eq. 4c). Furthermore, the relative atmospheric thermal enhancement (RATE), defined as a ratio of the planet's actual global surface temperature to the temperature it would have had in the absence of an atmosphere, is fully explicable by the surface air pressure alone (Eq. 10a and Figure 4). At the same time, greenhouse-gas concentrations and/or partial pressures did not show any meaningful relationship to surface temperatures across the broad span of planetary environments considered in our study (see Figures 1 and 2 and Table 5).
Based on statistical criteria including numerical accuracy, robustness, dimensional homogeneity and a broad environmental scope of validity, the new relationship (Figure 4) quantified by Eq. (10a) appears to describe an emergent macro-level thermodynamic property of planetary atmospheres heretofore unknown to science. The physical significance of this empirical model is further supported by its striking qualitative resemblance to the dry adiabatic temperature curve described by the Poisson formula (Eq. 13) and to the photon-pressure form of the SB radiation law (Eq. 14). Similar to these well-known kinetic relations, Eq. (10a) also predicts the direct effect of pressure on temperature, albeit in the context of a different macro-physical system. To our knowledge, this is the first model accurately describing the average surface temperatures of planetary bodies throughout the Solar System in the context of a thermodynamic continuum using a common set of drivers.
The planetary temperature model consisting of Equations (4a), (10b) and (11) has several fundamental theoretical implications, i.e.,
• The ‘greenhouse effect’ is not a radiative phenomenon driven by the atmospheric infrared optical depth as presently believed, but a pressure-induced thermal enhancement analogous to adiabatic heating and independent of atmospheric composition;
• The down-welling LW radiation is not a global driver of surface warming as hypothesized for over 100 years but a product of the near-surface air temperature controlled by solar heating and atmospheric pressure;
• The albedo of planetary bodies with tangible atmospheres is not an independent driver of climate but an intrinsic property (a byproduct) of the climate system itself. This does not mean that the cloud albedo cannot be influenced by external forcing such as solar wind or galactic cosmic rays. However, the magnitude of such influences is expected to be small due to the stabilizing effect of negative feedbacks operating within the system. This novel understanding explains the observed remarkable stability of planetary albedos;
• The equilibrium surface temperature of a planet is bound to remain stable (i.e., within ± 1 K) as long as the atmospheric mass and the TOA mean solar irradiance are stationary. Hence, Earth’s climate system is well buffered against sudden changes and has no tipping points;
• The proposed net positive feedback between surface temperature and the atmospheric infrared opacity controlled by water vapor appears to be a model artifact resulting from a mathematical decoupling of the radiative-convective heat transfer rather than a physical reality.
The findings reported herein point toward the need for a paradigm shift in our understanding of key macro-scale atmospheric properties and processes. The implications of the discovered planetary thermodynamic relationship (Figure 4, Eq. 10a) are fundamental in nature and require careful consideration by future research. We ask the scientific community to keep an open mind and to view the results presented herein as a possible foundation of a new theoretical framework for future exploration of climates on Earth and other worlds.
Appendices
Appendix A. Construction of the Dimensionless π Variables
Table 1 lists 6 generic variables (Ts, Tr, S, Px, Pr and ρx) composed of 4 fundamental dimensions: mass [M], length [L], time [T], and absolute temperature [Θ]. According to the Buckingham Pi theorem [27], this implies the existence of two dimensionless πi products per set. To derive the πi variables we employed the following objective approach. First, we hypothesized that a planet's GMAT (Ts) is a function of all 5 independent variables listed in Table 1, i.e.
Ts = f (Tr, S, Px, Pr, ρx)    (A.1)
This unknown function is described to a first approximation as a simple product of the driving variables raised to various powers, i.e.
Ts = Tr^a S^b Px^c Pr^d ρx^e    (A.2)
where a, b, c, d and e are rational numbers. In order to determine the power coefficients, Eq. (A.2) is cast in terms of physical dimensions of the participating variables, i.e.
[Θ]¹ = [Θ]^a [M T⁻³]^b [M L⁻¹ T⁻²]^c [M L⁻¹ T⁻²]^d [M L⁻³]^e    (A.3)
Satisfying the requirement for dimensional homogeneity of Eq. (A.2) implies that the sum of powers of each fundamental dimension must be equal on both sides of Eq. (A.3). This allows us to write four simultaneous equations (one per fundamental dimension) containing five unknowns, i.e.
Θ:  1 = a
M:  0 = b + c + d + e
L:  0 = -c - d - 3e
T:  0 = -3b - 2c - 2d    (A.4)
System (A.4) is underdetermined and has the following solution: a=1, b=2e, and c=-(3e+d). Note that, in the DA methodology, one oftentimes arrives at underdetermined systems of equations, simply because the number of independent variables usually exceeds the number of fundamental physical dimensions comprising such variables. However, this has no adverse effect on the derivation of the sought dimensionless πi products.
Substituting the above roots in Eq. (A.2) reduces the original five unknowns to two: d and e, i.e.
Ts = Tr S^(2e) Px^(-(3e+d)) Pr^d ρx^e    (A.5a)
These solution powers may now be assigned arbitrary values, although integers such as 0, 1 and -1 are preferable, for they offer the simplest solution leading to the construction of proper πi variables. Setting d=0 and e=-1 reduces Eq. (A.5a) to
Ts = Tr Px³/(S² ρx)    (A.5b)
providing the first pair of dimensionless products:
π1 = Ts/Tr   and   π2 = Px³/(S² ρx)    (A.6)
The second pair of πi variables emerges upon setting d=-1 and e=0 in Eq. (A.5a), i.e.
π1 = Ts/Tr   and   π2 = Px/Pr    (A.7)
Thus, the original function (A.1) consisting of six dimensioned variables has been reduced to a relationship between two dimensionless quantities, i.e., π1 = f(π2). This relationship must further be investigated through regression analysis.
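A minimal sketch, assuming sympy is available, that reproduces the solution of system (A.4) and the two exponent choices discussed above:

import sympy as sp

a, b, c, d, e = sp.symbols('a b c d e')
# Exponent-balance equations for the dimensions Θ, M, L and T (system A.4)
eqs = [sp.Eq(a, 1),
       sp.Eq(b + c + d + e, 0),
       sp.Eq(-c - d - 3*e, 0),
       sp.Eq(-3*b - 2*c - 2*d, 0)]
sol = sp.solve(eqs, [a, b, c], dict=True)[0]
print(sol)                                                   # {a: 1, b: 2*e, c: -d - 3*e}
print({k: v.subs({d: 0, e: -1}) for k, v in sol.items()})    # d=0, e=-1  ->  b=-2, c=3  (first π pair)
print({k: v.subs({d: -1, e: 0}) for k, v in sol.items()})    # d=-1, e=0  ->  b=0,  c=1  (second π pair, Px/Pr)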
Appendix B. Estimation of Mars’ GMAT and Surface Atmospheric Pressure
Although Mars is the third most studied planetary body in the Solar System after Earth and the Moon, there is currently no consensus among researchers regarding its mean global surface temperature (TM). TM values reported over the past 15 years span a range of 40 K. Examples of disparate GMATs quoted for the Red Planet include 200 K [79], 202 K [82,130], 210 K [32], 214 K [80], 215 K [6,81], 218 K [77], 220 K [76], 227 K [131] and 240 K [78]. The most frequently cited temperatures fall between 210 K and 220 K. However, a close examination of the available thermal observations reveals a high improbability for any of the above estimates to represent Mars’ true GMAT.
Figure B.1 depicts hourly temperature series measured at 1.5 m aboveground by Viking Landers 1 and 2 (VL1 and VL2, respectively) in the late 1970s [60]. The VL1 record covers about half of a Martian year, while the VL2 series extends to nearly 1.6 years. The VL1 temperature series captures a summer-fall season on a site located at about 1,500 m below Datum elevation in the subtropics of Mars' Northern Hemisphere (22.5° N). The arithmetic average of the series is 207.3 K (Figure B.1a). Since the record lacks data from the cooler winter-spring season, this value is likely higher than the actual mean annual temperature at that location. Furthermore, observations by the Hubble telescope from the mid-1990s indicated that the Red Planet may have cooled somewhat since the time of the Viking mission [132,133]. Because of a thin atmosphere and the absence of significant cloud cover and precipitable water, temperature fluctuations near the surface of Mars are tightly coupled to diurnal, seasonal and latitudinal variations in incident solar radiation. This causes sites located at the same latitude and equivalent altitudes to have similar annual temperature means irrespective of their longitudes [134]. Hence, one could reliably estimate a latitudinal temperature average on Mars using point observations from any elevation by applying an appropriate lapse-rate correction for the average terrain elevation of said latitude (Figure B.1).
Figure B.1: Near-surface hourly temperatures measured on Mars by (a) Viking Lander 1 at Chryse Planitia (22.48° N, 49.97° W, Elevation: -1,500 m); and (b) Viking Lander 2 at Utopia Planitia (47.97° N, 225.74° W, Elevation: -3,000 m) (Kemppinen et al. [60]; data downloaded from: https://www-k12.atmos.washington.edu/k12/resources/mars_data-information/data.html). Black dashed lines mark the arithmetic average (Tmean) of each series. Grey dashed lines highlight the range of most frequently reported GMAT values for Mars, i.e., 210–240 K. The average diurnal temperature can only exceed 210 K during the summer; hence, all Martian latitudes outside the Equator must have mean annual temperatures significantly lower than 210 K.
At 22.5° absolute latitude, the average elevation between the Northern and Southern Hemispheres on Mars is close to Datum level, i.e. about 1,500 m above the VL1 site. Adjusting the observed 207.3 K temperature average at VL1 to Datum elevation using a typical near-surface Martian lapse rate of -4.3 K km-1 [78] produces ~201 K for the average summer-fall temperature at that latitude. Since the mean surface temperature of a sphere is typically lower than its subtropical temperature average, we can safely conclude based on Figure B.1a that Mars’ GMAT is likely below 201 K. The mean temperature at the VL2 site, located at ~48° N latitude and 3,000 m below Datum elevation, is 191.1 K (Fig. B.1b). The average terrain elevation between the Northern and Southern Hemispheres at 48° absolute latitude is about -1,500 m. Upon adjusting the VL2 annual temperature mean to -1,500 m altitude using a lapse rate of -4.3 K km-1 we obtain 184.6 K. Since a planet’s GMAT numerically falls between the mean temperature of the Equator and that of 42° absolute latitude, the above calculations suggest that Mars’ GMAT is likely between 184 K and 201 K.
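The lapse-rate adjustment used above is a simple linear correction. A minimal sketch in Python, using the VL1 and VL2 values quoted in the text (the function name is ours, purely for illustration):

# Re-reference a Martian near-surface temperature to another elevation using
# the typical near-surface lapse rate of -4.3 K per km quoted in the text [78].
LAPSE_RATE = -4.3  # K per km of altitude gain

def adjust_to_elevation(t_obs_k, site_elev_km, target_elev_km):
    """Return the temperature adjusted from site_elev_km to target_elev_km."""
    return t_obs_k + LAPSE_RATE * (target_elev_km - site_elev_km)

print(adjust_to_elevation(207.3, -1.5, 0.0))   # VL1 to Datum: ~200.9 K (~201 K)
print(adjust_to_elevation(191.1, -3.0, -1.5))  # VL2 to -1,500 m: ~184.7 K (184.6 K)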
Figure B.2: Mean annual surface air temperatures at five Martian absolute latitudes (gray dots) estimated from data provided by the Viking Landers, the Curiosity Rover, and the Mars Global Surveyor Radio Science Team. Each dot represents a mean annual temperature corresponding to the average terrain elevation between the Northern and Southern Hemispheres for a particular latitude. The black curve represents a third-order polynomial (Eq. B.1) fitted through the latitudinal temperature means via non-linear regression. Mars’ GMAT, TM = 190.56 K (marked by a horizontal gray dashed line), was calculated via integration of polynomial (B.1) using formula (B.2).
A close examination of the Viking record also reveals that average diurnal temperatures above 210 K only occur on Mars during the summer season and, therefore, cannot possibly represent an annual mean for any Martian latitude outside the Equator. On the other hand, frequently reported values of Mars’ GMAT in excess of 210 K appear to be based on the theoretical expectation that a planet’s average surface temperature should exceed the corresponding effective radiating temperature produced by Eq. (3) [78,6], which is Te ≈ 212 K for Mars. This presumption is rooted in the a priori assumption that Te represents a planet’s average surface temperature in the absence of atmospheric greenhouse effect. However, Volokin et al. [1] have shown that, due to Hölder’s inequality between integrals, the mean physical temperature of a spherical body with a tenuous atmosphere is always lower than its effective radiating temperature computed from the globally integrated absorbed stellar flux. In other words, Eq. (3) yields non-physical temperatures for spheres. Indeed, based on results from a 3-D climate model Haberle [130] concluded that Mars’ mean global surface temperature is at least 8 K cooler than the planet’s effective radiating temperature. Therefore, Mars’ GMAT must be inferred from actual measurements rather than from theoretical calculations.
In order to obtain a reliable estimate of Mars’ GMAT, we calculated the mean annual temperatures at several Martian latitudes employing near-surface time series measured in-situ by the Viking Landers and the Curiosity Rover, and remotely by the Mars Global Surveyor (MGS) spacecraft. The Radio Science Team (RST) at Stanford University utilized radio occultation of MGS refraction data to retrieve seasonal time series of near-surface atmospheric temperature and pressure on Mars [61,62,135]. We utilized MGS-RST data obtained between 1999 and 2005. Calculated mean temperatures from in-situ measurements were adjusted to the corresponding average terrain elevations of the target latitudes using a lapse rate of -4.3 K km-1 [78]. Figure B.2 portrays the estimated Mean Annual near-surface Temperatures (MAT) at five absolute Martian latitudes (gray dots) along with their standard errors (vertical bars). The equatorial MAT was calculated from Curiosity Rover observations; temperatures at absolute latitudes 0.392 rad (22.48°) and 0.837 rad (47.97°) were derived from VL measurements, while those at latitudes 1.117 rad (64°) and 1.396 rad (80°) were estimated from MGS-RST data. The black curve represents a third-order polynomial fitted through the latitudinal temperature averages and described by the polynomial:
equation (B.1)
with L being the absolute latitude (rad). MAT values predicted by Eq. (B.1) for Mars’ Equatorial and Polar Regions agree well with independent near-surface temperatures remotely measured by the Mars Climate Sounder (MCS), a platform deployed after MGS in 2006 [136]. Shirley et al. [136] showed that, although separated in time by 2-5 years, MCS temperature profiles match quite well those retrieved by MGS-RST, especially in the lower portion of the Martian atmosphere. Figures 2 and 3 of Shirley et al. [136] depict nighttime winter temperature profiles over Mars’ northern and southern Polar Regions, respectively, at about 75° absolute latitude. The average winter surface temperature between the two Hemispheres for this latitude is about 148.5 K. This compares favorably with the 156.4 K produced by Eq. (B.1) for 75° (1.309 rad) latitude, considering that MAT values are expected to be higher than winter temperature averages. Figures 4 and 5 of Shirley et al. [136] portray average temperature profiles retrieved by MGS-RST and MCS over lowlands (165° – 180° E) and highlands (240° – 270° E) of Mars’ equatorial region (8° N - 8° S), respectively. For highlands (≈5 km above Datum), the near-surface temperature appears to be around 200 K, while for lowlands (≈2.5 km below Datum) it is ≈211 K. Since most of Mars’ equatorial region lies above Datum, it is likely that Mars’ equatorial MAT is lower than 205.5 K and close to our independent estimate of ≈203 K based on Curiosity Rover measurements.
Mars’ GMAT (TM) was calculated via integration of polynomial (B.1) using the formula:
TM = ∫₀^(π/2) T(L) cos(L) dL                (B.2)
(Figure B.2), where 0 ≤ cos L ≤ 1 is a polar-coordinate area-weighting factor. The result is TM = 190.56 ± 0.7 K (Figure B.2). This estimate, while significantly lower than GMAT values quoted in recent publications, agrees quite well with spherically integrated brightness temperatures of Mars retrieved from remote microwave observations during the late 1960s and early 1970s [85-87]. Thus, according to Hobbs et al. [85] and Klein [86], the Martian mean global temperature (inferred from measurements at wavelengths between 1 and 21 cm) is 190 – 193 K. Our TM estimate is also consistent with the new mean surface temperature of the Moon (197.35 K) derived by Volokin et al. [1] using output from a validated NASA thermo-physical model [29]. Since Mars receives 57% less solar irradiance than the Moon and has a thin atmosphere that only delivers a weak greenhouse effect [9], it makes physical sense that the Red Planet would on average be cooler than the Moon (i.e. TM < 197.3 K). Moreover, if the average temperature of the Moon’s warmest latitude (the Equator) is 213 K as revealed by NASA Diviner observations [1,29], it is unlikely that Mars’ mean global temperature would be equal to or higher than 213 K as assumed by many studies [6,76-78,80,131].
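The integration in Eq. (B.2) is simply a cosine-weighted (area-weighted) average of the latitudinal MAT profile over 0 to π/2 rad. The sketch below illustrates the weighting only; the cubic coefficients are placeholders, not the fitted coefficients of Eq. (B.1), which are not reproduced here (assumes NumPy and SciPy are available):

# Area-weighted global mean of a latitudinal temperature profile T(L), following
# the cos(L) polar-coordinate weighting of Eq. (B.2). Placeholder coefficients only.
import numpy as np
from scipy.integrate import quad

def T_of_L(L, coeffs=(203.0, 5.0, -40.0, 10.0)):   # hypothetical cubic in L (rad)
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * L + a2 * L**2 + a3 * L**3

# GMAT = integral over 0..pi/2 of T(L) * cos(L) dL; the cos(L) weights integrate
# to exactly 1 over that interval.
gmat, _ = quad(lambda L: T_of_L(L) * np.cos(L), 0.0, np.pi / 2)
print(gmat)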
Published values of Mars’ average surface atmospheric pressure range from 600 Pa to 700 Pa [6,32,78,80,124,125]. Since this interval was too broad for the target precision of our study, we employed MGS-RST data retrieved from multiple latitudes and seasons between 1999 and 2005 to calculate a new mean surface air pressure for the Red Planet. Our analysis produced P = 685.4 ± 14.2 Pa, an estimate within the range of previously reported values.

Funding Sources

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

References


WSJ: Change Would Be Healthy at U.S. Climate Agencies, such as mentioning margin of error!
Publish: Sat 04 Feb 2017 - 1:39 PM
Website: THE HOCKEY SCHTICK
Twitter: @hockeyschtick1
Source: View Original

Change Would Be Healthy at U.S. Climate Agencies

In the Obama era, it was routine for press releases to avoid mentioning any margin of error.


Opinion Journal Video: Business World Columnist Holman Jenkins Jr. on why the Trump Administration should reform NOAA and NASA.

By Holman W. Jenkins, Jr.
It will be hard to notice when President Trump does something worthy of hysteria if everything he does is greeted with hysteria. Take claims that he’s laying siege to the alleged chastity of climate scientists. This is one subject where it might be wise not to rely on the reflexive media narrative. 
The year 2016 was the warmest ever recorded—so claimed two U.S. agencies, NASA’s Goddard Institute for Space Studies and the Commerce Department’s National Oceanic and Atmospheric Administration. Except it wasn’t, according to the agencies’ own measures of statistical uncertainty.
Such fudge is of fairly recent vintage. Leaving any discussion of the uncertainty interval out of press releases only became the norm in the second year of the Obama administration. Back when he was presenting the 2008 numbers, NASA’s James Hansen, no slouch in raising climate alarms, nevertheless made a point of being quoted saying such annual rankings can be “misleading because the difference in temperature between one year and another is often less than the uncertainty in the global average.”
Statisticians wouldn’t go through the trouble of assigning an uncertainty value unless it meant something. Two measurements separated by less than the margin of error are the same. And yet NASA’s Goddard Institute, now under Mr. Hansen’s successor Gavin Schmidt, put out a release declaring 2014 the “warmest year in the modern record” when it was statistically indistinguishable from 2005 and 2010.
Nowadays Goddard seems to mention confidence interval only when it’s convenient. So 2015, an El Niño year, was the warmest yet “with 94 percent certainty.” No confidence interval was cited one year later in proclaiming 2016 the new warmest year “since modern recordkeeping began.” In fact, the difference versus 2015 was a mere one-quarter of the margin of error.
Commerce’s NOAA makes a fetish of ignoring confidence interval in its ranking of the 12 warmest years. Yet when statistical discipline is observed, 2015 and 2016, the two El Niño years, are tied for warmest. And the years 1998, 2003, 2005, 2006, 2007, 2009, 2010, 2012, 2013 and 2014 are all tied for second warmest.
In other words, whatever the cause of warming in the 1980s and 1990s, no certain trend is observable since then.
Shall we posit a theory about all this? U.S. government agencies stopped mentioning uncertainty ranges because they wanted to engender a steady succession of headlines pronouncing the latest year unambiguously the hottest when it wasn’t necessarily so.
This doesn’t mean you should stop being concerned about a potential human impact on climate. But when government scientists deliberately seek to mislead, it’s a warning to raise your guard.
For instance, NOAA states its annual temperature estimate as an “anomaly” in relation to the 20th-century average. Do you really believe government scientists can reconstruct a global average temperature for years in the first half of the 20th century with sufficient accuracy to allow comparisons of 1/100ths of a degree?
You start to notice other things. The numbers keep changing. Years 2005 and 2010 were exactly tied in 2010, but now 2010 is slightly warmer, just enough to impart an upward slope to any graph that ignores statistical uncertainty.
Government scientists are undoubtedly ready with justifications for each of the countless retroactive adjustments they impose on the data, but are you quite sure they can be trusted?
Climate science is not a hoax. The U.S. government spends impressive sums to take the increasingly rigorous readings from which a global average temperature is distilled. But other countries like the U.K. and Japan also do sophisticated monitoring and end up with findings roughly similar to the findings of U.S. agencies, yet they don’t feel the need to lie about it. For instance, the U.K. Met Office headlined its 2016 report “one of the warmest two years on record.” A reader only had to progress to the third paragraph to discover that the difference over 2015 was one-tenth the margin of error.
President Trump is a complete novice, but presumably at some point he will climb the learning curve, gain control over his administration, and start making cagier decisions about which fights are worth having. Our guess is that fighting with his administration’s climate scientists won’t seem like much of a priority. And yet, given all the money U.S. taxpayers spend on climate science, a mental freshening wouldn’t be the worst thing. Goddard’s Mr. Schmidt, keeper of a snarling blog that makes frequent use of the slur “denier,” got his start at the New York City-based NASA science lab more than 20 years ago.
On the slight chance Mr. Trump does make such a move, keep something else in mind: Undifferentiated hysteria will apparently be the media reaction to every Trump action equally whether those actions are entirely justified or entirely indefensible.

Keeping Cool About Hot Temperatures
Publish: Tue 24 Jan 2017 - 11:25 AM
Website: THE HOCKEY SCHTICK
Twitter: @hockeyschtick1
Source: View Original

Keeping Cool About Hot Temperatures


Last year was warmer by 0.04 Celsius, but it was also an El Niño year.


By now you’ve seen the headline: 2016 was the hottest year on record. The news has been paired with predictions of civilization’s imminent demise. But a closer look at the evidence reveals that the political heat is overwrought—and there’s still no reason to re-engineer the global economy to mitigate small climate fluctuations.
The National Oceanic and Atmospheric Administration (NOAA) announced this week that last year was the warmest in the agency’s 137-year series, and that 2016 broke the previous record for the third consecutive year. This sounds alarming, until you read that 2016 edged out 2015 by a mere 0.04 degrees Celsius. That’s a fraction of the margin of error. Atmospheric data from satellites detected similarly small warming over previous years. In other words, no one really knows if last year was a record.
Here’s what we do know: 2015 and 2016 were major years for El Niño, a Pacific trade winds phenomenon known to produce temperature spikes. The Cato Institute’s Patrick Michaels has detailed in these pages how in 1998, another big El Niño year, average surface temperatures increased about a quarter-degree Fahrenheit and then dropped in the following years. That is similar to the increase in 2015—and by the end of 2016 temperatures were falling back toward 2014 levels. Even NOAA admits El Niño’s role.
The underreported news here is that the warming is not nearly as great as the climate-change computer models have predicted. As climatologist Judith Curry testified to Congress in 2014, U.N. Intergovernmental Panel on Climate Change simulations forecast surface temperatures to increase on average 0.2 degrees Celsius per decade in the early 21st century. The warming over the first 15 years was closer to 0.05 degrees Celsius. The models also can’t explain why more than 40% of the temperature increase since 1900 happened between 1910 and 1945, which accounts for only 10% of the increase in carbon emissions. 
These nuances are important because phrases such as “hottest year ever” are waved around as a pretext for political action that usually involves giving more control over the economy to governments. This is inevitably sold as urgently required to save the planet. 
But even these regulations, taxes and subsidies would do little to reverse global temperature trends, though they could reduce the economic growth and wealth creation needed to cope with the consequences of higher temperatures. That is true of all President Obama’s ministrations—from the Clean Power Plan to the Paris climate accord to subsidies for Al Gore’s green-energy portfolio.
The most inconvenient truth during the Obama years has been that the biggest cause of lower U.S. CO2 emissions has been the energy shift to natural gas from coal. Yet the climate-change lobby opposes fracking.
The Earth’s surface has warmed over the last century by close to a degree Celsius, and the trend bears watching. But the additional questions to consider are about future magnitudes and impact, and what if any policies would make a difference without doing serious economic harm. The best insurance against the risks of climate change is economic growth and innovation—more efficient batteries, for example.
But adding to human knowledge on climate requires a thorough airing and debate over the evidence. That won’t happen as long as alarmists continue to try to shut down debate by spinning doomsday tales about sizzling temperatures.

Solar activity, ocean cycles, & water vapor explain 98% of climate change since 1900, NOT CO2!
Publish: Tue 20 Dec 2016 - 9:48 AM
Website: THE HOCKEY SCHTICK
Twitter: @hockeyschtick1
Source: View Original



by Dan Pangburn, MSME


Summary

Thermalization and the complete dominance of water vapor in reverse-thermalization explain why atmospheric carbon dioxide (CO2) has no significant effect on climate. Reported average global temperature (AGT) since before 1900 is accurately (98% match with measured trend) explained by a combination of ocean cycles, sunspot number anomaly time-integral and increased atmospheric water vapor.



Introduction

The only way that energy can significantly leave earth is by thermal radiation. Only solid or liquid bodies and greenhouse gases (ghg) can absorb/emit in the wavelength range of terrestrial radiation. Non-ghg gases must transfer energy to ghg gases (or liquid or solid bodies) for this energy to be radiated.

The word ‘trend’ is used here for temperatures in two different contexts. To differentiate, α-trend is an approximation of the net of ocean surface temperature oscillations after averaging-out the year-to-year fluctuations in reported average global temperatures. The term β-trend applies to the slower average energy change of the planet which is associated with change to the average temperature of the bulk volume of the material (mostly ocean water) involved.

Some ocean cycles have been named according to the particular area of the oceans where they occur. Names such as PDO (Pacific Decadal Oscillation), ENSO (El Niño Southern Oscillation), and AMO (Atlantic Multi-decadal Oscillation) might be familiar. They report the temperature of the water near the surface. The average temperature of the bulk water that is participating in these oscillations cannot significantly change so quickly because of high thermal capacitance [1].

This high thermal capacitance absolutely prohibits the rapid (year-to-year) AGT fluctuations which have been reported from being a result of any credible forcing. According to one assessment [1], the time constant is about 5 years. A likely explanation for the reported year-to-year fluctuations is that they are stochastic phenomena in the overall process that has been used to determine AGT. A simple calculation shows the standard deviation of the reported annual average measurements to be about ±0.09 K with respect to the trend. The temperature fluctuations of the bulk volume near the surface of the planet are more closely represented by the fluctuations in the trend. The trend is a better indicator of the change in global energy, which is the difference between energy received and energy radiated.
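As a minimal sketch of that ±0.09 K calculation (the anomaly series below is synthetic; substitute the reported annual AGT anomalies):

# Standard deviation of annual anomalies about a smooth trend.
import numpy as np

years = np.arange(1895, 2016)
anomalies = 0.005 * (years - 1895) + np.random.normal(0.0, 0.09, years.size)  # synthetic

trend = np.polyval(np.polyfit(years, anomalies, 3), years)   # low-order trend fit
print(np.std(anomalies - trend))                             # scatter about the trend, K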

The kinetic theory of gases, some thermodynamics and the rudiments of quantum mechanics provide a rational explanation of what happens when ghg absorb photons of terrestrial thermal radiation.

Refutation of significant influence from CO2
There are multiple lines of evidence (most identified earlier [2]) that CO2 has no significant effect on climate:
1. In the late Ordovician Period, the planet plunged into and warmed up from the Andean/Saharan ice age, all at about 10 times the current CO2 level [3].
2. Over the Phanerozoic eon (last 542 million years) there is no correlation between CO2 level and AGT [3, 4].
3. During the last and previous glaciations AGT trend changed directions before CO2 trend [2].
4. Since AGT has been directly and accurately measured worldwide (since about 1895), AGT has exhibited both up and down trends while the CO2 trend has been only up [2].
5. Since about 2001, the measured atmospheric CO2 trend has continued to rise while the AGT trend has been essentially flat [21, 13].

Thermalization refutes CO2 influence on climate. (rev 10/21/16)
The relaxation time (the amount of time that passes between absorption and emission of a photon by a molecule) for CO2 in the atmosphere is about 6 µsec [5, 6]. The elapsed time between collisions of gaseous molecules at sea level average temperature and pressure is about 0.0002 µsec [7]. Thus, at sea level conditions, it is approximately 6/0.0002 = 30,000 times more likely that a CO2 molecule, after it has absorbed a photon, will bump into another molecule, losing at least part of the momentum and energy it acquired from the photon. After multiple collisions, essentially all of the added photonic energy becomes distributed among other molecules and the probability of the CO2 molecule emitting a photon at sea level conditions becomes negligible. The process of distributing the energy to other molecules is thermal conduction in the gas. The process of absorbing photons and conducting the absorbed energy to other molecules is thermalization. Thermalized energy carries no identity of the molecule that absorbed it.

Water vapor molecules can absorb (and emit) photons at hundreds of wavelengths in the wavelength range of significant terrestrial thermal radiation (nearly all in the range 6-100 microns), compared to only one (15 microns) for CO2 (the wavelength range of the single CO2 absorption band is broadened to about 14-16 microns at sea level due to pressure, etc., but the multiple absorb/emit wavelength bands for water vapor are equally broadened).

Reverse thermalization, in which the warmed, jostling molecules excite some molecules to emit a photon, goes almost entirely to water vapor molecules at sea level conditions. The reason is that the relaxation time of some water vapor rotational emission lines is about 0.5 µsec, compared to 6 µsec for CO2 molecules, and/or that water vapor offers thousands more ‘opportunities’ for emission.


Water vapor has more ‘opportunities’ for emission because there are about 35 times as many water vapor molecules in the atmosphere below about 5 km as there are CO2 molecules (See Figure 2) and each water vapor molecule has hundreds of emission bands compared to only one band for each CO2 molecule. Most, if not all, of the photons emitted by the water vapor molecules are at wavelengths different from the narrow band CO2 molecules can absorb. Effectively, energy absorbed by CO2 is rerouted to space via water vapor.

At very high altitudes, molecule spacing and the time between collisions increase to the point where reverse-thermalization to CO2 molecules becomes significant, as does radiation from them to space.



Figure 1 is a typical graph showing top-of-atmosphere (TOA) thermal radiation from the planet. The TOA radiation from different locations on the planet can be decidedly different, e.g. as shown in Figure 9 of Reference [8]. Figure 1, here, might be over a temperate ocean and thus typical for much of earth’s surface.


Figure 1: Terrestrial thermal radiation and absorption.

Approximately 98% of atmospheric molecules are non-ghg nitrogen and oxygen. They are substantially warmed by thermalization of the photonic energy absorbed by the ghg molecules.


Figure 2: Water vapor declines rapidly with altitude. [9] (original from NASA)

Thermalized energy carries no identity of the molecule that absorbed it. The thermalized energy warms the air, reducing its density and causing updrafts which are exploited by soaring birds, sailplanes, and occasionally hail. Updrafts are matched by downdrafts elsewhere, usually spread out but sometimes recognized by pilots and passengers as ‘air pockets’ and microbursts.

A common observation of thermalization by way of water vapor is that cloudless nights cool faster when the absolute water vapor content of the atmosphere is lower.

Jostling between gas molecules (observed as temperature and pressure) sometimes causes reverse-thermalization. At low to medium altitudes, EMR emission stimulated by reverse-thermalization is essentially all by way of water vapor.

At altitudes below about 10 km, a comparatively steep population gradient (decline with increasing altitude) in water vapor molecules favors outward radiation, with increasing amounts escaping directly to space. At higher altitudes, increased molecule spacing and the greatly diminished water vapor population favor reverse-thermalization to CO2. This is observed in the sharp peaks at the nominal absorb/emit wavelengths of non-condensing ghg (see Figure 1).

Thermalization results in an influence of CO2 on climate that is not significantly different from zero.


Environmental Protection Agency mistake
The US EPA asserts [10] Global Warming Potential (GWP) is a measure of “effects on the Earth's warming” with “Two key ways in which these [ghg] gases differ from each other are their ability to absorb energy (their "radiative efficiency"), and how long they stay in the atmosphere (also known as their "lifetime").” 

The EPA calculation overlooks the very real phenomenon of thermalization. Trace ghg (all ghg except water vapor) have no significant effect on climate because absorbed energy is immediately thermalized. 

Water vapor (Rev 8/26/16)
Water vapor is the ghg which makes earth warm enough for life as we know it. Increased atmospheric water vapor contributes to planet warming. Water vapor molecules are far more effective at absorbing terrestrial thermal radiation than CO2 molecules (even if thermalization did not eliminate CO2 as a significant warmer). Atmospheric water vapor has increased primarily (≈ 98%) as a result of increased irrigation, with comparatively tiny contributions from cooling towers at electricity generating facilities, and increased burning of hydrogen rich fossil fuels especially natural gas which is nearly all methane. Of course increased water vapor causes the planet to warm which further increases water vapor so there is a cumulative effect (in control system analysis as done by engineers, this is called feedback. The term ‘feedback’ has a somewhat different meaning to Climate Scientists). This cumulative effect also amplifies cooldowns. More water vapor in the atmosphere means more warming, probably acceleration of the hydrologic cycle and increased probability of floods. How much of recent flooding is simply bad luck in the randomness of weather and how much is because of the ‘thumb on the scale’ of added water vapor? Water vapor exhibits a logarithmic decline in effect of equal added increments (Fig. 3 of Ref. [12]).


Essentially all of the ghg effect on earth comes from water vapor. Clear-air water vapor measurements over the non-ice-covered oceans in the form of total precipitable water (TPW) have been made since about 1987 by Remote Sensing Systems (RSS) [11]. A graph of this measured ‘global’ average anomaly data, with a reference value of 28.73 added, is shown in the left graph of Figure 3. The trend of this data is extrapolated both earlier and later using the CO2 level as a proxy, with the expression TPW (kg/m^2) = 4.5118 * CO2(ppmv)^0.31286. The result is the right-hand graph of Figure 3. (The 1940-1950 flat exists in the Law Dome CO2 database.)
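A minimal sketch of that proxy expression (the input is the annual mean CO2 level in ppmv; the function name is ours, for illustration only):

# Total precipitable water (kg/m^2) estimated from CO2 level (ppmv) using the
# proxy expression quoted above: TPW = 4.5118 * CO2^0.31286
def tpw_from_co2(co2_ppmv):
    return 4.5118 * co2_ppmv ** 0.31286

print(tpw_from_co2(400.0))   # about 29.4 kg/m^2 at 400 ppmv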

Figure 3: Average clear air total precipitable water over all non-ice-covered oceans.

Clouds (average emissivity about 0.5) consist of solid and/or liquid water particles that radiate approximately according to the Planck spectrum and the Stefan-Boltzmann (S-B) law (each particle contains millions of molecules).

The perception that the water vapor content of the atmosphere depends even minutely on CO2 content is profoundly misleading and precisely wrong because it ignores the partial pressure of water.



The AGT Model
Most modeling of global climate has been with Global Climate Models (GCMs), in which physical laws are applied to hundreds of thousands of discrete blocks and the interactions between the blocks are analyzed using supercomputers, with the end result being calculation of the AGT trajectory. This might be described as a ‘bottom up’ approach. Although theoretically promising, multiple issues currently exist with this approach. Reference [13] discloses that nearly all of the more than 100 current GCMs are obviously faulty. The few which appear to follow measurements might even be statistical outliers of the ‘consensus’ method. The growing separation between calculated and measured AGT, as shown at Figure 17 in Ref. [14], also suggests some factor is missing.

The approach in the analysis presented here is ‘top down’. This type of approach has been called ‘emergent structures analysis’. As described by Dr. Roy Spencer in his book THE GREAT GLOBAL WARMING BLUNDER: “Rather than model the system from the bottom up with many building blocks, one looks at how the system as a whole behaves.” That approach is used here, with strict compliance with physical laws.

The basis for assessment of AGT is the first law of thermodynamics, conservation of energy, applied to the entire planet as a single entity. Much of the available data are forcings or proxies for forcings which must be integrated (mathematically as in calculus, i.e. accumulated over time) to compute energy change. Energy change divided by effective thermal capacitance is temperature change. Temperature change is expressed as anomalies which are the differences between annual averages of measured temperatures and some baseline reference temperature; usually the average over a previous multiple-year time period. (Monthly anomalies, which are not used here, are referenced to previous average for the same month to account for seasonal norms.)

The AGT model, a summation of contributing factors, is expressed in this equation:

Tanom(y) = (A, y) + thcap^-1 * Σ (i = 1895 to y) { B*[S(i) - Savg] + C*ln[TPW(i)/TPW(1895)] - F*[(T(i)/T(1895))^4 - 1] } + D                (1)

Where:
Tanom = Calculated average global temperature anomaly with respect to the baseline of the anomaly for the measured temperature data set, K
A = highest-to-lowest extent in the saw-tooth approximation of the net effect on planet AGT of all ocean cycles, K
y = year being calculated
(A,y) = value of the net effect of ocean cycles on AGT in year y (α-trend), K
thcap = effective  thermal capacitance [1] of the planet = 17±7 W yr m-2 K-1
1895 = selected beginning year of acceptably accurate worldwide temperature measurements
B = combined proxy factor and influence coefficient for energy change due to sunspot number anomaly change, W yr m-2
S(i) = average daily V2 sunspot numbers [15,16] in year i
Savg = baseline for determining SSN anomalies 
C = influence coefficient for energy change due to TPW change, W yr m-2
TPW(i) = total precipitable water in year i, kg m-2
TPW(1895) = TPW in 1895, same units as TPW(i)  
F = 1 to account for change to S-B radiation from earth due to AGT change, W yr m-2
T(i) = AGT calculated by adding T(1895) to the reported anomaly, K
T(1895) = AGT in 1895 = 286.707 K
D = offset that shifts the calculated trajectory vertically on the graph, without changing its shape, to best match the measured data, K (equivalent to changing the anomaly reference temperature).

Accuracy of the model is determined using the Coefficient of Determination, R², to compare calculated AGT with measured AGT.
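A minimal numerical sketch of Eq. (1) follows. The coefficient values are placeholders, not the fitted values from this analysis, and the ocean series can be generated, for example, with the saw-tooth approximation of Eq. (2) sketched in the next section:

# Sketch of Eq. (1): running (time-integral) sum of the sunspot, water-vapor and
# S-B feedback terms, scaled by the effective thermal capacitance, plus the
# ocean-cycle term (A, y) supplied as a precomputed series. Placeholder coefficients.
import numpy as np

def agt_anomaly(years, ocean, ssn, tpw, t_meas,
                B=0.5, C=20.0, D=0.0, F=1.0, Savg=70.0,
                thcap=17.0, T1895=286.707):
    """ocean, ssn, tpw, t_meas are annual arrays aligned with years (first year 1895)."""
    out, running = [], 0.0
    for i in range(len(years)):
        running += (B * (ssn[i] - Savg)
                    + C * np.log(tpw[i] / tpw[0])
                    - F * ((t_meas[i] / T1895) ** 4 - 1.0))
        out.append(ocean[i] + running / thcap + D)
    return np.array(out)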


Approximate effect on the planet of the net of ocean surface temperature (SST)
The average global ocean surface temperature oscillation is only about ±1/6 K. It is defined as not significantly adding or removing planetary energy. The net influence of SST oscillation on reported AGT is defined as the α-trend. In the decades immediately prior to 1941 the amplitude range of the trends was not significantly influenced by change to any candidate internal forcing effect; so the observed amplitude of the effect on AGT of the net ocean surface temperature trend anomaly then must be approximately the same as the amplitude of the part of the AGT trend anomaly due to ocean oscillations since then. This part is approximately 0.36 K total highest-to-lowest extent with a period of approximately 64 years (verified by the high R² values in Table 1).

The measured AGT trajectory (Figure 9) suggests that the least-biased simple wave form of the effective ocean surface temperature oscillation is approximately saw-toothed. Approximation of the sea surface temperature anomaly oscillation can be described as varying linearly from –A/2 K in 1909 to approximately +A/2 K in 1941 and linearly back to the 1909 value in 1973. This cycle repeats before and after with a period of 64 years.

Because the actual magnitude of the effect of ocean oscillation in any year is needed, the expression to account for the contribution of the ocean oscillation in each year to AGT is given by the following:

ΔTosc(y) = (A, y)        [K]                (2)

where the contribution of the net of ocean oscillations to AGT change is the magnitude of the effect on AGT of the surface temperature anomaly trend of the oscillation in year y, and A is the maximum highest-to-lowest extent of the effect on AGT of the net ocean surface temperature oscillation. 

Equation (2) is graphed in Figure 4 for A=0.36.

Figure 4: Ocean surface temperature oscillations (α-trend) do not significantly affect the bulk energy of the planet.
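A minimal sketch of the saw-tooth (triangle) approximation in Eq. (2), with A = 0.36 K and a 64-year period as in Figure 4 (the function name is ours):

# Saw-tooth ocean-cycle term (A, y): rises linearly from -A/2 in 1909 to +A/2 in
# 1941, falls back to -A/2 by 1973, and repeats with a 64-year period.
import numpy as np

def ocean_cycle(year, A=0.36, period=64, year_trough=1909):
    phase = ((year - year_trough) % period) / period   # 0 at trough, 0.5 at peak
    if phase <= 0.5:
        return -A / 2 + 2 * A * phase
    return 3 * A / 2 - 2 * A * phase

years = np.arange(1895, 2021)
dT_osc = np.array([ocean_cycle(y) for y in years])     # the (A, y) series for Eq. (1)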


Comparison of approximation with ‘named’ ocean cycles
Named ocean cycles include, in the Pacific north of 20° N, the Pacific Decadal Oscillation (PDO); in the equatorial Pacific, the El Niño Southern Oscillation (ENSO); and in the north Atlantic, the Atlantic Multidecadal Oscillation (AMO).

Ocean cycles are perceived to contribute to AGT in two ways. The first is the direct measurement of sea surface temperature (SST). The second is that warmer SST increases atmospheric water vapor, which acts as a forcing and therefore has a time-integral effect on temperature. The approximation (A, y) accounts for both.

SST data is available for three named cycles: PDO index, ENSO 3.4 index and AMO index. Successful accounting for oscillations is achieved for PDO and ENSO when considering these as forcings (with appropriate proxy factors) instead of direct measurements. As forcings, their influence accumulates with time. The proxy factors must be determined separately for each forcing. The measurements are available since 1900 for PDO [17] and ENSO3.4 [18]. This PDO data set has the PDO temperature measurements reduced by the average SST measurements for the planet.

The contribution of PDO and ENSO3.4 to AGT is calculated by:
PDO_NINO(y) = Σ (i = 1900 to y) [0.017*PDO(i) + 0.009*ENSO34(i)]                (3)

Where:
            PDO(i) = PDO index [17] in year i
            ENSO34(i) = ENSO 3.4 index [18] in year i
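A minimal sketch of the running sum in Eq. (3) (the index series here are synthetic stand-ins for the published PDO [17] and ENSO 3.4 [18] indices):

# Cumulative (time-integral) contribution of PDO and ENSO 3.4 per Eq. (3).
import numpy as np

years = np.arange(1900, 2016)
pdo = np.random.normal(0.0, 1.0, years.size)        # substitute the PDO index [17]
enso34 = np.random.normal(0.0, 0.8, years.size)     # substitute the ENSO 3.4 index [18]

pdo_nino = np.cumsum(0.017 * pdo + 0.009 * enso34)  # accumulated forcing effect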

How this calculation compares to the idealized approximation used in Equation (2) with A = 0.36 is shown in Figure 5.


Figure 5: Comparison of idealized approximation of ocean cycle effect and the calculated effect from PDO and ENSO.

The AMO index [19] is formed from area-weighted and de-trended SST data. It is shown with two different amounts of smoothing in Figure 6 along with the saw-tooth approximation for the entire planet per Equation (2) with A = 0.36.
Figure 6: Comparison of idealized approximation of ocean cycle effect and the AMO index.

The high Coefficients of Determination in Table 1 and the comparisons in Figures 5 and 6 corroborate the assumption that the saw-tooth profile with a period of 64 years provides an adequate approximation of the net effect of all named and unnamed ocean cycles in the calculated AGT anomalies.

Atmospheric carbon dioxide
The level of atmospheric carbon dioxide (CO2) has been widely measured over the years. Values from ancient times were determined by measurements on gas bubbles which had been trapped in ice cores extracted from Antarctic glaciers [20]. Spatial variations between sources have been found to be inconsequential [2]. The best current source for atmospheric carbon dioxide level [21] is Mauna Loa, Hawaii. Extrapolation to future CO2 levels, shown in Figure 7, is accomplished using a second-order curve fit to data measured at Mauna Loa from 1980 to 2012. 
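A minimal sketch of that second-order fit and extrapolation (the CO2 series below is a placeholder; use the measured Mauna Loa annual means [21] for the real fit):

# Quadratic fit to annual CO2 over 1980-2012 and extrapolation to 2037.
import numpy as np

years = np.arange(1980, 2013)
co2 = 338.0 + 1.6 * (years - 1980) + 0.012 * (years - 1980) ** 2   # placeholder values

coeffs = np.polyfit(years, co2, 2)                  # second-order (quadratic) fit
future = np.arange(2013, 2038)
co2_extrapolated = np.polyval(coeffs, future)       # projected levels through 2037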

Figure 7: Measured atmospheric carbon dioxide level since 1880 and extrapolation to 2037.


Sunspot numbers
Sunspots have been regularly recorded since 1610. In 2015 historical (V1) SSN were reevaluated in light of current perceptions and more sensitive instruments and are designated as V2. The V2 SSN data set is used throughout this assessment. V2 SSN [15] are shown in Figure 8.

Sunspot numbers (SSN) are seen to be in cycles each lasting approximately 11 years. The current cycle, called 24, has been comparatively low, has peaked, and is now in decline.

The Maunder Minimum (1645-1700), an era of extremely low SSN, was associated with the Little Ice Age. The Dalton Minimum (1790-1820) was a period of low SSN and low temperatures. An unnamed period of low SSN (1880-1930) was also accompanied by comparatively low temperatures.

An assessment of this is that sunspots are somehow related to the net energy retained by the planet, as indicated by changes to the average global temperature trend. Fewer sunspots are associated with cooling, and more sunspots are associated with warming. Thus the hypothesis is made that SSN are proxies for the rate at which the planet accumulates (or loses) radiant energy over time. Therefore the time-integral of the SSN anomalies is a proxy for most of the amount of energy retained by the planet above or below breakeven.

Also, a lower solar cycle over a longer period might result in the same increase in energy retained by the planet as a higher solar cycle over a shorter period. Both magnitude and time are accounted for by taking the time-integral of the SSN anomalies, which is simply the sum of annual mean SSN (each minus Savg) over the period of study.
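A minimal sketch of the SSN anomaly time-integral described above (the SSN series and Savg here are placeholders standing in for the V2 record [15] and the selected baseline):

# Running sum of annual mean sunspot-number anomalies, (SSN - Savg), as a proxy
# for energy retained by the planet above or below breakeven.
import numpy as np

years = np.arange(1895, 2016)
ssn = np.abs(80.0 * np.sin(2 * np.pi * (years - 1895) / 11.0))  # placeholder ~11-yr cycle
Savg = 70.0                                                     # placeholder baseline

ssn_anomaly_integral = np.cumsum(ssn - Savg)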

SSN change correlates with change to Total Solar Irradiance (TSI). However, TSI change can account for less than 10% of the AGT change on earth. Because AGT change has been found to correlate with SSN change, the SSN change must act as a catalyst on some other factor (perhaps clouds [22]) which has a substantial effect on AGT.


Figure 8: V2 SSN [15]


Possible values for Savg are subject to two constraints. Initially, Savg is determined as the value that yields the derived coefficients and maximum R². However, the calculated values must also produce a rational AGT at the depths of the Little Ice Age (LIA), and this turns out to be the more sensitive constraint. The selected value for Savg results in a calculated LIA AGT approximately 1 K below the recent trend, which appears rational and is consistent with most LIA AGT assessments.

PLEASE CONTINUE FOR REMAINDER OF ARTICLE AT DAN PANGBURN'S SITE for the Identity of the 3 factors in the equation which matches average global temperature (98% correlation from 1895-2015) at http://globalclimatedrivers2.blogspot.com 

HINT: CO2 is an insignificant factor. From the conclusions:

Conclusions

Three factors explain essentially all of the Average Global Temperature change since before 1900: ocean cycles, accounted for with an approximation whose influence is quantified by a proxy; the SSN (sunspot number) anomaly time-integral; and the gain in atmospheric water vapor measured since 1987 and extrapolated before and after using measured CO2 as a proxy.