**Don’t**

Don’t smooth your data and then use that smoothed data as input to other analysis. You will fool yourself. You will make over-confident decisions. It is the wrong thing to do. It is a mistake. It is a guarantee of over-certainty. I don’t know how to put it more plainly. Lord knows I have tried. See below for a non-success story.

Smoothing means *any* kind of modeling, which includes running means, just-plain-means, filtering of any kind, regression, wavelets, Fourier analysis, ARIMA, GARCH; in short, *any* type of function where actual data comes in and *something that is not data* comes out.

Do not use the something-that-is-not-data as if it is data. This is a sin.

Don’t believe me? Try it yourself. The picture is from an upcoming paper some friends and I are writing.

It shows two simulated normal noise time series, with successively higher amounts of smoothing applied by a k-rolling mean. From top left clockwise: k = 1, 10, 20, 30; a k = 1 corresponds to no smoothing. The original time series are shown faintly for comparison. The correlation between the two series is indicated in the title.

More smoothing equals higher correlations. Since there is no causal connection between these series, the correlation should hover around 0, which it does in the first panel. And that correlation stays near 0 for the original, real-not-fake, un-smoothed data. But if you calculate the correlation between the smoothed series…the sky’s the limit!
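The effect is easy to reproduce at home. Here is a minimal sketch, assuming the zoo package’s rollmean() as the k-rolling mean (the seed and series length are arbitrary choices of mine):

```r
# Two independent normal noise series: the true correlation is 0.
# Watch the sample correlation wander as the smoothing window grows.
require(zoo)  # provides rollmean()
set.seed(1)
x <- rnorm(500)
y <- rnorm(500)
for (k in c(1, 10, 20, 30)) {
  cat("k =", k, " cor =", cor(rollmean(x, k), rollmean(y, k)), "\n")
}
```

With k = 1 you recover the ordinary correlation of the raw series; larger k typically pushes the sample correlation away from 0.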

Now it is not true that in each and every and all instances that smoothing will increase the correlation between two smoothed series. It might be that (in absolute value), for your one-time smoothing, correlation decreases or stays put. But it usually will increase, and usually by a lot.

Why? Imagine *any* two straight lines with non-zero slopes. These two straight lines will have perfect Pearson correlation, either +1 or -1. Regression and other measures will also show perfect agreement. The proof of this is trivial, and I leave it as an exercise (don’t be lazy; try it). Smoothing makes time series data look more like straight lines, as the pictures show. Simple as that.
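The exercise takes only a couple of lines; the slopes and intercepts below are arbitrary choices of mine:

```r
# Any two straight lines with non-zero slopes have Pearson
# correlation of exactly +1 (same-sign slopes) or -1 (opposite signs).
t <- 1:100
line1 <- 2.0 + 0.5 * t   # positive slope
line2 <- 9.0 - 1.3 * t   # negative slope
print(cor(line1, line1 + 100))  # +1 (up to rounding)
print(cor(line1, line2))        # -1 (up to rounding)
```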

There are all manner of fine points I’m skipping and would make wonderful Masters projects. Just what kind of data and what kind of smoothing and what statistical measures are affected and by what magnitude? All these questions are quantifiable and will make for fun puzzles. My experience with actual data and actual smoothing and typical measures shows that magnitude is large.

**It happens**

Now, without betraying any confidences, let me tell you of the latest in a long and growing string of bad examples. Two companies, one internationally known for their quantitative prowess, another even better known for its ability to make vast wads of money. Call them A (stats) and B (client). I did not work for either A or B, but know and advised certain parties.

B advertised and wondered how much of an effect this had on its measure of success. A said they could tell, using sophisticated Bayesian models incorporating social media data.

*Social media!*

Wowzee! Tell people you have busted open the secrets of social media and they will dump buckets of cold cash on you. Hint: everybody who says they have it figured out is either exaggerating to themselves or to their clients. (Say, that’s a pretty bold statement.)

Anyway, smoothing occurred. And correlations greater than 0.95 were boasted of. I’m not kidding about this number. Company A really did brag of enormous “impacts” of its smoothed measures. And Company B believed them—because they wanted to believe. Sophisticated Bayesian models incorporating social media data! How *could* you go wrong?

The real correlations, using unsmoothed data, were near 0. Just as you’d expect them to be for such noisy data as “social media” predicting a company’s measure of success. Do you really think Twitter streams contain magic?

I told all involved. I explained pictures like those above. I was emphatic and clear. I stood neither to gain nor lose regardless of the decision. Only two people (at B) believed me, neither of whom were in a position to make decisions.

At least I am comforted that Reality is my friend here. The companies will eventually realize, but probably never admit, that their measures are spurious. Because they will realize but not admit, these measures will be quietly abandoned…

…As soon as the next computer self-programmed big data machine learning artificially intelligent smart-phone-data algorithm comes along and seduces them.

I actually understood that one. I admit to the sorry state of my knowledge.

To me, calculating statistics on smoothed data intuitively just seems wrong. Do researchers genuinely not see this, have they been badly trained, or do they just give in to the temptation of the large coefficient?

Does the above relate at all to climate modeling?

Just curious.

John B,

Yep. See the Classic Posts page under Environment & Climate. Happens all the time.

I posted a comment, but it isn’t showing up. Is there moderation on?

At any rate, it was an enjoyable exercise that you suggested. For fun and enrichment, I tried it out comparing a linear and an exponential function, and the correlation is nearly 1 (although not exact). What a simple but powerful proof it is! It will be even harder to believe the results of many papers and “science” out there.

James,

I saw it in the spam, tried to release it, but it just disappeared. I have no explanation except that Akismet seems hungrier than usual lately.

Notwithstanding whatever company’s/companies’ analyses regarding social media Briggs was evaluating…don’t forget or believe for a second that social media isn’t a treasure-trove of untapped & unexploited wealth-making [for the provider] data. Some snippets:

Re Facebook, Google, etc.: “You are not their client. You are their product.” (Sen. Al Franken)

See starting page 25 of his ABA speech at: http://assets.sbnation.com/assets/1033745/franken_aba_antitrust_speech.pdf

Selected quotes: http://www.bibliotecapleyades.net/sociopolitica/sociopol_internetfacebook17.htm

SPENDING DATA:

http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/

http://www.dmnews.com/dear-target-im-not-pregnant-or-wait-am-i/article/320231/

SOCIAL MEDIA:

http://www.wired.com/2014/10/facebook-king-data-brokers/

http://www.foxbusiness.com/personal-finance/2014/01/03/facebooks-messenger-lawsuit-data-mining-dislike/

As we used to joke, garbage in gospel out. The computer is infallible.

Try correlating independent monotone sequences for fun and games. The expected value of the correlation is somewhere north of 0.90.
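That claim is quick to check by sorting two independent samples (my own seed and sample size; the exact number varies run to run):

```r
# Two independent samples, each sorted so it is monotone increasing.
# The samples share no cause, yet the correlation is enormous.
set.seed(42)
a <- sort(rnorm(1000))
b <- sort(rnorm(1000))
print(cor(a, b))  # well above 0.9
```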

Your example of “It happens” is vague. No, I don’t believe you. Show me the data!

There is another possible reason why two variables have a sample Pearson correlation coefficient of near zero: there is no linear relationship between them. (The Pearson correlation coefficient is a measure of the strength of the linear relationship between two variables.)

What is “more smoothing”? A larger bandwidth? What is “a lot”?

Though I am not sure why one would study the (linear) relationship between two time series in the manner described here, the above is incorrect.

Because, regardless of how one wants to smooth each time series, it is not clear how smoothing of each series will influence the strength of the linear relationship between the two time series (variables).

It is possible that a wide bandwidth would “flatten” (if curved) or “linearize” the relationship between the two variables. Still, we can’t make the above conclusions.

Anyway, try the following two series.

n=200; y=x=w=rnorm(n)

for (t in 2:n) x[t] = 0.7*x[t-1] + w[t]

for (t in 2:n) y[t] = 0.05 *t + 0.5*y[t-1] + w[t]

Let’s also be clear that what’s described in this post is not what wavelets or Fourier analysis or ARIMA or GARCH is about.

Friends don’t let friends make mistakes.

JH,

Quite right. Friends don’t let friends make mistakes.

Take your code exactly as written. Download the zoo package, which provides for rolling means, and then paste in this:

require(zoo)

n=200; y=x=w=rnorm(n)

for (t in 2:n) x[t] = 0.7*x[t-1] + w[t]

for (t in 2:n) y[t] = 0.05 *t + 0.5*y[t-1] + w[t]

for (k in c(1,10,20,30)){

print(cor(rollmean(x,k),rollmean(y,k)))

}

Here’s what I got after just one run:

[1] -0.130129

[1] -0.3553055

[1] -0.5626511

[1] -0.6653475

In other words, smoothing increases (in absolute value) correlation. Have fun.

Your next step will be to notice where I claimed all the niceties would make lovely Master’s projects (maybe you can do one?). Say, why do rolling means produce increasing negative correlations here? (Not too hard to figure out.) What if I tried, say, a Kalman or a low-pass filter? Would that give me increasing positive ones? All sorts of good questions to play with.

Correction –

Because, regardless of how one wants to smooth each time series, smoothing of each time series says little about the relationship between two variables. (I wouldn’t suggest the eye-ball method here.) It is not clear how the strength of the linear relationship between two smoothed time series (variables) would be related to the one between the two original time series (variables).

Hi Briggs, try the simulations a few times.

And you know that I won’t get the same result.

Try the following. (Yes, we both get the same result.)

set.seed(222)

n=200; y=x=w=rnorm(n)

for (t in 2:n) x[t] = 0.7*x[t-1] + w[t]

for (t in 2:n) y[t] = 0.05 *t + 0.5*y[t-1] + w[t]

for (k in c(1,10,20,30)){

print(cor(rollmean(x,k),rollmean(y,k)))

}

Result –

[1] 0.1376599

[1] 0.06793597

[1] 0.009735388

[1] -0.09771799


An obvious answer is that it is the result of one simulation run. However, I seriously doubt this is the answer you have in mind.

So, please do explain. (One may not draw conclusions based on one simulation result, though one simulation can easily be used as a counterexample.)

Again. Friends don’t let friends make mistakes.

Hi Debbie,

I see you’ve confirmed what I said in the original post and that sometimes correlation decreases. Excellent. Your next task is to show that it on average increases, and under what circumstances.

Oh, try this one for fun. A low pass filter with increasing positive correlations!

(Repeat your code for x and y first.)

require(signal)

for (k in 2:5){

bf <- signal::butter(k, 1/50, type="low")

print(cor(signal::filter(bf, x), signal::filter(bf, y)))

}

Don’t just report the one run you found that shows decreasing correlation (how long did that take you, anyway?); show how this looks over repeated runs. Good homework problem.

All,

If it helps, I believe, at least in the simple smoothers like running means, the answers to this are analytic. Somebody out there must have a Masters student they can put on this.

Mr. Briggs,

If you are writing something about it, it’s your task to show it and show you know what you are talking about.

JH,

I tell my students that kind of answer is (technical term coming) a dodge.

Still, I give you a C just for classroom participation alone.

Poorly explaining the proper meaning of a statistic can get you convicted of manslaughter* — some such convicts are about to have their appeal heard & the following is worth the read as it illustrates the issues & confusion that can arise even when facts are communicated accurately:

https://medium.com/matter/the-aftershocks-7966d0cdec66

* At least in Italy (where seismologists were convicted of manslaughter for not properly warning of the L’Aquila earthquake).

Pointed out to emphasize that statistics is not just about the math; in actual practice, translating the calculations & results into terms the layperson, or politician, or senior manager, or the ignorant masses, can properly comprehend is, [too] often, of as much or greater import.

And when was the last time anyone had a stats class that addressed communicating analytical results with the non-technical masses, who, usually, are the ultimate consumer (rhetorical question).

This is fun!

require(zoo)

nf=rnorm(1099)

hf=nf[50:1049]-rollmean(nf,100)

lf=rollmean(rnorm(1099),100)

x=hf+lf

y=hf-lf

par(mfrow=c(3,3))

for (k in c(1,10,20,30,40,50,60,70,80)){

title=paste("k=",k,"Cor=",format(cor(rollmean(x,k),rollmean(y,k)),digits=5))

plot(rollmean(x,k),col="red",pch=20,xlab="Time",ylab="Value",main=title)

points(rollmean(y,k),col="blue",pch=20)

}

par(mfrow=c(1,1))

smoothedcor=function(k) { cor(rollmean(x,k),rollmean(y,k)) }

plot(1:500,sapply(1:500,FUN=smoothedcor))

It’s not too hard to figure out if you break a signal down into its frequency components. The covariance of a signal is the sum of the covariances at every frequency. As you reduce the bandwidth, you’re sampling a smaller part of the frequency spectrum, and so the variance of the estimate increases with the smaller sample size. It’s not so much that it gets bigger as that it gets noisier, and uncorrelated series near zero inevitably move away from the correct value.

The above series is fiddled to give large positive correlation at high frequencies and large negative correlation at low frequencies. As the bandwidth narrows, you get a weighted average of the two that puts steadily more weight on the negative, cancelling out at some point in the middle. The result is a pair of series that become steadily less correlated the more you smooth them, at least at first.

Briggs,

You should edit your list.

Modern radars rely on the signal to noise ratio improvement provided by a Fourier transform.

If you don’t like radar theory, you might be more comfortable reviewing Parseval’s theorem.

Bill S,

Good example. Notice we don’t insist on never smoothing, but on using the smoothed data as input to other analyses.

I didn’t say it, but even that step is permissible *if* you “carry along” the uncertainty you create in the smoothing process. I mean, it looks in the pictures I gave like correlation is increasing, but what I should be showing is not the naive +/- predictive window but the +/- window of the entire process, which includes the smoothing and not just the end step.

Smoothing does have uses, particularly in image enhancement, as you suggest.

“Modern radars rely on the signal to noise ratio improvement provided by a Fourier transform.”

Yes. That’s the special case where a signal is known to have a narrow bandwidth while the noise has a broader spectrum. Attenuating the data at those frequencies where the noise power exceeds the signal power improves accuracy. It reduces the noise more than it degrades the signal. But you do lose some of the high-frequency bits of the signal, and the more you smooth it the worse it gets.

I think a better way to phrase the lesson is to say that you shouldn’t smooth data and use the output in further statistical processing unless you’re genuinely an expert at heavyweight time series statistics. Especially, you shouldn’t apply standard ‘textbook’ tests designed for raw data, with the same significance thresholds, without taking account of the effect of smoothing.

It’s perfectly possible to smooth data and then process the result safely, but you have to know precisely what effect the smoothing has had on your statistical model to compensate for it. In short, you need to know what you’re doing. And most of the people trying this sort of stuff basically don’t.

I don’t either. It’s difficult.

Mr. Briggs,

It’s your problem, not mine, and I am not interested in exploring it further. If you wish, I can hire an undergraduate student ($15/hour) to do it for you.

So, what is your not-too-hard-to-figure-out answer?

Why call me Debbie? This reminds me of Mark, who was a bully in 4th grade. I asked him why he called Randi (a girl) names. No, you don’t want to know what he said to me.

Time series data usually don’t behave like the data you simulated here: no trend, with uncorrelated normal noise. You could have at least generated some simple AR or MA series. My idea was to generate two correlated time series: one with an upward trend that would yield increasing smoothed values over time, and one with a slight downward trend resulting in decreased smoothed values for later time periods. Think of a parabola opening down. Not hard to do.

BTW, why would anyone (e.g., you) apply a Butterworth filter (butter) to my generated series? No frequencies or wavenumbers involved.

I once did it with no smoothing whatsoever: I “discovered” a 98% correlation between Y=% imported passenger cars in US and X=% of women participating in the labor force. Conclusion: to save Detroit, get them wimmin back in the kitchen. Of course, what was really happening was that two time series that were both increasing will always correlate very strongly, as in “always.”

john b

I got into this blogosphere stuff because I looked into the background for an inconvenient truth. Figured these climate guys should be much more sophisticated at getting rid of corrupt data than I and that I could learn something. That did not turn out to be the case. So far, Briggs is the only one with any practical advice.

I’m still trying to wrap my mind around this “don’t smooth your data” stuff.

One problem I have right off the bat is that a lot of (most all?) measured data is inherently ‘smoothed’ by the measurement process or equipment used. For example, if you take a time series of temperature readings, each of those readings is in effect smoothed by the thermal mass of the temperature sensing element.

So if one shouldn’t use smoothed data as input to other analysis, in my mind this reduces to saying “don’t analyze measured data”. I don’t think I’ll be using this argument with my boss anytime soon!

But then there’s the stated caveat “unless you know exactly what effect your smoothing algorithm is having on your data, and incorporate that in your analysis”.

So this further reduces to “don’t claim to understand the analysis of your measured data unless you actually understand the analysis of your measured data.”

Back in college in my signal processing classes, invariably the professor would start off every analysis with “fit a trend line to the data and subtract it out”. When asked why, he would just say that it messed up the analysis to follow, and when pressed further, he said that a trend in your data was an indication that you haven’t taken enough data, and that as a practical matter you have to stop taking data at some point, so we have to subtract out the useless stuff (the trends) as best we can.

Here’s a thought experiment. Suppose you are one of a hundred people seated in a room, where each person has a knob in front of them that they can turn, clockwise or counter-clockwise, one ‘click’ per second. You are told that some of the knobs control the room lights, and some of the knobs do nothing. When instructed to start, everyone must decide, as quickly as possible, to which of these two groups their knob belongs. (Further imagine rewards and penalties sufficient to motivate concerted participation.)

Go! The first second, you turn your knob clockwise. The room lights brighten. Are you ready to make your decision yet?

No? The second second you turn your knob another click clockwise. The lights brighten further. Now are you ready to make your decision?

No? You continue to click clockwise, and the lights brighten each time. At what point do you make your decision?

My guess is that you wouldn’t keep turning the knob clockwise, but that you would ‘mix it up’. And that you’d want to take quite a few ‘measurements’ and average the results before you’d feel comfortable making your decision. (BTW, this is basically how the Coherence Function was first explained to me.)

I suspect that maybe one big difference between signal processing and statistical analysis is that signal processing starts with the required confidence in the analysis result, and collects enough data to achieve the required confidence, whereas statistical analysis starts with necessarily limited data, and attempts to say as much about that data as possible with as much confidence as possible.

Does this sound right?

Briggs,

Thank you for the simple and clear illustration.

Your post has me worried. I’m an economist, and just about all of our data are smoothed, by seasonal adjustment (e.g., X12) at least. The data we feed into models tend to be the SA, rather than the NSA, series.

Now, X12 is a much more complex procedure than a MA, but i suspect there are similar issues at stake. Could you suggest some ways to think about the consequences of doing this?

Thanks heaps,

Alex.

I understood almost none of that. However:

“Smoothing means any kind of modeling, which includes running means, just-plain-means, filtering of any kind, regression, wavelets, Fourier analysis, ARIMA, GARCH; in short, any type of function where actual data comes in and something that is not data comes out. ”

How do “anomalies” fit in? Everyone in “climate science” does analysis on “anomalies” and not data. At least that’s what it sounds like when I read their stuff. Are “anomalies” a type of “running means, just-plain-means, filtering of any kind, regression, wavelets, Fourier analysis, ARIMA, GARCH”.

I may be asking a question which has very little meaning. If so, sorry.

“Anomaly” appears to be climate-science speak for what I grew up calling “residual.” That is, first you calculate a mean on some set of data; then subtract the mean from each of the data points: (Xi – Xbar). This has the effect of translating the mean to 0. If I knew how to embed a picture here, I would show you a Shewhart chart of paste weights on battery grids and the residuals (anomalies) of the paste weights after subtracting out a shift due to a change in PbO paste batch and hourly “jumps” due to operator over-adjustment. The residuals form a stationary series, indicating that these two factors accounted for most of the action on the chart, the remainder being short term piece-to-piece variation.
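In code, the residual (anomaly) calculation is just centering; the paste-weight numbers below are invented for illustration:

```r
# "Anomaly"/residual: subtract the mean, translating the series so
# its mean is 0. Only the departures from the mean remain.
paste_weights <- c(98.2, 101.5, 99.8, 100.4, 97.9, 102.2)
anomalies <- paste_weights - mean(paste_weights)
print(round(anomalies, 2))
print(mean(anomalies))  # 0 by construction (up to rounding)
```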

“Anomaly” also leads listeners to believe these are departures from the norm. The spin is built in.

YOS,

I think we agree, but let me see if I can restate the climate case as an analog to your process control case. The residuals of Xi – Xbar for station data account mostly for the difference in station location: latitude, altitude. For two stations at roughly the same latitude and altitude, immediate surroundings can make a difference in absolute readings — the canonical examples being stations in a parking lot next to air waste heat vents vs. under a shade tree in a grassy field.

For the sake of ensuring the best quality data, let’s reject all poorly cited weather stations. Of the remainder, suppose that 100 years ago, 90% of stations were at mid to high latitudes and only 10% in lower tropical latitudes, while at present the ratio is 75% to 25%, achieved by adding more tropical stations over time on an irregular schedule. Also assume that once a station is in operation, its operators report a daily min, max and mean by taking 24 readings at the top of each hour without any breaks in continuity. What does generating a time series from the arithmetic mean of the absolute (“raw”) station means over the 100 year interval tell me about the action on the chart?

I would say “Not much.” A geographical sample is a tricky business, as I recall from a discussion once with a nuclear inspector who was curious how I might distribute air samples across a desert country to detect sources of radiation. Much depends on objective. Discovery sampling differs from estimation sampling.

In sampling product, there is an expectation of some uniformity between nearby locations on the product: for example, the thickness of neighboring crackers on a baking conveyor belt. But neighboring geographical temperature stations may differ considerably due to a river or a range of hills between them– or even (temporarily) to a front moving through. In some regions of the country, the next station may be at a considerably different altitude.

Now, if each of these stations is reasonably consistent, you can subtract each station mean from its own data. The idea is that you don’t care about the actual temperature, but only with how the temperature is changing.

A near example: The Allegory of the Cookies

http://tofspot.blogspot.com/2013/05/the-wonderful-world-of-statistics-part.html

Jerry Pournelle has sometimes discussed his experiences trying to measure human body temperatures in space suits and suggests that measuring the temperature of a planet is no less complex.

It was stated here that there are 1000 stations with 100 year records. Almost all in Northeastern U.S. or Western Europe with nearly all subject to UHI.

They aren’t properly sited regardless of how well they are cited.

“Suppose that 100 years ago, 90% of stations were at mid to high latitudes and only 10% in lower tropical latitudes while at present the ratio is 75% to 25%, achieved by adding more tropical stations over time on an irregular schedule.”

then

“What does generating a time series from the arithmetic mean of the absolute (‘raw’) station means over the 100 year interval tell me about the action on the chart?”

Reminds me of an old joke: a bus leaves the depot with 6 passengers; at the first stop it lets out 3 but picks up 5 more. Second stop: lets out 4 and picks up 9. Third stop: 8 exit and 2 get on. What was the bus driver’s name?

DAV,

Here’s an image showing the extent of glaciation in N. America at the last glacial maximum when temperatures were ~5-6 °C cooler: http://bostongeology.com/boston/casestudies/fillingbackbay/images/large/glacial_extent.gif

Two of the prior four interglacials were about 1 °C higher than the current interglacial, the other two were ~2-2.5 °C higher. At the peak of the last interglacial 130Kbp, temperatures were ~2.5 °C higher and sea levels were 20 m higher than today. Here’s what N. America would look like with those sea levels: http://tothesungod.files.wordpress.com/2013/10/namerica_20m-slr.png

So over the past 400K years, without any influence from us, we are in fact currently within bounds of the normal behavior of the system. Thing is, we’ve got a lot of infrastructure that wouldn’t be quite so valuable, or even useful, standing in or even covered by salt water. So that’s at least one argument for not flirting with a 2-2.5 °C temperature rise. Near the top of the last interglacial, sea level rise does lag temperatures on the order of 1,000 years or so, as can be seen by eyeballing this chart, http://www.roperld.com/science/graphics/Temp_SeaLevelEemian.jpg but there are obviously some large uncertainties in that data. Faced with such uncertainties, prudent folk often prefer to err on the side of caution. Others like to panic and run around saying the sky is falling.

Some people are indeed stumped by the “what’s the optimal climate for humanity” question. I think of it as a question of risk in the face of uncertainty rather than optimization, and that “normal” in this context means, “the climate to which we’ve already adapted our infrastructure in consideration of more certainly known risks and returns.”

“Some people are indeed stumped by the ‘what’s the optimal climate for humanity’ question. I think of it as a question of risk in the face of uncertainty rather than optimization, and that ‘normal’ in this context means…”

But why call the differences “anomalies” (something that deviates from what is standard, normal, or expected) instead of simply deltas? What is the risk factor of a temperature less than the peaks of the Medieval or Roman Warm Periods? For that matter, what is the risk in a temperature 1 C higher than that in 1900? How do you define “normal” in this context?

YOS,

I like the distinction and agree, objective is important.

Sure. Here in Berkeley we’re right downwind from the Gate, and in the summer we get the cooling benefit of the same foggy marine layer they do in the City. Yet not more than a few miles away over the Berkeley Hills, Walnut Creek will often be sweltering hot.

They’re not consistent for a litany of reasons, but ultimately each station mean does get subtracted from itself — first by month then by year.

Exactly, but even that is debatable. Stefan Rahmstorf has a recent post over at RealClimate that’s worth reading: http://www.realclimate.org/index.php/archives/2014/10/ocean-heat-storage-a-particularly-lousy-policy-target/

Roger Pielke, Sr.’s comment (#5) asks some of the same questions I’ve been asking, to wit: isn’t total energy retained in the system the best diagnosis of what’s actually happening? I’ve not yet read all of the responses his initial comment generated.

Nice writeup which asks some pertinent questions. I’m going to take a poke at the data from Alley (2000) paper on central Greenland data.

DAV,

Yup, I read that post. I come up with 945 stations with readings for all 100 years from 1915 through 2014 globally, 783 are in the US.

Last thread you told me it wasn’t possible to attribute warming to CO2, now you’re implying that UHI can be teased out of the signal. I’m having difficulty reconciling those statements.

How so?

I don’t know. I personally think that delta is not specific enough. “Baseline” temperature might have been a better label as that describes the process of building temperature anomaly time series.

As I mentioned previously, during the Eemian interglacial 130 to 115Kybp sea levels were 20 m higher than at any point during the Holocene. If those sea level estimates are correct, it follows that recent warm periods were not as warm, not as global, and/or did not last long enough to melt enough land ice to result in that kind of sea level rise.

Thus far apparently not much, though see again that ice takes time to melt. Also note that quarter-degree wiggles on decadal time frames have not proven to be extinction-level events.

Normal to the infrastructure we’ve built up based on current climate. If evidence suggested that temperatures were going to drop 2 C in the next 85 years I think there would be good reasons to be concerned.