Ultimate space simulation software

Watsisname
Science Officer
Posts: 1849
Joined: 06 Sep 2016
Location: Bellingham, WA

Just a bit more to add -- I think this is a really useful visualization to supplement the explanation above.  It's mathy, but a good combination of physics and road safety.  Here I'm showing the maximum acceleration we can get by braking before we lose control, for any given curve and any given starting speed through that curve.  I assume a coefficient of friction of μ = 0.7 and g = 9.8 m/s², and I ignore the slope of the road since the effect is generally small.  This is a function of two variables, so the graph is actually a surface in three dimensions.  The vertical axis is our max acceleration in meters per second squared.  The R axis is the radius of curvature (in meters), and the v axis is our speed (in kilometers per hour).

And the function which makes this graph is

a_max(v, R) = √[ (μg)² − (v²/R)² ]

which is derived by setting the magnitude of the combined acceleration (centripetal plus linear) equal to the maximum acceleration that friction can supply, μg.

A few bits of insight to be had here.  First is that for low speeds and gentle curves (large values of R), the function is pretty much flat and near its maximum.  The maximum is given by μg ≈ 6.9 m/s².  In other words we can achieve at most about 70% of g.  Sporty vehicles with specialized tires can do better, while poor road conditions make it worse.  (No surprise that it's very hard to start or stop on an icy road.)

Next, as the turn becomes sharper (smaller R) and our speed becomes faster, our maximum safe acceleration decreases.  But it does not decrease in a linear way.  It's very gradual at first, but then plummets very steeply.  That steep region is what we want to avoid -- it means we have less control, and the amount of control we have changes very quickly.  A few km/hr makes a big difference.  Beyond that region, the graph drops to zero.  Actually, the function is undefined there; those are combinations of speed and curvature for which we cannot be in control.
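If anyone wants to play with the numbers themselves, here's a minimal sketch of that braking function in Python (μ = 0.7 and g = 9.81 assumed, and the function name is my own):

```python
import math

MU, G = 0.7, 9.81   # assumed coefficient of friction and gravity

def max_braking_accel(v_kmh, radius_m):
    """Maximum braking (linear) acceleration, in m/s^2, before the
    combined demand on friction exceeds what the tires can supply.
    Returns None where we can't hold the curve at any braking level."""
    v = v_kmh / 3.6                  # km/h -> m/s
    a_curve = v * v / radius_m       # centripetal acceleration needed to turn
    a_limit = MU * G                 # total acceleration friction can provide
    if a_curve >= a_limit:
        return None                  # loss of control: curve alone exceeds friction
    return math.sqrt(a_limit**2 - a_curve**2)

# A 100 m radius curve: note the steep drop-off with speed
for v in (40, 60, 80, 100, 120):
    print(v, max_braking_accel(v, 100.0))
```

For R = 100 m this gives roughly 6.8 m/s² at 40 km/h, about 4.8 m/s² at 80 km/h, and nothing at all by 100 km/h -- exactly the gradual-then-plummeting behavior described above.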

For more clarity, here's the projection in the v,R plane.  I think this really emphasizes the 'risky' area where we are close to that maximum speed calculated earlier.

Of course, this is all just numbers and nobody is seriously going to do math to figure out if they are driving safely.  The goal is to understand conceptually why we can lose control on curves even when we think we're doing the right thing, and what we can do to help mitigate the risk.

Hornblower
Pioneer
Posts: 595
Joined: 02 Nov 2016
Location: Gale Crater
Contact:

It's pretty cool what people can do in KSP
"Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist, but that's just peanuts to space." - Douglas Adams

Banana
Astronaut
Posts: 56
Joined: 17 Dec 2016
Location: The future

Wow...and I thought my first flight-capable vessel was impressive!
Bananas are eggcellent.

Watsisname
Science Officer
Posts: 1849
Joined: 06 Sep 2016
Location: Bellingham, WA

midtskogen
World Builder
Posts: 968
Joined: 11 Dec 2016
Location: Oslo, Norway
Contact:

Is that forecast credible?  The UK Met Office issues seasonal forecasts, and their success rate is roughly the same as that of amateurs relying on folklore.
NIL DIFFICILE VOLENTI

Watsisname
Science Officer
Posts: 1849
Joined: 06 Sep 2016
Location: Bellingham, WA

Perhaps counter-intuitively, it is easier to predict global average conditions over long periods than regional conditions over shorter periods.  Their skill in forecasting the next year's global temperature is actually pretty good.

Probably the largest source of uncertainty in this forecast is tied to uncertainty in forecasting ENSO, since this is a large factor for transferring heat between the ocean and atmosphere.  The skill in ENSO forecasts is not excellent, but better than flipping a coin.  Dunno about folklore though.

midtskogen
World Builder
Posts: 968
Joined: 11 Dec 2016
Location: Oslo, Norway
Contact:

A year isn't that much longer than a season, and of course it's easier to predict global temperature than local if the accepted absolute error is the same (like ±0.5 °C), since the local variability is much higher.  I'm very sceptical towards publishing seasonal forecasts and calling it science when their accuracy isn't clearly better than "the season will be roughly like last year's season".

In forecasting global 2017 temperature, I guess "take 2016 temperature and subtract a little because the ENSO index is dropping" is reasonable, but not advanced science and I wouldn't bother to publish if I were a met office (making a guess if a journalist calls, fine).  It's like making a forecast for tomorrow saying that "it will be somewhat more cloudy and windy tomorrow" because the barometer pressure began dropping this evening.  There's a lot of interesting things that could happen, which aren't modelled at all.
NIL DIFFICILE VOLENTI

Watsisname
Science Officer
Posts: 1849
Joined: 06 Sep 2016
Location: Bellingham, WA

Sure, a methodology which predicts annual changes in global temperature with better than 0.1°C accuracy using sophisticated statistical methods and a global climate model which treats the physics of ocean-atmosphere interactions and energy balance with solar radiation, greenhouse gases, and aerosols... may not qualify as 'advanced science' for certain readers.  I mean, it's not something I could do myself and be done in time for supper, and I'm pretty sure you can't either, but it's definitely not the most advanced science I've ever seen.  I dunno, it might even be less complicated than consulting the local gods depending on your rituals.

Anyway, as it so happens, there actually is a paper which discusses the methodology and analyzes the skill of these forecasts, and it is even free to view.  You know, if you want to check your assumptions about how it works or what things it does or does not account for.

http://onlinelibrary.wiley.com/doi/10.1002/grl.50169/full

midtskogen
World Builder
Posts: 968
Joined: 11 Dec 2016
Location: Oslo, Norway
Contact:

Thanks for that link.  It quickly highlights my concern.  Here's the data:
[attached plot: x.png — observed vs. predicted annual global temperatures]

Converting this back to numbers:
year  obs.  pred.
2000  0.288 0.412
2001  0.422 0.471
2002  0.482 0.469
2003  0.491 0.553
2004  0.440 0.502
2005  0.460 0.513
2006  0.452 0.454
2007  0.401 0.545
2008  0.310 0.372
2009  0.442 0.448
2010  0.499 0.583
2011  0.402 0.441

Then I get a mean absolute error of 0.058 and a root-mean-square error of 0.072, which agrees with their numbers (0.06 and 0.07), so my conversion seems OK.

Let's replace their advanced model with the simplest one I can think of: next year's temperature will be like last year's.  The result: a mean absolute error of 0.064 and a root-mean-square error of 0.077, which is practically the same.  My message is simply: if the model's performance is indistinguishable from the most naive approach possible, don't bother, yet.
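That comparison is easy to reproduce; here's a quick sketch using the digitized values above (list and function names are mine):

```python
# Observed and predicted annual global temperatures, 2000-2011,
# digitized from the paper's figure as in the table above.
obs  = [0.288, 0.422, 0.482, 0.491, 0.440, 0.460,
        0.452, 0.401, 0.310, 0.442, 0.499, 0.402]
pred = [0.412, 0.471, 0.469, 0.553, 0.502, 0.513,
        0.454, 0.545, 0.372, 0.448, 0.583, 0.441]

def mae(errs):  return sum(abs(e) for e in errs) / len(errs)
def rmse(errs): return (sum(e * e for e in errs) / len(errs)) ** 0.5

model_err   = [o - p for o, p in zip(obs, pred)]
persist_err = [obs[i] - obs[i - 1] for i in range(1, len(obs))]  # "same as last year"

print(mae(model_err), rmse(model_err))      # close to the quoted 0.058 / 0.072
print(mae(persist_err), rmse(persist_err))  # close to the quoted 0.064 / 0.077
```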
NIL DIFFICILE VOLENTI

Watsisname
Science Officer
Posts: 1849
Joined: 06 Sep 2016
Location: Bellingham, WA

Yet they don't predict that next year's temperature is the same as this year's.  They predict the direction and magnitude of the change.  Since 2000, they predicted next year's temperature would be colder 4 times, and every time they were right.  12 times they predicted next year's temperature would be higher, and 11 times they were right.  The 12th time (2007) was a prediction of +0.03 ± 0.17, and the observed change was zero.

So that's insight that you do not get from the most naive approach possible, and it comes from deeper physics than just "the ENSO index is currently rising or falling".

midtskogen
World Builder
Posts: 968
Joined: 11 Dec 2016
Location: Oslo, Norway
Contact:

year  obs.  pred.  obs.    pred.
2000  0.288 0.412
2001  0.422 0.471  warmer  warmer
2002  0.482 0.469  warmer  warmer
2003  0.491 0.553  flat    warmer
2004  0.440 0.502  colder  flat
2005  0.460 0.513  flat    warmer
2006  0.452 0.454  flat    flat
2007  0.401 0.545  colder  flat
2008  0.310 0.372  colder  flat
2009  0.442 0.448  warmer  warmer
2010  0.499 0.583  warmer  warmer

I think everything within ±0.03 should be considered flat, and then we get a 50% success rate, which is only slightly better than random guessing, given that there's a third option, "flat".  The naive approach, "the trend will be the same", gives a 40% success rate.  I still find the track record somewhat unconvincing.
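For anyone checking the arithmetic, the 50% figure falls out of the direction labels in the table (variable names are mine; the labels are copied from the table, with ±0.03 treated as flat):

```python
# Observed vs. forecast direction of change, 2001-2010, from the table above.
obs_dir  = ["warmer", "warmer", "flat", "colder", "flat",
            "flat", "colder", "colder", "warmer", "warmer"]
pred_dir = ["warmer", "warmer", "warmer", "flat", "warmer",
            "flat", "flat", "flat", "warmer", "warmer"]

# Count the years where the forecast direction matched the observed one.
hits = sum(o == p for o, p in zip(obs_dir, pred_dir))
print(f"{hits}/{len(obs_dir)}")   # 5/10, i.e. 50%
```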
NIL DIFFICILE VOLENTI

Watsisname
Science Officer
Posts: 1849
Joined: 06 Sep 2016
Location: Bellingham, WA

You think so?  Well, maybe it's not so bad that the naive approach is only slightly worse as a forecaster for next year, but it also comes with no insight into the underlying dynamics, nor can it be updated for a new influence, say a boost in aerosol forcing from a volcanic eruption.  As a demonstration of understanding the influences on Earth's temperature, it fails pretty badly -- compare the total correlation with that of the persistence method as a hindcaster.  Pretty profound difference.

Anyway, I personally don't care deeply about what next year's global average temperature actually is. The annual change is small and the individual does not feel it in any direct or obvious way.  What interests me is that they can predict it with pretty good skill, and that the methodology demonstrates that we do have a good understanding of the factors involved.  The naive forecast says next year's temperature will be the same.  Why?  "No reason, it just happens to work about 40% of the time."  Ok.  The Met Office's forecast says next year's temperature should be a little lower, because the sun's output is about the same, atmosphere's fairly clean of aerosols right now, and despite the greenhouse gas emissions trapping a bit more heat, the ocean will also go into a mode of absorbing more heat from the atmosphere.  Well that's way more compelling in my view, and we know it works a lot better than assuming persistence.

But I dunno, consulting the farmer's almanac can be fun, too.

midtskogen
World Builder
Posts: 968
Joined: 11 Dec 2016
Location: Oslo, Norway
Contact:

More compelling because it's more complex?  That sounds like a rejection of Occam's razor.

There are, of course, real skills involved here, mainly being able to measure the global temperature and to compute indices for things like ENSO.  2016 was special because of a very strong El Niño, which is known not to last, so the skill isn't in forecasting a cooler 2017, but in having identified an El Niño.  But since the naive approach has roughly the same predictive power, it means that the events that can be used to predict temperature changes for the next year are too few or too rare to make general forecasting for the next year useful.  As you say, the annual change is so small anyway that it doesn't matter much.  For me that's just another reason why these forecasts are well below the scientifically meaningful publishing threshold.

In the stock market there are index funds and there are managed funds aiming to beat the index.  Those managing the funds do predict the market with pretty good skill, and their methodology demonstrates that we do have a good understanding of the factors involved.  Yet index funds (the naive approach) tend to perform slightly better.  So I prefer the naive approach.
NIL DIFFICILE VOLENTI

Watsisname
Science Officer
Posts: 1849
Joined: 06 Sep 2016
Location: Bellingham, WA

Good grief.  I'm really not interested in trying to explain myself further -- I'd thought I already did a pretty good job.

I'll just agree we have different viewpoints.
