Ultimate space simulation software

 
Inarius
Observer
Posts: 5
Joined: 03 Nov 2016

Work progress 0.9.9.0

23 Aug 2018 00:12

@Macronicus: I think that "next year" is long enough to mean "yes". The end of 2019 is 16 months away, after all. I can't speak for SE, of course.
But when SE says "But now I cannot release SE, because it has many new incomplete, not debugged, and not optimized things", I'm quite optimistic.
It means that only this remains, so no "new" things are left. Of course it will take longer, but I'd expect something at the end of Q1 2019.
Nevertheless, I wouldn't like to be SE right now. Debugging can be very distressing and depressing because you can't schedule how long bug fixing will take. And optimization is very boring and tedious, too. Not the best part of development...
 
SpaceEngineer
Author of SpaceEngine
Topic Author
Posts: 694
Joined: 17 May 2016
Location: Saint-Petersburg
Contact:

Work progress 0.9.9.0

23 Aug 2018 01:24

vlad01 wrote:
Source of the post On the other hand I can get Pro series AMD workstation cards at the same price if not cheaper than the 1080Ti here in Australia which also have 32GB of VRAM and similar GPU performance, but they are dual Polaris cards with internal cross fire.

I don't know much about AMD "professional" cards, but I used to use a multi-thousand-dollar NVidia Quadro, which has rendering performance in SE... like my 10-year-old 9800 GTX+. So I don't recommend buying professional NVidia cards for gaming.

vlad01 wrote:
Source of the post I currently have a heavily overclocked 980 with 4GB and it has stuttering/pausing issues when it saturates the VRAM which it does very easy.  I know a few users here commented that even on the 1080Ti the usage get around 9-10GB which leaves little headroom if this is indeed the case.

Are you talking about SE? I changed the VRAM usage model; terrain now takes only up to 1.5 GB at LOD 0, and up to 4 GB at LOD 1. Add some 1-1.5 GB of static resources, and some room for dynamic things like stars, galaxies, and ship models and textures (less than 1 GB). So 4 GB is enough for most use cases now (unless you want extreme LOD settings).
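As a quick sanity check, those component figures sum up as stated; a minimal sketch in Python, with all numbers copied from the post (taking the upper bound of each quoted range):

```python
# VRAM components quoted above, in GB (upper bounds of the quoted ranges).
terrain_lod0 = 1.5   # terrain at LOD 0
terrain_lod1 = 4.0   # terrain at LOD 1
static_res   = 1.5   # static resources (1-1.5 GB)
dynamic      = 1.0   # stars, galaxies, ship models and textures (< 1 GB)

# At LOD 0, the worst case lands exactly on a 4 GB card:
lod0_total = terrain_lod0 + static_res + dynamic
print(lod0_total)  # 4.0

# At LOD 1, the worst case exceeds 4 GB, hence "extreme LOD settings" need more:
lod1_total = terrain_lod1 + static_res + dynamic
print(lod1_total)  # 6.5
```

The variable names are mine for illustration; only the gigabyte figures come from the post.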

vlad01 wrote:
Source of the post So with these questions and the very contrasting offerings from AMD to nvidia it makes the decision for best GPU quite hard. 
Basically.
nvidia consumer grade cards= Fastest, most expensive, least VRAM
AMD prosumer/pro grade cards = Similar-slower, much cheaper, significantly more VRAM.

AMD has glitchy software. I am stuck on a 2017 driver, because newer drivers cause stuttering of the whole system every few seconds. And this is a very old problem; users have been reporting it for years, and it is still not solved by AMD.
But yes, prices for AMD are much better, if you are not going to buy a new GPU at the peak of the mining insanity.

Inarius wrote:
Source of the post Nevertheless, I wouldn't like to be SE right now. Debugging can be very distressing and depressing because you can't schedule how long bug fixing will take. And optimization is very boring and tedious, too. Not the best part of development...

You can't imagine what excitement I feel when achieving a 50% performance improvement after some two weeks of boring optimization coding :)
 
Inarius
Observer
Posts: 5
Joined: 03 Nov 2016

Work progress 0.9.9.0

23 Aug 2018 02:25

OK! A 50% optimisation IS a thing :) Congratulations :)
 
vlad01
Space Pilot
Posts: 103
Joined: 02 May 2018

Work progress 0.9.9.0

23 Aug 2018 02:45

Thanks for the reply, SpaceEngineer.

I don't know what the new Quadros are like driver-wise, but the old ones can easily be soft-modded to use the GeForce drivers.
I have a Quadro FX 4000 in my old retro gaming PC, which I run as a 6800 Ultra. It overclocks insanely high too; in fact I got speeds equivalent to those in the top 10 air-cooled Aquamark scores.

As for AMD professional cards, you can also choose the gaming driver mode. The Vega FE has this option; the Radeon Pro Duo can also be run this way, but it has to be forced. Both of these cards are a bit cheaper than the 1080Ti here in Australia and offer 16 and 32 GB of VRAM. Both show decent gaming performance, so they work a lot better than the modern Quadros in that regard; they aren't the fastest, though, offering 1080 and 1080Ti-like performance.

The problem is that Radeon cards are cheap here now while nvidia cards are still at record high prices. Mining is well and truly over too, and people only really mined on Radeon anyway, so I don't know why the prices for nvidia are so high. The Titan Xp is the same price as the new 2080Ti, which shows how bad the prices have become.

So what do you think the VRAM footprint will be like at the highest LOD and visual settings in 0.990? Is it possible to have a higher LOD setting than shown on the settings panel?
 
Canleskis
Space Tourist
Posts: 33
Joined: 08 Oct 2016

Work progress 0.9.9.0

23 Aug 2018 02:58

SpaceEngineer wrote:
AMD has glitchy software. I am stuck on a 2017 driver, because newer drivers cause stuttering of the whole system every few seconds. And this is a very old problem; users have been reporting it for years, and it is still not solved by AMD.
But yes, prices for AMD are much better, if you are not going to buy a new GPU at the peak of the mining insanity.

That's weird. I have been using AMD cards with SpaceEngine for more than 4 years and have none of these issues whatsoever, even with the latest drivers. At LOD 1, loading speed at 50, and SE quite heavily modded, my RX 570 8GB runs perfectly, and I always have 40+ FPS on the surface of planets while the terrain is still generating (which usually lasts 0.5-2 seconds). Might it be a Windows 7 problem?

vlad01 wrote:
Source of the post So what do you think the VRAM footprint will be like at the highest LOD and visual settings in 0.990? Is it possible to have a higher LOD setting than shown on the settings panel?

From the console you can change the LOD up to 2 with Set LandLod 2.
 
vlad01
Space Pilot
Posts: 103
Joined: 02 May 2018

Work progress 0.9.9.0

23 Aug 2018 03:15

Could be a driver/hardware issue. I know nvidia cards with AMD CPUs cause these sorts of issues for CPU performance. Or it can be AMD cards not liking the Intel CPU, chipset drivers, OS scheduler, etc.

I doubt it's Win 7; there are actually far more issues reported with Win 10, especially with AMD hardware, where 7 tends to work out of the box. The same thing happened with Ryzen: even though it was not designed for 7 and had no official support under it, 7 performed much, much better on release than 10 did. It took over 6 months for MS to update and patch 10 to work correctly on Ryzen before it was able to catch up with the out-of-the-box performance on 7. AMD drivers on certain OS/hardware combos have historically taken time after release for all the bugs and optimizations to be sorted out.
 
Canleskis
Space Tourist
Posts: 33
Joined: 08 Oct 2016

Work progress 0.9.9.0

23 Aug 2018 04:05

vlad01 wrote:
Could be a driver/hardware issue. I know nvidia cards with AMD CPUs cause these sorts of issues for CPU performance. Or it can be AMD cards not liking the Intel CPU, chipset drivers, OS scheduler, etc.

I doubt it's Win 7; there are actually far more issues reported with Win 10, especially with AMD hardware, where 7 tends to work out of the box. The same thing happened with Ryzen: even though it was not designed for 7 and had no official support under it, 7 performed much, much better on release than 10 did. It took over 6 months for MS to update and patch 10 to work correctly on Ryzen before it was able to catch up with the out-of-the-box performance on 7. AMD drivers on certain OS/hardware combos have historically taken time after release for all the bugs and optimizations to be sorted out.

Very weird then; I wonder what could cause the stuttering issues SpaceEngineer and other users have.
I also had an AMD CPU and GPU combo before, with no real issues, even though I got a small performance improvement in SE when moving to Intel. Now I still have this Intel CPU and an AMD GPU, with no driver issues and no performance issues, and SE runs perfectly. I guess it's really a matter of isolated cases, but improving the overall performance and support should hopefully solve most of it.
Although I wonder how an SSD can impact SE performance. I guess it would be almost nothing, since the work is done on the GPU by generating the textures, but maybe SSDs can limit stuttering?
Last edited by Canleskis on 23 Aug 2018 04:48, edited 1 time in total.
 
vlad01
Space Pilot
Posts: 103
Joined: 02 May 2018

Work progress 0.9.9.0

23 Aug 2018 04:28

AMD CPUs and GPUs are normally the most compatible. The same goes for nvidia and Intel CPUs. The biggest issue is nvidia with AMD Ryzen; under certain games and apps the performance can drop a lot. It's an nvidia driver issue and generally known across the community.

I don't know too much about AMD GPU and Intel CPU problems, but most I have heard of were Intel chipset driver/OS and AMD GPU combos that ran a bit funny. But not having had an AMD GPU since 2003, I can't say from experience.

One thing that does cause bad stutter on mine is running out of system memory. I have only 8 GB, and when available memory drops below 200 MB or so, SE has a hard time and even the audio stutters. I close my internet browser, regain a few GB, and it's all good after that. I used to get a constant stutter on my 980, but I read the FAQs: you can disable the thread optimization, and that fixes that issue. The only performance issue I have left is running out of VRAM, which happens virtually straight away on or above a planet surface. Hence why I asked SpaceEngineer all those questions, as I am a bit concerned that nvidia have not increased capacity on consumer cards in many, many years now. I'd hate to spend that sort of money and only hit a wall again. The GPU market right now is very weird and not wallet-friendly.

This has probably gone a bit too off topic. I do apologize for this.
 
NoName
Observer
Posts: 2
Joined: 19 Jul 2018

Work progress 0.9.9.0

23 Aug 2018 05:19

SpaceEngineer wrote:
No, new nebulae will not help with this. The problem is that stars must be rendered inside the dust somehow. The new nebulae do not support this; they are rendered on top of stars. So if you have some star which is closer to you than the nebula, the nebula's dust will absorb light from that star anyway. Making this correct is a very complex task. I have some algorithms in mind, but I will work on them after the 0.990 release.
OK, great! But I've discovered something else that seems weird. After 0.974 some parameters are "limited", so to say. For example, the effective temperature on scorching planets has a limit of 1200 K (927 °C), and planets that are supposed to be hotter are stuck at this limit. The size of procedural galaxies is limited to about 100,000 pc (326,160 ly), and bigger galaxies are also stuck at this limit. Is it a bug, or is it going to be fixed in 0.990?
 
vlad01
Space Pilot
Posts: 103
Joined: 02 May 2018

Work progress 0.9.9.0

23 Aug 2018 05:23

Canleskis wrote:
SpaceEngineer wrote:
AMD has glitchy software. I am stuck on a 2017 driver, because newer drivers cause stuttering of the whole system every few seconds. And this is a very old problem; users have been reporting it for years, and it is still not solved by AMD.
But yes, prices for AMD are much better, if you are not going to buy a new GPU at the peak of the mining insanity.

That's weird. I have been using AMD cards with SpaceEngine for more than 4 years and have none of these issues whatsoever, even with the latest drivers. At LOD 1, loading speed at 50, and SE quite heavily modded, my RX 570 8GB runs perfectly, and I always have 40+ FPS on the surface of planets while the terrain is still generating (which usually lasts 0.5-2 seconds). Might it be a Windows 7 problem?

vlad01 wrote:
Source of the post So what do you think the VRAM footprint will be like at the highest LOD and visual settings in 0.990? Is it possible to have a higher LOD setting than shown on the settings panel?

From the console you can change the LOD up to 2 with Set LandLod 2.

Sorry, I missed this post. Thanks a heap for that :) SE looks that much nicer now. I suspected this setting existed, but I can't recall who mentioned it, and I couldn't find it in the readme.txt.

But my poor old GPU, lol. It literally saturated within 30 seconds of approaching this planet.

test_LOD2.jpg
 
Macronicus
Space Pilot
Posts: 113
Joined: 25 Jan 2018

Work progress 0.9.9.0

23 Aug 2018 07:21

SpaceEngineer wrote:
vlad01 wrote:
Source of the post On the other hand I can get Pro series AMD workstation cards at the same price if not cheaper than the 1080Ti here in Australia which also have 32GB of VRAM and similar GPU performance, but they are dual Polaris cards with internal cross fire.

I don't know much about AMD "professional" cards, but I used to use a multi-thousand-dollar NVidia Quadro, which has rendering performance in SE... like my 10-year-old 9800 GTX+. So I don't recommend buying professional NVidia cards for gaming.

vlad01 wrote:
Source of the post I currently have a heavily overclocked 980 with 4GB and it has stuttering/pausing issues when it saturates the VRAM which it does very easy.  I know a few users here commented that even on the 1080Ti the usage get around 9-10GB which leaves little headroom if this is indeed the case.

Are you talking about SE? I changed the VRAM usage model; terrain now takes only up to 1.5 GB at LOD 0, and up to 4 GB at LOD 1. Add some 1-1.5 GB of static resources, and some room for dynamic things like stars, galaxies, and ship models and textures (less than 1 GB). So 4 GB is enough for most use cases now (unless you want extreme LOD settings).

vlad01 wrote:
Source of the post So with these questions and the very contrasting offerings from AMD to nvidia it makes the decision for best GPU quite hard. 
Basically.
nvidia consumer grade cards= Fastest, most expensive, least VRAM
AMD prosumer/pro grade cards = Similar-slower, much cheaper, significantly more VRAM.

AMD has glitchy software. I am stuck on a 2017 driver, because newer drivers cause stuttering of the whole system every few seconds. And this is a very old problem; users have been reporting it for years, and it is still not solved by AMD.
But yes, prices for AMD are much better, if you are not going to buy a new GPU at the peak of the mining insanity.

Inarius wrote:
Source of the post Nevertheless, I wouldn't like to be SE right now. Debugging can be very distressing and depressing because you can't schedule how long bug fixing will take. And optimization is very boring and tedious, too. Not the best part of development...

You can't imagine what excitement I feel when achieving a 50% performance improvement after some two weeks of boring optimization coding :)


How many more features are you gonna add?
To a Galaxy billion light-years away from a Dark World.
 
N0B0DY
Space Pilot
Posts: 140
Joined: 09 Dec 2016

Work progress 0.9.9.0

23 Aug 2018 13:05

Amazing work Space Engineer! You are the titan lifter of the universe:
► Show Spoiler

and we are all standing on your shoulders!
 
Stellarator
Explorer
Posts: 155
Joined: 10 Jul 2018
Location: Andromeda A* blackhole

Work progress 0.9.9.0

23 Aug 2018 13:07

N0B0DY wrote:
Amazing work Space Engineer! You are the titan lifter of the universe:
► Show Spoiler

and we are all standing on your shoulders!

Hey, cool pic. Did you do that yourself, or is it from online?
Futurum Fusionem
 
SpaceEngineer
Author of SpaceEngine
Topic Author
Posts: 694
Joined: 17 May 2016
Location: Saint-Petersburg
Contact:

Work progress 0.9.9.0

23 Aug 2018 15:03

Version 0.990 will not need so much video memory, thanks to texture compression. At Full HD (1080p) screen resolution, you need no more than a 6 GB graphics card.

At LOD 0, a fully loaded view from the surface consumes just 600-700 MB of VRAM (look at the LandNode reading on the left). Detail textures are enabled, and texture compression is enabled. This is just the terrain system's consumption, and this scene is loaded at startup, so 600 MB is only the room taken by the terrain nodes visible on the screen (4003 nodes were generated, the LandNode-Allocated reading). The engine reserves some extra nodes to let you rotate the camera and fly around, typically 3 times more than this (12900 nodes, LandNode-Max). So I may say that the terrain engine requires 2 GB of video memory at LOD 0.

But this is only the terrain engine. Add another 1 GB of static textures and buffers, plus some room for stars/galaxies/ships. In total, 3 GB for stable work at LOD 0. SE works even with a lower amount of VRAM; it dynamically reallocates it as needed (for example, removing terrain nodes that were used long ago to free memory for ship textures when you spawn a ship). So even if you have just 1.6 GB of video memory, you can load such a scene with no problem. But if you start moving, nodes which are out of view will be deleted almost immediately and replaced with new nodes. So when you decide to return to the initial point, you will see the loading process again. So 3 GB is better.
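The arithmetic in the two paragraphs above can be sketched as follows; a rough model only, with the numbers taken from this post (the function name and the 3x reserve factor are my reading of the text, not engine internals):

```python
def terrain_budget_gb(visible_nodes_gb, reserve_factor=3.0, static_gb=1.0):
    """Rough VRAM budget: visible terrain nodes, the ~3x node reserve
    the engine keeps for camera movement, plus static resources."""
    return visible_nodes_gb * reserve_factor + static_gb

# ~0.65 GB of visible terrain at LOD 0 (600-700 MB per the post):
print(f"{terrain_budget_gb(0.65):.2f} GB")  # roughly 3 GB, the "stable work" figure
```

With no reserve (reserve_factor=1) the same model gives the ~1.6 GB bare-minimum case described above.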

LOD 0.0.jpg


At LOD 1.5, the memory requirements rise. Now this scene generated 1.5 GB of terrain textures; 7314 nodes were created, with 20000 nodes reserved. The fully used reserve is 4.2 GB of VRAM. Add 1 GB of static resources, so you need a 5.5 GB graphics card for smooth playing.

LOD 1.5.jpg


At LOD 2, the engine generates 2.2 GB of textures and 10000 nodes, but the reserved limit is still 20000 (so again, a 6 GB graphics card will be enough). The reserved limit can be raised in the config, but I don't see a purpose for this. Nobody plays at LOD 2; it can be used just for taking extreme screenshots from a stationary point, because the loading time is huge, about 5 minutes. Note that the framerate is much lower than at LOD 1.5 and LOD 1, because the graphics card needs to render 10000 meshes, each with a set of unique textures.

LOD 2.0.jpg


You can set LOD 3 and even more through the console command, but I don't believe any GPU can handle that. Also, with the new terrain engine, extreme LODs improve nothing. You may notice some improvement in texture and mesh detail between LOD 0 and 1, but LOD 2 is almost the same as LOD 1. Yes, the terrain is split into more nodes, but their texture detail is excessive, so they get downscaled by GPU rasterization anyway. LOD 1 is already pixel-to-pixel resolution.

So I may say that if you play at Full HD resolution, you need 6 GB at the most extreme. If you have a 4K display, then LOD 0 will take nearly the same VRAM as LOD 1 at 2K, because of the doubled resolution, so 6 GB will work again. You need more VRAM only to play at the excessive LOD 2 in 4K resolution. To take screenshots at LOD 2 in 4K, 6 GB will be enough, I think.
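The per-LOD figures scattered through this post can be collected into one small table; an illustrative sketch, where all input numbers are copied from the text above, and the scaling of the texture footprint by the node reserve is my reading of the post rather than how the engine actually allocates:

```python
# Figures quoted in the post: (terrain textures in GB, nodes generated,
# nodes reserved) for each LOD level.
LOD_FIGURES = {
    0.0: (0.65, 4003, 12900),
    1.5: (1.5, 7314, 20000),
    2.0: (2.2, 10000, 20000),
}
STATIC_GB = 1.0  # static textures and buffers, per the post

def recommended_vram_gb(tex_gb, nodes_used, nodes_reserved):
    # Scale the generated texture footprint up to the full node reserve,
    # then add the static resources.
    return tex_gb * nodes_reserved / nodes_used + STATIC_GB

for lod, figures in sorted(LOD_FIGURES.items()):
    print(f"LOD {lod}: ~{recommended_vram_gb(*figures):.1f} GB recommended")
```

The estimates land near the recommendations in the post (3, 5.5, and 6 GB); the small gaps presumably come from rounding in the quoted figures and per-node overhead this simple model ignores.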
 
vlad01
Space Pilot
Posts: 103
Joined: 02 May 2018

Work progress 0.9.9.0

23 Aug 2018 18:52

Thanks, SpaceEngineer, for the detailed explanation. Everything is clear to me now.
