
 
Stellarator
Explorer
Posts: 260
Joined: 10 Jul 2018
Location: Andromeda A* blackhole

A very PC discussion

05 Oct 2018 18:10

vlad01 wrote:
Source of the post I just call it LCD cancer

I'll be sure to spread the meme then ;). It sounds a bit like an edgy cyberpunk band.

Anyway, I had a peek at the Pixio PX329 reviews - and yeah, pretty good stuff. But I could not help noticing that 13% of the ratings are 1 star, which in my experience is a tad high. Then again, a lot of people are pretty lousy when it comes to troubleshooting computers.

Now if only you had a great CPU and GPU to go with it...
Futurum Fusionem
 
vlad01
Explorer
Posts: 168
Joined: 02 May 2018

A very PC discussion

05 Oct 2018 22:15

Maybe with any luck the impending 7nm AMD CPU and GPU keynote at CES might just deliver that.  Ryzen 3000 is going to be a massive deal in CPU history.  AMD are committed to 7nm gaming GPUs as well, and many suspect Nvidia botched and rushed the RTX launch because of it.  There was an internal Nvidia document leaked showing 7nm AMD as the competitor for the 2080 and 2080 Ti; if that is true, it would make sense. One can hope it will be good, and it will certainly be much cheaper than the insane Nvidia prices.

I am certainly waiting to see what happens.
 
Stellarator
Explorer
Posts: 260
Joined: 10 Jul 2018
Location: Andromeda A* blackhole

A very PC discussion

06 Oct 2018 02:05

vlad01 wrote:
Source of the post Ryzen 3000 is going to be a massive deal in CPU history.

5 GHz, or so the tech-folks say. That is indeed history in the making. They say that technology will always advance exponentially, but truth be told I've taken for granted the fact that my CPUs have always been between 3-4 GHz.
Futurum Fusionem
 
vlad01
Explorer
Posts: 168
Joined: 02 May 2018

A very PC discussion

06 Oct 2018 02:53

Yeah, but gains aren't all about clock speed.  A good example was the Athlon 64 vs the P4: the Athlon was considerably faster than the fastest P4 at 3.8 GHz while chugging along in the mid-2 GHz range.  The NetBurst design Intel had was designed for peak speeds of around 7-10 GHz, but physics always wins. What an oversight that was; surely they would have researched that transistors at those sizes would not switch that fast?

Clock speed does not scale well with power consumption; IPC, pipeline width and architectural efficiency are where the gains have been for the last 15 years.
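
To put rough numbers on that: throughput is roughly IPC x clock, so a wider core at a lower clock can win. A toy Python sketch (the IPC figures are illustrative guesses of mine, not measured values):

# Rough throughput model: instructions/second ~ IPC * clock.
# The IPC numbers below are illustrative assumptions, not measurements.
chips = {
    "Pentium 4 (NetBurst)": {"ipc": 0.7, "clock_ghz": 3.8},
    "Athlon 64":            {"ipc": 1.3, "clock_ghz": 2.4},
}
for name, c in chips.items():
    gips = c["ipc"] * c["clock_ghz"]  # billions of instructions/second
    print(f"{name}: ~{gips:.2f} GIPS")
# Despite a ~1.4 GHz clock deficit, the shorter, wider pipeline wins.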

What is going to make the 3000 series special is that the IPC gains will be significant, beating out Intel and previous Zen by a conservative 15% clock for clock. The icing on the cake will hopefully be gains in clock as well, like a double whammy.

Early engineering samples are said to already run at 4.5 GHz. If that is true, I have little doubt that high 4s or even low 5s won't be out of the question.   There is news that TSMC will be introducing 5nm before mid-year as well, though I don't think at production, more like the tape-out stage.

This is like 1999 all over again! At least for CPUs. Possibly GPUs too, if AMD manage to pull something off, but I don't hold out too much hope there.  For CPUs I have little doubt though; it will be good.

I have studied the architecture of both Zen and xxLake, and what I found is that AMD's has a slightly longer and narrower pipeline, which should mean slightly higher clock potential than Intel's. The thing that seems to be their biggest holdup is fabrication. The GoSlow fab is notorious for poor-performing nodes, so moving to TSMC was by far the right choice for AMD.

If they get anywhere near 5 GHz it will be the biggest leap in CPU performance in like 10 years.
 
Stellarator
Explorer
Posts: 260
Joined: 10 Jul 2018
Location: Andromeda A* blackhole

A very PC discussion

06 Oct 2018 16:03

vlad01 wrote:
Source of the post surely they would have researched that transistors at those sizes would not switch that fast?

Just a good example of the tunnel-vision that some of these companies have for their tech. I recall quite a few products that suffered due to a lack of basic foresight and predictive planning. Or maybe they were just rushed and did not finish the product in time.
vlad01 wrote:
Source of the post This is like 1999 all over again!

Ugh, cue flashbacks!

vlad01 wrote:
Source of the post If they get anywhere near 5 GHz it will be the biggest leap in CPU performance in like 10 years.

So will future CPUs advance beyond 5 GHz in the next ten years or so? The Intel Core i9 seems pretty snazzy, but will we finally beat Moore's Law and Dennard scaling? Or are we going to be more like this:
[attached image: main-qimg-8ae6ca7ea1516fe3f431423be3ac00d4.png]
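
Since Dennard scaling came up: it is the old rule that power density stays constant as transistors shrink, and its breakdown is exactly why clocks stalled. A toy Python sketch of the arithmetic (normalized, made-up baseline values):

# Dennard scaling in one toy calculation: dynamic power P = C * V^2 * f.
# Shrink features by 1/k: C and V drop by k, f rises by k, area drops by k^2.
k = 1.4                            # a classic ~0.7x linear shrink per node
C, V, f, A = 1.0, 1.0, 1.0, 1.0    # normalized baseline values
P_old = C * V**2 * f
P_new = (C / k) * (V / k)**2 * (k * f)      # = P_old / k^2
print(P_old / A, P_new / (A / k**2))        # both ~1.0: power density unchanged
# Once V stopped scaling (leakage current), f had to stop too --
# which is why desktop clocks have sat in the 3-5 GHz band ever since.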
Futurum Fusionem
 
vlad01
Explorer
Posts: 168
Joined: 02 May 2018

A very PC discussion

07 Oct 2018 03:36

Simple ICs made from non-silicon semiconductors, like specialized germanium- or gallium-based RF devices, reach into the tens and even hundreds of GHz - parts used in radar amplifiers etc.  For extremely complex devices like CPUs and GPUs on silicon, the short answer is no. Mid-5 GHz is likely the upper limit of the very best silicon tech; physics basically dictates that silicon is only good up to that point.

CPUs have slowed down due to lack of competition and stagnation. Intel have done jack all since about 2010, only the minimal effort, and AMD were almost non-existent during that period.   AMD have come out of the blue and not only caught up but already taken over in enterprise, and they will continue to pull ahead; it won't be long before the same happens on desktop.   Progress is not dead yet, and after the last 2 years I think the future is bright once again.

GPU progress never really slowed down, only CPUs, due to the absence of competition.   We are now seeing a little bit of this in GPUs for the same reason, but I think that is short-lived.


When you compare transistor count and overall performance, it is clear that is where the gains are coming from; clock speed is only a minor contributor to overall performance improvements. The clock speed race ended in the early 2000s; it has been all architecture and complexity ever since. As they say, CPU pipelines have gone short and wide, and parallel.

As for Intel, they have almost no IPC gains left that they can make from this design, and this is why you see them pushing and pushing on clock speeds: they have no other major place to gain anything from. They need to wipe the slate clean and start again to have a chance of keeping up in the future, I think.   I do believe the 9900K will be the last hurrah for this architecture.   They are at the limits of the 14nm node, at the limits of the ring bus layout (which only scales with gains to about 8 cores), and at the thermal limits (soldered this time to be able to hit those clock targets and not melt), with the TDP at its limits.

It will be a great chip for sure, but likely the fastest they will get out of this design for a desktop-type CPU.
 
Harge
Observer
Posts: 3
Joined: 04 Nov 2016
Location: South Pole

A very PC discussion

07 Oct 2018 17:42

70 intel i9 extreme rgb edition cpus overclocked to 9 ghz
6tb of rgb ddr4-7000mhz ram
rgb nVidia GeForce RTX 2080 ti in 80 way sli
rgb 80tb nvme ssd
rgb 60tb pci-e ssd
rgb 100tb sata ssd
rgb 750tb hdd 7200rpm
600 rgb fans
liquid cooling everywhere
 
vlad01
Explorer
Posts: 168
Joined: 02 May 2018

A very PC discussion

07 Oct 2018 17:53

I love this site. So much in-depth info on microarchitecture and fabrication.

https://en.wikichip.org/wiki/WikiChip


There is a great article there on NRAM, something I had never heard of prior to this.
 
DoctorOfSpace
World Builder
Topic Author
Posts: 1112
Joined: 22 Aug 2016
Location: SpaceX Mars Colony
Contact:

A very PC discussion

08 Oct 2018 07:07

CPU: Intel Core i7-5820K 4.2GHz 6-Core Processor - RAM: G.Skill Ripjaws V Series 32GB (4 x 8GB) DDR4-2400 - GPU: EVGA GeForce GTX 1080 Ti SC Black Edition
Quando omni flunkus, moritati
 
vlad01
Explorer
Posts: 168
Joined: 02 May 2018

A very PC discussion

08 Oct 2018 18:29

You know what we lack: any OpenGL benchmarks. No matter how deep I search, I cannot find any OpenGL benchmarks outside Linux.

It would be great to compare all the cards from the last 3 generations in OpenGL so we can at least get a sense of how new vs old should perform in SE.
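
For what it's worth, rolling a crude one would not be hard. Here is a minimal sketch of an OpenGL frame-rate benchmark in Python, assuming the glfw and PyOpenGL packages; the clear-screen "workload" is just a placeholder where a real scene would be drawn:

import glfw
from OpenGL.GL import glClear, glClearColor, GL_COLOR_BUFFER_BIT

# Minimal OpenGL frame-rate benchmark skeleton.
assert glfw.init(), "GLFW failed to initialize"
window = glfw.create_window(1280, 720, "GL bench", None, None)
glfw.make_context_current(window)
glfw.swap_interval(0)   # disable vsync so raw frame times are measured

glClearColor(0.0, 0.0, 0.0, 1.0)
frames, t0 = 0, glfw.get_time()
while not glfw.window_should_close(window) and glfw.get_time() - t0 < 10.0:
    glClear(GL_COLOR_BUFFER_BIT)   # placeholder workload; render a real scene here
    glfw.swap_buffers(window)
    glfw.poll_events()
    frames += 1
print(f"avg FPS over 10 s: {frames / (glfw.get_time() - t0):.1f}")
glfw.terminate()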
 
vlad01
Explorer
Posts: 168
Joined: 02 May 2018

A very PC discussion

08 Oct 2018 18:59

DoctorOfSpace wrote:

Bang on in every way.   What I don't understand is why people are buying these and then defending the asking price.
 
Stellarator
Explorer
Posts: 260
Joined: 10 Jul 2018
Location: Andromeda A* blackhole

A very PC discussion

08 Oct 2018 23:14

vlad01 wrote:
Source of the post OpenGL benchmarks.

What does this do? I'm unfamiliar with the term, though it could be that I learned it under another name. Plus I suffer from Google apathy, otherwise I'd look it up there :D.
Futurum Fusionem
 
JackDole
World Builder
Posts: 1052
Joined: 02 Nov 2016
Location: Terra

A very PC discussion

08 Oct 2018 23:32

vlad01 wrote:
Source of the post No matter how deep I search, I cannot find any OpenGL benchmarks outside Linux.

'FurMark' is an OpenGL benchmark. https://geeks3d.com/furmark/
 
vlad01
Explorer
Posts: 168
Joined: 02 May 2018

A very PC discussion

08 Oct 2018 23:33

The API that software uses to talk to the GPU.  You can get any benchmark you like for DX11, DX12 and Vulkan, but it seems no one has done any for OpenGL, which is what SE uses.

Last time I was serious about gaming there was only DX9, DX10 and OpenGL.  Is it that no one uses it in games anymore?

OpenGL seems to be all there is for professional use on Linux, and professional use is common on Windows too.


The thing is, each API has its weaknesses and strengths, and thus its own performance scaling with software and hardware. The way to really see how different hardware performs for what you want to use it for is either to find benchmarks of your exact application, or to look at others that use the same API to get a sense of the performance scaling.

See, each make of GPU can either be fully designed for an API or merely support it without necessarily utilizing it well.

For example, Pascal is great at DX11 but average at DX12 and Vulkan, whereas the 20 series is basically flipped.

AMD GPUs have always been good at DX12 and Vulkan.   But for poor old OpenGL there is no info I can find on which make and series of GPU works best with it.
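
One small check anyone can run is to ask the driver what it reports for its OpenGL implementation, which at least confirms whose path you are on. A minimal probe sketch, again assuming the glfw and PyOpenGL Python packages:

import glfw
from OpenGL.GL import glGetString, GL_VENDOR, GL_RENDERER, GL_VERSION

# Print the vendor/renderer/version strings of the current OpenGL context.
assert glfw.init(), "GLFW failed to initialize"
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)   # hidden window, just for a context
window = glfw.create_window(64, 64, "probe", None, None)
glfw.make_context_current(window)
for label, name in (("Vendor", GL_VENDOR), ("Renderer", GL_RENDERER), ("Version", GL_VERSION)):
    print(f"{label}: {glGetString(name).decode()}")   # PyOpenGL returns bytes
glfw.terminate()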

I have asked so many tech channels that do benchmarking to try out SE on their cards, as I want to know, and I have been ignored by every single one of them.  It would be great if someone could methodically test SE out like this.
 
JackDole
World Builder
Posts: 1052
Joined: 02 Nov 2016
Location: Terra

A very PC discussion

08 Oct 2018 23:40

vlad01 wrote:
Source of the post No matter how deep I search, I cannot find any OpenGL benchmarks outside Linux.

Look at one post above your last post. You may have missed it.
