
Performance HELP pls.


Cadivus


There is no real difference in performance between a Haswell Refresh chip and a Skylake chip.

 

Skylake is about motherboard chipset features and a die shrink to 14nm, not speed. In many respects, an i7-4790k is faster than an i7-6700k because it turbos to 4.4GHz while the Skylake turbos only to 4.2GHz.

ish

 

You're trying to make a point about i7 vs i5 based on 1 or 2 fps in a few tests picked out to make that point. I rather think your own examples point out the opposite: there is no detectable difference between an i5 and an i7 for gaming. 87 fps on an i5 vs 88 fps on an i7 is not something a human can detect, even if it were consistent. Further, the i7 is a hundred dollars more expensive, money that would be far better put into a GPU.

 

Those two chips are based on designs that are two years apart. That they differ by only 1 FPS is the point. You are comparing chips whose only similarity is the "i" followed by a number between 4 and 8. When you compare two stock chips that are architecturally identical, the fps difference is around 10-15 FPS. Google a stress comparison of the i5-6500 and the i7-6700 in Far Cry 4 and Grand Theft Auto 5. You'll not only see an FPS difference in the neighborhood of 26%, you'll also see a wide fluctuation in the i5 frame rate that doesn't exist on the i7 (read that as the i5 stressing out), because Hyper-Threading and the extra L3 cache allow an extra push the i5 just doesn't have the hardware for.

 

If you're in the GTX 960 or GTX 970 budget range, an i7 is way out of place for gaming. For a $100 price increase from the i5 to the i7, I'd expect at least a 10% performance jump, and it just isn't there.

 

People on a budget should be making choices that fit their budget. The current "best" Intel has is the i7 Skylake. If you can afford it, do it. If you can't, you can't; don't. AMD hasn't made the move to 14nm for its PC CPUs yet, but they have announced they are planning to in Q4 of this year with the Zen architecture.

 

DDR4 is replacing DDR3. AMD has already done it in their embedded market and will with Zen. Skylake is Intel's move. Skylake will future-proof your system far more than last year's i5. One thing you can be certain of: the best video card on the market is already out of date.

DirectX 12 is also on the horizon. It's already been shown to favor multi-core processors (there is a demo on Steam). Play with it yourself... The i7 plays stronger than the stock i5. It plays nicer than the overclocked i5.

 

If you have to upgrade now, go that direction. If you don't have to, then wait until DDR4 makes the jump from 2133 to 4000 MT/s or so.

 

If you are going to overclock, then all bets are off. Grab some LN2 and crank it to 6GHz.

 

i5-6500

H170 Motherboard

16GB DDR3

GTX 970

Total - $655

 

vs.

 

i7-6700k

Z170 Motherboard

16GB DDR4

GTX 950

Total - $760

 

Again, compare apples to apples. The chips we are talking about share the same pinout, so they don't require a different motherboard or different memory. (A GTX 950 runs $149.99, so that config would make the i7 (non-K) about $5.00 cheaper than the i5/970; see the check below. ;) )

Geforce GTX 970 $279.99

2x8GB DDR4 2133 $74.49

H170 Motherboard $99.99

$454.47 before processors

i7-6700 $329.99

i5-6500 $204.99
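
For anyone checking the math, here's a quick sketch using the prices quoted above; the only assumption is pairing each CPU with the GPU as described (i7 + GTX 950 vs i5 + GTX 970):

```python
# Sanity check of the build math above, using the prices quoted in the post.
parts = {
    "GTX 970": 279.99, "GTX 950": 149.99,
    "2x8GB DDR4 2133": 74.49, "H170 motherboard": 99.99,
    "i7-6700": 329.99, "i5-6500": 204.99,
}
base = parts["2x8GB DDR4 2133"] + parts["H170 motherboard"]  # shared parts

i5_build = base + parts["i5-6500"] + parts["GTX 970"]
i7_build = base + parts["i7-6700"] + parts["GTX 950"]

print(f"i5-6500 + GTX 970: ${i5_build:.2f}")        # $659.46
print(f"i7-6700 + GTX 950: ${i7_build:.2f}")        # $654.46
print(f"difference: ${i5_build - i7_build:.2f}")    # $5.00 in the i7 build's favor
```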


CPU: i5-650, 4MB cache, LGA 1156, 3.2GHz-3.6GHz "safe" OC.

As I replied in your other thread, it's the old i5-650 that's holding you back. The rest of the system is fine.

 

There's no significant way to upgrade your CPU with your current motherboard. You need a new motherboard and CPU.

You can either get a Haswell CPU and motherboard and keep your old RAM, or, best of all, get a new Skylake CPU such as an i5-6500, motherboard, and DDR4 RAM.


This game hasn't been on the HeroEngine for about six years. Please educate yourself.

 

Even a HeroEngine heavily modified by programmers who no longer work for BioWare is still, at its core, a HeroEngine, and subject to the same issues as the original.


 

http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/23

 

*Overall, Skylake is not an earth shattering leap in performance. In our IPC testing, with CPUs at 3 GHz, we saw a 5.7% increase in performance over a Haswell processor at the same clockspeed and ~ 25% gains over Sandy Bridge. That 5.7% value masks the fact that between Haswell and Skylake, we have Broadwell, marking a 5.7% increase for a two generation gap.*

 

With the i7-6700k having a 200 MHz slower turbo speed than the i7-4790k, in the end the two chips at stock speeds will perform plus or minus about the same.
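
As a rough back-of-the-envelope check, treating performance as IPC times clock (a simplification that ignores memory speed and real turbo behavior), the numbers in this thread bear that out:

```python
# Rough model: performance ~ IPC x clock. Figures are from the AnandTech
# review quoted above; this is an estimate, not a benchmark.
ipc_gain = 1.057       # Skylake vs Haswell at the same clock
turbo_6700k = 4.2      # GHz, i7-6700k max turbo
turbo_4790k = 4.4      # GHz, i7-4790k max turbo

relative = ipc_gain * (turbo_6700k / turbo_4790k)
print(f"i7-6700k vs i7-4790k at stock turbo: {relative:.3f}x")  # ~1.009x, a wash
```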

 

There are benchmarks, and then there is what the average person can perceive, which is something so much of the tech review world glosses over.

 

Unless you work with it on a regular basis, I fully expect to be able to put an Ivy Bridge i5 or i7 on a desk next to a Skylake i5 or i7, and you won't be able to tell the difference without looking. There just isn't that much of a difference outside of edge cases. You have to go back to Sandy Bridge to see it (and even then a lot of people won't; it depends on what you're running).


When you compare two stock chips that are architecturally identical, the fps difference is around 10-15 FPS. Google a stress comparison of the i5-6500 and the i7-6700 in Far Cry 4 and Grand Theft Auto 5. You'll not only see an FPS difference in the neighborhood of 26%, you'll also see a wide fluctuation in the i5 frame rate that doesn't exist on the i7 (read that as the i5 stressing out), because Hyper-Threading and the extra L3 cache allow an extra push the i5 just doesn't have the hardware for.

 

I would question any such results. There isn't enough of a difference between the chips to provide the fps spread you're describing. Either someone is testing it wrong or is setting it up to show what they want it to show.

 

I have most of the chips that we're talking about (only the i7-6700k is missing from my test bench because it is still in short supply and I'm not recommending it for anyone right now)

 

People on a budget should be making choices that fit their budget. The current "best" Intel has is the i7 Skylake. If you can afford it, do it. If you can't, you can't; don't.

 

If you can afford it, you might still not want to do it. The number of people who should actually spend the extra $100 for an i7 is a very small single-digit percentage, yet the number of people who DO buy it is far larger, because "oh, it is faster and thus it is better, right?"

 

A lot of people make uninformed buying decisions, or poorly informed at least, which is why cards like the GT710 exist:

 

http://www.anandtech.com/show/9993/nvidias-partners-rollout-geforce-gt-710-to-fight-integrated-graphics

 

So many people see a computer and say, "oh, that has NVidia GeForce in it, that's good right? I need one of those, that computer should do the trick", without understanding that a name means nothing.

 

For most people, the i7 fits into that category, only in reverse. "Oh, well the i7 is faster than the i5, right? And I want a fast computer, so I better get that". Sure, except does it make the difference you think it makes? Not likely. Sure, it doesn't hurt you, but you just spent $100 more than you had to, because you didn't understand what you were buying.

 

DDR4 is replacing DDR3.

 

So? It will over time, but what difference does that make to the person building or buying a computer today?

 

If you build a Skylake computer today, with few exceptions, you should do it with DDR3. The odds of actually upgrading the RAM in the system's lifetime are low; $60 will get you 16GB of DDR3, which should last you the life of the machine for most use cases, and it costs less than DDR4.

 

DDR4 being "new" doesn't mean much if the difference doesn't show up to the end user.

 

AMD has already done it in their embedded market and will with Zen.

 

Who cares what AMD does?

 

If I had to guess, you're young and haven't watched the market for decades, you think "new and shiny" must be best, and you think that having lots of upgrade paths for chips and motherboards is a nice thing.

 

In my very long experience going back to the 8088, I have never found that to be the case and I see nothing that will change it going forward. CPU/RAM/Motherboard almost always gets installed as a set and replaced as a set. Oh sure, you can find exceptions, but not many. I sure wouldn't base any buying decisions on that line of thinking.

 

Skylake is Intel's move. Skylake will future-proof your system far more than last year's i5. One thing you can be certain of: the best video card on the market is already out of date.

 

The best video card on the market isn't out of date, the best video card on the market is... *drum roll please* The best video card on the market! :)

 

As for Skylake, it is no more future-proof than a Haswell Refresh would be. For most users planning to keep a computer up to 5 years, either choice is fine right now; they aren't going to see or notice any difference. It also won't matter if they put in DDR3 or DDR4.

 

Being "newer" does not equal being "better".

 

DirectX 12 is also on the horizon. It's already been shown to favor multi-core processors (there is a demo on Steam). Play with it yourself... The i7 plays stronger than the stock i5. It plays nicer than the overclocked i5.

 

By the time DirectX 12 matters, you're likely to have replaced whatever computer you buy today.

 

Having DX12 doesn't matter if you're playing a DX9 or DX11 game, it only matters if you're playing a DX12 game, and those won't become a thing for a long time.

 

Again, compare apples to apples. The chips we are talking about share the same pinout, so they don't require a different motherboard or different memory.

 

You are making the mistake of confusing what is possible with what is reasonable and customary.

 

The only people who should be using Z170 motherboards are those using K CPUs. If you are NOT using a K CPU then you shouldn't be on anything beyond H170. There is a decent price difference between the H and Z boards, at least from the same company and quality.

 

A decent number of people running non-K CPUs won't even be on H170, they'll be on H110, which is fine for a lot of users as well.

 

As for different RAM, I have yet to see a Z170 board that uses DDR3, but I see lots of H110 and H170 boards that use DDR3 (and plenty that use DDR4, of course).

 

So a non-K CPU on a H170 board with DDR3 is a whole lot less money than a K CPU on a Z170 board with DDR4. At least enough money to pay for a jump in GPU power.


http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/23

 

*Overall, Skylake is not an earth shattering leap in performance. In our IPC testing, with CPUs at 3 GHz, we saw a 5.7% increase in performance over a Haswell processor at the same clockspeed and ~ 25% gains over Sandy Bridge. That 5.7% value masks the fact that between Haswell and Skylake, we have Broadwell, marking a 5.7% increase for a two generation gap.*

 

With the i7-6700k having a 200 MHz slower turbo speed than the i7-4790k, in the end the two chips at stock speeds will perform plus or minus about the same.

 

There are benchmarks, and then there is what the average person can perceive, which is something so much of the tech review world glosses over.

 

Unless you work with it on a regular basis, I fully expect to be able to put an Ivy Bridge i5 or i7 on a desk next to a Skylake i5 or i7, and you won't be able to tell the difference without looking. There just isn't that much of a difference outside of edge cases. You have to go back to Sandy Bridge to see it (and even then a lot of people won't; it depends on what you're running).

 

Further down in that same article is: "When we ratchet the CPUs back up to their regular, stock clockspeeds, we see a gap worth discussing. Overall at stock, the i7-6700K is an average 37% faster than Sandy Bridge in CPU benchmarks, 19% faster than the i7-4770K, and 5% faster than the Devil’s Canyon based i7-4790K."

5% faster for a chip that is 200 MHz slower is telling.

 

You are correct that the end user probably won't be able to tell the difference; that job belongs to the person recommending the machine to the end user. 60% of the people I know can get along just fine with an off-the-shelf machine and be perfectly happy. They don't play PC games. The other 40% tear their machines apart regularly.

 

I would question any such results. There isn't enough of a difference between the chips to provide the fps spread you're describing. Either someone is testing it wrong or is setting it up to show what they want it to show.

 

I have most of the chips that we're talking about (only the i7-6700k is missing from my test bench because it is still in short supply and I'm not recommending it for anyone right now)

With all other hardware being equal in the testing machines, an increase of 4 threads, 200 MHz base, 400 MHz boosted, and 2MB of L3 cache is a rather large difference between chips. I would hazard a guess that difference also factors into why Intel charges more for the i7s.

 

If you can afford it, you might still not want to do it. The number of people who should actually spend the extra $100 for an i7 is a very small single-digit percentage, yet the number of people who DO buy it is far larger, because "oh, it is faster and thus it is better, right?"

 

A lot of people make uninformed buying decisions, or poorly informed at least, which is why cards like the GT710 exist:

 

http://www.anandtech.com/show/9993/nvidias-partners-rollout-geforce-gt-710-to-fight-integrated-graphics

 

So many people see a computer and say, "oh, that has NVidia GeForce in it, that's good right? I need one of those, that computer should do the trick", without understanding that a name means nothing.

 

For most people, the i7 fits into that category, only in reverse. "Oh, well the i7 is faster than the i5, right? And I want a fast computer, so I better get that". Sure, except does it make the difference you think it makes? Not likely. Sure, it doesn't hurt you, but you just spent $100 more than you had to, because you didn't understand what you were buying.

Gamers have always been the early adopters of CPUs, as evidenced by the great many benchmarks that show games in action. Alienware and Falcon Northwest don't advertise office machines. Tom's Hardware doesn't rush reviews for the latest mid-range CPU (they have them, but they don't rush them). All of those (now) companies got their start through gamers coming together around the bleeding-edge CPUs and motherboards. It's what created the market we have today. It's also why we have the standards we have today and aren't flooded by crappy hardware from S3, Diamond Multimedia, Number 9, and the like. Horrible reviews of their flagships caused those products to fail miserably and die off.

 

Speaking of horrible video cards: while it may be based on a 2013 chip, that card is better than what it's marketed against. The iGPU on the Celeron won't run Aero smoothly. Grandma won't be able to have shiny windows with see-through edges. When she wins at Sudoku, all the coins popping will cause her system to stutter. Bargain-basement computer makers can use these half-height cards to keep nice, quiet, fan-free systems and let granny have that neat little bubble screensaver.

 

 

So? It will over time, but what difference does that make to the person building or buying a computer today?

 

If you build a Skylake computer today, with few exceptions, you should do it with DDR3. The odds of actually upgrading the RAM in the system's lifetime are low; $60 will get you 16GB of DDR3, which should last you the life of the machine for most use cases, and it costs less than DDR4.

 

DDR4 being "new" doesn't mean much if the difference doesn't show up to the end user.

 

The memory should be paired with the processor. You wouldn't put RIMMs with an i5 because it just doesn't support them. Yes, the Skylake architecture supports both DDR3 and DDR4, but to reference your own article: "In full speed gaming benchmarks we have some situations that benefit from Skylake (GRID on high end graphics cards) and others that drop (Mordor on GTX 770), but the important aspect to consider is despite Skylake supporting both DDR3L and DDR4 memory, our results show that even with a fast DDR3 kit, a default-speed DDR4 set of memory is still worth upgrading to. On average there’s a small change in performance in favor of the DDR4 (especially in integrated graphics), but DDR4 confers benefits such as more memory per module and lower voltages to aid power consumption".

 

DDR4 isn't more expensive, is inherently faster, is designed to run with the architecture, uses less power, and is one less thing you'll have to change out if you upgrade over the next 2 generations (depending on the speed of DDR4 you go with).

DDR4: $89.99

DDR3: $89.99

 

Who cares what AMD does?

 

If I had to guess, you're young and haven't watched the market for decades, you think "new and shiny" must be best, and you think that having lots of upgrade paths for chips and motherboards is a nice thing.

 

In my very long experience going back to the 8088, I have never found that to be the case and I see nothing that will change it going forward. CPU/RAM/Motherboard almost always gets installed as a set and replaced as a set. Oh sure, you can find exceptions, but not many. I sure wouldn't base any buying decisions on that line of thinking.

 

Age aside, there are exactly two players in the CPU field: AMD and Intel. Who cares what AMD does? 24% of the CPU market. Even excluding Intel, the company that owns the other 76% and has already made the move to DDR4, 24% is a significant share of a market.

Steam Hardware Survey

 

Haswell uses an LGA 1150 socket. Skylake uses an LGA 1151 socket. The Kaby Lake and Cannonlake chips are projected to use LGA 1151 as well. All you have to do is change the processor and motherboard on a Skylake system, leaving everything else alone, until Cannonlake debuts in 2017. Stick with Haswell, and it's a full upgrade.

 

The memory form factor doesn't change very often. DDR lost out to DDR2 around 03-04. DDR3 took over around 2007. DDR3 is fully able to step down, so DDR3-2400 can run in a DDR3-800 slot just fine. Spending the money on something that isn't going to change very often, and has a lifetime warranty, isn't a "new and shiny upgrade path". It's researched planning that might be a bit cheaper than replacing perfectly good memory every time the motherboard and processor are upgraded.

 

The best video card on the market isn't out of date, the best video card on the market is... *drum roll please* The best video card on the market! :)

This is one purchase where you have to find the "one" you want and get it, not live on the cutting edge. Every board partner has their own tweaked version that outdoes everyone else's: EVGA's is overclocked x% more than MSI's, but ASUS's comes with a beanbag chair. There is the current reference card, and the many variations of it that are released every week. AMD plays the same game. The current version is out of date simply because the next version is waiting in the wings. Spend $1000 now or $1000 in 2 months; buying the latest and greatest video card isn't a lasting solution because you can't keep up unless you are very well funded.

 

As for Skylake, it is no more future-proof than a Haswell Refresh would be. For most users planning to keep a computer up to 5 years, either choice is fine right now; they aren't going to see or notice any difference. It also won't matter if they put in DDR3 or DDR4.

 

Being "newer" does not equal being "better".

 

 

By the time DirectX 12 matters, you're likely to have replaced whatever computer you buy today.

 

Having DX12 doesn't matter if you're playing a DX9 or DX11 game, it only matters if you're playing a DX12 game, and those won't become a thing for a long time.

 

Re: Skylake and future-proofing, see above.

If you plan on keeping a computer for 5 years, you should plan on upgrading it at least once. In my experience with gamers, two years in requires game tweaking; three years requires using the "low" settings in games, which can be unacceptable to some.

 

Being newer doesn't mean better unless being newer also means being an average 37% faster than Sandy Bridge in CPU benchmarks, 19% faster than the i7-4770K, and 5% faster than the Devil’s Canyon based i7-4790K. Then being newer might just mean being better.

 

DirectX 12 support is already out in the wild. Referring back to that Steam Hardware Survey: 28.18% have a DX12 GPU and Windows 10; 39.64% have a DX12 GPU on pre-Win10.

Skylake already supports it on the iGPU, so your desktop will be pretty.

https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support

http://schedule.gdconf.com/search-sessions/Directx+12+

http://gamingbolt.com/directx-12-improved-just-cause-3s-performance-on-pc

Then there's Vulkan. It was designed to be supported by just about everything, but it is also designed to distribute work across cores. More of the "stronger CPU = better performance" routine.

 

You are making the mistake of confusing what is possible with what is reasonable and customary.

 

The only people who should be using Z170 motherboards are those using K CPUs. If you are NOT using a K CPU then you shouldn't be on anything beyond H170. There is a decent price difference between the H and Z boards, at least from the same company and quality.

 

A decent number of people running non-K CPUs won't even be on H170, they'll be on H110, which is fine for a lot of users as well.

 

As for different RAM, I have yet to see a Z170 board that uses DDR3, but I see lots of H110 and H170 boards that use DDR3 (and plenty that use DDR4, of course).

 

So a non-K CPU on a H170 board with DDR3 is a whole lot less money than a K CPU on a Z170 board with DDR4. At least enough money to pay for a jump in GPU power.

 

You may wish to review that section; there was never any mention made of using a K CPU on a Z170 board (Z170 boards were never mentioned by me).


Further down in that same article is: "When we ratchet the CPUs back up to their regular, stock clockspeeds, we see a gap worth discussing. Overall at stock, the i7-6700K is an average 37% faster than Sandy Bridge in CPU benchmarks, 19% faster than the i7-4770K, and 5% faster than the Devil’s Canyon based i7-4790K."

5% faster for a chip that is 200 MHz slower is telling.

 

The 19% faster than the i7-4770k is mostly due to the clock speed difference at stock, not the IPC difference.

 

I would agree that Skylake is about 5% faster than Haswell at the same speed, but you won't notice 5% in normal use, only benchmarks will show the difference.

 

You are correct that the end user probably won't be able to tell the difference

 

Then we are in agreement. :)

 

I would further make the statement that the normal end user will not see any difference between an i5 and i7 desktop chip from the same generation, or even one apart.

 

i5 Haswell or Skylake, i7 Haswell or Skylake, they'll all perform about the same, give or take a few percentage points.

 

It is worth noting that I didn't say the i7 wasn't faster, I said that it didn't provide worthwhile benefit to gamers. :)

 

With all other hardware being equal in the testing machines, an increase of 4 threads, 200 MHz base, 400 MHz boosted, and 2MB of L3 cache is a rather large difference between chips. I would hazard a guess that difference also factors into why Intel charges more for the i7s.

 

That gap doesn't exist in the real world, or at least it shouldn't, which was my point about the i5-6600k only existing to be overclocked. If you're running it at stock, you're doing it wrong and should have gotten an i5-6500 instead.

 

The only real difference for most PC users (including gamers) between the i5 and i7 is the clockspeed, and overclocking fixes that. Note the stock speeds of the i5-6600 and i7-6700 (non-K versions) and compare them; you'll find them MUCH closer together than the K versions.
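
(For reference, going by Intel's published specs: the i5-6600 runs 3.3/3.9 GHz base/turbo against the i7-6700's 3.4/4.0 GHz, a gap of only 100 MHz either way, versus the i5-6600K's 3.5/3.9 GHz against the i7-6700K's 4.0/4.2 GHz.)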

 

DDR4 isn't more expensive, is inherently faster, is designed to run with the architecture, uses less power, and is one less thing you'll have to change out if you upgrade over the next 2 generations (depending on the speed of DDR4 you go with).

DDR4: $89.99

DDR3: $89.99

 

You picked out two poor examples to try and make a point, but your overall mistake is that you're buying into the marketing nonsense.

 

16GB of DDR3 doesn't cost $90; you simply found an example where it does, but you can buy it for $60. You can't buy 16GB of DDR4 at any speed for $60.

 

Age aside, there are exactly two players in the CPU field: AMD and Intel. Who cares what AMD does? 24% of the CPU market.

 

So? We're talking about the differences between the i5 and the i7, which has nothing to do with AMD.

 

Haswell uses an LGA 1150 socket. Skylake uses an LGA 1151 socket. The Kaby Lake and Cannonlake chips are projected to use LGA 1151 as well. All you have to do is change the processor and motherboard on a Skylake system, leaving everything else alone, until Cannonlake debuts in 2017. Stick with Haswell, and it's a full upgrade.

 

You won't do that, almost no one will. That is the false logic of "future proofing".

 

There won't be enough of a difference between Skylake and Cannonlake to make the change worth making. I've seen that promise many times over the decades; I have yet to actually find it worthwhile to upgrade the CPU in a computer. If the jump is enough to bother with, you need a new motherboard.

 

Every time.

 

The memory form factor doesn't change very often. DDR lost out to DDR2 around 03-04. DDR3 took over around 2007. DDR3 is fully able to step down, so DDR3-2400 can run in a DDR3-800 slot just fine. Spending the money on something that isn't going to change very often, and has a lifetime warranty, isn't a "new and shiny upgrade path".

 

DDR2 didn't start out at 800 MHz, and DDR3 didn't start out at 1,600 MHz. While the basic format has been around a while, there have been many changes.

 

DDR3 also has 1.35v, 1.5v, and 1.65v flavors, and many timings. It hasn't remotely been static since 2007. Back then, 1,066 MHz was the normal speed, with some machines using 1,333 MHz DDR3. My current machine (built in 2014) uses DDR3-2400, which wasn't even an option back then.

 

Unless you upgrade often, very little can be carried over for very long.

 

It's researched planning that might be a bit cheaper than replacing perfectly good memory every time the motherboard and processor are upgraded.

 

The same promises were made back in the 90s with "Pentium Overdrive". That wasn't worth doing then, CPU upgrades aren't worth doing now.

 

DirectX 12 support is already out in the wild. Referring back to that Steam Hardware Survey: 28.18% have a DX12 GPU and Windows 10; 39.64% have a DX12 GPU on pre-Win10.

 

So? DX10 support existed for years before games went to DX10. Seriously, if you think the above means anything, then you don't understand the subject at all.

 

The games have to be written to support it. Just having DX12 doesn't help; the game companies have to write the games to support it.

 

SWTOR doesn't run any faster on Windows 10 than it does on Windows 7 or 8 for exactly this reason: it is still using DX9 and will ALWAYS use DX9, unless BioWare rewrites the engine for DX12. But if they do that, then the engine can ONLY run on Windows 10.

 

That is not likely to happen within the reasonable lifetime of SWTOR.

 

----

 

TL;DR - In my opinion, you buy into hype and marketing way too much and don't deal with the reality of the business enough. New and shiny and big numbers and promises in the future seem to appeal to you, but all that really matters is right now, today.


/popcorn ...

 

Though ...

 

The only real difference for most PC users (including gamers) between the i5 and i7 is the clockspeed, and overclocking fixes that.

 

I tend to agree with that currently, though I think with the advent of DX12 and the (hopeful) uptake of it in the (again, hopeful) near future, this will change.

 

http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/22

 

Pretty much demonstrates to me that the i7 still isn't worth the money for current gaming.

 

However, if I were upgrading now, it would be the first time in a long time I've looked at the i7 as a reasonable future-proof option (again due to DX12).

 

Either way I'm waiting out most of the year to see what happens with Zen, new GPUs and DX12 before going down the upgrade path ... I bought a PS4 instead to give me more than enough "new to me" gaming experience (bloody 15 games I've bought so far, and I'm still trying to finish Fallout 4 on PC).


/popcorn ...

 

Yes, I was debating whether I should even reply to him that last time...

 

First, I completely stand by my posts and my conclusions based on the items discussed.

 

Second, I respect his right to disagree and have a different viewpoint. I'm not the sole arbiter of such things in the world, so at the end of the day, it is just two people who have viewpoints and opinions. I hope people read both posts and make up their own mind.


http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/22

 

Pretty much demonstrates to me that the i7 still isn't worth the money for current gaming.

 

Thank you for posting that; it's a good example of the point I was trying to make a few posts back. Maybe an i7 will matter for gaming some time in the future, but today, not really.

 

Or to put it another way, it isn't going to hurt you, but it may not be money well spent if you could instead put the extra money into the GPU, or RAM, or SSD, or perhaps your bank account. :)

 

However, if I were upgrading now, it would be the first time in a long time I've looked at the i7 as a reasonable future-proof option (again due to DX12).

 

Either way I'm waiting out most of the year to see what happens with Zen, new GPUs and DX12 before going down the upgrade path ... I bought a PS4 instead to give me more than enough "new to me" gaming experience (bloody 15 games I've bought so far, and I'm still trying to finish Fallout 4 on PC).

 

Always in motion is the future, difficult to see! :)

 

Based on my decades of building computers, selling computers, and watching the industry closely, I would submit that many things have been promised or hyped over the years; many have not come true, or, if they arrived, didn't have the impact promised.

 

After all, Pentium 4 was supposed to take us to 10 GHz, was it not? :) Pentium OverDrive was the solution to upgrading 486 machines, and so on.

 

I am perfectly willing to believe that in 5+ years, DX12 might well have an impact and be a game changer. I don't think it will be any sooner than that, due to the game development process and the need to keep games compatible with Windows 7 for the next 5 years.

 

Nothing you build today is going to be "future proof" for 5 years. Some of it might be useful, but let's be honest: 5 years ago, Sandy Bridge was the latest and greatest, and frankly a Skylake today is noticeably faster. Not by 5 or 10%, but by nearly 50% (and more depending on what you're doing).

 

So buying an i7 Sandy Bridge 5 years ago instead of an i5 Sandy Bridge, hoping it would matter one day, or future-proof for longer, rather misses the point. A Skylake i5 is faster than a Sandy Bridge i7 for 99% of the use cases of the average consumer. Buying more than you need today in the hope it matters in the future is a fool's errand in my opinion.

 

Buy what you need today, use it, enjoy it. Take the extra money you would have spent and put it into the bank, and upgrade a year sooner if you want; it'll make more of a difference in 3 years than buying the best of the best today.

 

---

 

Note: The above advice is for those on a budget who want to get the best bang for the buck. If you have plenty of money and want the best, by all means, buy it. Someone has to, so maybe it is you! :)

 

---

 

Example:

 

i5-6500 - $205

H170 ASUS Motherboard - $90

16GB DDR3 - $60

GTX 970 GPU - $320

Total - $675

 

vs.

 

i7-6700k - $380

Z170 ASUS motherboard - $155

16GB DDR4 - $85

GTX 980 TI GPU - $650

Total - $1270

 

The second machine isn't nearly twice as fast as the first, but it costs twice the money. You can afford to upgrade the first machine every 2.5 years for the same money as the second machine every 5 years.
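
(Roughly: two $675 builds over five years come to $1,350, about the same outlay as the one $1,270 build kept for the full five years.)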

 

I personally think the first machine makes the most sense for most people gaming today, the second is only for those who don't care so much about the cost and just want the best.

 

In 5 years, the first machine upgraded mid-life will be faster than the second machine after 5 years of no upgrades.


Ty for all the responses, guys. I've purchased these based on everyone's responses. Hopefully I don't mess up. :>

 

MB: MSI Z97 Gaming 5.

CPU: i7-4790k, 4.0-4.4GHz, 8MB cache, etc.

GPU: Gonna stick with my Gainward GTX 770 Phantom 4GB.

Mem: Currently running 8GB of dual-channel Corsair at 1333, with plans to get 16GB of better sticks for the new MB.

Ty very much for the help. Been reading a lot, and it's been very good feedback and advice.


 

I am perfectly willing to believe that in 5+ years, DX12 might well have an impact and be a game changer. I don't think it will be any sooner than that, due to the game development process and the need to keep games compatible with Windows 7 for the next 5 years.

 

Perhaps, perhaps not - we've been over this before in another topic, and whilst it's all more or less opinion on both sides of the coin at this stage, I was quite impressed with the number of games coming out to support DX12 in the near future.

 

Hitman, the new Deus Ex, and Quantum Break are games I personally will be getting regardless, but I will be very interested in their DX12 performance to dictate whether an upgrade is worth it or not (basically, if they don't get noticeable performance benefits under DX12 on hardware that benefits from it, then yeah, we're a long way off from seeing the improvements to justify it). Heck, they need to justify the leap to Windows 10 for me, as I'm happy with 7 if I'm not using DX12 at this stage.

 

 

So buying an i7 Sandy Bridge 5 years ago instead of an i5 Sandy Bridge, hoping it would matter one day, or future-proof for longer, rather misses the point.

 

I wouldn't have thought twice about an i7 back then though; there was nothing on the horizon to suggest it was going to get any use other than "well, developers might use the extra threads!" Now we DO have DX12, and it's a matter of uptake more than anything else ...

In saying that, the GPU is still the bottleneck 90% of the time, so one still has to wonder what direction they'll take with their engines if they do decide to go down this path (use the extra threads to do some of the less immediately required GPU work, perhaps? *shrug*) - because there is little use in going "wow, we now use 8 threads" if your game is still hindered by the GPU; they just won't bother with the multi-threading development if that's the case. It needs to be worth doing, and so far it hasn't really been.


Perhaps, perhaps not - we've been over this before in another topic, and whilst it's all more or less opinion on both sides of the coin at this stage, I was quite impressed with the number of games coming out to support DX12 in the near future.

 

:) True enough...

 

I remember that conversation now that you bring it up... I think my summary was "they might sell as DX12 compatible" as a feature checkbox, but that doesn't mean they actually use much of it.

 

When DX10 came out, a number of games had a "DX10 mode" to appear new and current, but if you look at them side-by-side with the DX9 version, you'll be hard pressed to tell the difference.

 

It was a marketing checkbox more than anything else. If the game still had to support DX9, then there was very little of the DX10 feature set they could use without redesigning the game or levels.

 

When games come out that are DX12 REQUIRED, then I think you'll start to see a difference. DX12 optional is more likely to be a marketing gimmick than a really big deal.

 

Also keep in mind that game companies want to sell to a broad audience, and right now only about half of the CPUs on Steam's hardware survey are quad core. You wouldn't design a game to require an i7 today, next week, or next year; that's just poor business planning.

 

---

 

Another thought... how many games today, right now, actually REQUIRE a quad core CPU? A few will use it, but compare the i3 with hyperthreading to the i5 without it: both present 4 cores to Windows (and games), but the i3 really doesn't have four cores.

 

Fallout 4 - Intel Core i3-3220

 

Fallout 4 - Multiple CPUs tested

http://www.gamersnexus.net/game-bench/2182-fallout-4-cpu-benchmark-huge-performance-difference

 

Does the i5 run Fallout 4 faster than the i3? Yes it does. Does it matter? Meh, that is a personal decision, but the i3 isn't that bad:

 

i3-4130 - 53 fps

i5-4690k - 77 fps

 

Yea, the i5 is faster, for twice the price. I wouldn't call 53 fps unplayable. :)

 

BTW, if you doubt the power of hyperthreading, consider this:

 

Pentium G3258 - 35 fps

i3-4130 - 53 fps

 

That is more or less the same chip as the i3, without hyperthreading (and slightly lower clockspeed).

 

Side note: If you want to see just how much AMD really sucks...

 

i3-4130 3.4GHz - 53 fps

FX-9590 - 4.7GHz - 58 fps

 

Seriously, 5 whole FPS faster while running a nearly 40% faster clock speed and 8 bloody cores, compared to 2+2 cores on the i3.


:) True enough...

 

I remember that conversation now that you bring it up... I think my summary was "they might sell as DX12 compatible" as a feature checkbox, but that doesn't mean they actually use much of it.

 

When DX10 came out, a number of games had a "DX10 mode" to appear new and current, but if you look at them side-by-side with the DX9 version, you'll be hard pressed to tell the difference.

 

It was a marketing checkbox more than anything else. If the game still had to support DX9, then there was very little of the DX10 feature set they could use without redesigning the game or levels.

 

When games come out that are DX12 REQUIRED, then I think you'll start to see a difference. DX12 optional is more likely to be a marketing gimmick than a really big deal.

 

Indeed, which is more the reason I'm taking a wait-and-see attitude.

 

In saying that, it doesn't have to be required for a "DX12 mode" to make better use of the architecture and yield significant performance increases ... that would be nice anyway, but I'm not buying into it, of course, until I see games I play/want to play demonstrate it.

 

 

Also keep in mind that game companies want to sell to a broad audience, and right now only about half of the CPUs on Steam's hardware survey are quad core. You wouldn't design a game to require an i7 today, next week, or next year; that's just poor business planning.

 

Definitely, and in a way I wonder if it holds back game development somewhat, by having to devote resources to more low-end hardware or backward compatibility.

Another thought... how many games today, right now, actually REQUIRE a quad core CPU? A few will use it, but compare the i3 with hyperthreading to the i5 without it: both present 4 cores to Windows (and games), but the i3 really doesn't have four cores.

 

None I'm aware of.

 

 

Seriously, 5 whole FPS faster while running a nearly 40% faster clock speed and 8 bloody cores, compared to 2+2 cores on the i3.

 

Zen or bust ... almost literally ;)


Indeed, which is more the reason I'm taking a wait-and-see attitude.

 

In saying that, it doesn't have to be required for a "DX12 mode" to make better use of the architecture and yield significant performance increases ... that would be nice anyway, but I'm not buying into it, of course, until I see games I play/want to play demonstrate it.

 

I will be the very first to shout from the rooftops if it turns out to be a "big deal". That would be wonderful, FREE performance!

 

All I'm suggesting (and anyone reading this should take it as a suggestion/consideration) is that spending real dollars today on the "maybe benefits" of tomorrow is a questionable decision.

 

You're putting out for sure money for maybe future stuff. Is it worth it? That is a personal decision, but I've been personally burned too many times on "promises of future use" that didn't pan out to go around upgrading or overbuying when it isn't needed.

 

Zen or bust ... almost literally ;)

 

^ Quote for truth... AMD has one last chance to get this right, or they may well be exiting the desktop CPU business after Zen if it isn't all it's cracked up to be. They have no more money and nothing more to sell to buy time; the money from spinning off GF, settling with Intel, and selling their buildings and leasing them back is gone. They have nothing left to barter with; it is Zen or bust. :)

 

I've posted about it before: Intel could buy AMD outright using couch-cushion money. I suspect Intel might end up making an investment in AMD just to keep them around, for anti-trust reasons if nothing else. Or Intel might end up letting someone buy AMD and keep the x86 license, for the same reasons.

 

If Intel has any brains, they'll want competition, or at least the appearance of it.

 

Do you remember when Microsoft invested $150 million in Apple back in the 90s after Steve Jobs came back? Same thing. :)


http://www.anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta/5

 

Just released today, an update on DX12...

 

First, keep in mind that this is still beta software. Second, it is a very, very early game that appears to be counting hard on the whole "DX12, look!" factor.

 

Beyond that, on the AMD cards DX12 appears to help, but on the NVidia cards it does not. A more accurate way to look at it is that on the NVidia cards it is about the same speed either way: fast. On AMD it is "slower" on DX11, but DX12 isn't really faster than the NVidia cards.

 

In other words, I'm terribly underwhelmed so far. But time will tell.


http://www.anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta/5

 

Just released today, an update on DX12...

 

First, keep in mind that this is still beta software. Second, it is a very, very early game that appears to be counting hard on the whole "DX12, look!" factor.

 

Beyond that, on the AMD cards DX12 appears to help, but on the NVidia cards it does not. A more accurate way to look at it is that on the NVidia cards it is about the same speed either way: fast. On AMD it is "slower" on DX11, but DX12 isn't really faster than the NVidia cards.

 

In other words, I'm terribly underwhelmed so far. But time will tell.

 

True, but the thing I was more interested in with DX12 is how it utilises CPU cores/performance more than GPU performance at this stage. Again though, as long as the GPU is the bottleneck, I'm not convinced how this is going to help without some real rethinking of how they create their engines (spread some load usually reserved for the GPU to the CPU somehow? I'm not a game dev, so I'm not sure how these sorts of things could be accomplished, if at all :) ).


Beyond that, on the AMD cards DX12 appears to help, but on the NVidia cards it does not. A more accurate way to look at it is that on the NVidia cards it is about the same speed either way: fast. On AMD it is "slower" on DX11, but DX12 isn't really faster than the NVidia cards.

Nvidia's drivers have always been better, so it's not hard to believe that DX12 offers less headroom for improvement for them.

 

True, but the thing I was more interested in with DX12 is how it utilises CPU cores/performance more than GPU performance at this stage. Again though, as long as the GPU is the bottleneck, I'm not convinced how this is going to help without some real rethinking of how they create their engines (spread some load usually reserved for the GPU to the CPU somehow? I'm not a game dev, so I'm not sure how these sorts of things could be accomplished, if at all :) ).

Current 3D graphics APIs - DX9, DX10, DX11 and OpenGL - only allow feeding commands to the GPU from a single thread, and thus a single core. The motivation for DX12 (and Vulkan) is that GPUs have become so fast that a single thread is no longer sufficient for keeping them busy. These new APIs both lower the overhead at the CPU and allow multi-threaded command dispatch, thus giving the CPU more room to work with. You are right that if a game engine already manages to keep the GPU fully tasked, moving to DX12 is unlikely to gain any significant performance benefits.
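
To make that concrete, here is a toy sketch of the idea; illustrative Python, not real D3D12/Vulkan calls, and the names (record_commands, the mesh list) are made up for the example. Each worker records its own command list, and only the final queue submission is serialized:

```python
# Toy model of DX12/Vulkan-style multi-threaded command recording.
# Older APIs (DX9/10/11, classic OpenGL) funnel all submission through
# one thread; here each worker builds its own command list in parallel.
from concurrent.futures import ThreadPoolExecutor

def record_commands(objects):
    # Each worker thread records draw commands independently.
    return [f"draw({obj})" for obj in objects]

scene = [f"mesh_{i}" for i in range(8000)]
chunks = [scene[i::4] for i in range(4)]  # split the scene across 4 workers

with ThreadPoolExecutor(max_workers=4) as pool:
    command_lists = list(pool.map(record_commands, chunks))

# Only this final submit is serialized, and it is cheap compared to recording.
gpu_queue = [cmd for cl in command_lists for cmd in cl]
print(f"submitted {len(gpu_queue)} commands from {len(command_lists)} lists")
```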


Xbox One. It shouldn't be a surprise to anyone that AMD runs better under DX12.

 

Yea, but the real question is why does it run so poorly under DX11?

 

Fury X isn't that much faster than GTX 980 TI when both are running DX12. What is interesting is how GTX 980 TI holds basically the same level of performance under DX11 while Fury X loses so much.

 

Frankly, we can't draw much if anything from this, it is a beta game that hasn't been released yet, and drivers from both red and green teams will get better over the months. I just think it is worth noting that DX12 doesn't give either side a boost, so much as it makes AMD competitive with NVidia... in this one case.

