Intel's new graphics chips - OpenGL 3.0 support

http://www.gamasutra.com/view/feature/6168/sponsored_feature_first_look_at_.php

Read the comments…

If this is old news, sorry for posting :stuck_out_tongue:

There’s a surprise. It means nothing until we see the perf figures though, and given Intel’s past reputation (especially with OpenGL) I definitely wouldn’t be betting the farm on it just yet.

Is that some kind of low-budget spin-off derived from the hyped failure they called Larrabee?

Is that some kind of low-budget spin-off derived from the hyped failure they called Larrabee?

No; this is Sandy Bridge: their on-CPU GPU, much like Llano is for AMD. Unlike their current integrated GPU, this one actually has some performance. It isn’t performance-equivalent to a $100 GPU, but it’s better than many integrated GPUs.

You have a lot of expectations, Alfonse!
$100 GPU == Radeon 5670 / Radeon 5750 || GeForce GT 430 / GeForce GTS 450

Even if Intel’s chips have made some performance progress, I don’t see them getting close to those cards. And that’s without even talking about the drivers…

Even if Intel’s chips have made some performance progress, I don’t see them getting close to those cards.

That’s why I said, “It isn’t performance equivalent to a $100 GPU”.

I’m really having a hard time seeing how a general purpose CPU can possibly be competitive with 15 years of accumulated knowledge of how to build high performing dedicated GPUs. I think it might be competitive with hardware from a generation or two back (which will be 2-4 generations back by the time it’s publicly available) but the dedicated GPU should still be the preferred option. I’d love to be proven wrong though, and I’d especially love it if Intel really did have something special up their sleeves that would shake up the established players a little (not least because I have this perverse fondness for Intel parts; they’re actually decent enough if you code to their strengths and don’t try anything too fancy).

I’m really having a hard time seeing how a general purpose CPU can possibly be competitive with 15 years of accumulated knowledge of how to build high performing dedicated GPUs.

It’s not a “general purpose CPU.” It’s a GPU on the same chip as the CPU. It’s really not that hard to understand: they simply swap out one of the CPU cores for an actual GPU.

the dedicated GPU should still be the preferred option.

Preferred for what? Crysis?

The days of rampant explosion in graphics development are over. It simply isn’t cost-effective for games. I do a non-trivial portion of my PC gaming on an embedded HD 3300, which has only 80 shader processors. I’d love to be able to replace it with a Llano that has 400 SPs; I could play more games on my low-powered machine.

I don’t know; I’ve heard that one before and it’s always been wrong. In the past it was “once you go 3D there’s nowhere left to go” or “once you go to hardware acceleration there’s nowhere left to go”, but it’s always turned out that there actually were plenty of places left to go. Bill Gates’ (in)famous “640k ought to be enough for anyone” is even symptomatic of the same kind of thinking.

Admittedly with a fully programmable device the situation is a little different these days, but at the same time one should not assume that the current triangle-based vertex/fragment rasterisation paradigm is going to continue unto eternity.

I can’t predict the future and I’m not going to pretend to even try, but I would be cautious about declaring anything “over”. If the past has taught us nothing else it’s that there is always room for further development in completely unexpected areas.

I don’t know; I’ve heard that one before and it’s always been wrong. In the past it was “once you go 3D there’s nowhere left to go” or “once you go to hardware acceleration there’s nowhere left to go”, but it’s always turned out that there actually were plenty of places left to go. Bill Gates’ (in)famous “640k ought to be enough for anyone” is even symptomatic of the same kind of thinking.

First, Bill Gates never actually said that, so stop spreading apocrypha/urban legends.

Second, I never said that there was nothing more to be gained in graphics hardware. I said that the “rampant explosion” was over.

Much like with the incoming primacy of small devices (tablets, mobile phones, etc.), making things smaller and more efficient is becoming more important than making them faster. The push for the absolute fastest GPU no matter what simply isn’t there anymore.

Sound chips went through the same thing. Time once was that sound chips were add-in boards (AIB) like GPUs. Then, the first on-motherboard embedded sound chips came out. These weren’t great, but they did steal the lowest of the low-end from sound chip makers. Time passed, and on-motherboard chips got better and better. Now, it is the AIB-style sound chip that is the anomaly; rare is the motherboard that lacks a quality, 5.1 or 7.1-capable sound chip.

Something similar has started with GPUs. As I said, I do quite a bit of gaming on a motherboard-embedded HD 3300. An on-CPU GPU with more shaders than the 3300 could do even better. No, it wouldn’t be as good as a $150 or $200 GPU. But it does make the low-end AIBs obsolete.

Give it time. On-CPU GPUs will eventually become a standard feature. After a while, they’ll start eating their way into the mid-grade GPUs. Game developers and other graphics software developers will start developing their games specifically for on-CPU GPUs, for that level of performance. Given time, an AIB GPU will be a luxury.

For sure, features will be added to GPUs. But it will not be at the pace that we’ve seen in the last decade.

Same here. And I lean the same direction.

Current high-end dedicated GPUs have ~6 times the memory bandwidth of current high-end CPUs (Fatahalian), and existing GPU memory bandwidth is becoming a bottleneck more often (deferred techniques, etc.). If you toss a GPU on the CPU’s memory bus, something’s gotta give.
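
A rough back-of-envelope to put a number on that ratio (illustrative parts and figures of my own, not taken from Fatahalian’s material): a Radeon HD 5870’s 256-bit GDDR5 bus at 4.8 Gbps per pin, versus a desktop CPU on dual-channel DDR3-1333:

$$
\frac{BW_\text{GPU}}{BW_\text{CPU}} \approx \frac{(256/8)\,\text{B} \times 4.8\,\text{GT/s}}{2 \times (64/8)\,\text{B} \times 1.333\,\text{GT/s}} = \frac{153.6\ \text{GB/s}}{21.3\ \text{GB/s}} \approx 7
$$

Same ballpark as the ~6x figure, and on a Fusion/Sandy Bridge part that smaller CPU-side number gets shared between the CPU cores and the GPU.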

Then there’s the cut in GPU core count…

Gut says the initial target market for Fusion/Sandy Bridge is laptop/embedded apps, not high-end 3D graphics. But I look forward to seeing the benchmarks. I too would love to be proven wrong.

What this arch will hopefully enable is faster braided parallelism (less latency when switching between task-parallel and data-parallel work). Should be interesting.

Current high-end dedicated GPUs have ~6 times the memory bandwidth of current high-end CPUs

Even today, when on-motherboard audio chips are ubiquitous, you can still find add-in board sound hardware. These are high-end chips that are generally for professional or audiophile purposes. Normal people don’t buy them anymore; even most gamers are happy with the on-motherboard sound chip.

Embedded chips don’t kill the high-end. They just make it specialized. Niche. They strangle it by taking away the budget for it. This hits graphics even harder, because GPU design is expensive.

If on-CPU GPUs are eventually capable of challenging the $100-$150 range GPUs, then a real problem emerges for GPU development (well, for NVIDIA’s GPU development at least, since AMD and Intel make the CPUs that have GPUs on them). That $400 card is effectively subsidized by the lower-margin, higher-volume cards. Gutting the entire sub-$100 market and the lower half of the $100-$200 market would result in a substantial loss in revenue. These things would just come “free” with CPUs.

If that happens, game developers will start to experience substantial pressure to develop specifically for these kinds of GPUs. Maybe deferred rendering is no longer advantageous performance-wise, so it is abandoned. Maybe people use tessellation or GPU-generated vertex data more to keep the vertex transfer bandwidth down. Maybe they substitute GPU computations for textures to mitigate the bandwidth issue. Better lighting models, etc. Megatexture-style things start becoming more and more interesting, as the GPU is directly using CPU memory.
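
To make the “GPU-generated vertex data” point concrete, here’s a minimal sketch (my own illustration, not anything from the article): with GL 3.0-level GLSL you can synthesize a grid’s positions entirely from gl_VertexID, so no vertex buffer has to cross the shared memory bus at all. It assumes an OpenGL 3.0 context and the usual shader compile/link boilerplate exists elsewhere; the uniform names are made up.

```c
/* Sketch: procedural grid positions from gl_VertexID (GLSL 1.30 / OpenGL 3.0).
 * No vertex attribute data is uploaded or fetched for this draw. */
static const char *grid_vs =
    "#version 130\n"
    "uniform int  grid_width;   /* vertices per row (hypothetical name) */\n"
    "uniform vec2 cell_size;    /* spacing in clip-space units          */\n"
    "void main() {\n"
    "    int x = gl_VertexID % grid_width;   /* column index */\n"
    "    int y = gl_VertexID / grid_width;   /* row index    */\n"
    "    gl_Position = vec4(vec2(x, y) * cell_size - 1.0, 0.0, 1.0);\n"
    "}\n";

/* Usage, with no buffers bound for vertex attributes:
 *     glUseProgram(grid_program);
 *     glUniform1i(loc_grid_width, width);
 *     glUniform2f(loc_cell_size, 2.0f / (width - 1), 2.0f / (height - 1));
 *     glDrawArrays(GL_POINTS, 0, width * height);
 */
```

The same idea extends to heightfield terrain or tessellated patches: the per-frame vertex bandwidth cost shrinks to a handful of uniforms.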

It’s all about playing to a platform’s strengths.

This kind of thing could easily kill high-end GPUs. Oh sure, some people would get them. But they would be a gamer luxury item, like current $400 GPUs.

Gut says the initial target market for Fusion/Sandy Bridge is laptop/embedded apps, not high-end 3D graphics.

There’s a lot of performance between laptops and high-end 3D. I expect the first version of Fusion to make $50 GPUs effectively obsolete.

I wouldn’t count NVIDIA out. They used to be big in the chipset market, and I wouldn’t be surprised if they tried to enter the processor market (I think they already have with mobile phones) and became a new competitor to AMD and Intel… But that’s just a cool fantasy in my opinion.

No kidding!

Which is why CPU-based GPUs and dedicated GPUs will both continue as separate markets for a long, long time.

What might happen is that the high-end of the GPU market becomes the main (perhaps only) focus of GPU vendors, since their low-end is largely co-opted.
