
Apparently this was the big thing that held Larrabee back as a GPU. The hardware was capable, but the drivers never reached the point where they could run existing OpenGL, Direct3D, etc. code well.


Is there any strong evidence for the claim that Larrabee's hardware had the potential to be a capable GPU, and that it was held back only by software?

Despite being increasingly programmable, GPUs still have very fine-tuned hardware for graphics, and this goes deeper than just adding some texture mapping units here or there.

A good example of this is that both Nvidia and AMD keep adding new hardware modes to streamline graphics primitive processing, with varying degrees of success (see mesh shaders and primitive shaders).

To me, this signals that the free-for-all software approach championed by Larrabee is simply not efficient enough in a competitive environment where GPUs are declared winners or losers based on benchmark differences of just a few percent.
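To make the concern concrete: a software pipeline has to spend instructions on work that a fixed-function rasterizer does for free. A minimal scalar sketch of the classic edge-function inner loop (the names and the flat fill are illustrative; Larrabee's actual pipeline was binned, tiled and vectorized):

    #include <algorithm>
    #include <cstdint>

    // Signed area test: which side of edge a->b does point p lie on?
    static int64_t edge(int ax, int ay, int bx, int by, int px, int py) {
        return int64_t(bx - ax) * (py - ay) - int64_t(by - ay) * (px - ax);
    }

    // Flat-fill one triangle (consistent winding assumed) into a W*H buffer.
    // Three edge tests per candidate pixel -- work a GPU does in dedicated
    // silicon; real software rasterizers at least step the edges incrementally.
    void rasterize(uint32_t* fb, int W, int H,
                   int x0, int y0, int x1, int y1, int x2, int y2,
                   uint32_t color) {
        int minx = std::max(0, std::min({x0, x1, x2}));
        int miny = std::max(0, std::min({y0, y1, y2}));
        int maxx = std::min(W - 1, std::max({x0, x1, x2}));
        int maxy = std::min(H - 1, std::max({y0, y1, y2}));
        for (int y = miny; y <= maxy; ++y)
            for (int x = minx; x <= maxx; ++x)
                if (edge(x0, y0, x1, y1, x, y) >= 0 &&
                    edge(x1, y1, x2, y2, x, y) >= 0 &&
                    edge(x2, y2, x0, y0, x, y) >= 0)
                    fb[y * W + x] = color;
    }

Every instruction spent here competes with shading work, which is why a few percent of fixed-function advantage is so hard to claw back in benchmarks.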


Tom Forsyth's write-up makes mostly reasonable claims, though even he notes that it wasn't competitive with the high end at the time.

"So let's talk about the elephant in the room - graphics. Yes, at that we did fail. And we failed mainly for reasons of time and politics. And even then we didn't fail by nearly as much as people think. Because we were never allowed to ship it, people just saw a giant crater, but in fact Larrabee did run graphics, and it ran it surprisingly well. Larrabee emulated a fully DirectX11 and OpenGL4.x compliant graphics card - by which I mean it was a PCIe card, you plugged it into your machine, you plugged the monitor into the back, you installed the standard Windows driver, and... it was a graphics card. There was no other graphics cards in the system. It had the full DX11 feature set, and there were over 300 titles running perfectly - you download the game from Steam and they Just Work - they totally think it's a graphics card! But it's still actually running FreeBSD on that card, and under FreeBSD it's just running an x86 program called DirectXGfx (248 threads of it). And it shares a file system with the host and you can telnet into it and give it other work to do and steal cores from your own graphics system - it was mind-bending! And because it was software, it could evolve - Larrabee was the first fully DirectX11-compatible card Intel had, because unlike Gen we didn't have to make a new chip when Microsoft released a new spec. It was also the fastest graphics card Intel had - possibly still is. Of course that's a totally unfair comparison because Gen (the integrated Intel gfx processor) has far less power and area budget. But that should still tell you that Larrabee ran graphics at perfectly respectable speeds. I got very good at ~Dirt3 on Larrabee.

Of course, this was just the very first properly working chip (KNF had all sorts of problems, so KNC was the first shippable one) and the software was very young. No, it wasn't competitive with the fastest GPUs on the market at the time, unless you chose the workload very carefully (it was excellent at running Compute Shaders). If we'd had more time to tune the software, it would have got a lot closer. And the next rev of the chip would have closed the gap further. It would have been a very strong chip in the high-end visualization world, where tiny triangles, super-short lines and massive data sets are the main workloads - all things Larrabee was great at. But we never got the time or the political will to get there, and so the graphics side was very publicly cancelled."

http://tomforsyth1000.github.io/blog.wiki.html#%5B%5BWhy%20d...
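For a sense of what "just an x86 program" doing graphics means at the instruction level: Larrabee's LRBni vector ISA (16-wide floats plus per-lane write masks) was a direct ancestor of today's AVX-512, so a pixel-shading inner loop would have looked roughly like this sketch in modern intrinsics (the lighting math is a made-up stand-in, not DirectXGfx's actual code):

    #include <immintrin.h>
    #include <cstdint>

    // Shade 16 pixels at once: out = albedo * max(NdotL, 0) + ambient.
    // The coverage mask writes only the lanes the rasterizer actually hit,
    // which is how one vector unit handles partially covered pixel groups.
    void shade16(const float* ndotl, const float* albedo, float ambient,
                 uint16_t coverage, float* out) {
        __m512 n   = _mm512_loadu_ps(ndotl);
        __m512 a   = _mm512_loadu_ps(albedo);
        __m512 lit = _mm512_max_ps(n, _mm512_setzero_ps());       // clamp N.L
        __m512 c   = _mm512_fmadd_ps(a, lit, _mm512_set1_ps(ambient));
        _mm512_mask_storeu_ps(out, (__mmask16)coverage, c);       // masked write
    }

Because the whole pipeline is ordinary threads running ordinary vector code, sharing the cores with telnet sessions or other work, as described above, falls out naturally.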


Thanks for that!

> it was excellent at running Compute Shaders

On one hand, that statement supports my belief that there is more to graphics than just a lot of raw compute flops.

> It would have been a very strong chip in the high-end visualization world, where tiny triangles, super-short lines and massive data sets are the main workloads - all things Larrabee was great at.

But this goes the opposite way. :-)

Because you'd think that a lot of small primitives would make it harder to deploy those raw compute flops.
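The amortization intuition is easy to put in (invented) numbers: if each triangle pays a fixed setup cost and each covered pixel pays a shading cost, tiny triangles spend most of their cycles on setup. The constants below are purely illustrative:

    #include <cstdio>
    #include <initializer_list>

    int main() {
        const double setup_cycles = 40.0;  // per-triangle setup (illustrative)
        const double pixel_cycles = 4.0;   // per-pixel shading (illustrative)
        for (double pixels : {1.0, 4.0, 16.0, 256.0}) {
            double useful = pixels * pixel_cycles;
            double total  = setup_cycles + useful;
            std::printf("%6.0f px/tri -> %4.1f%% of time in pixel work\n",
                        pixels, 100.0 * useful / total);
        }
    }

One plausible reading of Forsyth's claim is that a software pipeline suffers this overhead too, but can rebalance cores between setup and shading instead of stalling a fixed-function unit, so tiny triangles hurt it relatively less.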


And it never offered OpenCL, IIRC.



