Intel will offer a customizable chip to keep data center clients happy

To meet the needs of webscale and select enterprise customers, Intel will build a customizable, programmable CPU that combines an Intel processor with a programmable chip from an undisclosed partner, Diane Bryant, SVP and general manager of Intel's data center group, plans to announce onstage at the Gigaom Structure conference on Wednesday. Bryant said the customizable CPU is already in development and will be used in production environments next year.

"We have been engaging directly with large-scale service providers to give them exactly what they need," said Bryant.

The chip would combine a Xeon processor with a programmable chip known as an FPGA, or field-programmable gate array. Instead of simply sitting near the CPU, as an FPGA or other accelerator chip usually does, the two would be coherently linked, sharing access to the memory available to the CPU. That coherency is essential for making the combined chip faster and for avoiding the bottlenecks associated with other accelerators, such as graphics processors or an FPGA that isn't coherently linked.
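To make that distinction concrete, here is a toy sketch in Python. The PcieAccelerator and CoherentFpga classes and their methods are hypothetical illustrations, not any real vendor API; the point is the data movement, not the computation.

```python
# Hypothetical sketch: contrasts a non-coherent accelerator, which must copy
# data to and from its own private memory, with a coherent FPGA that works
# directly on the memory the CPU already sees.

class PcieAccelerator:
    """Non-coherent accelerator with its own private device memory."""
    def __init__(self):
        self.device_buf = None

    def copy_to_device(self, data):
        self.device_buf = list(data)       # explicit transfer over the bus

    def run_kernel(self):
        self.device_buf = [x * 2 for x in self.device_buf]

    def copy_from_device(self):
        return self.device_buf             # explicit transfer back

class CoherentFpga:
    """Coherent FPGA: shares access to the CPU's memory."""
    def run_kernel(self, shared_buf):
        for i, x in enumerate(shared_buf):
            shared_buf[i] = x * 2          # results visible to the CPU at once

data = [1, 2, 3]

accel = PcieAccelerator()
accel.copy_to_device(data)                 # bottleneck no. 1: copy in
accel.run_kernel()
result = accel.copy_from_device()          # bottleneck no. 2: copy out

fpga = CoherentFpga()
fpga.run_kernel(data)                      # no copies; data updated in place
```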

The primary companies making FPGAs are Xilinx, Altera and Lattice Semiconductor, but Bryant didn't say which firm Intel was working with, only that Intel wasn't designing the FPGAs itself. However, Intel will test and manufacture the entire chip for customers.

If this sounds familiar, it's because just this week Microsoft said it was using a similar Xeon-and-FPGA option in its data centers to process search queries at a faster rate. When asked if Microsoft was using Intel's new chip, Bryant said she couldn't comment. However, this type of customizable silicon would be beneficial across a variety of use cases, from running search algorithms to compressing genetic data.

The benefit of using FPGAs, which tend to be costly, is that they can be programmed to run a specific set of algorithms at peak efficiency and later re-programmed as the algorithms or the workload change. As noted in Monday's post about Microsoft's efforts, that greater efficiency and agility is something many webscale clients have been willing to pay for, even looking to alternative processor architectures in order to gain it.

So in many ways, Intel's decision to bring on an FPGA is a signal that the x86 architecture needed some goosing, especially as alternative architectures gain interest from the Facebooks and Googles of the world, which are Intel's top clients. Among those alternatives, the ARM architecture, which underpins the brains of most cell phones, has been seen as the one most likely to give Intel a run for its money.

"I'm not naive to the fact to that people are looking at a second source," said Bryant. "With a new tech option I would absolutely expect that customers will be evaluating that solution."

Chipmakers from AMD to Marvell have designed server chips using the ARM architecture, praising its modularity and the ability to create what amounts to custom chips aimed at specific compute jobs. But with this FPGA option, Intel may have just taken that modularity advantage away. It's still too soon to tell, as ARM chips are only now making their way into servers this year, but I'll be intrigued to see whether Intel's FPGA strategy turns out to be a stopgap measure to keep big clients happy and away from ARM, or a truly new philosophy and approach for Intel.

Bryant also plans to share several other pieces of news, including the anticipated launch next month of a new Hadoop distribution that will combine Intel's code with that of Cloudera, a company Intel recently put millions into. The new release will merge the Rhino and Sentry open source security projects as well as stabilize Spark on YARN. Spark is the open source data-processing framework that is becoming very popular because it's much faster than traditional Hadoop MapReduce (and therefore better for certain applications such as machine learning and interactive SQL queries) and easier to program. YARN is the resource-management layer, now standard in Apache Hadoop, that lets a single cluster run multiple types of workloads, such as Spark, MapReduce and the Storm stream-processing engine, simultaneously.
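For context on what running Spark on YARN means in practice, here is a minimal PySpark word count; the HDFS paths are placeholders. Nothing in the script is YARN-specific, which is the point: YARN schedules it on the same cluster that runs MapReduce and other workloads.

```python
# Minimal PySpark word count. The paths are placeholders; submit with
#   spark-submit --master yarn-client wordcount.py
# and YARN schedules it alongside MapReduce and other workloads.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("wordcount")
sc = SparkContext(conf=conf)

counts = (sc.textFile("hdfs:///data/input")        # placeholder input path
            .flatMap(lambda line: line.split())
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))

counts.saveAsTextFile("hdfs:///data/wordcounts")   # placeholder output path
sc.stop()
```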


Photo by Jakub Mosur
