The solutions are designed to enable customers to accelerate the development and use of artificial intelligence (AI) and analytics workloads running in data center, network, and intelligent-edge environments. The new 3rd Gen Xeon Scalable processors, says the company, are the industry’s first mainstream server processors with built-in bfloat16 support, making AI inference and training more widely deployable on general-purpose CPUs for applications that include image classification, recommendation engines, speech recognition, and language modeling.
"The ability to rapidly deploy AI and data analytics is essential for today’s businesses," says Lisa Spelman, Intel corporate vice president and general manager, Xeon and Memory Group. "We remain committed to enhancing built-in AI acceleration and software optimizations within the processor that powers the world’s data center and edge solutions, as well as delivering an unmatched silicon foundation to unleash insight from data.”
Bfloat16 is a compact numeric format that uses half the bits of today’s FP32 format yet achieves comparable model accuracy with minimal, if any, software changes. The addition of bfloat16 support, says the company, accelerates both AI training and inference performance on the CPU.
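The relationship between the two formats can be sketched in plain Python: bfloat16 keeps FP32’s sign bit and 8-bit exponent (so the dynamic range is unchanged) and shortens the 23-bit mantissa to 7 bits, which is why it is simply the top half of an FP32 bit pattern. The sketch below uses truncation for clarity; real hardware typically rounds to nearest-even.

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Convert an FP32 value to bfloat16 by keeping its top 16 bits.
    (Truncation shown for clarity; hardware usually rounds to nearest-even.)"""
    fp32_bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return fp32_bits >> 16

def bf16_bits_to_fp32(bits: int) -> float:
    """Widen a bfloat16 bit pattern back to FP32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", bits << 16))[0]

# bfloat16 keeps FP32's sign bit and 8 exponent bits, so the representable
# range is the same; only mantissa precision drops from 23 bits to 7.
x = 3.14159
bf = bf16_bits_to_fp32(fp32_to_bf16_bits(x))  # 3.140625 -- small precision loss
```

Because the exponent width matches FP32, existing FP32 models usually run in bfloat16 without rescaling, which is what makes the “minimal software changes” claim plausible.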
Intel-optimized distributions of leading deep learning frameworks (including TensorFlow and PyTorch) support bfloat16 and are available through the company's AI Analytics toolkit. Intel also delivers bfloat16 optimizations in its OpenVINO toolkit and the ONNX Runtime environment to ease inference deployments.
The 3rd Gen Intel Xeon Scalable processors, code-named "Cooper Lake," evolve the company's four- and eight-socket processor offering. The processors are designed for deep learning, virtual machine (VM) density, in-memory databases, mission-critical applications, and analytics-intensive workloads. Customers refreshing aging infrastructure, says the company, can expect an average estimated gain of 1.9 times on popular workloads and up to 2.2 times more VMs compared with five-year-old four-socket platform equivalents.
The company also announced the following additions to its hardware and software AI portfolio:
- New Intel Optane persistent memory: As part of the 3rd