IBM unveils processor with on-chip AI acceleration

August 25, 2021 // By Jean-Pierre Joosting
The chip design enables deep learning inference on high-value transactions and is intended to greatly improve the ability to intercept fraud, among other use cases.

At the annual Hot Chips conference, IBM unveiled details of the upcoming IBM Telum Processor, designed to bring deep learning inference to enterprise workloads to help address fraud in real time. Telum is IBM's first processor to contain on-chip acceleration for AI inferencing while a transaction is taking place. Three years in development, the new on-chip hardware acceleration is designed to help customers achieve business insights at scale across banking, finance, trading, and insurance applications as well as customer interactions. A Telum-based system is planned for the first half of 2022.
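IBM's announcement does not describe the software interface to the accelerator, but the pattern it targets (scoring a transaction synchronously, inside the transaction path, rather than handing it off to a separate AI platform) can be sketched in a few lines of Python. Everything below, from the Transaction fields to the fraud_score stand-in model and the decline threshold, is a hypothetical placeholder for illustration, not IBM's API.

import math
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float          # transaction value
    merchant_risk: float   # hypothetical pre-computed merchant risk feature
    velocity: float        # hypothetical rate of recent card activity

def fraud_score(tx: Transaction) -> float:
    # Stand-in for a trained deep learning model; on Telum, this inference
    # step is what the on-chip accelerator would execute.
    z = 0.8 * tx.merchant_risk + 0.5 * tx.velocity + 0.001 * tx.amount
    return 1.0 / (1.0 + math.exp(-z))  # squash to a score in [0, 1]

def process(tx: Transaction) -> str:
    # The score is computed synchronously, before the transaction commits,
    # instead of being sent to an off-platform scoring service afterwards.
    return "declined: suspected fraud" if fraud_score(tx) > 0.9 else "approved"

print(process(Transaction(amount=250.0, merchant_risk=0.2, velocity=1.0)))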

According to recent Morning Consult research commissioned by IBM, 90% of respondents said that being able to build and run AI projects wherever their data resides is important. IBM Telum is designed to let applications run efficiently where the data resides, overcoming a limitation of traditional enterprise AI approaches, which tend to require significant memory and data movement to handle inferencing. With Telum, the accelerator's close proximity to mission-critical data and applications means that enterprises can conduct high-volume inferencing on time-sensitive transactions without invoking off-platform AI, which can hurt performance. Clients can also build and train AI models off-platform, then deploy them to a Telum-based IBM system for inference.
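As an illustration of that split, here is a minimal sketch assuming an ONNX-based model interchange, which this announcement does not specify: a toy fraud classifier is trained off-platform with scikit-learn, exported with skl2onnx, and then loaded with onnxruntime for inference where the transaction data lives. The synthetic features and labels are placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

# --- Off-platform: train a toy fraud classifier on synthetic features ---
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4)).astype(np.float32)
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)  # synthetic "fraud" label

model = LogisticRegression().fit(X, y)

# Export the trained model to a portable format for deployment.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 4]))]
)

# --- On-platform: load the model and score transactions near the data ---
session = ort.InferenceSession(onnx_model.SerializeToString())
batch = rng.normal(size=(8, 4)).astype(np.float32)
labels = session.run(None, {"input": batch})[0]
print(labels)  # predicted class per transaction in the batch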

Picture: A collection of IBM Telum 7nm processors on a silicon wafer. Telum is IBM’s first processor that contains on-chip acceleration for AI workloads. Credit: Connie Zhou for IBM.
