At the annual Hot Chips conference, IBM unveiled details of the upcoming IBM Telum Processor, designed to bring deep learning inference to enterprise workloads and help address fraud in real time. Telum is IBM's first processor with on-chip acceleration for AI inferencing while a transaction is taking place. Three years in development, this new on-chip hardware acceleration is designed to help clients achieve business insights at scale across banking, finance, trading, and insurance applications, as well as customer interactions. A Telum-based system is planned for the first half of 2022.
According to recent Morning Consult research commissioned by IBM, 90% of respondents said that being able to build and run AI projects wherever their data resides is important. IBM Telum is designed to let applications run efficiently where the data resides, overcoming a limitation of traditional enterprise AI approaches, which tend to require significant memory and data movement to handle inferencing. Because Telum's accelerator sits in close proximity to mission-critical data and applications, enterprises can conduct high-volume inferencing on time-sensitive transactions in real time without invoking off-platform AI, which can hurt performance. Clients can also build and train AI models off-platform, then deploy them and run inference on a Telum-enabled IBM system for analysis.
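The train-off-platform, infer-on-platform pattern described above can be sketched in a few lines. The sketch below is illustrative, not IBM's actual toolchain: scikit-learn stands in for whatever framework is used for off-platform training, pickle stands in for the model-exchange format (in practice a portable format such as ONNX would be used), and the toy fraud features and the `score_transaction` helper are assumptions made for the example.

```python
# Sketch of "train off-platform, deploy and infer on-platform".
# Assumptions: scikit-learn as the training framework, pickle as the
# model-exchange format, synthetic transaction features -- all illustrative.
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Off-platform: train a toy fraud-scoring model on historical data ---
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))                        # 4 toy features
y_train = (X_train[:, 0] + X_train[:, 1] > 1).astype(int)   # synthetic labels
model = LogisticRegression().fit(X_train, y_train)
artifact = pickle.dumps(model)                              # exported model

# --- On-platform: load the artifact once, then score each transaction
# in-line as it executes, without calling out to an external AI service ---
scorer = pickle.loads(artifact)

def score_transaction(features):
    """Return a fraud probability for one in-flight transaction."""
    x = np.asarray(features, dtype=float).reshape(1, -1)
    return float(scorer.predict_proba(x)[0, 1])

risk = score_transaction([2.0, 1.5, 0.0, -0.3])
```

Keeping the scoring step co-located with the transaction, as in the second half of the sketch, is the point of the design: the model is trained wherever it is convenient, but the latency-sensitive inference call happens next to the data.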