The eSilicon-developed AI-targeted “tiles” include subsystems such as multiply-accumulate (MAC), convolution and transpose memory. A physical interface (PHY) to the HBM memory stack is also part of the library.
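To illustrate what a MAC tile accelerates, the sketch below shows the multiply-accumulate primitive in plain Python. This is purely illustrative (not eSilicon code): a dot product, the inner loop of convolution and matrix multiplication, is simply a chain of MAC steps, which is why dedicated MAC hardware dominates AI accelerators.

```python
# Illustrative only: the multiply-accumulate (MAC) primitive that
# an AI "MAC tile" implements in hardware.
def mac(acc, a, b):
    """One MAC step: return acc + a * b."""
    return acc + a * b

def dot_product(xs, ws):
    """A dot product is a chain of MAC operations -- the inner loop
    of convolution and matrix multiply in neural-network inference."""
    acc = 0
    for x, w in zip(xs, ws):
        acc = mac(acc, x, w)
    return acc

print(dot_product([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```

A hardware MAC tile performs many of these steps in parallel each cycle, rather than sequentially as in this software sketch.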
A typical AI design requires access to large amounts of memory. This is usually accomplished with a combination of customized memory structures on the AI chip itself and off-chip access to dense 3D memory stacks called high-bandwidth memory (HBM). Access to these HBM stacks is accomplished through a technology called 2.5D integration, which employs a silicon substrate to tightly integrate the chip with the HBM stacks in a sophisticated multi-chip package. The current standard for this interface is HBM2. The development of customized on-chip memory and 2.5D integration are eSilicon core competencies required for a successful AI design.
eSilicon says it is currently engaged with several tier-one system providers and high-profile startups to deploy the neuASIC platform and its associated IP, with initial applications focusing on the data centre and information optimization, human/machine interaction, and autonomous vehicles.
“We see a vast array of possibilities for acceleration of AI algorithms,” said Patrick Soheili, vice president of business and corporate development at eSilicon. “ASICs provide a clear power and performance advantage. Thanks to our neuASIC platform, the MLAP segment of the market can now expand to serve a wide range of applications.”
eSilicon - www.esilicon.com