
Mentor joins semiconductor process characterisation group

Business news
By Ally Winning

Mentor has joined STMicroelectronics in the European Nano 2022 project to accelerate the characterisation of standard cells, I/Os and memories.

Characterising silicon platforms with hundreds of cells and several hundred process, voltage and temperature (PVT) variables can consume thousands of CPUs for weeks, running millions of SPICE simulations and generating billions and even trillions of data points. This bottleneck is a productivity drain for the broader semiconductor industry.

The program plans to develop new reinforcement-learning techniques to speed up platform characterisation, including for process technologies below 10nm.

“Characterisation of standard cell libraries is really important for us,” said Cyril Colin-Madan, deputy director of Technology and Design Platforms for STMicroelectronics. “After highly successful collaborations under the Nano 2012 and Nano 2017 programs, we aim to further advance the state of microelectronics design and manufacturing in Europe under Nano 2022,” he said.

“We are focused on an area, characterisation of standard cell libraries, that has a large impact on the design flow,” said Jean-Marc Talbot, Senior Engineering Director DSM/AMS at Mentor. “Library characterisation is at the boundary between the production of IP and the use of the libraries in the physical design flow for synthesis and timing analysis. This is a growing bottleneck for advanced nodes, which is why we decided to focus on this.”

“Libraries have thousands of cells at different voltages, and hundreds of different voltage and temperature corners. We are talking about millions of simulations and thousands of files, so we are in the world of big data and the associated challenges,” he said.

A simple computation shows the scale: characterising 5,000 standard cells at 500 corners requires a SPICE simulation for every cell, corner and measurement point. That amounts to some 14 billion simulations, occupying thousands of CPUs for several weeks for a 28nm process, and the burden only grows with statistical simulation.
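As a rough back-of-the-envelope sketch of where such numbers come from, the simulation count multiplies across every characterisation dimension. The cell, corner, arc and table-point figures below are illustrative assumptions, not numbers from Mentor or STMicroelectronics:

```python
# Brute-force library characterisation cost, order-of-magnitude only.
# All figures are illustrative assumptions.
cells = 5_000                 # standard cells in the library
corners = 500                 # PVT corners
arcs_per_cell = 8             # timing arcs per cell (assumed)
table_points = 7 * 7          # input-slew x output-load grid per arc (assumed)

simulations = cells * corners * arcs_per_cell * table_points
print(f"{simulations:,} SPICE simulations")

# At an assumed 10 s per simulation, spread over 2,000 cores:
cpu_seconds = simulations * 10
weeks = cpu_seconds / 2_000 / (7 * 24 * 3600)
print(f"about {weeks:.1f} weeks on 2,000 cores")
```

Even these conservative assumptions land near a billion simulations and weeks of runtime on thousands of cores; richer tables or more arcs per cell push the count toward the 14 billion figure above.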



“For lower nodes you have to take into account the variability of the process, and the Liberty format has an added format for this, so not only do you need to compute the nominal value but also the statistical components with a Monte Carlo analysis. With a simple calculation, to provide enough data you get 1 trillion simulations. This takes too long and can introduce lots and lots of errors,” he said. “There are simplifications that can be done, but it is clearly a bottleneck and it is on the critical path of the digital design flow.”
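Talbot's trillion-run figure follows directly from multiplying the nominal workload by the Monte Carlo samples needed per measurement. A minimal sketch with assumed counts:

```python
# Monte Carlo statistical characterisation multiplies the nominal workload.
# Both counts are illustrative assumptions.
nominal_runs = 1_000_000_000        # nominal simulations (assumed ~1e9)
mc_samples_per_point = 1_000        # Monte Carlo samples per measurement (assumed)

statistical_runs = nominal_runs * mc_samples_per_point
print(f"{statistical_runs:,} simulations")

# Even at an optimistic one second per simulation, the serial cost is enormous:
cpu_years = statistical_runs / (3600 * 24 * 365)
print(f"about {cpu_years:,.0f} CPU-years at 1 s per run")
```

This is why brute-force statistical characterisation is impractical and why the quote below focuses on modelling rather than exhaustive simulation.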

“The objective we have set for the end of the programme in 2022 is to use machine learning to get an order of magnitude faster throughput in characterisation, and also to provide easy-to-use library verification tools so that users can explore the libraries, navigate gigabytes of complex data and seamlessly debug any errors,” he said. “We have a little over two years to achieve this.”

In 2018, Mentor acquired the Solido Characterization Software Suite, whose machine learning technologies increase the throughput of library characterisation by orders of magnitude while producing accurate Liberty files and statistical data. It also provides tools and a designer-centric user interface that enable exhaustive verification of characterised Liberty files.

“Using machine learning you can select a smart set of corners that we simulate, and from that build an ML model across the full range of corners, and this gives 100x acceleration,” said Colin-Madan.
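The smart-corner idea can be illustrated with a toy sketch. Here a simple polynomial regressor and a synthetic one-dimensional delay model stand in for the commercial ML models and the SPICE simulator; only a small subset of corners is "simulated", and the model predicts the rest:

```python
import numpy as np

# Toy stand-in for SPICE: cell delay as a function of supply voltage.
# Real tools model process/voltage/temperature jointly; this is 1-D for clarity.
def spice_delay(vdd):
    return 1.0 / (vdd - 0.3) + 0.02 * vdd   # assumed toy delay curve

all_corners = np.linspace(0.6, 1.2, 500)             # 500 voltage corners
idx = np.linspace(0, len(all_corners) - 1, 10).astype(int)
smart_subset = all_corners[idx]                      # "simulate" only 10 corners

# Fit a cheap surrogate model on the simulated subset.
coeffs = np.polyfit(smart_subset, spice_delay(smart_subset), deg=4)

# Predict the remaining corners from the model instead of re-simulating.
predicted = np.polyval(coeffs, all_corners)
truth = spice_delay(all_corners)
max_err_pct = 100 * np.max(np.abs(predicted - truth) / truth)
print(f"simulated {len(smart_subset)} of {len(all_corners)} corners, "
      f"max model error {max_err_pct:.2f}%")
```

Replacing 490 of 500 simulations with model predictions is where the claimed acceleration comes from; the production tools additionally choose the subset adaptively (active learning) rather than on a fixed grid as here.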

“Finding outliers or inconsistencies in the vast amount of data is impossible by navigating text files, so people use scripts with rule-based checks. Building machine learning models helps to extract everything, finding new classes of problem that rule-based tools cannot find,” he said.
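The contrast between rule-based and model-based checking can be sketched with a toy Liberty-style delay table. The data, the monotonicity rule and the residual threshold are all assumptions for illustration; the point is that a subtly corrupted entry can stay monotonic, so a fixed rule misses it while a fitted trend flags it:

```python
import numpy as np

# Toy delay-vs-load table for one timing arc (values assumed).
loads = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)        # output load, fF
delays = np.array([12, 14, 18, 26, 30, 74, 138], dtype=float)  # delay, ps
# Index 4 should be ~42 ps on the underlying 2*load+10 trend, so the 30 is
# corrupted, yet the table remains monotonically increasing.

# Rule-based check: delay must increase monotonically with load.
rule_flags = [i for i in range(1, len(delays)) if delays[i] <= delays[i - 1]]

# Model-based check: fit a linear trend and flag large residuals.
slope, intercept = np.polyfit(loads, delays, deg=1)
residuals = np.abs(delays - (slope * loads + intercept))
model_flags = np.where(residuals > 3 * np.median(residuals))[0].tolist()

print("rule-based flags:", rule_flags)    # [] - the bad point slips through
print("model-based flags:", model_flags)  # [4] - caught by the fitted trend
```

On this data the monotonicity rule flags nothing, while the residual check catches the corrupted entry at index 4, which is the kind of "new class of problem" a fixed rule cannot encode in advance.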

Another part of the collaboration is the graphical interface and how to present results so they are easily understood.

“We believe that machine learning is the key technology for the future of a lot of tools and so far what we have is unique. The full semiconductor industry will benefit from the project and we are building on a very solid foundation,” said Colin-Madan.

Cloud computing is also key to increasing the performance of library characterisation by increasing the number of CPUs that can be used.

“The current technology is already scalable to large designs with hundreds of thousands of parameters,” said Talbot. “What we aim to do is, instead of using several thousand CPU cores for four weeks, to decrease that to a few days. It is orthogonal: we want to benefit from the scalability in the cloud and use the active learning, so we will use both.”

“Everything is running on CPUs; we are not using GPU acceleration,” said Talbot. “We are surveying what other types of accelerators can bring in general, but we have not implemented anything. The strength of these tools is their deep integration: you need a very efficient software infrastructure across a large number of CPUs, deeply integrated with SPICE simulators and a wide variety of ML models, and the benefit of that integration is speed.”

“Mentor’s research and industrial partnership with STMicroelectronics has established a long and successful track record of collaboration, resulting in advancements to Mentor’s tools, which in turn deliver immediate value to STMicroelectronics and other customers,” said Ravi Subramanian, Ph.D., senior vice president, IC Verification Solutions for Mentor. “Mentor is pleased to expand on this successful collaboration with the Nano 2022 program, and we look forward to working with STMicroelectronics to achieve our mutual goals.”

www.entreprises.gouv.fr/fr/numerique/enjeux/nano-2022
