Push the UVM start button then hit the accelerator – part 1

So, you’re STILL not using UVM? Maybe you’ve taken a good look and decided that the Universal Verification Methodology for SystemVerilog (UVM) is not for your team. Maybe, you’ve not got round to taking that close look, but you’ve read a lot of these kinds of articles and been scared off by the warnings that UVM is hard to learn. This article won’t help you learn ANY of it, but it will point you to how you might speed up your UVM learning, your UVM adoption and even your UVM execution throughput.
By eeNews Europe

After all, who needs a steep learning curve on top of all that other verification work? Then there is the uncertainty of the potential return on investment in adopting UVM. Will it really find more bugs or verify your design faster?

Make no mistake, becoming proficient with all the features of UVM takes a couple of weeks of formal training, and a while longer to confidently put what you’ve learned into practice. At the Verification Futures Conference in 2012, Janick Bergeron, one of the fathers of constrained random verification methodology, was asked for some advice on how to learn UVM. The answer was “don’t try to learn all of it”.

A boost up the UVM Learning curve

Remember, you are not alone in pioneering a new methodology; many have walked that path before you, and some of them offer guidance to those following behind. For example, Doulos, the well-respected training company, has been teaching UVM classes and helping to define the standards since UVM started, and has created something called “Easier UVM” as a way to accelerate adoption and reuse.

Easier UVM was originally created as a way to help non-experts to learn UVM and to accelerate its adoption after the students return to work. Comprising methodology guidelines and a really useful code generation tool, Easier UVM has already been adopted for real-life projects and tape-outs. John Aynsley, CTO of Doulos, says that the three most common ways that users benefit from Easier UVM in their projects are:

1. Use the code generator purely as a learning tool.

2. Use the code generator and templates to create, develop and maintain all the SystemVerilog code to run UVM.

3. Use the code generator just once to create the framework, then proceed fully manually from then on.

None of these approaches is more correct or recommended than the others; it is simply a matter of choice for each team. However, in all cases, the Easier UVM approach ensures the UVM code is written in a consistent style.
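To give a flavour of that consistent style, here is a hypothetical driver skeleton of the kind an Easier UVM-style code generator might emit. The class and signal names (bus_driver, bus_tx, bus_if, the m_ prefix) are illustrative placeholders, not output copied from the actual tool:

```systemverilog
// Illustrative driver skeleton in a consistent, generator-friendly style.
// Names (bus_driver, bus_tx, bus_if) are placeholders for this sketch.
class bus_driver extends uvm_driver #(bus_tx);
  `uvm_component_utils(bus_driver)

  virtual bus_if m_vif;  // virtual interface to the DUT pins

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);  // pull the next transaction
      // ... drive the fields of req onto m_vif here ...
      seq_item_port.item_done();         // tell the sequencer we are done
    end
  endtask
endclass
```

Because every generated component follows the same naming and phasing conventions, a newcomer who has understood one agent can read all the others.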

Of course, verification teams need to know what they’re doing with UVM, and expertise is eventually required, but if you are the first advocate for UVM within your team, then Easier UVM will help you gain early success in order to promote the wider use of UVM in other projects.


Seeing the wood for the trees

Sometimes, in activities as complex as UVM, the big picture can be lost in the details, and vice versa, so another useful boost up the learning curve is provided by simulation and debug tools that are UVM-literate. In the case of the Riviera-PRO simulator from Aldec, for example, the UVM Graph tool understands all the constituent parts of a UVM test environment and displays these in a recognisable format, as shown in Figure 1.

Figure 1. UVM-aware tool GUIs help us understand UVM structure.

Here we can see the major components of the UVM environment displayed much as they are in the textbooks and training classes, making it easier to grasp how all those SystemVerilog files combine to create the complete environment. The hierarchy display also shows recognisable UVM elements as the tool interprets them.
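The hierarchy such a GUI displays corresponds directly to the build-time structure of the SystemVerilog code. As a rough sketch, assuming illustrative class names (bus_agent, bus_tx, bus_driver, bus_monitor are placeholders, not from any specific tool or library), a typical agent builds and connects its children like this:

```systemverilog
// Illustrative sketch of the test -> env -> agent -> driver/monitor/sequencer
// hierarchy that a UVM-aware GUI visualises. Names are placeholders.
class bus_agent extends uvm_agent;
  `uvm_component_utils(bus_agent)

  uvm_sequencer #(bus_tx) m_sequencer;  // generates transactions
  bus_driver              m_driver;     // drives them onto the pins
  bus_monitor             m_monitor;    // observes pin activity

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    m_sequencer = uvm_sequencer#(bus_tx)::type_id::create("m_sequencer", this);
    m_driver    = bus_driver::type_id::create("m_driver", this);
    m_monitor   = bus_monitor::type_id::create("m_monitor", this);
  endfunction

  function void connect_phase(uvm_phase phase);
    // The driver pulls transactions from the sequencer via this port
    m_driver.seq_item_port.connect(m_sequencer.seq_item_export);
  endfunction
endclass
```

A UVM-aware GUI essentially draws the result of these build_phase and connect_phase calls, which is why the picture matches the textbook diagrams.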

UVM brings portability to the testbench

Having invested time in creating a testbench, it helps if that testbench is portable, so that we can re-use it across different verification platforms, e.g. in RTL simulation and in emulation. In fact the Verification IP industry depends upon that portability. At Aldec, Easier UVM guidelines have been followed in order to help create Verification IP which is portable and re-usable across simulation, emulation and even hybrid environments. This portability has become important because the use of emulation, in particular, is becoming more widespread, partly owing to the need for ever greater throughput in UVM-based verification.


Why is simulation throughput important for UVM?

You may have heard people disparage UVM by suggesting that it is inefficient; that it wastes simulation cycles on poorly-constrained random tests instead of being more design-aware. There is no doubt; it IS easier to throw more simulation cycles at a verification problem in order to achieve coverage goals, rather than create more pertinent directed tests: however, that is not the whole story (wise readers nod inwardly at this point).

One widely-adopted best practice is to use both directed tests and constrained random tests, such as those employed by UVM. We start by using traditional directed tests in order to reach a pre-agreed functional coverage score, testing all the obvious major and minor features of the Design Under Test (DUT). This yields diminishing returns as the remaining untested parts of the DUT become increasingly hard to reach and/or observe; so then we switch to the constrained random approach for which UVM is famous, testing non-obvious corner cases to bring the coverage up to the desired 100% score. UVM can be used for directed tests as well as constrained random, of course, but the latter requires many more cycles of simulation runtime than the perfect directed test, leading to simulation throughput becoming a bottleneck. This is especially a problem when teams prematurely switch to constrained random instead of using design knowledge to create better or more directed tests.
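The constrained random stage hinges on steering the randomisation towards the corner cases the directed tests missed. A minimal sketch of such a transaction follows, assuming a hypothetical 32-bit bus item (the class, field names and the particular distribution weights are all illustrative):

```systemverilog
// Illustrative constrained-random transaction: the constraints bias
// stimulus towards boundary addresses, where corner-case bugs often hide.
class bus_tx extends uvm_sequence_item;
  `uvm_object_utils(bus_tx)

  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit        write;

  // Weight the extremes of the address range more heavily than the middle
  constraint c_addr_dist {
    addr dist { 32'h0000_0000           :/ 2,
                32'hFFFF_FFFC           :/ 2,
                [32'h4 : 32'hFFFF_FFF8] :/ 6 };
  }
  constraint c_align { addr[1:0] == 2'b00; }  // word-aligned accesses only

  function new(string name = "bus_tx");
    super.new(name);
  endfunction
endclass
```

Each randomised item costs simulation cycles, which is exactly why throughput becomes the limiting factor once many such items are needed to close the last few coverage bins.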

How do we increase our efficiency when using UVM? We either run more simulators, write more efficient tests and/or constraints, or we accelerate the cycles themselves, i.e. run each simulation faster.

Coverage-driven and metric-driven techniques are a help, as are statistical tools which converge more quickly on the optimal constraints, but emulation is increasingly being used to accelerate UVM in order to overcome the constrained-random throughput bottleneck.

Increasing UVM throughput with FPGA-based emulation

Emulators have a number of use modes, including simulation acceleration, which requires us to place part or all of the DUT, and perhaps part or all of the testbench, into the emulator hardware. Some emulators are based on custom gate arrays or many small processors, but in the case of the Aldec HES platforms, the hardware is based on FPGAs. The advantage of FPGA-based emulation is that these FPGAs are built on the latest silicon technology and hence offer the fastest possible runtime performance (really important for simulation acceleration, of course).

The entire verification environment might be placed in the emulator, but let’s consider the more common approach of partitioning so that the DUT runs in the emulator and the testbench runs in the simulator. We must then link the testbench with the DUT via bi-directional interfaces between the simulator and the hardware. For each port between the testbench and the DUT, we need to ensure that every simulator event that produces a signal change in that port produces an equivalent voltage representing logic 1 or 0 at a physical location somewhere in the emulator hardware (and vice versa). We can see this represented in Figure 2, which shows a simplified example of a single port, perhaps a top-level bus port, in a UVM test environment.
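To make the cost concrete, consider a hypothetical signal-level boundary like the one below (the interface and signal names are illustrative, not from any particular product). Every value change on any of these nets must cross the simulator-to-emulator link as a separate event:

```systemverilog
// Illustrative signal-level boundary between testbench (simulator)
// and DUT (emulator). Each net toggle is a separate event that must
// cross the link, one signal change at a time.
interface bus_if (input logic clk);
  logic        valid;  // testbench asserts to present a transfer
  logic        ready;  // DUT asserts when it can accept the transfer
  logic [31:0] addr;   // 32 individual address bits
  logic [31:0] data;   // 32 individual data bits
endinterface
```

Even one handshake (drive addr and data, raise valid, wait for ready, drop valid) implies dozens of bit-level events per clock cycle crossing the link, which is why signal-level partitioning caps the achievable acceleration.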

Figure 2. Partitioning at the signal level

As might be obvious, even a simple transaction on that bus port would involve multiple simulation events and changes. The hardware cannot run any faster than the simulator can make those changes, and it is further limited by the signal-by-signal, event-by-event communication between the testbench and the DUT. Often the overall speed of the simulation acceleration will be governed by the clock rate, complexity and traffic on such interfaces, whatever the speed of the hardware. Even so, the acceleration achieved is still useful for large DUTs and long-duration tests.

For greater acceleration, we need a way of not only speeding up the communication but also allowing the hardware to run at its own (higher) speed for most of the time. That’s where transactors come in… which will be covered in part 2 of this article, to appear in the December 2015 digital edition of EDN Europe.

About the author

Doug Amos is an FPGA consultant who works closely with Aldec on its FPGA-based prototyping solutions; he is also the FPGA Network Manager of the UK’s National Microelectronics Institute.
