All posts by bpsadmin

Issue 15: Code Quality Essentials for High Reliability FPGAs – Part 1

As a young hardware engineer, I started using programmable logic. What could be better, aside from maybe the price and power? You didn’t have to be too disciplined during the design phase because you could just reprogram the device if you had a bug or two. Heck, your boss didn’t even have to know; just fix it in the lab. No PCB rework needed.

Even though this was years ago, and devices were nowhere near as complex as today’s FPGAs, this mindset still separates FPGA from ASIC design. With ASICs there can be a large non-recurring engineering cost and no forgiveness for design bugs, so up-front verification is not optional.

The “I can just fix it” FPGA attitude is a major reason a recent study showed that 68% of FPGA projects are behind schedule and 83% of projects have a significant bug escape into production. On top of that, code that was developed this way and deemed “good enough” at the time has a habit of sticking around to become what’s known as “legacy code” that no one wants to touch, because it’s so poorly written that no one on the team today has any chance of understanding what it’s really doing.

When designing FPGAs, code quality is essential to staying on schedule and avoiding design iterations and, worse, bugs found in production. This is especially true when it comes to high reliability applications such as automotive, space and medical devices, where bugs can be extremely expensive or impossible to fix. But just what makes for quality RTL (VHDL or SystemVerilog) code? The answer is, well, there isn’t just one thing that makes for quality code. It’s a combination of considerations, from readability to architecture choices.

Over this series of blogs, we will do a deep dive into specific aspects of “quality code”. Part one will focus on readable and maintainable RTL code, highlighting some best practices. Part two will be a deep dive into Finite State Machine architectures and coding guidelines, and part three will focus on the challenges around multiple Clock Domains.

Readable and maintainable code

During a code review, early in my career, my lead engineer had the audacity to take my code printout and throw it in a garbage can. Of course, as a young engineer I was flabbergasted because I had simulated the code and it worked like it was supposed to, or so I thought. He then said, in a reassuring voice, “now let me show you how to code so someone else can figure it out.”

What I learned next was that readable and maintainable code took some common discipline, starting with the basics such as naming conventions. It’s not so much whether the organization prefers big endian or little endian, spaces or tabs, prefixes or suffixes, underscores or hyphens or camel case, the important thing is to have a standard and stick to it. By standardizing on naming conventions for architectures, packages, signals, clocks, resets, etc. my code became clearer and as a side benefit, it reduced the code complexity. Going through code by hand to uphold those standards is incredibly tedious, but the task can be easily automated with a static analysis tool.
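
As a simple illustration, here is the kind of consistent port list a convention produces. The specific prefixes and suffixes are hypothetical choices of mine, not a recommendation of any particular standard; the point is that one convention is applied everywhere.

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical convention: _i/_o suffixes for ports, clk/rst prefixes for
-- clocks and resets. The value is the consistency, not these exact choices.
entity uart_tx is
  port (
    clk_sys_i   : in  std_logic;                     -- system clock
    rst_n_i     : in  std_logic;                     -- active-low reset
    tx_start_i  : in  std_logic;
    tx_data_i   : in  std_logic_vector(7 downto 0);
    tx_serial_o : out std_logic;
    tx_busy_o   : out std_logic
  );
end entity uart_tx;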

A simple and common mistake I made was to use hard coded numbers, especially in shared packages. While as the original coder I may understand exactly why I hard coded that specific number, 10 years down the line, when it’s time to update the device or functionality, the use of a hard coded number will add to confusion and delays.
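
A minimal sketch of the alternative, using named constants in a shared package; the names and values here are illustrative only:

package timing_pkg is
  -- Named constants document intent; a bare "868" buried in the RTL does not.
  constant CLK_FREQ_HZ    : natural := 100_000_000;                   -- 100 MHz clock
  constant UART_BAUD_RATE : natural := 115_200;
  constant BAUD_DIVISOR   : natural := CLK_FREQ_HZ / UART_BAUD_RATE;  -- 868
end package timing_pkg;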

Back to my code that simulated correctly: what I didn’t see lurking in it were constructs that made it ripe for simulation-versus-synthesis mismatches. For example, variables that are used before they’re assigned might be unknown or might retain a previous value, which means possible mismatches between the simulation and the actual functionality after synthesis. This simple mistake can mean the design works in simulation but then later fails in the lab or, worse, in the field.
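
Here is a hedged VHDL sketch of the kind of construct I mean; the names are mine and the example is deliberately small:

library ieee;
use ieee.std_logic_1164.all;

entity read_before_write is
  port (
    sel  : in  std_logic;
    a, b : in  std_logic;
    y    : out std_logic
  );
end entity read_before_write;

architecture rtl of read_before_write is
begin
  process (sel, a, b)
    variable result_v : std_logic;
  begin
    -- result_v := '0';  -- the defensive fix: give the variable a default first
    if sel = '1' then
      result_v := a and b;
    end if;
    -- When sel = '0', result_v is read below without being assigned in this
    -- activation. The simulator quietly reuses whatever value was left from
    -- the previous run, while synthesis has to build a latch or feedback path
    -- to mimic that memory, so the two can disagree.
    y <= result_v;
  end process;
end architecture rtl;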

Another common example of a code quality issue is missing assignments in if/else blocks and case statements, which will cause most synthesis tools to create latches in the design alongside the registers. Implied latches can cause timing issues, and different synthesis tools handle them in different ways, so changing vendors can change the implementation.
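
A representative VHDL fragment, with illustrative names, showing how a missing assignment in a case branch implies a latch:

library ieee;
use ieee.std_logic_1164.all;

entity mux_with_latch is
  port (
    sel  : in  std_logic_vector(1 downto 0);
    a, b : in  std_logic;
    y    : out std_logic
  );
end entity mux_with_latch;

architecture rtl of mux_with_latch is
begin
  process (sel, a, b)
  begin
    case sel is
      when "00"   => y <= a;
      when "01"   => y <= b;
      when others => null;  -- y is not assigned here, so the synthesis tool
    end case;               -- must remember its previous value: an implied latch
  end process;
  -- Assigning y in every branch, or giving it a default value before the case
  -- statement, completes the logic and removes the latch.
end architecture rtl;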


Figure 1 Latch vs Flip Flop

The goal of any tool is to succeed, and synthesis tools want to synthesize successfully, so they may give your code the benefit of the doubt, and assume your code is right until it’s proven wrong. Many tools will even accept common poor practices and “fix” them for you. Again, different tools, different “fixes.”

Another code quality issue is the de-assertion of asynchronous resets. Some people pay little attention to how their reset will work in the real world. They just throw in a global reset so the simulation looks “tidy” at time zero, but this isn’t enough. That reset must work in the real world. Keep in mind, a global reset is just that, global, so it has a large fanout and may have a significant delay and skew, so you need to buffer it properly. And because this reset is asynchronous, by definition it can happen at any time, and it forces the flip-flops to a known state immediately. That’s not a problem; the issue comes not when your reset pulse begins, but when it ends relative to the active clock edge. The minimum time between the inactive edge of your reset and the active edge of your clock is called recovery time. Violating recovery time is no different than violating setup or hold time. The easiest way to avoid this issue is to design your reset as shown here: the active edge can happen at any time, but the inactive edge is synchronous with the clock.
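
A minimal VHDL sketch along the lines of Figure 2, with illustrative names, of a reset bridge that asserts asynchronously but de-asserts synchronously:

library ieee;
use ieee.std_logic_1164.all;

entity reset_bridge is
  port (
    clk       : in  std_logic;
    arst_n_in : in  std_logic;   -- raw asynchronous reset, active low
    rst_n_out : out std_logic    -- reset distributed to this clock domain
  );
end entity reset_bridge;

architecture rtl of reset_bridge is
  signal sync_ff : std_logic_vector(1 downto 0);
begin
  process (clk, arst_n_in)
  begin
    if arst_n_in = '0' then
      sync_ff <= (others => '0');    -- assertion is immediate (asynchronous)
    elsif rising_edge(clk) then
      sync_ff <= sync_ff(0) & '1';   -- a '1' ripples through two flip-flops,
    end if;                          -- so de-assertion is aligned to the clock
  end process;                       -- and recovery time can be met

  rst_n_out <= sync_ff(1);
end architecture rtl;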


Figure 2 Asynchronous Reset

Finding and addressing code quality issues, such as naming convention violations, reset issues, excessive area, low operating frequency and poor mean time between failures up front, as you code, can significantly reduce the number of iterations through synthesis and place and route, improving productivity, reducing development costs and improving the reliability of a design.

When designing FPGAs, code quality is essential to staying on schedule and avoiding design iterations and, worse, bugs found in production. The Visual Verification Suite from Blue Pearl Software provides RTL Analysis to identify coding style and structural issues up front. The RTL Analysis flags naming convention violations as well as structural issues such as long paths and deep if-then-else chains as you code, rather than late in the design cycle.


Figure 3 Visual Verification Suite

So, what is it that makes the Visual Verification Suite such a powerful debugging environment? It has a straightforward, understandable graphical user interface that runs on Windows or Linux, and quickly generates reports that show aspects of your design in general, like your highest fanout nets, or your longest if-then-else chains, and an easy-to-filter report window showing the specific issues it has found. The suite also includes numerous checks to catch violations of company specific naming conventions.


Figure 4 Finite State Machine Viewer

From these reports, or from the main window, you can open the schematic viewer directly to the area of interest, and with just a few mouse clicks you can turn that into a path schematic to isolate the issue even further. On top of that, it provides views that help you with finite state machine analysis, CDCs, and long combinational paths. We encourage you to sign up for a demonstration to learn more about how the Visual Verification Suite can ensure quality code for high reliability FPGAs.

Check back for Part 2, a deep dive into Finite State Machine architectures and coding guidelines, and Part 3, the challenges around multiple Clock Domains.

Issue 14: Accelerated Verification of Block-Based FPGA Designs

FPGA designers face increasing challenges with time-to-market due to the combination of increased FPGA complexity and performance. This often moves the engineer away from hand crafting each element of the design and toward using IP cores provided by device vendors and other third-party suppliers. Using off the shelf IP frees the engineer to focus on the development of custom blocks, which are critical for the application and where their special value-added knowledge is applied.

Blue Pearl’s Visual Verification™ Suite helps developers address these time-to-market and complexity challenges. The suite enables the engineer to achieve tighter development time scales by identifying issues with custom developed RTL, such as FSM errors, bad logic or design elements which may impact performance in simulation or implementation. With the suite, issues are found early in the design cycle, as you code, not late in the design or, worse, in the lab.

In addition, the Visual Verification Suite can analyze the complete design to ensure there are no accidental Clock Domain Crossings (CDCs) introduced between custom IP blocks and vendor or third-party IP.

Working with vendor and third-party IP can be a challenge, especially when interfacing with tools like Intel® Quartus® Prime and its Platform Designer (formerly Qsys). This challenge can occur as the architectural diagram is created visually and pulls together several RTL files, typically encrypted, to correctly describe the IP modules and the overall architecture. If not handled correctly, the engineer could be swamped by messages when using static analysis tools like the Visual Verification Suite. These messages/warnings result from coding structures within the encrypted vendor and third-party IP blocks, so the user can do nothing about them.


Example Quartus Platform Designer Block Diagram.

Here Blue Pearl’s patented Grey Cell™ technology saves significant time. As the project loads, the Visual Verification Suite is aware of the vendor’s IP libraries and loads a Grey Cell model in place of the IP Core. This model contains only the first rank of input flip flops and the last rank of output flip flops. Knowing this allows the suite to perform a clock domain crossing analysis at the top level of the design without the need to look in depth into the vendor or third-party IP. This also reduces the number of messages and warnings to only the ones of interest to the developer. (Learn more about Grey Cell technology www.bluepearlsoftware.com/files/GreyCell_WP.pdf)


Grey Cell Representation of a FIFO

To be able to pull in the top-level design from Intel’s Platform Designer, Blue Pearl provides Tcl scripts in the installation directory which can be used in the Quartus Tcl window to extract the project and create a Tcl script which can be opened by the Visual Verification Suite.


The scripts directory of the Visual Verification Suite installation

Creating a Blue Pearl project Tcl script is as simple as copying the QuartusToBluePearl_sh.tcl script to the Quartus project area and then sourcing it in the Tcl console.

This will create a LoadBluePearlFromQuartus.tcl Tcl script which can then be opened by the Visual Verification Suite, and which contains all the elements and files required to get started with analysis.

Alternatively, Quartus users can use the LoadBluePearlFromQuartus.sh (Linux) or LoadBluePearlFromQuartus.bat (Windows) script to start Quartus in non-graphical mode and source the same script.

A similar flow is available for Xilinx® Vivado® users. Copy the VivadoToBluePearl.tcl script into the project directory and source it from the Vivado Tcl console to produce a LoadBluePearlFromVivado.tcl Tcl script.

Blue Pearl’s Visual Verification Suite, used early and often in the design process with Quartus Prime or Vivado, as opposed to only as an end-of-design/sign-off tool, significantly contributes to design efficiency and quality, while minimizing the chances of field vulnerabilities and failures.

To learn more about the Visual Verification Suite, please request a demonstration at https://bluepearlsoftware.com/request-demo/

Issue 13: Hardware Security: Risk Mitigation Requires a Security Focused Verification Methodology

In most SoC and FPGA designs, hardware security is now as important as power, area, and cost. In fact, the National Institute of Standards and Technology (NIST) reported a record number of Common Vulnerabilities and Exposures (CVEs) in 2021, and the year is not over yet. Given the difficulty of patching hardware, architecture choices along with an established security focused verification methodology must be in place to circumvent weakness in the logic that, if exploited, results in a negative impact to the chip’s security triad (confidentiality, integrity, or availability).

To help remediate security risk in hardware devices, in Feb. 2020, the MITRE Corporation released version 4.0 of their Common Weakness Enumeration (CWE) list. The new version, for the first time, provides a valuable list of common hardware weaknesses, in addition to its software weaknesses, that are the root causes of many vulnerabilities.

The hardware list is categorized into major themes such as Security Flow Issues, Debug and Test Problems, Memory and Storage Issues, General Circuit and Logic Design Concerns, and so on. The CWE list has been developed to help design teams as they quantify risk exposure. In addition, it provides a valuable guide for threat modeling and secure design. By identifying potential issues early, the projected cost of a security incident can be significantly lowered or eliminated.

Over the last year, the list has continued to expand and as of version 4.6 (Oct. 28, 2021), it contains 98 common hardware weaknesses. While many of these vulnerabilities can be avoided with good architectural choices, others can be avoided with a mature design verification methodology that takes advantage of RTL static analysis.

Take for example, CWE-1245: Improper Finite State Machines (FSMs) in Hardware Logic. This CWE highlights that FSMs, if not constructed correctly, can allow an attacker to put a system in an undefined state, to cause a denial of service (DoS) or gain privileges on the victim’s system.

Many secure data operations and data transfers rely on the state reported by a FSM. Faulty FSM designs that do not account for all states, either through undefined states (left as don’t cares) or through incorrect implementation, might lead an attacker to drive the system into an unstable state from which the system cannot recover without a reset, thus causing a DoS. Depending on what the FSM is used for, an attacker might also gain additional privileges to launch further attacks and compromise the security guarantees.
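
As a hedged illustration of one common mitigation (the state names, encoding and recovery choice below are mine, not prescribed by the CWE), an FSM can give every state, including otherwise unused encodings, an explicit next state:

library ieee;
use ieee.std_logic_1164.all;

entity secure_fsm is
  port (
    clk, rst, start : in  std_logic;
    done            : out std_logic
  );
end entity secure_fsm;

architecture rtl of secure_fsm is
  type state_t is (IDLE, LOAD, EXECUTE, FINISH);
  signal state : state_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        state <= IDLE;
      else
        case state is
          when IDLE    => if start = '1' then state <= LOAD; end if;
          when LOAD    => state <= EXECUTE;
          when EXECUTE => state <= FINISH;
          when FINISH  => state <= IDLE;
          when others  => state <= IDLE;  -- recover from any corrupted or
        end case;                         -- undefined encoding rather than
      end if;                             -- hanging (a denial of service)
    end if;
  end process;
  done <= '1' when state = FINISH else '0';
end architecture rtl;

Whether that recovery branch survives synthesis depends on the state encoding and the tool’s safe state machine settings, which is exactly the kind of assumption a static FSM analysis helps confirm.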

With simulation alone, it is difficult to verify a complex FSM, as it requires complex test vectors with both positive and negative testing to check all combinations of possible states. Fortunately, a mature RTL static verification tool such as Blue Pearl Software’s Visual Verification Suite provides finite state machine analysis without any need for test vectors: it automatically extracts the FSM from the surrounding code, checks for dead or unreachable states, and generates an easy-to-read bubble diagram to better visualize the FSM and its potential vulnerabilities.

In addition, the suite also provides checks for coding style conformance, structure, race conditions (CWE-1298), resets (CWE-1271), as well as specific checks for functional safety protocols such as DO-254 conformance.

Complex Finite State Machine

Design teams that leverage static verification as part of their functional verification methodology are proven to reduce hardware security risks, as well as expensive and time-consuming simulation, synthesis and place and route runs and reruns, freeing up expensive licenses as well as improving overall design productivity.

Find and fix issues as you code, not late in the design cycle

Blue Pearl’s Visual Verification Suite, used early and often in the design process as opposed to only as an end-of-design/sign-off tool, significantly contributes to design security, efficiency, and quality, while minimizing the chances of field vulnerabilities and failures.
To learn more about the Visual Verification Suite, please request a demonstration at https://bluepearlsoftware.com/request-demo/

Issue 12: Moore With Less

Moore’s law foresaw the ability to pack twice as many transistors onto the same sliver of silicon every 18 months. Fast forward roughly 55 years, some experts now think Moore’s law is coming to an end. Others argue that the law continues with a blending of new innovations that leverage systemic complexity such as 2.5D and 3D integration techniques.

FPGA vendors and their customers have taken advantage of Moore’s law for decades and reaped the benefits of innovations such as 2.5D interposer technology as well as hardened processors and application specific subsystems.  However, with these features came new hardware design challenges. According to the 2020 Wilson Research study of FPGA designs:

  • The FPGA market continues to struggle with non-trivial bug escapes into production
  • The FPGA market is rapidly maturing its verification processes to address this growing complexity
  • FPGA teams that are more mature in their functional verification methodology are likely to experience fewer bug escapes

As an example of this complexity, the study indicates that 92% of FPGA designs contain two or more asynchronous clock domain crossings (CDC). This class of metastability bugs cannot be found in RTL simulation. To simulate CDC issues requires a gate-level model with timing, which is often not available until late in the design flow, if at all.  Even if such a gate-level model is available, it would still require a detailed set of test vectors to fully exercise the area of the circuit where the CDC exists, a daunting task at best.
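
For reference, the classic mitigation for a single-bit crossing is the two-flip-flop synchronizer; a minimal VHDL sketch follows (the names are illustrative, and multi-bit buses need a handshake or an asynchronous FIFO instead):

library ieee;
use ieee.std_logic_1164.all;

entity bit_synchronizer is
  port (
    clk_dst  : in  std_logic;   -- destination clock domain
    async_in : in  std_logic;   -- single-bit signal from another clock domain
    sync_out : out std_logic
  );
end entity bit_synchronizer;

architecture rtl of bit_synchronizer is
  signal meta_ff, sync_ff : std_logic;
begin
  process (clk_dst)
  begin
    if rising_edge(clk_dst) then
      meta_ff <= async_in;   -- first flop may go metastable
      sync_ff <= meta_ff;    -- second flop gives it a cycle to settle
    end if;
  end process;
  sync_out <= sync_ff;
end architecture rtl;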

The study also pointed out that 68% of designs miss their schedule! While there are several reasons for this, with static verification tools such as Blue Pearl’s Visual Verification Suite, project teams avoid costly and time-consuming design spins due to simulation versus hardware mismatches, invalid timing constraints and CDC issues that can cause metastability problems.

In addition, with static verification, the 63% of issues that are trivial human errors or typos, typically not found until simulation (source: DVCon U.S. 2018), can instead be identified early as the design is being created. By “shifting verification left”, design teams have been proven to save time along with delivering on a much more predictable schedule.

Mature RTL static verification tools such as Visual Verification Suite provide coding style conformance, structural, path, reset, and finite state machine analysis, as well as specific checks for DO-254 and STARC conformance.


Find and fix issues as you code, not late in the design cycle

FPGA teams that leverage static verification as part of their functional verification processes are proven to reduce expensive and time-consuming simulation, synthesis and place and route runs and reruns, freeing up expensive licenses as well as improving overall design productivity.

As an example of the return on investment of adding static verification to an FPGA design methodology, let’s take an example of five FPGA designers working on a 12-month FPGA project. If each person worked 200 days per year, the project would take approximately 1000 person days. Let’s assume each designer spends about 50% of their time focused on this one project, that is, 500 person days.

The Wilson study showed that approximately 51% of the design time is spent in simulation (roughly 250 person days). By adding static verification to the design methodology, if 50% (63% derated for simplicity) of the issues are trivial human errors or typos that can be caught up front, this could save as much as 128 (500 x 51% x 50%) person days in simulation alone.

Next, if you add the impact of CDC issues, where ~37% of High Reliability designs have clock related issues, project teams could eliminate another possible 3 month delay due to additional simulation and/or lab debug to find a CDC issue, or 90 person days. Finally, if you add the project management efficiencies, the project team could save as much as 240 person days (1.2 person years).

To put this into real numbers, at a loaded cost of $150K/year/engineer, the net savings would be around $180K per year. This is not inclusive of other hard-to-quantify savings, such as reduced simulation and synthesis license needs, project overrun costs/penalties, lab debug equipment and lab time, as well as a significantly reduced risk of field failures. These savings far outweigh the costs of adding static verification as part of a complete functional verification methodology.

Whether you believe Moore’s law is coming to an end or not, one thing is clear: most project teams are required to increase productivity just to keep up with FPGA complexity, often with smaller and smaller design teams. Blue Pearl’s Visual Verification Suite, used early and often in the design process as opposed to only as an end-of-design/sign-off tool, significantly contributes to design efficiency and quality, while minimizing chances of field failures.

To learn more about the Visual Verification suite, please request a demonstration.

Issue 11: Accelerating Verification for Satellite Applications

Development of an FPGA for space missions is, for many engineers, one of the most exciting end applications. However, due to the critical nature of space, the development typically comes with an increased verification workload. Not only must the design be verified functionally in simulation, but we must also ensure the developer has not inadvertently introduced any latent design issues during the coding process.

Functional verification of an FPGA’s RTL requires a considerable simulation effort to achieve code coverage (branch, path, toggle, condition, expression, etc.). To achieve a high level of coverage, the simulation must test several boundary and corner cases to observe the behaviour of the unit under test and ensure its correctness. This can lead to long simulation runs and significant delays between iterations when issues are detected. Of course, issues found in simulation can range from functional performance, such as insufficient throughput, to state machine deadlocks due to incorrect implementation during the coding process.

This is where static analysis tools such as the Visual Verification Suite from Blue Pearl Software can be wonderfully complementary to functional simulation and can help save considerable time and effort in the verification stage, when code coverage is being investigated.

Static analysis enables functional verification to be started with a much higher quality of code, reducing iterations late in the verification cycle. In addition, static analysis typically also runs in tens of seconds to minutes compared to long running simulations.

Let’s take a look at how a typical space FPGA development can be assisted by the use of a static analysis tool up front prior to starting functional verification.

The first step in the use of a static analysis tool is the selection of the rule set against which the RTL code will be checked for errors and warnings. The rule set can include structural checks, e.g. is a single- or two-process state machine used, are there unreachable states, does the state machine cover a power-of-two number of states, etc. There are also rules for coding style enforcement, which ensures compliance with organizational coding standards while improving readability and comprehension by other team members.

Definition of the rule set can be challenging, which is why the Visual Verification Suite includes a number of predefined rule sets. These include rules which ensure DO-254 best practice, along with a new rule set generated from recent ESA work based upon the CNES VHDL coding rules.

Many of the predefined rules focus upon the structural elements which may be incorrect in the design and include elements such as:

  • Unnecessary events – These are unnecessary signals included in the sensitivity list. Such inclusion will lead to simulation mismatch and add complexity to achieving code coverage (see the short sketch after this list).
  • If-Then-Else Depth – This will analyse the If-Then-Else structures to identify deep paths which may impact timing performance and throughput when implemented.
  • Terminal State – This is a state in a state machine which once entered has no exit condition. Finding this prior to simulation can save wasted simulation time.
  • Unreachable State – This is a state in a state machine which has no entrance condition. Finding this prior to simulation can again save considerable simulation time.
  • Reset – This ensures each flip flop is reset and reset removal is synchronous to the clock domain of the reset. Several in-orbit issues have been detected relying upon the power-on status of registers and as such reset for all flip flops is best practice.
  • Clocking – Clocking structures are also analysed to ensure there is no clock gating or generation of internal clocks.
  • Safe Counters – Checks to counters ensure that terminal counts use greater than or equal to for up counters and less than or equal to for down counters. This ensures single event effects have a reduced impact on locking up counters.
  • Dead / unused code – Analyses and warns about unused / dead code in the design. This can be removed prior to functional simulation and reduces head scratching when code coverage cannot be achieved.
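
To make the first rule above concrete, here is a small, hypothetical example of an unnecessary event in a sensitivity list; the names are mine:

library ieee;
use ieee.std_logic_1164.all;

entity mux2 is
  port (
    a, b, sel, en : in  std_logic;
    y             : out std_logic
  );
end entity mux2;

architecture rtl of mux2 is
begin
  -- 'en' is never read inside the process, so listing it is an unnecessary
  -- event: every toggle of en re-runs the process for no functional reason and
  -- adds noise when chasing code coverage. The clean sensitivity list is
  -- simply (a, b, sel), or process (all) in VHDL-2008.
  mux_proc : process (a, b, sel, en)
  begin
    if sel = '1' then
      y <= a;
    else
      y <= b;
    end if;
  end process mux_proc;
end architecture rtl;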

Modern space FPGA designs also often include multi-clock solutions, for example ADC and DAC clocking domains. This multi-clock environment can introduce Clock Domain Crossing challenges as information is passed between clock domains, leading to corruption if incorrectly synchronised.

Finding CDC issues is impossible in functional simulation, and they are normally identified using vendors’ timing analysis tools. This means information on CDC is only available at the end of the implementation process, and iterations will require the rerunning of verification and implementation to achieve a corrected implementation.

The ability to identify the Clock Domain Crossing issues prior to the verification stage can save significant time and verification iterations. The Visual Verification Suite provides designers the ability to perform CDC analysis on the source code, identifying CDC issues earlier in the development cycle and saving considerable time in the development process.

In summary, Static Analysis provides developers of FPGA for space applications the ability to reduce development and verification time as it enables entry into functional simulation and implementation with a higher quality of code. If this interests you, please take it for a test drive on your next FPGA.

Are FPGA Vendor Provided Tools All I Need?

Anyone who has designed with the latest breed of FPGAs, complete with scalar, RF, and DSP engines, your choice of hard and soft processors along with custom and standard interfaces, understands why most FPGA projects are behind schedule. In fact, according to a recent Wilson Research study, respondents reported that 68% of FPGA designs are delivered behind schedule.

When talking with design teams about this, the typical response boils down to, ‘we are too busy being productive to be more productive’. Meaning, we have a current flow, we use the FPGA vendor provided tools, and we don’t have the time nor the people to evaluate new tools and methodologies. This said, with today’s FPGA complexity and need for high-reliability designs and systems, this can be a costly mistake.

To convince FPGA development teams that change is needed, EDA companies have come up with cute marketing slogans such as “Shift Left” and “Verify as you Code”. The benefit proposed is if you can catch issues sooner, the faster and less costly they are to fix. While this would seem obvious, it typically takes management’s commitment to high-reliability and streamlined design practices to realize the true benefit of adding additional tools into the flow.

While FPGA vendor provided tools are necessary, by themselves they are not sufficient when it comes to streamlining high-reliability FPGA design. So why adopt a 3rd party Lint tool like Visual Verification Suite’s Analyze RTL? To answer this question, we asked Adam Taylor, Founder and Lead Consultant at Adiuvo Engineering & Training Ltd., for the top 10 reasons his team adopted the Visual Verification Suite for their work with the European Space Agency (ESA) aimed at improving the usability of the ESA soft-core IP.

For background, the ESA soft-core IP (ESA IP portfolio) was developed to promote and consolidate the use of standardized functions, protocols and/or architectures such as SpaceWire, CAN, TMTC, and more. Adam and his team have been reviewing the cores to ensure they are clean of syntax, structural, and clock domain crossing issues.

Here is Adam’s response…

  1. Ease of use – No steep learning curve to using and becoming proficient with the tool.
  2. Focuses in on issues – Provides filtered reports, path-based schematics, and cross probing to quickly find issues and then assign waivers to fix or not.
  3. Design Enablement – Low ‘noise’ text reports provide significant information on the structure of the design to help optimise if necessary – they also help designers understand legacy designs and pre-existing IP blocks.
  4. Find issues earlier in the design cycle – Enter simulation and synthesis with a better quality of code. The later issues are found the more costly they are to fix.
  5. Design Scenarios – Ensure the configuration of generics does not introduce any corner cases when developing IP e.g. one generic resulting in an overflow which is not caught until much later.
  6. FSM viewer – Ensure no illegal/deadlocked/unmapped states are in the FSM – simulation requires you to ask the right question to find them, or worse you find them after hours of simulation, which then must be done again.
  7. Design metrics – Tracking of warnings, errors, and ‘Must Fix / Won’t fix’ waivers over time allows assessment of the maturity of the code and the engineering effort to fix it. This results in more accurate program management estimations as to the state of the design.
  8. Design Sign off – You know required tests were actually run – good for “goods-in” inspection of code as well as to understand the impact of code changes.
  9. Easy creation of custom packages for company design rules – Automates the design review process by enabling design reviews to be consistent and focus on assigning must/won’t fix waivers.
  10. Built in safety packages (DO-254), industry standard checks (STARC), FPGA specific libraries on your choice of Linux and Windows to streamline setup and deployment.

The Visual Verification Suite augments FPGA vendor tools by generating complete timing constraints for false and multicycle paths and reporting on functional design, FSM and clock domain crossing issues that can be fixed before simulation, synthesis and physical implementation, reducing the number of iterations in the flow considerably. To find out how your team can benefit by verifying as they code, request a demo from the Blue Pearl team.

Issue 10: No, Latches are (mostly) not OK in FPGA Design

FPGA designs are increasingly used for high reliability applications such as aerospace, medical, industrial, and automotive. Failure of the FPGA design in these applications could lead to injury, loss of life or environmental disaster. It is therefore critical that the design behaves as we intend and not as the synthesis or implementation tool thinks it should.

Of course, FPGA designs are sequential in nature and use counters, state machines, shift registers, etc. to implement the functionality required. However, to control the next state logic or next counter increment we naturally use combinatorial logic.

The behaviour of the combinatorial logic is described in processes (VHDL) or always blocks (Verilog). The output(s) of the combinatorial block should be a function of the inputs only and not previous inputs. If we fail to fully describe every possible input condition, the synthesis tool will need to implement memory for those missed input states. To implement this memory the synthesis tool will insert a latch, often referred to as an implied latch.

Typical Coding structures which will create latches in Verilog and VHDL are shown below. Note how the combinatorial logic is incomplete.
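
The original code images are not reproduced here, but a representative VHDL version of the pattern looks like this (names are illustrative):

library ieee;
use ieee.std_logic_1164.all;

entity implied_latch is
  port (
    en, d : in  std_logic;
    q     : out std_logic
  );
end entity implied_latch;

architecture rtl of implied_latch is
begin
  process (en, d)
  begin
    if en = '1' then
      q <= d;
    end if;     -- no else branch: when en = '0' nothing says what q should be,
  end process;  -- so synthesis must hold its old value, i.e. an implied latch
  -- Adding "else q <= '0';" (or whatever the intended inactive value is)
  -- fully defines the combinatorial logic and removes the latch.
end architecture rtl;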

When designing for high reliability applications, failure to fully define the output for a set of given input conditions can result in a significant design issue. As such, the implementation of an inadvertent latch in your design is something which should be detected and corrected.

Failure to fully define the behaviour of a combinatorial circuit may lead to unexpected behaviour and differences between synthesis and simulation. Functionally, because a latch is transparent for a potentially longer period than the capture window of a synchronous flip flop, in high reliability designs this increases the possibility of a Single Event Transient (SET) being captured and stored in the latch, further impacting the circuit behaviour.

This can have a significant impact on the probability that a SET will be captured at the end point, compared to a purely combinatorial path feeding a synchronous register.

However, even if your application is not high reliability, we need to consider the other impacts a latch might have on your design.

Timing and placement may become an issue. For example, a Xilinx UltraScale CLB contains 16 storage elements (two per LUT). Within a CLB these 8 LUTs and 16 storage elements are split into a top half (A-D) and a bottom half (E-H). The storage elements in each half can be configured as either registers or latches – thus selecting one latch means the remaining 7 storage elements in that half must be latches or unused. This can influence the timing of the design, because the placement of the latch resource in relation to the registered elements increases the routing path. This can lead to issues achieving timing closure.

In addition, latches may also reduce the ability of the timing engine to accurately determine timing relationships. The timing engine is developed to accurately close timing when a synchronous design style is used. As such, timing paths may be incorrectly analysed and optimised where latches are involved. This could leave latent issues in the design that have yet to arise.

Being able to find inadvertent latch creation early in the design process is critical to entering simulation and synthesis with a better quality of code. This is where Blue Pearl can assist developers, using its structural RTL analysis to quickly identify latches in both Verilog and VHDL. Enabling the LATCH_CREATED check (one of the 300+ structural coding checks available during the load phase) will identify any latches within the design.

Once identified they can be quickly corrected prior to running simulation and then synthesis. Of course, synthesis will report latches have been created in the synthesis report; however, by this time the verification may have been completed. Re-running verification to ensure the behaviour is the same once the latches have been created will be time consuming and costly.

Issue 9: High Reliability FSM and Counter Support

In our journey of exploring the Blue Pearl Visual Verification Suite’s capabilities we will examine how it can help us create better finite state machines (FSMs) and counters.

FSMs are of course the central element of programmable logic designs. This is especially true when we are creating mission critical or high reliability solutions where the use of processors is discouraged or prohibited.

When we are creating state machines for high reliability applications, we want to ensure the state machine cannot inadvertently enter an illegal state. An illegal state is defined as one for which we have not set a behaviour or an entry path from the normal operation of the state machine.

If such an event were to occur in the operation of a mission critical or high reliability system, this could lead to failure of the system as behaviour is undefined. A traditional way to detect unmapped states is to perform a hand review of all FSMs within the design. This can take a considerable effort on large FSMs and can easily lead to errors as it is a manual review step.

I was pleased to see the latest release of the Visual Verification Suite includes new checks which enable us to verify that our state machines and counters are safely implemented.

The new messages are:

  • BPS-1067: State machine expanded to max states.
  • BPS-1068: Counter cannot recover from unreachable states.

The first of these, checking that the state machine uses a power-of-two number of states, is contained within the FSM analysis and is enabled by setting the expand_fsm_to_state_var_size TCL variable to true. This variable can be set via the FSM Analysis Options page in the Design Settings dialog in the GUI, or via a TCL script on the command line.

Setting this TCL variable results in the FSM analysis checking to ensure state machines include a power-of-two number of states. If not, a message is generated, and the FSM diagram is updated to show the missing state.

For example, in a simple FSM which only defines three of four states we get the following messages and diagram in the FSM viewer.

Diagram FSM Viewer

We can see the fourth state is clearly identified as not being correctly implemented in the state machine.

If we are implementing a high reliability system and such a warning is received, we are then able to identify the state machine and update the design to provide protection for the unmapped state.
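
A hedged VHDL reconstruction of the kind of FSM described above, with a 2-bit state register and only three of its four possible codes defined (names and encoding are illustrative):

library ieee;
use ieee.std_logic_1164.all;

entity three_of_four is
  port (
    clk, rst, go : in  std_logic;
    busy         : out std_logic
  );
end entity three_of_four;

architecture rtl of three_of_four is
  signal state : std_logic_vector(1 downto 0);   -- four possible codes
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        state <= "00";
      else
        case state is
          when "00"   => if go = '1' then state <= "01"; end if;
          when "01"   => state <= "10";
          when "10"   => state <= "00";
          when others => null;   -- code "11" is unmapped: if an upset lands the
        end case;                -- register here, the FSM is stuck until reset
      end if;
    end if;
  end process;
  busy <= '0' when state = "00" else '1';
end architecture rtl;

Replacing the null with an explicit recovery assignment, for example when others => state <= "00";, gives the fourth code a defined exit and protects the unmapped state.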

Another area where high reliability designs can run into issues is the checking of counter terminal values. The second new message introduced ensures that instead of checking for “equal to” the desired terminal count, the design checks for “greater than or equal” or “less than or equal”, depending upon the direction of the count.

If a single event upset flips the counter value to one that is beyond the specified terminal count, the check ensures that the counter will not be prevented from completing its action.
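
A minimal sketch of the safe counter idea; the terminal value and names are mine:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity safe_counter is
  port (
    clk, rst : in  std_logic;
    done     : out std_logic
  );
end entity safe_counter;

architecture rtl of safe_counter is
  constant TERMINAL : natural := 9;
  signal   count    : unsigned(3 downto 0);
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        count <= (others => '0');
      -- elsif count = TERMINAL then   -- fragile: an upset pushing the counter
      elsif count >= TERMINAL then     -- past 9 would never match "=", so ">="
        count <= (others => '0');      -- still lets the counter wrap and finish
      else
        count <= count + 1;
      end if;
    end if;
  end process;
  done <= '1' when count >= TERMINAL else '0';
end architecture rtl;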

One way to enable this check on the counters in our design is to use the Load Checks page in the Design Settings dialog in the GUI and search for modulus using the text strip, as shown in the figure. Then, enable the check.

Design Structure

Next time we perform a load, all the counters in our design will be evaluated using the check and any counters which fail will be identified, allowing us to take corrective action.

In summary, these two new messages provide additional visibility of issues in our RTL designs which might impact performance in high reliability applications. They allow us to quickly focus in on counters and state machines which need corrective action to be suitable for the task at hand.

Issue 8: RTL Coding for Space and High Reliability Applications

Thanks to their performance, flexibility and outstanding “any-to-any” interfacing capabilities, FPGAs are incredibly popular in high reliability applications. They are often used in space systems such as satellites, launchers, aerospace, and autonomous vehicles. In these high reliability applications, depending upon the use case, the system must be able to keep operating or gracefully and safely fail, should an error occur.

Developing high reliability FPGAs is a complex task which must be considered holistically as part of the wider system. One common step deployed in high reliability flows is the enforcement of RTL development guidelines. These guidelines will contain rules which ensure key elements of the design are set up correctly, such as:

  • All flip flops are reset – Relying on the initialised state at power up can lead to unpredictable behaviour which can result in loss of the mission. One example of this was the NASA Wide-Field Infrared Explorer, lost due to reliance on the power-on default state.
  • Vectors are of correct sizes and types – This ensures overflow or incorrect translation does not occur, as was the case with the initial Ariane 5 launch, in which a 64-bit floating-point number was converted to a 16-bit signed integer.
  • State machines and counters cannot enter an incorrect state due to single event effects (SEEs)
  • Clock domain interactions are safe and data crosses without the potential for corruption.
  • Naming conventions and coding styles are complied with to ensure readability and portability across projects.
  • Coding structures which are likely to cause safety issues are not used. For example, latches, incompletely defined if statements, gated clocks, etc.
  • Mismatches between simulation and synthesis interpretations of RTL are avoided.

EXAMPLE OF BAD CODING:

One example of a bad coding structure is demonstrated below in a state machine reset example. While technically correct, the state machine relies upon the position of the state declaration for the reset state. Changes made later to the state machine may result in a change to the behavior. If left as is, this could cause system issues as well.
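
The original code image is not reproduced here, but the pattern described can be sketched in VHDL roughly as follows (names are illustrative):

library ieee;
use ieee.std_logic_1164.all;

entity positional_reset is
  port (
    clk, rst, go : in  std_logic;
    busy         : out std_logic
  );
end entity positional_reset;

architecture rtl of positional_reset is
  type state_t is (IDLE, RUN, FLUSH);   -- IDLE happens to be declared first
  signal state : state_t;
begin
  process (clk, rst)
  begin
    if rst = '1' then
      state <= state_t'left;   -- "reset to whichever state is declared first":
                               -- technically correct today, but reordering or
                               -- extending the type silently changes the reset
                               -- state. Writing "state <= IDLE;" makes the
                               -- intent explicit and robust.
    elsif rising_edge(clk) then
      case state is
        when IDLE  => if go = '1' then state <= RUN; end if;
        when RUN   => state <= FLUSH;
        when FLUSH => state <= IDLE;
      end case;
    end if;
  end process;
  busy <= '0' when state = IDLE else '1';
end architecture rtl;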

So how would you check for coding issues like this? There are several different coding standards which can be used. Examples include the DO-254 working group and Centre National d’Études Spatiales (CNES) VHDL coding standards.

Mapping of DO-254 VHDL working group coding practice rules to BPS Checks

Mapping of DO-254 VHDL working group safe synthesis rules to BPS Checks

BUILT IN AND CUSTOM RULE PACKAGES:

Blue Pearl’s Visual Verification Suite provides several hundred built-in checks which can be used to validate high-reliability guidelines. Some rule checks, like the DO-254 coding standards, are already implemented within the suite as a package.

Alternatively, if a different guideline standard is to be used, e.g. CNES, a separate package can be implemented by defining a custom package. This custom package takes the form of a TCL file and can define a custom BPS package.

Defining a Custom Rule Set

Custom Rule Set Loaded into BluePearl

Once the packages are defined, they can be used to check the RTL source files. The maturity of the design can be assessed by tracking the warnings over time. This capability is provided by the suite’s Management Dashboard. Using the dashboard, you can see the number of errors and warnings received as the development progresses. Of course, this should be trending down over time as the issues are addressed.

Using the Management Dashboard in this manner also provides metrics on the impact of requirement changes as the development progresses. In the image below you can see the initial development errors and warnings being corrected before a change of requirements leads to additional increased effort. Metrics such as this are very useful for project management and change impact assessment.

Image 7

Blue Pearl solutions provide developers of high reliability systems the ability to verify and debug their FPGAs prior to implementation, leveraging automated industry standard verification rules and guidelines. Only with such analysis can designers ensure the RTL which enters synthesis and simulation is of the highest quality. In addition, the Management Dashboard enables the maturity of the RTL development to be established, providing great metrics on the project for the project management team.

Issue 7: Five Capabilities You Might Not Know About Visual Verification Suite

Blue Pearl’s Visual Verification Suite is well known for its Lint, Clock Domain Crossing Analysis and Design Management capabilities. However, there are several capabilities in the tool which are easily overlooked but nevertheless provide the designer with significant benefits; let’s look at five of these.

  1. Text Reports – Along with the linting messages which result from the structural analysis checks, the suite also enables designers to generate additional text reports on the design. These text reports provide significant detail on the object structure, design resources, generic and parameter settings, and so on about the design. This information is particularly useful if you are trying to understand a legacy design, especially when combined with the dependency viewer. However, the usefulness of the text reports is not limited to information which helps us understand the design. It also includes information which can help us gain better performance in the final design, one example being the IF/THEN/ELSE length report. Understanding the IF/THEN/ELSE depth enables the designer to identify areas of the design where optimizations can be made to ensure timing performance is achieved.

    Enabling Text Reports

    IF/THEN/ELSE Depth
  2. Schematic Viewer – The schematic viewer is ideal when we are trying to isolate issues raised in the linting structural checks, CDC and SDC analysis. The schematic viewer can also be used to visualize a legacy design to understand the interconnection between modules and most importantly clocking and reset structure.
    Clock Tree View

    Reset Tree View
    Within the schematics viewer we are also able to cross probe to the RTL, highlight nets and trace cones of logic to help understand the design and track down issues.

    Schematic Actions for a selected element
  3. Path Analysis – Timing closure is one of the most time-consuming elements of the logic design process. Often, when using programmable logic, we need to wait for the vendor design tool to complete its implementation before we know of any timing issues. This, of course, can take several hours and is often an iterative process. One of the key design structures which impacts timing performance is the path length, that is, the number of logic elements between flip flops. The Visual Verification Suite’s Analyze RTL tool elaborates the RTL design and indicates the long paths between flip flops. This allows the designer to act and correct issues before implementation. Using such analysis can save the design engineer significant time in the implementation stages of the project.
    Path Analysis between two registers
  4. Design Scenarios – When we design our RTL modules, we want them to be as reusable as possible, saving design time. To make our designs as flexible as possible we often use generics or parameters in our RTL to enable different final implementations of the IP core in the FPGA or ASIC. Within the suite, design scenarios enable us to create solutions with different settings of the generics and parameters within our designs.
    Using design scenarios enables us to ensure that changes to the generics/parameters do not result in additional or new violations of the enabled packages and checks, or indeed in outright errors.
  5. Analysis Scenarios – Similar to the design scenarios, analysis scenarios allow us to change the settings currently used by Analyze. The benefit of changing Analyze settings in this manner is that the original settings remain unchanged while we observe the impact of making the changes to the checks. If they are accepted, we can add them into the main settings. Alternatively, we can use scenarios to run different checks on files depending upon the required level of check.

    Both the Design and Analysis scenarios, once completed, create a new option under the design scenarios menu which can be used to enable that scenario. If you need to change the configuration of the scenario once it is enabled, just use the “design settings” as we would do for a normal scenario.


    Hopefully, you can see the benefits these five capabilities of Blue Pearl Software’s Visual Verification Suite can bring to your ASIC and FPGA verification.