Your Innovation, Your Community
April 20-22, 2021
Chairman and co-Chief Executive Officer
Synopsys
Tuesday, April 20
9 AM PT
Fellow
Intel Corporation
Wednesday, April 21
9:10 AM PT
Head of Computing Platform and Semiconductors
CARIAD, a Volkswagen Group Company
Thursday, April 22
9:05 AM PT
For your convenience, the SNUG World agenda is arranged by day and track. Select a date tab to see all the track offerings for that day. Select the individual track tab to see the detailed sessions in each track to plan your time at SNUG World. Please check back regularly for updates to the agenda. All times are listed in Pacific Time (PT).
*Denotes user content reviewed by the SNUG World Technical Committee
Can a wafer-sized SoC outperform some of the fastest supercomputers by 200X? If it’s from Cerebras, the answer is YES. Cerebras’ innovative Wafer Scale Engine is designed to provide the parallel processing performance that next-generation data centers need to accelerate deep learning and reinvent artificial intelligence. In this presentation, Cerebras will reveal the compute, memory, and bandwidth challenges of bringing an AI chip of this scale to market, and how the company overcame them with Synopsys’ design solution for AI chips, focusing on DesignWare IP, Verification, Digital Design, and Silicon Lifecycle Management.
The demand for neural network processing is driving SoC hardware innovation across all market segments. These demands bring a new set of IP requirements unique to each segment, including new processors, higher-bandwidth memories, high-speed interconnect, and optimized architectural configurations. Constantly evolving next-generation neural networks place unique additional demands over and above the standard PPA needs of traditional chipset hardware. This presentation will describe how a leading AI SoC customer supports emerging requirements for fast-changing neural networks. Attendees will learn from successful implementations how IP, IP tools, and design services can enable more competitive, higher-performance SoC designs while minimizing time-to-market.
Hardware ASIC / system safety is a key enabler of the overall robustness and dependability of autonomous vehicle architectures. In developing functional safety that addresses hardware random failures and systematic failures, it is also crucial to consider cybersecurity and safety of the intended functionality (SOTIF). In this keynote, we’ll cover NVIDIA’s vision for functional safety, as well as our work with various international governing standards and regulations. The talk will also include an overview of the IEEE P2851 standard, which aims to provide an exchangeable and interoperable format for safety analysis and safety verification activities at the IP, SoC, and system levels. The standard addresses interoperability challenges of dependable systems covering functional safety, SOTIF, cybersecurity, and other characteristics such as reliability, maintainability, and real-time behavior.
Convolutional Neural Networks (CNNs) are widely adopted for various computer vision tasks and are gaining traction in the automotive industry. Front-view cameras, for example, may rely on CNNs for semantic segmentation and object detection, and typically require highly performant and efficient CNN accelerators. The presentation will give a brief overview of CNNs in general, explain the use of CNNs for semantic segmentation and object detection, and describe some CNNs designed for these tasks. It will discuss the challenges of embedding a CNN accelerator in an SoC and of providing a software tool chain that enables the full performance and efficiency of the CNN accelerator. It will describe the performance results generated by Bosch using Synopsys' DesignWare ARC EV processor and CNN engine with 3520 MACs.
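As a back-of-the-envelope illustration of what a 3520-MAC engine implies for peak throughput: each MAC performs two operations per cycle, so peak TOPS scales linearly with the MAC count and clock frequency. The clock frequency below is a hypothetical value for the sketch, not a Bosch or Synopsys figure.

```python
def peak_tops(num_macs: int, clock_ghz: float) -> float:
    """Peak throughput in TOPS: each MAC performs 2 ops
    (one multiply + one accumulate) per clock cycle."""
    return 2 * num_macs * clock_ghz * 1e9 / 1e12

# 3520 MACs at an assumed 1.0 GHz clock:
print(peak_tops(3520, 1.0))  # 7.04 TOPS peak; utilization on real layers is lower
```

Real-world throughput depends on layer shapes, memory bandwidth, and scheduler efficiency, which is why the abstract emphasizes the software tool chain alongside the raw MAC count.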
At higher levels of autonomy, the autonomous driving (AD) compute solution will become more centralized to rapidly fuse multimodal sensor data and safely choose and initiate self-driving actions. Complex automotive SoCs with purpose-built processors for increased computational efficiency require automotive-grade IP and ISO 26262-certified safety-aware design, hardware, and software verification solutions to meet strict design targets and achieve the target ASIL. Before silicon is available, early development and testing of safe and secure software relies on virtualized hardware. Synopsys partners with automotive market leaders and new entrants to accelerate development of safety-critical SoCs. In this session, we present Synopsys’ safety-aware expertise, IP, and tool solutions to help reduce cost, alleviate risk, and accelerate go-to-market.
Engineering an autonomous driving (AD) system requires a thorough understanding of the dynamic interaction between driver, vehicle, and environment. A dynamic model at the driving level enables design of an E/E network architecture, with E/E systems as acting nodes, that meets customer expectations. In this presentation we discuss why the success of the SoC design architecture is highly dependent on the dynamic model of the ADAS/AD feature at the driving level. Key requirements include assigning the appropriate sense, control, and actuation functions to the SoC, laying out the SoC functions with sufficient performance (SOTIF), avoiding and mitigating random and systematic faults (functional safety), and preventing malevolent intrusion into the SoC and from there into the whole E/E network (security).
Automobiles must operate in a safe, reliable, and secure manner, especially in next-generation autonomous driving and advanced driver-assistance systems (ADAS) applications. E/E systems in such vehicles, including SoC designs, should comply with the ISO 26262 standard to achieve functional safety, which is specified by an automotive safety integrity level (ASIL). Samsung Foundry will present how its automotive reference flow uses Synopsys’ comprehensive automotive solutions to help customers meet their target ASILs.
Enabling EDA workloads in the public cloud creates flexibility in engineering execution during compute-demand peaks by allowing key workloads to run in hybrid-cloud mode. Static Timing Analysis (STA) is one of the most compute- and memory-intensive workflows for peak shaving, so Qualcomm worked with Synopsys R&D to enable this capability in AWS. Among the challenges: STA is complex to execute and resides at the tail end of the design cycle, leaving little tolerance for delay. In summary, EDA workloads in the cloud have demonstrated equal or better performance in AWS vs. on-premises.
Synopsys Testcase Packager (STP) is a next-generation, application-agnostic, zero-integration testcase packaging technology for all Synopsys products. EDA environments are highly complex, with many dependencies, and undergo a high frequency of unregulated changes. STP has significantly improved productivity by automatically capturing a complete customer testcase and reproducing it in a remote environment. EDA workloads typically have large data sizes, which makes uploading to the cloud impractical during peak usage (e.g., 10,000+ cores). STP solves this with "EDA Data Environment" synchronization: data is pre-populated and continually auto-synchronized between on-premises and cloud storage for on-demand execution.
IBM and Synopsys have been working together to investigate the advantages and trade-offs of migrating high-performance EDA applications to the cloud. In this work, we describe the incorporation of Synopsys IC Validator tools into a digital design flow enabled on a hybrid cloud infrastructure. Large chips require significant compute resources for design and data validation. Our emphasis in this work is on reducing process time by running IC Validator applications in Kubernetes-based containers. Virtual machines are configured with defined CPU and memory requirements. Runtime improvements are realized by taking advantage of IC Validator's distributed and multi-threaded capabilities. An advantage of Kubernetes is that containers can be ported to any cloud environment. We discuss the advantages and disadvantages of running applications using on-premises and hybrid cloud models.
We have been investing in technologies to accelerate our product enablement on the public cloud. These include job distribution, scalability, elasticity, data transfer, containerization, and more. We explored containerization because this technology provides many benefits, such as isolation from host dependencies, bundling all application dependencies into a single package, and lowering overhead compared to virtual machines. In this talk, we will present our journey in enabling the framework for container support for our products both on-prem and in the cloud with the leading HPC and native container solutions, namely Singularity and Docker.
Ever-increasing design sizes and complex technology requirements at advanced process nodes drive the demand for enormous computing power in the latest SoC designs to meet tight time-to-market requirements; the cloud has become a must. To provide the shortest path to the cloud for our customers, Samsung Foundry has been working with cloud partners through SAFE (Samsung Advanced Foundry Ecosystem) and developed SAFE-CDP (SAFE Cloud Design Platform), a virtual chip design environment on the cloud. SAFE-CDP includes everything needed for cloud-based design, and we went even further by running extensive experiments on the cloud to optimize EDA workloads for it. In this presentation, we share our practice of building a cloud-friendly environment for Synopsys IC Validator and FineSim by utilizing elastic CPU technology and GPU-based acceleration, respectively.
Cloud provides a unique opportunity to access hundreds of cores, but moving EDA workloads to the cloud presents a unique set of challenges and considerations. In this presentation we will explore solutions to those challenges by demonstrating how to seamlessly scale formal verification using Scale-Out Computing on AWS (SOCA). VC Formal on AWS SOCA provides a unique solution for shift-left verification with better performance and convergence.
Learn how the latest advancements in circuit simulation can help you reduce the turnaround time of your analog/mixed-signal and memory designs while improving the quality of results.
Discussion on the latest compute technologies to enable further speedup in simulation turnaround time
Integrated-circuit simulation with SPICE (Simulation Program with Integrated Circuit Emphasis) has benefitted profoundly from multi-core parallel compute technologies in the last decade, achieving a five-to-ten-times runtime improvement. As circuit complexity and size grow to hundreds of millions of components, however, this approach can no longer address day-long or even week-long simulation challenges, owing to architectural limitations and Amdahl’s law.
In this work, we present a practical solution that achieves an order-of-magnitude speedup in simulation turnaround time, powered by NVIDIA graphics processing units. In particular, we have developed advanced computational algorithms, techniques, and a heterogeneous compute-management system to make the best possible use of compute resources. This enables massive and efficient parallelism and can process within hours the trillions of double-precision floating-point operations on sparse linear-algebraic matrix systems typical of today’s IC simulation.
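The ceiling imposed by Amdahl’s law, cited above as a limit of the multi-core approach, can be made concrete with a minimal sketch. The 90% parallel fraction below is an illustrative assumption, not a measured property of any SPICE engine.

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Amdahl's law: overall speedup when only part of the
    workload parallelizes across `workers` compute units."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# If 90% of a simulation parallelizes:
print(round(amdahl_speedup(0.9, 8), 2))     # 4.71x on 8 cores
print(round(amdahl_speedup(0.9, 10**6), 2)) # ceiling near 10x, regardless of core count
```

This is why simply adding CPU cores saturates: the residual serial fraction dominates, motivating the GPU approach that restructures the algorithms themselves rather than adding more of the same workers.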
Evaluation of GPU technology on circuit simulation based on Samsung Foundry AMS Design IP.
There are many solutions in the market for schematic-based simulation environments, but none for netlists. Synopsys Cockpit fills this gap and is flexible and feature-rich. The tool helps analog and mixed-signal design engineers run simulations in a highly interactive way, reducing manual errors, and its tight integration with the waveform and calculator tools makes debug very easy. The tool also has a rich collection of Tcl APIs, making it possible to integrate internal tools, data-mine your design, and perform other complicated tasks. This presentation will go over the key reasons why we had to pick a netlist-based solution and how we were able to leverage the Tcl APIs to automate key tasks and boost the productivity of design engineers.
Memory real estate is continuously increasing, reaching more than 80% of present-day SoCs. In automotive SoCs, memory IP is used for applications ranging from ADAS to navigation and infotainment. SoCs designed for life-critical applications like ADAS (ASIL-D category in the ISO 26262 standard) go through rigorous functional safety checks, and FMEDA (Failure Modes, Effects, and Diagnostic Analysis) becomes a necessary step to systematically predict the failure rate of all IP used in such subsystems. Among the various objectives of FMEDA, in this paper we focus on the requirement for fault analysis and discuss how we use TestMAX CustomFault to perform fault analysis to ensure the functional safety of our embedded memory IP for the automotive market.
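To make the FMEDA objective concrete: one of its outputs is the single-point fault metric (SPFM) defined in ISO 26262-5, which must reach at least 99% for ASIL D. A minimal sketch of the computation follows; the FIT rates are made-up illustrative numbers, not data from any real memory IP.

```python
def spfm(failure_modes):
    """Single-Point Fault Metric (ISO 26262-5).
    failure_modes: list of (total_fit, residual_fit) pairs, where
    residual_fit is the failure rate NOT covered by a safety mechanism.
    FIT = failures per 1e9 device-hours."""
    total = sum(t for t, _ in failure_modes)
    residual = sum(r for _, r in failure_modes)
    return 1.0 - residual / total

# Hypothetical memory-IP failure modes with high diagnostic coverage:
modes = [(100.0, 0.5), (50.0, 0.2)]
print(spfm(modes) >= 0.99)  # True: meets the ASIL D SPFM target
```

Fault-analysis tools feed this computation by determining, per fault location, how much of the failure rate a safety mechanism (e.g., ECC on the memory array) actually detects.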
Achieving PPA goals, reaching timing closure, and meeting project schedules continue to be the Top-3 challenges of the design implementation flow, according to Synopsys users. As designs spiral in both their size and complexity, encompassing ever more functionality into a single system-on-chip (SoC), it grows increasingly challenging to debug, optimize and implement these designs while still efficiently meeting project timelines and the overarching end-product goals. In this session, Synopsys and industry leaders will present new solutions to accelerate design throughput and realize superior QoR while also improving design and project efficiency.
In this session, Arm and Synopsys experts will highlight the best practices, new methodologies, and enabling technologies that are delivering the industry’s leading power, performance, and area (PPA) for the latest Arm cores. You will learn about the new capabilities of Synopsys’ Fusion Design Platform being developed and deployed in close collaboration with Arm to enable optimized implementation of these cores to address performance goals for the next generation of smartphones, laptops, hyper-scale cloud computing, 5G, and edge designs. Key technologies that will be discussed include: physically aware RTL restructuring with RTL Architect, signoff-driven ECO closure with PrimeECO, in-design power integrity analysis and fixing with RedHawk Analysis Fusion, H-trees with MSCTS, hierarchical design, and placement-attraction methodologies.
This presentation will explain why looking at power analysis holistically, using workloads running on the full SoC, opens new opportunities for design teams to increase the competitiveness of their products and reduce project risk. We will introduce breakthrough emulation technology that enables multiple analysis iterations per day.
PSS is used to capture functional intent, which can then be reused regardless of the verification stage. The functional intent is represented as a stimulus model written in the PSS language. This tutorial will show how the Synopsys solution enables porting stimulus to many different environments from a single source, generating many intelligent test cases from a succinct model, and creating coverage-driven, system-level test cases that target bugs which are difficult to detect at the system level.
Introducing Euclide, an IDE with on-the-fly design and testbench checks, highlighting integration with Verdi, VCS, and ZeBu for improved productivity, performance, and compatibility.
This presentation is on improving verification productivity by unifying and automating Verification Continuum flows. We will show the latest advancements and results from the natively integrated VC Execution Manager solution.
See how VCS's new Intelligent Coverage Optimization and Dynamic Performance Optimization technologies deliver a unique ability to improve verification efficiency and productivity.
Increasing design and manufacturing complexities at advanced nodes (7nm, 5nm, and below) pose significant challenges for physical verification engineers seeking on-time design closure. IC Validator continues to innovate in scalable performance, high productivity, and robust debugging. In this presentation, TSMC discusses its partnership with Synopsys on IC Validator technology collaboration and tool enablement for TSMC’s advanced process nodes. Synopsys provides an overview of the latest IC Validator innovations for physical verification productivity and how to deploy these technologies to achieve faster physical verification closure.
IC Validator Explorer LVS provides a fast and automated way to find root causes of the early full-chip LVS issues. By swiftly detecting design issues, Explorer LVS delivers results up to 30x faster than a traditional LVS flow, enabling more frequent and shorter iterations of running/debugging/fixing design issues and eventually faster LVS sign-off closure.
With increasing DRC complexity at 7nm and large design sizes, physical verification turnaround time has become a key challenge for delivering tape-outs on schedule. In this presentation, Achronix discusses its physical verification methodology with IC Validator and how this methodology was successfully deployed on its latest designs to enhance full-flow productivity and accelerate design closure.
IC Validator Workbench is a must-have utility for the IC Validator physical verification flow. It enables efficient viewing and editing of layout databases: quickly open a layout database and access its graphical data for fast review and editing, compare layout databases, and automate repetitive tasks using standard scripting languages (Tcl or Python). IC Validator Workbench can also be used to efficiently merge multiple database files into your complete design for chip finishing. Additionally, the full array of IC Validator tools is integrated and accessible in the IC Validator Workbench environment, making design verification easy to accomplish.
We will explain why looking at power analysis holistically, using workloads running on the full SoC, opens new opportunities for design teams to increase the competitiveness of their products and reduce project risk. We will introduce breakthrough emulation technology that enables multiple analysis iterations per day.
Instance counts in the netlists of full-chip graphics IP have increased exponentially in recent programs, posing roadblocks to multi-voltage (MV) verification signoff. At the same time, tighter schedules have required design teams to reduce iterations and the time available for verifying the design. A full-blown chip-level netlist either does not load into the VC LP tool or incurs unreasonable verification run times. Black-boxing placed-and-routed blocks in the netlist leads to critical verification coverage losses, creating a need for scripted checks and thereby resulting in a delayed, low-confidence MV signoff. This article discusses Static Abstract Modelling (SAM)-based chip-level MV verification using VC LP, which successfully handles the large size of the design (no tool capacity issues) without compromising MV signoff quality. The approach is based on retaining within the chip-level netlist only the logic gates and connectivity required for verification. Using SAM in VC LP, we were able to demonstrate an MV signoff that is at least three weeks faster and highly reliable. In our tests, SAM-based VC LP simultaneously increases verification coverage by up to 9% with respect to our baseline runs, eliminates the need for any external MV checks, and improves debug efficiency by at least 25%. The SAM-based methodology is becoming the new norm in MV verification, especially for growing designs.
Application of Zero Trust principles across the microelectronics lifecycle has the potential to raise confidence in the components available for implementation in safety-critical systems that have an ever-increasing role in society. However, since Dr. Lisa Porter (DUSD(R&E)) advocated for a new “Zero Trust” paradigm for securing microelectronics at DARPA ERI in 2019, the hardware community has raised concerns regarding the meaning and application of Zero Trust principles for hardware. This presentation returns to the first principles and foundational tenets of Zero Trust established by the cybersecurity community and provides an initial mapping for their application to stages of the hardware lifecycle with the objective of outlining how application of security controls at the appropriate points can uplift the confidence in implemented microelectronics.
This talk will address where common vulnerabilities are introduced during the design of an SoC and how good practices can improve security assurance when designing logic, integrating IP, or performing pre-silicon verification.
Some custom VLSI technology is approaching 40 years of age. End-of-life buys, obsolete technology, or destroyed mask sets may make buying new parts impossible. Luckily, FPGA technology has become affordable, fast, and large enough to reproduce most VLSI designs, or even boards, from 20+ years ago. In this paper I’ll introduce a flow for re-implementing a legacy design in an FPGA.
Fault-injection attacks have become a major concern for hardware designs, primarily due to their powerful capability to tamper with critical locations in a device and violate its integrity, confidentiality, and availability. Researchers have proposed a number of physical and architectural countermeasures against fault-injection attacks; however, these techniques usually come with large overhead and design effort, making them difficult to use in practice. In addition, current electronic design automation (EDA) tools are not fully equipped to support vulnerability assessment against fault-injection attacks at design time to avoid tedious manual design review. In this paper, we propose an automated framework for fault-injection vulnerability assessment of designs at the gate level using Synopsys Z01X, while considering design-specific security properties using novel models and metrics. Our experimental results on the security properties of AES, RSA, and SHA implementations show that the security threat from fault-injection attacks can be significantly mitigated by protecting the identified critical locations, which amount to less than 0.6% of the design.
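The core idea of ranking netlist locations by how often a flipped value corrupts an output can be sketched on a toy three-gate circuit. This is a simplified Monte-Carlo illustration under assumed fault sites, not the Z01X flow or the paper’s actual framework.

```python
import random

def toy_netlist(a, b, c, flip=None):
    """Three-gate toy circuit; optionally flip one net
    to model a transient fault at that location."""
    n1 = (a & b) ^ (flip == "n1")       # AND gate
    n2 = (n1 ^ c) ^ (flip == "n2")      # XOR gate
    return (n1 | n2) ^ (flip == "out")  # OR gate driving the output

def fault_sensitivity(site, trials=10_000, seed=1):
    """Fraction of random input vectors for which a fault
    at `site` changes the circuit output."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b, c = (rng.randint(0, 1) for _ in range(3))
        if toy_netlist(a, b, c, flip=site) != toy_netlist(a, b, c):
            hits += 1
    return hits / trials

# Rank candidate locations by vulnerability; the most sensitive
# ones would be the first candidates for protection.
for site in ("n1", "n2", "out"):
    print(site, fault_sensitivity(site))
```

A real gate-level assessment replaces random sampling with fault simulation against formal security properties, but the ranking principle — protect the small fraction of locations with the highest impact — is the same one the 0.6% result reflects.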
The microelectronics landscape is rapidly changing for the DoD, as integrated circuit and system design complexities and hardware assurance requirements increase owing to high-performance mission critical requirements and the need to protect sensitive data. The DoD data center has unique requirements uncommon to the commercial world. Corporate suppliers deal primarily with relatively static projects for hardware emulation configuration before switching to a new project and new data center configuration. The low project/configuration change velocity is a benefit for corporate solutions that the DoD heterogeneous user/project base cannot leverage. The successful DoD data center must address the more complicated challenges associated with a very high change velocity. This atypical aggressive use model delivers the DoD significant economic advantages, though at the expense of up-front implementation cost. For current and future DoD requirements, this paper outlines decisions and processes that will provide the DoD with improved performance and higher reliability at lower cost than current best practices allow.
Keywords: Trust, Enterprise, Data Center, Simulation Acceleration, Hardware Emulation, Verification, Hardware Assurance
Security hardening during the design creation flow has traditionally been very challenging. Every design group seems to have a unique flow, and every application has differing security requirements. These factors often drive increases in the power and area and decreases in the performance of the integrated circuit. There are multiple threats and many types of defenses already existing and new threats coming daily. How can developers expose and mitigate vulnerabilities during design creation while still considering competing design requirements? What steps are necessary early in the design flow to ensure proper chip provenance, authentication, provisioning, test, debug, and threat detection post tape-out? This tutorial presents how Synopsys, under a DARPA-sponsored program, is addressing these challenges.
ECO closure is a major bottleneck for design closure and can result in tapeout delays due to the unpredictable nature of late-stage ECO changes during the signoff stage of the design. In this presentation we will share a methodology and practical results showing how we reduced our ECO closure cycle by 50% by deploying Synopsys’ signoff-driven ECO closure solution.
In this work, PrimeTime's new dynamic voltage drop (DVD) analysis has been validated. The new PrimeTime method directly reads RedHawk-SC's DVD report and estimates its impact on timing. We show that the new method correlates well on a quad-core Arm Cortex-A53 chip implemented in Samsung's LN10LPP process.
The dramatic rise in design size and complexity has led to a slew of signoff challenges that affect designers’ ability to meet turnaround-time targets. In this tutorial, major advances in PrimeTime to address these challenges will be reviewed. We will cover new techniques to improve productivity, such as reducing report runtime bottlenecks, managing the number of corners in multi-voltage designs, and reducing the runtime and memory requirements of hierarchical designs.
Characterization is a compute-intensive exercise, and the demands are growing by the day to capture more views in more accurate forms. Moments-based LVF, high-sigma accuracy requirements, and EM are a few examples of views emerging in more accurate forms, pushing the demand for compute through the roof when factoring in shrinking time-to-market requirements. While everyone is working hard to innovate to address these challenges, we at Arm, in collaboration with Synopsys, present our contribution by marrying two concepts: cloud computing and Arm-core execution. Cloud computing offers amazing scalability with the right mix of software and configuration, thereby helping us meet the compute demands emanating from our time-to-market requirements. However, there is a cost associated, and it must beat the incumbent on-premises approach to turn heads.
This is where Arm-based AWS Graviton2 processors come into the picture: they are not only fast but also cost-effective, so execution on Arm-powered cores, instead of conventional x86-powered cores, is the magic wand that helps us achieve our objectives. This paper describes how we power our characterizations with SiliconSmart on AWS using Arm Neoverse-based Graviton2 instances.
Library characterization has typically been a long pole in the design cycle. With ever more demand for accuracy at lower technology nodes, the Liberty format has evolved, and its more intricate data helps static timing/power analysis and place-and-route tools reduce signoff pessimism. This comes at the cost of simulating more data. Delivering it in a timely manner requires highly distributed, massive computing resources, and scaling those resources demands balance among the handling of the hardware, the handshake with the resource-management system, and the application being executed. Using a parallel distribution technique, systematic partitioning of the data systems, and a license check-out scheme, we were able to scale our library characterization from a typical compute farm of 30k CPUs to 120k CPUs, allowing us to quadruple throughput when additional characterizations are needed on demand.
High performance, high accuracy, and high reliability are the cornerstones of lower-technology-node characterization, and libraries are the solid base of any successful chip tapeout. With our next-generation characterization product, we aim to achieve all of these goals and more. The product is fully backward compatible with SiliconSmart ADV, offering a seamless upgrade path, and embeds support for the next-generation simulator product. This tutorial will introduce the next generation of the characterization product, all its new offerings, and an exciting roadmap of upcoming features.
Under pressure to meet design schedules, design-for-test (DFT) engineers and teams must quickly architect, implement, and validate increasingly complex DFT logic. The complexity continues to rise as the challenges of meeting manufacturing test quality and cost goals for many newer-generation designs are met by using sophisticated test techniques. This session will provide an overview and relevant details of Synopsys TestMAX Manager and new flow guidance to enable easy implementation of major RTL DFT components such as compression logic, logic BIST, memory BIST, and access networks. In addition, connections to DFT functionality validation will be covered, as well as connections to synthesis-based test for lower-level, essential DFT (for example, scan chains), accelerating the entire DFT effort.
The TestMAX Manager flow was used in an Intel SoC project for design-for-test (DFT) implementation, providing a highly automated process to insert scan IP at the RTL level as well as to generate design constraints that were passed to synthesis and physical design integration. This Intel SoC design was one of the pilot projects to adopt a complete TestMAX Manager flow from RTL to GDS, and it demonstrates a large-scale design with a complex design structure. Though the TestMAX Manager flow provides integrated scan IP insertion, scan synthesis, and timing constraints, significant design-specific modifications were needed to overcome the challenges caused by a complicated clock-tree design. This presentation provides an in-depth analysis of the on-chip clock (OCC) controller and clock structure. It also gives a practical static timing analysis (STA) scan-constraint guideline that can be referenced in other similar designs.
Design-for-test solutions need increased capabilities as device complexity increases and semiconductor integration changes. Large, complex devices at advanced process nodes require more rigorous testing methods and more test content. Test compression provides a method to deliver more test content through a limited number of tester channels. Using a hybrid system that supports deterministic test-pattern generation and pseudorandom pattern application, it is possible to make better tradeoffs between coverage, test data, and test time. The combination of TestMAX DFT and TestMAX XLBIST comprises a hybrid solution that scales to support testing over a range of access methods, from a low-pin-count test access port (JTAG) to wide input/output connections. It reduces test time by supporting the delivery of fast external data and matching internal scan speed to higher I/O speeds.
Intel is working to replace an internally developed CPU-centric pattern conversion methodology with a more streamlined approach based on a widely adopted third-party tool, while still meeting the needs of different business segments. Intel partnered with an EDA vendor to drive enhancements to the pattern conversion methodology while managing total cost of ownership. This allowed for standardization and simplified conversion of STIL patterns. The new pattern conversion methodology, with specific customization, maintained the requirements for high volume manufacturing within Intel’s ecosystem.
Design complexity and size continue to grow, driving the need for hierarchical test: partitioning designs into smaller parts makes design-for-test (DFT) and ATPG manageable and decreases test bandwidth needs.
Burn-in test is also required to detect early-life failures by operating the product under extreme temperature conditions for longer than manufacturing test. Burn-in usually requires high parallelization to minimize test cost, with fewer channels available than in manufacturing test.
ST Microelectronics’ requirements for hierarchical test and burn-in are met with the introduction of a test fabric, a new Synopsys technology.
The test fabric helped meet hierarchical test requirements by supporting pattern porting in two different modes: parallel for manufacturing test and serial for burn-in test. ATPG patterns generated at the partition level can be ported to the top and re-used for both manufacturing and burn-in test.
In this presentation, the application of the test fabric is covered, and experimental data from DFT and ATPG are shared.
For today’s high-performance ASICs, designing on leading process nodes with short execution schedules while achieving the best possible PPA is essential for developing competitive products. Achieving predictable convergence for RTL-to-GDS design using APR tools depends heavily on the quality of design input collateral and is generally susceptible to changes in RTL, floorplan, stdcell/EBB libraries, design constraints, etc. Exploring APR convergence recipe changes for each change in design or process collateral to keep maximizing PPA becomes a challenge, because isolating the effect of each change and tuning APR optimization for PPA is non-trivial. The problem gets harder for multi-million-instance SoCs/IPs developed on leading process nodes, including new SoC/IP architectures where RTL and floorplan changes are large and constant. This presentation focuses on the DSO.ai solution, which uses ML/AI techniques to simplify exploration of the design optimization search space. DSO.ai works in conjunction with APR implementation tools such as Fusion Compiler and provides all the capabilities needed to permute tool options, design constraints, and design-specific optimization strategy/recipe variations, learning from each APR run to predict and optimize settings toward the target PPA goals. The learning process is continuous over multiple runs and iterations, which helps in studying the effect of each parameter, refining settings to improve PPA, and driving toward out-of-the-box convergence. We present a case study focusing on PPA improvement for representative designs from a graphics-IP implementation in an SoC. Using DSO.ai, we were able to reduce the total power of these representative designs by up to 10% without negatively affecting other metrics. We also discuss potential use models for DSO.ai in the APR implementation cycle to achieve auto-convergence, and conclude with recommendations to maximize the PPA benefit of DSO.ai.
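At its core, this kind of design-space optimization is a search loop over tool knobs, scored by the PPA of each trial run. The sketch below illustrates only that outer loop, under loudly stated assumptions: the knob names, the `mock_ppa` cost function, and random search are all hypothetical stand-ins, not actual Fusion Compiler options or the DSO.ai learning algorithm (which uses ML models rather than blind sampling):

```python
import random

# Hypothetical search space; these knob names are illustrative only.
SPACE = {
    "placement_effort": ["low", "medium", "high"],
    "target_utilization": [0.55, 0.60, 0.65, 0.70],
    "leakage_weight": [0.5, 1.0, 2.0],
}

def mock_ppa(cfg):
    """Stand-in for launching an APR run; returns a scalar cost (lower is better)."""
    cost = {"low": 3, "medium": 2, "high": 1}[cfg["placement_effort"]]
    cost += abs(cfg["target_utilization"] - 0.65) * 10
    cost += cfg["leakage_weight"] * 0.1
    return cost

def search(trials=20, seed=0):
    """Sample configurations and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        cost = mock_ppa(cfg)  # in reality: run APR, read back PPA metrics
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost
```

The value of an ML-driven engine over this naive loop is precisely that each real APR run is expensive, so the search must learn from every result rather than sample blindly.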
In the SoC design flow, physical design is a black-box problem, and it is very difficult to understand where the output comes from. Artificial intelligence offers good solutions to explore the design solution space and achieve better PPA. In this session we will share details of our work with DSO.ai at Samsung Foundry and examine the observed improvements and benefits of the AI-driven approach.
Memory real estate is continuously increasing, reaching more than 80% of die area on present-day SoCs. In automotive SoCs, memory IPs are used for applications ranging from ADAS to navigation and infotainment. SoCs designed for life-critical applications like ADAS (ASIL-D category in the ISO 26262 standard) go through rigorous functional safety checks, and FMEDA (Failure Modes, Effects, and Diagnostic Analysis) becomes a necessary step to systematically predict the failure rate of all IPs used in such subsystems. Among the various objectives of FMEDA, in this paper we focus on the requirement for fault analysis and discuss how we use TestMAX CustomFault to perform fault analysis to ensure the functional safety of our embedded memory IPs catering to the automotive market.
This presentation describes the application of the Synopsys analog fault simulator TestMAX CustomFault at TDK-Micronas. It gives an introduction to the tool, the methods used in it, and its application for the determination of some ISO 26262 metrics.
The automotive industry is currently going through a major transition: applications like self-driving cars require enormous computing power, which makes designs more complex in nature. This change requires not only a faster but also a more exhaustive signoff for safety-critical automotive designs. We consulted with automotive industry leaders and developed a safety-critical methodology on proven SpyGlass Lint technology. This methodology has selected lint rules and custom settings devised specifically for automotive static signoff.
Addressing random hardware faults in automotive designs requires a safety architecture with safety mechanisms (SMs) to monitor and detect their occurrence. Random fault analysis is done through FMEDA (Failure Modes, Effects, and Diagnostic Analysis) to compute the ISO 26262 metrics, and validation of the FMEDA metrics is done through fault injection testing. Fault injection campaigns on today’s complex designs pose, however, a considerable challenge in terms of effort, computing resources, and tool capacity. This presentation will discuss an efficient methodological approach to the fault injection flow that enables faster convergence, drawing on Synopsys consulting’s rich experience. Considerations such as preparation through FMEDA review, understanding design constraints, and SM-type-dependent fault injection settings will be presented.
Standards such as ISO 26262 define strict requirements, processes, and methods that all stakeholders – IP vendors, sub-system and SoC developers – must abide by when designing safety critical automotive products. One such requirement is the Development Interface Agreement (DIA) which defines the interactions, interfaces, responsibilities, dependencies and work products to be exchanged between customers, like Infineon, and suppliers for all distributed safety related activities. In this session, we will explain the details of distributed development based on DIA and outline the different activities for which DIAs must be signed during a distributed development process. In the second part of the presentation, Infineon will highlight their approach on meeting SoC-level functional safety objectives while closely collaborating with Synopsys.
Modern circuit designs require new approaches to analysis and characterization. Learn how the Simulation Environment can solve these challenges.
Catching potential electrical issues early can avoid extra design iterations. Synopsys custom design platform provides a unified workflow to accurately estimate, measure, extract and simulate parasitics by bringing signoff tools into the design process, enabling faster design convergence.
SK Hynix established circuit/arrangement standardization for frequently used amplifiers (Legacy Design) and developed an automation flow by actively utilizing Synopsys’s Template Manager function. During the development period, we were able to reduce the complexity of template production and increase completeness through improvements to the Synopsys Template Manager (dummy, resistor, etc.) and the development of the Flexible Hierarchy Template function. By creating templates and applying the flow to about 30 amplifiers, we were able to reduce design time by more than 2x compared to before. In addition, we expect that the flow can be expanded according to the classification level of Legacy Design in the future.
Engineers can reduce analog layout TAT by using Custom Compiler's visually-assisted layout automation technology. It provides productive and easy-to-use features for analog placement, routing and template-based design reuse methodology to achieve high quality layout.
We will present a novel flow for standard cell design that uses advanced features in Custom Compiler to reduce layout time. Our flow includes deployment of schematic-driven layout (SDL). SDL is a powerful technique for improving layout productivity, but not many standard cell designers are open to using it. We developed a methodology leveraging features in Custom Compiler that makes adoption of SDL much more practical, including capabilities for hierarchy manipulation, automatic and interactive device chaining, and interactive analysis.
Xilinx® IC designers have benefitted from Custom Compiler™ technologies to increase design reliability and reduce late-phase editing iterations, thus improving productivity and design closure. Key features, such as the schematic-driven layout (SDL) process, retain critical nets in the generated layout, alleviating the need to correct connectivity downstream. To maximize reliability, built-in restrictions prevented incorrect logic modifications. In addition, resistance and capacitance calculations, shield creation and reporting, and via checking on partially completed designs promoted efficient processes that resulted in reliable designs. By reducing costly post-layout modifications and iterations, the in-design assistants also contributed to further productivity gains while delivering reliable design closure predictably.
AMS design challenges have significantly increased with complex design requirements at advanced CMOS processes. Samsung’s advanced-node AMS Design Reference Flow is intended to reduce this design complexity and improve design productivity at advanced technology nodes. The flow demonstrates to end users how Samsung foundry PDKs stay in sync with the latest AMS design platforms. Samsung Foundry customers can now take advantage of the most advanced features for circuit design, performance, reliability verification, automated layout, and block and chip integration for custom and digitally-controlled analog-based design on Synopsys’ Custom Compiler, simulation environment, and simulators. In this session, we will talk about Samsung’s AMS Design Reference Flow, from schematic to layout verification, and the future.
This session will provide a closer look into RTL Architect (RTL-A) technologies and share insights and lessons learned from multiple engagements over the past year. We will review and demonstrate how RTL-A capabilities have enabled customers to significantly improve and speed up the RTL creation process. We will also discuss some tips and tricks for deploying RTL-A.
Complex multi-million-gate designs and rapid adoption of advanced nodes are pushing EDA tools to their limits. Faster design convergence and achieving optimal PPA are paramount for product success. Front-end design teams need to rapidly explore domain-specific architectures and improve RTL quality for PPA gains. This session covers RTL Architect from Synopsys, which helps reduce the RTL development lifecycle and provides early feedback on design implementability and PPA metrics. It highlights some of the key technologies of RTL Architect: a fast synthesis engine that enables RTL designers to predict power, performance, area, and congestion impact based on their design choices, and advanced interactive debug capabilities that provide early insights into key RTL quality metrics. We will also share RTL Architect performance on Nvidia’s high performance computing designs.
This session will include a brief tutorial on recent advances in FM ECO. The session will also include a case study from Alphawave relaying their recent experience leveraging FM ECO to quickly implement similar ECOs to multiple versions of their IP.
With increasing complexity in design and a continuously evolving process, it is challenging to maintain a tight schedule and stay ahead of the competition. While architects and Register Transfer Level (RTL) design teams explored ideas to partition the design and produce quality RTL, the Physical Design (PD) backend team worked on many new aspects to achieve the most optimal Quality of Results (QoR). While we have improved the collaboration between front end and back end over the past years, there is still room for further improvement: frontend handoffs need to be more predictable for the backend (PD) team, which would reduce synthesis/place-and-route iterations and speed overall convergence. Features-Added Physical Aware Synthesis (FPAS) provides multiple benefits, ranging from improved timing QoR by identifying critical paths, to early reliable congestion analysis, to power estimation that aligns closely with Fusion Compiler. It also helps in partitioning the design, and pipelines can be added to meet timing without waiting for physical design feedback, saving iteration time. In addition, FPAS allows cross-probing from RTL to timing paths and layout in a single user interface, which eases analysis for designers. Our initial studies reported at least a 2x runtime improvement compared to the default synthesis tool. For timing, FPAS showed that approximately 90% of timing paths are correlated within a 10% margin. Module placement, density, and congestion maps are well correlated. Less than 15% power difference was observed compared to the default synthesis tool.
In this session we will outline some of the considerations that designers must be aware of when they are ready to shift their designs to PCIe 6.0, such as the doubling of the data rate, accessing a complete IP solution that offers optimized performance and seamless interoperability between the controller and PHY, achieving timing closure at 1+ GHz, and understanding the impact of the new PCIe 6.0 features, including FLITs, a new low-power state, and PAM-4 signaling.
The increasing volume of data for AI workloads is driving the need for more advanced networking functionality for faster data movement. SoCs for hyperscale data centers, artificial intelligence, and networking applications are more complex. Such SoCs are disaggregated in a multi-die package, requiring robust and reliable 112G USR/XSR or HBI links for inter-die connectivity. In this session, we will describe new use cases, such as co-packaged optics, for die-to-die connectivity, and outline key design requirements of standards-based SerDes and parallel die-to-die interface solutions with testability and performance/power tradeoff capabilities, along with supporting interposer and substrate technologies for 2.5D/3D packaging.
In this session, we will explain the benefits of MIPI’s CSI-2, DSI/DSI-2 D-PHY, and C-PHY standards for camera and display applications by highlighting some of the key features such as D-PHY’s 4.5 Gbps bandwidth, C-PHY’s 3.5 Gsps bandwidth with a 3-wire architecture, CSI-2’s maximum throughput for mega-pixel cameras, and DSI/DSI-2’s high resolution and bits per pixel. In addition, we will highlight the advantages of integrating MIPI C-PHY and D-PHY for higher performance in today’s visual applications.
HBM DRAMs, mainly for GPUs and accelerators, provide high throughput per channel at a low power per bit transferred. For applications seeking higher memory density and bandwidth than HBM2E, the industry is now anticipating the release of next-generation HBM3, which is expected to provide higher transfer rates with even better performance. In this session, we will focus on the introduction of HBM3, which is expected to double the density to 64GB and deliver 512 GB/s of bandwidth, essential requirements for high-performance computing.
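Headline bandwidth figures like these follow directly from interface width times per-pin data rate. A quick back-of-the-envelope check (the 1024-bit width and 4 Gb/s per-pin rate below are illustrative assumptions, not final HBM3 specification values):

```python
def peak_bandwidth_gbs(bus_width_bits, per_pin_gbps):
    """Peak bandwidth in GB/s: interface width (bits) x per-pin
    data rate (Gb/s), divided by 8 bits per byte."""
    return bus_width_bits * per_pin_gbps / 8

# e.g. a 1024-bit interface at 4 Gb/s per pin yields 512 GB/s
```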
The latest SoCs on advanced semiconductor nodes especially FinFET, typically include a fabric of sensors spread across the die and for good reason. But what are the benefits? This presentation explores some of the key applications for in-chip sensing and PVT monitoring and why embedding this type of IP is an essential step to maximise performance and reliability and minimise power, or a combination of these objectives. The presentation will also examine use cases from key application platforms including AI, Data Center, Automotive, 5G and Consumer.
Showcase VCS, ExecMan and VC Formal Cloud solutions
Regression Debug Automation (RDA) provides root cause analysis solutions for different types of errors (TB, DUT, VIP, IP, SoC) in the verification flow (check-in probe, regression, VCS version migration, design verification, and more). RDA categorizes failure types, applies diverse root cause analysis engines, and then generates an RCA report that users can easily understand and manage. RDA automates the debug flow for design errors and can dramatically improve the performance of design verification.
To maintain aggressive scaling trends, current devices use track height reduction as the primary scaling knob. Gate All Around process technology offers a way to reduce track-height while using 90% or more of current FinFET process steps. To enable teaching and research, we have developed a predictive process design kit (PDK) in collaboration with Synopsys, targeted for the 3nm node. Cell layouts and schematics were designed using Synopsys Custom Compiler and verified using Synopsys IC Validator. We will discuss challenges encountered in design rule creation and DRC and LVS runset development. Some early results will be shown.
IC Validator Launch connects IC Validator with the user and the design environment by providing a unified and customizable interface. Users today use a command line interface to run and interact with IC Validator; IC Validator Launch provides an intuitive and easy-to-use graphical interface to launch IC Validator jobs. In this tutorial, learn how to set up an IC Validator run by specifying the location of input data, options for the run, and runset option customization, and how to debug results with IC Validator VUE and integrate with design tools such as IC Validator Workbench and Virtuoso.
Learn about IC Validator technology advances that enhance full chip physical verification productivity, and customers’ experiences with IC Validator on cutting-edge 7nm and 5nm design tape-outs. Acacia shares challenges with full chip verification of advanced node designs and their methodology with IC Validator for faster physical verification closure. Eximius presents on scaling IC Validator jobs to hundreds of cores to achieve full chip signoff within hours for 5nm designs.
IC Validator PERC is a reliability verification solution that enables customized checking of EOS/ESD/ERC rules. IC Validator PERC provides fast performance, scalability, and intuitive debugging for reliability verification. In this tutorial, learn about the latest advances in IC Validator PERC and new capabilities, including current density checking with StarRC extraction, point-to-point resistance checking with StarRC extraction, voltage-based spacing checks, and topology and layout checks.
Increasing variation at advanced nodes poses serious challenges to robust product functionality and performance. To address this, design teams add guardband margins and sign off at higher sigma to manage risk, resulting in over-design and a higher PPA cost.
PrimeShield’s innovative ML-driven statistical engine enables full statistical design variation analysis, lowers overall pessimism, and in some cases catches potential risks of design optimism. In this presentation, we will discuss how PrimeShield rapidly identifies and drives optimization of bottlenecks at the cell, path, and design levels. We will also cover how the new robustness ECO methodology can be used effectively.
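To see why sigma-based guardbanding costs PPA, consider the brute-force baseline: sample the delay distribution and sign off at mean plus n sigma. The sketch below is purely illustrative (the function name and Gaussian single-path model are assumptions); statistical STA, and PrimeShield's ML-driven engine in particular, are far more sophisticated than naive Monte Carlo sampling:

```python
import random
import statistics

def mc_path_delay(nominal_ps, sigma_ps, n=100_000, seed=1):
    """Monte Carlo sketch of path-delay variation: sample a Gaussian
    path delay and report the sample mean plus a 3-sigma guardband.
    Every extra sigma of margin added here is timing slack the
    implementation tools must buy with power and area."""
    rng = random.Random(seed)
    samples = [rng.gauss(nominal_ps, sigma_ps) for _ in range(n)]
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    return mean, mean + 3 * sd  # nominal delay and 3-sigma signoff delay
```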
As technology continues to scale, design sizes have increased many-fold, leading to an explosion of complexity in design implementation. Adding to the mix, ever-increasing market demands to pack maximum working logic into minimum area at the highest performance have led to complex design-rule checks (DRCs) and increasing design-for-manufacturability (DFM) challenges. When time to market is a key indicator, StarRC’s Virtual Metal Fill (VMF) feature can help reduce TAT by controlling ECO cycles, and it correlates tightly with Real Metal Fill (RMF). NXP will share VMF performance data in this session.
Moving from an existing EDA tool to a new one is a nightmare due to the time it takes to synchronize the new tool’s database with the existing flow. Take the example of the extraction database, which is used for post-layout simulation and EMIR analysis. With the help of the new StarRC GPD-based flow developed at ST Microelectronics, we were able to simplify the flow and achieve efficiency gains. In this session we will showcase the seamless flow as adopted by ST and its benefits, such as reduced disk space and improved TAT.
StarRC continues to invest and innovate in scalable runtime and capacity of core extraction and field solver technologies. Improving QoR, TAT, and designer productivity while providing golden signoff extraction are the key aspects driving the product roadmap. This session will provide an overview of our current technologies, innovation in advanced nanosheet/GAA processes, and new features for the digital design flow and integration with digital platforms.
Test points are a well-known but underutilized design-for-test technique to boost coverage and reduce the number of test patterns required to achieve fault coverage targets. This session will provide a brief tutorial on TestMAX Advisor for analyzing RTL and gate-level designs to determine the most impactful control and observation points. Furthermore, unique fusion technology and its usage will be explained for implementing test points in the design using methods that ensure optimal performance, power, and area (PPA). Learn how to easily deploy test points in a single step that automatically combines TestMAX Advisor testability analysis and TestMAX DFT design-for-test within Synopsys synthesis products to improve ATPG and logic BIST results today!
This presentation will review the methodology for validating our most recent Vision Processing Unit (VPU) RTL design using static connectivity checks. The design of this project was very modular, with multiple configurations. The VPU design instantiates several thousand design units, each connected to the main DFT unit or to the functional logic with SystemVerilog interfaces. This design style worked very well from an implementation perspective but often led to confusing and difficult-to-verify RTL structures. To accommodate the fast pace of changes, a high level of automation was required. As lint checks and design-for-test verification goals are part of our current flow, it was straightforward to enable additional static connectivity verification. By gathering and post-processing design attributes, SGDC content was generated. Our Design Automation team was able to include the connectivity verification tests in our organization’s source control Continuous Integration pipeline, so tests were run on every RTL code commit and at regular intervals.
As design complexity increases with multiple voltage and power domains, challenges arise in the design and implementation of a physical-design-friendly design-for-test (DFT) architecture. While a flattened physical implementation gives the best QoR for some complex designs, it makes the traditional top-down DFT flow more challenging with multiple voltage and power domains. An RTL-based DFT partition scan architecture helps achieve the best QoR with a flattened physical implementation strategy. In this approach, the various DFT components are added to DFT partitions at the RTL stage, which helps achieve faster turnaround time and the best QoR in physical implementation. DFT partitions and the corresponding DFT components are decided based on metrics such as voltage domains, power domains, flip-flop count, clocking, IP interfaces, physical partitions, feed-throughs, port locations, and placement blockages. DFT components include multiple codecs, on-chip clock controllers, scan pipelines, scan decode logic, scan wrapper cells, shift power control logic, etc. This paper illustrates the RTL-based DFT partition scan architecture and its results.
The TestMAX Manager platform is the latest solution from Synopsys which enables the shift-left of the complete DFT insertion flow using RTL. This presentation demonstrates the flow adopted by Samsung to implement the TestMAX SMS flow for memory test & repair. Power (UPF) and placement (DEF) based implementation is successfully used to insert the SMS components in a back-end friendly way. Integration at the SoC level is done using the IEEE 1500 SMS component configured by a TAP Controller.
In the emerging era of large-scale SoCs comprised of complex IPs, typically designed for AI and automotive applications, it is essential to embrace an innovative approach to overcome numerous DFT challenges. A solution must therefore be scalable, robust, and functional safety (FuSa)-aware, in addition to meeting fast time-to-market requirements. This presentation explains automotive SoC requirements and challenges, as well as an advanced shift-left design-for-test methodology and its criticality. This innovative approach, with the described solution, allows full de-coupling between the functional and test design aspects of a safety-critical SoC.
Power management for thermal requirements is the most important design challenge not only for mobile applications but also for server applications. A convergent RTL-to-GDSII flow is critical to meet the two opposing requirements of high performance and low power consumption within competitive time-to-market goals. Furthermore, to carry out sufficient tests in a short time with a limited number of pins, an integrated, state-of-the-art DFT technology is highly desired. In this session, we will discuss our efforts in building an integrated flow using Fusion Compiler, which realizes a highly-convergent singular design flow from RTL-to-GDSII, including DFTMAX compression logic integration, and share the results of our application on high-performance NAND flash and SSD controller designs.
In this tutorial, we will cover how VC SpyGlass helps reduce noise with machine learning technology for CDC verification and formal analysis for lint verification. Users can see a 20-30x productivity gain in their CDC verification cycle and 30-35% additional noise reduction with Formal Enabled Lint.
In this tutorial, we will talk about the challenges of CDC verification on billion-plus-gate designs and how a hierarchical CDC methodology can scale in run-time and capacity. It reduces noise significantly, as it reports only the boundary-level violations that are meaningful at the SoC level.
As a startup, one typically doesn’t have access to a reference platform or to previous versions of a chip that can be used as a starting point for software development. There are multiple paths one can take to develop the software, but at the end of the day, the entire team needs some reference of the hardware being designed, a software development platform, and a way to test new IPs and interconnections. One of the reasons SiMa is generating so much buzz is not just that we have an interesting hardware proposition; our software is also a big differentiator. We pride ourselves on making the transition from PC-based AI application development to the firmware application very easy. Virtualizer and VDKs have helped us not just in aggregating our custom IP together, but also in testing clock cycles, dependency delays, and much more. In this presentation we will cover how we have used Virtualizer/VDK for the following tasks:
- Verify the boot-up sequence of the heterogeneous system with 12 cores.
- Facilitate SW development for custom IP model integrated into the SoC.
- Validate complex SW architecture flow build-out.
Software methodologies such as Continuous Integration and Delivery and Test-Driven Development have become ubiquitous in recent years. This has been a very necessary development and allows large teams to collaborate effectively in the development and testing of the complex software stacks we now find in embedded automotive and AI applications. We find these methods in use across the product development lifecycle, from initial pre-silicon development through software integration with hardware, to deployment of software variants once a product has launched. A significant issue facing such setups is the cost of the custom hardware needed to run the tests that ensure software quality is maintained throughout the product lifecycle.
In this tutorial we will show you how Virtual Prototypes are integrated with application development and debug, image building, revision control, and Continuous Integration platforms to scale up and fully automate the software testing process. We will demonstrate the flow based on the Eclipse-based Virtualizer Studio Integrated Development Environment, Git version control, and the Jenkins automation server. This setup accelerates turn-around time and reduces costs compared with running these tests on expensive and hard-to-maintain hardware.
The presentation introduces GLOBALFOUNDRIES® 22FDX® as the technology and design platform of choice for the next generation of automotive designs. It focuses on how the 22FDX® AG1 automotive digital design platform supports ISO 26262 functional safety (FuSa) requirements. A “safety-aware digital design flow” based on the Synopsys® Design Compiler® (DC-NXT) and IC Compiler® II toolset is described, introducing FuSa features on a safety-critical design implemented with the GF® 22FDX®-based Synopsys® Automotive Grade 1 (AG-1) 9-track std-cell library. The FuSa features covered are TMR (triple modular redundancy), DMR (dual modular redundancy), fault-tolerant flops, DCLS (dual core lock-step), and 100% (or high) RVI (redundant via insertion). Finally, implementation results are summarized, with a key focus on assessing the impact of FuSa feature introduction on the PPA results.
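TMR, the first of those FuSa features, triplicates a register and majority-votes the three copies so that any single corrupted copy is outvoted. A minimal software model of the voter (a conceptual sketch only, not the hardware implementation or any Synopsys flow construct):

```python
def tmr_vote(a, b, c):
    """Bitwise majority vote across three redundant copies of a value.
    Each output bit is 1 iff at least two of the three input bits are 1,
    so a fault confined to one copy never reaches the output."""
    return (a & b) | (a & c) | (b & c)
```

The classic trade-off, assessed in the PPA results above, is that the fault tolerance costs roughly three times the register area plus the voting logic.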
Reliability, availability, and serviceability are major concerns for cloud applications. Many semiconductor and system companies emphasize adding hardware duplication, a costly solution for detecting and mitigating failures. A deep analysis of the design can identify the portions that can cause the system to behave unpredictably in the presence of soft errors. Statistical analysis based on the probability of error propagation through the design can be performed on large, complex systems, such as those in cloud applications, before any testbenches are available. We will present our findings based on the static analysis approach used in TestMAX FuSa to quickly identify the registers most vulnerable to soft errors early in design development.
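The statistical idea behind this kind of static vulnerability ranking can be sketched in a few lines of Python. The register names and per-path propagation probabilities below are invented for illustration; they are not TestMAX FuSa output or API.

```python
# Sketch: rank registers by soft-error vulnerability from static
# propagation probabilities. All names and numbers are illustrative.

# Probability that an upset in each register propagates to each of its
# observable endpoints (e.g., primary outputs or architectural state).
fanout_prob = {
    "ctrl_state": [0.9, 0.7],   # control registers tend to propagate widely
    "data_buf":   [0.2],        # datapath bits are often logically masked
    "status_reg": [0.05, 0.1],
}

def propagation_score(probs):
    """Probability that an upset reaches at least one endpoint,
    assuming independent propagation paths."""
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

# Most vulnerable registers first: candidates for hardening (e.g., TMR).
ranked = sorted(fanout_prob,
                key=lambda r: propagation_score(fanout_prob[r]),
                reverse=True)
```

Registers at the top of the ranking are the ones worth protecting first, which is how such an analysis can target mitigation without duplicating the whole design.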
Automotive software architectures are changing, and more software than ever is now in scope for functional safety requirements. Organizations that are new to automotive, or newly in scope for these requirements, sometimes struggle to understand what is needed and to formulate an end-to-end strategy. During this session we discuss how Synopsys has helped leading global automotive suppliers and OEMs achieve ISO 26262 compliance goals with coding standards, fuzz testing, open source management, and product cybersecurity strategies and testing.
For the past decade, Linux in the vehicle has looked much like a traditional embedded system: custom microcontrollers, a one-time software load, and minimal connectivity. Now, drivers including ECU consolidation, advanced infotainment and ADAS, 5G connectivity, and customer expectations of continuous functional updates have changed the platform requirements for in-vehicle compute forever. A strong ecosystem of standard hardware and software, with extended support and update lifecycles for both security and functionality, is emerging, and functional safety for in-vehicle Linux has rapidly become a requirement. This talk will give an overview of the trajectory of these industry trends.
Design data migration between different nodes is a challenging task when it involves different EDA platforms, and the migration becomes even more difficult when stacked-MOS (stackMos) devices are present in the design. In this paper we discuss the productivity gains achieved using the Custom Compiler tools, from schematic migration all the way to simulation. The command-line simulation challenges are addressed by Sonoma/SAE with the Multi-TB and Sequential-TB features, and integration of third-party simulators into the Custom Compiler environment helped us calibrate the results. Handling measurement expressions within SAE without any changes provides faster turnaround time, the TB reusability feature saved considerable time during design/spec changes, and saving the simulation history provides a better way to preserve milestone simulation databases. Productivity improvements were also seen on the layout front in handling stacked-MOS devices using the folding and chaining features, which save considerable time when placing devices and making physical connections.
The custom design market is growing, driven by AMS and co-design opportunities. Coupled with the migration to advanced nodes, this has brought parasitic extraction to the forefront, and there is a constant need to improve the capacity, accuracy, productivity, and features of PEX tools. In this session you will learn how StarRC is making a strong push in this market with new custom design features, 3DIC support, and tight integration with custom design platforms.
Fusion Compiler RM 2.0 provides a quick and easy path to best-in-class R&D-recommended flows and technologies for your Fusion Compiler design kit. Hit the ground running and stay up to date by using RM 2.0. This talk will give an overview of RM 2.0, how it is set up, and how to use it to get very good out-of-the-box results.
Ethernet switch designs continue to grow in port count and port speeds. We will show how virtual network testing can drive the DUT with a broad range of packet scenarios to enable stress testing of new architectures.
Hybrid technologies have shown over the last few years that they deliver significant benefits for pre-silicon software bring-up. We will explain why hybrid has become an essential technology and how far it has advanced.
While we all experience the first 5G services, the development of 5G silicon remains a race that will be on for many years to come. System verification of the evolving 5G standard requires fast execution engines and fully 5G-compliant test vectors. We will demonstrate an industry-leading solution based on ZeBu.
Adoption of formal verification involves many steps: education, training, leveraging formal apps, and more. But conclusive results and fast turnaround are equally important for maximum productivity. Out-of-the-box performance and convergence depend on many core components, such as engines, orchestration, regression management, and user controls of these components. In this presentation we will discuss how VC Formal has been making use of the latest technologies, such as machine learning, to deliver best-in-class out-of-the-box performance.
Datapath validation using formal verification is unique in terms of the behind-the-scenes technologies and methodology needed for productive verification. At the same time, it has some similarities to traditional formal property checking when it comes to the need for visibility into the complexity of the verification problem, and the methods and techniques that can be applied for faster performance and better convergence. In this presentation, we will discuss the basics of datapath validation as well as new features making it more convenient for verification/design engineers and formal experts.
Formal signoff has had well-defined metrics for some time now. These metrics provide a predictable step-by-step path to completeness, and they map closely to the coverage metrics used in simulation, which makes them easier for verification/design engineers to adopt. In this presentation, we will review these signoff metrics and share details on how they are set up, calculated, and analyzed in a single interactive environment, making it intuitive and seamless to sign off on your designs.
Ansys RedHawk and RedHawk-SC, the industry standard for sign-off rail analysis, are completely integrated within Fusion Compiler to allow users to perform, analyze, and fix IR violations. The Synopsys Power Integrity flow in Fusion Compiler enables users to analyze and fix dynamic IR violations at various stages of the back-end implementation flow. The flow encompasses the following techniques:
• Dynamic power shaping (DPS) early at “place_opt” reduces peak dynamic current and dynamic IR drop by scheduling clock arrival times.
• IR Driven Placement (IRDP) at “clock_opt” performs cell spreading in IR hot spots to eliminate IR violations.
• IR Driven CCD (IRD-CCD) and IR Driven Optimization at “route_opt” employ cell relocation and cell sizing to alter simultaneous switching and address dynamic IR violations.
• IR Driven PG Augmentation (PGA) involves the addition of P/G segments in IR hot spots to reduce P/G resistance and improve P/G grid robustness to fix IR violations.
The tutorial will serve as a refresher on the techniques employed in the Synopsys Power Integrity flow described above.
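To illustrate the intuition behind dynamic power shaping, the first technique above, here is a small Python sketch showing how staggering clock arrival times lowers the peak of the summed switching current. The register groups and current profiles are invented numbers, not tool output.

```python
# Sketch: why scheduling clock arrival times (dynamic power shaping)
# reduces peak dynamic current. All current profiles are illustrative.

def peak_current(profiles, arrivals):
    """Sum per-group switching-current profiles, each shifted by its
    clock arrival slot, and return the peak of the total."""
    horizon = max(a + len(p) for p, a in zip(profiles, arrivals))
    total = [0.0] * horizon
    for profile, arrival in zip(profiles, arrivals):
        for t, amps in enumerate(profile):
            total[arrival + t] += amps
    return max(total)

# Three register groups with identical switching-current profiles.
profiles = [[0.0, 5.0, 1.0]] * 3

aligned = peak_current(profiles, [0, 0, 0])  # all clocks arrive together
shaped = peak_current(profiles, [0, 1, 2])   # arrivals staggered by DPS
# The shaped peak is well below the aligned peak, easing dynamic IR drop.
```

The same total charge is delivered in both cases; only its timing changes, which is why DPS reduces IR drop without costing functionality.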
Power integrity and reliability analyses are being pulled ever closer to the heart of IC design as silicon processes continue to shrink, design sizes continue to grow, and ultra-low-voltage power supplies eliminate any room for voltage-drop or electromigration margins. A major change is that simultaneous switching noise has now come to dominate total IR drop, which requires expanded activity coverage to catch all possible switching aggressors and avoid frequency loss. Ansys and Synopsys have cooperated to address these issues and developed joint customer solution flows that effectively shift timing closure to the left. We will look at full-flow power integrity with RedHawk-SC and Fusion Compiler, from early in-design analysis to incremental IR-drop ECO fixing with Tweaker and final full-chip signoff. We will also show how the RedHawk-SC integration with PrimeTime automatically detects and fixes IR-drop timing violations to avoid voltage-drop escapes and ensure maximum design performance.
One of the biggest problems with RTL power estimation is accuracy when compared against sign-off power: if the estimation is not accurate, designers lose confidence in the tool. Multiple factors make power estimation at the RTL level challenging. Clock tree modeling is not accurate at the RTL level because CTS is only done during implementation, and clock tree power contributes up to 30% of overall power for some workloads. Estimation without timing and physical awareness also causes power miscorrelation. Finally, using different synthesis engines for RTL power estimation and implementation contributes further to miscorrelation.
PrimePower RTL addresses most of the challenges we face today. The tool is timing- and physical-aware and models clock tree topologies as in the actual implementation. It also uses the same fast compilation engine as the construction tool, Fusion Compiler. This presentation covers the extensive correlation study conducted on multiple Intel projects using PrimePower RTL.
Low power, energy efficient SoC design requires accurate analysis to identify power problems early, during the RTL stage. PrimePower RTL is a fast, physically-and-timing-aware power estimation solution that enables designers to analyze, explore, and optimize their RTL with confidence, improving power and shortening the design cycle. This tutorial highlights key PrimePower technologies enabling RTL power exploration. An example design illustrates the flow showing how to quickly identify power problems, use RTL power metrics and cross-probing to understand root causes, explore solutions using what-if analysis, and reduce RTL power by improving clock-gating efficiency, register gating, glitch power, and more.
This session presents different strategies and IC Validator features that were employed to debug and resolve design rule violations in an SoC design with multiple hard IPs and an overall design size greater than 250 sq. mm. For instance, in-process charging (antenna) violations need special features to identify not only the victim gates but also the source of the aggressor nets involved. Similarly, voltage-dependent design rule violations are easier to debug if one is aware of the voltage propagation path or the differential voltage being considered. The paper also outlines compute-resource optimization techniques and the throughput improvement that can be achieved for such a complex SoC design by harnessing the machine learning and multi-host features supported in IC Validator. The paper further describes correct-by-construction features within ICV that pre-empt design rule violations during the placement and routing stages, saving multiple iteration cycles.
IBM and Synopsys have been working together to investigate the advantages and trade-offs of migrating high-performance EDA applications to the cloud. In this work, we describe the incorporation of the Synopsys IC Validator tools into a digital design flow enabled on a hybrid cloud infrastructure. Large chips require significant compute resources for design and data validation. Our emphasis in this work is on reducing process time by running IC Validator applications in Kubernetes-based containers. Virtual machines are configured with defined CPU and memory requirements, and runtime improvements are realized by taking advantage of ICV's distributed and multi-threaded capabilities. The advantage of Kubernetes containers is that they can be ported to any cloud environment. We discuss the advantages and disadvantages of running applications using on-premises and hybrid cloud models.
Prototyping teams need to deliver maximum usage efficiency to software and system validation teams. We will show how the HAPS Gateway system helps to maximize the value from using HAPS across teams and projects.
With few exceptions, every prototype has connectivity to the real world through new protocols. We will show the different options for how HAPS prototypes can interface with the real world. Using PCIe Gen5 as an example, we will demonstrate why high-performance prototypes are critical for system validation.
Advanced-node designs have stringent power, performance, and area (PPA) targets, which leave very little tolerance for errors in timing and physical layout. In addition, advanced process nodes have much more complicated physical rules to meet, which leads to a higher number of scenarios that need to be simulated to ensure a successful design. To meet aggressive tapeout schedules, design teams aim to sign off with a much shorter ECO closure cycle, even as designs reach more than 500 scenarios and over two hundred million instances. Every ECO change can potentially become a bottleneck and influence the tapeout schedule, so the ECO solution needs to effectively identify, analyze, fix, and recover all potential issues in chip performance, power, area, and reliability. We will introduce how the Synopsys Tweaker ECO Closure Platform can solve these challenges and help you better control your project schedule and achieve the best QoR.
Silicon wafer test time is a precious and expensive part of integrated circuit delivery. Scan chains enable fast, high-coverage wafer testing, and with the increasing complexity of chips at advanced nodes, incorporating scan chains to test the embedded memories is critical. Full-chip scan testing requires high-level models of custom design modules; the challenge is proving that these high-level models implement the same scan chain as the custom design. In this session we will discuss ESP, the only solution for validating scan chains from behavioral-level models all the way down to transistor-level implementations.
Signal integrity (SI) has been a major concern in custom design and signoff for a few years. NanoTime has been continuously upgrading its feature set to accurately account for SI effects.
In the timing domain, newer technology nodes required more elaborate modeling of aggressor waveforms. Results showed a significant pessimism reduction in most cases with little impact to the overall runtime.
In the noise domain, effects such as driver weakening and load-induced noise became more prevalent. Given that stages are affected by noise waveforms coming from both drivers and fanouts, advanced reporting capabilities are required to guide designers into which stages should be investigated first.
In this tutorial, we review basic SI concepts for timing and noise, and we focus on recently added SI features that target advanced process nodes. The presentation should enable NanoTime users to extract the most value out of their SI results.
This presentation showcases the benefits of using Fusion Compiler over traditional Synopsys tools such as Design Compiler DFT and ICC-II. Advantages of Fusion Compiler, including shorter run times, better performance, easy design-for-test (DFT) integration, higher utilization, and reduced total area, will be given as examples.
Besides the benefits of using one platform that combines all the tools, what makes Fusion Compiler more productive for the end user is the deep level of integration between the individual tools once they are brought together under Fusion Compiler.
Microsoft switched to Fusion Compiler under a very demanding and stressful timeline, and the transition went smoothly while meeting and exceeding deadlines. Following the reference documentation and using the automatic conversion scripts shortened the ramp-up time compared to bringing up a full end-to-end flow from scratch.
Automotive SoCs have strict reliability requirements. One way to guarantee reliability is to isolate potential field failures that pass wafer-sort testing and post-process them into failing parts to keep them from being built. This re-binning, or inking, is based on statistical analysis, and there are many standard algorithms used to accomplish it, including DPAT, good-die/bad-neighborhood, clustering, and others. In SiliconDash, we show how we can automatically ink parts in a high-volume manufacturing environment. We handle the complexity of combining different data sources for a single wafer, apply multiple inking algorithms, and then automatically send a new build map to assembly.
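As a rough illustration of the statistical inking described above, here is a Python sketch of a DPAT-style outlier screen. The 6-sigma window and the IQR-based robust sigma are one common simplification of AEC-Q001-style part average testing; the die names and readings are invented, and this is not SiliconDash's algorithm.

```python
# Sketch of a DPAT-style inking pass: parts whose parametric reading
# falls outside a dynamic window computed from the wafer's own
# distribution are re-binned even though they passed static limits.
import statistics

def dpat_limits(readings, k=6.0):
    """Dynamic limits: robust center +/- k robust sigmas."""
    med = statistics.median(readings)
    q1, _, q3 = statistics.quantiles(readings, n=4)
    sigma = (q3 - q1) / 1.349      # robust sigma, resistant to outliers
    return med - k * sigma, med + k * sigma

def ink_outliers(die_readings, k=6.0):
    """Return dice to re-bin (ink) on the wafer map."""
    lo, hi = dpat_limits(list(die_readings.values()), k)
    return [die for die, v in die_readings.items() if not lo <= v <= hi]

# 20 well-behaved dice plus one passing-but-anomalous outlier.
readings = {f"die{i:02d}": 1.00 + 0.01 * i for i in range(20)}
readings["die_x"] = 9.90
inked = ink_outliers(readings)     # only die_x gets re-binned
```

The outlier passed every static test limit; it is inked purely because it sits far from the population of its own wafer, which is the essence of DPAT.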
Final-test module failures on image sensors that are related to front-end manufacturing are cost killers; they must be understood quickly to recover the situation, and at a minimum the affected parts must be screened out at the wafer-sort stage. Some specific optical parameters at final test do not correlate with those measured at EWS due to test-environment constraints. We were able to build a multivariate linear model on 16 EWS parameters that correlates with FT optical failures. Thanks to the SiliconDash database and scripting capability, we processed the EWS database for thousands of wafers to generate inking maps for the assembly plant and save FT cost.
SiliconDash is the next generation high-volume semiconductor big data analytics solution for fabless companies, IDMs, OSATs and foundries. It provides comprehensive yield management, quality management and throughput management of your IC and MCM (multi-chip module) products throughout the manufacturing and test process. It delivers comprehensive end-to-end real-time intelligence and control of manufacturing and test operations for executives, managers, product engineers, test engineers, quality engineers, sustaining engineers, device engineers, yield engineers and test operators. SiliconDash handles the complex management of test data. It applies analytics algorithms to all your data through its stream compute platform. SiliconDash then turns these analytics into actions through its industry-leading Insights feature.
Large die designs can exhibit many types of variations across the die area. In this presentation we will show the architecture, implementation and use cases for a design that implements more than 100 PVT sensors.
In-chip sensors and PVT monitors are semiconductor IP circuits that are typically embedded inside system-on-chip (SoC) designs. Sensing the dynamic operating environment of the SoC (i.e., the voltage and temperature) as well as its static condition (i.e., process) provides a way to optimize the SoC's performance based on the local conditions the chip is experiencing. These monitors and sensors are embedded analog IP blocks integrated into SoCs with the goal of sensing the process variability and operating environment of the chip. They exploit the fact that certain measurable characteristics of a semiconductor device change depending on levels of activity, and they provide a level of visibility into SoC operation that is simply unavailable any other way. The technology is also used once the device is deployed in the field to measure and optimize performance.
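As a sketch of how PVT readings can drive performance optimization, the following hypothetical Python policy picks the fastest operating point that the measured temperature and voltage allow. The operating-point table and thresholds are invented, not from any specific monitor IP or SoC.

```python
# Sketch: a simplified DVFS policy driven by PVT monitor readings.
OPERATING_POINTS = [
    # (max_temp_C, min_voltage_V, frequency_MHz), fastest first
    (85, 0.75, 2000),
    (100, 0.70, 1600),
    (125, 0.65, 1200),
]

def select_frequency(temp_c, voltage_v):
    """Pick the fastest operating point the measured conditions allow."""
    for max_temp, min_volt, freq_mhz in OPERATING_POINTS:
        if temp_c <= max_temp and voltage_v >= min_volt:
            return freq_mhz
    return 0  # out of range: throttle until conditions recover
```

A cool die with a healthy supply runs at full speed, while a hot die with a sagging rail is stepped down; this per-chip, per-moment adaptation is exactly the visibility that off-chip measurement cannot provide.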
Improving silicon yield requires a deep understanding of silicon failures, and for this understanding, failure analysis technologies and methodologies are mandatory to ensure cost-effective product manufacturing. Working with Synopsys, Samsung Foundry has collaborated on a future-oriented failure analysis methodology. In this presentation, you will hear about the latest technology updates in the failure analysis solution for Samsung Foundry silicon and see examples of its application. Techniques and schemes such as transistor-level, defect-trend, and volume diagnostics using Yield Explorer will be covered.
Nanometer node yield issues are dominated by design-process-test interactions, mandating cross-domain analyses to mitigate these issues rapidly. Yield Explorer brings yield relevant data from diverse sources such as the physical design flow, wafer manufacturing, and wafer and chip level testing into a single data bank. With the widest possible range of data at their disposal, users achieve unsurpassed clarity in root cause analysis when faced with systematic yield limiters. Yield Explorer achieves this with an order of magnitude advantage in analysis speed in the most complex of use cases—for example, 10X faster volume diagnostics analysis of ATPG output. This significant analysis capability and speed advantage sets Yield Explorer in a class apart from previous yield management systems and enables, for the first time, true connectivity to EDA tools.
Learn more about boosting simulation performance with FGP, distributed compile, and other performance-related enhancements.
See VCS's new "Dynamic Multi-cycle Path Verification" to catch bugs at RTL, and performance improvements with the "Dynamic Test Loading" and "Distributed Compile" capabilities.
Learn more about enhancements in the latest VCS and Verdi releases for Real Number Modeling (RNM), Interface Elements (IE) debug, and running multi-day AMS simulations.
With the growing complexity of testbenches, TB debugging has undoubtedly become the most challenging bottleneck in today's verification flow. To maximize the efficiency of the daily TB debugging process, Verdi automates the instant-recall flow for UVM debug: the simulation process and debugging environment at the time a UVM error occurs are kept and recalled automatically, drastically reducing debug turnaround time. In addition, Verdi facilitates the dumping and display of class objects and UVM objects, providing information on dynamic objects and visualization in the GUI that takes testbench debug to the next level.
Learn how to use interactive Verdi with the VCS solver engine and debug complex constraint failures.
Data centers and cloud services are the heart of digital transformation. With the move to greater complexity, problems that were once isolated to individual design blocks are now system-level concerns, and cache coherency is just the latest of them. Every SoC team is facing, or will face, the challenge of accelerating coverage closure to meet tapeout schedules. Proper verification requires fast cycle-accurate models, coherency-aware system-level scenarios, and, most importantly, system monitors that ensure coherency is maintained across the system. This tutorial presents a highly automated approach to cache coherency verification at the SoC level, providing up to 10X overall efficiency: generation of test cases to stress every aspect of a multi-processor, multi-memory, multi-level-cache design. The solution requires no specialized knowledge of cache algorithms or of the underlying generation technology.
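The kind of invariant a system-level coherency monitor checks can be sketched in Python. This is a simplified MESI single-writer check with invented cache names; a real monitor tracks many more protocol details and transient states.

```python
# Sketch: the single-writer invariant a system-level coherency monitor
# checks for each cache line under a simplified MESI protocol.
def coherent(line_states):
    """line_states maps cache name -> MESI state for one cache line."""
    states = line_states.values()
    owners = [s for s in states if s in ("M", "E")]  # write permission
    shared = [s for s in states if s == "S"]
    if len(owners) > 1:
        return False  # two caches believe they own the line
    if owners and shared:
        return False  # an owned copy must not coexist with shared copies
    return True
```

A system monitor samples every cache's state for a line on each transaction and flags the test the moment this predicate fails, which is what lets randomly generated scenarios find coherency bugs without any protocol knowledge in the tests themselves.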
SNUG thanks the members of the Technical Committee who volunteer their time and expertise to support SNUG’s technical quality, local perspective and value to the users of Synopsys tools and technology.
Savita Banerjee, Facebook
Savita Banerjee leads DFX strategy for Augmented Reality products at Facebook Reality Labs. She received her Ph.D. from the University of Massachusetts at Amherst and started her career at Bell Labs. Savita is recognized for her contributions to advancements in silicon technology for storage, networking, data centers and AR/VR applications. At Microsoft, she led design for test strategy for Hololens and Xbox utilizing advanced process nodes with optimized flows to meet their stringent product requirements. She is passionate about building disruptive technologies for next generation compute platforms that improve how we work, communicate, and have fun.
Savita is committed to global citizenship, youth empowerment and cultivating diversity and inclusion at work. She is the proud mother of two who loves the outdoors, classic rock and making pizza.
Sergei Babokhov, Intel
Laurent Besson, Easii IC
Gregg Bromley, GEO Semiconductor
David Brownell, Analog Devices
Thomas Buerner, Nokia
Ahmet Ceyhan, Intel
Tarun Chawla, STMicroelectronics
Leah Clark, Synopsys
Jon Colburn, NVIDIA
Andy Copperhall, Independent
Cliff Cummings, Paradigm Works
Al Czamara, Test Evolution
Jack Dong, Intel
Ravikishore Gandikota, NVIDIA
Majid Ghameshlu, Siemens AG (Austria)
Ralph Goergen, NXP Semiconductors
Ronald Goodstein, Lockheed Martin
Soenke Grimpen, Infineon
Peter Grove, Dialog Semiconductor
Anwarul Hasan, Independent
Zafar Hasan, NVIDIA
Anand Iyer, Microsoft
Ronald Kalim, Intel
Brian Kane, Northrop Grumman
Victoria Kolesov, Intel
Mohan Krishnareddy, Arteris
Claus Kuntzsch, University of Applied Sciences Nuremberg
Farid Labib, GLOBALFOUNDRIES
Boris Litinsky, Juniper
Charles Magnuson, Intel
Tom Mahatdejkul, Arm
Corbett Marler, Intel
Stella Matarrese, STMicroelectronics
Karsten Matt, GLOBALFOUNDRIES
Didier Maurer, IC'ALPS
Glen McDonnell, Broadcom
Don Mills, Microchip
Jeff Montesano, Verilab
Bryan Morris, Ciena
Naveen Mysore, Intel
Nathalie Meloux, STMicroelectronics
Nitin Navale, Xilinx
Giuseppe Notarangelo, STMicroelectronics
Firouzeh Nourkhalaj, Synopsys
Sathappan Palaniappan, Broadcom
Sachin Parikh, Broadcom
Olivia Poon, Marvell Semiconductor
Frank Poppen, OFFIS - Institute for Information Technology
Herbert Preuthen, GLOBALFOUNDRIES
PD Priyadarshan, Cisco
Karthik Rajan, Microchip
Francois Ravatin, STMicroelectronics
Jeremy Ridgeway, Broadcom
Chris Kiegle, Marvell Semiconductor
Jason Rziha, Microchip
Kiran Sama, Amazon
Robert Siegmund, GLOBALFOUNDRIES
Neel Sonara, Broadcom
Mark Sprague, Intel
Matthew Streyle, Samsung
Sean Sun, NXP Semiconductors
Ravish Sunny, Intel
Manfred Thanner, NXP Semiconductors
Tony Todesco, AMD
Jeff Vance, Verilab
Alessandro Valerio, STMicroelectronics
Raj Varada, Intel
Sudha Vasu, NVIDIA
Sandeep Venishetti, Google
Upasna Vishnoi, Marvell Semiconductor
Viba Viswanathan, Samsung
Krishna Vittala, Microchip
Jon Wei, NovuMind
Jing Zhang, Intel
Make a Difference at SNUG! Join the Technical Committee
Learn More