I’m an electronics engineer who spent part of his career in the business intelligence and analytics domain. In that regard, I’m always interested in technology and business areas that have unique analytics needs. Semiconductor design closure is one such domain. With 14 nanometer geometry fabrication now coming on-line, integrated circuits are taking another geometric step in complexity, as large projects can have 200+ IP blocks in their designs (see figure below).
Variability and Velocity are more critical than Volume
When you consider that millions of transistors can constitute a block, that blocks can be chosen from libraries numbering in the thousands, and that there can be multiple variations of a block, the analytics challenge approaches that of Big Data. This is not necessarily because of overall data size, but because of data complexity, variability and velocity.
For these large projects, then, the effort to meet timing, power, IR drop and other design parameters takes geometrically longer…yet again. Of course, some of this increased verification effort can be done in parallel by multiple design teams, each working on a sub-section of the chip. But ultimately the entire system design has to be simulated to assure a right-first-time design. I’m sure most would agree with me that system failures often happen at interfaces, and whether it’s an interface within a design or a responsibility interface between designers, the situation is the same.
Why ordinary Big Data analytics won’t do the job
Effective analytics for design testing and verification provides a way to analyze interface operation from all relevant perspectives. Coming back to the topic of Big Data, my view is that the commonly used Big Data analytics tools could be helpful, but are not sufficient to meet this requirement. In particular, I observe that an analytics solution appropriate for semiconductor design must have the following capabilities:
- Support for the hierarchical nature of chip design.
- Ability to integrate information from multiple design tools and relate them in some way to each other to indicate relevant cause/effect relationships.
- The ability to compare and contrast these relationships using graphical analytics, so that the important ones stand out quickly.
- The ability to easily zoom, pivot, filter, sort, rank and perform other analytics tasks on the data to gain the right viewpoints (a rough sketch of what this could look like follows the list).
- The ability to deliver these analytics with minimal application admin or usage effort.
- Effective visualizations for key design attributes unique to semiconductor projects.
- The ability to process data from analog, digital and other common EE design and simulation tools.
- The ability to handle very complex, large chip design data structures so that requirement, specification and simulation consistency is maintained.
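To make the pivot/filter/rank idea concrete, here is a minimal sketch in Python with pandas. The block paths, corners and slack values are entirely made up; it is only meant to suggest the kind of hierarchy-aware slicing a design team might want, not any particular tool’s workflow.

```python
# A hypothetical illustration of "filter, rank, pivot" on hierarchical
# design-closure data. All block names, corners and slack values are invented.
import pandas as pd

# Flattened timing results, as they might be exported from a timing tool.
# "path" encodes the design hierarchy (top / subsystem / block).
timing = pd.DataFrame({
    "path":     ["soc/cpu/core0", "soc/cpu/core1", "soc/ddr/phy",
                 "soc/ddr/ctrl",  "soc/io/serdes", "soc/io/gpio"],
    "corner":   ["ss_0p72v_125c"] * 3 + ["ff_0p88v_m40c"] * 3,
    "slack_ps": [-35.0, 12.0, -8.0, 44.0, -2.0, 60.0],
})

# Derive the subsystem from the hierarchy path so results can be rolled up.
timing["subsystem"] = timing["path"].str.split("/").str[1]

# Filter: keep only failing (negative-slack) paths.
failing = timing[timing["slack_ps"] < 0]

# Rank: worst violations first.
worst = failing.sort_values("slack_ps").head(10)

# Pivot: worst slack per subsystem and corner, to see where closure effort belongs.
summary = timing.pivot_table(index="subsystem", columns="corner",
                             values="slack_ps", aggfunc="min")

print(worst[["path", "corner", "slack_ps"]])
print(summary)
```

In practice, of course, the interesting part is doing this across data pulled from multiple design and simulation tools at once, and keeping it consistent with the design hierarchy, which is exactly where the capabilities listed above come in.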
It seems to me that semiconductor design engineers have been quietly contending with Big Data analytics challenges even though they haven’t necessarily been part of the mainstream Big Data conversations. Conversely, the tools in use for chip design may well have some very interesting capabilities to offer other technical and business disciplines. My $.02.
Also, we’re going to be at the Design Automation Conference in San Francisco again this year. We will have a full presentation and demo agenda, a cocktail hour and prizes. Join us!
Eric ROGGE is a member of the High-Tech Industry team. You can find him on Twitter @EricAt3DS.