Fun with Flags! or Fun with Error Norms

Our favorite TV geek from The Big Bang Theory, Dr. Sheldon Cooper, presents Fun with Flags

Ok, that was just a cheap trick to get your attention. The real purpose of this post is to discuss…

Fun with Error Norms!

I know, how much geekier can it get? Specifically, the error norms as used in the 3DExperience Calibration App. I put together an Excel file for three reasons:

  1. Compare the error norm used in Abaqus batch pre versus in the Calibration App.
  2. Show a calculation example of all of the error norms in the Calibration App.
  3. Use two datasets, representing two repetitions of the same test, and show that trying to use the optimization process to ‘average’ across multiple test repetitions is a very bad idea.

The Excel file is set up to read from left to right using different tabs.

Tab Scenario gives an overview of the purpose.

Tab Neo-Hooke Curve Fit shows synthetic data for two test repetitions in columns A & B. In both cases the number of data points is n=6. Repetition 1 is in rows 2-11 and repetition 2 is in rows 17-26. Column G calculates the RSE error norm as it is done in Abaqus batch pre. Column H calculates the RSE error norm as is done in the Calibration App (MAS). The slight difference between these two error norms is shown on the “Scenario” tab.

Note: Abaqus batch pre will discard strain=stress=0 points, so you see reference to n=5 for pre.
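The exact definitions of the two RSE variants live on the Scenario tab of the Excel file, so the sketch below is illustrative only: the function names, the specific normalization, and the sample numbers are my assumptions, not the app's exact formulas. It does show the one behavior stated above, namely that discarding the strain=stress=0 point changes the point count n that enters the norm:

```python
import math

# Hedged sketch of two relative-squared-error (RSE) style norms. The exact
# formulas for Abaqus batch pre and the Calibration App (MAS) are documented
# on the Excel "Scenario" tab; treat these as illustrations only.

def rse_pre_style(model, test):
    # Batch pre discards strain=stress=0 points, so n drops from 6 to 5 here
    pairs = [(m, t) for m, t in zip(model, test) if t != 0.0]
    return sum((m / t - 1.0) ** 2 for m, t in pairs)

def rse_normalized_style(model, test):
    # A normalized variant (average over points, then square root); the
    # Calibration App's actual normalization may differ slightly.
    pairs = [(m, t) for m, t in zip(model, test) if t != 0.0]
    return math.sqrt(sum((m / t - 1.0) ** 2 for m, t in pairs) / len(pairs))

test_stress  = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # n=6 points, one at zero
model_stress = [0.0, 1.1, 2.2, 3.3, 4.4, 5.5]   # model 10% high everywhere

print(rse_pre_style(model_stress, test_stress))         # ≈ 0.05 over n=5 points
print(rse_normalized_style(model_stress, test_stress))  # ≈ 0.1
```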

Tab NH Cal n=6 & N=11 uses two datasets that have a different number of data points.

All other tabs are for reference purposes.

Tab Neo-Hooke Curve Fit:  We have two test repetitions, #1 test has an NH (Neo-Hooke) answer of C10=0.2 and test #2 has an NH (Neo-Hooke) answer of C10=0.4. Both tests are a simple uniaxial pull test to the same maximum strain.
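For reference, the incompressible Neo-Hooke model in uniaxial tension gives a nominal stress of T = 2·C10·(λ − λ⁻²), with λ = 1 + engineering strain. A minimal sketch of generating the two synthetic repetitions (the strain points are illustrative, not the ones used in the Excel file):

```python
def nh_uniaxial_stress(c10, strain):
    # Incompressible Neo-Hooke, uniaxial tension:
    #   nominal stress T = 2*C10*(lam - lam**-2),  lam = 1 + eng. strain
    lam = 1.0 + strain
    return 2.0 * c10 * (lam - lam**-2)

# Two synthetic repetitions pulled to the same maximum strain
strains = [0.1, 0.2, 0.3, 0.4, 0.5]
rep1 = [nh_uniaxial_stress(0.2, e) for e in strains]  # test #1: C10 = 0.2
rep2 = [nh_uniaxial_stress(0.4, e) for e in strains]  # test #2: C10 = 0.4

# Stress is linear in C10, so test #2 is exactly twice test #1
print(all(abs(b - 2 * a) < 1e-12 for a, b in zip(rep1, rep2)))  # True
```

The linearity in C10 is worth noticing: it is what makes the MAE behavior discussed below so clean.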

In general, when working with test data, one should review all of the test data, and use the most representative data.

Sometimes people will blindly average together all the test data from a series of repetitions. By “blindly” I mean without correcting the datasets for obvious problems, such as noisy data, clear outliers, or zero-shift errors. This is a bad idea.

Sometimes people will bring all of the test repetitions into the Calibration App and think that the optimization process will magically find the “best” average answer. This is also a bad idea.

In row 32, the weighted objective is formed for all of the error norms in MAS (weights=1 in this work). This Excel file is set up to allow the use of the Excel solver.
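A stand-in for what the Excel solver does can be sketched in a few lines: a weighted sum of per-dataset error terms, minimized over C10 with a simple 1-D golden-section search. The quadratic objective here is illustrative (two squared-error terms, one minimized at C10=0.2 and one at C10=0.4, weights=1), not the file's actual formulas:

```python
def golden_minimize(f, lo, hi, tol=1e-8):
    # Golden-section search: a minimal 1-D stand-in for the Excel solver
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

def objective(c10):
    # Illustrative weighted objective, weights = 1: two squared-error
    # terms pulling toward C10=0.2 and C10=0.4 respectively
    return (c10 - 0.2) ** 2 + (c10 - 0.4) ** 2

print(round(golden_minimize(objective, 0.0, 1.0), 4))  # 0.3
```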

In rows 34 and below, values are gathered to show how the weighted objective varies as a function of C10. The image below is taken from the Excel file. The purpose of this graph is to show that the “optimal” solution depends on the choice of error norm. For RSE, the optimal solution is C10=0.24; for MSE, the optimal solution is C10=0.30. The MAE case is quite interesting: there is no unique solution to the problem. For all values of C10 between 0.20 and 0.40, the MAE error norm has the same value, so an optimizing solver may return an answer that depends on the initial value.
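These minima can be reproduced with a short sweep. Assuming two repetitions generated exactly from C10=0.2 and C10=0.4 at the same strain points (illustrative, not the Excel file's values) and a relative form for RSE, a grid over C10 recovers the same answers, and the MAE plateau falls out of the linearity of the Neo-Hooke stress in C10:

```python
def nh_stress(c10, lam):
    # Incompressible Neo-Hooke uniaxial nominal stress
    return 2.0 * c10 * (lam - lam**-2)

strains = [0.1, 0.2, 0.3, 0.4, 0.5]       # illustrative strain points
lams = [1.0 + e for e in strains]
# Two repetitions generated exactly from C10=0.2 and C10=0.4
data = ([(lam, nh_stress(0.2, lam)) for lam in lams]
        + [(lam, nh_stress(0.4, lam)) for lam in lams])

def rse(c10):  # relative squared error (hedged sketch of the app's RSE)
    return sum((nh_stress(c10, lam) / t - 1.0) ** 2 for lam, t in data)

def mse(c10):  # mean squared error
    return sum((nh_stress(c10, lam) - t) ** 2 for lam, t in data) / len(data)

def mae(c10):  # mean absolute error
    return sum(abs(nh_stress(c10, lam) - t) for lam, t in data) / len(data)

grid = [round(0.10 + 0.01 * i, 2) for i in range(41)]  # C10 in [0.10, 0.50]
print("RSE minimum:", min(grid, key=rse))   # 0.24
print("MSE minimum:", min(grid, key=mse))   # 0.3
# MAE is flat for every C10 in [0.2, 0.4]: stress is linear in C10, so the
# per-point error pair |C10-0.2| + |C10-0.4| is constant on that interval.
print(abs(mae(0.25) - mae(0.35)) < 1e-12)   # True
```

The RSE minimum lands closer to 0.2 because relative errors penalize the same absolute deviation more heavily against the lower-stress dataset, while MSE splits the difference symmetrically at 0.30.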

To access the two files, visit the SIMULIA Community and download the zip file, including a 3dxml file for import into the 3DX Calibration App and the Excel file discussed above.



SIMULIA offers an advanced simulation product portfolio, including Abaqus, Isight, fe-safe, Tosca, Simpoe-Mold, SIMPACK, CST Studio Suite, XFlow, PowerFLOW, and more. The SIMULIA Community is the place to find the latest resources for SIMULIA software and to collaborate with other users. The key that unlocks the door of innovative thinking and knowledge building, the SIMULIA Community provides you with the tools you need to expand your knowledge, whenever and wherever.

Tod Dalrymple

R&D Applications Director at Dassault Systemes Simulia Corp.
Tod has worked as the Engineering Services manager and later the General Manager of the Great Lakes COE in the USA. He now works in the SIMULIA R&D organization, focusing on how we deliver advanced material modeling technology to our customers.