IEC 61000-4-3

From RadiWiki

Revision as of 13:39, 13 January 2009

The EN 61000-4-3 standard describes the testing and measurement techniques for the radiated, radio-frequency, electromagnetic field immunity test.

There have been several releases of this standard, and most releases describe a different method for the calculation of the Uniform Field Area.

Uniform Field Area calculation according to EN 61000-4-3

Sixteen points are taken on a flat surface, 1.5 m by 1.5 m, 0.8 m above the floor. According to the standard, the room passes the homogeneity criterion as long as the field strength of 75% of the points (12 out of 16) in the uniform area is within 0 to +6 dB.

In order to achieve this, one should take the average of the 16 field strengths and remove a maximum of 4 out of the 16 points which deviate most from the average.

The point with the lowest field strength (or highest forward power) is taken as the calibration value for this frequency point.
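The averaging procedure described above can be sketched as follows. This is a minimal illustrative sketch, not taken from the standard's text; the function name `calibrate_point` and the use of a plain list of field strengths in V/m are assumptions for the example.

```python
def calibrate_point(fields_v_per_m, max_removed=4):
    """Drop up to `max_removed` points that deviate most from the average,
    then return the lowest remaining field strength as the calibration value."""
    avg = sum(fields_v_per_m) / len(fields_v_per_m)
    # Rank points by absolute deviation from the average, largest first
    ranked = sorted(fields_v_per_m, key=lambda f: abs(f - avg), reverse=True)
    remaining = ranked[max_removed:]
    # The weakest remaining point corresponds to the highest required
    # forward power, so it sets the calibration value for this frequency
    return min(remaining)
```

For instance, with twelve points at 10.0 V/m and four deviating points (14.0, 15.0, 16.0 and 9.0 V/m), the four deviating points are removed and 10.0 V/m becomes the calibration value.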

Remarks on the EN/IEC 61000-4-3 calibration method

The calibration method of this standard has been subject to a number of changes over the past years. The method as described allowed several ways of interpreting how the calculations should be performed. A number of discussion points are mentioned below:

  1. Where the 1995 version, the 1995 A1 version and the 2002 version mention throwing away a MAXIMUM of 4 out of 16 points, the question arises whether any points should be removed at all, as long as all test points comply with the 0 – 6 dB criterion. No clear clue can be found in the text. However, one can imagine that taking all 16 points into consideration (even if all of these points fall within the 0 – 6 dB criterion) will result in a higher calibration power level, and therefore a higher generated field.
  2. The goal of the above versions is to find a total of 12 points which comply with the 0 – 6 dB criterion on the one hand, and which show the lowest differences between the remaining points on the other. For this purpose, one can either calculate the average field of all 16 points once and then throw away the four points which differ most from the average, or recalculate the average field every time one point has been thrown away (i.e. calculate the average 4 times). Again, it will be obvious that this results in a different calibration file.
  3. The 1995 A1 version and the 2002 version allow either a constant power calibration method or a constant field calibration method. Where the constant power method results in the fastest calibration time (no levelling is required), the constant field method gives the most accurate calibration result. This is because power meter linearity is in the order of a few tenths of a dB, whereas field sensor linearity is in the order of 1 dB. During a constant field calibration the field strength reading of the sensor remains the same for all test points, leaving the power meter linearity error as the main error contributor. During a constant power calibration, the field sensor linearity error is the main error contributor. However, when using the constant field method, the standard states to use the same calculation method as for the constant power method (calculate the average of the field strength in V/m and throw away the 4 of the 16 points which deviate most from the average). Obviously this cannot be done when the field strength is kept constant, as it is during a constant field calibration. Two approaches remain: either the average of the generated power levels is used to determine which points deviate most from the average, or, as an alternative, the average of the root mean square of the power levels is used. The latter procedure results in a calibration file which corresponds with a calibration file made using the constant power method. Although this sounds like the better method, the first approach also results in a calibrated room in which 12 out of the 16 points have a field strength within the criterion of 0 to +6 dB.
  4. To make the above even more complex, one could optimize the calculation algorithm in such a way that the minimum power level is required while the 0 to +6 dB criterion is still met.
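To make point 2 above concrete, the two readings of the removal rule can be compared in a short sketch. The function names and the toy data are hypothetical; a small point set is used so that the difference between the two interpretations is easy to follow.

```python
def drop_once(fields, n_remove=4):
    """Remove the n_remove points deviating most from one fixed average."""
    avg = sum(fields) / len(fields)
    ranked = sorted(fields, key=lambda f: abs(f - avg), reverse=True)
    return sorted(ranked[n_remove:])

def drop_iteratively(fields, n_remove=4):
    """Remove one worst point at a time, recalculating the average each time."""
    points = list(fields)
    for _ in range(n_remove):
        avg = sum(points) / len(points)
        worst = max(points, key=lambda f: abs(f - avg))
        points.remove(worst)
    return sorted(points)
```

For the toy set [0, 6, 10, 10] with two removals, the one-pass rule keeps [6, 10], while the iterative rule keeps [10, 10]: after the outlier 0 is removed, the recalculated average shifts upward and 6 becomes the worst point. This illustrates why the two readings produce different calibration files.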

Probably as a result of all the above issues, the EN 61000-4-3:2002 A1 version of the basic standard uses a completely different approach to calculating which points to throw away. Instead of taking the average of the points, all points are placed in descending order and, starting from the highest power level (or the lowest field strength level, if performing a constant field calibration), it is checked whether the first 12 points are within the 0 to +6 dB criterion. If not, the first point is skipped and the next 12 points are checked against the criterion. This procedure is repeated (a maximum of 5 times) until the 0 to +6 dB criterion is met. This new method answers the questions above:
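The sliding-window selection described above can be sketched as follows. This is an illustrative reading, assuming a constant power calibration with the 16 recorded forward powers as input; the function names are hypothetical. Note that a 6 dB spread in field strength corresponds to a 6 dB spread in forward power (since E is proportional to the square root of P, 20·log10 of the field ratio equals 10·log10 of the power ratio).

```python
import math

def db_ratio(p_high_w, p_low_w):
    """Ratio between two power levels, expressed in dB."""
    return 10 * math.log10(p_high_w / p_low_w)

def select_calibration_points(powers_w, needed=12):
    """Sort the required forward powers in descending order and return the
    first run of `needed` consecutive points whose spread is within 6 dB,
    or None if no such run exists (at most 5 attempts for 16 points)."""
    ordered = sorted(powers_w, reverse=True)
    for start in range(len(ordered) - needed + 1):
        window = ordered[start:start + needed]
        if db_ratio(window[0], window[-1]) <= 6.0:
            return window
    return None
```

For example, with one extreme point (say 100 W) followed by fifteen powers between 30 W and 9 W, the first window of 12 fails the 6 dB check, the extreme point is skipped, and the second window passes.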

  • Issue 1: For all frequency points, 4 points are now removed.
  • Issue 2: With the new approach, there is no room for discussion anymore.
  • Issue 3: The procedures for constant power and constant field are now clearly described and result in the same final calibration file for both methods.
  • Issue 4: No alternative calculation methods are possible anymore. This method results in the calibration file with the highest power level which complies with the 0 to +6 dB criterion.