Running a New Optimization
Now we are ready to optimize the Iteration 2 configuration in tutorial-new-3.msop. Press the Run button on the toolbar. On a CPU supporting eight concurrent threads, the optimization ran for about 20 minutes. The result has been saved as tutorial-new-4.msop in the tutorial project samples; open that project to see the same results shown below. This optimization gives the following result.
It's useful to compare this result with Iteration 1 after optimization, shown below.
Evaluating the Results
Even with the addition of two PEQs per sub in Iteration 2, increasing the maximum optimization frequency from 160 Hz to 190 Hz still seems to have compromised our low-frequency seat-to-seat consistency a bit. Looking at both graphs from 20 Hz to 60 Hz, the seat-to-seat response variation of Iteration 1 seems a little better than Iteration 2 overall.
In comparing the performance of Iteration 1 and Iteration 2, the use of phrases like "seems to have compromised our low-frequency seat-to-seat consistency a bit" and "seems a little better than" doesn't tell us much. We'd like to put some numbers to these observations. While optimizing, the Optimization Status tab of the Output window shows the current value of the optimization error. See this figure for an example. If we haven't closed the project since running the optimization, we can still see that error from Iteration 2. If we close the project, we'll lose that, and at any rate, we've already lost the error information from Iteration 1. How do we get the error information back? And how can we compare errors of Iteration 1 and Iteration 2 in a meaningful, quantitative way?
The Configuration Performance Metrics Dialog
The Configuration Performance Metrics dialog is the tool we want to use. Choose Config, Performance Metrics from the main menu to invoke it. The dialog will appear as in the figure below. I've annotated this figure to highlight the most important data it shows.
This dialog allows us to show a number of different errors as the optimizer itself would have computed them. This will give us the ability to compare the performance of different configurations in a quantitative way.
Note: In order to get answers that match the numbers shown below for this project (tutorial-new-4.msop), make sure Allow different SPLs at different listening positions is checked.
At the top right, the three most important errors reported by this dialog are highlighted. These are:
- MLP target error, dB RMS: This error is a measure of how much the MLP response deviates from the target. If the MLP response perfectly matched the target, this error would be zero.
- Seat-to-seat variation, dB: This error is a measure of how much the responses at all the listening positions deviate from one another. If all the responses matched one another perfectly, this error would be zero regardless of the shape of the responses.
- Final error, dB RMS: This is the error displayed in the Output window when the optimization is running. It combines the MLP target error, dB RMS and the Seat-to-seat variation, dB into a single composite error that the optimizer attempts to minimize.
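The intent behind these three metrics can be illustrated with a short sketch. MSO's actual error formulas and weighting are not documented here, so the RMS definitions, the function names, and the fixed 50/50 weighting below are assumptions for illustration only, not MSO's internals.

```python
import numpy as np

def mlp_target_error(mlp_response_db, target_db):
    """RMS deviation (dB) of the MLP response from the target curve.

    Zero when the MLP response matches the target exactly."""
    return np.sqrt(np.mean((mlp_response_db - target_db) ** 2))

def seat_to_seat_variation(responses_db):
    """RMS deviation (dB) of each seat's response from the across-seat mean.

    Zero when all seats match one another, regardless of response shape."""
    mean_response = responses_db.mean(axis=0)
    return np.sqrt(np.mean((responses_db - mean_response) ** 2))

def final_error(mlp_err, seat_err, weight=0.5):
    """Composite error; the 50/50 weight is a placeholder, not MSO's value."""
    return weight * mlp_err + (1.0 - weight) * seat_err

# Toy data: 4 seats x 3 frequency points, responses in dB relative to target
responses = np.array([
    [ 0.0, 1.0, -1.0],   # seat 1 (MLP)
    [ 0.5, 0.5, -0.5],   # seat 2
    [-0.5, 1.5, -1.5],   # seat 3
    [ 0.0, 1.0, -1.0],   # seat 4
])
target = np.zeros(3)

mlp_err = mlp_target_error(responses[0], target)
seat_err = seat_to_seat_variation(responses)
print(mlp_err, seat_err, final_error(mlp_err, seat_err))
```

Note that the two component errors pull in different directions: an EQ change that flattens the MLP response can worsen the seat-to-seat match, which is why the optimizer minimizes the composite rather than either metric alone.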
The Need for Custom Frequency Ranges
We'll be comparing Iteration 1 and Iteration 2 using these three metrics. We can do that by selecting them in the Configuration Performance Metrics dialog and just jotting down the three results for each configuration.
This sounds simple, but there is one catch. Over what frequency range are these errors computed? By default, the Configuration Performance Metrics dialog uses the Frequency range to optimize that you specify on the Method page of the Optimization Options property sheet. Recall from our previous work that this range was 15 Hz to 160 Hz for Iteration 1, and 15 Hz to 190 Hz for Iteration 2. So if we were to just write down the errors as-is, we'd be comparing optimization errors taken over two different frequency ranges, which is not a fair comparison.
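The effect of the chosen range can be seen in a small sketch. This is a simplified stand-in for the dialog's calculation (MSO's actual frequency weighting may differ): the same deviation curve yields a noticeably larger RMS error when the band is widened to include a region the optimizer never controlled.

```python
import numpy as np

def rms_error_in_band(freqs, deviation_db, f_min, f_max):
    """RMS of a dB deviation curve, restricted to [f_min, f_max]."""
    band = (freqs >= f_min) & (freqs <= f_max)
    return np.sqrt(np.mean(deviation_db[band] ** 2))

# Toy deviation curve: small errors up to 160 Hz (the optimized band),
# larger errors above it where the optimizer applied no control
freqs = np.arange(15.0, 195.0, 5.0)
deviation = np.where(freqs <= 160.0, 0.5, 2.0)

narrow = rms_error_in_band(freqs, deviation, 15.0, 160.0)
wide = rms_error_in_band(freqs, deviation, 15.0, 190.0)
print(narrow, wide)  # the wider band picks up the uncontrolled region
```

Comparing `narrow` for one configuration against `wide` for another would be meaningless, which is exactly why both configurations must be evaluated over the same custom range.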
The solution to this inconsistency is to use a custom frequency range for each configuration. First, we'll compare them over the range 15 Hz to 160 Hz, then we'll do the comparison over the 15 Hz to 190 Hz range. For the first case, perform the following steps:
- Select Iteration 1 in the left column of the Configuration Performance Metrics dialog.
- Click the Use a custom frequency range radio button.
- Verify that the Maximum freq, Hz: edit control shows the value of 160. If it does not, change it so that it does.
- If the Recalculate button is enabled, click it to make sure the error calculations are up to date.
- Select Iteration 2 in the left column of the Configuration Performance Metrics dialog.
- Click the Use a custom frequency range radio button.
- Change the value in the Maximum freq, Hz: edit control from 190 to 160.
- If the Recalculate button is enabled, click it to make sure the error calculations are up to date.
The figure below shows the result for Iteration 2 after specifying this custom frequency range.
The error results over the 15 Hz to 160 Hz frequency range are summarized in the table below.
| Configuration | MLP target error | Seat-to-seat variation | Final error |
|---|---|---|---|
| Iteration 1 | 0.76 dB | 0.70 dB | 0.72 dB |
| Iteration 2 | 0.69 dB | 0.72 dB | 0.71 dB |
Errors of Iteration 1 and Iteration 2, 15 Hz - 160 Hz
This result is a little surprising, given that earlier we observed the seat-to-seat variation of Iteration 2 in the frequency range from 20 Hz to 60 Hz to be worse than that of Iteration 1. Just as an experiment, set the custom frequency range for both Iteration 1 and Iteration 2 to 20 Hz to 60 Hz, using the same method described above for setting custom frequency ranges. The results are as follows:
| Configuration | MLP target error | Seat-to-seat variation | Final error |
|---|---|---|---|
| Iteration 1 | 0.43 dB | 0.40 dB | 0.41 dB |
| Iteration 2 | 0.44 dB | 0.62 dB | 0.58 dB |
Errors of Iteration 1 and Iteration 2, 20 Hz - 60 Hz
The poorer seat-to-seat variation of Iteration 2 is clearly visible here. Evidently, Iteration 2's better response matching toward the upper end of its frequency range evens things out when the wider 15 Hz - 160 Hz range is considered.
Finally, let's repeat this exercise with Iteration 1 and Iteration 2 using a custom frequency range of 15 Hz to 190 Hz. Again, use the method described above for setting custom frequency ranges. We get the following results:
| Configuration | MLP target error | Seat-to-seat variation | Final error |
|---|---|---|---|
| Iteration 1 | 2.17 dB | 1.31 dB | 1.57 dB |
| Iteration 2 | 0.84 dB | 0.79 dB | 0.80 dB |
Errors of Iteration 1 and Iteration 2, 15 Hz - 190 Hz
Here, Iteration 2 is better in every measure. This is to be expected: the optimization frequency range of Iteration 1 only extended to 160 Hz, so for Iteration 1, the optimizer had no control over the responses from 160 Hz to 190 Hz.
Click Close or press Esc to exit the Configuration Performance Metrics dialog.
In the next section, we'll look at trying to reduce the errors further by using shared filters in MSO.