Multi-Sub Optimizer Tutorial (page 9)

Running a New Optimization

Now we are ready to optimize the Iteration 2 configuration in tutorial-new-3.msop. Press the Run button on the toolbar. This optimization was run for about 20 minutes on a CPU supporting eight concurrent threads, and the result has been saved as tutorial-new-4.msop in the tutorial project samples. Open that project to see the same results shown below. The optimization gives the following result.

Iteration 2 After Optimization

It's useful to compare this result with Iteration 1 after optimization, shown below.

Iteration 1 After Optimization

Evaluating the Results

Even with the addition of two PEQs per sub in Iteration 2, increasing the maximum optimization frequency from 160 Hz to 190 Hz still seems to have compromised our low-frequency seat-to-seat consistency a bit. Looking at both graphs from 20 Hz to 60 Hz, the seat-to-seat response variation of Iteration 1 seems a little better overall than that of Iteration 2.

In comparing the performance of Iteration 1 and Iteration 2, phrases like "seems to have compromised our low-frequency seat-to-seat consistency a bit" and "seems a little better than" don't tell us much. We'd like to put some numbers to these observations. While an optimization is running, the Optimization Status tab of the Output window shows the current value of the optimization error. See this figure for an example. If we haven't closed the project since running the optimization, we can still see that error for Iteration 2. Once we close the project, that information is lost, and at any rate, we've already lost the error information from Iteration 1. How do we get the error information back? And how can we compare the errors of Iteration 1 and Iteration 2 in a meaningful, quantitative way?

The Configuration Performance Metrics Dialog

The Configuration Performance Metrics dialog is the tool we want to use. Choose Config, Performance Metrics from the main menu to invoke it. The dialog will appear as in the figure below. I've annotated this figure to highlight the most important data it shows.

Configuration Performance Metrics Dialog

This dialog allows us to show a number of different errors as the optimizer itself would have computed them. This will give us the ability to compare the performance of different configurations in a quantitative way.

Note: In order to get answers that match the numbers shown below for this project (tutorial-new-4.msop), make sure Allow different SPLs at different listening positions is checked.

At the top right, the three most important errors reported by this dialog are highlighted. These are:

- The MLP target error: the deviation of the main listening position (MLP) response from the target curve.
- The seat-to-seat variation: the variation of the responses across the listening positions.
- The final error: the overall error the optimizer minimizes, combining the MLP target error and the seat-to-seat variation.
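MSO computes these errors internally, and this tutorial doesn't spell out the exact formulas. To make the three metrics more concrete, here is a minimal sketch of how such RMS-style error measures could be computed over a frequency grid. Everything in it is an assumption for illustration: the function name performance_metrics, the mlp_weight parameter, and the level-offset handling are hypothetical, not MSO's actual implementation.

```python
import numpy as np

def rms(x):
    """Root-mean-square of an array of dB deviations."""
    return np.sqrt(np.mean(np.square(x)))

def performance_metrics(freqs, responses_db, target_db, mlp_index,
                        f_min, f_max, mlp_weight=0.5):
    """Illustrative error metrics over a restricted frequency range.

    freqs        : 1-D array of frequencies in Hz
    responses_db : 2-D array of dB SPL values, one row per listening position
    target_db    : target curve in dB on the same frequency grid
    mlp_index    : row index of the main listening position (MLP)
    """
    band = (freqs >= f_min) & (freqs <= f_max)   # restrict to the chosen range
    resp = responses_db[:, band]
    tgt = target_db[band]

    # MLP target error: RMS deviation of the MLP response from the target,
    # with the overall level offset removed first.
    mlp_dev = resp[mlp_index] - tgt
    mlp_error = rms(mlp_dev - mlp_dev.mean())

    # Seat-to-seat variation: RMS spread of the seats about their mean
    # response. Removing each seat's own average level first is one way to
    # model the "allow different SPLs at different listening positions" option.
    leveled = resp - resp.mean(axis=1, keepdims=True)
    seat_error = rms(leveled - leveled.mean(axis=0))

    # Final error: one plausible combination of the two components.
    final_error = np.sqrt(mlp_weight * mlp_error**2 +
                          (1.0 - mlp_weight) * seat_error**2)
    return mlp_error, seat_error, final_error
```

The band mask in the first line of the function corresponds, in this sketch, to the custom frequency range discussed below.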

The Need for Custom Frequency Ranges

We'll be comparing Iteration 1 and Iteration 2 using these three metrics. We can do that by selecting them in the Configuration Performance Metrics dialog and just jotting down the three results for each configuration.

This sounds simple, but there is one catch. Over what frequency range are these errors computed? By default, the Configuration Performance Metrics dialog uses the Frequency range to optimize that you specify on the Method page of the Optimization Options property sheet. Recall from our previous work that this range was 15 Hz to 160 Hz for Iteration 1, and 15 Hz to 190 Hz for Iteration 2. So if we were to just write down the errors as-is, we'd be comparing optimization errors taken over two different frequency ranges, which is not a fair comparison.
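Using the hypothetical sketch above, the unfairness is easy to demonstrate: evaluating the very same data over two different ranges returns two different sets of numbers, so only errors computed over the same range are comparable. The data below is synthetic, purely for illustration.

```python
import numpy as np

# Synthetic data: 4 listening positions on a log-spaced grid from 10 to 250 Hz.
rng = np.random.default_rng(0)
freqs = np.geomspace(10.0, 250.0, 200)
target_db = np.full(freqs.size, 75.0)
responses_db = 75.0 + rng.normal(scale=1.0, size=(4, freqs.size))

# The same configuration, two frequency ranges, two different answers.
errs_160 = performance_metrics(freqs, responses_db, target_db, mlp_index=0,
                               f_min=15.0, f_max=160.0)
errs_190 = performance_metrics(freqs, responses_db, target_db, mlp_index=0,
                               f_min=15.0, f_max=190.0)
print("15-160 Hz (MLP, seat-to-seat, final):", errs_160)
print("15-190 Hz (MLP, seat-to-seat, final):", errs_190)
```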

The solution to this inconsistency is to use a custom frequency range for each configuration. First, we'll compare them over the range 15 Hz to 160 Hz, then we'll do the comparison over the 15 Hz to 190 Hz range. For the first case, perform the following steps:

1. Select the configuration to evaluate and open the Configuration Performance Metrics dialog.
2. Enable the custom frequency range option and set the range to 15 Hz to 160 Hz.
3. Note the MLP target error, seat-to-seat variation, and final error.
4. Repeat steps 1-3 for the other configuration.

The figure below shows the result for Iteration 2 after specifying this custom frequency range.

Configuration Performance Metrics Dialog

The error results over the 15 Hz to 160 Hz frequency range are summarized in the table below.

Configuration   MLP target error   Seat-to-seat variation   Final error
Iteration 1     0.76 dB            0.70 dB                  0.72 dB
Iteration 2     0.69 dB            0.72 dB                  0.71 dB

Errors of Iteration 1 and Iteration 2, 15 Hz - 160 Hz

This result is a little surprising, given that earlier we observed the seat-to-seat variation of Iteration 2 in the 20 Hz to 60 Hz range to be worse than that of Iteration 1. As an experiment, set the custom frequency range for both Iteration 1 and Iteration 2 to 20 Hz to 60 Hz, using the same method we just used above for setting custom frequency ranges. The results are as follows:

Configuration   MLP target error   Seat-to-seat variation   Final error
Iteration 1     0.43 dB            0.40 dB                  0.41 dB
Iteration 2     0.44 dB            0.62 dB                  0.58 dB

Errors of Iteration 1 and Iteration 2, 20 Hz - 60 Hz

The poorer seat-to-seat variation of Iteration 2 is clearly shown here. Apparently, better matching of the responses toward the higher end of the frequency range evens things out for Iteration 2 when the wider 15 Hz - 160 Hz range is considered.
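If the metrics behave like RMS values over a frequency grid, as assumed in the earlier sketch, this evening-out effect is easy to reproduce: the wide-band figure blends the sub-band errors, weighted by the number of frequency points in each band. The numbers below are made up purely for illustration and are not the tutorial's data.

```python
import numpy as np

def combined_rms(e_low, n_low, e_high, n_high):
    """RMS error over a wide band, built from the RMS errors of two
    sub-bands weighted by the number of frequency points in each."""
    return np.sqrt((n_low * e_low**2 + n_high * e_high**2) / (n_low + n_high))

# Made-up band errors: a configuration that is worse below 60 Hz but better
# above it can still come out ahead over the full band.
print(combined_rms(0.62, 40, 0.75, 100))  # worse low band, better high band: ~0.72
print(combined_rms(0.40, 40, 0.85, 100))  # better low band, worse high band: ~0.75
```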

Finally, let's repeat this exercise with Iteration 1 and Iteration 2 using a custom frequency range of 15 Hz to 190 Hz. Again, use the method described above for setting custom frequency ranges. We get the following results:

Configuration   MLP target error   Seat-to-seat variation   Final error
Iteration 1     2.17 dB            1.31 dB                  1.57 dB
Iteration 2     0.84 dB            0.79 dB                  0.80 dB

Errors of Iteration 1 and Iteration 2, 15 Hz - 190 Hz

Here, Iteration 2 is better in every measure. This is to be expected, as the optimization frequency range of Iteration 1 only went up to 160 Hz, so for Iteration 1, there was no control over the responses from 160 Hz to 190 Hz.

Click Close or press Esc to exit the Configuration Performance Metrics dialog.

In the next section, we'll look at trying to reduce the errors further by using shared filters in MSO.