Output from the Parallel MIP Optimizer

The parallel MIP optimizer generates output in the node log (see Progress Reports: Interpreting the Node Log) that differs slightly from the output of the serial MIP optimizer. The following paragraphs explain those differences.

If the MIP optimizer is running in parallel, it displays elapsed time for the MIP optimizer as wall-clock time, regardless of the setting of the clock-type parameter (assuming MIP logging has not been turned off).
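For example, you can set the clock type explicitly in the interactive optimizer (a sketch; it assumes the clock-type parameter's interactive name is clocktype and that the conventional value 1 requests CPU-time reporting), yet a parallel MIP run still reports elapsed time as wall-clock time:

CPLEX> set clocktype 1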

At the end of optimization, ILOG CPLEX prints a summary of timing statistics specific to the parallel MIP optimizer, as the following sample run shows.

Problem 'fixnet6.mps.gz' read.
Read time =    0.02 sec.
CPLEX> o
Tried aggregator 1 time.
MIP Presolve modified 308 coefficients.
Aggregator did 1 substitutions.
Reduced MIP has 477 rows, 877 columns, and 1754 nonzeros.
Presolve time =    0.02 sec.
Clique table members: 2
MIP emphasis: balance optimality and feasibility
Root relaxation solution time =    0.07 sec.

        Nodes                                         Cuts/
   Node  Left     Objective  IInf  Best Integer     Best Node    ItCnt     Gap

      0     0     3192.0420    12                   3192.0420        8
*     0+    0                   0     4505.0000     3192.0420        8   29.14%
                  3384.5860    15     4505.0000     Cuts:  36       51   24.87%
                  3513.7923    17     4505.0000     Cuts:  25       92   22.00%
                  3530.1967    19     4505.0000  Flowcuts:  9      104   21.64%
*     0+    0                   0     4471.0000     3530.1967      104   21.04%
                  3604.4590    17     4471.0000 Flowcuts:  10      124   19.38%
                  3607.9420    18     4471.0000  Flowcuts:  4      131   19.30%
*     0+    0                   0     4448.0000     3607.9420      131   18.89%
                  3608.7548    20     4448.0000  Flowcuts:  3      136   18.87%
                  3617.6257    19     4448.0000  Flowcuts:  2      141   18.67%
                  3624.7454    19     4448.0000  Flowcuts:  2      150   18.51%
                  3627.2428    20     4448.0000  Flowcuts:  1      152   18.45%
*     0+    0                   0     3994.0000     3627.2428      152    9.18%
*    30+    5                   0     3985.0000     3736.8034      305    6.23%
*    46     0                   0     3983.0000     3972.7760      326    0.26%

Root relaxation processing (before b&c):
  CPU      time             =    0.60
Parallel b&c, 2 threads:
  Real     time             =    0.72
  Critical time (total)     =    0.00
  Spin     time (average)   =    0.01
                              -------
Total (sequential+parallel) =    1.32 sec.

Cover cuts applied:  1
Flow cuts applied:  38
Gomory fractional cuts applied: 9

Integer optimal solution:  Objective =    3.9830000000e+03
Solution time =    0.90 sec.  Iterations = 328  Nodes = 47

CPLEX> q

The summary at the end of the sample shows that 0.60 seconds were spent processing the root relaxation, that is, all the combined steps (preprocessing, root relaxation solution, cutting planes, heuristics) that occur at the root before the first branch. The parallel part of this sample run took 0.72 seconds of real time (that is, elapsed time for that phase).
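The breakdown can be checked with simple arithmetic: the sequential root time and the parallel real time add up to the reported total. A minimal sketch of that check, using the values from the sample run above:

```python
# Values taken from the timing summary in the sample run above.
root_cpu = 0.60       # sequential root relaxation processing
parallel_real = 0.72  # parallel branch & cut, real (wall-clock) time
total = root_cpu + parallel_real
print("Total (sequential+parallel) = %.2f sec." % total)  # 1.32 sec.
```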

Other parts of the report indicate that the processors spent an average of 0.01 seconds of real time spinning (that is, waiting for work while too few active nodes were available). The critical time, a total of 0.00 seconds of real time, is the time individual processors spent updating global, shared information. Because only one processor can access the critical region at any instant, the amount of time spent in this region really is crucial: any other processor that tries to access it must wait, sitting idle, and this idle time is counted separately from the spin time.

There is another difference in the way logging occurs in the parallel MIP optimizer. When this optimizer is called, it makes a number of copies of the problem. These copies are known as clones. The parallel MIP optimizer creates as many clones as there are threads available to it. When the optimizer exits, these clones and the memory they used are discarded.
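Because one clone is created per thread, the number of clones can be controlled through the thread limit. For example, in the interactive optimizer (a sketch, assuming the thread-limit parameter's interactive name is threads):

CPLEX> set threads 2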

If a log file is active when the clones are created, then ILOG CPLEX creates a clone log file for each clone. The clone log files are named cloneK.log, where K is the index of the clone, ranging from 0 (zero) to the number of threads minus one. Since the clones are created at each call to the parallel MIP optimizer and discarded when it exits, the clone logs are opened at each call and closed at each exit. (The clone log files are not removed when the clones themselves are discarded.)
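As an illustration of the naming convention only (this helper is not part of CPLEX), the clone log file names produced for a given thread count could be generated like this:

```python
def clone_log_names(num_threads):
    # One clone log per clone, indexed 0 .. num_threads - 1
    return ["clone%d.log" % k for k in range(num_threads)]

print(clone_log_names(2))  # ['clone0.log', 'clone1.log']
```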

The clone logs contain information normally recorded in the ordinary log file (by default, cplex.log) but inconvenient to send through the normal log channel. The information most likely to interest you consists of special messages, such as error messages, that result from calls to the LP optimizers that solve the subproblems.