Benchmarks and Visualization

Quality

To evaluate the quality of our implementation, we compute the mean squared error over the unknown pixels of the benchmark images of [RRW+09].
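The metric above can be sketched as follows. This is a minimal illustration, not the benchmark code itself: it assumes a trimap encoded with 0 for background, 1 for foreground, and intermediate values for unknown pixels, and restricts the mean squared error to that unknown region.

```python
import numpy as np

def mse_unknown(alpha_est, alpha_gt, trimap):
    """Mean squared error restricted to the unknown region of the trimap."""
    # Unknown pixels are those marked neither background (0) nor foreground (1).
    unknown = (trimap != 0.0) & (trimap != 1.0)
    diff = alpha_est[unknown] - alpha_gt[unknown]
    return float(np.mean(diff * diff))

# Toy example: only the two unknown (0.5) trimap pixels contribute to the error.
trimap = np.array([[0.0, 0.5], [1.0, 0.5]])
alpha_gt = np.array([[0.0, 0.5], [1.0, 0.5]])
alpha_est = np.array([[0.0, 0.4], [1.0, 0.8]])
print(mse_unknown(alpha_est, alpha_gt, trimap))  # close to 0.05
```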

_images/laplacian_quality_many_bars.png

Figure 1: Mean squared error between the estimated alpha matte and the ground truth alpha matte.

_images/laplacians.png

Figure 2: Mean squared error across all images from the benchmark dataset.

Visualization

The following videos show the intermediate iterates of the different methods. Note that the videos are time-warped.

CF (closed-form) KNN (K-nearest neighbors)
LKM (large kernel matting) RW (random walk)

Performance

We compare the computational runtime of our solver with that of other solvers: PyAMG, UMFPACK, AMGCL, MUMPS, Eigen, and SuperLU. Figure 3 shows that our implementation of the conjugate gradient method, combined with an incomplete Cholesky decomposition preconditioner, outperforms the other methods by a large margin. For the iterative solver we used an absolute tolerance of \(10^{-7}\), scaled by the number of known pixels, i.e. pixels that are marked as either foreground or background in the trimap.
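The tolerance scaling described above can be sketched with SciPy's conjugate gradient solver. This is a hedged illustration, not the benchmark setup: the small tridiagonal system stands in for the actual matting system, `num_known` is a hypothetical count of known trimap pixels, and a Jacobi (diagonal) preconditioner substitutes for the incomplete Cholesky preconditioner, which SciPy does not ship.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# Small SPD tridiagonal system standing in for the matting system.
n = 100
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Hypothetical count of known (foreground/background) trimap pixels.
num_known = 40
atol = 1e-7 * num_known  # absolute tolerance scaled by the number of known pixels

# Jacobi (diagonal) preconditioner as a simple stand-in for incomplete Cholesky.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

x, info = cg(A, b, M=M, atol=atol)
print(info)  # 0 indicates convergence
```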

_images/time_image_size.png

Figure 3: Comparison of runtime for different image sizes.

_images/average_running_time.png

Figure 4: Mean running time of each solver in seconds.

_images/average_peak_memory_usage.png

Figure 5: Peak memory usage of each solver in MB.