The library only provides labels after inference, not beliefs, so for now I see no way to use it for learning (computing gradients, expected features, or the log partition function).
My guess is that this is because energy minimization only finds the mode, and therefore cannot in general recover the distribution.
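A toy brute-force example (my own sketch, independent of the library) makes the gap concrete: the MAP labeling carries no confidence information, while learning needs marginal beliefs from the full Gibbs distribution.

```python
import itertools
import math

# Toy 2-node, 2-label MRF: E(x1, x2) = D1[x1] + D2[x2] + V[x1][x2].
D1 = [0.0, 0.1]
D2 = [0.0, 2.0]
V = [[0.0, 1.0], [1.0, 0.0]]  # Potts-like smoothness term

def energy(x):
    return D1[x[0]] + D2[x[1]] + V[x[0]][x[1]]

configs = list(itertools.product([0, 1], repeat=2))

# Energy minimization returns only the mode (the MAP labeling) ...
mode = min(configs, key=energy)

# ... while learning needs the full Gibbs distribution p(x) ∝ exp(-E(x)).
Z = sum(math.exp(-energy(x)) for x in configs)
belief_x1 = [sum(math.exp(-energy(x)) for x in configs if x[0] == k) / Z
             for k in (0, 1)]
```

Here the mode is `(0, 0)`, yet the belief that `x1 = 0` is only about 0.70; the minimizer alone cannot expose that uncertainty.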
ICM
Graph Cut
α-Expansion
αβ-Swap
LBP
TRW-S
See the README for usage.
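Of these methods, ICM is the simplest to state. A minimal self-contained sketch (my own, on a 4-connected grid with a Potts smoothness term, not the library's implementation):

```python
def icm(data_cost, num_labels, lam=1.0, iters=10):
    """Iterated Conditional Modes on a 4-connected grid with a Potts prior.

    data_cost[i][j][k]: cost of assigning label k to pixel (i, j).
    lam: Potts penalty paid for each neighbour with a different label.
    """
    H, W = len(data_cost), len(data_cost[0])
    # Initialize from the data term alone.
    labels = [[min(range(num_labels), key=lambda k: data_cost[i][j][k])
               for j in range(W)] for i in range(H)]
    for _ in range(iters):
        changed = False
        for i in range(H):
            for j in range(W):
                def local(k):  # data term + Potts penalty to 4 neighbours
                    cost = data_cost[i][j][k]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni][nj] != k:
                            cost += lam
                    return cost
                best = min(range(num_labels), key=local)
                if best != labels[i][j]:
                    labels[i][j] = best
                    changed = True
        if not changed:
            break  # local minimum reached -- ICM is greedy and stops here
    return labels
```

Each pixel is greedily moved to the label minimizing its local energy, so ICM converges quickly but only to a local minimum, which is why the paper ranks it poorly.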
1. Defaults:
typedef int EnergyVal; /* The total energy of a labeling */
typedef int CostVal; /* costs of individual terms of the energy */
Do these need to be changed?
2. Set the data term, using either an array or a function.
3. Set the smoothness term, using 1) an array, 2) the parameters vmax, k, lambda, or 3) a function. Options 1) and 2) are position-independent, while 3) depends on the pixel pair i, j; however, 1) and 2) can still carry position-dependent weights through the hcue and vcue arrays.
4. Construct the MRF and run inference.
5. The above assumes a 4-neighbor grid; the graph cut algorithms also support general neighborhoods. See the README.
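Steps 2-3 amount to defining an energy of the form E = Σ data terms + Σ edge-weighted smoothness terms over the grid. A self-contained Python mimic of that energy (my own sketch of what the steps configure; the names and the truncated-linear form are illustrative, not the library's actual API):

```python
def truncated_linear(a, b, k=2, vmax=3, lam=1):
    # Step 3, option 2): V(a, b) = lam * min(k * |a - b|, vmax)
    return lam * min(k * abs(a - b), vmax)

def total_energy(labels, data_cost, hcue, vcue):
    """Data term plus smoothness term over a 4-neighbour grid.

    hcue[i][j] weights the horizontal edge (i,j)-(i,j+1);
    vcue[i][j] weights the vertical edge (i,j)-(i+1,j).
    """
    H, W = len(labels), len(labels[0])
    e = sum(data_cost[i][j][labels[i][j]] for i in range(H) for j in range(W))
    for i in range(H):
        for j in range(W):
            if j + 1 < W:
                e += hcue[i][j] * truncated_linear(labels[i][j], labels[i][j + 1])
            if i + 1 < H:
                e += vcue[i][j] * truncated_linear(labels[i][j], labels[i + 1][j])
    return e
```

This is the quantity every solver in the list above tries to minimize; only how they search differs.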
Like CRF2.0, the library first computes the energies for all possible label assignments (i.e., the potentials), and then runs inference.
How can we compute the expected energy or expected features?
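No library call provides these, but for tiny models they can be computed by brute-force enumeration, which at least gives a ground truth for checking learning code; a sketch:

```python
import itertools
import math

def log_partition_and_expected_energy(energy, num_vars, num_labels):
    """Exact log Z and E_p[E(x)] for p(x) ∝ exp(-E(x)) by enumeration.

    Only feasible for tiny models (num_labels ** num_vars configurations),
    but useful for sanity-checking gradient / expected-feature code.
    """
    configs = itertools.product(range(num_labels), repeat=num_vars)
    weights = [(x, math.exp(-energy(x))) for x in configs]
    Z = sum(w for _, w in weights)
    expected_E = sum(w * energy(x) for x, w in weights) / Z
    return math.log(Z), expected_E
```

The same enumeration gives any expected feature by replacing `energy(x)` in the numerator with the feature of interest.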
Please read the PAMI paper carefully, especially for discussion part.
Here are some tips:
1. Lower energy vs. accurate labeling: these are not the same criterion.
2. Expansion and TRW-S seem to be the winners, and Expansion is faster. ICM is an old and weak method, and LBP is also not competitive. Among the graph cut methods, there never seems to be any reason to use Swap instead of Expansion.
3. So when we have a new problem, try Expansion first, just as one tries the RBF kernel first for an SVM.
4. In the paper, the authors discuss minimization methods based on a designed MRF model that does not take occlusion into account. What would a model that accounts for occlusion look like?
5. The ground truth is not the global minimum, so these methods can easily produce energies lower than that of the ground truth. Of course, what we need is the labeling closest to the ground truth; i.e., the minimum-energy solution is not our goal, the most accurate one is.
6. On this point, the paper states that for the designed model, better minimization techniques are unlikely to produce significantly more accurate labelings; Expansion and TRW-S already produce results very close to the global minimum.
7. To achieve this goal, the only thing we can do is build a more powerful model (use a more powerful energy function). However, creating more accurate models will not lead to better results if good labelings under these models cannot be found. It is also difficult to gauge the power of a model without the ability to produce low-energy labelings.
8. The energy is the function being minimized. TRW-S also provides a lower bound on the energy, and this lower bound increases monotonically.
9. When using Expansion, truncation may be needed if the so-called regularity condition on the smoothness term V is not met. Refer to the end of Section V of the paper.
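That regularity condition (due to Kolmogorov and Zabih) can be checked mechanically before choosing Expansion. A small sketch over a label-by-label cost matrix V (my own helper, not part of the library):

```python
def expansion_regular(V, num_labels):
    """Check the condition under which α-expansion applies without truncation:
    V(a, a) + V(b, c) <= V(b, a) + V(a, c) for all labels a, b, c."""
    L = range(num_labels)
    return all(V[a][a] + V[b][c] <= V[b][a] + V[a][c]
               for a in L for b in L for c in L)
```

For example, the Potts model satisfies the condition, while an (untruncated) quadratic penalty like V(a, b) = (a - b)^2 violates it for three or more labels, which is exactly the case where truncation kicks in.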