I found that the pre-trained Sonata encoder produces different outputs depending on whether center-shift is applied. How can it be adapted for downstream tasks such as matching and registration, where center-shift is usually not applicable? Also, since LiDAR point clouds often lack color, is there a way to produce transform-invariant features without color?
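To illustrate the behavior I mean, here is a toy sketch (not the actual Sonata API) showing that any encoder consuming absolute coordinates produces different features with and without center-shift, since the centroid subtraction translates the input:

```python
import numpy as np

def center_shift(points):
    # The "center-shift" normalization: subtract the centroid
    # so the cloud is re-centered at the origin.
    return points - points.mean(axis=0, keepdims=True)

def toy_encoder(points):
    # Hypothetical stand-in for a coordinate-dependent encoder:
    # a nonlinear function of absolute coordinates, so it is not
    # invariant to translation.
    return np.sin(points).sum(axis=1)

pts = np.random.default_rng(0).uniform(0.0, 10.0, size=(100, 3))
feat_raw = toy_encoder(pts)
feat_shifted = toy_encoder(center_shift(pts))
print(np.allclose(feat_raw, feat_shifted))  # False: features change with centering
```

In registration, the two clouds generally cannot be centered consistently (their overlap region is unknown), which is why this sensitivity matters.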