The depth images captured in LiDAR mode have shape (256, 192), which differs from the RGB images' shape of (960, 720). Can I simply upsample the depth maps with cv2.resize(depth_array, (720, 960)) (note that cv2.resize expects dsize as (width, height)) and scale the intrinsics by 3.75 (which works for both axes, since 960/256 = 720/192 = 3.75)? Is such scaling accurate? Looking for help.
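To make the scaling concrete, here is a minimal sketch of what I have in mind. The intrinsic values below are made-up placeholders, not real device calibration:

```python
# Sketch: upsample a LiDAR depth map from (256, 192) to match a (960, 720)
# RGB image, and scale the intrinsics by the same factor.
# Intrinsic values are placeholders, not real calibration data.

depth_h, depth_w = 256, 192   # depth map shape (rows, cols)
rgb_h, rgb_w = 960, 720       # RGB image shape (rows, cols)

sx = rgb_w / depth_w          # 720 / 192 = 3.75
sy = rgb_h / depth_h          # 960 / 256 = 3.75
assert sx == sy               # uniform scale, so one factor covers both axes

# Placeholder intrinsics given at depth-map resolution:
fx, fy = 180.0, 180.0         # focal lengths in pixels
cx, cy = 96.0, 128.0          # principal point

# After resizing the depth map, scale all four intrinsic parameters:
fx_rgb, fy_rgb = fx * sx, fy * sy
cx_rgb, cy_rgb = cx * sx, cy * sy

print(fx_rgb, fy_rgb, cx_rgb, cy_rgb)
```

With OpenCV the resize itself would be `cv2.resize(depth_array, (720, 960), interpolation=cv2.INTER_NEAREST)`, since dsize is (width, height); I'm unsure whether nearest-neighbor or bilinear interpolation is the right choice for depth values, which is part of my question.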