Bump samples to Zivid SDK 2.13
This commit adds the following changes:
New samples:
- network_configuration
- automatic_network_configuration_for_cameras
- transform_point_cloud_via_aruco_marker
- transform_point_cloud_via_checkerboard
- roi_box_via_aruco_marker
Modification of samples:
- hand-eye sample now supports ArUco markers
- in various samples, calibration.detect_calibration_board() is used
  instead of experimental.calibration.detect_feature_points(). This
  removes the zivid.experimental.calibration dependency.
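Concretely, the dependency removal swaps the experimental detection call for the stable one; a representative before/after, condensed from the hunks in this commit:

```diff
 import zivid
-import zivid.experimental.calibration

-checkerboard_pose = zivid.experimental.calibration.detect_feature_points(frame).pose().to_matrix()
+checkerboard_pose = zivid.calibration.detect_calibration_board(frame).pose().to_matrix()
```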
csu-bot-zivid authored and SatjaSivcev committed Jul 30, 2024
1 parent 19cae6b commit 2111652
Showing 24 changed files with 1,095 additions and 237 deletions.
18 changes: 16 additions & 2 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
# Python samples

-This repository contains python code samples for Zivid SDK v2.12.0. For
+This repository contains python code samples for Zivid SDK v2.13.1. For
tested compatibility with earlier SDK versions, please check out
[accompanying
releases](https://github.com/zivid/zivid-python-samples/tree/master/../../releases).
@@ -60,6 +60,9 @@ from the camera can be used.
- [capture\_hdr\_print\_normals](https://github.com/zivid/zivid-python-samples/tree/master/source/camera/advanced/capture_hdr_print_normals.py) - Capture Zivid point clouds, compute normals and print a
subset.
- **info\_util\_other**
+- [automatic\_network\_configuration\_for\_cameras](https://github.com/zivid/zivid-python-samples/tree/master/source/camera/info_util_other/automatic_network_configuration_for_cameras.py) - Automatically set the IP addresses of any number of
+cameras to be in the same subnet as the provided IP address
+of the network interface.
- [camera\_info](https://github.com/zivid/zivid-python-samples/tree/master/source/camera/info_util_other/camera_info.py) - Print version information for Python, zivid-python and
Zivid SDK, then list cameras and print camera info and state
for each connected camera.
@@ -69,6 +72,8 @@ from the camera can be used.
- [firmware\_updater](https://github.com/zivid/zivid-python-samples/tree/master/source/camera/info_util_other/firmware_updater.py) - Update firmware on the Zivid camera.
- [get\_camera\_intrinsics](https://github.com/zivid/zivid-python-samples/tree/master/source/camera/info_util_other/get_camera_intrinsics.py) - Read intrinsic parameters from the Zivid camera (OpenCV
model) or estimate them from the point cloud.
+- [network\_configuration](https://github.com/zivid/zivid-python-samples/tree/master/source/camera/info_util_other/network_configuration.py) - Uses Zivid API to change the IP address of the Zivid
+camera.
- [warmup](https://github.com/zivid/zivid-python-samples/tree/master/source/camera/info_util_other/warmup.py) - A basic warm-up method for a Zivid camera with specified
time and capture cycle.
- **maintenance**
@@ -114,8 +119,16 @@ from the camera can be used.
images to find the marker coordinates (2D and 3D).
- [reproject\_points](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/reproject_points.py) - Illuminate checkerboard (Zivid Calibration Board) corners
by getting checkerboard pose
+- [roi\_box\_via\_aruco\_marker](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/roi_box_via_aruco_marker.py) - Filter the point cloud based on a ROI box given relative
+to the ArUco marker on a Zivid Calibration Board.
- [roi\_box\_via\_checkerboard](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/roi_box_via_checkerboard.py) - Filter the point cloud based on a ROI box given relative
to the Zivid Calibration Board.
+- [transform\_point\_cloud\_via\_aruco\_marker](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/transform_point_cloud_via_aruco_marker.py) - Transform a point cloud from camera to ArUco marker
+coordinate frame by estimating the marker's pose from the
+point cloud.
+- [transform\_point\_cloud\_via\_checkerboard](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/transform_point_cloud_via_checkerboard.py) - Transform a point cloud from camera to checkerboard (Zivid
+Calibration Board) coordinate frame by getting checkerboard
+pose from the API.
- **hand\_eye\_calibration**
- [pose\_conversions](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/hand_eye_calibration/pose_conversions.py) - Convert to/from Transformation Matrix (Rotation Matrix
+ Translation Vector).
@@ -134,7 +147,8 @@ from the camera can be used.
- [display](https://github.com/zivid/zivid-python-samples/tree/master/source/sample_utils/display.py) - Display relevant data for Zivid Samples.
- [paths](https://github.com/zivid/zivid-python-samples/tree/master/source/sample_utils/paths.py) - Get relevant paths for Zivid Samples.
- [robodk\_tools](https://github.com/zivid/zivid-python-samples/tree/master/source/sample_utils/robodk_tools.py) - Robot Control Module
-- [save\_load\_matrix](https://github.com/zivid/zivid-python-samples/tree/master/source/sample_utils/save_load_matrix.py) - try:
+- [save\_load\_matrix](https://github.com/zivid/zivid-python-samples/tree/master/source/sample_utils/save_load_matrix.py) - Save and load Zivid 4x4 transformation matrices from and to
+YAML files.
- [white\_balance\_calibration](https://github.com/zivid/zivid-python-samples/tree/master/source/sample_utils/white_balance_calibration.py) - Balance color for 2D capture using white surface as reference.
- **applications**
- **advanced**
2 changes: 1 addition & 1 deletion continuous-integration/setup.sh
@@ -28,7 +28,7 @@ function install_www_deb {
rm -r $TMP_DIR || exit
}

-install_www_deb "https://downloads.zivid.com/sdk/releases/2.12.0+6afd4961-1/u${VERSION_ID:0:2}/zivid_2.12.0+6afd4961-1_amd64.deb" || exit
+install_www_deb "https://downloads.zivid.com/sdk/releases/2.13.1+18e79e79-1/u${VERSION_ID:0:2}/zivid_2.13.1+18e79e79-1_amd64.deb" || exit

python3 -m pip install --upgrade pip || exit
python3 -m pip install --requirement "$ROOT_DIR/requirements.txt" || exit
13 changes: 5 additions & 8 deletions source/applications/advanced/auto_2d_settings.py
@@ -16,8 +16,6 @@
first. If you want to use your own white reference (white wall, piece of paper, etc.) instead of using the calibration
board, you can provide your own mask in _main(). Then you will have to specify the lower limit for f-number yourself.
-Note: This example uses experimental SDK features, which may be modified, moved, or deleted in the future without notice.
"""

import argparse
@@ -29,7 +27,6 @@
import matplotlib.pyplot as plt
import numpy as np
import zivid
-import zivid.experimental.calibration
from sample_utils.calibration_board_utils import find_white_mask_from_checkerboard
from sample_utils.white_balance_calibration import compute_mean_rgb_from_mask, white_balance_calibration

@@ -152,7 +149,7 @@ def _find_white_mask_and_distance_to_checkerboard(camera: zivid.Camera) -> Tuple
settings = _capture_assistant_settings(camera)
frame = camera.capture(settings)

-    checkerboard_pose = zivid.experimental.calibration.detect_feature_points(frame).pose().to_matrix()
+    checkerboard_pose = zivid.calibration.detect_calibration_board(frame).pose().to_matrix()
distance_to_checkerboard = checkerboard_pose[2, 3]

rgb = frame.point_cloud().copy_data("rgba")[:, :, :3]
@@ -520,10 +517,10 @@ def _print_poor_pixel_distribution(rgb: np.ndarray) -> None:
black_and = np.sum(np.logical_and(np.logical_and(rgb[:, :, 0] == 0, rgb[:, :, 1] == 0), rgb[:, :, 2] == 0))

print("Distribution of saturated (255) and black (0) pixels with final settings:")
-    print(f"Saturated pixels (at least one channel): {saturated_or}\t ({100*saturated_or/total_num_pixels:.2f}%)")
-    print(f"Saturated pixels (all channels):\t {saturated_and}\t ({100*saturated_and/total_num_pixels:.2f}%)")
-    print(f"Black pixels (at least one channel):\t {black_or}\t ({100*black_or/total_num_pixels:.2f}%)")
-    print(f"Black pixels (all channels):\t\t {black_and}\t ({100*black_and/total_num_pixels:.2f}%)")
+    print(f"Saturated pixels (at least one channel): {saturated_or}\t ({100 * saturated_or / total_num_pixels:.2f}%)")
+    print(f"Saturated pixels (all channels):\t {saturated_and}\t ({100 * saturated_and / total_num_pixels:.2f}%)")
+    print(f"Black pixels (at least one channel):\t {black_or}\t ({100 * black_or / total_num_pixels:.2f}%)")
+    print(f"Black pixels (all channels):\t\t {black_and}\t ({100 * black_and / total_num_pixels:.2f}%)")


def _plot_image_with_histogram(rgb: np.ndarray, settings_2d: zivid.Settings2D) -> None:
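The pixel statistics whose formatting changed in `_print_poor_pixel_distribution` boil down to per-channel `any`/`all` reductions over an RGB array; a self-contained numpy sketch with a made-up 2x2 image (values chosen purely for illustration):

```python
import numpy as np

# Hypothetical 2x2 RGB image: one fully saturated pixel, one fully black pixel,
# one pixel with a single saturated channel, one ordinary pixel.
rgb = np.array(
    [[[255, 255, 255], [0, 0, 0]],
     [[255, 10, 20], [100, 100, 100]]],
    dtype=np.uint8,
)
total_num_pixels = rgb.shape[0] * rgb.shape[1]

saturated_or = int(np.sum(np.any(rgb == 255, axis=2)))   # at least one channel saturated
saturated_and = int(np.sum(np.all(rgb == 255, axis=2)))  # all channels saturated
black_or = int(np.sum(np.any(rgb == 0, axis=2)))         # at least one channel black
black_and = int(np.sum(np.all(rgb == 0, axis=2)))        # all channels black

print(f"Saturated pixels (at least one channel): {saturated_or} ({100 * saturated_or / total_num_pixels:.2f}%)")
print(f"Black pixels (all channels): {black_and} ({100 * black_and / total_num_pixels:.2f}%)")
```

The sample itself builds the same counts from chained `np.logical_and`/`np.logical_or` over the three channels; `np.any`/`np.all` along the channel axis is equivalent.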
11 changes: 3 additions & 8 deletions source/applications/advanced/get_checkerboard_pose_from_zdf.py
@@ -5,16 +5,13 @@
The checkerboard point cloud is also visualized with a coordinate system.
The ZDF file for this sample can be found under the main instructions for Zivid samples.
-Note: This example uses experimental SDK features, which may be modified, moved, or deleted in the future without notice.
"""

from pathlib import Path

import numpy as np
import open3d as o3d
import zivid
-import zivid.experimental.calibration
from sample_utils.paths import get_sample_data_path
from sample_utils.save_load_matrix import assert_affine_matrix_and_save

@@ -35,8 +32,8 @@ def _create_open3d_point_cloud(point_cloud: zivid.PointCloud) -> o3d.geometry.Po
xyz = np.nan_to_num(xyz).reshape(-1, 3)
rgb = rgba[:, :, 0:3].reshape(-1, 3)

-    point_cloud_open3d = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(xyz))
-    point_cloud_open3d.colors = o3d.utility.Vector3dVector(rgb / 255)
+    point_cloud_open3d = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(xyz.astype(np.float64)))
+    point_cloud_open3d.colors = o3d.utility.Vector3dVector(rgb.astype(np.float64) / 255)

refined_point_cloud_open3d = o3d.geometry.PointCloud.remove_non_finite_points(
point_cloud_open3d, remove_nan=True, remove_infinite=True
@@ -75,9 +72,7 @@ def _main() -> None:
point_cloud = frame.point_cloud()

print("Detecting checkerboard and estimating its pose in camera frame")
-    transform_camera_to_checkerboard = (
-        zivid.experimental.calibration.detect_feature_points(frame).pose().to_matrix()
-    )
+    transform_camera_to_checkerboard = zivid.calibration.detect_calibration_board(frame).pose().to_matrix()
print(f"Camera pose in checkerboard frame:\n{transform_camera_to_checkerboard}")

transform_file_name = "CameraToCheckerboardTransform.yaml"
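The `_create_open3d_point_cloud` change above casts to `float64` because Open3D's `Vector3dVector` wraps double-precision vectors; the numpy side of that conversion can be sketched without Open3D installed (synthetic 2x2 point cloud, illustrative only):

```python
import numpy as np

# Synthetic 2x2 organized point cloud with one NaN (invalid) point, plus RGBA colors.
xyz = np.array(
    [[[0.0, 0.0, 1.0], [np.nan, np.nan, np.nan]],
     [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]]
)
rgba = np.full((2, 2, 4), 255, dtype=np.uint8)

xyz_flat = np.nan_to_num(xyz).reshape(-1, 3)  # NaNs -> 0, organized HxW grid flattened to Nx3
rgb_flat = rgba[:, :, 0:3].reshape(-1, 3)     # drop the alpha channel
colors = rgb_flat.astype(np.float64) / 255    # Open3D expects float colors in [0, 1]

print(xyz_flat.shape, colors.max())
```

In the sample the NaN points are then dropped again via `remove_non_finite_points`; `nan_to_num` just keeps the intermediate arrays finite so the conversion itself cannot fail.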
@@ -1,17 +1,14 @@
"""
Perform Hand-Eye calibration.
-Note: This example uses experimental SDK features, which may be modified, moved, or deleted in the future without notice.
"""

import datetime
from pathlib import Path
-from typing import List
+from typing import List, Tuple

import numpy as np
import zivid
-import zivid.experimental.calibration
from sample_utils.save_load_matrix import assert_affine_matrix_and_save


@@ -26,7 +23,7 @@ def _enter_robot_pose(index: int) -> zivid.calibration.Pose:
"""
inputted = input(
-        f"Enter pose with id={index} (a line with 16 space separated values describing 4x4 row-major matrix):"
+        f"Enter pose with id={index} (a line with 16 space separated values describing 4x4 row-major matrix): "
)
elements = inputted.split(maxsplit=15)
data = np.array(elements, dtype=np.float64).reshape((4, 4))
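The prompt above expects 16 space-separated values forming a row-major 4x4 pose; the parsing mirrors `_enter_robot_pose`, sketched here standalone with a hard-coded input line in place of the interactive prompt:

```python
import numpy as np

# A hard-coded input line standing in for the interactive robot-pose prompt
# (identity rotation with a 100 mm translation along x, purely illustrative).
line = "1 0 0 100  0 1 0 0  0 0 1 0  0 0 0 1"

elements = line.split(maxsplit=15)
matrix = np.array(elements, dtype=np.float64).reshape((4, 4))

# A valid affine pose matrix has [0, 0, 0, 1] as its last row.
assert np.allclose(matrix[3], [0.0, 0.0, 0.0, 1.0])
print(matrix[0, 3])  # translation along x
```

`str.split` with no separator collapses runs of whitespace, so extra spaces between the values are harmless.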
@@ -46,14 +43,14 @@ def _perform_calibration(hand_eye_input: List[zivid.calibration.HandEyeInput]) -
"""
while True:
-        calibration_type = input("Enter type of calibration, eth (for eye-to-hand) or eih (for eye-in-hand):").strip()
+        calibration_type = input("Enter type of calibration, eth (for eye-to-hand) or eih (for eye-in-hand): ").strip()
if calibration_type.lower() == "eth":
-            print("Performing eye-to-hand calibration")
+            print(f"Performing eye-to-hand calibration with {len(hand_eye_input)} dataset pairs")
print("The resulting transform is the camera pose in robot base frame")
hand_eye_output = zivid.calibration.calibrate_eye_to_hand(hand_eye_input)
return hand_eye_output
if calibration_type.lower() == "eih":
-            print("Performing eye-in-hand calibration")
+            print(f"Performing eye-in-hand calibration with {len(hand_eye_input)} dataset pairs")
print("The resulting transform is the camera pose in flange (end-effector) frame")
hand_eye_output = zivid.calibration.calibrate_eye_in_hand(hand_eye_input)
return hand_eye_output
@@ -78,6 +75,58 @@ def _assisted_capture(camera: zivid.Camera) -> zivid.Frame:
return camera.capture(settings)


+def _handle_add_pose(
+    current_pose_id: int, hand_eye_input: List, camera: zivid.Camera, calibration_object: str
+) -> Tuple[int, List]:
+    """Capture a frame, detect the calibration object, and append the pose pair to the dataset.
+
+    Args:
+        current_pose_id: Counter of the current pose in the hand-eye calibration dataset
+        hand_eye_input: List of hand-eye calibration dataset pairs (poses and point clouds)
+        camera: Zivid camera
+        calibration_object: m (for ArUco marker(s)) or c (for Zivid checkerboard)
+
+    Returns:
+        Tuple[int, List]: Updated current_pose_id and hand_eye_input
+
+    """
+    robot_pose = _enter_robot_pose(current_pose_id)
+
+    print("Detecting calibration object in point cloud")
+
+    if calibration_object == "c":
+        frame = zivid.calibration.capture_calibration_board(camera)
+        detection_result = zivid.calibration.detect_calibration_board(frame)
+
+        if detection_result.valid():
+            print("Calibration board detected")
+            hand_eye_input.append(zivid.calibration.HandEyeInput(robot_pose, detection_result))
+            current_pose_id += 1
+        else:
+            print("Failed to detect calibration board, ensure that the entire board is in the view of the camera")
+    elif calibration_object == "m":
+        frame = _assisted_capture(camera)
+
+        marker_dictionary = zivid.calibration.MarkerDictionary.aruco4x4_50
+        marker_ids = [1, 2, 3]
+
+        print(f"Detecting ArUco marker IDs {marker_ids} from the dictionary {marker_dictionary}")
+        detection_result = zivid.calibration.detect_markers(frame, marker_ids, marker_dictionary)
+
+        if detection_result.valid():
+            print(f"ArUco marker(s) detected: {len(detection_result.detected_markers())}")
+            hand_eye_input.append(zivid.calibration.HandEyeInput(robot_pose, detection_result))
+            current_pose_id += 1
+        else:
+            print(
+                "Failed to detect any ArUco markers, ensure that at least one ArUco marker is in the view of the camera"
+            )
+
+    return current_pose_id, hand_eye_input


def _main() -> None:
app = zivid.Application()

@@ -88,31 +137,26 @@ def _main() -> None:
hand_eye_input = []
calibrate = False

+    while True:
+        calibration_object = input(
+            "Enter calibration object you are using, m (for ArUco marker(s)) or c (for Zivid checkerboard): "
+        ).strip()
+        if calibration_object.lower() == "m" or calibration_object.lower() == "c":
+            break

print(
"Zivid primarily operates with a (4x4) transformation matrix. To convert\n"
"from axis-angle, rotation vector, roll-pitch-yaw, or quaternion, check out\n"
"our pose_conversions sample."
)

while not calibrate:
-        command = input("Enter command, p (to add robot pose) or c (to perform calibration):").strip()
+        command = input("Enter command, p (to add robot pose) or c (to perform calibration): ").strip()
if command == "p":
try:
-                robot_pose = _enter_robot_pose(current_pose_id)
-
-                frame = _assisted_capture(camera)
-
-                print("Detecting checkerboard in point cloud")
-                detection_result = zivid.experimental.calibration.detect_feature_points(frame)
-
-                if detection_result.valid():
-                    print("Calibration board detected")
-                    hand_eye_input.append(zivid.calibration.HandEyeInput(robot_pose, detection_result))
-                    current_pose_id += 1
-                else:
-                    print(
-                        "Failed to detect calibration board, ensure that the entire board is in the view of the camera"
-                    )
+                current_pose_id, hand_eye_input = _handle_add_pose(
+                    current_pose_id, hand_eye_input, camera, calibration_object
+                )
except ValueError as ex:
print(ex)
elif command == "c":
