The Ultralytics integration appears to be flawed; the coordinate normalisation may be at fault. Running inference with a YOLOv8 segmentation model produces masks that are distinctly offset from those produced the official way, i.e.:
model.predict( ... save=True)
When I raised this in the Ultralytics Discord, I was told the following:
> It doesn't make any sense. `x1 = y2 * w` in their calculation.
>
> Yeah. First they messed up `w, h = mask.shape` — `.shape` returns `(h, w)`, not `(w, h)`.
>
> Then they use `tmp[3]` as `x1`, which, if you backtrack, becomes `x1 = y2 * w`; but `w` is actually `h` because of the swap, so it's really `x1 = y2 * h`, where `y2` is normalized and `h` is the height of the image. So it translates to `x1 = y2` in absolute coordinates.
>
> They're also indexing from 0, so it's really `x2 = y2`. They then index the mask with `mask[x0:x1, y0:y1]`, whereas a mask ought to be indexed as `mask[y0:y1, x0:x1]` — but since they flipped x and y in the calculation *and* flipped them again in the indexing, the two mistakes should cancel out.
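The shape mix-up described above is easy to reproduce in isolation. A minimal NumPy sketch (the variable names mirror the quote, not FiftyOne's actual code):

```python
import numpy as np

# NumPy arrays are indexed (row, col), so .shape returns
# (height, width) for an image mask -- not (width, height)
mask = np.arange(12).reshape(3, 4)  # 3 rows (h), 4 cols (w)

h, w = mask.shape          # correct unpacking: h=3, w=4
w_bad, h_bad = mask.shape  # the swapped unpacking described above

# With the swap, "w" actually holds the height, so scaling a
# normalized y2 by w_bad scales it by h: x1 = y2 * w_bad is
# really x1 = y2 * h, a row coordinate despite its name
y2 = 1.0
x1 = int(y2 * w_bad)  # 3, a row count

# Because x and y were also swapped in the slicing,
# mask[x0:x1, y0:y1] slices rows by the mislabelled "x" range
# and columns by the mislabelled "y" range -- the flips cancel
x0, y0, y1 = 0, 0, w
print(mask[x0:x1, y0:y1].shape)  # (3, 4): the full mask, as intended
```

On a square image the two errors cancel exactly, which is why such a swap can go unnoticed; on non-square inputs any code path that uses only one of the flipped values produces the kind of offset shown below.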
Code to reproduce issue
I cannot provide a complete minimal reproduction, as that would require sharing my model and/or dataset, but here is how I ran FiftyOne:
```python
import fiftyone as fo
import fiftyone.utils.ultralytics as fouu
import numpy as np
from ultralytics import YOLO

# Check if a dataset with the given name exists
if fo.dataset_exists("segmented_watermarks"):
    # Delete the existing dataset
    fo.delete_dataset("segmented_watermarks")

# Load your YOLOv8 segmentation model
model = YOLO("C:/repos/SegmentedWatermarks/results/896p-yolov8x-pt2-v1-100e-close-mosaic-30/weights/best.pt")

# Specify the dataset directory and type
dataset_dir = "C:/repos/SegmentedWatermarks/data"
dataset_type = fo.types.YOLOv5Dataset

# Load the dataset
dataset = fo.Dataset.from_dir(
    dataset_dir=dataset_dir,
    dataset_type=dataset_type,
    split="test",
    name="segmented_watermarks",
)

for sample in dataset.iter_samples(progress=True):
    result = model(sample.filepath)[0]
    sample["instances"] = fouu.to_instances(result)
    sample.save()

# Launch FiftyOne app to visualize the dataset
if __name__ == "__main__":
    session = fo.launch_app(dataset)
    session.wait()
```
Yellow is Ultralytics output, blue is FiftyOne:
System information
OS Platform and Distribution (e.g., Linux Ubuntu 22.04): Windows 11 22H2
Python version (python --version): Python 3.10.9
FiftyOne version (fiftyone --version): FiftyOne v0.24.1, Voxel51, Inc
Other info/logs
The integration was added here: https://github.com/voxel51/fiftyone/pull/3451/commits
Willingness to contribute