A Flutter plugin for running YOLOv5, YOLOv8, and Tesseract v5 through TensorFlow Lite 2.x. Supports object detection, segmentation, and OCR on Android. iOS is not yet updated; work in progress.
Add `flutter_vision` as a dependency in your `pubspec.yaml` file.
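For example (the version number below is an assumption; use the latest release from pub.dev):

```yaml
dependencies:
  flutter_vision: ^1.1.4 # hypothetical version, check pub.dev for the latest
```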
In `android/app/build.gradle`, add the following setting to the `android` block:
```groovy
android {
    aaptOptions {
        noCompress 'tflite'
        noCompress 'lite'
    }
}
```
iOS support: coming soon ...
- Create an `assets` folder and place your labels file and model file in it. In `pubspec.yaml` add:
```yaml
assets:
  - assets/labels.txt
  - assets/yolovx.tflite
```
- Import the library:
```dart
import 'package:flutter_vision/flutter_vision.dart';
```
- Initialize the flutter_vision library:
```dart
FlutterVision vision = FlutterVision();
```
- Load the model and labels. `modelVersion` can be `yolov5`, `yolov8`, or `yolov8seg`:
```dart
await vision.loadYoloModel(
    labels: 'assets/labels.txt',
    modelPath: 'assets/yolov5n.tflite',
    modelVersion: "yolov5",
    quantization: false,
    numThreads: 1,
    useGpu: false);
```
- Make your first detection. `confThreshold` only applies to yolov5; for other versions it is ignored. Make use of the camera plugin (a fuller sketch follows the snippet below):
```dart
final result = await vision.yoloOnFrame(
    bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
    imageHeight: cameraImage.height,
    imageWidth: cameraImage.width,
    iouThreshold: 0.4,
    confThreshold: 0.4,
    classThreshold: 0.5);
```
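A minimal sketch of how frames from the `camera` package can reach `yoloOnFrame`; the controller setup and the `isBusy` guard are illustrative assumptions, not part of the flutter_vision API:

```dart
import 'package:camera/camera.dart';
import 'package:flutter_vision/flutter_vision.dart';

final FlutterVision vision = FlutterVision(); // assumes loadYoloModel was called as above
late CameraController controller;
bool isBusy = false; // drop frames while a detection is still running

Future<void> startDetection() async {
  final cameras = await availableCameras();
  controller = CameraController(cameras.first, ResolutionPreset.medium);
  await controller.initialize();
  await controller.startImageStream((CameraImage cameraImage) async {
    if (isBusy) return;
    isBusy = true;
    final result = await vision.yoloOnFrame(
        bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
        imageHeight: cameraImage.height,
        imageWidth: cameraImage.width,
        iouThreshold: 0.4,
        confThreshold: 0.4,
        classThreshold: 0.5);
    // handle the detections in result here
    isBusy = false;
  });
}
```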
- Make your first detection or segmentation on a static image (one way to obtain `byte` and `image` is sketched below):
```dart
final result = await vision.yoloOnImage(
    bytesList: byte,
    imageHeight: image.height,
    imageWidth: image.width,
    iouThreshold: 0.8,
    confThreshold: 0.4,
    classThreshold: 0.7);
```
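`byte` and `image` are not defined in the snippet above; one way to obtain them, assuming the `image_picker` package and Flutter's `decodeImageFromList` from `dart:ui`, is:

```dart
import 'dart:ui' as ui;
import 'package:flutter_vision/flutter_vision.dart';
import 'package:image_picker/image_picker.dart';

Future<void> detectOnImage(FlutterVision vision) async {
  final XFile? photo = await ImagePicker().pickImage(source: ImageSource.gallery);
  if (photo == null) return;
  final byte = await photo.readAsBytes();                    // Uint8List for bytesList
  final ui.Image image = await ui.decodeImageFromList(byte); // provides width/height
  final result = await vision.yoloOnImage(
      bytesList: byte,
      imageHeight: image.height,
      imageWidth: image.width,
      iouThreshold: 0.8,
      confThreshold: 0.4,
      classThreshold: 0.7);
  // handle the detections in result here
}
```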
- Release resources:
```dart
await vision.closeYoloModel();
```
- Create an `assets` folder, then create a `tessdata` directory and a `tessdata_config.json` file and place them into it. Download trained data for Tesseract from here and place it into the `tessdata` directory. Then, modify `tessdata_config.json` as follows:
```json
{
  "files": [
    "spa.traineddata"
  ]
}
```
- In `pubspec.yaml` add:
```yaml
assets:
  - assets/
  - assets/tessdata/
```
- Import the library:
```dart
import 'package:flutter_vision/flutter_vision.dart';
```
- Initialize the flutter_vision library:
```dart
FlutterVision vision = FlutterVision();
```
- Load the model. In `args`, `psm` is Tesseract's page segmentation mode and `oem` its OCR engine mode:
```dart
await vision.loadTesseractModel(
  args: {
    'psm': '11',
    'oem': '1',
    'preserve_interword_spaces': '1',
  },
  language: 'spa',
);
```
- Get text from a static image:
```dart
final XFile? photo = await picker.pickImage(source: ImageSource.gallery);
if (photo != null) {
  final result = await vision.tesseractOnImage(bytesList: await photo.readAsBytes());
}
```
- Release resources:
```dart
await vision.closeTesseractModel();
```
For object detection (yolov5 or yolov8), `result` is a `List<Map<String, dynamic>>` where each `Map` has the following keys:

```
Map<String, dynamic>: {
  "box": [x1:left, y1:top, x2:right, y2:bottom, class_confidence],
  "tag": String: detected class
}
```
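A short sketch of consuming those fields; the names below are illustrative, and note that the box coordinates refer to the input image, so they normally need scaling before being drawn over a preview widget:

```dart
for (final detection in result) {
  final box = detection["box"] as List<dynamic>; // [left, top, right, bottom, confidence]
  final tag = detection["tag"] as String;
  print('$tag: box=(${box[0]}, ${box[1]}, ${box[2]}, ${box[3]}), confidence=${box[4]}');
}
```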
For segmentation (yolov8seg), `result` is a `List<Map<String, dynamic>>` where each `Map` has the following keys:

```
Map<String, dynamic>: {
  "box": [x1:left, y1:top, x2:right, y2:bottom, class_confidence],
  "tag": String: detected class,
  "polygons": List<Map<String, double>>: [{x: coordx, y: coordy}]
}
```
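To draw the mask, the polygon points can be converted to `Offset`s for a `CustomPainter`; a minimal sketch, where the scale factors are assumptions that depend on your preview size:

```dart
import 'dart:ui';

List<Offset> polygonToOffsets(List<dynamic> polygons,
    {double scaleX = 1.0, double scaleY = 1.0}) {
  return polygons
      .map((p) => Offset((p["x"] as double) * scaleX, (p["y"] as double) * scaleY))
      .toList();
}
```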
For OCR (Tesseract), `result` is a `List<Map<String, dynamic>>` where each `Map` has the following keys:

```
Map<String, dynamic>: {
  "text": String: recognized text,
  "word_conf": List<int>: per-word confidence,
  "mean_conf": int: average confidence
}
```
For flutter_vision bug reports and feature requests, please visit GitHub Issues.