
Software manual

GB
Universal vision sensor
O2D5xx
O2I4xx
O2I5xx
O2Uxxx
Version 2.7.9
11591355 / 01 09 / 2024
O2D5xx O2I4xx O2I5xx O2Uxxx Universal vision sensor

Contents
1 Preliminary note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1 Symbols used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Legal and copyright information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Open source information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Safety instructions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3 Intended use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
4 Disclaimer of warranties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
5 Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
5.1 System requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
5.2 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
5.3 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
5.3.1 Uninstall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
5.4 Command line parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
6 Getting started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
7 Start page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
7.1 Find sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
7.1.1 Connecting the device manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
7.2 Connecting a device that has already been used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
7.3 Playing back image captures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
7.3.1 Converting an image capture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
8 Structure of the user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
9 Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
9.1 Creating a region of interest (ROI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
9.2 Creating a region of disinterest (ROD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
10 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
10.1 Images & trigger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
10.1.1 Add new image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
10.1.2 Trigger mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
10.1.3 Frame rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
10.1.4 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
10.1.5 Calibration wizards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
10.1.5.1 Rough measurement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
10.1.5.2 Precise measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
10.1.5.3 Robot sensor calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
10.1.6 Reference image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
10.1.7 Exposure time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
10.1.8 Analogue gain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
10.1.9 Illumination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
10.1.10 Illumination of internal segments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
10.1.11 Filter type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
10.1.12 Filter strength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
10.1.13 Invert image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
10.1.14 Image quality check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
10.2 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.2.1 Add new model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
10.2.2 Bar code 1D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
10.2.2.1 Code family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
10.2.2.2 Encoding. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
10.2.2.3 Number of codes per ROI group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
10.2.2.4 Timeout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
10.2.2.5 Measure ISO quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
10.2.2.6 Check char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
10.2.2.7 Minimum contrast . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
10.2.2.8 Min code length. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
10.2.2.9 Quiet zone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72


10.2.2.10 Use bar orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
10.2.2.11 Symbology identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
10.2.2.12 Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
10.2.2.13 Orientation tolerance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
10.2.2.14 Number of scanlines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
10.2.2.15 Majority voting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
10.2.2.16 Merge scanlines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
10.2.2.17 Minimum identical scanlines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
10.2.2.18 Start/Stop tolerance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
10.2.2.19 Element size variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
10.2.2.20 Barcode height min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
10.2.2.21 Barcode width min. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
10.2.2.22 ROI size check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
10.2.3 Data code 2D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
10.2.3.1 Code family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
10.2.3.2 Presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
10.2.3.3 Encoding. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
10.2.3.4 Number of codes per ROI group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
10.2.3.5 Timeout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
10.2.3.6 Quality grading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
10.2.3.7 Polarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
10.2.3.8 Strict quiet zone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
10.2.3.9 Symbology identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
10.2.3.10 Contrast tolerance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
10.2.3.11 Finder pattern tolerance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
10.2.3.12 Module grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
10.2.3.13 Max slant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
10.2.3.14 Mirrored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
10.2.3.15 Symbol shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
10.2.3.16 Symbol columns min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
10.2.3.17 Symbol columns max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
10.2.3.18 Symbol rows min. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
10.2.3.19 Symbol rows max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
10.2.3.20 Symbol size min. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
10.2.3.21 Symbol size max. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
10.2.3.22 ROI size check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
10.2.4 OCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
10.2.4.1 Activate anchor tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
10.2.4.2 Preferred characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
10.2.4.3 Special characters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
10.2.4.4 Classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
10.2.4.5 Number of rows per ROI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
10.2.4.6 Using character selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
10.2.4.7 Regular expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
10.2.4.8 Relative text alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
10.2.4.9 Text orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
10.2.4.10 Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
10.2.4.11 ROI size check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
10.2.5 BLOB analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
10.2.5.1 Number of objects per ROI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
10.2.5.2 Object properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
10.2.5.3 ROI size check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
10.2.5.4 Activate anchor tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
10.2.5.5 Object definition area of a BLOB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
10.2.6 Contour detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
10.2.6.1 Number of objects per ROI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .109
10.2.6.2 Pyramid levels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .110
10.2.6.3 Object definition area of a contour. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .111
10.2.7 Contour anchor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .112
10.2.8 1D barcode anchor tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115
10.2.9 2D data code anchor tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115
10.3 Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115


10.4 Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118
10.4.1 Logic utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .119
10.4.2 Logic block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .119
10.4.3 Output logic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .120
10.4.4 Logic block [New note] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121
10.4.5 Logic blocks [Model results] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121
10.4.6 Logic block [Application result] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133
10.4.7 Logic blocks [String operations] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133
10.4.8 Logic blocks [Binary operations] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135
10.4.9 Logic blocks [Arithmetic] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135
10.4.10 Logic blocks [Converter] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .136
10.4.11 Logic blocks [Digitalisation] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .137
10.4.12 Logic blocks [Logical functions] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .140
10.4.13 Logic blocks [Output]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141
10.4.14 Logic blocks [Pin events]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142
10.4.15 Logic element [Result status] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142
10.4.16 Example – code found . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142
10.4.17 Example – compare reference code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .143
10.4.18 Example – compare reference code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .144
10.4.19 Example – compare distance values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .144
10.4.20 Example – counter and comparator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .145
10.4.21 Example – converter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .145
10.5 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .146
10.5.1 Data packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .149
10.5.1.1 Anchor result. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .150
10.5.1.2 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .151
10.5.1.3 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .154
10.5.2 Example of “Provide overall quality” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .164
10.5.3 Output as hex view or INT16 table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .165
10.5.4 Display data from the data packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .165
10.6 Test. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .166
11 Service report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .168
12 Device set-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .170
12.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .170
12.2 Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .171
12.3 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .172
12.4 NTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .173
12.5 FTP / SFTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .174
12.6 RTSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .175
12.7 ifm mass storage device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .176
13 Appendix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .178
13.1 Assigning a static IP address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .178
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .179


1 Preliminary note
You will find instructions, technical data, approvals and further information using the QR code on the
unit / packaging or at documentation.ifm.com.

1.1 Symbols used

✓	Requirement
►	Instructions
⇨	Reaction, result
[...]	Designation of keys, buttons or indications
→	Cross-reference

Important note
Non-compliance may result in malfunction or interference.

Information
Supplementary note

1.2 Legal and copyright information


© All rights reserved by ifm electronic gmbh. No part of these instructions may be reproduced and
used without the consent of ifm electronic gmbh.
All product names, pictures, companies or other brands used on our pages are the property of the
respective rights owners.
• AS-i is the property of AS-International Association, (→ www.as-interface.net)
• CAN is the property of Robert Bosch GmbH, Germany (→ www.bosch.de)
• CANopen® is the property of CiA (CAN in Automation e.V.), Germany (→ www.can-cia.org)
• CODESYS™ is the property of CODESYS GmbH, Germany (→ www.codesys.com)
• DeviceNet™ is the property of ODVA™ (Open DeviceNet Vendor Association), USA
(→ www.odva.org)
• EtherNet/IP® is the property of → ODVA™
• EtherCAT® is a registered trademark and patented technology, licensed by
Beckhoff Automation GmbH, Germany.
• IO-Link® is the property of PROFIBUS Nutzerorganisation e.V., Germany (→ www.io-link.com)
• ISOBUS is the property of AEF - Agricultural Industry Electronics Foundation e.V., Germany
(→ www.aef-online.org)
• Microsoft® is the property of Microsoft Corporation, USA (→ www.microsoft.com)
• Modbus® is the property of Schneider Electric SE, France (→ www.schneider-electric.com)
• PROFIBUS® is the property of PROFIBUS Nutzerorganisation e.V., Germany
(→ www.profibus.com)
• PROFINET® is the property of → PROFIBUS Nutzerorganisation e.V., Germany
• Windows® is the property of → Microsoft Corporation, USA

1.3 Open source information


For more open source information see: documentation.ifm.com.


2 Safety instructions
Please read the operating instructions prior to set-up of the device. The device must be suitable for the
application without any restrictions.
If the operating instructions or the technical data are not adhered to, personal injury and damage to
property can occur.


3 Intended use
The software manual describes the functions of the ifm Vision Assistant software:
• recognising the device in the local subnet,
• configuring the device,
• collecting, storing and evaluating data,
• installing and monitoring applications on the device.

As soon as an application is installed on the device, the device can be operated without the ifm
Vision Assistant.


4 Disclaimer of warranties
ifm electronic gmbh disclaims to the fullest extent authorized by law any and all warranties, whether
express or implied, including, without limitation, any implied warranties of title, non-infringement, quiet
enjoyment, integration, merchantability or fitness for a particular purpose.
Without limitation of the foregoing, ifm expressly does not warrant that:
• the software will meet your requirements or expectations,
• the software or the software content will be free of bugs, errors, viruses or other defects,
• any results, output, or data provided through or generated by the software will be accurate, up-to-
date, complete or reliable,
• the software will be compatible with third party software,
• any errors in the software will be corrected.

Demo software and templates


Demo software and templates are provided “as is” (that is: excluding warranty) and “as available”,
without any warranty of any kind, either express or implied. The user acknowledges and agrees to use
the software at user’s own risk. In no event shall ifm be held liable for any direct, indirect, incidental or
consequential damages arising out of the use of or incorrect use of the software. The user may use the
software solely for demonstration purposes and to assess the software functionalities and capabilities.

Customer-specific software
1. The software created and used has been put together by ifm especially for the customer using
modular software components made by ifm for numerous applications (standard software modules)
and adapted to the contractual service required (customer-specific application program).
2. Upon complete payment of the purchase price for the customer-specific application program, ifm
transfers the non-exclusive, locally and temporarily unrestricted usage right thereof to the
customer, without the customer acquiring any rights of any kind to the standard software module on
which the individual or customer-specific adaptation is based. Notwithstanding these provisions, ifm
reserves the right to produce and offer customer-specific software solutions of the same kind for
other customers based on other terms of reference. In any case ifm retains a simple right of usage
of the customer-specific solution for internal purposes.
3. By accepting the program, the user acknowledges and agrees to use the software at user’s own
risk. By accepting the program, the user also acknowledges that the software meets the
requirements of the specifications agreed upon. ifm disclaims any and all warranties, in particular
regarding fitness of the software for a particular purpose.


5 Installation

5.1 System requirements


Software
The following software is required for operation:
• Operating system: Windows 10 (32/64-bit)
• ifm Vision Assistant software 2.7.9 or higher
• Firmware of the device: 1.31.11032 or higher

Other versions
⇨ Other versions of software and firmware may contain modified or new functions that are not
described in this software manual.

Hardware
The following hardware is required for operation:
• Hard disk: min. 1 GB free memory space
• Monitor: Resolution of min. 1024x768 pixels, 32-bit colour depth

Accessories
• Network connection (Ethernet) cables for setting the parameters:
– E11898 (2 m, M12 plug/RJ45 plug, 4 poles)
– E12283 (5 m, M12 plug/RJ45 plug, 4 poles)
– E12204 (10 m, M12 plug/RJ45 plug, 4 poles)
– E12205 (20 m, M12 plug/RJ45 plug, 4 poles)
• Power supply and process connection cables:
– EVC070 (2 m, M12 socket, 5 poles, A-coded, open cable end)
– EVC071 (5 m, M12 socket, 5 poles, A-coded, open cable end)
• Y connection cable:
– EVC847 (splitter for camera and external trigger)
– EVC848 (splitter for camera and external illumination unit)
• Power supply 24 V, 1.6 A
• Mounting set for the device (clamp mounting): E2D500

⇨ Information about the accessories: www.ifm.com

5.2 Hardware
⇨ Detailed information on installation and electrical connection can be found in the operating
instructions of the device: documentation.ifm.com


5.3 Software
Installing ifm Vision Assistant
► Download the ifm Vision Assistant: documentation.ifm.com
► Copy the zip file to a directory on the PC and unzip it.
⇨ The ifm Vision Assistant is installed and can be started via "ifmVisionAssistant.exe".

The ifm Vision Assistant does not start


⇨ If the ifm Vision Assistant does not appear within 5 to 10 seconds after starting:
► Check the system requirements (→ System requirements / 9).
► Check the unzipped zip file for completeness.

5.3.1 Uninstall
Uninstalling the ifm Vision Assistant
► Delete the installation folder of the ifm Vision Assistant.
⇨ The ifm Vision Assistant is uninstalled.

⇨ Existing settings and log files are not deleted.

Deleting the settings and log files


► Delete the following directory: "%AppData%\ifm electronic\ifmVisionAssistant"
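The clean-up can also be scripted. The following Python sketch resolves the settings directory from the APPDATA environment variable and removes it; the helper names are illustrative and not part of the ifm software.

```python
import os
import shutil

def settings_dir(appdata=None):
    # Resolve the ifm Vision Assistant settings directory from the
    # APPDATA environment variable (Windows) or an explicit base path.
    base = appdata if appdata is not None else os.environ.get("APPDATA", "")
    return os.path.join(base, "ifm electronic", "ifmVisionAssistant")

def delete_settings(appdata=None):
    # Remove the settings and log files; a missing directory is ignored.
    shutil.rmtree(settings_dir(appdata), ignore_errors=True)
```

Calling delete_settings() is equivalent to deleting the directory manually.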

5.4 Command line parameters


Command line parameters influence the start of ifm Vision Assistant and are attached to the call of
the exe file. Several parameters can be appended one after the other, separated by spaces.

Command line parameters via command prompt


Start ifm Vision Assistant via the command prompt:
► In the prompt, add the command line parameters after ifmVisionAssistant.exe, separated by a
space.
⇨ Example: "ifmVisionAssistant.exe -log"
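Starting the program with parameters can also be automated, for example from a Python script. The sketch below only builds the argument list; the installation path is an assumption and must be adjusted to the directory the zip file was unzipped to.

```python
import subprocess

# Assumed installation path -- not prescribed by the manual, adjust
# to your own unzip directory.
EXE = r"C:\ifmVisionAssistant\ifmVisionAssistant.exe"

def build_command(*params):
    # The executable comes first; each command line parameter follows
    # as its own argument (equivalent to separating them with spaces).
    return [EXE, *params]

# Equivalent to entering "ifmVisionAssistant.exe -log" in the prompt:
# subprocess.run(build_command("-log"))
```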

Command line parameters via Windows


Start ifm Vision Assistant with command line parameters via Windows:
► Right-click on [ifm Vision Assistant].
► Click on [Properties] in the submenu.
► Click on the [Shortcut] tab.
► Click on the [Target] field and move the cursor to the end of the line.
► Enter a space followed by the command line parameter.
► Click [OK].

Available command line parameters


The following command line parameters are available:

Command line parameter Description


-disableclosebtn Disables the button for terminating ifm Vision Assistant.

10
Universal vision sensor O2D5xx O2I4xx O2I5xx O2Uxxx

Command line parameter Description


-log Creates a log file for detailed error analysis.
The log file is saved in the following folder:
"%APPDATA%\ifm
electronic\ifmVisionAssistant\logs"
-autoconnect filename.xml Automatically establishes the connection to a device.
The file “ filename.xml ” must contain the following XML
code:
<?xml version="1.0"
encoding="UTF-8"?>
<sensor>
<sensorType>O2Uxxx</sensorType>
<addressType>IP</addressType>
<name>My sensor</name>
<address>
<ip>192.168.0.69</ip>
<pcic_port>50010</pcic_port>
<web_port>80</web_port>
<mac>00:02:01:21:b9:ee</mac>
</address>
</sensor>

u Adjust the information in the XML file: IP address, ports,


etc.
-geometry [screen]:[width]x[height]+[x]+[y] Sets the window size and position of ifm Vision Assistant
(incl. Windows frame). The minimum window size is
1024x768 pixels.
Example:
-geometry 1:1380x768+0+0
The window is placed on screen 1 (screen=1).
The window size including the Windows frame is set to
1380x768 (width=1380 and height=768).
The window is positioned at the top left (x=0 and y=0).
When negative values are entered for the window position x
and y, the opposite corner is used as the zero point. Examples:
"+0+0": window at the top left
"-0+0": window at the top right
"+0-0": window at the bottom left
"-0-0": window at the bottom right
-frameless Starts ifm Vision Assistant without the native Windows
frame window.
-cmd "Mon:rec:startRecording=file.dat" Switches to the [Monitor] area after the start and starts re-
cording data.
A PowerShell script can be used to automatically insert the
date and time into the file name of the recording:
-cmd "Mon:rec:startRecording=c:/dat/
$((Get-Date).ToString('yyyy-dd-
MM_hh-mm-ss'))_Cam.dat"
-cmd "Mon:rec:durationSecs=-1" Sets the duration of the recording in seconds.
"-1": unlimited recording time
-cmd "Mon:rec:fileSplitSizeMB=3000" Splits the recorded data into blocks. The block size can be
set in MB. The file names of the blocks contain a continu-
ous counter.
-cmd "Mon:g2d:deviceByIndex=2" Sets the video source by index. For example, an external
frame grabber can be used as a video source.
For notebooks, index="1" is often the built-in camera.
-cmd "Mon:g2d:deviceByName=Hauppauge Cx23100 Video Capture_2"
Sets the video source by name. For example, an external
frame grabber can be used as a video source.
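The XML file expected by the -autoconnect parameter can also be generated programmatically instead of being written by hand. A minimal sketch using Python's standard library; the helper name autoconnect_xml is ours, and the default values are taken from the example above:

```python
import xml.etree.ElementTree as ET

def autoconnect_xml(sensor_type: str, name: str, ip: str,
                    pcic_port: int = 50010, web_port: int = 80,
                    mac: str = "") -> str:
    """Build the XML document for 'ifmVisionAssistant.exe -autoconnect file.xml'."""
    sensor = ET.Element("sensor")
    ET.SubElement(sensor, "sensorType").text = sensor_type
    ET.SubElement(sensor, "addressType").text = "IP"
    ET.SubElement(sensor, "name").text = name
    address = ET.SubElement(sensor, "address")
    ET.SubElement(address, "ip").text = ip
    ET.SubElement(address, "pcic_port").text = str(pcic_port)
    ET.SubElement(address, "web_port").text = str(web_port)
    ET.SubElement(address, "mac").text = mac
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            + ET.tostring(sensor, encoding="unicode"))

# Values from the example above; adjust IP address, ports, etc. for your device.
xml_text = autoconnect_xml("O2Uxxx", "My sensor", "192.168.0.69",
                           mac="00:02:01:21:b9:ee")
```

Write the returned string to a file (e.g. filename.xml) and pass its name after -autoconnect.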


Kiosk mode
In kiosk mode, the Windows frame is hidden and ifm Vision Assistant cannot be closed normally by
the user. The mode is ideal for trade fairs and demonstrations.
Use the kiosk mode:
u Use the following command line parameters in succession:

ifmVisionAssistant.exe -disableclosebtn -frameless

The key combination “Ctrl+F4” closes ifm Vision Assistant.


6 Getting started
This chapter explains the first steps with the device and the ifm Vision Assistant software.

Connecting the device


Installation and electrical connection are described in the operating instructions of the device.

Using ifm Vision Assistant


u Start the ifm Vision Assistant software. Software (Ò Software / 10)
u Click on the [Find device] button on the start page. Find device (Ò Find sensor / 15)
w Once the device is connected, the [Monitor] area is displayed.

ifm Vision Assistant: [Monitor] area


The [Monitor] area displays the received data of the device in a live image. The application is
monitored in this area. The device is in the operating mode.

ifm Vision Assistant: [Application] area


An application configures the device for a specific task. You can quickly switch between
applications.

Up to 32 applications can be added. An application contains up to 10 models.

Typically, an application contains the following settings:


• Device camera and triggers: Images and triggers (Ò Images & trigger / 29)
• Recognition of contours, analysis of surfaces, search for codes and text recognition: Model settings
(Ò Models / 65)
• Sequence of processing of images and models: Flow (Ò Flow / 115)
• Output logic for data transfer to an external controller: Logic (Ò Logic / 118)
• Configuration of the output via the interfaces: Interfaces (Ò Interfaces / 146)
• Collection of statistical data: Test (Ò Test / 166)

Recognising an object, code or text with the device

u Click on the [Application] button: .


w The “Application” area shows the applications stored on the device. (Ò Application / 26)

u Click on the [Add new application] button: .


w The [wizard] guides you through adding an application in several steps. The application will then
be activated and the [Monitor] area will be displayed.
w The [User-defined mode] button is used to customise the application.
The device will then be ready for operation and run the active application.


7 Start page
The start page contains the basic functions of the ifm Vision Assistant.

Fig. 1: Start page

Symbol Name Description


Zoom out Reduces the size of the window.

Zoom in Enlarges the window.

Full screen Displays the window in full screen mode.

Exit Closes the software.

Tab. 1: Title bar

w With the F11 key, you can switch between full screen and window view.

Name Description
[Device status] Displays information about the hardware and firmware of the connected device.
The information can be saved in a text file for diagnostics by the support staff.
For the [Device status] function, the device must be connected.
[Wiring] Displays information on wiring and connection aids.
[Settings] Sets the language and colour of the user interface.

[Help] Displays the documentation and contact details for support.
Tab. 2: Menu bar

Button Name Description


Find device Searches for connected devices and displays them in a list. (Ò
Find sensor / 15)
For this function, the device must be connected.
Recent Displays list of devices that have already been used. (Ò Con-
necting a device that has already been used / 16)
For this function, the device must be connected.
Replay Plays back a saved image capture. (Ò Playing back image cap-
tures / 16)
Monitoring tool Monitors applications on several connected devices. (Ò Moni-
tor / 21)
The monitoring tool is described in a separate software manual:
documentation.ifm.com
Tab. 3: Buttons

7.1 Find sensor


This function searches for new devices and displays them in a list. A device can then be connected.

Preparations
u Connect the device to the voltage supply.
u Connect the device to a PC via Ethernet.
u Unblock the following ports in the network firewall:
UDP-Port: 3321
TCP/HTTP: 80 and 8080
TCP: 50010
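Whether the listed TCP ports are reachable from the PC can be probed with a few lines of Python. This is a sketch with a helper name of our choosing; note that the discovery port 3321 is UDP and cannot be checked with a plain TCP connect:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with the factory-preset IP address of the device:
# for port in (80, 8080, 50010):
#     print(port, "open" if port_open("192.168.0.69", port) else "closed")
```

If a port reports closed although the device is powered and cabled, check the firewall rules and subnet settings described above.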

Connect a new device

u Click on the [Find sensor] button:


w The ifm Vision Assistant searches for connected devices. A list shows the devices found and their
settings.
u Select a found device.
w The connection to the device will be established.

Connection issues
w If the device is not found:
u Check the connections and the operating status of the device.
u The IP addresses of device and PC must be in the same subnet.
u Connect the device to the PC directly via Ethernet without any intermediate network devices
(e.g. router).
u Connect the device manually. (Ò Connecting the device manually / 16)

Messages in the ifm Vision Assistant


w With the key combination [Ctrl+C], the text of a message is copied to the clipboard.


7.1.1 Connecting the device manually


A device can be connected manually by entering the IP address.

u Click on the [Find device] button: .


u Click on the message [Manual connection].
w The [Manual connection] window is displayed.
u In the list, select the device type, e.g. [O2Uxxx].
u Enter the IP address of the device.
w Preset: 192.168.0.69.
u Click on the [Connect] button.

Connection issues
w The IP addresses of device and PC must be in the same subnet.

7.2 Connecting a device that has already been used


The [Recent] function displays devices that have already been used before in a list.

u Click on the [Recent] button:


w The [Recent] window is displayed.
u Select a device from the list.
w The device will be connected and can then be used.

7.3 Playing back image captures


The [Replay] function plays back a saved image capture. Recordings are saved in the "Monitor" area.
(Ò Monitor / 21)
The function contains the following operating elements:

Operating element Name Description


Previous image Jumps to the previous image.

Replay Starts the replay.

Next image Jumps to the next image.

Pause Pauses the replay.

Progress bar Indicates the current position of the re-


play.
[Open other file] Open other file Opens another image capture.
Tab. 4: Operating elements

Playing back image captures

u Click on the [Replay] button:


w A window for opening an image capture is displayed. The image captures are saved in the
following folder by default: %appdata%\ifm electronic\ifmVisionAssistant\capture


u Select an image capture.


u Click on the [Open] button.
w The image capture is displayed.
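The default capture folder can also be located programmatically, for example to archive or batch-process recordings. A sketch under the assumption that the folder layout matches the path stated above; the function names are ours:

```python
import os
from pathlib import Path

def capture_dir() -> Path:
    """Default ifm Vision Assistant capture folder under %APPDATA%."""
    appdata = os.environ.get("APPDATA", os.path.expanduser("~"))
    return Path(appdata) / "ifm electronic" / "ifmVisionAssistant" / "capture"

def list_captures() -> list:
    """Return saved image captures (*.h5 and *.dat), newest first."""
    folder = capture_dir()
    if not folder.is_dir():
        return []
    files = list(folder.glob("*.h5")) + list(folder.glob("*.dat"))
    return sorted(files, key=lambda p: p.stat().st_mtime, reverse=True)
```

The *.h5 and *.dat extensions correspond to the recording formats described in the [Monitor] area.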

7.3.1 Converting an image capture


This function converts an image capture into another output format. The image capture is converted
via the following operating elements.

Operating element Type Description


[Output format] List Sets the output format.
[Output path] Output field Displays the set output path.
[…] Button Sets the output path.
[Data format] List Sets the data format.
The list is displayed when the [Output
format] is set to [O3D3XX PLY export of
point cloud].
[Create surfaces] Checkbox Creates a surface for the point cloud.
The checkbox is displayed when the
[Output format] is set to [O3D3XX PLY
export of point cloud].
[Output range] List Sets the output range.
[Convert] Button Starts converting the image capture.

Output format

Output format Description


[HDF5 ifm streams (*.h5)] Flexible data container.
[ADTF data capture file (*.dat)] Proprietary format.
[O2X5XX export of JSON/JPEG] JSON/JPEG files of an O2X5XX image capture.
[O3D3xx CSV export of image chunks] CSV file of the image chunks of an O3D3XX capture.
[O3D3XX PLY export of point cloud] PLY file of the point cloud of an O3D3XX capture.

Depending on the format of the open image capture, some output formats may not be
displayed.

Data format

Data format Description


[ASCII] Sets the data format to “ASCII”.
[Binary little endian] Sets the data format to “Binary little endian”.
[Binary big endian] Sets the data format to “Binary big endian”.

Depending on the output format set, the data format may not be displayed.

Output range

Output range Description


[Whole file] Converts the whole image capture.
[From current position to end of file] Converts from the current position of the progress bar to the
end of the image capture.
[From start to current position] Converts from the start of the image capture to the current po-
sition of the progress bar.


[Just the next frame] Converts the next frame of the image capture, viewed from
the current position of the progress bar.


8 Structure of the user interface


The user interface is divided into the following areas:


Fig. 2: User interface


1 Navigation bar 2 Main area
3 Status bar

Navigation bar
The buttons in the navigation bar are used to switch between the different areas of ifm Vision
Assistant.

Button Name Description


Monitor Displays the received data of the device.
(Ò Monitor / 21)
Application Displays the applications. (Ò Applica-
tion / 26)
Service report Displays an evaluation of the device. (Ò
Service report / 168)
Device setup Displays the device setup. (Ò device
set-up / 170) The device and network
are set in the device setup.
Disconnect Disconnects from the device.

Main area
The settings of the selected function are displayed in the main area.

Status bar
The status bar shows current device information:


• the current window name, for example [Sensor window]
• the temperature of the device
• processing time for an image, for example 18 ms


9 Monitor
The [Monitor] area displays the received data of the device in a live image. The application is
monitored in this area. The device is in the operating mode.


Fig. 3: [Monitor] area


1 Status indicators 2 Live image
3 Code details 4 Tabs

Status indicators
The [Status indicators] show the states of the digital outputs and the statistics on the active
application.
• [Application]: shows the name of the active application.
• [Hardware]: shows the status of the digital outputs. If a signal is present, the LED lights yellow.
• [Current state]: shows the current state of the application.
• [Overall statistics]: shows the recorded values of all models of the active application and the
number of total measurements. The values Passed and Failed are incremented via a counter. The
ratio of the two values is indicated as a percentage.
• [Processing time]: shows the current, maximum and minimum processing time.
• [Reset all statistics]: this button resets the overall statistics.

Live image
The [live image] displays the current camera image of the device.

The live image is continuously updated in the trigger mode [Continuous]. In any other trigger mode,
a trigger must first be fired for the live image to update.


Code details
The [Code details] display details of the code, BLOB or contour selected in the live image.

Tabs
The [Monitor] area contains the following tabs:

Tab Description
[View options] Sets the live image.
[Recording] Records the data from the camera and the results of the appli-
cations.
[Results] Displays the results of models, images and calibrations. The
results can be restricted to specific models, images or calibra-
tions.
[Pixel x | y] Displays pixel and greyscale values in the area of a line, rec-
tangle or circle.
Displays pixel and greyscale values at the position of the
mouse pointer.
Displays calibrated coordinates and lengths if the application
contains calibrated images. (Ò Calibration wizards / 35)

The [View options] tab contains the following operating elements.

Operating element Name Description


Zoom out Reduces the size of the live image.
The zoom level can also be changed us-
ing the mouse wheel.
Reset zoom Resets the live image to the standard
size.
Zoom in Enlarges the live image.
The zoom level can also be changed us-
ing the mouse wheel.
Show all ROIs of all models Displays the ROIs of all models in the
live image.
Overlay application reference image Displays the reference image in the live
image. The [Overlay application refer-
ence image] button is displayed as soon
as a reference image has been saved
(Ò Reference image / 61)
The slider adjusts the transparency of
the reference image. The colour field to
the right of the slider sets the colour of
the reference image.
Change object opacity Adjusts the opacity of detected objects
in the live image using the slider.

The [Record options] tab contains the following operating elements:

Operating element Name Description


Save current image as JPEG Saves the current live image as a JPEG
file.
[Duration] Duration Sets the duration of the image capture.
Approx. 250 MB/minute are required.
If the duration is set to [Continuous], the
image capture is limited by the free
memory capacity of the data carrier.
Start/Stop Starts or stops the image capture. The
image capture is saved in a file with the
extension “*.h5” or “*.dat“.
- / 02:00 Image capture time Shows the duration of the current image
capture and the maximum capture time.

The [Results] tab contains the following operating elements.


Operating element Name Description


Filter table by selected image Filters the table according to the select-
ed image.
Filter table by selected model Filters the table according to the select-
ed model.
[Code result table] Displays the results of the code analysis.

[BLOB result table] Displays the results of the BLOB analy-


sis.
The [Calibrated BLOB results] checkbox
is used to restrict the results table to cal-
ibrated values.
[Contour result table] Shows the results of the contour detec-
tion.
[Show ROI results] [Show ROI results] Shows the results of the ROI groups in
the table.

The [Pixel x | y] tab contains the following operating elements:

Operating element Name Description


Measure pixels along a line Displays the following values along a
line:
• Start and end point
• Angle
• Length [px]
• Greyscale ( 0-255 ).
Measure pixels in a rectangle Displays the following values in the area
of a rectangle:
• Length x width [px]
• Area [px]
• Greyscale ( 0-255 ).
Measure pixels in a circle Displays the following values in the area
of a circle:
• Position of the centre of the circle
• Radius [px]
• Area [px]
• Greyscale ( 0-255 ).
[At mouse pointer] Measure pixels at the position of the Displays the following values at the posi-
mouse pointer tion of the mouse pointer:
• Position X
• Position Y
• Greyscale ( 0-255 ).

The results can be copied to the clipboard with the key combination Ctrl+C.

9.1 Creating a region of interest (ROI)


Within the search zone (ROI: region of interest), the device searches for codes, BLOBs or contours.
By default, when a new model is added, an ROI covering the entire live image is created
automatically. Up to 64 ROIs can be created per model.

The operating elements for creating an ROI are only displayed in the model settings. (Ò
Models / 65)

Creating an ROI:
u Select a model.


w If no model exists, one must be added. (Ò Add new model / 66)


u Click on the [Create rectangular ROI] button:
u Create the region of interest in the live image by clicking and dragging using the mouse.
w Click and drag to move the created region of interest.
The following functions are used to set the region of interest:

Button Function Description


Change the size and shape of the region Click and drag the small square to the
of interest new position to change the region of in-
terest.
Rotate region of interest Click and drag the round button to rotate
the region of interest.
Group regions of interest Select several regions of interest while
pressing the Shift key. Then click the
button to group regions of interest.
Copy region of interest Click the button to copy the selected re-
gion of interest.
Delete region of interest Click the button to delete the selected
region of interest.

If a region of interest cannot be selected:


u click on the name of the region of interest.

9.2 Creating a region of disinterest (ROD)


Within the exclusion zone (ROD: region of disinterest), the device does not search for codes, BLOBs
or contours. Up to 64 regions of disinterest can be created per model.

The operating elements for creating a region of disinterest are only displayed in the model
settings. (Ò Models / 65)

Creating a region of disinterest:


u Select a model.
w If no model exists, one must be added. (Ò Add new model / 66)

u Click on the [Create rectangular ROD] button:


u Create the region of disinterest in the live image by clicking and dragging using the mouse.
w Click and drag to move the created region of disinterest.
The following functions are used to set the region of disinterest:

Button Function Description


Change the size and shape of the region Click and drag the small square to the
of disinterest new position to change the region of dis-
interest.
Rotate region of disinterest Click and drag the round button to rotate
the region of disinterest.
Copy region of disinterest Click the button to copy the selected re-
gion of disinterest.
Delete region of disinterest Click the button to delete the selected
region of disinterest.

If a region of disinterest cannot be selected:


u click on the name of the region of disinterest.


10 Application
The [Application] area manages the applications of the connected device. An application contains
application-specific settings.
Typically, an application contains the following settings:
• Device camera and triggers: Images and triggers (Ò Images & trigger / 29)
• Search for codes, BLOBs or contours within the image: Model settings (Ò Models / 65)
• Sequence of processing of images and models: Flow (Ò Flow / 115)
• Output logic for data transfer to an external controller: Logic (Ò Logic / 118)
• Configuration of the output via the interfaces: Interfaces (Ò Interfaces / 146)
• Collection of statistical data: Test (Ò Test / 166)

Fig. 4: [Application] area

Up to 32 applications can be added. An application contains up to 10 models. (Ò Models / 65)

The [Application] area contains the following operating elements:


Operating element Name Description


Add new application Displays the following buttons for adding
a new application:
[User-defined mode]: starts the user-de-
fined mode. All settings are displayed.
The mode is intended for advanced us-
ers.
[Single-code detection]: starts the wizard
for the detection of a code. The wizard
guides you through adding an applica-
tion in several steps.
[Date code verification]: starts the wizard
for the verification of a date code. The
wizard guides you through adding an ap-
plication in several steps.
[Contour presence control]: Starts the
wizard for analysing the contours of an
object.
[BLOB presence control] Starts the wiz-
ard for analysing the number, arrange-
ment and brightness of the pixels of an
object, such as a thread.
[Measurement and robot calibration]
Starts the calibration wizard for length
measurements and coordinate transfor-
mation between a robot and the device.
[BLOB analysis]: starts the wizard for
analysing the surface characteristics of
an object.
Import application Imports one or more applications from a
file with the extension *.o2i4xxapp .
Export all applications / Export applica- Exports one or more applications to a
tion file with the extension *.o2i4xxapp .
Delete all applications / Delete applica- Deletes all applications or a single appli-
tion cation after confirmation.
Deleted applications cannot be restored.
Activate Activates the selected application.

Duplicate Duplicates the selected application. The


duplicate can be used for tests, for ex-
ample.
Save Saves the changed application details.

Discard unsaved changes Discards the changed application details


and restores the status saved last.
[Edit application] Edit application with or without wizard Edits the selected application in the [Us-
er-defined mode].
If the application was created with a wiz-
ard, the wizard will open.
[Edit without wizard] Edit application without wizard Edits the selected application in the [Us-
er-defined mode].
[One-code application] Application name Sets the name of the selected applica-
tion.
[Application description] Application description Sets a description for the selected appli-
cation.
[Application details] Application details Displays details of the selected applica-
tion.

Edit application
The application is set in the [Edit application] area. An application contains application-specific
settings.



Fig. 5: [Edit application] area


1 Navigation bar 2 Settings
3 Main area

The [Edit application] area can be accessed in two ways:


• add a new application,
• edit the selected application.

Navigation bar
The buttons in the navigation bar are used to switch between functions.

Function Name Description


Images & trigger Sets the image and trigger settings of
the application. (Ò Images & trig-
ger / 29)
Model settings Sets the type of code, BLOB or contour.
(Ò Models / 65)
Flow Sets the processing order of the images
and models. (Ò Flow / 115)
Logic Sets the output logic. (Ò Logic / 118)
The model and pin events are assigned
to the outputs in the output logic.
Interfaces Sets the data packages which are sent
via the interface. (Ò Interfaces / 146)
Test Displays statistics and states of the con-
nected device. (Ò Test / 166)


Settings
The selected function is set in the settings.

Main area
The codes, BLOBs and contours found and the live image are displayed in the main area. The live
image contains the following operating elements:

Operating element Name Description


Live image Continuously refreshes the live image in-
dependently of the set trigger source
and frame rate.
Force trigger Refreshes the live image once inde-
pendently of the set trigger source.
React to all triggers Refreshes the live image on each trigger
signal.
Wait for one trigger Refreshes the live image once on the
next trigger signal.
Save snapshot Saves the current live image in a file. If
several images exist, all are saved in a
file. The file has the extension
*.o2x5xximg .
Load snapshot Loads the live image from a file. The
loaded file is displayed instead of the
live image. The file has the extension
*.o2x5xximg .
If a file contains several images, there
may be an error message when the live
image is loaded. Before loading the live
image, add the number of images in the
ifm Vision Assistant. (Ò Add new im-
age / 33)
Enable code reading Enables detection of codes, BLOBs and
contours.
Create rectangular ROI Creates a search zone (ROI: region of
interest) in which codes are recognised.
(Ò Creating a region of interest
(ROI) / 23)
Create rectangular ROD creates a region of disinterest (ROD) in
which no codes are recognised. (Ò Cre-
ating a region of disinterest (ROD) / 24)

Some buttons are only visible in certain areas of the ifm Vision Assistant.

10.1 Images & trigger


The function [Images & triggers] sets the camera of the device and the triggers.


Fig. 6: Function [Images & triggers]

To detect codes, models have to be added. (Ò Models / 65)

The function [Images & triggers] contains the following operating elements:

Operating element Type Description


Button Adds a new image. (Ò Add new im-
age / 33)

Area [Trigger & general]:

Operating element Type Description


[Trigger mode] List Sets the trigger. (Ò Trigger
mode / 33)
[Gate duration] Input field Stops the image recording after the set
[gate duration] has elapsed.
The setting is only available for the [Gat-
ed time-based] trigger mode.



[Trigger delay] Input field Sets a time delay in [ms] for the trigger.
[Enable burst trigger] Checkbox / input field Enables the burst trigger. The input field
is used to set how often the application
is triggered. For example, an application
with 3 set images and burst trigger="10"
takes a total of 30 images.
After the image capture, the images are
evaluated. The first image rated as
"pass" is output. Other images rated as
"pass" are discarded.
The burst trigger is particularly suitable
for dynamic applications.
[Trigger gate logic] List Sets the logic of the [Gated…] trigger
modes:
[High active]: As long as the trigger is on
High , images are captured with a fixed
frame rate. The device stops capturing
images on Low .
[Low active]: As long as the trigger is on
Low , images are captured with a fixed
frame rate. The device stops capturing
images on High .
[Only one result per trigger gate] List Sets the handling of found codes in the
[Gated…] trigger modes:
[Off]: All recorded images are output.
[Relaxed]: The [Continuous] trigger
mode is active. If the gate is active and a
code is found: The gate is immediately
terminated and the result is output. If no
code is found: The gate is terminated
externally by a controller or a hardware
trigger after a certain time. If an image is
taken at the time of termination, the im-
age will still be evaluated.
[Strict]: The [Continuous] trigger mode is
active. If the gate is active and a code is
found: The gate is immediately terminat-
ed and the result is output. If no code is
found: The gate is terminated externally
by a controller or a hardware trigger af-
ter a certain time. If an image is taken at
the time of termination, the image will
not be evaluated.
[Frame rate] Input field Sets the frame rate to be achieved. (Ò
Frame rate / 34)
The setting is only available for certain
trigger modes.
[Focus] Input field Sets the focus. (Ò Selection / 34)
Button automatically optimises the focus.

[Calibrate] Button Opens the calibration wizard. (Ò Cali-


bration wizards / 35) The wizard
guides the user through the calibration
with a measuring tool or calibration pat-
tern (with robot tool tip).
Calibration increases the accuracy of
measurements with the device.
Button Imports a calibration from a file with the
extension .o2xcalib .
Button Exports a calibration to a file with the ex-
tension .o2xcalib .
[Rotate image by 180°] Checkbox Rotates the live image by 180°.



[Save reference image] Button Saves the current live image as a refer-
ence image. (Ò Reference im-
age / 61)
Button Deletes the saved reference image.

Area [New image 1] (image name):

Operating element Type Description


Button Copies the image and changes to the
new image.
Button Renames the image.

Button Deletes the image.

Button Deactivates the image.

[Exposure time] Input field Sets the exposure time. (Ò Exposure


time / 61)
Button Automatically sets the exposure time.

[Analogue gain] List Sets the amplification factor of the ana-


logue signal (Ò Analogue gain / 61)
[Illumination] List Sets the internal and external illumina-
tion. (Ò Illumination / 62)
[Illumination internal segments] Button Displays the status of the internal illumi-
nation.
When internal illumination is active, indi-
vidual segments are deactivated and ac-
tivated by clicking in the graphic.
[Filter type] List Sets the filter for the image. (Ò Filter
type / 63)
[Filter strength] List Sets the intensity of the selected filter
type. (Ò Filter strength / 63)
[Invert image] Checkbox Inverts the brightness values of the im-
age. (Ò Invert image / 63)

[Image quality check] area:

Operating element Type Description


[Activated] Checkbox Activates the image quality check. (Ò
Image quality check / 64) The image
quality check checks the image quality
of the entire image and not only the
code quality.
[Reset statistics] Button Resets the statistics.
[Sharpness] Graphical representation Shows the current measuring range (or-
ange line), the measured value (green
dot) and the permitted value range (blue
brackets).
The blue brackets can be moved within
the graphic.
[Teach] Button Sets the current measuring range as the
permitted value range.
[Mean brightness] Graphical representation Shows the current measuring range (or-
ange line), the measured value (green
dot) and the permitted value range (blue
brackets).
The blue brackets can be moved within
the graphic.



[Teach] Button Sets the current measuring range as the
permitted value range.
[Underexposed area] Graphical representation Shows the current measuring range (or-
ange line), the measured value (green
dot) and the permitted value range (blue
brackets).
The blue brackets can be moved within
the graphic.
[Teach] Button Sets the current measuring range as the
permitted value range.
[Overexposed area] Graphical representation Shows the current measuring range (or-
ange line), the measured value (green
dot) and the permitted value range (blue
brackets).
The blue brackets can be moved within
the graphic.
[Teach] Button Sets the current measuring range as the
permitted value range.

10.1.1 Add new image


The [Add new image] button adds a new image. The following settings are saved in an “image”:
• exposure time
• analogue gain
• illumination
• filter
• image quality check
Up to 5 images with their own settings can be used in parallel. Several images ensure detection of
codes, BLOBs and contours with different
• qualities,
• surfaces,
• lighting conditions, etc.

10.1.2 Trigger mode


The list [Trigger mode] contains the following trigger modes:

Trigger mode Description


[Continuous] The device continuously captures images. This mode is usu-
ally used for tests.
[Process interface] The device is triggered via the process interface (e.g. by a
PLC).
[Positive edge] The device is triggered via the rising edge of an input signal.


[Negative edge] The device is triggered via the falling edge of an input signal.

[Both edges] The device is triggered via the rising and falling edge of an
input signal.

???? If [Trigger gate logic] is set to [High active]: The device starts
continuous image capture with a rising edge. As long as the
trigger is High, images are captured with a fixed frame
rate. The device stops capturing images on Low.
With [Trigger gate logic] set to [Low active], the behaviour reverses.
[Gated PCIC] The device starts continuous image capture with the g command
of the process interface:
1234L000000008
1234g1

Images are captured with a fixed frame rate. The device stops
capturing images with the following g command:
1234L000000008
1234g0

??? If [Trigger gate logic] is set to [High active]: The device starts
continuous image capture with a rising edge. As long as the
trigger is High, images are captured with a fixed frame
rate. The device ends the image capture on Low or after the
set [gate duration] has expired.
With [Trigger gate logic] set to [Low active], the behaviour reverses.
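The g commands of the [Gated PCIC] mode can be sent from any TCP client. The following Python sketch builds the framing shown above (4-digit ticket, 'L' plus 9-digit length, CR/LF terminators); the default PCIC port 50010 is an assumption — check the network settings of your device:

```python
import socket

def pcic_frame(ticket: str, content: str) -> bytes:
    """Frame a PCIC command: <ticket>L<9-digit length>CRLF<ticket><content>CRLF."""
    body = f"{ticket}{content}\r\n".encode("ascii")
    header = f"{ticket}L{len(body):09d}\r\n".encode("ascii")
    return header + body

def set_trigger_gate(host: str, state: bool, port: int = 50010) -> None:
    """Start (g1) or stop (g0) continuous capture in [Gated PCIC] mode."""
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(pcic_frame("1234", "g1" if state else "g0"))
```

The framing of the start command matches the manual's example, i.e. `pcic_frame("1234", "g1")` yields `1234L000000008\r\n1234g1\r\n`.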

10.1.3 Frame rate


The input field [Frame rate] defines a maximum frame rate for the device.
Set the frame rate:
u Enter the frame rate in the input field [Frame rate] and confirm with [Enter].

The frame rate only influences the trigger modes [Continuous] and [Gated …].
Depending on which additional settings are active, the requested frame rate may not be reached.

10.1.4 Focus
The [Focus] input field sets the distance in metres [m] between the lens and the code, BLOB or
contour to be detected. The optimum focus has been reached when the code, BLOB or contour in the
live image is sharply displayed and is detected by the device.

The set focus is used for all images. The focus cannot be set separately for each image.

The focus can be set in several ways:


• via the input field,

• via the [Optimise] button,


• via the slider below the input field,

• via the buttons / next to the slider.

In the [Focus] input field, the button appears if the focus has been optimised automatically.
A click on the button shows the focus levels found by the automatic focus.

10.1.5 Calibration wizards


The calibration wizards for [Rough measurement] and [Precise measurement] assist in calibrating the
device. The [Robot sensor calibration] wizard sets the coordinate transformation between the device
and the robot.
The 3 wizards guide you step by step through the calibration of the device.
During calibration, the image pixels are transformed into world coordinates.

Start calibration wizard


The calibration wizards are started via the [Calibrate] button in the [Images & triggers] area of an
application. (→ Images & trigger / 29)

Calibration wizard [Rough measurement]


Rough measurement is fast and provides precise calibrations with simple measuring tools. (→ Rough
measurement / 35) For example, a calliper or ruler is suitable as a measuring tool.

Calibration wizard [Precise measurement]


Precise measurement recognises the pattern on a marker sheet and determines the number of image
pixels in a specific area. (→ Precise measurement / 42) A marker sheet is a prerequisite for precise
measurement. The marker sheet is printed with the calibration wizard.

Calibration wizard [Robot sensor calibration]


Robot sensor calibration transforms the device’s image coordinate system into the robot’s world
coordinate system. (→ Robot sensor calibration / 51)
The device sends the coordinates of found objects to the robot. The data can be used by the robot for
robot gripper navigation (pick & place), for example.
A prerequisite for robot sensor calibration is a marker sheet. The marker sheet is printed with the
calibration wizard.

10.1.5.1 Rough measurement


The rough measurement is fast and provides precise calibrations with simple measuring tools. For
example, a vernier calliper or ruler is suitable as a measuring tool.



Fig. 7: Rough measurement


1 Bottom of the device with maintenance flap for ifm memory stick
2 Field of view of the device (red rectangle)
3 Working plane
4 Zero point

The zero point (4) is located at the top left of the live image for the rough measurement, starting
from the bottom of the device (1).

The individual pages of the wizard are described below.

1. Overview
The [Overview] page summarises the function of the wizard.


Fig. 8: Rough measurement

Rough measurement requires a measuring tool such as a
• calliper gauge,
• ruler or
• similar measuring tool with metric length indication.

2. Prepare calibration
On the [Prepare calibration] page, the calibration is prepared.
Before calibrating the device, follow the instructions below:
u For installation, follow the operating instructions of the device.

Fig. 9: The device is aligned perpendicular (90°) to the working plane.

u Place the object in the field of view of the device.


u Avoid reflections from the object by adjusting the image settings.
w Depending on the type, the device has polarisation filters and different illumination colours,
which can reduce reflections.
u Place the measuring tool (calliper, ruler etc.) on the object.
w For an accurate measurement result, the measuring tool must be on the same plane (surface)
as the object to be tested.


If the measuring tool has an inherent height of several millimetres: A sheet of paper printed with
a metric scale gives a more accurate measurement result.

3. Image settings focus


On the [Image settings focus] page, the focus of the device is set automatically or manually.
The focus describes the distance between
• the lens of the device (orange marking on the device) and
• the object that should be in focus.
The plane perpendicular to the optical axis is called the focal plane. The device automatically finds the
optimal focal plane.

Fig. 10: Measuring tool placed on the object, out of focus (left) and in focus (right).

Adjust focus
u Place the object in the field of view of the device.
u Place the measuring tool on the object.

u Select the [Autofocus] button:


w The focus is adjusted automatically.

u Alternatively, adjust the focus manually with the buttons / or the slider.

Rotate image
The live image can be rotated by 180°.
u Select the [Rotate image by 180°] button.

The live image can be adjusted using the mouse:


u Use the mouse wheel to enlarge or reduce the live image.
u Move the enlarged live image with the mouse pointer.

3.1 Image settings exposure


On the [Image settings exposure] page, the contrast of the object can be set. High contrast makes it
easier for the device to recognise structures.
The exposure of the object can be set automatically or manually.

Exposure time
The exposure time sets the amount of time in [µs] for taking a picture.
The exposure time can be set in several ways:
• via the input field,

• via the [Optimise] button,


• via the slider below the input field,


• via the buttons / next to the slider.

Analogue gain
The [Analogue gain] list sets the gain factor of the analogue signal. The analogue gain scales
linearly with the exposure time: with double analogue gain, half the exposure time can be used.
Especially dynamic applications benefit from the short exposure times.

The analogue gain slightly increases the image noise.

Illumination segments
The [Illumination segments] set the internal illumination of the device.

Fig. 11: Setting the internal illumination


1 Click-activated LED

Clicking on an LED (1) in the [Internal illumination graphic] activates or deactivates the LED.
The status of an LED is displayed in colour:

Colour State
grey The LED is deactivated.
green The LED is activated.
blue The LED with polarisation filter is activated.

The polarisation filters and the [Colour of the internal illumination] buttons are only available for
the RGB-W units O2D50x, O2D51x and O2D54x.
The infrared units O2D52x, O2D53x and O2D55x are not equipped with polarisation filter and
LED colours.

In case of unwanted reflections, use the LEDs with polarisation filters.

Colour of the internal illumination


The [Colour of the internal illumination] buttons are used to set the LED colour of the internal
illumination.
u Depending on the colour of the object, set a suitable LED colour.
w A matching LED colour creates a strong contrast with the measuring tool.
The following LED colours are available:
• [white]
• [green]
• [blue]
• [red]

Illumination
The [Illumination] list sets the type of illumination. The following illumination types are
available:

Illumination Description
[None] Deactivates the internal and external illumination.

[Internal] Activates the internal illumination of the device. The graphic
below the list [Illumination segments] shows the status of the
internal illumination.
When internal illumination is active, individual segments are
deactivated and activated by clicking in the graphic.
[External] Activates the external illumination. The external illumination
unit is connected to switching output OUT5 of the device. In
addition, the external illumination has to be activated.
The external illumination unit is active as long as the switching
output is high.
The external illumination unit and the OUT5 output are only
available for 8-pole devices.
[Both] Activates the internal and external illumination.

4. Sensor calibration
On the [Sensor calibration] page, the measuring tool can be used to calibrate the sensor of the device:
u Place the measuring tool centrally or diagonally in the sensor’s field of view.

Fig. 12: Measuring points placed along the measuring tool

u Use buttons [A] and [B] to place the measuring points in the live image along the measuring tool.
w The measurement is more accurate if the measuring points [A] and [B] are placed as far apart
as possible.


Fig. 13: Sensor calibration


1 Input field for the length

u Enter the distance between the measuring points in the [input field] for the length.
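Conceptually, this step derives one average pixel-to-millimetre conversion factor from the two measuring points. A sketch of the arithmetic in Python; the coordinates and the 150 mm length are purely illustrative:

```python
import math

def conversion_factor(a_px, b_px, length_mm):
    """Average conversion factor in mm per pixel from points A and B."""
    distance_px = math.hypot(b_px[0] - a_px[0], b_px[1] - a_px[1])
    return length_mm / distance_px

def px_to_mm(point_px, factor):
    """Convert an image coordinate to millimetres with the factor."""
    return (point_px[0] * factor, point_px[1] * factor)

# Points 300 px apart, measured as 150 mm with the calliper:
factor = conversion_factor((100, 200), (400, 200), 150.0)  # 0.5 mm/px
```

The farther apart A and B are placed, the smaller the relative pixel error — which is why the wizard recommends maximum spacing.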

5. Test
The [Test] page displays the result of the calibration. In addition, the calibration can be tested.

Calibration details
The [Calibration details] tab shows the result of the calibration:
• Position of point A [px] / [mm]
• Position of point B [px] / [mm]
• Average conversion factor from [px] to [mm]
• Length: Distance between the measuring points [mm]

Test measurement
The [Test measurement] tab provides tools for testing the calibration:

Button Description
Draws a line in the live image.
The table contains measurement results for the drawn line:
• Start point (x/y) [mm]
• End point (x/y) [mm]
• Length [mm]
Draws a rectangle in the live image.
The table contains measurement results for the drawn
rectangle:
• Width / height [mm]
• Area [mm²]

Draws a circle in the live image.
The table contains measurement results for the drawn circle:
• Centre point [mm]
• Radius [mm]
• Area [mm²]

10.1.5.2 Precise measurement


The precise measurement detects the pattern on a marker sheet and determines the number of image
pixels in a specific area. A marker sheet is a prerequisite for precise measurement. The marker sheet
is printed with the calibration wizard.


1 Zero point
2 Field of view of the device (red rectangle)
3 Working plane
4 Central dot pattern of the marker sheet

The zero point (1) is taught during precise measurement via the central dot pattern of the
marker sheet (4). If the marker sheet is placed in the centre of the live image, the zero point is
approximately in the centre of the image.

The individual pages of the wizard are described below.

1. Overview
The [Overview] page summarises the function of the wizard.


Fig. 14: Precise measurement


1 Zero point

A marker sheet is a prerequisite for calibration.

2. Prepare calibration
On the [Prepare calibration] page, the calibration is prepared.
Before calibrating the device, follow the instructions below:
u For installation, follow the operating instructions of the device.


Fig. 15: The marker sheet is placed on the object

u Place the object in the field of view of the device.

Fig. 16: Prepare calibration


1 Input field for the distance

u Enter the distance from the orange seal of the device to the object in the input field (1).

w The [Reset to focus] button resets the input field to the pre-set distance.
u Print the marker sheet.
u Place the marker sheet on the object or on the work plane.
w The height difference between the object plane and the working plane can be compensated for
on the [Calibration Z offset] page.


Print marker sheet


The marker sheet is used to calibrate the device. Before printing the marker sheet, make sure that the
original scaling is used and that no printer economy modes are activated. Otherwise, the calibration
will be inaccurate.

Print marker sheet


u Go to the [Prepare calibration] step.

Fig. 17: List of available marker sheets

u Select a marker sheet from the list.


w The length in brackets indicates the approximate distance between the device and the object.
The size of the marker sheet increases with the distance. With a distance of > 5.20 m, the
marker sheet is printed in DIN 2A0 format (1189 × 1682 mm).
u Select the [Print markers] button.
w The marker sheet is displayed.
u Select the [Print] button.

Set scaling
u Select the printer.
u Select the [Advanced] tab.
u In the [Page scaling] area, select [Use original page sizes].
w The marker sheet must be printed in the original scale.

Deactivating the economy modes of the printer


u Select the [General] tab.
u Select the [Settings] button.
u Deactivate all economy modes for colour, toner, etc.
w The marker sheet must be printed in the best quality.
u Select the [OK] button.
u Select the [Print] button.
w The marker sheet is printed in the original scale and without economy modes.


Check printed marker sheet

Fig. 18: Printed marker sheet with scales


1 Scale

There are scales (1) on the marker sheet to check the scaling. The marker sheet must be printed in
the original scale.
u Use a ruler to check the scales on the X and Y axes of the marker sheet.

3. Image settings focus


On the [Image settings focus] page, the focus of the device is set automatically or manually.
The focus describes the distance between
• the lens of the device (orange marking on the device) and
• the object that should be in focus.
The plane perpendicular to the optical axis is called the focal plane. The device automatically finds the
optimal focal plane.


Fig. 19: The live image shows a section of the marker sheet.

Adjust focus
u Place the object in the field of view of the device.
u Place the marker sheet on the object.

u Select the [Autofocus] button:


w The focus is adjusted automatically.

u Alternatively, adjust the focus manually with the buttons / or the slider.

Rotate image
The live image can be rotated by 180°.
u Select the [Rotate image by 180°] button.

The live image can be adjusted using the mouse:


u Use the mouse wheel to enlarge or reduce the live image.
u Move the enlarged live image with the mouse pointer.

3.1 Image settings exposure


On the [Image settings exposure] page, the contrast of the object can be set. High contrast makes it
easier for the device to recognise structures.
The exposure of the object can be set automatically or manually.

Exposure time
The exposure time sets the amount of time in [µs] for taking a picture.
The exposure time can be set in several ways:
• via the input field,

• via the [Optimise] button,


• via the slider below the input field,


• via the buttons / next to the slider.

Analogue gain
The [Analogue gain] list sets the gain factor of the analogue signal. The analogue gain scales
linearly with the exposure time: with double analogue gain, half the exposure time can be used.
Especially dynamic applications benefit from the short exposure times.

The analogue gain slightly increases the image noise.

Illumination segments
The [Illumination segments] set the internal illumination of the device.

Fig. 20: Setting the internal illumination


1 Click-activated LED

Clicking on an LED (1) in the [Internal illumination graphic] activates or deactivates the LED.
The status of an LED is displayed in colour:

Colour State
grey The LED is deactivated.
green The LED is activated.
blue The LED with polarisation filter is activated.

The polarisation filters and the [Colour of the internal illumination] buttons are only available for
the RGB-W units O2D50x, O2D51x and O2D54x.
The infrared units O2D52x, O2D53x and O2D55x are not equipped with polarisation filter and
LED colours.

In case of unwanted reflections, use the LEDs with polarisation filters.

Colour of the internal illumination


The [Colour of the internal illumination] buttons are used to set the LED colour of the internal
illumination.
u Depending on the colour of the object, set a suitable LED colour.
w A matching LED colour creates a strong contrast with the measuring tool.
The following LED colours are available:
• [white]
• [green]
• [blue]
• [red]

Illumination
The [Illumination] list sets the type of illumination. The following illumination types are
available:

Illumination Description
[None] Deactivates the internal and external illumination.

[Internal] Activates the internal illumination of the device. The graphic
below the list [Illumination segments] shows the status of the
internal illumination.
When internal illumination is active, individual segments are
deactivated and activated by clicking in the graphic.
[External] Activates the external illumination. The external illumination
unit is connected to switching output OUT5 of the device. In
addition, the external illumination has to be activated.
The external illumination unit is active as long as the switching
output is high.
The external illumination unit and the OUT5 output are only
available for 8-pole devices.
[Both] Activates the internal and external illumination.

4. Sensor calibration
On the [Sensor calibration] page, the marker sheet is used to calibrate the device:

Fig. 21: Marker sheet placed underneath the device


1 Dot pattern (marked red)
2 Central dot pattern (marked green)

Do not subject the device to movement, shocks or vibrations during calibration.

u Place the object in the field of view of the device.


u Place the marker sheet on the working plane.
w One of the dot patterns (1) must be in the field of view of the device, lying flat.
w The central dot pattern (2) is used as the zero point. The central dot pattern may be outside the
field of view of the device, as its position is calculated from the other dot patterns found.
u Click on the [Use image] button.
w The [percent indication] next to the button shows the quality of the calibration in percent. The
live image used is displayed below the button.
u Move and rotate the marker sheet as required.
w One of the dot patterns (1) must be in the field of view of the device, lying flat.


u Click on the [Use image] button.


u Repeat the last two steps until the [percent indication] shows a value of > 85 %.
w Percentage values above 85 % further increase the quality of the calibration. Up to 16
images can be used.

Delete used live images


The live images used can negatively influence the calibration if they are blurred, for example.
Deleting a live image:

u Select the live image with the buttons and .

u Delete the selected live image with the button.


u Alternatively, use the [Delete all] button to delete all live images used.

4.1 Calibration of the Z offset


On the [Calibration Z offset] page, the difference between the calibration plane and the working plane
is compensated. Ideally, the calibration plane corresponds to the working plane. If this is the case, the
[Calibration Z offset] page can be skipped.
The [Calibration Z offset] page is necessary if the marker sheet was not directly on the object in the
[Sensor calibration] step. This happens when the calibration plane (marker sheet) is not equal to the
working plane (object).

Fig. 22: Z offset
1 Calibration plane with marker sheet
2 Z offset (orange area)
3 Working plane

For certain applications, the calibration plane cannot be at the same level as the working plane, for
example, when
• a hole within a workpiece carrier is to be measured,
• the object to be measured protrudes from the calibration plane.

Setting the Z offset


The sign of the [Z offset] indicates whether the calibration plane with the marker sheet is in front of or
behind the working plane.
Setting the [Z offset]:
u Activate the [Z offset] button.
When the calibration plane with the marker sheet is above the working plane:
u Enter a positive [Z offset] in the input field.
When the calibration plane with the marker sheet is below the working plane:
u Enter a negative [Z offset] in the input field.


Large values for [Z offset] degrade the accuracy of the calibration.
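Why large offsets hurt can be illustrated with a simple pinhole-camera model: a plane farther from the lens covers fewer pixels per millimetre, so the conversion factor must be rescaled. The sketch below shows only this idealised model, not the device's documented correction; the sign convention follows the instructions above (positive offset = marker sheet above the working plane):

```python
def corrected_factor(factor_calib, distance_calib, z_offset):
    """Rescale a mm/px conversion factor from the calibration plane to the
    working plane (pinhole assumption, all distances in mm).

    A positive z_offset means the marker sheet lies above the working plane,
    so the working plane is farther from the lens.
    """
    return factor_calib * (distance_calib + z_offset) / distance_calib

# Calibrated at 400 mm with 0.5 mm/px; working plane 20 mm farther away:
f = corrected_factor(0.5, 400.0, 20.0)  # 0.525 mm/px
```

The linear rescaling ignores lens distortion and perspective effects, which grow with the offset — one reason why large [Z offset] values degrade accuracy.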

5. Test
The [Test] page displays the result of the calibration. In addition, the calibration can be tested.

Calibration details
The [Calibration details] tab shows the result of the calibration:
• Number of images [max. 16 images]
• Quality of calibration [%]
• Average conversion factor from [px] to [mm]

Test measurement
The [Test measurement] tab provides tools for testing the calibration:

Button Description
Draws a line in the live image.
The table contains measurement results for the drawn line:
• Start point (x/y) [mm]
• End point (x/y) [mm]
• Length [mm]
Draws a rectangle in the live image.
The table contains measurement results for the drawn
rectangle:
• Width / height [mm]
• Area [mm²]
Draws a circle in the live image.
The table contains measurement results for the drawn circle:
• Centre point [mm]
• Radius [mm]
• Area [mm²]

10.1.5.3 Robot sensor calibration


The robot sensor calibration transforms the image coordinate system of the device into the world
coordinate system of the robot.
The device sends the coordinates of found objects to the robot. The data can be used by the robot for
robot gripper navigation (pick & place), for example.
A prerequisite for robot sensor calibration is a marker sheet. The marker sheet is printed with the
calibration wizard.


Fig. 23: Robot sensor calibration
1 Zero point
2 Tool tip
3 Field of view of the device (red rectangle)
4 Marking points

The zero point (1) is located in the centre of the robot’s coordinate axes during robot sensor
calibration.

The individual pages of the wizard are described below.

1. Overview
The [Overview] page summarises the function of the wizard.

Fig. 24: Robot sensor calibration


A marker sheet is a prerequisite for calibration.

2. Prepare calibration
On the [Prepare calibration] page, the calibration is prepared.
Before calibrating the device, follow the instructions below:
u For installation, follow the operating instructions of the device.

Fig. 25: The marker sheet is placed on the object

u Place the object in the field of view of the device.

Fig. 26: Prepare calibration


1 Input field for the distance

u Enter the distance from the orange seal of the device to the object in the input field (1).


w The [Reset to focus] button resets the input field to the pre-set distance.
u Print the marker sheet.
u Place the marker sheet on the object or on the work plane.
w The height difference between the object plane and the working plane can be compensated for
on the [Calibration Z offset] page.

Print marker sheet


The marker sheet is used to calibrate the device. Before printing the marker sheet, make sure that the
original scaling is used and that no printer economy modes are activated. Otherwise, the calibration
will be inaccurate.

Print marker sheet


u Go to the [Prepare calibration] step.

Fig. 27: List of available marker sheets

u Select a marker sheet from the list.


w The length in brackets indicates the approximate distance between the device and the object.
The size of the marker sheet increases with the distance. With a distance of > 5.20 m, the
marker sheet is printed in DIN 2A0 format (1189 × 1682 mm).
u Select the [Print markers] button.
w The marker sheet is displayed.
u Select the [Print] button.

Set scaling
u Select the printer.
u Select the [Advanced] tab.
u In the [Page scaling] area, select [Use original page sizes].
w The marker sheet must be printed in the original scale.

Deactivating the economy modes of the printer


u Select the [General] tab.
u Select the [Settings] button.
u Deactivate all economy modes for colour, toner, etc.


w The marker sheet must be printed in the best quality.


u Select the [OK] button.
u Select the [Print] button.
w The marker sheet is printed in the original scale and without economy modes.

Check printed marker sheet

Fig. 28: Printed marker sheet with scales


1 Scale

There are scales (1) on the marker sheet to check the scaling. The marker sheet must be printed in
the original scale.
u Use a ruler to check the scales on the X and Y axes of the marker sheet.

3. Image settings focus


On the [Image settings focus] page, the focus of the device is set automatically or manually.
The focus describes the distance between
• the lens of the device (orange marking on the device) and
• the object that should be in focus.
The plane perpendicular to the optical axis is called the focal plane. The device automatically finds the
optimal focal plane.


Fig. 29: The live image shows a section of the marker sheet.

Adjust focus
u Place the object in the field of view of the device.
u Place the marker sheet on the object.

u Select the [Autofocus] button:


w The focus is adjusted automatically.

u Alternatively, adjust the focus manually with the buttons / or the slider.

Rotate image
The live image can be rotated by 180°.
u Select the [Rotate image by 180°] button.

The live image can be adjusted using the mouse:


u Use the mouse wheel to enlarge or reduce the live image.
u Move the enlarged live image with the mouse pointer.

3.1 Image settings exposure


On the [Image settings exposure] page, the contrast of the object can be set. High contrast makes it
easier for the device to recognise structures.
The exposure of the object can be set automatically or manually.

Exposure time
The exposure time sets the amount of time in [µs] for taking a picture.
The exposure time can be set in several ways:
• via the input field,

• via the [Optimise] button,


• via the slider below the input field,


• via the buttons / next to the slider.

Analogue gain
The [Analogue gain] list sets the gain factor of the analogue signal. The analogue gain scales
linearly with the exposure time: with double analogue gain, half the exposure time can be used.
Especially dynamic applications benefit from the short exposure times.

The analogue gain slightly increases the image noise.

Illumination segments
The [Illumination segments] set the internal illumination of the device.

Fig. 30: Setting the internal illumination


1 Click-activated LED

Clicking on an LED (1) in the [Internal illumination graphic] activates or deactivates the LED.
The status of an LED is displayed in colour:

Colour State
grey The LED is deactivated.
green The LED is activated.
blue The LED with polarisation filter is activated.

The polarisation filters and the [Colour of the internal illumination] buttons are only available for
the RGB-W units O2D50x, O2D51x and O2D54x.
The infrared units O2D52x, O2D53x and O2D55x are not equipped with polarisation filter and
LED colours.

In case of unwanted reflections, use the LEDs with polarisation filters.

Colour of the internal illumination


The [Colour of the internal illumination] buttons are used to set the LED colour of the internal
illumination.
u Depending on the colour of the object, set a suitable LED colour.
w A matching LED colour creates a strong contrast with the measuring tool.
The following LED colours are available:
• [white]
• [green]
• [blue]
• [red]

Illumination
The [Illumination] list sets the type of illumination. The following illumination types are
available:

Illumination Description
[None] Deactivates the internal and external illumination.

[Internal] Activates the internal illumination of the device. The graphic
below the list [Illumination segments] shows the status of the
internal illumination.
When internal illumination is active, individual segments are
deactivated and activated by clicking in the graphic.
[External] Activates the external illumination. The external illumination
unit is connected to switching output OUT5 of the device. In
addition, the external illumination has to be activated.
The external illumination unit is active as long as the switching
output is high.
The external illumination unit and the OUT5 output are only
available for 8-pole devices.
[Both] Activates the internal and external illumination.

4. Robot coordinates
On the [Robot coordinates] page, the coordinate transformation between the device and the robot is set.
For the coordinate transformation, the 4 marking points on the marker sheet are approached with the
tool tip.

Do not move the marker sheet while approaching the 4 marking points.
u Use adhesive tape to stick the marker sheet to the object.

u Approach the 4 marking points A, B, C and D on the marker sheet one after the other with the tool
tip of the robot (Tool Centre Point).
u Position the tool tip exactly in the centre of the marking point.
w The more accurate the positioning, the more accurate the calibration.

Fig. 31: Marking point

u Enter the approached positions of the marking points A to D in the [Robot coordinates] table.

Fig. 32: The table contains exemplary coordinates for marking point A.


u Click on the [Teach] button.


w After clicking the button, the fine calibration to the working plane and the relative rotation to the
robot are taught. The dot pattern in the centre of the marker sheet is used for teaching.

Using marker coordinates


The [Use marker coordinates] button transfers the distance values starting from the central dot
pattern of the marker sheet. The base of the robot is then assumed to be in the middle of the marker
sheet.
If the translational values in X and Y as well as the rotational value about Z are known (for example
in the case of a tool carrier), the coordinates can be taken over from the marker sheet.
After clicking the [Use marker coordinates] button, the values of the marker sheet for X, Y and Z
coordinates are transferred to the table.
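The taught transformation is, in essence, a best-fit rigid transform between the marker positions in the sensor's coordinate system and the positions approached with the robot tool tip. The following sketch (pure Python with illustrative point values; not part of the device software) shows how such a planar rotation and translation can be estimated by least squares:

```python
import math

def fit_rigid_2d(src, dst):
    """Fit a rotation + translation mapping src points onto dst (least squares)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # accumulate dot/cross terms of the centred point sets
    sxx = sxy = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= csx; ay -= csy; bx -= cdx; by -= cdy
        sxx += ax * bx + ay * by   # dot   -> cosine term
        sxy += ax * by - ay * bx   # cross -> sine term
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

# marker points A..D in sensor coordinates (mm) -- illustrative values only
sensor_pts = [(0, 0), (100, 0), (100, 100), (0, 100)]
# the same points as approached with the robot tool tip
robot_pts = [(50, 20), (50, 120), (-50, 120), (-50, 20)]
theta, tx, ty = fit_rigid_2d(sensor_pts, robot_pts)
print(round(math.degrees(theta), 1), round(tx, 1), round(ty, 1))  # 90.0 50.0 20.0
```

The device additionally teaches the fine calibration from the dot pattern; the sketch only illustrates the planar point-correspondence step.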

5. Sensor calibration
On the [Sensor calibration] page, the marker sheet is used to calibrate the device:

Fig. 33: Marker sheet placed underneath the device


1 Dot pattern (marked red)

Do not subject the device to movement, shocks or vibrations during calibration.

u Place the object in the field of view of the device.


u Place the marker sheet on the working plane.
w One of the dot patterns (1) must be in the field of view of the device, lying flat.
u Click on the [Use image] button.
w The [percent indication] next to the button shows the quality of the calibration in percent. The
live image used is displayed below the button.
u Move and rotate the marker sheet as required.
w One of the dot patterns (1) must be in the field of view of the device, lying flat.
u Click on the [Use image] button.
u Repeat the last two instructions until the [percent indication] shows a value of " > 85 % ".


w Percentage values above 85 % further increase the quality of the calibration. Up to 16
images can be used.

Delete used live images


The live images used can negatively influence the calibration if they are blurred, for example.
Deleting a live image:

u Select the live image with the arrow buttons.

u Delete the selected live image with the delete button.


u Alternatively, use the [Delete all] button to delete all live images used.

5.1 Calibration of the Z offset


On the [Calibration Z offset] page, the difference between the calibration plane and the working plane
is compensated. Ideally, the calibration plane corresponds to the working plane. If this is the case, the
[Calibration Z offset] page can be skipped.
The [Calibration Z offset] page is necessary if the marker sheet was not directly on the object in the
[Sensor calibration] step. This happens when the calibration plane (marker sheet) is not equal to the
working plane (object).

Fig. 34: Z offset
1 Calibration plane with marker sheet
2 Z offset (orange area)
3 Working plane

For certain applications, the calibration plane cannot be at the same level as the working plane. For
example, when
• a hole within a workpiece carrier is to be measured,
• the object to be measured protrudes from the calibration plane.

Setting the Z offset


The sign of the [Z offset] indicates whether the calibration plane with the marker sheet is in front of or
behind the working plane.
Setting the [Z offset]:
u Activate the [Z offset] button.
When the calibration plane with the marker sheet is above the working plane:
u Enter a positive [Z offset] in the input field.
When the calibration plane with the marker sheet is below the working plane:
u Enter a negative [Z offset] in the input field.

Large values for [Z offset] degrade the accuracy of the calibration.
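The sign convention can be summarised in a small helper (illustrative only; both plane heights are assumed to be measured from a common reference, which the manual does not specify):

```python
def z_offset_mm(calibration_plane_mm, working_plane_mm):
    """Z offset to enter, following the sign convention above:
    positive when the calibration plane (marker sheet) lies above
    the working plane, negative when it lies below."""
    return calibration_plane_mm - working_plane_mm

print(z_offset_mm(305.0, 300.0))  # 5.0  -> marker sheet 5 mm above the object
print(z_offset_mm(295.0, 300.0))  # -5.0 -> marker sheet 5 mm below the object
```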


6. Test
The [Test] page displays the result of the calibration. In addition, the calibration can be tested.

Calibration details
The [Calibration details] tab shows the result of the calibration:
• Number of images [max. 16 images]
• Quality of calibration [%]
• Average conversion factor from [px] to [mm]

Test measurement
The [Test measurement] tab provides tools for testing the calibration:

Button Description
Draws a line in the live image.
The table contains measurement results for the drawn line:
• Start point (x/y) [mm]
• End point (x/y) [mm]
• Length [mm]
Draws a rectangle in the live image.
The table contains measurement results for the drawn rectan-
gle:
• Width / height [mm]
• Area [mm²]
Draws a circle in the live image.
The table contains measurement results for the drawn circle:
• Centre point [mm]
• Radius [mm]
• Area [mm²]
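The values reported by the test tools follow from plain planar geometry on the calibrated millimetre coordinates. As an illustration (not device code):

```python
import math

def line_length(p1, p2):
    """Length of the drawn line in mm from its start and end points."""
    return math.dist(p1, p2)

def rect_area(width_mm, height_mm):
    """Area of the drawn rectangle in mm^2."""
    return width_mm * height_mm

def circle_area(radius_mm):
    """Area of the drawn circle in mm^2."""
    return math.pi * radius_mm ** 2

print(line_length((0, 0), (30, 40)))  # 50.0 mm
print(rect_area(20, 10))              # 200 mm^2
print(round(circle_area(5), 1))       # 78.5 mm^2
```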

10.1.6 Reference image


The button [Store reference image] saves the current live image as a reference image. The reference
image is used as an overlay in the area [Monitor]. Monitor (Ò / 21)
The reference image is used to help align the device. Examples:
• The orientation of the device changes and the original position is to be restored.
• The device is replaced and the new device is to be aligned in the same way.

10.1.7 Exposure time


The input field [Exposure time] sets the period in [μs] for the image capture.
The exposure time can be set in 4 ways:
• via the input field,
• via the [Optimise] button,
• via the slider below the input field,
• via the buttons next to the slider.

10.1.8 Analogue gain


The [Analogue gain] list sets the gain factor of the analogue signal. The analogue gain is inversely
proportional to the required exposure time: with double the analogue gain, half the exposure time can
be used. Especially dynamic applications benefit from the short exposure times.


The analogue gain slightly increases the image noise.
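This trade-off can be expressed numerically (illustrative sketch, not device code):

```python
def equivalent_exposure_us(exposure_us, gain_factor):
    """Exposure time that keeps the image brightness roughly constant
    when the analogue gain is raised by gain_factor (inverse relationship)."""
    return exposure_us / gain_factor

# 8000 us at gain 1 corresponds to 2000 us at gain 4, at the cost of more noise
print(equivalent_exposure_us(8000, 4))  # 2000.0
```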

10.1.9 Illumination
The [Illumination] list sets the internal and external illumination. The following illuminations are
available:

Illumination Description
[None] Deactivates both internal and external illumination.
[Internal] Activates the internal illumination of the device. The graphic
below the [Illumination] list shows the status of the internal il-
lumination.
When internal illumination is active, individual segments are
deactivated and activated by clicking in the graphic.
[External] Activates the external illumination. The external illumination
unit is connected to the device’s switching output. In addition,
the external illumination has to be activated. (Ò Interfaces / 146)
The illumination unit is active as long as the switching output
is in the "high" state.
[Both] Activates internal and external illumination.

Which switching output the external illumination is connected to is specified in the operating
instructions for the device.

10.1.10 Illumination of internal segments


The [Illumination internal segments] graphic sets the internal illumination of the device. The internal
illumination is activated with the [Illumination] list. (Ò Illumination / 62)

Fig. 35: Setting the internal illumination


1 Click-activated LED

Clicking on an LED (1) in the [Illumination internal segments] graphic activates or deactivates the LED.
The status of an LED is displayed by its colour:

Colour State
Grey The LED is deactivated.
Green The LED is activated.
Blue The LED with polarisation filter is activated.

LEDs with polarisation filter are only available for devices with polarisation filter: www.ifm.com

In case of unwanted reflections, use the LEDs with polarisation filters.

Colour of the internal illumination


The [Colour of the internal illumination] buttons are used to set the LED colour of the internal
illumination.
u Depending on the colour of the object, set a suitable LED colour.
w A matching LED colour produces a high contrast for the measuring tool.


The following LED colours are available:


• [white]
• [green]
• [blue]
• [red]

The [Colour of the internal illumination] buttons are only available for the RGB-W devices:
www.ifm.com

The focus of the device depends on the LED colour.


u After changing the LED colour, readjust the focus. (Ò Selection / 34)

The focus setting always applies to the application. This means that all images in the application
use the same focus setting.
w If different LED colours are used:
u create a separate application for each LED colour or
u set a focus that provides an acceptable result for all images.

10.1.11 Filter type


The list [Filter type] sets filters for the image with which the representation is optimised. The following
filters are available:

Filter type Description


[No filter] Deactivates the filter.
[Enlarge dark pixels] Enlarges dark pixel groups and decreases light pixel gaps.
[Enlarge light pixels] Enlarges light pixel groups and decreases dark pixel gaps.
[Smoothing] Reduces noise and spurious pixels by first sorting the neigh-
bouring pixel values in numerical order. Subsequently, the re-
spective pixel value is replaced by the median pixel value.
The [Median] filter type is more robust than the [Mean value]
filter type. Individual non-representative pixel values influence
the median value only insignificantly.
[Mean value] Reduces noise and spurious pixels by replacing each pixel
value with the average value of the neighbouring pixel values.
This means that only the pixel values that are representative
of their surroundings will be retained.
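The robustness claim for the median-based [Smoothing] filter can be seen on a single neighbourhood (illustrative sketch, not device code):

```python
from statistics import mean, median

# a 3x3 neighbourhood with one spurious bright pixel (grey value 255)
neighbourhood = [12, 10, 11, 13, 255, 12, 11, 10, 12]

print(median(neighbourhood))          # 12   -- the outlier has no effect
print(round(mean(neighbourhood), 1))  # 38.4 -- the outlier drags the mean up
```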

10.1.12 Filter strength


The list [Filter strength] sets the intensity of the selected filter type. The following filter intensities are
available:

Filter strength Description


[1 (weak)] Uses a weak intensity for the selected filter type (preset).
[2] …
[3] …
[4] …
[5 (strong)] Uses a strong intensity for the selected filter type.

10.1.13 Invert image


The [Invert image] checkbox inverts the brightness values of the selected image:


black/white becomes white/black.

Inversion of the image is necessary with inverted 1D barcodes, e.g. for lasered metal surfaces.
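For an 8-bit grey-scale image, the inversion simply mirrors each grey value (illustrative sketch):

```python
def invert(pixels):
    """Invert 8-bit grey values: black (0) becomes white (255) and vice versa."""
    return [255 - v for v in pixels]

print(invert([0, 128, 255]))  # [255, 127, 0]
```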

10.1.14 Image quality check


The checkbox [Enabled] activates the image quality check. The image quality check checks whether
the measured values received from the camera are within the permitted value range. The following
measured values are recorded:
• Sharpness
• Mean brightness
• Underexposed area (all pixels with grey value < 10)
• Overexposed area (all pixels with grey value > 245)
If a measured value is outside the permitted value range, this can be signalled via the process
interface.

Fig. 36: Image quality check


1 Permitted value range
2 Min/max value
3 Current measured value
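Three of the four measured values can be reproduced directly from the grey values using the thresholds stated above (illustrative sketch; the device's sharpness metric is not specified in this manual and is therefore omitted):

```python
def quality_metrics(pixels):
    """Mean brightness plus under-/overexposed fractions of an 8-bit
    grey image, using the thresholds from the manual:
    grey value < 10 counts as underexposed, > 245 as overexposed."""
    n = len(pixels)
    return {
        "mean_brightness": sum(pixels) / n,
        "underexposed": sum(1 for v in pixels if v < 10) / n,
        "overexposed": sum(1 for v in pixels if v > 245) / n,
    }

m = quality_metrics([5, 100, 120, 250])
print(m["mean_brightness"], m["underexposed"], m["overexposed"])  # 118.75 0.25 0.25
```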

Permitted value range


The permitted value range is indicated by the blue square brackets. The brackets can be moved within
the graphic using the mouse. If the measured value (green dot) is outside the permitted value range
(blue brackets), this is indicated by an exclamation mark. In addition, this can be signalled via the
process interface. For this, the process interface must be configured accordingly via the function
“Interface”. (Ò Interfaces / 146)

Min/max value
The min/max values are indicated by the orange dashed lines. The lines mark the min/max values of
the taught measured values. The lines overlap if the measured value has not yet changed.
The button [Reset statistics] discards the taught min/max values.
The button [Teach] adopts the current min/max values (orange dashed lines) as the default for the
permitted value range (blue square brackets).

Current measured value


The current measured value is indicated by the green dot. The min/max values are indicated by the
orange dashed lines.

Operating elements
The area [Image quality check] contains the following operating elements:

Operating element Type Description


[Activated] Checkbox Activates the image quality check.
[Reset statistics] Button Resets the min/max values.
[Sharpness] Blue square brackets Sets the permitted value range (blue
brackets). The brackets can be moved
within the graphic.



[Teach] Button Adopts the min/max values (orange
dashed lines) as the default for the per-
mitted value range (blue square brack-
ets).
[Mean brightness] Blue square brackets Sets the permitted value range. The blue
brackets can be moved within the graph-
ic.
[Teach] Button Adopts the min/max values (orange
dashed lines) as the default for the per-
mitted value range (blue square brack-
ets).
[Underexposed area] Blue square brackets Sets the permitted value range. The blue
brackets can be moved within the graph-
ic.
[Teach] Button Adopts the min/max values (orange
dashed lines) as the default for the per-
mitted value range (blue square brack-
ets).
[Overexposed area] Blue square brackets Sets the permitted value range. The blue
brackets can be moved within the graph-
ic.
[Teach] Button Adopts the min/max values (orange
dashed lines) as the default for the per-
mitted value range (blue square brack-
ets).

10.2 Models
The [Models] function sets the codes, BLOBs and contours to be detected. A model is the taught-in
“good” condition of one or more codes, BLOBs and contours. Up to 10 models can be added.
The device captures the image of a test part and compares it with the taught-in codes, BLOBs and
contours of the added models. Depending on the degree of agreement, a model is considered to have
been found.


Fig. 37: [Models] function

The [Models] function contains the following operating elements:

Operating element Name Description


Create new model Adds a new model. (Ò Add new mod-
el / 66)
Delete all models Deletes all models. The models are ir-
revocably deleted.

“Model name” area:

Operating element Name Description


[Type] Type Indicates the code type of the model.
The code type is set when the model is
created.
Copy model Copies the model and changes to the
new model.
Rename model Renames the model.

Delete model Deletes the model.

Return to the default values Resets all settings to the default values.

[Image assignment] Image assignment Assigns the model to the selected imag-
[l1 New image 1] es.

10.2.1 Add new model


The [Add new model] button adds a new model. A model has to be added to detect codes. The
settings for detection of codes, BLOBs and contours are stored in the model.


Up to 32 applications can be added. An application contains up to 10 models. (Ò Models / 65)

After the [Add new model] button has been clicked, the following buttons are displayed:

Model type Description


[Automatic code search] Starts the [Create model] wizard. The wizard detects 1D and
2D code families and creates a matching model.
The wizard does not detect the following code families:
- PharmaCode
- MSI
[Bar code 1D] Creates a model for recognising a 1D barcode. (Ò Bar code
1D / 67)
[Data code 2D] Creates a model for recognising a 2D data code. (Ò Data
code 2D / 77)
[OCR] Creates a model for recognising text content. (Ò OCR / 88)
OCR stands for Optical Character Recognition and refers to
the recognition of text in captured images. The text can then
be processed further.
The text recognition can be adjusted in many ways. This in-
creases the processing speed and reduces possible sources
of error.
[BLOB analysis] Creates a model that recognises objects via BLOB analysis.
(Ò BLOB analysis / 95)
The number, arrangement and brightness of the pixels of an
object are analysed. [BLOB analysis] is particularly suitable
for applications in which the objects vary in shape, size or col-
our value.
[Contour detection] Creates a model that recognises objects via contour detec-
tion. (Ò Contour detection / 104)
The contours of an object are analysed. [Contour detection] is
particularly suitable for applications where the shape of the
objects is repeated.
[Contour anchor] Creates a model to recognise a unique contour. The contour
will be recognised regardless of its position in the image. (Ò
Contour anchor / 112)
[Bar code 1D anchor] Creates a model for recognising a 1D barcode. The 1D bar-
code is recognised regardless of its position in the image. (Ò
1D barcode anchor tracking / 115)
[Data code 2D anchor] Creates a model for recognising a 2D data code. The 2D data
code will be recognised regardless of its position in the image.
(Ò 2D data code anchor tracking / 115)

10.2.2 Bar code 1D


The [1D Barcode] model contains settings for recognising a 1D barcode.
The model contains the following operating elements:

Operating element Type Description


[Activate anchor tracking] Checkbox Activates anchor tracking. The 1D bar-
code is recognised regardless of its po-
sition in the image.
[Code family] List Sets the code family. (Ò Code fami-
ly / 69)
[Encoding] List Sets the character encoding. (Ò Encod-
ing / 69)
[Number of codes per ROI group] Input box Sets the number of codes per ROI
group. (Ò Number of codes per ROI
group / 70)



[Timeout] Checkbox Sets a timeout terminating the search for
codes when the time has elapsed. (Ò
Timeout / 70)
[Measure ISO quality] Checkbox Sets the quality classification according
to ISO. (Ò Measure ISO quality / 70)
[Check char] List Evaluates the check character of a bar-
code. (Ò Check char / 71)

[Optimisation] area:

Operating element Type Description


[Minimum contrast] Input box Sets the minimum contrast between
foreground and background. (Ò Mini-
mum contrast / 72)
[Min code length] Input box Sets the minimum code length. (Ò Min
code length / 72)
[Quiet zone] List Sets the verification of the quiet zones of
a code. (Ò Quiet zone / 72)
[Use bar orientation] Checkbox Improves the code contours. (Ò Use bar
orientation / 73)
[Symbology identifier] List Displays FNC1 and ECI characters in
codes. (Ò Symbology identifier / 73)

[Advanced] area:

Operating element Type Description


[Orientation] Input box Sets the orientation of the code. (Ò Ori-
entation / 74)
[Orientation tolerance] Input box Sets a zone for the orientation in which
the code is detected. (Ò Orientation tol-
erance / 74)
[Number of scanlines] Input box Sets the number of scanlines for the de-
tection of a code. (Ò Number of scan-
lines / 74)
[Majority voting] Checkbox Sets the majority voting for the scanlines
of single-line barcodes. (Ò Majority vot-
ing / 75)
[Merge scanlines] Checkbox Sets merging of scanlines of single-line
barcodes. (Ò Merge scanlines / 75)
[Minimum identical scanlines] Input box Sets the minimum number of successful-
ly read scanlines. (Ò Minimum identical
scanlines / 75)
[Start/Stop tolerance] List Sets the tolerance for start and stop pat-
terns. (Ò Start/Stop tolerance / 76)
[Element size variable] Checkbox Activates the compensation of the small-
est element sizes of barcodes. (Ò Ele-
ment size variable / 76)

[Size restriction] area:

Operating element Type Description


[Barcode height min] List Sets the minimum barcode height. (Ò
Barcode height min / 76)
[Barcode width min] List Sets the minimum barcode width. (Ò
Barcode width min / 76)

[ROI size check] area:



[Activated] Checkbox Warns if the code moves further out of
the ROI with each trigger. (Ò ROI size
check / 76)

10.2.2.1 Code family


The list [Code family] sets the code family. The following code families are available for a 1D barcode:

Code families for 1D barcodes


2/5 Industrial
2/5 Interleaved
Codabar
Code 128
Code 39
Code 93
EAN-13 Add-On 2
EAN-13 Add-On 5
EAN-13
EAN-8 Add-On 2
EAN-8 Add-On 5
EAN-8
GS1 DataBar Expanded Stacked
GS1 DataBar Expanded
GS1 DataBar Limited
GS1 DataBar Omnidir
GS1 DataBar Stacked Omnidir
GS1 DataBar Stacked
GS1 DataBar Truncated
GS1-128
MSI
PharmaCode
UPC-A Add-On 2
UPC-A Add-On 5
UPC-A
UPC-E Add-On 2
UPC-E Add-On 5
UPC-E

10.2.2.2 Encoding
The list [Encoding] sets the character encoding of the code contents.
The list [Encoding] contains the following trigger modes:

Setting Description
[Latin-1 / ASCII] Decodes the characters according to ISO 8859-1.
[UTF-8] Decodes the characters according to UTF-8.
[UTF-16] Decodes the characters according to UTF-16.


10.2.2.3 Number of codes per ROI group


The input field [Number of codes per ROI] sets the number of codes to be detected for an ROI group.
Creating a region of interest (ROI) (Ò / 23)

A search is made for the set number of codes. If more or fewer codes are found, the model is
considered “failed” in the overall statistics. (Ò Monitor / 21)

Example
[Number of codes per ROI] = 2
• If 1 ROI exists: 2 codes are searched in this one ROI.
• If 2 ungrouped ROIs exist (each ROI counts as ROI group): 2 codes are searched in each ROI.
Altogether, 4 codes are searched.
• If 2 grouped ROIs exist (1 ROI group): 2 codes are searched in this one ROI group. Altogether, 2
codes are searched. The codes may
– both be contained in the first ROI,
– both be contained in the second ROI,
– 1 code each contained in each ROI.
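The total number of codes searched therefore scales with the number of ROI groups (illustrative sketch of the counting rule above):

```python
def codes_searched(codes_per_roi_group, roi_groups):
    """Total number of codes searched: the set number applies per
    ROI group, and each ungrouped ROI counts as its own group."""
    return codes_per_roi_group * roi_groups

# the example above with 2 codes per ROI group
print(codes_searched(2, 1))  # 2 -- one ROI, or two ROIs grouped into one group
print(codes_searched(2, 2))  # 4 -- two ungrouped ROIs
```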

A large number of codes to be detected increases the evaluation time of the device.

10.2.2.4 Timeout
The checkbox [Timeout] sets a timeout terminating the search for codes when the time has elapsed.
For example, a maximum decoding time can be ensured with the timeout.

10.2.2.5 Measure ISO quality


The checkbox [Measure ISO quality] evaluates the code quality of 1D code types following ISO/IEC
15416. The evaluation of the code quality is available for the following 1D code types:
• 2/5 Interleaved, 2/5 Industrial
• Code 39, Code 93, Code 128
• EAN8, EAN8 Add-On 2, EAN8 Add-On 5
• EAN13, EAN13 Add-On 2, EAN13 Add-On 5
• UPC-A, UPC-A Add-On 2, UPC-A Add-On 5
• UPC-E, UPC-E Add-On 2, UPC-E Add-On 5
• GS1 Databar
• GS1-128
• MSI Barcode
• Codabar
• Pharmacode
Set the ISO quality parameters:
u Activate the checkbox [Measure ISO quality].
w The selection field [User-defined quality grade] is displayed. The contained quality parameters are
deselected by clicking [x] and added by clicking [+].
The following quality parameters are available:

Quality parameter Description


[Additional Requirements] Additional requirements



[Decodability] Decodability
[Decode] Decode
[Defects] Defects
[Minimal Edge Contrast] Minimum edge contrast
[Minimal Reflectance] Minimum reflectance
[Modulation] Modulation
[Symbol Contrast] Symbol contrast

In addition, the following quality parameters are available for GS1 Databar codes:

Quality parameter [Composite …] Description


[Codeword Yield] Codeword yield
[Decodability] Decodability
[Decode] Decode
[Defects] Defects
[Modulation] Modulation
[Unused Error Correction] Unused error correction
[Composite RAP …] Description
[Contrast] Contrast
[Decodability] Decodability
[Defects] Defects
[Minimal Edge Contrast] Minimum edge contrast
[Minimal Reflectance] Minimum reflectance
[Modulation] Modulation
[Overall] Overall quality

The function increases the evaluation time of the device.

10.2.2.6 Check char


The list [Check char] evaluates the check character of a barcode. The check character is used to
check whether the user data of the barcode has been read correctly.
Some code types contain a check character by default. For these code types, the content of the check
character cannot be read.
For code types such as Code 39 , Codabar , 2/5 Industrial and 2/5 Interleaved , the list
[Check char] is used to set how the check character is handled.
The list [Check char] contains the following settings:

Setting Description
[Absent] The barcode does not contain a check character. The com-
plete content of the barcode is interpreted as user data.
Preset: [Absent].
[Present] The barcode contains a check character. The correctness of
the user data is checked using the check character. If the
checksum of the user data does not correspond to the check
character, the barcode is classified as unreadable and is not
provided as a result.
The content of the check character is not provided.

[Preserved] The barcode contains a check character. The correctness of
the user data is checked using the check character. If the
checksum of the user data does not correspond to the check
character, the barcode is classified as unreadable and is not
provided as a result.
The content of the check character is provided.
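As an illustration of how a check character validates user data, the following sketch uses the well-known mod-10 check digit of EAN-13 (where the check digit is mandatory, unlike the optional check characters of Code 39 or Codabar, which use different algorithms) and mimics the three [Check char] modes:

```python
def ean13_check_ok(code):
    """Validate the mod-10 check digit of a 13-digit EAN string."""
    digits = [int(c) for c in code]
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - total % 10) % 10 == digits[12]

def decode(code, mode="Present"):
    """Mimic the [Check char] modes for a code whose last digit is the
    check character: [Present] strips it, [Preserved] keeps it, and an
    invalid checksum classifies the code as unreadable (no result)."""
    if mode == "Absent":
        return code                 # complete content is user data
    if not ean13_check_ok(code):
        return None                 # unreadable, not provided as a result
    return code if mode == "Preserved" else code[:-1]

print(decode("4006381333931"))               # 400638133393
print(decode("4006381333931", "Preserved"))  # 4006381333931
print(decode("4006381333932"))               # None (checksum mismatch)
```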

10.2.2.7 Minimum contrast


The input field [Contrast min] sets the minimum contrast between foreground and background. If a
code has a lower contrast, it is not detected.
Preset: 0 .

10.2.2.8 Min code length


The input field [Min. code length] sets the minimum code length. If a code is of shorter code length, it
is not detected.
Preset: 0 .

For the code types 2/5 Industrial and 2/5 Interleaved , the value 3 is preset.
Code lengths <3 lead to reading errors with these code types: The code types are erroneously
detected in texts and patterns.

10.2.2.9 Quiet zone


The list [Quiet zone] sets the verification of the quiet zones of a code. The quiet zone is at least as
wide as the narrowest bar of the barcode.
The list [Quiet zone] contains the following settings:

Setting Description
[False] Detects the codes when the quiet zones do not meet the
specified minimum widths.
The setting can lead to the detection of small codes within a
large code.
[True] Detects the codes when the quiet zones do meet the speci-
fied minimum widths. The following table contains the speci-
fied minimum widths as a multiple of a module width.
[Tolerant] The codes are detected if a limited number of edges occur in
the quiet zones. A maximum of 1 edge per 4 module widths is
allowed.
[Custom] A factor is entered in the input field. The factor defines the
minimum width of the quiet zones.
With factor 1 , codes are detected if the quiet zone is at least
1 x the width of the narrowest bar of the barcode. The follow-
ing table contains the specified minimum widths as a multiple
of a module width.
Factor 2 requires double the minimum widths.

The following quiet zones apply to 1D barcodes:

Code families for 1D barcodes Minimum width left quiet zone Minimum width right quiet zone
2/5 Industrial 10 10
2/5 Interleaved 10 10
Codabar 10 10
Code 128 10 10
Code 39 10 10

Code 93 10 10
EAN-13 Add-On 2 7 5
EAN-13 Add-On 5 7 5
EAN-13 11 7
EAN-8 Add-On 2 7 5
EAN-8 Add-On 5 7 5
EAN-8 7 7
GS1 DataBar Expanded Stacked 1 1
GS1 DataBar Expanded 1 1
GS1 DataBar Limited 1 1
GS1 DataBar Omnidir 1 1
GS1 DataBar Stacked Omnidir 1 1
GS1 DataBar Stacked 1 1
GS1 DataBar Truncated 1 1
GS1-128 10 10
MSI 10 10
PharmaCode 5 5
UPC-A Add-On 2 9 5
UPC-A Add-On 5 9 5
UPC-A 9 9
UPC-E Add-On 2 9 5
UPC-E Add-On 5 9 5
UPC-E 9 7
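The tabulated multiples translate into pixel widths as follows (illustrative sketch with a subset of the table; the factor corresponds to the [Custom] setting):

```python
# minimum quiet-zone widths in module widths, taken from the table above
QUIET_ZONES = {"Code 128": (10, 10), "EAN-13": (11, 7), "UPC-E": (9, 7)}

def quiet_zone_px(code_family, module_width_px, factor=1.0):
    """Required left/right quiet zone in pixels. factor follows the
    [Custom] setting; factor 2 doubles the minimum widths."""
    left, right = QUIET_ZONES[code_family]
    return left * module_width_px * factor, right * module_width_px * factor

print(quiet_zone_px("EAN-13", 3))         # (33.0, 21.0)
print(quiet_zone_px("Code 128", 3, 2.0))  # (60.0, 60.0)
```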

10.2.2.10 Use bar orientation


The checkbox [Use bar orientation] improves code contours. This allows for more stable results when
rotating the code, which is especially important for position tracking.

10.2.2.11 Symbology identifier


The list [Symbology identifier] shows whether the code contains an FNC1 and ECI character.
The list [Symbology identifier] contains the following settings:

Setting Description
[Off] Deactivates the output of the symbology identifier.
[Only mandatory] Activates the output of required symbology identifiers.

FNC1 character
The FNC1 character (Function 1 Character) indicates that the data in the symbol is formatted
according to certain industry and application standards.

ECI mark
The ECI (Extended Channel Interpretation) protocol indicates that the data is formatted with a 6-digit
code according to specific code tables. This can be an international character set, for example. In the
output stream, the data is encoded as \nnnnnn . If the symbol contains one or more ECI codes, all
backslashes in the normal data stream \ (ASCII code 92) are doubled.


Only the actual value according to the specifications for Code 128 and GS1-128 is returned as
symbology identifier.
If necessary, the symbology identifier, which is composed of the prefix and the value "m", must be
manually prefixed to the decoded string (usually only if m > 1).

Code Prefix
Code 128 ]C0
GS1-128 ]C1
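Manually prefixing the decoded string can look like this (illustrative sketch using the prefixes from the table above):

```python
# symbology identifier prefixes from the table above
PREFIXES = {"Code 128": "]C0", "GS1-128": "]C1"}

def with_symbology_identifier(code_type, decoded):
    """Manually prepend the symbology identifier to a decoded string."""
    return PREFIXES[code_type] + decoded

print(with_symbology_identifier("Code 128", "ABC123"))  # ]C0ABC123
```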

10.2.2.12 Orientation
The input field [Orientation] sets the orientation of the code.

Fig. 38: Orientation of the code

A code is detected if the average orientation of its bars corresponds to the value [Orientation].
Preset: 0° .
The value range is -90...90° .

For PharmaCode , the value range is: -180...180° .

The input field [Orientation tolerance] extends the individual value [Orientation] to a range.
Orientation tolerance (Ò / 74)

10.2.2.13 Orientation tolerance


The input field [Orientation tolerance] sets a tolerance for the function [Orientation]. The code is
detected if the average orientation of its bars is within [Orientation] ± [Orientation tolerance].
Preset: 90° .
The value range is 0...90° .

The function increases the evaluation time of the device.

With the maximum value 90° , all codes are detected, independently of the orientation.
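The acceptance test amounts to an angular distance check (illustrative sketch; the bar orientation is assumed to be given in the value range described above):

```python
def orientation_ok(bar_orientation_deg, orientation_deg, tolerance_deg):
    """True if the average bar orientation lies within
    [Orientation] +/- [Orientation tolerance], with wrap-around."""
    diff = abs(bar_orientation_deg - orientation_deg) % 360
    diff = min(diff, 360 - diff)
    return diff <= tolerance_deg

print(orientation_ok(25, 0, 30))   # True  -- within tolerance
print(orientation_ok(85, 0, 30))   # False -- outside tolerance
print(orientation_ok(-90, 0, 90))  # True  -- maximum tolerance detects all
```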

10.2.2.14 Number of scanlines


The input field [Num scanlines] sets the number of scanlines for detecting a code. The function reacts
differently for single-row and stacked barcodes.
With a single-row barcode, scanning stops as soon as a barcode has been detected. A single-row
barcode of poor quality requires more scanlines than a code of good quality. For a code of average
quality, a value of 2...5 is sufficient. The value must be increased when the code is no longer
detected.
With a stacked barcode, all scanlines are evaluated. A stacked barcode is composed of several rows
(maximum 11 rows). The number of scanlines can be reduced if a low number of rows is expected.


With the preset value 0 , a certain number of scanlines is used depending on the code type:

Code type Number of scanlines


Single-row barcode: Code 128, EAN 13, GS1 DataBar Limited etc. 10
Stacked barcode: GS1 DataBar Stacked (Omnidirectional) 20
Stacked barcode: GS1 DataBar Expanded Stacked 55

If an image contains many incorrect code candidates, the evaluation time of the device is
reduced with a small value of [Num scanlines].

10.2.2.15 Majority voting


The checkbox [Majority voting] sets the majority voting for scanning single-row barcodes. The function
reduces the number of incorrectly detected codes. If the function is activated, the result detected by
the majority of all scanlines is used as overall result.

The function increases the evaluation time of the device.
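Majority voting over the scanline results can be sketched as follows (illustrative, not device code):

```python
from collections import Counter

def majority_result(scanline_results):
    """Overall result under [Majority voting]: the content read by the
    majority of all scanlines wins (failed reads count as None)."""
    winner, votes = Counter(scanline_results).most_common(1)[0]
    return winner

scans = ["12345", "12345", "12845", "12345", None]  # one misread, one failure
print(majority_result(scans))  # 12345
```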

10.2.2.16 Merge scanlines


The checkbox [Merge scanlines] sets merging of scanlines in single-row barcodes. If no barcode was
detected, scanlines are merged. A barcode is searched again in the merged scanlines which
increases the chance of detection.

The function increases the evaluation time of the device.

10.2.2.17 Minimum identical scanlines


The input field [Min identical scanlines] sets the minimum number of successfully detected scanlines.
As soon as the minimum number is reached, the function forwards the result of the scan. The function
reduces the probability of erroneously detecting barcodes.
The following applies with the value 0 :
• The contents of a single-row barcode is already transferred with the first successful scanline.
• The contents of a multi-row barcode is transferred when each row of the barcode has been
scanned successfully.
The following values are preset:

Code type Preset


2/5 Industrial 2
2/5 Interleaved 2
All other code types 0

If [Majority voting] is also activated, the behaviour of [Min identical scanlines] changes.
Majority voting (Ò / 75)

If [Merge scanlines] is also activated, [Min identical scanlines] sets the number of scanlines with
successfully detected edges. Merge scanlines (Ò / 75)


10.2.2.18 Start/Stop tolerance


The list [Start/Stop tolerance] sets the tolerance of the scanline for the start and stop patterns of
codes.
The list [Start/Stop tolerance] contains the following settings:

Setting Description
High Detects a code with a higher probability. Poorly readable
codes can be misrecognised.
Low Detects a code with a lower probability. Poorly readable
codes are rarely misrecognised.

The list [Start/Stop tolerance] is only available for code types Code 128 and GS1-128 .

10.2.2.19 Element size variable


The checkbox [Element size variable] compensates for variations in the smallest element sizes. The
smallest elements of a barcode can vary in size due to perspective distortions or warped surfaces,
making the barcode difficult to read. If the checkbox is activated, such distortions are compensated
where possible.

The checkbox [Element size variable] is only available for the following code types:
GS1 DataBar Expanded Stacked , GS1 DataBar Expanded and GS1 DataBar Limited .

10.2.2.20 Barcode height min


The list [Barcode height min] sets the minimum barcode height. The barcode height is automatically
detected in the default setting. The user-defined setting improves the detection of codes.
The list [Barcode height min] contains the following settings:

Setting Description
[Automatic] Automatically sets the barcode height.
[Custom] Manually sets the barcode height in pixels. The smallest
possible value is 8 pixels.

10.2.2.21 Barcode width min


The list [Barcode width min] sets the minimum barcode width. The barcode width is automatically
detected in the default setting. The user-defined setting improves the detection of codes.
The list [Barcode width min] contains the following settings:

Setting Description
[Automatic] Automatically sets the barcode width.
[Custom] Manually sets the barcode width in pixels.

10.2.2.22 ROI size check


The checkbox [Enabled] activates the function [ROI size check]. The function provides a warning if the
code moves to the edge of the defined ROI.
The function can be used as a predictive maintenance tool. The results of the function are forwarded
to a controller if desired.


Example
Packages, each with a code to be read, move on a belt. If the belt speed or the position of the
packages does not match the trigger rate of the device exactly, the packages and thus the codes will
move out of the ROI. If the code is completely outside the ROI, it will no longer be read. The function
[ROI size check] warns against this.
The function [ROI size check] contains the following input fields:

Input box Description


[Threshold ROI warning] If the distance of at least one code contour to the ROI falls be-
low the set value, a warning is issued.
If a code is found, the warning area is displayed in colour in
the live image.
[Threshold distance to mean position] A mean value is formed from N read codes and stored as the
mean position (Nmax = 100 , where N = number of codes
read). If the distance of a read code is greater than the set
value, a warning is issued.
The mean value is reset if a code is not read successfully.
The last N positions are displayed in the form of a “track” in
the live image.
[Threshold movement score] If a code continues to move in the same direction in several
consecutive images, the probability that it will continue to
move in that direction increases. If the probability exceeds the
set value, a warning is issued.
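The [Threshold distance to mean position] logic described above can be modelled in a few lines (an illustrative sketch assuming 2D pixel coordinates; the device's internal implementation is not documented):

```python
class MeanPositionTracker:
    """Tracks the mean position of the last N successfully read codes
    (N capped at 100) and warns if a new read deviates too far."""
    N_MAX = 100

    def __init__(self, threshold):
        self.threshold = threshold
        self.positions = []

    def on_read_failed(self):
        # An unsuccessful read resets the stored mean position.
        self.positions.clear()

    def on_read(self, x, y):
        """Register a successful read; return True if a warning is issued."""
        warn = False
        if self.positions:
            mx = sum(p[0] for p in self.positions) / len(self.positions)
            my = sum(p[1] for p in self.positions) / len(self.positions)
            dist = ((x - mx) ** 2 + (y - my) ** 2) ** 0.5
            warn = dist > self.threshold
        self.positions.append((x, y))
        del self.positions[:-self.N_MAX]  # keep only the last N positions
        return warn
```

The stored positions correspond to the "track" shown in the live image; a reset after a failed read mirrors the behaviour stated in the table.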

10.2.3 Data code 2D


The [Data code 2D] model contains settings for recognising a 2D barcode.
The model contains the following operating elements:

Operating element Type Description


[Activate anchor tracking] Checkbox Activates anchor tracking. The 2D data
code will be recognised regardless of its
position in the image.
[Code family] List Sets the code family. (Ò Code fami-
ly / 78)
[Presets] List Prepares the model to detect certain
codes. (Ò Presets / 79)
[Encoding] List Sets the character encoding. (Ò Encod-
ing / 79)
[Number of codes per ROI group] Input box Sets the number of codes per ROI
group. (Ò Number of codes per ROI
group / 79)
[Timeout] Checkbox Sets a timeout terminating the search for
codes when the time has elapsed. (Ò
Timeout / 80)
[Quality grading] List Sets the quality grading. (Ò Quality
grading / 80)

[Optimisation] area:

Operating element Type Description


[Polarity] List Sets the polarity of the code to be de-
tected. (Ò Polarity / 83)
[Strict quiet zone] Checkbox Sets the behaviour in the event of a fault
in the quiet zone of a code. (Ò Strict qui-
et zone / 83)
[Symbology identifier] List Displays FNC1 and ECI characters in
codes. (Ò Symbology identifier / 83)

[Advanced] area:


Operating element Type Description


[Contrast tolerance] List Sets the tolerance to contrast differenc-
es for finding codes. (Ò Contrast toler-
ance / 84)
[Finder pattern tolerance] List Sets the behaviour in the event of errors
in the finder pattern of the code. (Ò
Finder pattern tolerance / 85)
[Module grid] List Sets the calculation of the centre of a
module. (Ò Module grid / 85)
[Max slant] Input box Sets the maximum slant of the L-shaped
finder pattern relative to a right angle.
(Ò Max slant / 85)
[Mirrored] List Sets the detection of mirrored codes. (Ò
Mirrored / 86)

[Size restriction] area:

Operating element Type Description


[Symbol shape] List Restricts the form of the codes to be de-
tected. (Ò Symbol shape / 86)
[Symbol columns min] Input box Sets the minimum number of columns
for the detection of a code. (Ò Symbol
columns min / 86)
[Symbol columns max] Input box Sets the maximum number of columns
for the detection of a code. (Ò Symbol
columns max / 86)
[Symbol rows min] Input box Sets the minimum number of rows for
the detection of a code. (Ò Symbol rows
min / 86)
[Symbol rows max] Input box Sets the maximum number of rows for
the detection of a code. (Ò Symbol rows
max / 87)
[Symbol size min.] Input box Sets the minimum number of symbol el-
ements on the X/Y axis for the detection
of a code. (Ò Symbol size min. / 87)
[Symbol size max.] Input box Sets the maximum number of symbol el-
ements on the X/Y axis for the detection
of a code. (Ò Symbol size max. / 87)

[ROI size check] area:

Operating element Type Description


[Activated] Checkbox Warns if the code moves further out of
the ROI with each trigger. (Ò ROI size
check / 87)

10.2.3.1 Code family


The [Code family] list sets the code family. The following code families are available for a 2D data
code:

Code families for 2D data codes


Data Matrix ECC 200
QR Code
Micro QR Code
PDF417
Aztec Code
DotCode
GS1 DataMatrix



GS1 QR Code
GS1 Aztec Code
GS1 DotCode

10.2.3.2 Presets
The list [Presets] prepares the model to detect certain codes.
The list [Presets] contains the following settings:

Setting Description
[Standard detection] Sets the fast detection of codes. Codes with a good contrast
of a sufficient size are detected.
[Enhanced detection] Sets the reliable detection of codes. Inverted codes and
codes with difficult contrast and size conditions are detected.
[Maximum detection] Sets the detection of codes with defects or hidden finder pat-
terns. The probability of detecting existing codes increases
compared to [Standard detection] and [Enhanced detection].

10.2.3.3 Encoding
The list [Encoding] sets the character encoding of the code contents.
The list [Encoding] contains the following settings:

Setting Description
[Latin-1 / ASCII] Decodes the characters according to ISO 8859-1.
[UTF-8] Decodes the characters according to UTF-8.
[UTF-16] Decodes the characters according to UTF-16.

10.2.3.4 Number of codes per ROI group


The input field [Number of codes per ROI] sets the number of codes to be detected for an ROI group.
Creating a region of interest (ROI) (Ò / 23)

A search is made for the set number of codes. If more or fewer codes are found, the model is
considered “failed” in the overall statistics. (Ò Monitor / 21)

Example
[Number of codes per ROI] = 2
• If 1 ROI exists: 2 codes are searched in this one ROI.
• If 2 ungrouped ROIs exist (each ROI counts as ROI group): 2 codes are searched in each ROI.
Altogether, 4 codes are searched.
• If 2 grouped ROIs exist (1 ROI group): 2 codes are searched in this one ROI group. Altogether, 2
codes are searched. The codes may
– both be contained in the first ROI,
– both be contained in the second ROI,
– 1 code each contained in each ROI.

A large number of codes to be detected increases the evaluation time of the device.
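The total number of codes searched for follows directly from the number of ROI groups, as this sketch shows (ungrouped ROIs each count as their own group; the helper is illustrative only):

```python
def total_codes_searched(codes_per_group, roi_groups):
    """roi_groups: list where each entry is the number of ROIs in a group.
    The search looks for `codes_per_group` codes in every ROI group,
    regardless of how many ROIs the group contains."""
    return codes_per_group * len(roi_groups)

# Examples from the manual, with [Number of codes per ROI] = 2:
# total_codes_searched(2, [1])    -> 2  (one single ROI)
# total_codes_searched(2, [1, 1]) -> 4  (two ungrouped ROIs)
# total_codes_searched(2, [2])    -> 2  (two ROIs in one group)
```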


10.2.3.5 Timeout
The checkbox [Timeout] sets a timeout terminating the search for codes when the time has elapsed.
For example, a maximum decoding time can be ensured with the timeout.

10.2.3.6 Quality grading


The list [Quality grading] evaluates the code quality of 2D code types.
The list [Quality grading] contains the following settings:

Setting Description
[None] No quality grading set.
[ISO quality] Sets the grading of the quality following ISO/IEC 15415. Then
the overall quality can be set.
[AIM / ISO-TR29158 quality] Sets the grading of the quality following AIM DPM-1-2006.
Then the overall quality can be set.
[SEMI T10 quality] Sets the grading of the quality following SEMI T10.
The quality grading [SEMI T10 quality] is only available for the
code family Data Matrix ECC200 .

Setting the quality grading:


u Select a quality grading from the list [Quality grading].
w The selection field [User-defined quality grade] is displayed. The contained quality parameters are
deselected by clicking [x] and added by clicking [+].

The quality parameters are calculated on the basis of the selected quality grading. The following
is required for the standard-compliant quality assessment of the image capture:
- a defined illumination and measurement geometry,
- an adjustment of the image brightness by means of a calibrated code,
- the definition of a measurement device suitable for the application.

ISO quality
The quality grading [ISO quality] evaluates code quality in 5 levels:

Code quality Description


4 passed, very good, highest quality
3 passed
2 passed
1 passed
0 not passed, lowest quality

The quality grading [ISO quality] contains the following quality parameters:
Quality parameter | Data Matrix ECC 200 | QR Code | Micro QR Code | PDF417 | Aztec Code | GS1 DataMatrix | GS1 QR Code | GS1 Aztec Code | Description
Axial non-uniformity ● ● ● - ● ● ● ● Ratio of the module size in


horizontal and vertical di-
rection.
Contrast ● ● ● - ● ● ● ● Contrast of the modules
relative to the background.
Modulation ● ● ● ● ● ● ● ● Uniformity of the light and
dark modules.

Decoding ● ● ● - ● ● ● ● Rating “4” if the code can be
decoded, otherwise “0”.
Fixed pattern damage ● ● ● - ● ● ● ● Error rate in the 3 basic el-
ements of the code (finder
pattern, alternating pattern
and quiet zone).
Format information - ● ● - - - ● - Contains information on
error correction and the
mask pattern.
Grid non-uniformity ● ● ● - ● ● ● ● Orientation of the modules
relative to the specific
symbol grid.
Print growth ● ● ● - ● ● ● ● Ratio dark/light modules in
the alternating pattern.
Reflectance ● ● ● - ● ● ● ● Assessment of the ampli-
tude between the DataCo-
de modules.
Unused error correction ● ● ● ● ● ● ● ● Error of the code and
share of the available error
correction mechanisms to
successfully decode the
code.
Version information - ● ● - - - ● - Contains information on
the version of the QR
code.
Codeword yield - - - ● - - - - Assessment of the relative
number of correctly decod-
ed words.
Decodability - - - ● - - - - Assessment of the relative
number of correctly decod-
ed words.
Defects - - - ● - - - - Assessment of the bar/gap
representation of the code.
Start/Stop pattern - - - ● - - - - Assessment of the start
and stop patterns.
“●”: Quality grading available
“-”: Quality grading not available

AIM / ISO-TR29158 quality


The quality grading [AIM / ISO-TR29158 quality] contains the following quality parameters:
Quality parameter | Data Matrix ECC 200 | QR Code | Micro QR Code | PDF417 | Aztec Code | GS1 DataMatrix | GS1 QR Code | GS1 Aztec Code | Description

Axial non-uniformity ● ● ● - ● ● ● ● Ratio of the module size in


horizontal and vertical di-
rection.
Cell contrast ● ● ● - ● ● ● ● Contrast of the modules
relative to the background.
Cell modulation ● ● ● - ● ● ● ● Uniformity of the light and
dark modules.
Decoding ● ● ● - ● ● ● ● Rating “4” if the code can be
decoded, otherwise “0”.

Fixed pattern damage ● ● ● - ● ● ● ● Error rate in the 3 basic el-
ements of the code (finder
pattern, alternating pattern
and quiet zone).
Format information - ● ● - - - ● - Contains information on
error correction and the
mask pattern.
Grid non-uniformity ● ● ● - ● ● ● ● Orientation of the modules
relative to the specific
symbol grid.
Print growth ● ● ● - ● ● ● ● Ratio dark/light modules in
the alternating pattern.
Reflectance ● ● ● - ● ● ● ● Assessment of the ampli-
tude between the DataCo-
de modules.
Unused error correction ● ● ● - ● ● ● ● Error of the code and
share of the available error
correction mechanisms to
successfully decode the
code.
Version information - ● ● - - - ● - Contains information on
the version of the QR
code.
“●”: Quality grading available
“-”: Quality grading not available

SEMI T10 quality


The quality grading [SEMI T10 quality] returns the following values:

Value Description
P1 row Pixel coordinates of corner 1.
P1 column Pixel coordinates of corner 1.
P2 row Pixel coordinates of corner 2.
P2 column Pixel coordinates of corner 2.
P3 row Pixel coordinates of corner 3.
P3 column Pixel coordinates of corner 3.
P4 row Pixel coordinates of corner 4.
P4 column Pixel coordinates of corner 4.
Rows Number of rows [modules].
Columns Number of columns [modules].
Symbol contrast Contrast between light and dark modules in % related to 255
grey levels.
Symbol contrast SNR Signal-to-noise ratio of “Symbol contrast”.
Horizontal mark growth Relative width of dark modules, related to the total width of a
light and dark module [%]:
width_dark / (width_dark + width_light) * 100
The optimum value is “50 %” (dark and light modules have the
same width). The value increases as the dark modules become
wider. The value decreases as the light modules become wider.

Vertical mark growth Relative height of dark modules, related to the total height of
a light and dark module [%]:
height_dark / (height_dark + height_light) * 100
The optimum value is “50 %” (dark and light modules have the
same height). The value increases as the dark modules become
higher. The value decreases as the light modules become higher.
DataMatrix cell width Average module width [pixels]
DataMatrix cell height Average module height [pixels]
Horizontal mark misplacement Shifting of the modules of the “Alternating Pattern” (the alter-
nating light-dark pattern at the top and left edge of the code).
Indication in [%] referred to the module width. A value close to
zero is ideal.
Vertical mark misplacement Shifting of the modules of the “Alternating Pattern” (the alter-
nating light-dark pattern at the top and left edge of the code).
Indication in [%] referred to the module height. A value close
to zero is ideal.
Cell defects Incorrectly classified symbol pixels [%] (i.e. light instead of
dark or vice versa). A value close to zero is ideal.
Finder pattern defects Incorrectly classified pixel of the finder pattern [%]. A value
close to zero is ideal.
Unused error correction Percentage value of the unused error correction. One value is
used per Reed Solomon block. The code size determines how
often the value occurs.
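The horizontal and vertical mark growth values in the table follow the same formula; as a sketch (illustrative variable names):

```python
def mark_growth(dark, light):
    """Relative extent of the dark modules in percent:
    dark / (dark + light) * 100.
    50 % is optimal (dark and light modules equally sized); the value
    rises when dark modules grow and falls when light modules grow."""
    return dark / (dark + light) * 100.0

# mark_growth(1.0, 1.0) -> 50.0   (optimum)
# mark_growth(3.0, 1.0) -> 75.0   (dark modules too wide)
# mark_growth(1.0, 3.0) -> 25.0   (light modules too wide)
```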

10.2.3.7 Polarity
The list [Polarity] sets the polarity of the code to be detected.
The list [Polarity] contains the following settings:

Setting Description
[Dark on light] Detects dark codes on a light background.
[Light on dark] Detects light codes on a dark background.
[Any] Detects dark codes on a light background and light codes on
a dark background.

10.2.3.8 Strict quiet zone


The checkbox [Strict quiet zone] sets the behaviour in the event of a fault in the quiet zone of a
code. The quiet zone is an empty zone framing the code. The quiet zone separates the code from
other objects.
If the checkbox [Strict quiet zone] is activated, codes with a damaged quiet zone are not detected. In
addition, the erroneous detection of Micro QR Codes within text or QR codes is reduced.

The code type Aztec Code does not have a quiet zone.

10.2.3.9 Symbology identifier


The list [Symbology identifier] shows whether the code contains an FNC1 and ECI character.
The list [Symbology identifier] contains the following settings:

Setting Description
[Off] Deactivates the output of the symbology identifier.
[Only mandatory] Activates the output of required symbology identifiers.


FNC1 character
The FNC1 character (Function 1 Character) indicates that the data in the symbol is formatted
according to certain industry and application standards.

ECI mark
The ECI (Extended Channel Interpretation) protocol indicates that the data is formatted with a 6-digit
code according to specific code tables. This can be an international character set, for example. In the
output stream, the data is encoded as \nnnnnn . If the symbol contains one or more ECI codes, all
backslashes in the normal data stream \ (ASCII code 92 ) are doubled.
Only the actual value according to the specifications for Data Matrix, QR Code, Aztec Code,
PDF417 and DotCode is returned as symbology identifier:

(
m∈[0,6] (QR Code),
m∈[0,12] (Data Matrix ECC 200 and Aztec Code),
m∈[0,2] (PDF417) or
m∈[0,5] (DotCode)
)

If necessary, the symbology identifier, which is composed of the prefix and the value m , must be
manually prefixed to the decoded string (usually only if m>1 ).

Code Prefix
Data Matrix ECC 200 ]d
QR Code ]Q
Aztec Code ]z
PDF417 ]L
DotCode ]J
GS1 DataMatrix ]d2
GS1 QR Code ]Q3
GS1 Aztec Code ]z1
GS1 DotCode ]J1

DotCodes are supported from firmware version 1.22.
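Prefixing the decoded string with the symbology identifier from the table can be sketched like this (an illustrative helper, not a device function; note that the GS1 prefixes already contain their modifier digit):

```python
# Prefixes taken from the table above.
PREFIXES = {
    "Data Matrix ECC 200": "]d",
    "QR Code": "]Q",
    "Aztec Code": "]z",
    "PDF417": "]L",
    "DotCode": "]J",
    "GS1 DataMatrix": "]d2",
    "GS1 QR Code": "]Q3",
    "GS1 Aztec Code": "]z1",
    "GS1 DotCode": "]J1",
}

def symbology_identifier(code_family, m=None):
    """Build the symbology identifier from prefix and modifier value m.
    For the GS1 families the modifier digit is already part of the
    prefix; otherwise m is appended."""
    prefix = PREFIXES[code_family]
    if m is not None and not code_family.startswith("GS1"):
        return f"{prefix}{m}"
    return prefix

# symbology_identifier("QR Code", 4)          -> "]Q4"
# symbology_identifier("GS1 DataMatrix")      -> "]d2"
```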

10.2.3.10 Contrast tolerance


The list [Contrast tolerance] sets the tolerance to contrast differences for code finding. Contrast
differences are, for example, glare or reflections.

The list [Contrast tolerance] is only available for the code family Data Matrix ECC200 .

The list [Contrast tolerance] contains the following settings:

Setting Description
[Low] Sets the tolerance of code finding with contrast differences to
low.
Under normal conditions, contrast differences are detected.
This setting is suited for most applications. The setting has al-
most no influence on the evaluation time.
[High] Sets the tolerance of code finding with contrast differences to
high.
The setting has a significant influence on the evaluation time.

[Any] Sets the tolerance of code finding with contrast differences
based on an algorithm.

10.2.3.11 Finder pattern tolerance


The list [Finder pattern tolerance] sets the behaviour in the event of errors in the finder pattern of the
code. The limiting lines framing the DataMatrix code are called finder pattern.

The [Finder pattern tolerance] is only available for the code family Data Matrix ECC200 .

The list [Finder pattern tolerance] contains the following settings:

Setting Description
[Low] The code is detected if the finder pattern is largely intact
and there is almost no noise.
[High] The code is detected if the finder pattern is defective or partly
hidden. Only codes with a module grid of the same size are
detected. (Ò Module grid / 85)
Perspective distortions reduce the detection rate.
[Any] The code is detected if
• the finder pattern is largely intact and there is almost no
noise or
• the finder pattern is defective or partly hidden.

10.2.3.12 Module grid


The list [Module grid] sets the calculation of the centre of a module. A DataMatrix code consists of
several modules.

The [Module grid] is preset to [Fixed] and cannot be changed if the [Finder pattern tolerance] is
set to [High].

The [Module grid] is only available for the code family Data Matrix ECC200 .

The list [Module grid] contains the following settings:

Setting Description
[Fixed] Uses constant spacing for the module grid.
[Variable] Uses the alternating pattern opposite the finder pattern (L-pat-
tern) for the module grid.
[Any] Uses the module grids [Fixed] and [Variable] one after the
other.

10.2.3.13 Max slant


The input field [Max slant] sets the maximum slant of the L-shaped finder pattern relative to a right
angle. The angle is indicated in degrees and corresponds to the distortion occurring when the code is
printed or an image is captured.

The [Max slant] is only available for the code family Data Matrix ECC200 .

The value range is 0...30° .


Preset: 30° .


10.2.3.14 Mirrored
The list [Mirrored] sets the detection of mirrored codes. Codes are detected which are mirrored on the
vertical or horizontal axis.
The list [Mirrored] contains the following settings:

Setting Description
[No] Deactivates the detection of mirrored codes.
[Yes] Activates the detection of mirrored codes.
[Any] Activates the detection of mirrored and non-mirrored codes.

10.2.3.15 Symbol shape


The list [Symbol shape] restricts the shape of the code to be detected.

If [Finder pattern tolerance] is set to [Low], the function [Symbol shape] is ignored.
If [Finder pattern tolerance] is set to [High] or [Any], the setting [Rectangle] or [Square]
considerably reduces the evaluation time.

The [Symbol shape] is only available for the code family Data Matrix ECC200 .

The list [Symbol shape] contains the following settings:

Setting Description
[Rectangle] Activates the detection of rectangular codes.
[Square] Activates the detection of square codes.
With the symbol shape [Square], the number of rows and col-
umns is set with the input fields [Symbol size min] and [Sym-
bol size max].
[Any] Activates the detection of codes regardless of the symbol
shape.

10.2.3.16 Symbol columns min


The input field [Symbol columns min] sets the minimum number of columns for the detection of a
code.
Depending on the [Symbol shape], use the following values for [Symbol columns min]:

Symbol shape Symbol columns min


[Rectangle] >= 18
[Any] >= 10

10.2.3.17 Symbol columns max


The input field [Symbol columns max] sets the maximum number of columns for the detection of a
code.
Depending on the [Symbol shape], use the following values for [Symbol columns max]:

Symbol shape Symbol columns max


[Rectangle] <= 48
[Any] <= 144

10.2.3.18 Symbol rows min


The input field [Symbol rows min] sets the minimum number of rows for the detection of a code.


Depending on the [Symbol shape], use the following values for [Symbol rows min]:

Symbol shape Symbol rows min


[Rectangle] >= 8
[Any] >= 8

10.2.3.19 Symbol rows max


The input field [Symbol rows max] sets the maximum number of rows for the detection of a code.
Depending on the [Symbol shape], use the following values for [Symbol rows max.]:

Symbol shape Symbol rows max


[Rectangle] <= 16
[Any] <= 144

10.2.3.20 Symbol size min.


The input field [Symbol size min] sets the minimum number of symbol elements on the X/Y axis for the
detection of a code.

The input field [Symbol size min.] is only available when the symbol shape [Square] is set.

Use the following values for the symbol shape [Square]:

Symbol shape Symbol size min


[Square] >= 10

10.2.3.21 Symbol size max.


The input field [Symbol size max.] sets the maximum number of symbol elements on the X/Y axis for
the detection of a code.

The input field [Symbol size max] is only available when the symbol shape [Square] is set.

Use the following values for the symbol shape [Square]:

Symbol shape Maximum symbol size


[Square] <= 144
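For reference, the limits from the preceding [Size restriction] tables can be collected into a single validation sketch (a helper of our own, not part of the device software):

```python
# Limits taken from the tables above, per [Symbol shape].
LIMITS = {
    "Rectangle": {"columns": (18, 48), "rows": (8, 16)},
    "Any":       {"columns": (10, 144), "rows": (8, 144)},
    "Square":    {"size": (10, 144)},
}

def settings_valid(symbol_shape, **values):
    """Check user-entered (min, max) pairs against the allowed limits,
    e.g. settings_valid("Rectangle", columns=(20, 40), rows=(8, 12))."""
    for name, (vmin, vmax) in values.items():
        lo, hi = LIMITS[symbol_shape][name]
        if not (lo <= vmin <= vmax <= hi):
            return False
    return True

# settings_valid("Rectangle", columns=(20, 40), rows=(8, 12)) -> True
# settings_valid("Rectangle", columns=(10, 40))               -> False
```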

10.2.3.22 ROI size check


The checkbox [Enabled] activates the function [ROI size check]. The function provides a warning if the
code moves to the edge of the defined ROI.
The function can be used as a predictive maintenance tool. The results of the function are forwarded
to a controller if desired.

Example
Packages, each with a code to be read, move on a belt. If the belt speed or the position of the
packages does not match the trigger rate of the device exactly, the packages and thus the codes will
move out of the ROI. If the code is completely outside the ROI, it will no longer be read. The function
[ROI size check] warns against this.


The function [ROI size check] contains the following input fields:

Input box Description


[Threshold ROI warning] If the distance of at least one code contour to the ROI falls be-
low the set value, a warning is issued.
If a code is found, the warning area is displayed in colour in
the live image.
[Threshold distance to mean position] A mean value is formed from N read codes and stored as the
mean position (Nmax = 100 , where N = number of codes
read). If the distance of a read code is greater than the set
value, a warning is issued.
The mean value is reset if a code is not read successfully.
The last N positions are displayed in the form of a “track” in
the live image.
[Threshold movement score] If a code continues to move in the same direction in several
consecutive images, the probability that it will continue to
move in that direction increases. If the probability exceeds the
set value, a warning is issued.

10.2.4 OCR
The [OCR] model contains settings for recognising text content.
OCR stands for Optical Character Recognition and refers to the recognition of text in captured images.
The recognition proceeds as follows:
1. Separating the text blocks from graphic elements,
2. Recognising the line structures and separating individual characters,
3. Assigning a numerical value to the characters according to text coding.
The text can then be processed further.
The text recognition can be adjusted in many ways. This increases the processing speed and reduces
possible sources of error.
The model contains the following operating elements:

Operating element Type Description


[Activate anchor tracking] Checkbox Activates anchor tracking. (Ò Activate
anchor tracking / 89) The text will be
recognised regardless of its position in
the image.
[Preferred characters] Checkbox Activates recognition of specific charac-
ters. (Ò Preferred characters / 89)
[Special characters] Checkbox Activates recognition of special charac-
ters. (Ò Special characters / 89)
[Dot print text] Checkbox Activates recognition of dot matrix text.
(Ò Special characters / 89)
[Italics] Checkbox Activates recognition of text in italics. (Ò
Special characters / 89)
[Classifier] List Activates a suitable font to reduce the
processing time. (Ò Classifiers / 90)
[Number of lines per ROI] Checkbox / input field Sets the number of rows per ROI group.
(Ò Number of rows per ROI / 90)

[Advanced] area:

Operating element Type Description


[Use character selection] Checkbox Activates recognition of [Preferred char-
acters] and [Special characters]. (Ò Us-
ing character selection / 91)



[Regular expression] Input field Sets the characters to be recognised
with a regular expression. (Ò Regular
expression / 91)
[Relative text orientation] List Sets the text orientation in the image
measured along the horizontal axis of
the image. (Ò Relative text align-
ment / 91)
[Text orientation] Input field Sets a zone for the relative text orienta-
tion in which the text is detected. (Ò
Text orientation / 92)
[Mode] List Sets automatic or manual recognition of
text. (Ò Mode / 93)

[ROI size check] area:

Operating element Type Description


[Activated] Checkbox Warns if the OCR content moves further
out of the ROI with each trigger. (Ò ROI
size check / 94)

10.2.4.1 Activate anchor tracking


The check box [Activate anchor tracking] enables the anchor tracking for the model. Subsequently,
objects are detected regardless of their orientation and position in the image.
The anchor tracking requires a separate anchor model. If no anchor model is available, one can be
created directly.

The anchor model is not used for reading characters and codes. This requires a separate model
that is linked to the anchor model.

The following models can be used as anchor models:


• [Contour anchor tracking]
• [1D barcode anchor tracking]
• [2D data code anchor tracking]

10.2.4.2 Preferred characters


The [Preferred characters] checkboxes activate recognition of certain characters. Text recognition is
optimised for industrial applications. Only the characters listed below can be recognised.
The [Preferred characters] checkboxes contain the following settings:

Setting Description
[Uppercase letters A...Z] Activates recognition of the upper-case letters A to Z.
[Lowercase letters a...z] Activates recognition of the lower-case letters a to z.
[Digits 0...9] Activates recognition of the digits 0 to 9 .
[Currency symbols € £ ¥ $] Activates recognition of the currency symbols € £ ¥ $ .

The setting changes the [Regular expression] input field in the [Advanced] area. Regular expression
(Ò / 91)

10.2.4.3 Special characters


The checkboxes [Special characters] activate the recognition of special characters. The special
characters are activated individually.
In addition to the 17 special characters, the following checkboxes can also be activated:


Setting Description
[Dot print text] Activates the recognition of dot matrix characters.
Dot matrix characters are printed on packaging, for example,
for the expiry and production dates of food.
[Italics] Activates the recognition of characters in italics.

The setting changes the [Regular expression] input field in the [Advanced] area. Regular expression
(Ò / 91)

10.2.4.4 Classifiers
A font for recognising characters is set in the [Classifiers] list. The appropriate font will reduce the
processing time.
The following fonts are available:

Font Description Example


[Document] Sets the [Document] font. The font is
particularly suitable for recognising char-
acters in Arial, Courier and Times New
Roman.
[Industrial] Sets the [Industrial] font. The font is par-
ticularly suitable for recognising charac-
ters in Arial, OCR-B and other sans-serif
fonts.
[OCR-A] Sets the [OCR-A] font. The font is par-
ticularly suitable for recognising charac-
ters in OCR-A.

[OCR-B] Sets the [OCR-B] font. The font is par-


ticularly suitable for recognising charac-
ters in OCR-B.

[Pharma] Sets the [Pharma] font. The font is par-


ticularly suitable for recognising charac-
ters in Arial, OCR-B and other fonts
used in the pharmaceutical industry.
The [Pharma] font does not recognise
lower case letters.

[SEMI] Sets the [SEMI] font. The font is particu-


larly suitable for recognising characters
in [SEMI]. The font is characterised by
the fact that the characters are easily
distinguishable.
[Universal] Sets the [Universal] font. The font is par-
ticularly suitable for recognising charac-
ters in Document, DotPrint, SEMI and
Industrial.

10.2.4.5 Number of rows per ROI


The [Number of rows per ROI] input field sets the number of rows to be detected for an ROI group.

A search is made for the set number of text lines. If more or fewer text lines are found, the
model will be considered “failed” in the overall statistics. (Ò Monitor / 21)


Example

[Number of rows per ROI] = 2

• If 1 ROI exists: 2 text lines are searched for in this one ROI.
• If 2 ungrouped ROIs exist (each ROI counts as an ROI group): 2 text lines are searched for in each
ROI. Altogether, 4 text lines are searched for.
• If 2 grouped ROIs exist (1 ROI group): 2 text lines are searched for in this one ROI group.
Altogether, 2 text lines are searched for. The text lines can
– both be contained in the first ROI,
– both be contained in the second ROI,
– be contained separately, one in each ROI.

A large number of text lines to be detected increases the evaluation time of the device.
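The counting rule from the example above can be sketched in a few lines (a plain illustration, not device code; each ungrouped ROI counts as its own ROI group):

```python
def total_text_lines(lines_per_group: int, roi_groups: int) -> int:
    """Total number of text lines searched for across all ROI groups."""
    return lines_per_group * roi_groups

# [Number of rows per ROI] = 2:
print(total_text_lines(2, 1))  # 1 ROI (one group): 2 lines
print(total_text_lines(2, 2))  # 2 ungrouped ROIs (two groups): 4 lines
print(total_text_lines(2, 1))  # 2 grouped ROIs (one group): 2 lines
```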

10.2.4.6 Using character selection


Enables the recognition of [Preferred characters] and [Special characters]. Which characters are
recognised is shown in the [Regular expression] input field.
When [Using character selection] is disabled:
• the [Regular expression] input field can be edited and
• the settings in [Preferred characters] and [Special characters] are ignored.

10.2.4.7 Regular expression


Sets the characters to be recognised with a regular expression. The regular expression describes a
set of strings with syntactical rules.
When [Using character selection] is disabled:
• the [Regular expression] input field can be edited and
• the settings in [Preferred characters] and [Special characters] are ignored.
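Since a regular expression describes a whole set of strings by syntactical rules, an illustration may help. The following sketch uses standard Python `re` syntax; the device's regular-expression dialect may differ in detail, and the batch-code pattern is a made-up example:

```python
import re

# Hypothetical batch code: exactly 3 upper-case letters, a hyphen,
# then exactly 4 digits. The regular expression describes the whole
# set of strings that follow this rule.
pattern = re.compile(r"[A-Z]{3}-[0-9]{4}")

print(bool(pattern.fullmatch("ABC-1234")))   # True
print(bool(pattern.fullmatch("AB-1234")))    # False: only 2 letters
print(bool(pattern.fullmatch("ABC-12345")))  # False: 5 digits
```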

10.2.4.8 Relative text orientation


The [Relative text orientation] list sets the reading direction of text. The following settings are
available:

Setting Description Live image


[-90] Reads text from bottom to top.


[0] Reads text from left to right.

[90] Reads text from top to bottom.

[180] Reads text upside down from right to left.

The text is recognised when the average orientation of the text is within the range of [Relative text
orientation].
Preset: [0]: Reads text from left to right.

The function increases the evaluation time of the device.

10.2.4.9 Text orientation


The [Text orientation] input field sets a tolerance for the reading direction of text.

Fig. 39: Text orientation (example: 45° tolerance)

A text is recognised if the average orientation of the text is within the range of [Text orientation].


The value range is 0...90°.

Preset: 45°.

The [Relative text orientation] list sets the reading direction of text.

10.2.4.10 Mode
The [Mode] list sets the recognition of text.
The following settings are available:

Setting Description
[auto] Recognises text automatically for the most part.
Automatic detection increases the evaluation time of the de-
vice.
[manual] Recognises text individually depending on the settings select-
ed.
Manual detection reduces the evaluation time of the device.

Mode [auto]
In [auto] mode, the following settings are displayed:

Setting Description
[Min. contrast] Sets the minimum contrast between the letters and the back-
ground.
Preset: 15.
[Min. character height] Sets the minimum height of a letter in pixels. The setting does
not affect punctuation and separators.
[Max. character height] Sets the maximum height of a letter in pixels. The setting
does not affect punctuation and separators.
[Min. character width] Sets the minimum width of a letter in pixels. The setting does
not affect punctuation and separators.
[Max. character width] Sets the maximum width of a letter in pixels. The setting does
not affect punctuation and separators.
[Min. stroke width] Sets the minimum stroke width of a letter in pixels. The setting
does not affect punctuation and separators.
[Max. stroke width] Sets the maximum stroke width of a letter in pixels. The set-
ting does not affect punctuation and separators.
[Separate touching characters] Sets the separation of regions of adjacent characters. Detect-
ed regions are divided into at least 2 individual characters.
Settings:
[standard]: delivers fast results.
[enhanced]: gives more accurate results, a little slower.
[false]: deactivates the separation of touching characters.
Preset: [standard].
[Eliminate blobs at the edge] Removes the regions that touch the edge of the defined im-
age.
[Output of punctuation marks] Assigns small punctuation marks such as full stops or commas to the segmented letters, provided the punctuation marks are close to the baseline of the line of text.
[Output of separators] Assigns separators such as minus or equal signs to the seg-
mented letters.
[Add fragments] Assigns fragments like the dot on an "i" to the segmented let-
ters. Matching the fragments can lead to disturbances of the
segmented letters.
[Dot print] Sets dot printing for the segmented letters.

[Dot print close character spacing] Sets the recognition of small spaces between dot print letters.
The distance between 2 letters is detected if it is smaller than
the distance between 2 points of a letter.
If the exact distance between the letters is known, the setting
[Dot print min. character spacing] provides better results.
When using [Dot print min. character spacing], the setting [Dot
print close character spacing] is ignored.
[Dot print max. dot spacing] Sets the maximum distance between 2 points of a dot print
letter in pixels. The setting can speed up text recognition con-
siderably.
If the distance between 2 dots is greater than the distance between 2 letters, additionally use one of the following settings: [Dot print close character spacing] or [Dot print min. character spacing].
[Dot print min. character spacing] Sets the minimum distance between 2 dot print letters in pix-
els. The setting increases the accuracy of segmented letters
when the distance between 2 letters is smaller than the dis-
tance between 2 dots.
If the distance between the letters is minimal and the mini-
mum distance between 2 dots is unknown, use the setting
[Dot print close character spacing].

Mode [manual]
In [manual] mode, the following settings are displayed:

Setting Description
[Character height] Sets the height of an upper case letter in pixels. Preset: 30.
[Character width] Sets the width of an upper case letter in pixels. Preset: 20.
[Stroke width] Sets the stroke width of a letter in pixels. Preset: 4.
[Base line tolerance] Sets the maximum deviation of a letter from the baseline as a percentage of the letter height. Preset: 0.
[Upper case only] Restricts text recognition to upper case letters and numbers.
[Is dot print] Sets dot printing for the segmented letters.
[Is imprinted] Recognises text with a lot of local changes in polarity due to
reflections.
[Eliminate horizontal lines] Eliminates horizontal lines near the segmented texts.
[Eliminate blobs at the edge] Removes the regions that touch the edge of the defined im-
age.
[Output of punctuation marks] Assigns small punctuation marks such as full stops or com-
mas to the segmented letters.
[Output of separators] Assigns separators such as minus or equal signs to the seg-
mented letters.
[Add fragments] Assigns fragments like the dot on an "i" to the segmented let-
ters. Matching the fragments can lead to disturbances of the
segmented letters.
[Min. fragment size] Sets a minimum size for the fragments. The setting is relevant
when [Add fragments] is used.

10.2.4.11 ROI size check


The checkbox [Enabled] activates the function [ROI size check]. The function provides a warning if the
text moves to the edge of the defined ROI.
The function can be used as a predictive maintenance tool. The results of the function are forwarded
to a controller if desired.


Example
Packages, each with a text to be read, move on a belt. If the belt speed or the position of the
packages does not match the trigger rate of the device exactly, the packages and thus the texts will
move out of the ROI. If the text is completely outside the ROI, it will no longer be read. The function
[ROI size check] warns against this.
The function [ROI size check] contains the following input fields:

Input box Description


[Threshold ROI warning] If the distance of at least one text contour to the ROI falls be-
low the set value, a warning is issued.
If a text is found, the warning area is displayed in colour in the
live image.
[Threshold distance to mean position] A mean value is formed from N read texts and stored as the
mean position (Nmax = 100, where N = number of texts read).
If the distance of a read text is greater than the set value, a
warning is issued.
The mean value is reset if a text is not read successfully. The
last N positions are displayed in the form of a “track” in the
live image.
[Threshold movement score] If a text continues to move in the same direction in several
consecutive images, the probability that it will continue to
move in that direction increases. If the probability exceeds the
set value, a warning is issued.

10.2.5 BLOB analysis


The [BLOB analysis] model analyses the number, arrangement and brightness of the pixels of an
object.
First, the greyscale value histogram is used to select which pixels are to be analysed. The selection is
then refined with the object properties:
• Geometry
• Circular
• Rectangular
• Greyscale
• etc.
BLOB analysis is particularly suitable for applications in which the objects vary in shape, size or colour
value.

Operating elements
The model contains the following operating elements:

Operating element Name Description


Copy model Copies the model and changes to the
new model.
Rename model Renames the model.

Delete model Deletes the model irrevocably.

Image assignment
The [Image assignment] area contains the following operating elements:


Operating element Type Description


[l1 New image 1] Checkbox Assigns the selected images to the mod-
el.
A new image is added with [Images & triggers]. (→ Images & triggers / 29)
Up to 5 images can be added. The 5 im-
ages are assigned to the model as “l1”
to “l5”.

Object definition
In the [Object definition] area, an area is set to be searched for within the ROI.
After opening the [Object definition] area, the [Object definition area] is displayed in the live image. (→ Object definition area of a BLOB / 103)
The area contains the following operating elements above the live image:

Operating element Type Description


Button Copies the object definition image from
another model.
Button Uses the last image as the object defini-
tion image.
Button Triggers and updates the object defini-
tion image.
Button Uses the set trigger to update the object
definition image. (→ Trigger mode / 33)
Button Saves the object definition image as a
JPEG.
Button Loads an image and uses it as an object
definition image.
Possible file formats:
*.o2x5xximg
*.o2i5xximg
*.jpg

BLOB definition
In the [BLOB definition] area, you set what is to be recognised as an object.
The area contains the following operating elements:

Operating element Type Description


[Auto segmentation] Button Segments the greyscale levels displayed
in the histogram.
The selected segment is highlighted in
blue in the histogram.
List Displays the recognised segments in a
list.
The selected segment is highlighted in
blue in the histogram.
Slider Adjusts the segment with the vertical or-
ange lines in the histogram.

Slider Adjusts the segment in the histogram.


[Min] Input field Sets the lower limit of the segment
(darkest colour).
Button Changes the mouse pointer to a pipette
for selecting a pixel in the live image.
The greyscale level of the pixel sets the
lower limit of the segment (darkest col-
our).
[Max] Input field Sets the upper limit of the segment
(lightest colour).
Button Changes the mouse pointer to a pipette
for selecting a pixel in the live image.
The greyscale level of the pixel sets the
upper limit of the segment (lightest col-
our).
[Inverted segmentation] Checkbox Inverts the lower and upper limits of the
segment. This makes it easier to detect
particularly bright or dark areas.
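The [Min]/[Max] limits act like a greyscale band: pixels inside the band belong to the segment, and [Inverted segmentation] keeps the pixels outside the band instead. A minimal numpy sketch of this idea (illustrative only, not the device's implementation):

```python
import numpy as np

def segment(image: np.ndarray, lo: int, hi: int, inverted: bool = False) -> np.ndarray:
    """Boolean mask of pixels whose greyscale value lies in [lo, hi]."""
    mask = (image >= lo) & (image <= hi)
    return ~mask if inverted else mask

img = np.array([[10, 120],
                [200, 130]], dtype=np.uint8)
print(segment(img, 100, 150))        # selects 120 and 130
print(segment(img, 100, 150, True))  # selects 10 and 200 instead
```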

Edit BLOBs
In the [Edit BLOBs] area, the allowed area, the handling of holes and the filter are set.
The area contains the following operating elements:

Operating element Type Description


[Include objects] Checkbox Detects objects whose area lies within a
certain range. This is particularly suita-
ble for excluding very small or very large
objects.
[Min] Input field Sets the minimum area of the objects to
be detected.
[Max] Input field Sets the maximum area of the objects to
be detected.
[Fill holes] Checkbox / input field Fills holes inside objects. Holes appear
in uniformly bright objects as bright or
dark pixels. Triggers for this are, for ex-
ample, rough material surfaces or image
distortions.
The size of the holes is entered abso-
lutely in number of pixels.
[Morphology type] List Sets a filter for the object:
[None]: does not use a filter.
[Opening]: removes objects that are
smaller than the [Morphology radius].
For example, narrow connections be-
tween 2 objects are removed.
[Closing]: fills holes within objects that
are smaller than the [Morphology radi-
us].
[Erosion]: reduces objects at their edge
by the [Morphology radius]. This applies
to all objects that are larger than the
[Morphology radius].
[Dilation]: enlarges objects at their edge
by the [Morphology radius]. This fills
holes that are smaller than the [Morphol-
ogy radius].
[Morphology radius] Input field Sets the radius used for [Morphology
type] in pixels.
[Sort objects] List Sets the sorting of the objects:
• [by size]
• [by position]
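The morphology filters above can be pictured as operations on a Boolean object mask. A naive numpy sketch of [Dilation] and [Erosion] with a 4-neighbourhood (illustrative only; the device's actual filters and neighbourhood shape are not specified here):

```python
import numpy as np

def dilate(mask: np.ndarray, radius: int = 1) -> np.ndarray:
    """Grow objects at their edge by `radius` pixels (4-neighbourhood)."""
    out = mask.copy()
    for _ in range(radius):
        up = np.roll(out, -1, axis=0); up[-1, :] = False
        down = np.roll(out, 1, axis=0); down[0, :] = False
        left = np.roll(out, -1, axis=1); left[:, -1] = False
        right = np.roll(out, 1, axis=1); right[:, 0] = False
        out = out | up | down | left | right
    return out

def erode(mask: np.ndarray, radius: int = 1) -> np.ndarray:
    """Shrink objects at their edge; implemented as dilation of the background."""
    return ~dilate(~mask, radius)

blob = np.zeros((5, 5), dtype=bool)
blob[2, 2] = True
print(dilate(blob).sum())         # 5: the pixel plus its 4 neighbours
print(erode(dilate(blob)).sum())  # 1: a [Closing]-like round trip restores the pixel
```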


Model parameters
In the [Model parameters] area, the ROI, the ROD and the allowed number of objects can be set.
The area contains the following operating elements:

Operating element Type Description


[Add regions (ROI)] Button Sets the region of interest (ROI). (→ Creating a region of interest (ROI) / 23)
[Exclusion zone (ROD)] Button Sets the exclusion zone (ROD: region of disinterest). (→ Creating a region of disinterest (ROD) / 24)
[Activate anchor tracking] Checkbox Activates anchor tracking. (→ Activate anchor tracking / 103) Anchor tracking recognises objects despite changed position and rotation.
[Number of objects per ROI] Input field Sets the number of objects per ROI. (→ Number of objects per ROI / 98) This setting influences the pass/fail status of the overall statistics. (→ Test / 166)

Object properties
In the [Object properties] area, recognition of objects is restricted.
The area contains the following operating elements:

Operating element Type Description


Button Adds object properties to the model. (→ Object properties / 99) In the [Object properties] area, recognition of an object can be restricted.
Button Deletes all [Object properties]. The ob-
ject properties will be irrevocably delet-
ed.

ROI size check


In the “ROI size check” area, a warning is activated for objects that move out of the ROI.
The area contains the following operating elements:

Operating element Type Description


[Activated] Checkbox Warns if the object moves further out of the ROI with each trigger. (→ ROI size check / 103)
[Threshold ROI warning] Input field Sets the threshold for the warning.
[Threshold distance to mean position] Input field Sets a distance to the mean position of
the objects.
[Threshold movement score] Input field Sets a threshold for the movement trend
of the objects.

10.2.5.1 Number of objects per ROI


The [Number of objects per ROI] function sets the number of objects to be detected per ROI.

If more or fewer objects are found, the model will be considered “failed” in the overall statistics. (→ Test / 166)

Example
With [Min] = "1" and [Max] = "3", 1 to 3 objects per ROI are recognised.
• If 1 ROI exists: Within the ROI, 1 to 3 objects are searched for.


• If 2 non-grouped ROIs exist: In each ROI, 1 to 3 objects are searched for.


• If 1 ROI group of 2 ROIs exists: In the ROI group, 1 to 3 objects are searched for.

The function increases the evaluation time of the device.

10.2.5.2 Object properties


The object properties restrict the recognition of objects. If an object meets the setting criteria, it can be
processed further.

After the [Add object properties] button has been clicked, the following menu is displayed:

[Geometry] area Description


[Object area] The area of the object.

[Object areas in ROI] The total area of all objects in the ROI.

[Position X] The centre of gravity of the object on the X-axis, measured


from the left border of the image.

[Position Y] The centre of gravity of the object on the Y-axis, measured


from the top border of the image.

[Object height] The height of the smallest rectangle that completely encloses
the object and whose sides are parallel to the image border.

[Object width] The width of the smallest rectangle that completely encloses
the object and whose sides are parallel to the image border.


[Circular] area Description


[Roundness] The roundness of the object describes the shape factor of the
contour. The distance of the contour to the centre of gravity of
the area is measured. Wide notches in the object do not
greatly affect the value, as the centre of gravity does not
change significantly.
A perfectly round circle has the value “100”.

[Circularity] The circularity of the object describes the similarity to a per-


fect circle. A perfectly round circle has the value “100”.

[Outer radius] The radius of the smallest circle that completely encloses the
object.

[Inner radius] The radius of the largest circle that fits completely inside the
object.

[Rectangular] area Description


[Rectangularity] The rectangularity of the object. A perfect rectangle has the
value “100”.

[Inner width] The width of the largest rectangle that fits completely inside
the object and whose sides are parallel to the image border.


[Inner height] The height of the largest rectangle that fits completely inside
the object and whose sides are parallel to the image border.

[Greyscale] area Description


[Min. greyscale value] The lowest greyscale value of the object.

[Max. greyscale value] The highest greyscale value of the object.

[Average greyscale value] The average greyscale value of the object.

[Greyscale value deviation] The standard deviation of the object (homogeneity). The value
is low for uniformly grey objects and high for irregular surfaces
or greyscale gradients.

[Other] area Description


[Compactness] The compactness of the object. Empty regions have the value “0”; circular objects have the value “1”. Long narrow objects have average values. Entangled objects and objects with holes have high values.


[Number of holes] The number of holes in the object.

[Orientation] The orientation of the object in degrees.

[Shape] area Description


[Perimeter] Length of the outer contour of the object.

[Centre X of the bounding box] Horizontal coordinate of the object's geometric centre, meas-
ured from the left image border.

[Centre Y of the bounding box] Vertical coordinate of the object's geometric centre, measured
from the top image border.
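The manual does not give the device's exact formulas for these scores. As an illustration, the classic shape factor often used for circularity-type measures relates area A and perimeter P as 4πA/P², which is 1 for a perfect circle; scaled to the "100 = perfect circle" convention used above:

```python
import math

def circularity_score(area: float, perimeter: float) -> float:
    """Isoperimetric shape factor 4*pi*A/P^2, scaled so a circle scores 100."""
    return 100.0 * 4.0 * math.pi * area / perimeter ** 2

r = 10.0
print(round(circularity_score(math.pi * r**2, 2 * math.pi * r)))  # 100 for a circle
print(round(circularity_score(10.0**2, 4 * 10.0)))                # 79 for a square
```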

Activating the checkboxes activates the corresponding object properties.


An activated object property has the following operating elements:

Operating element Type Description


Button Deletes the object property irrevocably.

Button Displays the description of the object


property.

[Min] Input field Sets the minimum value of the object


property.
[Max] Input field Sets the maximum value of the object
property.
Slider Sets the minimum and maximum value
of the object property.


10.2.5.3 ROI size check


The checkbox [Enabled] activates the function [ROI size check]. The function warns when an object
moves to the edge of the defined ROI.
The function can be used as a predictive maintenance tool. The results of the function are forwarded
to a controller if desired.
The function [ROI size check] contains the following input fields:

Input box Description


[Threshold ROI warning] If the distance of at least one object to the ROI falls below the
set value, a warning will be issued.
If an object is found, the warning area will be displayed in col-
our in the live image.
[Threshold distance to mean position] An average value is formed from N detected objects and saved as the average position (Nmax = 100, where N = number of detected objects).

If the distance of a detected object is greater than the set val-


ue, a warning will be issued.
The mean value will be reset if an object is not successfully
detected. The last N positions are displayed in the form of a
“track” in the live image.
[Threshold movement score] If an object continues to move in the same direction in suc-
cessive images, the probability that the object will continue to
move in that direction increases. If the probability exceeds the
set value, a warning is issued.

Example
Objects to be recognised move on a belt.
If the belt speed or position of the objects does not match the trigger rate of the device exactly, the
objects will move out of the ROI. If the object is outside the ROI, it is not recognised. The function [ROI
size check] warns against this.
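The mean-position part of the check can be sketched as a rolling average over the last N detected positions (illustrative only; the Nmax = 100 cap and the reset on a failed detection follow the description above):

```python
from collections import deque

class MeanPositionTracker:
    """Rolling mean of the last N object positions; a failed detection resets it."""

    def __init__(self, n_max: int = 100):
        self.history = deque(maxlen=n_max)

    def update(self, position):
        """position is an (x, y) tuple, or None if detection failed."""
        if position is None:
            self.history.clear()
            return None
        self.history.append(position)
        n = len(self.history)
        return (sum(p[0] for p in self.history) / n,
                sum(p[1] for p in self.history) / n)

tracker = MeanPositionTracker()
tracker.update((10.0, 20.0))
print(tracker.update((14.0, 24.0)))  # (12.0, 22.0)
tracker.update(None)                 # failed detection resets the history
print(len(tracker.history))          # 0
```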

10.2.5.4 Activate anchor tracking


The anchor tracking recognises objects despite changed position and rotation. The anchor tracking is
taught automatically.

Before anchor tracking can be used, the [Contour anchor tracking] model must be added. (→ Contour anchor / 112)

The evaluation time of the device may increase when anchor tracking is activated, depending
on the number and size of the objects.

10.2.5.5 Object definition area of a BLOB


After opening the [Object definition] area, the [Object definition area] is displayed in the live image.


Fig. 40: Object definition area


1 Object definition area

In the [Object definition area], an area is set to be searched for within the ROI. The [Object definition
area] can be detected several times within the ROI, for example solder joints of the same shape and
size.
Setting the object definition area:
u Using the mouse pointer, place the object definition area in the live image as close as possible
around the object.
w The object is automatically recognised within the object definition area and highlighted in colour.

10.2.6 Contour detection


The [Contour detection] model analyses the contours of an object.
First, the contour of a good part is taught. The contour detection checks the degree of conformity
between the taught contour and the contours of the test parts. Additional quality characteristics can be
defined for the test.
Contour detection is particularly suitable for applications where the shape of the objects is repeated.

Operating elements
The model contains the following operating elements:


Operating element Name Description


Copy model Copies the model and changes to the
new model.
Rename model Renames the model.

Delete model Deletes the model irrevocably.

Image assignment
The [Image assignment] area contains the following operating elements:

Operating element Type Description


[l1 New image 1] Checkbox Assigns the selected images to the mod-
el.
A new image is added with [Images & triggers]. (→ Images & triggers / 29)
Up to 5 images can be added. The 5 im-
ages are assigned to the model as “l1”
to “l5”.

Object definition
In the [Object definition] area, contours are taught in using an object definition image.
After opening the [Object definition] area, the [Object definition area] is displayed in the live image. (→ Object definition area of a BLOB / 103)
The area contains the following operating elements above the live image:

Operating element Type Description


Button Copies the object definition image from
another model.
Button Uses the last image as the object defini-
tion image.
Button Triggers and updates the object defini-
tion image.
Button Uses the set trigger to update the object
definition image. (→ Trigger mode / 33)
Button Saves the object definition image as a
JPEG.
Button Loads an image and uses it as an object
definition image.
Possible file formats:
*.o2x5xximg
*.o2i5xximg
*.jpg

Define contours
The contours to be recognised are set in the [Define contours] area.
The area contains the following operating elements:

Operating element Type Description


[Contrast threshold] Checkbox / input field Sets the lower contrast threshold for de-
termining the contours.
[Min. contour length] Checkbox / input field Sets the minimum contour length. Small-
er contours are ignored.

Advanced
Advanced settings can be changed in the [Advanced] section.


w The [Advanced] area contains settings that are unnecessary for typical configurations.
u Only change the settings if you know exactly what the effects will be.

The area contains the following operating elements:

Operating element Type Description


[Point reduction] List Reduces the number of contour points
used.
[Automatic]: Automatically reduces the
contour points.
[None]: Does not reduce contour points.
[Point reduction low]: Reduces a low
number of contour points.
[Point reduction medium]: Reduces a
medium number of contour points.
[Point reduction high]: Reduces a high
number of contour points.

Edit contours
In the [Edit contours] area, the contours are edited directly in the live image using the mouse pointer.
The area contains the following operating elements:

Operating element Type Description


Button Draws an additional contour in the live
image. The mouse pointer is displayed
as a crosshair.
Button Deletes parts of the contour. A mouse
click deletes the contour inside the cir-
cle. The deleted contour is shown as a
blue line.
The size of the circle can be adjusted
with [Tool size].
Button Restores parts of the deleted contour.
The deleted contour is shown as a blue
line. A mouse click restores the contour
inside the circle.
The size of the circle can be adjusted
with [Tool size].
Button Deletes the complete contour. A mouse
click deletes the selected contour. The
deleted contour is shown as a blue line.
The size of the circle can be adjusted
with [Tool size].
Button Restores deleted contour. The deleted
contour is shown as a blue line. A
mouse click restores the selected con-
tour.
The size of the circle can be adjusted
with [Tool size].
[Tool size] Input field / slider Sets the size of the circle.

Used contours
The used contours are displayed in the [Used contours] area. The used contours are shown in green
in the live image.
The area contains the following operating elements:


[111 point contour] Text field Displays the number of points "111" of
the selected contour. The selected con-
tour is displayed orange in the live im-
age. The contour is selected by moving
the mouse pointer over the text field.
Button Deletes the selected contour. The delet-
ed contour is moved to the [Unused con-
tours] area.

Unused contours
The unused contours are displayed in the [Unused contours] area. The unused contours are shown in
blue in the live image.
The area contains the following operating elements:

Operating element Type Description


[111 point contour] Text field Displays the number of points "111" of
the selected contour. The selected con-
tour is displayed orange in the live im-
age. The contour is selected by moving
the mouse pointer over the text field.
Button Restores deleted contour. The restored
contour is moved to the [Used contours]
area.

Reference point
The zero point of the objects is set in the [Position reference] area. The coordinates of objects given in the ifmVisionAssistant refer to the zero point, for example "Position X" and "Position Y" in the [Results] tab. (→ Monitor / 21)
The area contains the following operating elements:

Operating element Type Description


Button Moves the zero point to a new position
with the mouse pointer.
Button Restores the original position.

Model parameters
In the [Model parameters] area, the defined contour is further set so that it can be found in an
application.
The area contains the following operating elements:

Operating element Type Description


[Add region (ROI)] Button Sets the region of interest (ROI). (→ Creating a region of interest (ROI) / 23)
[Exclusion zone (ROD)] Button Sets the region of disinterest (ROD). (→ Creating a region of disinterest (ROD) / 24)
[Activate anchor tracking] Checkbox Activates anchor tracking. (→ Models / 65) The anchor tracking recognises objects despite changed position and rotation.
[Number of objects per ROI] Input box Sets the allowed number of objects per ROI. (→ Number of objects per ROI / 109) The setting influences the pass/fail status of the overall statistics. (→ Test / 166)


[Tolerance width (±)] Input box Sets the tolerance range of the contour
detection. A contour within the tolerance
width is detected.
[Maximum orientation] Input fields Sets the maximum orientation. Objects
are detected within the area.
[Min. score] Input field / slider Sets the minimum score for the evalua-
tion. The contour must reach the score
to be recognised. The theoretical value
" 1 " is not achieved in practice.
[Analysis mode] Checkbox Activates the analysis mode. If the anal-
ysis mode is activated, the objects that
do not reach the [Min. score] will also be displayed.
The setting is only intended for analysis
in case of problems. The evaluation time
of the device increases considerably.
[Max. overlap] Input field / slider Sets the maximum overlap of contours.
The contours of 2 objects may overlap
by the set value.
Example: 2 square objects lie next to each other and the contours overlap (red line). With the value "0.25" (1/4 of the object, i.e. one side), 2 objects will be detected.

Advanced
Advanced settings can be changed in the [Advanced] section.

w The [Advanced] area contains settings that are unnecessary for typical configurations.
u Only change the settings if you know exactly what the effects will be.

The area contains the following operating elements:

Operating element Type Description


[Timeout] Checkbox / input field Sets a timer for detecting objects. With
the end of the timer, the detection will be
cancelled.
[Min. contrast] Input box Sets a minimum contrast that must be
present in the current image.
No analysis will be carried out in areas
with a low grey value difference (min.
contrast <10). In areas with higher con-
trast, the defined contours will be
searched for. This means that the
search algorithm can, for example, very
quickly exclude a homogeneous back-
ground (consistently white or black).
[Orientation steps] Checkbox / input field Sets the orientation steps for the [Maxi-
mum orientation] [°]. A rotated object is
searched for in the set steps.
Small values increase the evaluation
time of the device.


[Subpixel] List Activates the sub-pixels. The sub-pixels
form a mean value from the neighbour-
ing pixels. The sub-pixels can reduce jit-
ter in the live image and improve the ac-
curacy of the orientation.
[None]: Deactivates the sub-pixels. The
setting is only recommended if a particu-
larly short evaluation time of the device
is required.
[Interpolation]: Uses linear interpolation
for the average of 2 neighbouring pixels.
[Least squares]: Determines a function
(usually a straight line) for a set of pixels
that summarises the pixels as well as
possible.
[Least squares high]: Determines a func-
tion, as with [Least Squares], with high
accuracy.
[Least squares very high]: Determines a
function, as with [Least Squares], with
very high accuracy.
[Search acceleration] Input field / slider Accelerates the search for contours. A
high value speeds up the search but
reduces its accuracy.
A high value can affect the [min. score]
because the score can then lie outside
the defined limits.
[Number of levels] Checkbox / input field Sets the resolution levels at which con-
tours are searched for. With only 1 level,
a search is made in the original image of
the device, which significantly increases
the evaluation time. Each additional lev-
el reduces the resolution and thus the
evaluation time.
[Lowest level] Input field Sets the level up to which images are
analysed.
[Position limits] Input fields Restricts the area in which contours are
detected.

ROI size check


In the “ROI size check” area, a warning is activated for objects that move out of the ROI.
The area contains the following operating elements:

Operating element Type Description


[Activated] Checkbox Warns if the object moves further out of
the ROI with each trigger. (Ò ROI size
check / 103)
[Threshold ROI warning] Input field Sets the threshold for the warning.
[Threshold distance to mean position] Input field Sets a distance to the mean position of
the objects.
[Threshold movement score] Input field Sets a threshold for the movement trend
of the objects.

10.2.6.1 Number of objects per ROI


The [Number of objects per ROI] function sets the number of objects to be detected per ROI.

If more or fewer objects are found, the model will be considered “failed” in the overall statistics.


Example
With the setting "3", 3 objects will be detected per ROI.
• If 1 ROI exists: Within the ROI, 3 objects are searched for.
• If 2 non-grouped ROIs exist: In each ROI, 3 objects are searched for.
• If 1 ROI group of 2 ROIs exists: In the ROI group, 3 objects are searched for.

The function increases the evaluation time of the device.

10.2.6.2 Pyramid levels


The “pyramid level” method in industrial image processing is used to shorten the evaluation time of
image analyses. In this method, the resolution of the original image is reduced. The reduction can be
repeated several times, so that there are several “levels” that are pictorially represented as a
pyramid. With each level, the resolution of the image is reduced to ¼ and thus the evaluation time is
reduced.
Then, the defined contours will first be searched for in the low-resolution images. When contours are
found, they will be verified again on the high-resolution image. The procedure leads to significantly
shorter evaluation times with consistently high accuracy.
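The device's internal implementation is not published; purely as an illustration of the ¼-per-level reduction described above, the following Python sketch builds such a pyramid by averaging 2×2 pixel blocks (all names are hypothetical):

```python
def build_pyramid(image, num_levels):
    """Return a list of images; each level has 1/4 the pixels of the previous.

    `image` is a list of rows (lists of grey values). Level 1 is the
    original image; each further level halves width and height by
    averaging 2x2 pixel blocks, i.e. reduces the resolution to 1/4.
    """
    levels = [image]
    for _ in range(num_levels - 1):
        src = levels[-1]
        h, w = len(src) // 2, len(src[0]) // 2
        if h == 0 or w == 0:
            break  # image too small for another level
        reduced = [
            [
                (src[2 * y][2 * x] + src[2 * y][2 * x + 1]
                 + src[2 * y + 1][2 * x] + src[2 * y + 1][2 * x + 1]) // 4
                for x in range(w)
            ]
            for y in range(h)
        ]
        levels.append(reduced)
    return levels

# A 4-level pyramid: level 4 has (1/4)^3 = 1/64 of the original pixels.
original = [[128] * 16 for _ in range(16)]   # 16x16 test image
pyramid = build_pyramid(original, 4)
print([len(lv) * len(lv[0]) for lv in pyramid])  # [256, 64, 16, 4]
```

The search then starts on the smallest (last) level and verifies candidate contours on the larger levels.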


Fig. 41: Number of levels


1 Image of the first level (high resolution image) 2 Image of the second level
3 Image of the third level 4 Image of the fourth level
5 Reduce resolution to ¼ 6 Detecting contours in the image

First, the original image is reduced by the set [Number of levels]. Then, starting with the smallest
image, contours will be searched for. If all contours are already found in the smallest image, the
following (larger) image will be used for verification.


For typical applications, the setting [Number of levels] = "4" is sufficient, which corresponds to the
4th level with 1/64 of the original resolution. A high resolution is only required to recognise very
small structures.

Lowest level
The lowest level sets up to which level the images are to be analysed. For example, with the value
" 2 ”, the 1st level (the image in the original resolution) is not used for analyses.

10.2.6.3 Object definition area of a contour


After opening the [Object definition] area, the [Object definition area] is displayed in the live image.

Fig. 42: Object definition area


1 Object definition area

Contours are taught in the [object definition area]. By setting the object definition image, the contours
are determined automatically. The contours can then be adjusted manually. The [object definition
area] can be detected several times within the ROI, for example the orientation of screw heads.
The contour taught-in in the object definition image is also called "target contour" and "good part".
Setting the object definition area:
u Using the mouse pointer, place the object definition area in the live image as close as possible
around the object.
w The object is automatically recognised within the object definition area and highlighted in colour.


10.2.7 Contour anchor


The [Contour anchor] model analyses the contours of an object and tracks the contours of other
models. Additional quality characteristics can be defined for the analysis.
The [Contour anchor] model is particularly suitable for applications where the shape of the objects is
repeated.

Object definition
The object definition is set in the live image. An object for the model is set in the [Object definition
area]. The contours found in the [Object definition area] are taught in automatically.
The contours taught in the object definition screen are also called “target contour" and “good part”.
The [Object definition area] can be detected several times within the ROI, for example the orientation
of screw heads.
Set the object definition area:
u Set the object definition area in the live image with the mouse pointer as close as possible around
the contour to be recognised.
w The contour is automatically recognised within the object definition area and highlighted in
colour.
u Adjust the contours with the operating elements described below.

Operating elements
The [Contour anchor] model includes the following operating elements:
[Define contours] area:

Operating element Type Description


[Contrast threshold] Checkbox / input field Sets the lower contrast threshold for de-
termining the contours.
[Min. contour length] Checkbox / input field Sets the minimum contour length. Small-
er contours are ignored.

[Advanced] area:

Operating element Type Description


[Point reduction] List Reduces the number of contour points
used.
[Automatic]: automatically reduces the
contour points.
[None]: does not reduce any contour
points.
[Point reduction low]: reduces a low
number of contour points.
[Point reduction medium]: reduces a me-
dium number of contour points.
[Point reduction high]: reduces a high
number of contour points.

[Edit contours] area:

Operating element Type Description


Button Draws additional contours.

Button Deletes unwanted parts of contours.

Button Restores deleted parts of contours.

Button Deletes unwanted contours.

Button Restores deleted contours.


Operating element Type Description


[Tool size] Input field / slider Sets the size of the tool used to draw
and erase contours.

The mouse wheel can be used to zoom in and out of the live image. Individual contours are thus
more visible.

[Used contours] area:


The used contours are displayed in the [Used contours] area. The used contours are shown in the live
image in the colour [Green].
[Unused contours] area:
The unused contours are displayed in the [Unused contours] area. The unused contours are shown in
the live image in the colour [Blue].
[Position reference] area:
The zero point of the object definition area can be set in the position reference area. The area
contains the following operating elements:

Operating element Type Description


Button Sets the zero point of the object defini-
tion area. The set zero point is indicated
by a cross.
Button Resets the zero point setting.

[Model parameters] area:

Operating element Type Description


Button Sets a rectangular search zone (ROI).

Button Sets an elliptical search zone (ROI).

Button Sets a polygonal search zone (ROI).

Button Sets a rectangular exclusion zone (ROD).
Button Sets an elliptical exclusion zone (ROD).

Button Sets a polygonal exclusion zone (ROD).

[Tolerance width] Input field Sets the maximum permissible deformation
of the contour. Contours within the
tolerance width are taken into account.
[Max. orientation] Input fields Sets the maximum orientation. Objects
within the orientation will be detected.
[Min. score] Input field / slider Sets the minimum score of a detected
contour so that it will be considered
found. The value 1 means complete
agreement. The value is not achieved in
practice due to disturbances in the envi-
ronment such as image noise, changing
light conditions, etc.
[Analysis mode] Checkbox Also takes into account contours that lie
below the [Min. score]. The contours will
be marked as failed .
The [Analysis mode] is only intended for
analysis in case of problems. The pro-
cessing time of the device increases
considerably.

[Advanced] area:


Operating element Type Description


[Timeout] Checkbox / input field Sets a timeout for the search for con-
tours.
The value 0 sets the timeout to the
maximum value 7000 ms .
[Min. contrast] Input field Sets a minimum contrast that must be
present in the current image.
No analysis will be carried out in areas
with a low greyscale value difference
([min. contrast] <10 ). In areas with
higher contrast, the defined contours will
be searched for. This means that the
search algorithm can, for example, very
quickly exclude a homogeneous back-
ground (consistently white or black) and
only search in the contrasting ones.
[Orientation steps] Checkbox / input field Sets the size of the intermediate steps of
the set orientation. (Ò Orientation / 74)
A rotated object is searched for in the
set steps.
Small values increase the processing
time of the device.
[Subpixels] List Activates the subpixels. The subpixels
form a mean value from the neighbour-
ing pixels. The subpixels can reduce jit-
ter in the live image and improve the ac-
curacy of the orientation.
[None]: deactivates the subpixels. The
setting is only recommended if a particu-
larly short processing time of the device
is required.
[Interpolation]: uses linear interpolation
for the average of 2 neighbouring pixels.
[Least squares]: determines a function
(usually a straight line) for a set of pixels
that summarises the pixels as well as
possible.
[Least squares high]: determines a func-
tion, as with [Least squares], with high
accuracy.
[Least squares very high]: determines a
function, as with [Least squares], with
very high accuracy.
[Search acceleration] Input field / slider Accelerates the contour search. A high
value speeds up the search but reduces
its accuracy.
A high value can affect the [Min. score]
because the score can then lie outside
the set limits.
[Number of levels] Checkbox / input field Sets the resolution levels at which con-
tours are searched for. (Ò Pyramid lev-
els / 110) With only 1 level, a search is
made in the original image of the device,
which significantly increases the pro-
cessing time. Each additional level re-
duces the resolution and thus the pro-
cessing time.
[Lowest level] Input field The lowest level sets the level up to
which the images are to be analysed. (Ò
Pyramid levels / 110)
[Position limits] Input fields Restricts the area in which contours are
detected.

[ROI size check] area:


Operating element Type Description


[Activated] Checkbox Warns if the contour moves further
out of the ROI with each trigger. (Ò ROI
size check / 76)

10.2.8 1D barcode anchor tracking


The [1D barcode anchor tracking] model tracks the search zone (ROI) from other models. The position
and orientation are taken into account. This means that objects do not have to be in the same position
and in the same orientation for the application to work correctly.
The [1D barcode anchor tracking] model is linked to models that have already been created. It then
traces the search zone (ROI). The search zone is used to search for objects.
Only one model of the [1D barcode anchor tracking] type can be created.
The operating elements of the [1D barcode anchor tracking] model are identical to those of the [1D
barcode] model. Only difference: the [1D barcode] model can recognise multiple codes; the [1D
barcode anchor tracking] model can only detect one code.

For reliable recognition of objects, the anchor code may only be visible once in the image.

The function increases the evaluation time of the device.

10.2.9 2D data code anchor tracking


The [2D data code anchor tracking] model tracks the search zone (ROI) from other models. The
position and orientation are taken into account. This means that objects do not have to be in the same
position and in the same orientation for the application to work correctly.
The [2D data code anchor tracking] model is linked to models that have already been created. It then
traces the search zone (ROI). The search zone is used to search for objects.
Only one model of the [2D data code anchor tracking] type can be created.
The operating elements of the [2D data code anchor tracking] model are identical to those of the [2D
data code] model. Only difference: the [2D data code] model can recognise multiple codes; the [2D
data code anchor tracking] model can only detect one code.

For reliable recognition of objects, the anchor code may only be visible once in the image.

The function increases the evaluation time of the device.

10.3 Flow
The [Flow] function displays the images and models available in a flow chart. In just a few steps
• the evaluation order of the images and models is set,
• images are activated or deactivated,
• timeouts for models are set.
In addition, the capture times of images and evaluation times of the models are displayed.



Fig. 43: [Flow] function


1 Flow settings 2 Start of the flow chart
3 Image 4 Model
5 End of the flow chart

Flow settings
The flow settings are used to set the processing order of the images and models.
The flow settings contain the following settings:

Flow settings Type Description


[First Fit] Option field Runs the model that first successfully
completes the search task in the flow
chart. The models following in the flow
chart are not executed.
[Autosort] Checkbox Runs the model first that successfully
detected a code in the previous pass.
[All Models] Option field Runs all models. The total evaluation
time increases.
[Application timeout] Checkbox Sets a timeout for the application. If the
evaluation time of the application ex-
ceeds the timeout, the evaluation is
stopped.
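The firmware's scheduling is not documented; as a rough illustration only, the difference between [First Fit] and [All Models] can be sketched like this (function and model names are invented):

```python
def run_flow(models, strategy="First Fit"):
    """Illustrative evaluation order, not device firmware.

    Each model is a callable returning True when its search task
    succeeds. With "First Fit" the flow stops at the first success;
    with "All Models" every model runs and the total time grows.
    """
    results = []
    for model in models:
        success = model()
        results.append(success)
        if strategy == "First Fit" and success:
            break  # the models following in the flow chart are skipped
    return results

calls = []

def make_model(name, ok):
    def model():
        calls.append(name)  # record the execution order
        return ok
    return model

run_flow([make_model("A", False), make_model("B", True), make_model("C", True)])
print(calls)  # ['A', 'B'] -- model C is never executed
```

With `strategy="All Models"` the same flow would execute A, B and C, at the cost of a longer total evaluation time.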


Flow chart
The flow chart starts with the image underneath [START] (2). The active models (4) are connected
with a blue line. Disabled models are greyed out and outlined with a dashed line.
The flow chart ends with [OUTPUT] (5). The total evaluation time is displayed underneath [OUTPUT].
The total evaluation time is composed of the capture time of the image and the evaluation times of the
models.

Double-clicking on an image or a model links directly to the respective settings.

The following properties are displayed within an image (3):


• image name
• current camera image
• capture time (corresponds to the exposure and image read-out time of the image)

Within a model (4) the following properties are displayed:


• Model name
• Model type
• Model status (ROI criteria met)
• Number of ROI criteria met / number of ROI
• Processing time (influenced by the settings of the model) (Ò Models / 65)
• Timeout (Ò Models / 65)

Setting images
The following functions are used to set images:

Function Button Description


[Change position] - Press the mouse button and move the
image to the new position.
[Activate/deactivate] Activates or deactivates the selected image.

[Delete] Deletes the selected image from the device’s system.

Setting models
The following functions are used to set a model:

Function Button Description


[Change position] - Press the mouse button and move the
model to the new position.
[Image references] Connects the model to an available im-
age.
[Delete] Deletes the selected model from the de-
vice’s system.
[Timeout] - Sets a timeout for the model. If the eval-
uation time of the model exceeds the
timeout, the evaluation is stopped.


10.4 Logic
The function [Logic] creates an output logic by means of logic blocks. In addition to binary signals,
numbers and strings are also processed. The model and pin data is assigned to the outputs in the
output logic. Then the data is transferred to a controller (PLC/PC) via the outputs.


Fig. 44: Function [Logic]


1 Search by name 2 Logic utilities
3 Logic blocks 4 Overview area
5 Chart

Search by name
In the input field [Search by name], a logic element is searched for by entering the name.

Logic utilities
The [Logic utilities] import and export an output logic. (Ò Logic utilities / 119) The export saves the
output logic and makes it interchangeable with other devices.

Logic blocks
The logic blocks are used to create an output logic in the diagram. (Ò Logic block / 119) The logic
blocks are placed in the diagram by clicking and dragging using the mouse:
u Click the logic block and keep the mouse button pressed.
u Drag the logic block into the diagram and release the mouse button.


w The logic block is placed.

At the edge of each logic block, there is at least one contact area via which the logic blocks are
connected. (Ò Logic block / 119)

Overview area
The overview area displays a reduced overview of the diagram. The white frame is shifted using the
mouse. By this, the logic blocks outside the visible area can be displayed.

Diagram
The output logic is created in the diagram. (Ò Output logic / 120) The pin events and the outputs are
displayed as logic blocks with different designations and font colours. The logic blocks are connected
by connecting lines. The connecting lines represent the data flow between the logic blocks. Next to the
inputs of the logic blocks, the current state of the input is displayed.

10.4.1 Logic utilities


The [Logic utilities] function imports, exports and deletes an output logic. The export saves the output
logic in a file that can be used with other devices.
The function [Logic utilities] contains the following operating elements:

Operating element Name Description


Importing logic Imports the output logic from a file with
the extension *.o2xlgc .
Exporting entire logic Exports the output logic to a file with the
extension *.o2xlgc .
Deleting logic Deletes the created output logic in the
diagram after confirmation.
The data cannot be restored.

10.4.2 Logic block


Editing a logic block
Logic blocks can be edited in several ways. The available functions are displayed as buttons for the
selected logic block.
The following buttons are used to edit a logic block:

Operating element Function Description


Duplicate Creates a duplicate of the selected logic
block.
Delete Deletes the selected logic block.

Set Sets the selected logic block.

Connect logic block


The contact areas at the border connect the logic blocks.

Fig. 45: Contact areas with red connecting line


1 Contact areas

Connect a logic block:


u Click the contact area at the right border of a logic block and keep the mouse button pressed.
w The contact areas of the outputs are at the right border.
u Drag the connecting line to a free contact area on the left border of a logic block and release the
mouse button.
w The contact areas of the inputs are at the left border.

During connection, the compatibility of the logic block is verified. For example, a numeric output
cannot be connected to a Boolean input. The inputs compatible with the output value are
displayed in blue.
The units of measurement of the logic blocks are not checked when connecting.
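As a hypothetical sketch of this compatibility rule (the device's actual type system is not documented), a connection is only accepted when the output and input data types match, while units of measurement are ignored:

```python
# Invented type names for illustration; the real editor checks
# compatibility internally and highlights compatible inputs in blue.
COMPATIBLE = {
    ("bool", "bool"),
    ("numeric", "numeric"),
    ("string", "string"),
}

def can_connect(output_type, input_type):
    """Return True if an output of output_type may be wired to input_type."""
    return (output_type, input_type) in COMPATIBLE

print(can_connect("numeric", "bool"))  # False: digitise the value first
print(can_connect("bool", "bool"))     # True
```

A numeric model result therefore has to pass through a digitisation block before it can drive a Boolean input.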

Delete a connecting line:


u Click the connecting line.

u Click on the button [Delete]:


w The connecting line is deleted.

Select multiple logic blocks


By holding down the Ctrl key and clicking with the left mouse button, several logic blocks are selected.
The selected logic blocks can then be exported as a bundle, duplicated or deleted.
The selected logic blocks are highlighted with a frame. The following operating elements are displayed
below the frame:

Operating element Function Description


Duplicate Duplicates the selected logic blocks.
Some logic blocks may only be present
once in the output logic and therefore
cannot be duplicated.
Export Exports the selected logic blocks to a file
with the extension *.o2xlgc .
Delete Deletes the selected logic blocks.

10.4.3 Output logic


The output logic is created in the diagram. The model and pin data is assigned to the outputs in the
output logic. The following rules apply for creating an output logic:
• The pin events are provided as Boolean numbers (1 = true, 0 = false) and assigned to digital
outputs.
• The model results are numerical values and are processed as follows:
– use of operators,
– digitalisation by comparison with other results or values,
– transfer of digitalised numerical values by applying arithmetic operators and logic functions,
– output of a Boolean value via a digital output or a virtual pin.
The following figure shows an overview of the configuration options in the output logic. The numbers
identify the connection between the logic blocks.


[Diagram: pin events and model results are processed via digitisation, arithmetic, statistics and
logical function blocks and assigned to digital outputs and virtual pins]

Fig. 46: Output logic
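As an illustration of one path through this logic (a model result digitised by comparison, combined with a pin event through a logical function, then assigned to a digital output), the following Python sketch uses invented names and thresholds; it is not device code:

```python
def digitise(value, threshold):
    """Comparison turns a numeric model result into a Boolean (1 = true, 0 = false)."""
    return value >= threshold

def logical_and(a, b):
    """A minimal stand-in for a logical-function block."""
    return a and b

pin_event = True       # e.g. an "all ROIs passed" pin event
model_result = 0.87    # e.g. a score value delivered by a model
digital_out = logical_and(pin_event, digitise(model_result, 0.75))
print(digital_out)     # True -> the digital output switches
```

The same pattern extends to virtual pins: the Boolean result of the last block is simply assigned to the chosen output.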

10.4.4 Logic block [New note]


The logic block [New note] inserts notes into the diagram. The notes can contain arbitrary text and
behave like sticky notes.
The following functions are used to set the logic block:

Function Button Description


Edit text - Clicking within the note displays a cursor
that is used to add and edit text.

10.4.5 Logic blocks [Model results]


In the area [Model results], the created models are listed as logic blocks. (Ò Models / 65) The logic
blocks provide the characteristics of the detected codes at the outputs.

1D barcode and 2D data code


After placing the logic block [Model results] -> [1D barcode] or [2D data code] with or without anchor
tracking, the function is set:

List Description
[Code details] Sets the function of the logic block:
[Code details]: provides the code details at the outputs.
[Quality grading]: provides the results of the quality assess-
ment at the outputs.
[ROI]: provides the status of a specific ROI group at the out-
puts.
[OCR]: provides the results of the text recognition at the out-
puts.
[Model overview]: provides the status of all ROI groups and
the decoding status at the outputs.
[ROI 0] Sets the ROI group.
[Code index 0] Sets a specific code.
For access to a specific code, the number of codes per ROI
group must be set to a value “>=0”. (Ò Number of codes per
ROI group / 70)
[Row number 0] Sets a specific line of text recognition.

With the [Code details] function, the logic block provides the following outputs:


Output Number format Description


[Code content] alphanumeric Content of the code taking into account
the coding. (Ò Encoding / 69)
[Code found] bool State of the code:
“Code found”
“Code not found”
[Centre X] numeric The centre of the code on the X-axis.
[Centre Y] numeric The centre of the code on the Y-axis.
[Orientation] numeric Orientation of the code in degrees.
[Half width] numeric Half width of the code.
[Half height] numeric Half height of the code.
[Binary code content] byte array Code content as uncoded raw data (byte
array). The output must be connected to
the “Equal bytes” logic block. (Ò Logic
blocks [Binary operations] / 135)

With the [Quality grading] function, the logic block provides the following outputs for 1D barcodes:

Output Number format Description


[User-defined overall quality] Float The overall quality of the code corre-
sponds to the individual characteristic
with the poorest rating, depending on
the quality parameters set.
[Overall quality] Float The overall quality of the code corre-
sponds to the individual characteristic
with the poorest rating. The following
quality gradings exist:
0-4 (0 = bad; 4 = very good)
A-F (F = bad; A = very good)
The grading 0-4 or A-F is determined
by the standard used.
[Additional requirements] Float Specific requirements of the symbology
[Decodability] Float Deviations of the symbol element widths
from the nominal value. The nominal val-
ue is defined in the symbology standard.
[Decode] Float Readability of the code:
4 : readable
0 : not readable
[Defects] Float Defects are irregularities in the grey-
scale value profile of symbol elements or
quiet zones.
[Minimum edge contrast] Float Assessment of the minimum contrast
between two adjoining symbol elements
in the grey-scale value profile (light to
dark element or dark to light element).
[Minimum reflectance] Float Minimum reflection value of the grey-
scale value profile:
4: the minimum reflection value is
<= 0.5 of the maximum reflection value
0: the minimum reflection value is
> 0.5 of the maximum reflection value
[Modulation] Float Uniformity of the light and dark bars.
[Symbol contrast] Float Contrast of the bars against the back-
ground.
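The grading rule stated above (the overall quality corresponds to the individual characteristic with the poorest rating) amounts to taking the minimum of the individual grades; a minimal sketch with invented values:

```python
# Grades on the 0-4 scale (0 = bad, 4 = very good); values are invented.
grades = {
    "decode": 4.0,
    "symbol contrast": 3.0,
    "modulation": 2.0,
    "defects": 4.0,
}

# The overall quality is limited by the poorest individual characteristic.
overall_quality = min(grades.values())
print(overall_quality)  # 2.0 -- limited here by "modulation"
```

For the user-defined overall quality, only the quality parameters enabled in the model would enter this minimum.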

With the [Quality grading] function, the logic block provides the following outputs for 2D data codes
and the quality grading “ISO”:


Output Number format Description


[User-defined overall quality] Float The overall quality of the code corre-
sponds to the individual characteristic
with the poorest rating, depending on the
quality parameters set.
[Overall quality] Float The overall quality of the code corre-
sponds to the individual characteristic
with the poorest rating. The following
quality gradings exist:
0-4 ( 0 = bad; 4 = very good)
A-F ( F = bad; A = very good)
The grading 0-4 or A-F is determined
by the standard used.
[Contrast] Float Contrast of the modules relative to the
background.
[Modulation] Float Uniformity of the light and dark modules.
[Fixed pattern damage] Float Error rate in the 3 basic elements of the
code: finder pattern, alternating pattern
and quiet zone.
[Decode] Float Readability of the code:
4 : readable
0 : not readable
[Axial non-uniformity] Float Ratio of the module size in horizontal
and vertical direction.
[Grid non-uniformity] Float Orientation of the modules relative to the
specific symbol grid.
[Unused error correction] Float Codes with error correction have redun-
dancy for correcting reading errors. The
value indicates how much of the redun-
dancy is not used during reading.
[Reflectance] Float Assessment of the amplitude between
the DataCode modules.
[Print growth] Float Ratio dark/light modules in the alternat-
ing pattern.
[Contrast uniformity] Float Smallest numerical value for the modu-
lation in the entire code.
[Aperture] Float Size indication of the synthetic aperture
in relation to the module size of the sym-
bol. The aperture is used to create the
reference grey-scale image which is re-
quired for the quality assessment.
[Format information (µQR/QR only)] Float Assessment of the modules containing
the format information.
[Version information (µQR/QR only)] Float Assessment of the modules containing
the version information.

With the [Quality grading] function, the logic block provides the following outputs for 2D data codes
and the quality grading “AIM / ISO-TR29158”:

Output Number format Description


[User-defined overall quality] Float Overall quality of the code corresponds
to the individual characteristic with the
poorest rating. Depends on the set quality parameters.
[Overall quality] Float Overall quality of the code corresponds
to the individual characteristic with the
poorest rating.
[Cell contrast] Float Contrast of the modules relative to the
background.
[Cell modulation] Float Uniformity of the light and dark modules.


Output Number format Description


[Fixed pattern damage] Float Error rate in the 3 basic elements of the
code: finder pattern, alternating pattern
and quiet zone.
[Decode] Float Readability of the code:
4 : readable
0 : not readable
[Axial non-uniformity] Float Ratio of the module size in horizontal
and vertical direction.
[Grid non-uniformity] Float Orientation of the modules relative to the
specific symbol grid.
[Unused error correction] Float Error of the code and share of the avail-
able error correction mechanisms to
successfully decode the code.
[Mean light] Float Assessment of the image quality, calcu-
lated via the medium grey-scale value of
the centres of the bright DataCode mod-
ules. The values are in the range 0,0
(0 %) to 1,0 (100 %) of the full grey-
scale range (255 for byte images).
[Reflectance] Float Assessment of the amplitude between
the DataCode modules.
[Print growth] Float Ratio dark/light modules in the alternat-
ing pattern.
[Contrast uniformity] Float Smallest numerical value for the modu-
lation in the entire code.
[Aperture] Float Size indication of the synthetic aperture
in relation to the module size of the sym-
bol. The aperture is used to create the
reference grey-scale image which is re-
quired for the quality assessment.
[Format information (µQR/QR only)] Float Assessment of the modules containing
the format information.
[Version information (µQR/QR only)] Float Assessment of the modules containing
the version information.

With the [Quality grading] function, the logic block provides the following outputs for 2D data codes
and the quality grading “SEMI T10”:

Output Number format Description


[P1 row] Float Corner 1 position Y coordinates
[P1 column] Float Corner 1 position X coordinates
[P2 row] Float Corner 2 position Y coordinates
[P2 column] Float Corner 2 position X coordinates
[P3 row] Float Corner 3 position Y coordinates
[P3 column] Float Corner 3 position X coordinates
[P4 row] Float Corner 4 position Y coordinates
[P4 column] Float Corner 4 position X coordinates
[Rows] Float ECC200 N (rows)
[Columns] Float ECC200 M (columns)
[Symbol contrast] Float The value for symbol contrast desig-
nates the contrast between light and
dark classified symbol pixels in percent
of the full grey-scale value range (255
for byte images).
[Symbol contrast SNR] Float Symbol contrast SNR is the correspond-
ing signal-to-noise ratio.


Output Number format Description


[Horizontal mark growth] Float Module width relative to the sum of light
and dark modules [%]
[Vertical mark growth] Float Module height relative to the sum of light
and dark modules [%]
[Data matrix cell width] Float Average module width
[Data matrix cell height] Float Average module height
[Horizontal mark misplacement] Float Misplacement in horizontal direction [%]
[Vertical mark misplacement] Float Misplacement in vertical direction [%]
[Cell defects] Float Misclassified symbol pixels [%]
[Finder pattern defects] Float Misclassified finder pattern pixels [%]
[First unused error correction] Float Unused capacities for error correction
[%]

With the [Quality grading as string] function, the logic block provides the following output:

Output Number format Description


[Qualities] string Result of the quality grading as a string.
The individual quality values are sepa-
rated by delimiters. Which quality values
are output is determined by the setting
of the model.
The quality values are output in the pin
order of their respective code quality
blocks.
The grading scheme and the delimiters
can be set.

With the [ROI] function, the logic block provides the following outputs:

Out Number format Description

[All codes found] bool State of the codes: "all codes found", "no code found" or "not all codes found"
[Decoding status] numeric State of the selected ROI group:
0: The ROI group has not yet been completely executed.
1: The ROI group has been completely executed.
2: The ROI group is not executed due to a timeout.
3: The ROI group is not completely executed because of an unknown error.
[Found codes] numeric Number of codes found.
[Searched codes] numeric Number of codes searched for.

With the [OCR] function, the logic block provides the following outputs:

Out Number format Description

[Corrected text] string The detected text. Preset: empty.
[Centre X] numeric The centre of the code on the X-axis.
[Centre Y] numeric The centre of the code on the Y-axis.
[Result] numeric How well the line of text matches the regular expression. Preset: 0.0

With the [Model overview] function, the logic block provides the following outputs:


Out Number format Description

[All ROIs passed] bool State of the ROI groups.
[Decoding status] numeric State of the selected model:
0: The model was not completely executed.
1: The model was completely executed.
2: The model is not executed due to a timeout.
3: The model is executed incompletely due to unknown errors.

Contour detection and contour anchor tracking


After placing the logic element [Model results] -> [Contour detection] or [Contour anchor tracking], the function is set. Some functions are only available for calibrated devices. (Ò Calibration wizards / 35)

List Description

[Object properties] Sets the function of the logic block:
[Object properties]: provides the properties of the object at the outputs.
[Object properties (calibrated)]: provides the properties of the object with calibrated values at the outputs of a calibrated device.
[ROI result]: provides the status of a specific ROI group at the outputs.
[ROI result (calibrated)]: provides the status of a specific ROI group at the outputs of a calibrated device.
[Model overview]: provides the status of all ROI groups and the decoding status at the outputs.
[ROI 0] Sets the ROI group.
[0] Sets an object index. For access to a specific object, the number of objects per ROI group must be set to a value >= 0.

With the functions [Object properties] and [Object properties (calibrated)], the logic block provides the
following outputs:

Out Number format Description

[Position X] numeric Position of the object on the X-axis.
[Position Y] numeric Position of the object on the Y-axis.
[Position Z] numeric Position of the object on the Z-axis. The output is only available for calibrated devices.
[Orientation] numeric Orientation of the object.
[Score] numeric Evaluation of the object.

With the functions [ROI result] and [ROI result (calibrated)], the logic block provides the following
outputs:

Out Number format Description

[ROI passed] bool Status of an ROI.
[Number of objects] numeric Number of objects found.
[ROI score min.] numeric Value of the contour with the lowest score within the ROI.
[ROI score max.] numeric Value of the contour with the highest score within the ROI.
[Matches centre X] numeric Averaged X-value of all contours found in the ROI. The position reference of the contour is decisive. (Ò Contour detection / 104)
[Matches centre Y] numeric Averaged Y-value of all contours found in the ROI. The position reference of the contour is decisive. (Ò Contour detection / 104)
[Matches centre Z] numeric Averaged Z-value of all contours found in the ROI. The position reference of the contour is decisive. (Ò Contour detection / 104) The output is only available for calibrated devices. (Ò Calibration wizards / 35)

With the [Model overview] function, the logic block provides the following outputs:

Out Number format Description

[All ROIs passed] bool Status of all ROI groups.
[Model number of matches] numeric Number of objects found.
[Model score min.] numeric Value of the contour with the lowest score within the model.
[Model score max.] numeric Value of the contour with the highest score within the model.

BLOB analysis
After placing the logic element [Model results] -> [BLOB analysis], the function is set. Some functions
are only available for calibrated devices. (Ò Calibration wizards / 35)

List Description

[Geometry] Sets the function of the logic block:
[Geometry]: The geometrical properties of the object.
[Geometry (calibrated)]: The geometrical properties of an object with calibrated values at the outputs of the device.
[Circular]: The circularity of the object [0..100]. For a perfect circle, the output value is "100".
[Circular (calibrated)]: The circularity [0..100] of an object with calibrated values at the outputs of the device. For a perfect circle, the output value is "100".
[Rectangular]: The rectangularity of the object [0..100]. For a perfect rectangle, the output value is "100".
[Rectangular (calibrated)]: The rectangularity [0..100] of an object with calibrated values at the outputs of the device. For a perfect rectangle, the output value is "100".
[Greyscales]: The greyscales of the object.
[Other]: Other properties of the object.
[ROI result]: The status of a particular ROI group.
[ROI result (calibrated)]: The status of a specific ROI group of an object with calibrated values at the outputs of the device.
[Model overview]: The status of all ROI groups, the number of model objects and the total model area.
[Model overview (calibrated)]: The status of all ROI groups, the number of model objects and the total model area of an object with calibrated values at the outputs of the device.
[ROI 0] Sets the ROI index.
[0] Sets an object index. For access to a specific object, the number of objects per ROI group must be set to a value >= 0.

With the [Geometry] and [Geometry (calibrated)] functions, the logic block provides the following
outputs:

Out Number format Description

[Valid object area] bool Validity of the object area.
[Object area] numeric The area of the object defined via the ROI and object index.
[Valid position X] bool Validity of position X.
[Position X] numeric The centre of gravity of the object on the X-axis, measured from the left image border.
[Valid position Y] bool Validity of position Y.
[Position Y] numeric The centre of gravity of the object on the Y-axis, measured from the top image border.
[Position Z] numeric Position Z starting from the position reference. The output is only available for calibrated devices. (Ò Calibration wizards / 35)
[Valid object height] bool Validity of the object height.
[Object height] numeric The height of the smallest rectangle that completely encloses the object and whose sides are parallel to the image border.
[Valid object width] bool Validity of the object width.
[Object width] numeric The width of the smallest rectangle that completely encloses the object and whose sides are parallel to the image border.

With the [Circular] and [Circular (calibrated)] functions, the logic block provides the following outputs:

Out Number format Description

[Valid roundness] bool Validity of roundness.
[Roundness] numeric The roundness relates the distances of all contour points from the centre of gravity to their average deviation. A narrow notch in the object (pizza slice) has a significant effect on its roundness. A perfectly round circle has the value "100".
Formula symbols:
• "p": centre of gravity.
• "pi": contour points.
• "F": area of the contour.
[Valid outer radius] bool Validity of the outer radius.
[Outer radius] numeric The radius of the smallest circle that completely encloses the object.
[Valid inner radius] bool Validity of the inner radius.
[Inner radius] numeric The radius of the largest circle that fits completely inside the object.
[Valid circularity] bool Validity of circularity. The output is only available for non-calibrated devices.
[Circularity value] numeric The circularity relates the object area of the BLOB object to the maximum distance of a contour point from the centre of gravity: C = F / (max² · π). A narrow notch in the object (pizza slice) has a small effect on the circularity. A perfectly round circle has the value "100". The output is only available for uncalibrated devices.
Formula symbols:
• "F": area of the region.
• "max": maximum distance from the centre of gravity to all contour points.
• "C": circularity.
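Both shape measures can be reproduced offline from a list of contour points, for example to decide which of them discriminates better for a given part. The Python sketch below assumes the formulas implied by the symbol legends (mean-deviation-based roundness and C = F / (max² · π)) and uses the contour-point centroid as the centre of gravity; it is an illustration, not the device implementation:

```python
import math

def shape_measures(points: list[tuple[float, float]]) -> tuple[float, float]:
    """Returns (roundness, circularity), both scaled to 0..100 like the logic block.

    Assumption: roundness = 100 * (1 - sigma / mean distance),
    circularity = 100 * F / (max^2 * pi), as implied by the symbol legends.
    """
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    mean = sum(dists) / len(dists)
    sigma = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
    roundness = 100.0 * (1.0 - sigma / mean)

    # Polygon area "F" via the shoelace formula
    area = 0.5 * abs(sum(
        points[i][0] * points[(i + 1) % len(points)][1]
        - points[(i + 1) % len(points)][0] * points[i][1]
        for i in range(len(points))
    ))
    circularity = 100.0 * area / (max(dists) ** 2 * math.pi)
    return roundness, circularity

# A regular 360-gon approximating a circle of radius 50: both values near 100
circle = [(50 * math.cos(t * math.pi / 180), 50 * math.sin(t * math.pi / 180))
          for t in range(360)]
```

For the near-perfect circle above, both measures come out close to 100, matching the behaviour described in the table; a notched "pizza slice" contour lowers the roundness far more than the circularity.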

With the [Rectangular] and [Rectangular (calibrated)] functions, the logic block provides the following
outputs:

Out Number format Description

[Valid rectangularity] bool Validity of the rectangularity.
[Rectangularity] numeric The rectangularity of the object. A perfect rectangle has the value "100".
[Valid inner width] bool Validity of the inner width.
[Inner width] numeric The width of the largest rectangle that fits completely inside the object and whose sides are parallel to the image border.
[Valid inner height] bool Validity of the inner height.
[Inner height] numeric The height of the largest rectangle that fits completely inside the object and whose sides are parallel to the image border.


With the [Greyscales] function, the logic block provides the following outputs:

Out Number format Description

[Valid min. greyscale value] bool Validity of the minimum greyscale value.
[Min. greyscale value] numeric The lowest greyscale value of the object.
[Valid max. greyscale value] bool Validity of the maximum greyscale value.
[Max. greyscale value] numeric The highest greyscale value of the object.
[Valid greyscale value deviation] bool Validity of the greyscale value deviation.
[Greyscale value deviation] numeric The standard deviation of the greyscale values of the object (homogeneity). The value is low for uniformly grey objects and high for irregular surfaces or greyscale gradients.
[Valid average greyscale value] bool Validity of the average greyscale value.
[Average greyscale value] numeric The average greyscale value of the object.

With the [Other] function, the logic block provides the following outputs:

Out Number format Description

[Valid compactness] bool Validity of compactness.
[Compactness] numeric The compactness of the object. Empty regions have the value "0"; circular objects have the value "1". Long narrow objects have average values. Entangled objects and objects with holes have high values.
[Valid number of holes] bool Validity of the number of holes.
[Number of holes] numeric The number of holes in the object.
[Valid orientation] bool Validity of the orientation.
[Orientation] numeric The orientation of the object in degrees.

With the [Shape] function, the logic block provides the following outputs:

Out Number format Description

[Valid perimeter] bool Validity of the perimeter.
[Perimeter] numeric Length of the outer contour of the object.
[Valid centre X of the bounding box] bool Validity of the horizontal coordinate of the object's geometric centre.
[Centre X of the bounding box] numeric Horizontal coordinate of the object's geometric centre, measured from the left image border.
[Valid centre Y of the bounding box] bool Validity of the vertical coordinate of the object's geometric centre.
[Centre Y of the bounding box] numeric Vertical coordinate of the object's geometric centre, measured from the top image border.

With the [ROI result] and [ROI result (calibrated)] functions, the logic block provides the following
outputs:

Out Number format Description


ROI passed bool Status of an ROI.
Number of matches numeric Number of objects found within the ROI.
Total area numeric Size of the total area within the ROI.

With the [Model overview] and [Model overview (calibrated)] functions, the logic block provides the following outputs:

Out Number format Description

All ROIs passed bool Status of all ROIs.
Number of model objects numeric Number of objects found within the model.
Total area model numeric Size of the total area within the model.

10.4.6 Logic block [Application result]


The following logic blocks are provided in the [Application result] area:

Logic block Number format on the output Description

[Application results] bool Displays the state of the models contained in the application:
[All models found]
[Not all models found]
[Image quality] [Sharp warning]: bool; [Sharpness]: numerical; [Brightness warning]: bool; [Brightness]: numerical; [Under-exposed warning]: bool; [Under-exposed]: numerical; [Overexposed warning]: bool; [Overexposed]: numerical Provides the warning flag and value for each quality aspect (sharpness, brightness, underexposure, overexposure).
[Anchor tracking result] [Valid]: bool; [Orientation]: numerical; [Displacement X]: numerical; [Shift Y]: numerical Displays values of the anchor tracking at the outputs.

10.4.7 Logic blocks [String operations]


The following logic blocks are provided in the [String operations] area:

Logic block Number format on the input Number format on the output Description

[Fixed string] - alphanumeric Provides an adjustable character string which is used for operations with character strings.
[PCIC input string] - alphanumeric Provides an adjustable character string (ID 00 to 09) which is transferred to a controller for operations. The character string can be changed during the runtime with the J Command (see separate Programmer’s Guide).
[Equal strings] alphanumeric bool Compares the character strings on both inputs for identical content:
a==b = 1: The character strings are identical.
a!=b = 1: The character strings are not identical.
[Match regex] alphanumeric alphanumeric Applies a regular expression (regex) to the character string on the input. If an expression is found, the [Match pattern] output provides a Boolean 1. The expression found is provided at the [Output string] output.
Example: For \b([0-9]{4})\b, the result is True if the code has exactly 4 digits.
[Match pattern] alphanumeric bool Searches for a pattern in the input string. Wildcards such as * and ? are accepted for the pattern (example: *.png). If the input string contains the pattern, a Boolean 1 is provided at the output.
[Split by delimiter] alphanumeric alphanumeric Searches for the delimiter in the character string. The character string is split at the positions of the delimiter. The split segments are provided one after the other on the 7 outputs without the delimiter. If the character string at the input is split into more than 7 segments, the segments beyond the 7th are not provided at the outputs.
[Split string at position] character string: alphanumeric, position: numeric alphanumeric Splits a character string at a certain position. The split character string is provided on the outputs.
[Concatenation] alphanumeric alphanumeric Concatenates up to 7 character strings, optionally with a delimiter. The character strings and the optional delimiter are provided via the [Fixed string] logic blocks, for example.
[Selection] bool / alphanumeric alphanumeric If there is a Boolean 0 on the [Selection (true / false)] input, the character string on the [Option false] input is provided at the output. If there is a Boolean 1 on the [Selection (true / false)] input, the character string on the [Option true] input is provided at the output.
[Find first wildcard match] string [Is a match]: bool; [ID of match]: number; [Match string]: string Applies the pattern to all strings until the result is true. If successful, the outputs contain the state, the ID of the string and the matching string. Wildcards such as * and ? are accepted for the pattern (example: *.png). Case sensitivity can be activated in the settings.
[Case converter] string string Converts the string at the input into lower- or upper-case letters.
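The behaviour of the string blocks can be prototyped on a PC before a rule is entered in the logic editor. The following Python sketch mimics [Match regex], [Match pattern] and [Split by delimiter]; the 7-output limit and the wildcard syntax are taken from the table above, while the function names are illustrative and not part of the device API:

```python
import re
from fnmatch import fnmatch

def match_regex(text: str, pattern: str):
    """Mimics [Match regex]: returns (match found, matched substring)."""
    m = re.search(pattern, text)
    return (m is not None, m.group(0) if m else "")

def match_pattern(text: str, pattern: str) -> bool:
    """Mimics [Match pattern]: wildcard match with * and ?."""
    return fnmatch(text, pattern)

def split_by_delimiter(text: str, delimiter: str):
    """Mimics [Split by delimiter]: at most 7 output pins, surplus segments dropped."""
    segments = text.split(delimiter)
    return (segments + [""] * 7)[:7]
```

For example, `match_regex("1234", r"\b([0-9]{4})\b")` reproduces the documented 4-digit example, and `split_by_delimiter("a;b;c", ";")` fills the first three outputs and leaves the remaining four empty.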


10.4.8 Logic blocks [Binary operations]


The following logic blocks are provided in the area [Binary operations]:

Logic block Number format on the input Number format on the output Description
[Fixed binary data] - byte array Provides adjustable binary data which is used for operations with binary data.
[Binary data input] - byte array Provides adjustable binary data (ID 00 to 09) which is transferred to a controller for operations. The binary data can be changed during the runtime with the J Command (see separate document Programmer’s Guide).
[Equal bytes] byte array bool Compares the binary data at the inputs for identical content:
a==b = 1: The binary data is identical.
a!=b = 1: The binary data is not identical.
[Split binary by delimiter] byte array byte array Searches for the delimiter in the binary data. The binary data is split at the positions of the delimiter. The split binary data is provided one after the other at the 7 outputs without the delimiter. If the binary data at the input is split into more than 7 segments, the segments beyond the 7th are not provided at the outputs.
[Split binary at position] byte array byte array Splits the binary data at a certain position. The split binary data is provided at the outputs.
[Concatenate binaries] byte array byte array Concatenates up to 7 binary data items, optionally with a delimiter. The binary data and the optional delimiter are provided via the [Fixed binary data] logic blocks, for example.
[Select binary] byte array byte array If there is a Boolean 0 at the [Switch (0/1)] input, the binary data at the [Option 0] input is provided at the output. If there is a Boolean 1 at the [Switch (0/1)] input, the binary data at the [Option 1] input is provided at the output.

10.4.9 Logic blocks [Arithmetic]


The following logic blocks are provided in the area [Arithmetic]:

Logic block Number format on the input Number format on the output Description
[DIFF] numeric numeric The signals on the inputs are subtracted. The two outputs provide the result with different signs.
[ADD] numeric numeric The signals on the inputs are added.

[COUNT] bool numeric The signals on the inputs are added. The Boolean values at the input are treated as numerical values.
[Min max value] numeric numeric The minimum and maximum values are determined on the basis of the signals on the inputs.
[Fixed value] – numeric A floating point number is set as a fixed value. The fixed value is provided and can be used for the logic blocks [DIFF] and [ADD] (for example to set an offset).
[Distance between points] numeric numeric Calculates the distance between 2 points in pixels. All 4 values are necessary for the calculation.
[DIV] numeric [a/b]: numeric; [Valid]: bool Divides [a] by [b]. If divided by zero, [a/b] will output 0 and [Valid] false.
[MUL] numeric numeric Multiplies [a] by [b].
[MOD] numeric [a%b]: numeric; [Remainder]: numeric; [Valid]: bool Calculates the quotient and remainder from the modulo operation of [a] by [b]. If divided by zero, [a%b] will output 0 and [Valid] false.
[Amount] numeric numeric Outputs the absolute value of [a]. The values at the input and output have the same number format.
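The divide-by-zero behaviour of [DIV] and [MOD] and the point distance of [Distance between points] can be sketched in Python. The function names are illustrative, and the mapping of the [a%b] pin to the quotient follows the order in the description above (the table itself is ambiguous on this point):

```python
import math

def div(a: float, b: float):
    """Mimics [DIV]: outputs ([a/b], [Valid])."""
    if b == 0:
        return 0.0, False            # divide by zero: result pin 0, valid flag false
    return a / b, True

def mod(a: float, b: float):
    """Mimics [MOD]: outputs (quotient, [Remainder], [Valid]).

    Assumption: the [a%b] pin carries the quotient, per the description order.
    """
    if b == 0:
        return 0.0, 0.0, False
    quotient = math.floor(a / b)
    return quotient, a - quotient * b, True

def distance(x1: float, y1: float, x2: float, y2: float) -> float:
    """Mimics [Distance between points]: Euclidean distance in pixels."""
    return math.hypot(x2 - x1, y2 - y1)
```

For example, `div(1, 0)` yields `(0.0, False)`, matching the documented zero-division behaviour, and `distance(0, 0, 3, 4)` yields `5.0`.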

10.4.10 Logic blocks [Converter]


The following logic blocks are provided in the area [Converter]:

Logic block Number format on the input Number format on the output Description
[String to number] alphanumeric numeric Converts the alphanumeric data at the input to numeric data at the output.
[Number to string] numeric alphanumeric Converts the numeric data at the input to alphanumeric data at the output.
[Binary to string] byte array alphanumeric Converts the binary data at the input to alphanumeric data at the output.
[String to binary] alphanumeric byte array Converts the alphanumeric data at the input to binary data at the output.
[Number to binary] numeric byte array Converts the numeric data at the input to binary data at the output.

[Bool to string] bool alphanumeric / numeric Converts the Boolean data at the input depending on the setting of the logic element:
[Word] setting: outputs the alphanumeric value [true] or [false].
[Numeric] setting: outputs the numeric value 1 or 0.
[Bool to binary] bool byte array Converts the Boolean data at the input to binary data at the output.
[String to bool] alphanumeric bool Converts the alphanumeric data at the input to Boolean data at the output.
[Binary to bool] byte array bool Converts the binary data at the input to Boolean data at the output.
[Number to bits] numeric bool Converts the numeric data at the input into an integer number with 32 bits. The lowest 8 bits are output. If the [Cap bit] setting of the logic element is activated and set to >0, true is present at all outputs if the number at the input would result in a larger bit value.
Example for [Cap bit] = >0: There are 5 pins available at the output. If a number >=32 appears at the input, true is present at all outputs.
[Bits to number] bool numeric Converts bit data at the input to numeric data between 0 and 255. Unused inputs are treated as 0.
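The bit packing of [Number to bits] and [Bits to number] can be sketched as follows. This is a Python illustration of the documented behaviour, including the saturating [Cap bit] option; the function names and the LSB-first pin order are assumptions:

```python
def number_to_bits(value: int, pins: int = 8, cap: bool = False) -> list[bool]:
    """Mimics [Number to bits]: lowest `pins` bits of a 32-bit integer, LSB first."""
    value &= 0xFFFFFFFF                      # treat the input as a 32-bit integer
    if cap and value >= (1 << pins):
        return [True] * pins                 # [Cap bit]: saturate all outputs
    return [bool((value >> i) & 1) for i in range(pins)]

def bits_to_number(bits: list[bool]) -> int:
    """Mimics [Bits to number]: up to 8 bit inputs, unused inputs treated as 0."""
    bits = (list(bits) + [False] * 8)[:8]
    return sum(1 << i for i, b in enumerate(bits) if b)
```

With 5 output pins and [Cap bit] active, `number_to_bits(40, pins=5, cap=True)` returns all-true, matching the example in the table.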

10.4.11 Logic blocks [Digitalisation]


The following logic blocks are provided in the area [Digitalisation]:

Logic block Number format on the input Number format on the output Description
[Min. quality check] numeric bool Compares the [quality grading value] at the input with a set comparison value. Possible comparison values: A-F or 4-0. If the result of the comparison is better than or equal to the comparison value, 1 will be output.
[Comparator] numeric bool Compares the signals at the inputs with each other. Signals on the outputs:
1: The relation displayed in the output name applies.
0: The relation displayed in the output name does not apply.

[Window function FNC] numeric bool Compares the [value] input with the 2 threshold values [THR1] and [THR2]. The output behaves like an NC contact (normally closed):
0: Value < THR1
0: Value > THR2
1: Value ≥ THR1 AND Value ≤ THR2
[Window function FNO] numeric bool Compares the [value] input with the 2 threshold values [THR1] and [THR2]. The output behaves like an NO contact (normally open):
1: Value < THR1
1: Value > THR2
0: Value ≥ THR1 AND Value ≤ THR2
[Window function FNCh] numeric bool Compares the [value] input with the 2 threshold values [THR1] and [THR2]. The output behaves like an NC contact (normally closed). Switching at the threshold values is delayed (switching hysteresis):
0: Value < (THR1 - 0.5 * hysteresis)
0: Value > (THR2 + 0.5 * hysteresis)
1: Value ≥ (THR1 + 0.5 * hysteresis) AND Value ≤ (THR2 - 0.5 * hysteresis)
The output value remains unchanged in the hysteresis ranges:
Hysteresis = 0.05 * (THR2 - THR1)
Hysteresis = 2 mm if 0.05 * (THR2 - THR1) ≤ 2 mm
[Window function FNOh] numeric bool Compares the [value] input with the 2 threshold values [THR1] and [THR2]. The output behaves like an NO contact (normally open). Switching at the threshold values is delayed (switching hysteresis):
1: Value < (THR1 - 0.5 * hysteresis)
1: Value > (THR2 + 0.5 * hysteresis)
0: Value ≥ (THR1 + 0.5 * hysteresis) AND Value ≤ (THR2 - 0.5 * hysteresis)
The output value remains unchanged in the hysteresis ranges:
Hysteresis = 0.05 * (THR2 - THR1)
Hysteresis = 2 mm if 0.05 * (THR2 - THR1) ≤ 2 mm
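The four window functions differ only in polarity and hysteresis. The Python sketch below follows the threshold and hysteresis formulas given above for FNC and FNCh (a stateful illustration with hypothetical names; the 2 mm lower bound applies to calibrated distance values and is omitted here):

```python
def window_fnc(value: float, thr1: float, thr2: float) -> bool:
    """Mimics [Window function FNC]: 1 inside [THR1, THR2], no hysteresis."""
    return thr1 <= value <= thr2

class WindowFNCh:
    """Mimics [Window function FNCh]: NC window with switching hysteresis."""

    def __init__(self, thr1: float, thr2: float):
        self.thr1, self.thr2 = thr1, thr2
        self.hyst = 0.05 * (thr2 - thr1)     # documented: 5 % of the window width
        self.state = False

    def update(self, value: float) -> bool:
        if value < self.thr1 - 0.5 * self.hyst or value > self.thr2 + 0.5 * self.hyst:
            self.state = False
        elif self.thr1 + 0.5 * self.hyst <= value <= self.thr2 - 0.5 * self.hyst:
            self.state = True
        # inside the hysteresis bands the output value remains unchanged
        return self.state
```

For THR1 = 0 and THR2 = 10, the hysteresis is 0.5, so a value of 10.1 leaves a previously true output unchanged, while 10.3 switches it off; this is the delayed switching the table describes.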

[Hysteresis HNC] numeric bool Compares the [value] input with the 2 threshold values [THR1] and [THR2]. The output behaves like an NC contact (normally closed). The threshold values correspond to the switching values of the hysteresis:
0: Value < THR1
1: Value > THR2
In the hysteresis range, the output value remains unchanged (THR1 ≤ Value ≤ THR2).
[Hysteresis HNO] numeric bool Compares the [value] input with the 2 threshold values [THR1] and [THR2]. The output behaves like an NO contact (normally open). The threshold values correspond to the switching values of the hysteresis:
1: Value < THR1
0: Value > THR2
In the hysteresis range, the output value remains unchanged (THR1 ≤ Value ≤ THR2).
[String changed] alphanumeric bool Monitors a character string at the input for changes:
true: The character string has changed.
false: The character string has not changed.
After changing or switching an application, a true will be output once, regardless of the character string at the input.

[Number changed] numeric bool Monitors a number at the input for changes:
true: The number has changed.
false: The number has not changed.
A tolerance can be set. Subsequently, only changes that exceed the tolerance will be detected.
u For integer numbers at the input, set the tolerance to 0.
u For floating point numbers at the input, set the tolerance to >0.
w This takes into account natural inaccuracies of floating point numbers.
If the tolerance is set to >0, the comparison value remains unchanged until the tolerance has been exceeded. This allows subtle value deviations to be detected.
After changing or switching an application, a true will be output once, regardless of the number at the input.
Example: The tolerance is set to 0.1. The following values are at the input: 1 | 1.03 | 1.06 | 1.09 | 1.12 | 1.15 | 1.22 | 1.23. The logic element outputs the following values: true, false, false, false, true, false, false, true.
[Bool changed] bool bool Monitors a Boolean value at the input for changes:
true: The value has changed.
false: The value has not changed.
After changing or switching an application, a true will be output once, regardless of the value at the input.
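The tolerance behaviour of [Number changed] can be reproduced with a few lines of Python. The sketch follows the documented rule (the comparison value only moves once the tolerance is exceeded) and reproduces the worked example; the class name is illustrative, and a small epsilon absorbs binary floating-point noise in the comparison:

```python
EPS = 1e-9  # absorbs binary floating-point noise in the comparison

class NumberChanged:
    """Mimics the [Number changed] logic block with a tolerance."""

    def __init__(self, tolerance: float = 0.0):
        self.tolerance = tolerance
        self.reference = None        # comparison value; None until the first value arrives

    def update(self, value: float) -> bool:
        if self.reference is None:   # first value after an application switch: report a change
            self.reference = value
            return True
        if abs(value - self.reference) > self.tolerance + EPS:
            self.reference = value   # reference only moves when the tolerance is exceeded
            return True
        return False

# Worked example from the table: tolerance 0.1
block = NumberChanged(tolerance=0.1)
outputs = [block.update(v) for v in [1, 1.03, 1.06, 1.09, 1.12, 1.15, 1.22, 1.23]]
```

Because the reference value holds at 1 until 1.12 arrives, the slow drift is eventually detected even though each single step stays within the tolerance.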

10.4.12 Logic blocks [Logical functions]


The following logic blocks are provided in the [Logical functions] area:

Logic block Number format on the input Number format on the output Description
[AND] bool bool Compares the signals on the inputs with each other. Signals on the output:
1: All signals on the inputs are 1.
0: At least one signal on the inputs is 0.

[OR] bool bool Compares the signals on the inputs with each other. Signals on the output:
1: At least one signal on the inputs is 1.
0: All signals on the inputs are 0.
[Terminal board] bool bool Transmits the signal on the input to the output.
[NOT] bool bool Inverts the signal on the input. Signals on the output:
1: The signal on the input is 0.
0: The signal on the input is 1.
[Fixed bool] bool bool Outputs an adjustable Boolean value.

10.4.13 Logic blocks [Output]


The following logic blocks are provided in the area [Output]:

Logic block Number format on the input Description

[String output] alphanumeric Saves the received string. The logic block [String output] is available up to 10 times. The content of the logic blocks is retrieved via the process interface. (Ò Interfaces / 146)
[Binary output] byte array Saves the received binary data. The logic block [Binary output] is available up to 10 times. The byte array has a maximum size of 256 bytes. The content of the logic blocks is retrieved via the process interface. (Ò Interfaces / 146)
[FTP file prefix] string Uses the string at the input as a prefix for the FTP/SFTP file name. A maximum of 128 characters is accepted as a string; longer strings are cut off.
[FTP file suffix] string Uses the string at the input as a suffix for the FTP/SFTP file name. A maximum of 128 characters is accepted as a string; longer strings are cut off.
[DIGITAL_OUT1], [DIGITAL_OUT2] bool Switches the digital output with or without limited signal duration. The digital outputs have the following settings:
[Static]: switches the digital output without a limited signal duration (recommended setting).
[Pulsed]: switches the digital output with a limited signal duration (>= 10 ms).
[Virtual pins bytes 1-8] bool Transfers the data from the logic area to an interface. The virtual pins are memory banks. A virtual pin consists of an 8-bit word. The 8 virtual pins are arranged in sequence to a maximum of 64 Boolean values and provided via an interface. (Ò Interfaces / 146) Non-assigned virtual pins provide a Boolean 0.
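On the controller side, the 64 Boolean values arriving via the virtual pins can be unpacked from the 8 bytes of the interface telegram. A Python sketch follows; the LSB-first bit order within each byte and the byte order are assumptions here, so check the separate Programmer’s Guide for the exact telegram layout:

```python
def unpack_virtual_pins(data: bytes) -> list[bool]:
    """Unpacks up to 8 virtual-pin bytes into 64 Boolean values (LSB first per byte)."""
    data = (data + b"\x00" * 8)[:8]          # non-assigned virtual pins read as 0
    return [bool((byte >> bit) & 1) for byte in data for bit in range(8)]

def pack_virtual_pins(values: list[bool]) -> bytes:
    """Packs up to 64 Boolean values back into 8 bytes."""
    values = (list(values) + [False] * 64)[:64]
    return bytes(
        sum(1 << bit for bit in range(8) if values[byte * 8 + bit])
        for byte in range(8)
    )
```

Packing and unpacking are inverse operations, so a round trip returns the original 64 flags; missing bytes are treated as Boolean 0, mirroring the non-assigned pins.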


10.4.14 Logic blocks [Pin events]


The following logic blocks are provided in the [Pin events] area:

Logic block Number format on the output Description

[Ready for trigger] bool The device is ready for a trigger to capture a new image.
[Error] bool The device has found an error.
[Image capture finished] bool The device has finished the image capture.
[Process Interface] bool The digital output is switched via the process interface with the o command (set logic state of an ID). The command is described in the separate “Programmer’s Guide”: documentation.ifm.com
[Change of application finished] bool The application was successfully changed. The logic element can only be connected to the [DIGITAL_OUTx] logic elements.

10.4.15 Logic element [Result status]


The following logic block is provided in the [Result status] area:

Logic block Number format on the input Description

[Status definition: pass/fail] bool Outputs the result of an application:
1: The application was successfully executed.
0: The application was not successfully executed.
The result is written to the service report and is available for statistical calculations.
[Publish result (scanner mode)] bool Displays a result on the output interfaces (I/O, PCIC, fieldbuses, FTP, USB) when true is present at the input. If the logic block is not used, the results of the output interfaces are published as usual. In edit mode, the logic block only takes effect in test mode.

10.4.16 Example – code found


A digital signal is provided when the device finds a code.


Fig. 47: Example of “Code found”

If the model set in the [Application results] logic block detects the code, the [DIGITAL_OUT2] logic block will provide the signal High. If no code is detected, the [DIGITAL_OUT2] logic block will provide the signal Low.

10.4.17 Example – compare reference code


A digital signal is provided if two character strings are identical.

Fig. 48: Example of “Compare reference code”


If the code content of the set model is identical with the character string in the [Fixed string] logic block, the [DIGITAL_OUT2] logic block will provide the signal High. If no code is recognised, the [DIGITAL_OUT2] logic block will provide the signal Low.

10.4.18 Example – compare reference code


A digital signal is provided if two character strings are identical. One of the character strings can be
adapted via the process interface.

Fig. 49: Example of “Compare reference code”

If the code content of the set model matches the character string in the [PCIP input string] logic block,
the [DIGITAL_OUT2] logic block will provide the signal High . If no code is recognised, the
[DIGITAL_OUT2] logic block will provide the signal Low .
The [PCIP input string] logic block is addressed with the j command via the process interface.

The j command is described in the separate Programmer’s Guide.

10.4.19 Example – compare distance values


The X and Y positions of two models are compared.


Fig. 50: Example of “Compare distance values”

If the distance is greater than “450 pixels”, the “High” signal is output.
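The comparison in Fig. 50 can be sketched in Python (a minimal illustration with hypothetical pixel positions; on the device this calculation is performed by the logic blocks):

```python
import math

def distance_exceeds(x1, y1, x2, y2, limit=450.0):
    """Return True (High) if the Euclidean distance between the
    positions of two models exceeds the limit in pixels, else
    False (Low)."""
    return math.hypot(x2 - x1, y2 - y1) > limit

# Hypothetical model positions in pixels
print(distance_exceeds(100, 100, 600, 200))  # distance ~ 509.9 -> True
print(distance_exceeds(0, 0, 300, 300))      # distance ~ 424.3 -> False
```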

10.4.20 Example – counter and comparator


Sums up the ROI results of the models and outputs them as a binary value.

Fig. 51: Example of a counter and comparator

In addition, the sum of the ROI results is compared with a constant and statistically recorded if there
are fewer than five ROI results.
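The counter/comparator logic of Fig. 51 can be sketched as follows (a minimal Python sketch with illustrative names; the device evaluates this in the logic layer):

```python
def counter_comparator(roi_results, limit=5):
    """Sum up boolean ROI results and compare the count against a
    constant, mirroring the counter and comparator logic blocks."""
    total = sum(1 for result in roi_results if result)
    return total, total < limit

# Hypothetical ROI results of several models
total, below_limit = counter_comparator([True, True, False, True])
print(total, below_limit)  # 3 True
```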

10.4.21 Example – converter


The number of objects found in a model is converted into a string and output.


Fig. 52: Example of a converter

10.5 Interfaces
The [Interfaces] function sets the interfaces of the selected application.
For this purpose, the data packages that are sent via the [TCP/IP] and [IO-Link] interfaces are defined.



Fig. 53: [Interfaces] function


1 Settings 2 Main area
3 String output 4 Overview area

Settings
The interfaces are set in the [TCP/IP] and [IO-Link] area. The area contains the following operating
elements:

Operating element Type Description


Button Imports the configuration from a file with
the extension *.o2xpcic .
Button Exports the configuration to a file with
the extension *.o2xpcic .
Button Generates the TCP/IP command of the
current configuration in the main area.
The c command sets the TCP/IP inter-
face configuration. The command is de-
scribed in the separate “Programmer’s
Guide”: documentation.ifm.com
The button has no function for the [IO-
Link] interface.
Button Deletes the created configuration in the
main area after confirmation.
The configuration cannot be restored.
Button Resets the setting to the default setting.

[Presets] List Selects sets of preset data packages. In-


cluded are sets for typical applications
and robot manufacturers.
The sets can be adapted. An adapted
set is saved as a [Custom] preset.

[Data encoding] List Sets the data encoding:
[ASCII]
[binary]
[Precision] Input field Sets the number of decimal places.
[Display format] List Sets the display format:
[Fixed]: fixed point number
[Scientific]: exponential
[Decimal separator] Input field Sets the decimal separator. The decimal
separator is a 7-bit character (e.g. . ).
[Base] List Sets the output format:
[Binary]: Base 2
[Octal]: Base 8
[Decimal]: Base 10
[Hex]: Base 16
[Width] Input field Sets the minimum total length of the val-
ue.
[Numeric fill] List Sets the values of non-used bits:
[On]: Each non-used bit is assigned a
Boolean 0 and positive values are
preceded by a plus sign.
[Off]: Bits which are not used remain
blank.
[Fill] Input field Sets the fill character.
[Alignment] List Sets the alignment of the value within
the defined bit width:
[Left]
[Right]
[Byte order] List Sets the byte order:
[little endian]: least significant byte of bi-
nary data at the first position or at the
lowest memory address.
[big endian]: most significant byte of bi-
nary data at the first position or at the
lowest memory address.
[Network byte order]: byte order speci-
fied by the network protocol.
[Fieldbus-dependent]: byte order speci-
fied by the fieldbus.
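The effect of several of these settings can be reproduced with standard formatting and packing functions. The sketch below is an illustration only; the device applies the settings internally when building the output string:

```python
import struct

value = 450  # hypothetical measured value

# [Display format] / [Precision]: fixed-point vs. exponential with 2 decimals
print(format(3.14159, '.2f'))   # 3.14
print(format(3.14159, '.2e'))   # 3.14e+00

# [Base] / [Width] / [Fill] / [Alignment]: hexadecimal, width 8,
# fill character '0', right-aligned
print(format(value, '0>8x'))    # 000001c2

# [Byte order] for a binary-encoded INT16 value
print(struct.pack('<h', value).hex())  # c201  (little endian)
print(struct.pack('>h', value).hex())  # 01c2  (big endian)
print(struct.pack('!h', value).hex())  # 01c2  (network byte order)
```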

Main area
The data packages of the interface are set in the main area. The data packages are displayed as
rectangles. The size and type are displayed above each data package.
The data packages are connected via dashed connecting lines. The data is sent from left to right in the
order of the data packages.
The [main area] contains the following operating elements:

Operating element Type Description


Button Adds a data package at the position. (Ò
Data packages / 149)
Button Sets the selected data package.

Button Deletes the selected data package.

Button Shows or hides the loop of the [Models]
data package.
The [Models] data package consists of
several data packages which are con-
nected via loops.
Depending on the ROIs contained, this
loop is passed through several times.

String output
The [String output] changes with the data packages in the main area. Depending on the data
encoding selected in the [Settings] area, [String output] is displayed as ASCII or binary code.
The [String output] area contains the following operating elements:

Operating element Type Description


[String output] List Sets the format of the [String output]:
[String output]: output as ASCII or binary
code, depending on the set data encod-
ing.
[Hex view]: output in hexadecimal sys-
tem. The output can be adjusted. (Ò
Output as hex view or INT16 ta-
ble / 165)
[INT16 table]: output as INT16 table.
The output can be adjusted. (Ò Output
as hex view or INT16 table / 165)
Button Adds an overlay to the output string. (Ò
Output as hex view or INT16 ta-
ble / 165)
Button Deletes the overlay. (Ò Output as hex
view or INT16 table / 165)
Button Copies the [String output] to the clip-
board.
Button Displays the data of data packages. (Ò
Display data from the data packag-
es / 165)

[String output] cannot be set directly. The setting is made via the data packages in the main
area.

Overview area
The [overview area] displays a reduced overview of the main area. The red frame is shifted using the
mouse. This way, the data packages outside the visible area can be displayed.

10.5.1 Data packages


A data package is inserted in the main area with the button. After clicking on the button, a list
opens. The data package is set with the elements in the list.

The content of the list is variable and depends on the position of the data package in the
[Output string].

Data Packages [General]


The list contains the following data packages in the area [General]:

Data package Description


[Start string] Adjustable character string for starting a data transfer.
[End string] Adjustable character string for ending a data transfer.

[User-driven input] Adjustable character string within data transfer.
[Index of the active application] Index of the active application
[Application decoding time [ms]] Evaluation time of the application in [ms]
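On the receiving side, such an output string can be split back into its data packages. The sketch below assumes a hypothetical configuration with start string start, end string stop and ; as delimiter; the actual strings are whatever is configured in the main area:

```python
def parse_output(frame, start='start', end='stop', delimiter=';'):
    """Split a received ASCII output string into its data packages.
    The start/end strings and the delimiter are assumptions about
    the configured data packages."""
    if not (frame.startswith(start) and frame.endswith(end)):
        raise ValueError('unexpected framing')
    body = frame[len(start):len(frame) - len(end)]
    # Drop the empty fields produced by delimiters next to the framing
    return [field for field in body.split(delimiter) if field]

# Hypothetical frame: application index 0, decoding time 12 ms
print(parse_output('start;0;12;stop'))  # ['0', '12']
```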

Data packages [Logic layer]


The list contains the following data packages in the area [Logic layer]:

Data package Description


[Reading result (pass/fail)] Reading result of the logic block “Status definition: pass/fail”.
(Ò Logic element [Result status] / 142)
[Number of bytes in output string 0-9] Size in bytes of the content of the logic block “Output string”.
(Ò Logic blocks [Output] / 141)
[Output string 0-9] Content of the logic block “Output string”. (Ò Logic blocks
[Output] / 141)
[Digital output] Bit order with the values at the digital outputs.
[Virtual output] 8-byte order with the values at the inputs of the virtual pins.

Data packages [Application results]


The area [Application results] contains the following data packages:

Data package Description


[Anchor model ID] Model ID of the anchor tracking
[Anchor result] Result of the anchor tracking (Ò Anchor result / 150)
[Number of images] Number of the images defined for the application.
[Images] The image information is output in a grouping element. (Ò Im-
ages / 151)
[Number of models] Number of the models created for the application.
[Models] The data of the defined models is provided one after the oth-
er. (Ò Models / 154)

Data package [Delimiter]


The data package [Delimiter] has the following function:

Data package Description


[Delimiter] Delimiter to split data packages.

10.5.1.1 Anchor result

Data packages [Anchor result]


The [Anchor result] area contains the following data packages:

Data package Description


[Valid] Model recognised.
[Translation X] Displacement on the X-axis, related to the taught position.
[Translation Y] Displacement on the Y-axis, related to the taught position.

[Rotation] The orientation of the object in degrees, related to the taught
position.

[User-driven input] Adjustable character string within data transfer.


[Delimiter] Delimiter to split data packages.

10.5.1.2 Images

Data packages [Images]

The quality values are only displayed when the image quality check is activated.
u Activate [Image quality check] for the selected image. (Ò Image quality check / 64)
w The quality values are displayed.

The [Images] area contains the following data packages:

Data package Description


[Image ID] ID of the image.
[Image blob] Provides the recorded images in JPEG format one after the
other. The header and image data must be decoded.
Information on decoding can be found in the Programmer’s
Guide of the device, in the chapters "Receiving Images" and
"Image Data":
www.ifm.com
[Quality] Quality values of the images. (Ò Quality / 151)
[User-driven input] Adjustable character string within data transfer.
[Delimiter] Delimiter to split data packages.

Quality

Data packages [Quality]

The quality values are only displayed when the image quality check is activated.
u Activate [Image quality check] for the selected image.
w The quality values are displayed.

The [Quality] area contains the following data packages:

Data package Description


[Overexposed area] Quality values for overexposed areas of the image. (Ò Over-
exposed area / 152)
[Underexposed area] Quality values for underexposed areas of the image. (Ò Un-
derexposed area / 152)
[Brightness] Quality values for brightness of the image. (Ò Bright-
ness / 153)
[Sharpness] Quality values for sharpness of the image. (Ò Sharp-
ness / 153)
[User-driven input] Adjustable character string within data transfer.
[Delimiter] Delimiter to split data packages.


Overexposed area

Data packages [Overexposed area]


The [Overexposed] area contains the following data packages:

Data package Description


[Warning] Warning if the [Value] is higher than [Threshold max] or lower
than [Threshold min].
[Value] Value for the image.
[Value min] Smallest [Value] reached since the start of the measure-
ments.
[Value max] Highest [Value] reached since the start of the measurements.
[Value short term] Short-term exponential smoothing (exponential filter). Formula:
(1 - 0.2) * [Value short term] + 0.2 * [Value]
[Value mid term] Medium-term exponential smoothing (exponential filter). Formula:
(1 - 0.02) * [Value mid term] + 0.02 * [Value]
[Value long term] Long-term exponential smoothing (exponential filter). Formula:
(1 - 0.005) * [Value long term] + 0.005 * [Value]
[Threshold min] The set [min. value] from the [Image quality check] area.
(Ò Image quality check / 64)
[Threshold max] The set [max. value] from the [Image quality check] area.
(Ò Image quality check / 64)
[Trend] Absolute deviation calculated from the last N values (default:
N = 500 ). The sign indicates the direction of the trend.
[User-driven input] Adjustable character string within data transfer.
[Delimiter character] Delimiter to split data packages.
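The short-, mid- and long-term values are exponentially smoothed measurements: each new [Value] is blended into the running value with a weight of 0.2, 0.02 or 0.005. A minimal Python sketch of one smoothing step (illustration only):

```python
def ema(previous, value, alpha):
    """One exponential smoothing step: the running value is updated
    with the newest measurement, weighted by alpha
    (0.2 short term, 0.02 mid term, 0.005 long term)."""
    return (1 - alpha) * previous + alpha * value

# Hypothetical sequence of quality values
short_term = 0.0
for measurement in [100, 100, 100]:
    short_term = ema(short_term, measurement, 0.2)
print(round(short_term, 2))  # 48.8 (slowly approaching 100)
```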

Underexposed area

Data packages [Underexposed area]


The [Underexposed] area contains the following data packages:

Data package Description


[Warning] Warning if the [Value] is higher than [Threshold max] or lower
than [Threshold min].
[Value] Value for the image.
[Value min] Smallest [Value] reached since the start of the measure-
ments.
[Value max] Highest [Value] reached since the start of the measurements.
[Value short term] Short-term exponential smoothing (exponential filter). Formula:
(1 - 0.2) * [Value short term] + 0.2 * [Value]
[Value mid term] Medium-term exponential smoothing (exponential filter). Formula:
(1 - 0.02) * [Value mid term] + 0.02 * [Value]
[Value long term] Long-term exponential smoothing (exponential filter). Formula:
(1 - 0.005) * [Value long term] + 0.005 * [Value]

[Threshold min] The set [min value] from the [Image quality check] area.
[Threshold max] The set [max value] from the [Image quality check] area.
[Trend] Absolute deviation calculated from the last N values (default:
N = 500 ). The sign indicates the direction of the trend.
[User-driven input] Adjustable character string within data transfer.
[Delimiter] Delimiter to split data packages.

Brightness

Data packages [Brightness]


The [Brightness] area contains the following data packages:

Data package Description


[Warning] Warning if the [Value] is higher than [Threshold max] or lower
than [Threshold min].
[Value] Value for the image.
[Value min] Smallest [Value] reached since the start of the measure-
ments.
[Value max] Highest [Value] reached since the start of the measurements.
[Value short term] Short-term exponential smoothing (exponential filter). Formula:
(1 - 0.2) * [Value short term] + 0.2 * [Value]
[Value mid term] Medium-term exponential smoothing (exponential filter). Formula:
(1 - 0.02) * [Value mid term] + 0.02 * [Value]
[Value long term] Long-term exponential smoothing (exponential filter). Formula:
(1 - 0.005) * [Value long term] + 0.005 * [Value]
[Threshold min] The set [min value] from the [Image quality check] area.
[Threshold max] The set [max value] from the [Image quality check] area.
[Trend] Absolute deviation calculated from the last N values (default:
N = 500 ). The sign indicates the direction of the trend.
[User-driven input] Adjustable character string within data transfer.
[Delimiter] Delimiter to split data packages.

Sharpness

Data packages [Sharpness]


The [Sharpness] area contains the following data packages:

Data package Description


[Warning] Warning if the [Value] is higher than [Threshold max] or lower
than [Threshold min].
[Value] Value for the image.
[Value min] Smallest [Value] reached since the start of the measure-
ments.
[Value max] Highest [Value] reached since the start of the measurements.

[Value short term] Short-term exponential smoothing (exponential filter). Formula:
(1 - 0.2) * [Value short term] + 0.2 * [Value]
[Value mid term] Medium-term exponential smoothing (exponential filter). Formula:
(1 - 0.02) * [Value mid term] + 0.02 * [Value]
[Value long term] Long-term exponential smoothing (exponential filter). Formula:
(1 - 0.005) * [Value long term] + 0.005 * [Value]
[Threshold min] The set [min value] from the [Image quality check] area.
[Threshold max] The set [max value] from the [Image quality check] area.
[Trend] Absolute deviation calculated from the last N values (default:
N = 500 ). The sign indicates the direction of the trend.
[User-driven input] Adjustable character string within data transfer.
[Delimiter] Delimiter to split data packages.

10.5.1.3 Models

[Models] data packages


The [Models] area contains the following data packages:

Data package Description


[Model ID] The [Model ID] consists of a consecutive number (0-999) in
the order in which the models were defined. After 999 IDs
have been assigned in an application, the IDs of deleted mod-
els are assigned again.
[Numeric model type] Type of model in numerical form.
[Model decoding status] Decoding status of the model. (Ò Logic blocks [Model re-
sults] / 121)
[Model: Number of codes searched for] Number of codes searched for in the model.
[Model pass/fail] Pass/fail status definition.
[Number of ROI results] Number of ROI groups in the model. (Ò Logic blocks [Model
results] / 121)
[ROI results] Results of the ROI groups (Ò ROI results / 154)
[All models passed] Status of the models.
[String model type] Model type as a string.
[Model name] Name of the model.
[User-driven input] Adjustable character string.
[Delimiter character] Delimiter to split data packages.

ROI results

[ROI results] data packages


The [ROI results] area contains the following data packages:

Data package Description


[ROI check] Results of ROI monitoring. (Ò ROI monitoring / 155)
[Contour detection ROI values] Results of the ROI values for contour detection. (Ò Contour
detection ROI values / 155)

[BLOB analysis ROI values] Results of the ROI values for the BLOB analysis. (Ò BLOB
analysis ROI values / 156)
[ROI ID] ID of an ROI.
[ROI passed] Status of an ROI.
[Number of codes found] Number of codes found within an ROI.
[Number of codes searched] Number of codes searched for.
[Codes] Results of the code search.
[Lines (OCR)] Results of text recognition.
[Number of lines (OCR)] Number of lines in text recognition.
[Number of contour matches] Number of contour matches within an ROI.
[Contour matches] Results of the contour matches. (Ò Contour matches / 158)
[Number of failed contour matches] Number of contour matches that do not fulfil [Model parame-
ters] such as [Min. score] or [Max. orientation]. (Ò Contour
detection / 104)
[Number of failed contour matches] will only be output when
the analysis mode is activated. (Ò Contour detection / 104)
[Failed contour matches] Contour matches that do not fulfil [Model parameters] such as
[Min. score] or [Max. orientation]. (Ò Failed contour
matches / 158)
[Failed contour matches] will only be output when the analysis
mode is activated. (Ò Contour detection / 104)
[Number of BLOBs] Number of BLOBs within an ROI.
[BLOB analysis] Results of the BLOB analysis. (Ò BLOB analysis / 159)
[Number of failed BLOBs] Number of BLOB objects that do not fulfil Model parameters
(Ò BLOB analysis / 95) or Object properties (Ò Object prop-
erties / 99). (Ò Failed BLOB analysis)
[Failed BLOB analysis] BLOB objects which do not fulfil Model parameters (Ò BLOB
analysis / 95) or Object properties (Ò Object proper-
ties / 99). (Ò Failed BLOB analysis)
[Image ID] ID of an image.
[User-driven input] Adjustable character string.
[Delimiter character] Delimiter to split data packages.

ROI monitoring

Data packages [ROI monitoring]


The [ROI monitoring] area contains the following data packages:

Data package Description


[ROI warning] Warning if [Min. distance to ROI] is smaller than the parame-
ter [Threshold ROI warning].
[Min. distance to ROI] Least distance of an object to the boundary of its ROI.
[Threshold ROI warning] Threshold value for the minimum distance between an ROI
and a code.
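For a rectangular ROI, the idea behind [Min. distance to ROI] can be illustrated as follows (hypothetical coordinates in pixels; a sketch only, since ROIs can also have other shapes):

```python
def min_distance_to_roi(x, y, left, top, right, bottom):
    """Smallest distance from an object position inside a rectangular
    ROI to the ROI boundary, in pixels. An [ROI warning] would be
    raised when this value falls below [Threshold ROI warning]."""
    return min(x - left, right - x, y - top, bottom - y)

# Hypothetical object at (120, 80) inside a 640x480 ROI
d = min_distance_to_roi(120, 80, 0, 0, 640, 480)
print(d, d < 30)  # 80 False (no warning for a threshold of 30)
```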

Contour detection ROI values

Data packages [Contour detection ROI values]


The [Contour detection ROI values] area contains the following data packages:

Data package Description


[ROI score min.] Value of the contour with the lowest score within an ROI.
[ROI score max.] Value of the contour with the highest score within an ROI.

[Matches centre X] Averaged X-value of all contours found in the ROI. The posi-
tion reference of the contour is decisive. (Ò Contour detec-
tion / 104)
[Matches centre Y] Averaged Y-value of all contours found in the ROI. The posi-
tion reference of the contour is decisive. (Ò Contour detec-
tion / 104)
[Number of objects] Number of matches of the contour detection.

BLOB analysis ROI values

Data packages [BLOB analysis ROI values]


The [BLOB analysis ROI values] area contains the following data packages:

Data package Description


[Number of objects] Number of matches of the BLOB analysis.
[Total area] Total area of all found objects

Codes

[Codes] data packages


The [Codes] area contains the following data packages:

Data package Description


[Geometry details] Details of the geometry of the code. (Ò Geometry de-
tails / 157)
[ROI check] Results of ROI monitoring. (Ò ROI check / 157)
[Code Index] Index of the code found.
[Code found] Message regarding code found.
[Code family] Family of the code found.
[Number of bytes in content] Number of bytes in the code content.
[Content] Content of the code.
[Number of bytes in composite code] Number of bytes in a composite code.
[Composite code (1D composite only)] Content of the 1D composite code.
[Has ISO 15415 quality] Code quality corresponds to ISO 15415.
[ISO 15415 numbers (2D only)] Quality of the code in numbers in accordance with
ISO 15415.
[ISO 15415 letters (2D only)] Quality of the code in letters in accordance with ISO 15415.
[Has ISO 15416] Code quality corresponds to ISO 15416.
[ISO 15416 numbers (1D only)] Quality of the code in numbers in accordance with
ISO 15416.
[ISO 15416 letters (1D only)] Quality of the code in letters in accordance with ISO 15416.
[Has AIM / ISO-TR29158 quality] Code quality corresponds to AIM / ISO-TR29158.
[AIM / ISO-TR29158 quality (numbers)] Quality of the code in numbers in accordance with AIM / ISO-
TR29158.
[AIM / ISO-TR29158 quality (letters)] Quality of the code in letters in accordance with AIM / ISO-
TR29158.
[Has SEMI-T10] Code quality corresponds to SEMI T10.
[SEMI-T10] Content of the SEMI T10 code.
[User-defined overall quality (number)] Configured overall quality of the code as a number.
[User-defined overall quality (letter)] Configured overall quality of the code as a letter.

[User-driven input] Adjustable character string.
[Delimiter character] Delimiter to split data packages.

Geometry details

[Geometry details] data packages


The [Geometry details] area contains the following data packages:

Data package Description


[Orientation] Orientation of the code in degrees.
[Centre point] Centre point of the code.
[Half width] Half width of the code.
[Half height] Half height of the code.
[Outline] Outline of the code.

ROI check

[ROI check] data packages


The [ROI check] area contains the following data packages:

Data package Description


[Distance warning to mean position] Warning if [Distance to mean position] is smaller than the pa-
rameter [Threshold distance to mean position].
[Distance to mean position] Smallest distance of a code to the boundary of an ROI.
[Threshold distance to mean position] Threshold value for the minimum distance between an ROI
and a code.
[Movement warning] If the distance of at least one code contour to the ROI falls be-
low the set value, a warning is issued.
[Movement score] A mean value is formed from N read codes and stored as the
mean position (Nmax = 100 , where N = number of codes
read). If the distance of a read code is greater than the set
value, a warning is issued.
The mean value is reset if a code is not read successfully.
[Threshold movement score] If an object continues to move in the same direction in suc-
cessive images, the probability that the code will continue to
move in that direction increases. If the probability exceeds the
set value, a warning is issued.
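The mean-position bookkeeping described for [Movement score] can be sketched as follows (a sketch under stated assumptions: positions as X/Y tuples, N max = 100, history cleared on a failed read; the device's exact update rule is internal):

```python
from collections import deque

class MeanPosition:
    """Running mean of the last N code positions (N max = 100).
    The history is cleared when a code is not read successfully."""

    def __init__(self, n_max=100):
        self.positions = deque(maxlen=n_max)

    def code_read(self, x, y):
        self.positions.append((x, y))

    def code_failed(self):
        self.positions.clear()

    def mean(self):
        if not self.positions:
            return None
        n = len(self.positions)
        return (sum(p[0] for p in self.positions) / n,
                sum(p[1] for p in self.positions) / n)

m = MeanPosition()
m.code_read(100, 200)
m.code_read(110, 210)
print(m.mean())   # (105.0, 205.0)
m.code_failed()   # unsuccessful read resets the mean
print(m.mean())   # None
```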

Lines (OCR)

[Lines (OCR)] data packages


The [Lines (OCR)] area contains the following data packages:

Data package Description


[Corrected text] Corrected detected text.
[Corrected text length] Length of the corrected detected text.
[Score] Evaluation of the detected text.
[Centre X] The centre of the detected text on the X-axis.
[Centre Y] The centre of the detected text on the Y-axis.
[Line index] Line index of the detected text.
[User-driven input] Adjustable character string.
[Delimiter character] Delimiter to split data packages.


Contour matches

Data packages [Contour matches]


The [Contour matches] area contains the following data packages:

Data package Description


[Robot sensor calibration] After calibrating (Ò Calibration wizards / 35) the device, the
following data packages are available:
[X (calibrated)]: calibrated X-position of the object in world/ro-
bot coordinates [m].
[Y (calibrated)]: calibrated Y-position of the object in world/ro-
bot coordinates [m].
[Z (calibrated)]: calibrated Z-position of the object in world/ro-
bot coordinates [m].
[Angle (calibrated)]: Calibrated orientation of the object in
world/robot coordinates.
[Rotation] Orientation of the object
[Position X] X-position of the object in world/robot coordinates [m].
[Position Y] Y-position of the object in world/robot coordinates [m].
[Score] Evaluation of the object.
[Object index] Index of the object.
[User-driven input] Adjustable character string.
[Delimiter] Delimiter to split data packages.

Failed contour matches

[Failed contour matches] data packages


[Failed contour matches] are contour matches that do not fulfil [Model parameters] such as
[Min. score] or [Max. orientation]. (Ò Contour detection / 104)
[Failed contour matches] will only be output when the [analysis mode] is activated. (Ò Contour
detection / 104)
The [Failed contour matches] area contains the following data packages:

Data package Description


[Robot sensor calibration] After calibrating the device, the following data packages are
available:
[X (calibrated)]: calibrated X-position of the object in world/ro-
bot coordinates [m].
[Y (calibrated)]: calibrated Y-position of the object in world/ro-
bot coordinates [m].
[Z (calibrated)]: calibrated Z-position of the object in world/ro-
bot coordinates [m].
[Angle (calibrated)]: Calibrated orientation of the object in
world/robot coordinates.
[Rotation] Orientation of the object
[Position X] X-position of the object in world/robot coordinates [m].
[Position Y] Y-position of the object in world/robot coordinates [m].
[Score] Evaluation of the object.
[Object index] Index of the object.
[User-driven input] Adjustable character string.
[Delimiter] Delimiter to split data packages.


BLOB analysis

Data packages [BLOB analysis]


The [BLOB analysis] area contains the following data packages:

Data package Description


[Robot sensor calibration] After calibrating (Ò Calibration wizards / 35) the device, the
following data packages are available:
[Centre X (calibrated)]: calibrated centre of the X-position of
the object in world/robot coordinates [m].
[Centre Y (calibrated)]: calibrated centre of the Y-position of
the object in world/robot coordinates [m].
[Centre Z (calibrated)]: calibrated centre of the Z-position of
the object in world/robot coordinates [m].
[Object index] Index of the object.
[Object area] The area of the object defined via the ROI and object index.

[Position X] The centre of gravity of the object on the X-axis, measured


from the left edge of the image.

[Position Y] The centre of gravity of the object on the Y-axis, measured


from the top of the image.

[Object height] The height of the smallest rectangle that completely encloses
the object and whose sides are parallel to the edge of the im-
age.

[Object width] The width of the smallest rectangle that completely encloses
the object and whose sides are parallel to the edge of the im-
age.

[Roundness] The roundness of the object describes the shape factor of the
contour. The distance of the contour to the centre of gravity of
the surface is measured. Wide cut-outs into the object do not
have much effect on the value, as the centre of gravity does
not change significantly.
A perfectly round circle has the value "100".

[Circularity] The circularity of the object describes the similarity to a per-


fect circle. A perfectly round circle has the value "100".

[Compactness] The compactness of the object. Empty regions have the value
"0"; circular objects have the value "1". Long narrow objects
have average values. Entangled objects and objects with
holes have high values.

[Rectangularity] The rectangularity of the object. A perfect rectangle has the


value "100".

[Outer radius] The radius of the smallest circle that completely encloses the
object.

[Inner radius] The radius of the largest circle that fits completely inside the
object.

[Inner width] The width of the largest rectangle that fits completely inside
the object and whose sides are parallel to the edge of the im-
age.

[Inner height] The height of the largest rectangle that fits completely inside
the object and whose sides are parallel to the edge of the im-
age.

[Number of holes] The number of holes in the object.

[Rotation] The orientation of the object in degrees.

[Min. grey-scale value] The lowest grey value of the object.

[Max. grey-scale value] The highest grey value of the object.

[Average grey-scale value] The average grey value of the object.

[Grey-scale value deviation] The standard deviation of the object (homogeneity). The val-
ue is low for uniformly grey objects and high for irregular sur-
faces or greyscale gradients.

[User-driven input] Adjustable character string.


[Delimiter] Delimiter to split data packages.
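Several of the BLOB properties above can be reproduced directly from the object's pixel coordinates. A minimal pure-Python sketch with a hypothetical pixel set, illustrating [Position X]/[Position Y] (centre of gravity) and the axis-parallel bounding box ([Object width], [Object height]):

```python
def blob_geometry(pixels):
    """Centre of gravity and axis-parallel bounding box of a BLOB,
    given its pixel coordinates as (x, y) tuples."""
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    centre = (sum(xs) / len(xs), sum(ys) / len(ys))
    width = max(xs) - min(xs) + 1    # smallest enclosing rectangle
    height = max(ys) - min(ys) + 1   # parallel to the image edges
    return centre, width, height

# Hypothetical 2x3 pixel object
pixels = [(10, 5), (11, 5), (10, 6), (11, 6), (10, 7), (11, 7)]
print(blob_geometry(pixels))  # ((10.5, 6.0), 2, 3)
```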


Failed BLOB analysis

[Failed BLOB analysis] data packages


[Failed BLOB analysis] are BLOB objects that do not fulfil Model parameters (Ò BLOB analysis / 95)
or Object properties (Ò Object properties / 99).
The [Failed BLOB analysis] area contains the following data packages:

Data package Description


[Robot sensor calibration] After calibrating the device, the following data packages are
available:
[Centre X (calibrated)]: calibrated centre of the X-position of
the object in world/robot coordinates [m].
[Centre Y (calibrated)]: calibrated centre of the Y-position of
the object in world/robot coordinates [m].
[Centre Z (calibrated)]: calibrated centre of the Z-position of
the object in world/robot coordinates [m].
[Object index] Index of the object.
[Object area] The area of the object defined via the ROI and object index.

[Position X] The centre of gravity of the object on the X-axis, measured


from the left edge of the image.

[Position Y] The centre of gravity of the object on the Y-axis, measured


from the top of the image.

[Object height] The height of the smallest rectangle that completely encloses
the object and whose sides are parallel to the edge of the im-
age.

[Object width] The width of the smallest rectangle that completely encloses
the object and whose sides are parallel to the edge of the im-
age.

[Roundness] The roundness of the object describes the shape factor of the
contour. The distance of the contour to the centre of gravity of
the surface is measured. Wide cut-outs into the object do not
have much effect on the value, as the centre of gravity does
not change significantly.
A perfectly round circle has the value "100".

[Circularity] The circularity of the object describes the similarity to a per-


fect circle. A perfectly round circle has the value "100".

[Compactness] The compactness of the object. Empty regions have the value
"0"; circular objects have the value "1". Long narrow objects
have average values. Entangled objects and objects with
holes have high values.

[Rectangularity] The rectangularity of the object. A perfect rectangle has the


value "100".

[Outer radius] The radius of the smallest circle that the object completely en-
closes.

[Inner radius] The radius of the largest circle that fits completely inside the
object.

[Inner width] The width of the largest rectangle that fits completely inside
the object and whose sides are parallel to the edge of the im-
age.

[Inner height] The height of the largest rectangle that fits completely inside the object and whose sides are parallel to the edge of the image.

[Number of holes] The number of holes in the object.

[Rotation] The orientation of the object in degrees.

[Min. grey-scale value] The lowest grey value of the object.

[Max. grey-scale value] The highest grey value of the object.

[Average grey-scale value] The average grey value of the object.

[Grey-scale value deviation] The standard deviation of the grey values of the object (homogeneity). The value is low for uniformly grey objects and high for irregular surfaces or greyscale gradients.

[User-driven input] Adjustable character string.

[Delimiter] Delimiter to split data packages.
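Several of the geometric data packages above can be reproduced from a binary object mask. The following Python sketch is purely illustrative (it is not the device's algorithm) and computes the [Object area], [Position X]/[Position Y] and [Object width]/[Object height] properties for a small example mask:

```python
def blob_features(mask):
    """Compute simple BLOB properties from a binary mask (list of rows of 0/1).

    Returns the area, the centre of gravity (Position X/Y) and the
    axis-parallel bounding box (Object width/height), analogous to the
    data packages listed above.
    """
    pixels = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pixels:
        return None  # no object found
    area = len(pixels)
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return {
        "Object area": area,
        "Position X": sum(xs) / area,           # centre of gravity, from the left edge
        "Position Y": sum(ys) / area,           # centre of gravity, from the top
        "Object width": max(xs) - min(xs) + 1,  # smallest enclosing axis-parallel rectangle
        "Object height": max(ys) - min(ys) + 1,
    }

mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
print(blob_features(mask))
```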

10.5.2 Example of “Provide overall quality”

In this example, the data packages are configured so that the overall quality is provided via the interface.


Fig. 54: Provide overall quality

10.5.3 Output as hex view or INT16 table


The output of the clipboard as a [hex view] or [INT16 table] can be customised:

Operating element Type Description

[Little endian] List Sets the byte order of the integer values. The setting is only available for [INT16 table].

Button Adds an overlay to the output string. After clicking the button, the data format and the byte order of the overlay are set.

Button Deletes the overlay.
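The [INT16 table] output groups the byte stream into 16-bit integers whose byte order follows the [Little endian] setting. As a hedged sketch of the receiving side (not part of the device software), such a block can be decoded with Python's struct module:

```python
import struct

def decode_int16(data: bytes, little_endian: bool = True) -> list:
    """Interpret a byte string as a table of signed 16-bit integers."""
    fmt = "<" if little_endian else ">"
    count = len(data) // 2
    return list(struct.unpack(f"{fmt}{count}h", data[: count * 2]))

# 0x0001 and 0xFFFF as little-endian INT16 -> 1 and -1
print(decode_int16(b"\x01\x00\xff\xff"))                       # [1, -1]
print(decode_int16(b"\x00\x01\xff\xfe", little_endian=False))  # [1, -2]
```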

10.5.4 Display data from the data packages


The [List byte positions of data fields] button displays the data contained in static data packages.
After clicking the button, the [Data layout info] window opens. The following data is displayed in
the window:
• Offset
• Size [bytes]
• Meaning
• Context
The window contains the following operating elements:

Operating element Type Description

Split by comma / semicolon / tab char List Sets the separator for export as a CSV file.

Export as CSV Button Exports the table as a CSV file.


Only data from static data packages can be displayed. A dynamic data package in the main
area cancels the table view in the [Data layout info] window.
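The offset and size information shown in the [Data layout info] window is exactly what a receiving application needs to cut individual data fields out of the byte stream. A hypothetical sketch (the field names, offsets and example bytes are invented for illustration):

```python
def slice_fields(data: bytes, layout):
    """Cut fields out of a byte stream using (offset, size, meaning) rows,
    as listed in the [Data layout info] window."""
    return {meaning: data[offset:offset + size] for offset, size, meaning in layout}

# Invented example layout: a 4-byte field followed by a 2-byte field
layout = [(0, 4, "start string"), (4, 2, "overall quality")]
fields = slice_fields(b"star\x01\x00rest", layout)
print(fields["overall quality"])
```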

10.6 Test
The function [Test] records statistical data on the selected application. During the test, the current
states of the device are displayed.

Fig. 55: [Test] function

1 Start/stop test and apply trigger
2 State of the digital outputs
3 Overall statistics
4 Image quality check
5 Test images

The function [Test] contains the following operating elements:

Operating element Type Description

[Start test] Button Starts the test according to the “Trigger mode” setting. (Ò Trigger mode / 33)

[Stop test] Button Stops the test.

Button Forces a trigger manually.

[Reset all statistics] Button Resets all statistics.

166
Universal vision sensor O2D5xx O2I4xx O2I5xx O2Uxxx

Overall statistics
The overall statistics include the following data:
• number of detected and non-detected codes
• evaluation time of the test images
• total number of measurements

Image quality check


The image quality check checks whether the measured values received from the camera are within
the permitted value range. (Ò Image quality check / 64)

Test images
Image capture generates test images while the test is active. The test images are sorted chronologically; the most recent test image is on the far left.
Additional information is saved with each test image:
• state of the digital outputs
• overall statistics
• capture time since test start in minutes:seconds
Clicking a test image displays it enlarged in the [Live image] area. The states of the digital outputs and the overall statistics are displayed as at the time the test image was captured.

Clicking the reduced test image several times switches between the selected test image and the last recorded test image.


11 Service report
The area [Service report] creates an evaluation of the last 17 pass and fail results, with information on the software and hardware of the device. The service report can be exported for support requests.

Fig. 56: [Service report] area

The area [Service report] contains the following operating elements:

Operating element Name Description

Reload Reloads the evaluation of the service report. Reloading may take up to 1 min.

Export Exports the evaluation of the service report to a folder.

[Sort and filter] area:


Name Type Description

[Sort by] List Sorts the evaluation according to the following characteristics:
[Newest first]: The most recent measurements are displayed first.
[Failed -> Passed]: Failed measurements are displayed first.
[Passed -> Failed]: Passed measurements are displayed first.
[OUT1 -> OUT2]: Output 1 is displayed before output 2.
[OUT2 -> OUT1]: Output 2 is displayed before output 1.
[Application name]: The measurements are sorted alphabetically by the name of the application.
[Duration long -> short]: The longest measurement is displayed first.
[Duration short -> long]: The shortest measurement is displayed first.

[Filter status failed] Checkbox Filters out the measurements with the status [Failed] if the checkbox is deactivated.

[Filter status passed] Checkbox Filters out the measurements with the status [Passed] if the checkbox is deactivated.

[Filter OUT1 active] Checkbox Filters out the measurements with active output 1 if the checkbox is deactivated.

[Filter OUT1 inactive] Checkbox Filters out the measurements with inactive output 1 if the checkbox is deactivated.

[Filter OUT2 active] Checkbox Filters out the measurements with active output 2 if the checkbox is deactivated.

[Filter OUT2 inactive] Checkbox Filters out the measurements with inactive output 2 if the checkbox is deactivated.

The evaluation is not filtered by default.
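The [Sort by] and filter options map to simple sort keys and predicates. As a plain-Python illustration over invented example records (this is not the Vision Assistant's implementation):

```python
# Invented example measurements for illustration
measurements = [
    {"application": "Sorting",  "passed": True,  "duration_ms": 41},
    {"application": "Counting", "passed": False, "duration_ms": 75},
]

# [Failed -> Passed]: failed measurements are displayed first
by_status = sorted(measurements, key=lambda m: m["passed"])

# [Duration long -> short]: the longest measurement is displayed first
by_duration = sorted(measurements, key=lambda m: m["duration_ms"], reverse=True)

# [Filter status failed] deactivated: failed measurements are filtered out
only_passed = [m for m in measurements if m["passed"]]

print(by_status[0]["application"], by_duration[0]["application"], len(only_passed))
```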


12 Device setup
The device and the networks used are set in the area [Device setup].

Fig. 57: [Device setup] area.


1 List

The [Device setup] area contains the following list items:

Element Description
[General] Sets the device, updates the firmware and imports/exports the settings. (Ò General / 170)
[Network] Sets the Ethernet interface. (Ò Network / 171)
[Interfaces] Sets the process interfaces. (Ò Interfaces / 172)
[NTP] Synchronises the time of the device. (Ò NTP / 173)
[FTP / SFTP] Sets the connection to an FTP/SFTP server. (Ò FTP / SFTP / 174)
[RTSP] Sets the Real-Time Streaming Protocol. (Ò RTSP / 175)
[ifm storage device] Sets the ifm mass storage device. (Ò ifm mass storage device / 176)

12.1 General
The item [General] sets the device, updates the firmware and imports/exports the settings.
The [General] item contains the following operating elements:

Operating element Type Description


Button Saves the settings on the device.

Button Resets the changed settings.

[Name] Input field Sets the name of the device.

[Description] Input field Sets a description for the device.

[Password protection] Switch Activates the password protection. Password protection activates write protection for the following areas:
- [Application]
- [Device setup]
- Teaching via button on the device
The password unlocks the areas. The [Monitor] area can always be accessed, regardless of password protection.

[Change password] Button Changes the password. If the password is lost, contact the manufacturer’s support quoting the serial number of the device.

[Device button functions] Switch Sets the function of the device buttons. The [Button teach] function is used to teach the device directly (see operating instructions).

[Save and restore statistics on application switch] Switch Saves the statistics of an application before switching to another application. If there are already statistics saved for an application, they are restored.

[Export] Button Exports the settings of the device to a file.

[Import] Button Imports the settings of the device from a file. The settings and applications on the device are overwritten during import.

[Update] Button Updates the firmware of the device. The current firmware version is shown next to the button. In order for the firmware to update successfully, a static IP address needs to be assigned to the device beforehand. (Ò Assigning a static IP address / 178)

[Reset] Button Restores the factory settings and deletes all settings and applications.

[Reboot] Button Reboots the device.

12.2 Network
The [Network] item sets the Ethernet interface.
The item [Network] contains the following operating elements:

Operating element Type Description

Button Saves the settings on the device.

Button Resets the changed settings.

[DHCP] Switch Activates the automatic assignment of the network settings (DHCP). With activated DHCP, the input fields [IP address], [Subnet mask] and [Gateway] are not available.


[IP address] Input field Changes the IP address of the device. Preset: 192.168.0.69. The device must be rebooted if the process interface is used after a change of the IP address. If TCP/IP is used as process interface, it is not necessary to reboot the device.

[Connected via …] Output field Displays the current connection type and IP address.

[Subnet mask] Input field Sets the subnet mask of the device. Preset: 255.255.255.0.

[Gateway] Input field Sets the gateway of the device. Preset: 192.168.0.201.

[MAC address] Output field Displays the MAC address of the device.

12.3 Interfaces
The [Interfaces] item sets the process interfaces. In addition, a wiring test can be carried out.
The [Interfaces] item contains the following operating elements:

Operating element Type Description

Button Saves the settings on the device.

Button Resets the changed settings.

[Process interface version] List Sets the version of the process interface protocol.

[TCP/IP port for PCIC] Input field Sets the TCP/IP port for the data of the process interface with a socket connection. Preset: 50010.

[PCIC TCP/IP scheme auto update] Switch Activates the corresponding PCIC data output (see operating instructions) when the active application is changed. If the switch is deactivated, the PCIC data output of the previous application remains active when the active application is changed (see operating instructions). Only when the connection to the device is closed does the PCIC data output change.

[Active fieldbus] List Sets the fieldbus for the communication with connected controllers. The setting has an effect on all applications.

[IO-Link segmentation enabled] Switch Activates dividing of IO-Link data blocks that are larger than 30 bytes. Each data block must be acknowledged manually, or the [Maximum IO-Link hold time] must elapse.

[Maximum IO-Link hold time] Input field Sets the time between sending data blocks. When this time has elapsed, the next data block is sent. Preset: 20 ms.

[Output logic] List Sets the output logic of the digital outputs of the device:
[PNP]: switches positive potential to the output.
[NPN]: switches ground to the output.
[IO debouncing] Switch Activates the debouncing of the trigger. A signal then has to be present for at least 4 ms to be detected as a trigger signal. Shorter signals are ignored.

[External illumination] List Reserves the digital output OUT2 for external illumination (see operating instructions):
[Disabled]: The external illumination is not used and is deactivated.
[Using OUT2]: The digital output OUT2 is reserved for external illumination. Output OUT2 is no longer available as a logic block.

[OUT 1] Switch Switches the digital output OUT1. The wiring test must be active.

[OUT 2] Switch Switches the digital output OUT2. The wiring test must be active.

[Start] Button Starts the wiring test to test the digital outputs. During the wiring test the applications are disabled.
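Commands on the PCIC port configured above are exchanged inside a ticket/length envelope. The framing below is a sketch based on PCIC V3 and the command mnemonic is an assumption; both should be checked against the PCIC programmer's guide for your firmware version:

```python
import socket

def pcic_frame(command: str, ticket: str = "1000") -> bytes:
    """Wrap a PCIC command in the <ticket><length>CRLF<ticket><command>CRLF
    envelope (PCIC V3 style; verify against the PCIC programmer's guide)."""
    content = f"{ticket}{command}\r\n".encode("ascii")
    return f"{ticket}L{len(content):09d}\r\n".encode("ascii") + content

# "T?" forces a trigger and waits for the result in PCIC V3 (assumption)
print(pcic_frame("T?"))

def send_command(ip: str, command: str, port: int = 50010) -> bytes:
    """Open a socket to the device's PCIC port and send one framed command."""
    with socket.create_connection((ip, port), timeout=5) as s:
        s.sendall(pcic_frame(command))
        return s.recv(4096)

# send_command("192.168.0.69", "T?")  # not executed here; requires a device
```

`send_command` is a minimal sketch: a real client must also reassemble responses that arrive in several TCP segments and match response tickets to request tickets.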

12.4 NTP
The [NTP] item synchronises the time of the device. The time is synchronised via the Network Time Protocol (NTP).

In the event of connection problems, activate port 123 in the firewall.

The clock is not buffered by a battery. If the power supply fails, the clock is reset.

The [NTP] item contains the following operating elements:

Operating element Type Description

Button Saves the settings on the device.

Button Resets the changed settings.

[Activate NTP] Switch Activates the Network Time Protocol.

[NTP server IP] Input field Sets the IP address of the server. The date and the time are synchronised with the server. Several servers can be set. Besides the IP address, the status of the server is displayed:
[Green field]: The server responds.
[Red field]: The server does not respond.
[Grey field]: So far no request has been sent to the set NTP server.

Button Checks the IP address of the server.

Button Deletes the IP address of the server. The button is only displayed after checking an IP address.

Button Adds a server. The button is only displayed after checking an IP address.


[Synchronisation time] Input field Sets the waiting time for the NTP server when the device is rebooted. During the waiting time the device is not accessible for the ifm Vision Assistant.

[Time zone] List Sets a time zone.

Button Selects the local time zone from the [Time zone] list.

[Current time set on device] Output field Displays the time currently used in the device.

[Current time in the time zone of the device] Output field Shows the time in the time zone of the device.

[Set time without NTP] Button Sets the time of the device. The current time of the system on which the ifm Vision Assistant is active is used as the source.
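NTP timestamps count seconds from 1 January 1900, while Unix time counts from 1 January 1970; the two epochs differ by 2,208,988,800 s. A minimal SNTP query sketch following RFC 4330 (it assumes UDP port 123 is reachable and is not part of the device software):

```python
import socket
import struct

NTP_UNIX_OFFSET = 2_208_988_800  # seconds between 1900-01-01 and 1970-01-01

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert an NTP timestamp (seconds since 1900) to Unix time."""
    return ntp_seconds - NTP_UNIX_OFFSET

def query_sntp(server: str, timeout: float = 2.0) -> int:
    """Send a minimal SNTP client request (RFC 4330) and return the server's
    transmit timestamp as Unix seconds."""
    packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
    transmit = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, integer part
    return ntp_to_unix(transmit)

print(ntp_to_unix(NTP_UNIX_OFFSET))  # 1900 offset itself maps to 1970-01-01 -> 0
# query_sntp("pool.ntp.org")  # not executed here; requires network access
```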

12.5 FTP / SFTP

The item [FTP / SFTP] sets the connection to an FTP or SFTP server. The device sends current images and configurations to the server if certain events occur.

FTP transfers data such as user name and password unencrypted. The data can be read and manipulated by third parties.
► Do not use the user name and the password of the FTP server for other services.
► Restrict the visibility of the FTP server to the local network.
► Do not use the function [FTP] if the FTP server is visible on the internet.
► Use an SFTP server for additional security.

In the event of connection problems, activate ports 20 and 21 in the firewall.

The item [FTP / SFTP] contains the following operating elements:

Operating element Type Description

Button Saves the settings on the device.

Button Resets the changed settings.

Button Adds a server.

[Status of the FTP server] Output field Displays the status of the server via a coloured field:
[Green field]: The server responds.
[Red field]: The server does not respond.
[Grey field]: So far no request has been sent to the server.

Button Renames the server.

Button Deletes the IP address of the server.

[Activate] Checkbox Activates the client of the device.

Area [Connection]:


Operating element Type Description


[Connection type] List Sets the server type for the connection:
[FTP]
[SFTP]

[Server IP address] | [port] Input field Sets the IP address and the port of the FTP server. Preset port: 21.

[User] | [Password] Input field Sets the user name and the password of the FTP server if authentication is required.

Area [Folders]:

Operating element Type Description

[Transfer decoding results] Checkbox Activates the transfer of decoding results to the FTP server.

[Path] Input field Sets the path to transfer the decoding results.

[Transfer image data] Checkbox Activates the transfer of the image data to the FTP server.

[Path] Input field Sets the path to transfer the image data.

[Transfer device and application data] Checkbox Activates the transfer of device and application data to the FTP server.

[Path] Input field Sets the path to transfer the device and application data.

[Configuration] area:

Operating element Type Description

[Passive mode] Checkbox Activates the passive mode. The passive mode reduces connection problems when a firewall is involved.

[Keep alive] Checkbox Activates the keep-alive function. Depending on the configuration, the connection is quickly stopped on the server side. With the keep-alive function, the connection remains active.

[Warranty of data transfer] Checkbox Activates the warranty of data transfer. It is ensured that all data is transferred. If the data is not transferred fast enough, image capture may be delayed and the frame rate reduced.

[Result types that should be pushed] List Sets the result type which is transferred to the FTP server:
[Only fail results]
[Only pass results]
[All results]
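The [Result types that should be pushed] setting decides per evaluation whether files are sent to the server. The selection logic can be pictured like this (a sketch, not the device firmware):

```python
def should_push(passed: bool, mode: str) -> bool:
    """Mirror the [Result types that should be pushed] setting:
    'Only fail results', 'Only pass results' or 'All results'."""
    if mode == "All results":
        return True
    if mode == "Only fail results":
        return not passed
    if mode == "Only pass results":
        return passed
    raise ValueError(f"unknown mode: {mode}")

print(should_push(False, "Only fail results"))  # True
print(should_push(True, "Only fail results"))   # False
```

An application that mirrors this behaviour with its own FTP client would combine such a check with e.g. `ftplib.FTP.storbinary` for the actual upload.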

12.6 RTSP
The [RTSP] item controls the transfer of the live image as a video data stream. The stream can be played back with client software (a video player with RTSP support).
As soon as the item [RTSP] is active and the following conditions apply, the live image of the device is transmitted:
• an application is active (Ò Application / 26),
• at least one model has been added (Ò Add new model / 66).

Together with the live image, the ROIs are transmitted, depending on the model set. (Ò Models / 65) The live image can be retrieved via the displayed URL.

The function increases the evaluation time of the device.

In the event of connection problems, activate port 554 in the firewall.

The item [RTSP] contains the following operating elements:

Operating element Type Description

Button Saves the settings on the device.

Button Resets the changed settings.

[Activate RTSP] Switch Activates the Real-Time Streaming Protocol.

[Image repetition rate] Slider / input field Sets the images per second. High values lead to smoother image transitions and require more bandwidth in the network.

[Image quality] Slider / input field Sets the quality of the images. High values increase the image quality, reduce the compression and increase the required bandwidth. Small values reduce the image quality, increase the compression and reduce the required bandwidth.

[Port] Output field Displays the preset port.

[RTSP stream url] Output field Displays the URL set for retrieving the RTSP stream. Clicking the URL opens it in a compatible video player. Right-click to copy the URL to the clipboard.

12.7 ifm mass storage device

The [ifm storage device] item configures the ifm mass storage device.
The ifm mass storage device is inserted into the device after opening a service lid (see operating instructions). The configuration of the device and error images are saved on the ifm mass storage device. The stored content can be accessed via a web interface.
If the device fails due to a defect, the configuration is saved on the ifm mass storage device and can be quickly transferred to a replacement device (see operating instructions).

The ifm mass storage device has a data retention time of 3 years at a maximum storage temperature of 55 °C. If this time is exceeded, data loss is possible. During operation, the stored data is refreshed periodically.

The service lid may only be opened for the transfer of the configuration.
► Only open the service lid in a clean and dry environment (pollution degree 2).

The ifm mass storage device must only be used for the O2x4xx and O2x5xx devices.
► Do not use the ifm mass storage device with a PC, notebook, etc.

The [ifm storage device] item contains the following operating elements:

Operating element Type Description


Button Saves the settings on the device.


Button Resets the changed settings.

[Enable failed image storage] Switch Activates saving of images to the ifm mass storage device in case of an error.

[Enable configuration change storage] Switch Activates saving of configurations on the ifm mass storage device.

[Format storage] Button Formats the ifm mass storage device. Formatting cannot be reversed. All data on the ifm mass storage device is deleted.

[Import config] Button Imports the configuration saved last on the ifm mass storage device. After clicking the button, the configurations for the import are selected:
[Global settings]
[Network]
[Application settings]
The currently used configuration is overwritten by the import.

[Status] Output field Displays the status of the ifm mass storage device. The status of the partitions contained is also displayed.

[Web interface URL] Output field Displays the URL to the web interface of the ifm mass storage device. A click on the URL shows the content of the ifm mass storage device in the web browser.


13 Appendix

13.1 Assigning a static IP address

This section describes how to assign a static IP address to the PC. A static IP address is necessary if
• the assignment of a dynamic IP address is not possible due to the network configuration,
• the firmware of the device is to be updated.

The network settings in this document describe the procedure for PCs with the Windows 10 operating system. Changing network settings on a PC requires administrator rights. The following ports must be enabled in the firewall:
- UDP: 3321
- TCP/HTTP: 80 and 8080
- TCP: 50010

Assigning a static IP address:

► Open the [Network and Sharing Centre] in Windows.
► Click the name of the local network.
⇨ The window [Ethernet Status] opens.
► Click on the [Properties] button.
⇨ The window [Ethernet Properties] opens.
► Activate the checkbox [Internet Protocol Version 4 (TCP/IPv4)].
► Click on the [Properties] button.
⇨ The window [Internet Protocol Version 4 (TCP/IPv4) Properties] opens.
► Activate the option field [Use the following IP address].
► Set 192.168.0.1 for the IP address.
► Set 255.255.255.0 for the subnet mask.
► Set 192.168.0.201 for the standard gateway.
► Click on the [OK] button.
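With these settings, the PC (192.168.0.1) and a device in its preset state (192.168.0.69, Ò Network / 171) share the same /24 subnet, so no gateway is needed between them. This can be verified with Python's ipaddress module, for instance:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, netmask: str) -> bool:
    """Check whether two hosts are in the same subnet and can therefore
    reach each other without a gateway."""
    net = ipaddress.ip_network(f"{ip_a}/{netmask}", strict=False)
    return ipaddress.ip_address(ip_b) in net

print(same_subnet("192.168.0.1", "192.168.0.69", "255.255.255.0"))  # True
print(same_subnet("192.168.0.1", "192.168.1.69", "255.255.255.0"))  # False
```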


Glossary
Live image
The live image displays the received data of the device.

NTP
The Network Time Protocol synchronises
the local time with the time of an NTP
server.

Processing time
The processing time indicates the transit
time of the signal from the trigger input to
the process interface.

RTSP
The Real-Time Streaming Protocol controls
the transmission of audio-visual data as a
stream. It controls the session between
receiver and transmitter. Communication
takes place via port “554”.


List of figures
Fig. 1 Home page........................................................................................................................... 14
Fig. 2 User interface....................................................................................................................... 19
Fig. 3 [Monitor] area ....................................................................................................................... 21
Fig. 4 [Application] area ................................................................................................................. 26
Fig. 5 [Edit application] area........................................................................................................... 28
Fig. 6 Function [Images & triggers] ................................................................................................ 30
Fig. 7 Rough measurement............................................................................................................ 36
Fig. 8 Rough measurement............................................................................................................ 37
Fig. 9 The device is aligned with an angle of 90° perpendicular to the working plane................... 37
Fig. 10 Measuring tool placed on the object with focus out of focus (left) and in focus (right)......... 38
Fig. 11 Setting the internal illumination ............................................................................................ 39
Fig. 12 Measuring points placed along the measuring tool.............................................................. 40
Fig. 13 Sensor calibration ................................................................................................................ 41
Fig. 14 Precise measurement .......................................................................................................... 43
Fig. 15 The marker sheet is placed on the object ............................................................................ 44
Fig. 16 Prepare calibration ............................................................................................................... 44
Fig. 17 List of available marker sheets............................................................................................. 45
Fig. 18 Printed marker sheet with scales ......................................................................................... 46
Fig. 19 The live image shows a section of the marker sheet. .......................................................... 47
Fig. 20 Setting the internal illumination ............................................................................................ 48
Fig. 21 Marker sheet placed underneath the device ........................................................................ 49
Fig. 22 Z offset ................................................................................................................................. 50
Fig. 23 Robot sensor calibration ...................................................................................................... 52
Fig. 24 Robot sensor calibration ...................................................................................................... 52
Fig. 25 The marker sheet is placed on the object ............................................................................ 53
Fig. 26 Prepare calibration ............................................................................................................... 53
Fig. 27 List of available marker sheets............................................................................................. 54
Fig. 28 Printed marker sheet with scales ......................................................................................... 55
Fig. 29 The live image shows a section of the marker sheet. .......................................................... 56
Fig. 30 Setting the internal illumination ............................................................................................ 57
Fig. 31 Marking point........................................................................................................................ 58
Fig. 32 The table contains exemplary coordinates for marker point A. ............................................ 58
Fig. 33 Marker sheet placed underneath the device ........................................................................ 59
Fig. 34 Z offset ................................................................................................................................. 60
Fig. 35 Setting the internal illumination ............................................................................................ 62
Fig. 36 Image quality check ............................................................................................................. 64
Fig. 37 [Models] function .................................................................................................................. 66
Fig. 38 Orientation of the code......................................................................................................... 74
Fig. 39 Text orientation .................................................................................................................... 92
Fig. 40 Object definition area ...........................................................................................................104
Fig. 41 Number of levels ..................................................................................................................110
Fig. 42 Object definition area ...........................................................................................................111
Fig. 43 [Flow] function ......................................................................................................................116


Fig. 44 Function [Logic]....................................................................................................................118


Fig. 45 Contact surfaces with red connecting line............................................................................119
Fig. 46 Output logic ..........................................................................................................................121
Fig. 47 Example of “Code found” .....................................................................................................143
Fig. 48 Example of “Compare reference code” ................................................................................143
Fig. 49 Example of “Compare reference code” ................................................................................144
Fig. 50 Example of “Compare distance values” ...............................................................................145
Fig. 51 Example of a counter and comparator .................................................................................145
Fig. 52 Example of a converter ........................................................................................................146
Fig. 53 [Interfaces] function .............................................................................................................147
Fig. 54 Provide overall quality ..........................................................................................................165
Fig. 55 [Test] function.......................................................................................................................166
Fig. 56 [Service report] area.............................................................................................................168
Fig. 57 [Device setup] area. .............................................................................................................170


List of tables
Tab. 1 Title bar ..................................................................................................................... 14
Tab. 2 Menu bar ................................................................................................................... 14
Tab. 3 Buttons ...................................................................................................................... 15
Tab. 4 Operating elements................................................................................................... 16

