MELFA Vision Instruction Manual
CRn-700 Series
BFP-A8780-C
Revision History
Printing Date    Instruction Manual No.    Revision Contents
2009-09-18       BFP-A8780-*               First Edition
2009-10-16       BFP-A8780-A               Error in writing was corrected.
2010-06-02       BFP-A8780-B               The explanation concerning the installation of MELFA-Vision was added.
2010-10-19       BFP-A8780-C               Error in writing was corrected.
Preface
Thank you for purchasing this network vision sensor for CRn-700 series Mitsubishi Electric industrial robots.
The network vision sensor is an option that is used in combination with a CRn-700 series controller to make it
possible to detect and inspect work through visual recognition. Before using this sensor, please read this
manual well so that you utilize the contents of this manual when using this network vision sensor.
This manual attempts to cover special handling as well. Please interpret the absence of an operation from this manual as meaning that it cannot be done.
The contents of this manual target the following software versions.
Robot controller: SQ series (CRnQ-700): Ver. N1 or later
                  SD series (CRnD-700): Ver. P4 or later
The safety precautions in this manual are classified into three ranks, DANGER, WARNING, and CAUTION, according to the degree of hazard.
No part of this manual may be reproduced by any means or in any form, without prior consent from Mitsubishi.
The details of this manual are subject to change without notice.
An effort has been made to make full descriptions in this manual. However, if any discrepancies or unclear points are found, please contact your dealer.
The information contained in this document has been written to be as accurate as possible. Please interpret items not described in this document as items that cannot be performed.
Please contact your nearest dealer if you find any doubtful, wrong, or omitted points.
Microsoft, Windows, and .NET Framework are registered trademarks of the Microsoft Corporation of the United States in the United States and/or other countries.
In-Sight is a registered trademark of the Cognex Corporation.
Adobe, the Adobe logo, Acrobat, and the Acrobat logo are trademarks of Adobe Systems Incorporated.
The registered trademark and trademark symbols are omitted in this manual.
Safety Precautions
Always read the following precautions and the separate
"Safety Manual" before starting use of the robot to learn the
required measures to be taken.
CAUTION
All teaching work must be carried out by an operator who has received special training. (This also
applies to maintenance work with the power source turned ON.)
Enforcement of safety training
CAUTION
For teaching work, prepare a work plan related to the methods and procedures of operating the
robot, and to the measures to be taken when an error occurs or when restarting. Carry out work
following this plan. (This also applies to maintenance work with the power source turned ON.)
Preparation of work plan
WARNING
Prepare a device that allows operation to be stopped immediately during teaching work. (This also applies to maintenance work with the power source turned ON.)
Setting of emergency stop switch
CAUTION
During teaching work, place a sign indicating that teaching work is in progress on the start switch,
etc. (This also applies to maintenance work with the power source turned ON.)
Indication of teaching work in progress
Provide a fence or enclosure during operation to prevent contact of the operator and robot.
Establish a set signaling method to the related operators for starting work, and follow this method.
Signaling of operation start
CAUTION
As a principle turn the power OFF during maintenance work. Place a sign indicating that
maintenance work is in progress on the start switch, etc.
Indication of maintenance work in progress
CAUTION
Before starting work, inspect the robot, emergency stop switch and other related devices, etc.,
and confirm that there are no errors.
Inspection before starting work
The points of the precautions given in the separate "Safety Manual" are given below.
Refer to the actual "Safety Manual" for details.
WARNING
When automatically operating the robot with multiple control devices (GOT, PLC, pushbutton switch), the interlocks, such as each device's operation rights, must be designed by the user.
CAUTION
Use the robot within the environment given in the specifications. Failure to do so could lead to a drop in reliability or faults. (Temperature, humidity, atmosphere, noise environment, etc.)
CAUTION
Transport the robot with the designated transportation posture. Transporting the robot in a
non-designated posture could lead to personal injuries or faults from dropping.
CAUTION
Always use the robot installed on a secure table. Use in an unstable posture could lead to positional deviation and vibration.
CAUTION
Wire the cable as far away from noise sources as possible. If placed near a noise source,
positional deviation or malfunction could occur.
CAUTION
Do not apply excessive force on the connector or excessively bend the cable. Failure to observe
this could lead to contact defects or wire breakage.
CAUTION
Make sure that the workpiece weight, including the hand, does not exceed the rated load or
tolerable torque. Exceeding these values could lead to alarms or faults.
WARNING
Securely install the hand and tool, and securely grasp the workpiece. Failure to observe this could
lead to personal injuries or damage if the object comes off or flies off during operation.
WARNING
Securely ground the robot and controller. Failure to observe this could lead to malfunctioning by
noise or to electric shock accidents.
CAUTION
Indicate the operation state during robot operation. Failure to indicate the state could lead to
operators approaching the robot or to incorrect operation.
WARNING
When carrying out teaching work in the robot's movement range, always secure the priority right
for the robot control. Failure to observe this could lead to personal injuries or damage if the robot
is started with external commands.
CAUTION
Keep the jog speed as low as possible, and always watch the robot. Failure to do so could lead to
interference with the workpiece or peripheral devices.
CAUTION
After editing the program, always confirm the operation with step operation before starting
automatic operation. Failure to do so could lead to interference with peripheral devices because of
programming mistakes, etc.
CAUTION
Make sure that if the safety fence entrance door is opened during automatic operation, the door is
locked or that the robot will automatically stop. Failure to do so could lead to personal injuries.
CAUTION
Never carry out modifications based on personal judgments, or use non-designated maintenance
parts.
Failure to observe this could lead to faults or failures.
WARNING
When the robot arm has to be moved by hand from an external area, do not place hands or
fingers in the openings. Failure to observe this could lead to hands or fingers catching depending
on the posture.
CAUTION
CAUTION
Do not stop the robot or apply emergency stop by turning the robot controller's main power OFF. If the robot controller main power is turned OFF during automatic operation, the robot accuracy could be adversely affected. Moreover, the arm may drop or move under inertia and interfere with peripheral devices.
Do not turn off the main power to the robot controller while rewriting the internal information of
the robot controller such as the program or parameters.
If the main power to the robot controller is turned off while in automatic operation or rewriting
the program or parameters, the internal information of the robot controller may be damaged.
CAUTION
When using this product's GOT direct connection function, do not connect a handy GOT. The handy GOT can automatically operate the robot regardless of whether the operation rights are enabled or not, so use could result in property damage or personal injuries.
CAUTION
When using an iQ Platform compatible product with CRnQ, do not connect the
handy GOT. The handy GOT can automatically operate the robot regardless of
whether the operation rights are enabled or not, so use could result in property
damage or personal injuries.
CAUTION
When the SSCNETIII cable is removed, install the cap on the connector.
If the cap is not installed, malfunction could occur due to adhesion of dust, etc.
CAUTION
Do not remove the SSCNETIII cable while the power supply of the robot controller is turned on. Do not look directly at the light emitted from the tip of the SSCNETIII connector or cable. If the light strikes the eyes, it may cause eye discomfort. (The light source of SSCNETIII is equivalent to Class 1 specified in JIS C6802 and IEC60825-1.)
CAUTION
Take care not to wire the units incorrectly. Connections which do not satisfy the
specifications could result in malfunction such as emergency stop not being
released.
When completed with the wiring, confirm that each function operates properly without malfunction, including emergency stop with the robot controller operation panel, emergency stop with the teaching box, the user's emergency stop, and each door switch, etc.
Precautions for the basic configuration are shown below. (When the CR1D-7xx/CR1Q-7xx is used for the controller.)
CAUTION
Install an earth leakage breaker in the primary power supply of the controller for leakage protection.
(Figure: earth leakage breaker (NV) installed on the primary power supply of the controller, with the terminal cover and the earth screw used for grounding)
Contents
1. SUMMARY .......................................................................................................................................... 1-1
1.1. What A Network Vision Sensor Is .................................................................................................... 1-1
1.2. Features ........................................................................................................................................... 1-2
1.3. Applications ...................................................................................................................................... 1-3
1.4. Explanation of terms ........................................................................................................................ 1-5
2. SYSTEM CONFIGURATION .............................................................................................................. 2-6
2.1. Component Devices ......................................................................................................................... 2-6
2.1.1. Constitution of MELFA-Vision and the network vision sensor ..................................................... 2-6
2.1.2. Equipment provided by customer ................................................................................................ 2-10
2.2. System configuration example ........................................................................................................ 2-11
2.2.1. Configuration with one robot controller (SD series) and one vision sensor ................................ 2-11
2.2.2. Configuration with one robot controller (SD series) and two vision sensors ............................... 2-12
2.2.3. Configuration with three robot controllers (SD series) and one vision sensor ............................ 2-13
2.2.4. Configuration with one robot controller (SQ series) and one vision sensor ................................ 2-14
3. SPECIFICATIONS .............................................................................................................................. 3-15
3.1. Network vision sensor specifications ............................................................................................... 3-15
3.1.1. External Dimensions of Network Vision Sensor (5100/5400/5401/5403/5400C) ........................ 3-16
3.1.2. External Dimensions of Network Vision Sensor 5400R ............................................................... 3-17
3.2.
3.3. MELFA-Vision .................................................................................................................................. 3-20
3.3.1. Features ....................................................................................................................................... 3-20
3.3.2. Operating Environment ................................................................................................................ 3-21
4. WORK CHARTS ................................................................................................................................ 4-22
4.1. Work procedure chart ...................................................................................................................... 4-22
5.
5.1. Equipment preparation ..................................................................................................................... 5-23
5.2. Equipment connection ..................................................................................................................... 5-24
5.2.1. Individual equipment connections ................................................................................................ 5-24
5.3. Software installation ......................................................................................................................... 5-26
5.3.1. Vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later) installation ....................... 5-26
5.3.2. Vision sensor dedicated software (In-Sight Explorer before Ver.4.1) installation ........................ 5-28
5.3.3. MELFA-Vision installation ............................................................................................................ 5-29
5.3.4.
5.3.5.
6.
6.1. Vision Sensor Initial Settings (In-Sight Explorer Ver.4.1 or later) .................................................... 6-36
6.2. Vision Sensor Initial Settings (In-Sight Explorer before Ver.4.1) ..................................................... 6-38
6.3. Work recognition test ....................................................................................................................... 6-41
6.3.1. Starting MELFA-Vision (network vision sensor support software) ............................................... 6-41
6.3.2. Image adjustment ......................................................................................................................... 6-46
6.3.3. Image processing settings ........................................................................................................... 6-49
7.
7.1.
7.2. Calibration Setting ........................................................................................................................... 7-69
7.3. Robot Program Writing .................................................................................................................... 7-80
7.3.1. Flow for starting of image processing by robot ............................................................................ 7-80
7.3.2. Writing a Sample Robot Program ................................................................................................ 7-80
7.4. Executing the automatic operation test ........................................................................................... 7-83
7.4.1. Put the vision sensor online. ........................................................................................................ 7-83
7.4.2. Test by executing each step ........................................................................................................ 7-83
7.4.3. Starting a Robot Program ............................................................................................................ 7-84
7.5. When the robot can not grasp the work normally ............................................................................ 7-86
7.5.1. Check the MELFA-Vision [Camera Image]. ................................................................................. 7-86
7.5.2. Comparison of the position data for the work recognized by the vision sensor and the position data received by the robot ........................................................................................................................ 7-86
8. MAINTENANCE ................................................................................................................................. 8-88
8.1.
8.2.
8.3.
8.4.
8.5.
8.6.
9.
9.1. Vision Sensor Dedicated Commands and Status Variables ............................................................ 9-98
9.1.1. How to Read Items ....................................................................................................................... 9-98
9.1.2. MELFA-BASIC V Commands ...................................................................................................... 9-98
9.1.3. Robot status variables .................................................................................................................. 9-114
9.2.
9.2.1.
9.2.2.
9.2.3.
9.2.4.
9.3. Vision program detailed explanation ............................................................................................... 9-126
9.3.1. Templates provided for MELFA-Vision ........................................................................................ 9-126
9.3.2. Image processing - blobs ............................................................................................................. 9-128
9.3.3. Image processing - Color ............................................................................................................. 9-131
9.3.4. Using image processing for which there is no template .............................................................. 9-139
9.3.5. To shorten the time for transferring data with the robot controller ............................................... 9-140
9.4. Detailed explanation of systems combining multiple vision sensors and robots ............................. 9-143
9.4.1. Systems with one robot controller and multiple vision sensors ................................................... 9-143
9.4.2. Systems with one vision sensor and multiple robot controllers ................................................... 9-144
10.
10.1.
11.
11.1. Performance of this product (comparison with built-in type RZ511 vision sensor) ........................ 11-149
11.1.1. Comparison of work recognition rate ......................................................................................... 11-149
11.1.2. Comparison of image processing capacity ................................................................................ 11-149
11.1.3. Factors affecting the processing time ........................................................................................ 11-150
11.2.
1. Summary
1.1.What A Network Vision Sensor Is
The network vision sensor is an option that makes it possible to discriminate the position of various types
of work and transport, process, assemble, inspect, and measure work with MELFA robots.
It consists of MELFA-Vision and the vision sensor, and the related options.
1.2.Features
The network vision sensor has the following functions.
(1) Position detection through high-speed image processing
High-speed image processing makes it possible to detect the work at high speed, not only when the angle is not detected, but even when detecting the work through 360° of rotation.
When the angle is not detected: about 50 ms
When detecting 360°: about 150 ms
* Measurement conditions - Search area: 640x480, Pattern: 90x90
This is the pattern matching processing time using In-Sight 5400 (camera exposure time of 8 ms).
* These values are reference values. These values are not guaranteed.
(2) Ethernet communication
Since the system can be configured with an Ethernet network, a wide variety of system
configurations can be realized.
Up to seven vision sensors can be controlled with one robot controller.
Up to three robot controllers can share control of one vision sensor.
Systems can be configured with multiple robot controllers and multiple vision sensors.
Both robot controller and vision sensor can be debugged using one PC.
"MELFA-Vision Network Vision Support Software" has image log functions, so it is possible to
check the image state when an error occurred.
(3) Easy setting
Connect only the Ethernet cable and the power cable to the vision sensor. Connect only the Ethernet cable to the robot controller at the Ethernet interface. The vision sensor and robot controller settings can be made simply with MELFA-Vision.
(4) Easy robot programming and calibration
The program can be made easily with the MELFA-BASIC V commands available exclusively for the vision sensor (a minimal sketch is shown after this list).
"NVOpen" command that connects the line between the robot and vision sensor
"NVPst" command that starts the vision program and obtains the results
"NVClose" command that cuts off the line between the robot and vision sensor
This system is equipped with a simple calibration function that can handle a variety of camera installation positions.
(5) Space saving, wiring saving
Since the vision sensor combines the camera and controller in one piece, the only wiring needed is
the Ethernet cable and power supply cable, so wiring does not take up space.
(6) Easy maintenance
It is possible to store recognized images on a PC with MELFA-Vision running, to check the image
when an error occurred, and to find the cause of the error easily.
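As a reference, here is a minimal sketch of how the dedicated commands named in feature (4) are typically combined in a MELFA-BASIC V program. The COM port ("COM2:"), the vision program name ("Job1"), and the cell names ("E76", "J81", "L85") are examples taken from the sample program in Chapter 7, not fixed values.
1 If M_NVOpen(1)<>1 Then       ' When logon has not been completed for vision sensor number 1
2   NVOpen "COM2:" As #1       ' Connects with the vision sensor connected to COM2.
3 EndIf
4 Wait M_NVOpen(1)=1           ' Waits for the logon to be completed.
5 NVPst #1,"Job1","E76","J81","L85",0,10   ' Starts vision program "Job1" and receives the recognition results.
6 If M_NvNum(1)>=1 Then P10=P_NvS1(1)      ' Stores the first recognized position (robot coordinates when calibration is set).
7 NVClose #1                   ' Cuts off the line with the vision sensor.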
1.3.Applications
Here are major applications of the network vision sensor.
(1) Loading/Unloading Machined Parts
(4) Small Electrical Product Assembly
1.4.Explanation of terms
This section explains the terms used in this manual.
CCD (Charge Coupled Device) This is the most general pickup element used in cameras.
Degree of matching (score) This value indicates the degree to which the image matches the registered
pattern. This value ranges from 0 to 100. The closer to 100, the higher the
degree of matching.
Offline This is a vision sensor mode for such work as setting the vision sensor
operating environment, setting the image processing, and backing up data to a
PC.
Online This is the vision sensor mode in which the vision sensor executes image
processing under command from the robot controller.
Picture element (pixel) This is the smallest unit of data making up the image. One image comprises
640x480 pixels. Depending on the type of vision sensor, one image comprises
1024x768 or 1600x1200 pixels.
Contrast This is a yardstick expressing the "brightness" of a pixel in units from 0-255. The
smaller this value, the darker the pixel; the higher this value, the brighter the
pixel.
Calibration This is coordinate conversion for converting from the image processing
coordinate system to the robot coordinate system.
Threshold This is the cutoff point for degree of matching scores.
Shutter speed This is the exposure time (the time during which the CCD accumulates charge).
Sort This rearranges the order in which data (recognition results) is output to the
robot according to the specified item.
Trigger This is the starting signal for starting the exposure (image capture).
Pattern matching This is processing for detecting the pattern that matches the pattern registered
from the captured image.
Vision program (job) This is the program that executes such image processing as pattern matching,
blobbing, etc. The image processing can be set freely.
Filter This is a form of image processing for improving the picture quality.
Blob This is a type of image processing for detecting blobs with features in the image
captured. Bright sections are expressed as white; dark sections are expressed
as black.
Host name This is the network vision sensor name. This is registered in the initial settings.
Live Images can be displayed in real time by shooting continuously.
Area This is the processing area for executing image processing.
Log function This function stores the image taken in with online operation and the execution
results (log).
Exposure This is the accumulation of charge on the CCD. When light strikes the CCD,
charge accumulates and the degree of this accumulation becomes the degree
of brightness of the image.
2. System configuration
2.1.Component Devices
2.1.1. Constitution of MELFA-Vision and the network vision sensor
The composition of the MELFA-Vision and network vision sensor basic set that you have purchased is shown in "Table 2-1 List of Network Vision Sensor Basic Set Composition".
Table 2-1 List of Network Vision Sensor Basic Set Composition
Basic set models: 4D-2CG5100-PKG, 4D-2CG5400-PKG, 4D-2CG5401-PKG, 4D-2CG5403-PKG, 4D-2CG5400C-PKG, 4D-2CG5400R-PKG, and 3D-51C-WINE (MELFA-Vision only; the vision sensor and related options are prepared by the customer (*5)(*6)).
The contents of each package (vision sensor, lens cover, cables, CD-ROMs, etc.) are shown in Figure 2-1 and Figure 2-2 and in the notes below.
(*1) The cable length can be changed. For details, see "Table 2-2 List of Network Vision Sensor
Related Options".
(*2) This is a CD-ROM that comes with a vision sensor made by the Cognex Corporation.
This CD-ROM contains the software and operations manual required for using the network
vision sensor. The *** in the model name part number is the version number.
(*3) The instruction manual is included on the MELFA-Vision CD-ROM.
(*4) The camera cable that connects the remote head camera and the vision sensor, and the remote head camera installation bracket, are bundled with network vision sensor 5400R.
(*5) These specifications apply when the vision sensor and related options are prepared by the user
and only MELFA-Vision (network vision sensor support software and instruction manual) is
provided. The applicable vision sensors (COGNEX brand) are listed in Table 3-4 THE
CORRESPONDENCE TYPE AND VERSION OF MELFA-VISION for reference.
(*6) Note: The vision sensor must be equipped with an image processing algorithm (PatMax).
Table 2-2 List of Network Vision Sensor Related Options
Option name                                             Model
Network cable    0.6m                                   CCB-84901-1001-00
                 2m                                     CCB-84901-1002-02
                 5m                                     CCB-84901-1003-05
                 10m                                    CCB-84901-1004-10
                 15m                                    CCB-84901-1005-15
                 30m                                    CCB-84901-1006-30
Breakout cable   2m                                     CCB-84901-0101-02
                 5m                                     CCB-84901-0102-05
                 10m                                    CCB-84901-0103-10
                 15m                                    CCB-84901-0104-15
Camera cable     5m                                     CCB-84901-0303-05
                 10m                                    CCB-84901-0304-10
                 15m                                    CCB-84901-0305-15
I/O Module       Terminal block conversion module       CIO-1350
                 I/O Expansion module (8 inputs/8 outputs)   CIO-1450
                                                        IFS-DRL-050
                                                        IFS-RRL050
                                                        IFS-WRL050
2 System configuration
The composition of the basic set (All-in-one design) is shown in the figure below.
Lens cover
O ring
Thread guard
Breakout cable
Network cable
MELFA-Vision CD-ROM
Installation guide
Figure 2-1 Basic set(All-in-one design) composition
The composition of 4D-2CG-5400R-PKG (Remote Head) is shown in the figure below.
Camera Cable
Breakout Cable
Network Cable
Installation guide
MELFA-Vision CD-ROM
Figure 2-2 Basic set (Remote Head Type) composition
2.1.2. Equipment provided by customer
In addition to this product, the system also includes equipment provided by the customer. "Table 2-3 List of Equipment Provided by Customer" shows the minimum necessary equipment. The equipment for the customer to provide depends on the system. For details, see 2.2 System configuration example.
Table 2-3 List of Equipment Provided by Customer
Device name              Recommended product
Vision sensor            In-Sight 5000 series (Refer to Table 2-1) (*1)
Breakout cable           (Refer to Table 2-1, Table 2-2) (*1)
Network cable            (Refer to Table 2-1, Table 2-2) (*1)
Camera lens              C mount lens (a CS mount lens is also possible for the 5400R.)
24V power supply         24 VDC (±10%)
                         (5100/5400/5400C/5401: 350 mA or larger, 5403: 500 mA or larger, 5400R: 250 mA or larger.)
PC                       CPU: Intel Pentium III 700 MHz (or equivalent) or faster
                         Memory size: 256 MB min.
                         Hard disk: available capacity of 200 MB min.
                         OS: Microsoft Windows 2000 Service Pack 4, or Microsoft Windows XP Professional Service Pack 2
                         Display: an SVGA (800x600) or higher resolution display with graphic functions that can display at least 16 colors
                         Disc device: CD-ROM drive
                         Keyboard: PC/AT compatible keyboard
                         Pointing device: device that operates in the Windows operating system
                         Communications: must have an Ethernet line that operates in the Windows operating system
Hub                      A switching hub is recommended.
Ethernet straight cable  Any straight Ethernet cable is OK.
Lighting device          Select the optimum lighting for the work to be recognized. LED lights are recommended for their long service life.
(*1) It is attached to the network vision sensor set.
2.2. System configuration example
2.2.1. Configuration with one robot controller (SD series) and one vision sensor
Below is shown the entire configuration (robot system) when one camera is used.
(Figure: robot controller, robot, hub, personal computer, and 24 V power supply connected via the hub)
Part name                   Format                 Manufacturer          Q'ty   Remarks
Robot controller            CRnD-700 series        Mitsubishi Electric   1      S/W version P4 or later
Robot main unit             All models             Mitsubishi Electric   1
MELFA-Vision                3D-51C-WINE                                  1      (*4)
Vision sensor  Vision sensor   In-Sight 5000 series   COGNEX            1      Software: 3.20 or later (*4)
               Breakout cable                                            1      (*4)
               Network cable                                             1      (*4)
Lens                        C mount lens (*1)                            1      Provided by customer (*2)
Hub                                                                      1
Lighting device                                                          1
2.2.2. Configuration with one robot controller (SD series) and two vision sensors
Below is shown the entire configuration (robot system) when two cameras are used.
Figure 2-4 Configuration (Robot System) When Two Cameras Are Used
Below is a list of the equipment configuration when two cameras are used.
Table 2-5 List of Configuration When Two Cameras Are Used
Part name
Robot controller
Robot main unit
MELFA-Vision
Vision sensor
(*4)
Lens (*4)
24V power supply
Format
Manufacturer
Q'ty
Mitsubishi
Electric
1
1
1
2
2
2
2
1
1
1
2
1
CRnD-700 series
All models
3D-51C-WINE
Vision sensor
In-Sight 5000 series
Breakout cable
Network cable
C mount lens(*1)
COGNEX
Remarks
S/W version P4 or later
(5)
Software: 3.20 or later(5)
(5)
(5)
Provided by customer (*2)
(*3)
Hub
Lighting device
2.2.3. Configuration with three robot controllers (SD series) and one vision sensor
Below is shown the entire configuration (robot system) when one camera is used with three robots.
Figure 2-5 Configuration (Robot System) When One Camera Is Used with Three Robots
Below is a list of the equipment configuration when one camera is used with three robots.
Table 2-6 List of Configuration When One Camera Is Used with Three Robots
Part name
Format
Manufacturer
Q'ty
Remarks
Mitsubishi
Robot controller (*4)
CRnD-700 series
1
S/W version P4 or later
Electric
Robot main unit
All models
3
3D-51C-WINE
MELFA-Vision
1
(5)
Vision sensor Vision sensor
In-Sight 5000 series
1
Software: 3.20 or later
Breakout cable
1
Network cable
1
Lens
C mount lens(*1)
1
Provided by customer (*2)
PC
1
Hub
1
Lighting device
1
2.2.4. Configuration with one robot controller (SQ series) and one vision sensor
Below is shown the entire configuration (robot system) when one camera is used.
(Figure: configuration with the iQ Platform PLC system, robot CPU unit Q172DRCPU and drive unit (robot controller; the example is the CR2Q-700 series), robot, tool, vision sensor (In-Sight 5100/5400/5400C or 5401/5403), hub, personal computer, and 24 V power supply)
Part name                   Format                 Manufacturer          Q'ty   Remarks
Robot controller            CRnQ-700 series        Mitsubishi Electric   1      S/W version N1 or later
Robot main unit             All models             Mitsubishi Electric   1
MELFA-Vision                3D-51C-WINE                                  1      (*4)
Vision sensor  Vision sensor   In-Sight 5000 series   COGNEX            1      S/W version 3.20 or later (*4)
               Breakout cable                                            1      (*4)
               Network cable                                             1      (*4)
Lens                        C mount lens (*1)                            1      Provided by customer (*2)
24V power supply                                                         1      (*3)
PC                                                                       1
Hub                                                                      1
Lighting device                                                          1
(*1) Select from general C mount lenses. A CS mount lens is also possible for the 5400R.
(*2) The half tone (gray) sections are the equipment provided by the customer.
(*3) For the 24 VDC (±10%) power supply, the vision sensor requires a minimum of 350 mA (5403: a minimum of 500 mA / 5400R: a minimum of 250 mA).
(*4) It is attached to the network vision sensor set.
3. Specifications
3.1.Network vision sensor specifications
Here are the specifications of the network vision sensor by itself.
Table 3-1 Network Vision Sensor Stand-Alone Specifications
Magnificati
on ratio
Standard
5100
High-Perfor
mance
5400
x1
x2.5
Memory
Firmware Version
Resolution
Camera
Display
option
I/O option
(*6)
Interface
(*6)
Lighting
Application
development
Lens
mounting
Voltage condition
Power
The maximum current
supply
Image processing
Environme
ntal
Ambient temperature
(operation / storage)
Ambient humidity
Protection
Impact[G]
Vibration[G]
Certificatio
n
CE / FCC / UL / cUL
Color
5400C (*1)
High-resolution
5401 (*1)
x2
Remote Head
5400R (*1)
x2.5
0.032-1000
60
256
5403 (*1)
:32MB
:64MB
1600x1200
1/1.8 inch
0.027-1000
0.025-1000
15
40
20
16,777,216
640x480
1/3 inch
256
297.6
294.8
/2(*5)
(Communication lines:
3 lines)
C/CS
24 VDC ±10%
350mA
500mA
250mA
Pattern matching / Blob / Edge / Bar code / 2D codes / Text comparison / Histogram / Color
0 - 45°C (operation) / -30 - 80°C (storage) (*7)
90%, no condensation allowed
IP67 (when the lens cover is installed)
80 G (IEC68-2-27)
10 G, 10 - 500 Hz (IEC68-2-6)
(*1) The high-resolution, color, and remote head editions are supported from Ver. 1.1 of MELFA-Vision.
(*2) The performance values do not include the image capture speed.
(*3) The image capture speeds are the values with an exposure time of 8 ms and full image frame capture.
(*4) A lens cover (that comes with this sensor) is required to meet the NEMA standard protection specifications.
(*5) One high-speed output is for the strobe.
(*6) For the I/O and Ethernet cables, the maximum curve radius is 38 mm.
(*7) For the 5400R, the maximum operating temperature of the remote head is 50°C.
3.1.1. External Dimensions of Network Vision Sensor (5100/5400/5401/5403/5400C)
3.1.2. External Dimensions of Network Vision Sensor 5400R
Figure 3-2 External Charts of Network Vision Sensor 5400R (Processor part)
Unit:mm
Figure 3-3 External Charts of Network Vision Sensor 5400R (Remote Head part)
Unit:mm
Figure 3-4 External Charts of Network Vision Sensor 5400R (Bracket part)
3.2.
Number of vision sensors that can be controlled by one robot controller: 7 maximum
Number of robot controllers that can share one vision sensor: 3 maximum
3.3. MELFA-Vision
3.3.1. Features
MELFA-Vision is software that provides support for those using vision sensors for the first time and support
for connections between robot controllers and vision sensors. Below are the basic functions and features of
MELFA-Vision.
Function
Logon and logoff
Image operations
Capture request
2
Camera image
adjustment
Online and offline
3
Vision program writing
4
Recognition result
display
Robot controller
communication settings
Robot and
vision sensor calibration
Image Log
File transfer
Backup
Restore
Cloning
5
6
The correspondence between the COGNEX vision sensor models and MELFA-Vision versions is shown in the following table, which indicates the model names supported by each version.
Table 3-4 THE CORRESPONDENCE TYPE AND VERSION OF MELFA-VISION
(MELFA-Vision versions: Ver.1.0 / Ver.1.1 / Ver.1.1.1 / Ver.1.2)
In-Sight model name    Specification
5100                   Standard
5101                   Standard + High resolution 1 (1,024x768)
5103                   Standard + High resolution 2 (1,600x1,200)
5100C                  Standard + color
5400                   High performance
5401                   High performance + High resolution 1 (1,024x768)
5403                   High performance + High resolution 2 (1,600x1,200)
5400C                  High performance + color
5400R                  High performance + Remote head
5400S                  High performance + Stainless steel body
5403S                  High performance + High resolution 2 (1,600x1,200) + Stainless steel body
5400CS                 High performance + color + Stainless steel body
5600                   High speed
5603                   High speed + High resolution 2 (1,600x1,200)
1100 (Micro)           Micro standard
1400 (Micro)           Micro high performance
1403 (Micro)           Micro high performance + High resolution 2 (1,600x1,200)
1100C (Micro)          Micro standard + color
1400C (Micro)          Micro high performance + color
1403C (Micro)          Micro high performance + High resolution 2 (1,600x1,200) + color

3.3.2. Operating Environment
Disc device
Keyboard
Pointing device
Communications
Main memory
Hard disk
OS
Display
4. Work Charts
4.1.Work procedure chart
This chapter explains the work procedure for building a vision system using our robots.
Check the following procedure before working.
Start of work
Step 1 (Chapter 5): Prepare and connect the required equipment and install the software (p.5-23)
Step 2 (Chapter 6): Vision Sensor Initial Settings (p.6-29, p.6-33)
Step 3 and Step 4 (Chapter 7): Calibration settings and subsequent settings (p.7-54, p.7-59, p.7-70, p.7-72)
Maintenance (Chapter 8): (p.8-77)
5.1.Equipment preparation
The following equipment is required for building the vision system. Included is equipment that must be
provided by the customer, so prepare what is necessary for your system.
Table 5-1 List of Configuration When One Camera Is Used
Part name
5-23
Format
Robot controller
CRnQ-700 series
or
CRnD-700 series
All models
R32TB/R56TB
In-Sight 5000 series
C mount lens
2A-RZ365/2A-RZ375
Manufacturer
Q'ty
Mitsubishi
Electric
Mitsubishi
Electric
1
1
1
1
1
1
1
1
1
2
1
1
1
Remarks
CRnQ-700 series:
S/W version N1 or later
CRnD-700 series:
S/W version P4 or later
5.2.Equipment connection
This section explains how to connect the equipment prepared.
5.2.1. Individual equipment connections
(6) Connect the Ethernet straight cable to the hub and the other end to the PC.
CRnQ-7 series: connect the Ethernet cable to CNDISP of the drive unit (robot controller), and connect the other end to the hub.
CRnD-7 series: connect the Ethernet cable to LAN1 of the robot controller's Ethernet interface, and connect the other end to the hub.
The personal computer, the robot controller, and the vision sensor with its 24 V power supply are connected to the same hub.
Figure 5-2 System configuration
5.3.Software installation
This product comes with two CD-ROMs (In-Sight and MELFA-Vision). Each CD-ROM contains software
necessary for starting up the vision system.
This section explains how to install this software.
Before installing the vision sensor dedicated software (In-Sight Explorer), always check the model and type of
vision sensor and the version of the vision sensor dedicated software (In-Sight Explorer) being used.
Before installing MELFA-Vision, check the version of MELFA-Vision being used.
5.3.1. Vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later) installation
This section explains how to install the vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later).
(1) End all applications that are running
(2) Insert the In-Sight installation CD-ROM into the PC's CD-ROM drive. When the installation
program starts automatically, the following screen is displayed.
(3) Click the items indicated as not installed, and install each tool
(4) When installation is completed, the icon for the installed software will appear on the personal
computer's desktop
(5) Start the installed software to make sure it has been installed correctly
5.3.2. Vision sensor dedicated software (In-Sight Explorer before Ver.4.1) installation
This section explains how to install the vision sensor dedicated software (In-Sight Explorer before Ver.4.1).
(1) End all applications that are running.
(2) Insert the In-Sight installation CD-ROM into the PC's CD-ROM drive. When the installation
program starts automatically, the following screen is displayed.
(3) Select the language displayed on the right side of the screen.
(4) Click the [1] to [3] buttons in order to install the respective software.
(5) For [4], click it if your PC does not yet have Adobe Reader installed. Also click it to install Adobe Reader if you have an older version.
(6) When each piece of software has been installed, "Installed" is displayed next to that item on the installation program screen. Check that "Installed" is displayed next to [1] to [3]. Whether or not to install [4] is up to your judgment.
(7) When the installation is complete, the icons for the installed software are displayed on the PC's
desktop.
5.3.3.
MELFA-Vision installation
This section explains how to install MELFA-Vision (network vision sensor support software).
Install this product with the following procedure.
Caution
When installing, log in as a user with administrator authority. The system will not let you install if you log in as a user who does not have administrator authority.
When "MELFA-Vision" is installed in the personal computer, ".NET Framework 1.1" is also installed.
Microsoft Windows 2000 Professional Operating System
Microsoft Windows XP Professional Operating System
(1) Set this CD-ROM in the personal computer's CD-ROM drive. The Setup screen will be started
up automatically.
(2) If the screen does not start up automatically, carry out the following procedure.
(a) Select the [start] menu and [run]
(b) Check the CD-ROM drive name. Input as shown below.
"Drive name":\Setup.exe
(Example : If the CD-ROM drive is "D:", this will be "D:\Setup.exe".)
The Product ID is printed on the License Certificate.
5.3.4. USB driver installation (CRnD-700 series)
Connecting the CRnD-700 series robot controller with USB requires installation of the robot USB driver. Install with
the following procedure.
Caution
If the USB driver cannot be installed, check the following setting.
<When Windows 2000 is used>
If you have selected "Block-Prevent installation of unsigned files" after [Control Panel] - [System] [Hardware] - [Driver Signing], the USB driver may not be installed.
Choose "Ignore-Install all files, regardless of file signature" or "Warn-Display a message before
installing an unsigned file" for [Driver Signing], and install the USB driver.
<When Windows XP is used>
If you have selected "Block-Never install unsigned driver software" after [Control Panel] - [System] [Hardware] - [Driver Signing], the USB driver may not be installed.
Choose "Ignore-Install the software anyway and don't ask for my approval" or "Warn-Prompt me each
time to choose an action" for [Driver Signing], and install the USB driver.
(Completed)
5.3.5. USB driver installation (CRnQ-700 series)
Connecting the CRnQ-700 series robot controller with USB requires installation of the robot USB driver. Install with
the following procedure.
Caution
If the USB driver cannot be installed, check the following setting.
<When Windows 2000 is used>
If you have selected "Block-Prevent installation of unsigned files" after [Control Panel] - [System] [Hardware] - [Driver Signing], the USB driver may not be installed.
Choose "Ignore-Install all files, regardless of file signature" or "Warn-Display a message before
installing an unsigned file" for [Driver Signing], and install the USB driver.
<When Windows XP is used>
If you have selected "Block-Never install unsigned driver software" after [Control Panel] - [System] [Hardware] - [Driver Signing], the USB driver may not be installed.
Choose "Ignore-Install the software anyway and don't ask for my approval" or "Warn-Prompt me each
time to choose an action" for [Driver Signing], and install the USB driver.
(1) When using Windows 2000
The following indicates the procedure for installing the USB driver when using Windows 2000.
1) The screen shown on the left appears
when you connect the personal
computer and Universal model QCPU by
the USB cable.
Click the [Next] button.
(Completed)
(3) Select and click [Add Sensor/Device To Network] from the displayed screen's menu bar
6-36
(5) The devices to add to the network will appear, so select the displayed device and input the IP
address. When finished inputting, click the [Close] button.
Key point: For details on the In-Sight connection manager, see the In-Sight Explorer help.
6-37
Key point: For details on the In-Sight connection manager, see the In-Sight Explorer help.
6-38
(5) Your PC's [Subnet mask] (mandatory), [Default gateway] (option), [DNS server] (option), [Domain]
(option) settings are automatically acquired and displayed. Check that these values are correct, then
click the [Next] button.
6-39
(7) Click the [Configure] button, cut off the power for the vision sensor, wait at least 5 seconds, then
switch the power back on again.
(8) When "Settings complete" is displayed in the [Status] column, the settings are complete. Finally click
the [Close] button to close the screen.
6.3. Work recognition test
6.3.1. Starting MELFA-Vision (network vision sensor support software)
This section explains the procedure for starting MELFA-Vision, which can easily execute a work recognition
test.
(1) From the Windows Start menu, click [All Programs] - [MELSOFT Application] - [RT ToolBox] - [MELFA-Vision] to start "MELFA-Vision".
(2) Select the appropriate vision sensor from the displayed vision sensor list, then click the [Log On]
button.
Input a [User Name] whose access rights for the vision sensor are "Full access" and the [Password],
then click the [OK] button.
* The [User Name] and [Password] are registered in the vision sensor. The default setting is a user
name of "admin" with no password. If the user ID and password have been changed, input the new
user name and password.
6-41
(1) Window
The default window size is "800x600".
(2) Title
The title character string is "MELFA-Vision [logged in vision sensor name]".
6-42
Menu
File
View
Sensor
Controller
Help
6-43
Item
Button
Tool tip
Logon/Logoff
Online/Offline
Manual Trigger
Live Mode
Zoom In
Zoom Out
Zoom to Max
Zoom 1:1
Zoom to Fit
Zoom to Fill
Image Log
This makes the settings for FTP transfer of images captured with the vision
sensor to the PC.
On: Image log reception enabled
Off: Image log reception disabled
Operation
This is the host name of the vision sensor logged onto. Blank when vision sensor logged off.
Displays the name of the job being edited.
Displays the recognition count set with the recognition conditions on the job editing screen.
Displays the threshold set with the recognition conditions on the job editing screen.
Displays the start angle set with the recognition conditions on the job editing screen.
Displays the end angle set with the recognition conditions on the job editing screen.
Operation
Displays the background color and the target color for recognition set with the color setting on the
job editing screen.
Displays the minimum and maximum values set with the work surface area on the job editing
screen.
Displays the threshold for the grayscale minimum set with the grayscale threshold value on the job
editing screen.
6-44
Explanation
A job (vision program) is newly created.
A job is deleted.
The job list is refreshed.
A job is edited (changed).
The name of a job is changed.
The job is named and saved.
Control
Left frame
Center frame
Right frame
6-45
Explanation
This displays the mouse position image information.
(X coordinate value, Y coordinate value) = Contrast value
When the vision sensor status changes, the following character strings are
displayed. Anything else is blank.
"Online"
"Offline"
"Live"
"Incomplete online"
"Discrete online"
This displays the PC image log reception status.
When reception enabled: "Image log reception enabled"
When disabled: Blank
6.3.2.
Image adjustment
This section explains how to adjust the brightness and diaphragm (aperture) for the image captured by the vision sensor.
(1) Check the image shot with MELFA-Vision [Camera Image].
From the MELFA-Vision menu, click [Sensor] - [Live Mode], or click the Live Mode button on the tool bar, to put MELFA-Vision into live image mode.
Put the work to be recognized under the vision sensor and check the resulting image with MELFA-Vision [Camera Image].
(2) If the field of vision is not appropriate, adjust the distance between the vision sensor and the
work or replace the lens.
When the image is too large
6-46
If the appropriate brightness cannot be achieved by just adjusting the diaphragm, provide different lighting.
Too bright
Too dark
6-47
6-48
6.3.3. Image processing settings
This section explains how to make the image processing settings, using pattern matching image processing
(only one robot, results output as robot absolute coordinate values) as an example. For details on other image
processing, see "9.3.1 Templates provided for MELFA-Vision".
(1) Click [New] under Job (Vision Program) List at the left of the MELFA-Vision main screen. Select
the process method from the displayed [Processing Method] screen, and then click the [OK]
button.
6-49
Track bar
(3) When you change all the displayed items, then click the [Test] button, the picture is displayed for
when the setting is changed to the main screen [Camera Image], so adjust for clear contrast
between the work and the background. For details on the setting items, see below.
Setting item
Exposure
Gain
Orientation
Normal
Mirrored horizontally
Flipped vertically
Rotated 180 degrees
6-50
Setting range
Camera
Continuous
External
Manual
Network
Explanation
This specifies the image take-in trigger for when the vision
sensor is "online".
[Camera]
The image is taken in at the rising edge detected at the
camera hardware trigger input port.
[Continuous]
Images are taken in continuously.
[External]
The image is taken in at the rising edge of a discrete I/O (*1)
input bit or serial command.
[Manual]
When the <F5> key is pressed.
[Network]
The image is taken in when the trigger is input to the master
vision sensor on the network.
Direction of a camera: facing down or facing up is specified with the [Camera] tab.
Photographed picture: the picture shows the front side or the back side of the work depending on the direction of the camera.
Recognition result: when the same work is photographed facing down and facing up and the recognition result gives the same vision sensor and robot coordinates, the C axis component is plus for the downward-facing (front side) image and minus for the upward-facing (reverse side) image.
(*1) For details on discrete I/O, see the "In-Sight Installation Guide" that comes with this system.
6-51
6-52
When you change a displayed setting item, then click the [Test] button, the results of image
processing under the specified conditions are displayed at the main screen [Camera Image], so
check whether or not the work is correctly recognized. For details on the setting items, see
below.
Setting item
Number to Find
Accept
Find
Tolerances
Sort By
1 - 100
Angle
Start
Angle
End
-180 - 180
-180 - 180
This sets how much the detected work must match the
registered work in order to be recognized.
For the vision sensor, the degree of matching of the detected
work is expressed as 1-100%. Work whose degree of
matching is lower than the value set here is not recognized.
Sets the detected work tilt (start angle to end angle).
This sets the start angle and end angle with the angle for the registered work as 0°.
None
X
Y
6-56
When 3 is set
When you click the "Job Editing" screen [Test] button, the three pieces of work with the highest
degree of match are detected. They are displayed with "+" pointer mark and a number from 0 to
2 in order of highest degree of match attached to each piece of work.
6-57
When you click the "Job Editing" screen [Test] button, even though there is one piece of work
at the top left, two pieces of work are recognized. Large work is also recognized and the
recognition count becomes 7.
When you click the "Job Editing" screen [Test] button, only the registered four pieces of work are
recognized, which is correct.
6-58
When you click the "Job Editing" screen [Test] button, only work is detected that is within the
45 range with the registered work angle as 0.
When you click the "Job Editing" screen [Test] button, only work is detected that is within the
range 45 to +180 with the registered work angle as 0.
6-59
Sort direction: Y
When you click the "Job Editing" screen [Test] button, the recognized work is displayed with a
number from 0 in order of the +Y direction (from left to right in the figure above) of the frame
specified with the search area specification.
6-60
In the initial display, recognition results monitors for three robots are displayed.
To view just the results for [Robot 1], move the mouse pointer to the right edge of the screen
and while dragging the screen right edge, move the mouse to the left.
6-61
(10)Click the "Job Editing" screen [Close] button to close the "Job Editing" screen.
6-62
6-63
(Teaching box operation: set the TB ENABLE switch to enable (lamp lit: enabled, lamp out: disabled), select MANUAL mode, open 3.PARAM, and input the parameter name and data.)
(3) From the Windows Start menu, click [All Programs] - [MELSOFT Application] - [RT ToolBox] - [MELFA-Vision] to start "MELFA-Vision".
Click the [Detail] button to display the "TCP/IP Communication Protocol" screen.
IP Address
Robot Name (here, [For Vision])
Check that all the frames on the "Communication Server" main screen become light blue. If a frame
is green, redo the setting.
(5) This sets the parameters for the robot controller and vision sensor to communicate.
From MELFA-Vision menu, select [Controller] [Communication Setting] to display the
"Communication Setting" screen.
This sets the device number for the COM number used. Here is an example in which a COM number
of "COM2:" is used and the setting content is "OPT15".
Click the "COM2" pull-down, then select "OPT15".
7-66
On the displayed "Device Setting" screen, switch On the [Change the Parameter to connect Vision]
checkbox, then input the vision sensor IP address as the IP Address.
Click the [OK] button and check that a "*" is displayed in the "Communication Setting" screen
"Device List".
7-67
If the parameters were written normally, click the [Exit] button to close the " Communication Setting"
screen.
7-68
7.2.Calibration Setting
Calibration is a function that converts the vision sensor coordinate system into the robot coordinate system.
This calibration work is necessary for recognizing what position in the robot coordinate system the recognized
work is at. If this setting is not made, the coordinates for work recognized by the vision sensor display the
results in the sensor coordinate system.
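As an illustration of the idea (this is only a general sketch, not the internal algorithm of MELFA-Vision), such a calibration can be expressed as an affine conversion whose coefficients a, b, c, d, t_x, t_y are determined from the marking points taught in the procedure below, where (x_v, y_v) are vision sensor (image) coordinates and (X_r, Y_r) are robot coordinates:

\begin{pmatrix} X_r \\ Y_r \end{pmatrix} =
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} x_v \\ y_v \end{pmatrix} +
\begin{pmatrix} t_x \\ t_y \end{pmatrix}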
This section explains calibration work using MELFA-Vision.
(1) Prepare the equipment used in calibration work.
Prepare four marking labels (copy the marking sheet in the appendix, align it with the image field
of vision and make enlarged and reduced copies) and the calibration jigs (for example a hand
with sharpened tip for specifying the center of the marking label with the robot).
(2) Set MELFA-Vision to a live image.
From the MELFA-Vision menu, click [Sensor] - [Live Mode], or click the Live Mode button on the MELFA-Vision tool bar, to put MELFA-Vision into live image mode. Check that the button sinks (appears pressed).
(3) Adjust the mark positions so that the four marking labels for calibration fit in the screen. Here is an example in which the marking sheet in the appendix is placed. Here, the four marks are set to be marks 1-4 as in the figure below.
Mark 1
Mark 2
Mark 3
Mark 4
(5) From the MELFA-Vision main screen, select [No. 1] in the [Calibration Data List]. This section
explains [No. 1] data creation.
(6) On the "Create Calibration Data" screen, click the [About How to specify Reference Point] button
to check the calibration operation method.
(Figures: Mark 1 to Mark 4)
On the "Create Calibration Data" screen, click the [Position] button for the first point to acquire
the robot's current position.
(15)Input a comment.
In the "Create Calibration Data" screen [Comment] column input a comment to make the
meaning of this work easy to understand, then click the [Create Data] button.
On the displayed "Job Editing" screen, click the [Processing Conditions] tab.
Click the "Calibration No." "Robot 1:" pull-down, then with the "Job Editing" screen
[Processing Conditions] tab, select "1" as the [Calibration No.]
From the MELFA-Vision menu, when you click [Sensor] [Recognition Test Results], the
coordinates for the recognized work are displayed with the robot coordinate system.
7.3. Robot Program Writing
7.3.1. Flow for starting of image processing by robot
Check the line connection with the vision sensor (status variable: M_NVOpen)
Line connection with the vision sensor (MELFA-BASIC V: NVOpen)
Vision program start (MELFA-BASIC V: NVPst)
Vision sensor detection quantity acquisition (status variable: M_NvNum)
Vision detection position data acquisition (status variables: P_NvS1 - P_NvS8)
After this, the robot is moved with the position data detected with the vision sensor.
For details on the vision program dedicated MELFA-BASIC V commands and status variables, see "9.1
Vision Sensor Dedicated Commands and Status Variables".
7.3.2. Writing a Sample Robot Program
1 ' Before this program is run, the evacuation position P0, the work grasping position P1, and the work placement position P2 must have already been taught.
2 ' Example: P0=(+250.000,+350.000,+300.000,-180.000,+0.000,+0.000)(7,0)
3 '          P1=(+500.000,+0.000,+100.000,-180.000,+0.000,+10.000)(7,0)
4 '          P2=(+300.000,+400.000,+100.000,-180.000,+0.000,+90.000)(7,0)
5 If M_NVOpen(1)<>1 Then              ' When logon has not been completed for vision sensor number 1
6   NVOpen "COM2:" As #1              ' Connects with the vision sensor connected to COM2.
7 EndIf
8 Wait M_NVOpen(1)=1                  ' Waits for the logon to vision sensor number 1 to be completed.
9 NVPst #1,"Job1","E76","J81","L85",0,10   ' Starts vision program [Job1], receives the number of recognitions from the [E76] cell,
10 '                                         receives the recognized coordinates from the [J81]-[L85] cells, and stores them in P_NvS1().
11 Mov P0                             ' Moves to the evacuation point.
12 If M_NvNum(1)=0 Then *NG           ' If the detection count is 0, jumps to the error processing.
13 For M1=1 To M_NvNum(1)             ' Loops once for each detection by vision sensor number 1.
14 P10=P1                             ' Creates the target position P10 using the vision sensor 1 result data.
15 P10.X=P_NvS1(M1).X                 ' Sets the X coordinate recognized by the vision sensor.
16 P10.Y=P_NvS1(M1).Y                 ' Sets the Y coordinate recognized by the vision sensor.
17 P10.C=P_NvS1(M1).C                 ' Sets the C (rotation) component recognized by the vision sensor.
18 Mov P10,10                         ' Moves to a point offset 10 mm from the grasping position in the tool direction.
19 Mvs P10                            ' Moves to the grasping position with linear interpolation.
20 Dly 0.1                            ' Waits 0.1 s.
21 HClose 1                           ' Closes hand 1 to grasp the work.
22 Dly 0.2                            ' Waits 0.2 s.
23 Mvs P10,10                         ' Moves back to the offset point near the grasping position.
24 Mov P2,10                          ' Moves to a point offset 10 mm from the placement position in the tool direction.
25 Mvs P2                             ' Moves to the placement position with linear interpolation.
26 Dly 0.1                            ' Waits 0.1 s.
27 HOpen 1                            ' Opens hand 1 to release the work.
28 Dly 0.2                            ' Waits 0.2 s.
29 Mvs P2,10                          ' Moves back to the offset point near the placement position.
30 Next M1
31 Hlt                                ' Stops the program.
32 End
33 '
34 *NG                                ' Error processing when no work is detected
35 Error 9000                         ' Generates error 9000.
36 Hlt
37 End
(Teaching box operation screens for teaching the positions: TEACH mode, enable switch, FILE/EDIT, select the program, jump to the STEP number, Position Edit screen.)
7.4.1.
7.4.2.
CAUTION
There are commands that do not complete in a single step execution.
If execution does not move to the next step when the step is executed once, execute the step again.
Example: NVOpen requires at least seven repetitions of step execution.
7.4.3.
This section explains the work for starting the stored robot program "1" with the robot controller
operation panel (O/P).
Turn the operation panel key switch in the "Auto (Op)" direction.
Turn the controller's MODE select switch to [AUTOMATIC].
Press the [CHNG DISP] button to display the override at the Status Number.
Press the [UP/DOWN] button to set the Status Number display to "o.010". (This sets the robot
override to 10%.)
Press the [CHNG DISP] button to display the robot program number at the Status Number.
Press the [UP/DOWN] button to set the Status Number display to "P.0001". (This selects robot
program 1.)
Press the [SVO ON] button to switch On the robot servo power supply.
Check around the robot to make sure that everything will be safe even if the robot operates.
Press the [START] button.
The main screen [Camera Image] displays the recognition results and the robot transports all the
work recognized by the vision sensor. After transporting, the robot program stops.
7.5.1.
(3) If the position of the recognized work is abnormal, re-edit the MELFA-Vision job.
7.5.2.
Comparison of the position data for the work recognized by the vision sensor
and the position data received by the robot
Check if the robot received the work position data normally from the vision sensor.
(1) From the main screen menu, click [Sensor] - [Recognition Test Results].
(3) Select the line connecting the robot controller and the vision sensor (in the explanation up till now
"COM2:"), then click the [Recognition Details] button.
(4) Compare the "Display Test Result(s)" screen and "Detail Monitor" screen "P_NvS1" values to check
if the robot controller is receiving the data normally.
(5) If the work position data received by the robot is abnormal, check the [Start Cell] and [End Cell]
positions specified with the robot program "NVPst" command.
(6) If the work position data received by the robot is normal, re-do the calibration setting.
8. Maintenance
This chapter explains overall maintenance of the vision sensor: data backup and restoration, the image log
function, vision sensor cloning, the startup function, and user list registration.
(3) If you select a vision sensor other than the one currently logged on to, the "User Name And Password"
screen is displayed, so input the user name and password for the vision sensor to be backed up,
then click the [OK] button.
This screen is not displayed if the currently logged on vision sensor is selected with the "Sensor
List".
This screen is also not displayed if a sensor is selected that is not logged on but that can be logged
on to with the currently logged on user name and password.
(4) A confirmation screen is displayed, so check the contents, then click the [Yes] button.
* A vision sensor can be backed up even when it is online, but file transfer operations may delay the
robot and vision sensor access.
(5) When the backup starts, the indicator progresses as on the screen below.
To cancel a backup that is underway, click the [Stop] button.
(3) If you select a vision sensor other than the one currently logged on to, the "User Name and
Password" screen is displayed, so input the user name and password for the vision sensor to be
restored, then click the [OK] button.
This screen is not displayed if the currently logged on vision sensor is selected with the "Sensor
List".
This screen is also not displayed if a sensor is selected that is not logged on but that can be logged
onto with the currently logged on user name and password.
(4) A confirmation screen is displayed, so check the contents, then click the [Yes] button.
(5) A confirmation screen is displayed to ask whether or not to enable restoration of the vision sensor
network setting files.
To restore the vision sensor setting file (proc.set) and the host table file (hosts.net) vision sensor
network setting file too, click the [Yes] button.
CAUTION
Only restore a network setting file to the sensor it was backed up from.
Restoring a network settings file to any other sensor can cause trouble.
(6) When the restoration starts, the indicator progresses as on the screen below.
To cancel a restoration that is underway, click the [Stop] button.
(3) If the clone source vision sensor and the vision sensor(s) to be cloned to are different, the
following warning message is displayed.
To continue the work, click the [Yes] button; to cancel it, click the [No] button.
(4) A confirmation screen is displayed, so check the contents, then click the [Yes] button.
(5) When the cloning work starts, the indicator progresses as on the screen below.
To cancel cloning work that is underway, click the [Stop] button.
(2) Enter the FTP server user name and password on the displayed "Image Log Setting" screen.
This user name and password are the ones for the FTP server and are different from the user name
and password for logging on to the vision sensor. However, the same user name and password may
be set for both.
Also, enter here the same user name and password as for the "Job Editing" screen "Image Log" tab
"User Name of FTP" and "Password of FTP".
For details on the Image Log tab setting method, see "9.2.2 Job Editing screen ([Image Log] tab)
".
The storage destination for acquired images can be changed with the [Browse] button.
When the settings are complete, click the [OK] button to close the "Image Log Setting" screen.
The next time the "Image Log Setting" screen is opened, the screen is opened with the same
settings as the previous time. To make the same settings as the previous time, remove the check
from the [Change User Name] check box.
(3) Starting the image log starts the FTP server.
From the MELFA-Vision menu, click [Sensor] - [Image Log] - [Start Log], or click the corresponding
button on the MELFA-Vision tool bar.
When the FTP server starts up and image log reception becomes possible, an indicator is displayed
at the right end of the status bar of the MELFA-Vision main screen.
In this state, the vision sensor images are stored in the specified folder under the conditions set with
the "Job Editing" screen "Image Log" tab.
(4) Ending the image log stops the FTP server.
From the MELFA-Vision menu, click [Sensor] - [Image Log] - [Quit Log], or click the corresponding
button on the MELFA-Vision tool bar.
Access      Explanation
Full        The user has full access (without restriction) to the vision sensor. The job can be loaded, edited, and stored.
            Normally log on with this right when using MELFA-Vision.
Protected   This user is not permitted to do FTP writing under the initial conditions. However, it is possible for writing to be permitted.
Locked      This user is only permitted to check the vision sensor processing state with the MELFA-Vision camera picture.
There are two types of display item settings - normal and custom; for MELFA-Vision, the custom
view is not displayed even if custom is selected.
FTP writing and reading can be permitted and prohibited with [Yes] and [No].
Also, the following three types of users can be set for the initial state for the vision sensor. The
respective settings are shown in the table below.
Table 8-2 Registered User Name List
User name   Password   Access      Display   FTP writing   FTP reading
admin       None       Full        Normal
monitor     None       Locked      Custom
operator    None       Protected   Custom
(3) To add a user, click the [Add] button on the "User List" screen; to edit an existing user, select the user
from the list and click the [Edit] button.
The "User" screen is displayed, so set the required items and click the [OK] button.
(4) To delete an existing user, select the user from the "User List" screen list and click the [Delete]
button.
Check the contents of the confirmation screen, then click the [Yes] button.
9.1.1.
[Function]
[Format]
[Term]
[Sample sentence]
[Explanation]
[Error]
9.1.2. MELFA-BASIC V Commands
Here are the dedicated vision sensor commands. These commands can be used by the following software
versions.
CRnQ-700 series: N1 or later
CRnD-700 series: P1 or later
Table 9-1 List of Dedicated Vision Sensor Commands
Command word   Contents
NVOpen         Connects with the vision sensor and logs on to the vision sensor.
NVPst          Starts the specified vision program and receives the recognition results.
NVRun          Starts the specified vision program.
NVIn           Receives the recognition results of the vision program.
NVClose        Cuts the line with the vision sensor.
NVLoad         Loads the specified vision program into the vision sensor.
NVTrg          Requests the vision sensor to capture an image.
Error example:
1 Open COM2: As #1
2 NVOpen COM2: As #2     ' <COM number> already used
3 NVOpen COM3: As #1     ' <Vision sensor number> already used
It is not possible to open more than one line in a configuration with one robot controller and one vision
sensor. If the same IP address as the one set in the [NETHSTIP] parameter is set, an "Ethernet
parameter NETHSTIP setting" error occurs.
4) Logging on to the vision sensor requires the "User name" and "Password". It is necessary to set a user
name for which full access is set in the vision sensor and the password in the robot controller [NVUSER]
and [NVPSWD] parameters.
The user name and password can each be any combination of up to 15 numbers (0-9) and letters (A-Z).
(The T/B only supports uppercase letters, so when registering a new user, set the password in the
vision sensor using uppercase letters.)
The user name with full access rights when the network vision sensor is purchased is "admin". The
password is "". Therefore, the default values for the [NVUSER] and [NVPSWD] parameters are
[NVUSER] = "admin" and [NVPSWD] = "".
When the "admin" password is changed with MELFA-Vision or a new user is registered, change the
[NVUSER] and [NVPSWD] parameters. When such a change is made, when the content of the
[NVPSWD] parameter is displayed, "****" is displayed. If the vision sensor side password is changed,
open the [NVPSWD] parameter and directly change the displayed "****" value. After the making the
change, reset the robot controller power.
[Caution]
When multiple vision sensors are connected to one robot controller, set the same user name and
password for all of them.
5) The state of communications with the network vision sensor when this command is executed can be
checked with M_NVOpen. For details, see the explanation of M_NVOpen.
6) If the same <COM number> is used in another task, the "attempt was made to open an already
open communication file" error occurs.
Example:
SLOT 2
10 NVOpen "COM2:" As #1
20 :
SLOT 3
10 NVOpen "COM2:" As #2
20 :
"COM2:" is used in both SLOT 2 and SLOT 3, so an error occurs.
7) If the same vision sensor number is used in another task, the "attempt was made to open an
already open communication file" error occurs.
Example:
SLOT 2
10 NVOpen "COM2:" As #1
20 :
SLOT 3
10 NVOpen "COM3:" As #1
20 :
Vision sensor number 1 is used in both SLOT 2 and SLOT 3, so an error occurs.
8) A program start condition of "Always" and the continue function are not supported.
9) Three robots can control the same vision sensor at the same time. If a fourth robot logs on, the line for
the first robot is cut off, so be careful when constructing the system.
10) The line is not closed with an End command in a program called out with a Callp command, but the line
is closed with a main program End command. The line is also closed by a program reset.
11) If an interrupt condition is established while this command is being executed, the interrupt processing is
executed immediately even during processing of this command.
[Error]
1) If the data type for an argument is incorrect, a "syntax error in input command" error is generated.
2) If there is an abnormal number of command arguments (too many or too few), the "incorrect argument
count" error occurs.
3) If the character specified in <COM number> is anything other than "COM2:" through "COM8:", the
"argument out of range" error occurs.
4) If the value specified as the <vision sensor number> is anything other than "1" through "8", the
"argument out of range" error occurs.
5) If a <COM number> for which the line is already connected is specified (including the <File number> for
which the line has been opened with an Open command), the "attempt was made to open an already
open communication file" error occurs.
6) If the vision sensor is not connected before the line is opened, a "vision sensor not connected"
error occurs. (The manufacturer parameter [COMTIMER], the same as for the Ethernet specification, is
used; currently set to 1 s.)
7) If the same <COM number> or the same <vision sensor number> is specified in another task, the
"attempt was made to open an already open communication file" error occurs.
8) If the user name or password specified in the [NVUSER] parameter (user name) and [NVPSWD]
(password) is wrong, the "wrong password" error occurs.
9) If the communications line is cut while this command is being executed, the "abnormal
communications" error occurs and the robot controller side line is closed.
10) If a program is used for which the starting condition is "Always", the "this command can not be used if
the start condition is ERR or ALW" error occurs.
When creating a vision program this way and acquiring the data (X, Y, C) only for Robot 1, specify
<Start cell> = "J96" and <End cell> = "L98".
<Type> (cannot be omitted)
Specifies the type of status variable in which the results recognized by the vision sensor are stored.
One cell can store multiple data items delimited by commas; however, one cell is limited to 255 characters or less.
The character string data stored in the cells from <Start cell> to <End cell> (a single value or
comma-delimited values per cell) is stored in a position type, numeric type, or character string type
status variable according to the <Type> specification.
Setting range: 0 - 7
<Type>   Data in the cell                                   Status variable in which the data is stored
0        One data item per cell                             P_NvS*() (position type)
1        One data item per cell                             M_NvS*() (single-precision real number type)
2        One data item per cell                             C_NvS*() (character string type)
3        One data item per cell                             M_NvS*() and C_NvS*() (single-precision real number type and character string type)
4        Two or more comma-delimited data items per cell    P_NvS*() (position type)
5        Two or more comma-delimited data items per cell    M_NvS*() (single-precision real number type)
6        Two or more comma-delimited data items per cell    C_NvS*() (character string type)
7        Two or more comma-delimited data items per cell    M_NvS*() and C_NvS*() (single-precision real number type and character string type)
(*1) The "*" in each status variable name corresponds to the specified <Vision sensor number>.
The position data P_NvS*() is converted into numeric values and stored in the coordinate elements of the
position variable in order. Any character that cannot be converted is stored as "0".
Data in the fourth and subsequent rows of the range specified by <Start cell> and <End cell> is not acquired.
The numeric data M_NvS*() is converted into numeric values and stored in the numeric variable.
Any character that cannot be converted is stored as "0".
M_NvS*() is a two-dimensional array, and all the data specified by <Start cell> and <End cell> can be stored.
The contents are explained in the [Explanation] for M_NvS*().
The character string data C_NvS*() is stored as-is in the character string variable.
The vision sensor returns vision program functions, kanji codes, and the like as the "#" character, and
returns blank cells as the NULL character; in both cases the value is stored in C_NvS*() as the NULL character.
C_NvS*() is a two-dimensional array, and all the data specified by <Start cell> and <End cell> can be stored.
An example of storing comma-delimited information of up to 255 characters in one cell is shown below.
When "4" is specified for the <Type>, "J91" for the <Start cell>, and "J91" for the <End cell> in this
example, the following result is obtained.
Variable     Data (X, Y, Z, A, B, C, L1, L2)
P_NvS1(1)    (+336.43, -71.14, +0.00, +0.00, +0.00, +122.27, +0.00, +0.00)
P_NvS1(2)    (+344.10, +151.54, +0.00, +0.00, +0.00, -5.78, +0.00, +0.00)
P_NvS1(3)    (+224.58, +274.84, +0.00, +0.00, +0.00, +31.24, +0.00, +0.00)
P_NvS1(4)    (+0.00, +0.00, +0.00, +0.00, +0.00, +0.00, +0.00, +0.00)
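As a reference, the following is a minimal robot program sketch for receiving this kind of comma-delimited data with <Type> = 4. The vision program name "Job1", the recognition count cell "E76", the taught position P1, and the use of COM2 are assumptions for illustration; "J91" is the cell used in the example above.

10 If M_NVOpen(1)<>1 Then            ' When logon has not been completed for vision sensor number 1
20 NVOpen COM2: As #1                ' Connects with the vision sensor connected to COM2.
30 EndIf
40 Wait M_NVOpen(1)=1                ' Waits for logon to be completed.
50 NVPst #1,Job1,E76,J91,J91,4,10    ' <Type>=4: the comma-delimited data in cell J91 is stored in P_NvS1().
60 If M_NvNum(1)=0 Then *LEND        ' Skips the motion if nothing was recognized.
70 P10=P1                            ' P1 is a previously taught grasping position (assumption).
80 P10.X=P_NvS1(1).X                 ' Overwrites X, Y, and C with the values of the first recognized work.
90 P10.Y=P_NvS1(1).Y
100 P10.C=P_NvS1(1).C
110 Mov P10
120 *LEND
130 NVClose #1
140 End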
When, as in the figure above, the information for the first robot is stored in vision program sheet cells
<J96> through <M98> and the information for the second robot is stored in cells <O96> through <R98>,
<J96> and <M98> are specified as the <Start cell> and <End cell>.
When "1" is specified as the <Type> with the NVPst command, the data is stored in M_NvS1() as follows.
M_NvS1() (rows 1 - 9, columns 1 - 5)

Row \ Column   1         2         3         4     5
1              347.147   381.288   310.81    0.0   0.0
2              -20.232   49.018    43.65     0.0   0.0
3              -158.198  10.846    -34.312   0.0   0.0
4              97.641    97.048    0.0       0.0   0.0
5              0.0       0.0       0.0       0.0   0.0
6              0.0       0.0       0.0       0     0
7              0.0       0.0       0.0       0     0
8              0.0       0.0       0.0       0     0
9              0.0       0.0       0.0       0     0
M_NvS1() (rows 1 - 5, columns 1 - 9)

Row \ Column   1        2        3         4       5    6        7        8         9
1              347.147  -20.232  -158.198  97.641  0.0  110.141  120.141  72.645    97.641
2              381.288  49.018   10.846    97.048  0.0  89.582   99.582   -118.311  97.048
3              310.81   43.65    -34.312   0.0     0.0  139.151  149.151  -163.469  95.793
4              0.0      0.0      0.0       0.0     0.0  0.0      0.0      0.0       0.0
5              0.0      0.0      0.0       0.0     0.0  0.0      0.0      0.0       0.0
8) Up to three robots can control the same vision sensor at the same time, but this command can not be
used by more than one robot at the same time. Use this command on any one of the robots.
Example of tracking system with three robots and one vision sensor
(Figure: three controllers sharing one vision sensor - 1) the controller (master) issues the photography request, 2) it gives reception permission to the other controllers, 3) each controller receives its data.)
<Procedure>
Of the three robots, one is set as the master and the controller (master) outputs the "image capture
request" to the vision sensor with the NVPst command. The vision sensor starts the image capture and
when it is complete, returns that to the controller (master).
The controller (master) outputs the "reception enabled" notice to the other two robots. (Taking cost and
degree of difficulty into account, we recommend connecting the robots with I/O. The robots are also
connected with Ethernet, so interactive notification by text string transmission/reception is also possible.)
The respective robots receive the information they respectively require with NVIn commands.
Example assembling system with two robots and one vision sensor
(Figure: two controllers sharing one vision sensor - controller A: 1) "Using" On, 2) photography request, 3) receiving data, 4) "Using" Off; controller B then performs the same sequence as steps 5) - 8).)
<Procedure>
1) The controller that will use the vision sensor checks that the vision sensor is not being used by the
other controller, then switches On the "Using" signal to that controller.
2) It outputs the "image capture request" to the vision sensor.
3) When the vision sensor image processing is complete, the controller receives the necessary data.
4) The controller switches Off the "Using" signal it had output to the other controller.
5) The other controller executes Steps 1) - 4).
In this way, the two robot controllers use the vision sensor alternately or as necessary. A minimal
sketch of this interlock is shown below.
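In this sketch, the output signal number 11 (this controller's "Using" signal), the input signal number 12 (the other controller's "Using" signal), the vision program name "Job1", and the cell positions are assumptions; the line to the vision sensor is assumed to have already been opened as #1 with NVOpen.

10 Wait M_In(12)=0                  ' Waits until the other controller is not using the vision sensor.
20 M_Out(11)=1                      ' 1) Switches On the "Using" signal to the other controller.
30 NVPst #1,Job1,E76,J81,L84,1,10   ' 2) - 3) Image capture request and reception of the necessary data.
40 M_Out(11)=0                      ' 4) Switches Off the "Using" signal.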
If an interrupt condition is established while this command is being executed, the interrupt processing is
executed immediately.
[Errors]
1) If the data type for an argument is incorrect, a "syntax error in input command statement" error is
generated.
2) If there is an abnormal number of command arguments (too many or too few), an "incorrect argument
count" error occurs.
3) If the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error
occurs.
4) If the NVOpen command is not opened with the number specified as the <vision sensor number>, an
"abnormal vision sensor number specification" error occurs.
5) If the <vision program name> exceeds 15 characters, an "abnormal vision program name" error
occurs.
6) If a <vision program name> uses a character other than "0" - "9", "A" - "Z", "-", or "_" (including
lowercase letters), an "abnormal vision program name" error occurs.
7) If the program specified in the <vision program name> is not in the vision sensor, a "vision program
not present" error occurs.
8) If the <Recognition count cell>, <Start cell>, or <End cell> contains a number other than "0" - "399" or a
letter other than "A" - "Z", an "argument out of range" error occurs.
9) If there is no value in the cell specified in "Recognition count cell", an "incorrect value in recognition
count cell" error occurs.
10) If the <Start cell> and <End cell> are reversed, a "specified cell value out of range" error occurs.
11) If the number of data items included in the cells specified by <Start cell> and <End cell> exceeds 90,
a "specified cell value out of range" error occurs.
12) If the range specified by <Start cell> and <End cell> exceeds 30 rows by 10 columns, a "specified cell
value out of range" error occurs.
13) If the <Type> is other than "0" - "7", an "argument out of range" error occurs.
14) If the <Timeout> is other than "1" - "32767", an "argument out of range" error occurs.
15) If the vision sensor does not respond within the time specified as the <Timeout> or within the first 10
seconds if the <Timeout> parameter is omitted, a "vision sensor response timeout" error occurs.
16) If the vision program's image capture specification is set to anything other than "Camera" (all trigger
command), "External trigger", or "Manual trigger", an "abnormal image capture specification" error
occurs.
17) If the vision sensor is "offline", the "Put online" error occurs, so put the vision sensor "Online".
18) If the communications line is cut while this command is being executed, an "abnormal
communications" error occurs and the robot controller side line is closed.
100 NVClose #1 'Cuts the line with the vision sensor connected to COM2.
[Explanation]
1) Starts the specified vision program in the specified vision sensor.
2) This command moves to the next step after it has verified that the vision sensor has received the image
capture and image processing command.
3) If the program is cancelled while this command is being executed, it stops immediately.
4) If the specified <vision program name> is already loaded, only image capture and image processing are
executed. (The vision program is not loaded.)
5) For receiving data from the vision sensor, use the NVIN command.
6) When this command is used with multi-tasking, it is necessary to execute the NVOPEN command in the
task using this command. Also, use the <vision sensor number> specified with the NVOPEN command.
7) A program start condition of "Always" and the continue function are not supported.
8) When multi-mechanism mode is used and data for multiple robots is required, make a vision program
that creates data for multiple robots with one image capture request.
Example
9) Up to three robots can control the same vision sensor at the same time, but this command can not be
used by more than one robot at the same time. Use this command on any one of the robots.
10) If an interrupt condition is established while this command is being executed, the interrupt processing is
executed immediately.
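For reference, a minimal sketch of starting a vision program with NVRun and then receiving the results with NVIn is shown below; the program name "Job1", the cell positions, and the <Type> value are assumptions, and the line is assumed to have already been opened as #1 with NVOpen.

10 NVRun #1,Job1                    ' Starts the vision program [Job1] (image capture and image processing).
20 NVIn #1,Job1,E76,J81,L84,1,10    ' Receives the recognition count and the recognition results into M_NvS1().
30 MCnt=M_NvNum(1)                  ' Number of pieces of work detected.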
1 If M_NVOpen(1)<>1 Then     ' When logon has not been completed for vision sensor number 1
2 NVOpen COM2: As #1         ' Connects with the vision sensor connected to COM2 and sets its number as number 1.
3 EndIf
4 Wait M_NVOpen(1)=1         ' Connects with vision sensor number 1 and waits for logon to be completed.
:
100 NVClose #1               ' Cuts the line with the vision sensor connected to COM2.
[Explanation]
1) Cuts the line with the vision sensor connected with the NVOpen command.
2) If the <vision sensor number> is omitted, cuts the line with all the vision sensors.
3) If a line is already cut, execution shifts to the next step.
4) Because up to seven vision sensors can be connected at the same time, <Vision sensor numbers> are
used in order to identify which vision sensor to close the line for.
5) If the program is cancelled while this command is being executed, execution continues until processing
of this command is complete.
6) When this command is used with multi-tasking, it is necessary to close, in the task using this command,
only the lines opened by executing an NVOpen command in that task. At this time, use the <Vision sensor
number> specified with the NVOpen command.
7) A program start condition of "Always" and the continue function are not supported.
8) If an End command is used, all the lines opened with an NVOpen command or Open command are
closed. However, lines are not closed with an End command in a program called out with a CallP
command.
Lines are also closed by a program reset, so when an End command or a program reset is executed, it
is not necessary to close lines with this command.
9) The continue function is not supported.
10) If an interrupt condition is established while this command is being executed, the interrupt processing is
executed after this command is completed.
[Errors]
1) If the value specified as the <vision sensor number> is anything other than "1" through "8", the
"argument out of range" error occurs.
2) If there are more than eight command arguments, an "incorrect argument count" error occurs.
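For reference, a minimal sketch of closing the lines to several vision sensors at once by omitting the <vision sensor number> is shown below; the COM numbers used here are assumptions.

10 If M_NVOpen(1)<>1 Then NVOpen COM2: As #1   ' Vision sensor number 1 on COM2.
20 If M_NVOpen(2)<>1 Then NVOpen COM3: As #2   ' Vision sensor number 2 on COM3.
30 Wait M_NVOpen(1)=1
40 Wait M_NVOpen(2)=1
:
100 NVClose                                    ' Omitting the vision sensor number cuts the lines to all connected vision sensors.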
9.1.3.
Variable name      Array elements   Contents                                                 Attribute (*1)   Data type
M_NVOpen           8                Vision sensor line connection status                     R                Integer type
M_NvNum            8                Number of pieces of work detected by the vision sensor   R                Integer type
P_NvS* (*=1 - 8)   30               Position data detected by the vision sensor              R                Position type
M_NvS* (*=1 - 8)   30, 10           Numeric data detected by the vision sensor               R                Single-precision real number type
C_NvS* (*=1 - 8)   30, 10           Character string data detected by the vision sensor      R                Character string (text) type
(*1) R indicates that a status variable is read-only.
The details of the status variables are as follows.
(1) M_NVOpen
[Function]
Indicates the vision sensor line connection status.
[Array meaning]
Array elements (1 - 8) Vision sensor numbers
[Explanation of values returned]
0: Line connecting (logon not complete)
1: Logon complete
[Usage]
After an NVOpen command is executed, checks whether or not the line with the vision sensor is connected
and the vision sensor logged onto.
[Sample sentence]
1 If M_NVOpen(1)<>1 Then     ' When logon has not been completed for vision sensor number 1
2 NVOpen COM2: As #1         ' Connects with the vision sensor connected to COM2 and sets its number as number 1.
3 EndIf
4 Wait M_NVOpen(1)=1         ' Connects with vision sensor number 1 and waits for logon to be completed.
5
:
100 NVClose #1               ' Cuts the line with the vision sensor connected to COM2.
[Explanation]
1) Indicates the status of a line connected with a network vision sensor with an NVOpen command when
the line is opened.
2) The initial value is "-1". At the point in time that the NVOpen command is executed and the line is
connected, the value becomes "0" (line connecting). At the point in time that the network vision sensor
logon is completed, the value becomes "1" (logon complete).
3) This variable strongly resembles the status variable M_Open, but whereas M_Open becomes "1" when
the connection is verified, M_NVOpen becomes "1" when the vision sensor logon is complete.
[Errors]
(1) If the type of data specified as an array element is incorrect, a "syntax error in input command
statement" error occurs.
(2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type"
error occurs.
(3) If an array element other than "1" through "8" is specified, an "array element mistake" error occurs.
(2) M_NvNum
[Function]
Indicates the number of pieces of work detected by the vision sensor.
[Array meaning]
Array elements (1 - 8): Vision sensor numbers
[Explanation of values returned]
Work detection count (0-255)
[Explanation]
1) Indicates the number of pieces of work detected by the vision sensor with the NVPst command or NVIn
command.
2) The stored recognition count is held until the next NVPst command or NVIn command is executed.
When an NVPst command or NVIn command is executed, the data is cleared to "0".
3) When the <Recognition count cell> specified with the NVPst command or NVIn command is a blank cell
in the vision program or a vision program command is specified, this becomes "0".
[Sample sentence]
1 If M_NVOpen(1)<>1 Then          ' When logon has not been completed for vision sensor number 1
2 NVOpen COM2: As #1              ' Connects with the vision sensor connected to COM2.
3 EndIf
4 Wait M_NVOpen(1)=1              ' Connects with vision sensor number 1 and waits for logon to be completed.
5 NVPst #1,TEST,E76,J81,L84,1,10  ' Starts the "TEST" program, receives the recognition count from the E76 cell and the recognition results from cells J81 through L84, and stores them in M_NvS1().
6 '                                 Processes referencing the acquired data.
7 MVCnt=M_NvNum(1)                ' Acquires the number of pieces of work recognized by the vision sensor.
8
:
100 NVClose #1                    ' Cuts the line with the vision sensor connected to COM2.
[Errors]
1) If the type of data specified as an array element is incorrect, a "syntax error in input command
statement" error occurs.
2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type"
error occurs.
3) If an array element other than "1" through "8" is specified, an "array element mistake" error occurs.
In the above vision program, when "J96" and "L98" are specified in the <Start cell> and <End cell> of the
NVPst command or NVIn command, P_NvS1() becomes the following values.
P_NvS1(1)=(+347.14 , -20.23 , +0.00 , +0.00 , +0.00 , -158.19 , +0.00, +0.00)(0 , 0)
P_NvS1(2)=(+381.28 , +49.01 , +0.00 , +0.00 , +0.00 , +10.84 , +0.00, +0.00)(0 , 0)
P_NvS1(3)=(+310.81 , +43.65 , +0.00 , +0.00 , +0.00 , -34.312 , +0.00, +0.00)(0 , 0)
P_NvS1(4)=( +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00, +0.00)(0 , 0)
P_NvS1(5)=( +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00, +0.00)(0 , 0)
3) The stored data is held until the next NVPst command or NVIn command is executed. However, this
data is cleared by a program reset, End command, or power supply reset. Even if the continue function
is enabled, the data is cleared (to 0 for all axes) for a power supply reset.
4) Also, if anything other than "0" is specified as the type with the NVPst command or NVIn command, all
axes are cleared to "0".
5) If the acquired data is a vision program function or character string, "0" is stored in the corresponding
axis.
6) The data for this variable is the valid position data for 8 axes.
7) When using multi-mechanism mode, see the explanation of the NVPst command.
[Errors]
1) If the type of data specified as an array element is incorrect, a "syntax error in input command
statement" error occurs.
2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type"
error occurs.
3) If an array element other than "1" through "30" is specified, an "array element mistake" error occurs.
In the above vision program, when "J96" and "Q98" are specified in the <Start cell> and <End cell> of
the NVPst command or NVIn command, the value of M_NvS1() becomes the following values.
M_NvS1() (Element 1 = 1 - 9, Element 2 = 1 - 5)

Element 1 \ Element 2   1         2         3         4     5
1                       347.147   381.288   310.81    0.0   0.0
2                       -20.232   49.018    43.65     0.0   0.0
3                       -158.198  10.846    -34.312   0.0   0.0
4                       97.641    97.048    0.0       0.0   0.0
5                       0.0       0.0       0.0       0.0   0.0
6                       110.141   89.582    139.151   0.0   0.0
7                       120.141   99.582    149.151   0.0   0.0
8                       72.645    -118.311  -163.469  0.0   0.0
9                       0.0       0.0       0.0       0.0   0.0
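For reference, a few lines reading individual elements of M_NvS1() after the NVPst command above; the indices follow the table (Element 1 = vision sheet column, Element 2 = recognized work), and the commented values are the ones shown in the table.

10 MX1=M_NvS1(1,1)    ' X value of the first recognized work (347.147 in the table above).
20 MY1=M_NvS1(2,1)    ' Y value of the first recognized work (-20.232).
30 MC1=M_NvS1(3,1)    ' C value of the first recognized work (-158.198).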
In the above vision program, when "J95" and "Q98" are specified in the <Start cell> and <End cell> of the
NVPst command or NVIn command, the value of C_NvS1() becomes the following values.
C_NvS1() (Element 1 = vision sheet column, Element 2 = vision sheet row; all values are stored as character strings)
The first specified row (row 95) stores the header strings of the vision program, for example "X", "Y", "C", and "Score",
and the following rows (96 - 98) store the recognition results as character strings, for example "347.147", "-20.232",
"-158.198", "97.227", "310.81", "43.649", "-34.313", "96.217", and so on. Blank cells are stored as the NULL character.
9.2.1.
For explanations concerning the MELFA-Vision main screen, see "6.3.1 Starting MELFA-Vision (network
vision sensor support software)".
Figure 9-1 MELFA-Vision Main Screen
9.2.2.
On the job edit screen [Image Log] tab, the conditions are set for the PC in which the images captured with
the vision sensor are stored to the PC. It is necessary to start the FTP server on the PC storing the images
For explanations concerning the MELFA-Vision main screen, see "8.4 Image Log Acquisition Settings and
Reception Start/End".
This section explains the conditions set with the [Image Log] tab.
Setting item              Explanation
Save the Image Log        To acquire the image log, put a check in the [Save the Image Log] checkbox. To not acquire the image log, remove the check from the checkbox.
Save Condition            Sets the condition for storing images captured with the vision sensor to the PC.
                          Always: All images captured with the vision sensor are stored.
                          OK images: The image is stored if the count of recognized pieces of work is not 0.
                          NG images: The image is stored if the count of recognized pieces of work is 0.
File Name                 Images captured with the vision sensor are stored as bitmap (bmp) files on the PC. This specifies the file name.
                          * Up to 50 file names can be specified.
Max Number                This specifies the number of images stored on the PC. Serial numbers up to the specified number of images are attached after the file name.
                          Example: When the file name is "NGImage": NGImage001.bmp, NGImage002.bmp, ..., NGImage010.bmp
                          If the specified number of images is exceeded, the serial numbers are reset and already stored bmp files are overwritten.
Reset                     This resets the serial numbers attached to the file name. The file name for the image captured after resetting becomes "file name 001.bmp".
User Name of FTP (*1)     Specifies the FTP server user name set with the displayed "Image Log Setting" screen.
Password of FTP (*1)      Specifies the FTP server password set with the displayed "Image Log Setting" screen.
IP Address of FTP         Specifies the IP address of the PC on which the FTP server is running.
Get IP Address From PC    When the MELFA-Vision FTP server is used and this button is clicked, the IP address of the PC on which MELFA-Vision is running is displayed in [IP Address of FTP].
(*1) For details on the "Image Log Setting" screen, see "8.4 Image Log Acquisition Settings and Reception Start/End".
9.2.3.
The Job Editing screen [Result Cell Position] tab displays "Found Number Cell", "Start", and "End"
specified with the dedicated MELFA-BASIC V command for the network vision sensor. A cell is a position
indicated by the column character and row character in the vision program.
This section explains the cell positions displayed on the screen.
Setting item        Explanation
Found Number Cell   The number of pieces of work recognized by the vision sensor is stored in the displayed cell position.
Robot 1             The coordinates (robot 1 coordinates) for the work recognized by the vision sensor are stored from the displayed start cell position to the displayed end cell position.
Robot 2 (*1)        The coordinates (robot 2 coordinates) for the work recognized by the vision sensor are stored from the displayed start cell position to the displayed end cell position.
Robot 3 (*2)        The coordinates (robot 3 coordinates) for the work recognized by the vision sensor are stored from the displayed start cell position to the displayed end cell position.
[Type]: 0 to 3      When a value from 0 to 3 is specified for the <Type> of the MELFA-BASIC V "NVPst" or "NVIn" command, the positions of the cells specified for A and B are displayed.
[Type]: 4 to 7      When a value from 4 to 7 is specified for the <Type> of the MELFA-BASIC V "NVPst" or "NVIn" command, the positions of the cells specified for A and B are displayed.
(*1) Displayed when a job for two robots is selected.
(*2) Displayed when a job for three robots is selected.
9.2.4.
The vision sensor network settings can be changed. From the MELFA-Vision menu, click [Sensor] -
[Connection] - [Communication Setting] to display the "Network Settings" screen. Check with your network
administrator for the items to set.
Setting item              Explanation
Host Name                 Changes the vision sensor name.
Use DHCP Server           Check this when using the DHCP server to allocate the IP address.
IP Address                Input the IP address.
Subnet Mask               Defines which part of the IP address shows the network and which part shows the host.
Default Gateway           Data can be relayed between hosts on different networks by specifying the gateway address.
DNS Server                Input the network host IP that supplies the DNS resolution (converting from host name to IP address).
Domain Name               Defines the domain name of the network the vision sensor is on.
DHCP Timeout              Specifies the DHCP server response wait time.
Transition to Time Out    Shifts to another connection without closing the connection.
Auto Delete               Closes the connection.
9.3.1.
Output
coordinates
(*4)
Pattern matching
(*2)
1
Absolute
coordinates
Relative
coordinates
Blob
(binarization
processing)
(*3)
Color(*7)
1
Absolute
coordinates
result
1
4
10
20
30
1
4
10
20
30
1
4
10
20
30
1
4
10
20
30
1
4
10
20
30
1
4
10
20
30
(*1) Pattern matching, blobs, edges, histograms, ID recognition, text comparison, etc. are provided for vision
sensor image processing, but the image processing supported by MELFA-Vision is pattern matching
and blobs.
(*2) Pattern matching is a method of detecting patterns in images based on registered patterns.
(*3) Blobbing is a method for detecting two-dimensional shape information such as the size, shape, position,
linking, etc. of patterns expressed as blobs.
The blob template is effective for the following types of subjects.
Large subjects
Subjects whose shapes change irregularly
For details on blob image processing, see "9.3.2 Image processing - blobs".
Output method                Explanation
Absolute coordinate output   The detected pattern position is output converted to the robot coordinate system.
Relative coordinate output   The detected pattern position is output as robot coordinate system offsets for the relative position based on the registered pattern position.
(*5) Templates are also provided that secure the data for two robots or for three robots with one image.
When multiple robots are connected to one vision sensor, it is possible to acquire the operation
positions for each robot by capturing one image.
(*6) It is necessary for the vision program to prepare beforehand an area in which to store the work position
data that the robot acquires. Expanding this area increases the amount of information that the robot can
obtain, but also increases the vision program load time and the time for sending the information from the
vision sensor to the robot. Therefore, MELFA-Vision provides templates for areas for storing
1/4/10/20/30 sets of work information. Select the one that best matches your system.
These quantities - 1/4/10/20/30 indicate the maximum number of sets of work data that the robot can
acquire. For example, for acquiring 8 sets of work data, select the 10 template.
(*7) Color is a method to detect the pattern in the image based on the specified color pattern.
Refer to 9.3.3 Image processing Color for details of the color image processing.
9.3.2.
This section explains how to make the blob image processing settings, using as an example the same
setup as for pattern matching image processing (one robot only, results output as absolute values in robot coordinates).
(1) In the [Job(Vision Program)List] on the left side of the main MELFA-Vision screen, click [New].
From the "Image Processing Method" screen displayed, select blob image processing, then click
the [OK] button.
(2) Execute the settings work in the order of the "Job Editing" screen tabs, starting with the [Search Area
and Recognition Condition (1)] tab.
When you change a displayed setting item, then click the [Test] button, the results of image
processing under the specified conditions are displayed at the main screen [Camera Image], so
check whether or not the work is correctly recognized.
For details on the setting items, see "Table 9-9 List of [Search Area and Recognition Condition
(1)] Tab Items".
Table 9-9 List of [Search Area and Recognition Condition (1)] Tab Items
Setting item                Setting range            Explanation
Blob Setting  Color         Black / White / Either   Select the color of the work to be recognized (black, white, or either).
              Background    Black / White            This specifies the color (black or white) that is the background for captured images.
Search Area                 -                        Click the [Image] button to set the range for detecting the work (blob).
Min / Max                   0 - 900000
When you change a displayed setting item, then click the [Test] button, the results of image
processing under the specified conditions are displayed at the main screen [Camera Image], so
check whether or not the work is correctly recognized.
For details on the setting items, see "Table 9-10 List of [Processing Condition(2)] Tab Items".
Table 9-10 List of [Processing Condition(2)] Tab Items
Setting item          Setting range    Explanation
Manual Threshold      1 - 100          This sets what degree of recognition is required for recognition of work detected with the threshold specified with the grayscale threshold. For the vision sensor, the degree of matching of the detected work is expressed as 1 - 100%. Work whose degree of matching is lower than the value set here is not recognized.
Greyscale Threshold   1 - 255          This sets the grayscale threshold. When the "Auto Setting" checkbox is checked, the value is set automatically from the captured images.
Sort By               None / X / Y     Returns the recognized work results in the specified sort order. When "None" is specified, the results are returned with the work sorted in order of high recognition ratio. This sorting is used for cases such as when multiple work pieces are detected and you want to grasp the work in order from left to right in the image. The "X" and "Y" specified here indicate the "X" and "Y" of the red frame displayed with the search area setting.
Offset of Rotation    -180 - 180       When outputting the recognized work results, this function adds the specified offset amount to the detection angle. When registering patterns, this is used if an image with 0 tilt can not be captured.
Calibration No.       None / 1 - 10    This selects the calibration data used when outputting the recognized work coordinate values converted to robot coordinate values. Work information can be converted to the coordinate systems of up to three robots and sent, so calibration numbers can be selected for three robots.
                                       * The figure above shows a screen assuming a system with one robot. When a system with three robots is selected, the [Robot 2:] and [Robot 3:] displays appear.
* For all the items, if a value outside the range is input, it is replaced with the nearest upper or lower limit value.
(5) For details on the [Image Log] tab, see "9.2.2 Job Editing screen ([Image Log] tab)"; for details
on the [Results Cell Position] tab, see "9.2.3 Job edit screen ([Result Cell Position] tab)".
9.3.3.
This section explains how to make the Color image processing settings, using as an example the same
setup as for pattern matching image processing (one robot only, results output as absolute values in robot coordinates).
(1) In the [Job(Vision Program)List] on the left side of the main MELFA-Vision screen, click [New]. From
the "Image Processing Method" screen displayed, select Color image processing, then click the [OK]
button.
When the [White Balance] button is clicked, the standard color is recognized so that RGB colors can be
recognized more accurately.
The standard color is the color of the 100 x 100 dot area at the center of the camera picture at the time
the [White Balance] button is clicked.
When the standard color is made white, colors are recognized in a way close to human color perception.
When the [Test] button is clicked after a displayed setting item is changed, the result of image processing
under the specified conditions is displayed on the main screen [Camera Image].
Check on the screen whether the light and shade of the work is clear according to the specified color.
For details on the setting items, see "Table 9-2 List of [Color] Tab Items".
Table 9-2 List of [Color] Tab Items
Setting item         Setting range                 Explanation
Color Area Setting   -                             When the [Image] button is clicked, the display shifts to the camera image and a square frame is displayed. Enclose the color to be recognized with the frame and press the [Enter] key.
Representation       ON / OFF                      Selects whether the specified color is acquired as RGB (Red/Green/Blue) information or as HIS (Hue/Intensity/Saturation) information.
Select Filter        ON / OFF                      Selects whether the image displayed on the screen is shown in color or in grayscale with the specified color filter applied.
Histogram            -                             Displays the information for the color specified with [Color Area Setting]. When the [Representation] checkbox is OFF, RGB information is displayed; when it is ON, HIS information is displayed.
Threshold            -1: Auto / 0 - 255: Manual    The initial value is "-1". When the value is "-1", the work is converted to white by automatically applying the filter of the color specified with [Color Area Setting]. When the light and shade of the work is not clear with the automatic filter, the color recognition can be adjusted by inputting a value from 0 to 255 as the threshold.
9.3.4.
The only templates provided for MELFA-Vision are pattern matching and blobs. * When using a robot using
other image processing, write the vision program using "In-Sight Explorer" installed on the PC with "5.3.1
Vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later) installation".
For details on how to write a vision program using "In-Sight Explorer", see the "In-Sight Explorer" help.
9.3.5.
To shorten the time for transferring data with the robot controller
The image processing templates prepared for MELFA-Vision use the mechanism of transferring the
information on recognized work to the robot controller one set at a time (three communications, X, Y, and C
per piece of work).
When it is desired to shorten the tact time, it is recommended that the vision program and robot program be
altered to shorten the data transfer time.
Below is the method for transferring a maximum of four sets of data (127 bytes maximum) in each data
transfer.
<Vision program change example>
Before change
Data exists in each cell in the vision program and the robot controller can use them without processing the
acquired values.
However, in the example above, since a total of 12 data transfers are required for cells [J81] through [L84],
the transfer time becomes longer.
After change
The cell values are converted into text strings as they are, with errors converted as well, and the text
string cells are linked to form a single cell.
The above program is added to the vision program before the change.
Cells [O81] through [Q84] use the vision program "count error" function; if there is an error, they display the
character "E". Cell [S81] stores the data of cells [O81] - [Q84] in one cell using the vision program
"concatenate" function. Coordinates are delimited with "," and recognized works are delimited with "/".
For details on the functions used in vision programs, see the "In-Sight Explorer" help.
CAUTION
The maximum number of characters the robot can receive in one
communication is 127.
Due to restrictions on communications with the robot, if the information for one piece of
work is X, Y, and C, one data transfer can handle up to four sets of data.
9.4.1.
This section only explains those aspects of the setting method for constructing a system with one robot
controller and multiple vision sensors that are different from the contents covered in the preceding
chapters.
(1) Change the robot controller communication settings.
From the MELFA-Vision menu, select [Controller] - [Communication Setting] to display the
"Communication Setting" screen.
Set the "Line and Device" and "Device List" for the number of vision sensors connected.
Below is an example for connecting three vision sensors.
20 NVClose
9.4.2.
This section only explains those aspects of the setting method for constructing a system with one vision
sensor and multiple robot controllers that are different from the contents covered in the preceding
chapters.
This section shows a system with two robots as an example.
(1) Write the vision program for two robots.
See "6.3.3 Image processing settings" and on the "Image Processing Method Selection" screen, select
the template for two robots and write the vision program.
When the template for two robots is selected, on the "Job Editing" screen [Processing Condition] tab,
the calibration setting items for two robots are displayed.
20 NVClose
<Robot 2>
1 If M_NVOpen(1)<>1 Then NVOpen COM2: AS #1
'Connects to vision sensor 1 (COM2).
2 Wait M_NVOpen(1)=1
' Connects with vision sensor number 1 and waits for logon to be completed.
3 Wait M_In(10)=1
'Waits for contact from Robot 1.
4 M_Out(10)=1
'Outputs to Robot 1 that it received the notice.
5 Wait M_In(10)=0
'Checks that Robot 1 has verified reception.
6 M_Out(10)=0
' Switches Off the notice received signal to Robot 1.
7 NVin #1,Job1,E76,O81,Q85,0,10
'Acquires the results.
20 NVClose
* Note that the cell positions for Robot 1 and Robot 2 to acquire data from the vision sensor are different.
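For reference, a minimal sketch of the Robot 1 side that matches the Robot 2 program above is shown below; the signal number 10 and the cell positions "J81" and "L85" used for Robot 1 are assumptions.

<Robot 1>
1 If M_NVOpen(1)<>1 Then NVOpen COM2: As #1    ' Connects to vision sensor 1 (COM2).
2 Wait M_NVOpen(1)=1                           ' Waits for logon to be completed.
3 NVPst #1,Job1,E76,J81,L85,0,10               ' Image capture request; receives Robot 1's own data.
4 M_Out(10)=1                                  ' Notifies Robot 2 that new results are available.
5 Wait M_In(10)=1                              ' Waits for Robot 2 to acknowledge reception.
6 M_Out(10)=0                                  ' Switches Off the notice signal.
7 Wait M_In(10)=0                              ' Checks that Robot 2 has also switched Off its signal.
20 NVClose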
10 Troubleshooting
10. Troubleshooting
This chapter lists the errors that can occur in using network vision sensors and explains the causes of and
solutions to these errors.
Level                  Explanation
H   High-level error   The servo is switched Off and program execution is stopped.
L   Low-level error    Program execution is stopped.
C   Warning            Program execution is continued.
Table 10-2 List of Errors for Vision Sensor Use
(Table columns: Error No., Level, Error contents, Cause, Solution. Error numbers listed: 3110, 3120, 3130, 3141, 3142, 3287, 3810, 4220 (Syntax error in input command), 4370, 7810 (Abnormal Ethernet parameter setting - the parameter setting is incorrect).)
Table 10-3 List of Errors Only for Vision Sensors
(Error numbers listed: 8601, 8602, 8603, 8610, 8620, 8621, 8622, 8630, 8631, 8632, 8633, 8634.)
Error contents                                Cause
Vision sensor not connected                   There is no vision sensor connected to the specified COM number.
Logon not possible                            The communication line was opened, but there is no response from the vision sensor.
Wrong password                                The password for the user set with the "NVUSER" parameter is not set in the "NVPSWD" parameter.
Parameter abnormality                         The user name or password parameter is abnormal.
Abnormal communications                       Communication with the vision sensor was cut off before or during command execution.
Abnormal vision sensor number specification   The specified vision sensor number is not defined with an NVOpen command.
Incorrect value in recognition count cell
Solutions given in this table include: check the specified vision program number, the "COMDEV" parameter, and related settings; reset the program and start it again.
(Additional error numbers listed with level L: 8635, 8640, 8650, 8660, 8670.)
11. Appendix
11.1. Performance of this product (comparison with built-in type RZ511
vision sensor)
Below is a comparison of the performance of this product with that of our built-in type vision sensors.
Item                                                   Network vision sensor                      Built-in vision sensor
Work condition: overlap, approach or contact, tilt, front/rear judgment
Function: area size change, specification of coordinates for output to robot
Area angle change                                      Yes (no need to change the work angle)     No (necessary to change the work angle)
Area shape change                                      Yes (square / fan shape / round)           No (square only)
"Table 11-3 Image Processing Capacity" shows the measurement results when the work is recognized
with the same conditions.
The image processing time can be reduced by using network vision sensors.
For network vision sensors, the increase in the number of pieces of work recognized increases the data
transfer time.