
Mitsubishi Industrial Robot

CRn-700 Series

Network Vision Sensor Instruction Manual


3D-51C-WINE
4D-2CG5100-PKG-E
4D-2CG5400-PKG-E
4D-2CG5401-PKG-E
4D-2CG5403-PKG-E
4D-2CG5400C-PKG-E
4D-2CG5400R-PKG-E

BFP-A8780-C

Revision History
Printing Date    Instruction Manual No.    Revision Contents
2009-09-18       BFP-A8780-*               First Edition
2009-10-16       BFP-A8780-A               Error in writing was corrected.
2010-06-02       BFP-A8780-B               The explanation concerning the installation of MELFA-Vision was added.
2010-10-19       BFP-A8780-C               Error in writing was corrected.

Preface
Thank you for purchasing this network vision sensor for CRn-700 series Mitsubishi Electric industrial robots.
The network vision sensor is an option used in combination with a CRn-700 series controller to make it
possible to detect and inspect workpieces through visual recognition. Before using this sensor, please read
this manual carefully so that you can make full use of its contents when operating the network vision sensor.
This manual also covers special handling where applicable. Please interpret the absence of an operation from
this manual as meaning that it cannot be done.
The contents of this manual apply to the following software versions.
Robot controller: CRnQ-700 series: Ver. N1 or later
                  CRnD-700 series: Ver. P4 or later

Symbols & Notation Method in This Manual

DANGER

This indicates a situation in which a mistake in handling will expose the user to the danger of death or
severe injury.

WARNING

This indicates a situation in which a mistake in handling could result in death or severe injury of the user.

CAUTION

This indicates a situation in which a mistake in handling could cause injury to the user. Equipment damage
is also possible.

No part of this manual may be reproduced by any means or in any form without prior consent from
Mitsubishi.
The details of this manual are subject to change without notice.
An effort has been made to make the descriptions in this manual complete. However, if any discrepancies
or unclear points are found, please contact your dealer.
The information contained in this document has been written to be as accurate as possible. Please
interpret items not described in this document as ones that cannot be performed.
Please contact your nearest dealer if you find any doubtful, wrong, or omitted points.
Microsoft, Windows, and .NET Framework are registered trademarks of Microsoft Corporation in the
United States and/or other countries.
In-Sight is a registered trademark of Cognex Corporation.
Adobe, the Adobe logo, Acrobat, and the Acrobat logo are trademarks of Adobe Systems Incorporated.
Registered trademark and trademark symbols are omitted in this manual.

Copyright(C) 2006-2010 MITSUBISHI ELECTRIC CORPORATION ALL RIGHTS RESERVED

Safety Precautions
Always read the following precautions and the separate
"Safety Manual" before starting use of the robot to learn the
required measures to be taken.

CAUTION

All teaching work must be carried out by an operator who has received special training. (This also
applies to maintenance work with the power source turned ON.)
Enforcement of safety training

CAUTION

For teaching work, prepare a work plan related to the methods and procedures of operating the
robot, and to the measures to be taken when an error occurs or when restarting. Carry out work
following this plan. (This also applies to maintenance work with the power source turned ON.)
Preparation of work plan

WARNING

Prepare a device that allows operation to be stopped immediately during teaching work. (This also
applies to maintenance work with the power source turned ON.)
Setting of emergency stop switch

CAUTION

During teaching work, place a sign indicating that teaching work is in progress on the start switch,
etc. (This also applies to maintenance work with the power source turned ON.)
Indication of teaching work in progress

WARNING

Provide a fence or enclosure during operation to prevent contact between the operator and the robot.
Installation of safety fence


CAUTION

Establish a set signaling method to the related operators for starting work, and follow this method.
Signaling of operation start

CAUTION

As a principle, turn the power OFF during maintenance work. Place a sign indicating that
maintenance work is in progress on the start switch, etc.
Indication of maintenance work in progress

CAUTION

Before starting work, inspect the robot, emergency stop switch and other related devices, etc.,
and confirm that there are no errors.
Inspection before starting work

The points of the precautions given in the separate "Safety Manual" are given below.
Refer to the actual "Safety Manual" for details.

WARNING

When automatically operating the robot with multiple control devices (GOT, PLC,
pushbutton switch), the interlocks, such as each device's operation rights, must be
designed by the user.

CAUTION

Use the robot within the environment given in the specifications. Failure to do so could lead to a
drop in reliability or faults. (Temperature, humidity, atmosphere, noise environment, etc.)

CAUTION

Transport the robot with the designated transportation posture. Transporting the robot in a
non-designated posture could lead to personal injuries or faults from dropping.

CAUTION

Always use the robot installed on a secure table. Use in an unstable position could lead to
positional deviation and vibration.

CAUTION

Wire the cable as far away from noise sources as possible. If placed near a noise source,
positional deviation or malfunction could occur.

CAUTION

Do not apply excessive force on the connector or excessively bend the cable. Failure to observe
this could lead to contact defects or wire breakage.

CAUTION

Make sure that the workpiece weight, including the hand, does not exceed the rated load or
tolerable torque. Exceeding these values could lead to alarms or faults.

WARNING

Securely install the hand and tool, and securely grasp the workpiece. Failure to observe this could
lead to personal injuries or damage if the object comes off or flies off during operation.

WARNING

Securely ground the robot and controller. Failure to observe this could lead to malfunctioning by
noise or to electric shock accidents.

CAUTION

Indicate the operation state during robot operation. Failure to indicate the state could lead to
operators approaching the robot or to incorrect operation.

WARNING

When carrying out teaching work in the robot's movement range, always secure the priority right
for the robot control. Failure to observe this could lead to personal injuries or damage if the robot
is started with external commands.

CAUTION

Keep the jog speed as low as possible, and always watch the robot. Failure to do so could lead to
interference with the workpiece or peripheral devices.

CAUTION

After editing the program, always confirm the operation with step operation before starting
automatic operation. Failure to do so could lead to interference with peripheral devices because of
programming mistakes, etc.

CAUTION

Make sure that if the safety fence entrance door is opened during automatic operation, the door is
locked or that the robot will automatically stop. Failure to do so could lead to personal injuries.

CAUTION

Never carry out modifications based on personal judgments, or use non-designated maintenance
parts.
Failure to observe this could lead to faults or failures.

WARNING

When the robot arm has to be moved by hand from an external area, do not place hands or

fingers in the openings. Failure to observe this could lead to hands or fingers catching depending
on the posture.

CAUTION

CAUTION

Do not stop the robot or apply an emergency stop by turning the robot controller's main power
OFF. If the robot controller's main power is turned OFF during automatic operation, the robot
accuracy could be adversely affected. Moreover, the arm may drop or move under its own inertia
and interfere with peripheral devices.
Do not turn off the main power to the robot controller while rewriting the internal information of
the robot controller, such as the program or parameters.
If the main power to the robot controller is turned off during automatic operation or while rewriting
the program or parameters, the internal information of the robot controller may be damaged.

CAUTION

When using this product's GOT direct connection function, do not connect a handy
GOT. The handy GOT can automatically operate the robot regardless of whether
the operation rights are enabled or not, so use could result in property damage or
personal injuries.

CAUTION

When using an iQ Platform compatible product with CRnQ, do not connect the
handy GOT. The handy GOT can automatically operate the robot regardless of
whether the operation rights are enabled or not, so use could result in property
damage or personal injuries.

CAUTION

When the SSCNETIII cable is removed, install the cap on the connector.
If the cap is not installed, malfunction may occur due to adhesion of dust, etc.

CAUTION

Do not remove the SSCNETIII cable while the power supply of the robot controller is
turned on. Do not look directly at the light emitted from the tip of the SSCNETIII
connector or the cable. If the light strikes the eyes, eye discomfort may result.
(The light source of SSCNETIII is equivalent to Class 1 specified in JIS C 6802 and
IEC 60825-1.)

CAUTION

Take care not to wire the units incorrectly. Connections which do not satisfy the
specifications could result in malfunction such as emergency stop not being
released.
When the wiring is completed, confirm that each function operates properly without
malfunction, including emergency stop from the robot controller operation panel,
emergency stop from the teaching box, the user's emergency stop, and each door switch,
etc.

Precautions for the basic configuration are shown below. (When a CR1D-7xx/CR1Q-7xx controller is
used.)

CAUTION

Install an earth leakage breaker on the primary power supply side of the controller for
leakage protection.

(The figure shows the earth leakage breaker (NV) installed on the controller's primary power supply side,
with the terminal covers and earth screw.)

Contents
1. SUMMARY ............................................................................... 1-1
   1.1. What A Network Vision Sensor Is .................................................. 1-1
   1.2. Features ......................................................................... 1-2
   1.3. Applications ..................................................................... 1-3
   1.4. Explanation of terms ............................................................. 1-5
2. SYSTEM CONFIGURATION .................................................................. 2-6
   2.1. Component Devices ................................................................ 2-6
        2.1.1. Constitution of MELFA-Vision and the network vision sensor ................ 2-6
        2.1.2. Equipment provided by customer ............................................ 2-10
   2.2. System configuration example ..................................................... 2-11
        2.2.1. Configuration with one robot controller (SD series) and one vision sensor . 2-11
        2.2.2. Configuration with one robot controller (SD series) and two vision sensors  2-12
        2.2.3. Configuration with three robot controllers (SD series) and one vision sensor 2-13
        2.2.4. Configuration with one robot controller (SQ series) and one vision sensor . 2-14
3. SPECIFICATIONS ........................................................................ 3-15
   3.1. Network vision sensor specifications ............................................. 3-15
        3.1.1. External Dimensions of Network Vision Sensor (5100/5400/5401/5403/5400C) .. 3-16
        3.1.2. External Dimensions of Network Vision Sensor 5400R ........................ 3-17
   3.2. Robot controller specifications .................................................. 3-19
   3.3. MELFA-Vision ..................................................................... 3-20
        3.3.1. Features .................................................................. 3-20
        3.3.2. Operating Environment ..................................................... 3-21
4. WORK CHARTS ........................................................................... 4-22
   4.1. Work procedure chart ............................................................. 4-22
5. EQUIPMENT PREPARATION AND CONNECTION .................................................. 5-23
   5.1. Equipment preparation ............................................................ 5-23
   5.2. Equipment connection ............................................................. 5-24
        5.2.1. Individual equipment connections .......................................... 5-24
   5.3. Software installation ............................................................ 5-26
        5.3.1. Vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later) installation  5-26
        5.3.2. Vision sensor dedicated software (In-Sight Explorer before Ver.4.1) installation .. 5-28
        5.3.3. MELFA-Vision installation ................................................. 5-29
        5.3.4. USB driver (CRnD-700 series robot controller) installation ................ 5-31
        5.3.5. CRnQ communications USB driver installation ............................... 5-32
6. VISION SENSOR SETTINGS ................................................................ 6-36
   6.1. Vision Sensor Initial Settings (In-Sight Explorer Ver.4.1 or later) .............. 6-36
   6.2. Vision Sensor Initial Settings (In-Sight Explorer before Ver.4.1) ................ 6-38
   6.3. Work recognition test ............................................................ 6-41
        6.3.1. Starting MELFA-Vision (network vision sensor support software) ............ 6-41
        6.3.2. Image adjustment .......................................................... 6-46
        6.3.3. Image processing settings ................................................. 6-49
7. ROBOT CONTROLLER SETTINGS ............................................................. 7-64
   7.1. Robot Controller Parameter Settings .............................................. 7-64
   7.2. Calibration Setting .............................................................. 7-69
   7.3. Robot Program Writing ............................................................ 7-80
        7.3.1. Flow for starting of image processing by robot ............................ 7-80
        7.3.2. Writing a Sample Robot Program ............................................ 7-80
   7.4. Executing the automatic operation test ........................................... 7-83
        7.4.1. Put the vision sensor online .............................................. 7-83
        7.4.2. Test by executing each step ............................................... 7-83
        7.4.3. Starting a Robot Program .................................................. 7-84
   7.5. When the robot cannot grasp the work normally .................................... 7-86
        7.5.1. Check the MELFA-Vision [Camera Image] ..................................... 7-86
        7.5.2. Comparison of the position data for the work recognized by the vision sensor and the
               position data received by the robot ...................................... 7-86
8. MAINTENANCE ........................................................................... 8-88
   8.1. Vision Sensor Data Backup ........................................................ 8-88
   8.2. Vision Sensor Data Restoration ................................................... 8-90
   8.3. Vision Sensor Cloning ............................................................ 8-92
   8.4. Image Log Acquisition Settings and Reception Start/End ........................... 8-94
   8.5. Vision Startup Settings .......................................................... 8-95
   8.6. User List Settings ............................................................... 8-96
9. DETAILED EXPLANATION OF FUNCTIONS ..................................................... 9-98
   9.1. Vision Sensor Dedicated Commands and Status Variables ............................ 9-98
        9.1.1. How to Read Items ......................................................... 9-98
        9.1.2. MELFA-BASIC V Commands .................................................... 9-98
        9.1.3. Robot status variables .................................................... 9-114
   9.2. MELFA-Vision Function Details .................................................... 9-122
        9.2.1. MELFA-Vision Main Screen .................................................. 9-122
        9.2.2. Job Editing screen ([Image Log] tab) ...................................... 9-123
        9.2.3. Job edit screen ([Result Cell Position] tab) .............................. 9-124
        9.2.4. Vision sensor network settings ............................................ 9-125
   9.3. Vision program detailed explanation .............................................. 9-126
        9.3.1. Templates provided for MELFA-Vision ....................................... 9-126
        9.3.2. Image processing - blobs .................................................. 9-128
        9.3.3. Image processing - color .................................................. 9-131
        9.3.4. Using image processing for which there is no template ..................... 9-139
        9.3.5. To shorten the time for transferring data with the robot controller ....... 9-140
   9.4. Detailed explanation of systems combining multiple vision sensors and robots ..... 9-143
        9.4.1. Systems with one robot controller and multiple vision sensors ............. 9-143
        9.4.2. Systems with one vision sensor and multiple robot controllers ............. 9-144
10. TROUBLESHOOTING ...................................................................... 10-146
   10.1. Error list ...................................................................... 10-146
11. APPENDIX ............................................................................. 11-149
   11.1. Performance of this product (comparison with built-in type RZ511 vision sensor) . 11-149
        11.1.1. Comparison of work recognition rate ...................................... 11-149
        11.1.2. Comparison of image processing capacity .................................. 11-149
        11.1.3. Factors affecting the processing time .................................... 11-150
   11.2. Calibration No. marking sheet ................................................... 11-151

1. Summary
1.1. What A Network Vision Sensor Is
The network vision sensor is an option that makes it possible to discriminate the position of various types
of workpieces and to transport, process, assemble, inspect, and measure them with MELFA robots.
It consists of MELFA-Vision, the vision sensor, and the related options.


1.2. Features
The network vision sensor has the following functions.
(1) Position detection through high-speed image processing
High-speed image processing makes it possible to detect the work at high speed, not only when the
angle is not detected, but even when detecting the work through a full 360° of rotation.
When the angle is not detected   : about 50 ms
When detecting through 360°      : about 150 ms
* Measurement conditions - Search area: 640x480, Pattern: 90x90
  This is the pattern matching processing time using an In-Sight 5400 (camera
  exposure time of 8 ms).
* These values are reference values. These values are not guaranteed.
(2) Ethernet communication
Since the system can be configured with an Ethernet network, a wide variety of system
configurations can be realized.
Up to seven vision sensors can be controlled with one robot controller.
Up to three robot controllers can share control of one vision sensor.
Systems can be configured with multiple robot controllers and multiple vision sensors.
Both the robot controller and the vision sensor can be debugged using one PC.
"MELFA-Vision Network Vision Support Software" has image log functions, so it is possible to
check the image state when an error occurred.
(3) Easy setting
Connect only the Ethernet cable and the power cable to the vision sensor. Connect only the
Ethernet cable to the robot controller's Ethernet interface. The vision sensor and robot
controller settings can be made simply with MELFA-Vision.
(4) Easy robot programming and calibration
Programs can be written easily with the MELFA-BASIC V commands dedicated to the vision
sensor (see the example program after this list).
"NVOpen" command that opens the line between the robot and the vision sensor
"NVPst" command that starts the vision program and obtains the results
"NVClose" command that closes the line between the robot and the vision sensor
This system is equipped with a simple calibration function that can handle a variety of camera
installation positions.
(5) Space saving, wiring saving
Since the vision sensor combines the camera and controller in one piece, the only wiring needed is
the Ethernet cable and power supply cable, so wiring does not take up space.
(6) Easy maintenance
Recognized images can be stored on a PC running MELFA-Vision, so the image at the time an error
occurred can be checked and the cause of the error found easily.
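For illustration, here is a minimal sketch of how these three commands are typically combined in a robot
program. Only the command names (NVOpen, NVPst, NVClose) are taken from this manual; the COM port,
job name, cell designations, numeric arguments, and the M_NvOpen status variable shown below are
placeholder assumptions, so take the exact argument lists and result variables from the command reference
in "9.1 Vision Sensor Dedicated Commands and Status Variables".

    'Illustrative sketch only - argument values are placeholders; see 9.1.2 and 9.1.3 for the real definitions
    NVOpen "COM2:" As #1                       'Open the communication line to the vision sensor assigned to COM2
    Wait M_NvOpen(1)=1                         'Wait until the line to vision sensor #1 is open
    NVPst #1,"Job1","E76","J81","L84",0,10     'Start the vision program (job) "Job1" and receive the recognition results
    '... use the received position data to move the robot and grasp the work ...
    NVClose #1                                 'Close the communication line to the vision sensor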


1.3. Applications
Here are major applications of the network vision sensor.
(1) Loading/Unloading Machined Parts

Figure 1-1 Example of Loading/Unloading Machined Parts

(2) Processed Food Pallet Transfer

Figure 1-2 Example of Processed Food Pallet Transfer

(3) Lining Up and Palletizing Electronic Parts

Figure 1-3 Example of Lining Up and Palletizing Electronic Parts

(4) Small Electrical Product Assembly

Figure 1-4 Example of Small Electrical Product Assembly

(5) Lining Up Parts

Figure 1-5 Example of Lining Up Parts

(6) Small Electronic Parts Mounting

Figure 1-6 Example of Small Electronic Parts Mounting


1.4. Explanation of terms
This section explains the terms used in this manual.
CCD (Charge Coupled Device) This is the most general pickup element used in cameras.
Degree of matching (score) This value indicates the degree to which the image matches the registered
pattern. This value ranges from 0 to 100. The closer to 100, the higher the
degree of matching.
Offline This is a vision sensor mode for such work as setting the vision sensor
operating environment, setting the image processing, and backing up data to a
PC.
Online This is the vision sensor mode in which the vision sensor executes image
processing under command from the robot controller.
Picture element (pixel) This is the smallest unit of data making up the image. One image comprises
640x480 pixels. Depending on the type of vision sensor, one image comprises
1024x768 or 1600x1200 pixels.
Contrast This is a yardstick expressing the "brightness" of a pixel in units from 0-255. The
smaller this value, the darker the pixel; the higher this value, the brighter the
pixel.
Calibration This is the coordinate conversion for converting from the image processing
coordinate system to the robot coordinate system (an illustrative formula is given after this list).
Threshold This is the cutoff point for degree of matching scores.
Shutter speed This is the exposure time (the time during which the CCD accumulates charge).
Sort This rearranges the order in which data (recognition results) is output to the
robot according to the specified item.
Trigger This is the starting signal for starting the exposure (image capture).
Pattern matching This is processing for detecting the pattern that matches the pattern registered
from the captured image.
Vision program (job) This is the program that executes such image processing as pattern matching,
blobbing, etc. The image processing can be set freely.
Filter This is a form of image processing for improving the picture quality.
Blob This is a type of image processing for detecting blobs with features in the image
captured. Bright sections are expressed as white; dark sections are expressed
as black.
Host name This is the network vision sensor name. This is registered in the initial settings.
Live Images can be displayed in real time by shooting continuously.
Area This is the processing area for executing image processing.
Log function This function stores the image taken in with online operation and the execution
results (log).
Exposure This is the accumulation of charge on the CCD. When light strikes the CCD,
charge accumulates and the degree of this accumulation becomes the degree
of brightness of the image.
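As a supplementary note on the calibration term above: for a camera viewing a plane, a conversion of this
kind can be written as an affine mapping from a position (x, y) in image (pixel) coordinates to a position
(X, Y) in robot coordinates. The symbols below are illustrative only and are not MELFA-Vision notation; the
actual conversion is set up with the calibration procedure described in section 7.2.

\begin{pmatrix} X \\ Y \end{pmatrix} =
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix} +
\begin{pmatrix} t_x \\ t_y \end{pmatrix}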


2. System configuration
2.1. Component Devices
2.1.1. Constitution of MELFA-Vision and the network vision sensor

The composition of the MELFA-Vision and network vision sensor basic set that you have purchased is
shown in "Table 2-1 List of Network Vision Sensor Basic Set Composition".
Table 2-1 List of Network Vision Sensor Basic Set Composition

Network vision sensor set


Type
Composition article name

Vision sensor 5100


Vision sensor 5400
Vision sensor 5401
Vision sensor 5403
Vision sensor 5400C
Vision sensor 5400R
Vision sensor 5400S/5600/
Micro series
Thread guard
Breakout cable (5m) (*1)
Network cable (5m) (*1)
Camera cable (5m) (*4)
remote head camera installation bracket
(*4)
In-Sight5000 series installation guide
CD-ROM Part# 206-6364-*** (*2)
In-Sight Explorer
In-Sight Display Control
In-Sight OPC Server Software
Document (Help/Installation Manuals)
MELFA-Vision :
CD-ROM 3D-51C-WINE (*3)
MELFA-Vision
(Network vision support software)
Instruction manual .
BFP-A8780

4D-2CG
5100
-PKG

4D-2CG
5400
-PKG

4D-2CG
5401
-PKG

4D-2CG
5403
-PKG

MELFA-Vision

4D-2CG
5400C
-PKG

4D-2CG
5400R
-PKG

3D-51C
-WINE

Prepare by
the customer
(*5)
(*6)

(*7) Available only for the support function (by MELFA-Vision).

Lens cover

(*1) The cable length can be changed. For details, see "Table 2-2 List of Network Vision Sensor
Related Options".
(*2) This is a CD-ROM that comes with a vision sensor made by the Cognex Corporation.
This CD-ROM contains the software and operations manual required for using the network
vision sensor. The *** in the model name part number is the version number.
(*3) The instruction manual is included on the MELFA-Vision CD-ROM.
(*4) The camera cable which connects the remote head camera and the vision sensor, and the remote
head camera installation bracket, are bundled with the network vision sensor 5400R.
(*5) These specifications apply when the vision sensor and related options are prepared by the user
and only MELFA-Vision (network vision sensor support software and instruction manual) is
provided. The applicable vision sensors (COGNEX brand) are listed in Table 3-4 THE
CORRESPONDENCE TYPE AND VERSION OF MELFA-VISION for reference.
(*6) Note: The vision sensor must be equipped with an image processing algorithm (PatMax).

Table 2-2 List of Network Vision Sensor Related Options

Option name                                              Model
Network cable        0.6m                                 CCB-84901-1001-00
                     2m                                   CCB-84901-1002-02
                     5m                                   CCB-84901-1003-05
                     10m                                  CCB-84901-1004-10
                     15m                                  CCB-84901-1005-15
                     30m                                  CCB-84901-1006-30
Breakout cable       2m                                   CCB-84901-0101-02
                     5m                                   CCB-84901-0102-05
                     10m                                  CCB-84901-0103-10
                     15m                                  CCB-84901-0104-15
Camera cable         5m                                   CCB-84901-0303-05
                     10m                                  CCB-84901-0304-10
                     15m                                  CCB-84901-0305-15
I/O Module           Terminal block conversion module     CIO-1350
                     I/O Expansion module
                     (8 inputs/8 outputs)                 CIO-1450
Diffused ring light (red)                                 IFS-DRL-050
Direct ring light (red)                                   IFS-RRL050
Direct ring light (white)                                 IFS-WRL050
Network vision sensor instruction manual (booklet)        BFP-A8780

The composition of the basic set (all-in-one design) is shown below.

Figure 2-1 Basic set (all-in-one design) composition: lens cover, O-ring, thread guard, In-Sight vision
sensor, breakout cable, network cable, In-Sight Software CD-ROM, MELFA-Vision CD-ROM, and
installation guide.

The composition of 4D-2CG5400R-PKG (remote head type) is shown below.

Figure 2-2 Basic set (remote head type) composition: remote head camera, camera cable, network vision
sensor, breakout cable, network cable, In-Sight Software CD-ROM, MELFA-Vision CD-ROM, and
installation guide.

2.1.2. Equipment provided by customer

In addition to this product, the system also includes equipment provided by the customer. "Table 2-3 List of
Equipment Provided by Customer" shows the minimum necessary equipment. The equipment for the
customer to provide depends on the system. For details, see "2.2 System configuration example".
Table 2-3 List of Equipment Provided by Customer

Device name                Recommended product
Vision sensor              In-Sight5000 series (Refer to Table 2-1) (*1)
Breakout cable             (Refer to Table 2-1, Table 2-2) (*1)
Network cable              (Refer to Table 2-1, Table 2-2) (*1)
Camera lens                C mount lens (CS mount lens is possible for 5400R.)
24V power supply           24 VDC (±10%)
                           (5100/5400/5400C/5401 are 350 mA or larger, 5403 is 500 mA or larger,
                           5400R is 250 mA or larger.)
PC    CPU                  Intel Pentium III 700 MHz (or equivalent) or faster
      Memory size          256 MB min.
      Hard disk            Available capacity of 200 MB min.
      OS                   Microsoft Windows 2000, Service Pack 4
                           Microsoft Windows XP Professional, Service Pack 2
      Display              An SVGA (800x600) or higher resolution display with
                           graphic functions that can display at least 16 colors
      Disc device          CD-ROM drive
      Keyboard             PC/AT compatible keyboard
      Pointing device      Device that operates in the Windows operating system
      Communications       Must have an Ethernet line that operates in the Windows
                           operating system
Hub                        A switching hub is recommended.
Ethernet straight cable    Any straight Ethernet cable is OK.
Lighting device            Select the optimum lighting for the work to be recognized.
                           LED lights are recommended for their long service life.

(*1) It is attached to the network vision sensor set.

2.2. System configuration example

2.2.1. Configuration with one robot controller (SD series) and one vision sensor

The entire configuration (robot system) when one camera is used is shown below.

Figure 2-3 Configuration (Robot System) When One Camera Is Used
(The figure shows the vision sensor, its 24V power supply, a hub, a personal computer, the robot controller,
and the robot, connected over Ethernet.)

Below is a list of the equipment configuration when one camera is used.

Table 2-4 List of Configuration When One Camera Is Used
Part name                      Format                Manufacturer          Q'ty  Remarks
Robot controller               CRnD-700 series       Mitsubishi Electric   1     S/W version P4 or later
Robot main unit                All models            Mitsubishi Electric   1
MELFA-Vision                   3D-51C-WINE           Mitsubishi Electric   1     (*4)
Vision    Vision sensor        In-Sight 5000 series  COGNEX                1     Software: 3.20 or later (*4)
sensor    Breakout cable                                                   1     (*4)
          Network cable                                                    1     (*4)
Lens                           C mount lens (*1)                           1     Provided by customer (*2)
24V power supply                                                           1     (*3)
Hub                                                                        1
Ethernet cable (straight)                                                  2
Lighting device                                                            1

(*1) Select from general C mount lenses.
(*2) The half tone (gray) section is the equipment provided by the customer.
(*3) For the 24 VDC (±10%) power supply, the vision sensor requires a minimum of 350 mA (5403: a minimum
     of 500 mA / 5400R: a minimum of 250 mA).
(*4) It is attached to the network vision sensor set.


2.2.2. Configuration with one robot controller (SD series) and two vision sensors

The entire configuration (robot system) when two cameras are used is shown below.

Figure 2-4 Configuration (Robot System) When Two Cameras Are Used
(The figure shows two vision sensors (In-Sight 5100/5400/5400C and In-Sight 5401/5403) with their 24V
power supplies, a hub, a personal computer, the robot controller, and the robot, connected over Ethernet.)

Below is a list of the equipment configuration when two cameras are used.
Table 2-5 List of Configuration When Two Cameras Are Used

Part name
Robot controller
Robot main unit
MELFA-Vision
Vision sensor
(*4)
Lens (*4)
24V power supply

Format

Manufacturer

Q'ty

Mitsubishi
Electric

1
1
1
2
2
2
2
1
1
1
2
1

CRnD-700 series
All models
3D-51C-WINE

Vision sensor
In-Sight 5000 series
Breakout cable
Network cable
C mount lens(*1)

COGNEX

Remarks
S/W version P4 or later
(5)
Software: 3.20 or later(5)
(5)
(5)
Provided by customer (*2)

(*3)

Hub

Ethernet cable (straight)

Lighting device

(*1) Select from general C mount lenses.


(*2) The half tone (gray) section is the equipment provided by the customer.
(*3) For the 24 VDC (±10%) power supply, the vision sensor requires a minimum of 350 mA (5403: a minimum of
500mA / 5400R: a minimum of 250mA).
(*4) Up to seven vision sensors can be connected at the same time to one robot controller, so prepare the
"necessary quantity" for the number of vision sensors you use.
(*5) It is attached to the network vision sensor set


2.2.3. Configuration with three robot controllers (SD series) and one vision sensor

The entire configuration (robot system) when one camera is used with three robots is shown below.

Figure 2-5 Configuration (Robot System) When One Camera Is Used with Three Robots
(The figure shows one vision sensor (In-Sight 5100/5400/5400C or In-Sight 5401/5403), its 24V power supply,
a hub, a personal computer, and three robot controllers with their robots, all connected over Ethernet.)

Below is a list of the equipment configuration when one camera is used with three robots.
Table 2-6 List of Configuration When One Camera Is Used with Three Robots

Part name

Format

Manufacturer

Q'ty

Remarks

Mitsubishi
Robot controller (*4)
CRnD-700 series
1
S/W version P4 or later
Electric
Robot main unit
All models
3
3D-51C-WINE
MELFA-Vision
1
(5)
Vision sensor Vision sensor
In-Sight 5000 series
1
Software: 3.20 or later
Breakout cable
1
Network cable
1
Lens
C mount lens(*1)
1
Provided by customer (*2)

24V power supply


1
(*3)

PC
1

Hub
1

Ethernet cable (straight)


4

Lighting device
1

(*1) Select from general C mount lenses.


(*2) The half tone (gray) section is the equipment provided by the customer.
(*3) For the 24 VDC (±10%) power supply, the vision sensor requires a minimum of 350 mA (5403: a
minimum of 500mA / 5400R: a minimum of 250mA).
(*4) It is attached to the network vision sensor set


2.2.4. Configuration with one robot controller (SQ series) and one vision sensor

The entire configuration (robot system) when one camera is used is shown below.

Figure 2-6 Configuration (Robot System) When One Camera Is Used
(The figure shows the vision sensor (In-Sight 5100/5400/5400C or In-Sight 5401/5403), its 24V power supply,
a hub, and a personal computer connected by Ethernet to an iQ Platform PLC system containing the robot
CPU unit (Q172DRCPU), which is connected to the drive unit (robot controller) and the robot. The example
is the CR2Q-700 series.)

Table 2-7 List of Configuration When One Camera Is Used
Part name                      Format                Manufacturer          Q'ty  Remarks
Robot controller               CRnQ-700 series       Mitsubishi Electric   1     S/W version N1 or later
Robot main unit                All models            Mitsubishi Electric   1
MELFA-Vision                   3D-51C-WINE           Mitsubishi Electric   1     (*4)
Vision    Vision sensor        In-Sight5000 series   COGNEX                1     S/W version 3.20 or later (*4)
sensor    Breakout cable                                                   1     (*4)
          Network cable                                                    1     (*4)
Lens                           C mount lens (*1)                           1     Provided by customer (*2)
24V power supply                                                           1     (*3)
PC                                                                         1
Hub                                                                        1
Ethernet cable (straight)                                                  2
Lighting device                                                            1

(*1) Select from general C mount lenses. A CS mount lens is possible for the 5400R.
(*2) The half tone (gray) section is the equipment provided by the customer.
(*3) For the 24 VDC (±10%) power supply, the vision sensor requires a minimum of 350 mA (5403: a minimum
     of 500 mA / 5400R: a minimum of 250 mA).
(*4) It is attached to the network vision sensor set.


3. Specifications
3.1. Network vision sensor specifications
Here are the specifications of the network vision sensor by itself.
Table 3-1 Network Vision Sensor Stand-Alone Specifications

Magnificati
on ratio

Average performance with


standard edition as 1 (*2)

Standard
5100

High-Perfor
mance
5400

x1

x2.5

Memory
Firmware Version
Resolution
Camera

Display
option
I/O option
(*6)

Interface

(*6)
Lighting
Application
development

CCD sensor size


Color
Exposure[ms]
Image capture speed
(frames/sec.) (*3)
Capture[greytones]
Weight[g]
(lens cover mounted, no
lens)
VGA board
PC
Trigger/high-speed
output
count
I/O breakout
expansion module
Ethernet I/O support
(512 input max./512 output)
Ethernet
Integrated lighting option
Controller pad/VGA
In-Sight Explorer/PC
C or CS

Lens
mounting
Voltage condition
Power
The maximum current
supply
Image processing
Environme
ntal

Ambient temperature
(operation / storage)
Ambient humidity
Protection
Impact[G]
Vibration[G]

Certificatio
n

CE, FCC, UL, CUL

Color
5400C (*1)

High-resolution
5401 (*1)
x2

Remote Head
5400R (*1)
x2.5

Vision program storage area


Image processing area
Ver. 3.2 or later
640x480
1024x768
1/3 inch

0.032-1000
60

256

5403 (*1)

:32MB
:64MB
1600x1200
1/1.8 inch

0.027-1000

0.025-1000

15

40

20
16,777,216

640x480
1/3 inch

256

297.6

294.8

/2(*5)

(Communication lines:

3 lines)

C/CS

24VDC ±10%
350mA
500mA
250mA
Pattern matching / Blob / Edge/Bar code 2D codes / Text comparison / Histogram / Color
0 to 45°C (operation) / -30 to 80°C (storage) (*7)
90% no condensation allowed
IP67 (When lens cover installed)
80 IEC68-2-27
10 10 - 500Hz IEC68-2-6

(*1) The high-resolution, color, and remote head editions are supported from Ver.1.1 of MELFA-Vision.
(*2) The performance values do not include the image capture speed.
(*3) The image capture speeds are the values with an exposure time of 8 ms and full image frame capture.
(*4) A lens cover (which comes with this sensor) designed to meet the NEMA standard protection
     specifications is required.
(*5) One high-speed output is for the strobe.
(*6) I/O and Ethernet cable: the maximum bend radius is 38 mm.
(*7) For the 5400R, the maximum operating temperature of the remote head is 50°C.


3.1.1. External Dimensions of Network Vision Sensor (5100/5400/5401/5403/5400C)

The external dimensions of the network vision sensor (5100/5400/5401/5403/5400C) are shown below.
Please refer to them when mounting the vision sensor.

Figure 3-1 External Dimensions of Network Vision Sensor (5100/5400/5401/5403/5400C)


3.1.2. External Dimensions of Network Vision Sensor 5400R

The external dimensions of the network vision sensor 5400R are shown below.
Please refer to them when mounting the vision sensor.
Unit: mm

Figure 3-2 External Dimensions of Network Vision Sensor 5400R (Processor part)

Unit:mm

Figure 3-3 External Charts of Network Vision Sensor 5400R (Remote Head part)

Unit:mm

Figure 3-4 External Charts of Network Vision Sensor 5400R (Bracket part)


3.2. Robot controller specifications

The robot controller specifications related to the network vision sensor are shown below.

Table 3-2 Robot Controller Specifications
Item                                  Specifications
Robot controller                      CRnQ-700 series: S/W version N1 or later
                                      CRnD-700 series: S/W version P4 or later
Software                              RT ToolBox 2 (Ver.1.0 or later is recommended)
Applicable robot controllers          CRn-500 / CRnQ-7xx / CRnD-7xx
Connectable robots                    All robots
Number of sensors and robots          Number of cameras per robot controller: 7 maximum
connectable                           Number of robot controllers that can be connected per vision sensor: 3 maximum
Robot program language                MELFA-BASIC V with special vision sensor commands
                                      MELFA-BASIC IV can be used

3.3. MELFA-Vision
3.3.1. Features

MELFA-Vision is software that provides support for those using vision sensors for the first time and for
connections between robot controllers and vision sensors. Below are the basic functions and features of
MELFA-Vision.

Table 3-3 MELFA-Vision Basic Functions and Features
Function                            Features
Logon and logoff                    You log on to specify the vision sensors on the network and control them.
                                    You log off to end control.
Image operations                    Images captured with the vision sensor are operated on as follows.
  Capture request                   This manually requests the vision sensor to capture an image and
                                    requests live (real time) capturing.
  Camera image adjustment           This changes the display magnification ratio for images captured with
                                    the vision sensor.
Online and offline                  When a robot controller controls vision sensors, the vision sensor is put
                                    online (making it controllable from the outside); when making such
                                    settings as vision program writing, changing, and deleting, it is put offline.
Vision program writing              This registers frequently used image processing (pattern matching, blob,
                                    and color) as templates. Each of these image processing types can be set
                                    easily using the setting screen, with an easy-to-understand work procedure.
                                    It is also possible to edit, delete, and rename written vision programs.
Recognition result display          The vision sensor image processing results can be displayed, and the
                                    recognized quantity and recognized work positions checked.
Robot controller                    It is easy to make settings for communicating between the robot
communication settings              controller and the vision sensor.
Robot and vision sensor             The position of work recognized by a vision sensor can be converted to
calibration                         the robot coordinate system. In this way, work positions received from the
                                    vision sensor become positions at which the robot directly holds the work.
Image Log                           Images recognized by a vision sensor can be stored on a PC. This makes it
                                    possible to analyze later the images of work that could not be recognized
                                    and aids in finding the cause.
File transfer                       Files can be transferred between a vision sensor and a PC.
Backup                              All data set on a vision sensor can be stored on a PC.
Restore                             Backup data stored on a PC can be returned to a vision sensor.
Cloning                             It is possible to set multiple vision sensors with the same settings as one
                                    vision sensor.
Version correspondence between the vision sensors by COGNEX and MELFA-Vision (Ver.1.0 / Ver.1.1 /
Ver.1.1.1 / Ver.1.2) is shown in the following, which indicates the supported model name of each version.

Table 3-4 THE CORRESPONDENCE TYPE AND VERSION OF MELFA-VISION
In-Sight model name    Specification
5100                   Standard
5101                   Standard + High resolution 1 (1,024x768)
5103                   Standard + High resolution 2 (1,600x1,200)
5100C                  Standard + color
5400                   High performance
5401                   High performance + High resolution 1 (1,024x768)
5403                   High performance + High resolution 2 (1,600x1,200)
5400C                  High performance + color
5400R                  High performance + Remote head
5400S                  High performance + Stainless steel body
5403S                  High performance + High resolution 2 (1,600x1,200) + Stainless steel body
5400CS                 High performance + color + Stainless steel body
5600                   High speed
5603                   High speed + High resolution 2 (1,600x1,200)
1100 (Micro)           Micro standard
1400 (Micro)           Micro high performance
1403 (Micro)           Micro high performance + High resolution 2 (1,600x1,200)
1100C (Micro)          Micro standard + color
1400C (Micro)          Micro high performance + color
1403C (Micro)          Micro high performance + High resolution 2 (1,600x1,200) + color

3.3.2. Operating Environment

Below is the PC operating environment for MELFA-Vision.

Table 3-5 MELFA-Vision Operating Environment
Item                Requirement
CPU                 Intel Pentium III 700 MHz (or equivalent) or faster
Main memory         256 MB min.
Hard disk           Available capacity of 200 MB min.
OS                  Microsoft Windows 2000, Service Pack 4
                    Microsoft Windows XP Professional, Service Pack 2
Display             An SVGA (800x600) or higher resolution display with graphic functions that can
                    display at least 16 colors
Disc device         CD-ROM drive
Keyboard            PC/AT compatible keyboard
Pointing device     Device that operates in the Windows operating system
Communications      Must have an Ethernet line that operates in the Windows operating system

4. Work Charts
4.1. Work procedure chart
This chapter explains the work procedure for building a vision system using our robots.
Check the following procedure before starting work.

Start of work
Step 1: Equipment preparation and connection (Chapter 5)
        Prepare and connect the required equipment and install the software (p.5-23)
Step 2: Vision Sensor Initial Settings (Chapter 6)
        Vision sensor default settings (p.6-29)
        Work recognition test (p.6-33)
Step 3: Robot Controller Settings (Chapter 7)
        Robot controller communication settings (p.7-54)
        Calibration settings (p.7-59)
        Robot program writing (MELFA-BASIC V) (p.7-70)
        Automatic operation test (p.7-72)
Step 4: Maintenance (Chapter 8)
        Vision sensor data backup (p.8-77)

5. Equipment preparation and connection

This chapter explains how to prepare the necessary equipment, connect it to the system, etc., using a system
with one vision sensor and one robot controller as an example.

5.1. Equipment preparation
The following equipment is required for building the vision system. Included is equipment that must be
provided by the customer, so prepare what is necessary for your system.

Table 5-1 List of Configuration When One Camera Is Used
Part name                            Format                 Manufacturer          Q'ty  Remarks
Robot controller                     CRnQ-700 series or     Mitsubishi Electric   1     CRnQ-700 series: S/W version N1 or later
                                     CRnD-700 series                                    CRnD-700 series: S/W version P4 or later
Robot main unit                      All models             Mitsubishi Electric   1
Teaching pendant                     R32TB / R56TB          Mitsubishi Electric   1
Network vision    Vision sensor      In-Sight 5000 series                         1     Software: 3.20 or later
sensor basic set  Breakout cable                                                  1
                  Network cable                                                   1
Lens                                 C mount lens                                 1     Provided by customer
24V power supply                                                                  1     Provided by customer
PC                                                                                1     Provided by customer
Hub                                                                               1     Provided by customer
Ethernet cable (straight)                                                         2     Provided by customer
Lighting device                                                                   1     Provided by customer
Robot hand                                                                        1     Equipment arranged for as necessary
Hand interface card                  2A-RZ365 / 2A-RZ375    Mitsubishi Electric   1     Equipment arranged for as necessary

5.2. Equipment connection
This section explains how to connect the equipment prepared.

5.2.1. Individual equipment connections

This section explains how to connect each piece of equipment.

For details on how to install the lens on the vision sensor main unit, how to install the breakout cable, and how
to install the network cable, see the "In-Sight 5000 Series Installation Guide".
(1) Install the C mount lens on the vision sensor. The C mount lens focal distance depends on the
    distance between the lens and the work and on the field of vision the customer requires for image
    processing.
(2) Connect the Ethernet cable to the connector (female) labeled "ENET".
(3) Connect the breakout cable to the connector (male) labeled "24VDC".
(4) Connect the other end of the breakout cable to the 24V power supply: the "24VDC" (white/green)
    wire and the "GND" (brown) wire.
(5) Connect the other end of the Ethernet cable to the hub.
(6) Connect an Ethernet straight cable to the hub and the other end to the PC.



(7) Connect an Ethernet straight cable to the hub and the other end to the robot controller's
    Ethernet interface.
    CRnQ-7xx series: connect the Ethernet cable to the hub, connect with the DISP I/F of the robot CPU
    unit (robot controller), and connect with the CNDISP of the drive unit (robot controller).
    CRnD-7xx series: connect the Ethernet cable to the hub and connect with LAN1 of the Ethernet
    interface on the robot controller.

Figure 5-1 Connect the Ethernet cable


Figure 5-2 System configuration
(The figure shows the overall wiring for both controller types: a CRnD-7xx series robot controller, and a
CRnQ-7xx series system consisting of a drive unit (robot controller) and a robot CPU unit in an iQ Platform
PLC system. In each case the vision sensor (In-Sight 5100/5400/5400C or In-Sight 5401/5403), its 24V
power supply, a hub, and a personal computer are connected by Ethernet, and the robot is connected to
the controller.)

5.3. Software installation
This product comes with two CD-ROMs (In-Sight and MELFA-Vision). Each CD-ROM contains software
necessary for starting up the vision system.
This section explains how to install this software.
Before installing the vision sensor dedicated software (In-Sight Explorer), always check the model and type of
vision sensor and the version of the vision sensor dedicated software (In-Sight Explorer) being used.
Before installing MELFA-Vision, check the version of MELFA-Vision being used.

5.3.1. Vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later) installation

This section explains how to install the vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later).
(1) End all applications that are running.
(2) Insert the In-Sight installation CD-ROM into the PC's CD-ROM drive. When the installation
    program starts automatically, the following screen is displayed.
(3) Click the items indicated as not installed, and install each tool.


Figure 5-3 In-Sight Software Setup Screen (the screen indicates which items are already installed)

(4) When installation is completed, the icons for the installed software will appear on the PC's
    desktop.
(5) Start the installed software to make sure it has been installed correctly.


5.3.2. Vision sensor dedicated software (In-Sight Explorer before Ver.4.1) installation

This section explains how to install the vision sensor dedicated software (In-Sight Explorer before Ver.4.1).
(1) End all applications that are running.
(2) Insert the In-Sight installation CD-ROM into the PC's CD-ROM drive. When the installation
    program starts automatically, the following screen is displayed.

Figure 5-4 In-Sight Software Setup Screen

(3) Select the language displayed on the right side of the screen.
(4) Click the [1] to [3] buttons in order to install the respective software.
(5) For [4], click it if your PC does not yet have Adobe Reader installed. Also click it to install Adobe
    Reader if you have an older version.
(6) When each piece of software has been installed, "Installed" is displayed next to that item on the
    installation program screen. Check that "Installed" is displayed next to [1] to [3]. Whether or not to
    install [4] is up to your judgment.
(7) When the installation is complete, the icons for the installed software are displayed on the PC's
    desktop.


5.3.3. MELFA-Vision installation

This section explains how to install MELFA-Vision (network vision sensor support software).
Install this product with the following procedure.

Caution

Uninstall the old version before installing the new one.
If an old version of "MELFA-Vision" is installed, uninstall the old version of "MELFA-Vision", then
install the new version of "MELFA-Vision".

Caution

When installing, log in as a user with administrator authority.
The system will not let you install if you log in as a user who does not have administrator authority.

When "MELFA-Vision" is installed on the personal computer, ".NET Framework 1.1" is also installed.
Microsoft Windows 2000 Professional Operating System
Microsoft Windows XP Professional Operating System
(1) Set this CD-ROM in the personal computer's CD-ROM drive. The Setup screen starts up
    automatically.
(2) If the screen does not start up automatically, carry out the following procedure.
    (a) Select the [start] menu and [Run].
    (b) Check the CD-ROM drive name and input as shown below.
        "Drive name":\Setup.exe
        (Example: If the CD-ROM drive is "D:", this will be "D:\Setup.exe".)

Figure 5-5 Run

Figure 5-6 Start installation


(3) Installation procedure

Start
(a) Set the CD-ROM in the PC's CD-ROM drive.
(b) Open "Setup.exe" on the CD-ROM (when it is not started automatically).
(c) Starting installation wizard
(d) Installation of .NET Framework 1.1 (when .NET Framework 1.1 is not installed)
(e) License Agreement
(f) Input "Customer Information"
(g) Input Product ID (the Product ID is printed on the License Certificate)
(h) Choose Destination Location
(i) Installation Wizard Complete
(j) Start the program, and confirm whether the product was installed correctly
Finish

Below are the contents of the CD-ROM.

:\
  Setup.exe ... The files for installation of MELFA-Vision.
  Doc ......... Instruction Manual (pdf)

Caution

About the confirmation and warning messages displayed during installation
During installation on Windows XP or Windows Vista, the following confirmation and warning messages are
displayed, but select to continue installation. If you select not to install, please execute the installation
again.
(1) Installation confirmation message for USB driver software (for Windows XP)

(4) Installation check


When the installation is complete, the installed software can be started from the Windows Start
menu. For details, see "6.3.1 Starting MELFA-Vision (network vision sensor support software)".


5.3.4. USB driver (CRnD-700 series robot controller) installation

Connecting the CRnD-700 series robot controller with USB requires installation of the robot USB driver. Install with
the following procedure.

Caution
If the USB driver cannot be installed, check the following setting.
<When Windows 2000 is used>
If you have selected "Block-Prevent installation of unsigned files" after [Control Panel] - [System] [Hardware] - [Driver Signing], the USB driver may not be installed.
Choose "Ignore-Install all files, regardless of file signature" or "Warn-Display a message before
installing an unsigned file" for [Driver Signing], and install the USB driver.
<When Windows XP is used>
If you have selected "Block-Never install unsigned driver software" after [Control Panel] - [System] [Hardware] - [Driver Signing], the USB driver may not be installed.
Choose "Ignore-Install the software anyway and don't ask for my approval" or "Warn-Prompt me each
time to choose an action" for [Driver Signing], and install the USB driver.

(1) When using Windows 2000


When you connect the CRnD-700 robot controller and the computer with a USB cable, installation starts and
completes automatically.
(2) When using Windows XP
Below is the installation procedure for the USB driver using Windows XP (Professional).
1) When you connect the computer and
CRnD-700 series robot controller with a
USB cable, the screen on the left is
displayed. Select "Install the software
automatically (Recommended)", then
click the [Next] button. Installation of the
USB driver starts.

2) When the screen on the left is displayed,


the installation is complete.
Click the [Finish] button to end the
installation.

(Completed)

5.3.5. CRnQ communications USB driver installation

Connecting the CRnQ-700 series robot controller with USB requires installation of the robot USB driver. Install with
the following procedure.

Caution
If the USB driver cannot be installed, check the following setting.
<When Windows 2000 is used>
If you have selected "Block-Prevent installation of unsigned files" after [Control Panel] - [System] [Hardware] - [Driver Signing], the USB driver may not be installed.
Choose "Ignore-Install all files, regardless of file signature" or "Warn-Display a message before
installing an unsigned file" for [Driver Signing], and install the USB driver.
<When Windows XP is used>
If you have selected "Block-Never install unsigned driver software" after [Control Panel] - [System] [Hardware] - [Driver Signing], the USB driver may not be installed.
Choose "Ignore-Install the software anyway and don't ask for my approval" or "Warn-Prompt me each
time to choose an action" for [Driver Signing], and install the USB driver.
(1) When using Windows 2000
The following indicates the procedure for installing the USB driver when using Windows 2000.
1) The screen shown on the left appears
when you connect the personal
computer and Universal model QCPU by
the USB cable.
Click the [Next] button.

2) Choose "Search for a suitable driver for


my device [recommended]" and click the
[Next] button.


3) Check "Specify a location" and click the


[Next] button.

4) As the left screen appears, set "C:\Melsec\EasySocket\USBDrivers" and click the [Next] button.
If multiple MELSOFT products have been installed, browse to the "EasySocket\USBDrivers" folder in the installation destination of the first product.

5) The screen on the left appears to


indicate completion of installation.
Click the [Finish] button to terminate
installation.

(Completed)



(2) When using Windows XP
The following indicates the procedure for installing the USB driver when using Windows XP (Professional).
1) The screen shown on the left appears
when you connect the personal
computer and Universal model QCPU by
the USB cable.
Choose "Yes, now and every time I
connect a device" and click the [Next]
button.

2) As the screen on the left appears,


choose "Install from a list or specific
location [Advanced]" and click the
[Next] button.

3) As the screen on the left appears,


choose "Search for the best driver in
these locations".
Check "Include this location in the
search" and set the
C: \Melsec\EasySocket\USBDrivers.
After setting, click the [Next] button.
If volume MELSOFT products have been
installed,
browse
the
installation
destination "EasySocket\USBDrivers"
of the first product.


4) As the screen on the left appears, click


the [Continue Anyway] button to
continue the installation of the USB
driver.
(No problem will occur after installation
of the USB driver.)

5) The screen on the left appears to


indicate completion of installation.
Click the [Finish] button to terminate
installation.

(Completed)


6. Vision Sensor Settings


This chapter explains the vision sensor settings for recognizing work images.

6.1. Vision Sensor Initial Settings (In-Sight Explorer Ver.4.1 or later)


The first time you use your vision sensor, if you use a DHCP server, just switching on the power for the vision
sensor automatically sets its IP address, but if you are not using a DHCP server, it is necessary to make this
initial setting with the "In-Sight Connection Manager" installed with 5.3.1 Vision sensor dedicated software
(In-Sight Explorer Ver.4.1 or later) installation. The method for this initial setting is as follows.
(1) From the Windows Start menu, click the installed [In-Sight Explorer 4.3.1] to start "In-Sight
Connection Manager". Alternatively, double click the icon displayed on the desktop.
(2) Select and click [System] from the displayed screen's menu bar

(3) Select and click [Add Sensor/Device To Network] from the displayed screen's menu bar



(4) The following screen appears when [Add Sensor/Device to Network] is selected.
Click the [OK] button, and turn the vision sensor power OFF.
Wait at least five seconds, and then turn the power ON again.

(5) The devices to add to the network will appear, so select the displayed device and input the IP
address. When finished inputting, click the [Close] button.

An IP address is required for the personal computer, the robot controller, and the camera.
Ex.) Personal computer: 192.168.0.10
     Controller: 192.168.0.20
     Camera: 192.168.0.30

Key point: For details on the In-Sight connection manager, see the In-Sight Explorer help.


6.2. Vision Sensor Initial Settings (In-Sight Explorer before Ver.4.1)


The first time you use your vision sensor, if you use a DHCP server, just switching on the power for the vision
sensor automatically sets its IP address, but if you are not using a DHCP server, it is necessary to make this
initial setting with the "In-Sight Connection Manager" installed with 5.3.2 Vision sensor dedicated software
(In-Sight Explorer before Ver. 4.1) installation. The method for this initial setting is as follows.
(1) From the Windows Start menu, click the installed [In-Sight Connection Manager] to start "In-Sight
Connection Manager". Alternatively, double click the icon displayed on the desktop.
(2) On the displayed screen select [Setup one or more In-Sight vision sensors to work on my network],
then click the [Next] button.

Key point: For details on the In-Sight connection manager, see the In-Sight Explorer help.



(3) Input the MAC address listed on the vision sensor main unit sticker, then click the [Add] button.
When connecting multiple vision sensors, add a MAC address for each vision sensor connected.
Also, if you restart by switching Off, then On the power for all the vision sensors set, the MAC
addresses are automatically displayed in a list.

(4) Click the [Next] button.

(5) Your PC's [Subnet mask] (mandatory), [Default gateway] (option), [DNS server] (option), [Domain]
(option) settings are automatically acquired and displayed. Check that these values are correct, then
click the [Next] button.



(6) Input the vision sensor [New Name] (host name) and [New IP], then click the [Next] button.
Check with your network administrator for the IP address to set.
Here is an example in which an IP address of "10.50.0.100" is set.

(7) Click the [Configure] button, cut off the power for the vision sensor, wait at least 5 seconds, then
switch the power back on again.

(8) When "Settings complete" is displayed in the [Status] column, the settings are complete. Finally click
the [Close] button to close the screen.


6.3. Work recognition test


This section explains how to register the work to be recognized with the vision sensor and how to test
recognition of this work.

6.3.1. Starting MELFA-Vision (network vision sensor support software)

This section explains the procedure for starting MELFA-Vision, which can easily execute a work recognition
test.
(1) From the Windows Start menu, click [All Programs] - [MELSOFT Application] - [RT ToolBox] -
[MELFA-Vision] to start "MELFA-Vision".

(2) Select the appropriate vision sensor from the displayed vision sensor list, then click the [Log On]
button.
Input a [User Name] whose access rights for the vision sensor are "Full access" and the [Password],
then click the [OK] button.
* The [User Name] and [Password] are registered in the vision sensor. The default setting is a user
name of "admin" with no password. If the user ID and password have been changed, input the new
user name and password.

Figure 6-1 Starting MELFA-Vision



This section explains the MELFA-Vision main screen. For details on the MELFA-Vision functions, see "9.2
MELFA-Vision Function Details".
The main screen consists of the following parts: (1) Window, (2) Title, (3) Menu, (4) Tool buttons,
(5) Vision Sensor Information, (6) Job Editing, (7) Calibration data creation, (8) Camera image, (9) Status bar.

Figure 6-2 Main Screen

(1) Window
The default window size is "800x600".
(2) Title
The title character string is "MELFA-Vision [logged in vision sensor name]".



(3) Menu

Table 6-1 MELFA-Vision Menu List

[File]
  Exit : Exits MELFA-Vision.
[View]
  Refresh Job Files : Updates the job list display on the left of the screen.
  Refresh Calibration Data : Updates the calibration data list display at the bottom left of the screen.
  Image - Zoom In : Raises the magnification ratio for display of the background picture.
  Image - Zoom Out : Lowers the magnification ratio for display of the background picture.
  Image - Zoom 1:1 : Sets the magnification ratio for display to 1:1.
  Image - Zoom to Max : Raises the magnification ratio for display of the background picture to the maximum.
  Image - Zoom to Fit : Fits the background picture to the screen.
  Image - Zoom to Fill : Adjusts so that the picture is displayed on the entire screen.
[Sensor]
  Connection - Logon : Logs on to make it possible to control the specified vision sensor.
  Connection - Logoff : Logs off from the specified vision sensor.
  Communication setting : Edits the vision sensor communication settings.
  Adjust Lens : Checks the adjustment method for the lens mounted on the vision sensor.
  Manual trigger : Requests the vision sensor to capture an image.
  Online : Selects whether the vision sensor can be controlled from the outside (online) or image processing can be edited (offline).
  Live Mode : Shoots continuously and recognizes images in real time.
  Display Test Result(s) : Monitors the information on the work recognized by the vision sensor.
  Image Log - Setting : Makes the FTP settings for storing images captured with the vision sensor to the PC.
  Image Log - Start Log : Starts the image log.
  Image Log - Quit Log : Quits the image log.
  User List : Adds, edits, and deletes the user name and password with which the vision sensor is logged on to.
  Startup : Sets the initial state for when the vision sensor power has just been switched On.
  Backup / Restore / Clone To : Backs up and restores the vision sensor setting contents. A clone can be prepared and copied to another vision sensor.
  Delete Calibration Job : Deletes calibration data.
  Delete All Files : Deletes all the calibration data and vision programs.
[Controller]
  Communication setting : Makes the settings for the line connection between the robot controller and vision sensor.
  Monitor : Monitors the data acquired by the robot.
[Help]
  About MELFA-Vision : Checks the MELFA-Vision version.


(4) Tool buttons
Figure 6-3 Tool Bar

Table 6-2 Tool Button List
  Logon/Logoff : On: Logged on / Off: Logged off
  Online/Offline : On: Online / Off: Offline
  Manual Trigger : Each time this button is clicked, the image is shot.
  Live Mode : On: Live display underway / Off: Live display ended
  Zoom In : Raises the magnification ratio for display of the background picture.
  Zoom Out : Lowers the magnification ratio for display of the background picture.
  Zoom to Max : Increases the background image's display magnification to the maximum.
  Zoom 1:1 : Sets the magnification ratio for display to 1:1.
  Zoom to Fit : Adjusts the background image to the screen.
  Zoom to Fill : Adjusts the image so it is displayed on the entire screen.
  Image Log : Makes the settings for FTP transfer of images captured with the vision sensor to the PC.
  Start Log/Quit Log : On: Image log reception enabled / Off: Image log reception disabled

(5) Vision Sensor Information


This displays the information for logged on vision sensors.

Figure 6-4 Vision Sensor Information (Pattern Matching)

Table 6-3 Vision Sensor Information Items (Pattern Matching)
  Name : This is the host name of the vision sensor logged onto. Blank when the vision sensor is logged off.
  Current Job : Displays the name of the job being edited.
  Found No. : Displays the recognition count set with the recognition conditions on the job editing screen.
  Threshold : Displays the threshold set with the recognition conditions on the job editing screen.
  Angle Start : Displays the start angle set with the recognition conditions on the job editing screen.
  Angle End : Displays the end angle set with the recognition conditions on the job editing screen.

Figure 6-5 Vision Sensor Information (Blobs)

Table 6-4 Vision Sensor Information Items (Blobs)
  Color : Displays the background color and the target color for recognition set with the color setting on the job editing screen.
  Area Limit : Displays the minimum and maximum values set with the work surface area on the job editing screen.
  Greyscale : Displays the threshold for the grayscale minimum set with the grayscale threshold value on the job editing screen.



(6) Job Editing
A list of the job files for the logged on vision sensors is displayed and job files are managed (created,
edited, name changed, updated).
Button list:
  New (N) : Creates a new job (vision program).
  Deletion (D) : Deletes a job.
  Renewal (R) : Refreshes the job list.
  Edit (T) : Edits (changes) a job.
  Name change (M) : Changes the name of a job.
  Alias preservation (A) : Saves the job under another name.

Figure 6-6 Job (Vision Program) List

(7) Calibration data creation


A list of the calibration data for the logged on vision sensors is displayed and calibration data is created.

Figure 6-7 Calibration List

(8) Camera image


This displays the logged-on camera image. Black when logged off.
(9) Status bar
This displays the vision sensor mode, image information for the mouse position, and PC image log
reception status.
Figure 6-8 Status Bar
Table 6-5 Status Bar
  Left frame : Displays the mouse position image information. (X coordinate value, Y coordinate value) = Contrast value
  Center frame : When the vision sensor status changes, the following character strings are displayed ("Online", "Offline", "Live", "Incomplete online", "Discrete online"). Anything else is blank.
  Right frame : Displays the PC image log reception status. When reception is enabled: "Image log reception enabled"; when disabled: blank.


6.3.2. Image adjustment

This section explains how to adjust the brightness and diaphragm (aperture) for the image captured by the vision
sensor.
(1) Check the image shot with MELFA-Vision [Camera Image].
From the MELFA-Vision menu, click [Sensor] - [Live Mode], or click the Live Mode button on the tool bar,
to put MELFA-Vision into live image mode.
Put the work to be recognized under the vision sensor and check the resulting image with
MELFA-Vision [Camera Image].

Figure 6-9 Image Check Example

(2) If the field of vision is not appropriate, adjust the distance between the vision sensor and the
work or replace the lens.
When the image is too large

Figure 6-10 Example in Which the Image Is Too Large

When the image is too small

Figure 6-11 Example in Which the Image Is Too Small



(3) If the brightness is not appropriate, adjust the lens "Diaphragm".

Figure 6-12 Camera Lens adjustment

If the appropriate brightness can not be achieved by just adjusting the Diaphragm, provide different
lighting.
Too bright

Figure 6-13 Example in Which the Image Is Too Bright

Too dark

Figure 6-14 Example in Which the Image Is Too Dark



(4) If the focus is not appropriate, adjust the lens "focus".

Figure 6-15 Example in Which The Image Is Out of Focus


6.3.3. Image processing settings

This section explains how to make the image processing settings, using pattern matching image processing
(only one robot, results output as robot absolute coordinate values) as an example. For details on other image
processing, see "9.3.1 Templates provided for MELFA-Vision".
(1) Click [New] under Job (Vision Program) List at the left of the MELFA-Vision main screen. Select
the process method from the displayed [Processing Method] screen, and then click the [OK]
button.

Figure 6-16 Selection of Image Processing Method



(2) Execute the work in order of the tabs from left to right on the displayed "Job Editing" screen. First,
adjust the image with the [Adjust Image] tab (refer to Table 6-6).
(3) When you change any of the displayed items and then click the [Test] button, the image with the
changed settings is displayed in the main screen [Camera Image], so adjust for clear contrast
between the work and the background. For details on the setting items, see below.
Table 6-6 List of [Adjust Image] Tab Items

Exposure (setting range: 0.032 - 1000)
  This adjusts the exposure time for images captured. Lowering this value shortens the image take-in time and reduces the amount of light accumulated on the CCD array, so the image becomes darker. On the other hand, if this value is raised, the amount of light accumulated increases, so the image becomes brighter.
Gain (setting range: 0 - 255)
  This adjusts the image brightness. Adjust by moving the track bar left and right (the image becomes darker when the value is decreased and brighter when it is increased).
Orientation (setting range: Normal / Mirrored horizontally / Flipped vertically / Rotated 180 degrees)
  This changes the direction in which the image is displayed (normal image, mirrored horizontally image, flipped vertically image, or rotated 180 degrees image).



Trigger (setting range: Camera / Continuous / External / Manual / Network)
  This specifies the image take-in trigger for when the vision sensor is "online".
  [Camera] The image is taken in at the rising edge detected at the camera hardware trigger input port.
  [Continuous] Images are taken in continuously.
  [External] The image is taken in at the rising edge of a discrete I/O (*1) input bit or serial command.
  [Manual] The image is taken in when the <F5> key is pressed.
  [Network] The image is taken in when the trigger is input to the master vision sensor on the network.
Direction of a camera (setting range: the vertical installation direction, facing down or facing up)
  Direction of a camera: Facing down or facing up is specified with the [Camera] tab.
  Photography picture: The picture becomes the front side or the back side depending on the direction of the camera.
  Recognition result: When the same work is photographed facing down and facing up and the recognition result gives the same vision sensor and robot coordinates, the C axis component is output as plus for facing down (front side) and minus for facing up (reverse side).
(*1) For details on discrete I/O, see the "In-Sight Installation Guide" that comes with this system.



(4) The area in which work is detected, registration of work to search for, and the work position
output to the robot are set with the "Job Editing" screen [Pattern & Search Area] tab.


(4-1)Determining the search area


When you click the "Search area" [Image] button, the focus shifts to the main screen and a red frame is
displayed around [Camera Image] on the main screen. The registered work is detected from the area
enclosed by the red frame. The area in which the work is detected can be changed with the mouse or
keyboard.
If you use the keyboard, each time the [F9] key is pressed, the "area adjustment mark" changes and fine
adjustments can be made with the [arrow keys]. To finalize the area, press the [OK] key; to cancel it,
press the [Cancel] key. The focus returns to the "Job Editing" screen.




(4-2)Determining the recognition pattern
When you click the "Pattern select" [Image] button, the focus shifts to the main screen
and a red frame is displayed around [Camera Image] on the main screen. The registered
work is enclosed by the red frame. For operations, use the mouse or keyboard. If you use
the keyboard, each time the [F9] key is pressed, the "area adjustment mark" changes and
fine adjustments can be made with the [arrow keys]. To finalize the pattern selection,
press the [OK] key; to cancel it, press the [Cancel] key. The focus returns to the "Job
Editing" screen.




(4-3)This specifies the work coordinates sent to the robot.
When you click the "Output position setting" [Image] button, the focus shifts to the main
screen and a red circle is displayed at [Camera Image] on the main screen. Move this
circle with the mouse or keyboard to specify what position to send to the robot for the
registered work. If you use the keyboard, fine adjustments can be made with the [arrow
keys]. To finalize the setting, press the [OK] key; to cancel it, press the [Cancel] key. The
focus returns to the "Job Editing" screen.

When using MELFA-Vision earlier than Ver. 1.2:
  "Display mark at center of pattern" does not appear. Designate the position mark of the pattern area to be transmitted.
When using MELFA-Vision Ver. 1.2 or later:
  If "Display mark at center of pattern" is checked, the pattern position area mark decided in step (4-2) will automatically appear.

The center of the red circle is the coordinate output position.



(5) This determines the recognition conditions.
When you click the "Job Editing" screen [Processing Condition] tab, the conditions for searching
for the registered work are set.

When you change a displayed setting item, then click the [Test] button, the results of image
processing under the specified conditions are displayed at the main screen [Camera Image], so
check whether or not the work is correctly recognized. For details on the setting items, see
below.
Table 6-7 List of [Processing Condition] Tab Items

Number to Find (setting range: 1 - 255)
  This sets the maximum number of pieces that can be detected in one image processing.
Accept (setting range: 1 - 100)
  This sets how much the detected work must match the registered work in order to be recognized. For the vision sensor, the degree of matching of the detected work is expressed as 1-100%. Work whose degree of matching is lower than the value set here is not recognized.
Find Tolerances, Angle Start / Angle End (setting range: -180 - 180)
  Sets the detected work tilt (start angle to end angle). This sets the start angle and end angle with the angle for the registered work as 0.
Sort By (setting range: None / X / Y)
  Returns the recognized work results in the specified sort order. When "None" is specified, the results are returned with the work sorted in order of high recognition ratio. This sorting is used for cases such as when multiple work pieces are detected and you want to grasp the work in order from left to right in the image. The "X" and "Y" specified here indicate the "X" and "Y" of the red frame displayed with the search area setting.
Offset of Rotation (setting range: -180 - 180)
  When outputting the recognized work results, this function adds the specified offset amount to the detection angle. When registering patterns, this is used if the 0-tilt image can not be captured.
Calibration No. (setting range: None / 1 - 10)
  This selects the calibration data used when outputting the recognized work coordinate value converted to the robot coordinate value. Work information can be converted to the coordinate systems for up to three robots and sent. Therefore, it is possible to select calibration numbers for three robots.
  * The figure above shows a screen assuming a system with one robot. When a system with three robots is selected, the [Robot 2:] and [Robot 3:] displays appear.

* For all the items, if a value outside the range is input, it is replaced with the nearest upper or lower limit value.



(5-1)This shows setting examples for the maximum detection count.
When 10 is set
When you click the "Job Editing" screen [Test] button, the 6 pieces of work captured in the image
are recognized and they are displayed with "+" pointer mark and a number from 0 in order of
highest degree of match attached to each piece of work.

When 3 is set
When you click the "Job Editing" screen [Test] button, the three pieces of work with the highest
degree of match are detected. They are displayed with "+" pointer mark and a number from 0 to
2 in order of highest degree of match attached to each piece of work.



(5-2)This shows setting examples for the threshold.
The higher the threshold, the greater the precision of the detection.
When 40% is set as the threshold
Maximum detection count of 10:
Threshold of 40%:

When you click the "Job Editing" screen [Test] button, even though there is one piece of work
at the top left, two pieces of work are recognized. Large work is also recognized and the
recognition count becomes 7.

When 60% is set as the threshold

When you click the "Job Editing" screen [Test] button, only the registered four pieces of work are
recognized, which is correct.



(5-3) This shows setting examples for the start angle and end angle.
When Start angle: -45, End angle: 45 is set

When you click the "Job Editing" screen [Test] button, only work whose tilt is within the -45 to +45
range, with the registered work angle as 0, is detected.

When Start angle: -45, End angle: 180 is set

When you click the "Job Editing" screen [Test] button, only work whose tilt is within the -45 to +180
range, with the registered work angle as 0, is detected.



(5-4)This shows setting examples for the sort direction.
Sort direction: X
When you click the "Job Editing" screen [Test] button, the recognized work is displayed with a
number from 0 in order of the +X direction (from top to bottom in the figure above) of the frame
specified with the search area specification.

Sort direction: Y
When you click the "Job Editing" screen [Test] button, the recognized work is displayed with a
number from 0 in order of the +Y direction (from left to right in the figure above) of the frame
specified with the search area specification.



(6) The "Job Editing" screen [Image log] tab is explained in "9.2.2Job Editing screen ([Image Log]
tab)"; the [Results Cell Position] tab is explained in "9.2.3Job edit screen ([Result Cell Position]
tab)".
(7) When you want to check not only the image but also the numeric data in the image processing
results, click [Sensor] [Display Test Result(s)] in the main screen menu.

In the initial display, recognition results monitors for three robots are displayed.
To view just the results for [Robot 1], move the mouse pointer to the right edge of the screen
and while dragging the screen right edge, move the mouse to the left.



(8) If the recognition results are not what was expected, change the recognition conditions with the
"Job Editing" screen [Processing Condition].
(9) If the recognition results are what was expected, click the "Job Editing" screen [Save] button to
save the image processing conditions set up till now to the vision sensor.
When you click the [Save] button, a "Confirmation" screen is displayed to check that you want to
save the settings. If you click [No], the save is cancelled. If you click [Yes], the "Input the File
Name" screen is displayed, so input the desired vision program name, then click the [OK] button.
You can check that the file was saved with the "Job Editing" screen "File Name" or the main
screen "Current Job", or the "Job (Vision Program) List".

(10)Click the "Job Editing" screen [Close] button to close the "Job Editing" screen.



(11) Choose Job1.job and click [Save As] to save Job1.job under another name.
(Ex.) Here, Job1 is saved as Job2.
The contents of the same vision program are saved under another name.
In this way, vision programs that use the same work, such as one for front/reverse side judgment,
can be saved under separate names.


7. Robot Controller Settings


This chapter explains the items set in the robot controller, using a system with one vision sensor and one robot
controller as an example.

7.1. Robot Controller Parameter Settings


In order for the robot controller to control the vision sensor, it is necessary to set the parameters for the
communication connection with the vision sensor. This section explains the methods for setting the
parameters.
(1) Switch On the robot controller power.

(TB ENABLE switch: lamp lit = enabled, lamp off = disabled. MODE switch: MANUAL. Power supply: ON.)

(2) Set the robot controller IP address.


Set the MODE switch of the operation panel to [MANUAL].
Turn ON the TB ENABLE switch of the teaching pendant (T/B) (lamp lit).
Select [MENU] -> [3. PARAM] on the T/B (R32TB), input the parameter name "NETIP", and set the IP address (for example, "192.168.0.20").
Turn OFF the TB ENABLE switch of the teaching pendant (T/B) (lamp off).
Turn the robot controller power supply off, and then turn it on again.

(3) From the Windows Start menu, click [All Programs] - [MELSOFT Application] - [RT ToolBox] -
[MELFA-Vision] to start "MELFA-Vision".



(4) This makes the settings for the robot controller and MELFA-Vision to communicate.
Click the "Communications server"
displayed on the Windows taskbar to
display the "Communication Server" screen. (If the communication parameters have not been set yet,
all the information is written in red.

Click the [Setting] button to display the "Communication Setting" screen.

Click [Method], then select "TCP/IP".

Click the [Detail] button to display the "TCP/IP Communication Protcol" screen.
IP Address
Robot Name
Vision])

Click the [OK] button to finalize the settings.

7-65

Robot Controller Parameter Settings

: Robot controller IP address.


: Easy-to-understand name

(Here,

[For

7 Robot Controller Settings


Click the [Set(Save and Close)] button to store the communication settings you have set.

Check that all the frames on the "Communication Server" main screen become light blue. If a frame
is green, redo the setting.
(5) This sets the parameters for the robot controller and vision sensor to communicate.
From the MELFA-Vision menu, select [Controller] - [Communication Setting] to display the
"Communication Setting" screen.

This sets the device number for the COM number used. Here is an example in which a COM number
of "COM2:" is used and the setting content is "OPT15".
Click the "COM2" pull-down, then select "OPT15".



From the "Device List", select [OPT15], then click the [Change] button.

On the displayed "Device Setting" screen, switch On the [Change the Parameter to connect Vision]
checkbox, then input the vision sensor IP address as the IP Address.

Click the [OK] button and check that a "*" is displayed in the "Communication Setting" screen
"Device List".

Click the [Write] button to display the "Confirmation" screen.



If you click [Yes], the parameters are written to the robot controller. A message is displayed that the
controller power will be switched Off, then On again to put the parameter change into effect.
<Supplement> .
With the CRnQ-700 Series, if only the power for the robot
CPU is turned ON again, an error may occur in the PLC
CPU module, so the power cannot be reset automatically.
Turn the PLC CPU power ON again manually.
With the CRnD-700 Series, the power is reset automatically.
Click the [Yes] button.
The CRnD-700 series: The robot controller power is automatically turned OFF and ON again.
The CRnQ-700 series: Turn the PLC CPU power OFF once and then turn it ON again.
Click the [Yes] button and wait for the robot controller power supply to be reset.
When the robot controller starts, click the [Read] button on the "Communication Setting" screen to check
whether the parameters have been written normally.

If the parameters were written normally, click the [Exit] button to close the " Communication Setting"
screen.


7.2. Calibration Setting
Calibration is a function that converts the vision sensor coordinate system into the robot coordinate system.
This calibration work is necessary for recognizing what position in the robot coordinate system the recognized
work is at. If this setting is not made, the coordinates for work recognized by the vision sensor display the
results in the sensor coordinate system.
This section explains calibration work using MELFA-Vision.
(1) Prepare the equipment used in calibration work.
Prepare four marking labels (copy the marking sheet in the appendix, align it with the image field
of vision and make enlarged and reduced copies) and the calibration jigs (for example a hand
with sharpened tip for specifying the center of the marking label with the robot).
(2) Set MELFA-Vision to a live image.
From the MELFA-Vision menu, click [Sensor] - [Live Mode], or click the Live Mode button on the
MELFA-Vision tool bar, to put MELFA-Vision into live image mode. Check that the button appears pressed in.

(3) Adjust the mark positions so that the four marking labels for calibration fit in the screen. Here is an
example in which the marking sheet in the appendix is placed. Here, the four marks are set to be marks 1-4
as in the figure below.




(4) Exit the live image.
From the MELFA-Vision menu, click [Sensor] - [Live Mode], or click the Live Mode button on the
MELFA-Vision tool bar, to exit live image mode.

(5) From the MELFA-Vision main screen, select [No. 1] in the [Calibration Data List]. This section
explains [No. 1] data creation.

(6) On the "Create Calibration Data" screen, click the [About How to specify Reference Point] button
to check the calibration operation method.



(7) Specify the first point on the vision sensor.
Click the [Image] button for the first point.
Use the mouse or the [arrow keys] to move the mark to mark 1, then click the [OK] button.



(8) Specify the second point on the vision sensor.
Click the [Image] button for the second point.
Use the mouse or the [arrow keys] to move the mark to mark 2, then click the [OK] button.



(9) Specify the third point on the vision sensor.
Click the [Image] button for the third point.
Use the mouse or the [arrow keys] to move the mark to mark 3, then click the [OK] button.



(10) Specify the fourth point on the vision sensor.
Click the [Image] button for the fourth point.
Use the mouse or the [arrow keys] to move the mark to mark 4, then click the [OK] button.



(11)Specify the first point with the robot.
Use the teaching box to move the robot hand to the first point.
* For this work, the use of a pointed-tip object in the hand is recommended.

On the "Create Calibration Data" screen, click the [Position] button for the first point to acquire
the robot's current position.

(12)Specify the second point with the robot.


Use the teaching box to move the robot hand to the second point.
On the "Create Calibration Data" screen, click the [Position] button for the second point to
acquire the robot's current position.

(13)Specify the third point with the robot.


Use the teaching box to move the robot hand to the third point.
On the "Create Calibration Data" screen, click the [Position] button for the third point to acquire
the robot's current position.



(14)Specify the fourth point with the robot.
Use the teaching box to move the robot hand to the fourth point.
On the "Create Calibration Data" screen, click the [Position] button for the fourth point to
acquire the robot's current position.

(15)Input a comment.
In the "Create Calibration Data" screen [Comment] column input a comment to make the
meaning of this work easy to understand, then click the [Create Data] button.

(16)Check that the calibration data is created.


Check the MELFA-Vision main screen [Calibration Data List] column and check that there is an
"*" in the "No. 1" [Existance] column.

(17)Close the "Create Calibration Data" screen.


Click the "Create Calibration Data" screen [Exit] button.



(18)The calibration data is set for the created job and the recognized work is displayed with the robot
coordinate system.
From the MELFA-Vision main screen "Job(Vision Program)List", select "Job1.job", then click
the [Edit] button.

On the displayed "Job Editing" screen, click the [Processing Conditions] tab.

Click the "Calibration No." "Robot 1:" pull-down, then with the "Job Editing" screen
[Processing Conditions] tab, select "1" as the [Calibration No.]



Place the work under the vision sensor, then click the "Job Editing" screen [Test] button.

From the MELFA-Vision menu, when you click [Sensor] - [Recognition Test Results], the
coordinates for the recognized work are displayed with the robot coordinate system.



(19)The "Job Editing" screen "Calibration" specification has been changed, so the recognition
conditions are saved.
Click the "Job Editing" screen [Save] button.

(20)Close the "Job Editing" screen.


Click the "Job Editing" screen [Exit] button.


7.3. Robot Program Writing


In order to start (execute) image processing with the vision sensor from the robot, it is necessary to
execute commands controlling the vision sensor in a robot program written in MELFA-BASIC V.

7.3.1. Flow for starting of image processing by robot


Next is shown the method for starting image processing from a robot program.
1. Check the line connection with the vision sensor (status variable: M_NVOpen)
2. Line connection with the vision sensor (MELFA-BASIC V: NVOpen)
3. Vision program start (MELFA-BASIC V: NVPst)
4. Vision sensor detection quantity acquisition (status variable: M_NvNum)
5. Vision detection position data acquisition (status variables: P_NvS1 - P_NvS8)
After this, the robot is moved with the position data detected with the vision sensor.
For details on the vision program dedicated MELFA-BASIC V commands and status variables, see "9.1
Vision Sensor Dedicated Commands and Status Variables".

7.3.2. Writing a Sample Robot Program


The robot program below is written and stored in the robot controller. For details on the storage
method, see the "RT ToolBox2 PC Support Software Instruction Manual".
Example of acquiring data in absolute coordinates using pattern matching

1 ' Before this program is run, the evacuation position P0, the work grasping position P1, and the work placement position P2 must have already been taught.
2 ' Example: P0=(+250.000,+350.000,+300.000,-180.000,+0.000,+0.000)(7,0)
3 '          P1=(+500.000,+0.000,+100.000,-180.000,+0.000,+10.000)(7,0)
4 '          P2=(+300.000,+400.00,+100.000,-180.000,+0.000,+90.000)(7,0)
5 If M_NVOpen(1)<>1 Then         ' When logon has not been completed for vision sensor number 1
6 NVOpen "COM2:" As #1           ' Connects with the vision sensor connected to COM2.
7 EndIf
8 Wait M_NVOpen(1)=1             ' Connects with vision sensor number 1 and waits for logon to be completed.
9 NVPst #1,"Job1","E76","J81","L85",0,10   ' Starts vision program [Job1], receives the number of recognitions by the vision sensor from the [E76] cell,
10                               ' and receives the recognized coordinates from the [J81]-[L85] cells, storing them in P_NvS1(30).
11 Mov P0                        ' Moves to the evacuation point.
12 If M_NvNum(1)=0 Then *NG      ' If the detection count is 0, jumps to the error processing.
13 For M1=1 To M_NvNum(1)        ' Loops once for each detection by vision sensor number 1.
14 P10=P1                        ' Creates the target position P10 using the vision sensor 1 results data.
15 P10.X=P_NvS1(M1).X
16 P10.Y=P_NvS1(M1).Y
17 P10.C=P_NvS1(M1).C
18 Mov P10,10                    ' Moves to 10 mm above the work grasping position P10.
19 Mvs P10                       ' Moves to the work grasping position P10.
20 Dly 0.1                       ' Wait time of 0.1 second
21 HClose 1                      ' Closes hand 1
22 Dly 0.2                       ' Wait time of 0.2 second
23 Mvs P10,10                    ' Moves to 10 mm above the work grasping position P10
24 Mov P2,10                     ' Moves to 10 mm above the work placement position P2
25 Mvs P2                        ' Moves to the work placement position P2
26 Dly 0.1                       ' Wait time of 0.1 second
27 HOpen 1                       ' Opens hand 1.
28 Dly 0.2                       ' Wait time of 0.2 second
29 Mvs P2,10                     ' Moves to 10 mm above the work placement position P2
30 Next M1                       ' Repeats.
31 Hlt                           ' Program pause (Create the appropriate processing.)
32 End                           ' Exit
33 '
34 *NG                           ' Error processing
35 Error 9000                    ' Error 9000 output.
36 Hlt                           ' Program pause (Create the appropriate processing.)
37 End                           ' Exit



(1) The evacuation position, work grasping position, and work placement position are taught in order
to operate the robot.
Use the teaching box to open the stored robot program and open the position edit screen.
Turn the controller's MODE select switch to [MANUAL].
Turn ON the teaching pendant (R32TB) TB ENABLE switch.
Input [Menu] - [3. Parameter] = "NETIP" from the teaching pendant (R32TB), and input the IP
address (ex., "192.168.0.20").
Display a command edit screen, and push the [JUMP]-[STEP] key, and push the
[TEACH]-[Yes] key.

(2) Move the robot to the evacuation position.


Switch On the robot servo power supply and move the robot with Jog operation.
Press the [Enable switch] + [SERVO] to turn the servo power ON.
Press the [Enable switch] + [each axis key (+X/-X, etc.)] to move the robot to the
evacuation point.


(3) Teach the evacuation point.


Open the Position Edit screen by selecting [MENU] - [1. FILE/EDIT] - [Program Select] - [Step
Select].
The position of the selected step will appear, so edit the coordinates to be set, and then press
[TEACH] on the screen to set the coordinates.
When the [Yes] is pressed, the recorded coordinate position will appear. Press the [CLOSE].


(4) Teach the work grasping position and work placement position in the same way.
(5) When this work is complete, press the teaching pendant [CLOSE] button to store the robot
program.
(6) Turn the teaching box key switch in the "Disable" direction.
(7) Turn OFF the teaching pendant (R32TB) TB ENABLE switch.


7.4. Executing the automatic operation test


This section explains automatic operation that starts the program created with 7.3 Robot Program
Writing and transports the work recognized with the vision sensor.

7.4.1. Put the vision sensor online

In order for the robot controller to control the vision sensor, it is necessary to put the vision sensor
"online". This section explains the work for putting the vision sensor online.
From the MELFA-Vision "Main" screen menu, click [Sensor] - [Online], or click the Online button on the toolbar.

7.4.2. Test by executing each step


Open the robot program created with the teaching box and while executing one line at a time, check
the robot program operations.
For details on the step execution method, see "Detailed explanations of functions and operations
(BFP-A8661)" "3.6 Debugging Operations".

CAUTION
Some command words are not completed in a single step execution.
If execution does not move to the next step when the step is executed once, execute the step again.
Example: NVOpen requires at least seven repetitions of step execution.


7.4.3. Starting a Robot Program

This section explains the work for starting the stored robot program "1" with the robot controller
operation panel (O/P).

Turn the operation panel key switch in the "Auto (Op)" direction.
Turn the controller's MODE select switch to [AUTOMATIC].

Press the [CHNG DISP] button to display the override at the Status Number.

Press the [UP/DOWN] button to set the Status Number display to "o.010". (This sets the robot
override to 10%.)
Press the [CHNG DISP] button to display the robot program number at the Status Number.

Press the [UP/DOWN] button to set the Status Number display to "P.0001". (This selects robot
program 1.)
Press the [SVO ON] button to switch On the robot servo power supply.

Check around the robot to make sure that everything will be safe even if the robot operates.
Press the [START] button.



The main screen [Camera Image] displays the recognition results and the robot transports all the
work recognized by the vision sensor. After transporting, the robot program stops.


7.5. When the robot can not grasp the work normally


This section explains what to do if the robot program started normally, but the robot could not grasp the
work normally.

7.5.1. Check the MELFA-Vision [Camera Image]


Check if the position of the work recognized by the vision sensor is correct.
(1) Check the main screen [Camera Image] and check that the "+" is on the recognized work.
(2) Check if the position of the "+" is the position specified with the "Job Editing" screen "Output
position setting".

(3) If the position of the recognized work is abnormal, re-edit the MELFA-Vision job.

7.5.2. Comparison of the position data for the work recognized by the vision sensor and the position data received by the robot
Check if the robot received the work position data normally from the vision sensor.
(1) From the main screen menu, click [Sensor] - [Recognition Test Results].



(2) From the main screen menu, click [Controller] - [Monitor] to display the "Monitor of Controller"
screen.
This screen monitors the controller's dedicated status variables for the vision sensor.

(3) Select the line connecting the robot controller and the vision sensor (in the explanation up till now
"COM2:"), then click the [Recognition Details] button.

(4) Compare the "Display Test Result(s)" screen and "Detail Monitor" screen "P_NvS1" values to check
if the robot controller is receiving the data normally.

(5) If the work position data received by the robot is abnormal, check the [Start Cell] and [End Cell]
positions specified with the robot program "NVPst" command.
(6) If the work position data received by the robot is normal, re-do the calibration setting.
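If you also want to capture the received values inside the robot program for this comparison, the following minimal MELFA-BASIC V sketch is one possible way to do it. It assumes the line connection has already been opened and the NVPst command has already been executed, as in the sample program of 7.3.2; the variable names M100 and P100 are arbitrary examples that can be checked afterwards with the variable monitor of RT ToolBox2 or the teaching pendant.

1 M100=M_NvNum(1)       ' Number of pieces of work detected by vision sensor number 1
2 If M100>=1 Then
3 P100=P_NvS1(1)        ' Position data of the first recognized work, as received by the robot
4 EndIf
5 Hlt                   ' Pause here and compare P100 with the "Display Test Result(s)" screen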


8. Maintenance
This chapter explains overall maintenance: vision sensor data backup and restoration, the image log function,
vision sensor cloning, the startup function, and user list registration.

8.1. Vision Sensor Data Backup


The backup function stores on a PC all the files (*.job, *.bmp, *.jpg, proc.set and hosts.net) stored on the
vision sensor.
This function can be used with the specified vision sensor either Online or Offline.
Also, although this function can be used with the specified vision sensor either logged on or logged off,
since the robot and vision sensor access can be slowed down by file transfer operations, normally back
up with the vision sensor offline.
This section explains backup work using MELFA-Vision.
(1) Display the MELFA-Vision backup screen.
From the MELFA-Vision menu, click [Sensor] - [Backup] to display the Backup screen.
(2) From the Backup screen "Sensor List", select the vision sensor to back up.
The destination to which backed up files are transferred can be changed with the [Browse] button.
Select whether to back up all of the files in the selected vision sensor, or to back up only the vision
programs, and then click the [Backup] button.

(3) If you select a vision sensor other than the one currently logged on, the "User Name And Password"
screen is displayed, so input the user name and password for the vision sensor to be backed up,
then click the [OK] button.
This screen is not displayed if the currently logged on vision sensor is selected with the "Sensor
List".
This screen is also not displayed if a sensor is selected that is not logged on but that vision sensor
can be logged on with the currently logged on user name and password.

(4) A confirmation screen is displayed, so check the contents, then click the [Yes] button.
* A vision sensor can be backed up even when it is online, but file transfer operations may delay the
robot and vision sensor access.
(5) When the backup starts, the indicator progresses as on the screen below.
To cancel a backup that is underway, click the [Stop] button.

(6) When the backup is complete, the completion message is displayed.


When the [OK] button is clicked, display returns to the Backup screen.


8.2. Vision Sensor Data Restoration


The restore function takes the files backed up to the PC with the backup function and returns them to the
vision sensor.
The restored files are all the files that were backed up.
This function can be used with the vision sensor either logged on or logged off.
Also, although this function can be used with the specified vision sensor either Online or Offline, since the
robot and vision sensor access can be slowed down by file transfer operations, it is recommended to
restore with the vision sensor offline.
This section explains restoration work using MELFA-Vision.
(1) Display the MELFA-Vision restore screen.
From the MELFA-Vision menu, click [Sensor] - [Restore] to display the Restore screen.
(2) Specify the folder to transfer from.
The folder to transfer from can be changed with the [Browse] button.
Select the vision sensor to restore from the "Sensor List".
Select whether to restore all files or only the vision program, and then click the [Restore] button.


(3) If you select a vision sensor other than the one currently logged on to, the "User Name and
Password" screen is displayed, so input the user name and password for the vision sensor to be
restored, then click the [OK] button.
This screen is not displayed if the currently logged on vision sensor is selected with the "Sensor
List".
This screen is also not displayed if a sensor is selected that is not logged on but that can be logged
onto with the currently logged on user name and password.

(4) A confirmation screen is displayed, so check the contents, then click the [Yes] button.
(5) A confirmation screen is displayed to ask whether or not to enable restoration of the vision sensor
network setting files.
To restore the vision sensor network setting files as well (the vision sensor setting file proc.set and the
host table file hosts.net), click the [Yes] button.

CAUTION
Only restore a network setting file to the sensor it was backed up from.
Restoring a network settings file to any other sensor can cause trouble.

(6) When the restoration starts, the indicator progresses as on the screen below.
To cancel a restoration that is underway, click the [Stop] button.

(7) When the restoration is complete, the completion message is displayed.


To reflect the restoration settings, restart the vision sensor.
When the [OK] button is clicked, display returns to the [Backup from Vision Sensor] screen.


8.3. Vision Sensor Cloning


The cloning function can create multiple vision sensors with the same files as the original one vision
sensor.
This function can be used with the vision sensors either logged on or logged off.
Also, although this function can be used with the vision sensors either Online or Offline, since the robot
and vision sensor access can be slowed by file transfer operations, it is recommended to perform cloning
with the vision sensors offline.
This section explains cloning work using MELFA-Vision.
(1) Display the MELFA-Vision cloning screen.
From the MELFA-Vision menu, click [Sensor] - [Clone To] to display the Clone screen.
(2) Select the vision sensor to be the cloning source and the vision sensor to be turned into a clone on
the "Clone" screen. Multiple vision sensors can be selected. To select multiple vision sensors, select
with mouse operations while pressing down the keyboard [Shift] key or [Ctrl] key. Select whether to
back up all files from the clone source vision sensor and create clones, or to create clones of only the
vision program, and then click the [Clone] button

(3) If the clone source vision sensor and the vision sensor(s) to be cloned to it are of different types, the
following warning message is displayed.
To continue the work, click the [Yes] button; to cancel it, click the [No] button.

(4) A confirmation screen is displayed, so check the contents, then click the [Yes] button.
(5) When the cloning work starts, the indicator progresses as on the screen below.
To cancel cloning work that is underway, click the [Stop] button.

(6) When the cloning is complete, the completion message is displayed.


To reflect the cloned settings, restart the vision sensor(s).
When the [OK] button is clicked, display returns to the Clone screen.


8.4.Image Log Acquisition Settings and Reception Start/End


The image log acquisition function stores the images captured by the vision sensor, under the conditions
set in the job (Always / OK images / NG images), while the vision sensor is communicating with the
controller in online mode.
Using this function makes it possible to check afterwards images that could not be recognized and track
down the reason why they could not be recognized.
To acquire the image log, an FTP server is started on the PC on which the images are stored, so set the
FTP server user name and password.
This section explains the method for acquiring the image log using MELFA-Vision.
(1) Display the MELFA-Vision image log setting screen.
From the MELFA-Vision menu, click [Sensor] - [Image Log] - [Setting], or click the corresponding button
on the MELFA-Vision tool bar, to display the "Image Log Setting" screen.

(2) Enter the FTP server user name and password on the displayed "Image Log Setting" screen.
This user name and password are the ones for the FTP server and are different from the user name
and password for logging on to the vision sensor. However, the same user name and password may
be set for both.
Also, enter here the same user name and password as for the "Job Editing" screen "Image Log" tab
"User Name of FTP" and "Password of FTP".
For details on the Image Log tab setting method, see "9.2.2 Job Editing screen ([Image Log] tab)".
The storage destination for acquired images can be changed with the [Browse] button.
When the settings are complete, click the [OK] button to close the "Image Log Setting" screen.
The next time the "Image Log Setting" screen is opened, the screen is opened with the same
settings as the previous time. To make the same settings as the previous time, remove the check
from the [Change User Name] check box.

(3) When the image log is started, the FTP server is started.
From the MELFA-Vision menu, click [Sensor] - [Image Log] - [Start Log], or click the corresponding
button on the MELFA-Vision tool bar.
When the FTP server starts up and image log reception becomes possible, an indicator is displayed at
the right end of the status bar of the MELFA-Vision main screen.
In this state, the vision sensor images are stored in the specified folder under the conditions set with
the "Job Editing" screen "Image Log" tab.
(4) When the image log is ended, the FTP server is stopped.
From the MELFA-Vision menu, click [Sensor] - [Image Log] - [Quit Log], or click the corresponding
button on the MELFA-Vision tool bar.
When the image log processing ends, the indicator is no longer displayed at the right end of the status
bar of the MELFA-Vision main screen.


8.5.Vision Startup Settings


The startup settings are a function that sets the startup conditions for when the power supply to the vision
sensor is switched On (select whether to start up online or offline and select the job to load).
This section explains the work for setting the startup using MELFA-Vision.
(1) Display the MELFA-Vision startup screen.
From the MELFA-Vision menu, click [Sensor] - [Startup] to display the Startup screen.
(2) Select whether to start up online or offline and what job to load when starting the vision sensor.
To start online, put a check in the [Online] checkbox. To start offline, remove the check from the
checkbox.
The jobs in the vision sensor are displayed in the [Job] drop-down list so select the job to load. If
there is no particular job to specify, select [<New>].
When the settings are complete, click the [OK] button.
The "Startup" screen is closed and display returns to the main screen.


8.6.User List Settings


The user list settings function is the function that sets the access rights for users that use the vision
sensor, the FTP read and write rights, and password settings.
This section explains the work for setting the user list using MELFA-Vision.
(1) Display the MELFA-Vision "User List" screen.
From the MELFA-Vision menu, click [Sensor] - [User List] to display the User List screen.
(2) The "User List" screen is displayed.

There are three types of user access rights.


Table 8-1 List of User Access Rights for Vision Sensors

Access      Explanation
Full        The user has full access (without restriction) to the vision sensor.
            Jobs can be loaded, edited, and stored.
            Normally log on with this right when using MELFA-Vision.
Protected   This user is not permitted to perform FTP writing under the initial conditions.
            However, writing can be permitted.
Locked      This user is only permitted to check the vision sensor processing state with the
            MELFA-Vision camera picture.
There are two types of display item settings - normal and custom; for MELFA-Vision, the custom
view is not displayed even if custom is selected.
FTP writing and reading can be permitted and prohibited with [Yes] and [No].
Also, the following three types of users can be set for the initial state for the vision sensor. The
respective settings are shown in the table below.
Table 8-2 Registered User Name List

User name   Password   Access      Display   FTP writing   FTP reading
admin       None       Full        Normal
monitor     None       Locked      Custom
operator    None       Protected   Custom

(3) To add a user, click the [Add] button on the "User List" screen; to edit an existing user, select the user
from the list and click the [Edit] button.
The "User" screen is displayed, so set the required items and click the [OK] button.

(4) To delete an existing user, select the user from the "User List" screen list and click the [Delete]
button.
Check the contents of the confirmation screen, then click the [Yes] button.

* The "admin" user can not be deleted.


(5) When the settings are complete, click the [OK] button on the "User List" screen. The "User List"
screen is closed and display returns to the main screen.


9. Detailed Explanation of Functions


This chapter explains the functions of this product in detail.

9.1.Vision Sensor Dedicated Commands and Status Variables


The robot controller has status variables and dedicated commands for controlling vision sensors. This
section explains these dedicated commands and status variables.

9.1.1. How to Read Items

[Function]          Shows the command word function.
[Format]            Shows the command word argument input method.
                    < > indicates an argument.
                    [ ] indicates that the item can be omitted.
                    A space in the format indicates that a space is required.
[Term]              Shows the argument meaning, range, etc.
[Sample sentence]   Shows a sample sentence.
[Explanation]       Shows the function in detail and caution items.
[Error]             Shows errors generated when the command word is executed.

9.1.2. MELFA-BASIC V Commands

Here are the dedicated vision sensor commands. These commands can be used by the following software
versions.
CRnQ-700 series: N1 or later
CRnD-700 series: P1 or later
Table 9-1 List of Dedicated Vision Sensor Commands

Command word   Contents
NVOpen         Connects with the vision sensor and logs on to the vision sensor.
NVPst          Starts the specified vision program and receives the results.
NVRun          Starts the specified vision program.
NVIn           Receives the results of the vision program specified with the NVRun command.
NVClose        Cuts off the connection with the vision sensor.
NVLoad         Puts the specified vision program into the state in which it can be started.
NVTrg          Requests the vision sensor to capture an image and acquires the encoder value
               after the specified time.

Additional command word details are shown below.


(1) NVOpen (network vision sensor line open)


[Function]
Connects with the specified vision sensor and logs on to that vision sensor.
[Format]
NVOpen "<COM number>" As #<Vision sensor number>
[Term]
<COM number> (Can not be omitted):
Specify the communications line number in the same way as for the Open command.
"COM1:" can not be specified because it is reserved for the RS-232C port on the front of the operation panel.
Setting range: "COM2:" - "COM8:"
<Vision sensor number> (Can not be omitted)
Specifies a constant from 1 to 8 (the vision sensor number). Indicates the number for the vision
sensor connected to the line specified with the <COM number>.
Be careful. This number is shared with the <file number> of the Open command.
Setting range: 1 - 8
[Sample sentence]
1 If M_NVOpen(1)<>1 Then 'If logon to vision sensor number 1 is not complete
2 NVOpen "COM2:" As #1 'Connects with the vision sensor connected to COM2 and sets its number as number 1.
3 EndIf
4 Wait M_NVOpen(1)=1 'Waits for the logon to vision sensor number 1 to be completed.
[Explanation]
1) Connects with the vision sensor connected to the line specified with the <COM number> and logs on to
that vision sensor.
2) It is possible to connect to a maximum of 7 vision sensors at the same time. <Vision sensor numbers>
are used in order to identify which vision sensor is being communicated with.
3) When used together with the Open command, the Open command <COM number> and <File number>
and the <COM number> and <Vision sensor number> of this command are shared, so use numbers
other than those specified with the Open command <COM number> and <File number>.
Example: Normal example
1 Open "COM1:" As #1
2 NVOpen "COM2:" As #2
3 NVOpen "COM3:" As #3

Error example
1 Open "COM2:" As #1
2 NVOpen "COM2:" As #2 '<COM number> already used
3 NVOpen "COM3:" As #1 '<Vision sensor number> already used (shared with the Open command <file number>)
In a configuration with one robot controller and one vision sensor, it is not possible to open more than one
line to that vision sensor. If the same IP address is set for more than one line in the [NETHSTIP] parameter,
an "Ethernet parameter NETHSTIP setting" error occurs.
4) Logging on to the vision sensor requires a user name and password. Set a user name that has full
access rights in the vision sensor, and set that user name and the password in the robot controller
[NVUSER] and [NVPSWD] parameters.
The user name and password can each be any combination of up to 15 numbers (0-9) and letters (A-Z).
(The teaching box (T/B) only supports uppercase letters, so when registering a new user, set the
password in the vision sensor using uppercase letters.)
The user name with full access rights when the network vision sensor is purchased is "admin", and the
password is blank. Therefore, the default values of the [NVUSER] and [NVPSWD] parameters are
[NVUSER] = "admin" and [NVPSWD] = "".
When the "admin" password is changed with MELFA-Vision or a new user is registered, change the
[NVUSER] and [NVPSWD] parameters accordingly. Note that when the content of the [NVPSWD]
parameter is displayed, "****" is shown. If the password on the vision sensor side is changed, open the
[NVPSWD] parameter and directly overwrite the displayed "****" with the new value. After making the
change, reset the robot controller power.
[Caution]
When multiple vision sensors are connected to one robot controller, set the same user name and
password for all of them.
5) The state of communications with the network vision sensor when this command is executed can be
checked with M_NVOpen. For details, see the explanation of M_NVOpen.


6) If the program is cancelled while this command is being executed, it stops immediately. In order to log on
to the vision sensor, it is necessary to reset the robot program, then start.
7) When this command is used with multi-tasking, there are the following restrictions.
The <COM number> and <Vision sensor number> must not be duplicated in different tasks.
If the same <COM number> is used in another task, the "attempt was made to open an already
open communication file" error occurs.
SLOT 2
10 NVOpen "COM2:" As #1
20 ...

SLOT 3
10 NVOpen "COM2:" As #2
20 ...

"COM2:" is specified in both Slot 2 and Slot 3, so an error occurs.

If the same vision sensor number is used in another task, the "attempt was made to open an
already open communication file" error occurs.
SLOT 2
10 NVOpen "COM2:" As #1
20 ...

SLOT 3
10 NVOpen "COM3:" As #1
20 ...

Different lines ("COM2:" and "COM3:") are specified in Slot 2 and Slot 3, but the same <Vision sensor
number> #1 is specified in both, so an error occurs.

8) A program start condition of "Always" and the continue function are not supported.
9) Three robots can control the same vision sensor at the same time. If a fourth robot logs on, the line for
the first robot is cut off, so be careful when constructing the system.
10) The line is not closed with an End command in a program called out with a Callp command, but the line
is closed with a main program End command. The line is also closed by a program reset.
11) If an interrupt condition is established while this command is being executed, the interrupt processing is
executed immediately even during processing of this command.
[Error]
1) If the data type of an argument is incorrect, a "syntax error in input command statement" error is generated.
2) If there is an abnormal number of command arguments (too many or too few), the "incorrect argument
count" error occurs.
3) If the character specified in <COM number> is anything other than "COM2:" through "COM8:", the
"argument out of range" error occurs.
4) If the value specified as the <vision sensor number> is anything other than "1" through "8", the
"argument out of range" error occurs.
5) If a <COM number> for which the line is already connected is specified (including the <File number> for
which the line has been opened with an Open command), the "attempt was made to open an already
open communication file" error occurs.
6) If the vision sensor is not connected before the line is opened, the "vision sensor not connected"
error occurs. (The same manufacturer parameter [COMTIMER] as in the Ethernet specification is
used; currently "1s".)
7) If the same <COM number> or the same <vision sensor number> is specified in another task, the
"attempt was made to open an already open communication file" error occurs.
8) If the user name or password specified in the [NVUSER] parameter (user name) and [NVPSWD]
(password) is wrong, the "wrong password" error occurs.
9) If the communications line is cut while this command is being executed, the "abnormal
communications" error occurs and the robot controller side line is closed.
10) If a program is used for which the starting condition is "Always", the "this command can not be used if
the start condition is ERR or ALW" error occurs.


(2) NVPst (Network vision program start)


[Function]
Starts the specified vision program and obtains the results.
The data received from the vision sensor is stored in the robot controller robot status variables.
[Format]
NVPst #<Vision sensor number>,"<Vision program (job) name>","<Recognition count cell>",
"<Start cell>","<End cell>",<Type>[,<Timeout>]
[Term]
<Vision sensor number> (Can not be omitted)
This specifies the number of the vision sensor to control.
Setting range: 1 - 8
<Vision program (job) name> (Can not be omitted)
Specifies the name of the vision program to start.
The vision program extension (.job) can be omitted.
The only characters that can be used are "0" - "9", "A" - "Z", "a" - "z", "-", and "_".
<Recognition count cell> (Can not be omitted)
Specifies the cell in which the count of work recognized by the vision sensor is stored.
Setting range: Row: 0 - 399 Column: "A" - "Z" Example: "A5"
The count of work recognized by the vision sensor stored in the specified cell is saved in
M_NvNum(*). (* = 1 - 8)
* When a vision program is created with MELFA-Vision, see "9.2.3 Job Editing screen ([Result Cell
Position] tab)" and input the value indicated by MELFA-Vision.
<Start cell>/<End cell> (Can not be omitted)
Specifies the cell range (rows and columns) in which the results recognized by the vision sensor are
stored.
The contents of the specified cells are stored in one of the status variables P_NvS*(30),
M_NvS*(30,10), or C_NvS*(30,10). (* = 1 - 8)
Setting range: Row: 0 - 399 Column: "A" - "Z" Example: "A5", "C10", etc.
However, a "specified cell value out of range" error occurs if the range specified by <Start cell> and
<End cell> exceeds 30 rows by 10 columns, or if the number of data items contained in that cell
range exceeds 90.
* When a vision program is created with MELFA-Vision, see "9.2.3 Job Editing screen ([Result Cell
Position] tab)" and input the value indicated by MELFA-Vision.

Cell space in vision program

When creating a vision program this way and acquiring the data (X, Y, C) only for Robot 1, specify
<Start cell> = "J96" <End cell> = "L98".
<Type> (Can not be omitted)
Specifies the type of status variable in which the results recognized by the vision sensor are stored.
One cell of the recognition results can hold either a single value or multiple values delimited by
commas. However, one cell is limited to 255 characters or less.
The character string data stored in the cells from <Start cell> to <End cell> (single values or
comma-delimited values) is stored in a position type, numeric type, or character type status variable
according to the value specified for <Type>.
Setting range: 0 - 7


For details of the set values, see "Table 9-2 Storage in status variables according to the <Type>
specified value".
Table 9-2 Storage in status variables according to the <Type> specified value

Specified value   State of cell                     Corresponding status variable (*1)   Data type
0                 One data item per cell            P_NvS*()                             Position type
1                                                   M_NvS*()                             Single-precision real number type
2                                                   C_NvS*()                             Text type
3                                                   M_NvS*() and C_NvS*()                Single-precision real number type, text type
4                 Two or more comma(,)-delimited    P_NvS*()                             Position type
5                 data items per cell               M_NvS*()                             Single-precision real number type
6                                                   C_NvS*()                             Text type
7                                                   M_NvS*() and C_NvS*()                Single-precision real number type, text type

(*1) The * in the corresponding status variable is the <Vision sensor number>.
The position type data P_NvS*() is obtained by converting the cell values into numbers and storing
them in the X, Y, and C coordinates in order.
When a character that cannot be converted is included, it is stored as "0".
Note that data stored in the fourth and subsequent columns of the cell range specified by
<Start cell> and <End cell> is not acquired.
The numeric type data M_NvS*() is obtained by converting the cell values into numbers and storing
them.
When a character that cannot be converted is included, it is stored as "0".
M_NvS*() is a two-dimensional array, and all the data in the range specified by <Start cell> and
<End cell> can be stored.
The contents are explained under [Explanation] below.
The text type data C_NvS*() is stored in the character type variable as it is.
Vision program functions, kanji codes, etc. are returned from the vision sensor as the "#" character,
and blank cells are returned as the NULL character; in both cases the value is stored in C_NvS*() as
a NULL character.
C_NvS*() is a two-dimensional array, and all the data in the range specified by <Start cell> and
<End cell> can be stored.
An example in which multiple comma-delimited values (up to 255 characters per cell) are stored in
one cell is shown below.

When "4" is specified in the <Type> in this example, "J91" is specified for the <Start cell>, and "J91" is
specified for the <End cell>, the following result is obtained.

Variable     Data (X, Y, Z, A, B, C, L1, L2)
P_NvS1(1)    (+336.43, -71.14, +0.00, +0.00, +0.00, +122.27, +0.00, +0.00)
P_NvS1(2)    (+344.10, +151.54, +0.00, +0.00, +0.00, -5.78, +0.00, +0.00)
P_NvS1(3)    (+224.58, +274.84, +0.00, +0.00, +0.00, +31.24, +0.00, +0.00)
P_NvS1(4)    (+0.00, +0.00, +0.00, +0.00, +0.00, +0.00, +0.00, +0.00)

<Time out> (If omitted, 10)


Specifies the time-out time (in seconds).
Specification range: Integer 1-32767
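For reference, a call corresponding to the <Type> = 4 example above might look like the following minimal
sketch. Only the cell "J91" and the <Type> value 4 come from that example; the job name "TEST" and the
recognition count cell "E76" are illustrative assumptions.

5 NVPst #1,"TEST","E76","J91","J91",4,10 'The comma-delimited data in cell J91 is stored in P_NvS1()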



[Sample sentence]
1 If M_NVOpen(1)<>1 Then 'If logon to vision sensor number 1 is not complete
2 NVOpen "COM2:" As #1 'Connects with the vision sensor connected to COM2.
3 EndIf
4 Wait M_NVOpen(1)=1 'Waits for the logon to vision sensor number 1 to be completed.
5 NVPst #1,"TEST","E76","J81","L84",1,10
'Starts the "TEST" program, receives the recognition count from the E76 cell
and the recognition results from cells J81 through L84, and stores this in
M_NvS1().
6 'Processes referencing the acquired data.
7
:
10 NVClose #1 'Cuts the line with the vision sensor connected to COM2.
[Explanation]
1) Starts the specified vision program on the specified vision sensor and receives the results.
2) Within the timeout time, does not move to the next step until the results are received from the vision
sensor. However, if the robot program is stopped, this command is immediately cancelled. Processing is
continued with a restart.
3) If the specified <vision program name> is already loaded, processing is executed without loading the
program, so the processing time is shortened.
4) When this command is used with multi-tasking, it is necessary to execute the NVOpen command in the
task using this command. Also, use the <vision sensor number> specified with the NVOpen command.
5) When a <Type> from "4" to "7" is specified, the data reception speed from the vision sensor can be
expected to improve.
The positions to specify for <Start cell> and <End cell> differ depending on the specified <Type> value;
for the positions to specify, refer to the "Job Editing" screen [Result Cell Position] tab of MELFA-Vision.
6) A program start condition of "Always" and the continue function are not supported.
7) When multi-mechanism mode is used, specify <Start cell> and <End cell> so that the information for
the number of robots used is acquired, and specify a <Type> from "1" to "3".
Example: Handling vision sensor information for one robot in multi-mechanism mode

When, as in the figure above, the information for the first robot is stored in vision program sheet cells
<J96> through <M98> and the information for the second robot is stored in cells <O96> through <R98>,
<J96> and <M98> are specified as the <Start cell> and <End cell>.
When "1" is specified as the <Type> with the NVPst command, the data is stored in M_NvS1() as follows.
M_NvS1()   Column
Row        1         2         3         4        5     6     7     8     9
1          347.147   -20.232   -158.198  97.641   0.0   0.0   0.0   0.0   0.0
2          381.288   49.018    10.846    97.048   0.0   0.0   0.0   0.0   0.0
3          310.81    43.65     -34.312   0.0      0.0   0.0   0.0   0.0   0.0
4          0.0       0.0       0.0       0.0      0.0   0.0   0.0   0.0   0.0
5          0.0       0.0       0.0       0.0      0.0   0.0   0.0   0.0   0.0



Example: Handling vision sensor information for two robots in multi-mechanism mode
<J96> and <R98> are specified as the <Start cell> and <End cell>.
When "1" is specified as the <Type> with the NVPst command, the data is stored in M_NvS1(30,10) as follows.
M_NvS1()   Column
Row        1         2         3         4        5     6         7         8          9
1          347.147   -20.232   -158.198  97.641   0.0   110.141   120.141   72.645     97.641
2          381.288   49.018    10.846    97.048   0.0   89.582    99.582    -118.311   97.048
3          310.81    43.65     -34.312   0.0      0.0   139.151   149.151   -163.469   95.793
4          0.0       0.0       0.0       0.0      0.0   0.0       0.0       0.0        0.0
5          0.0       0.0       0.0       0.0      0.0   0.0       0.0       0.0        0.0
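A rough sketch of reading this data for both robots is shown below. The cell range "J96" to "R98" is taken
from the example above, and the column assignment (columns 1 to 3 = X, Y, C for robot 1 and columns 6
to 8 = X, Y, C for robot 2) follows that example; the job name, the recognition count cell "E76", and the
variable names are illustrative assumptions.

5 NVPst #1,"TEST","E76","J96","R98",1,10 'Stores the contents of cells J96 - R98 in M_NvS1()
6 MVCnt=M_NvNum(1) 'Number of pieces of work recognized
7 For MCnt=1 TO MVCnt
8 MX1=M_NvS1(MCnt,1) 'X for robot 1
9 MY1=M_NvS1(MCnt,2) 'Y for robot 1
10 MC1=M_NvS1(MCnt,3) 'C for robot 1
11 MX2=M_NvS1(MCnt,6) 'X for robot 2 (assumed column)
12 MY2=M_NvS1(MCnt,7) 'Y for robot 2 (assumed column)
13 MC2=M_NvS1(MCnt,8) 'C for robot 2 (assumed column)
14 '(use the values to correct the respective target positions)
15 Next MCnt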
8) Up to three robots can control the same vision sensor at the same time, but this command can not be
used by more than one robot at the same time. Use this command on any one of the robots.

Example of a tracking system with three robots and one vision sensor
(Figure: the controller set as the master sends 1) the image capture request to the vision sensor and 2) the
reception permission to the other two controllers, and each controller performs 3) data reception from the
vision sensor.)

<Procedure>
Of the three robots, one is set as the master, and the controller (master) outputs the "image capture
request" to the vision sensor with the NVPst command. The vision sensor starts the image capture and,
when it is complete, returns the results to the controller (master).
The controller (master) then outputs the "reception enabled notice" to the other two robots. (Taking cost
and degree of difficulty into account, we recommend connecting the robots with I/O. The robots are also
connected via Ethernet, so interactive notification by sending and receiving character strings is possible
as well.)
The respective robots then receive the information they each require with NVIn commands.
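As a rough sketch of this procedure, the master and one of the other robots might be programmed as
follows. The COM assignment, the I/O signal number 10, the job name "TEST", and the cell positions are
illustrative assumptions and must be adapted to the actual system; on the other robots the program name
is omitted so that the results of the currently active vision program are received.

'Controller (master)
1 If M_NVOpen(1)<>1 Then
2 NVOpen "COM2:" As #1
3 EndIf
4 Wait M_NVOpen(1)=1
5 NVPst #1,"TEST","E76","J81","L84",1,10 'Image capture request and reception of the master's own data
6 M_Out(10)=1 'Reception enabled notice to the other robots (example I/O signal)

'Other controller
1 If M_NVOpen(1)<>1 Then
2 NVOpen "COM2:" As #1
3 EndIf
4 Wait M_NVOpen(1)=1
5 Wait M_In(10)=1 'Waits for the reception enabled notice from the master
6 NVIn #1,,"E76","J81","L84",1,10 'Receives only the data this robot requires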


Example of an assembling system with two robots and one vision sensor
(Figure: the first controller performs 1) "Using" On, 2) image capture request, 3) data reception, and
4) "Using" Off; the second controller then performs the same steps as 5) to 8).)

<Procedure>
The controller that wants to use the vision sensor checks that the vision sensor is not being used by the
other controller and then outputs the "Using" On signal to the other controller.
It outputs the "image capture request" to the vision sensor.
When the vision sensor image processing is complete, the controller receives the necessary data.
The controller then switches Off the "Using" signal it had output to the other controller.
The other controller executes steps 1) to 4) in the same way.
In this way, the two robot controllers use the vision sensor alternately or as necessary.
If an interrupt condition is established while this command is being executed, the interrupt processing is
executed immediately.
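A minimal sketch of the "Using" signal interlock described in the assembly example above, for one of the
two controllers, is shown below. The output signal 11 ("using the vision sensor"), the input signal 12 (the
other controller's "Using" signal), the job name, and the cell positions are illustrative assumptions, and the
NVOpen for line #1 is assumed to have been executed beforehand.

10 Wait M_In(12)=0 'Waits until the other controller is not using the vision sensor
11 M_Out(11)=1 '"Using" signal On to the other controller
12 NVPst #1,"TEST","E76","J81","L84",1,10 'Image capture request and data reception
13 M_Out(11)=0 '"Using" signal Off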
[Errors]
1) If the data type for an argument is incorrect, a "syntax error in input command statement" error is
generated.
2) If there is an abnormal number of command arguments (too many or too few), an "incorrect argument
count" error occurs.
3) If the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error
occurs.
4) If the NVOpen command is not opened with the number specified as the <vision sensor number>, an
"abnormal vision sensor number specification" error occurs.
5) If the <vision program name> exceeds 15 characters, an "abnormal vision program name" error
occurs.
6) If a <vision program name> uses a character other than "0" - "9", "A" - "Z", "-", or "_" (including
lowercase letters), an "abnormal vision program name" error occurs.
7) If the program specified in the <vision program name> is not in the vision sensor, a "vision program
not present" error occurs.
8) If the <Recognition count cell>, <Start cell>, or <End cell> contains a number other than "0" - "399" or a
letter other than "A" - "Z", an "argument out of range" error occurs.
9) If there is no value in the cell specified in <Recognition count cell>, an "incorrect value in recognition
count cell" error occurs.
10) If the <Start cell> and <End cell> are reversed, a "specified cell value out of range" error occurs.
11) If the number of data items included in the cell range specified by <Start cell> and <End cell> exceeds
90, a "specified cell value out of range" error occurs.
12) If the range specified by <Start cell> and <End cell> exceeds 30 rows by 10 columns, a "specified cell
value out of range" error occurs.
13) If the <Type> is other than "0" - "7", an "argument out of range" error occurs.
14) If the <Timeout> is other than "1" - "32767", an "argument out of range" error occurs.
15) If the vision sensor does not respond within the time specified as the <Timeout>, or within the first 10
seconds if the <Timeout> parameter is omitted, a "vision sensor response timeout" error occurs.
16) If the vision program's image capture specification is set to anything other than "Camera" (all trigger
command), "External trigger", or "Manual trigger", an "abnormal image capture specification" error
occurs.
17) If the vision sensor is "offline", the "Put online" error occurs, so put the vision sensor "Online".
18) If the communications line is cut while this command is being executed, an "abnormal
communications" error occurs and the robot controller side line is closed.

(3) NVLoad (network vision sensor load)


[Function]
Loads the specified vision program into the vision sensor.
[Format]
NVLoad #<Vision sensor number>,"<Vision program (job) name>"
[Term]
<Vision sensor number> (Can not be omitted)
This specifies the number of the vision sensor to control.
Setting range: 1 - 8
<Vision program (job) name> (Can not be omitted)
Specifies the name of the vision program to load.
The vision program extension (.job) can be omitted.
The only characters that can be used are "0" - "9", "A" - "Z", "a" - "z", "-", and "_".
[Sample sentence]
1 If M_NVOpen(1)<>1 Then 'If logon to vision sensor number 1 is not complete
2 NVOpen "COM2:" As #1 'Connects with the vision sensor connected to COM2.
3 EndIf
4 Wait M_NVOpen(1)=1 'Waits for the logon to vision sensor number 1 to be completed.
5 NVLoad #1,"TEST" 'Loads the "TEST" program.
6 NVPst #1, ,"E76","J81","L84",0,10
'Receives the recognition count recognized with the "TEST" program from the
E76 cell and the recognition results from cells J81 through L84, and stores them
in P_NvS1().
7
:
10 NVClose #1 'Cuts the line with the vision sensor connected to COM2.
[Explanation]
1) Loads the specified vision program into the specified vision sensor.
2) This command moves to the next step at the point in time when the vision program is loaded into the
vision sensor.
3) If the program is cancelled while this command is being executed, it stops immediately.
4) If the specified <vision program name> is already loaded, the command ends with no processing.
5) When this command is used with multi-tasking, it is necessary to execute the NVOpen command in the
task using this command. Also, use the <vision sensor number> specified with the NVOpen command.
6) A program start condition of "Always" and the continue function are not supported.
7) If an interrupt condition is established while this command is being executed, the interrupt processing is
executed immediately.
[Errors]
1) If data type for an argument is incorrect, a "syntax error in input command statement" error is
generated.
2) If there is an abnormal number of command arguments (too many or too few), an "incorrect argument
count" error occurs.
3) If the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error
occurs.
4) If the NVOpen command is not opened with the number specified as the <vision sensor number>, an
"abnormal vision sensor number specification" error occurs.
5) If the <vision program name> exceeds 15 characters, an "abnormal vision program name" error
occurs.
6) If a <vision program name> uses a character other than "0" - "9", "A" - "Z", "-", or "_" (including
lowercase letters), an "abnormal vision program name" error occurs.
7) If the program specified in the <vision program name> is not in the vision sensor, a "vision program
does not exist" error occurs.
8) If the vision sensor is "offline", the "Put online" error occurs, so put the vision sensor "Online".
9) If the communications line is cut while this command is being executed, an "abnormal
communications" error occurs and the robot controller side line is closed.

(4) NVTrg (network vision sensor trigger)


[Function]
Requests the specified vision sensor to capture an image and acquires the encoder value(s) after the
specified time.
[Format]
NVTrg #<Vision sensor number>,<Delay time>,<Encoder 1 value read-out variable>
[,<Encoder 2 value read-out variable>][,<Encoder 3 value read-out variable>]
[,<Encoder 4 value read-out variable>][,<Encoder 5 value read-out variable>]
[,<Encoder 6 value read-out variable>][,<Encoder 7 value read-out variable>]
[,<Encoder 8 value read-out variable>]
[Term]
<Vision sensor number> (Can not be omitted)
This specifies the number of the vision sensor to control.
Setting range: 1 - 8
<Delay time> (Can not be omitted)
This specifies the delay time (in ms) from when the image capture request is output to the vision
sensor until the encoder value is obtained.
Setting range: 0 - 150 ms
<Encoder n value read-out variable> (Can be omitted from the second one on)
Specifies the double precision numeric variable into which the read out external encoder n value is
set.
Note: n is 1 - 8.
[Sample sentence]
1 If M_NVOpen(1)<>1 Then 'If logon to vision sensor number 1 is not complete
2 NVOpen "COM2:" As #1 'Connects with the vision sensor connected to COM2.
3 EndIf
4 Wait M_NVOpen(1)=1 'Waits for the logon to vision sensor number 1 to be completed.
5 NVRun #1,"TEST" 'Starts the "TEST" program.
6 NVTrg #1,15,M1#,M2# 'Requests the vision sensor to capture an image and acquires encoders 1
and 2 after 15 ms.
7 NVIn #1,"TEST","E76","J81","L84",0,10
'Receives the recognition count recognized with the "TEST" program from the
E76 cell and the recognition results from cells J81 through L84, and stores
this in P_NvS1().
8
:
100 NVClose #1 'Cuts the line with the vision sensor connected to COM2.
[Explanation]
1) Outputs the image capture request to the specified vision sensor and acquires the encoder value after
the specified time. The acquired encoder value is stored in the specified numeric variable.
2) This command moves to the next step at the point in time when the encoder value is acquired the
specified time after the image capture request to the vision sensor.
3) If the program is cancelled while this command is being executed, it stops immediately.
4) For receiving data from the vision sensor, use the NVIn command.
5) When this command is used with multi-tasking, it is necessary to execute the NVOpen command in the
task using this command. Also, use the <vision sensor number> specified with the NVOpen command.
6) A program start condition of "Always" and the continue function are not supported.
7) Up to three robots can control the same vision sensor at the same time, but this command can not be
used by more than one robot at the same time. Use this command on any one of the robots.
8) If an interrupt condition is established while this command is being executed, the interrupt processing is
executed immediately.



[Errors]
(1) If data type for an argument is incorrect, a "syntax error in input command statement" error is
generated.
(2) If there is an abnormal number of command arguments (too many or too few), an "incorrect argument
count" error occurs.
(3) If the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error
occurs.
(4) If the NVOpen command is not opened with the number specified as the <vision sensor number>, an
"abnormal vision sensor number specification" error occurs.
(5) If the vision program's image capture specification is set to anything other than "Camera" (all trigger
command), "External trigger", or "Manual trigger", an "abnormal image capture specification" error
occurs.
(6) If the vision sensor is "offline", the "Put online" error occurs, so put the vision sensor "Online".
(7) If the communications line is cut while this command is being executed, an "abnormal
communications" error occurs and the robot controller side line is closed.


(5) NVRun(network vision sensor run)


[Function]
Starts the specified vision program.
[Format]
NVRun#<Vision sensor number>,"<Vision program (job) name>"
[Term]
<Vision sensor number> (Can not be omitted)
This specifies the number of the vision sensor to control.
Setting range: 1 - 8
<Vision program (job) name> (Can not be omitted)
Specifies the name of the vision program to start.
The vision program extension (.job) can be omitted.
The only characters that can be used are "0" - "9", "A" - "Z", "a" - "z", "-", and "_".
[Sample sentence]
1 If M_NVOpen(1)<>1 Then 'If logon to vision sensor number 1 is not complete
2 NVOpen "COM2:" As #1 'Connects with the vision sensor connected to COM2.
3 EndIf
4 Wait M_NVOpen(1)=1 'Waits for the logon to vision sensor number 1 to be completed.
5 NVRun #1,"TEST" 'Starts the "TEST" program.
6 NVIn #1,"TEST","E76","J81","L84",0,10
'Receives the recognition count recognized with the "TEST" program from the
E76 cell and the recognition results from cells J81 through L84, and stores
this in P_NvS1().
7
:
100 NVClose #1 'Cuts the line with the vision sensor connected to COM2.
[Explanation]
1) Starts the specified vision program in the specified vision sensor.
2) This command moves to the next step after it has verified that the vision sensor has received the image
capture and image processing command.
3) If the program is cancelled while this command is being executed, it stops immediately.
4) If the specified <vision program name> is already loaded, only image capture and image processing are
executed. (The vision program is not loaded.)
5) For receiving data from the vision sensor, use the NVIN command.
6) When this command is used with multi-tasking, it is necessary to execute the NVOPEN command in the
task using this command. Also, use the <vision sensor number> specified with the NVOPEN command.
7) A program start condition of "Always" and the continue function are not supported.
8) When multi-mechanism mode is used and data for multiple robots is required, make a vision program
that creates data for multiple robots with one image capture request.
Example

9) Up to three robots can control the same vision sensor at the same time, but this command can not be
used by more than one robot at the same time. Use this command on any one of the robots.
10) If an interrupt condition is established while this command is being executed, the interrupt processing is
executed immediately.



[Errors]
1) If data type for an argument is incorrect, a "syntax error in input command statement" error is
generated.
2) If there is an abnormal number of command arguments (too many or too few), an "incorrect argument
count" error occurs.
3) If the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error
occurs.
4) If the NVOpen command is not opened with the number specified as the <vision sensor number>, an
"abnormal vision sensor number specification" error occurs.
5) If the <vision program name> exceeds 15 characters, an "abnormal vision program name" error
occurs.
6) If a <vision program name> uses a character other than "0" - "9", "A" - "Z", "-", or "_" (including
lowercase letters), an "abnormal vision program name" error occurs.
7) If the program specified in the <vision program name> is not in the vision sensor, a "vision program
not present" error occurs.
8) If the vision program's image capture specification is set to anything other than "Camera" (all trigger
command), "External trigger", or "Manual trigger", an "abnormal image capture specification" error
occurs.
9) If the vision sensor is "offline", the "Put online" error occurs, so put the vision sensor "Online".
10) If the communications line is cut while this command is being executed, an "abnormal
communications" error occurs and the robot controller side line is closed.


(6) NVIn (network vision sensor input)


[Function]
Receives the results of the recognition by the vision sensor.
The data received from the vision sensor is stored in the robot controller robot status variables.
[Format]
NVIn #<Vision sensor number>,["<Vision program (job) name>"],
"<Recognition count cell>","<Start cell>","<End cell>",<Type>[,<Timeout>]
[Term]
<Vision sensor number> (Can not be omitted)
This specifies the number of the vision sensor to control.
Setting range: 1 - 8
<Vision program (job) name> (Can be omitted)
Specifies the name of the vision program to obtain the recognition results of.
If this parameter is omitted, the results are obtained from the currently active vision program.
The vision program extension (.job) can be omitted.
The only characters that can be used are "0" - "9", "A" - "Z", "a" - "z", "-", and "_".
<Recognition count> (Can not be omitted)
Specifies the cell in which the count of work recognized by the vision sensor is stored.
Setting range: Row: 0-399 Column: "A" - "Z" Example: "A5"
* When a vision program is created with MELFA-Vision, input the value indicated by MELFA-Vision.
<Start cell>/<End cell> (Can not be omitted)
Specifies the cell range in which the results recognized by the vision sensor are stored.
The contents of the specified cells are stored in one of the status variables
P_NvS*(), M_NvS*(), or C_NvS*(). (* = 1 - 8)
Setting range: Row: 0 - 399 Column: "A" - "Z" Example: "A5", "C10", etc.
* When a vision program is created with MELFA-Vision, input the value specified by MELFA-Vision.
<Type> (Can not be omitted)
Specifies the type of status variable in which the results recognized by the vision sensor are stored.
Setting range: 0 - 7
Refer to the explanation of NVPst for the content of the processing of a specified value.
<Time out> (If omitted, 10)
Specifies the time-out time (in seconds).
Specification range: Integer 1-32767
[Sample sentence]
1 If M_NVOpen(1)<>1 Then 'If logon to vision sensor number 1 is not complete
2 NVOpen "COM2:" As #1 'Connects with the vision sensor connected to COM2.
3 EndIf
4 Wait M_NVOpen(1)=1 'Waits for the logon to vision sensor number 1 to be completed.
5 NVRun #1,"TEST" 'Starts the "TEST" program.
6 NVIn #1,"TEST","E76","J81","L84",0,10
'Receives the recognition count recognized with the "TEST" program from the
E76 cell and the recognition results from cells J81 through L84, and stores
this in P_NvS1().
7
:
100 NVClose #1 'Cuts the line with the vision sensor connected to COM2.



[Explanation]
1) Receives the recognition results from the specified vision program of the specified vision sensor.
2) Within the timeout time, does not move to the next step until the results are received from the vision
sensor.
However, if the robot program is stopped, this command is cancelled. Processing is executed from the
cancelled state with a restart.
3) When this command is used with multi-tasking, it is necessary to execute the NVOpen command and
NVRun command in the task using this command. At this time, use the <vision sensor number>
specified with the NVOpen command.
4) When a <Type> from "4" to "7" is specified, the data reception speed from the vision sensor can be
expected to improve.
The positions to specify for <Start cell> and <End cell> differ depending on the specified <Type> value;
for the positions to specify, refer to the "Job Editing" screen [Result Cell Position] tab of MELFA-Vision.
5) A program start condition of "Always" and the continue function are not supported.
6) When using multi-mechanism mode, see the explanation of the NVPst command.
7) Up to three robots can control the same vision sensor at the same time, but this command can not be
used by more than one robot at the same time. Use this command on any one of the robots.
8) If an interrupt condition is established while this command is being executed, the interrupt processing is
executed immediately. Processing is executed when the interrupt processing ends or is continued with a
restart.
9) When this command is executed, it is necessary to specify beforehand with the NVPst command or
NVRun command the vision program specified with the <Vision program name>.
10) In order to shorten the tact time, it is possible to do other work after executing the NVRun command and
execute NVIn when it is required.
11) Note that if the program stops between NVRun and NVIn, the results when NVRun is executed and the
results when NVIn is executed may be different.
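A minimal sketch of item 10) above, doing other work between NVRun and NVIn, is shown below. The
positions P1 and P2, the job name, and the cell positions are illustrative assumptions.

5 NVRun #1,"TEST" 'Starts image capture and image processing on the vision sensor
6 Mov P1 'Performs other work while the vision sensor is processing
7 Mov P2 '(for example, placing the previously picked work)
8 NVIn #1,"TEST","E76","J81","L84",1,10 'Receives the results only when they are actually needed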
[Errors]
1) If the data type for an argument is incorrect, a "syntax error in input command statement" error is
generated.
2) If there is an abnormal number of command arguments (too many or too few), an "incorrect argument
count" error occurs.
3) If the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error
occurs.
4) If the NVOpen command is not opened with the number specified as the <vision sensor number>, an
"abnormal vision sensor number specification" error occurs.
5) If the <vision program name> exceeds 15 characters, an "abnormal vision program name" error occurs.
6) If a <vision program name> uses a character other than "0" - "9", "A" - "Z", "-", or "_" (including
lowercase letters), an "abnormal vision program name" error occurs.
7) If the program specified in the <vision program name> is not in the vision sensor, a "vision program
does not exist" error occurs.
8) If the program specified in the <vision program name> is not started by an NVRun command, an
"abnormal vision program name" error occurs.
9) If the <Recognition count cell>, <Start cell>, or <End cell> contains a number other than "0" - "399" or a
letter other than "A" - "Z", an "argument out of range" error occurs.
10) If there is no value in the cell specified in <Recognition count cell>, an "invalid value in recognition
count cell" error occurs.
11) If the number of data items included in the cell range specified by <Start cell> and <End cell> exceeds
90, a "specified cell value out of range" error occurs.
12) If the range specified by <Start cell> and <End cell> exceeds 30 rows by 10 columns, a "specified cell
value out of range" error occurs.
13) If the <Type> is other than "0" - "7", an "argument out of range" error occurs.
14) If the <Start cell> and <End cell> are reversed, a "specified cell value out of range" error occurs.
15) If the <Type> is other than "0" - "3", an "argument out of range" error occurs.
16) If the <Timeout> is other than "1" - "32767", an "argument out of range" error occurs.
17) If the vision sensor does not respond within the time specified as the <Timeout>, or within the first 10
seconds if the <Timeout> parameter is omitted, a "vision sensor response timeout" error occurs.
18) If the communications line is cut while this command is being executed, an "abnormal
communications" error occurs and the robot controller side line is closed.


(7) NVClose(network vision sensor line close)


[Function]
Cuts the line with the specified vision sensor.
[Format]
NVClose [[#]<Vision sensor number>[,[#]<Vision sensor number>...]]
[Term]
<Vision sensor number> (Can be omitted)
Specifies a constant from 1 to 8 (the vision sensor number). Indicates the number for the vision
sensor connection to the COM specified with the <COM number>.
When this parameter is omitted, all the lines (vision sensor lines) opened with an NVOpen command
are closed.
Also, up to 8 <vision sensor numbers> can be specified. They are delimited with commas.
Setting range: 1 - 8
[Sample sentence]
1 If M_NVOpen(1)<>1 Then 'When logon has not been completed for vision sensor number 1
2 NVOpen "COM2:" As #1 'Connects with the vision sensor connected to COM2 and sets its number as number 1.
3 EndIf
4 Wait M_NVOpen(1)=1 'Connects with vision sensor number 1 and waits for logon to be completed.
5
:
100 NVClose #1 'Cuts the line with the vision sensor connected to COM2.

[Explanation]
1) Cuts the line with the vision sensor connected with the NVOpen command.
2) If the <vision sensor number> is omitted, cuts the line with all the vision sensors.
3) If a line is already cut, execution shifts to the next step.
4) Because up to seven vision sensors can be connected at the same time, <Vision sensor numbers> are
used in order to identify which vision sensor to close the line for.
5) If the program is cancelled while this command is being executed, execution continues until processing
of this command is complete.
6) When this command is used with multi-tasking, in the task using this command, it is necessary to close
only the lines opened by executing an NVOpen command . At this time, use the <Vision sensor
number> specified with the NVOpen command.
7) A program start condition of "Always" and the continue function are not supported.
8) If an End command is used, all the lines opened with an NVOpen command or Open command are
closed. However, lines are not closed by an End command in a program called out with a Callp
command.
Lines are also closed by a program reset, so when an End command or a program reset is executed, it
is not necessary to close lines with this command.
9) The continue function is not supported.
10) If an interrupt condition is established while this command is being executed, the interrupt processing is
executed after this command is completed.
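For example, assuming that lines #1 and #2 were opened beforehand with NVOpen, the following
variations are possible.

100 NVClose #1,#2 'Closes only the lines for vision sensor numbers 1 and 2
110 NVClose 'Closes all the lines opened with NVOpen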
[Errors]
1) If the value specified as the <vision sensor number> is anything other than "1" through "8", the
"argument out of range" error occurs.
2) If there are more than eight command arguments, an "incorrect argument count" error occurs.


9.1.3. Robot Status Variables

Here are the status variables for vision sensors.


Be careful. The data for these status variables is not backed up by the RT ToolBox2 backup function.
These status variables can be used by the following software versions.
CRnQ-700 series: N1 or later
CRnD-700 series: P1 or later
Table 9-3 Vision Sensor Status Variable List

Variable name        Array elements   Contents                              Attribute (*1)   Data type
M_NVOpen             8                Line connection status                R                Integer type
M_NvNum              8                Vision sensor work detection count    R                Integer type
P_NvS* (* = 1 - 8)   30               Vision sensor detection data          R                Position type
M_NvS* (* = 1 - 8)   30, 10           Vision sensor detection data          R                Single-precision real number type
C_NvS* (* = 1 - 8)   30, 10           Vision sensor detection data          R                Text type
(*1) R indicates that a status variable is read-only.
The details of the status variables are as follows.

(1) M_NVOpen
[Function]
Indicates the vision sensor line connection status.
[Array meaning]
Array elements (1 - 8) Vision sensor numbers
[Explanation of values returned]
0: Line connecting (logon not complete)

1: Logon complete

-1: Not connected

[Usage]
After an NVOpen command is executed, checks whether or not the line with the vision sensor is connected
and the vision sensor logged onto.
[Sample sentence]
1 If M_NVOpen(1)<>1 Then 'If vision sensor number 1 is not connected
2 NVOpen "COM2:" As #1 'Connects with the vision sensor connected to COM2 and sets its number as number 1.
3 EndIf
4 Wait M_NVOpen(1)=1 'Waits for vision sensor number 1 to reach the logon complete state.
5
:
100 NVClose #1 'Cuts the line with the vision sensor connected to COM2.

[Explanation]
1) Indicates the status of a line connected with a network vision sensor with an NVOpen command when
the line is opened.
2) The initial value is "-1". At the point in time that the NVOpen command is executed and the line is
connected, the value becomes "0" (line connecting). At the point in time that the network vision sensor
logon is completed, the value becomes "1" (logon complete).
3) This variable strongly resembles the status variable M_Open, but whereas M_Open becomes "1" when
the connection is verified, M_NVOpen becomes "1" when the vision sensor logon is complete.
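A minimal sketch of using the three return values to detect a logon that does not complete is shown below.
The retry interval, the retry count, and the user error number 9100 are illustrative assumptions.

1 NVOpen "COM2:" As #1
2 MCnt=0
3 *LCHK
4 If M_NVOpen(1)=1 Then GoTo *LOK 'Logon complete
5 Dly 0.2 'Still connecting (0) or not connected (-1): wait and check again
6 MCnt=MCnt+1
7 If MCnt<50 Then GoTo *LCHK 'Retries for roughly 10 seconds
8 Error 9100 'Reports a user error if logon never completes (error number is an example)
9 *LOK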
[Errors]
(1) If the type of data specified as an array element is incorrect, a "syntax error in input command
statement" error occurs.
(2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type"
error occurs.
(3) If an array element other than "1" through "8" is specified, an "array element mistake" error occurs.


(2) M_NvNum
[Function]
Indicates the number of pieces of work detected by the vision sensor.
[Array meaning]
Array elements (1 - 8): Vision sensor numbers
[Explanation of values returned]
Work detection count (0-255)
[Explanation]
1) Indicates the number of pieces of work detected by the vision sensor with the NVPst command or NVIn
command.
2) The stored recognition count is held until the next NVPst command or NVIn command is executed.
When an NVPst command or NVIn command is executed, the data is cleared to "0".
3) When the <Recognition count cell> specified with the NVPst command or NVIn command is a blank cell
in the vision program or a vision program command is specified, this becomes "0".
[Sample sentence]
1 If M_NVOpen(1)<>1 Then 'When logon has not been completed for vision sensor number 1
2 NVOpen "COM2:" As #1 'Connects with the vision sensor connected to COM2.
3 EndIf
4 Wait M_NVOpen(1)=1 'Waits for the logon to vision sensor number 1 to be completed.
5 NVPst #1,"TEST","E76","J81","L84",1,10
'Starts the "TEST" program, receives the recognition count from the E76
cell and the recognition results from cells J81 through L84, and stores
this in M_NvS1().
6 'Processes referencing the acquired data.
7 MVCnt=M_NvNum(1) 'Acquires the number of pieces of work recognized by the vision sensor.
8
:
100 NVClose #1 'Cuts the line with the vision sensor connected to COM2.
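The detection count is often used to branch when nothing was recognized; a minimal sketch (the label
name and the subsequent processing are illustrative) is shown below.

7 MVCnt=M_NvNum(1) 'Number of pieces of work detected
8 If MVCnt=0 Then GoTo *NOWORK 'Nothing recognized: skip the handling sequence
9 '(handling of the recognized work)
:
20 *NOWORK
21 '(for example, capture another image or report the result)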

[Errors]
1) If the type of data specified as an array element is incorrect, a "syntax error in input command
statement" error occurs.
2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type"
error occurs.
3) If an array element other than "1" through "8" is specified, an "array element mistake" error occurs.


(3) P_NvS1 - P_NvS8


[Function]
Stores the data recognized by the vision sensor in position data format.
In an NVPst command or NVIn command, when a <type> of "0" is specified, the data in the cell range
specified with <Start cell> - <End cell> is stored as the X, Y, and C coordinates.
In an NVPst command or NVIn command, data must be stored in the order X, Y, C in the cells specified with
<Start cell> - <End cell>.
Example:

In the above vision program, when "J96" and "L98" are specified in the <Start cell> and <End cell> of the
NVPST command or NVIN command, P_NVS1() becomes the following values.
P_NvS1(1)=(+347.14 , -20.23 , +0.00 , +0.00 , +0.00 , -158.19 , +0.00, +0.00)(0 , 0)
P_NvS1(2)=(+381.28 , +49.01 , +0.00 , +0.00 , +0.00 , +10.84 , +0.00, +0.00)(0 , 0)
P_NvS1(3)=(+310.81 , +43.65 , +0.00 , +0.00 , +0.00 , -34.312 , +0.00, +0.00)(0 , 0)
P_NvS1(4)=( +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00, +0.00)(0 , 0)
P_NvS1(5)=( +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00, +0.00)(0 , 0)

P_NvS1(30)=( +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00, +0.00)(0 , 0)


[Array element count]: 30
The maximum number of pieces of work that a vision sensor can recognize at one time is 255, but the
maximum number of sets of work information that a robot can acquire is 30.
[Sample sentence]
1 If M_NVOpen(1)<>1 Then 'If logon to vision sensor number 1 is not complete
2 NVOpen "COM2:" As #1 'Connects with the vision sensor connected to COM2.
3 EndIf
4 Wait M_NVOpen(1)=1
'Waits for the logon to vision sensor number 1 to be completed.
5 NVPst #1,"TEST","E76","J96","L98",0,10
'Starts the "TEST" program, receives the recognition count from the E76 cell
and the recognition results from cells J96 through L98, and stores this in
P_NvS1().
6 MVCnt=M_NvNum(1)
'Acquires the number of pieces of work recognized by the vision sensor.
7 For MCnt=1 TO MVCnt
'Repeated once for each piece of work recognized
8 P10=P1
'Copies the reference position P1 to target position P10.
9 P10=P10*P_NvS1(MCnt) 'Corrects the difference from the reference work for the recognized work
and substitutes it in P10.
10 Mov P10,-50
'Moves to above the first recognized piece of work.
11 Mvs P10
'Moves to the position of the first recognized piece of work.
12 HClose 1
'Grasps the work.
13 Mvs P10,-50
'Moves to above the first recognized piece of work
14 Next MCnt



[Explanation]
1) In an NVPst command or NVIn command, when a <type> of "0" is specified, the data recognized by the
vision sensor is stored in position data format.
2) When this variable is used, write the vision program to store the data in the order X, Y, and C.
Example:

3) The stored data is held until the next NVPst command or NVIn command is executed. However, this
data is cleared by a program reset, End command, or power supply reset. Even if the continue function
is enabled, the data is cleared (to 0 for all axes) for a power supply reset.
4) Also, if anything other than "0" is specified as the type with the NVPst command or NVIn command, all
axes are cleared to "0".
5) If the acquired data is a vision program function or character string, "0" is stored in the corresponding
axis.
6) The data for this variable is the valid position data for 8 axes.
7) When using multi-mechanism mode, see the explanation of the NVPst command.
[Errors]
1) If the type of data specified as an array element is incorrect, a "syntax error in input command
statement" error occurs.
2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type"
error occurs.
3) If an array element other than "1" through "30" is specified, an "array element mistake" error occurs.


(4) M_NvS1 - M_NvS8


[Function]
Stores the data recognized by the vision sensor in numeric data format.
In an NVPst command or NVIn command, when a <type> of "1", "3", "5", or "7" is specified, the data in
the cell range specified with <Start cell> - <End cell> is converted into numbers and stored.
Example:

In the above vision program, when "J96" and "Q98" are specified in the <Start cell> and <End cell> of
the NVPst command or NVIn command, the value of M_NvS1() becomes the following values.

M_NvS1()
(Columns: element 1 = 1 to 5, the number of the recognized work. Rows: element 2 = 1 to 9, the data item within the specified cell range.)
Element 2 = 1: 347.147, 381.288, 310.81, 0.0, 0.0
Element 2 = 2: -20.232, 49.018, 43.65, 0.0, 0.0
Element 2 = 3: -158.198, 10.846, -34.312, 0.0, 0.0
Element 2 = 4: 97.641, 97.048, 0.0, 0.0, 0.0
Element 2 = 5: 0.0, 0.0, 0.0, 0.0, 0.0
Element 2 = 6: 110.141, 89.582, 139.151, 0.0, 0.0
Element 2 = 7: 120.141, 99.582, 149.151, 0.0, 0.0
Element 2 = 8: 72.645, -118.311, -163.469, 0.0, 0.0
Element 2 = 9: 0.0, 0.0, 0.0, 0.0, 0.0

[Array element count]: (30, 10)

It is possible to acquire up to 30 rows and 10 columns of information from the cell information in the vision
program.
[Sample sentence]
1 If M_NVOpen(1)<>1 Then 'If vision sensor number 1 log on is not complete
2 NVOpen COM2: As #1 'Connects with the vision sensor connected to COM2 and sets its number as
number 1.
3 EndIf
4 Wait M_NVOpen(1)=1
'Connects with vision sensor number 1 and waits for logon to be completed.
5 NVPst #1,TEST,E76,J96,Q98,1,10
'Starts the "Test" program, receives the recognition count from the E76 cell
and the recognition results from cells J96 through Q98, and stores this in
M_NvS1().
6 MVCnt=M_NvNum(1)
'Acquires the number of pieces of work recognized by the vision sensor.
7 For MCnt=1 TO MVCnt
'Repeated once for each piece of work recognized
8 P10=P1
'Copies the reference position P1 to target position P10.
9 P10.X=M_NvS1(MCnt,1) 'Substitutes the X coordinate of the recognized work in the X coordinate of P10.
10 P10.Y=M_NvS1(MCnt,2) 'Substitutes the Y coordinate of the recognized work in the Y coordinate of P10.
11 P10.C=M_NvS1(MCnt,3) 'Substitutes the C coordinate of the recognized work in the C coordinate of P10.
12 Mov P10,-50
'Moves to above the first recognized piece of work
13 Mvs P10
'Moves to the position of the first recognized piece of work
14 HClose 1
'Grasps the work.
15 Mvs P10,-50
'Moves to above the first recognized piece of work
16 Next MCnt



[Explanation]
1) In an NVPst command or NVIn command, when a <type> of "1", "3", "5", or "7" is specified, the data
recognized by the vision sensor is stored in numeric data format.
2) The stored data is held until the next NVPst command or NVIn command is executed. However, this
data is cleared (to 0) by a program reset, End command, or power supply reset.
3) Also, if anything other than "1", "3", "5", or "7" is specified as the type with the NVPst command or
NVIn command, the data is cleared to "0".
4) If the acquired data is a vision program function or character string, "0" is stored in the corresponding
axis.
5) When using multi-mechanism mode, see the explanation of the NVPst command.
[Errors]
1) If the type of data specified as an array element is incorrect, a "syntax error in input command
statement" error occurs.
2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type"
error occurs.
3) If a value other than "1" through "30" is specified for array element 1, an "array element mistake"
error occurs.
4) If a value other than "1" through "10" is specified for array element 2, an "array element mistake" error occurs.


(5) C_NvS1 - C_NvS8


[Function]
Stores the data recognized by the vision sensor in text string data format.
In an NVPst command or NVIn command, when a <type> of "2", "3", "6", or "7" is specified, the data in
the cell range specified with <Start cell> - <End cell> is stored.
Example:

In the above vision program, when "J95" and "Q98" are specified in the <Start cell> and <End cell> of the
NVPst command or NVIn command, the value of C_NvS1() becomes the following values.

C_NvS1()
(Element 1 is the row within the specified cell range and element 2 is the column. Because the specified
range starts at row 95, which is the title row of the vision program, C_NvS1(1,1), C_NvS1(1,2), ... hold the
title strings of the cells ("X", "Y", "C", "Score", and so on), and C_NvS1(2,n) through C_NvS1(4,n) hold the
values of the three recognized pieces of work as text strings, for example C_NvS1(2,1)="347.147".)

[Array element count]: (30, 10)

It is possible to acquire up to 30 rows and 10 columns of information from the cell information in the vision
program.
[Sample sentence]
1 DIM MScore(100) 'Declares the variable for storing scores.
2 If M_NVOpen(1)<>1 Then
' When logon has not been completed for vision sensor number 1
3 NVOpen COM2: As #1
' Connects with the vision sensor connected to COM2 and sets its
number as number 1.
4 EndIf
5 Wait M_NVOpen(1)=1
' Connects with vision sensor number 1 and waits for logon to be
completed.
6 NVPst #1,TEST,E76,J95,Q98,3,10
'Starts the "Test" program, receives the recognition count from the E76
cell and the recognition results from cells J95 through Q98, and stores
this in C_NvS1() and M_NvS1().
7 MVCnt=M_NvNum(1)
'Acquires the number of pieces of work recognized by the vision sensor.
8 For MCnt2=1 TO 8
9 If C_NvS1(1,MCnt2)="Score" Then Break
10 Next MCnt2
11 For MCnt1=1 TO MVCnt
'Repeated once for each piece of work recognized
12 MScore(MCnt1)=VAL(C_NvS1(MCnt1+1,MCnt2)) ' Stores the score for the recognized work into MScore
13 Next MCnt1
14 MOK=0
' Clears MOK
15 For MCnt=1 TO MVCnt
'Repeated once for each piece of work recognized
16 If MScore(MCnt)>90 Then MOK=MOK+1 'If the score is 90 points or higher, adds 1 to MOK.
17 Next MCnt
18 ' Checks the value of MOK and checks the number of pieces of work for which the score is 90 points or
higher.



[Explanation]
1) In an NVPst command or NVIn command, when a <type> of "2", "3", "6", or "7" is specified, the data
recognized by the vision sensor is stored in text string format.
However, kanji codes can not be acquired.
2) The maximum number of pieces of work that a vision sensor can recognize at one time is 255, but the
maximum number of sets of work information that a robot can acquire is 30.
3) The stored data is held until the next NVPst command or NVIn command is executed. However, this
data is cleared by a program reset, End command, or power supply reset.
4) Also, if anything other than "2", "3", "6", or "7" is specified as the type with the NVPst command or
NVIn command, the data is cleared to null.
5) If the acquired data is a vision program function or kanji code, a null character is stored in the
corresponding axis.
6) When using multi-mechanism mode, see the explanation of the NVPst command.
[Errors]
1) If the type of data specified as an array element is incorrect, a "syntax error in input command
statement" error occurs.
2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type"
error occurs.
3) If an array element 1 other than "1" through "30" is specified, an "array element mistake" error occurs.
4) If an array element 2 other than "1" through "10" is specified, an "array element mistake" error occurs.


9.2. MELFA-Vision Function Details

This section explains the MELFA-Vision functions other than those explained in Chapter 5 through Chapter 8.

9.2.1. MELFA-Vision Main Screen

For explanations concerning the MELFA-Vision main screen, see "6.3.1 Starting MELFA-Vision (network
vision sensor support software)".
Figure 9-1 MELFA-Vision Main Screen


9.2.2. Job Editing screen ([Image Log] tab)

On the Job Editing screen [Image Log] tab, the conditions for storing the images captured with the vision
sensor to the PC are set. It is necessary to start the FTP server on the PC that stores the images.
For an explanation of the image log acquisition settings and of starting and ending reception, see "8.4 Image
Log Acquisition Settings and Reception Start/End".
This section explains the conditions set with the [Image Log] tab.

Figure 9-2 Job Edit Screen [Image Log] Tab


Table 9-4 Job Edit Screen [Image Log] Tab Setting Item List

Save the Image Log: To acquire the image log, put a check in the [Save the Image Log] checkbox. To not acquire the image log, remove the check from the checkbox.
Save Condition: Sets the condition for storing images captured with the vision sensor to the PC. Always: all images captured with the vision sensor are stored. OK images: the image is stored if the count of recognized pieces of work is not 0. NG images: the image is stored if the count of recognized pieces of work is 0.
File Name: Images captured with the vision sensor are stored as bitmap (bmp) files on the PC. This specifies the file name. * Up to 50 file names can be specified.
Max Number: Specifies the number of images stored on the PC. Serial numbers up to the specified number of images are attached after the file name. Example: when the file name is "NGImage": NGImage001.bmp, NGImage002.bmp, ..., NGImage010.bmp. If the specified number of images is exceeded, the serial numbers are reset and the already stored bmp files are overwritten.
Reset: Resets the serial numbers attached to the file name. The file name of the image captured after resetting becomes "file name 001.bmp".
User Name of FTP (*1): Specifies the FTP server user name set with the displayed "Image Log Setting" screen.
Password of FTP (*1): Specifies the FTP server password set with the displayed "Image Log Setting" screen.
IP Address of FTP: Specifies the IP address of the PC on which the FTP server is running.
Get IP Address From PC: When the MELFA-Vision FTP server is used and this button is clicked, the IP address of the PC on which MELFA-Vision is running is displayed in [IP Address of FTP].
(*1) For details on the "Image Log Setting" screen, see "8.4 Image Log Acquisition Settings and Reception Start/End".


9.2.3. Job edit screen ([Result Cell Position] tab)

The Job Editing screen [Result Cell Position] tab displays "Found Number Cell", "Start", and "End"
specified with the dedicated MELFA-BASIC V command for the network vision sensor. A cell is a position
indicated by the column character and row character in the vision program.
This section explains the cell positions displayed on the screen.

Figure 9-3 Job Editing Screen ([Result Cell Position] Tab)


Table 9-5 Job Editing Screen [Result Cell Position] Tab Display Item List

Found Number Cell: The number of pieces of work recognized by the vision sensor is stored in the displayed cell position.
Robot 1: The coordinates (robot 1 coordinates) for the work recognized by the vision sensor are stored from the displayed start cell position to the displayed end cell position.
Robot 2 (*1): The coordinates (robot 2 coordinates) for the work recognized by the vision sensor are stored from the displayed start cell position to the displayed end cell position.
Robot 3 (*2): The coordinates (robot 3 coordinates) for the work recognized by the vision sensor are stored from the displayed start cell position to the displayed end cell position.
[Type]: 0 to 3: When a value in the range 0 to 3 is specified for the type of the MELFA-BASIC V "NVPst" or "NVIn" command, the positions of the cells specified for A and B are shown.
[Type]: 4 to 7: When a value in the range 4 to 7 is specified for the type of the MELFA-BASIC V "NVPst" or "NVIn" command, the positions of the cells specified for A and B are shown.
(*1) Displayed when a job for two robots is selected.
(*2) Displayed when a job for three robots is selected.
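For reference, the cell positions displayed on this tab are the ones passed to the NVPst and NVIn commands. A minimal illustration, assuming the tab shows a Found Number Cell of E76 and a Robot 1 range of J96 to L98 (hypothetical values matching the sample programs in 9.1):

NVPst #1,TEST,E76,J96,L98,0,10 ' E76 = Found Number Cell, J96 = Robot 1 start cell, L98 = Robot 1 end cell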


9.2.4. Vision sensor network settings

The vision sensor network settings can be changed. From the MELFA-Vision menu, click [Sensor] -
[Connection] - [Communication Setting] to display the "Network Settings" screen. Check with your network
administrator for the items to set.

Figure 9-4 MELFA-Vision "Network Settings" Screen


Table 9-6 "Network Settings" Screen Setting Items List

Host Name: Changes the vision sensor name.
Use DHCP Server: Check this when using the DHCP server to allocate the IP address.
IP Address: Input the IP address.
Subnet Mask: Defines which part of the IP address shows the network and which part shows the host.
Default Gateway: Data can be relayed between hosts on different networks by specifying the gateway address.
DNS Server: Input the IP address of the network host that supplies DNS resolution (converting from host name to IP address).
Domain Name: Defines the domain name of the network the vision sensor is on.
DHCP Timeout: Specifies the DHCP server response wait time.
Transition to Time Out: Shifts to another connection without closing the connection.
Auto Delete: Closes the connection.


9.3. Vision program detailed explanation


MELFA-Vision provides a number of programs (job files) as templates.
This section explains the templates provided.

9.3.1. Templates provided for MELFA-Vision

The table below shows the templates provided for MELFA-Vision.


Table 9-7 List of Job Templates Provided

No. 1 to 30. Each template is defined by the following items:
Image processing (*1): pattern matching (*2), blob (binarization processing) (*3), or color (*7)
Output coordinates (*4): absolute coordinates or relative coordinates
Number of robots (*5): 1 to 3
Displayable result count (*6): 1, 4, 10, 20, or 30 sets of work information

(*1) Pattern matching, blobs, edges, histograms, ID recognition, text comparison, etc. are provided for vision
sensor image processing, but the image processing supported by MELFA-Vision is pattern matching
and blobs.
(*2) Pattern matching is a method of detecting patterns in images based on registered patterns.
(*3) Blobbing is a method for detecting two-dimensional shape information such as the size, shape, position,
linking, etc. of patterns expressed as blobs.
The blob template is effective for the following types of subjects.
Large subjects
Subjects whose shapes change irregularly
For details on blob image processing, see "9.3.2 Image processing - blobs".



(*4) There are two types of output coordinates.
Table 9-8 List of Coordinates Output to Robot

Absolute coordinate output: The detected pattern position is output converted to the robot coordinate system.
Relative coordinate output: The detected pattern position is output as robot coordinate system offset quantities relative to the registered pattern position.
(A short robot program sketch illustrating how the two output methods are used follows these notes.)

(*5) Templates are also provided that secure the data for two robots or for three robots with one image.
When multiple robots are connected to one vision sensor, it is possible to acquire the operation
positions for each robot by capturing one image.
(*6) It is necessary for the vision program to prepare beforehand an area in which to store the work position
data that the robot acquires. Expanding this area increases the amount of information that the robot can
obtain, but also increases the vision program load time and the time for sending the information from the
vision sensor to the robot. Therefore, MELFA-Vision provides templates for areas for storing
1/4/10/20/30 sets of work information. Select the one that best matches your system.
These quantities - 1/4/10/20/30 indicate the maximum number of sets of work data that the robot can
acquire. For example, for acquiring 8 sets of work data, select the 10 template.
(*7) Color is a method of detecting the pattern in the image based on the specified color pattern.
For details on color image processing, refer to "9.3.3 Image processing - Color".
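The choice between absolute and relative coordinate output is made by the job (template) that is selected, not by the <type> argument of the command; the robot program then uses the received values differently. A minimal sketch, assuming a job that stores X, Y, and C for each piece of work in cells J96 to L98 and a taught reference position P1 (the cell positions and <type> values simply follow the sample programs in 9.1):

'(a) Job with relative coordinate output: the received values are offsets from the registered pattern position.
NVPst #1,TEST,E76,J96,L98,0,10 'Type 0: results stored in P_NvS1().
P10=P1*P_NvS1(1) 'Applies the offset of the first recognized work to the reference position P1.

'(b) Job with absolute coordinate output: the received values are already robot coordinates.
NVPst #1,TEST,E76,J96,L98,1,10 'Type 1: results stored in M_NvS1().
P10=P1 'Z and posture are taken from the taught position P1.
P10.X=M_NvS1(1,1) 'X coordinate of the first recognized work
P10.Y=M_NvS1(1,2) 'Y coordinate of the first recognized work
P10.C=M_NvS1(1,3) 'C coordinate of the first recognized work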


9.3.2. Image processing - blobs

This section explains how to make the blob image processing settings, using a job for one robot with the
results output as robot absolute coordinate values as an example.
(1) In the [Job(Vision Program)List] on the left side of the main MELFA-Vision screen, click [New].
From the "Image Processing Method" screen displayed, select blob image processing, then click
the [OK] button.

Figure 9-5 Blob Image Processing Selection
Figure 9-5 Blob Image Processing Selection


(2) Execute the work in order of the tabs from left to right on the displayed "Job Editing" screen.
For details on the [Adjust Image] tab, see "6.3.3 Image processing settings".
(3) When you click the "Job Editing" screen [Search Area & Condition(1)] tab, the conditions for
executing blob image processing are set.

When you change a displayed setting item, then click the [Test] button, the results of image
processing under the specified conditions are displayed at the main screen [Camera Image], so
check whether or not the work is correctly recognized.
For details on the setting items, see "Table 9-9 List of [Search Area and Recognition Condition
(1)] Tab Items".
Table 9-9 List of [Search Area and Recognition Condition (1)] Tab Items

Color (Blob Setting) (setting range: Black / White / Either): Selects the color of the work to be recognized (black, white, or either).
Background (setting range: Black / White): Specifies the color (black or white) that is the background for captured images.
Search Area: Click the [Image] button to set the range for detecting the work (blob). For details on the setting method, see "6.3.3 Image processing settings".
Number to Find (setting range: 1 - 255): Sets the maximum number of pieces of work that can be detected in one image processing.
Area Limit Min/Max (setting range: 0 - 900000): Sets the surface area range (minimum to maximum) for detected work (blobs). The surface area range is specified in pixels.
Fill Holes (setting range: ON/OFF): When there are holes in the work, to recognize the work in the state with the holes filled, put a check in the [Fill Holes] checkbox. To recognize with the holes unfilled, remove the check from the checkbox.
* For all the items, if a value outside the range is input, it is replaced with the nearest upper or lower limit value.


(4) When you click the "Job Editing" screen [Processing Condition(2)] tab, the conditions for
executing blob image processing are set.

When you change a displayed setting item, then click the [Test] button, the results of image
processing under the specified conditions are displayed at the main screen [Camera Image], so
check whether or not the work is correctly recognized.
For details on the setting items, see "Table 9-10 List of [Processing Condition(2)] Tab Items".
Table 9-10 List of [Processing Condition(2)] Tab Items

Manual Threshold (setting range: 1 - 100): Sets the degree of matching required for recognition of work detected with the threshold specified by the grayscale threshold. The vision sensor expresses the degree of matching of the detected work as 1 - 100%. Work whose degree of matching is lower than the value set here is not recognized.
Greyscale Threshold (setting range: 1 - 255): Sets the grayscale threshold. When the "Auto Setting" checkbox is checked, the value is set automatically from the captured images.
Sort By (setting range: None / X / Y): Returns the recognized work results in the specified sort order. When "None" is specified, the results are returned sorted in order of high recognition ratio. This sorting is used, for example, when multiple pieces of work are detected and you want to grasp the work in order from left to right in the image. The "X" and "Y" specified here indicate the "X" and "Y" of the red frame displayed with the search area setting.
Offset of Rotation (setting range: -180 - 180): When outputting the recognized work results, adds the specified offset amount to the detection angle. This is used when the 0-degree-tilt image can not be captured when registering patterns.
Calibration No. (setting range: None / 1 - 10): Selects the calibration data used when outputting the recognized work coordinate values converted to robot coordinate values. Work information can be converted to the coordinate systems of up to three robots and sent, so calibration numbers can be selected for up to three robots. * The figure above assumes a system with one robot. When a job for three robots is selected, the [Robot 2:] and [Robot 3:] displays appear.

* For all the items, if a value outside the range is input, it is replaced with the nearest upper or lower limit value.
(5) For details on the [Image Log] tab, see "9.2.2 Job Editing screen ([Image Log] tab)"; for details
on the [Result Cell Position] tab, see "9.2.3 Job edit screen ([Result Cell Position] tab)".


9.3.3. Image processing - Color

This section explains how to make the color image processing settings, using a job for one robot with the
results output as robot absolute coordinate values as an example.
(1) In the [Job(Vision Program)List] on the left side of the main MELFA-Vision screen, click [New]. From
the "Image Processing Method" screen displayed, select Color image processing, then click the [OK]
button.

Figure 9-6 Color Image Processing Selection



(2) Execute the work in order of the tabs from left to right on the displayed "Job Editing" screen.
(3) Click the [White Balance] button on the [Adjust Image] tab to specify the reference color.
The [White Balance] button is displayed only for color image processing.
If the [White Balance] button has not been clicked, the colors in the image can not be recognized
accurately, as shown in Figure 9-7.

Figure 9-7 When the [White Balance] button is not clicked

When the [White Balance] button is clicked, the reference color is recognized and the RGB colors can be
recognized more accurately.
The reference color is the color of the 100*100 dot area at the center of the camera image at the time the
[White Balance] button is clicked.
If the reference color is made white, colors are recognized in a way close to human perception.

Figure 9-8 When the [White Balance] button is clicked



(4) The conditions for executing color image processing are set on the [Color] tab of the "Job Editing" screen.

When you change a displayed setting item and then click the [Test] button, the result of image processing
under the specified conditions is displayed at the main screen [Camera Image].
Confirm on the screen whether the light and shade of the work are clear for the specified color.
For details on the setting items, see "Table 9-11 List of [Color] Tab Items".
Table 9-11 List of [Color] Tab Items

Color Area Setting: Click the [Image] button to shift to the camera image; a square frame is displayed. Enclose the color you want to recognize with the frame and press the [Enter] key.
Representation (setting range: ON/OFF): Selects whether the specified color is acquired as RGB (Red/Green/Blue) information or as HSI (Hue/Saturation/Intensity) information.
Select Filter (setting range: ON/OFF): Selects whether the image displayed on the screen is displayed in color or in grayscale with the filter for the specified color applied.
Histogram: Displays the information on the color specified with [Color Area Setting]. When the [Representation] checkbox is OFF, RGB information is displayed; when it is ON, HSI information is displayed.
Threshold (setting range: -1 = Auto, 0 - 255 = Manual): The initial value is "-1". When the value is "-1", the work color is converted to white by automatically applying the filter for the color specified with [Color Area Setting]. When the light and shade of the work are not clear with the automatic filter, the color recognition can be adjusted by inputting values of 0 - 255 for the threshold.



(4-1) Specify the Color Area Setting.
When the [Image] button of [Color Area Setting] is clicked, the focus moves to the main screen and the
area adjustment mark is displayed in [Camera Image].
The color to be recognized is detected from the area enclosed by this frame, so enclose the color you want
to recognize with the frame.
The area can be changed with the mouse or keyboard. If you use the keyboard, each time the [F9] key is
pressed the "area adjustment mark" changes, and fine adjustments can be made with the arrow keys.
To finalize the area, press the [Enter] key; to cancel it, press the [ESC] key. The focus returns to the
"Job Editing" screen.
When the [Enter] key is pressed, the information for the specified color is displayed in the Histogram.

(Camera image with the area adjustment mark)



(4-2) Change the [Camera Image] to a grayscale image filtered by the specified color.
Turn the [Select Filter] checkbox ON and click the [Test] button.
The specified color is displayed in white with the filter for the specified color applied.



(4-3) Adjust the color.
Change the value of "Threshold" and click the [Test] button.
For instance, to recognize green more strongly, increase the value of "Green" and decrease the other
items.
Refer to the "Histogram" values when choosing the value.



(4-4) To recognize work by specifying not only hue but also saturation and intensity, change "Representation".
Turn the "Representation" checkbox ON, set all the "Threshold" items to "-1", and click the [Test] button.



(4-5) Adjust the color.
Change the value of "Threshold" and click the [Test] button.
For instance, to recognize vividly colored work, increase the value of "Saturation" and decrease the other
items.
Refer to the "Histogram" values when choosing the value.

(5) Specify the pattern and the search area.
Display the [Pattern & Search Area] tab and register the work to be recognized.
Refer to "6.3.3 Image processing settings" for the registration method.
(6) Specify the recognition conditions.
Display the [Condition] tab and specify the conditions for recognizing the work.
Refer to "6.3.3 Image processing settings" for the setting method.


9.3.4. Using image processing for which there is no template

The only templates provided for MELFA-Vision are for pattern matching and blobs. When using a robot with
other image processing, write the vision program using "In-Sight Explorer", which is installed on the PC in
"5.3.1 Vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later) installation".
For details on how to write a vision program using "In-Sight Explorer", see the "In-Sight Explorer" help.


9.3.5. To shorten the time for transferring data with the robot controller

The image processing templates prepared for MELFA-Vision use the mechanism of transferring the
information on recognized work to the robot controller one set at a time (three communications, X, Y, and C
per piece of work).
When it is desired to shorten the tact time, it is recommended that the vision program and robot program be
altered to shorten the data transfer time.
Below is the method for transferring a maximum of four sets of data (127 bytes maximum) in each data
transfer.
<Vision program change example>
Before change

Data exists in each cell in the vision program and the robot controller can use them without processing the
acquired values.
However, in the example above, since a total of 12 data transfers are required for cells [J81] through [L84],
the transfer time becomes longer.
After change

The cell values are converted into text strings as they are, with error values replaced, and the text string
cells are then linked to form a single cell. The above processing is added to the vision program before the change.
Cells [O81] through [Q84] use the vision program "count error" function; if there is an error, they display the
character "E". Also, cell [S81] stores the data of the four cells [O81] through [Q84] in one cell, using the vision
program "concatenate" function. Coordinates are delimited with "," and recognized pieces of work are delimited with "/".
For details on the functions used in vision programs, see the "In-Sight Explorer" help.
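As an illustration (the coordinate values below are the example values used in 9.1, not values from the screenshots above), with three pieces of work recognized, cell [S81] would contain one text string of roughly the following form, each piece of work terminated by the "/" delimiter that the robot program below expects:

347.147,-20.232,-158.198/381.288,49.018,10.846/310.81,43.65,-34.312/

A coordinate for which the "count error" function reported an error would appear as the single character "E" in place of its value.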

CAUTION
The maximum number of characters the robot can receive in one
communication is 127.
Due to restrictions on communications with the robot, if the information for one piece of
work is X, Y, and C, one data transfer can handle up to four sets of data.
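As a rough estimate of why four sets fit within this limit (the digit counts are assumptions for illustration, not values fixed by the vision sensor): if each coordinate is sent as a signed value with up to three integer digits and three decimal places, for example "-123.456" (8 characters), then one piece of work needs 3 x 8 characters plus two "," delimiters and one "/" delimiter, about 27 characters. Four pieces of work then come to about 108 characters, which fits within the 127-character limit, while five pieces (about 135 characters) would exceed it.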



<Robot program change example>
Change the program example in "7.3.2 Writing a Sample Robot Program" as follows. The parts of the
program in the boxes are the locations changed.
1 ' The work grasping position P1 and the work placement position P2 must have been taught beforehand.
2 ' Example: P0=(+250.000,+350.000,+500.000,-180.000,+0.000,+0.000)(7,0)
3 '          P1=(+500.000,+0.000,+300.000,-180.000,+0.000,+10.000)(7,0)
4 '          P2=(+300.000,+400.000,+300.000,-180.000,+0.000,+90.000)(7,0)
5 Dim MV(30,3)
6 If M_NVOpen(1)<>1 Then 'If vision sensor number 1 logon is not complete
7 NVOpen COM2: AS #1 'Connects with the vision sensor connected to COM2.
8 EndIf
9 Wait M_NVOpen(1)=1
' Connects with vision sensor number 1 and waits for logon to be completed.
10 NVPst #1,"Job2","E76","S81","S81",2,10 'Starts the [Job2] vision program and
11
'receives the recognized count from cell [E76] and the
recognized coordinates from cell [S81] and stores them in
C_NVS1().
12 Mov P0
' Moves to the evacuation point.
13 If M_NVNum(1)=0 Then *NG
' If the detection count is 0, jump to error
14 MROW=1
' Clears the acquired cell row.
15 MNUM=1
' Clears the recognized work number.
16 MKOSU=0
' Clears the work information (X, Y, C) index.
17 MST=1
' Clears the head position for extracting characters from the text string.
18 MED=1
' Clears the tail position from which characters were extracted from the text string the previous time.
19 MFLG1=0
' Clears the flag showing that all the information has been acquired.
20 While MFLG1=0
' Loops until all the information is acquired.
21 CGET$=C_NvS1(MROW,1)
' Acquires cell information.
22 MLEN=Len(CGET$)
' Obtains the character count for the cell information
acquired.
23 If MLEN<>0 Then
' When acquired cell information exists
24 For MPT=1 TO MLEN
' Loops for the amount of cell information acquired
25 C1$=Mid$(CGET$,MPT,1)
' Acquires the cell information one character at a time
26 If C1$="," OR C1$="/" Then
' If the acquired character is "," or "/"
27 C2$=Mid$(CGET$,MPT-1,1)
' Acquires the character one position before the "," or "/".
28 MKOSU=MKOSU+1
' Updates the work information index.
29 If C2$<>"E" Then MV(MNUM,MKOSU)=Val(Mid$(CGET$,MST,MPT-MED)) Else MV(MNUM,MKOSU)=0
30 MST=MPT+1
' Specifies the head position for extracting text.
31 MED=MST
' Specifies the tail position from which text was extracted.
32 EndIf
33 If C1$="/" Then
' If the acquired character is "/"
34 MNUM=MNUM+1
' Updates the recognized work number.
35 MKOSU=0
' Clears the work information index.
36 EndIf
37 Next MPT
38 MROW=MROW+1
' Moves to the next cell information.
39 Else
40 MFLG1=1
' All information acquisition is complete.
41 EndIf
42 WEnd
43 MFLG1=0
' Clears all information acquisition flags.
44 For M1=1 TO M_NVNum(1)
' Loops once for each detection by vision sensor 1.
45 P10=P1
' Creates the target position P10 using the vision sensor 1 results
data.
46 P10.X=MV(M1,1)
47 P10.Y=MV(M1,2)
48 MC=MV(M1,3)
49 P10.C=Rad(MC)
50 Mov P10,10
' Moves to 10 mm above the work grasping position P10.


51 Mvs P10
' Moves to the work grasping position P10.
52 Dly 0.1
' Wait time of 0.1 second
53 HClose 1
' Closes hand 1.
54 Dly 0.2
' Wait time of 0.2 second
55 Mvs P10,10
' Moves to 10 mm above the work grasping position P10.
56 Mov P2,10
' Moves to 10 mm above the work placement position P2.
57 Mov P2
' Moves to the work placement position P2.
58 Dly 0.1
' Wait time of 0.1 second
59 HOpen 1
' Opens hand 1.
60 Dly 0.2
' Wait time of 0.2 second
61 Mvs P2,10
' Moves to 10 mm above the work placement position P2.
62 Next
' Repeats.
63 Hlt
' Program pause (Create the appropriate processing.)
64 End
' Exit
65 '
66 *NG
' Error processing
67 Error 9000
' Error No. 9000 output
68 Hlt
' Program pause (Create the appropriate processing.)
69 End
' Exit


9.4. Detailed explanation of systems combining multiple vision sensors and robots

The systems explained in Chapter 5 through Chapter 8 were systems with one vision sensor and one robot
controller.
It is also possible to construct systems with one robot controller and up to seven vision sensors, and systems
with one vision sensor and up to three robot controllers.
This chapter explains the construction of these systems.
This chapter explains the construction of these systems.

9.4.1. Systems with one robot controller and multiple vision sensors

This section only explains those aspects of the setting method for constructing a system with one robot
controller and multiple vision sensors that are different from the contents covered in Chapter 5 through
Chapter 8.
(1) Change the robot controller communication settings.
From the MELFA-Vision menu, select [Controller] - [Communication Setting] to display the
"Communication Setting" screen.

Set the "Line and Device" and "Device List" for the number of vision sensors connected.
Below is an example for connecting three vision sensors.

Click the [Write] button to change the robot controller parameters.



(2) Write the vision program for each vision sensor.
Log onto the connected vision sensors and write the vision program for each vision sensor. For details
on the logon method and vision program writing method, see "6.3 Work recognition test".
(3) Write the robot program to control multiple vision sensors.
Write a robot program like the following.
1 If M_NVOpen(1)<>1 Then NVOpen COM2: As #1 'Connects to vision sensor 1 (COM2).
2 If M_NVOpen(2)<>1 Then NVOpen COM3: As #2 'Connects to vision sensor 2 (COM3).
3 If M_NVOpen(3)<>1 Then NVOpen COM4: As #3 'Connects to vision sensor 3 (COM4).
4 Wait M_NVOpen (1)=1
' Connects with vision sensor number 1 and waits for logon to be completed.
5 Wait M_NVOpen (2)=1
' Connects with vision sensor number 2 and waits for logon to be completed.
6 Wait M_NVOpen (3)=1
' Connects with vision sensor number 3 and waits for logon to be completed.
7 NVPst #1,Job1,E76,J81,L85,0,10 'Starts the vision program and acquires the results.
8 NVPst #2,Job1,E76,J81,L85,0,10 'Starts the vision program and acquires the results.
9 NVPst #3,Job1,E76,J81,L85,0,10 'Starts the vision program and acquires the results.

Operates the robot with the results received.

20 NVClose

9.4.2. Systems with one vision sensor and multiple robot controllers

This section only explains those aspects of the setting method for constructing a system with one vision
sensor and multiple robot controllers that are different from the contents covered in Chapter 5 through
Chapter 8.
This section shows a system with two robots as an example.
(1) Write the vision program for two robots.
See "6.3.3 Image processing settings" and on the "Image Processing Method Selection" screen, select
the template for two robots and write the vision program.
When the template for two robots is selected, on the "Job Editing" screen [Processing Condition] tab,
the calibration setting items for two robots are displayed.

(2) Execute the calibration work for two robots.


See "7.2 Calibration Setting" and execute the calibration work for two robots. When doing the work for
the second robot, change the "Robot datum" robot selection.



(3) Set the calibration number.
Specify the calibration number for [Robot 1] and [Robot 2] in [Calibration No.] on the "Job Editing"
screen [Processing Condition] tab.

(4) Write the robot program


<Robot 1>
1 If M_NVOpen(1)<>1 Then NVOpen COM2: AS #1
'Connects to vision sensor 1 (COM2).
2 Wait M_NVOpen(1)=1
' Connects with vision sensor number 1 and waits for logon to be completed.
3 NVPst #1,Job1,E76,J81,L85,0,10
'Starts the vision program and acquires the results.
4 M_Out(10)=1
'Notifies Robot 2 that reception is possible.
5 Wait M_In(10)=1
'Checks that Robot 2 has received the notice.
6 M_Out(10)=0
' Switches Off the reception possible signal to
Robot 2.

Operates the robot with the results received.

20 NVClose

<Robot 2>
1 If M_NVOpen(1)<>1 Then NVOpen COM2: AS #1
'Connects to vision sensor 1 (COM2).
2 Wait M_NVOpen(1)=1
' Connects with vision sensor number 1 and waits for logon to be completed.
3 Wait M_In(10)=1
'Waits for contact from Robot 1.
4 M_Out(10)=1
'Outputs to Robot 1 that it received the notice.
5 Wait M_In(10)=0
'Checks that Robot 1 has verified reception.
6 M_Out(10)=0
' Switches Off the notice received signal to Robot 1.
7 NVIn #1,Job1,E76,O81,Q85,0,10
'Acquires the results.

Operates the robot with the results received.

20 NVClose

* Note that the cell positions for Robot 1 and Robot 2 to acquire data from the vision sensor are different.


10. Troubleshooting
This chapter lists the errors that can occur in using network vision sensors and explains the causes of and
solutions to these errors.

10.1. Error list


Below are the messages for error numbers, their causes, and the solutions.
The meanings of the error levels in the table are as follows.
Table 10-1 Error Category List

Level H (High-level error): The servo is switched off and program execution is stopped.
Level L (Low-level error): Program execution is stopped.
Level C (Warning): Program execution is continued.
Table 10-2 List of Errors for Vision Sensor Use

Error No. 3110: Argument out of range
Cause: One of the argument values specified in a command is out of its range.
Solution: Check the argument range and re-input.

Error No. 3120: Incorrect argument count
Cause: The number of arguments in the executed command is incorrect.
Solution: Check the argument count and re-input.

Error No. 3130: Attempt was made to open an already open communication file.
Cause: The communications line that was the subject of the attempted opening is already open.
Solution: Check the COM number and vision sensor number and re-execute, or check the communications parameter setting.

Error No. 3141: The NVOpen command is not executed.
Cause: No NVOpen command was executed before execution of a command communicating with the vision sensor.
Solution: Revise the robot program to execute the NVOpen command.

Error No. 3142: The communication line can not be opened.
Cause: The line for communication with the vision sensor can not be opened.
Solution: Check the communication cable or the communications parameters.

Error No. 3287: This command can not be used if the start condition is ERR or ALW.
Cause: This command can not be used if the start condition is ERR or ALW.
Solution: Revise the program.

Error No. 3810: Incorrect argument type
Cause: The arithmetic calculation, single-item calculation, comparison, or function argument type is incorrect.
Solution: Specify the correct argument.

Error No. 4220: Syntax error in input command statement
Cause: There is a mistake in the structure of the input command.
Solution: Check the program contents, then re-input with the correct syntax.

Error No. 4370: Array element mistake
Cause: 1. An array element is outside the defined range. 2. A variable was specified that can not be arrayed.
Solution: 1. Revise the array elements so that they are within the defined range (1 to the maximum element). 2. Stop the array element specification.

Error No. 7810: Abnormal Ethernet parameter setting
Cause: The parameter setting is incorrect.
Solution: Check the NETHSTIP, NETPORT, NETMODE, and other such parameters.

Table 10-3 List of Errors Only for Vision Sensors

Error No. 8600: Vision sensor not connected
Cause: There is no vision sensor connected to the specified COM number.
Solution: Check the specified vision program number, the "COMDEV" parameter, and other such settings.

Error No. 8601: Logon not possible
Cause: The communication line was opened, but there is no response from the vision sensor.
Solution: Reset the program and start it again.

Error No. 8602: Wrong password
Cause: The password for the user set with the "NVUSER" parameter is not set in the "NVPSWD" parameter.
Solution: Set the correct password.

Error No. 8603: Parameter abnormality
Cause: The user name or password parameter is abnormal.
Solution: Check the NVUSER and NVPSWD parameters.

Error No. 8610: Abnormal communications
Cause: Communication with the vision sensor was cut off before or during command execution.
Solution: Check the communication cable between the robot and the vision sensor.

Error No. 8620: Abnormal vision sensor number specification
Cause: The specified vision sensor number is not defined with an NVOpen command.
Solution: Check that the specified vision sensor number is correct. Also check that that number is defined with an NVOpen command.

Error No. 8621: Abnormal vision program name
Cause: The specified vision program name is more than 15 characters.
Solution: Specify a vision program name with no more than 15 characters.

Error No. 8622: Vision program not present
Cause: The specified program does not exist in the specified vision sensor.
Solution: Check whether the specified vision program exists in the specified vision sensor. Also check that the vision program name specified is correct.

Error No. 8630: Incorrect value in recognition count cell
Cause: The recognition count value was not in the cell specified as the recognition count cell.
Solution: Check that the correct cell is specified.

Error No. 8631: Specified cell value out of range
Cause: Corresponds to one of the following: the values specified for the start cell and end cell are reversed; the range specified by Start Cell and End Cell exceeds line 30 and row 10; the number of data included in the cells specified by Start Cell and End Cell exceeds 90.
Solution: Check that the correct cells are specified. Check the number of data acquired from the cells specified by Start Cell and End Cell.

Error No. 8632: Vision sensor response timeout
Cause: There is no response from the vision sensor within the specified time or within a specific time.
Solution: Check that the specified time is correct, or check that the vision sensor settings are correct.

Error No. 8633: NVTRG response timeout
Cause: No response to the image capture request.
Solution: Check the communications cable.

Error No. 8634: There is a comma within the specified range of the cell.
Cause: There is a comma in the cell range specified by Start Cell and End Cell even though a value of 1 to 3 is specified for Type.
Solution: Check the value set for Type or for Start Cell and End Cell.

Error No. 8635: There is no comma within the specified range of the cell.
Cause: There is no comma in the cell range specified by Start Cell and End Cell even though a value of 4 to 7 is specified for Type.
Solution: Check the value set for Type or for Start Cell and End Cell.

Error No. 8640: Abnormal image capture specification
Cause: The image capture specification is other than "Network", "external", and "manual".
Solution: Specify an image capture specification of "Network", "external", or "manual".

Error No. 8650: Put online.
Cause: The vision sensor is offline.
Solution: Put the vision sensor online to enable control from the outside.

Error No. 8660: Not permitted to control vision sensor
Cause: The NVUSER and NVPSWD parameters set for logging on to the vision sensor do not have the right of full access to the vision sensor.
Solution: Check the vision sensor side user list registration and specify the name of a user with full access in NVUSER and that user's password in NVPSWD.

Error No. 8670: Restart not possible after stop
Cause: After the program was stopped, it was started without being reset.
Solution: Reset the robot program, then start it.


11. Appendix
11.1. Performance of this product (comparison with built-in type RZ511 vision sensor)
Below is a comparison of the performance of this product with that of our built-in type vision sensors.

11.1.1. Comparison of work recognition rate


(1) Comparative results by work shape and conditions
Table 11-1 Comparative Results by Work Shape and Conditions

Work conditions compared (network vision sensor vs. built-in vision sensor): overlap, approach or contact, tilt, front/rear judgment.
O: Recognition with pretty much no problems possible
△: Recognition possible under some conditions
×: Recognition almost impossible
With network vision sensors, it is possible to recognize overlapping work and work nearly in contact
or in contact, work that is difficult to recognize with the built-in vision sensors.
Network vision sensors also improve the recognition rate for tilted work and front/rear work.
(2) Comparison of functions for recognition pattern registration
Table 11-2 Comparison of Functions for Recognition Pattern Registration

Area size change: Network vision sensor: Yes. Built-in vision sensor: Yes.
Specification of coordinates for output to robot: Network vision sensor: Yes. Built-in vision sensor: Yes.
Area angle change: Network vision sensor: Yes (no need to change the work angle). Built-in vision sensor: No (necessary to change the work angle).
Area shape change: Network vision sensor: Yes (square/fan shape/round). Built-in vision sensor: No (square only).

Network vision sensors improve the work pattern registration functions.


It is possible to change the area angle without any need to change the work angle and the area
shape can be changed to square, fan shaped or circle.

11.1.2. Comparison of image processing capacity


Table 11-3 Image Processing Capacity

(Values for 4 / 10 / 30 pieces of work recognized)
Network vision sensor:
Image processing time: 276 ms / 278 ms / 367 ms
Data transfer time: 121 ms / 142 ms / 249 ms
Total time: 397 ms / 420 ms / 616 ms
Built-in vision sensor:
Image processing time: 794 ms / 946 ms / 946 ms
Data transfer time: 80 ms / 99 ms / 127 ms
Total time: 874 ms / 1045 ms / 1073 ms

"Table 11-3 Image Processing Capacity" shows the measurement results when the work is recognized
with the same conditions.
The image processing time can be reduced by using network vision sensors.
For network vision sensors, the increase in the number of pieces of work recognized increases the data
transfer time.


11.1.3. Factors affecting the processing time


(1) Delay in communication time by hub and communication time when no hub used
There is almost no difference in communication time due to a hub.
There is no problem with any hubs, but when an old hub is used, there is a possibility of some
variation in communication time.
There is no particular difference in the communication time when connecting directly with a
cross-cable without using a hub.
(2) About the effect of other equipment connected to the network
When equipment other than the network vision sensor, robot controller, or monitor PC is connected
to the network, the communication time may become longer.
Even when a program operates that communicates using a network with a monitor PC connected to
the network, the communication time may become longer.


11.2. Calibration No. marking sheet


This is a marking sheet used in calibration work. Enlarge or reduce it as necessary to match the size of the
field of vision of the image.


Oct.,2012 MEE Printed in Japan on recycled paper.

Specifications are subject to change without notice.
