User Guide
Table of contents
1 User Guide. Introduction ................................................................................................. 28
1.1 General Information ..................................................................................................................... 28
1.2 Purpose of the Document............................................................................................................. 28
1.3 Purpose and functionality of Axxon Next .................................................................................... 28
2 Software Lifecycle Policy................................................................................................. 30
2.1 Software Lifecycle Phases ............................................................................................................ 30
2.2 Software Technical Support......................................................................................................... 30
2.3 Standard Period for Release of Software Updates ..................................................................... 30
2.4 Licensing Policy with Regard to Software Updates ................................................................... 30
3 Description of the Software Package.............................................................................. 31
3.1 Basic principles of building a security system based on the Axxon Next software package .... 31
3.2 Axxon Next features: reference information................................................................................ 31
3.2.1 Micromodule architecture............................................................................................................................ 31
3.2.2 Support for IP cameras................................................................................................................................. 32
3.2.3 Support for analog cameras in Axxon Next ................................................................................................. 32
3.2.4 Video and Audio Detection Tools ................................................................................................................. 32
3.2.5 Video archive ................................................................................................................................................. 33
3.2.6 Interactive 3D Map ........................................................................................................................................ 33
3.2.7 User Interface ................................................................................................................................................ 33
3.2.8 Face Recognition........................................................................................................................................... 34
3.2.9 NumberPlate Recognition ............................................................................................................................ 34
3.2.10 Receiving Events from External Systems..................................................................................................... 34
3.3 Functions of the Distributed Security System ............................................................................. 34
3.4 Network Topologies of the Axxon Next Software Package......................................................... 35
3.5 Specifications of the Axxon Next Software Package................................................................... 35
3.6 Implementation Requirements for the Axxon Next Software Package ..................................... 36
3.6.1 Limitations of the Axxon Next Software Package ....................................................................................... 36
3.6.2 Operating system requirements .................................................................................................................. 38
3.6.3 Disk storage subsystem requirements......................................................................................................... 39
3.6.3.1 General requirements................................................................................................................................... 40
3.6.3.2 Storage requirements................................................................................................................................... 40
3.6.3.2.1 Minimum requirements ................................................................................................................................ 40
7.4.9.2.1 Video requirements for object tracker-based scene analytics ................................................................. 216
7.4.9.2.2 Video requirements for object tracker (with neural filter)-based scene analytics.................................. 217
7.4.9.2.3 Video requirements for neural tracker-based scene analytics................................................................. 217
7.4.9.2.4 Video requirements for VMD-based scene analytics................................................................................. 218
7.4.9.3 Camera requirements for neural filter operation...................................................................................... 219
7.4.9.4 Camera requirements for neural tracker operation ................................................................................. 219
7.4.9.5 Camera requirements for abandoned objects detection ......................................................................... 219
7.4.9.6 Configuring Scene Analytics Detection Tools............................................................................................ 220
7.4.9.6.1 General information on Scene Analytics ................................................................................................... 220
7.4.9.6.2 Setting up Tracker-based Scene Analytics ................................................................................................ 220
7.4.9.6.2.1 Setting General Parameters .......................................................................................................................................... 220
7.4.9.6.2.2 Settings Specific to Cross Line Detection ..................................................................................................................... 229
7.4.9.6.2.3 Settings specific to Abandoned Object detection........................................................................................................ 231
7.4.9.6.2.4 Settings Specific to Loitering Detection ....................................................................................................................... 232
7.4.9.6.2.5 Settings Specific to Multiple objects............................................................................................................................. 232
7.4.9.6.2.6 Settings Specific to Stop in area detection .................................................................................................................. 233
7.4.9.6.2.7 Settings Specific to Move from area to area detection................................................................................................ 233
7.4.9.6.3 Setting up Neural Tracker-based Scene Analytics .................................................................................... 234
7.4.9.6.4 Setting up VMD-based Scene Analytics...................................................................................................... 236
7.4.10 Face detection tool ..................................................................................................................................... 237
7.4.10.1 Functions of facial detection...................................................................................................................... 237
7.4.10.2 Camera requirements for facial detection ................................................................................................ 237
7.4.10.3 Configure Facial Recognition ..................................................................................................................... 237
7.4.10.3.1 Setting up advanced facial detection tools............................................................................................... 239
7.4.10.3.1.1 Configuring masks detection ........................................................................................................................................ 239
7.4.10.3.1.2 Configuring appearance in area detection................................................................................................................... 240
7.4.10.3.1.3 Configuring loitering in area detection......................................................................................................................... 241
7.4.10.3.2 Fine-tuning the facial recognition tool ...................................................................................................... 242
7.4.10.4 Configuring real-time facial recognition.................................................................................................... 242
7.4.10.4.1 Setting the automatic response to an identification of a recognized face against the list .................... 243
7.4.10.4.1.1 Examples of macros used for working with face lists .................................................................................................. 244
7.4.10.4.2 Configuring FaceCube integration ............................................................................................................. 247
7.4.11 Face Detection and Temperature Control with Mobotix M16 TR cameras .............................................. 248
7.4.11.1 Functional parameters of the Face Detection and Temperature Control ............................................... 248
7.4.11.2 Camera requirements for face detection and temperature control ........................................................ 248
7.4.11.3 Temperature screening protocol ............................................................................................................... 248
7.7.5.4.3 Selecting the default video mode for a camera ........................................................................................ 376
7.7.5.4.4 Moving input and output icons in a viewing tile ....................................................................................... 376
7.7.5.4.5 Configuring default zoom levels (the Fit screen function)........................................................................ 377
7.7.5.4.6 Configuring pan/tilt angle for video cameras with Immervision lenses in 180° Panorama display format .......................................................................... 378
7.7.5.4.7 Configuring display of water level detection............................................................................................. 379
7.7.5.4.8 Adding links to other cameras to the Camera Window ............................................................................ 380
7.7.5.5 Configuring information boards ................................................................................................................ 382
7.7.5.5.1 Configuring information board templates ................................................................................................ 382
7.7.5.5.2 Configuring Events Boards ......................................................................................................................... 382
7.7.5.5.3 Configuring a Health Board ........................................................................................................................ 384
7.7.5.5.4 Configuring a Statistics Board.................................................................................................................... 385
7.7.5.5.5 Configure the Web Board............................................................................................................................ 386
7.7.5.5.6 Configure the Dialog Board ........................................................................................................................ 387
7.7.5.6 Configuring Alarms Panel on a layout ....................................................................................................... 391
7.7.5.7 Exiting layout editing mode ....................................................................................................................... 391
7.7.6 Share Layouts.............................................................................................................................................. 391
7.7.7 Configuring special layouts ........................................................................................................................ 392
7.7.7.1 Creating special layouts ............................................................................................................................. 392
7.7.7.2 Configuring Alerted Cameras layouts ........................................................................................................ 393
7.7.7.3 Configuring Selected Cameras layout ....................................................................................................... 395
7.7.8 Configuring user-defined slide shows........................................................................................................ 397
7.7.9 Setting the default layout........................................................................................................................... 397
7.7.10 Setting a layout ID....................................................................................................................................... 398
7.8 Configuring Video Wall................................................................................................................ 398
7.9 Configuring the Interactive Map................................................................................................. 400
7.9.1 Creating a new map .................................................................................................................................... 400
7.9.2 Adding system objects to the map............................................................................................................. 403
7.9.2.1 Adding video cameras ................................................................................................................................ 403
7.9.2.2 Adding inputs and outputs ......................................................................................................................... 404
7.9.2.3 Adding switches to another map ............................................................................................................... 405
7.9.3 Configuring cameras on the map............................................................................................................... 406
7.9.3.1 Configuring a camera in standard map viewing mode............................................................................. 406
7.9.3.2 Configuring cameras in immersion mode ................................................................................................. 407
7.9.3.3 Configuring cameras with a built-in GPS tracker ...................................................................................... 408
8.11.9.1 Controlling a PTZ camera through the web client by using presets ........................................................ 689
8.11.9.2 Changing the optical zoom of a PTZ camera in the web client ................................................................ 689
8.11.9.3 Changing the positioning speed of a PTZ camera in the web client ........................................................ 689
8.11.9.4 Changing the tilt of a PTZ camera in the web client ................................................................................. 689
8.11.10 Viewing video archives through the web client......................................................................................... 690
8.11.11 Archive position selection panel for the web client .................................................................................. 691
8.11.12 Archive search through the web client ...................................................................................................... 692
8.11.12.1 Types of Archive search available via the Web Client ............................................................................... 693
8.11.12.2 Building a Heat Map.................................................................................................................................... 693
8.11.12.3 Simultaneous search in multiple camera Video Footage via the web Client .......................................... 695
8.11.12.4 Reporting the search results ...................................................................................................................... 695
8.11.13 Alarm Monitoring via the Web Client ......................................................................................................... 696
8.11.14 Listening to a camera's microphone via the web client ........................................................................... 697
8.11.15 Digital video zoom in the web client.......................................................................................................... 698
8.11.16 Export in Web Client.................................................................................................................................... 698
8.11.17 Viewing Camera and Archive Statistics...................................................................................................... 699
8.11.18 Working with bookmarks in the Web Client .............................................................................................. 700
8.11.18.1 Editing a bookmark..................................................................................................................................... 701
8.11.18.2 Deleting a bookmark .................................................................................................................................. 702
8.11.18.3 Un-protecting a video ................................................................................................................................. 702
8.11.18.4 Deleting a protected video ......................................................................................................................... 702
8.12 Working with Axxon Next Through the Mobile Clients .............................................................. 702
9 Description of utilities.................................................................................................... 703
9.1 Axxon Next Tray Tool .................................................................................................................. 703
9.2 Activation Utility ......................................................................................................................... 703
9.3 Axxon Support Tool .................................................................................................................... 705
9.3.1 Purpose of the Support.exe Utility............................................................................................................. 705
9.3.2 Launching and Closing the Utility .............................................................................................................. 705
9.3.3 Description of the Support.exe utility interface ........................................................................................ 706
9.3.4 The Processes Service................................................................................................................................. 707
9.3.5 Collecting Data on the Configuration of Servers and Clients Using the Support .................................... 708
9.4 Log Management Utility ............................................................................................................. 711
9.4.1 Starting and closing the utility ................................................................................................................... 711
9.4.2 Configuring a Log Archive........................................................................................................................... 711
9.4.3 Configuring Logging Levels ........................................................................................................................ 712
9.4.4 Set the size and maximum number of logs ............................................................................................... 713
9.4.5 Configuring Client RAM usage logging ....................................................................................................... 714
9.5 Digital Signature Verification Utility .......................................................................................... 715
9.6 Backup and Restore Utility......................................................................................................... 717
9.6.1 Purpose of BackupTool.exe........................................................................................................................ 717
9.6.2 Starting and quitting BackupTool.exe....................................................................................................... 717
9.6.3 Roll back the local configuration to a selected restore point................................................................... 718
9.6.4 Roll back the global configuration to a selected restore point ................................................................ 720
9.6.5 Backing up a configuration ........................................................................................................................ 721
9.6.6 Restoring a configuration ........................................................................................................................... 724
9.6.7 Changing the Server Name......................................................................................................................... 726
9.7 POS-terminal log collection utility............................................................................................. 728
9.8 Console utility for working with archives .................................................................................. 729
9.9 Network settings utility .............................................................................................................. 731
10 Appendices ..................................................................................................................... 733
10.1 Appendix 1. Glossary................................................................................................................... 733
10.2 Appendix 2. Known issues in the Axxon Next Software Package ............................................. 735
10.2.1 Possible Errors During Installation ............................................................................................................ 735
10.2.1.1 Error starting NGP Host Service ................................................................................................................. 735
10.2.1.2 Errors Connecting to the Postgres Database ............................................................................................ 735
10.2.1.3 Error installing Drivers Pack ....................................................................................................................... 736
10.2.1.4 Error installing DetectorPack ..................................................................................................................... 736
10.2.1.5 Windows 10 installation error ................................................................ 736
10.2.1.6 An error occurred while installing on Windows with the language pack Norsk (bokmål) ...................... 737
10.2.1.7 Error uninstalling Axxon Next on systems with Videoinspector installed ............................................... 737
10.2.2 Possible Errors During Start-Up ................................................................................................................. 737
10.2.2.1 The Client cannot be connected to the Server .......................................................................................... 737
10.2.3 Possible Errors During Operation............................................................................................................... 737
10.2.3.1 All video cameras or archives stop working once the license maximum is reached .............................. 738
10.2.3.2 No signal from video cameras and failure to connect to other servers ................................................... 738
10.2.3.3 Incorrect display of Client interface elements .......................................................................................... 738
10.2.3.4 Server error on Windows Server 2012........................................................................................................ 738
10.2.3.5 Emergency shutdown of the Client on Windows 8.1................................................................................. 738
10.2.3.6 Error creating new archives even when license restriction on total size is observed ............................. 739
10.2.3.7 The Axxon Next VMS operation along with Windows Defender software ................................................ 739
On page:
• General Information
• Purpose of the Document
• Purpose and functionality of Axxon Next
The modern and constantly expanding feature set of Axxon Next makes it possible to implement new video surveillance functionality that increases the convenience and precision of protection at end-user sites.
Major and minor releases are available on the company's official website. Releases with bug and security fixes can be requested
from technical support.
* Software bugs can only be fixed in the latest official release.
3.1 Basic principles of building a security system based on the Axxon Next software package
Building a security system based on the Axxon Next software package includes the following recommended stages:
1. Selecting a configuration for the security system (with the help of professionals)
2. Building a separate local area network with restricted access
3. Calculating sufficient bandwidth for each segment of the local area network
4. Selecting and configuring the software and hardware platform on which the chosen security system configuration will run: selecting and configuring the computers that will act as Servers and Clients in accordance with the requirements (see the sections titled Implementation Requirements for the Axxon Next Software Package and Operating system requirements)
5. Selecting and connecting reliable equipment that is optimally suited to the specific security system (with the help of professionals)
6. Training personnel to work with the Axxon Next software package in accordance with the requirements (see the section titled Requirements for Personnel Quantity and Qualifications).
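Stage 3 above (bandwidth calculation) comes down to a simple estimate: multiply the number of cameras in a segment by the per-camera bitrate and add headroom for protocol overhead. The sketch below illustrates this arithmetic; the bitrate and overhead figures are illustrative assumptions, not Axxon Next specifications.

```python
# Rough bandwidth estimate for one LAN segment carrying camera streams.
# Bitrates and the overhead factor are illustrative assumptions,
# not Axxon Next specifications.

def segment_bandwidth_mbps(cameras: int, bitrate_mbps: float,
                           overhead: float = 0.2) -> float:
    """Return the bandwidth a segment needs, including a margin
    for protocol overhead and headroom."""
    return cameras * bitrate_mbps * (1 + overhead)

if __name__ == "__main__":
    # Example: 16 cameras streaming H.264 at roughly 4 Mbit/s each
    print(f"{segment_bandwidth_mbps(16, 4.0):.1f} Mbit/s")  # 76.8 Mbit/s
```

Repeating this per segment and comparing against the link capacity shows where the network needs to be upgraded before deployment.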
On page:
• Micromodule architecture
• Support for IP cameras
• Support for analog cameras in Axxon Next
• Video and Audio Detection Tools
• Video archive
• Interactive 3D Map
• User Interface
• Face Recognition
• NumberPlate Recognition
• Receiving Events from External Systems
The advanced features available in Axxon Next are continuously updated and extended.
Axxon Next offers virtually unlimited opportunities for system scaling, task-based customization, and reallocation of resources
(based on changes in the number or quality of video and audio monitoring tasks) at end-user sites.
Video surveillance systems based on Axxon Next scale without architectural limits: the software imposes no restrictions on the number of video Servers, workstations, or video cameras.
Support for over 1500 models of IP cameras is included, as well as remote access from mobile devices and a web interface. The
Axxon Next software package supports touchscreens.
Note
To work with OpenStreetMap maps in Axxon Next, you need to purchase an OpenStreetMap license.
Axxon Next allows you to create custom camera layouts. Layouts can be configured in any way the user wants and the aspect
ratios of viewing tiles can be fine-tuned. Editable layouts efficiently fit different cameras with different aspect ratios on the same
screen, as well as support display of dewarped fisheye camera footage.
Autozoom
Autozoom helps to monitor moving objects by automatically adjusting the level of digital zoom. Autozoom shows close-in video
for parts of the frame that contain a moving object or objects and follows them as they move, just as a movie camera does when
taking a close-up shot.
A distributed security system based on the Axxon Next software package offers the user the following functional capabilities:
1. Viewing and manual processing of video and audio data from several servers on one client
2. Controlling video cameras connected to various servers from one client
3. Configuring all servers of the distributed system on one client
4. Execution of automatic responses when detection tools are triggered (audio notification, triggering of relays, SMS and e-mail notification, etc.) within the distributed system.
Note
If a Server is not accessible by its NetBIOS name, or some TCP and UDP ports are closed, you can build a distributed security system over a virtual private network (VPN), for example, with OpenVPN. Detailed information on OpenVPN and examples of virtual private network configuration are given in its official documentation.
If the Server uses a defined port range (see Installation) and you want to set up a surveillance system spanning several networks, you do not have to use a VPN: use port forwarding instead.
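Before choosing between VPN and port forwarding, it helps to verify which Server ports are actually reachable from the Client's network. The sketch below is a generic TCP probe; the host name and port numbers are placeholders, not the actual Axxon Next port range (see the Installation section for that).

```python
# Generic TCP reachability probe. The host and port numbers below are
# placeholders, not the actual Axxon Next port range.

import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in (20100, 20110):  # placeholder port numbers
        state = "open" if port_open("server.example.local", port) else "closed"
        print(f"{port}: {state}")
```

If a required port reports "closed", configure forwarding for it on the router between the networks, or fall back to the VPN approach described above.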
Axxon Domain configuration is described in detail in the section titled Configuring Axxon domains.
Attention!
If your system uses a star topology, take the following into account:
1. In both failover (see General information about a failover system) and non-failover systems, the Client must have access to all Servers (located centrally).
2. In a failover system, all Manager Servers must also have access to each other (located centrally).
Characteristic: Value
Number of video capture channels for "live video" processing on one Server: Unlimited
Number of audio output channels (to speakers, headphones, etc.): Depends on the sound card used for playback
Number of license plate recognition channels: Determined by the license; there is no upper limit
Number of face recognition channels: Determined by the license; there is no upper limit
IP device support: IP cameras and IP video servers. This list is continuously expanding: support for new hardware is added through updates to Axxon Driver Pack
Video compression algorithms: MJPEG, MPEG-2, MPEG-4, MxPEG, H.264, H.264+, H.265, H.265+, Hik264 (x86 only)
No. — Limitation
1. To work with Axxon Next software, the following OpenGL requirements must be fulfilled:
2. The Axxon Next Client cannot be started if the scale of all items on the screen (DPI) is over 100%. You may have issues with Axxon Next VMS if the screen resolution is set to 1280×720 pixels or lower.
3. The Server and Client must be of the same version; otherwise, Axxon Next VMS may have issues.
4. For correct operation of Axxon Next VMS, the OS should use the UTF-8 locale.
6. Two Servers with the same name are not allowed in one LAN, even if they belong to different Axxon domains.
9. For proper installation of Axxon Next, there must be no spaces at the beginning of the name of the folder that contains the installer.
10. For correct and full-featured operation of the Axxon Next software, the system must not limit network activity between any Servers and Clients. TCP and UDP access to these ports should be enabled; otherwise, access to all ports should be allowed in the system.
11. Time must be synchronized among all computers in the system (to be configured by the user).
12. If edge storage is enabled in the system, synchronization between the Server and the IP device is necessary (see The Embedded Storage object). Lack of synchronization may lead to bad DB entries for events detected on the edge device.
13. Before installing Axxon Next, make sure the video card drivers on the computer are fully up to date.
15. The face detection tool requires a CPU supporting the SSE4.2, FMA3, or AVX2.0 instruction set.
16. The Client cannot be started on a remote desktop through the Remote Desktop Connection utility built into Windows.
17. If a computer is linked to an Active Directory domain, one of the following conditions must be met to enable disk access:
   1. Access control lists must contain only local or built-in groups and users.
   2. Create an AxxonFileBrowser user in the domain and add it to the Users group (see Installation, step 8).
   This behavior is typical only of file systems that have access permissions (for example, NTFS).
Windows 7 SP1 (x86, x64), Starter edition: restrictions posed by the OS edition (2 GB of main memory, 1 physical processor, 1 monitor) - see http://www.microsoft.com. Stretch cards are supported in the 32-bit version (x86) only.
Windows 8 (x86, x64), Core edition: OS edition enabling use of all product features.
Windows Server 2012 (x64), Foundation edition: restrictions posed by the OS edition (1 physical processor). Full Installation type is supported; Server Core Installation type is not supported.
Windows Server 2012 R2 (x64), Essentials edition: restrictions posed by the OS edition (2 physical processors). Full Installation type is supported; Server Core Installation type is not supported.
Windows Server 2016 (x64), Essentials edition: restrictions posed by the OS edition (2 physical processors). Full Installation type is supported; Server Core Installation type is not supported.
Windows 10 (x86, x64), Pro edition: OS edition enabling use of all product features.
Windows 10 IoT (x86, x64), Enterprise edition: OS edition enabling use of all product features.
Windows Server 2019 (x64), Essentials edition: restrictions posed by the OS edition (2 physical processors). Full Installation type is supported; Server Core Installation type is not supported.
Windows Server IoT 2019 (x64): OS edition enabling use of all product features. Full Installation type is supported; Server Core Installation type is not supported.
Debian 9 (x64), Debian 10 (x64): see Appendix 8. Configuring and operating the Axxon Next in Linux OS.
IOPS during archive recording (archive recording includes both data input (writing) and output (reading) operations):
IOPS (writing) = 0.29 * N, or IOPS (writing) = 0.065 * M
IOPS (reading) = 0.035 * M
IOPS during simultaneous recording and playback:
IOPS (writing) = 0.29 * N, or IOPS (writing) = 0.065 * M
IOPS (reading) = 0.035 * M + 0.035 * R * S
where
2. If you use RAID storage, specify the write-back policy for writing to cache memory.
Average frame size is the average size of the camera frame, in kilobytes.
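The IOPS formulas above can be sketched as a small calculator. This is an illustrative sketch, not part of Axxon Next, and the meanings of the variables are assumptions (the definitions after "where" are truncated in this copy): N is taken as the number of recording channels, M as the total recorded stream in MB/s, R as the number of channels played back simultaneously, and S as the playback speed.

```python
# Illustrative sketch of the IOPS formulas above. Variable meanings are
# assumptions: n_cameras = N, stream_mb = M (MB/s), r = R, s = S.

def iops_recording(n_cameras, stream_mb):
    """IOPS while recording only."""
    writing = 0.29 * n_cameras          # alternative form: 0.065 * stream_mb
    reading = 0.035 * stream_mb
    return writing, reading

def iops_recording_and_playback(n_cameras, stream_mb, r, s):
    """IOPS while recording and playing back simultaneously."""
    writing = 0.29 * n_cameras
    reading = 0.035 * stream_mb + 0.035 * r * s
    return writing, reading

w, rd = iops_recording_and_playback(n_cameras=16, stream_mb=40, r=2, s=1)
print(round(w, 2), round(rd, 2))
```

For 16 recording channels at a 40 MB/s total stream with two channels played back at normal speed, this yields about 4.64 write IOPS and 1.47 read IOPS.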
Note
The average frame size for 640x480 resolution is:
• H.264: from 8 KB to 17 KB
• MPEG4: from 8 KB to 35 KB
• MJPEG: from 23 KB to 60 KB
Average frame size may vary over a wide range depending on the vendor, model and settings of the camera, and on video image complexity.
Note
To estimate the frame size, you can use the following rule of thumb: doubling the vertical or horizontal resolution increases the average frame size approximately four times (this rule is approximate and applies only to some camera models).
Examples of calculating the size of the disk subsystem (excluding the size of the system log database) are presented below.
• 4 cameras, 25 fps, 640x480 resolution, guaranteed recording 24 hours per day for one week: H.264: from 500 GB to 1 TB; MPEG4: from 500 GB to 2 TB; MJPEG: from 1.3 TB to 3.5 TB.
• 16 cameras, 12 fps, 640x480 resolution, guaranteed recording 12 hours per day for one week: H.264: from 500 GB to 1 TB; MPEG4: from 500 GB to 2 TB; MJPEG: from 1.3 TB to 3.5 TB.
• 4 cameras, 25 fps, 1280x960 resolution, guaranteed recording 24 hours per day for one week: H.264: from 2 TB to 4 TB; MPEG4: from 2 TB to 8 TB; MJPEG: from 5.3 TB to 14 TB.
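The example figures above can be approximated with a simple estimate (a sketch, not an official sizing tool): archive size is roughly cameras × fps × recording seconds × average frame size, using the per-codec frame sizes from the earlier note.

```python
# Rough archive-size estimate for the scenarios above. Illustrative sketch;
# average frame sizes per codec come from the 640x480 note earlier.

def archive_size_tb(cameras, fps, hours_per_day, days, avg_frame_kb):
    frames = cameras * fps * hours_per_day * 3600 * days
    return frames * avg_frame_kb / 1024**3   # KB -> TB (binary units)

# First scenario: 4 cameras, 25 fps, 24 h/day for one week, H.264 (8-17 KB)
low = archive_size_tb(4, 25, 24, 7, avg_frame_kb=8)
high = archive_size_tb(4, 25, 24, 7, avg_frame_kb=17)
print(round(low, 2), round(high, 2))  # about 0.45 to 0.96 TB
```

This gives roughly 0.45 to 0.96 TB for the first scenario, consistent with the "500 GB to 1 TB" range quoted in the examples.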
Attention!
If Axxon Next is installed on a computer with two processors, it is recommended to disable Hyper-Threading.
Client: 2 GB of RAM
If you increase RAM speed by using memory with a higher frequency or using memory in dual-channel (or more) mode, you will
reduce CPU usage and boost the performance of Axxon Next.
Attention!
We recommend that you use the latest drivers for both integrated (on-board) and dedicated (discrete) graphics cards.
Note
Extension availability can be checked using the OpenGL Extension Viewer program.
Note
To connect an Intel NCS, insert the device into a USB port and make sure that it is recognized by Windows OS as
one of the following: Movidius, Myriad X, or VSC Loopback Device.
Intel NCS may be used with any PC that conforms to Axxon Next hardware requirements (see Hardware
requirements).
Attention!
We do not recommend using more than one Intel NCS device per Server.
You can use several Intel HDDL devices on the Server if they have the same revision.
Attention!
For Intel HDDL to work correctly with AMD processors, pre-install the OpenVINO ™ toolkit version 2019.3.379
(see https://docs.openvinotoolkit.org/latest/
openvino_docs_install_guides_installing_openvino_windows.html).
2. To use an Intel CPU or GPU for analytics, note that the following processors are supported: Intel Core starting
from the 6th generation, Intel Xeon, and Intel Pentium N4200/5, N3350/5, or N3450/5 with Intel HD Graphics (see
https://software.intel.com/en-us/openvino-toolkit/hardware).
3. Video card: NVIDIA GeForce 1050 Ti or higher. Requirements:
a. at least 2 GB of memory;
b. Compute Capability 3.0 or higher.
Note
You can check the GPU's Compute Capability version on the manufacturer's website.
Attention!
If you use NVIDIA graphics cards, make sure to download the latest drivers from the manufacturer's
website.
When using a video card, each neural network requires 500 MB of video memory. For example, a "neural" fire detection tool
and a "neural" smoke detection tool, both with an unlimited number of channels, require a graphics card with 1 GB of
memory or more. You can use multiple video cards in your system.
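The per-network memory figure above can be turned into a quick estimate. This helper is an illustrative sketch, not part of Axxon Next; the 500 MB constant is taken from the text above.

```python
# Per the paragraph above, a single neural network on a video card needs
# about 500 MB of video memory. Sketch for estimating the minimum card size.

MB_PER_NETWORK = 500  # stated in the text above

def min_vram_mb(n_networks):
    """Minimum video memory (MB) for the given number of neural networks."""
    return n_networks * MB_PER_NETWORK

# Example from the text: "neural" fire detection + "neural" smoke detection
print(min_vram_mb(2))  # 1000 MB, i.e. a 1 GB card or larger
```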
Attention!
For correct operation of a detection tool, video image must match a specified set of requirements.
Requirements for each particular detection tool are listed in corresponding sections (see Configuring detection tools).
Attention!
The minimum bit rate through the communication channel (network bandwidth consumption / goodput) for the Axxon
Next VMS should be at least 2 Mbit/s.
To determine the required TCP/IP network bandwidth for video transmission from IP devices and some video capture cards, we
recommend using the Axxon Platform Calculator (check the Total bitrate from ip devices (Mbit/s) parameter).
Attention!
Different motherboard manufacturers use different names for these technologies. Therefore, for every particular
motherboard model you must find the technology name in the documentation for this model.
Attention!
To run the Client, 3D acceleration must be enabled.
Note
The operating system in a virtual machine must meet the general requirements.
If you use VirtualBox on Windows 7 SP1, or Hyper-V, you won't be able to access a Guardant USB key from the guest
system.
In Hyper-V, you can use third-party utilities (for example, USB Network Gate) instead.
Attention!
You can configure access to any given tab and buttons in the top panel individually for each user role (see Configuring
user permissions).
When you are configuring VMS, the top panel shows tabs with different groups of settings, the button for switching to the system
log, current user name and the Quit button.
When you select Server or any child object in the Hardware, Archive or Detection Tools tabs, the upper panel shows the current
CPU and network load.
In the Layouts interface in the upper pane, you see the following elements:
1. Camera search panel.
2. System time.
3. Context menu.
4. Video surveillance mode selection tabs.
5. Video wall management panel.
6. Monitor management panel.
7. Layouts panel.
8. Macro menu.
9. Layout scaling buttons.
In the Layouts and Search interface, you can hide the top panel. To do this, click the button in the top right corner. To show
the top panel, hover over it with the pointer.
Note
In Full Screen (see Configuring the Client screen mode (full screen or window)), to collapse the client window, click the
button.
Port(s) | Opened* | Installation type | Purpose | Required:
- 20109 | Yes | Server and Client; Failover Server and Client | NativeBL Core (see Configuring the Server ports) | Required
- 80 (Windows), 8000 (Linux) | Yes | Server and Client; Failover Server and Client | Web server (see Configuring the web server), plus HTTPS port 443 | If necessary
- 554 (Windows), 50554 (Linux) | Yes | Server and Client; Failover Server and Client | RTSP server (see Configuring an RTSP Server) | Not required
- 8888 | No | Server and Client; Failover Server and Client; Client | Client HTTP API (see Client HTTP API) | Not required
- 4000, 4646-4648, 8300-8302, 8500, 8600 | No | Failover Server and Client | Failover (see Ports used by the failover system) | Required
* in a configuration including several Servers in firewalled LANs, or with port forwarding on the router in a NAT environment.
Note
You can connect analog video cameras to Axxon Next via video capture cards, which the software defines as IP devices.
The following types of equipment are IP video and audio surveillance devices:
1. IP video cameras
2. Various types of IP video servers
IP video servers digitize the analog video signal from the analog cameras directly connected to them and transmit it to
users via TCP/IP. When working with analog video cameras connected to IP video servers, users can utilize the same video
viewing and transmission functions as with IP video cameras.
Attention!
If these requirements are met, IP devices should be properly handled; however, correct functioning is not guaranteed.
Based on the video signal coming in from the IP device, an assessment is made of the guarded location and the system responds
to events registered for that location. The content and quality of the obtained video information depends on how the IP device is
installed and configured. There are a number of rules that must be followed to obtain a high-quality video signal. In particular,
high-quality peripheral equipment (hubs/routers) must be used; we advise against use of Home and Office-class devices, which
are not intended for use in such security systems.
Note
IP devices connected to such equipment will transmit a video stream with an unacceptably long delay (from 1.5 to 3
seconds per frame).
Detailed information about creating a local network and connecting IP equipment to it is presented in the corresponding
reference documents.
Attention!
Without assigning preliminary IP addresses to the devices, it is not possible to access their Web interface.
2. Web interface of the IP device. This interface is used to accomplish the following tasks:
a. Configuring the IP devices with consideration for routing
b. Configuring modes for the IP devices to work with video and audio signals
c. Viewing video images coming in from IP devices in standard Web browser mode
Configuration of IP devices in Windows is described in detail in the official reference documentation for the respective devices.
4.2.1 Installation
To install Axxon Next, regardless of the type of installation, you must perform the following steps:
1. Insert the Axxon Next installation disc into the CD drive, or unpack the archive with the installer package.
Note
If you cannot run the installation files downloaded from the Internet, allow running programs and
unsafe files in Windows OS.
3. In the dialog box, choose the desired language from the list and click OK
4. Select the Axxon Next software installation type in the dialog box by clicking the appropriate option button:
a. Client – This type of installation is used for installing the software's user interfaces, which enable any user to
connect to any server within a single security system and to perform administration/management/monitoring of a
guarded location based on the permissions granted by the administrator.
b. Server and Client — installs Client and Server services. Axxon Next Server:
i. interacts with devices (cameras, microphones, inputs, outputs, etc.) that constitute a security system;
ii. writes video footage to archives on system disks; interacts with archives on NAS;
iii. hosts VMDA database;
iv. employs detection tools to analyze live video;
v. keeps configurations of the security system, user settings, custom layouts, macros, etc.
c. Failover Server and Client— installs Client and Server services enhanced with the Failover capability. In
emergency (power outage, network problems), the Failover technology restores the server configuration on
another server. Please refer to the section titled Configuring Failover VMS for details on how to install VMS with the
Failover capability.
Note
We offer a separate software package containing only the Axxon Next Client. To obtain it, contact our
technical support.
This package is intended only for Client updates; you cannot install it on a PC where no Axxon Next
software was previously installed.
5. To record all installation-related events to a log file, select the Enable full installation log check box.
6. Click the Next button.
A dialog box prompts you to select the components for installation.
7. Select check boxes for the components that you want to install. We recommend installing all components.
8. Click the Install button. All selected components will be installed. The installation process may take considerable time.
Attention!
Starting from Axxon Driver Pack 3.51, this driver package requires the Windows update KB2999226 to be
installed. If this update is missing, you will see a warning. To continue installation, download the update from
the official Microsoft website.
Two different versions of the KB2999226 update are available, for 32-bit and 64-bit systems.
Note
The following required software is installed, if necessary:
1) PostgreSQL 10.8.0 database server. If an older version of PostgreSQL is installed, it is updated to version 10.8.0.
A new log database is automatically created (name: ngp, user name: ngp, password: ngp).
2) .NET Framework 2.0, .NET Framework 3.5 SP1 and .NET Framework 4.0.
3) Acrobat Reader, which is necessary for exporting in PDF format and printing freeze frames (see Frame export).
4) VLC Player. The VideoLan folder in the Axxon Next installation folder contains the file VLC.exe, a
version of VLC Player that can be run from any connected disk without installation. This file can be used to
view exported archive video.
9. After installation of the required software and drivers, preparation begins for Axxon Next installation.
11. To proceed with installation, accept the terms of the license agreement by selecting the radio button next to I accept the
terms of the License Agreement and click Next.
12. Indicate the destination folders for installation of Axxon Next components and click Next.
Attention!
The installation path for Axxon Next and its databases must contain only Latin letters and numbers.
Note
By default, the software will be installed to the directory C:\Program Files\AxxonSoft\AxxonNext\.
13. By default, shortcuts are added to both the quick launch bar and the desktop. Clear the corresponding check boxes if they
are not required.
14. Upon connection to a Server, the Client may be automatically updated. Clear the Create an archive for automatic
update check box if this is not required. In this case, you save Axxon Next VMS installation time, but the Clients will not be
automatically updated upon their connection to the Server.
15. By default, the Axxon Next Server's name is identical to the PC's. If the PC name contains forbidden symbols, you have to
set an appropriate name for the Server according to recommendations, and click Next.
16. In the window that opens, select an installation method and click the Next button.
If the Custom installation method is selected, you can perform advanced configuration of the installation of Axxon Next.
If the Standard installation method is selected, you are prompted to select an Axxon domain (Step 20). Default values will be used for all other settings.
Note
The file browser helps to navigate through the Server's file system (such as when choosing disks for log
volumes). The user account for the Windows file browser will be created with administrator privileges.
Attention!
After installation of Axxon Next, make sure that a file browser account has been created in Windows and belongs
to the Administrators group.
18. Select a folder for storing the files and folders of the Axxon Next configuration.
Note
By default, the files and folders of the configuration are stored at the following path: C:
\ProgramData\AxxonSoft\AxxonNext\
19. Select a folder for storing Axxon Next databases: the log database and object trajectory database.
Attention!
The installation path for Axxon Next and its databases must contain only Latin letters and numbers.
Attention!
You are advised to place the log database and object trajectory database on a disk that has sufficient space. If
you will be using only a log database, the disk capacity must be at least 5% larger than the archive size. If you will
also be using a trajectory database, the disk must be at least 15% larger than the archive.
The following formulas can help to determine the required disk size for the trajectory database:
Size of object trajectory database = N × T × (0.5 GB/day): sufficient disk size;
Size of object trajectory database = N × T × (1 GB/day): sufficient disk size plus reserve space;
Size of object trajectory database = N × T × (5 GB/day): sufficient disk size plus a large reserve.
N equals the number of video cameras in the system actively recording metadata; T equals the period of time
(number of days) that metadata will be stored. By default, T = 30 days.
If you have less than 15 GB of free disk space, the Object Tracking DB is overwritten: new data overwrites the
oldest records.
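The sizing rules above can be sketched as a small calculator. This is an illustration, not an official tool; the rates (0.5, 1 and 5 GB/day) and the 5%/15% disk margins are taken from the Attention notes above.

```python
# Sizing sketch for the databases described above.
# Trajectory DB: size = N cameras recording metadata * T retention days * rate.

def trajectory_db_gb(n_cameras, days=30, gb_per_day=0.5):
    """gb_per_day: 0.5 = sufficient, 1 = adds reserve, 5 = large reserve."""
    return n_cameras * days * gb_per_day

def min_disk_gb(archive_gb, with_trajectory_db):
    """Disk must exceed the archive by 5% (log DB only) or 15% (log DB plus
    trajectory DB), per the Attention note above."""
    factor = 1.15 if with_trajectory_db else 1.05
    return round(archive_gb * factor, 2)

print(trajectory_db_gb(10))                        # 150.0 GB: 10 cameras, 30 days
print(min_disk_gb(1000, with_trajectory_db=True))  # 1150.0 GB for a 1000 GB archive
```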
Note
By default, the log database and the object trajectory database will be placed in: C:\Program
Files\AxxonSoft\Axxon Next\Metadata (in the pg_tablespace and vmda_db subdirectories, respectively).
In the future, the metadata database can reside on network storage (see Configuring storage of the system log
and metadata)
Note
The number of ports that you select affects the scalability of the system. Keep the following in mind when
specifying the number of ports:
After the Server is installed, it occupies 10 ports, including one for sending e-mails (via SMTP) or text messages
(via SMS).
In a 64-bit configuration, 4 ports are required for any number of IP devices. In a 32-bit configuration, 4 ports are
required for each 32 cameras.
Each archive requires 1 port.
1 port is required for viewing Video Footage through the Web Client.
2 ports are required for each decoded video stream on the currently opened layout in the Web Client.
2 ports are required for any number of loudspeakers in the system.
1 port is required for recording metadata into the DB.
2 ports are required for service detection tools operation.
2 ports are required for scene analytics detection tools operation.
2 ports are required for neuro tracker operation.
2 ports are required for neural counter operation.
Attention! The Failover Server and Client installation type uses 9 base ports and preset port ranges for each
node (see Ports used by the failover system).
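The per-feature port counts in the note above can be summed to sanity-check the chosen port range. This is an illustrative sketch for a 64-bit configuration, not part of Axxon Next; it assumes the detection-tool counts are per running instance, which the note does not state explicitly.

```python
# Port-budget sketch based on the per-feature counts in the note above
# (64-bit configuration; counts per detection tool assumed per instance).

def ports_needed(archives, web_layout_streams, web_viewers=0,
                 service_det=0, scene_det=0, neurotrackers=0, neurocounters=0,
                 loudspeakers_present=False, metadata_db=True):
    total = 10          # base Server ports, incl. one for SMTP/SMS
    total += 4          # any number of IP devices (64-bit configuration)
    total += archives   # 1 port per archive
    total += web_viewers                # 1 per Web Client footage viewer
    total += 2 * web_layout_streams     # 2 per decoded stream in the Web Client
    total += 2 if loudspeakers_present else 0   # any number of loudspeakers
    total += 1 if metadata_db else 0            # metadata DB recording
    total += 2 * (service_det + scene_det + neurotrackers + neurocounters)
    return total

need = ports_needed(archives=2, web_layout_streams=4, web_viewers=1,
                    scene_det=1, loudspeakers_present=True)
print(need)  # compare with the PORT_RANGE_COUNT chosen at install time
```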
22. Set the outside local address for a Server behind the NAT.
23. To restrict visibility of Servers on particular networks in the list of Servers during Axxon Next setup:
a. Click the button Select network interfaces... The Network interfaces window opens.
b. By default, use of all available network interfaces on the Server is allowed, meaning that Servers on the relevant
networks will be visible in the list. If you do not want Servers on the networks of certain network interfaces
to be visible in the list, clear the relevant check boxes.
Note
Depending on the network topology, it may still be possible to reach the Servers manually (if broadcasting
is allowed between the network segments).
Note
When reinstalling Axxon Next, you have the option of using the previous Axxon Domain (select Use existing
configuration)
Note
Using the same Axxon Domain name does not guarantee that the Servers will be in the same Axxon Domain. To
place all Servers into one Axxon Domain, you must use the Axxon Next interface to add each Server to the
necessary Axxon Domain. Axxon Domain configuration is described in detail in the section titled Configuring
Axxon domains.
26. A dialog box then appears, showing the installation parameters corresponding to the selected type of installation.
27. Verify your installation settings and click Next to begin installation of Axxon Next.
A message indicating the completion of Axxon Next installation will appear in a new dialog box.
Note
To ensure that Axxon Next is re-installed correctly, all related applications should be closed before starting the repair
installation
To run a repair installation of the Axxon Next software, you must perform the following steps:
1. Insert the Axxon Next installation CD into the CD-ROM drive. A dialog box will display the disk contents.
A dialog box will appear, showing the Axxon Next repair process.
A dialog box will appear, indicating the completion of the repair process. Click Finish. Repair of Axxon Next is now complete.
4.2.4 Removal
The Axxon Next installation program can also remove the software. Use this option when you need to remove all components of
Axxon Next from your computer.
Note
All related applications should be closed before beginning removal of the Axxon Next software
You can run the Axxon Next uninstaller via one of the following methods:
1. from the Start menu
2. using Add or Remove Programs in the Windows control panel
3. By starting the executable file named setup.exe, which is included with the installed version of the product.
When you do this, the setup wizard's welcome screen appears. To remove Axxon Next, you must observe the following
procedure:
2. Select Remove.
3. To save your Axxon Next settings in a database, select the Save configuration check box. This option may be useful when
updating the product.
4. Click Next.
A dialog box will appear showing the Axxon Next removal process.
A dialog box will appear, indicating the completion of the removal process. Click Finish. Removal of Axxon Next is now complete.
Note
To completely remove Axxon Next, use the Windows Control Panel to remove the following software:
1. PostgreSQL.
2. AxxonSoft Situation detectors.ItvDetectorPack.
3. Axxon Driver Pack.
4.2.5 Update
If you have an earlier version installed, please perform a step-by-step upgrade: 3.6.4 → 4.1.0 → 4.3.2 → 4.4.6 → 4.5.0.
Attention!
You cannot directly upgrade to Axxon Next 4.5.0 from versions 3.6.466 and lower.
To upgrade from an older version, do the following:
1. Remove the old version.
2. Install the new version.
3. Reconfigure the system.
Attention!
For a failover system, we recommend you to update Servers through the supervisor service (see Upgrading Servers
within a cluster).
Note
Before you start updating, do the following:
• back up your configuration (see Backup and Restore Utility);
• stop the Server (see Shutting down a Server) or, in case of failover system, the NGP_Supervisor service. If you run
multiple Servers in the Axxon domain, make sure to stop them all.
Note
You do not need to remove the previous build.
2. Install the build using an existing configuration. The procedure is the same as for a new installation (see Installation).
Attention!
When you update the software, keep DBs in the same folder. If you set a different location for DBs, the metadata
and events from it will not be available.
Note
During installation you may have to reboot the system. After rebooting, the installation continues automatically.
When the Redist.exe process has ended and is no longer listed in Windows Task Manager, the installation is complete.
This mode of installation can be configured by adding command-line options to setup.exe. See the command-line options in the
table.
/ADD="[]" Here is the list of components to install (or remove, when uninstalling the software). See
the possible values in the table below.
/REMOVE="[]" Here is the list of components NOT to install (or not to remove, when uninstalling the software).
See the possible values in the table below.
/CMD="[commands]" Basic installation options and values. Commands have the form [option]=\"[value]\" or [option]=[value].
See the available installation options in the table below.
Attention!
Occasionally, when installing the Bosch VideoSDK driver, the CLI window opens. To continue with installation, close
this window.
x86 | x64
Acrobat | Acrobat
BaseProduct | BaseProduct
IPDriverPack_x86 | IPDriverPack_x86
Guardant_x86 | Guardant_amd64
Postgres | Postgres
dotnetfx35_x86 | dotnetfx35_x86
Redist2005_x86 | Redist2005_x86
Redist2010_x86 | Redist2010_x86
DetectorPack | DetectorPack
Installation options:
NGP_IFACE_WHITELIST="0.0.0.0/0" Network interfaces. The default value is "0.0.0.0/0" (all available network interfaces).
Format of network interfaces: "IP-address1/number of unit bits in the mask, IP-address2/number of unit bits in the mask".
NGP_ALT_ADDR="0.0.0.0" Sets the outside local address for a Server behind NAT.
Format: "IP-address1 or DNS-name1, IP-address2 or DNS-name2".
PORT_RANGE_START="20111" The initial value of the port range for the Server. Default: 20111.
PORT_RANGE_COUNT="100" The number of ports in use. The minimum number is 20. Default: 100.
FBUSER_NAME='[AxxonFileBrowser]', FBUSER_PSW='[Axxon2.0.0]' Set the user name and password for the file browser
account; applied when the FBUSER_TYPE parameter is set to SPECIFY.
The command for silent installation of Axxon Next may look like:
setup.exe /quiet /norestart /debug /INSTALLTYPE="ServerClient" /REMOVE="Guardant_x86" /
CMD="CREATE_QUICKLAUNCH_SHORTCUT=\"0\" PORT_RANGE_COUNT=\"50\" DOMAIN_NAME_TYPE=\"WithoutDomain\""
This will launch installation with the following options:
1. quiet mode (/quiet);
2. no reboot (/norestart);
3. log installation to file (/debug);
4. Server and Client (/INSTALLTYPE="ServerClient");
5. no Guardant drivers (/REMOVE="Guardant_x86");
6. and with the following properties (/CMD=):
a. no quick launch shortcut (CREATE_QUICKLAUNCH_SHORTCUT=\"0\");
b. 50 ports for the Server (PORT_RANGE_COUNT=\"50\");
c. Server NOT added to an Axxon domain (DOMAIN_NAME_TYPE=\"WithoutDomain\").
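Command lines like the one above can also be assembled programmatically, which helps avoid escaping mistakes in the /CMD block. This is a convenience sketch, not an official tool; the option names come from the tables in this section.

```python
# Assemble an Axxon Next silent-install command line from the options
# described above. Illustrative sketch; values inside /CMD are escaped
# as \"value\", matching the example in this section.

def build_silent_install(install_type, remove=(), cmd_options=None):
    parts = ["setup.exe", "/quiet", "/norestart", "/debug",
             f'/INSTALLTYPE="{install_type}"']
    for component in remove:
        parts.append(f'/REMOVE="{component}"')
    if cmd_options:
        inner = " ".join(f'{k}=\\"{v}\\"' for k, v in cmd_options.items())
        parts.append(f'/CMD="{inner}"')
    return " ".join(parts)

cmd = build_silent_install(
    "ServerClient",
    remove=["Guardant_x86"],
    cmd_options={"CREATE_QUICKLAUNCH_SHORTCUT": 0,
                 "PORT_RANGE_COUNT": 50,
                 "DOMAIN_NAME_TYPE": "WithoutDomain"})
print(cmd)
```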
Note
Please note that if Drivers Pack 3.2.0 or an earlier version is installed, you should first uninstall it.
Attention!
You should launch the Client as administrator to make it automatically updatable.
Note
If the Client PC has a Server installed, no update will be possible.
Next, the Client will be updated automatically along with Detector Pack and Driver Pack driver modules (if required).
Note
The update may lead to a reboot.
Functionality (by type of license: Axxon Next Demo (8:00-18:00)*, Axxon Next Free, Axxon Next):
Cross-System Client: No / No / No
• MomentQuest
• Face search
• LPR search***
• Visitors counter
• Queue detection
• Heat map
• AxxonNet reports
Data replication: No / No / No
Failover: No / No / No
Offline analytics: No / No / No
Information about the type of license you are using is displayed in the server properties in the Product Type field.
* The system will operate in demo mode from 8:00 AM to 6:00 PM.
** Axxon Next Free license allows you to further use metadata from core VMD with any Scene Analytics detection tool except line
crossing and abandoned objects tools.
*** Please contact our sales team to confirm if LPR country is supported and find out more about licensing policy.
**** Axxon Next Start license allows you to use metadata from a basic VMD with the MomentQuest forensic search.
***** To use the VT and IV automatic number plate recognition tools, you have to re-activate the updated ANPR license.
****** the module has to be purchased separately.
Note
Administrator users get a reminder to renew the license 30 days before it expires.
a. The activation request should be sent from the computer that will host the Axxon Next Server.
b. You can upgrade your license only if you retain the initial basic hardware configuration of all the Servers.
c. It is not possible to transfer a license from one computer to another.
2. License file + Guardant dongle.
This method allows replacing Server hardware and transferring the license to another computer. To activate Axxon Next
via this method, contact AxxonSoft to receive a license file and a Guardant dongle.
If you already have a Guardant dongle, you can perform activation yourself. To do so, connect the Guardant dongle to the
computer that you wish to activate and perform the standard activation steps.
Attention!
You may use a Guardant Sign key with Linux.
Note
If you install virtualization products such as VirtualBox, VMware, etc., this may affect the license. Should you
encounter this problem, you are advised to uninstall all virtualization products or apply for a new license file.
Note
The product activation utility program file LicenseTool.exe is located in the folder <Directory where Axxon Next is
installed>\AxxonSoft\AxxonSmart\bin\
Then you must select the name of one of the Axxon Domain servers to which the license file will be applied (the file is applied to
all Axxon Domain servers launched at the moment of activation) and connect to the system, under an administrator's user name
and password, to continue the activation process.
6.1 Startup
Note
Run the command with administrator permissions.
Note
To launch the Client from command line, you have to specify the following parameters: LOGIN, PASSWORD and SERVER.
For example: C:\Program Files\AxxonSoft\AxxonNext\bin>AxxonNext.exe -LOGIN=root -PASSWORD=root -
SERVER=127.0.0.1
To connect to multiple Servers, specify their addresses separated by commas.
For example: C:\Program Files\AxxonSoft\AxxonNext\bin>AxxonNext.exe -LOGIN=root -PASSWORD=root -
SERVER=10.0.11.30, 10.0.11.34
Note
The Axxon Next software package program file AxxonNext.exe is located in the folder <Axxon Next installation
folder>\Axxon Next\bin\
Note
To start the Client in Safe mode with OpenGL software emulation, select: Start > Programs > Axxon Next >
AxxonNext (Safe mode).
The Axxon Next Client will then launch and an authorization window will appear.
Note
If the software is accessed by a remote user, the NetBIOS name or IP address of the computer with which the
connection is established should be indicated in the Server name or IP address field.
Note
The order of the servers in the list is as follows:
a. Preferred Servers (see Selecting Preferred Servers).
b. The latest server that was connected to.
c. Other Servers are in alphabetical order.
3. Enter the user name and password (2) and click Connect.
Note
The first login to the system is done with the user root, which has administrator permissions. Enter root in the
User Name and Password fields. The administrator then needs to configure the system for multi-user access
(described in detail in the section Configuring user permissions).
Attention!
You need to match software versions between the Server and the Client. The Drivers Pack's version must be the
same as well.
If the Server's version is higher than the Client's, you will be prompted to update your Client software (see
Automatic update of a remote Client).
It is strongly recommended to avoid any connections if the product versions do not match.
4. If the user requires the access confirmation by the system administrator, enter corresponding credentials and click
Connect.
Attention!
When you first start the client, the archive settings tab opens (see Configuring Archives).
After the archive is created, camera addition starts automatically (see Adding and removing IP devices). IP Device
Discovery Wizard launches.
Note
If Axxon Next is launched in demo mode, then after you enter the authorization parameters, a message to this effect will
appear (see the section Axxon Next in demo mode).
If the Server to which Axxon Next is connecting does not belong to any Axxon Domain, after the Connect button in the
authorization window is clicked, a message is displayed.
To connect to the Server, you must either create a new Axxon Domain based on the server or add the Server to an existing Axxon
Domain.
If you choose the first option, click OK in the message and follow the instructions given in the section Creating a new domain. For
the second option, click the button and follow the instructions given in the section Adding a Server to an existing Axxon
Domain.
Attention!
The maximum number of running Clients is limited to the number of connected monitors that support the minimum
required resolution (see Limitations of the Axxon Next Software Package).
You can run only one Client on a single monitor.
Note
If a Client is started in window mode (see Configuring the Client screen mode (full screen or window)) and moved to
another monitor, the situation changes: Clients will be started on the specified monitors even if a Client is already
running on one or more of them.
Using Axxon Next in demo mode:
Active: Axxon Next is started between the hours of 8:00 AM and 6:00 PM.
Inactive: Axxon Next is started outside the hours of 8:00 AM and 6:00 PM. The Axxon Next Server is not available; only the system configuration can be viewed.
If a Client is connected to an Axxon Domain in which there is at least one Server running in demo mode, an appropriate message
is displayed, along with a list of Servers in the Axxon Domain and their types of licenses.
Note
The notification is displayed after successful authorization.
If an Axxon Domain includes at least one Server running in active demo mode, you will be given the option to continue working
(2) or start the activation utility (1).
Note
You can also configure automatic authorization for the Client upon start-up (see Configuring Cross-System Client and
autologon).
6.2 Shutdown
1. Click the button located in the top-right corner of the Axxon Next dialog box.
Note
If the client is opened in full-screen mode (enabled by default), this button is not displayed. In this case you can
exit the user interface using actions 2 and 3.
When you perform one of these actions, the authorization window will appear. To close Axxon Next (completely exit the client),
click the Close button.
Note
Run the command with administrator permissions.
Attention!
While the Server restarts, the connection to cameras is temporarily lost and recording stops
1. Exit the Axxon Next user interface (see the section Shutdown).
2. When the authorization window appears, enter the user name under which you need to log in and the corresponding
password and click Connect.
Switching users is now complete.
Note
You can also specify Client connections to several Axxon Domains in the authorization dialog box. To do
this, enter comma-separated values for Servers as follows: <Server 1 Name or IP address>:<Connection port>, <Server 2
Name or IP address>:<Connection port>. Server 1 is the primary connection.
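The comma-separated connection string above can be parsed as shown in this sketch. The helper and the port numbers are illustrative, not part of Axxon Next.

```python
def parse_server_list(value):
    """Parse '<name or IP>:<port>, <name or IP>:<port>' into (host, port) pairs.

    The first entry is the primary connection.
    """
    pairs = []
    for item in value.split(","):
        # rpartition splits on the last ':' so IPv4 addresses stay intact
        host, _, port = item.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs

servers = parse_server_list("10.0.11.30:20111, backup-srv:20112")
primary = servers[0]  # the primary connection per the note above
```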
Attention!
Each Client to which LDAP users are connecting must have access to the LDAP catalog.
Note
When an LDAP user connects, the user's login and password in the LDAP as configured in the Server settings are used
(see Creating LDAP connections). The login and password in the LDAP directory are not used when connecting to the
Server.
Attention!
Please avoid changing system settings from different Clients simultaneously.
If you want to discard changes and have not clicked the Apply button, click Cancel.
If an attempt is made to close the Client but not all changes have been saved yet, a dialog box asks whether to confirm closing or
to cancel closing and save changes.
When setting up hardware, you can reset parameters to default values, or read configuration from the device at any time.
To reset parameters to default values, do the following:
1. Select the required device in the objects tree.
If a device's configuration differs from settings specified within the system, you can download the configuration from the device.
To do it, follow the steps below:
1. Select the required device in the objects tree.
If the description of a parameter is truncated, you can enlarge this area by dragging its upper border.
3. Event Source objects are used to integrate Axxon Next with external systems (see Receiving Events from External
Systems).
4. SMS and e-mail objects are used in macros and automatic rules for SMS and e-mail notifications (see The SMS Object, The E-mail Object).
5. Export Agent objects are used in macros and automatic rules for exporting video recordings and snapshots (see Configuring export).
6. Archive objects (see Configuring Archives).
7. Detection tool objects (see Configuring detection tools).
8. Role and user objects (see Configuring user permissions).
9. Macro objects (see Configuring Macros).
Some objects are created automatically, while others are created manually or are pre-created in the system.
2. Enter the IP address or the full or partial name of the object in the Search field.
Note
Search is not case-sensitive
Note
A search can also be run based on object ID
The search starts automatically once you enter something in the box. When the search is complete, you will see the number of
objects found in the tree, along with the currently displayed search results highlighted in beige.
The parts of names corresponding to the characters you entered will be highlighted in yellow on the found objects.
Note
If you search by an IP address, the found object will be fully highlighted.
Note
If a found object is located in a collapsed branch of objects, the branch will be highlighted with a yellow dotted line.
You can use the buttons or press ENTER to navigate through the search results.
The search results rotate in a loop; moving from the last object takes you back to the first object.
Note
If you move to an object located in a collapsed branch, the branch will automatically expand
Attention!
You cannot combine regular and Failover Servers within the same Axxon domain (see Configuring Failover VMS).
To configure Axxon Domains, you must have the appropriate permissions (see the section Configuring user permissions).
This section gives step-by-step instructions for each operation used in configuring Axxon Domains, and then describes typical
instances of their use.
Note
The number of created devices means the total number of enabled IP video channels.
In the Computer/Key HID mismatch (2) group, the license information / error is displayed.
If you select a licensing option in the relevant group (1), the License feature owners (3) group will include the objects currently
using this license.
You can also rename the Axxon-domain. To do so, enter the new name in the corresponding field (4) and click the Apply button.
The Name new Axxon Domain window will appear. In the New Axxon Domain name field, enter the Axxon Domain name to
create a new group of computers based on the Server and click Apply.
Attention!
It is not possible to use the above steps to add a Server to an existing Axxon Domain. Assigning the same Axxon Domain
name to several Servers does not guarantee that those Servers will be in the same Axxon Domain. Different Axxon
Domains can have identical names.
This will create a new Axxon Domain based on the Server. The Axxon Next software package will then be launched with the
entered authorization parameters (see the section Startup).
Attention!
Before configuring a distributed system, be sure to combine your Servers into an Axxon-domain
Attention!
It is not recommended to use Servers with different versions of Axxon Next in the same Axxon Domain.
Note
Only unallocated Servers, i.e., Servers which do not already belong to any Axxon Domain, can be added.
There are two ways to add a Server to an Axxon Domain, depending on whether or not it is present in the search results (in the
Unallocated Servers group).
If one of these Servers is present in the search results, select it and click the Add to Axxon-domain button.
The Server will then be added to the Axxon Domain from the Unallocated Servers group.
Since the search for unallocated Servers is conducted using broadcast packets, the results may not include Servers located in a
different subnetwork (for example, beyond a router which blocks broadcast packets).
In this case the option of manually adding a Server to an Axxon Domain can be useful; this option can be used with all
unallocated Servers, including those present in the Unallocated Servers group.
A Server can be manually added to an Axxon Domain as follows:
2. In the Server Name field, enter the NetBIOS name of the Server to be added to the Axxon Domain (2).
3. Enter the Server IP address and port number (3).
4. Click the Add to Axxon-domain button (4).
The Server will then be manually added to the Axxon Domain.
After a Server is added to an Axxon Domain using any of the methods described, it will appear in the object tree.
If a Server is not currently accessible when it is added to an Axxon Domain, it will be displayed in the object tree with the icon.
To undo addition of a Server to the Axxon-domain, select the Server and click the Exclude from Axxon-domain button.
Attention!
By excluding a Server, you also delete the macros, layouts, maps, object groups, roles, and users that have been
created on the Server
To remove a Server from an Axxon Domain, you must perform the following steps:
1. Select the Server in the list and click the Exclude from Axxon-domain button.
2. In the window which appears, confirm that you want to remove the Server from the Axxon Domain by clicking the Yes
button.
The Server will then be removed from the Axxon Domain. If the current Client was connected to the excluded Server, the user
interfaces will be unloaded and the user will be prompted to repeat the authorization procedure for Axxon Next (see the section
Startup).
Note
You can also exclude a Server from an Axxon domain using the activation utility (see Excluding the Current Server from
an Axxon Domain).
In the first typical case, the Servers for the future Axxon Domain are selected before Axxon Next installation. This case involves
the following steps:
1. Selecting a Server on the basis of which the new Axxon Domain will be created. Installing the Axxon Next software
package with the Server and Client configuration type, indicating the name of the new Axxon Domain (see also step 8 of
the instructions in the section Installation).
Note
Any Server in the future Axxon Domain can be selected as the primary Server
2. Installing the Axxon Next software package with the Server and Client configuration type on the other servers of the
future Axxon Domain, without adding them to the Axxon Domain (see also step 8 of the instructions in the section
Installation).
In the second typical case it is necessary to add servers which are part of another Axxon Domain to a new Axxon Domain. This
case involves the following steps:
1. Excluding all the Servers which are to be added to the new Axxon Domain from their current Axxon Domains, according to
the instructions in the section Removing a Server from an Axxon Domain.
2. Naming the new Axxon Domain according to the instructions in the section Creating a new domain, when attempting to
connect to one of the Servers excluded in step 1.
3. Adding the remaining Servers to the Axxon Domain from the primary Server according to the instructions in the section
Adding a Server to an existing Axxon Domain.
When the Wizard is opened for the first time after the Client is started, automatic search for new devices will begin. During
subsequent sessions, to launch the Wizard you must click the corresponding button. A progress bar indicates search progress.
Note
Since multicast packets are used for device search, the search results may not contain the Servers and devices from
other subnets
The search results are color-coded based on the status of the device.
Note
If you click the IP address, you will jump to the device web interface.
2. By manufacturer, model or IP address. To do this, use the Filter field. For example, you can filter LG devices whose
model name contains LW and whose IP address contains 192.
When adding a device, you can immediately set several configuration options, such as:
• manufacturer and model,
Note
You can search by manufacturer and model of the device.
Note
An object identifier may contain only numbers, Latin characters and the "_" character.
In the Object Tree, added devices will be sorted by ID.
• Select an archive and set the recording parameters (see Binding a camera to an archive).
• No - camera is linked to the archive, no recording;
• Always - continuous recording;
• On motion (default setting) - a VMD tool and an automatic rule for writing to the specified archive are
automatically created for the camera you are adding. By default, recording stops when an event detection is
finished.
• On motion/Embedded detection - an embedded VMD tool and an automatic rule for writing to the specified
archive are automatically created for the camera you are adding. By default, recording stops when an event
detection is finished.
Note
This option is available only for devices that have on-board VMD.
Note
When creating a new device, the pre-alarm time interval for video footage recording is automatically set
to 3 seconds (see Binding a camera to an archive).
• Camera coordinates (latitude, longitude, azimuth) which are used when the camera is added to the geo map (see
Adding video cameras).
In addition, three modes are available for adding a device to a configuration. These are described in the following table.
1. Add device with default settings. The IP device is added to the configuration with the default settings (the default settings are determined by Axxon Next itself). Adding a device in this mode will change the current settings of the device.
2. Add device with current settings. The IP device is added to the configuration with the current settings, as specified in the web interface.
3. Add device with template settings. The IP device is added to the configuration with the settings that have been previously specified for a device of the same model in the configuration. Select a device of the same model (the "template device") in the list.
Only devices of the same model are shown in the list of search results for choosing the template device.
The following settings will be copied from the template device to the new, similar device: firmware, video
stream settings, buffering settings, Other settings (see The Video Camera Object), and Other settings for
Microphone and Speaker objects, if these are configured for the template device.
This mode is best when multiple cameras of the same model are in use at a site. If this is the case, we
advise to:
Add and configure one device.
Add the remaining devices, copying settings from the "template device" as described previously.
which compatibility is not guaranteed). To add one device, click the button. To add all devices, click the Add all button.
If you set no individual access parameters while adding hardware, a dialog window appears for setting unified access
parameters.
Note
Remember that if you add all IP devices at the same time, the same mode and settings will be applied to all of them.
If an IP device is not shown in the search results (because it is located on another subnet or contact has been temporarily lost),
you can add it manually. To do so, in the neutral-colored area above the search results, select the type of IP device that you are
adding (with or without edge storage), specify an IP address and port, and select the manufacturer and model.
Then add the device to the configuration by following the steps described previously.
To remove IP devices, select them in the device list (by left-clicking one or more devices, holding down the CTRL key to select
multiple devices) and click the Delete button.
If you click the IP address, you will jump to the device web interface.
Attention!
You have to disable the UAC first.
Do the following:
1. Create a CSV file with devices listed as follows:
IP address, Port, Manufacturer, Model, Login, Password, Identifier, Object name, Latitude,
Longitude, Azimuth, Archive name, Recording mode
Attention!
For each added camera, three parameters are required: IP address, Manufacturer and Model.
If an optional parameter is not specified, it will be automatically set to its default value.
You should include commas even if no optional parameters are set.
For example:
10.0.12.245, 80, Bosch, Dinion IP starlight 8000 MP, service, Admin12345!, 1441, Camera 1,
0, 0, 0, Archive AliceBlue, Always
10.0.12.246,, Bosch, Dinion IP starlight 8000 MP,,,,,,,,,
10.0.12.247, 80, Bosch, Dinion IP starlight 8000 MP,,,, Camera 3,,, Archive AliceBlue, On
motion
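The 13-field row format above can be generated programmatically, which is handy for large sites. This is an illustrative sketch, not an AxxonSoft tool; the field order follows the listing in step 1.

```python
FIELDS = ["IP address", "Port", "Manufacturer", "Model", "Login", "Password",
          "Identifier", "Object name", "Latitude", "Longitude", "Azimuth",
          "Archive name", "Recording mode"]

def make_camera_row(ip, manufacturer, model, **optional):
    """Build one CSV line in the 13-field order above.

    IP address, Manufacturer and Model are required; every other field is
    left empty (its comma is still emitted) unless passed by keyword.
    """
    values = {"IP address": ip, "Manufacturer": manufacturer, "Model": model}
    values.update(optional)
    return ",".join(str(values.get(f, "")) for f in FIELDS)

row = make_camera_row("10.0.12.246", "Bosch", "Dinion IP starlight 8000 MP")
# When writing the file, use UTF-8 encoding so object names display correctly:
# open("cameras.csv", "w", encoding="utf-8").write(row + "\n")
```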
Attention!
The manufacturer and model of the device must be specified exactly as in the list of supported devices.
Note
For correct display of the object name in Axxon Next VMS, the CSV file must be UTF-8 or UTF-32 encoded.
2. Drag & drop the created file to the field in IP Device Discovery Wizard in Axxon Next.
1. buttons for creating a system speaker or SMS and email notifications; button for excluding the Server from the Axxon-
domain, and button for launching the Configuration management utility (1).
Note
The number of connected devices means the total number of available IP video channels, including disabled ones.
The list of cameras is shown as a table with the following columns: Name, IP address, Vendor, Model, Quality, Video
codec, Frame rate and Resolution.
The table can be sorted by any of the columns.
Note
If no cameras have been created on a Server, you are prompted to search for IP devices on the network (the IP Device
Discovery Wizard is launched, see Adding and removing IP devices).
If a camera supports multistreaming, the information in the Quality, Video codec, Frame rate and Resolution columns will be
displayed as follows: value for the lowest-quality stream / value for the highest-quality stream.
Attention!
On the local computer with the Web server running, ports from the range [9001; 9001 + number of logical cores of the
processor] must be open.
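The open-port requirement above can be computed directly from the core count. This helper is an illustration of the stated rule, not part of the product.

```python
import os

def web_server_ports(base_port=9001, logical_cores=None):
    """Return the inclusive port range [base_port; base_port + logical cores]
    that must be open on the computer running the Web server.
    """
    if logical_cores is None:
        logical_cores = os.cpu_count() or 1
    return list(range(base_port, base_port + logical_cores + 1))

ports = web_server_ports(logical_cores=4)  # -> [9001, 9002, 9003, 9004, 9005]
```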
Attention!
The web server recodes incoming non-H.264 videos into MJPEG format, therefore the incoming traffic may increase
dramatically.
2. If you want to disable the web server, set the value of Enable to No (1).
3. In the Port field, enter the port number on which the web server will be located (2).
4. In the URL path field, enter the prefix that is added to the server address (3).
5. To connect to a web server via the HTTPS protocol, do the following:
a. Specify a path to the certificate (1).
Attention!
Axxon Next supports TLS encryption v1.2 and 1.3 with AES GCM, AES CCM and AES CBC algorithms.
3. In the RTSP/HTTP port field, specify the port number for transfer of RTSP data via HTTP tunnel (2).
4. Click the Apply button.
Configuration of the RTSP Server is now complete.
To receive videos from an RTSP server, use the following link format:
Attention!
For correct operation of the RTSP Server, a user name has to match the following rules:
• start with a letter;
• contain only Latin, numerical and following extra characters: "/", "-", "_", ".", ":", "+".
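The username rules above translate naturally into a regular expression. This validator is an illustrative sketch; Axxon Next itself does not ship such a helper.

```python
import re

# Starts with a letter; then only Latin letters, digits, and / - _ . : +
_RTSP_USER = re.compile(r"^[A-Za-z][A-Za-z0-9/\-_.:+]*$")

def rtsp_username_ok(name):
    """Check a username against the RTSP Server rules listed above."""
    return bool(_RTSP_USER.fullmatch(name))
```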
Note
You can configure recording options for a camera in the corresponding tab (see Configuring Archives).
When you have added a camera via the IP Device Discovery Wizard (see Adding and removing IP devices), you can edit the
camera's parameters. The camera parameters are grouped as follows.
In the Object identification group, you can see the camera ID, and you can enter a camera name / short name and text
comments.
Note
You may use camera's short name in hotkeys (see Notes regarding hot key actions).
Note
By default, the short name is a camera's ID. The full name of the video camera in the object tree is displayed in the
<Short name>. <Name> format.
After changing the short name and restarting the Client, the cameras in the object tree will be sorted by their short
names.
Also, you can disable the camera by selecting No in the Enable field.
Attention!
In terms of licensing, every camera enabled is one channel. Disabled cameras are not subject to licensing (see Licensing
of the software product). If you run out of camera licenses, disable offline / unused cameras.
In the Object features group you can see the following camera properties:
1. The IP address (assigned automatically and can be changed if necessary).
Note
The port used to transmit data between the camera and the Axxon Next VMS (this value is set to 80 by default but
can be changed if necessary).
2. Initially, the port number is set through the camera's Web interface.
3. Camera positioning coordinates (latitude, longitude, azimuth).
4. The MAC address.
5. Manufacturer, model, firmware.
6. The number of the video channel (for an IP Server).
7. Device serial number (for Axis devices only, see Axis IP Devices).
You can also customize a number of options shared by all video cameras in this group:
1. If you want to interrupt video streaming from the camera to the Server whenever it is not needed, select Yes for Break
unused connections (1).
Conditional interruption of video transmission from a camera to the Server, if:
a. the video stream is not displayed on either Client or web client layout.
b. the stream is currently not being recorded into Video Footage.
c. the stream is currently not being processed by any detection tool.
2. After starting the Client, the default setting is to display video only after the first I-frame (key frame) is received. If the
stream comes with a relatively long GOP (Group of Pictures) or GOV (Group of Video Object Planes) length, i.e., a large
number of P- and B-frames between I-frames in the stream, the video may not be available for up to a minute. In this case,
select Yes for the Low GOP keyframe rate setting (2). This will reduce the waiting time for video by pushing the preceding
I-frame that can be stored in the memory buffer on the Server. In some cases, the I-frame will not be buffered, but in most
cases this means that it will soon be received from the device.
In the Authentication group, you can set the username and password to connect to the camera.
If the username and/or password for connecting to the camera are different from the factory settings, select No in the Default
field and enter the current credentials.
Attention!
If the camera supports the Digest HTTP-authorization, add the symbol " : " to the last character of the password.
To enable video buffering on Clients, set the buffer length in milliseconds in the Video buffering group.
This value should be between 50 and 1000 milliseconds. If the value 0 is selected, video buffering is disabled.
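The allowed buffer range can be expressed as a small validation helper. This is an illustrative sketch of the rule stated above, not an Axxon Next API.

```python
def normalize_buffer_ms(value):
    """Validate a Client-side video buffer length in milliseconds:
    0 disables buffering; otherwise the value must be within 50..1000 ms.
    """
    if value == 0 or 50 <= value <= 1000:
        return value
    raise ValueError("buffer length must be 0 or between 50 and 1000 ms")
```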
In the Camera settings group you can see video image parameters (contrast, brightness, color saturation, etc.). When
configuring these, you can look up short parameters' descriptions in Axxon Next GUI. For more detailed information, please refer
to the camera manual.
Note
If you set up a camera via its web page, you cannot edit the parameters in the VMS (see Adding and removing IP
devices). To configure the camera in the VMS, select the Send settings to device checkbox.
Select a standby / substitute camera from the current Axxon Domain in the Alternative view list. The substitute camera is shown in the
layout when the main camera is offline.
Then you can configure them to show the nearest cameras to the alerted one (see Configuring Alerted Cameras layouts).
You can configure video streams under the viewing tile. If a camera supports multistreaming, you can configure two video
streams separately: high quality and low quality. When creating an IP device with a high quality video stream, a stream with a
higher resolution is selected.
To configure video streams, you should make sure that the Send settings to device checkbox is selected.
An adaptive video stream can be configured if necessary (see Configuring an Adaptive Video Stream).
Note
In most cases, the following parameters are set for video streams: bit rate, compression rate, frame rate, and resolution.
Detailed information on configurable parameters can be found in the official reference documentation for the video
camera.
To configure video streams, use the settings available in the Axxon Next interface, not the web interface of the camera itself.
If you are using the web interface to configure camera's video streaming, you have to set the expected fps value to have the
stream parameters correctly displayed (see Viewing camera status).
You can alter video stream settings by sending an HTTP request to the device (see its manufacturer's documentation) through
some macros (see Executing a web query, Starting an external program on Servers, Starting an external program on Clients).
Important
You can select any stream for live view or recording (see Selecting video stream quality in a viewing tile, Binding a
camera to an archive).
If a camera does not support multistreaming, the parameters of the video streams are identical. In this case only the parameters
of the high-quality video stream are editable (the parameters of the low-quality video stream are adjusted automatically).
Note
When some video stream parameters are changed, the video camera may automatically restart, in which case it will
become unavailable for some time (depending on the video camera).
Note
The indicator in the upper right corner displays the current time and recording status (see Time Display).
To switch between streams in the preview window, click the High-quality stream and Low-quality stream tabs.
Note
When a stream is selected in the preview window, the settings for the relevant stream are displayed; settings for the
other stream are hidden.
2. Click the button and select cameras the same settings should be applied to.
A list of cameras of the same model and firmware opens. To quickly select multiple cameras, hold down the Shift key
and select the first and last cameras the settings should be applied to. Clicking any of the highlighted cameras will
select them all.
Note
The number in brackets refers to the number of configured cameras.
If the video camera is single-channel, select Yes in the Adaptive Video Stream list.
If you specify resolution on only one side, then a scaling step that does not exceed the specified number is selected.
If you enter resolution values for both dimensions, the compressed frame will be displayed in the designated rectangle
with the constrained proportions.
Note
Axxon Next Server downscales the adaptive video stream by a factor of 2, 4, 8, and so on.
For example, if you select the horizontal resolution 680 for a 1280x720 stream, the adaptive video stream will be 640x360.
If the dimensions are not set (0 value), then the adaptive video stream will have the same resolution as the original video
stream.
3. Click Apply to save the changes.
Configuring an adaptive video stream is complete.
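The downscaling rule described above can be sketched as follows. This is an illustration of the documented behavior, assuming the Server picks the smallest power-of-two factor that brings the frame within the requested dimension.

```python
def adaptive_resolution(width, height, max_width=0):
    """Compute the adaptive stream size: downscale by a power of two
    (2, 4, 8, ...) so the width does not exceed `max_width`;
    0 means no constraint, i.e. keep the original resolution.
    """
    if max_width <= 0 or width <= max_width:
        factor = 1  # no constraint, or already small enough
    else:
        factor = 2
        while width // factor > max_width:
            factor *= 2
    return width // factor, height // factor

# The guide's example: horizontal limit 680 on a 1280x720 stream
adaptive_resolution(1280, 720, 680)  # -> (640, 360)
```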
2. In the Video camera position list (2), select the mount of the video camera.
Important!
Some system features and functions depend on the chosen position of the video camera: digital zoom, display of
video in the surveillance sector on the map, and immersive mode
3. If it is a fisheye camera, select the Common fisheye-lens lens type (3). If it is a video camera with a panomorph lens, select
the corresponding type (3). When using wide-angle dual-lens XingYun devices, select the Double sphere fisheye-
lens type.
Note
The types of device lenses certified by ImmerVision are listed in the document.
You cannot select ImmerVision lenses in Linux.
4. If it is a video camera with an ImmerVision lens, select the appropriate display mode (4): 360° panorama with virtual PTZ
(PTZ) or 180° panorama (Perimeter).
5. A typical fisheye lens with standard settings produces a skewed image in the upper part of the screen. If this is the case,
enable the Fit to frame option (5).
Important!
If you have multiple streams from a camera, you need to calibrate each stream. To do this, before applying the
settings, switch to the required stream tab in the viewing tile (see The Video Camera Object).
Important!
Video is calibrated every time you change any parameters in the Panomorph group.
d. Click the Apply button.
After applying the settings, the area outside the circle will be cut.
Configuration of the fisheye camera is complete.
By default, all ONVIF devices in the system are added as multistreaming (the ONVIF 2.0 driver, see Adding and removing IP
devices).
If the camera does not support multistreaming, then the video stream of lower quality will be disabled.
Note
In some cases (for example, if you do not have video from a camera), you may need to synchronize the time between the
server and the camera when you connect them via ONVIF.
Attention!
If you connect cameras via ONVIF, auto focus (AF) and auto aperture are not available.
Non-megapixel cameras: maximum camera resolution; average camera resolution; minimum camera resolution.
Megapixel cameras: maximum camera resolution; camera resolution closest to 1024x768; camera resolution closest to 640x480.
To connect IP devices which only partially support ONVIF functions to the Axxon Next software package, you must use an ONVIF
driver (1) with compatibility mode enabled.
Note
Such video cameras include Hikvision models and early versions of firmware from Sony, Samsung, and others.
Compatibility mode makes it possible to receive a video image from video cameras; however, some capabilities of the Axxon Next
software package will be unavailable.
Enabling compatibility mode for a video camera (2) connected using the ONVIF protocol (1) is recommended if the connection
settings are correct, but there is no video image.
2. URL of the RTSP feed (2). In general form, the address is as follows: rtsp://<IP address of RTSP server>:<Port on RTSP
server>/<Path>.
Up to three simultaneous video streams are supported from RTSP-connected cameras. To access multiple streams, enter
the relevant RTSP addresses separated by semicolons (;): rtsp://<IP address of RTSP server1>:<Port on
RTSP server1>/<Path>; rtsp://<IP address of RTSP server2>:<Port on RTSP server2>/<Path>; rtsp://<IP address of
RTSP server3>:<Port on RTSP server3>/<Path>.
Important!
Generally, RTSP server parameters (port and path) are set through the web interface of the video camera. For
details, refer to the manufacturer's documentation for the video camera.
Important!
If the username and/or password contain forbidden characters, such as "@", you have to escape these
characters with relevant ASCII codes to avoid log-in problems. The "@" symbol is escaped as %40. For example,
for a successful RTSP connection your device's URL may look like this: "rtsp://
admin:New%40edge@192.168.0.75:554/RVi/1/1".
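The escaping rule above can be sketched in Python. This is an illustrative helper (the function name is mine, not part of Axxon Next); `urllib.parse.quote` percent-encodes reserved characters such as "@", and the host, port, and path are the example values from the text:

```python
from urllib.parse import quote

def rtsp_url(user, password, host, port, path):
    """Build an RTSP URL, percent-encoding reserved characters in the credentials."""
    # safe="" forces every reserved character (including "@") to be encoded
    return f"rtsp://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}/{path}"

print(rtsp_url("admin", "New@edge", "192.168.0.75", 554, "RVi/1/1"))
# rtsp://admin:New%40edge@192.168.0.75:554/RVi/1/1
```

The "@" in the password becomes %40, while the "@" separating the credentials from the host is left intact.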
Note
In some cases, the address format may be different. For example, a user name and password may be added to
the address for connecting to the video camera.
You are advised to refer to the manufacturer's documentation for the video camera.
Even if the password field is empty, the address string must include a colon (:).
A correct address may look like this: rtsp://user:@10.10.27.50:10017/...
An example of an incorrect address: rtsp://user@10.10.27.50:10017/...
The Video camera object is created. If the address of the RTSP server is correctly specified, the video feed from the camera is
shown in a preview tile.
Note
Port, Login, and Password cannot be edited. These settings are specified in the URL of the feed.
If video is unavailable, examine the log file APP_HOST.lpint, which is located in the folder <Axxon Next installation
folder>\AxxonNext\Logs.
Important!
If APP_HOST.lpint is empty, check the logging detail level for the Axxon Next Server in the log management utility
(see Configuring Logging Levels). The recommended detail level is Debug.
Since Drivers Pack version 3.62.2953, RTSP streaming over HTTPS is supported. To set this option, set the Transport Protocol
parameter to rtspoverhttps.
Important!
To get good video from some cameras, you should select No for the SSRC filter in the video settings.
Note
For telemetry to work correctly, set RefreshRegTime to more than 600.
Important!
No more than one SIP server can be used for connecting IP devices via the GB/T28181 protocol. This means that
several Video Capture Device objects of the GBT28181 type can be created in the Axxon Next hardware tree;
however, the part of the address after @ must match for all of them. The server ID, local address, external
address, and port must be the same for all devices. If at least one parameter differs (for example, the local IP
address is not set for one device but is set for the others), that device will not start.
Note
Axxon Next does not support auto-discovery of devices connected via GB/T28181 and these devices are not added using
the Camera discovery tool.
After configuring the device as described earlier, add it to Axxon Next as follows:
1. Run the IP discovery wizard (see Adding and removing IP devices).
2. In the form for manually adding an IP device, in the Vendor list, select GBT28181 (1).
3. In the IP address field specify the value of Device ID parameter set during IP device configuration (2). The following
additional parameters can be specified optionally as follows:
[gbt://]deviceID[/videoPort]@serverID[-serverLocalIP[/serverExternalIP]]
OR
[gbt://]deviceID[/videoPortFirst-videoPortLast]@serverID[-serverLocalIP[/serverExternalIP]]
where:
deviceID is the Device ID parameter;
serverID is the identifier of the Axxon Next Server generated according to the same rules as the IP device ID (see
above).
videoPort is the port for receiving video;
videoPortFirst-videoPortLast is the range of ports for receiving video;
serverLocalIP is the local IP address of the Axxon Next Server, which sets the network interface on which the Server
should be available;
serverExternalIP is the global IP address of the Axxon Next Server; this parameter is in use when the Axxon Next
Server is behind the gateway. In this case, this IP address is specified as the SIP Server IP address in the IP device
settings.
Examples:
34020000001320000008@34020000002000000001
34020000001320000008@34020000002000000001-10.0.40.246/113.125.160.58
34020000001320000008@34020000002000000001-10.0.40.246
34020000001320000008@34020000002000000001-/113.125.160.58
34020000001320000008/50200@34020000002000000001
34020000001320000008/50200-50210@34020000002000000001-10.0.40.246
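The address grammar and the examples above can be restated as a small helper. The sketch below is illustrative only (the function and parameter names are mine, not part of Axxon Next); it assembles the optional parts in the documented order:

```python
def gbt_address(device_id, server_id, video_port="", local_ip="", external_ip=""):
    """Compose a GB/T28181 connection address of the form
    deviceID[/videoPort]@serverID[-serverLocalIP[/serverExternalIP]].
    video_port may also be a range such as "50200-50210"."""
    addr = device_id
    if video_port:
        addr += "/" + video_port
    addr += "@" + server_id
    if local_ip or external_ip:
        addr += "-" + local_ip        # local_ip may be empty, as in the last example above
        if external_ip:
            addr += "/" + external_ip
    return addr

print(gbt_address("34020000001320000008", "34020000002000000001",
                  local_ip="10.0.40.246", external_ip="113.125.160.58"))
# 34020000001320000008@34020000002000000001-10.0.40.246/113.125.160.58
```

Each of the documented examples corresponds to one combination of the optional arguments.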
4. In the Port field, enter the local port number on which the Axxon Next Server listens for messages from the
IP device (3). Usually this is the default SIP port: 5060.
Note
The IP device SIP port is detected automatically.
5. The Username and the Password fields are not used (4).
6. Click the
button.
Connection of the camera via GB/T28181 is now complete.
Examples of IP device settings for connection via GB/T28181 standard
On the page:
• Jovision
• Bosch
• Huawei
• Hikvision
• GB/T28181-2011
• GB/T28181-2016
• Dahua
Examples of IP device settings and connection settings in Axxon Next for GB/T28181 standard are given below.
Note
This protocol is usually supported by cameras intended for the Chinese market, which often have no English
interface. For this reason, some of the screenshots below are in Chinese.
Jovision
Configure a Jovision camera for operation via GB/T28181 standard as follows:
1. Configure the IP device as follows:
a. Go to the IP device web interface.
b. Enter your login and password.
e. In the Heartbeat timeout field, enter the period in seconds for sending messages confirming the device activity
(3).
f. In the Server ID field, enter the Axxon Next server identification number (4). The example shows
34020000002000000001.
g. In the Server IP address field, enter the Axxon Next server IP address (5). The example shows 172.17.12.2.
h. In the Server port field, enter Axxon Next server port number assigned for receiving messages from the IP device
(6). The example shows port 5070.
i. In the Device ID field, enter the device identification number as described in Connecting cameras via the GB/
T28181 protocol (7). The example shows Device ID 34020000001350000001.
j. In the Device port field, enter the IP device SIP port number (8).
k. In the Alarm device ID field, enter the channel identification number (9). The same value as Device ID may be
used.
l. Click the Set button (10).
2. In Axxon Next:
a. Example value for the IP address field: 34020000001350000001@34020000002000000001-10.0.40.246/172.17.12.2
b. Set Port to 5070.
Huawei
Configure a Huawei camera for operation via GB/T28181 standard as follows:
1. Configure the IP device as follows:
a. Go to the IP device web interface.
b. Go to Settings - Platform connections - Second Protocol Parameters - T28181.
b. From the 分辨率 (Resolution) drop-down list, select the main stream resolution (2).
c. From the 视频编码 (Codec) drop-down list, select the main stream codec (3).
3. Configure the second stream:
a. From the 码流类型 (Stream type) drop-down list, select 子码流 (Second stream) (1).
b. From the 分辨率 (Resolution) drop-down list, select the second stream resolution (2).
c. From the 视频编码 (Codec) drop-down list, select the second stream codec (3).
4. Click 保存 (Save).
GB/T28181-2011
14. Enter the identifiers of all channels of the IP device in the same format as the device identifiers (2). The example
shows ID 34020000001320000002.
15. Click the 保存 (Save) button (3).
In Axxon Next:
1. Example value for the IP address field: 34020000001320000001@34020000002000000001-109.248.191.112
2. From the 传输协议 (Transport protocol) drop-down list, select the transport level protocol to be in use: UDP or TCP (1).
3. Set the 启用 (Enable) checkbox (2).
4. From the 协议版本 (Protocol version) drop-down list, select GB/T28181-2016 (3).
5. In the SIP服务器ID (SIP Server ID) field, enter the Axxon Next Server ID (4). The example shows Server ID
34020000002000000001.
6. In the SIP服务器域 (SIP Server domain) field, enter the first 10 digits of the address according to GB/T-2260-2007 (5).
7. In the SIP服务器地址 (SIP Server address) field, enter the Axxon Next server IP address (6). The example shows IP
109.248.191.112.
8. In the SIP服务器端口 (SIP Server Port) field, enter the Axxon Next server port number assigned for receiving messages
from the IP device (7). The example shows port 5070.
9. In the SIP用户名 (SIP user name) field, enter the device identification number as described in Connecting cameras via the
GB/T28181 protocol (8). The example shows Device ID 34020000001320000001.
10. In the 注册有效期 (Registration period) field, enter the device registration period in seconds (9). The value must not be less
than 600.
11. In the 心跳周期 (Heartbeat period) field, enter the period in seconds for sending messages confirming the device activity
(10).
12. From the 28181码流索引 (Video stream) drop-down list, select one of the streams configured earlier (主码流 (定时) for
Main stream or 子码流 for Second stream) (11).
13. In the 注册间隔 (Registration interval) field, enter the device registration interval in seconds (12).
14. In the 最大心跳超时次数 (Maximum number of Heartbeat timeouts) field, enter the maximum number of missed Heartbeat
messages after which the device connection is considered lost (13).
15. Go to the 视频通道编码ID (Video channel ID) tab at the bottom of the settings page (1).
16. Enter the identifiers of all channels of the IP device in the same format as the device identifiers (2). The example
shows ID 34020000001320000002.
17. Click the 保存 (Save) button (3).
In Axxon Next:
1. Example value for the IP address field: 34020000001320000001@34020000002000000001-109.248.191.112
2. Set Port to 5070.
Dahua
Configure a Dahua camera for operation via GB/T28181 standard as follows:
1. Configure the IP device as follows:
a. Go to the IP device web interface.
b. Go to 网络设置 - 平台接入 - 国标28181 (Network settings - Platform access - GBT28181).
General Device + + + + +
generic + + - - - +
Devices connected via General Device drivers can be found via the IP device discovery wizard. The method for adding them to the
system is the same as for ordinary devices (see Adding and removing IP devices).
Note
Axis devices are subject to a special restriction: if the user name and password for device access differ from the
default values, the number of channels on the device cannot be discovered. Therefore, all non-integrated devices
whose user name and password differ from the default values are shown in search results as 1-channel
General Devices.
2. In the Model field, select General Device or generic (2). For Axis and Bosch General devices, select the number of
channels on the device.
3. Enter the IP address and port for the device connection (3).
4. Enter the user name and password for connecting to the device (4).
Attention!
If a device connected via a generic driver is temporarily unavailable or has incorrect connection settings, it is
not added to the configuration.
Attention!
FFmpeg currently has the following limitations:
• only one stream is supported;
• video codecs are limited to H.264/H.265, audio to AAC.
protocol://[login:password@]IP-address[:port][/path]
Note
You can set the login and password either in the address string or in the corresponding fields when adding the device.
If authentication parameters are specified in both ways, the address string takes priority.
Attention!
If you use the address string method, specify the port number; if no port number is specified, the default ports are
used (554 for RTSP, 1935 for RTMP).
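The address format and default-port rule can be sketched as follows. This is an illustrative helper only (the function name and structure are mine; only the format string and the default ports 554/1935 come from the text):

```python
# Default ports named in the note above
DEFAULT_PORTS = {"rtsp": 554, "rtmp": 1935}

def device_address(protocol, ip, port=None, path="", login="", password=""):
    """Build an address of the form protocol://[login:password@]IP-address[:port][/path]."""
    cred = f"{login}:{password}@" if login else ""
    if port is None:
        # fall back to the documented default port for the protocol, if any
        port = DEFAULT_PORTS.get(protocol)
    port_part = f":{port}" if port is not None else ""
    path_part = f"/{path}" if path else ""
    return f"{protocol}://{cred}{ip}{port_part}{path_part}"

print(device_address("rtsp", "192.168.0.75", path="RVi/1/1"))
# rtsp://192.168.0.75:554/RVi/1/1
```

The login/password and path segments are optional, matching the bracketed parts of the documented format.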
After you add a device, you can set a parameter string for FFmpeg app in the Additional Options field. Parameters and their
values differ by format, particular device and protocol used.
Note
See the full list of parameters for RTSP protocol on the page.
dshow(<index>)://(<video_device_name>)(:<audio_device_name>)
If no index is specified in the address, the value is 0. Use a non-zero index if you use multiple devices with the same name.
For example:
Note
If a video or audio device is not present, it may be omitted from the address.
After you add the device, you have to set up its streams. For archive recording and transferring video over the network,
the MJPEG codec is recommended; for detection purposes, use YUV422.
If required, you can set a parameter string for FFmpeg app in the Additional Options field.
For example: receiving video from a USB camera in YUV420P format at 1280x960 resolution.
Parameters and their values differ by format and particular device. To list possible parameter values, run the following command
from the Windows command line:
Receiving video from the Server monitor with the FFmpeg driver
To receive video from the Server monitor screen, add an object with the following address format:
gdigrab://desktop
Note
To receive video from remote Clients, you have to use RTSP transmission (see Receiving video from the remote Client
monitor with the FFmpeg driver), or install Axxon Next's Server services on your Client (see Installation).
By default, videos are transmitted from all Server monitors in MJPEG format. YUV422 format is also available.
Note
YUV422 requires more network bandwidth. Take this into account when you select a format.
Receiving video from the remote Client monitor with the FFmpeg driver
Your Server can receive video along with system and microphone audio from a remote Client with the FFmpeg driver over RTSP.
To do it, follow the steps below:
1. On the Server:
a. Open the port for receiving data from the remote Client
b. Add a 1 channel device and specify its address in the IP address field in the following format:
listenrtsp://<Server IP-address>:<Port>/<RTSP-link>
Note
RTSP link may be omitted.
where
Codec parameter may take mpeg2video, mpeg4, h264 or hevc value;
video_size 640x480 and -muxdelay 0.1 parameters may be omitted or altered.
If necessary, you may specify additional parameters in this command.
Supported parameters Description
After the command execution, remote Client's screen is shared on your display.
Receiving video from the application window on the Server with the FFmpeg driver
To receive video from the application window on the Server, add an object with the following address format:
gdigrab://"Window title"
Attention!
The address may contain only Latin characters. If the application window title contains other characters, use a
third-party utility to change the title.
By default, videos are transmitted in MJPEG format. YUV422 and MPEG4 formats are also available.
Note
YUV422 requires more network bandwidth. Take this into account when you select a format.
Note
Do not use video with B-frames.
To create and configure a virtual video camera, complete the following steps:
1. Run IP Device Discovery Wizard (see Adding and removing IP devices).
2. In the form for manually adding an IP device, select AxxonSoft in the Vendor drop-down list (1).
3. Select Virtual from the models list to emulate a single-stream video camera. Select Virtual several streams to emulate a
video camera supporting multiple streams (2).
Note
The name of the video file and its file path must consist only of Latin characters.
Note
Scanning for files in a specified directory is limited to one minute.
6. By default, a video will be played back endlessly. To switch to one-shot playback, set Yes for the corresponding
parameter.
Note
For your convenience, you can click the button and configure the mask on a still frame / snapshot. To
undo, click this button again.
Note
You cannot do it if the button is depressed (see Putting a text over the camera window).
2. In the FoV, set the nodes of the closed area you want to obscure.
Note
When the area is being constructed, the nodes are connected by a two-color dotted line which outlines the area's
borders.
Action: Position the cursor on a node and hold down the left mouse button while you move the mouse.
Result: Moves the area node.
3. You can mask the selected area with black matte (by default), or pixelate it. To pixelate the area, do the following:
a. Click .
b. Select the checkbox.
Note
For your convenience, you can click the button and configure the mask on a still frame / snapshot. To
undo, click this button again.
2. Click any mouse button anywhere in the FoV to set two anchor points of the rectangular area where the text will be
displayed. The taller the area, the larger the font size.
Note
To remove the area, click the button.
3. Click .
4. Enter the desired text.
Attention!
The text string has to fit the width of the area; otherwise, you cannot apply the settings. For longer text, make the
area wider.
2. Select Yes from the list in the Enable field to enable the object (2).
3. Enter the name of the IP server in the Name field (3).
4. Specify the number of the network port (4). The default value is 80.
Note
The port number is initially set through the IP server's web interface.
Note
The login and password for connecting to the IP server are set through its web interface.
Configuration of IP server channels must be performed separately for each channel (with the help of child objects of Video
camera).
By default, you cannot delete child Video Camera objects from the IP server. To enable this feature, do as follows:
1. Quit Client.
2. Start a text editor and open the AxxonNext.exe.config configuration file located in: <Axxon Next installation
folder>\AxxonNext\bin
3. Find the line <add key="AllowIpServerChannelRemove" value="false" /> and change false to true.
4. Save the changes to the file.
You can now delete camera objects from the IP server.
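The manual edit in steps 2-4 can also be done programmatically. The sketch below uses Python's standard-library XML parser; only the key name and file name come from the text, while the function name and the surrounding XML structure assumed by the test are mine:

```python
import xml.etree.ElementTree as ET

def allow_channel_remove(config_path):
    """Set AllowIpServerChannelRemove to "true" in AxxonNext.exe.config.
    .exe.config files are XML; the setting sits on an <add key="..." value="..."/> element."""
    tree = ET.parse(config_path)
    for add in tree.getroot().iter("add"):
        if add.get("key") == "AllowIpServerChannelRemove":
            add.set("value", "true")
    tree.write(config_path, xml_declaration=True, encoding="utf-8")
```

As with the manual edit, quit the Client before changing the file and restart it afterwards for the setting to take effect.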
Attention!
You cannot restore a deleted object. You will need to create the IP server again.
Attention!
When a microphone is reassigned from one camera to another, all previously recorded audio is also transferred; when
recorded video on the new camera is played, the transferred audio is played back.
Note
This setting is used during synchronized video and audio monitoring of a situation, as well as during synchronized video
and audio recording to the archive (see the section Audio Monitoring).
In all other cases the Microphone object will automatically be displayed in the objects tree as a child of the video camera itself.
To configure the Microphone object, perform the following:
1. Select the Microphone object in the objects tree (1).
5. Choose a video camera to associate this microphone with (5). As a result of this operation, the selected camera will
become a parent object for the microphone.
6. Click the Apply button.
The microphone will then be switched to its assigned work mode.
To check the microphone's operation, you must perform the following steps:
1. Select the Sound on/off check box in the Summary group.
2. Enable the PTZ device by selecting Yes in the Enable field (2).
3. Enter the name of the PTZ device (3).
4. You can use discrete PTZ control buttons even if a camera does not support discrete mode; to enable this, select Yes for
the Discrete PTZ Control via Continuous parameter. In this case, discrete PTZ control will be emulated via continuous
control commands (4).
5. Select the Home preset by specifying the required identifier (5). The Home preset will be applied automatically after the
time period specified in the Home preset timeout field (6).
6. To allow multiple users with the same priority to control a PTZ camera simultaneously, select Yes in the Multiple control list
(7). Otherwise, only one user at a time will have control (see Controlling a PTZ Camera).
7. To use device's existing presets in Axxon Next, do the following:
a. If a camera supports the ONVIF protocol, set Yes for the New PTZ Interface parameter (8). The device's presets
will automatically appear in the PTZ control panel.
b. In any other case:
Attention!
If you have not enabled the Use device presets option, the existing presets can be lost if the
following conditions are in place:
1. In Axxon Next, the option to record presets to the device is enabled (see item 8).
2. In Axxon Next, a preset with the same ID is created (see Creating and editing presets).
8. Configuring patrol:
a. Choose the default patrol mode: Yes - on, No - off (9).
If the default patrolling is enabled, it can be stopped in PTZ Control Panel (see Patrolling). However, after you
finish the PTZ control session, patrols will resume automatically.
b. Set the transition speed from one preset to another in arbitrary units from 0 to 100 (10).
c. Set the interval of time (in seconds) at which the PTZ device will switch between presets while in Patrol mode (11).
9. By default, presets are stored on IP cameras. If you want to store presets on Server, select No in Save presets (12).
Note
This option is available only for devices that support Absolute Positioning.
10. If necessary, configure Tag & Track (13, see Configuring object tracking).
11. Depending on the camera, you may find other options in the Other group (14). To configure them, refer to the
interface help section and tooltips, or to the official documentation.
12. Click the Apply button.
The PTZ device will then be switched to its assigned work mode.
To check the functioning of the PTZ device, click the Test button. If the PTZ device is configured correctly, it will turn one step
and return to its original position.
Note
The corresponding Camera object should have the Send settings to device option enabled (see The Video
Camera Object).
Note
When a channel object is enabled / disabled, all Input and Output child objects are automatically enabled /
disabled.
If a device is defined as an IP Server, an Input will be displayed as a child of a Channel object in the object tree (see The Channel
object).
To configure an Input object, perform the following:
1. Select the Input object in the objects tree (1).
Note
If an Input is a child of a Channel object, then:
1) Turning on an Input automatically enables its parent Channel object.
2) Turning off all Input and Output child objects automatically disables their parent Channel object.
If an Input is part of an IP server, the Input settings allow choosing the video camera of the IP server it will be matched to. When
you do this, the Input object will appear as a child object of the specified camera in the object tree.
The Axxon Next software package enables you to work with virtual Inputs. This involves triggering a virtual Input and producing a
virtual Input event / alarm in the VMS. When triggered, the virtual Input status switches - Closed / Open.
To create and configure a virtual Input, complete the following steps:
1. Run IP Device Discovery Wizard (see Adding and removing IP devices).
2. In the form for manually adding an IP device, select HttpListener in the Vendor drop-down list (1).
3. In the Port field, specify the port number that will be used for Input status queries (2).
Attention!
For a virtual Input to work correctly, use the Open circuit setting.
You can configure virtual Inputs in the same way as real ones. Also you can specify the time-out when virtual Inputs reset their
status in the Alarm ExpirationTime field of the IP Server object.
Note
It ranges from 0 to 100.
Note
This setting is applied only after you disable and enable the Input again.
Note
If an Output is a child of a Channel object, then:
1) Turning on an Output automatically enables its parent Channel object.
2) Turning off all Input and Output child objects automatically disables their parent Channel object.
If an Output is part of an IP server, the Output settings allow choosing the video camera of the IP server it will be matched to. When
you do this, the Output object will appear as a child object of the specified camera in the object tree.
Attention!
Audio notifications cannot be played back via the system speakers on a remote Client. In this case, you are advised to
run an external program on Clients.
In Axxon Next you can create the following types of Speaker objects:
1. IP speaker device. Created automatically if there is an audio outlet on an IP device.
Note
One audio outlet on an IP device corresponds to one child Speaker of the Camera object.
2. System speaker. Created manually. Sound on the system speaker is played back using the server's sound card.
A Speaker object can play audio notification files with the extensions:
1. .wav
2. .mp3
3. .mkv
4. .avi
The following audio notification file encoding formats are supported:
1. G.711
2. G.726
3. PCM
The audio notification file should be stored on the computer corresponding to the Server object on the basis of which the
Speaker object is registered.
3. Select the speaker mode: disabled, play back on Server, play back on Clients (2).
4. In the Audio file field (3), enter the full path to the audio notification file. This parameter is mandatory.
5. In the Volume field (4), enter the desired speaker volume level.
Note
By default, IP device speakers are disabled. To enable, for the Enable value (1), select Yes. When configuring the
speaker of an IP device, you can set other parameters as well, such as the compression algorithm for the audio
signal sent to the speaker for playback (2). Which speaker parameters you can configure is determined by the
protocol for integration of the IP device and the Axxon Next software package
2. Select the speaker mode: disabled, play back on Server, play back on Clients (2).
3. In the Name field (3), enter the desired name of the Speaker object.
4. In the Audio file field (4), enter the full path to the audio notification file.
5. In the Volume field (5), enter the desired speaker volume level.
6. To parent an IP device to a speaker:
a. By default, IP device speakers are disabled. To enable, for the Enable value (1), select Yes.
b. When configuring the speaker of an IP device, you can set other parameters as well, such as the compression
algorithm for the audio signal sent to the speaker for playback (2). Which speaker parameters you can configure is
determined by the protocol for integration of the IP device and the Axxon Next software package.
c. Choose a video camera to associate this speaker with (3). As a result of this operation, the selected camera will
become a parent object for the speaker.
7. Click the Apply button.
Configuration of the Speaker object is now complete.
When you do this, the audio notification file whose path you indicated in the corresponding field plays back (see the section
Configuring a Speaker Object).
The device is controlled via the USB interface. Electrical and technical specifications of the card are given in the Electrical and
technical specifications of AGRG-IO-16/8-WD-DS devices section.
Connect the AGRG-IO-16/8-WD-DS card to the Server as follows:
1. Switch the computer power supply off. Remove the system cover.
2. Install the AGRG-IO-16/8-WD-DS card into a vacant motherboard slot and fix it in the casing.
3. Connect the loop (bundled with the distribution kit) to the J1 connector and to a vacant USB connector on the
computer's motherboard.
4. To activate hardware hang monitoring (watchdog), connect the wires to the H2 and H3 connectors.
5. To connect sensors and relays, solder the wires to the connector bundled with the distribution kit.
a. The connecting wires from the executive devices are soldered to the contacts marked as "Relay" (see the table
below).
Connector Application Connector Application
2 Relay 1 27 Sensor 5
3 Relay 2 28 Sensor 6
4 Relay 2 29 Sensor 6
5 Relay 3 30 Sensor 7
6 Relay 3 31 Sensor 7
7 Relay 4 32 Sensor 8
8 Relay 4 33 Sensor 8
9 Relay 5 34 Sensor 9
10 Relay 5 35 Sensor 9
11 Relay 6 36 Sensor 10
12 Relay 6 37 Sensor 10
13 Relay 7 38 Sensor 11
14 Relay 7 39 Sensor 11
15 Relay 8 40 Sensor 12
16 Relay 8 41 Sensor 12
b. The connecting wires from the sensors are soldered to the contacts marked as "Sensor" (see the table below).
Connector Application Connector Application
17 Sensor 1 42 Sensor 13
18 Sensor 1 43 Sensor 13
19 Sensor 2 44 Sensor 14
20 Sensor 2 45 Sensor 14
21 Sensor 3 46 Sensor 15
22 Sensor 3 47 Sensor 15
23 Sensor 4 48 Sensor 16
24 Sensor 4 49 Sensor 16
6. Fix the soldered connector in the casing bundled with the distribution kit.
7. Connect the ready-for-use connector to the external connector of the card in order to connect the sensors and relays to the Server.
The AGRG-IO-16/8-WD-DS card is now connected.
Parameter Specification
Inputs Quantity - 16
Type - current loop
Galvanic isolation - Yes
Maximum voltage - 60 V
Rated voltage - 12 V
Maximum current - 60 mA
Outputs Quantity - 8
Type - open collector
Galvanic isolation - Yes
Maximum voltage - 300 V
Maximum current - 150 mA
Minimum pick-up voltage - 1.0 V
Minimum pick-up current - 5 mA
Ping interval of all alarm inputs 100 ms for all contacts. Customizable
2. Click Apply.
3. The device will be reconnected. Upon reconnection, the entered serial number is checked against the actual one. If
the numbers do not match, a separate event is registered in the system log.
Note
The Friendly name parameter is configured through the web interface of the IP device: Setup -> System options ->
Network -> Bonjour.
Note
The default value of the Friendly name parameter is as follows: AXIS <model name> - <mac address>, where <model
name> is the model of the Axis IP device and <mac address> is its MAC address (for example, AXIS 214 - 00408C7D2610).
Attention!
In HID mode, some control panel buttons may not work in Axxon Next.
2. Connecting a panel via the Axxon Next driver. Using this method, the panel is added to the system similarly to that of IP
devices (see Adding and removing IP devices).
Attention!
To connect a control panel in this way, the Axxon Next software must be installed in the Server and
Client configuration. A control panel cannot be operated from a remote Client.
The following control panels are supported in the current version of Axxon Next:
Control panel HID mode Axxon Next driver
Dahua DH-NKB1000 + +
PELCO KBD5000 + +
Videotec DCZ + +
Hikvision DS-1005KI + +
Hikvision DS-1100KI - +
Hikvision DS-1200KI - +
Hikvision DS-1600KI - +
UNIVIEW KB-1100 - +
• Moving through the menu items and within the fields is carried out by deviating the remote control vertically /
horizontally.
5. Click the .
6. Enable the newly created IP Server and its Channel child object.
You have configured the UNIVIEW KB-1100 Joystick Remote Control and CCTV keyboard.
7.2.4.3.5.1 Configuring the Hikvision DS-1200KI control device before adding it to Axxon Next
Configure the Hikvision DS-1200KI PTZ control device as follows before creating the corresponding object in Axxon Next:
1. Set the control device address: Select System - Network in the device internal menu, then disable DHCP and set IP
address and gateway.
2. Set the Axxon Next server IP address and port on the device: Open the device web interface by entering the above set IP
address in the web browser, then select Platform Access - Third-Party Platform.
Note
The web interface is only available over HTTPS by default, so use the https:// prefix before the IP address.
4. Transfer the PTZ control device to operating mode by one of the following actions:
a. Select any monitor MON (monitor) [integer in the range 1-9999], any camera CAM (camera) [integer in the
range 0-999999]; in this mode all keys will work, including the WIN (subwindow of video wall) and MULT
(layout size) keys for the monitor and camera.
b. Select any camera CAM (camera) [integer number in the range 1-999999 (number 0 is available only when
setting the monitor)]; the WIN, MULT and CAM-G keys will not work in this mode.
c. Select any monitor MON (monitor) [integer in the range 1-9999] and any group of cameras CAM-G (camera
group) [integer in the range 1-999999]; in this mode, only the MULT, OK keys and the rotations of the joystick
axes will work.
Note
To select these parameters, enter a number on the device keypad, then press the corresponding key
(MON / WIN / CAM / CAM-G).
If the server IP address and/or port is not accessible from the PTZ control device, the "Connect failed"
message will be displayed on the control device screen after pressing the MON / WIN / CAM / CAM-G keys.
7.2.4.3.5.2 Features of the Hikvision DS-1200KI control device operation in Axxon Next
When the MULT, PRESET, PATROL, or PATTERN keys are pressed on the device, the following actions are performed:
• when a number in the range 1-99 is entered and MULT is pressed, Axxon Next receives a message about
pressing the corresponding separate key with a number in the range 23-121 (B22-B120);
• when pressing PresetRec for the first time, Axxon Next receives a message about pressing key 13 (B12), and the
device displays Record started;
• after pressing PresetRec for the second time, Axxon Next receives a message about pressing key 12 (B11), and
Record ended appears on the device display;
• when entering a number in the range 1-65535 and pressing PresetRec, Axxon Next receives a message about
pressing key 22 (B21), and the device displays PRESET:;
• when entering a number in the range 1-65535 and pressing Patrol, Axxon Next receives a message about pressing
key 17 (B16), and the device displays PATROL:;
• when entering a number in the range 1-65535 and pressing PatternPlay, Axxon Next receives a message about
pressing key 18 (B17), and the device displays PATTERN:;
• when entering a number in the range 65536-999999 and pressing any of the PresetRec / Patrol / PatternPlay
buttons, nothing happens: such numbers are not processed.
If the keyboard is recognized as an HID device, hold down the Shift key on the device to toggle its operating mode.
Note
To learn about connecting the device, consult the manufacturer's official documentation.
Note
If you have a WS-216 card added through the Yuan driver, you should activate the Desktop Experience
feature in Windows Server OS.
Cameras connected to Axxon Next through the WS-216 card require the following configuration: add the ITV tw5864 PCI device
configuration (2) and select the Send settings to device checkbox (see The Video Camera Object).
Note
Axxon Next does not support receiving uncompressed video from WS-216 video capture cards.
For video cameras that are connected through WS-216 video capture cards, you can choose one of the two codecs for a
video stream:
1. H.264 (configurable)
2. H.264 (minimum resolution, non-configurable)
Attention!
Please see the list of supported OS for the YUAN PD652 board on the official website of the manufacturer.
1. Disable the system check for the digital signature of the drivers and install the card driver.
2. Connect the camera to the card.
3. Create an IP device in Axxon Next. The search result shows the camera connected through the YUAN PD652 card as
follows:
7.2.4.7 Joysticks
Only joysticks that are detected in Windows as gaming input devices can be used in Axxon Next for controlling PTZ cameras.
Information on how to view the status for a connected joystick is available in official Microsoft documentation.
Note
We recommend that you calibrate the joystick before you start working with Axxon Next.
6. Set permissible ranges for temperature (in degrees Celsius) and relative humidity (in percent) (2). If a reading falls
outside the range, the corresponding Input (sensor) triggers an alarm.
7. Set the check period in milliseconds (3).
8. If you need to report sensor statuses when readings are within the range, set Yes for Always report other sensors
status (4). In this case, the following records will appear in the log file at the specified time intervals (see step 7):
Special humidity ray#16 changed status to: false ,Sensor value: 16,8 Correct range [15,
58]. Time: …..
Special temperature ray#17 changed status to: false ,Sensor value: 29,8 Correct range [20,
60]. Time: …..
9. Click Apply.
The board is now configured. The current temperature/humidity value will be displayed next to the input's icon on the Map (see
Displaying device status).
Attention!
To use Tag & Track, make sure you have a PTZ camera in Axxon Next that supports Absolute Positioning. The devices
that support Tag & Track Pro are listed in the Drivers Pack documentation. If a PTZ camera does not meet the
requirement, you should add it to the VMS via Onvif.
With Tag & Track Lite, the operator is alerted to the camera in front of which the moving object is most likely to appear next. The
prediction is based on the object's trajectory and the mapping of cameras to map locations.
For these features to work, Scene Analytics must be enabled on all relevant cameras (see General Information on Scene Analytics).
1. The degree of prediction (1). This value should be in the range from 1 to 3000. The higher the value, the smoother the
panning of the camera.
2. The rate at which coordinates are sent in milliseconds (2). This value should be in the range from 100 to 3000.
Note
To search for a camera, enter its ID, or its full or partial name, in the box (1).
Note
You can add only those cameras for which Object Tracking has been created and activated.
3. Repeat this action for all cameras that you want to link to the PTZ. You can connect any number of panoramic cameras to
a PTZ camera.
2. Left-click to add a point in the frame of the panoramic camera toward which the PTZ camera is currently oriented.
Attention!
Set calibration points on the same plane (floor, ground). Do not set the points on different planes (for example,
when some are on the ground while others are on a tree, etc.).
Attention!
The entirety of the moving object must be inside the field of view of the PTZ camera.
After setup is complete, we recommend that you perform a calibration check. To do this:
1. Click the button to the right of the preview for that PTZ camera.
2. Click different points in the camera's field of view. If the PTZ camera is positioned correctly, it needs no calibration.
Note
To delete calibration points, click the button.
Note
Once created, the Event Source object is enabled by default. To disable, select No in the Enable field.
1. In the Video sources group, select a camera from the list and click the + button to add the camera for titles overlay(1).
Titles from any one POS device can be overlaid on video from several cameras.
Note
To disable titles overlay for a camera, select the Delete checkbox and click the - button.
2. Select a camera in the Video sources group (2). The Adjust the text area group shows video from the selected camera
and the adjustable titles area (3).
3. Configure the titles area: resize it by moving the anchor points, or move it using drag-and-drop.
4. Change transparency with the slider (4). Slide left for more transparency, slide right for less.
5. In the Font field, click the button and specify font settings in a standard Windows dialog box (1).
Note
For shops where the checkout is never crowded, we recommend setting the captions display duration to under 10
seconds.
10. If you don't need to display any lines following the end marker of the receipt, set Yes for the Erase Upon Completion
parameter (6).
11. Select a method of processing incoming data (7):
a. PROCESS_LINEARLY – incoming data is stored in a buffer until the next EOL is received, and only after that is
transmitted to Axxon Next.
b. PROCESS_EVENT – each data portion is immediately transferred to Axxon Next.
c. PROCESS_JSON – not available in the current version.
12. Click the Apply button.
You have configured the titles view.
Attention!
It is strongly recommended that you configure this setting for shops with low-intensity events at the checkout.
Otherwise, the accumulation of 2000 lines can take a long time.
1. Populate the Begin words (1) group. To add words, click the + button. To remove words, select their Delete check boxes
and click the - button.
Note
You can add any number of delimiting words. Double click a word to edit it.
3. Delimiting words are case-sensitive by default. To ignore case, select Yes in the corresponding field (1).
Note
To remove words, select their Delete check boxes and click the - button.
If the setup procedure was done correctly, the events from specified objects will be displayed in the viewing tile on Intellect in the
same way as POS device captions do (see Viewing titles from POS terminals).
Note
CommaxComplexServer is smart apartment complex management software. It receives input from gates, doors,
elevator call buttons, etc.
The Event Source object is added to the system.
5. Select the new object.
Note
You can present information as caption (titles) superimposed on video in the Camera window. See POS configuration
instructions for that (see Configuring POS devices).
8. Click Apply.
The CommaxComplexServer connection is now configured.
2. To create a Group object, click the button or select Add group in the context menu of the main group.
Note
By default, an object (whose name is the same as the Axxon-domain) is available, including all cameras that have
been created in the system. This object is referred to here and elsewhere in the document as the "main group".
This object cannot be deleted. Cameras in this group cannot be deleted.
Note
Video cameras are added to groups via management operations (see the section titled Managing Group and Video
camera objects). The standard method for adding video cameras to groups is presented below.
1. In the main group, select a video camera to add to the selected group.
2. Click the button or select Copy from the context menu of the selected video camera.
3. Select the Group object to which you need to add the video camera.
4. Click the button or select Paste from the context menu of the selected group.
5. Fill the groups with the necessary video cameras (see steps 1-4).
Note
One video camera can be assigned to multiple groups.
A system of groups and subgroups can be created via group management operations and video camera management operations
(see the section titled Managing Group and Video camera objects).
Group objects can be moved or copied to other Group objects or to the main group.
Action Execution
Note
This is useful, for example, in the following case:
• The video from a bus on the route network is written to a temporary archive on that bus's local server;
• When the bus arrives at the depot, the archive and camera events are automatically transferred to the
centralized server.
Attention!
The Server name is case-sensitive.
If the name is Server1, no connection will occur if you enter server1 or SERVER1.
4. Specify a port of the local Server from which data is transmitted (2). If you use the TCP protocol (see step 8), specify the
RTSP port number (554 by default, see Configuring an RTSP Server). If you use the rtspoverhttp or rtspoverhttps protocol,
specify the web server's port (80 by default, see Configuring the web server).
5. Specify the user name and password (3). The user must have permissions to access the local server.
6. If you want to display live video from the local server when it is available, select Yes in the appropriate field (4).
7. Specify the maximum playback speed (5).
8. Select the data transfer protocol (6).
9. Click the Apply button.
10. Repeat the above steps for all the required cameras from all local servers.
11. Configure automatic replication from the embedded storages of the added devices to the centralized server archive (see
Configuring data replication).
Attention!
With a high network load, there may be gaps in the centralized server archive.
You have successfully set up automatic data copying from the local servers to the centralized server.
You can create an unlimited number of archives on a single Server.
Attention!
We do not recommend operating a large single-volume video footage archive.
It is more practical to divide the archive into multiple volumes located on different hardware devices.
An archive can be distributed across several volumes of the Server; on one logical disk you can create only one volume per
archive, occupying either a file of a set size or the entire partition (logical disk). An archive can also be stored on multiple
network storage devices; on network storage, each volume can be stored only as a file of a specified size. Thus, an archive can
contain multiple volumes, each in the form of a file or a partition.
1. Create archives.
2. Configure recording of the video stream from video cameras to the archives.
3. Configure data replication, if necessary.
Note
You can also create an archive by selecting the matching command in the context menu of the Server object (the
menu can be brought up by right-clicking the name of the Server)
Note
The file system on the disk can be erased by using the standard Disk Management utility in
Windows. Instructions for starting and using the utility are given on the Microsoft website.
Deleting the file system on the disk in the disk management utility consists of the following:
i. Delete the volume.
ii. Create a new volume in the resulting unformatted area.
iii. Assign a letter to the volume, but do not format it.
The system disk cannot be completely allocated for an archive.
Important!
When selecting the disk on which to place the archive volume, take its size into account. If the archive is
completely filled, the oldest data will be overwritten with new data.
Note
Note that you cannot create an archive volume as a partition on a removable disk, since its partition
cannot be erased through the Disk Management utility
c. On disks that have a file system, you can store an archive volume in the form of a file.
For this archive volume, you must enter a file size (in gigabytes) or set it by moving the slider. The size of the
archive file must be more than 1 GB. For the FAT32 file system, the maximum archive size is 4 GB.
Important!
If the archive is completely filled, the oldest data will be overwritten with new data.
Note
By default, the file name will be the same as the name of the archive, and the file will be located at the
root directory of the disk. To change the name and/or location of the file, click the button.
6. Click the Apply button.
If volumes are configured in the form of partitions, a dialog box is displayed, warning about formatting of the relevant
system disks.
7. Read through the list of partitions that will be formatted. If the list is correct, select I have read the warning and realize
the risk of losing important data, then click Format. Otherwise, click Cancel to return to the archive settings.
Creation of the local archive is now complete.
Attention!
If a particular PC is used to access a particular network archive from multiple user accounts, do the following:
a. Launch a text editor and open the file C:\Windows\System32\Drivers\etc\hosts, then add the following
line: "192.168.1.1 DNSname1", where 192.168.1.1 is the IP address of the NAS and DNSname1 is the
domain name of the NAS.
b. The name of a newly created network archive must include the actual domain name.
If you need to add several network archives under different user accounts, do the following:
a. Launch a text editor, open the file C:\Windows\System32\Drivers\etc\hosts, and add the IP addresses and
domain names of all necessary NAS devices.
b. The names of newly created archives must include actual domain names.
In a backup Server-driven failover system (see Setting up a configuration with the backup Server), if domain
names differ from one Server to another, the hosts file on the backup Server must include records from all
Servers.
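As a sketch, a hosts file covering several NAS devices might look like the following (the entries here are placeholders extending the example above; the second address and name are illustrative, not from your deployment):

```text
# C:\Windows\System32\Drivers\etc\hosts
192.168.1.1   DNSname1
192.168.1.2   DNSname2
```

Each newly created network archive then references the NAS by its domain name rather than its IP address.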
5. Enter the user name and password (4). The user must have permissions to access the NAS.
Attention!
The login should be specified with a prefix of the domain (domainname\username) or name of the computer
(computername\username) where this account is located.
Attention!
Only one network user can connect to a NAS at a time. This is a limitation of the Windows OS.
If this error occurs, disconnect the previous user in one of the following ways:
1. Run the command net use /delete.
2. Use PsExec to open the command prompt as the LocalSystem user (psexec.exe -i -s cmd.exe) and execute the
same command.
7. For this archive volume, you must enter a file size (in gigabytes) or set it by moving the slider. The size of the archive file
must be more than 1 GB. For the FAT32 file system, the maximum archive size is 4 GB.
Note
To change the archive folder, click the button and browse to a desired location
8. If you want, you can add other NAS for your archive and set them up.
9. Click the Apply button.
You have created your network archive. After creating the archive, the NAS status is displayed.
Note
A new archive can contain volumes that previously belonged to different archives
3. If the volume is in file form, select the archive file to which recording was performed (2).
4. Clear the Format check box (3).
Note
If the Format check box is selected, the archive entries that are currently stored on the volume will be erased.
If a time schedule is selected (see Configuring schedules), video will be recorded non-stop to the archive during the
selected time period. Recording to the archive can also be initiated by the operator or an automatic rule.
b. In the Depth field, specify a video footage retention time value for the given camera (in days). Zero value means
unlimited retention.
Attention!
If your footage archive includes videos from one or more cameras with unlimited retention, the archive
will run out of free space at some point, and FIFO based re-recording will be automatically started
regardless of retention time settings. In this case, we cannot guarantee that the actual retention time
settings will be preserved for other cameras.
Therefore, if you set unlimited retention time for at least one camera, you may find it pointless to limit
retention time for other cameras.
To avoid retention time collisions between cameras, please make sure to provide enough storage
capacity for your footage archive (see Disk storage subsystem requirements).
Attention!
Further, if you are increasing the available archive retention time, or setting the parameter to 0, please
note that this setting may be not applied to earlier records. Older footage falling outside the initial
retention time may become inaccessible.
Note
When setting a retention time parameter for a particular camera archive, please note that global
retention time limits have higher priority (see Configuring access restrictions to older footage).
If your entire archive is set to, say, 10 days, and the camera retention time is set to 20 days, camera
footage will be actually retained for 10 days only.
c. In the Pre-alarm recording time field (4), enter the buffering time of the video stream from the camera in
seconds. This value should be in the range [0, 30].
Note
Pre-alarm recording is the period of pre-event recording that will be added to the beginning of an alarm
event recording
Attention!
If a macro starts recording, the pre-alarm recording time may be longer, according to your settings (see
Record to archive).
d. If you want to record pruned, decimated video, choose By keyframes from Gapping, fps. This applies to all video
streams except MJPEG. For the MJPEG codec, use an explicit frame rate value instead. Pruning video by frame
dropping reduces the size of recorded video and saves storage, but playback of video with skipped frames
appears delayed, and motion feels choppy.
Attention!
When you prune by frame dropping, only I-frames (Intra-Coded Frames, or key frames) are saved in all
video streams except MJPEG. Depending on the codec and compression level, key frame rates typically
range from 3 down to less than 1 I-frame per second.
MJPEG video contains only I-frames (intra-coded pictures with a complete image), so it makes sense to
set a desired frame rate here.
Note
This setting is relevant for cameras that support multistreaming.
2. Configure archive recording settings for the group of cameras (marked with yellow). The indicated settings for archive
recording are applied to the selected cameras.
3. Perform custom configuration of camera recording settings, if necessary.
4. Click the Apply button.
The camera is now bound to the archive.
2. To change the default archive for a camera, move the designator to the relevant archive.
Attention!
Replication occurs as information blocks are accumulated. 1 block may contain more than a minute of video.
Attention!
Replication is performed only to the end of the archive. It is not possible to overwrite existing data in the archive.
To transfer the old data from Archive 1 to Archive 2 and continue writing new data to Archive 2, do as follows:
1. Replicate the data from Archive 1 to Archive 2, while Archive 2 cannot be written to.
2. Configure the camera to write to Archive 2.
Note
The primary purpose of data replication is to ensure long-term storage and access to multimedia recordings on remote
storage devices.
Any archive can be the source or recipient of replication. Moreover, every archive can simultaneously be both the sender and
recipient of data.
Note
Events indicating the start and successful completion of data replication are generated in the system (see Event
Control). These events can be used as macro triggers
3. For each archive, select the cameras from which data will be copied to the source archive (2). To select all cameras, click
the Select All button.
Note
Data for a particular camera can be copied to the source archive only from one archive. When you select a
camera for replication from an archive, the camera becomes unavailable for replication from any other archive.
Note
You cannot select cameras if they are already being recorded to the source archive.
Attention!
You can use macros to replicate on schedule (see Start replication).
Note
If the history value is set to 0, all recorded video is available for playback.
You can view only video recordings not exceeding the retention time setting. All other videos will be deleted.
Attention!
Further, if you are increasing the Archive retention time (0 - stands for unlimited time), this setting is applied for new
records only. Earlier records falling outside the initial retention time become inaccessible.
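The interaction of these retention limits (0 meaning unlimited, with global retention limits taking priority over per-camera settings, per the note in the archive binding section above) can be sketched as follows; the function and parameter names are illustrative:

```python
UNLIMITED = 0  # in the settings, 0 stands for unlimited retention

def effective_retention(global_days: int, camera_days: int) -> int:
    """Effective retention time in days for a camera's footage.
    Global limits have higher priority than per-camera settings."""
    if global_days == UNLIMITED:
        return camera_days
    if camera_days == UNLIMITED:
        return global_days
    return min(global_days, camera_days)

# Example from the note: archive limited to 10 days, camera set to 20 days ->
# footage is actually retained for 10 days only.
```

This mirrors the documented behavior: a camera can never retain footage longer than the archive-wide limit allows.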
7.3.8 How to preserve Video Footage continuity after replacing a video source
Axxon Next supports keeping video source's recorded footage after the device is replaced.
To do this:
1. Copy the old video device ID and delete it from the system.
2. Create a new device under the same ID (1) and bind it to the same footage archive (2, see Adding and removing IP
devices).
Attention!
Any other parameters of the new device may differ (make and model, IP address, etc.).
If you swap NVRs, make sure you preserve the same order of video channels for connected cameras.
Creating a new device with the old ID makes the previously recorded footage available for viewing / processing within the system.
2. The timeout delay value sets the time interval for checking Video Footage for protected records (2).
3. The second action in the macro must be checking Video Footage for protected records (3). On this step, you have to:
a. Select a camera or a group of cameras whose Video Footages have to be checked for protected records (4).
b. If you select a particular camera, specify the archive to be checked (5). If a group of cameras was selected in the
previous step, only default archive will be checked for each of them (see Setting the default archive).
c. Specify the depth of the check in HH:MM:SS format (6). The time interval between checks is calculated as follows:
[starting time of the earliest recording in archive, starting time of the earliest recording in archive + depth of the
check].
d. Add data replication as a conditional action upon discovery of protected records within the scanning interval (7,
see Start replication).
e. Select the Replication time for a time interval (8). The replication duration defines the time interval from which
protected records have to be copied to another Video Footage. All protected intervals starting within the [starting
time of the earliest recording in Video Footage; starting time of the earliest recording in Video Footage +
replication duration] range will be copied to Video Footage for replication. Normally, the replication duration has
to be equal to the depth of the check defined in Step 3c.
The screenshot shows settings that make the system scan once a minute (2) for protected records in Camera 4's (4) archive
AliceBlue (5) within the [starting time of the earliest recording in archive + 10 minutes] time interval (6). If this interval contains
protected records, the replication will be launched to copy all protected records falling into [starting time of the earliest
recording in archive; starting time of the earliest recording in archive + 10 minutes] (8) interval to archive specified for
replication.
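The interval arithmetic from steps 3c and 3e can be sketched as follows (a minimal illustration; the function name and example timestamps are assumptions, not values from the product):

```python
from datetime import datetime, timedelta

def check_interval(earliest: datetime, depth: timedelta) -> tuple[datetime, datetime]:
    """Interval scanned for protected records:
    [earliest recording start, earliest recording start + depth of check].
    All protected intervals starting within this range are replicated;
    normally the replication duration equals the depth of the check."""
    return (earliest, earliest + depth)

# Example: earliest recording starts at midnight, depth of check is 10 minutes.
t0 = datetime(2024, 1, 1, 0, 0, 0)
start, end = check_interval(t0, timedelta(minutes=10))
```

With a 10-minute depth, the check covers the first 10 minutes of the archive, matching the screenshot walkthrough above.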
1. Select the required camera on the layout and proceed to archive mode (see Switching to Archive Mode).
Note
If you need to protect the same time interval in multiple camera archives, switch the necessary cameras to
archive mode.
2. Set the protected time interval on the timeline with and buttons (see The Timeline).
The protected interval will be highlighted in light grey while its beginning will be marked with .
Attention!
AxxonSoft cannot guarantee archive data integrity after this action is complete.
3. Click Apply. A dialog box is displayed, warning about formatting of the selected volumes.
Note
If there is only one volume in the archive, you cannot delete it
3. Click Apply.
4. If required, you can remove archive files.
Attention!
If you delete archive files, all video footage contained will be lost.
If you do not delete archive files, you can re-use them to create another archive (see Creating an archive based
on existing archive volumes). You can as well use a partition to re-create an archive.
Attention!
If you delete archive files, all video footage contained will be lost.
If you do not delete archive files, you can re-use them to create another archive (see Creating an archive based
on existing archive volumes). You can as well use a partition to re-create an archive.
7.3.14 View information about the size of archives and Server disk space
Selecting a Server object displays statistical information about the available Server disks and created archives.
Figure 1 shows the overall balance of free disk space between the Server archives.
In addition, a list of disks is displayed, containing information about the total disk capacity, space used, and total free space (2).
When you select an archive, the following information is displayed:
1. The percentage of used space on each volume in the archive. This parameter indicates whether the volume's data is being
rewritten. If the percentage is 100%, the new data is overwriting the old data.
2. The approximate volume usage in percentage and gigabytes (2) is displayed in the Filled archive volume field of the
Archive Parameters group.
When you update Axxon Next or restart the Server, or create the archive based on the existing volume etc., the archive is
reindexed.
2. Create a new device in the system with the same ID (1) and bind it to the old archive (2, see Adding and removing IP
devices).
Attention!
Device model, IP-address and any other device parameters may differ from the previous one.
In the case of replacing the NVR, the video cameras should be connected to the same channels as before.
After creating a new device with the old ID, the archive recorded earlier will be available in the system.
b. Audio analytics.
5. Detection tools embedded in a video camera.
Detection setup takes place using the interface in the Detection Tools tab (under Settings). For detection setup you must have
the appropriate permissions.
Attention!
For a video camera and its corresponding branch to appear in the Detection Tools list, the camera must be enabled in
Axxon Next
Note
By default, camera inputs that can be activated through an automatic rule are displayed on the list of detection tools.
In the viewing tile, on the right side of the detection configuration window, you can set visual parameters.
Note
The indicator in the upper right corner displays the current time and recording status (see Time Display).
The default system setting to a detection event is not to trigger any response actions. To set system response, create an
automatic rule or a macro (see Configuring Macros, Automatic Rules).
Attention!
To extract metadata from video, you have to de-compress and analyze the video stream which, in its turn, increases the
Server's workload, thus limiting the number of available camera channels.
1. Object Tracker.
2. Neural Tracker.
3. VMD.
Note
Object Tracker and Neural Tracker generate metadata containing the following information about moving
objects in the scene: object type, position, size and color, motion speed and direction, etc.
VMD generates less accurate data; it does not detect object type and color.
Note
Face detection metadata contain facial bounding boxes and their positions, as well as facial vectors.
Note
ANPR metadata contain license plate bounding boxes and their positions, as well as vehicle registration
numbers.
6. Pose detection.
Note
Metadata from pose detection tools contains information on positions and postures of all persons in FOV.
Scene Analytics: Object Tracker, Neural Tracker, built-in detection tool, or VMD.
Forensic search MomentQuest: Object Tracker, Neural Tracker, built-in detection tool, or VMD.
Tag&Track Pro: Object Tracker, Neural Tracker, built-in detection tool, or VMD.
Autozoom: any.
Attention!
If a camera uses several sources of metadata, the required source is selected automatically, except for MomentQuest.
To perform facial/license number searches, only metadata from corresponding detection tools is used.
By default, metadata files are stored in Server's object trajectory database: C:\Program
Files\AxxonSoft\AxxonNext\Metadata\vmda_db\VMDA_DB.0\vmda_schema; if necessary, you can place them on any available
network storage (see Configuring storage of the system log and metadata).
For example, if your neural network is intended to analyze outdoor video feeds, your footage must contain all range of weather
conditions (sun, rain, snow, fog, etc.) in different times of day (daytime, twilight, night).
Extra requirements for video footage for each neural analytics tool are listed in the following table:
Tool: Requirements
Neural Filter: no less than 1000 frames containing objects of interest in given scene conditions, and the same
amount of footage containing no objects (background footage).
Neural Tracker: 3 to 5 minutes of video containing objects of interest in given scene conditions. The greater the
number and variability of situations in the scene, the better.
Posture detection tools: no less than 100 different persons in given scene conditions.
Attention! Different conditions mean, among others, different postures of an individual in the
scene (tilting, different limb patterns, etc.).
Segmenting detector *: 3 to 5 minutes of video containing objects of interest in given scene conditions.
Food recognition *: you need to submit no less than 80% of actual menu positions. Each position requires 20 to 40
images shot in different conditions.
Note
* Will be available in future versions of Axxon Next software.
VMD
Neural Tracker
Neuralcounter
Pose detection tools
Note
Service Video and Audio Detection Tools are listed under Additional.
Furthermore, you can create Scene Analytics detection tools based on VMD and the Motion Mask's metadata (see Configuring VMD).
Note
To enable Scene Analytics detection tools under a basic VMD, you have to activate the object tracking option in its
settings.
Facial recognition tools are created in the same fashion: the basic Face detection object and its derivative detection tools.
A posture detection tool is created under a parent Pose Detection object.
You can create a detection tool of the same type for a number of cameras. To do this:
1. Create the required tool on any video camera.
2. Click Apply.
3. Click the button and select the cameras for which you want to create the same detection tool.
4. Click Apply.
To remove a detection tool, select the parent object and click Remove. To disable, select No in the Enable field.
Attention!
When you delete a detection tool, all its metadata is also deleted. Once a detection tool is deleted, you cannot search
its video with the forensic MomentQuest search.
c. Camera shaking must not cause image shifting of more than 1% of the frame size.
2. Lighting requirements:
a. Moderate lighting. Lighting that is too little (night) or too much (bright sunlight) may impact the quality of video
analytics.
b. No major fluctuations in lighting levels.
3. Scene and camera angle requirements:
a. Moving objects must be visually separable from each other in the video.
b. The background must be primarily static and not undergo sudden changes.
c. Minimal obscuration of moving objects by static objects (columns, trees, etc.).
d. Reflective surfaces and harsh shadows from moving objects can affect the quality of analytics.
e. Long single-color objects may not be tracked properly.
4. Object requirements:
a. The video image must be free of noise and compression artifacts.
b. The width and height of objects in the image must be at least 1% of the frame size (for resolutions over
1920 pixels) or at least 15 pixels for lower resolutions.
c. The width and height of objects in the image must not exceed 75% of the frame size.
d. The speed of objects in the frame must be at least 1 pixel per second.
e. To be detected, an object must be visible in at least 8 frames.
f. Between two adjacent frames, an object must not move in its direction of motion by a distance greater than
its own size. This condition is essential for correct calculation of the object’s trajectory (track).
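The minimum object size rule in item 4b above can be sketched as a small helper. This is a hypothetical function for illustration only; the 1%/15-pixel values come directly from the requirements above, the function name and signature are assumptions:

```python
def min_object_size_px(frame_longer_side: int) -> float:
    """Minimum detectable object width/height, per the requirements above:
    1% of the frame size for resolutions over 1920 pixels, otherwise a
    fixed 15 pixels. Hypothetical helper, not part of Axxon Next."""
    if frame_longer_side > 1920:
        return frame_longer_side / 100
    return 15.0
```

For example, on a 4K stream (3840 pixels on the longer side) an object must be at least about 38 pixels wide and tall to be detectable.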
2. If a camera supports multistreaming, select the stream for which detection is needed (2). Selecting a lower-quality video
stream reduces the load on the Server.
3. In the Alarm end delay field, set a value in seconds for the time interval within which the detection tool remains triggered
after motion stops (3). If motion is re-detected within this interval, no new event will be created.
4. To reduce the false alarm rate from a fish-eye camera, you must position it properly (4). For other devices, this parameter
is not applicable.
5. If necessary, enable video stream grooming (5). In this case, only I-frames will be decoded.
Important
This setting applies to all codecs. If a codec has keyframes and P-frames, the keyframe is decoded no more often
than every 500 milliseconds. For the MJPEG codec, each frame is considered an I-frame.
This feature reduces the Server load but may negatively impact detection quality.
This setting should be activated on "blind" Servers (Servers that do not display video) on which detection must be
performed.
Important
Period and Decode key frames parameters are correlated.
If no local Clients are connected to a Server, the following rules apply for remote Clients:
• If the interval between consecutive I-frames exceeds the value specified in the Period field, the detection
tool processes every I-frame.
• If the interval between consecutive I-frames is shorter than the value specified in the Period field, the
detection tool uses the set Period value.
If at least one local Client connects to the Server, the detection tool is forced to use the set value. After the local
Client disconnects, the rules above apply again.
6. Select a processing resource for decoding video streams (6). When you select a GPU, a discrete graphics card takes
priority (decoding with NVIDIA NVDEC chips). If no appropriate GPU is available, decoding uses the Intel Quick
Sync Video technology. Otherwise, CPU resources are used for decoding.
7. The VMD analyzes differences between the current frame and the static background (10, 11). Depending on the
particular scene, we recommend the following sensitivity values for contrast and size:
a. Maximum sensitivity (street scenes where target objects are smaller):
i. Sensitivity: contrast = 16.
ii. Sensitivity: size = 10.
b. Medium sensitivity (default values for generic scenes):
i. Sensitivity: contrast = 12.
ii. Sensitivity: size = 9.
c. Low sensitivity (indoor cameras with an average distance to the object of approx. 4 m):
i. Sensitivity: contrast = 8.
ii. Sensitivity: size = 8.
To help you set the sensitivity values, the Motion Mask can be displayed in the preview window. To
disable it, select No in the Motion Mask field (7).
If there is motion but it does not exceed the threshold value (due to the detection sensitivity), the
mask cells are colored green. If motion triggers the VMD, the cells turn red.
8. To get tracked objects and their parameters (percentage of the FoV width/height, color) displayed in the Preview window,
select Yes in the Object Tracking field (8).
9. In the Period field (9), enter the time in milliseconds before the next video frame is processed. The value
must be in the range [0, 65535]. If the value is 0, every frame is processed.
10. By default, VMD (video motion detection) covers the entire FoV. Within the FoV, you can set privacy masks: closed areas
inside which no detection is performed.
Privacy masks are created similarly to scene analysis zones (see Setting General Zones for Scene Analytics).
11. Click Apply.
VMD configuration is now complete.
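The interaction between the Period setting and keyframe decoding described in the notes above can be sketched as follows. This is a hypothetical helper; the name, signature, and millisecond units are assumptions for illustration:

```python
def effective_interval_ms(period_ms: int, iframe_interval_ms: int,
                          local_client_connected: bool) -> int:
    """Effective frame-processing interval for a detection tool.

    Per the rules above: with a local Client connected, the configured
    Period is used as-is; otherwise the tool cannot process frames more
    often than I-frames arrive, so the longer of the two intervals wins.
    """
    if local_client_connected:
        return period_ms
    return max(period_ms, iframe_interval_ms)
```

For example, with Period = 250 ms and I-frames arriving every 500 ms, a Server without local Clients processes frames at the I-frame rate, not the configured Period.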
Quality degradation A detection tool that triggers when the video image received from a video camera
loses quality.
For example, the tool may trigger upon excessive light, loss of focus, lens
blocking, or a sudden drop in scene illumination.
Scene change A detection tool that triggers upon a change in the video image background, indicating a change
in the video camera's position in space.
Image Noise Detection A tool that triggers in the presence of video noise.
For instance, triggering may occur upon a low bit rate or ripples in the video.
2. The default setting for all detection tools is Decode key frames (2). In this case, the frames are processed every 500
milliseconds or less often. To disable Decode key frames, select No in the appropriate field.
Important
This setting applies to all codecs. If a codec has keyframes and P-frames, the keyframe is decoded no more often
than every 500 milliseconds. For the MJPEG codec, each frame is considered an I-frame.
This feature reduces the Server load but may negatively impact detection quality.
This setting should be activated on "blind" Servers (Servers that do not display video) on which detection must be
performed.
Important
Period and Decode key frames parameters are correlated.
If no local Clients are connected to a Server, the following rules apply for remote Clients:
• If the interval between consecutive I-frames exceeds the value specified in the Period field, the detection
tool processes every I-frame.
• If the interval between consecutive I-frames is shorter than the value specified in the Period field, the
detection tool uses the set Period value.
If at least one local Client connects to the Server, the detection tool is forced to use the set value. After the local
Client disconnects, the rules above apply again.
3. Select a processing resource for decoding video streams (3). When you select a GPU, a discrete graphics card takes
priority (decoding with NVIDIA NVDEC chips). If no appropriate GPU is available, decoding uses the Intel Quick
Sync Video technology. Otherwise, CPU resources are used for decoding.
4. In the Period field (4), enter the time in milliseconds before the next video frame is processed. The value
must be in the range [0, 65535]. If the value is 0, every frame is processed.
Audio loss A detection tool that triggers upon a break in the line from the microphone to the Server (complete absence of sound).
Signal A detection tool that triggers upon reception of an audio signal from an audio device.
Attention!
Audio loss detection may operate incorrectly with video cameras that emit a background signal with non-zero volume,
even if the integrated microphone is physically disabled.
2. For the signal detection tool, set the threshold as a percentage (2). The tool triggers when the threshold is exceeded.
Choose the value empirically.
3. For the noise detection tool, set the threshold as a percentage (3). The tool triggers when the threshold is exceeded.
Choose the value empirically.
Loitering A detection tool triggered by the lengthy presence of an object in an area of a video camera's
field of view.
Abandoned object A detection tool triggered by the appearance of an abandoned object in an area of a video
camera's field of view.
Line crossing A detection tool triggered by an object's trajectory crossing a virtual line.
Multiple objects A detection tool triggered when the number of objects within the designated area exceeds a
predefined value.
Move from area to area A detection tool triggered when an object moves from one pre-specified area to another
within the scene.
Attention!
To detect any motion within an area, you need to apply two detection tools:
• Motion in Area and
• Appearance in area.
Note
Neural filter and Neural Tracker require Addon Neuro Pack to be installed.
Attention!
Resolutions over 800x600 pixels are not recommended for this detection tool. Higher resolutions lead to
increased RAM consumption and CPU load with no significant gain in tracking quality.
d. Camera shaking must not cause image shifting of more than 1% of the frame size.
2. Lighting requirements:
a. Moderate lighting. Lighting that is too little (night) or too much (bright sunlight) may impact the quality of video
analytics.
b. No major fluctuations in lighting levels.
3. Scene and camera angle requirements:
a. Moving objects must be visually separable from each other in the video.
b. The background must be primarily static and not undergo sudden changes.
c. Minimal obscuration of moving objects by static objects (columns, trees, etc.).
d. Reflective surfaces and harsh shadows from moving objects can affect the quality of analytics.
e. Long single-color objects may not be tracked properly.
4. Object requirements:
a. The video image must be free of noise and compression artifacts.
b. The width or height of objects in the image must be at least 1% of the frame size (for resolutions over 1920
pixels) or at least 15 pixels for lower resolutions.
c. The width or height of objects in the image must not exceed 75% of the frame size.
d. The speed of objects in the frame must be at least 1 pixel per second.
e. To be detected, an object must be visible in at least 8 frames.
f. Between two adjacent frames, an object must not move in its direction of motion by a distance greater than
its own size. This condition is essential for correct calculation of the object’s trajectory (track).
7.4.9.2.2 Video requirements for object tracker (with neural filter)-based scene analytics
For video analytics to work correctly, the following requirements must be met:
1. Camera requirements:
a. Resolution min. 640x360 pixels.
b. Frames per second: Min. 6 fps.
c. Color: video analytics work with both black-and-white and color images.
d. Camera shaking must not cause image shifting of more than 1% of the frame size.
2. Lighting requirements:
a. Moderate lighting. Lighting that is too little (night) or too much (bright sunlight) may impact the quality of video
analytics.
b. No major fluctuations in lighting levels.
3. Scene and camera angle requirements:
a. Moving objects must be visually separable from each other in the video.
b. The background must be primarily static and not undergo sudden changes.
c. Minimal obscuration of moving objects by static objects (columns, trees, etc.).
d. Reflective surfaces and harsh shadows from moving objects can affect the quality of analytics.
e. Long single-color objects may not be tracked properly.
4. Object requirements:
a. The video image must be free of noise and compression artifacts.
b. The width or height of objects in the image must be at least 1% of the frame size (for resolutions over 1920
pixels) or at least 15 pixels for lower resolutions.
c. The width or height of objects in the image must not exceed 75% of the frame size.
d. The speed of objects in the frame must be at least 1 pixel per second.
e. To be detected, an object must be visible in at least 8 frames.
f. Between two adjacent frames, an object must not move in its direction of motion by a distance greater than
its own size. This condition is essential for correct calculation of the object’s trajectory (track).
5. Camera requirements for neural filter operation.
1. Camera requirements:
a. Resolution min. 300x300 pixels.
b. Frames per second: Min. 2 fps.
c. Color: video analytics work with color images.
d. Camera shaking must not cause image shifting of more than 1% of the frame size.
2. Lighting requirements:
a. Moderate lighting. Lighting that is too little (night) or too much (bright sunlight) may impact the quality of video
analytics.
b. No major fluctuations in lighting levels.
3. Scene and camera angle requirements:
a. Moving objects must be visually separable from each other in the video.
b. The background must be primarily static and not undergo sudden changes.
c. Minimal obscuration of moving objects by static objects (columns, trees, etc.).
d. Reflective surfaces and harsh shadows from moving objects can affect the quality of analytics.
e. Long single-color objects may not be tracked properly.
4. Object requirements:
a. The video image must be free of noise and compression artifacts.
b. The width or height of objects in the image must be at least 1% of the frame size (for resolutions over 1920
pixels) or at least 15 pixels for lower resolutions.
c. The width or height of objects in the image must not exceed 75% of the frame size.
d. The speed of objects in the frame must be at least 1 pixel per second.
e. To be detected, an object must be visible in at least 8 frames.
f. Between two adjacent frames, an object must not move in its direction of motion by a distance greater than
its own size. This condition is essential for correct calculation of the object’s trajectory (track).
5. Camera requirements for neural tracker operation.
Attention!
We cannot guarantee normal operation of the neural filter with a fisheye camera.
Attention!
We cannot guarantee normal operation of the neural tracker with a fisheye camera.
3. The width and height of objects in the image must be at least 1% of the frame size (for resolutions over 1920
pixels) or at least 15 pixels for lower resolutions.
4. The width and height of objects in the image must not exceed 25% of the frame size.
If these conditions are met, the Abandoned Object detection tool is guaranteed to:
• detect 92 items out of 100;
• keep false positives to no more than 20 per 100.
Attention!
The abandoned objects detection tool works only with the primary tracker.
When created, both Object Tracker and Neural Tracker objects are enabled by default. Tracked objects' parameters (relative
width and height, color) are displayed in the camera window.
Note
Up to 25 objects can be tracked at the same time.
2. By default, the video stream's metadata is recorded in the database. To disable this, select No in the Record object
tracking list (1).
Attention!
Video decompression and analysis are used to obtain metadata, which causes high Server load and limits the
number of video cameras that can be used on it.
3. If a video camera supports multistreaming, select the stream for which detection is needed (2). Selecting a lower-quality
video stream reduces the load on the Server.
Attention!
To display object trajectories properly, make sure that all video streams from a multi-streaming camera have the
same aspect ratio settings.
4. If you require automatic adjustment of the sensitivity of scene analytic detection tools, in the Auto Sensitivity list, select
Yes (3).
Note
Enabling this option is recommended if the lighting fluctuates significantly during the video camera's
operation (for example, outdoors).
5. To reduce the false alarm rate from a fish-eye camera, you must position it properly (4). For other devices, this parameter
is not applicable.
6. Select a processing resource for decoding video streams (5). When you select a GPU, a discrete graphics card takes
priority (decoding with NVIDIA NVDEC chips). If no appropriate GPU is available, decoding uses the Intel Quick
Sync Video technology. Otherwise, CPU resources are used for decoding.
7. In the Motion detection sensitivity field (6), set the sensitivity for motion detection tools, on a scale of 1 to 100.
8. To correct for camera shake, set Antishaker to Yes (7). This setting is recommended only for cameras that show clear
signs of shaking-related image degradation.
9. Analyzed frames are scaled down to a specified resolution (8; default: 1280 pixels on the longer side). This works as
follows:
a. If the longer side of the source image exceeds the value specified in the Frame size change field, it is divided by
two.
b. If the resulting resolution falls below the specified value, that resolution is used for analysis.
c. If the resulting resolution still exceeds the specified limit, it is divided by two again, and so on.
Note
For example, the source image resolution is 2048 × 1536, and the limit is set to 1000.
In this case, the source resolution is divided twice (down to 512 × 384), because after the first division the
number of pixels on the longer side still exceeds the limit (1024 > 1000).
10. In the Time of Object in DB field (9), enter the time interval in seconds during which an object's properties are stored.
If the object leaves and re-enters the FoV within the specified time, it is identified as the same object (same ID).
11. If necessary, configure the neural network filter. The neural network filter processes the results of the tracker and filters
out false alarms on complex video images (foliage, glare, etc.).
a. Enable the filter by selecting Yes (1).
b. Select the processor for the neural network: CPU, one of the GPUs, or an Intel NCS (2).
Attention!
It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings.
c. Select a neural network (3). To obtain a neural network, contact technical support. If no neural network file is
specified or the settings are incorrect, no filtering will occur.
Attention!
A neural network filter can be used either for analyzing moving objects only, or for analyzing abandoned
objects only. You cannot operate two neural networks simultaneously.
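The frame-size reduction rule from step 9 can be sketched as repeated halving. This is an illustrative reading of the description above, not the actual implementation:

```python
def downscale_longer_side(longer_side: int, limit: int) -> int:
    """Halve the longer side of the frame until it no longer exceeds
    the Frame size change limit, as described in step 9."""
    while longer_side > limit:
        longer_side //= 2
    return longer_side
```

For the worked example above, downscale_longer_side(2048, 1000) returns 512: the first halving (1024) still exceeds the limit, so a second halving is applied.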
Note
When the area is being constructed, the nodes are connected by a two-color dotted line which outlines the area's
borders.
Note
For your convenience, you can click the button and configure the mask on a still frame / snapshot. To
undo, click this button again.
Action Result
Position the cursor on a node and hold down the left mouse button while you move the mouse Moves the area node
Note
These settings will apply for all situation analysis detection tools on the selected camera.
Attention!
If you activate the Object Calibration parameter in the tracker settings, note that the min/max size of
objects is then specified in decimeters, not as a percentage of the FoV (see Configure Perspective).
3. In the Min. Object height and Min. Object width fields (2), enter the minimum height and width of a detectable object as
a percentage of the FoV height. The values must be in the range [0.05; 100].
4. Click the Apply button.
To set the reference area on screen, do as follows:
1. Select the Object Tracker object.
2. Click the Min button and set the minimum size of a detectable object. You can do so by dragging and dropping the
nodes of the reference area.
Note
For your convenience, you can click the button and configure the mask on a still frame / snapshot. To
undo, click this button again.
3. Click the Max button and set the maximum size of a detectable object.
Note
By default, the maximum size is the whole size of FoV, so the nodes are located in the corners.
4. Click the Apply button.
You have successfully set the minimum and maximum object size for detection.
Note
These settings will apply for all situation analysis detection tools on the selected camera.
Configure Perspective
Perspective configuration enhances detection tool performance and helps evaluate the real sizes of objects based on a
simplified calibration system.
To configure the perspective, do the following:
by stretching its anchor points. You can move it on screen by dragging and dropping.
Note
For your convenience, you can click the button and configure the mask on a still frame / snapshot. To
undo, click this button again.
4. Enter the height in decimeters of the object you want to find in the Leveling Rod Height field (2).
Attention!
Objects smaller than the specified value will not be detected.
Note
These settings will apply for all situation analysis detection tools on the selected camera.
Configuration of the detection zone is carried out in the same way as for general zones for scene analysis (see Setting General
Zones for Scene Analytics).
Setting up objects to be detected (target objects)
You can set parameters describing target objects' geometry and/or behaviour for each type of Scene Analytics tool.
To configure settings for target objects, you must:
2. Set the maximum and minimum height of the target object as a percentage of the FoV height (1, 4).
3. Set the maximum and minimum speed of the target object as a percentage of the frame size per second (2, 5).
Attention!
For better visual understanding of speed settings, proceed to setting values for searches in Archive (see
Configuring minimum and maximum object speed).
Note
In Abandoned Object detection, these parameters are not used.
4. Set the maximum and minimum width of the target object as a percentage of the FoV width (3, 6).
5. Set a color (or color range) for the target object:
a. Select the relevant detection tool object and click in the Object Color field. The Object Color dialog opens.
b. You can set the color range with Drag&Drop on the RGB / black-and-white color palette.
Any click on the palette is interpreted as the beginning of a new range; the previous range will disappear.
Attention!
Axxon Next's internal logic treats all objects as monochrome. The object color is averaged within the
object's contour.
All objects of specified colors will be detected.
Attention!
If no colors are set, the detection tool triggers on objects of any color.
Note
To cancel a selected color, click anywhere on the palette, save the changes, and click Apply.
6. Select the type of the target object that should trigger the detector (8).
Attention!
Two or more human objects moving close to each other are treated by the system as a Group. If you select this type of
object for tracking, the motion of a single human, even if detected, will not trigger an alarm.
If you select the Human type, group motion will not trigger an alarm.
7. Click the Apply button.
The setting procedure for target objects is now complete.
Note
When the line is being constructed, the end points are connected by a two-color dotted line. The direction
of the object's motion across the line is indicated by dotted arrows.
Action Result
Position the cursor on an end point and, holding down the left mouse button, move the mouse Moves the line end point
Position the cursor on the line and, holding down the left mouse button, move the mouse Moves the line
b. By default, Line Crossing detection monitors object motion across the line in both directions. To suspend detection
of motion in one direction, click the button for that direction.
Attention!
At least one direction must be selected for detection
Note
An unmonitored direction of object motion is indicated by a dimmed arrow
2. If an object in the FoV performs repetitive movements near the virtual line, it may cause excessive triggering of the
detection tool. In this case, set an inner area of the object's track that must cross the line completely to trigger the
tool. To do this, set the X and Y offset values as a percentage of the track's width and height.
Here is an example: the green triangle is the object's track. If you specify X and Y offset values of 25%, the tool
triggers only if the entire red triangle crosses the virtual line.
3. Click the Apply button.
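The X and Y offset rule from step 2 can be sketched as shrinking the track's bounding box. This is one interpretation of the description above (the assumption being that each offset is applied symmetrically from both sides of the track); the helper is hypothetical:

```python
def inner_track_box(x: float, y: float, w: float, h: float,
                    offset_x_pct: float, offset_y_pct: float):
    """Return the inner box (x, y, width, height) that must fully cross
    the virtual line to trigger the tool. Offsets are percentages of the
    track's width and height; hypothetical helper for illustration."""
    dx = w * offset_x_pct / 100.0
    dy = h * offset_y_pct / 100.0
    return (x + dx, y + dy, w - 2 * dx, h - 2 * dy)
```

With 25% offsets, a 100 × 100 track yields a 50 × 50 inner box centered within the original track.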
3. In the Abandoned Object Detection Sensitivity field (2), set the sensitivity for situational analytic tools for unattended
objects. This value should be in the range [1, 100].
Note
This parameter depends on the lighting conditions and should be chosen empirically. It is recommended to start
by setting the sensitivity to 20.
4. In the Alarm on Object's Max. Idle Time in the Area field (3) specify the time in seconds. If the object remains idle for the
specified time or longer, it will be considered unattended. This value should be in the range [15, 1800].
Note
This parameter is used only for tracking "lost items", i.e., objects unattended for longer time intervals.
Note
It is recommended to start by setting the value of this parameter at 15.
5. If you require using the unattended objects detection tool for longer time intervals, select Yes for the corresponding
parameter (4).
6. Under the Object Tracker object, create the Abandoned Object object (see Creating Detection Tools).
7. Click the Apply button.
Abandoned object detection is now configured.
2. In the Event duration to trigger detection (sec) field (2), enter the maximum object loitering time in seconds. This value
should be in the range [0, 3600].
3. Click the Apply button.
The maximum loitering time is now set.
Note
The object doesn't have to be completely idle; minor jitter is permitted.
To set a specified time period, you must perform the following steps:
1. In the Detection Tools list, highlight a Stop in area object (1).
2. Set the idle period duration for triggering the tool. This value should fall into the range of [0, 3600].
Attention!
Set this parameter to a value lower than the Time of Object in DB in Object tracker settings (see Setting General
Parameters).
3. Click the Apply button.
2. Set two areas in the FoV. The tool triggers if any object moves from one area to the other in any direction.
By default, the areas are stacked on top of each other. To reshape an area, drag its anchor points . To move the entire
area, left click and drag its border. To add anchor points, click .
3. Click the Apply button.
2. By default, metadata are recorded into the database. To disable metadata recording, select No (1) from the Record
object tracking list.
3. If a camera supports multistreaming, select the stream to apply the detection tool to (2).
4. To reduce the false alarm rate from a fish-eye camera, you must position it properly (3). For other devices, this parameter
is not applicable.
5. Select a processing resource for decoding video streams (4). When you select a GPU, a discrete graphics card takes
priority (decoding with NVIDIA NVDEC chips). If no appropriate GPU is available, decoding uses the Intel Quick
Sync Video technology. Otherwise, CPU resources are used for decoding.
6. Set the recognition threshold for objects as a percentage (5). If the recognition probability falls below the specified value,
the data will be ignored. The higher the value, the higher the accuracy, at the cost of sensitivity.
7. Specify the Minimum number of detection triggers required for the neural tracker to display an object's trajectory (6).
The higher the value, the longer the interval between an object's detection and the display of its trajectory on screen.
Low values may lead to false triggering.
8. Select the neural network file (7).
Attention!
A trained neural network performs well for a particular scene if you want to detect only objects of a certain
type (e.g. person, cyclist, motorcyclist, etc.).
To train your neural network, contact AxxonSoft (see Requirements to data collection for neural network
training).
Note
For correct neural network operation under Linux, place the corresponding file in the /opt/AxxonSoft/AxxonNext/
directory.
9. You can use the neural filter to sort out video recordings featuring selected objects and their trajectories. For example, the
neural tracker detects all freight trucks, and the neural filter sorts out only video recordings that contain trucks with cargo
door open. To set up a neural filter, do the following:
a. To use the neural filter, set Yes in the corresponding field (8).
b. In the Neurofilter file field, select a neural network file (9).
c. In the Neurofilter mode field, select a processor to be used for neural network computations (10).
10. Select the processor for the neural network: the CPU or one of GPUs (11).
Attention!
We recommend the GPU.
It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings.
11. Set the frame rate value for the neural network (12). The other frames will be interpolated. The higher the value, the
more accurate the tracking, but the higher the CPU load.
Attention!
6 FPS or more is recommended. For fast moving objects (running individuals, vehicles), you must set frame rate
at 12 FPS or above.
12. If you don't need to detect moving objects, select Yes in the Hide moving objects field (13). An object is treated as static
if it does not change its position by more than 10% of its width or height during its track's lifetime.
13. If you don't need to detect static objects, select Yes in the Hide stationary objects field (14). This parameter lowers the
false alarm rate when detecting moving objects.
14. In the Track retention time field (15), set a time interval in seconds after which an object's track is considered lost.
This helps when objects in the scene temporarily obscure each other; for example, a larger vehicle may completely
block a smaller one from view.
15. By default, the entire FoV is a detection zone. If you need to narrow down the area to be analyzed, you can set one or
several detection zones.
Note
The procedure of setting zones is identical to the primary tracker's (see Setting General Zones for Scene
Analytics). The only difference is that the neural tracker's zones are processed while the primary tracker's are
ignored.
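The moving/static classification used by the Hide moving objects and Hide stationary objects settings (steps 12 and 13) can be sketched as follows. The 10% threshold comes from the text; the helper itself, and the reading that both axes must stay within the threshold, are assumptions:

```python
def is_static(displacement_x: float, displacement_y: float,
              width: float, height: float) -> bool:
    """One reading of the rule above: an object is treated as static if,
    over its track's lifetime, its position changes by no more than 10%
    of its own width and height. Hypothetical helper for illustration."""
    return (abs(displacement_x) <= 0.1 * width and
            abs(displacement_y) <= 0.1 * height)
```

For a 100 × 100 pixel object, a 5-pixel drift keeps it classified as static, while a 20-pixel horizontal move makes it a moving object.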
Attention!
To trigger a Motion in Area detection tool under a neural network tracker, an object must be displaced by at
least 25% of its width or height in FoV.
Attention!
The abandoned objects detection tool works only with the primary tracker.
Attention!
VMD-generated metadata does not include object type and color information.
Note
These detection tools require Addon Face Recognition Pack to be installed (see Installing DetectorPack add-ons).
Note
The required distance between the camera and the face can be set using a lens with required focal length.
2. If you require using this detection tool for real-time facial recognition, set the corresponding parameter to Yes (1).
3. If you want to use this facial recognition tool in real time in parallel with FaceCube Recognition Server (see Configuring
FaceCube integration), set Yes for Real-time recognition on external Service (4).
4. If you need to enable recording of metadata, select Yes from the Record Objects tracking list (3).
5. If a camera supports multistreaming, select the stream for which detection is needed (4). Selecting a lower-quality video
stream reduces the load on the Server.
6. If you need to save age and gender information for each recognized face, select Yes in the corresponding field (1, see
Facial recognition and search).
7. Select a processing resource for decoding video streams (2). When you select a GPU, a discrete graphics card takes
priority (decoding with NVIDIA NVDEC chips). If no appropriate GPU is available, decoding uses the Intel Quick
Sync Video technology. Otherwise, CPU resources are used for decoding.
8. If you plan to apply the masks detection tool, set Yes for the Face mask detection parameter (3, see Configuring masks
detection).
9. In some cases, the detection tool may mistake another object for a face. To filter out non-facial objects, select Yes in the False Detection Filtering field (4); the filter is applied when the vector model of a face is calculated and recorded into the metadata DB. If the filtering is on, false results will still appear in the detection feed but will be ignored during searches in Archive.
10. Set the time (in milliseconds) between face search operations in a video frame in the Period of face search field (5).
Acceptable values range: [1, 10000]. Increasing this value decreases the Server load, but can result in some faces being
missed.
11. Analyzed frames are scaled down to a specified resolution (6, 1280 pixels on the longer side). This is how it works:
a. If the longer side of the source image exceeds the value specified in the Frame size change field, it is divided by
two.
b. If the resulting resolution falls below the specified value, it is used further.
c. If the resulting resolution still exceeds the specified limit, it is divided by two, etc.
Note
For example, the source image resolution is 2048 * 1536, and the limit is set to 1000.
In this case, the source resolution will be divided two times (down to 512 * 384): after the first division, the
number of pixels on the longer side exceeds the limit (1024 > 1000).
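The halving logic described above can be sketched in Python (a minimal illustration of the algorithm, not AxxonSoft code):

```python
def downscale_longer_side(longer_side: int, limit: int) -> int:
    """Halve the longer side of the frame until it no longer exceeds the limit."""
    while longer_side > limit:
        longer_side //= 2
    return longer_side

# Example from the note: with a 2048 x 1536 frame and a limit of 1000,
# the longer side is halved twice (2048 -> 1024 -> 512),
# because 1024 still exceeds 1000.
```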
12. Specify the minimum and maximum sizes of detectable faces in % of the frame size (7).
13. In the Minimum threshold of face authenticity field, set the minimum level of face recognition accuracy for the creation
of a track (8). You can set any value by trial-and-error; no less than 90 is recommended. The higher the value is, the fewer
faces are detected, while recognition accuracy increases.
14. Select the processor for face detection: CPU or NVIDIA GPU (9).
Attention!
It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings.
15. If you use FaceCube integration (see Configuring FaceCube integration), activate the Send face image parameter (10).
16. Enter the time in milliseconds after which the face track is considered lost in the Track loss time field (11). Acceptable values range: [1, 10000]. This parameter applies when a face moves in the frame and is obscured by an obstacle for some time. If the obstruction time is less than the set value, the face will be recognized as the same one.
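A hypothetical sketch of how such a track-loss timeout could work (the function and parameter names are illustrative assumptions, not the actual implementation):

```python
def is_same_track(last_seen_ms: int, now_ms: int, track_loss_time_ms: int) -> bool:
    """A face that reappears within the timeout is treated as the same track;
    otherwise a new track is started."""
    return (now_ms - last_seen_ms) < track_loss_time_ms

# A face obscured for 800 ms keeps its track with a 1000 ms timeout,
# while a face obscured for 1200 ms does not.
```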
17. When using wide angle dual lens XingYun devices, the detector will analyze two 180° spherical images by default (see
Configuring fisheye cameras). This may decrease recognition quality. To de-warp the image before detection, select Yes
for the Use Camera Transform parameter (12). This parameter also applies to other types of image transformation.
18. Select a rectangular area to be searched for faces in the preview window. To select the area, move the intersection points.
2. In the Number of Consecutive Mask Detections field, set the minimum number of frames with the mask for triggering
the tool.
3. Depending on your particular task, set other parameters' values according to the table.
Triggering event Parameter Values
Note
The Other Types of Masks parameter is not supported in this software version.
4. If required, you can set advanced parameters (see Fine-tuning the facial recognition tool).
5. Click Apply.
1. Select an object.
2. Configure parameters for the captured face (just as with detected objects, see Setting up objects to be detected (target
objects)).
Attention!
Select the Object class: Any or Face.
Note
Face color is not detected during Facial recognition, so there are no color settings.
3. Set an area inside FoV for detection. This is similar to the situation analysis configuration (see Configuring the Detection
Zone).
4. Click the Apply button.
2. Configure parameters for the captured face (just as with detected objects, see Setting up objects to be detected (target
objects)).
Attention!
Select the Object class: Any or Face.
Note
Face color is not detected during Facial recognition, so there are no color settings.
3. Set the maximum time an object can stay in the zone. When the maximum time is exceeded, the detection tool is
triggered. This value should be in the range [0, 3600].
4. Set an area inside FoV for detection. This is similar to the situation analysis configuration (see Configuring the Detection
Zone).
5. Click the Apply button.
Attention!
To fine-tune this detection tool, you will require assistance from AxxonSoft tech support.
7.4.10.4.1 Setting the automatic response to an identification of a recognized face against the list
To set an automatic response to an FR event, do as follows:
1. Create a macro (see Create Macros).
2. As a starting condition, select the Face Recognition event and the desired list (1, see Configuring filters for event-driven macros).
3. By default, the macro is triggered by recognition of any face from the list. If required, you can specify a particular person
whose facial recognition will trigger the macro (2).
Note
To select another person, clear field 2 and re-open the list.
4. Program an action or a sequence of actions to be performed in response to an identification of a recognized face against
the designated list (see Settings specific to actions).
5. If the response involves initiating an alarm, you can configure the Dialog Board to filter Alarm Initiated by Macro
Command events (see Configure the Dialog Board).
On this page:
• Alarm initiation
• Response to a recognition of a
non-listed person
• Sending an e-mail
• Starting export
Alarm initiation
This macro can be used together with the Dialog Board (see Configure the Dialog Board).
If the Dialog Board is configured to display an alarm and linked with a video camera (see Linking cells), then when a face from the
list is recognized, the following information will be displayed on the Dialog Board:
1. Reference photo from the face list.
2. An enlarged image of the recognized face in the frame.
3. The name of the recognized person, the comment in quotation marks, the name of the face list in square brackets, and
the percentage of similarity with the reference face in parentheses (see Lists of facial templates).
Sending an e-mail
If the e-mail is sent via the SMTP server (see Configuring the E-mail Object), then 3 files will be attached to the message:
• full frame at the moment of face recognition;
• reference photo from the face list;
• an enlarged image of the recognized face in the frame.
Starting export
As with the e-mail notifications, three files will be exported when you export an image for a facial recognition event, i.e. when a match from Lists of Facial Templates is detected:
3. Activate the Real-time recognition on external Service parameter and Send face image in the facial recognition
settings (see Configure Facial Recognition).
4. Configure automatic responses to positive identification against the list (see Setting the automatic response to an
identification of a recognized face against the list).
7.4.11 Face Detection and Temperature Control with Mobotix M16 TR cameras
Note
The required distance between the camera and the face can be set using a lens with required focal length.
Attention!
Install MX-V5.2.6.7 (2020-06-16) or later firmware.
For detailed configuration via the web interface, refer to the manufacturer's instructions.
Before starting Axxon Next, please configure the camera via the web interface:
7. To caption video with temperature readings, select the Display Mode (Right) (1) and Picture in Picture (2) checkboxes.
b. In the Preview window, drag anchor points to draw an area where thermal data should be displayed.
b. In the Preview window, double click to add anchor points at the position of the electromagnetic radiation
absorber (BB).
6. If you have discovered that your readings differ from actual temperature values, enter the offset value in the Temperature Value Offset field (4). Offset values may be positive or negative.
Note
The rest of this detection tool's parameters are identical to those of the basic face detection tool (see Configure Facial Recognition).
7. Click the Apply button.
The detection tool is now configured.
Note
Install the additional Addon VT LPR to work with License plate recognition (VT).
Note
If you transmit MPEG-4 or H.264 streams over a stable connection, please set GOP length (Group of Pictures), i.e. the
number of P- and B-frames between I-frames, to no more than 4–8 frames.
Attention!
The maximum speed of the vehicle must not exceed 120 km/h.
For stable ANPR operation, make sure that the image of the license plate is not:
• unevenly lit;
• overexposed;
• blurred;
• distorted;
• interlaced;
• dirty.
Attention!
Otherwise, recognition accuracy might be compromised.
Some example number plate images that should be recognized fully and properly:
The camera setup is described in detail in the recognition module manufacturer's documentation.
Attention!
The license plate recognition requires a CPU that supports the SSE4.1 / SSE4.2 instructions.
Attention!
Axxon Next ANPR / LPR is not compatible with Auto Intellect.
Uninstall the Axxon Next VMS and Auto Intellect, then reinstall Axxon Next.
2. If you want to use this detection tool for real-time number plate recognition (see Configuring real-time vehicle license plate recognition), set the corresponding parameter to Yes (2).
3. If you need to enable recording of metadata, select Yes from the Record Objects tracking list (3).
4. If a camera supports multistreaming, select the stream for which detection is needed. Selecting a low-quality video
stream allows reducing the load on the Server (4).
5. Select detection tool working mode: standard or neural (1).
6. In the appropriate fields, select one or more countries for ANPR (2).
Important
The more countries you select, the slower the recognition and the greater the likelihood of a recognition error.
7. Select a processing resource for decoding video streams (3). When you select a GPU, a stand-alone graphics card takes
priority (when decoding with NVidia NVDEC chips). If there's no appropriate GPU, the decoding will use the Intel Quick
Sync Video technology. Otherwise, CPU resources will be used for decoding.
8. Analyzed frames are scaled down to a specified resolution (4, 1920 pixels on the longer side). This is how it works:
a. If the longer side of the source image exceeds the value specified in the Frame size change field, it is divided by
two.
b. If the resulting resolution falls below the specified value, it is used further.
c. If the resulting resolution still exceeds the specified limit, it is divided by two, etc.
Note
For example, the source image resolution is 2048 * 1536, and the limit is set to 1000.
In this case, the source resolution will be divided two times (down to 512 * 384): after the first division, the
number of pixels on the longer side exceeds the limit (1024 > 1000).
License type Description
Search in archive Basic ANPR Search License. Attention! This type of license provides a 30-second delay between the number recognition and the corresponding event (see Vehicle number plate recognition and search).
25 FPS or 6 FPS The 25 FPS license has priority. The 6 FPS license is used only if you have no 25 FPS license. You cannot use the number plate recognition function if you have no FPS licenses purchased.
25 FPS The "Fast" license allows you to process video feeds at up to 25 fps with a maximum vehicle speed of 150 km/h. You have to obtain an appropriate license to make ANPR work.
6 FPS The "Slow" license allows you to process video feeds at up to 6 fps with a maximum vehicle speed of 20 km/h. You have to obtain an appropriate license to make ANPR work.
Important
All license types except the standard one (Search in archive) require the hardware key or the software key
activation (see Activating a software license key for the ANPR).
You can use a 60-day trial key (see Activating the trial version of the ANPR via a software key).
Important
On a virtual machine, you may use only a hardware key. To use a hardware key, select the 25 FPS or 6 FPS license
type.
10. Set the maximum and minimum width of the vehicle number plate as a percentage of the FoV width (6, 7).
Important
The Minimum LP width parameter affects the CPU load as follows: the higher the parameter is, the higher the
load.
11. Choose a processor to run detection on (8). In standard mode, detection runs on the CPU only.
12. Set the number of frames required for LPR / ANPR (9). This is a necessary, but not sufficient, condition for the first output; it delays the LPR output. This parameter increases the reliability of the results and helps suppress false positives.
13. If necessary, configure advanced detection settings (see Fine-tuning the VT number plate detection tool).
14. You can configure an ANPR zone in FoV. The zone is resized by moving the anchor points.
Note
For your convenience, you can click the button and configure the mask on a still frame / snapshot. To
undo, click this button again.
Note
Detection zone is displayed by default. You can click the button to hide the zone. To undo, click this button
again.
Attention!
To fine-tune this detection tool, you will require assistance from AxxonSoft tech support.
2. To recognize number plates at sharp angles with respect to the camera, select Yes in the Algorithm of recognition of distorted LP image field (2).
3. Set the image contrast threshold (3). The default value is 40. On a high-quality image, increase this value to 50–60. If the image has poor contrast, lower this value.
4. If you expect different sizes of number plates within the FOV, activate the LP advanced search algorithm (4). This parameter accounts for the max and min width of number plates and, in some cases, may help increase recognition quality and optimize CPU load.
5. To set up a number plate recognition event:
a. Specify the minimum percentage of similarity between the recognition result and the corresponding number plate template for a positive LPR result (7). Use this parameter to filter the results by reliability.
b. By default, a number plate recognition event is registered after a track containing a number plate disappears from
FOV. You can set the moment of registration to a Timeout in seconds (8), or to reaching a similarity percentage
specified in the LP display quality field (5). If both parameters are set to non-zero, the recognition event will be
registered upon matching the first condition.
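The "whichever condition is met first" rule described above can be sketched as follows (a simplified illustration; the function and parameter names are assumptions, not the actual implementation):

```python
def should_register_event(elapsed_s: float, quality: float,
                          timeout_s: float, display_quality: float) -> bool:
    """Register the LPR event when either non-zero condition is met:
    the timeout has elapsed, or the similarity reached LP display quality."""
    by_timeout = timeout_s > 0 and elapsed_s >= timeout_s
    by_quality = display_quality > 0 and quality >= display_quality
    return by_timeout or by_quality

# With a 10 s timeout and a display quality of 75, a plate recognized at
# 80% similarity after 5 s is registered by quality before the timeout.
```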
Note
The LP display quality parameter value must be no lower than the Minimum similarity value.
6. Specify the maximum number of recognition threads (6). If the value is 0, the recognition process will occur in the same
thread that starts it.
Important
The cumulative value of this parameter across all NPR detection tools must not exceed:
- the licensed number of recognition threads (check your license with the lsvpwc utility; please refer to the manufacturer's documentation);
- the number of CPU cores;
- 100.
7. In the Tracker Timeout field, set a time interval in seconds after which the tracking of a vehicle is reset (10).
Note
Use this setting to eliminate double-triggering in such cases as, for example, another recognition of the same
number after it has been obstructed for some time, and then reappears in scene.
If you set Tracker Timeout to a value greater than a probable time of obstruction, the detection tool will not
double-trigger.
8. If necessary, set additional parameters. For more details, refer to the table.
Parameter Description
VodiCTL_VPW_DYNAMIC_ENABLE Enable/disable the number recognition dynamics (by default, the dynamics is enabled).
If the value is Yes, tracking is enabled, and the number is recognized from a set of
frames. If the value is No, tracking is disabled, and the number is recognized from each
frame separately, without taking into account the previous ones; the quality can vary from 0
to 100%.
VodiCTL_VPW_DYNAMIC_OUTPUT_PERIOD Time period (in microseconds) over which the recognition result is to be displayed to the user.
This parameter can be used only if the VodiCTL_VPW_DYNAMIC_WITH_DUPLICATE parameter
is set.
VodiCTL_VPW_DYNAMIC_OUTPUT_TIMEOUT The minimum time (in microseconds) the license plate must be monitored before
the recognition result is displayed to the user. This parameter can only be used when the "Dynamic" mode
is on. In this mode, the trajectory of the vehicle is monitored, and the user receives the
recognition result not immediately, but after the time specified in this setting; the first
recognition result is then replaced by a result of higher quality before being displayed to the
user. If this parameter is set to 0, the user gets the first recognition result for the detected
license plate. After the time specified in this parameter expires, the trajectory of the license
plate continues to be monitored until it disappears from the frame.
VodiCTL_VPW_IMAGE_BLUR A parameter for internal use. The recommended value is 13.
VodiCTL_VPW_PLATE_FILTER_RODROPFACTOR The license plate filter coefficient by the so-called image density - ratio of white pixels to total
pixels (second strategy). The type is unsigned. This coefficient is used for image thresholding
and has the optimal values, which are determined by AutoSDK developers using their own
test samples. The parameter is considered as a service one, and its value should be set
according to the recommendations of technical support specialists.
VodiCTL_VPW_PLATE_FILTER_ROFACTOR The license plate filter coefficient by the so-called image density - ratio of white pixels to total
pixels (first strategy). The type is unsigned. This coefficient is used for image thresholding and
has the optimal values, which are determined by AutoSDK developers using their own test
samples. The parameter is considered as a service one, and its value should be set according
to the recommendations of technical support specialists.
VodiCTL_VPW_PLATE_FILTER_SYMCOUNT Enable/disable the simple license plate filtering algorithm based on the minimum number of
recognized symbols. If the algorithm is enabled (the value of the parameter is greater
than 0), a basic search for symbols on the prospective license plate (geometry, proportions)
is performed. If fewer symbols are recognized on the prospective license plate than specified in
this parameter, it is not considered a license plate. That is, the value of this parameter is
the minimum number of characters that must be present on the prospective license plate when the
basic algorithm is in use.
VodiCTL_VPW_PLATE_STAR_MAX The maximum number of unrecognized symbols on the license plate at which the result is still
considered a valid license plate recognition result.
9. Click the Apply button.
Attention!
In a Failover system, the ANPR license is not automatically transferred to a new node (see Configuring Failover VMS).
To enable the detection tool, you need to manually activate the license on the PC where the new node is launched.
In order to receive the software license key for the ANPR, proceed as follows:
1. Download the utilities by the links below:
1. haspdinst_EOAWT.exe
2. RUS_EOAWT.exe
• In the Windows command line, execute the following two commands one after another to install the protection key driver:
Note
Make sure that the current directory in the command line is similar to the one where the haspdinst_EOAWT.exe
file is located.
• Verify the installation by opening the http://127.0.0.1:1947/_int_/ACC_help_index.html page in the web browser.
• Run the hasp_RUS.exe file to start the Remote Update System. The RUS dialog box opens.
Note
The RUS abbreviation stands for Remote Update System.
• Set the Collect information from this computer to enable: switch to the Installation of new protection key position if a license for a "clean" computer (one with no demo license on it) is needed, or to the Update of existing protection key position if a demo license is already in use (1).
• Click Collect information.
• Save the file with .c2v extension to any folder.
• Close the hasp_RUS.exe tool.
• Hand the .c2v file to your AxxonSoft manager.
• Receive the license file with .v2c extension from your AxxonSoft manager.
• Run the hasp_RUS.exe tool and go to the Apply License File tab (1).
• Specify location of the license file in the Update File field using the ... button (2).
• Click Apply Update (3).
The software license key for the ANPR has now been received.
7.4.12.1.6 Activating the trial version of the ANPR via a software key
You can use the ANPR in trial mode.
In this mode, the number of channels is limited to 4 in both 25 FPS and 6 FPS versions for all available countries. Activate the
software key to start the 60-day trial.
Attention!
You cannot operate the ANPR tool in trial mode on a virtual machine.
Note
Install the additional Addon IV LPR to work with License plate recognition (IV).
Attention!
Only a 64-bit version is available.
Important!
It is also recommended to study the manufacturer's specification.
Video images that do not meet the video camera mounting and setup requirements for License plate recognition (IV):
Attention!
The detection tool will not operate on a Server having a different MAC address.
6. If you want to use this detection tool for real-time number plate recognition (see Configuring online Vehicle License Plate recognition), set the corresponding parameter to Yes (2).
7. If you need to enable recording of metadata, select Yes from the Record Objects tracking list (3).
8. If a camera supports multistreaming, select the stream for which detection is needed. Selecting a low-quality video
stream allows reducing the load on the Server (4).
9. Select the country from the list (1).
Attention!
Several profiles are provided for India, USA, Russia, Taiwan, Australia and African countries, differing by
recognition parameters and hardware requirements.
To recognize US license plates with vertical orientation of characters, a profile with higher accuracy of
recognition is recommended.
Note
The list of supported countries is given in the manufacturer's specifications.
10. Select a processing resource for decoding video streams (2). When you select a GPU, a stand-alone graphics card takes
priority (when decoding with NVidia NVDEC chips). If there's no appropriate GPU, the decoding will use the Intel Quick
Sync Video technology. Otherwise, CPU resources will be used for decoding.
11. Specify the minimum number of milliseconds between frames during recognition (3).
12. Analyzed frames are scaled down to a specified resolution (4, 1920 pixels on the longer side). This is how it works:
a. If the longer side of the source image exceeds the value specified in the Frame size change field, it is divided by
two.
b. If the resulting resolution falls below the specified value, it is used further.
c. If the resulting resolution still exceeds the specified limit, it is divided by two, etc.
Note
For example, the source image resolution is 2048 * 1536, and the limit is set to 1000.
In this case, the source resolution will be divided two times (down to 512 * 384): after the first division, the
number of pixels on the longer side exceeds the limit (1024 > 1000).
13. Specify the maximum number of processor cores available for the detector. '0' means all cores (5).
14. Set the maximum and minimum width of the vehicle number plate as a percentage of the FoV width (6, 7).
15. Set the minimum quality of ANPR (8). The higher the minimum recognition quality, the fewer false detections will occur.
16. By default, CPU resources are solely used for recognition. If you want to apply GPU computing resources to increase the
recognition performance, select GPU in the Processor Type field (9).
17. Specify the maximum and minimum number of digits in the number (1, 2).
Note
For your convenience, you can click the button and configure the mask on a still frame / snapshot. To
undo, click this button again.
Note
Detection zone is displayed by default. You can click the button to hide the zone. To undo, click this button
again.
Note
List of supported countries: Armenia, Azerbaijan, Belarus, Estonia, Georgia, Kyrgyzstan, Kazakhstan, Lithuania,
Latvia, Moldova, Russia, Ukraine, Uzbekistan, Vietnam, Peru, Spain, Mexico, Poland, Myanmar.
1. License plate recognition (RR) - recognizes number plates in real time. Recognition is performed in Fast mode (video
stream processing at a speed of up to 30 fps).
2. License plate recognition - Parking (RR) - recognizes number plates in real time. Recognition is performed in Slow mode
(video stream processing at a speed of up to 8 fps).
Note
The sampling rate for this detection tool is 150 milliseconds; all other frames are skipped.
3. License plate recognition - Search in archive (RR) - recognizes number plates in recorded video. Recognition is
performed in Fast mode (video stream processing at a speed of up to 30 fps).
Note
This detection tool generates events with a 30 sec delay after number recognition.
Note
License plate recognition (RR) requires Addon RR LPR to be installed (see Installing DetectorPack add-ons).
Note
To make the detection tool work under Windows Server 2012 R2, make sure to install the Media Foundation
component.
3. If you want to use this detection tool for real-time number plate recognition (see Configuring real-time vehicle license plate recognition), set the corresponding parameter to Yes (2).
4. If you need to enable recording of metadata, select Yes from the Record Objects tracking list (3).
5. If a camera supports multistreaming, select the stream for which detection is needed. Selecting a low-quality video
stream allows reducing the load on the Server (4).
6. Select your target country from the list (5).
7. Select a processing resource for decoding video streams (6). When you select a GPU, a stand-alone graphics card takes
priority (when decoding with NVidia NVDEC chips). If there's no appropriate GPU, the decoding will use the Intel Quick
Sync Video technology. Otherwise, CPU resources will be used for decoding.
8. Set the minimum quality of ANPR (7). The higher the minimum recognition quality, the fewer false detections will occur.
9. Select the processor for the detection tool: the CPU or one of NVidia GPUs (8).
Attention!
It may take up to one minute to launch the algorithm on an NVIDIA GPU.
10. Specify the time interval between the initial recognition and event registration in the Force report timeout field (10). A zero value sets the event registration to the moment when the track disappears from the FOV.
Note
Skip this step with License plate recognition - Search in archive (RR).
11. Analyzed frames are scaled down to a specified resolution (10, 1920 pixels on the longer side). This is how it works:
a. If the longer side of the source image exceeds the value specified in the Frame size change field, it is divided by
two.
b. If the resulting resolution falls below the specified value, it is used further.
c. If the resulting resolution still exceeds the specified limit, it is divided by two, etc.
Note
For example, the source image resolution is 2048 * 1536, and the limit is set to 1000.
In this case, the source resolution will be divided two times (down to 512 * 384): after the first division, the
number of pixels on the longer side exceeds the limit (1024 > 1000).
12. Specify the maximum number of recognition threads (11). If the value is 0, the recognition process will occur in the same thread that starts it.
Attention!
The cumulative value of this parameter across all LPR detection tools must not exceed the number of CPU cores and is limited to 100.
13. You can configure an ANPR zone in FoV. The zone is resized by moving the anchor points.
Note
For your convenience, you can click the button and configure the mask on a still frame / snapshot. To
undo, click this button again.
Note
Detection zone is displayed by default. You can click the button to hide the zone. To undo, click this button
again.
7.4.12.4.1 Setting the automatic response to an identification of a recognized LP against the list
To set an automatic response to an LPR event, do as follows:
1. Create a macro (see Create Macros).
2. As a starting condition, select the License plate recognition event and the desired list (1, see Configuring filters for event-
driven macros).
3. By default, the macro is triggered by recognition of any number from the list. If required, you can specify a particular
number whose recognition will trigger the macro (2).
Note
To select another number, clear field 2 and re-open the list.
Attention!
We cannot guarantee normal operation of the neural counter with a fisheye camera.
Note
Unlike the Multiple objects detection tool (see Settings Specific to Multiple objects), Neuralcounter generates events of one type only: triggering.
Neuralcounter is less resource-intensive than the Multiple objects detection tool based on Neural Tracker.
Note
For the neural counter to work, install Addon Neuro Pack (see Installing DetectorPack add-ons).
2. Select a processing resource for decoding video streams (2). When you select a GPU, a stand-alone graphics card takes
priority (when decoding with NVidia NVDEC chips). If there's no appropriate GPU, the decoding will use the Intel Quick
Sync Video technology. Otherwise, CPU resources will be used for decoding.
3. If you need to outline objects in the preview window, select Yes in the Detected Objects parameter (3).
4. Set the recognition threshold for objects in percent (4). If the recognition probability falls below the specified value, the data will be ignored. The higher the value, the higher the accuracy, at the cost of sensitivity.
5. Set the interval between the analyzed frames in seconds (5). The value should be within the range of 0.05–30.
6. Set the minimum number of frames with excessive numbers of objects for Neuralcounter to trigger (9). The value should
be within the range of 2 – 20.
Note
The default values (3 frames and 1 second) indicate that Neuralcounter will analyze one frame every second. If
Neuralcounter detects more objects than the specified threshold value on 3 frames, then it triggers.
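The triggering logic described above can be sketched as follows (a simplified illustration; the names are assumptions, not the actual implementation):

```python
def neural_counter_triggers(object_counts, threshold: int, min_frames: int) -> bool:
    """Trigger when at least min_frames analyzed frames exceed the object threshold.

    object_counts: object counts from consecutive analyzed frames
    (one frame per configured interval, e.g. one per second).
    """
    frames_over = sum(1 for count in object_counts if count > threshold)
    return frames_over >= min_frames

# With the defaults (3 frames, threshold exceeded in all three analyzed
# frames), the counter triggers; a single frame below threshold prevents it.
```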
7. Select the processor for the neural network - CPU, one of GPUs, or Intel NCS (6, see Hardware requirements for neural
analytics operation).
Attention!
It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings.
Attention!
If you specify a processing resource other than the CPU, that device will carry most of the computing load. However, the detection tool will consume CPU resources as well.
Note
For correct neural network operation under Linux, place the corresponding file in the /opt/AxxonSoft/AxxonNext/
directory.
Important!
Unlike standard smoke / fire detection systems, smoke and fire software detection tools face many issues with the scene and the background image. We cannot warrant 100% smoke / fire detection. The smoke and fire detection tools are meant to increase the likelihood of detecting smoke / fire. However, there may be both false alarms and failures to detect actual fire / smoke events in the camera's FoV.
We keep improving smoke and fire detection using machine learning based on a neural network.
If the fire / smoke detection tools do not respond to actual fire / smoke events, please record a video clip and send it to AxxonSoft. We will update Axxon Next to refine detection. Help us train the neural network with video feeds from your scene to deliver the best results for your fire security.
Note
These detection tools require Addon Neuro Pack to be installed (see Installing DetectorPack add-ons).
Note
In some cases, when fire is expected to be clearly visible, the fire area may be sufficient at 1-3% of the FoV width /
height.
Attention!
If you set a rectangular recognition area for the detection tool, the limitations apply to this area rather than to the entire
frame (see Configuring Smoke and Fire Detection Tools).
2. Select a processing resource for decoding video streams (2). When you select a GPU, a stand-alone graphics card takes
priority (when decoding with NVidia NVDEC chips). If there's no appropriate GPU, the decoding will use the Intel Quick
Sync Video technology. Otherwise, CPU resources will be used for decoding.
3. Set the interval between the processed frames in seconds (3). The value should be in the [1;30] range.
Note
The default values (5 frames and 10 seconds) indicate that the tool will analyze one frame every 10 seconds.
When smoke (fire) is detected in 5 frames, the tool will trigger.
4. Select the processor for the neural network — CPU, one of the GPUs, or an Intel NCS (4).
Attention!
It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings.
Attention!
If you specify a processing resource other than the CPU, that device will carry most of the computing load.
However, the detection tool will consume some CPU resources as well.
5. Select a neural network file (5). The following standard neural networks for different processor types are located in C:
\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK:
smoke_movidius.ann Smoke detector / Intel NCS
Enter full path to a custom neural network file into this field. This is not required if you use standard neural networks
which are selected automatically.
Note
For correct neural network operation under Linux, place the corresponding file in the /opt/AxxonSoft/AxxonNext/
directory.
6. Set the minimum number of frames with smoke (fire) for triggering the tool (6). The value should be in the [5;20] range.
7. You can experiment with the sensitivity of the tool (7). The value must be in the range [1; 99]. The preview window
displays the sensitivity scale of the detection tool. It is color-coded as follows:
a. Green - smoke (fire) not detected.
b. Yellow - smoke (fire) detected, but not enough to trigger the tool.
c. Red - smoke (fire) detected.
If you increase the sensitivity value, you will get more alerts (red scale).
8. By default, the detection is performed over the full image area. In the preview window, you can set several detection zones by
their anchor points as follows:
a. Right-click anywhere in the Preview window.
b. Select Detection area (rectangle) for a rectangular zone. If you specify a rectangular area, the detection tool will
work only within its limits; the rest of the FOV will be ignored.
c. Select Analytics Area (polygon) to set one or several polygonal zones. If you specify one or several polygonal
areas, the detection tool will process only these areas; the rest of the FOV will be blacked out.
Note
You can configure detection zones similarly to privacy masks in scene analytics (see Setting General
Zones for Scene Analytics).
Important!
You can use trial and error to decide which type of detection area (rectangular or polygonal) is more effective in
your case. Some neural networks give better detection with rectangles, while others are better with polygons.
Attention!
We recommend applying detection in gateway mode: an employee entering a zone where equipment or PPE is
required should wait until the detection is complete (normally 5–10 seconds).
For the detection tool's operation, you need at least two separate neural networks:
• a segmenting network, which structures an image of a human body (locates the head, shoulders, arms, hands, thighs, legs, and
feet);
• a classifying network, which detects equipment (PPE) on a specified body part and checks whether it is properly applied.
Note
The equipment detection tool (PPE) requires Addon Neuro Pack to be installed.
Attention!
We cannot guarantee normal operation of PPE detection with a fisheye camera.
2. Select one or several files for the classifying neural network (2). There must be a separate classifying neural network to
recognize equipment on each body segment.
3. Select a processing resource for decoding video streams (3). When you select a GPU, a stand-alone graphics card takes
priority (when decoding with NVidia NVDEC chips). If there's no appropriate GPU, the decoding will use the Intel Quick
Sync Video technology. Otherwise, CPU resources will be used for decoding.
4. Set the minimum height and width of a person (4, 5) in the frame as a percentage of the frame height/width (0.15 = 15%).
Objects smaller than the specified size will not be detected.
5. Select the processor for the neural network - CPU, one of the GPUs, or Intel processors (6, see Hardware requirements for
neural analytics operation).
Attention!
It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings.
Attention!
If you specify a processing resource other than the CPU, that device will carry most of the computing load.
However, the detection tool will consume some CPU resources as well.
Attention!
To access the neural network, contact technical support https://support.axxonsoft.com/.
7. Select the Mask checkbox to display body segments in the preview window (8).
8. Set the interval between the analyzed frames - Number of frames for analysis - in milliseconds (9). The value should be
within the range of 30 – 10,000.
9. Set the minimum number of frames containing people with no PPE for triggering the tool - Number of measurements in
a row to trigger detection (10). The value should be within the range of 2 – 20.
Attention!
To apply detection in gateway mode (see Functions of the Equipment detection tool (PPE)), we recommend you
to use the detection tool set to default delay (1000 ms) and 3 frames for output.
To apply detection in continuous mode, set the delay to no more than 250 ms, and the number of frames to no
less than 6.
10. By default, each equipment element triggers only once during continuous tracking of a human object. You can enable
multiple triggering by setting the One Event per Equipment Element parameter to No (10).
Note
Example. An individual not wearing a helmet appears in the FoV, puts on a helmet, then takes it off. If the One Event
per Equipment Element parameter is activated, you will get one alarm event; otherwise, two.
11. In the preview window, you can set the detection zones with the help of anchor points much like privacy masks in Scene
Analytics (see Setting General Zones for Scene Analytics). By default, the entire FoV is a detection zone.
12. Click Apply.
Equipment detection tool (PPE) is now configured.
The equipment detection tool (PPE) triggers an alarm when a person not wearing the required equipment (PPE) on specified body
parts, or wearing inappropriate equipment, appears in the FoV.
Note
Person-based privacy masking requires the Add-on Neuro Pack to be installed.
Attention!
We cannot guarantee normal operation with a fisheye camera.
1. If the camera supports multistreaming, select the stream for which detection is needed (1).
2. Select a processing resource for decoding video streams (2). When you select a GPU, a stand-alone graphics card takes
priority (when decoding with NVidia NVDEC chips). If there's no appropriate GPU, the decoding will use the Intel Quick
Sync Video technology. Otherwise, CPU resources will be used for decoding.
3. Select the processor for the neural network - CPU, one of the GPUs, or Intel processors (3, see Hardware requirements for
neural analytics operation).
Attention!
If you specify a processing resource other than the CPU, that device will carry most of the computing load.
However, the detection tool will consume some CPU resources as well.
Attention!
To access the neural network, contact technical support https://support.axxonsoft.com/.
5. Set the interval between the analyzed frames - Number of frames for analysis - in milliseconds (5). The value should be
within the range of 30 – 10,000.
6. Select Yes for all body parts that should be masked within FoV.
7. Click Apply.
Privacy mask configuration is now complete. Selected body parts of all individuals within FoV will be hidden from view.
Man down detection: Triggers an alarm upon detection of a prostrate human within the scene.
Hands up detection: Triggers an alarm upon detection of a human raising one or two hands. A hand is treated as
raised if the arm is parallel to the backbone.
Active shooter detection: Triggers an alarm upon detection of a human with an arm parallel to the ground.
People masking: People Masking is a non-triggering detection tool that blocks individuals' bodies with a solid
color.
People counter: This detection tool counts individuals within a specified area. It triggers when a specified
limit is exceeded.
Handrail holding detection: Triggers an alarm if an individual in a specified part of the scene does not hold any of the
specified handrails.
Close-standing people detection: Triggers an alarm if the distance between two separate individuals in the scene falls below a
specified minimum value.
Note
Pose detection tools require Addon Neuro Pack to be installed (see Installing DetectorPack add-ons).
Attention!
We cannot guarantee normal operation of pose detection with a fisheye camera.
2. By default, the video stream's metadata are recorded in the database. You can disable this by selecting No in the Record object
tracking list (1). Video decompression and analysis are used to obtain metadata, which causes a high Server load and limits
the number of video cameras that can be used on the Server.
3. If a camera supports multistreaming, select the stream for which detection is needed (2). Selecting a low-quality video
stream allows reducing the load on the Server.
4. Select a processing resource for decoding video streams (3). When you select a GPU, a stand-alone graphics card takes
priority (when decoding with NVidia NVDEC chips). If there's no appropriate GPU, the decoding will use the Intel Quick
Sync Video technology. Otherwise, CPU resources will be used for decoding.
5. Set the interval between the analyzed frames in milliseconds (4). The value should be within the range of 30 – 10,000.
Attention!
With static individuals in the scene, set the interval to no more than 500 ms. With moving individuals in the scene, less
than 250 ms is recommended.
The shorter the interval, the higher the pose detection accuracy, at the cost of increased CPU load. For a 1000 ms
interval, accuracy will be no less than 70%.
We recommend setting the appropriate value by trial and error.
6. Select the processor for the neural network - CPU, one of the GPUs, or an Intel NCS (5, see Hardware requirements for neural
analytics operation).
Attention!
If you specify a processing resource other than the CPU, that device will carry most of the computing load.
However, the detection tool will consume some CPU resources as well.
Attention!
It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings.
Attention!
Man-down or sitting-pose detection accuracy may depend on the particular processor. If another selected
processor gives less accurate results, set the detection parameters empirically, and configure scene perspective.
7. By default, the entire FoV is an area for detection. If necessary, you can specify the areas for detection and skip areas in
the preview window. To set an area for detection, right click anywhere on the image, and select a desired area.
Note
The areas are set the same way as for the Scene Analytics (see Configuring the Detection Zone).
10. Set the minimum number of frames with a human in a pose of interest for triggering the tool.
Note
The default values (2 frames and 1000 milliseconds) indicate that the tool will analyze one frame every second.
When a pose is detected in 2 consecutive frames, the tool will trigger.
Note
This parameter is not used for masking settings.
Note
Unlike the Multiple Objects detection tool (see Settings Specific to Multiple objects), the counter generates events of just
one type, namely triggering.
As opposed to Neuralcounter (see Configuring a Neuralcounter), People Counter counts only people.
Attention!
When setting up leveling rods, please make sure that the following conditions are met:
a. all rods have different heights;
b. all rods are based on the same plane (for example, the floor);
c. in complex scenes where detection accuracy is crucial, we recommend adding more than 3 rods;
d. in areas where the picture is noticeably distorted, place the rods parallel to known vertical elements
(doors, bookcases, etc.).
Attention!
In portrait-oriented scenes (such as store / warehouse aisles, etc.), set the rods in triangles.
Note
For your convenience, you can click the button to freeze the video. To unfreeze, click the button once
more.
Note
human figure placed in the same point of the scene. You can resize the rod by stretching its anchor points, or move it
on screen by dragging and dropping.
Attention!
When setting up leveling rods, please make sure that the following conditions are met:
a. all rods have different heights;
b. all rods are based on the same plane (for example, the floor);
c. in complex scenes where detection accuracy is crucial, we recommend adding more than 3 rods;
d. in areas where the picture is noticeably distorted, place the rods parallel to known vertical elements
(doors, bookcases, etc.).
Attention!
In portrait-oriented scenes (such as store / warehouse aisles, etc.), set the rods in triangles.
Note
For your convenience, you can click the button to freeze the video. To unfreeze, click the button once
more.
Attention!
If lens distortion makes the handrail non-linear, use several lines.
4. Set anchor points to specify an area where people must hold handrails (2).
Attention!
Masking function works only for users whose role sets the View masked video parameter to No (see Creating and
configuring roles).
By default, the Close-Standing People detection tool triggers when bounding boxes around individuals collide.
You can set another triggering distance by adjusting perspective:
Attention!
For portrait-oriented scenes (such as corridors, warehouse aisles, etc.) perspective adjustment is a must.
You can resize it by stretching its anchor points, and move it on screen by dragging and dropping.
Attention!
Leveling rods should:
a. not be located on the same line;
b. have different heights.
Note
For your convenience, you can click the button and configure the mask on a still frame / snapshot. To
undo, click this button again.
3. Enter a triggering distance value (1) in meters into the Distance Sensitivity field. The detection tool will trigger if the
current distance becomes equal to or less than this value.
4. Enter the height (in meters) of the leveling rod into the Leveling Rod Height field (2).
Queue detection: Triggers if the specified number of people in the queue is exceeded.
Visitors counter: This detector monitors the number of visitors within the protected area, and triggers if the specified count is exceeded.
Camera • Resolution: 360 x 288 (CIF1) to 720 x 576 (CIF4) pixels; larger images
are scaled down to CIF4.
• Frames per second: 6 or more
• Color: color or greyscale.
• No camera jitter is allowed.
Scene and viewing angle: • Vertically downward position is the best for the purpose. The closer
to vertical, the more accurate counting.
• Camera FOV dimensions: min. 3 x 3m (6 x 6 humans), optimal 4 x 4m
(8 x 8 humans), max. 8 x 8m (16 x 16 humans).
• The background must be primarily static and not undergo sudden
changes.
• Reflective surfaces and harsh shadows from moving objects can
affect the quality of analytics.
• Foliage, TV screens, or any periodic object movement in the
background may cause analytics glitches.
Images of objects within the scene: • Image quality: the image must be clear and sharp with no visible
compression artifacts.
• Dimensions of a human in scene: bounding rectangle has to occupy
0.25 to 10 percent of the frame area.
Camera • Resolution: 720 x 576 (CIF4) or 360 x 288 (CIF1). Using
pixel resolutions higher than CIF4 does not lead to higher recognition
accuracy.
• Frames per second: 25.
• Color: color camera is obligatory.
• No camera jitter is allowed.
Scene and viewing angle: • Vertically downward position is the best for the purpose. The closer
to vertical, the more accurate counting.
• Camera FOV dimensions: min. 2 x 2m, optimal 4 x 4m.
• The background must be primarily static and not undergo sudden
changes.
• The counting area must not contain any moving objects except for
humans.
• Reflective surfaces and harsh shadows from moving objects can
affect the quality of analytics.
• Foliage, TV screens, or any periodic object movement in the
background may cause analytics glitches.
• If possible, avoid obstruction of the humans by static objects such as
pillars, trees, etc.
Images of objects within the scene: • Image quality: the image must be clear and sharp with no visible
compression artifacts.
• Dimensions of a human in scene: the bounding rectangle must
occupy 10–60 percent of the frame area.
Other requirements: • The visitors must not move in a continuous flow; smaller groups of
humans are counted correctly.
2. Set an event transmission interval in seconds for sending data to the AxxonData report subsystem (see Queue length
report) if the queue length exceeds the set limit (2). Zero value means no transmission of events.
3. Select a processing resource for decoding video streams (3). When you select a GPU, a stand-alone graphics card takes
priority (when decoding with NVidia NVDEC chips). If there's no appropriate GPU, the decoding will use the Intel Quick
Sync Video technology. Otherwise, CPU resources will be used for decoding.
4. In the Frame processing period (msec) (4) field, specify the time period in milliseconds between frames analyzed by the
people counter within the area. The value should be within the range of 500 – 3000. The smaller the value of this
parameter, the greater the CPU load.
5. By default, the frame is compressed to 1920 pixels on the longer side. To avoid detection errors on streams with a higher
resolution, it is recommended that compression be reduced (5).
6. Specify the number of people to trigger the alarm when exceeded (6). The value should be within the range of 2 – 20.
7. Specify the detection tool sensitivity in standard units from 0 to 1 (7). The higher the sensitivity, the smaller the
disturbances analyzed by the queue detection algorithm. Conversely, the lower the sensitivity, the greater the changes in
scene that are processed by the queue detection tool.
You should set the sensitivity value empirically based on the Motion Mask data displayed in the Preview window.
8. In the Preview window, you can set the detection zones with the help of anchor points much like privacy masks in Scene
Analytics.
9. Click and set the approximate size of a human. You can do so by dragging the corners of the rectangular area.
Camera "6. Camera". Detection "Queue detection" triggered, queue (min.: 6, max.: 6)
Attention!
The visitor counter is better suited for producing average figures than exact values.
Attention!
We cannot guarantee correct operation of the visitors counter with fish-eye video cameras.
2. Select a processing resource for decoding video streams (2). When you select a GPU, a stand-alone graphics card takes
priority (when decoding with NVidia NVDEC chips). If there's no appropriate GPU, the decoding will use the Intel Quick
Sync Video technology. Otherwise, CPU resources will be used for decoding.
3. By default, the frame is compressed to 1920 pixels on the longer side. To avoid detection errors on streams with a higher
resolution, it is recommended that compression be reduced (3).
4. By default, the detection tool outputs the Camera. Visitor access in the direction of "Entrance" and Camera. Visitor access
in the direction of "Exit" events. If you need to track total footfall and be notified when the number of visitors exceeds a
limit, do the following:
a. Select Yes for the People indoor counter parameter (5).
b. Enter the current number of visitors within the area (4).
c. Set the threshold value for the visitor counter; exceeding this limit will generate the corresponding event (6).
5. In the Preview window, set the detection area. It is divided into two sectors, #1 and #2. When an object moves from #2 to #1,
the system treats it as an entry; conversely, movement from #1 to #2 is treated as an exit.
To configure the sectors, you can drag them by corners or sides.
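The entry/exit logic above can be sketched as follows. This is an illustrative snippet, not Axxon Next code; the function name and the representation of an object's path are invented for clarity:

```python
def classify_crossing(path):
    """`path` is the sequence of sector numbers (1 or 2) that a tracked
    object passes through. Moving from sector #2 into sector #1 counts
    as an entry; moving from #1 into #2 counts as an exit."""
    for prev, cur in zip(path, path[1:]):
        if prev == 2 and cur == 1:
            return "Entrance"
        if prev == 1 and cur == 2:
            return "Exit"
    return None  # the object never crossed between the sectors

print(classify_crossing([2, 1]))  # Entrance
print(classify_crossing([1, 2]))  # Exit
```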
6. Click and specify the approximate size of a human. You can do so by dragging the corners of the rectangular area.
7. Click Apply.
Configuration of the visitors counter is now complete.
2. If the camera supports multistreaming, select the stream to be used for detection. Select a low-quality video stream to
reduce Server load (1).
3. Select a processing resource for decoding video streams (2). When you select a GPU, a stand-alone graphics card takes
priority (when decoding with NVidia NVDEC chips). If there's no appropriate GPU, the decoding will use the Intel Quick
Sync Video technology. Otherwise, CPU resources will be used for decoding.
4. Analyzed frames are scaled down to a specified resolution (3; by default, 1280 pixels on the longer side). This is how it works:
a. If the longer side of the source image exceeds the value specified in the Frame size change field, the image is divided by
two.
b. If the resulting resolution falls below the specified value, it is used further.
c. If the resulting resolution still exceeds the specified limit, it is divided by two again, and so on.
Note
For example, the source image resolution is 2048 * 1536, and the limit is set to 1000.
In this case, the source resolution will be halved twice (down to 512 * 384): after the first halving, the
number of pixels on the longer side still exceeds the limit (1024 > 1000).
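The halving procedure can be sketched as follows. This is an illustrative snippet under the assumptions above, not part of Axxon Next; the function name is invented:

```python
def downscale(width, height, limit=1280):
    """Halve the frame repeatedly until the longer side
    no longer exceeds `limit` pixels."""
    while max(width, height) > limit:
        width //= 2
        height //= 2
    return width, height

# The example from the note: 2048 x 1536 with a limit of 1000
# is halved twice (2048 -> 1024 -> 512).
print(downscale(2048, 1536, limit=1000))  # (512, 384)
```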
5. Choose a processor to run detection on (4). Non-neural detection may be run on the CPU only.
6. Use the neural network if the tool fails to detect water level due to transparency of water. Please follow the steps below:
a. Select Yes for the Neural Network parameter (5).
b. Select the neural network file (6).
7. On the measurement scale (7, 8), set the upper and lower visible limits in normal conditions.
8. Move the anchor points in the FoV:
a. to set the measurement scale;
Attention!
Upper and lower limits of the measurement scale must match the actual settings (see paragraph 4).
b. draw a line to set the critical water level; upon reaching it, the detection tool triggers an alarm;
c. draw a line to set the warning water level; upon reaching it, the detection tool highlights the sensor icon
in the Camera Window in yellow.
9. Click the Apply button.
When you have created a detection tool, you can see a sensor on the layout in the camera window.
If the sensor icon is green, the water level is below both the warning and critical marks. If the icon is yellow, the level is
between the warning and critical marks. A red icon means the level is above the critical mark.
You can also display numerical value of current water levels for the detector (see Configuring display of water level detection).
Note
Some devices may have issues with interdependent embedded analytics. If there is already a relevant detection tool in
Axxon Next, you can add another one, but it will not work.
Attention!
As a rule, a camera requires specifying the temperature threshold, upon reaching which the detection tool would
trigger an alarm.
3. If required, set up macros to perform pre-defined actions upon triggering the detector (see Configuring Macros).
Some cameras are capable of displaying a bounding box over the facial image along with the corresponding temperature readings. If
this option is available, it can be activated via the web interface of the particular camera.
7.4.20.3 ANPR
The VMS can process ANPR data from some cameras' on-board analytics.
Note
Please contact technical support https://support.axxonsoft.com/ for a list of cameras with this feature.
Generally, when configuring the embedded analytics you must follow official documentation for the corresponding video camera
or parameter description in the Axxon Next interface.
Motion Mask
If the camera supports Motion Mask, then when you configure VMD, it will be displayed in the preview window.
If there is motion, but it does not exceed the threshold value (because of the detection sensitivity), the mask cells are colored
green. If motion triggers VMD, the cells turn red.
Note
This option is available for all Axis devices on the following hardware platforms:
• MIPS (ARTPEC-4 and ARTPEC-5 CPUs)
• ARMv7 (ARTPEC-6 and ARTPEC-7 CPUs).
For full list of setup parameters, refer to the AxxonSoft Tracker help.
All CPU heavy analytics and metadata generation tasks are delegated to the camera in this case.
To do this:
1. Go to the device's web interface.
2. Select the Setup menu (1) -> Applications (2).
3. Select the ACAP application (3) and click Upload the Package (4).
Important!
To obtain the application, contact the AxxonSoft help desk. You will be given a license code, which must be registered along with
the camera's MAC address on the Axis website in order to get the license file.
10. Configure the tracker and create the required detection tools.
Note
You can configure the AxxonSoft Onboard tracker object similarly to the Scene Analytics tracker (see General information on Scene Analytics).
You cannot configure perspective for solutions based on Axis devices.
Perform the following actions for the Input detection tool on the Detection Tools tab:
1. Check the triggering of the detection tool with the help of the Triggers ribbon (optional) (see the section Checking the
Triggering of a Detection Tool).
2. Set the rules to be automatically executed when the detection tool is triggered (see the section titled Automatic Rules).
2. Click the button and select the detection tools to apply the same settings to.
Attention!
Detection zones cannot be changed by bulk configuration.
This opens the list of detection tools of the same type in the current Axxon domain. To select multiple detection tools, hold
down the Shift key and select the first and last tools the settings should be applied to. Clicking any of the highlighted tools
will then select them all.
Note
The number in brackets refers to the number of configured detection tools.
Attention!
The Detection Tool object should be enabled and configured.
2. Produce an event whose occurrence should trigger the detection tool: motion in the frame, turning the video camera,
providing sound to an audio device, etc.
3. If the detection tool is configured correctly, video image frames from the video camera corresponding to the detection
tool will be displayed on the trigger ribbon with the time they were received indicated.
The VMS registers when detection tools start and stop triggering. This is not applicable to the detection of quality loss,
position change, object disappearance, motion stop, and ANPR.
After the end of triggering, you get the Finished message.
Configuring automatic rules and their mode of operation is the same as configuring macros (see Configuring Macros).
Created automatic rules are displayed in the corresponding list under the Programming tab.
Note
When you create the Record to archive automatic rule, the recording stops when VMD (see Configuring Scene Analytics
Detection Tools) triggering stops.
Attention!
You can apply macros within a single Axxon domain only. Macro conditions and actions cannot include objects from
another Axxon domain.
Attention!
If an event occurs while the cyclic macro is busy, it is skipped.
If an event occurs while the event-driven macro is busy, it is processed as configured.
Unless the macro has standby commands (see Wait for event, Wait for timeout, Wait till previous action finishes), all commands
are performed simultaneously.
Then do as follows:
1. Enter the name of the macro (1).
Never: Disabled; manual execution is possible (see Working with Dialog Board).
Time schedule: Runs within the specified time schedule (see Creating schedules). Can also be initiated by the user at any
time.
3. If you need to include a macro in the control menu on the current layout, select the corresponding check box (3, see
Macros control).
4. To configure event-driven macros, click and select one or more trigger events (4, see Configuring filters for event-
driven macros).
5. Add one or more actions into the macro (5). Click the button to do this. When the macro starts, all actions are
performed simultaneously.
Note
To hide the start conditions and action for the macro, click the action name.
6. A cyclic macro can be launched at a specified time interval, or at a random moment within the specified time interval. To
configure this action, do as follows:
a. In the Heartbeat Interval field, specify a time interval in HH:MM:SS format (1). For example, if you set the interval
to 8 hours and leave the Random checkbox (see 6b) unselected, the macro will be launched every 8 hours. The
macro will be launched according to the cycle settings even if the actions from its previous launch are not complete. In
this case, several instances of the same macro will be executed simultaneously.
b. In the Launching Interval field, specify a time interval in HH:MM:SS format (1). For example, if you set the interval
to 8 hours and select the Random checkbox, the macro will be launched at a random moment within each 8-hour interval.
Attention!
If the macro is linked to a time schedule, and the random moment falls out of schedule, no launching
occurs.
The macro configuration is complete. Created macros are listed. If a macro is disabled (Never mode), it is grayed out.
You can copy macro commands. To do it, follow the steps below:
1. Select the macro to copy.
2. Click Create.
This creates a new identical macro.
Note
To create an empty macro command with no parameters specified, select any of the common macros' groups, and click
Create.
The following events are available for selection:
Server connected
Recording stopped
Camera armed
Camera disarmed
Alarm initiated
Alarm skipped
Alarm processed
Connected
Disconnected
Signal lost
Signal restored
You can set a threshold for this type of event: the time in seconds (0 to
100) between switching from the Signal Lost to the Signal Restored state. For
example, setting a 10-second threshold for the Signal Lost condition means
the macro is triggered only if the time interval between the last Signal
Restored event and the new Signal Lost event is no less than 10 seconds.
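The threshold check described in this example can be sketched as follows. This is a hypothetical illustration, not Axxon Next code; the function name and the use of plain timestamps in seconds are assumptions made for clarity:

```python
def macro_triggers(restored_at, lost_at, threshold_sec=10):
    """Trigger the macro only if the new Signal Lost event occurs at least
    `threshold_sec` seconds after the last Signal Restored event.
    Timestamps are plain seconds for illustration."""
    return (lost_at - restored_at) >= threshold_sec

# Signal lost 15 s after restoration -> triggers; 5 s after -> does not.
print(macro_triggers(restored_at=100, lost_at=115, threshold_sec=10))  # True
print(macro_triggers(restored_at=100, lost_at=105, threshold_sec=10))  # False
```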
Triggering end
End
Event source: In the Description field for the Event Source object, you must
specify the trigger word or phrase. When it comes up in
the captions, the macro starts.
For example, this filter triggers the macro when the
word "Beer" appears in the captions.
If a macro is triggered by a simultaneous combination
of words and/or values, use braces for logical AND. For
example, {Beer} {Belgium}.
2. Click .
3. Specify a required parameter value in the corresponding field (1).
Attention!
The parameter is case-sensitive.
4. In the Value (2) and Max value (3) fields, set the range of parameter values within which the macro will be triggered.
Note
Examples.
If you set Age to [18, 100], the macro will be triggered only if the detection tool returns the age value of 18 or
more.
If you set Gender to [1, 1], the macro will be triggered only if the detection tool returns the individual's gender as
female.
If you set Gender to [2, 2], the macro will be triggered only if the detection tool returns the individual's gender as
male.
If you set TemperatureValue to [37, 100], the macro will be triggered only if the detection tool returns the
temperature value of 37 or more.
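The range checks in these examples can be sketched as follows. This is an illustrative snippet, not Axxon Next code; the function name is invented, and the gender codes follow the examples above:

```python
def within_range(value, lo, hi):
    """The macro fires only when the parameter value returned by the
    detection tool falls inside [lo, hi], inclusive at both ends."""
    return lo <= value <= hi

# Age set to [18, 100]: an age of 21 fires the macro.
print(within_range(21, 18, 100))   # True
# Gender set to [2, 2] (male): a returned value of 1 (female) does not.
print(within_range(1, 2, 2))       # False
```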
5. Add one or more actions into the macro (see Settings specific to actions).
6. Click the Apply button.
Disk usage SERVER1@C:\ where SERVER1 is the name of a server within the Axxon domain, C:\ is the volume name.
Important! You cannot monitor storage capacity of a disk fully allocated for Archive.
Memory usage (RAM) The name of a server within the Axxon domain.
Leaving: The macro is triggered if a parameter value goes out of the specified range [Value - Delta; Value].
Rising: The macro is triggered if the parameter value exceeds the threshold specified in the Value field.
Falling: The macro is triggered if the parameter value falls below the threshold specified in the Value field.
Attention!
The triggering conditions apply to future events, not to the current state. For instance, if the current CPU load is
85% and you set the triggering condition to exceeding 80%, the macro is launched only the next time the CPU load
exceeds 80%.
6. If necessary, you can set several event and/or statistical conditions for triggering macros.
2. Break command - here you enter one or more events that override the Wait command. If you do not specify events, the
time-out applies.
3. If necessary, select and configure the action to perform when an event from Break Command occurs. A new Awaiting
instance is also an option.
4. If necessary, select and configure the action to perform if none of the events that were set in Break Command occurred
during time-out. A new Awaiting instance is also an option.
For example, this macro is conditioned by the Motion detected event on Camera 1 (1).
When it occurs, the macro continues. This also starts recording (2). Further macro actions are executed, if any.
If this event does not occur, the time-out is 4 minutes (3). After this time, a sound notification (4) plays.
Attention!
The Awaiting command does not affect the commands below (outside of) it.
For example, when performing that macro, an alarm will be initiated in the system (1), and then after 10 seconds (2) - an audio
alert (3).
2. Specify the action to perform if the previous command was completed within the specified timeout (2).
3. Specify the action to perform if the previous action was not completed within the specified timeout (3). If the timeout is
00:00:00, this setting is not applicable.
Attention!
This command does not affect the commands below (outside of) it.
Example: In this macro, replication (1) and the program on the client (5) start at the same time. If replication is completed within
10 minutes (2), an Email message (3) is sent. Otherwise, a voice alert (4) is played.
1. Selecting a camera or group of cameras for recording (1). An implicit selection of a video camera is also allowed - Camera
that initiated command execution.
Attention!
If the start of the macro was triggered by the activation of input or output (see Configuring filters for event-driven
macros) that is not connected to any camera, you need to select a specific camera here. If you select a group of
cameras or a camera that triggered the command, the action will not start.
Timer (3): Only a timer value is set. Recording stops according to the time setting.
Events Filter (5): One or several events are set; the timer is set to 00:00:00. Recording stops on any IF event.
Timer (3) + Intelligent event filter (IF) (5): One or several events are set; the timer is set to non-zero. Recording stops
when the set time interval expires after any IF event (trigger) occurs.
Timer (3) + OR flag (4) + IF (5): One or several events are set; the timer is set to non-zero; the OR flag is set. Recording
stops on either of the two conditions: after the time interval expires or when an IF event (trigger) occurs.
Note
The OR flag can be set only if the timer setting is not 00:00:00.
Note
An implicit selection of an event is allowed - Last event for condition that initiated execution.
For example, if the event that triggered the execution of the command was the Start time of detection tool
trigger from any type of detection tools, then the end event will be the End time of detection tool trigger from
the same tool.
4. Set the pre-alarm recording time (6). The maximum pre-alarm recording time is 30 seconds.
Attention!
By default, the pre-alarm recording time interval is set to the value specified in Archive settings (see Binding a
camera to an archive).
The longest pre-alarm recording time available in the Archive settings is used.
Changing this value in a specific macro does not affect the Archive settings.
5. If you have cameras in continuous recording mode (see Binding a camera to an archive) and you want to record at a
specified fps (see example 2), enter the required frame rate (7). After the macro command completes, recording
resumes at the frame rate specified in the archive settings.
You can use special codes:
• 0: do not change the current frame rate (default);
• -1: record only I-frames;
• 1000: record at the standard fps of the camera.
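A small sketch of how these special codes map to recording behavior (hypothetical helper, not product code):

```python
def frame_rate_action(code):
    """Interpret the special frame-rate codes from the macro settings."""
    if code == 0:
        return "keep the current frame rate"
    if code == -1:
        return "record I-frames only"
    if code == 1000:
        return "record at the camera's standard fps"
    # Any other positive value is treated as a custom fps
    return "record at {} fps".format(code)

assert frame_rate_action(0) == "keep the current frame rate"
assert frame_rate_action(-1) == "record I-frames only"
assert frame_rate_action(1000) == "record at the camera's standard fps"
```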
Attention!
FPS for pre-recording will not change.
If the macro command is set to complete some time after a specified event has been received (see p. 3), the
frame rate also changes during this Check-in Event-time interval.
Attention!
When you reduce the frame rate by dropping frames, only I-frames (Intra-Coded Frames, or key frames) are saved
in all video streams except MJPEG, so use the codes -1 and 1000. Setting a custom frame rate will drop some key
frames.
MJPEG video contains only I-frames (intra-coded pictures with a complete image), so a custom frame rate makes
sense for this format.
Example 1. A macro-command to initiate VMD-triggered recording to the Archive from any camera within Default Axxon domain.
Example 2. All video cameras from the Default Axxon Domain are set to continuously record video at the specified fps by
dropping frames (see Binding a camera to an archive). When you have a motion detection event, you need to switch to full fps
recording. Configure the following macro to do so:
2. Selecting a camera or group of cameras (2). An implicit selection of a video camera is also allowed - Camera that
initiated command execution.
Attention!
If the start of the macro was triggered by the activation of input or output (see Configuring filters for event-driven
macros) that is not connected to any camera, you need to select a specific camera here. If you select a group of
cameras or a camera that triggered the command, the action will not start.
3. If you selected a group of cameras or an Axxon domain at the previous step, you can select the Random (3) checkbox to
initiate an alarm on a random camera from this group/domain.
4. Select an archive to write to (4).
5. In the Alarm flag position field, enter the number of seconds by which the alarm flag will be shifted back from the event
time that started the macro (5).
Note
If the alarm flag position is set, the event footage plays from the moment corresponding to the flag's position, and not
from the alarm start.
2. Outputs switch back after On-time (2), or after Check-in Event-time (3) for any specified events.
Attention!
If the start of the macro was triggered by the activation of input or output (see Configuring filters for event-driven
macros) that is not connected to any camera, you need to select a specific camera here. If you select a group of cameras
or a camera that triggered the command, the action will not start.
1. PTZ (1) - select a PTZ unit. Any pan/tilt positioners / PTZ cameras can be used, including those from other Servers (if
they are online).
2. Preset (2) - select the camera preset to go to, when the macro starts.
3. Speed (3) - panning speed. This value should be in the range [1, 100].
Note
In this software version, you cannot select a group of cameras for this action.
Zoom in camera: The camera tile is highlighted on the layout; the viewing tiles take up 98% of the screen.
Zoom in and show map: The camera tile is highlighted on the layout; the viewing tiles take up 50% of the screen; the
map under the tiles shows the camera.
Switch to immersion mode: Immersive mode; the viewing tiles take up 50% of the screen.
Switch to Archive Mode: The camera window is highlighted on the layout in archive mode. If a group of cameras is
specified, a layout is created with all the cameras in archive mode.
3. If the required layout does not yet exist, the system creates a new layout with a single video camera.
4. The system switches to the selected layout.
5. The specified process is running.
2. Camera (2) - select a camera for export. An implicit selection of a video camera is also allowed - Camera that initiated
command execution.
Attention!
If the start of the macro was triggered by the activation of input or output (see Configuring filters for event-driven
macros) that is not connected to any camera, you need to select a specific camera here. If you select a group of
cameras or a camera that triggered the command, the action will not start.
Image export: Exports a snapshot with a time stamp identical to the action start time. Important! The image cannot
be exported if the video camera has no archive.
During: Set the export duration in HH:MM:SS. The starting point of the exported video is the command start; the end
point is defined by the specified duration (interval [command start; command start + duration]).
Finish after: Select one or more events that stop the export. The starting point of the exported video is the command
start; the end point is the moment any of these events is received.
Attention!
You can use the following templates for file names and text comments:
• %startTime%, or [START_TIME], or {startTime}: the start time of the exported interval.
• %finishTime%, or [FINISH_TIME], or {finishTime}: the end time of the exported interval.
You may use the following templates for macros triggered by a text message from an event source
(see Configuring filters for event-driven macros):
• %startEvent%, or [START_EVENT], or {startEvent}: the event that triggered the export.
• %finishEvent%, or [FINISH_EVENT], or {finishEvent}: the event that stopped the export.
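As an illustration, template expansion for a file name can be sketched like this (hypothetical helper; the product performs the substitution internally with the real interval times):

```python
def expand(template, values):
    """Replace %name% and {name} placeholders with their values."""
    for key, val in values.items():
        for pattern in ("%{}%".format(key), "{{{}}}".format(key)):
            template = template.replace(pattern, val)
    return template

values = {"startTime": "2020-10-04 18:43:23", "finishTime": "2020-10-04 18:45:00"}
name = expand("export_%startTime%_{finishTime}.mp4", values)
assert name == "export_2020-10-04 18:43:23_2020-10-04 18:45:00.mp4"
```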
Attention!
If the start of the macro was triggered by the activation of input or output (see Configuring filters for event-driven
macros) that is not connected to any camera, you need to select a specific camera here. If you select a group of
cameras or a camera that triggered the command, the action will not start.
Whole period: All missing video recordings made prior to the command start time are copied.
Offline periods: The system copies footage recorded to the embedded storage (camera's SD card) during offline
periods (between consecutive Signal lost and Signal restored events). If no Offset parameter (see paragraph 5) is
specified, the replication covers offline footage recorded during the last 24 hours.
During: Set the replication duration in HH:MM:SS. Video recordings from the period [action start, action start +
duration] are copied. An additional Offset parameter can be specified as needed (see paragraph 5).
Finish after: Select one or more end events. All video recordings between the start point (when the action starts) and
the end point (when the event is received) are copied. Important! Replication does not start until the end event is
received.
5. Specify the Offset (4) if you specified Duration for replication. Video recordings from the period [action start -
offset, action start + duration - offset] are copied.
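The offset arithmetic can be checked with a quick sketch (hypothetical values for action start, duration, and offset):

```python
from datetime import datetime, timedelta

# Hypothetical values: the action starts at 12:00, duration 10 min, offset 30 min.
start = datetime(2020, 2, 11, 12, 0, 0)
duration = timedelta(minutes=10)
offset = timedelta(minutes=30)

# Replicated period: [action start - offset, action start + duration - offset]
period = (start - offset, start + duration - offset)
assert period == (datetime(2020, 2, 11, 11, 30), datetime(2020, 2, 11, 11, 40))
```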
To run replication on schedule (corresponding to Time schedule 1), configure the macro as follows:
Attention!
To make the client-side audio playback possible, you have to create a Speaker object allowing Play on Server playback
mode (see The Speaker Object).
2. In the Audio file field (2), enter the full path to the audio notification file.
3. Configure a condition (IF event, trigger) that will cancel the notification:
Condition Description
Timer (3): Only a timer value is set. Alerts are cancelled according to the time setting.
Events Filter (5): One or several events are set; the timer is set to 00:00:00. Alerts are cancelled on the IF event
(trigger).
Timer (3) + Intelligent event filter (IF) (5): One or several events are set; the timer is set to non-zero. Alerts are
cancelled in a given time after the selected IF event (trigger) occurs.
Timer (3) + OR flag (4) + IF (5): One or several events are set; the timer is set to non-zero; the OR flag is set. The alert
is cancelled on either of the two conditions: after the time interval expires or when an IF event (trigger) occurs.
Note
The OR flag can be set only if the timer setting is not 00:00:00.
Note
An implicit selection of an event is allowed - Last event for condition that initiated execution.
For example, if the event that triggered the execution of the command was the Start time of detection tool trigger
from any type of detection tools, then the end event will be the End time of detection tool trigger from the same tool.
Attention!
Connection to AxxonNet is required to receive alerts by email (see AxxonNet Setup and Operation).
Note
Multiple email addresses can be specified. Separate them with a comma (,) or semicolon (;).
Note
Notifications will be sent at the address you specified when configuring the E-mail Message object (see The E-
mail Object).
3. E-mail subject (3) - enter the subject of the email notification.
4. E-mail text (4) - enter the text of the email notification.
Note
You can use templates to build a message body (see Text templates in macros).
5. If necessary, you can attach exported videos or snapshots to your message. Click to add and configure additional
parameters. Configuration of these parameters is identical to configuration of export (see Starting export).
Note
If the Period is not specified, the snapshot is sent. You can set the format of video / snapshot export in the
Export agent settings (see Configuring an Export Agent).
6. If a macro is launched from a group of cameras (see Configuring filters for event-driven macros), you can send frames
from all cameras as attachments in a single email message. To do so, select the All in one Email checkbox (6). If this
checkbox is not selected, camera images are sent in separate emails.
2. Activate the E-mail object (2) by selecting Yes in the Enable list.
3. In the Name field (3) enter the desired name of the E-mail object.
4. Select the mode for sending e-mail alerts: through AxxonNet or through the specified SMTP server (4).
Attention!
To send alerts via AxxonNet, you should connect to it (see AxxonNet Setup and Operation).
AxxonNet has a limit of 10 messages a day.
Note
Messages via AxxonNet are sent within one minute.
5. Configure the SMTP server, if this mode has been selected (5):
a. In the SSL certificate field, specify the path to the SSL certificate file, if you use this protocol.
b. If you need to use an SSL-encrypted connection when connecting to the outgoing mail server, select Yes from
the Use SSL list.
c. In the Name field, enter the name of the user account used to send messages on the outgoing mail server.
d. In the Password field, enter the password for the user account on the outgoing mail server.
e. In the Port field, enter the number of the port used by the outgoing mail server.
f. In the Outgoing mail server field, enter the name of the outgoing SMTP mail server.
g. In the From field, enter the e-mail address from which the messages will be sent (6).
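For orientation, an equivalent message assembled with Python's standard email tools (hypothetical addresses and server; the product builds and sends the message itself):

```python
from email.message import EmailMessage

# Hypothetical message mirroring the E-mail object settings above.
msg = EmailMessage()
msg["From"] = "alerts@example.com"                       # the From field (step g)
msg["To"] = "operator@example.com; admin@example.com"    # semicolon-separated recipients
msg["Subject"] = "Axxon Next notification"
msg.set_content("Motion detected on Camera 1")

# With the SMTP server mode selected, delivery would use smtplib, e.g.:
#   with smtplib.SMTP_SSL("smtp.example.com", 465) as s:  # server, port (steps e, f)
#       s.login(user, password)                           # account (steps c, d)
#       s.send_message(msg)
assert msg["Subject"] == "Axxon Next notification"
```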
Attention!
When using e-mail notification, mail servers may lock the user account or deny authentication in some
cases. We recommend that you turn off all security parameters in your e-mail account.
Note
Several email addresses can be specified. Separate them with a semicolon (;).
When you do this, the following message is sent to the e-mail address indicated in the Recipients field (see the section
Configuring the E-mail Object): "Test message"
Note
If the recipient does not receive the message, make sure that the settings of the E-mail object have been properly
configured.
Attention!
The number of characters in a message is limited to:
• 160 ASCII characters;
• 70 Unicode characters.
If the limit is exceeded, a multi-part text message is transmitted.
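A simplified estimate of how many parts a message splits into (this sketch ignores the reduced per-part limits that real concatenated SMS uses):

```python
import math

def sms_parts(text):
    """Estimate message parts: 160 chars per part for ASCII, 70 for Unicode."""
    limit = 160 if text.isascii() else 70
    return max(1, math.ceil(len(text) / limit))

assert sms_parts("A" * 160) == 1   # fits in a single ASCII message
assert sms_parts("A" * 161) == 2   # over the limit: multi-part
assert sms_parts("П" * 70) == 1    # Unicode text: 70-character limit
```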
Note
You can use templates to build a message body (see Text templates in macros).
Note
Several phone numbers can be specified. Separate them with semicolon (;).
Note
Notifications / alerts will also be sent to the phone numbers you specified when configuring The SMS Object.
Attention!
To use SMS notification, you need a modem recognizable by the OS as a COM device. No other types of modems can be
used for this purpose.
For example, the following modem types are supported:
1. Siemens TC-35.
2. Flyer U12 (Windows 7 and higher).
Other modems may or may not work. We recommend that you check the supported Windows versions for each
particular device. Carrier-locked modems are not recommended.
Note
If a USB modem is used to send SMS messages, use the modem utility from the modem software bundle. It will unlock
the modem for correct operation.
3. Make sure that the number of the SMS center is shown. Do not connect to the Internet.
4. Start the Server and Client. Create and configure an SMS object.
2. Activate the SMS object (2) by selecting Yes in the Enable list.
3. In the Name field (3) enter the desired name of the SMS object.
4. In the To field (4), enter the cellular telephone number, in international format (+<country code>xxxxxxxxxx), to which
messages will be sent.
5. In the SerialPort settings group (5), indicate the port settings used to connect to the GSM modem by which SMS
messages will be sent:
a. If you need to use a DTR control signal, select Yes from the DTR list.
b. In the Bits field, enter the number of bits in the byte of a data packet.
c. In the Stop bits length field, enter the number of stop bits of a data packet.
d. If you need to use a parity check when transmitting data, select the desired method of parity check from the Parity
list.
e. From the Port list, select the serial port used to connect to the GSM modem.
f. If hardware control of the serial port data protocol is enabled (see step 5.8) and you need to use an RTS signal,
select Yes from the RTS signal list.
g. Select the speed for data transmission via the GSM modem from the Baud rate list.
h. If you need to control the serial port data protocol, select the desired method of control from the Handshaking
list: hardware (RTS/STS), software (XOn/XOff), or alternating.
6. Click the Apply button.
Note
If the recipient does not receive the message, make sure that the settings of the SMS object have been properly
configured.
Attention!
The external program is not started on an Axxon Next Server computer if the Client is not running on that computer
when the macro is triggered.
Attention!
To run the program, you need administrator permissions. You have to disable UAC (in Windows Server 2012,
Windows 8, and 8.1 this requires editing the registry), or start Axxon Next with administrator rights.
Attention!
Running software with a GUI on the Server is not recommended. If you encounter a problem launching
interactive services, please refer to the Windows OS user manual.
Note
For the Failover Server and Client installation type (see Installation), you have to allow the NGP RaFT supervisor
service to interact with the desktop.
2. Add to folder <Directory where Axxon Next is installed>\UserScripts\ one or more .bat files with the application startup
command.
The command should include a path to the executable file. You can specify a network path and command-line options
(see Starting an external program on Clients).
3. Select the server where you want to run the program (1).
SET "datatime=%1"
SET "cameraIpAddress=%2"
Example 2: Exporting camera connection status events (offline/online) to CSV, using a .bat file containing the following query:
SELECT "timestamp",
  REGEXP_REPLACE("object_id", 'hosts/', '') AS device,
  CASE
    WHEN ("any_values"::json->>'state') = '4' THEN 'Signal Lost'
    WHEN ("any_values"::json->>'state') = '3' THEN 'Signal Restored'
    ELSE ''
  END AS state
FROM public."t_json_event"
WHERE type = '0' AND ("any_values"::json->>'state'='3' OR "any_values"::json->>'state'='4')
  AND timestamp >= '20200211T0000'
ORDER BY timestamp DESC
Example 3: Exporting detection tool triggering events to CSV, using a .bat file containing the following query:
SELECT "timestamp",
  REGEXP_REPLACE("object_id", 'hosts/', '') AS device,
  CASE
    WHEN ("any_values"::json->>'phase') = '1' THEN 'Closed'
    WHEN ("any_values"::json->>'phase') = '2' THEN 'Opened'
    ELSE ''
  END AS state
FROM public."t_json_event"
WHERE type = '1' AND timestamp >= '20200209T110000' AND "object_id" LIKE '%ray%'
ORDER BY timestamp DESC
Note
To stop a slideshow of layouts, left-click any camera window. After the Client restarts, the slideshow resumes.
Attention!
If the start of the macro was triggered by the activation of input or output (see Configuring filters for event-driven
macros) that is not connected to any camera, you need to select a specific camera here. If you select a group of
cameras or a camera that triggered the command, the action will not start.
2. Select an archive where you want to check if recorded video is available (2). If you leave the field empty, all recorded
video in the camera's archives is checked.
3. Specify how far back in the past to scan (3). The verification time period covers: [the time of the action start - (minus) the
depth of the check; start time of the action].
4. Select a reaction if the archive entries are found (4).
5. Select a reaction if the archive entries are not found (5).
Note
If E-mail or SMS notification is selected as a reaction and a group of cameras was selected, the message indicates
the cameras that have no entries in the archive for the specified period.
You can also use two special-purpose templates in the message:
{failureRecordCheck}: failed verification of a record in an Archive (format: Server Name|Camera Name|Archive Name).
{successRecordCheck}: successful verification of a record in an Archive (format: Server Name|Camera Name|Archive
Name).
Important! For proper execution of the macro, synchronize the time across all Servers within the Axxon domain.
Attention!
If a Client resides behind NAT, you have to specify the external IP address of the switch for this Client, with a port
range larger than 1000 (see Network settings utility).
Attention!
To make the client-side audio playback possible, you have to create a Speaker object allowing Play on
Clients playback mode (see The Speaker Object).
c. Select the loudspeaker where you want to cancel audio alerts (1).
d. If you need to cancel alerts after a certain amount of time, set the required time interval (2).
Attention!
If the start of the macro was triggered by the activation of input or output (see Configuring filters for event-driven
macros) that is not connected to any camera, you need to select a specific camera here. If you select a group of
cameras or a camera that triggered the command, the action will not start.
2. Select the query type (2). Four types are available: POST, GET, PUT, DELETE.
3. Select HTTP or HTTPS Server protocol (3).
4. Enter the IP address of the Server (4).
5. Enter the port number of the Server (5).
6. Enter the user name (6) and password (7) to be used for automatic authorization.
7. Enter query string (8).
8. For a POST query, enter its body (9).
Note
You can use templates to build a query body (see Text templates in macros).
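Steps 2 through 8 describe an ordinary HTTP request; for orientation, an equivalent request assembled with Python's standard library (the address, port, path, and body here are hypothetical placeholders):

```python
import json
import urllib.request

# Hypothetical request mirroring the macro settings above.
body = json.dumps({"camera": "{cameraId}", "time": "{dateTime}"}).encode()
req = urllib.request.Request(
    url="http://192.168.0.10:8000/notify",  # protocol (3), IP (4), port (5), query string (8)
    data=body,                               # POST body (9); templates are expanded by the Server
    method="POST",                           # query type (2)
)
req.add_header("Content-Type", "application/json")
# urllib.request.urlopen(req) would send the request; the user name and
# password (6, 7) would be supplied for automatic authorization.
assert req.get_method() == "POST"
```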
Attention!
The {age} and {gender} templates can be applied when the following conditions are met:
a. The Gender and Age parameter is activated in facial detection tool settings (see Configure Facial
Recognition).
b. The Face Appeared: Specified Triggering event is selected as a launch condition for this macro.
Attention!
You can apply statistics templates only if you launch a macro by a corresponding statistical condition (see
Triggering macros by statistical data).
Note
Templates can be written with either braces or percent signs: for example, {cameraId} or %cameraId%.
Note
Date/time templates (such as dateTime, serverDateTime, appearedTime and serverAppearedTime) offer an extended
input option which allows you to set date and time in arbitrary format. A format description parameter must be
contained within a pair of @ symbols.
Here's an example: {dateTime@%Y-%m-%d %H:%M:%S@}. In this case, the output is rendered as 2020-10-04
18:43:23.
Available parameters:
%F: fractions of a second
Combined parameters:
%D: equivalent to %m/%d/%y
%T: equivalent to %H:%M:%S
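These specifiers follow the familiar strftime convention, which can be checked in Python (note that %F carries a product-specific meaning here, fractions of a second):

```python
from datetime import datetime

ts = datetime(2020, 10, 4, 18, 43, 23)
# The template {dateTime@%Y-%m-%d %H:%M:%S@} renders like a strftime call:
assert ts.strftime("%Y-%m-%d %H:%M:%S") == "2020-10-04 18:43:23"
# %D (= %m/%d/%y) and %T (= %H:%M:%S) are the combined parameters from the
# table above; their availability in strftime is platform-dependent.
```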
E.g. this macro sends an email of the following format when a water level detection tool triggers:
Example 1. This macro runs continuously (1), starting from the moment that you saved it.
The cycle consists of alternating layouts with marked cameras (2) and layouts with alarmed cameras (4). The interval
is 1 minute (3, 5).
Example 2. This macro runs continuously within the time schedule 1 (1).
Every hour (5) video from camera 1 is exported (2). After the export is completed (3), an audio alert is sounded (4).
Note
Only admin users can create other admin role users.
Roles and users can be added and configured in Settings, on the Users tab.
There are two types of users: local (stored in the Server database) and LDAP (see Connecting LDAP users).
To enable LDAP users, you must configure access to LDAP catalogs.
The actions of all system users are recorded in the system log (see The System Log).
Note
The following user actions are logged:
• Client started/quit
• Settings for hardware, archive, or detection tool are deleted/added or changed
• Macros are created, deleted, or changed
• User permissions are added, deleted, or changed
• Camera alarm is initiated
• Camera is armed/disarmed
• Create / edit comments
• Snapshot or video is exported
• PTZ
In all user-specific events, the user IP address is indicated.
The new role is added in the system, with its properties displayed on the right side.
3. Select priority of PTZ cameras control for users with the current role.
4. Select priority of map access for users with the current role (2).
Access level Description
5. If you need to limit access of users of a given role to all system archives, you can specify the retention time (archive depth)
limit in the Archive depth viewing restriction field (3). If no limit is set, users may access the entire Video Footage.
6. Set access permissions for Axxon Next features (4). Access can be restricted to such features as:
a. Archive search (see Video surveillance in Archive Search mode).
b. Adding a camera to a layout in live video mode (see Adding cameras to cells).
c. Adding and editing presets for PTZ cameras (see Creating and editing presets).
d. Alarms Management (see Video surveillance in Alarm Management mode).
No access: users have no access to alarm videos. View only: users can view alarm videos but cannot evaluate
alarms. Full access: users can view alarm videos and evaluate alarms.
e. Creating comments (see Operator comments) and protected records in Video Footage (see Protecting video
footage from FIFO overwriting).
Access level Description
Create / Protect / Edit / Delete: Add comments to archives, create and edit protected records.
f. Removing videos from the footage archive (see Delete a part of an archive).
g. Allow unprotected export (see Frame export, Standard video export). Set No to require setting a password for
exported images and video.
h. Exporting snapshots and video recordings (see Exporting Frames and Video Recordings).
i. Editing layouts (see Configuring Layouts).
j. Minimizing the Client to the system tray (see Interface of the Axxon Next Software Package).
k. Managing an Axxon domain (see Axxon Domain operations).
l. Access to the Web server (see Working with Axxon Next Through the Web Client).
m. Displaying captions (see Viewing titles from POS terminals).
n. Show faces (see Masking faces).
o. Viewing the system log (see The System Log).
p. Context menu of a video camera in a viewing tile (see Viewing Tile Context Menu).
q. View masked video (see Setting up privacy masking in Video Footage).
7. Configure access rights to the Settings tabs and to system error messages (5).
Attention!
If you set the User Permission Settings parameter to Device access rights only, all users of the given role will
be permitted to change only access rights to connected devices.
Note
Error messages are displayed in real time in the Layouts interface
8. Set access permissions for Layouts in Axxon Next (6). This setting is related to both primary and web clients.
9. Set the parameters to apply the four-eyes principle (7):
a. If the administrator has to confirm the launch of export for users of this role, select the appropriate role in the
Supervisor for access to export list.
b. If the administrator has to confirm the login of users of this role, select the corresponding role in the Supervisor
for authorization in client list.
10. If you need to grant the users in this role permissions only for a certain period of time, select a Time schedule (8) from the
list. These users will not be able to use their permissions outside of the selected time schedule.
11. Configure rights to manage connected Clients' monitors by setting permissions for each Server on an Axxon domain (9). A
user who has management permissions for the monitors of a particular Server can manage monitors of any Client
connected to that Server.
12. Set permissions for access to hardware and archives on an Axxon domain (10).
Device Access permissions Description
Live in Armed mode: You can view video from the camera only when the camera is armed.
Live: You can view live video from the camera. Other functions and device configuration are not available.
Live/Archive: You can view live and recorded video from the camera. You cannot arm/disarm/configure the camera.
Microphone, No access: The user cannot listen to live sound from the video camera or to sound recordings from the
archive.
Microphone, Live Audio: The user can listen to live sound from the video camera (the microphone must be turned
on), but cannot listen to sound recordings from the archive.
Minimum / Low / Medium / High / Maximum level: The user can control the PTZ device with the corresponding
priority (see Controlling a PTZ Camera).
You can configure group rights for accessing devices and archives of a particular Server. To do so, select an access level
for the Server object. Depending on the level that is chosen, particular access levels are automatically configured for the
devices and archives of the relevant Server (see table).
Server access level Device/archive Device/archive access level
Custom: Access levels for devices and archives are set manually.
Live in Armed Mode: Video camera - you can view armed cameras; Archive - no access.
Live/Archive: Video camera - you can view live and recorded video; Archive - no access.
Note
To create an empty user role with no parameters specified, select the Roles common group, and click Create.
2. Click Delete.
Note
You cannot delete a role if the user through which you logged into the system belongs to that role.
3. Click the Apply button.
The role has now been deleted. All users under this role will also be deleted.
An LDAP object is added in the system. On the right, a panel displays configuration settings for the LDAP catalog.
2. Enter a name for the catalog in the appropriate field (1).
Attention!
If LDAP users are located in multiple directories with a tree-like structure, you cannot establish instant
synchronization across all users.
To synchronize each user group within a DN branch, you have to specify the path to the corresponding directory.
For example, LDAP contains a directory Employees including subdirectories Managers, Cashiers and Salesmen.
DN branches for synchronizing users within Managers directory:
ou=Managers,ou=Employees,dc=example,dc=com.
DN branches for synchronizing users within Cashiers directory:
ou=Cashiers,ou=Employees,dc=example,dc=com.
DN branches for synchronizing users within Salesmen directory:
ou=Salesmen,ou=Employees,dc=example,dc=com.
5. Enter the name of a user who has write access to the base DN, in LDAP format (RDN + DN) with password (5).
6. If encryption (SSL) is required for connecting to the LDAP server, select the corresponding check box (6).
7. In the Search filter field, enter a string for filtering catalog entries (7).
Attention!
To upload users by groups rather than by directories, use the memberOf attribute in the filter. For example:
(&(objectClass=user)(memberof=CN=YourGroup,OU=Users,DC=YourDomain,DC=com))
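As a sketch of how such a filter string is assembled from its parts (YourGroup and YourDomain are the same placeholders used in the example, not real values):

```python
# Sketch: build the memberOf search filter shown above from its parts.
# 'YourGroup' / 'YourDomain' are placeholders, exactly as in the text.
def member_of_filter(group_cn, users_ou, domain_parts):
    group_dn = ",".join([f"CN={group_cn}", f"OU={users_ou}"]
                        + [f"DC={p}" for p in domain_parts])
    return f"(&(objectClass=user)(memberof={group_dn}))"

flt = member_of_filter("YourGroup", "Users", ["YourDomain", "com"])
```

The resulting string is what you would paste into the Search filter field.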
8. In the Login attribute field, enter the field from which the user's login is obtained (8).
Note
To search users by the sAMAccountName attribute, enter the attribute name in lowercase: samaccountname
9. In the DN attribute field, enter the field from which the user's DN is obtained (9).
Note
To set a login and DN attribute, you can use Microsoft Active Directory and OpenLDAP LDAP templates. To use a
template, click the relevant link (10).
10. Specify a default user role for users created via LDAP (11). If no role is specified, no automatic user creation will be
possible for this catalog.
11. Click the Apply button.
The LDAP catalog is now added to the system.
To test the connection, click the Test connection button. If connection is successful, the form on the lower part of the screen
displays information about the catalog users. Otherwise, an error message appears.
The new user is then added to the system, and the permissions configuration panel for that user opens on the right.
Attention!
The limit on the number of connections will take effect after the server is restarted (see Launching and Closing
the Axxon Next Software Package).
7. If necessary, enter additional information about the user (e-mail, IP address, personal and company ID, etc.) in
appropriate fields.
8. Click the Apply button to save the settings.
The user has now been added and assigned a role.
4. Find a user via search (1) or manually select a user from the list (2). Click OK (3).
5. Specify the other user settings (see Creating local users).
6. Click the Apply button to save the settings.
The user has now been added and assigned a role.
When an LDAP user connects, the user's login from Server settings is used with the password from the LDAP catalog.
2. Choose Synchronize with LDAP and then the required LDAP directory (see Connecting to an LDAP catalog).
All users in the selected directory will be added to this role. By default, the user name will match the login in the LDAP directory.
Note
You cannot delete the user through which you logged into the system.
Attention!
If you delete a user account on the LDAP server, it will be automatically deleted from Axxon Next VMS.
Attention!
When unblocked, the user is allowed only one authentication attempt. Successful authentication resets the failed-attempts counter to zero; otherwise, the user account is blocked again.
8. Set the duration of user account locking on failed login attempts, in minutes (9). 0 – the account can be unlocked by the
administrator only.
9. Click Apply.
Attention!
If any user accounts created in your system before you applied changes in security policy are incompatible with the new
requirements, the users will be prompted to change their credentials upon their next login.
Note
Creation, editing, copying, and deletion of layouts are available to users that belong to roles with the Changing custom
layouts component activated (see Configuring user permissions).
After you configure a user's layouts, you may want to limit that user's privileges.
Layouts are created based on standard layout types. To create a new layout, click the button in the context menu and
select one of the standard layouts.
This takes you to layout editing mode (see Switching to layout editing mode).
Note
A new layout is also created when you select a video camera that is not displayed in any previously created layout (see
Objects Panel, Camera Search Panel).
If you do this, layout editing mode does not start and the layout will not be saved.
The newly created layout will be named automatically. You can rename it later.
To save the layout, exit layout editing mode and save changes (see Exiting layout editing mode).
The layout will then be placed at the beginning of the list in the layout panel.
If you do not save changes and exit, the layout will not be saved.
To delete layouts:
1. Select Delete/Move layouts in the context menu.
Note
This menu item is not available in layout editing mode.
Select the layout that you want to copy. Click the button to open the context menu and select Duplicate layout.
Note
Layouts cannot be copied while in editing mode.
5. Web Board
Information boards are available on the layouts ribbon in editing mode.
When you create a layout, you are automatically taken to layout editing mode. Alternatively, you can click the button
and select Edit layout / map in the context menu of the layouts ribbon.
Note
To use layout editing mode, you must have required permissions.
In layout editing mode, space is divided by a grid of equal-sized squares for holding viewing tiles (1).
On the edge of the layout there are grid square fragments (2), which are parts of ordinary empty cells and allow adding new cells
to the layout (see Adding new cells to a layout).
Cells are added in rows. For example, when editing a six-square (3*2) layout, a column of two grid squares is added when a
fragment is chosen on the left or right side of the screen.
A row of three squares is added when you select a fragment from the upper or lower part of the screen.
When you select a corner fragment, both a row and column are added.
Increases the cell by a column to the left and row above Increases the cell by a column to the right and row below
Increases the cell by a column to the left Increases the cell by a column to the right
Increases the cell by a column to the left and row below Increases the cell by a column to the right and row above
Increases the cell by a row below Increases the cell by a row above
When you point the cursor at any button, a darkened area that shows the size of the cell after resizing is displayed.
You can also select and resize any tile. To resize, click the button on the cell border and expand / contract the cell as you
wish. You can resize the cell only in one direction. You cannot resize the cell in two directions with the corner buttons.
If you expand a tile, the neighboring tiles contract and vice versa.
Note
If the cell is in the outermost top / bottom row or left / right column, you cannot resize it by clicking and dragging the
borders. Add an extra cell to the current row or column first.
Attention!
If you expand the cell over the next one or several cells, they are deleted.
To move a cell, left-click the frame of the grid square fragment and drag it to the necessary position.
The cells are then switched: the contents of the previously occupied cell are moved to the location of the cell being moved.
If a cell is moved to a grid square fragment, new cells are added to the layout (see Adding new cells to a layout).
Note
Cameras from any Axxon-domain can be added to the layout.
Note
The same video camera can be added to several cells of the same layout.
Note
After you add an information board to the cell, you should configure it (see Configuring information boards).
To create a link, select a cell and click the button on the border. To delete the link, click .
2. An information board to a viewing tile. This way you can link adjacent cells, vertically and horizontally. A single information
board can be linked with multiple cameras. If the viewing tile is linked to an Events Board, you can click an event and switch
to Archive mode (see Switching a camera linked to an Events Board to the archives).
3. Also, you can link 2 information panels or empty cells to panels (see Configuring Alerted Cameras layouts).
All linked cells have a different scaling logic (see Scaling the Viewing Tile).
FrameMerge stitches video feeds from neighboring cameras into a single panoramic view.
The resulting video is available:
• as live video feeds,
• as recorded footage,
• as exported video files.
Attention!
The maximum horizontal resolution of exported video is 8184 pixels.
To use this option, cameras and their video feeds must match the following conditions:
1. Install cameras as close to each other as possible.
2. No more than 3 cameras' feeds can be merged horizontally.
3. The cameras must have:
a. pixel resolution of no less than 640 * 480;
b. identical aspect ratio for the high and low bitrate streams;
c. identical parameters of lenses.
4. You have to synchronize time on all cameras (for example, via NTP protocol).
5. Camera jitter must not cause more than 1% image shift in either direction.
6. The recommended image overlap across adjacent cameras is 20–25 percent of image width.
7. Camera images must be aligned vertically.
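The measurable conditions above can be pre-checked before attempting a merge. A hedged sketch of such a check (the camera record layout here is an assumption invented for illustration, not an Axxon Next API):

```python
# Illustrative pre-flight check for FrameMerge conditions 2 and 3 above.
# Camera records are plain dicts invented for this sketch.
def check_merge_conditions(cameras):
    issues = []
    if len(cameras) > 3:
        issues.append("no more than 3 feeds can be merged horizontally")
    for cam in cameras:
        w, h = cam["resolution"]
        if w < 640 or h < 480:
            issues.append(f"{cam['name']}: resolution below 640x480")
        (hw, hh), (lw, lh) = cam["hi_stream"], cam["lo_stream"]
        if hw * lh != lw * hh:  # cross-multiply to compare aspect ratios
            issues.append(f"{cam['name']}: stream aspect ratios differ")
    return issues
```

An empty result means the feeds at least meet the resolution and aspect-ratio requirements; alignment, overlap, and lighting still have to be verified on site.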
For best results in merging, ensure the following:
1. Daylight illumination.
2. Sufficient light to capture small details.
3. No over-exposed areas within the scene.
4. Minimum video noise and compression artifacts.
5. Moving objects must be visually separated within the FOV.
6. Same set of objects in overlapping areas.
Attention!
If overlapping areas are plain monochrome (e.g. the sky), no merging is possible.
3. If you need to display a sub-area of the merged video in a separate window, do the following:
a. Add a dialog board to the layout (see Configure the Dialog Board).
b. Configure the panel to display the selected camera.
c. Link the panel to the merged video.
Attention!
Do not move or reposition cameras after merging their video feeds.
If any of the cameras change its position, you have to reconfigure the merging.
To remove an information board or camera from a cell, in the upper-right corner, click the button.
If clearing cells in a row or column removes content from all of these cells, the entire row and/or column is removed from the
layout.
Default values for video stream quality, object tracking, autozoom, and video display (contrast, focus, deinterlacing and flip)
functions can be set for viewing tiles.
After the user switches to a layout, these functions are activated automatically.
To set a function as a default one, activate it during layout editing mode (see Selecting video stream quality in a viewing tile,
Tracking objects, Autozoom, Video image processing, Selecting viewing mode for videos from a fisheye camera) and save
changes before exiting the mode.
7.7.5.4.2 Select the default video stream for each camera within your layout
Use the upper panel to select the default video stream for all cameras within the layout:
- GreenStream. By default, a low-quality stream is displayed. When you select a Camera Window, it switches to the
highest-resolution stream. When you switch to another Camera Window, the now-inactive window returns to the lower
resolution / fps display.
By default, when you switch to a layout, all video cameras are in real-time / live viewing mode.
You can select a default video mode for each camera: real-time mode or archive mode.
Note
This function is not available if the camera is not attached to an archive.
To select a default video mode, in the context menu of the viewing tile, select Viewing mode and select the necessary mode.
If archive mode is selected, when you switch to the layout, the camera is immediately in archive mode.
The Fit screen function allows displaying a viewing tile by default so that it occupies all of the available space on the screen (full
screen). The default zoom level for full screen display is calculated automatically as the minimum zoom value that allows filling the
available screen space with the viewing tile contents.
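That minimum zoom can be thought of as the larger of the two screen-to-content ratios. A sketch under that assumption (the formula is an interpretation of the text, not a documented Axxon Next algorithm):

```python
# Sketch: the smallest zoom factor at which the tile contents cover the
# whole screen area, assumed to be the max of the per-axis ratios.
def fit_screen_zoom(screen_size, content_size):
    sw, sh = screen_size
    cw, ch = content_size
    return max(sw / cw, sh / ch)
```

For a 640 × 480 tile on a 1920 × 1080 screen this yields a zoom of 3.0, since the horizontal ratio dominates.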
To enable the Fit screen function for a specific video tile, display the digital zoom controller (see Digitally Zooming Video Images),
click the button on it, and save changes when exiting editing mode.
Note
To enable the Fit screen function for all video tiles on the layout, open the context menu and select Image size adjustment.
Note
To disable Fit screen across all layouts, select Image size adjustment again.
Now when a user switches to this layout, the video in the viewing tile is displayed at the calculated minimum necessary level of
digital zoom and the viewing tile occupies all available space.
7.7.5.4.6 Configuring pan/tilt angle for video cameras with Immervision lenses in 180° Panorama display format
You can set the pan/tilt angle for fisheye cameras in 180° Panorama display format when switching to a layout.
This is useful when you need to display the entire viewable area in the layout (two areas of 180° each). In this case, the video
camera is added twice but with different viewing angles.
To set the viewing angle, click and hold the button (see 180 degree Panorama).
If you have created a water level detector for a camera (see Configuring water level detection), you can see the water level sensor
in the camera window.
You can also display numerical value of current water levels for the detector. To do it, follow the steps below:
1. In the text field, enter "Water level: {0}". To configure fonts, click .
2. Use buttons to scale up / down the text and the sensor icon.
3. You can move the sensor just as any other object (see Moving input and output icons in a viewing tile).
When the layout is saved, you see the information displayed in the camera window.
You can add links to other cameras to the Camera Window. If you click on such a link, you go to the corresponding camera's
window (see Switching to other camera via a link in the Camera window).
To add a link, do the following:
1. While holding the Ctrl button on your PC keyboard, left-click the camera icon on the Objects panel (see Objects Panel),
and drag it into the Camera window.
A link will be added to a window as a thumbnail .
2. To rotate the thumbnail, click . Each click rotates the thumbnail by 45°.
Note
If you add a text note to the link (see 3), the thumbnail disappears after you rotate it by 360°, and you will have
just the text to switch to another camera.
Note
3. If required, you can add a text comment to a link via the appropriate text field. To set font attributes, click the button.
4. Use buttons to scale up / down the text and the thumbnail.
5. You can move the link just as any other object (see Moving input and output icons in a viewing tile).
When the layout is saved, you see the newly created link in the camera window.
To save information board parameters as a template, specify a name when configuring an information board.
If a name is not specified for the information board (it is not necessary to specify one), no template with the information board
parameters is saved or made available when creating new information boards.
When configuring a new information board, you can use previous templates for the type of information board in question by
selecting one from the Name list.
If you save the new information board with the same name, the template parameters are updated and all information boards
based on the template are updated as well.
To delete a template, in the Name list, click the button across from the template. The parameters of information boards
based on the deleted template are saved, but their names are discarded.
3. To allow operators to hide the information board, select the Allow hiding panel check box (1).
4. To automatically open the layout with this information board when an event matching the filter occurs, select the Switch
to layout check box (2) (see paragraph 5).
If other layouts contain information boards with the same parameters, the layout with the smallest number of cells is
opened. If there are multiple layouts with identical numbers of cells, the layout that comes first in the alphabet is chosen.
If a layout containing this information board is open when an event is received, no switch to another layout is performed.
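The selection rule described above (fewest cells first, alphabetical order as a tie-breaker) can be sketched as:

```python
# Sketch of the layout-selection rule described above: among candidate
# layouts, the one with the fewest cells wins; ties break alphabetically.
def pick_layout(candidates):
    """candidates: iterable of (layout_name, cell_count) pairs."""
    return min(candidates, key=lambda c: (c[1], c[0]))[0]
```

The tuple key makes Python compare cell counts first and names second, which matches the order of precedence in the text.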
5. By default, some events on the Event board come with audit events. If no display of audit events is required, select the
Enables strict messages filtering checkbox (3).
6. In the list, select the event types that you want to display on the board and click Add.
To add events of the same type from different devices, enter the name of the event in the Filter field. This will list only the
events of the chosen type. To list all events, clear the filter field.
Note
If no event type is selected, all system events are displayed on the information board.
Note
You can also add any text to the filter. For example, if you add Signal lost filter, Event Board will display event
data from all devices in the system.
Note
To display facial detection tool triggering, we recommend selecting the
Triggered specified detection "Face appeared" event.
7. Select the default view for information on the Events Board (see Options for displaying information on Events Boards): the
first frame of the event and time, first frame and text, or text only (4).
8. Click the Apply button to save changes (4).
Configuration of the Events Board is complete.
Health Boards display the status of selected system servers and connected cameras.
To configure a Health Board:
1. Add an information board to the layout (see Adding information boards to cells).
2. In the upper-right corner, click the button.
3. To allow operators to hide the information board, select the Allow hiding panel check box (1).
4. To automatically open the layout with this information board when the status of a monitored server or camera changes,
select the Move to layout check box (2) (see paragraphs 5 and 6).
If other layouts contain information boards with the same parameters, the layout with the smallest number of cells is
opened. If there are multiple layouts with identical numbers of cells, the layout that comes first in the alphabet is chosen.
If a layout containing this information board is open when an event is received, no switch to another layout is performed.
5. Select the Servers you want to monitor. To do so, select one server or all servers from the Axxon domain (click All selected
Servers) and click the Add button (3).
Note
6. To display the status of only distressed servers out of those selected, select Only malfunctioning Servers (3).
A server is classified as distressed if any of the following are true:
Note
Information about the status of Servers and cameras is given in the section Working with Health Boards.
7. Select the default view for display of information on the Health Board (see Working with Health Boards): diagram,
diagram with text, or table (4).
8. Click the Apply button to save changes (5).
Configuration of the Health Board is complete.
Statistics Boards display information on the number of events of the selected type or types, as a number and graph.
To configure a Statistics Board:
1. Add an information board to the layout (see Adding information boards to cells).
2. In the upper-right corner, click the button.
3. To allow operators to hide the information board from a layout, select the Allow hiding panel check box (1).
4. Select the event types that you want to be counted and click Add.
To add events of the same type from different devices, enter the name of the event in the Filter field. This will list only the
events of the chosen type. To list all events, clear the filter field.
Note
Web Board allows users to view a selected web page in the tile.
Note
Axxon Next VMS shows web pages in Internet Explorer.
3. To allow operators to hide the information board, select the Allow hiding panel check box (1).
Note
The URL field supports addresses in the following formats:
• http://www.site.com
• http://site.com
• https://www.site.com
• https://site.com
• www.site.com
• site.com
• [IP-address]
• [IP-address]:[Port]
• http://[IP-address]
• http://[IP-address]:[Port]
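Entries without an explicit scheme are presumably resolved by prefixing one. A sketch of that normalization (the plain-http default for scheme-less entries is an assumption, not documented behavior):

```python
# Sketch: normalize the accepted URL formats to a fully qualified form.
# Assumes a plain-http default when no scheme is given.
def normalize_url(entry):
    if entry.startswith(("http://", "https://")):
        return entry
    return "http://" + entry
```

Entries that already carry a scheme pass through unchanged; bare hosts and IP:port pairs get the default prefix.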
Dialog Board allows users to view info about alerts / detection events and quickly start macros to respond.
In addition, the panel can display:
• video from the selected camera on the layout;
• video from the Related camera for the selected camera on the layout;
• video from the Related camera for the camera linked to the panel;
• alarm event from the selected or linked camera;
• still image.
4. Select the Switch to layout check box (2) (see paragraph 5) to automatically open the layout with this information board
when an event matching the filter occurs.
If other layouts contain information boards with the same parameters, the layout with the smallest number of cells is
opened. If there are multiple layouts with identical numbers of cells, the layout that comes first in the alphabet is chosen.
If a layout containing this information board is open when an event is received, no switch to another layout is performed.
5. Select the event types that you want to display on the board and click Add.
To add events of the same type from different devices, enter the name of the event in the Filter field. This will list only the
events of the chosen type. To list all events, clear the filter field.
Note
If the board has video, the event filter is not required.
Note
If no event type is selected, all system events are displayed on the information board.
6. Configure the information board:
a. If you want video on the board, then click the button and select the item in the menu. You cannot add other
elements if you have video here.
i. Video – if you want to display video from the selected camera (1).
ii. Related camera of selected camera - if you want to display video from the related (alternative, see The
Video Camera Object) camera of the selected camera (2).
iii. Related camera of linked camera - if you want to display video from the related (alternative, see The
Video Camera Object) camera of the linked camera (3). In this case, the panel must be connected to
some camera window.
b. If you want to display a still image on the panel, select the "Image" element. To do this, select a desired image in
JPEG, PNG or BMP format (1) and, if necessary, adjust it to panel size (2).
c. If you want to display an alarm event in the panel from the selected or linked camera, select Alarm. If the panel is
linked to any camera window, then alarms from this camera will be displayed. If the panel is not linked to any tile,
then alarms from any selected camera will be displayed.
When you add the Alarm item to the message panel, you can also add pre-programmed buttons to evaluate the
alarm event (see item e below). It is color-coded as follows:
Green - False alarm
Yellow - Non-critical alarm
Red - Critical alarm
d. If necessary, add a message that will be displayed on the panel in case of event (4). You can select the font and
color (3) of the message. If it is necessary to display the event text, set the Show last event checkbox (1). To keep
all events on the panel and be able to navigate between them, set the Keep events log checkbox (2).
e. If necessary, add the comments field. Select the appropriate check box to make comments mandatory.
• Select a macro that will start when you click the button (2).
• If you want to hide the board after pressing the response button, select the checkbox (3).
• Select location for the button: On the left, in the center, on the right (4).
• Select a color (5).
7. Click Apply to save the changes.
Configuration of the Dialog Board is complete.
Note
This feature comes in handy, when it is only the administrator's job to create and edit layouts. After configuring, the
administrator can assign the layouts to users/roles.
2. Click the button and open the context menu. Select one or several roles in Share layout.
Note
You can select the roles of the selected Axxon domain. Axxon domain can be selected on the Camera Search
Panel.
3. If you want to grant rights to edit layouts, select the Allow edit checkbox for the required user role.
4. If you want to make the layout accessible only within a pre-defined time window, select the corresponding Time Zone
from the list.
5. Exit layout editing mode (see Exiting layout editing mode).
This layout is shared with other users.
These users cannot edit or share this layout. They can:
1. Work with layout (see The Layouts panel).
2. Delete it from their list (see Creating and deleting layouts).
Note
Only the layout owner can completely remove it from the VMS.
Upon removal, the layout becomes unavailable to all users. If the removed layout was open on some user's
monitor, it is immediately replaced by another layout.
3. Copy the layout (see Layout copying). If you copy the layout, you are the owner of your copy. You can edit it.
Shared layouts are marked with the following icon:
3. A Dynamic Layout that displays the cameras added from Object Panel in Live Video mode .
To create a special layout, do as follows:
The layout has now been created and added to the panel.
You can manage the number of cameras on special layouts by selecting a layout format (1, 4, 9, 16, 25, or 64) in the
menu. Dynamic Layout is created empty.
Note
By default, the Selected Cameras layout is 3 * 3, and the Alerted Cameras layout changes automatically depending on
the number of alarms.
Note
To return to automatic change of the Alerted Cameras layout, click the format that you selected again.
Also, you can change any standard layout to an alerted one. To do this, in the layout editing mode, open the menu and
select Enable alarm monitor mode.
In this mode, if no active alarms are present across the cameras of the current layout, all camera windows are displayed. If any
active alarms are present, only alarm camera windows are displayed.
Note
To undo, select: Exclude from alarm layout.
Also, you can customize the layout with active alarms to show:
1. An alarm and the Alarm Management option (1). If there are several alarms, the longest-standing alarm is displayed. If you
classify an alarm, the next alarm is displayed.
2. In the first cell, add a Dialog board (see Configure the Dialog Board) with the Alarm element and 3 buttons (green, yellow
and red) without specifying a macro (1).
Note
In cell 4, the first stand-by camera will be displayed, in cell 3 - the second.
• Click the star in the top left corner of the camera window;
• Click the star in Camera Search Panel (see Camera Search Panel);
Attention!
You can add tags to cameras on any video wall (see Managing monitors on a local Client)
If you exceed the maximum number of cameras allowed, only the cameras most recently selected remain on the layout.
Attention!
Camera windows are not saved on the layout. After restarting the client or server, you should add them again.
Note
You can use Duplicate layout to create a copy of the current special layout.
4. Click Save.
To make the system operate without a default layout, click and select Do not use by default (2).
2. Enter an ID.
Note
When you hover the mouse cursor over a layout, the name and ID (if specified) of the layout are displayed separately.
A video wall may include any monitor connected to any Client within the Axxon domain.
You can set up a video wall via a dedicated panel (see Video Wall Panel).
To set up a video wall, do the following:
3. Drag and drop monitor icons according to their monitors' physical locations on the video wall.
Note
An ID number is assigned to each monitor added to a video wall. To display the ID number of a monitor, click
.
4. Click Apply.
Now, the video wall is configured.
Note
You can manage video wall's monitors the same way as Client monitors (see Managing monitors on a local Client,
Managing monitors on remote Clients within the Axxon domain).
Note
Creation, editing, and deletion of interactive maps are available to users with roles for which the Change maps
component is activated (see the section Configuring user permissions).
a. In the lower-left part of the screen, click the button (after displaying the map, see Opening and closing the
map).
b. Click the button to open the context menu of the layouts ribbon and select Add map.
c. In the map context menu (right-click the empty background), select Add new map
d. Select a video camera from the list on the video camera panel by clicking it and, while holding down the mouse
button, move the cursor to the empty map background and then release the mouse button.
Note
Actions c and d are available if no maps have been created in the system
3. Select what will be used as a map: an image or geodata from OpenStreetMap (2).
Attention!
Creation of a map based on OpenStreetMap geodata provider is limited by default. To create OpenStreetMap
maps, you should:
1. Purchase an OpenStreetMap license.
2. Quit the Client (see Shutting down an Axxon Next Client).
3. Open the configuration file AxxonNext.exe.config in a text editor. The file is located in <Axxon Next installation directory>\bin.
4. Find the OpenStreetMap parameters group.
Note
The maximum image size is 4 million pixels (equivalent to 2000 × 2000 resolution). If a larger image is
selected, no map is created.
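A quick sanity check against that limit (the constant mirrors the 2000 × 2000 figure in the note; the helper is illustrative):

```python
# Sketch: validate a candidate map image against the 4-megapixel limit.
MAX_MAP_PIXELS = 2000 * 2000  # 4 million pixels, as stated in the note

def image_fits_map(width, height):
    return width * height <= MAX_MAP_PIXELS
```

Note that the limit is on total pixel count, not on either dimension: a 4000 × 1000 image still fits, while 2500 × 2000 does not.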
4. In the corresponding field (3), select the image that will be used as the graphical blueprint of the site (if the Raster image
map type is selected, supported formats: png, jpg, jpeg, jpe, gif and bmp) or find the site in OpenStreetMap by address,
postal code, or geographical coordinates (enter the information in the Address field; detailed information about search is
given on the provider's website). Scale can be adjusted by the scaler control or mouse wheel. You can navigate around the
map using standard methods.
Note
If the Raster image map type is selected, it is not necessary to select an image. In this case, a map with a white
background is created.
5. If you use geo maps, enter the address, postal code or OpenStreetMap coordinates (refer to the provider's website for details)
of the desired location into the Address field, then click . Use the slider or the mouse scroll wheel to zoom in and out.
Attention!
When a camera is added to a geo map, its icon is automatically positioned according to the camera coordinates (see
The Video Camera Object).
If the cameras have a built-in GPS tracker, their locations on the map change automatically according to the received
data (see Configuring cameras with a built-in GPS tracker).
2. Select the necessary video camera in the displayed list by using one of the following methods:
1. If the necessary video camera is included in a group, you must first select the group (the group may also contain
subgroups), then select the video camera.
2. If the necessary video camera is not included in one of the groups, you must select the list of all video cameras that
follows the list of groups.
Note
Only Input and Output objects that have been activated can be added to the map.
1. Right-click the icon of the video camera on the map. A context menu appears.
2. To add an Input, select Add Input (1). To add an Output, select Add Output (2).
2. Select the map in the system to which the new switch will point.
Then drag the switch icon to the necessary place on the map.
2. On the map, use the corner nodes to adjust the video camera's field of view to match the actual situation at the site (2).
Important!
For ceiling-mounted fisheye cameras (see Configuring fisheye cameras), you are advised to set a 360° field of
view. If you do so, the video from the camera will be directly available in the specified area:
To enable this feature on cameras with ImmerVision lenses, PTZ display mode must be chosen (see Configuring
fisheye cameras)
Important!
The video display area is not available for ceiling-mounted fisheye cameras
• using the points at the base (3), set the size of the area (left-click and drag the cursor)
• using a third point (4) to change the tilt of the area
Note
You can switch map display to flat while working with the Map (see Customizing an Interactive Map).
• using the slider in the lower-right corner to set the default transparency of the area
Important!
When specifying points on an image, follow the rules:
1. All 4 points should belong to the same horizontal plane. Place points on the floor or on the ground.
2. Do not place 3 points on one line.
3. Points should show the perspective of the plane.
2. Click on the depiction of the object on the map. A second point is added, connected by a line to the first point.
Important!
When a fourth link is made, it is possible that the second point cannot be placed in some areas. This occurs when the
system cannot find a valid angle for displaying the video and map for the given links. Most likely, the links have been set
incorrectly.
After a fourth link is added, an angle is chosen so that the surveillance objects in the video and on the map coincide.
To remove a link, place the cursor above the first point in the link and click the button. After all links are added, it is possible
to change the location of previously set points by dragging them while holding the left mouse button.
To save links between video and the map, exit layout editing mode and save changes. The links you make are discarded if any of
the following occur before you exit and save changes:
• The position of the video camera icon on the map is changed.
• The angle of display of the video display area for the camera on the map is changed.
• The field of view of the camera on the map is changed in any way.
Note
This feature is widely used when cameras are installed, for example, in ambulances or public transport.
1. Create a built-in GPS tracker object for video cameras (see Creating Detection Tools).
If a map is open in 2D mode when you save a layout, when you switch to that layout, the map will always open in 2D mode. The
layout icon resembles the one shown in the picture below.
Note
3. Click Apply.
A map properties configuration window opens, which is similar to the map creation window (see Creating a new map).
On the page:
To enable MomentQuest Forensic Search in a video camera's archive, the following conditions must be
met:
1. Video meets the requirements (see Video suitability for Forensic Search of recorded video (requirements)).
2. There are video stream recordings from the desired video camera in the archive (see Binding a camera to an archive).
3. There are metadata recordings from this video stream in the object trajectory database. Metadata can be generated by
Axxon Next (see General information on Scene Analytics, Face detection tool) or received from a video camera (see
Embedded Detection Tools).
4. The user has the appropriate permissions.
This section contains information on how to configure the Axxon Next software package to satisfy these conditions.
Note
The metadata recording is enabled by default.
In video motion detection (VMD) settings, you have to activate the Object Tracking parameter (see Setting up VMD-based Scene
Analytics).
Note
Information on configuring storage of metadata is provided in the section titled Configuring storage of the archive,
system log, and metadata
So, for example, if you need to detect people moving at speeds of up to 10 km/h, it is sufficient to record at 12 fps.
2. The minimum object speed should be such that the object moves at a rate of at least 1 pixel per frame.
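As a rough illustration of the 1 pixel-per-frame rule above, the per-frame displacement can be estimated from the object speed, the frame rate, and the scene scale. This is not an Axxon Next API; the pixels-per-meter value is an assumed example that depends on camera resolution and distance to the object.

```python
# Illustrative calculation (not from Axxon Next): check that an object's
# speed yields at least 1 pixel of displacement per frame.

def pixels_per_frame(speed_kmh: float, fps: float, pixels_per_meter: float) -> float:
    """Displacement of an object, in pixels, between consecutive frames."""
    speed_mps = speed_kmh * 1000.0 / 3600.0  # km/h -> m/s
    return speed_mps / fps * pixels_per_meter

# A person at 10 km/h filmed at 12 fps, with an assumed scale of 20 px/m:
disp = pixels_per_frame(10.0, 12.0, 20.0)
print(f"{disp:.2f} px/frame")  # prints 4.63 px/frame
```

At this assumed scene scale, the displacement is well above the 1 px/frame minimum, which is consistent with the 12 fps recommendation.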
Attention!
Privacy masks are displayed in the standalone Client software only. You cannot use masks with both web and mobile
Clients.
To do this, set No for the Show Recorded Videos Unmasked option in Role Settings (see Creating and configuring roles). The
objects will be masked for all users belonging to this role.
To mask an object from Video Footage viewing, do the following:
1. Proceed to Archive Mode and find the object to be masked (see Switching to Archive Mode).
2. Locate the frame where the object first appears in the camera's FoV (see Navigating in the Archive).
Note
The procedure of setting up a mask is similar to adding a comment (see Adding a comment).
6. Locate the last frame before the object disappears from the camera's FoV and place the mask over it. To save mask
position, click the Add point button.
7. The system automatically interpolates the mask position on intermediate frames, assuming the object's motion is
uniform and rectilinear.
8. Check the video. If necessary, you can specify additional mask positions on intermediate frames for better masking. The
system automatically re-interpolates mask positions within the video sequence.
Note
9. After setting all necessary mask positions, enter your text notes and click Save.
Attention!
After the mask is saved, you cannot delete it. Only users with roles where the View masked video in archive
parameter is set to Yes may bypass the masking (see Creating and configuring roles).
The object is now hidden. When viewing the Video Footage, users without appropriate access rights will see the object masked.
Note
You can search only by text comments entered after the video is imported.
Attention!
When you add a camera:
1) continuous replication from the on-board storage to the selected archive is enabled (see Configuring data
replication);
2) a Scene Analytics object is created (see Creating Detection Tools) and metadata recording is enabled (see
Setting General Parameters).
3. Configure Object Tracking (see Setting General Parameters). If you want to find persons and car numbers in video
footage, then create and configure the appropriate detection tools (see Configuring License plate recognition (VT),
Configure Facial Recognition).
Attention!
Do not remove the Object Tracker from your system. Otherwise you cannot import videos into your system.
4. If necessary, change the mode of data replication (see Configuring data replication). If you select On Demand mode you
can start the analysis of the video image manually (see Indexing video from external sources).
Attention!
Replication is performed only to the end of the archive. It is not possible to overwrite existing data in the archive
(see Configuring data replication).
If you ignore this rule, the videos will not be indexed.
It's preferable to import all the videos from a folder at once; otherwise, you have to manually remove metadata
and records from the Archive before the next replication (see Indexing video from external sources).
5. In the Folder field, specify the storage location of the video footage that will be used as External Archive.
Attention!
The following compression algorithms are supported: MJPEG, MPEG-2, MPEG-4, MxPEG, H.264, H.265, Hik264
(x86 only), as well as uncompressed ("raw") video.
A "raw" format is a stream of consecutive frames without time stamps.
6. Imported folders with video footage or video files must be ISO 8601 timestamped: YYYYMMDDTHHMMSS.
a. If the timestamp is in the folder name, all the videos starting from the specified date and time will be imported
without exception. The video recordings are ordered according to the file name as follows:
Note
For example, if the 20160719T100000_camera1 folder contains 3 files (1.avi, 2.avi, 3.avi), they will come
into the archive as follows:
1.avi: [19 July 2016, 10:00:00; 19 July 2016, 10:00:00 + the duration of 1.avi].
2.avi: [19 July 2016, 10:00:00 + the duration of 1.avi; 19 July 2016, 10:00:00 + the duration of 1.avi and
2.avi].
3.avi: [19 July 2016, 10:00:00 + the duration of 1.avi and 2.avi; 19 July 2016, 10:00:00 + the duration of 1.avi
and 2.avi and 3.avi].
b. If the folder name does not have the timestamp, all the video files will be imported in accordance with their
timestamps. If the file name is incorrect, the starting point of the recording on the timeline will correspond to the
modification date of the file.
Attention!
Axxon Next operation may be incorrect if video recordings in the folder overlap. No error messages are
displayed in this case.
The date in the folder name or file name (or the file creation date) must not precede the metadata storage
period defined in the system (see Configuring storage of the system log and metadata).
If you add a Z character to the end of the timestamp, the time zone for the videos will be GMT+0; otherwise, the
Server's time zone is used.
For example, 20150701T165130Z.avi. In Archive mode, this video recording will fit into the timeline from July 1,
2015, 16:51:30 GMT+0 to July 1, 2015, 16:51:30 GMT+0 plus the recording duration.
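The timeline placement described above, where each file in a timestamped folder starts where the previous one ended, can be sketched as follows. This is an illustrative helper, not part of Axxon Next; the durations are invented sample values.

```python
# Sketch: how files from a folder such as 20160719T100000_camera1 are laid
# out on the archive timeline, each file starting where the previous ended.
from datetime import datetime, timedelta

def timeline_intervals(folder_timestamp: str, durations_sec: list):
    """Return (start, end) pairs for files imported from a timestamped folder."""
    start = datetime.strptime(folder_timestamp, "%Y%m%dT%H%M%S")
    intervals = []
    for d in durations_sec:
        end = start + timedelta(seconds=d)
        intervals.append((start, end))
        start = end  # the next file begins where this one ends
    return intervals

# Three files of 60, 90, and 30 seconds from a folder stamped 10:00:00:
for s, e in timeline_intervals("20160719T100000", [60, 90, 30]):
    print(s.time(), "-", e.time())
```

With these sample durations, the files occupy 10:00:00 to 10:01:00, 10:01:00 to 10:02:30, and 10:02:30 to 10:03:00, matching the 1.avi/2.avi/3.avi example above.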
7. Click the Apply button.
Attention!
You should not view the replicated archive during indexing.
Also, do not run multiple simultaneous indexing procedures.
If you set the On demand replication period (see Configuring data replication), then to start the indexing procedure select Start
import process (1) from the context menu of the viewing tile.
If you want to index the video from another folder, click Import folder (2), or click in the viewing tile.
Attention!
If you change the folder, all files in Mirror Archive and metadata will be lost.
Attention!
You can select only the folder specified in the settings and its sub-folders.
To remove metadata and video from an archive, select Clear offline analytics data (3) and confirm file formatting.
In addition, decoding with both Intel QSV and NVDEC is possible (see Parallel decoding with both Intel's Quick Sync Video and
NVIDIA's NVDEC).
Attention!
Axxon Next applies Intel QSV for decoding:
1. Video codecs: H.264, H.265, and H.265+.
2. In Live mode, Archive mode (forward playback only), and TimeCompressor.
Attention!
If the Client shares the same PC with a Server which applies a detection tool to а video stream, Intel Quick Sync Video
will not be used for displaying this stream.
Note
Maximum pixel resolution depends on your particular version of the Intel Quick Sync core; refer to https://
www.intel.com/content/www/us/en/homepage.html for details.
To use Intel Quick Sync Video, make sure your system meets the following requirements:
1. Since QSV is incorporated into the graphics processor, your CPU must have an integrated GPU (iGPU).
Note
You can check if the CPU supports QSV here.
Attention!
For H.265 and H.265+ video, Intel Quick Sync Video technology is supported only on Intel processors with the
following microarchitectures: Braswell (decoding only), Cherry Trail (decoding only), Skylake, Apollo Lake,
Kaby Lake, Gemini Lake, Coffee Lake.
Note
You can also update the driver automatically using the Intel Driver Update Utility.
Note
Depending on the BIOS version, it may be named differently (iGPU, Internal Graphics, Integrated Graphics
Adapter - PEG).
2. In general, simultaneous use of the integrated graphics unit (iGPU) and an external GPU has to be avoided. In this
case, to use Intel QSV, do as follows:
a. Enable Multi-Monitor in the BIOS settings.
b. Connect a "Fake/Virtual" Monitor/Display to your iGPU in Windows, and select Extend desktop to this display
in the Multiple displays list.
3. Go to Settings → Options → User interface (1–2).
Attention!
In Axxon Next VMS, the NVDEC chips are used to decode:
1. Video compressed with the following codecs: H.264, H.265, and MPEG-2.
2. In Live Video, Archive Video, and TimeCompressor modes.
Attention!
Before you start, please make sure to install the latest driver for your NVIDIA GPU.
Note
Please see the list of supported devices for the NVDEC chips on the NVIDIA official website.
7.10.1.4.3 Parallel decoding with both Intel's Quick Sync Video and NVIDIA's NVDEC
In Axxon Next, decoding with both Intel QSV and NVDEC is possible.
In this case, NVIDIA devices are the first to take up the decoding job. Then, if they max out, Intel QSV takes over and offloads
NVIDIA's chip.
For simultaneous decoding, you have to configure it on both Intel Quick Sync Video (see Hardware-based decoding with Intel
Quick Sync Video) and Nvidia's NVDEC chips (see Hardware-based decoding with NVIDIA NVDEC chips).
7.10.1.5 Configuring automatic response when Axxon Next VMS PC integrity check fails
When you start Servers and Clients, Axxon Next automatically checks the digital signature of all executable files (exe, dll, so).
If all files are in place and match their signatures, the "System integrity check passed successfully" record appears in the system
log (see The System Log).
Otherwise, you can set the system to automatically perform one of the pre-configured actions. To select a response, do as
follows:
1. Select Settings → Options → Security Policy (1–2).
2. From the Do the following actions when system integrity compromised list (3), choose the necessary response:
a. Show warning to administrators only - if selected, when the Client starts, an alert will be displayed to users
with administrator rights.
Note
To resume launching the Client, click Continue; to quit, click No. To open a text file containing the list of
compromised files, click Details.
b. Show warning to all users - in this case, all users will be alerted.
c. Block users without admin rights - allows only users in the Admin role to access the Server.
d. Stop non-vital services - shuts down all system objects that are subject to licensing (video cameras, detection
tools, etc.). Upon launching the Client, each user will see a notification message.
3. Click Apply.
2. In the Period field, enter the number of days to store the system log in the Server's database and metadata in the
object trajectory database (3). The maximum time is 1000 days.
Attention!
If you enter a zero value:
• System Log events will be stored for 0 days;
• the metadata retention time becomes unlimited.
Attention!
If you have less than 15 GB of free disk space, the Object Tracking DB is overwritten: new data overwrites the
oldest records.
3. In the appropriate field, enter the number of hours after which outdated events will be purged from the system log (4).
Outdated events are events that have been stored in the system log for a period greater than that indicated in step 2.
Note
The object trajectory database is purged of records that have been stored for more than the specified
storage period:
1. Every 12 hours after Axxon Next is started.
2. Every time you start the MomentQuest forensic search tool (see Forensic Search for Fragments
(MomentQuest)).
If the camera is not recording when the DB is cleaned up, its recordings are preserved irrespective of their
timestamp.
4. The metadata database can be stored on NAS if necessary (by default, it is located locally as set during Installation). Do
the following:
a. Enter a path to the network destination for metadata database (5).
b. Enter the user name and password (6). The user must have permissions to access the NAS.
Note
If you clear the path to the NAS, metadata will be stored in the local database again.
Note
For example, an operator can exit alarm mode to view the video archive related to the alarm
2. In the Time period of operator reaction to alarm field, enter the time during which an operator who accepted an alarm
for processing and exited alarm mode without evaluating it must return to alarm mode (3). The minimum value is 2
minutes.
3. Set the maximum dwell time for alarms on the Special Layout (4). After the dwell time has elapsed, the alarm will be
removed from the layout and the next one is shown.
4. Select the alarm classifications for which you want to require comments (5).
5. Click the Apply button.
Configuration of alarm handling is now complete.
2. Set the operator idle time in hours, minutes and seconds (3). When the time runs out, PTZ control is unlocked
automatically if the operator did not carry out any action.
3. By default, if the user with a higher priority controls the PTZ camera, then the PTZ control panel displays the name of the
user. To disable displaying this information, deselect the corresponding checkbox (4).
4. Click the Apply button.
You have configured PTZ control.
2. In the List of schedules, enter the name of the required schedule (3) and click .
3. Set the time intervals for the schedule:
a. Enter the interval's start time in the From column with the help of the buttons accessible by left-clicking the
appropriate cell twice (1).
Button Action
b. Enter the interval's end time in the To column with the help of the buttons accessible by left-clicking the
appropriate cell twice (2).
c. Select the days of the week to be included in the interval by selecting the appropriate check boxes (3).
d. Create the necessary number of intervals to be included in the schedule
Note
A visual display of time intervals for each day of the week is provided on the time chart.
Attention!
The events are recorded in the system log in the same language as the Server's operating system language.
The application runs in the selected language, with the exception of the System Log, which registers events in the language of the Server's operating system.
To select the interface language, complete the following steps:
1. Go to Settings → Options → Regional settings (1–2).
2. Select an interface language from the interface language drop-down list (3).
3. Click Apply to save the changes.
4. Restart Axxon Next.
The newly selected interface language will be applied once Axxon Next is restarted.
To change the events language in system log, do the following:
1. On the Server, go to Control Panel → Clock and Region (Time & Language).
Note
Here follows the setup sequence for Windows 10.
5. In the Copy your current settings to area, select the Welcome screen and system accounts checkbox.
2. Select the calendar type that is used in Axxon Next from the calendar drop-down list (3).
Note
In the Axxon Next VMS, time and date formats and values are defined on the OS level (4).
Note
Switching layout modes is allowed only for users with Layout configuration permissions
2. Select the Enable map auto zoom on alarm (3) check box.
3. Click the Apply button to save changes.
It is possible to use window mode both on the main monitor and on additional monitors. Do the following:
1. Go to Settings → Options → User interface (1–2).
2. To use window mode on the main monitor, clear the Use full screen on main monitor check box (3).
3. To use window mode on additional monitors, clear the Use full screen on additional monitor check box (4).
4. Click the Apply button to save changes.
For changes to take effect, quit the Client and start it again.
1. Go to Settings → Options → User interface (1–2).
2. By default, surveillance mode selector buttons are located outside the Camera window. If you prefer having them inside
the window, uncheck the Selector Buttons Outside box (3).
Attention!
In this case, you cannot use digital zoom (in both real time and archive viewing modes) and immersive mode in
the Camera window.
3. By default, the Viewing Tile elements (context menus, export buttons, PTZ mode select buttons, etc.) are displayed over
the video. To move the controls outside the video area uncheck the Allow controls on top of video image checkbox (4).
4. If you don't need to display window elements in windows other than the active one, select the Show onscreen controls
only on selected channel checkbox (5). This parameter is valid only if the Allow controls on top of video
image checkbox is selected (4).
5. Click Apply to save the changes.
6. Reopen the Layout or create a new one.
1. Go to Settings → Options → User interface (1–2).
Note
When you enable the option, you may notice image quality drop.
To fix the size and expand tiles to all free space on the layout area, do as follows:
1. Go to Settings → Options → User interface (1–2).
Configuring Layouts
Configuring viewing tiles
1. Go to Settings → Options → User interface (1–2).
1. Go to Settings → Options → User interface (1–2).
Axxon Next will now launch instead of the standard Windows shell the next time you start Windows.
Note
If User Account Control is enabled in Windows, Axxon Next VMS cannot automatically start in place of the OS shell
(the appropriate check box is grayed out). Disable UAC. In Windows 8, 8.1, and 10, you also need to make changes to
the registry and reboot your PC.
2. Configuring the protocol used by Clients to connect to the Server allows prioritizing reliability or speed of data
transmission (3). The connection protocol is set individually for each Server in an Axxon-domain. All Clients connected to
the Server will receive video streams over the selected protocol.
Descriptions and recommendations for selecting a protocol are given in the table.
Protocol Description
TCP This protocol is more reliable but bandwidth-intensive. Recommended for Servers with small
numbers of cameras.
UDP unicast UDP is typically faster but less reliable for data transmission. Unicast involves data transmission
to a single recipient.
This protocol is best for Servers with many cameras connected to a single Client.
UDP multicast Multicast refers to data transmission to a group of recipients. This protocol is designed for Servers
with many cameras connected to multiple Clients.
Important! This protocol has to be supported by all network components, in particular,
switches.
3. Type the user name and password needed for logging in to each Axxon domain (4).
4. Indicate the Servers to connect to. For each Server, perform the following steps:
a. Select the Server in the list (5).
b. Indicate the port for connecting to the Server (6).
Note
If the Port field is left blank, the standard port (20111) will be used for connecting
Note
It is possible to connect to only one Server on an Axxon domain. When a Server is added to the list, all
other Servers on that Axxon domain become unavailable for selection.
Note
5. After all Servers have been added to the list, select the main Axxon domain.
When connecting, the Client will use the parameters (maps, layouts, user rights) of the main Axxon domain.
To select a main Axxon domain, select the check box in the relevant column of a Server that is on the Axxon domain (7).
6. Click the Apply button.
7. Click Save (1) to save the configuration of automatic server connection to the main Server.
Note
You can also load a saved configuration after restoring the system using the appropriate utility (see
Backup and Restore Utility).
2. In the Period of inactivity before automatic logout field, enter the duration of user inactivity after which the Client
will be closed (3).
If the field is blank or set to 00:00, the Client will not be closed.
3. Enter a value in the Idle Time before Locking field to set the time interval (4). To unlock the Client, the user has to re-login.
If the field is left blank or the value is set to 00:00, no locking will occur.
Note
If a viewing layout is open, no automatic locking occurs.
Note
You can lock the Client at any time using hotkeys (see Assigning hot keys, Appendix 6. Hotkeys in Axxon Next).
4. Click the Apply button.
Configuration of automatic quit of the Client is now complete.
2. In the Filter of allowed IP addresses group, enter the IP address (3) and subnet mask (4) to set the range of addresses
from which a connection will be permitted.
3. Click the Add button (5).
4. Click Apply.
The range is now added to the list. No connection will be possible from addresses not in the list.
To remove an address or a range from the list, do the following:
1. Click the button.
2. Click Apply.
1. Go to Settings → Options → Automatic connection (1–2).
1. Go to System settings → Options → Export (1–2).
2. In the Path to the folder for exporting a snapshot (3) and Path to the folder for exporting video recording (4) fields, enter
the full path to the folders where exported files are to be saved. To do this, click the button.
Note
During export you can specify any path.
Note
By default, on Windows XP, exported files are stored in C:\Documents and Settings\User\My
Documents\AxxonSoft\Export\. On Windows 7 and Windows Vista, they are stored in C:
\Users\User\Documents\AxxonSoft\Export\.
Attention!
These fields also allow you to specify the name template for the exported files, as follows:
{0} - Camera ID.
{1} - Camera name.
{2} - Date.
{3} - Time.
{4} - Recording duration (for video export only).
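As an illustration, the numbered placeholders expand like positional arguments in a format string. The template and sample values below are invented for the example; they are not Axxon Next defaults.

```python
# Hypothetical illustration of the export file-name template: placeholder
# {0} is the camera ID, {2} the date, {3} the time (sample values invented).
template = "{0}_{2}_{3}"
name = template.format("Camera1", "Cam 1", "20160719", "100000", "00-05-00")
print(name)  # Camera1_20160719_100000
```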
3. Select the default formats for export of video and snapshots (5). You can select any available format during export.
Snapshots can be exported in two formats: JPG and PDF. Videos can be exported into the following 4 formats: MP4, MKV,
EXE and AVI.
Note
Video is exported in MKV format without recompression.
Video is exported in AVI format with recompression in the selected codec (see point 4).
When video is exported in EXE format, a self-contained executable file is generated, containing video, playback
tools, and necessary codecs.
4. If you want to export to an encrypted zip archive, set a password (6). If you are exporting an .exe file, you will need to enter
a password when you open the file.
5. Specify settings for video export in AVI format: Select a codec (7) and compression quality (8).
6. If you want to superimpose captions on the exported video, select the Export date, time, and AVI titles check box. When
exporting to MKV, captions are always added; you can turn them off during playback (9).
7. You can watermark exported video footage as follows:
Attention!
The watermark settings are applied to the entire Axxon domain.
a. Select a file with a watermark (10). PNG, JPEG, BMP pictures are allowed.
b. Set the transparency of the watermark: 100% - opaque, 0% - clear (11).
c. Set the location of the watermark (12). To do this, specify the border of the watermarked area on each side of the
frame as a percentage of the frame size. The top-left corner is taken as the origin point.
The default values are:
Left Top Right Bottom
0 0 100 100
With these values, the watermarked area will occupy the entire frame, and the watermark will be placed in the center of the image.
To place the watermark in a corner, specify the following values:
for the top-left corner:
8. Select a frame rate for the exported video: if Do not change is selected, the original frame rate is kept; if 1/2 is selected,
the exported frame rate will be two times smaller than the original one; if 1/4 is selected, four times smaller, and if 1/8 is
selected, eight times smaller (13).
Note
The minimum frame rate of exported video is 1 fps.
9. Set the limit for an exported video file size in megabytes (14). If the exported video exceeds the specified size, multiple
export files will be created.
Note
The minimum value is 5 megabytes.
Due to Windows limitations, you cannot export files of more than 4 GB to EXE format.
Attention!
A zero value exports to a single file regardless of its size.
Note
Section sizes and positions can be changed like standard windows.
1. In the device tree, select a server and click the Create Export Agent button.
2. In the Audio source field, select the system device that will be used as the audio source for playback on the camera
speaker (3).
Note
The default device is shown in the list in bold.
Note
Please refer to the list of hotkeys in Appendix 6 to this guide.
2. In the list, select the device for which you want to configure hot keys (3).
3. Select the mode for which you want to configure hot keys (4, see Introduction to hot keys in Axxon Next).
4. To assign a shortcut to a specific action:
a. Double-click the current shortcut assigned to the action (5). The field is now cleared.
Note
For some actions in Global mode, you cannot change the default hot keys.
Note
If the field is left empty, no hot key will be assigned to the action.
Enter the monitor ID and click OK. This monitor becomes active.
When a key or key combination assigned to the Camera selection in current layout action (Global mode) is pressed, the go to
camera by ID window opens.
Enter the destination camera's user-friendly ID (see The Video Camera Object), then click OK.
If the current layout contains a camera with the specified ID, the relevant viewing tile becomes active. If the current layout does
not contain a camera with the specified ID, a minimum layout containing the camera is opened.
If you press hotkeys for Select layout by number (Global mode), a window opens requesting you to enter the layout number.
The layouts are sorted left to right, starting from 1.
Descriptions of other non-trivial actions performed via hot keys are given in the table.
Global (general) Navigation (up, left, down, right) Navigate or move within the selected interface element.
These keys are active only when a navigable menu or
panel/ribbon is open.
Activate layout ribbon When this key is pressed, the layout ribbon expands,
allowing to navigate between and select layouts. When
the ribbon is minimized or a layout is selected, the
relevant viewing tile becomes active.
Activate panel of video walls When this key is pressed, the panel of video walls
expands, allowing to navigate between available
monitors. When the panel is minimized, the viewing tile
becomes active.
Activate camera panel When this key is pressed, the camera panel expands,
allowing to navigate between and select cameras. When
the panel is minimized or a camera is selected, a viewing
tile becomes active.
Activate panel of configuration Pressing this key a second time opens the tools panel on
the Layouts ribbon.
Open alarm panel Pressing this key again minimizes the panel.
Open panel with hardware list (left) Pressing this key again minimizes the panel.
Global (map) Switch to 3D mode If the map is in 2D mode, clicking this button switches to
3D mode.
If the map is in 3D mode, clicking this button hides the
map.
Live Video mode Open the menu of the selected camera and select a menu item. Pressing this key again closes the menu.
Switch to Archive mode If a viewing tile in a layout is active, the archive is opened
only for that particular camera. Pressing this key again
switches to Live Video mode. If there are no active
viewing tiles in the layout, the archive is opened for all
cameras in the layout.
Archive viewing and search in archive mode Go to Search in Archive mode. Pressing this key again switches to Archive mode.
Go to the next/previous video clip Holding this key moves forward/backward between video
clips until the key is released.
Open list of timeline events Pressing this key again closes the list.
The following actions are available while operating the Calendar (see Navigating Using the Timeline):
Move to next hour
Move to next month
Move to next timestamp
Move to previous hour
Move to previous month
Move to previous timestamp
<HotKeysSchemaDeviceCommands>
<CommandName>DiscreteZoomOut</CommandName>
<HotKey>A2-</HotKey>
<Sensitivity>0.2</Sensitivity>
</HotKeysSchemaDeviceCommands>
The sensitivity values range from 0.0 (low sensitivity) to 1.0 (high sensitivity). Please see the commands that have
sensitivity settings in the table.
Command Command description
DiscreteZoomIn Zoom in
Attention!
Do not set values outside this range, and do not edit other parameters in the file.
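As a sanity check, a short script can verify that every Sensitivity value in the hotkeys schema stays within the documented range of 0.0 to 1.0. The XML structure follows the fragment shown above; loading it from a string here is purely illustrative, since the actual file location depends on your installation.

```python
# Validate Sensitivity values in the hotkeys schema XML (structure taken
# from the documentation fragment; the root element name is assumed).
import xml.etree.ElementTree as ET

xml_text = """<HotKeysSchema>
  <HotKeysSchemaDeviceCommands>
    <CommandName>DiscreteZoomOut</CommandName>
    <HotKey>A2-</HotKey>
    <Sensitivity>0.2</Sensitivity>
  </HotKeysSchemaDeviceCommands>
</HotKeysSchema>"""

root = ET.fromstring(xml_text)
for cmd in root.iter("HotKeysSchemaDeviceCommands"):
    name = cmd.findtext("CommandName")
    value = float(cmd.findtext("Sensitivity"))
    # Flag any value outside the documented 0.0..1.0 range.
    assert 0.0 <= value <= 1.0, f"{name}: sensitivity {value} out of range"
    print(name, value)
```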
7. Click the Next button.
8. In the opened window, select the RTSP value in the New destination drop-down list and click the Add button.
9. Go to the RTSP tab.
10. Specify the port and path to the stream, if required.
11. Click the Next button.
After that configure receiving of the RTSP stream and its record to the archive in the Axxon Next software package –
see Configuring connection of video cameras via RTSP and Binding a camera to an archive sections.
Configuration of VLC Media Player for transmitting video from the computer monitor to the Axxon Next
software package via RTSP is now complete.
1. The basic configuration allows system supervisors to permit launching Axxon Next servers (nodes) on any Servers within
the system.
Note
While selecting a Server to transfer a node to, the supervisor tries to balance the performance of the whole
cluster. If all Servers deliver more or less the same performance, the selection is random.
If Servers significantly differ in their performance, the supervisor may launch several nodes on a more capable
Server and none on a less capable one.
2. In the configuration with the specified backup Server, a node from the primary Server can be migrated only to the backup
Server. After the primary Server is back online, the node is returned.
Node migration is automatic and takes no more than one minute.
Note
In a system counting 100 cameras, the node is transferred in less than one minute in both Failover System configuration
types. All Servers within the system have identical specifications: Intel i5-7400 3GHz 4-core CPU, 16Gb RAM.
Note
After installation, a shortcut is added to your desktop.
On Linux OS, no additional shortcuts are created after installation. To access the supervisor web interface, go to http://
localhost:4000.
To find out your version of Axxon Next and Driverpack, click Software version.
To search for hosts and servers, enter their names in the search bar at the top of the window on the Host or Dashboard tab.
1. Select the Server's IP address from the list and click Next.
2. Add the required Servers to the cluster. To do this, enter the IP address and click Link.
Attention!
All Cluster servers must be accessible to each other.
All servers must be hosted on computers with the same architecture (x86, x64).
Attention!
The first three Servers added become the master Servers.
The operation of the cluster is coordinated by its master Servers, which, in particular, take decisions to migrate
nodes from one Server to another.
You can have 3 or more master Servers in the cluster.
If only two Servers are added, they can be configured as 1+1 (primary + backup Server).
3. If required, you can add a user with administrator rights. After the cluster is created, only this user will have access to the
supervisor. Please follow the steps below:
a. Click the Add button.
This will initialize the cluster based on the selected Servers. To add more Servers to the cluster, do the following:
3. If the Server you are adding is a master Server, select the Manager checkbox (2).
4. Click Add Host (3).
5. Add all required Servers.
Attention!
To change the IP address of a Server in the cluster, do the following:
1. Remove the server from the cluster.
2. Change the IP address of the server.
3. Add the server with a new IP address.
2. Click DB Agents.
3. The icon next to the Server indicates the current status of the agent.
Icon Status
Launch expected
Launched
Stopped
4. Click to stop or launch the agent, and select the required action.
2. Hover the mouse cursor over the button and click the N+M button.
a. Click .
b. Select a specific Server, or add all Servers at once.
4. Create nodes:
a. Click .
b. Enter the node name and click the Proceed button.
Attention!
A node name may contain Latin characters, numerals, and the "-" symbol.
Attention!
The number of nodes should be less than the number of Servers.
5. By default, the self-diagnostics service is running on all nodes (see Self-diagnostics service). To stop it, de-activate the
Self-diagnostics parameter (1).
The configuration is now created, and the nodes are now automatically started.
8. Merge all nodes into a single Axxon domain (see Connecting to a Node and Configuring of an Axxon domain).
9. In a Failover system, we recommend you to physically locate the footage archive on a separate NAS that all servers in the
cluster have access to.
For each node, create a separate archive on the NAS (see Creating a network archive).
If the node is moved to a different server, it will continue to write to the specified archive.
If necessary, you can further edit the configuration. To do so, click . You can perform the following actions:
• add / remove Servers;
• add / delete nodes;
• activate / de-activate the self-diagnostics service;
• change logging parameters;
• completely remove the configuration.
To manually stop or launch a node, click and select the required action.
1. Hover the mouse cursor over the button and click the N+1 button.
5. If you need to maintain local footage archives on primary Servers, activate the corresponding switch (2).
Attention!
The local video footage will be created as a 10 GB file located in the C:/temp_arch folder. If the node is migrated
from a primary Server to a backup one, video data will be recorded to this file and replicated to the main Archive
(video footage) on the backup Server (see paragraph 10).
6. If you need to replicate local archives to the backup Server on a permanent basis, activate the corresponding switch (3).
Otherwise, replication is performed only when the corresponding node is migrated to the backup Server.
7. By default, the self-diagnostics service is running on all nodes (see Self-diagnostics service). To stop it, de-activate the
Self-diagnostics parameter (4).
8. Click the Logging button to set the logging options (5, see Setting up basic configuration).
9. Click the Apply button.
The configuration is now created, and the nodes are started automatically. The "Failover" string is automatically
appended to the name of the backup node.
10. Merge all nodes into a single Axxon domain (see Connecting to a Node and Configuring of an Axxon domain).
11. Configure the footage archives operation:
a. On the backup node, create an archive for replication (see Creating archives).
b. On the primary nodes, configure the replication from the primary Servers' archives to the backup node's archive.
The replication period should be set to Always (see Configuring data replication). You have to set the replication
time period to Always regardless of the value of the Forced Replication parameter (see paragraph 6).
Its records will be replicated to the backup Server's main archive (see Archive 3 in the picture above).
If necessary, you can further edit the configuration. To do so, click . You can perform the following actions:
• add / remove Servers;
• add / delete nodes;
• activate / de-activate the self-diagnostics service;
• change logging parameters;
• completely remove the configuration.
To manually stop or launch a node, click and select the required action.
Attention!
You can create only one independent node on each server.
4. By default, the self-diagnostics service is running (see Self-diagnostics service). To stop it, de-activate the Self-
diagnostics parameter (1).
5. Click the Logging button to set the logging options (see Setting up basic configuration).
6. Click the Apply button.
The nodes are now created; they should start automatically.
In this case, the primary node will reside on the Server from which the cluster was initialized.
If a cluster has been initialized on one Server only:
1. Add the second Server to the configuration.
2. Click the Convert to 1+1 button on the Configuration page.
Note
Further changes in the configuration may include only logging parameters. To do this, click the button .
Attention!
If the backup Server fails while the system is running, you cannot stop the primary node.
If the backup server is down, the primary node will not restart.
All of the Server's nodes then migrate to other Servers, and the status of the Server is updated.
To resume the Server's operation within the cluster, toggle the Service switch to its initial position.
Note
The first user you create will be automatically added to the Administrators role
Attention!
The object trajectory DB cannot be backed up. In the event of server hardware failure, all metadata stored in the
database will be lost irretrievably.
To create a backup:
1. Go to the Configuration tab.
2. Click the Back up button (1) and download the backup by clicking the button.
Attention!
Restoring configuration will stop all active tasks.
The server did not pass some checks. The red stripe around the icon reflects the percentage of failed checks.
2. The percentage of allocated CPU resources is calculated for all running nodes. For example, a node consuming 9300
standard units out of a total of 12400 accounts for (9300/12400)*100% = 75%.
Attention!
This parameter does not reflect the actual Server CPU load; the amount of the node related load can be either
higher or lower than the displayed value.
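The allocation figure above follows from a one-line calculation (a sketch; the standard-unit numbers are taken from the example in the text):

```python
def allocated_cpu_percent(consumed_units: int, total_units: int) -> float:
    """Percentage of allocated CPU resources attributed to a node."""
    return consumed_units / total_units * 100

# Example from the text: 9300 of 12400 standard units
print(allocated_cpu_percent(9300, 12400))  # 75.0
```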
To open the CPU server load diagram, click Show Resource.
Status Description
2. Enter the IP address of any cluster server (1) and click the button.
Attention!
You cannot connect to a node located behind a NAT.
3. Select from the list the node you want to connect to (2). Enter the first characters of the node name into this field to
start a quick search.
4. Enter the user name and password (3) and click Connect (4).
During the first connection to the node, you will be prompted to create an Axxon domain (see Creating a new domain).
You can then merge nodes into a unified logical structure following standard procedures of Axxon domain configuration (see
Configuring Axxon domains).
Attention!
An Axxon domain cannot include nodes from different clusters.
2. Enter the IP address of any cluster server (2) and click the button (3).
3. Select from the list the node you want to add to autostart (4).
4. Click the Add button (5).
2. Click Update.
Attention!
You can bulk upgrade the entire cluster only if all of its servers are accessible.
3. On your PC, select the required distribution package as a ZIP archive, or specify a web link (1).
4. Select the update method: all Servers simultaneously, or one after another (2). In the former case, the operation of all
nodes is halted until the update is finished; in the latter case, the nodes operate without interruption.
5. To record all installation-related events to a log file (3), select the Enable full installation log check box.
6. Click Start.
The installation package download starts. You can cancel the update at any time until the download is complete.
7. After the installation package is downloaded, the Axxon Next software suite will be updated in quiet mode on all Servers
within the cluster. Depending on the previously selected update method, the Servers will be updated simultaneously or
one after another.
After the update is completed on a Server, its status will change to Done.
Note
If the upgrade fails, you will see the Error message in the status bar.
After all Servers within the cluster are updated, the status at the top of the page will also change.
<item key="NGP_IFACE_WHITELIST">172.17.0.0/16</item>
Use the following settings format: "IP-address1/number of unit bits in the mask, IP-address2/number of unit bits in the
mask".
2. Add the same parameter to another file: C:\Program Files\AxxonSoft\AxxonNext\bin\raft\raft-settings.xml.
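For example, an item whitelisting two subnets in the stated format would look like this (the second subnet is illustrative, not from the original configuration):

```xml
<item key="NGP_IFACE_WHITELIST">172.17.0.0/16, 192.168.1.0/24</item>
```

Here /16 and /24 are the number of unit bits in the subnet mask (CIDR notation).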
Note
Aside from the listed errors, the service detects internal errors of the Axxon Next software, and performs required
actions for each case.
Note
In a failover system, the self-diagnostics service is activated / de-activated in cluster settings (see Configure a Failover
System Cluster).
There are two states of a camera window within the layout: active or inactive.
A window in active state includes an additional navigation panel (see Advanced archive navigation panel) and video mode
selection tabs (see Video Surveillance Mode Selection Tabs).
To switch a window to active state, click anywhere inside the window; clicking outside de-activates it.
A more detailed description of the functions of the viewing tile can be found in the section titled Video Surveillance.
If the connection to the camera is lost, the camera window is darkened and you get a corresponding message on the most
recently received image.
Note
Depending on the settings (see Configure Time Display), the indicator may show the date
In archive, alarm, and video frame search modes, it shows the time of the fragment being viewed and the playback mode:
1. Forward playback .
2. Reverse playback .
3. Pause .
If the video is currently being recorded from the camera, the letter R is displayed in red to the right of the clock:
. Otherwise, the letter R is displayed in gray: .
If the camera is not linked to the archive, the letter R is crossed out: .
Server-side FPS: frame rate of the video stream received from a video camera or an archive.
Note
The video stream parameters are updated every 10 seconds.
Alarm Management mode is activated when an alarm is triggered (see Initiating an Alarm).
When you click a live camera tile, the advanced navigation panel shows only the timeline and the archive selection button.
Note
If the camera is not linked to a video archive, the panel will be unavailable
In Live Video mode, if you click the timeline, you go to Archive mode.
The advanced archive navigation panel includes the following components:
1. Timeline;
2. Playback control buttons;
3. Archive selection button;
4. Tabs for compressed and standard archive playback modes.
Tracks are marked in different colors depending on the alarm status or detection tool activation:
Additionally, the timeline on the advanced archive navigation panel features missing footage tags . A tag is displayed when
footage is missing for over 40% of the currently visible part of the timeline.
Depending on the duration of the missing footage, tags may have different thicknesses:
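The 40% threshold described above can be sketched as follows (a hypothetical helper; the client's actual interval bookkeeping is not documented here):

```python
def show_missing_footage_tag(visible_span_s: float, recorded_s: float) -> bool:
    """A tag is shown when footage is missing for over 40% of the
    currently visible part of the timeline."""
    missing_fraction = (visible_span_s - recorded_s) / visible_span_s
    return missing_fraction > 0.40

print(show_missing_footage_tag(3600, 1800))  # half the hour missing -> True
print(show_missing_footage_tag(3600, 3000))  # about 17% missing -> False
```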
The date of the first recording in a Video Footage archive is displayed near the left edge of the archive stripe.
Note
In this case, the archive portion left of the marker contains videos shot when the camera was not linked to this
particular Archive.
The advanced archive navigation panel is used to position the archive at a specific time, control playback, and switch to
compressed archive playback mode.
The advanced archive navigation panel works completely in sync with the playback panel and the timeline:
1. The playback mode selected on the advanced navigation panel is displayed on the playback panel.
2. The playback speed that is set on the playback panel will be used as the playback speed when playback is restarted on
the advanced navigation panel, and vice versa.
3. The playback control buttons on the advanced navigation panel are the same as the buttons on the playback panel.
4. Any movement through the main timeline is duplicated onto the timeline of the advanced navigation panel.
2. Select the check boxes for the types of alarms which should be displayed on the archive navigation panel, according to
their status:
a. Confirmed alarm
b. Suspicious situation
c. False alarm
d. Unclassified alarm
e.
Note.
If you clear the check box for a certain type of alarm, this type of alarm and the corresponding track are
no longer displayed on the timeline
3. Select the check boxes for the types of alarms which should be displayed on the archive navigation panel, according to
the cause of their initiation:
a. Initiated by operator
b. Initiated by video detection tool (basic, situation analysis, or embedded)
c. Initiated by audio detection tool (basic, situation analysis, or embedded)
d. Initiated by input
Note.
By default, all check boxes are already selected.
Attention
To display alarms on the timeline, select at least one type of alarm event and one initiator
Note.
To close the Events Filter, click the same button again.
Tracks are marked in different colors depending on the alarm status or detection tool activation:
Note
Pre-alarm recordings are white on the timeline, post-alarm footage is blue.
Note
If video recordings overlap or coincide in time, the available footage is prioritized as follows:
1. If there is recorded video, then red colored recordings have the highest priority and white ones have the least
priority.
2. Grey footage takes priority over dark grey.
At the moment when an alarm is assigned a status (critical, non-critical, false, or unclassified), a flag is added to the track. A flag
is added to the point on the timeline when the alarm began.
Note
Display of any particular alarm event in the list is determined by filter settings (see the section titled Events Filter).
Operator comments are displayed with the corresponding icons on the track. An icon is placed on the timeline at the point
corresponding to the commented frame (or to the first frame of the interval, if the comment is for an interval).
If comments were left during alarm classification, the icons are displayed in the appropriate colors.
You can scroll and zoom the timeline using the mouse.
To scroll the timeline, move the cursor vertically over its background while holding down the left mouse button. To change the
scale of the timeline, hold down the right mouse button on the timeline's background and move the cursor
down to zoom out or up to zoom in.
The timeline lets you select the moment at which to start playback of a recording in the viewing tile. To choose the starting
moment, either left-click the indicator and drag it to the desired position while holding the button down, or simply left-click
the left portion of the timeline.
If there is no recording in the selected position, the indicator will automatically move to the position corresponding to the
nearest recording.
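The snapping behavior can be modeled as moving the clicked time to the closest recording boundary (a sketch with hypothetical interval data; the client's actual logic is not documented here):

```python
def snap_to_nearest_recording(t, recordings):
    """Return t if it falls inside a recording; otherwise the nearest
    recording edge. recordings: sorted (start, end) tuples in seconds."""
    for start, end in recordings:
        if start <= t <= end:
            return t
    # Collect all interval edges and pick the one closest to t
    edges = [edge for interval in recordings for edge in interval]
    return min(edges, key=lambda e: abs(e - t))

recs = [(0, 100), (200, 300)]
print(snap_to_nearest_recording(140, recs))  # 140 falls in a gap; snaps to 100
```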
Note
You can also set a timeline indicator in the desired position by indicating the exact date and time (see the section titled
Navigating Using the Timeline).
You can also position the timeline indicator with the help of the events list (see the section Events List).
Note
Whether or not a particular event is displayed in the list depends on the filter settings (see the section Events Filter).
Note
The list displays only the alarm events that are currently in the visible portion of the timeline
Note
Navigation through the archive by using the events list is described in the section Navigating Using the Events list
5. Play/Pause.
The button also acts as a slider which sets the speed and mode (forward/backward) of playback.
Note
Use of the playback panel is described in detail in the section Navigating Using the Playback Panel.
The Video Wall panel is automatically displayed at the top of the screen.
The panel is used to set up a video wall from all monitors currently connected to Axxon domain Servers on which video wall
management is permitted for the given User (see Creating and configuring roles).
Note
The panel shows thumbnail views of Client monitors currently connected to Axxon domain Servers on which video wall
management is permitted for the given User (see Creating and configuring roles).
To open an expanded monitor view, click on its thumbnail.
Note.
You can change the width of the Monitor panel. To do so, click and drag the panel border.
When you resize the Monitor panel, the Layout panel is resized automatically.
Note
If the client is connected to multiple Axxon domains, only layouts from the main Axxon domain are available.
Layouts Management
Note
To work with OpenStreetMap maps in Axxon Next, you need to purchase an OpenStreetMap license.
The map can contain icons for cameras, inputs, and outputs. For each camera, the live video display area and the field of view
are indicated.
Please refer to the section titled Working with the Interactive Map for further details on how to work with the 3D map.
Note
If the client is connected to multiple Axxon domains, cameras from the main Axxon domain are listed by default. To find
cameras from another Axxon domain, select the domain from the dropdown list (3).
To search for a specific camera, enter its full name or part of it into the search bar.
If you click a camera, a layout opens with the minimum number of cells for displaying the selected camera views.
Note
If the current layout contains the selected camera, the relevant viewing tile becomes active.
If there is no layout with the selected video camera, a new layout with a single cell is created.
To open the Objects Panel (see Objects Panel), click the
button.
The Alarms panel is at the top of the screen. By default, it is set to Auto Hide. To open the Alarms panel, click the button.
The panel opens downwards, occupying a vertical strip on the screen. You can stretch or shrink it from 10% to 50% of the screen
height.
To resize the panel, left-click the Alarms button and hold and drag the pointer up or down.
You can also expand the panel to full screen. This will hide the camera views (layout). To do so, click the button.
You can do this when the size of the panel exceeds the minimum size (10% of the screen height).
To resize Event Previews, use the slider in the bottom left corner of the panel - .
Note
If the Client is connected to multiple Axxon domains, you can see cameras from all Axxon domains according to your
permissions.
To open this panel, click the button in the upper left corner of the screen.
Note
On Object Panel, the cameras can be sorted either by name or by a short name (see Configuring camera sorting on
Object Panel).
The object tree on the panel can be represented as Servers (1), Groups (2) and Layouts (3).
If you select a camera on the panel, a layout opens with the minimum number of cells for displaying the selected camera views
(see Camera Search Panel).
Note
The search results are displayed in the Objects Panel dynamically — as users type and refine their queries.
Note
The PTZ control panel is displayed only if the PTZ object for the particular video camera is enabled (see the section
titled The PTZ Object).
The PTZ control panel is used for the following functions:
1. Controlling PTZ video cameras.
2. Setting and switching to camera presets.
3. Launching/stopping PTZ tours.
4. Launching/stopping patrolling.
The PTZ control panel includes the following interface elements:
1. Presets list.
2. PTZ tours list.
3. Dialer
4. PTZ controls for iris, focus, and optical zoom
Note
If a camera does not support a function, the controls for this function cannot be accessed
5. Virtual 3D joystick
Note
The type of virtual 3D joystick and adjustment scale depend on the type of PTZ cameras: discrete or continuous
control of Pan, Tilt, Zoom, Focus, and Iris.
6. Patrol button
Note
Use of the dialer, PTZ controls, joystick, and patrol button is described in the section Controlling a PTZ Camera.
To hide the board, click the Story board button once more.
Note
Alarm Management mode is available if an alarm has been initiated in the system.
When a viewing tile is enlarged, the scale of the entire layout is increased. Some of the cells are moved off the screen.
Viewing tiles are enlarged as follows:
1. If a viewing tile occupies 100% of any of the sides of the layout (maximum viewing tile size), it cannot be enlarged.
2. If a viewing tile occupies 50% or more (but not 100%) of any of the sides of the layout, it is enlarged as much as possible.
3. If a viewing tile occupies less than 50% on both sides of the layout, it is enlarged in two steps: the first step enlarges the
viewing tile to 50% on the corresponding side of the layout and the second step enlarges the viewing tile to the maximum
size.
Note
The third case applies to layouts that contain nine or more cells
If a viewing tile is linked to another one or an information board, at the first enlargement step (to 50%), the viewing tile and the
other tile / information board are displayed together and occupy all of the screen on one side.
Note
In this case, the first step takes into account the total size of the related cells: the related cells must be less than 50% of
both sides of the layout
Also, if you click a viewing tile, you can control its size with the buttons on the top panel:
Digital zoom scale:
To enlarge a video image, left-click the slider, hold, and drag it up the digital zoom scale to the desired value. The maximum
zoom is 16x. To return to the original image, move the slider back to its original position.
To hide the digital zoom scale, select Hide digital zoom in the context menu of the viewing tile. Also, 3 seconds after you scale
down the video image to the minimum, the zoom scale will automatically hide.
After hiding the digital zoom scale, the selected zoom level of the image will be preserved when switching between image
viewing modes.
Note
If you select an area that requires a zoom of more than 16x to display, it will be marked with a red frame. The video
image will not be enlarged.
Mouse wheel is scrolled forward by one level: the video image is enlarged by 2x.
Mouse wheel is scrolled backward by one level: the video image is reduced by 2x.
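Assuming a 1x minimum (the text states only the 16x maximum), the wheel behavior can be sketched as a doubling or halving of the zoom factor, clamped to the supported range:

```python
def wheel_zoom(current: float, steps: int) -> float:
    """Each forward wheel step doubles the zoom, each backward step halves it,
    clamped between the assumed 1x minimum and the 16x maximum."""
    factor = current * (2.0 ** steps)
    return max(1.0, min(16.0, factor))

print(wheel_zoom(1.0, 4))   # 16.0 (four forward steps reach the 16x maximum)
print(wheel_zoom(1.0, 5))   # 16.0 (clamped at the maximum)
print(wheel_zoom(8.0, -2))  # 2.0
```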
To enable video image processing functions, use the Visualization option in the context menu of the viewing tile. Only one image
processing function can be enabled at a time.
To return to the original image, reselect the Contrast option in the Visualization context menu.
The image in the following picture shows an example of use of the Sharpness tool.
To utilize this tool, select the Deinterlace option in the Visualization context menu.
Note
If you enable video rotation, only the video in Live Video and Archive modes is rotated. The rotation is not applied:
• to video display on the map and on the alarm panels;
• to archive recording;
• to export;
• to analytics (metadata).
To disable video rotation, select Visualization-> Disable rotation in the viewing tile context menu.
Attention!
Object Tracking is available if:
1. the object tracker is activated for this camera (see General information on Scene Analytics);
2. the Video Motion Detection tool is activated (see Configuring VMD);
3. at least one of the Embedded Analytic tools is activated (see Embedded Detection Tools).
To disable object tracking, click Hide tracking in the viewing tile context menu.
If you have created a situation analysis tool for this video camera (see Functions of Scene Analytics), then you can see the
detection parameters (areas, lines) in Live Video mode along with object tracking in the camera window.
Note
Areas excluded from surveillance are outlined with a dotted black-and-green line, while detection areas are outlined in
black and gray.
If an ANPR detection tool has been created for a camera, the license plates within the video image will be outlined (see Automatic
Number Plate Recognition (LPR/ANPR) tools).
Attention!
To ensure correct display of the outline, set the camera's Video Buffering parameter to a value in the range of 500–1000
(see The Video Camera Object).
If a face detection tool has been created for a camera, all faces within the video image will be outlined.
If a pose detection tool (see Configure Pose Detection) has been created for a camera, a human skeleton is highlighted over the
video image.
Note
If comments are added during playback in Archive or Archive Analysis mode, playback is paused after the button is
clicked.
In Alarm Management mode, operators can be required to give comments after classifying an event (see Configuring Alarm
Management Mode) or comments can be left in free form, before event classification, by clicking the button. The comment
applies to the entire duration of the alarm.
To add a comment, click the button. A dialog box opens for entering a comment.
2. Transparency of comment window, by adjusting the slider from left to right (from opaque to maximum
transparency).
3. Marking the area of interest in the frame, with a dot ( ), semicircle ( ), or rectangle ( ). To do so:
1. Click the relevant button and then click anywhere in the frame. The selected element is displayed.
2. Drag the element to the necessary place in the frame. To do so, left-click and drag the edge of the area (or for a dot, click
and drag the dot).
3. Set the size by dragging the corner points.
To save the comment, click the Save button. Otherwise, click to cancel.
After being saved, a comment is displayed in the frame as specified. To delete the comment, click the button before you
perform any other command in the system.
You can have titles from several POS terminals in the same camera window.
Note
Keywords or lines can be highlighted according to the settings (see Configuring keywords).
Note
With certain caption output settings (the Duration parameter = 0, see Configuring titles view) and low-intensity
events at the checkout, there may be a time lag between the displayed captions and the video time stamp.
To disable titles overlay, select Event Sources in the context menu of the viewing tile, then select the POS terminal that you want to hide.
To disarm a camera, select Disarm in the context menu of the viewing tile. The video camera will then be disarmed.
Attention!
PTZ camera is controlled in accordance with the priority settings (see Creating and configuring roles). If multiple users
have the same control priority, they can control a PTZ camera simultaneously.
If a user with higher priority controls the PTZ camera with the PTZ control panel (as long as the camera is
selected), users with a lower priority cannot control it. If a user with higher priority controls the PTZ camera, the
relevant information is displayed on the panel.
If the option to simultaneously control a PTZ camera by multiple users is disabled, users with the same priority take
over control on a first-come, first-served basis.
However, a user with equal or higher priority can take over the PTZ control. To do this, click the Take Control button.
If the user that controls the PTZ camera is idle for a certain time (see Configuring PTZ control), the camera is automatically
unlocked and control becomes available to all users.
The following actions can be performed using the PTZ device control panel:
1. Use presets.
2. Modify the parameters of the iris, focus, and optical zoom.
3. Modify the horizontal and vertical tilt angle of the video camera.
4. Start/stop patrol mode.
Note
Setting presets is described in detail in the section The PTZ Control Panel.
8.2.3.4.1 Presets
The presets list created for a selected video camera is displayed in the upper part of the PTZ control panel in the
tab.
For each preset in the list, the following parameters are displayed:
1. The identification number
2. A descriptive name
The presets list is used for the following functions:
1. Creating presets.
2. Editing the identification number and name of an existing preset.
3. Deleting presets.
4. Switching to a preset.
You can create up to 100 presets with numbers from 0 to 99. To create a preset, you must perform the following steps:
1. Place the PTZ camera in the position which is to be saved as a preset.
2. Click . Fields for entering an identification number and a descriptive name for the preset will then appear.
Attention!
If a preset with the identification number entered already exists, its parameters, as well as the corresponding
PTZ camera position, will be overwritten.
4. Left-click anywhere in the presets list and press Enter to save changes.
Creation of a preset is now complete.
To edit the number and name of an existing preset, you must perform the following steps:
1. Highlight the desired preset in the list.
2. Click . The identification number and descriptive name fields will then become accessible for editing.
3. Modify the preset number and/or name as desired.
4. Left-click anywhere in the presets list to save changes.
Editing of the preset is now complete.
To delete an existing preset, you must perform the following steps:
1. Highlight the desired preset in the list.
2. Click .
The preset has now been deleted.
To switch to a preset, left-click the corresponding line in the presets list. The camera will then be switched to the desired
position.
Note
See the section Selecting a preset.
To switch a PTZ camera to a preset, you can use the Enter number panel. To display the Enter number panel, click the Enter
number button.
To switch to a preset using the Enter number panel, you must perform the following steps:
1. Using the numeric buttons (0-9), enter the number of the preset to which you want to switch.
The entered number is displayed in a special field.
To delete the last digit entered, click the button.
2. Click the button to switch to the preset with the number entered. The camera will then be switched to the desired
position.
Switching to a preset using the Enter number panel is now complete.
Note
Examples of entering a number:
Attention!
In Axxon Next, you can set up PTZ tours only for cameras connected via the ONVIF Generic driver (see Generic Drivers
(General device, generic)).
The presets list created for a selected video camera is displayed in the upper part of the PTZ control panel in the
tab.
Attention!
This parameter is reserved for future Axxon Next software versions.
Note
If your camera is set to Discrete PTZ Control via Continuous, your control options will be limited to buttons only (see The
PTZ Object).
Set the sensitivity level of PTZ step buttons by selecting a value from 1 to 10.
If you press and hold the buttons, the PTZ camera will move continuously.
Attention!
Interaction between PTZ cameras and the Client may cause the cameras to jerk in some cases.
Cameras also may jerk when they are connected by the Onvif protocol (see Notes on configuring video cameras
connected via ONVIF).
Note
You can also move the joystick by clicking and holding the left mouse button outside of the joystick border.
The turn speed depends on the tilt of the joystick: the greater the tilt, the higher the speed.
Note
If you rotate a camera view / viewing tile by 180°, the PTZ controls are inverted.
8.2.3.4.4 Patrolling
Patrolling is an automatic change in the position of a camera along a route defined in the camera's presets list.
Note
You can use a cycle macro to set up patrolling (PTZ camera tour, see Switch to a PTZ camera preset, Wait for timeout,
Cyclical macros).
Patrolling must be activated in camera settings (see The PTZ Object). In this case, the operator can stop patrolling by clicking the
Patrolling button on the PTZ control panel. After the manual PTZ control session is over, patrolling will resume automatically.
If patrolling is switched off in camera settings, you can switch it on by clicking the Patrolling button.
Attention!
Manual control takes priority over automatic control. Any interference in the patrolling process cancels it.
Note
Any user (regardless of priority, see Creating and configuring roles) can stop patrolling.
Patrolling will automatically stop when a manual PTZ control session is over, or after you close the PTZ control panel.
Note
Control session automatically stops after a set idle time (see Configuring PTZ control).
To control focus, iris, and optical zoom, move the corresponding slider up or down.
If the camera has AF (auto focus), you can see the corresponding button under the slider.
Some devices allow you to control optical zoom with the mouse scroll wheel.
To control optical zoom with the mouse, go to OnScreen PTZ mode (see Controlling a PTZ Video Camera in the OnScreen PTZ
Mode), otherwise the zooming will be digital (see Digitally Zooming Video Images).
Note
In OnScreen PTZ mode, the Areazoom function is disabled (see Control using Areazoom).
To change the viewing angle, click on a video image with the left mouse button and move the mouse pointer in the required
direction. During this action the software displays a visual element on the image showing the camera lens movement direction
and speed.
The faster you move the mouse, the faster the camera rotation will be.
Attention!
To operate Point&Click, go to OnScreen PTZ mode (see Controlling a PTZ Video Camera in the OnScreen PTZ Mode).
Once you have done that, the focus of the camera lens will automatically change to the selected area. The focus is changed using
Axxon Next algorithms.
Note
This function is available for only some CCTV cameras.
For more information, contact AxxonSoft.
Note
Areazoom function is unavailable if the OnScreen PTZ mode is enabled (see Controlling a PTZ Video Camera in the
OnScreen PTZ Mode).
To do so:
2. Hold down the mouse button and move the pointer outward from the center of the focus area to set the size of the
area. Release the mouse button to finalize the selection.
The lens is reoriented and the image is enlarged so that the selected area now fills the entire viewing tile.
Note
This function is available for only some CCTV cameras.
For more information, contact AxxonSoft.
Attention!
To use Tag & Track Pro, make sure your PTZ camera supports Absolute Positioning. The devices that support Tag &
Track Pro are listed in the Drivers Pack documentation.
When connecting via the ONVIF protocol, Absolute Positioning support is also required. Contact the camera vendor
for information on Absolute Positioning support in the ONVIF protocol.
Attention!
If manual or control priority mode is selected, for Tag & Track Pro you must activate object tracking in the viewing tile of
the overview camera (see the Tracking objects section).
• If automatic mode is selected, the PTZ camera will track all active objects, switching focus between them with the
specified dwell time.
Note
The PTZ camera is positioned so that the moving object is in the center of the frame.
• In manual mode, the PTZ camera tracks an object only after the object is manually selected in the viewing tile (left-click to
track). If you click anywhere in the overview camera's FOV that contains no tracks, the PTZ camera cancels object tracking
and focuses on the specified point.
• If control priority mode is selected, the PTZ camera automatically tracks an object until another object is manually
selected for tracking in the viewing tile. If an object is deselected (by clicking again) or if it leaves the field of view of the
PTZ camera, automatic mode is re-activated.
• If the PTZ camera is in manual mode, the user controls it with the on-screen controls or with a joystick / CCTV keyboard.
If the user does not operate the PTZ camera (the Control panel is hidden), automatic mode is used.
Note
Tag & Track Pro cannot be used simultaneously with the OnScreen PTZ mode (see Controlling a PTZ Video Camera in the
OnScreen PTZ Mode).
Attention!
For Tag & Track Lite to work, you must activate object tracking in the viewing tile (see the Tracking objects section).
2. After the selected object leaves the field of view of the camera, the camera's location on the map and the trajectory of
the object are used to predict the camera in front of which the object is likely to appear.
3. The viewing tile of that camera is activated. If the current layout does not contain that camera, a minimal layout with the
camera is shown.
Note
If the viewing tile for that camera is currently in archive mode, the tile is switched to Live Video mode and made
active.
Note
If video in the viewing tile of the original camera is magnified and the predicted camera is not in the layout, the
viewing tile with the camera is made active and the same degree of digital zoom is applied.
Attention!
Tag & Track Lite merely predicts, and therefore cannot guarantee, that the object will appear in front of a given
camera.
8.2.3.5.3 Simultaneously using Tag & Track Pro and Tag & Track Lite
In some cases, it may be useful for Tag & Track Pro and Tag & Track Lite features to be active at the same time.
For example:
• In Tag & Track Pro, manual or control priority mode is selected for the PTZ mode.
• In Tag & Track Lite, an object is selected in the field of view of the camera that is designated as the overview camera for
Tag & Track Pro.
In this case, both features are active: the object will be tracked by the PTZ camera and its trajectory will be used to predict the
camera in front of which it will appear next.
Note
You must first activate an object before you can control its output.
Note
To hide Output Switch, select Hide output in the context menu of the viewing tile.
You can toggle the output state by clicking the radio button.
Note
If an output is controlled by several operators simultaneously, the output will remain activated as long as at least one
operator requires it.
Normal
Activated
Note
You must first activate an object to display the status of its input.
The status of the input will now appear in the viewing tile.
Note
To hide the input status, select Hide input in the context menu of the viewing tile.
Note
If the video camera is not configured for multiple video streams, this action is not available (see The Video Camera
Object).
2. Select the quality of video stream that you want for display in the viewing tile.
Item Description
Auto (GreenStream) By default, a low-quality video stream is displayed. When you select a camera window, it
switches to the highest-resolution stream. When you switch to another camera window, the
now-inactive window returns to lower resolution / fps display
High A high-quality video stream is used for display in the viewing tile (see The Video Camera
Object).
Low (default) A low-quality video stream is used for display in the camera window (see The Video Camera
Object). When you zoom in on the camera window, it switches to a high-quality video stream
(see Scaling the Viewing Tile).
Adaptive Viewing Tile shows an adaptive video stream (see Configuring an Adaptive Video Stream).
Note
Automatic video stream selection (enabled by the Auto option) is unavailable if automatic resolution selection
has been set for any stream (see The Video Camera Object).
8.2.3.9 Autozoom
The Autozoom function performs automatic control of digital zoom.
If a viewing tile is inactive and autozoom is enabled, the following actions occur:
1. The smallest rectangular area that contains all tracked objects (even if object tracking is disabled) is chosen.
2. Maximum digital zoom is performed for the selected area.
If autozoom is enabled but there are no moving objects in the video frame, the contents of the viewing tile are shown at their
original size.
Note
If the Fit screen function is activated for a viewing tile, the default digital zoom level is used.
Autozoom stops when a viewing tile is selected and resumes when the viewing tile is no longer active.
Autozoom can be enabled both for a single camera and for all video cameras in a layout.
To enable autozoom for a specific camera, in the viewing tile context menu, select Enable autozoom.
Important
Autozoom is available if:
1. the object tracker is activated for this camera (see General information on Scene Analytics);
2. the Video Motion Detection tool is activated (see Configuring VMD);
3. at least one of the Embedded Analytic tools is activated (see Embedded Detection Tools).
Note
Autozoom resizing takes into account objects from all tracking sources that are activated for a particular video camera.
To disable autozoom, select the corresponding command in the viewing tile context menu.
To enable autozoom for all cameras in a layout, select Enable autozoom for all.
To disable autozoom for all cameras in a layout, select Disable autozoom for all.
Note
If autozoom is activated for one or more cameras in a layout, by default the menu displays the Disable autozoom for
all option.
Note
When you switch to the layout editing mode, autozoom is disabled for all cameras.
8.2.3.11 Snapshot
In Live Video mode, you can "freeze" video. To do so, click the time display (scrubber) in a viewing tile.
This will cause the viewing tile to be highlighted with a blue border. A snowflake icon will appear in the time field.
In this case, the camera window will also have buttons to set the moment of time, if you want to go to Archive mode (see
Navigation via the time indicator).
To return to live video, click the display again.
When you click a link, the corresponding camera is selected, and the Camera window expands.
If a linked camera is not present on the currently selected layout, a layout containing the required camera is selected. If there are
multiple layouts containing the required camera, the layout with the least number of windows is selected.
If a linked camera is not present on any layouts, a temporary layout is selected that will be automatically deleted after you
proceed to another layout.
Attention!
These changes are not saved, and the camera will reappear when you call the layout again, or relaunch the Client.
You can further drag and drop a camera from the Objects Panel (see Objects Panel) or Camera Search Panel (see Camera Search
Panel).
Note
If a camera is not linked to a video archive and has no on-board storage, this tab will not be available.
You can also switch from Live Video mode to Archive mode if you select a position on the advanced archive navigation panel
(see Advanced archive navigation panel).
Note
In Live Video mode, if the viewing tile is not active, the tabs for switching to other modes and the advanced archive
navigation panel are not displayed. To activate the tile, click it.
To switch all cameras within a layout to Archive mode, click the Playback button on the upper panel.
Furthermore, if all cameras within a layout are in Live Video mode, you have to open the Archive navigation panel to switch the
cameras to Archive mode (see Show and Hide the Archive Navigation Panel).
Note
If archive mode is selected as the default video mode for a camera in a layout, when you switch to that layout, the
camera is immediately in archive mode (see Selecting the default video mode for a camera).
On first access to Archive mode, the most recently recorded video will be selected on the timeline (see The Timeline). On further
accesses to a particular camera archive, the timeline indicator will show the position of the most recent video in the Archive.
Attention!
If you prefer to open the most recently recorded video when accessing the Archive (Video Footage), create a
ResetArchivePosition parameter in the following registry key on the Client:
HKEY_LOCAL_MACHINE\SOFTWARE\AxxonSoft.
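On Windows, one possible way to create this parameter is from an elevated command prompt. Note that the guide specifies only the key and the parameter name; the value type and data shown below are assumptions, so confirm the exact value with AxxonSoft support.

```shell
REM Create the ResetArchivePosition parameter in the documented key.
REM The REG_DWORD type and the data value 1 are assumptions, not from the guide.
reg add "HKEY_LOCAL_MACHINE\SOFTWARE\AxxonSoft" /v ResetArchivePosition /t REG_DWORD /d 1 /f
```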
Click to create a temporary layout for Archive (video footage) viewing.
The temporary layout is not preserved after you switch to any other layout.
Note
Refer to section Real-time video surveillance for a description of switching to the results of a saved search query
and the Autozoom function.
Note
You can select all available Mirror Archives (if any, see Configuring data replication) and on-board storage (if
enabled, see The Embedded Storage object)
You can now view video footage from the selected archive in the viewing tile.
Attention!
The next time you enter the Archive mode, the selected (not default!) archive will be displayed.
Note
If there is no recording in the selected archive, a message to that effect will appear in the viewing tile.
Records from all checked Archives will appear on the timeline. You can apply any system function to a combined Archive.
Note
Clicking on a particular Archive brings you to viewing videos from this Archive only.
Note
If you select multiple archives for a particular camera, and then switch all the layout to Archive mode, all cameras will
be set to multi-archive display.
When combining multiple streams from the same camera into one Archive, the highest quality stream is prioritized.
For example, if
• a lower quality video stream is permanently recorded into Archive #1,
• and a high quality video stream is recorded into Archive #2 by VMD,
then the combined Archive will consist of high quality motion-triggered records and low quality "other" videos.
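The selection rule in this example can be sketched as follows. This is an illustration of the described behavior only, not AxxonSoft code; the record representation and function name are hypothetical.

```python
# Sketch of the combined-archive rule described above: when several
# archives hold video for the same moment, the record from the
# highest-quality stream is played.

def pick_record(records, t):
    """records: list of (start, end, quality); higher quality value = better."""
    covering = [r for r in records if r[0] <= t < r[1]]
    if not covering:
        return None  # no archive has footage for this moment
    return max(covering, key=lambda r: r[2])

records = [
    (0, 100, 1),   # Archive #1: continuous low-quality stream
    (40, 60, 2),   # Archive #2: high-quality stream recorded by VMD
]
print(pick_record(records, 50))  # (40, 60, 2): high quality wins during motion
print(pick_record(records, 10))  # (0, 100, 1): low quality elsewhere
```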
Synchronized archive playback is controlled through the playback panel in the same way as playback for a single archive.
Attention!
For the Object Tracker, you have to select the video stream currently selected for Video Footage recording (see Setting
General Parameters).
Note
The TimeCompressor mode allows you to view the results of a specific archive search (see Viewing Search
Results In TimeCompressor).
Note
Only one video camera can run TimeCompressor at one time. If synchronized playback is started and a video camera is
switched to TimeCompressor mode, playback of all other video cameras will be automatically paused.
Note
To switch back to the standard archive browsing mode, click the displayed area of the Advanced Archive Navigation
Panel .
Note
Once you have configured this setting, playback begins at the beginning of the selected interval.
Note
However, according to the logic of the algorithm, the number of displayed objects may be greater.
To stop or start playback, use the and buttons on the playback panel or the identical buttons on the advanced
navigation panel.
To start archive playback in TimeCompressor mode starting at the beginning of the selected interval, click the button (2).
The system will now automatically switch back to the original recording of the object in standard archive playback mode.
Playback of the recording will be paused, and the beginning of the recording will correspond to the moment at which the object
was selected.
The period during which the object remains in the camera's field of view is displayed in the viewing tile.
Note
Once you have switched back to the original recording of the object, you can return to TimeCompressor mode to the
place where the switch was made. To do this, click the tab. In this case, playback in TimeCompressor mode will be
paused.
Comment text begins to display five seconds before the frame for which the comment was added (before the first frame, if the
comment was set for an interval), with gradual outlining of the area (or point) that was specified when adding the comment.
When the commented frame is shown, or during the commented interval, the area (or point) is also highlighted.
Five seconds after the commented frame (after the end of the interval, if the comment was for an interval), the comment is
hidden.
To minimize comments and the displayed area, if any was specified, click the button.
3. To delete footage within the specified time interval for all cameras within the archive, check the All box.
4. Click .
Attention!
You cannot recover deleted footage.
Attention!
If several archives were selected for viewing (see Viewing a combined Archive), the footage will be deleted from
all of them.
After the deletion is complete, the remaining footage may contain some artifacts near the cut points.
Note
Use of the timeline is described in detail in the section The Timeline.
You can select recordings in the archive for playback in a viewing tile by using the timeline, in one of two ways:
1. Left-click the indicator (1) and drag it to the corresponding position on the timeline. Alternatively, you can left-click the
left portion of the timeline.
Note
The position on the timeline is a graphical representation of a specific moment in time.
The frame corresponding to the selected position (moment in time) will then be displayed in the viewing tile (2).
Attention!
If the system clock is shifted back one hour (for example, when switching to winter time), the videos from the
overlapped hour will disappear from the timeline but remain accessible.
For example, at 3 AM the clock is shifted back one hour. If you place the timeline marker anywhere after 02:00:00,
videos shot after the shift will be played back.
If you set the marker to 01:59:59 or earlier, videos shot from 01:59:59 to 02:59:59 (time-stamped before the shift)
will be played back.
2. Click the indicator. The calendar opens. Select the date to which you want to jump in the archive and specify the time in
HH:MM:SS format, by using the arrows or keyboard number keys.
Note
The Tab key can be used to navigate across various elements of the Calendar.
When you hold down the left mouse button and move the timeline, you can view the corresponding recording in fast
forward. The further left you click, the faster the playback speed.
Note
The current moment in time is determined by the cursor located in the center of the timeline (2). The position of the
cursor relative to the timeline never changes.
Once the selected moment is reached, playback stops. The speed of playback depends on the speed of the timeline’s movement.
To start archive playback, click in the middle of the timeline. To pause playback, click the button or left-click the
timeline.
To control playback, use the playback panel (see the section titled Navigating Using the Playback Panel) or the advanced
navigation panel.
Playback Pause
Attention!
Click and hold the button to jump to the end of the archive.
1. Play recording:
2. Pause/Stop playback:
Slow playback.
For reverse playback of a recording, move the slider to the left of the position corresponding to zero playback speed (the center
of the slider); for forward playback, move it to the right. The current playback speed is displayed under the slider. During forward
playback of a recording, a + sign appears before the speed; during reverse playback, a - sign appears. The value 0X corresponds
to zero speed, i.e., no playback; the value 1X corresponds to the frame rate of recording.
To speed up playback by one step, click +. To slow down by one step, click -. To temporarily change the playback speed, move
the slider in the desired direction.
To slow playback N-fold, do as follows:
1. Accelerate playback N-fold:
2. Click the value of the current playback speed below the slider.
This slows the playback N-fold. To return to fast playback, click the current speed again.
Note
Forward playback speed can be increased up to 32x, reverse playback speed — up to 8x.
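The stepping behavior can be sketched as follows. This is a minimal illustration, not the Client's actual implementation: the guide states only the 32x forward and 8x reverse limits, so the assumption that each step doubles or halves the speed is hypothetical.

```python
# Illustrative sketch of one-step speed changes on the playback slider.
# Assumption: each step doubles/halves the speed; only the limits
# (32x forward, 8x reverse) are stated in the guide.

FORWARD = [1, 2, 4, 8, 16, 32]
REVERSE = [-1, -2, -4, -8]

def step(speed, direction):
    """direction = +1 for the '+' button, -1 for the '-' button."""
    ladder = sorted(REVERSE + [0] + FORWARD)  # -8 ... 0 ... +32
    i = ladder.index(speed)
    i = max(0, min(len(ladder) - 1, i + direction))  # clamp at the limits
    return ladder[i]

print(step(16, +1))  # 32  (one step faster)
print(step(32, +1))  # 32  (forward limit reached)
print(step(-8, -1))  # -8  (reverse limit reached)
```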
Key or key combination Resultant action during pause Resultant action during play
Ctrl+Spacebar Uses the current position to set the export interval Uses the current position to set the export interval
Up-Arrow Increases playback speed by one level Increases playback speed by one level
Down-Arrow Decreases playback speed by one level Decreases playback speed by one level
Page down Switches to the next recording Switches to the next recording
If not all tabs fit in the viewing tile, a full list of saved Forensic Search queries is available by clicking the button.
Clicking a tab switches to Archive mode, displaying the results of the relevant search on the timeline (the process is similar to
viewing search results in Archive Analysis mode).
The standard Archive mode controls are used for navigating between search results (see Navigating in the Archive).
Note
To search in standard archive mode without displaying search results, click the corresponding tab in the viewing tile.
The parameters for the search are displayed when switching from search results to Archive Analysis mode.
3. Set the time interval on the timeline (see Standard video export). Recordings from this range will be copied to the new file.
Attention!
You can replicate recorded video only to the "right" end (later point in time) of the archive. It is not possible to
overwrite existing data in the archive.
If the selected replication range starts and ends earlier than the starting time of the mirror archive (e.g. you want
to copy an hour of video footage from 9 a.m. to 10 a.m., but the mirror archive starts at 11 a.m.), replication will
not be possible (the Export button is not available).
Attention!
For Tag & Track Lite to work, you must activate object tracking in the viewing tile (see the Tracking objects section).
2. The most probable camera where the object may have been captured next is suggested.
3. After selecting an object, you are switched to the suggested camera. Camera footage will be played back automatically
from the moment when the target object was supposed to appear in the FoV.
Attention!
Tag & Track Lite merely predicts, and therefore cannot guarantee, that the object will appear in front of a given
camera.
To switch to Alarm Management mode, click the button in the lower-left corner of the tile.
Note
You can initiate an alarm only if the specific video camera is linked to the archive.
2. The operation will trigger an alarm that appears on Alarms Panel (see Alarms Panel). To classify an alarm, click the
button again.
Note
When in Alarm Management mode, the user that initiated the alarm will be indicated at the bottom of the
viewing tile.
To evaluate the situation, click the Alarms Panel tab, select the event and classify it in Alarms Management (see the section titled
Selecting Events for Alarm Management).
When you hover over the Event Preview tile, all information about the alarm pops up.
If you click the button, the event footage / alarm recording will then be played back in Event Preview in a repeating cycle.
To stop playback, click the button.
On the Alarms Panel, click on the alarm event video window to play back the event video in the camera window.
If the button is activated on the Alarms Panel, the alarm video will appear in a temporary layout containing just the current
camera.
If the button is inactive, the video playback will start in the regular layout.
3. If the required layout does not yet exist, the system creates a new layout with a single video camera.
4. The system switches to the selected layout.
5. The video camera becomes active in the selected layout. The viewing tile is expanded by one level. It switches to
Alarm Management mode (if you have selected an active alarm) or to Archive mode (if you have selected a
processed / classified or missed / unclassified alarm).
If the alarm was initiated automatically, the visual element set for the detection tool that initiated the alarm will be displayed
in the viewing tile: either a detection area or a virtual tripwire, which triggers the detection tool when it is crossed. The object
that caused the trigger will be outlined with a red frame.
Display of an Area visual element:
The name of the detection unit that initiated the alarm is displayed in the lower portion of the viewing tile.
To navigate the fragment of an alarm event, use the Advanced Archive Navigation Panel (see Navigation using the advanced
panel) or the Playback Panel (see Navigating Using the Playback Panel).
To switch to a required fragment of an alarm event in order to play it again, hold the timeline pointer with the left mouse button
and drag it to the required position.
Attention!
In the case of multi-user event processing, only the first operator to switch to alarm mode may process the alarm (if he
or she has the appropriate permissions). For the rest of the operators, the Alarm Management buttons are not
displayed.
Confirmed alarm
Suspicious alarm
False alarm
8.2.5.6 Limitations when working with alarm events in case of multi-user processing
In the case of multi-user processing, only one operator may accept an alarm for processing. Other operators may switch to alarm
mode with limited functions for the purpose of playing back the alarm. This can be done in one of two ways:
1. Click the button (see the section Video surveillance in Alarm Management mode).
2. Switch to the Alarms tab and select the alarm from the alarms list.
In Alarm Management mode with limited functions, the Alarm Management buttons are not displayed. Instead, the name of the
operator who is currently processing the alarm is displayed. The other functions of the alarm handling tile remain unchanged.
After the alarm is processed on another client, the status assigned to the alarm is displayed on this client in place of the
operator's name.
If a user has accepted an alarm for processing and leaves Alarm Management mode (going to Live Video, Archive, or Archive
Search mode, the viewing tile of another camera, etc.), other users will be able to accept the alarm for processing after a
period equal to the operator's idle timeout.
If more than one alarm appears for one camera, any operator may access all alarms not yet accepted for processing.
Note
If the video camera is not linked to a video archive, this tab will be unavailable.
Note
In Live Video mode, if the viewing tile is not active, the tabs for switching to other modes are not displayed. To display
the tabs, click the viewing tile with either mouse button.
2. Search control panel (2, see Search in an archive of a single video camera)
3. Search results panel (3, see Viewing search results)
4. Archive navigation panel (4, see The Archive Navigation Panel)
You can hide search parameters for a portrait-oriented camera. To do so, click the button.
6. Events search.
7. Forensic search.
8. Time search.
9. Searching comments.
10. Switching between search results.
11. Playing back fragments retrieved by searches of specific moments in time.
12. Zooming in on objects that trigger detection tools.
13. Functions Available in All Video Surveillance Modes.
Note
The functions for navigating through an archive, displaying the causes of situation analysis detection unit triggering,
and Archive Selection were inherited from archive mode; their descriptions are Video surveillance in archive mode. The
Autozoom function is described in the Real-time video surveillance section.
Note
The current Axxon Next release supports only one type of search at a time.
Note
In the on-board storage of the camera, you can only find video episodes with thumbnail search (TimeSlice).
To alter the search interval, select the desired value from the Search Range dropdown list.
Timeline Searches span across Archive tracks currently displayed on the timeline.
Range Searches are performed within the interval currently selected on the timeline.
Last 1h/3h/6h/12h/24h The searches will be performed within the last hour (or 3, 6, 12, 24 hours) of recorded footage.
Next 5min /15min /30min /1h /3h The searches will be performed within the following interval: [specified start of the interval;
specified start of the interval + 5min / 15min / 30min / 1h / 3h]. To set the beginning time of
Event Description
All alarms The search finds moments in the archive containing all types of alarms
Non-critical alarm The search finds moments in the archive containing non-critical alarms
Critical alarm The search finds moments in the archive containing critical alarms
Unclassified alarm The search finds moments in the archive containing unclassified alarms
False alarm The search finds moments in the archive containing false alarms
Triggering The search finds moments when detection units were triggered
Recording start The search finds the beginning and end of recordings from the specified video camera regardless of the initiator
2. Select an event initiator from a list with the same name (2).
Note
An event initiator could be an operator, a video camera input, or any detection unit that is activated in the
system. The search results will show the moments in time containing the events that were triggered by the
initiator.
Note
Once launched, the search can be stopped at any time. To do this, click the Stop button which appears instead
of the Search button.
This starts a search in the archive based on the defined criteria. Search results are available on the search results panel.
Note
To zoom objects that caused an alarm or triggered a detection unit, select the Expand alarm object check box in the
lower portion of the search results panel.
Note
It's recommended to set the interval value to no less than 10 seconds.
Note
Once launched, the search can be stopped at any time. To do this, click the Stop button which appears instead
of the Search button.
The search results panel displays frames that match moments in time that are equally spaced from each other; the search
control panel shows the number of fragments found.
4. If the specific moment is not found, start a second search iteration: double-clicking a found moment triggers a
search in the time interval from this moment to the next one.
5. Keep searching until the specific moment is found.
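The narrowing process in steps 4-5 can be sketched as follows. This is an illustration of the idea only, not AxxonSoft code; the function name and the choice of six thumbnails per pass are hypothetical.

```python
# Sketch of the iterative narrowing performed by thumbnail (TimeSlice)
# search: each pass splits the current interval into equally spaced
# moments, and double-clicking a moment searches again between that
# moment and the next one.

def narrow(start, end, n, chosen):
    """Return the sub-interval between chosen moment `chosen` and the next one.
    n moments split [start, end] into n equal sub-intervals (0-indexed)."""
    step = (end - start) / n
    return (start + chosen * step, start + (chosen + 1) * step)

# Example: a 1-hour interval, 6 thumbnails per pass, event near minute 47.
interval = (0.0, 3600.0)
for chosen in (4, 4, 1):        # picks made by the operator on each pass
    interval = narrow(*interval, n=6, chosen=chosen)
print(interval)                  # the interval shrinks 6x per pass
```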
Note
Information on playback of video fragments is provided in the section titled Playback of video fragments
Attention
Search is performed for the entire string of entered text, not for separate words.
Note
If no text is specified, all comments for the selected interval are found.
Note
Once launched, the search can be stopped at any time. To do this, click the Stop button which appears instead
of the Search button.
This starts a search for video fragments based on the defined criteria. The search results pane displays frames for which there are
comments containing the search text. The relevant comment is displayed under each frame.
Note
If the comment was left for an interval, the first frame of the interval is displayed.
1. Motion in Area.
2. Loitering of an object in a specific area.
3. Simultaneous presence of a large number of objects in a specific area.
4. Crossing of a virtual line by an object’s trajectory.
5. Motion from one area to another.
To move an area node, position the cursor on the node and hold down the left mouse button while you move the mouse.
2. Select the metadata source if there are several for this video camera. This parameter will not be displayed if there is only
one source.
3. Specify any number of additional parameters by clicking , if necessary (see Configure the search parameters).
4. Set the search interval (see Setting a search interval).
5. Click the Search button.
Note
Once launched, the search can be stopped at any time. To do this, click the Stop button which appears instead
of the Search button.
3. Set the minimum duration of stay in the area (2, in seconds and minutes). Search results contain recorded video in which
the object is present in the area for longer than the indicated time.
4. Specify any number of additional parameters by clicking , if necessary (see Configure the search parameters).
5. Set the search interval (see Setting a search interval).
6. Click the Search button.
Note
Once launched, the search can be stopped at any time. To do this, click the Stop button which appears instead
of the Search button.
3. Specify the number of objects allowed in the area (2). Search results contain recorded video in which the number of
objects in the area exceeds the specified number.
4. Specify any number of additional parameters by clicking , if necessary (see Configure the search parameters).
5. Set the search interval (see Setting a search interval).
6. Click the Search button.
Note
Once launched, the search can be stopped at any time. To do this, click the Stop button which appears instead
of the Search button.
Note
You can collapse the graphical elements if they block the visual elements and prevent editing them. To hide
them, select the Hide graphical elements check box.
2. Select the metadata source if there are several for this video camera. This parameter will not be displayed if there is only
one source.
3. Specify any number of additional parameters by clicking , if necessary (see Configure the search parameters).
4. Set the search interval (see Setting a search interval).
5. Click the Search button.
Note
Once launched, the search can be stopped at any time. To do this, click the Stop button which appears instead
of the Search button.
To move the end point of a line, position the cursor on the end point and hold down the left mouse button as you move
the mouse.
By default, both directions of motion across the virtual line are taken into account when searching the archive. If you do
not need to search in a specific direction, click the button corresponding to that direction.
Attention!
At least one direction must be selected for the search.
Note
A disregarded direction of object motion is indicated by a dimmed arrow.
2. Select the metadata source if there are several for this video camera. This parameter will not be displayed if there is only
one source.
3. Specify any number of additional parameters by clicking , if necessary (see Configure the search parameters).
4. Set the search interval (see Setting a search interval).
5. Click the Search button.
Note
Once launched, the search can be stopped at any time. To do this, click the Stop button which appears instead
of the Search button.
For the Object trajectory crossing a virtual line and Motion from area to area searches, the following additional parameters are available:
• Maximum and minimum object size
• Maximum and minimum object speed
• Object color
• Object type
Note
The first method lets you roughly configure the size, and the second method allows you to set the size precisely.
1. Position the cursor on a visual element node and hold down the left mouse button while moving the mouse (1).
2. Set the width and height of an object of the minimum (maximum) size using the arrows in the upper and lower margins,
respectively. The dimensions of a visual element in the viewing tile can be changed in a similar manner (2).
The minimum (maximum) size of an object is now set.
Configuring minimum and maximum object speed
In the Axxon Next VMS, speed is a relative value computed from parameters with different units; the computing algorithm
takes both frame width and frame height into account. For a more accurate search, we recommend performing several search
iterations, adjusting the speed values empirically.
The procedures for setting the minimum and maximum speed of a moving object are identical.
The minimum (or maximum) speed of a moving object can be set using any of the following methods:
1. Position the cursor on an end point of the arrow and hold down either mouse button while you move the mouse. The
length of the arrow will correspond to the minimum (maximum) displacement of the object per second (1).
2. Use the arrows to set the minimum (maximum) speed of the object as percentages of the frame per second (2).
The minimum (maximum) speed of a moving object is now set.
The following objects will be included in the search results:
• If only the maximum speed is set, the results contain objects that move slower than the maximum speed.
• If only the minimum speed is set, the results contain objects that move faster than the minimum speed.
• If both the maximum and the minimum speed are set, the results contain objects whose speed does not exceed the
maximum but is higher than the minimum.
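The three rules above amount to a simple filter predicate. The sketch below is illustrative only (it is not Axxon Next's actual implementation, and the function and parameter names are invented for the example); speeds are expressed in the relative units described earlier, as fractions of the frame per second.

```python
def matches_speed(object_speed, min_speed=None, max_speed=None):
    """Return True if an object's relative speed passes the search filter.

    Illustrative sketch: a None bound means that bound is not set.
    Per the rules above, the speed must not exceed the maximum and
    must be higher than the minimum.
    """
    if min_speed is not None and object_speed <= min_speed:
        return False
    if max_speed is not None and object_speed > max_speed:
        return False
    return True

# Only a maximum set: keep objects slower than it.
print(matches_speed(0.10, max_speed=0.30))                   # True
# Only a minimum set: this object is too slow.
print(matches_speed(0.10, min_speed=0.20))                   # False
# Both bounds set: speed must lie between them.
print(matches_speed(0.25, min_speed=0.20, max_speed=0.30))   # True
```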
Configuring object color
The color range is selected using drag and drop on the color palette (click and hold either mouse button, move the mouse, then
release the button).
Any click on the palette is interpreted as the beginning of a new range; the previous range will disappear.
Attention!
Axxon Next's internal logic treats all objects as monochrome: the object color is averaged within the object's contour.
All objects of specified colors will appear in search results.
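The averaging described above can be sketched as follows. This is a minimal illustration of the idea, not Axxon Next's actual algorithm; the function name and pixel representation are invented for the example.

```python
def average_color(pixels):
    """Average the RGB values of the pixels inside an object's contour.

    `pixels` is a list of (r, g, b) tuples. The single averaged color is
    what a color search would compare against the selected range.
    """
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return (r, g, b)

# A half-red, half-blue object averages to a purple tone, so it would
# match a purple range rather than red or blue.
print(average_color([(255, 0, 0), (0, 0, 255)]))  # (127, 0, 127)
```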
Note
The object type is determined by analyzing its appearance. An item that does not move for some time is considered to
be abandoned, e.g. a parked car.
Attention!
You cannot search by object type in VMD-generated metadata (see Setting up VMD-based Scene Analytics).
Attention!
Search is performed for the entire string of entered text, not for separate words.
Note
Once launched, the search can be stopped at any time. To do this, click the Stop button which appears instead
of the Search button.
This starts a search for video fragments based on the defined criteria. The search results pane displays frames for which there are
titles containing the keywords.
Attention!
When you search for events, the event time corresponds to the start time of the receipt, rather than the time of
occurrence of the search text.
and 0 in the second and third position respectively. The total number of characters in number plates will be variable.
Note
Once launched, the search can be stopped at any time. To do this, click the Stop button which appears instead
of the Search button.
This starts a search for video fragments based on the defined criteria. The search results pane displays frames for which there are
number plates containing the search text.
A recognized number plate will be highlighted by a red frame in the Viewing Tile.
3. Select a photo with the face to be searched for in the archive (2). Supported formats: png, jpg, jpeg, jpe. Clicking on a
face will select this face as a search parameter. If you do not select a photo, then all faces recognized during the specified
time will be displayed.
4. Select how to sort the search results: by face match or by time (3).
5. Click the Search button.
Note
Once launched, the search can be stopped at any time. To do this, click the Stop button which appears instead
of the Search button.
Attention!
The search range cannot be saved.
To apply the saved search criteria to another camera's archive, switch the camera to Archive Search mode and select the
required search query.
To edit a search query, view the list and select the relevant query.
Changes are not saved until the Save button is clicked. If the query name is changed, the query is saved under the new name and
the old, unchanged query remains available.
Note
The number of stored results is limited only by the amount of RAM in the server.
Click on the search control panel to switch to the previous search result, and click to switch to the next result.
Each time you switch between results, the search results panel displays the moments corresponding to the previous/next result.
The search results panel displays the precise moments in an archive that correspond to the defined search criteria. The precise
time of each moment is displayed underneath (1).
Note
An alarm object is outlined in red.
Note
To copy the time and the start date of the video fragment to the Clipboard, right-click on them.
A scroll bar is located on the right side of the search results panel (2). Beneath is a time scale adjuster (3).
If you choose a spot on the timeline, the search results are automatically sorted, and the closest episode is highlighted
in the search results.
You can filter the search results and keep only the important episodes. To do this:
1. Double-click the episode that you want to keep. Its thumbnail is tagged with a star.
2. Using the playback panel (1), start playback of the fragment in the viewing tile (2).
By default, playback starts at the time specified under the thumbnail. You can use the control (3) to change the start
time. If the control is in the leftmost position, playback starts 4 seconds earlier than that time; if the control is in the
rightmost position, playback starts 4 seconds later.
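The offset control can be modeled as a mapping from slider position to a time shift. The endpoints (4 seconds earlier/later) come from the description above; the linear mapping in between, and all names, are assumptions made for this sketch.

```python
def playback_start(fragment_time, slider_pos):
    """Playback start time (seconds) for a given offset control position.

    `slider_pos` runs from -1.0 (leftmost) to +1.0 (rightmost). The
    endpoints shift playback 4 seconds earlier or later; the linear
    interpolation between them is an assumption of this sketch.
    """
    return fragment_time + 4.0 * slider_pos

print(playback_start(100.0, -1.0))  # 96.0  (starts 4 s earlier)
print(playback_start(100.0, 1.0))   # 104.0 (starts 4 s later)
```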
Note
If object tracking is activated in the viewing tile, then the properties of tracked objects (width and height as a
percentage of the width or height of the frame) are displayed when viewing video fragments found through forensic
search.
Note
To switch between video fragments, use the corresponding buttons on the playback panel or on the advanced
navigation panel (see the sections titled Navigation using the advanced panel and Navigating Using the Playback
Panel).
Zooming a camera window to the maximum hides the Search Conditions and Archive Navigation panels (see Scaling the
Viewing Tile).
Attention!
Enlargement occurs only in the following cases:
1. If the height and width of the visual item specified in the forensic search settings are less than 1/3 of the frame
dimensions.
2. If the tracked object occupies less than 1/3 of the frame (for detection tool search).
3. If the object marked by the comment occupies less than 1/3 of the frame (for comments search).
In all other cases, the found moments are displayed in their entirety.
Important
Double-clicking the found moment will also cause a repeated search within the selected time interval for export.
Attention!
The number of video cameras for search is not limited.
2. In the Camera Search Panel, check the cameras where Video Footage search has to be performed (see Camera Search
Panel).
To check all cameras within an Axxon domain, check the domain's box in the Objects Panel (see Objects Panel).
1. 360° panorama.
2. Regional view.
3. 180° panorama (for video cameras with an Immervision lens).
2. Select Change panomorph view type to PTZ or Change panomorph view type to Perimeter.
Note
This setting is not preserved if you switch to another layout.
Note
This display format is only available in Live Video and Archive Video modes.
When digital zoom is applied to video (see Digitally Zooming Video Images) by one notch or more, regional viewing begins.
The following actions are available when viewing video in this format:
1. Point & Click functionality (see Control using Point&Click).
2. Change the angle of view of the fisheye camera, by left-clicking in the viewing tile.
In both viewing modes, all standard video surveillance functions are available for the fisheye camera.
When using a dual lens fisheye camera, the default viewing mode is set to two 180° views.
When you zoom in one of the images, both views will be merged into a single panoramic view.
Note
If the video camera is wall-mounted, the angle of view cannot be configured (see Configuring fisheye cameras).
To set the viewing angle, click and hold the button while moving the cursor to the left or right.
Point&Click (see Control using Point&Click) and all standard video viewing functions are available when viewing video in this
format.
8.2.7.2.1 Viewing video and controlling a fisheye camera from the map
If a fisheye camera is ceiling-mounted (this position is selected in the video camera settings, see Configuring fisheye cameras)
and a 360° field of view is specified for it on the map, the video from the camera is displayed on the map in real time.
To refocus the angle of view of a fisheye camera so that a chosen point in the viewing tile becomes the center of the frame, left-
click that point (this is the Point & Click function, see Changing the camera lens focus (Point&Click)).
Note
If the viewing tile for the fisheye camera is inactive when it is clicked, the first click on the video on the map activates the
viewing tile. The second click activates the Point & Click function.
In immersive mode, only the following video surveillance functions are available for fisheye cameras:
1. Digital zoom via mouse scrolling (see Enlarging a video image using the mouse scroll wheel).
2. Point & Click functionality (see Changing the camera lens focus (Point&Click)).
3. To change the angle of view of a fisheye camera, move the mouse around the video image while holding down the left
mouse button.
If you activate age and gender information collection, the results are displayed on the panel and saved to the System
Log (see Configure Facial Recognition).
If the Camera window is linked to the Event Board (see Linking cells), you can double click on an event to start the Video Footage
search for sequences containing the recognized face.
If you have created lists of faces, configured an alarm on facial recognition, and linked the Camera window to the Dialog
Board, the following information appears on screen upon recognition of a person that belongs to a list:
1. Reference photo from the List of Faces.
2. Close up shots of faces captured in the scene.
3. Additional information about the person, and similarity percentage between the recognized and reference photos.
If the Dialog Board has a portrait orientation, its lower part will display the alarm video.
You can search FR events in recorded video from one (see Face search) or multiple cameras (see Simultaneous search in an
archive of several video cameras).
You can set the system to mask recognized faces from viewing (see Masking faces).
Attention!
Depending on detection tool settings, a delay may occur between the number recognition and the registration of the
corresponding event (see Configuring License plate recognition (VT), Configuring License plate recognition (IV)).
The event is time-stamped with the time of recognition, not registration.
For example, if a car passes the camera at 12:05:00, and the detection tool is set to a 10 sec timeout, the event will be
registered at 12:05:10 and the event data will include 12:05:00 as the time of recognition.
Attention!
The standard license imposes a delay of more than 30 seconds between the number recognition and the corresponding event
(see Configuring License plate recognition (VT)).
All relevant events can be displayed on the Events Board (see Working with Events Boards) or Dialog Board (see Working with
Dialog Board).
If you have created any LP Lists in your system, you can program automatic responses (e.g. alarm triggering) to LP recognition
events related to the list's entries (see Configuring real-time vehicle license plate recognition).
Each face recognition event contains thermal measurement data which can be displayed on the Events Board (see
Working with Events Boards).
2. With the use of built-in detection tools of selected camera models (see Embedded Detection Tools).
Some cameras are capable of displaying a bounding box over the facial image along with the corresponding temperature
readings. If this option is available, it can be activated via the web interface of the particular camera.
Note
When the Counter Board is enlarged, the graph is enlarged as well, displaying data for a broader range of time. When
the size of the Counter Board is reduced, the opposite occurs.
In both cases, the right-hand border of the graph is constant.
If an information board tile is linked to a viewing tile, at the first enlargement step (to 50%), the viewing tile and information
board tile are displayed together and occupy all of the screen on one side.
Note
In this case, the first step takes into account the total size of the related cells: the related cells must be less than 50% of
both sides of the layout.
Note
If configured, Dialog Board hides after you click a Response Button (see Configure the Dialog Board).
If all cells in the layout have the same size, the space freed up after hiding an information board is allocated to the neighboring
cells. Horizontal neighbors have priority over vertical ones.
If free space cannot be distributed horizontally, it is distributed between the vertical neighbors.
In more complicated cases (when cell sizes are different), an attempt is made to distribute the free space between horizontal
neighboring cells. If this is not possible, free space is distributed between vertical neighboring cells. If even this second attempt is
unsuccessful due to the layout configuration, the space remains empty.
Hidden information boards are displayed in two cases:
1. After switching to another layout and back to the original one.
2. When an event occurs that requires the operator's attention. A description of such events for each type of information
board is given in the following table.
Dialog and Events boards: an event matching the board's filtering settings occurs.
Note
The frame is not displayed when there is no recording in the archive.
3. Text only
When you switch to a layout, the Events Board is displayed as configured in the settings by default.
At the top of the list are the most recent events. If you are at the end of the list, you can use the button in the panel's top-
right corner to go to the most recent events. New events are highlighted for 4 seconds. The panel may include up to 100 events. If
the number of events exceeds the maximum permitted value, the newer events will be displayed in place of older ones.
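The panel's 100-event cap can be modeled as a bounded queue in which the newest event sits at the top and, once the cap is reached, each new event evicts the oldest. This is an illustrative sketch of the behavior described above, not the actual implementation.

```python
from collections import deque

# A bounded queue: appendleft puts the newest event at the top of the
# list; once 100 events are held, the oldest (rightmost) is evicted.
events = deque(maxlen=100)

for i in range(150):
    events.appendleft(f"event {i}")

print(len(events))   # 100
print(events[0])     # event 149 (most recent, top of the panel)
print(events[-1])    # event 50  (oldest still shown)
```

Using `deque(maxlen=...)` keeps eviction automatic, which matches the described behavior of newer events replacing older ones once the maximum is exceeded.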
You can also access the events panel on the right side of the screen (see Viewing selected camera's detection tool triggering
events). In this case, it includes only detection events for a selected camera.
Note
If there is no archive for a camera when an alarm occurs, the archive is positioned at the closest recorded archive entry.
Note
If an Events Board is linked to several cameras, all cameras transition to Archive mode.
To switch to viewing the status of cameras, click the diagram for the relevant server.
Note
In table mode, you can view server status by clicking the relevant line in the table.
3. as a table
Note
Disconnected Servers are displayed at the end of the list with dimmed brightness.
Note
The remaining time of the archive replication is also displayed in the table.
• Red – load over 95%; connection failure; critical load on the disk subsystem, data loss when recording to archive over 10%.
• Yellow – load from 85% to 94%; at 70% to 100% of capacity; elevated load on the disk subsystem, data loss when recording to archive under 10%.
• Green – load under 85%; at less than 70% of capacity; normal functioning of the disk subsystem (proper operation).
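The load thresholds above translate directly into a color mapping. The sketch below is illustrative only (the function name is invented, and how the boundary value of exactly 95% is classified is not specified in the table, so this sketch assigns it to yellow as an assumption).

```python
def load_status_color(load_percent):
    """Map server load (percent) to the indicator color per the
    thresholds above. Illustrative sketch, not the product's code."""
    if load_percent > 95:
        return "red"     # critical: archive data loss over 10%
    if load_percent >= 85:
        return "yellow"  # elevated: archive data loss under 10%
    return "green"       # normal functioning

print(load_status_color(96))  # red
print(load_status_color(90))  # yellow
print(load_status_color(50))  # green
```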
When you switch the server to reserve power, an icon is added to the chart. The icon disappears when you restore the main
power.
The edge of the diagram changes color based on the status of the connected cameras (see Viewing camera status).
If the entire edge is green, all cameras are in normal condition. If part of the edge is yellow or red, some cameras have borderline
or critical status.
Overall server status is determined from the above parameters as follows:
1. Normal – all components and cameras are normal.
2. Borderline – possible problems with the status of at least one component or camera.
3. Critical – at least one component or camera is in critical condition.
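The three rules above are a worst-status-wins aggregation, which can be sketched as follows. Names and the status strings are invented for this illustration.

```python
def overall_status(statuses):
    """Aggregate per-component/camera statuses into an overall server
    status: the worst status present wins, per the rules above."""
    if "critical" in statuses:
        return "critical"
    if "borderline" in statuses:
        return "borderline"
    return "normal"

# One borderline camera is enough to make the whole server borderline.
print(overall_status(["normal", "borderline", "normal"]))  # borderline
```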
Server information is updated every ten seconds.
If the connection to a server is lost, a corresponding icon is used to depict it.
If all servers are in normal condition, the bottom of the board displays a status bar with information about the number of
monitored and distressed servers.
If the status of any server worsens, the status bar is replaced by a message. When the message is clicked, the server status is
displayed (if the board is currently displaying camera status).
The message then closes and the status bar again appears.
Note
If the status of several servers worsens, a message is shown for the last one.
1. as diagrams
3. as a table
Note
Disconnected cameras are displayed at the end of the list with dimmed brightness.
Note
For constant recording, this parameter value is equal to 24 hours. If recording takes 50 percent of the total time,
the value is 12 hours.
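The note above implies a simple proportional relationship, sketched here for illustration (the function name is invented for the example):

```python
def recorded_hours_per_day(recording_fraction):
    """Hours of footage per day given the fraction of time recording is
    active: constant recording (1.0) gives 24 hours, half-time gives 12."""
    return 24 * recording_fraction

print(recorded_hours_per_day(1.0))  # 24.0
print(recorded_hours_per_day(0.5))  # 12.0
```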
Information about detection tools is received in real time. Depending on the status of detection tools, the corresponding icons
change color:
• Green – detection tool status is normal
• Red – detection tool activated
• Gray – detection tool disabled
When the current time becomes 1:08:00 p.m., the points will be updated to 12:58:00 p.m., 12:48:00 p.m., etc.
The graph displays the current number of events. The number of events is recalculated every minute and does not depend on the
interval chosen.
For example, for this graph with a time interval of 2 hours and a current time of 1:48:23 p.m., the current number of events
equals 413, for the period from 11:48:00 a.m. to 1:48:00 p.m.
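The counting rule described above is a sliding window ending at the current time, rounded to the minute. The sketch below illustrates that rule with times expressed as fractional hours; all names are invented for the example, and the board itself recalculates every minute.

```python
def events_in_window(event_times, now, window_hours):
    """Count events whose timestamps fall within the chosen interval
    ending at the current time. Times are hours as floats."""
    start = now - window_hours
    return sum(1 for t in event_times if start <= t <= now)

# With a 2-hour interval ending at 13.8 (1:48 p.m.), only events from
# 11.8 (11:48 a.m.) onward are counted.
print(events_in_window([11.0, 11.9, 12.5, 13.7], now=13.8, window_hours=2))  # 3
```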
To scroll through the graph, use the arrows on the graph edges. To jump to the last point on the graph, click the
button.
Clicking anywhere on the graph jumps to the nearest point and the relevant value is indicated.
Note
If there is no network connection, the requested page cannot be reached, or other problems occur, the Web Board displays
standard Internet Explorer error messages.
1. The most recent event matching the board filtering settings (1, see Configure the Dialog Board).
Note
All of the above-mentioned board elements are optional; you can configure which of them the board shows.
In the second mode, the panel will display an alarm event from the selected or linked camera, depending on the panel setting
and the event evaluation buttons.
Note
To select multiple cameras, use Ctrl or Shift key.
After you drag and drop a group of cameras, the layout will include all cameras from this group; when you drag and drop an
Axxon domain, all cameras from this domain will be displayed.
Note
Currently disabled cameras will not be included.
If the client is connected to multiple Axxon domains, layouts for the main Axxon domain are listed by default. To view the layouts
of other Axxon domains, select the desired domain on the camera search panel.
If another user has shared the layout, you will see the following icon:
Note
If you hover the mouse cursor over such a layout, you will see the name of the user that has shared it.
To launch a slideshow, click the button, and select Start slideshow in the context menu of the layout panel.
Note
If you have only one layout in the VMS, the Start slideshow item does not appear in the context menu.
This will launch a carousel of all available layouts according to the assigned dwell-time (see Configuring Slideshow parameters).
To launch a user-defined slide show, select the required one from the list.
Note
To launch a slide show on a video wall (see Monitor Management), select the required monitor in the Video Wall
Management Panel (see Monitor Panel).
To turn off slideshow mode, select Stop slideshow in the context menu of the layouts panel or left-click any viewing tile.
For the first 10 seconds after an alert appears in the special layout, it is highlighted.
Note
The alarm sticks to the layout while the Alarm Management window is open and before the timeout expires. If
you select a different camera without classifying the alarm, the alarm evaluation timeout resets.
All changes to the Dynamic Layout are preserved until you exit the Client. After you launch the Client, the Dynamic Layout is
always empty.
However, you can back up your Dynamic Layout at any given time (see Layout copying).
Note
Client monitor management falls under user rights (see the Creating and configuring roles section).
Note
Each monitor has its unique ID number. To display the ID number of a monitor, click .
The thumbnails may appear differently, depending on the status of the monitors in Axxon Next.
Thumbnail Monitor status
Note
The image of the monitors shows the currently open layout.
You can close main and additional monitor views separately. To close an additional monitor in Axxon Next, click the button
on its thumbnail.
Note
You can add a layout to the additional monitor of a remote Client the same way as you do it for a local Client (see
Managing monitors on a local Client).
The remote Client's layout is saved, and will be available after rebooting the Client.
You can change the set of cameras to be displayed on the main or additional monitor using the Objects Panel; these changes are
not saved, and the initial layout is restored after rebooting the Client. To do it, follow the steps below:
1. Open and lock the Monitor panel (see Monitor Panel).
2. Open the Objects Panel (see Objects Panel).
3. Using the Ctrl or Shift keys, select one or several cameras on the panel.
4. Left-click on any selected camera.
5. Drag the camera icon onto a desired cell on the main or additional monitor's layout diagram.
6. Release the mouse button.
Another way to put a camera into a desired cell on the main or secondary monitor layout diagram is to capture the camera's icon
on the interactive Map (see Working with the Interactive Map).
To close an additional monitor on a remote Client, click the button on the monitor's thumbnail.
Note
In Archive mode and Archive Search mode, an audio recording can be played back only from the microphone
corresponding to the currently selected video camera, and only in forward playback mode at a speed of 1x.
To listen to sound from a camera's microphone on the Client, left-click the speaker icon in the viewing tile to activate it.
Note
Audio from only one camera can be played back at a time
The speaker icon now becomes active and a volume slider appears.
Note
The Axxon Next VMS has an embedded sound booster.
2. Select Sound.
Note
The currently selected microphone is marked as On. If you select a different microphone, the previously selected
one is set to Off and the newly selected one to On.
Note
If a currently specified loudspeaker goes offline, the system automatically switches to another available one.
After the first loudspeaker goes back online, no automatic switching will occur.
To automatically activate the first loudspeaker in such a case, you have to create an NGP_PORTSOUND_HOSTAPI system
variable and set it to DS (see Appendix 10. Creating system variable).
Sound from the Client microphone can be broadcast both on a single camera and on all cameras in a layout.
Attention!
To use this option:
1. Audio must be configured on the Client (see Configuring audio on the Client).
2. Speaker objects must be activated for the corresponding cameras (see The Speaker Object).
To broadcast sound on the speaker of a single camera, left-click the microphone icon in the viewing tile.
To turn off broadcasting of sound on a camera speaker, click the microphone icon again.
To broadcast sound on speakers on all cameras in a layout, in the context menu of the layouts ribbon, select Enable audio for
all.
The microphone icon is then activated for all cameras that have an activated Speaker object.
To turn off sound broadcasting on all cameras, select Disable audio for all.
To go to Map View, click the Map button in the bottom right corner.
The Map opens in a 3D view, while the current layout contracts to fit the screen area over the map.
To switch to 2D map view and close the layout, click the button to the left of the Map button.
If you expand a tile to full screen, the map auto-hides (see Scaling the Viewing Tile).
Note
When you minimize the tile, the map appears again.
Note
You can also switch to 2D when the map is hidden.
To return to 3D, click the Map button. To close the map, click the button.
Note
The map is automatically resized and refocused if this function is activated in the settings (see Configuring map
autozoom).
Automatic adjustment of map scale and focus occurs when a video camera alarm occurs, if no video camera icon is selected on
the map.
In this case, the map is scaled and refocused to center the icon for the alarm camera on the map.
If alarms occur for several video cameras simultaneously, the map scale and position are adjusted to show all icons for the
relevant video cameras.
After a video camera alarm ends and there are no alarms for other video cameras, the map scale and position return to their
initial status.
Automatic scaling and focusing of the map stops when the user clicks to select a video camera icon on the map or in a
viewing tile.
To manually adjust the map scale, use the mouse scroll wheel (the cursor must be above the map) or use the map scale slider.
After increasing the scale, you can refocus the map with the mouse (by clicking and holding the left mouse button) in the
direction of your choice.
You can adjust transparency of video display in the Map View using a dedicated slider in the bottom right corner.
The leftmost position corresponds to no video; the rightmost makes the video opaque.
Use the button to toggle the display of devices' names and their IDs.
To enable/disable camera icon fluttering on alarm, use the / button.
To change the action performed when you click a video camera icon on the map, use (takes you to the layout with the camera) /
(switches to immersive mode (see Immersive mode)).
To change map view, use the button (3D view) / (flat view).
Video in 3D view:
To switch to immersive mode, click the button on the left border of the viewing tile or, on the map, left-click a video
camera icon, field of view, or video display area.
Note
The second method is possible if the button is not pressed (see Customizing an Interactive Map).
In immersive mode you can view video from only one video camera at a time.
To select another video camera, do one of the following:
1. Click the video camera icon or its field of view on the map, if possible.
2. Exit immersive mode and select the necessary video camera on the map.
To exit immersive mode, do one of the following:
Note
Actions 2 and 3 do not apply if a fisheye camera is in immersive mode (see Fisheye cameras in immersive mode).
Note
If many maps have been created, some tabs may not fit on the screen. If this happens, click the button. In
the drop-down menu that opens, select a map.
2. By left-clicking a map icon for switching, if it has been created (see Adding switches to another map).
The icon header shows the name of the destination map.
Command (context menu item) Condition Icon status after the command is performed
Command (context menu item) Condition Icon status after the command is performed
Note
From within the map, you cannot switch the status of the Output if there are macros with the corresponding action
running in the system.
Possible statuses of the Output icon are described in the table below.
Output is activated
Note
When a macro changes the Output status, the Output icon on the map does not change.
Possible statuses of the Input icon are described in the table below.
Inputs (sensors) connected to Tibbo boards also show temperature/humidity values on the map.
Note
The date and time of the event in the file name are given based on the Windows locale settings.
Note
The file name can be up to 70 characters long.
When exporting a snapshot to PDF, you can also print the document immediately.
A digital watermark is added to exported snapshots and video. Watermark authentication is available in the corresponding
bundled utility (see the Digital Signature Verification Utility section).
Note
Exported videos and snapshots are digitally signed with the SHA-256 algorithm.
Titles containing a date and time stamp will be superimposed on exported video.
Note
Captions are stored in a separate video track and can be disabled in the player software if necessary.
Attention!
To instantly export a frame with standard settings, right-click the button.
3. Specify the folder to which you want to export the snapshot (1).
Note.
By default, snapshots are exported to the folder specified in the export settings (see the Configuring export
section).
If you change this folder, the new path to exported files will be kept in memory until the Client restarts.
4. Select the date and time for a snapshot (2). The default setting is the frame currently displayed in the viewing tile. If you are watching recorded video, the snapshot will contain the frame that was displayed at the moment you clicked the button.
Note.
If you are watching live video, the snapshot will contain the frame that was displayed at the moment you clicked the button. Date and time fields are not displayed.
Attention!
This setting may be overridden by the user role settings (see Creating and configuring roles).
8. To export unmasked frames from the Video Footage, the user must have appropriate access rights. To perform such an
export, check the View Masked Video box (6).
9. Enter the credentials of the supervisor who confirms the launch of export (7).
Note
The supervisor enters their password only if it is required by user rights settings. If no export confirmation is
required, these fields will not be displayed.
10. If exporting to PDF, you can immediately send the file to print (8). In this case, the snapshot is not saved to disk.
11. To export the snapshot, click the corresponding button (9).
Export begins. Progress is shown in the export panel (see the Viewing export progress section).
Export of the frame is now complete. The frame exported to JPG will also be placed on the Clipboard.
b. You can specify the range on the additional navigation panel in the same way, by clicking the buttons.
You cannot use the mouse to set the export range on the additional navigation panel.
2. Setting the export area and masks (see the Configuring export area and masks section).
3. Setting an export format, specifying where to save exported files, and adding comments.
b. If necessary, change the export path (1). By default, the file is exported to the folder specified in the settings (see
the Configuring export section). If you change this folder, the new path to exported files will be kept in memory
until the Client restarts.
c. You can set the start and end time of the exported episode in the calendar (2).
d. If necessary, specify a different file format for exporting the video (3). Videos can be exported in the following four formats: MP4, MKV, EXE, and AVI.
Note.
Video is exported in MKV format without recompression.
Video is exported in AVI format with recompression using the selected codec (see step 4). Export to AVI files may take longer because of recompression.
In addition, AVI export increases CPU load, especially when you export several files at once (see
Simultaneous export of video from multiple cameras, Exporting all event videos).
When video is exported in EXE format, a self-contained executable file is generated, containing video,
playback tools, and necessary codecs.
When exporting to EXE format, please note that Windows does not allow launching executable files larger than 4 GB.
e. If you want to export to an encrypted zip archive, set a password (4). If you are exporting an .exe file, you will need
to enter a password when you open the file.
Attention!
This setting may be overridden by the user role settings (see Creating and configuring roles).
f. If necessary, add comments for the export. The comments will be shown as captions when the exported video is
played (5).
g. If you export a video to MKV or AVI format, and you need to copy the Axxon Player utility (see The Axxon Player User Guide) to the same folder, check the corresponding box (6).
h. To export unmasked videos from the Video Footage, the user must have appropriate access rights. To perform
such an export, check the View Masked Video box (7).
i. Enter the credentials of the supervisor who confirms the launch of export (8).
Note
The supervisor enters their password only if it is required by user rights settings. If no export confirmation
is required, these fields will not be displayed.
Note.
You can stop export at any time by clicking the Stop button.
Note.
The duration of the exported file can be longer than the specified one, because a keyframe is not always at the beginning of the export interval.
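This behavior can be illustrated with a small sketch (Python, illustrative only; the actual export logic is internal to the Server). The export start is snapped back to the nearest keyframe at or before the requested position, so the resulting file covers at least the requested interval:

```python
def effective_export_range(start, end, keyframes):
    """Snap the export start back to the nearest keyframe at or
    before `start`; the end position stays as requested."""
    earlier = [k for k in keyframes if k <= start]
    snapped_start = max(earlier) if earlier else start
    return snapped_start, end

# With a keyframe every 4 s, a request for 10.5..20.0 actually starts
# at 8.0, so the exported clip is longer than the requested 9.5 s.
```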
You can instantly export video without needing to specify an export range. To do so, click the button in a viewing tile at any
time.
Note.
Then specify export settings, as described in the Standard video export section.
If export is performed from Live Video mode, the first frame of the exported video will be the moment when the button was
clicked. Export will continue for 10 minutes or until the Stop button on the export panel is clicked (see the Viewing export
progress section).
If export is performed from archive mode or archive analysis mode, the first frame of the exported video will be the position of
the bar on the timeline when the button was clicked. Export will continue for 10 minutes or until the Stop button on the
export panel is clicked (see the Viewing export progress section).
The length of the exported video clip will depend on the time of export and resources of the Server.
Note.
If export is performed from archive mode or archive analysis mode, you can pre-configure an export area and masks
(see the Configuring export area and masks section).
Note.
When you export to one file, the streams are written in parallel. To view exported video, use a player that allows playing multiple instances with a different stream in each (for example, VLC).
Video from each camera is exported to a separate file. Comments made during export are added to each exported video.
Note.
For each video you can pre-set an export area and masks (se the Configuring export area and masks section).
3. Click the + button in the middle of the thumbnail. The video clip will be added to the export batch.
Note.
When you export to one file, the streams are written in parallel. To view exported video, use a player that allows playing multiple instances with a different stream in each (for example, VLC).
This opens the Export window. Follow the same steps as with the standard export procedure (see Standard video export).
Note
We recommend exporting to the EXE format.
2. Reposition the corner points to specify the area that you want to export. To reposition the corner points, left-click a corner
point and drag it.
Configuration of the export area is now complete.
Masking allows you to hide complex or irrelevant areas of the frame so that they do not appear in an exported file. You can set an
unlimited number of masks.
To specify a mask:
After you add a mask, you can perform the following actions:
• Move corner points (left-click a corner point and drag it).
• Delete corner points (right-click a corner point).
• Delete a mask (click the button).
• Add a new mask.
Mask creation is now complete.
In the exported snapshot or video, the masked area is filled in with black.
To stop the export process, click the Stop button. In this case the file will be saved. The length of the exported fragment will
depend on the export time and resources of the Server.
To cancel export, click the Cancel button. In this case, the file is not saved.
If several export processes are active, you can switch between them by clicking the buttons. Between the buttons, the following information is displayed: number of the current export operation / total number of export operations (export progress for all operations).
Note.
If export is active, you cannot close the message.
If an error occurs during export, it will be indicated on the dynamic error panel (see Control in Live Video Mode).
A macro may include settings (see Create Macros) to display the control menu in the upper panel.
Those macros that are currently active (enabled as Always or within a time schedule, see Create Macros), are marked as (On).
To enable or disable macros you need to select them in the list. When you disable a macro, the mode is changed to Never.
Note
If you disable a macro, which is active within the time schedule, then this mode will be restored after reactivation.
Note
The status of the macro in the menu changes only a few seconds after a command is completed.
Event-driven macros can be triggered by using the corresponding buttons on the Dialog board (see Working with Dialog Board) or
by using hotkeys.
Note
Configuration of logging to external files is carried out through the log management utility (see the section Log
Management Utility).
Note
This feature is configured on the Permissions tab (see the section Creating and configuring roles).
To accept an error and delete it from the error panel, click the cross.
To accept all errors and close the error panel, click the cross on the right-hand side of the panel.
To jump to System Log (see The System Log), and open error messages, click the button.
When you do this, a window appears which can be used to search, view, and export system log events.
Note
The time period is a mandatory filter, while the event type and key phrase are optional.
Note
The date format is DD-MM-YYYY and the time format is HH:MM:SS.XXX.
Note
By default, the event search period is defined as the past 24 hours.
Attention!
You can use OR and AND logical operators when searching data in the system log.
- to search with the OR logical operator, separate the words with the symbol "|";
- to search with the AND logical operator, separate the words with a space.
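One plausible reading of these rules can be sketched in Python. This is illustrative only; the actual system log search runs server-side, and the operator precedence (OR between space-separated AND groups) is an assumption:

```python
def matches(query: str, text: str) -> bool:
    """Check `text` against a key-phrase query: "|" separates OR
    alternatives; within an alternative, space-separated words are
    combined with AND. Case-insensitive; hypothetical semantics."""
    haystack = text.lower()
    for alternative in query.lower().split("|"):
        words = alternative.split()
        # All words of at least one alternative must appear in the text.
        if words and all(word in haystack for word in words):
            return True
    return False
```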
Note
Select this check box to update events automatically with no need to search again.
Note
Events in the table are sorted by the date they were registered, beginning with the most recent one.
Date & time: the date and time the event was recorded in the system, in the format DD.MM.YYYY HH:MM:SS.
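The stated column format maps directly onto a standard date-parsing pattern; a minimal Python sketch (the sample timestamp is invented for illustration):

```python
from datetime import datetime

# DD.MM.YYYY HH:MM:SS, as shown in the Date & time column.
EVENT_FORMAT = "%d.%m.%Y %H:%M:%S"

event_time = datetime.strptime("05.03.2024 14:30:15", EVENT_FORMAT)
```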
The search results table may be more than one page. To navigate through a table which is more than one page, use the following
buttons (2):
1. Back Goes back to the previous page of the table.
2. Next Goes to the next page of the table.
Note
Once the Axxon Next VMS is installed, the log may show a Table end violation error. This is part of the installation
routine and not a bug.
When you do this, the standard Windows “Save as” dialog box appears, in which you can save the search results as a file with a .txt (text) or .csv (comma-separated values) extension.
Note
Archive viewing can be triggered by events coming from cameras, inputs (sensors) and outputs (relays). To make it
work, I/O must be linked to a particular camera (see The Input Object, The Output Object).
The system will now switch to archive mode and fetch the video of the selected event.
7. Digital zooming.
8. Working with bookmarks.
9. Viewing Camera and Archive Statistics.
8.11.2 Hardware and software requirements for the Web Client operation
The Web Client operates correctly with the latest versions of Google Chrome, Firefox and Microsoft Edge browsers.
Note
Since the Web Client uses no third-party technologies, it may also work in other browsers; however, in this case we cannot guarantee stable operation.
Attention!
No support for Safari and Internet Explorer is provided in the current version.
To monitor 16 Full HD* camera videos on a single browser tab, you need at least an Intel Core i3 CPU and 1 GB of RAM.
* conditions are:
Attention!
Opera browser supports Web-client starting from version 15.
In the Windows OS, the Web-client for Safari browser is not supported.
Connecting the web and mobile Clients to the Server behind NAT
Attention!
The Server URL is case-sensitive. You have to type in the URL using the exact case of characters specified in the settings (see Configuring the web server).
Attention!
If the web server is properly configured (see Configuring the web server), then a secure HTTPS connection is automatically established.
3. Enter a user name and password for connecting to the Axxon Next web server.
Attention!
After 5 successive failed authorization attempts, the user is blocked for 10 minutes.
To switch between users, press the Change User link in the upper-right corner; another authentication will follow.
Note
The name of the current user is indicated near this point.
Attention!
The web client offers layouts available to each particular user.
You can create and edit layouts using the Axxon Next Client (see Configuring Layouts).
The number of cameras within a layout is not limited.
To display the list of available layouts, hover the mouse over the layouts panel.
You can also search layouts. To do it, follow the steps below:
1. Hover the mouse cursor over the Layouts panel.
2. Enter a layout name or its fragment.
A search bar appears, and the panel displays layouts matching your search criteria.
2. By default, switching to another tab or collapsing the browser window stops the transmission of video streams. To
preserve the transmission, you can clear the corresponding parameter (1).
Attention!
This setting is common for all Web Client users in a particular browser.
3. If your layout contains several camera windows, only I-frames (key frames) will be displayed for H.264 cameras by default.
Selecting a particular camera sets its window to display all frames. If you need to display all frames in H.264 video
regardless of the camera status, activate the Show all frames option (2). This setting is common for all Web Client users
in a particular browser.
4. By default, fast archive playback uses the H.264 codec. If required, you can use the MJPEG codec by setting the Use MJPEG for fast archive playback parameter (3). This setting is common for all Web Client users in a particular browser.
5. If you need to position the retrieved video in the center of the screen, activate the Scroll Selected search result to the
center option (4). This setting is common for all Web Client users in a particular browser.
6. Set the default interval for Time Slice (5, see Types of Archive search available via the Web Client).
7. By default, the Cameras panel does not hide itself after you select a device. If you need to hide the panel, deselect the
Manually open and close checkbox (6). This setting is common for all Web Client users in a particular browser.
8. When you select a camera in Cameras Panel, the layout featuring this camera's window opens. If there are several such
layouts, then the one with fewer cells is selected. To expand camera view to full screen, disable Open selected cam in
the layout (7).
• A list.
• Thumbnails.
By default, the panel includes all available cameras. You can create a list of selected cameras. Do the following:
1. Hover the mouse cursor over the camera icon.
2. Click to add the camera to the list. The asterisk becomes filled. Another click on the asterisk excludes the camera from the list.
Attention!
The list of selected cameras is common for all web client users in a particular browser.
Click Favorites to show selected cameras only. Click the same button once more to show all available cameras.
Additionally, you can bookmark cameras by preparing a list in an Excel file. To do it, follow the steps below:
1. Create an Excel file containing two columns: id for camera IDs and name for camera names.
Attention!
After successful loading, only cameras listed in the file will remain bookmarked.
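The required two-column structure can be sketched as follows. This example uses CSV purely to illustrate the id/name layout with invented camera entries; the Web Client itself expects an Excel workbook with these two columns, as described in step 1 above:

```python
import csv
import io

# Hypothetical camera entries; ids and names are invented for illustration.
cameras = [
    {"id": "1", "name": "Entrance"},
    {"id": "2", "name": "Parking"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["id", "name"])
writer.writeheader()       # first row holds the column names: id, name
writer.writerows(cameras)  # one row per bookmarked camera
```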
Note
When you add new cameras to your server configuration, they automatically appear in the Web client (no page
reloading needed).
To set a group of cameras to be displayed, click the Default button and select the required group.
Configuring Layouts
To create and edit layouts in the Web Client, click to access the corresponding menu.
There are a number of things you need to consider when dealing with layouts in Web Client:
1. In Web Client, you cannot edit/share/delete layouts created in the main Client. You can only copy layouts.
2. Layouts created in Web Client will not be available in the main Client.
If the panel does not fit all layouts, click the button in the upper right corner to display a list of all available layouts.
2. You can add new columns or rows of cells using arrows on the layout boundaries (similar to the main Client, see Adding
new cells to a layout).
Note
To delete a column or a row of empty cells, click Clean Up.
Note
When editing a layout, the buttons to undo and redo the last action are available.
3. You can resize the cells using arrows on the cell boundaries (similar to the main Client, see Resizing cells).
4. Adding video cameras to the cells. To do this, drag & drop a camera from the Cameras panel (see Searching for video
cameras in the web client).
5. Select the default stream for each video camera from the Quality list. Use the buttons at the top of the screen to select
quality for all camera layouts.
6. Click Save.
Note
To undo the layout creation, click Cancel.
To remove the default layout, click and select Do not use by default.
Layout copying
Share Layouts
The Web Client supports playback of the following video formats: MJPEG, H.264, H.265. Any other formats are re-coded to
MJPEG by the Server.
Attention!
H.265 playback is possible only in the Edge browser with hardware acceleration.
You can play back video in the web client with either of two players: jpeg and mp4. If your browser supports the mp4 format, the mp4 player is used. Otherwise, your videos will be played back via the jpeg player.
Attention!
If your layout contains several camera windows, only I-frames (key frames) may be displayed in H.264 streams,
depending on settings (see Web Client Configuration). For the selected camera, each frame is displayed.
In MPEG videos, each frame is displayed.
Note
When you use Internet Explorer to toggle frames reception mode between 'I-frames only' and 'each frame' in the
camera window, short video dropouts may occur. In such a case, other browsers will show the most recent I-frame.
If your camera is capable of multi-streaming, the lowest resolution stream will be displayed by default.
To select a stream in the web client, do as follows:
1. Click the parameters of the current stream.
Note
Video stream settings are not memorized while switching between layouts.
Click or double click the image for full screen view. To exit full screen mode, click again the button, or press Esc.
The following actions can be performed using the PTZ device control panel:
1. Use presets.
2. Adjust optical zoom and positioning speed of the video camera.
3. Modify the horizontal and vertical tilt angle of the video camera.
8.11.9.1 Controlling a PTZ camera through the web client by using presets
To go to a preset, select the relevant line in the list of presets.
8.11.9.2 Changing the optical zoom of a PTZ camera in the web client
To change the optical zoom of a PTZ unit, use the buttons in the zoom group.
– increase the image (zoom in)
– reduce the image (zoom out)
– field for displaying the speed at which the camera changes the zoom scale
8.11.9.3 Changing the positioning speed of a PTZ camera in the web client
To change the positioning speed of a PTZ camera, use the buttons in the Speed group.
The arrow direction indicates the direction in which the camera lens will be moved when the arrow is clicked.
button.
Note
Video Footage opened by default is specified as Default Archive in settings (see Binding a camera to an archive).
3. The archive navigation panel is then displayed, with the following interface features:
a. Timeline. Archive navigation via the timeline in the web client is the same as when working in the Axxon Next client
(see Navigating Using the Timeline).
Attention!
You cannot resize the timeline in the web client. By default, the timeline displays the current date's
recordings. You can switch to another date using the position selection panel (see 3c).
Note
Similarly to the regular Client (see The Timeline), alarms are indicated on the timeline as flags, and
comments as icons.
b. Playback control panel. Archive navigation via the playback panel in the web client is the same as when working in
the Axxon Next client (see Navigating Using the Playback Panel).
Attention!
Fast reverse playback in H.264 format will not be smooth because of the use of I-frames.
c. Archive position selection panel. The archive position selection panel is opened by left-clicking the date above the
timeline.
4. To select an Archive, do as follows:
a. click the name of the current Archive;
2. To set the playback position to the current time and date, click the Today button and go to step 6 (4).
Note
The days for which video footage is available are shown in a lighter shade.
5. Use the Hours, Minutes, and Seconds sliders to set the time.
Attention!
If you select an item from Cameras panel, the search conditions will not be reset (see Searching for video cameras in the
web client).
If a search was run more than once, and the user did not exit Archive Search mode during that time, it is possible to switch
between search results.
Attention!
You can load only JPEG images.
7. LPN search.
8. TimeSlice.
Note
The default interval is set in the Web client's settings (see Web Client Configuration).
9. Events search.
Archive search interface and search parameters pane are identical to those in the standalone Client software (see Video
surveillance in Archive Search mode).
You can also build a Heat Map with the Web Client (see Building a Heat Map).
Attention!
To build a Heat Map, you need at least one source of metadata (for example, an Object Tracker).
Heat Maps are useful for evaluation of motion intensity within the scene and determining common trajectories of moving
objects.
How to build a Heat Map:
1. Proceed to the Archive search.
Use the dedicated slider to adjust the transparency of the heat map.
8.11.12.3 Simultaneous search in multiple camera Video Footage via the web Client
You can use the Web Client for multiple camera Video Footage searching by:
• facial recognition events;
• ANPR events;
• detection tool triggering events;
• TimeSlice.
To simultaneously search multiple cameras' Video Footage, do the following:
1. Proceed to the Video Footage search (see Archive search through the web client).
2. Set the search criteria.
3. Open Cameras panel and select the required devices (see Searching for video cameras in the web client).
Note
To select all cameras, select the appropriate box on the bottom of the page.
To build a report after you finished searching, click in the bottom part of the screen.
Attention!
The report is limited to the first 60 retrieved videos.
Note
You cannot monitor alarms that went off before you selected the camera/layout.
When an alarm goes off, an alarm panel appears (like in the standalone client software, see Alarms Panel).
The system will now switch to Archive (video footage) viewing mode and show the video of the selected event.
Attention!
You can play back audio in mp4 format only.
Note
Audio playback in web browsers is supported in Windows 8 and higher OS versions.
Note
You cannot play back audio from multiple cameras simultaneously.
button.
Note
For PTZ units, you can zoom in by using the buttons in the zoom group
To export a snapshot / frame, click the button while viewing video. A JPG file will be exported to a folder specified in the
Server settings.
To export video, do as follows:
1. Switch to Archive mode (see Viewing video archives through the web client).
2. On the timeline, set the timeline indicator to the location that matches the beginning of the export interval. Click the
button. Set the indicator to the location that matches the end of the export interval. Click the button.
3. Click the button.
4. Select the video format (1). You can export videos to avi, mkv, mp4 and exe formats.
5. If you are exporting a video in .avi format, select the compression level (2).
4 – minimum compression, maximum file size;
6 – maximum compression, minimum file size.
6. If necessary, add comments for the export (3). The comments will be shown as captions when the exported video is
played.
7. Click the Export button (4).
The progress bar is displayed on the drop-down panel as in the Client (see Viewing export progress).
To view camera statistics, click in the top right corner, and select Statistics Panel.
Note
Loading statistics from a large number of cameras may take some time. The loading process is visualized by a progress indicator.
On the Cameras tab, the following parameters are displayed for each video stream of each camera:
• resolution;
• frame rate;
• bit rate;
• compression format.
If the bitrate exceeds the expected value, and there is more than 1 Mbit per megapixel, the stream is marked with the first warning icon.
If the bitrate exceeds the expected value, and there is more than 2 Mbit per megapixel, the stream is marked with the second warning icon.
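One way to express this rule in code (Python, illustrative only; the separate "expected value" comparison and the icon names are assumptions based on the description above):

```python
def bitrate_per_megapixel(bitrate_mbit, width, height):
    """Mbit/s of stream bitrate per megapixel of resolution."""
    return bitrate_mbit / (width * height / 1_000_000)

def stream_mark(bitrate_mbit, width, height):
    """Return which warning mark (if any) a stream would receive."""
    ratio = bitrate_per_megapixel(bitrate_mbit, width, height)
    if ratio > 2:
        return "second warning"
    if ratio > 1:
        return "first warning"
    return None
```

For example, a 1920x1080 (about 2.07 megapixel) stream at 3 Mbit/s carries roughly 1.45 Mbit per megapixel and would receive the first mark.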
On the Archives tab, the following information is displayed for each Archive:
• name;
• number of linked cameras;
• cumulative bit rate of linked cameras.
Note
The indicator in the upper right corner displays the AxxonNext Web Client and Server version numbers.
To return to the previous page, click Back in the left upper corner of the screen.
To work with bookmarks, click the button in the top right corner of the Web client window, and select the Archive
Bookmarks option.
Attention!
You can access only the bookmarks on the visible part of the footage archive (see Configuring access restrictions to
older footage).
The system does not display bookmarks related to currently re-recorded part of the archive.
You can:
1. Edit a bookmark.
2. Delete a bookmark.
3. Un-protect a protected video.
4. Delete a protected video.
Use the search bar to locate a necessary bookmark.
3. If necessary, you can alter the time interval and/or a text comment.
4. Click Save Bookmark.
9 Description of utilities
Note.
The executable TrayTool.exe is located at <Axxon Next installation folder>\AxxonNext\bin.
With the Axxon Next Tray Tool utility, you can quickly start the following applications from the notification area:
• Client,
• Activation Utility,
• Axxon Support Tool,
• Configuration Backup and Restore utility,
• Self-checking Service,
• Server rebooting.
Note
The product activation utility program file LicenseTool.exe is located in the folder <Directory where Axxon Next is
installed>\AxxonSoft\AxxonSmart\bin\
Then you must select the name of one of the Axxon Domain servers to which the license file will be applied (the file is applied to
all Axxon Domain servers launched at the moment of activation) and connect to the system, under an administrator's user name
and password, to continue the activation process.
Note
To activate Axxon Next, connect to a Server in the Axxon domain. Otherwise, an error message appears.
To activate Axxon Next, please refer to the document titled Activation Guide, which presents step-by-step instructions on activating, updating, and upgrading Axxon Next.
It is also recommended that you use the prompts displayed in the product activation utility's dialog boxes.
Note
The Support.exe utility is located in the folder <Axxon Next installation directory>\AxxonNext\Support
Note
The Support.exe utility requires administrator rights to run.
Process descriptions:
• AXXON.VMDA: process responsible for the metadata database; writes metadata and searches the archive.
• AXXON.Notification: process for managing events in the system and creating a database of these events.
• AXXON.FileBrowser: process that provides access to the file system and information about server files.
• AXXON.MiscMMSS: process that plays back audio on the server audio card.
Note
Selecting the Show info on all system's processes check box enables viewing of all processes running on the
computer.
9.3.5 Collecting Data on the Configuration of Servers and Clients Using the Support.exe Utility
To collect data using the Support.exe utility, perform the following:
1. Launch the Support.exe utility (see the section Launching and Closing the Utility).
2. By default, a report includes data about already launched Axxon Next processes. To exclude this data from reports,
deselect the checkbox (1).
3. Select the corresponding checkbox to include a backup copy of events database in reports (2).
4. By default, a report includes information about Windows security system. To exclude this data from reports, deselect the
checkbox (3).
5. Select the corresponding checkbox to include self-diagnostics service information (see Self-diagnostics service) in reports (4).
6. Click the Next button (5).
The data collection process will begin. The table that displays the progress of data collection includes two columns: Step
and Status. In the Step column, a brief description of the stage of information collection is displayed. In the Status
column, a progress indicator and the time spent on executing the stage are displayed.
8. A window containing information about the generated archive support_[date]_[time].7z will then appear. You can
access the folder containing this archive by clicking the Open directory with file button.
Note
The archive is located in the folder <System disk>:\Documents and Settings\<Current User>\My Documents if you're using Windows XP, or in the folder <System disk>:\Users\<Current User>\Documents if you're using Windows Vista.
9. Send an email with the attached support_[date]_[time].7z archive to the ITV technical support department.
Note
The log management utility is located in the folder <System disk>:\Program Files\Common Files\AxxonSoft\LogRotate
To close the log management utility, click the Cancel button (accessible in both tabs of the utility).
1. In the Archive location field (1), enter the complete path to the directory to which the event logs should be moved after
archiving.
2. In the Archive logs every...hours field (2), enter the interval for event log archiving, in hours.
3. In the Log archive depth group, set the following parameters:
a. In the Storage time field (3), indicate the maximum retention time in days of a log in the archive, after which the
log is deleted.
b. In the Size field (4), indicate the maximum size of the archive, above which the oldest logs are deleted from the
archive.
Note
Archive disk space restrictions take priority over log retention time restrictions. For example, if the archive size exceeds the maximum value, the oldest logs are deleted automatically even if their retention time has not expired.
Note
If it is not necessary to impose any limitations on log retention period and/or size, clear the corresponding check
boxes in the Log archive depth (3-4)
1. Select the desired logging level of the client (Axxon Next Client) and the server (Axxon Next Server) (1).
Note
If you change the logging level of a Server, the server will be restarted.
Note
If the Axxon Next VMS is installed in the Failover Server and Client configuration, you can log in as either a
Client or a Supervisor.
• Warning: low level of detail; only system warnings and system errors are logged.
• Info: low level of detail; logs informational messages, system warnings, and system errors.
• Debug: medium level of detail; logs debugging events, informational messages, system warnings, and system errors.
2. If you need to include GUI exceptions into the logs, select the corresponding check box (2).
3. Click OK (3) to save changes.
Configuration of logging levels is complete.
Note
Value 0 - no information is logged.
Note
The utility executable file WatermarkCheck.exe is also located in the folder <Directory where Axxon Next is
installed>\Axxon Next\bin\.
To check a digital signature, click the Open file button and select the file of the exported snapshot or video.
If the digital signature is valid, the utility will show the message: Signature check: OK!
If it is not valid, the utility will show the message: Signature check: Invalid signature!
Note
During verification of a digital signature, the thumbnail of a snapshot is shown in the utility window. Videos cannot be previewed during the verification process.
Attention!
The backup and restore utility may be applied to both the local configuration of a selected Server (including video
cameras, archives, detection tools, event sources, logging levels) and the global configuration of the Axxon domain
(users, maps, layouts, etc.).
This utility can also be used to change the name of the local Server.
Note.
The BackupTool.exe executable is located at <Axxon Next installation folder>\AxxonNext\bin\.
After loading is complete, the main page of the Backup and Restore Utility is shown.
A window then opens, displaying a list of available restore points and their respective creation times, with a description of what
was changed.
Note.
If multiple changes were made in a configuration but the Apply button was clicked only once, only one restore point is
created in the list.
In the list, select the restore point to which you want to roll back. To continue, click the Next button (2).
Note.
The Empty configuration restore point corresponds to the state when the system was first created.
Further steps are the same as for rolling back the local configuration (see Roll back the local configuration to a selected restore
point).
Attention!
We recommend backing up your configuration after any major change to the system.
1. On the main page of the Backup and Restore Utility, set the switch to the Backup position.
Note
For each copy of the backup configuration, a separate folder is created. The folder name contains the date and
time of the backup copy in the following format: YYYYMMDDHHMMSS. The default time zone is UTC+0.
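The folder-naming scheme can be reproduced with date; the -u flag gives the UTC+0 time zone the utility uses by default:

```shell
# Build a folder name in the YYYYMMDDHHMMSS format used for backups (UTC+0).
STAMP=$(date -u +%Y%m%d%H%M%S)
echo "backup folder name: $STAMP"
```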
3. In the other field (2), select servers for creating the backup. You can select multiple servers. To select all servers, click the
Select all button. Start the backup process by clicking the Next button.
Attention!
To successfully restore a configuration, please ensure that the current Server name is exactly the same as the Server
name in the backup configuration.
To restore a configuration:
1. On the main page of the Backup and Restore Utility, set the switch to the Restore position.
2. In the field (1), specify the file containing the configuration backup.
3. After the file is opened, the servers on the current domain are displayed in the other field (2). You can select a server in the
list only if it is on the domain and the open file contains the corresponding backup. To start restoration, click the Next
button.
Progress information is shown in the following window.
Attention!
After this command is completed, the server is excluded from Axxon domain. You cannot access recorded video
(Archive). All custom layouts, maps, automatic rules and macros are deleted.
1. Connect to the Server that requires a name change (see Starting and quitting BackupTool.exe).
4. Run the ProtocolLicenser.exe executable file. The Read protocol from service window opens.
5. In the Log file name field, specify the full path to the folder where the file with the gathered information is to be saved. By
default, the file is stored in the folder to which the archive with the utility is unpacked.
6. If the POS-terminal is connected to the computer via COM-port, specify connection parameters in the COM tab.
7. If the POS-terminal is connected to the computer via Ethernet, specify connection parameters for TCP or UDP protocol in
the corresponding tab.
8. Click Start to run log collection.
9. Start using the POS terminal, i.e., issue receipts. It is highly recommended to perform all available operations, including
Cancel, Return, etc.
The progress of data gathering is displayed in the Status progress bar.
Important!
If the log file of the POS terminal is to be processed by the software, provide AxxonSoft with a description of the
protocol. This information is available from the POS terminal manufacturer.
Attention!
For correct operation of the application, you have to remove the corresponding archive volume in Axxon Next without
removing the archive files (see Deleting and formatting archive volumes).
The utility is located in <Axxon Next installation folder>\bin\.
Attention!
Run the command line as Administrator.
Note
To launch the utility on Linux OS, run the following command:
ngprun start_app vfs_tools, followed by the required arguments.
To open the argument list, run the vfs_tools --help command.
Parameter Description
--volume Archive path. The basic parameter must always be present in the query.
For example: vfs_format.exe --volume D:\archiveAntiqueWhite.afs
(for the archive volume as a file) or
vfs_format.exe --volume D:\
(for the archive volume as a disk)
--fill Populating an archive with multiple copies of video footage from another archive.
The system fills up a destination archive with multiple copies of a source archive; for easier timeline handling, each new copy is written
with a 1-minute offset.
For example: vfs_format.exe --volume S:\FILEONE.afs --fill G:\
--cache-to-memory Copy the archive to RAM before copying it to a destination archive. Use with the --fill parameter.
This parameter is valid only for archives that can fit into RAM.
For instance: vfs_format.exe --volume S:\FILEONE.afs --fill G:\ --cache-to-memory
--dump Collect service information about the archive volume in a TXT or XML file.
For example: vfs_format.exe --volume D:\archiveAntiqueWhite.afs --dump C:\DumpArc.txt
--expand Specify the new size of the archive volume in sectors. By default, the size of one sector is 4 MB, unless the --format parameter was applied.
This option is relevant only for the archive volume as a file.
For example: vfs_format.exe --volume D:\archiveAntiqueWhite.afs --expand 128
--size Specify the new size of the archive volume in megabytes. This option is relevant only for the archive volume as a file.
For example: vfs_format.exe --volume D:\archiveAntiqueWhite.afs --size 4096
--format Split the volume of the archive into sectors of the specified size (in megabytes).
For example: vfs_format.exe --volume D:\archiveAntiqueWhite.afs --format 16
--copy Copy archive volumes. Specify the path and name of the new archive file. If the archive volume is copied as a disk, create a
large enough partition; if the partition is smaller, only the most recent records are copied.
For example: vfs_format.exe --volume D:\archiveAntiqueWhite.afs --copy C:\NewArc.afs
--skip-bad-block Skip bad sectors when copying the archive volume. This parameter is used only together with --copy.
For example: vfs_format.exe --volume D:\archiveAliceBlue.afs --copy C:\NewArc.afs --skip-bad-block
--modify-corrupted-flag Enable/disable re-indexing of the archive volume: 1 - enable re-indexing, 0 - disable.
For example: vfs_format.exe --volume D:\archiveAliceBlue.afs --modify-corrupted-flag 1
--build-meta Launch the process of metadata generation for an archive volume (including timeline markers and video footage size per channel).
For example: vfs_format.exe --volume D:\ --build-meta
Important! Processing large archives may take a significant amount of time.
--log-level Sets the logging level for this action. Available values:
0 - OFF
10 - ERROR
20 - WARN
40 - INFO
50 - DEBUG
60 - TRACE
100 - ALL
To log error messages, you must add a path for the log file at the end of each query.
For example: vfs_format.exe --volume S:\FILEONE.afs --fill G:\ --log-level=100 > S:\log.txt
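Since --expand takes a size in sectors and a sector defaults to 4 MB, the resulting volume size is easy to compute. The helper below is my own illustration, not part of vfs_tools:

```shell
# Convert a sector count (default 4 MB sectors) into megabytes.
sectors_to_mb() {
    echo $(( $1 * 4 ))
}
sectors_to_mb 128    # the "--expand 128" example above: 512 MB
```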
1. Set the external IP address or DNS name of the router if the Server is located behind the NAT (1).
Note
You may set multiple interfaces using a comma-separated list, such as: "IP Address 1 or DNS Name 1, IP Address 2
or DNS Name 2".
For example: 88.78.12.33, ExampleAxxon.ddns.net.
2. Set the port range for operation of the Axxon Next Server (2-3). To do this, specify the beginning of the range and the
number of ports. The number of ports should not be lower than 20.
Attention!
Within an Axxon Domain, the port ranges of Servers should not overlap.
Note
The number of ports that you select affects the scalability of the system. Keep the following in mind when
specifying the number of ports:
- 6 ports is a minimum system requirement for machines with Axxon Next.
- In a 32-bit configuration, for every 32 cameras, 6 ports are required (for multistreaming cameras). In a 64-bit
configuration, for any number of cameras, 6 ports are required.
- 2 ports are required to write to the archive.
- To use the Object Tracking database (track recording), 1 port is required.
- To use basic detection tools, 2 ports are required.
- To use Scene Analytics, 2 ports are required.
- To use E-mail (via SMTP), SMS, or Server audio notifications, 1 port is required.
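The per-feature counts in the note can be totalled to size the port range. The helper below is an illustration, not an Axxon Next tool, and assumes the 64-bit case, where the camera count does not add to the base 6 ports:

```shell
# Minimum port count for a 64-bit Server: 6 base ports plus the
# per-feature requirements listed above (each flag is 0 or 1).
# Arguments: archive tracking basic_detection scene_analytics notifications
required_ports() {
    echo $(( 6 + $1 * 2 + $2 + $3 * 2 + $4 * 2 + $5 ))
}
required_ports 1 1 1 1 1    # everything enabled: 6+2+1+2+2+1 = 14
```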
3. Restrict the visibility of Servers from various networks in the Servers list during the Axxon Next setup (4). Possible values:
a. "0.0.0.0/0" - Servers from all networks will be visible.
b. "10.0.1.23/32,192.168.0.7/32" - Only Servers from specified networks will be visible.
c. "127.0.0.1" - Only Servers from the local network will be visible.
After you save the settings, the Server will be restarted.
10 Appendices
Detection zone – the area of a video image processed by a detection tool.
Interface cable - cable used to connect two or more devices together for data transfer.
Interface object - a system object used for interaction between the user and software (data input/output).
Client - designation for a personal computer on which Axxon Next software is installed (or will be installed) as a Client.
Designation for the graphical shell of the Axxon Next software package.
Slideshow – automatic switching of user layouts, or of viewing tiles in a single layout if working with standard layouts.
Licensing - regulating and setting the terms for usage of AxxonSoft software modules.
Detection zone – 1. the area of a video image processed by a detection tool.
2. a tool which allows the user to mark out an area of the video image that is not to be processed by a detection tool.
Microphone – 1. a source of audio signals.
2. a system object used to manage the parameters of audio signal reception.
Video surveillance monitor – an interface object used to manage the user interfaces of the Axxon Next software, e.g., layouts,
viewing tiles, various panels and context menus, etc.
Viewing tile - interface object displaying the video stream coming from a certain video camera and enabling control of that video
camera.
Dial panel – panel (part of the PTZ control panel) used to dial a preset.
Archive navigation panel – all interface objects used to work with an archive, e.g., timeline, list of alarm events, etc.
Control panel – panel made up of tabs accessible to the user, used to navigate from one group of interface objects to another.
Playback control panel – panel containing buttons to control playback of video recordings: Play, Pause, Go to next video
recording, etc.
PTZ control panel – all interface objects used to control a certain PTZ device.
Layout control ribbon – panel containing tools to create, edit, and manage layouts.
PTZ device – a system object displaying the properties of an installed PTZ camera device.
Note
Also used to designate a physical device
The PTZ subsystem encompasses all the tools that provide for remote control of a PTZ device and the lens of a video camera.
The analytics subsystem encompasses all the tools that provide for automatic analysis of incoming video and audio data.
The Forensic Search in archive subsystem is a set of tools for searching video recordings in the archive by using video image
metadata.
The Output subsystem encompasses all the tools that provide for the triggering of an execution device connected to the
embedded Output port of a video camera or IP server when a detection tool (including one which processes the
embedded Input of a video camera or IP server) is triggered.
The notification subsystem encompasses all the tools that provide for notification of the user about events which have occurred
in the system.
Event registration subsystem – all the tools that provide for the collection and processing of data about system events and its
storage on media.
Pre-alarm recording is the period of pre-event recording that will be added to the beginning of an alarm event recording.
Preset – preprogrammed positioning of a PTZ device.
Software package – all software and hardware tools used together to build a security system.
Software module – a program or functionally complete component of a program used to perform a specific functional task
(perform a user function).
On page:
Note
Disabling the firewall during installation can cause another problem: see No signal from video cameras and failure to
connect to other servers.
Installation failed because the Universal C Runtime is not installed. Please run
Windows Update and install all required Windows updates(KB2999226). You can download
the UCRT separately from here: 'https://support.microsoft.com/en-us/kb/2999226'
10.2.1.6 An error occurred while installing on Windows with the language pack Norsk (bokmål)
Installing the Axxon Next server on Windows with the language pack Norsk (bokmål) is not possible due to incompatibility with
PostgreSQL.
Install the Norsk (Nynorsk) language pack instead.
Launching the Axxon Next software program with client logging enabled can take a long time when the ESET NOD32 Antivirus 4
Real-time file system protection mode is on.
To solve this problem, add the Axxon Next installation folder and the folder containing the client logs (<Letter of system disk>:
\Users\<User>\AppData\Local\AxxonSoft\AxxonNext\logs) to the list of exceptions in ESET NOD32 Antivirus 4.
On page:
• All video cameras or archives stop working once the license maximum is reached
• No signal from video cameras and failure to connect to other servers
• Incorrect display of Client interface elements
• Server error on Windows Server 2012
• Emergency shutdown of the Client on Windows 8.1
• Error creating new archives even when license restriction on total size is observed
• The Axxon Next VMS operation along with Windows Defender software
• Upper panel display problem
• High CPU load during OpenGL software emulation
• Performance of Axxon Next when working with NetLimiter 2
• Exported videos' playback in Movies and TV application
10.2.3.1 All video cameras or archives stop working once the license maximum is reached
If the activation key allows the use of fewer video cameras than are currently used in the system, all video cameras will
cease to function. To resume operation, remove the objects corresponding to the excess video cameras and restart the
Server.
Note
Restart the Server through the Start menu as follows:
1. All Programs -> Axxon Next -> Shut Down Server
2. All Programs -> Axxon Next -> Start Server
Similarly, if an activation key allows a smaller total archive size than the current one, reduce the archive size to the
permitted amount and then restart the Server.
10.2.3.2 No signal from video cameras and failure to connect to other servers
If the Windows Firewall (or firewall of other manufacturers) was disabled during installation of Axxon Next, Axxon Next services
and applications will not be automatically added to the list of firewall exceptions.
After you turn on the firewall, you may see no signal from cameras in both main and web Client, and no possibility to connect to
other Servers.
To solve this problem, add the following applications to the firewall exceptions: Apphost.exe, NetHost.exe, AxxonNext.exe, and
LicenceTool.exe.
Note
If ESET NOD32 Smart Security 6 anti-virus software is used, disable Personal firewall
10.2.3.6 Error creating new archives even when license restriction on total size is observed
If the user creates new archives while deleting some existing archives at the same time (that is, without applying changes
in between), creation of archives may be forbidden even if the total archive size does not exceed the license restriction.
Note
This happens because, when verifying the license restrictions, the size of the created archives is calculated based on the
total size at the time changes were last applied.
To regain the ability to create new archives in such situations, the user must first delete unnecessary archives and apply changes.
10.2.3.7 The Axxon Next VMS operation along with Windows Defender software
If Windows Defender is installed on your system, problems may occur while accessing and saving data to archive files,
and MomentQuest searches may be significantly slowed down.
As a workaround, you can either disable Windows Defender or add AppHost.exe, AppHostSvc.exe and vfs_format.exe to the
exceptions list.
10.3 Appendix 3. Accounts created when the Axxon Next Server is
installed
The Windows OS will create two accounts when the Axxon Next software package is installed using a Client and Server type of
configuration.
1. An account with administrator rights which is used by the Axxon Next file browser. The name of this account is set during
installation of Axxon Next (see Installation).
For Axxon Next to function correctly, this account must have Windows administrator rights. If the account is a domain
user account, you must also add the account to the Users and Power Users groups.
Note
The file browser helps to navigate through the Server's file system (such as when choosing disks for log volumes)
The account can also be used for configuring access rights to the hard disk.
2. Axxonpostgres – an account under which the log data database service is started.
Note
A log database (Postgres) is used for storing system events
On page:
• NOD32
• ESET Smart Security
• AVG
• DrWeb
• McAfee SAAS
Depending on the anti-virus software that you use, when you install, start, and use Axxon Next, your anti-virus software may ask
for permission for the software components to perform Internet access.
It is recommended that you allow these components to do so for proper functioning of the application.
Recommendations for specific anti-virus programs are given below.
10.4.1 NOD32
When using NOD32 Antivirus, it is strongly recommended to either disable the Web Access Protection service or to add the IP
addresses of IP cameras to the list of exceptions for anti-virus scanning.
See also section Possible Errors During Start-Up.
10.4.3 AVG
When using AVG on a configuration with many video cameras, it is strongly recommended to add the IP addresses of the IP
cameras to the list of exceptions. Otherwise, the avgsa.exe process may place a severe load on the CPU.
This action can be performed only in the paid version of AVG.
When installing Axxon Next, allow the NetHost.exe and ngpsh.exe processes to run.
10.4.4 DrWeb
If you use DrWeb anti-virus software, perform the following actions before installing Axxon Next:
1. Disable automatic start of the DrWeb firewall.
2. In the proactive protection settings, select the option to use custom settings and enable the following options:
Note
It is not necessary to disable this component if your configuration contains a single server and a local client.
10.5 Appendix 5. Using CH VM-Desktop USB multifunction controllers with Axxon Next
CH VM-Desktop USB multifunction controllers offer a range of input controls:
• Three-axis joystick for PTZ and digital zoom control (J1 and J2)
• Jog/shuttle dial (J3 and J4)
• 27 keys
• 10 number keys
• * key
• # key
• Programmable keys C1 to C13 (keys cannot be remapped in Axxon Next)
• Two additional keys (B1 and B2)
The multifunction controller can be used to operate Axxon Next on the active monitor.
Note
The active monitor is either the main one (if all additional monitors are inactive or not connected) or an additional one,
if the additional monitor is active (see Configuring Interfaces on a Multi-Monitor Computer).
The active monitor can be selected only by using the mouse. If no mouse is present, the multifunction controller will
work on the main monitor only.
J4 – Live Video mode: iris control for the selected camera.
2.3 – Live Video mode: focus control for the selected camera.
B1, B2 – Open calendar: cycle through calendar elements (equivalent to pressing the Tab key):
days of month - hours - minutes - seconds - AM/PM (B2 key) and in reverse (B1 key).
J3 – Open calendar: navigate by days and set hours, minutes, seconds, and AM/PM.
Global
General
Navigation. Up Up
Digit 1 D1
Digit 9 D9
Layouts
* by ID
Arm Ctrl + A
Disarm Ctrl + D
PTZ
Move up NumPad8
Zoom in NumPad9
Archive
Show calendar F8
TimeCompressor
Alarms
Attention!
These configuration backups are incompatible with those created with the Backup and Configuration Recovery utility,
and vice versa.
Where
Parameter Description
local Add it if you need to save the local configuration of this Server: all created objects along with
their parameters, links, and change histories.
shared Add it if you need to save the global configuration of an Axxon domain: users, layouts, etc.
tickets Add it if you need to save the Axxon domain structure.
An example:
Where
Parameter Description
deleteLocal Add it if you need to clear the local configuration of objects not present in the backup copy.
deleteShared Add it if you need to clear the global configuration of objects not present in the backup copy.
An example:
Note
It is recommended to use Ubuntu 18.
Note
If distributions based on Debian 10 are used, it may be necessary to install additional packages:
apt-get install wget
apt-get install gnupg
Attention!
It is not allowed to simultaneously install a regular Server and a FailOver Server.
During the installation, the installer will request the name of the Axxon-domain for the Axxon Next server. If you leave this
field blank, you can specify it on the client at the first connection.
By default, the files are downloaded to the /var/cache/apt/archives folder. If it is necessary to download the files to
another folder, run the following command:
Attention!
If you plan to install the downloaded packages on another computer with no Internet, then the OS
version on that computer should match the OS version on which the packages were downloaded.
where
user - user name;
Downloads - folder with downloaded packages.
Package examples...
Example of packages required to install the server side:
axxon-drivers-pack_3.46_amd64.deb
axxon-detector-pack_3.1_amd64.deb
axxon-next-core_4.5.0_amd64.deb
axxon-next_4.5.0_all.deb
axxon-drivers-pack_3.46_amd64.deb
axxon-detector-pack_3.1_amd64.deb
axxon-next-core_4.5.0_amd64.deb
axxon-next-raft_4.5.0_amd64.deb
Example of packages required for the Server and Client installation type:
axxon-drivers-pack_3.46_amd64.deb
axxon-detector-pack_3.1_amd64.deb
axxon-next_4.5.0_all.deb
axxon-next-core_4.5.0_amd64.deb
axxon-next-client-bin_4.5.0_amd64.deb
axxon-next-client_4.5.0_all.deb
Attention!
The folder should not contain other packages.
It is not allowed to simultaneously install the normal Server and the Failover Server.
During the installation, the installer will request the name of the Axxon-domain for the Axxon Next server. If you leave this
field blank, you can later specify it on the Client at the first connection.
4. If necessary, you can change the Server configuration after installation (see Axxon Next Server configuration change).
Installation is complete.
You can install the Detector Pack and Driver Pack from the repository. To do this, sequentially execute the following commands:
Attention!
The Detector Pack and Driver Pack must be installed from the repository before installing the main part of Axxon Next.
If the Detector Pack and Driver Pack were installed from the repository, it is necessary to remove them from the folder
with downloaded installation packages.
Attention!
The Client installation is possible only after a Server of the same version has been installed (see Installing the Axxon Next
Server in Linux OS).
b. If you use Ubuntu 20.04, install the mono-complete package from the stretch repository:
d. During installation, you will need to specify the maximum size of the log files in megabytes and the logging level.
Note
The specified value can be changed (see Configuring the Axxon Next Client logging parameters on Linux
OS). To do this, execute the command:
sudo dpkg-reconfigure axxon-next-client
After the installation is complete, the Client icon will be displayed in the application menu.
Attention!
It is not recommended to run the Client as root user or with root permissions.
By default, at the first Client start, the OS interface language will be used. To change the language of the Client's interface at the
first start, do the following:
1. Run the following command.
Note
This should be configured separately for each OS user.
Attention!
After you run the Client, select the interface language for the next launches in the settings (see Selecting the
interface language).
/home/USER/.local/share/AxxonSoft/
2. Server configuration:
/opt/AxxonSoft/AxxonNext/
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
[ui]
tls = False
cd axxonnext.docker
hg pull -u
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
b. for armhf architecture:
$ sudo add-apt-repository \
"deb [arch=armhf] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
7. Check the current version of docker-compose and upgrade it to the latest version, if necessary.
docker-compose --version
the following:
[ui]
tls = False
cd axxonnext.docker
hg pull -u
sudo reboot
cd ~/axxonnext.docker/next
./axxon-next.sh build
./axxon-next.sh list
./axxon-next.sh list
./axxon-next.sh status
./axxon-next.sh stop
./axxon-next.sh support
./axxon-next.sh versions
To update the Axxon Next software from the folder, do the following:
1. Go to the folder with the downloaded packages.
2. Execute the following command:
sudo dpkg -i *
Attention!
After the update is completed, it is necessary to check the access rights of the archive file and the folder where it is
stored.
The ngp user should be specified as the owner of both the file and the folder.
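The ownership check can be scripted. In this sketch a scratch file stands in for the archive and the current user stands in for ngp, so it runs without privileges:

```shell
# Verify (and, if needed, fix) the owner of an archive file.
# EXPECTED_USER would be "ngp" on a real Server; here the current
# user is used so the sketch is runnable anywhere.
EXPECTED_USER=$(id -un)
ARCHIVE=$(mktemp)

owner=$(stat -c '%U' "$ARCHIVE" 2>/dev/null || stat -f '%Su' "$ARCHIVE")
if [ "$owner" != "$EXPECTED_USER" ]; then
    chown "$EXPECTED_USER" "$ARCHIVE"    # needs root for a real ngp-owned file
fi
echo "owner: $owner"
```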
Attention!
The Server in Linux OS must belong to some Axxon-domain.
3. Press the Enter button several times until the Server node name change window is displayed.
Server stop:
Server restart:
sudo su
fdisk -l
where
• /dev/sda - the first physical disk;
• /dev/sda1 - the first partition of the first physical disk;
• /dev/sda2 - the second partition of the first physical disk;
• /dev/sdb - the second physical disk.
To delete a disk partition, do the following:
1. Go to the disk on which you need to delete a partition.
fdisk /dev/sdb
fdisk /dev/sdb
+5G
Attention!
At this point you cannot change the archive size.
ls -lt /home/
If the result contains a line with the ngp user permissions, you can now create an archive as a file in this
directory.
Note
For example, the disk size is 60 GB, and a 10 GB archive is created on it, but it is only 1 GB full.
When you try to create a second archive on this disk, 59 GB of free space will be displayed, not 50 GB.
2. The full archive file size is not guaranteed to be available if other files consume the free disk space.
Note
Due to features of the ext and xfs file systems, it is possible to create archives whose total size exceeds the free disk
space.
Attention!
In such cases, it is necessary for the system administrator to control the free disk space.
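A minimal free-space check the administrator might schedule looks like this; the mount point and the 1 GB threshold are assumptions:

```shell
# Warn when free space on the archive's mount point drops below 1 GB.
MOUNT_POINT=/
THRESHOLD_KB=$((1024 * 1024))    # 1 GB expressed in KB

free_kb=$(df -Pk "$MOUNT_POINT" | awk 'NR==2 {print $4}')
if [ "$free_kb" -lt "$THRESHOLD_KB" ]; then
    echo "WARNING: less than 1 GB free on $MOUNT_POINT"
fi
echo "free space: ${free_kb} KB"
```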
2. Enter the Axxon-domain ID to which the Server should be added. To skip this step, press the Enter button.
7. Restrict the visibility of Servers from various networks in the Servers list during the Axxon Next setup. Possible values:
a. "0.0.0.0/0" - Servers from all networks will be visible.
b. "10.0.1.23/32,192.168.0.7/32" - Only Servers from specified networks will be visible.
c. "127.0.0.1" - Only Servers from the local network will be visible.
8. Set the external address of the router if the Server is located behind NAT (1). Use the following settings format: "IP
Address 1 or DNS Name 1, IP Address 2 or DNS Name 2"
where
a. IP-address - NAS address,
b. common - shared network folder,
c. user, password - NAS access credentials,
d. uid, gid - id of the user and ngp group; they can be obtained using the following command:
id ngp
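The numeric ids for the uid and gid mount options can be extracted as follows; the current user stands in for ngp so the sketch runs anywhere:

```shell
# Obtain the numeric uid/gid to substitute into the mount options.
TARGET_USER=$(id -un)    # on a real Server this would be: ngp
uid=$(id -u "$TARGET_USER")
gid=$(id -g "$TARGET_USER")
echo "uid=$uid,gid=$gid"
```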
5. In the Axxon Next metadata storage settings, specify the /media/netdir path (see Configuring storage of the system log
and metadata).
After the OS is restarted, the mounted network folder will no longer be available. To configure the network folder to be mounted
automatically at OS startup, do the following:
1. Open the /etc/fstab file.
3. Save the file.
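A hedged example of the line such an /etc/fstab entry might contain for a CIFS network folder; every value (address, share name, credentials, ids) is a placeholder, not a tested configuration:

```
//192.168.0.10/common  /media/netdir  cifs  username=user,password=pass,uid=998,gid=998  0  0
```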
3. Set the maximum size of the log in megabytes. When the specified size is reached, a new log is created.
3. Get the AxxonArchiveSearchll.v2c key file from the technical support department.
4. Move the received file to the folder with the unpacked archive.
5. Go to the folder with the unpacked archive and run the following script:
cd aksusbd-8.13.1
sudo ./dinst
As a result, the installed key will be displayed at http://127.0.0.1:1947/ in the Sentinel Admin Control Center web
application.
where
• /opt/AxxonSoft/AxxonNext/bin/support - the utility location directory;
• /home/user - the user’s home directory.
10.9.1 Consolidating the Servers from different networks into Axxon domain
To consolidate the Servers from different networks separated by routers into Axxon domain, do the following:
1. Set the port range for operation and the router's public IP address on each Server that is to be included in the Axxon
domain (see Network settings utility).
Attention!
Within an Axxon domain, the port ranges of Servers in the same network should not overlap.
By default, the base port is 20111, and the port range is 20111-20210. Hence, it is necessary to set and forward
the port range 20211-20310 for the second Server, the port range 20311-20410 for the third Server, and so on.
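The numbering scheme in the note reduces to simple arithmetic; the helper below is illustrative only:

```shell
# Port range of the N-th Server: base port 20111, 100 ports per Server.
server_port_range() {
    start=$(( 20111 + ($1 - 1) * 100 ))
    echo "$start-$(( start + 99 ))"
}
server_port_range 2    # the second Server: 20211-20310
```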
Attention!
The router's public IP address should be static.
2. For each router, forward the specified ports of the Server that is located behind this router.
3. Connect the Client to the Server from any network (see Starting an Axxon Next Client, Connecting the Client to the Server
behind NAT).
4. Manually add other Servers to the Axxon domain using the public IP address of the corresponding router and the external
base port of the Server (see Adding a Server to an existing Axxon Domain).
Example:
To combine Servers into one Axxon domain in this configuration, do the following:
1. On Server 1, set the port range 20111-20210 and the public IP address of router 1.
2. On Server 2, set the port range 20211-20310 and the public IP address of router 1.
3. On Server 3, set the port range 20111-20210 and the public IP address of router 2.
4. On Server 4, set the port range 20211-20310 and the public IP address of router 2.
5. On router 1, configure the forwarding of:
a. the router ports 20111-20210 to the internal IP address of Server 1 and ports 20111-20210;
b. the router ports 20211-20310 to the internal IP address of Server 2 and ports 20211-20310.
6. On router 2, configure the forwarding of:
a. the router ports 20111-20210 to the internal IP address of Server 3 and ports 20111-20210;
b. the router ports 20211-20310 to the internal IP address of Server 4 and ports 20211-20310.
7. Connect to Server 1.
8. Manually add Server 2 to the Axxon domain using the local IP address of Server 2 and port 20211.
9. Manually add Server 3 to the Axxon domain using the public IP address of router 2 and port 20111.
10. Manually add Server 4 to the Axxon domain using the public IP address of router 2 and port 20211.
Attention!
The router's public IP address should be static.
2. On the router, forward the specified Server ports and the NativeBL 20109 port.
3. Launch the Client and specify the router's external IP address and the Server's external base port in the connection
settings (see Starting an Axxon Next Client).
Attention!
When connecting the Client from an external network, only those Servers that have access to the external
network will be available in the Axxon domain configuration (see Consolidating the Servers from different
networks into Axxon domain).
Attention!
In some cases, in security systems with a complex architecture (NAT, VPN), the Client may not receive events
from the Server. To fix this, create the system variable NGP_POLL_EVENTS and set its value to 1 (see Appendix
10. Creating system variable).
Attention!
In a failover system, it is not possible to connect to the node that is behind NAT (see Connecting to a Node and
Configuring of an Axxon domain).
Example:
3. On Server 3, set the port range 20311-20410 and the public IP address of the router.
4. On the router, configure the forwarding of:
a. the router ports 20111-20210 to the internal IP address of Server 1 and ports 20111-20210;
b. the router ports 20211-20310 to the internal IP address of Server 2 and ports 20211-20310;
c. the router ports 20311-20410 to the internal IP address of Server 3 and ports 20311-20410.
5. When connecting the Client, enter the router's public IP address and the port of the required Server: port 20111 for
Server 1, port 20211 for Server 2, or port 20311 for Server 3.
10.9.3 Connecting the web and mobile Clients to the Server behind NAT
To connect web and mobile Clients to the Server behind NAT, do the following:
1. On the router, forward the specified port of the web server (see Configuring the web server). The default port is 80.
Note
To access all Servers of the Axxon domain, it is enough to forward any single port of the web server.
2. When connecting using a web browser or mobile Client, use the router's public IP address and the forwarded port of
the web server (see Starting the web client).
Example:
To connect the web Client to the Axxon domain in this configuration, do the following:
1. On the router, forward the port 80 to the internal IP address of Server 1 and port 80.
2. When starting the web Client, use the router's public IP address and port 80.
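The resulting address is simply the router's public IP plus the forwarded web server port. A trivial sketch (the helper name and the IP address are illustrative, not part of Axxon Next):

```python
def web_client_url(public_ip, port=80):
    """Build the address to enter in the browser or mobile Client (sketch)."""
    return f"http://{public_ip}:{port}"

# Hypothetical public address; substitute your router's public IP:
web_client_url("203.0.113.10")
```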
10.9.4 Audio transmission from the Client's microphone behind NAT to the Server's or
camera's loudspeaker
To transmit audio from the Client's microphone behind NAT to the Server's or camera's loudspeaker, do the following:
1. On the Client, specify a unique port range and the public IP address of the external router. To do this:
a. Add an NGP_CLIENT_PORT_BASE environment variable and set its value to the first port number in the range
(see Appendix 10. Creating system variable).
b. Add an NGP_CLIENT_PORT_SPAN environment variable and set its value to the number of ports in the range.
Attention!
We recommend using at least 100 ports.
Attention!
If you have multiple Clients within your network, their port ranges must not overlap.
c. Add an NGP_ALT_ADDR environment variable and set its value to the public IP address of the external router.
2. Configure forwarding of the specified ports on both the external and the internal router.
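Before creating the three system variables, the chosen values can be sanity-checked: the range must fit within valid port numbers, contain at least 100 ports as recommended above, and NGP_ALT_ADDR must be a valid IP address. A sketch of such a check (the function itself and the lower port bound are our assumptions; only the variable names and the 100-port recommendation come from this guide):

```python
import ipaddress

def validate_client_nat_settings(port_base, port_span, alt_addr):
    """Check the values before creating the system variables (sketch)."""
    assert port_span >= 100, "the guide recommends at least 100 ports"
    assert 1024 <= port_base <= 65535 - port_span, "range must fit in valid ports"
    ipaddress.ip_address(alt_addr)  # raises ValueError for an invalid address
    return {
        "NGP_CLIENT_PORT_BASE": str(port_base),
        "NGP_CLIENT_PORT_SPAN": str(port_span),
        "NGP_ALT_ADDR": alt_addr,
    }

# Port values from the example below; the IP address is hypothetical:
validate_client_nat_settings(20555, 1000, "203.0.113.10")
```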
Example:
To transmit audio from microphones connected to Client 1 and Client 2 to the Server's or camera's loudspeaker, do the
following:
1. On Client 1, set the port range 20555-21554 and the public IP address of your Internet provider's router.
2. On Client 2, set the port range 21555-22554 and the public IP address of your Internet provider's router.
3. Configure port forwarding on the internal router:
a. ports 20555-21554 to the internal IP address of Client 1 and ports 20555-21554;
b. ports 21555-22554 to the internal IP address of Client 2 and ports 21555-22554.
4. Configure port forwarding on the provider's router:
a. router ports 20555-21554 to the IP address of the internal router and ports 20555-21554;
b. router ports 21555-22554 to the IP address of the internal router and ports 21555-22554.
5. Click the OK button.
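The double-NAT forwarding in this example can be traced as a chain: provider's router, then the internal router, then the Client that owns the port. A minimal sketch of that lookup (all IP addresses are illustrative RFC 5737 / RFC 1918 values, not from the guide):

```python
# Sketch of the double-NAT forwarding chain from the example above.
# Each table maps a public port range to the next hop's address.

PROVIDER_ROUTER = {
    range(20555, 21555): "192.168.0.2",   # -> internal router
    range(21555, 22555): "192.168.0.2",
}
INTERNAL_ROUTER = {
    range(20555, 21555): "192.168.1.10",  # -> Client 1
    range(21555, 22555): "192.168.1.11",  # -> Client 2
}

def resolve(table, port):
    """Return the next hop for a port, or None if the port is not forwarded."""
    for ports, target in table.items():
        if port in ports:
            return target
    return None

def final_destination(port):
    """Follow a public port through both routers to the Client that owns it."""
    if resolve(PROVIDER_ROUTER, port) is None:
        return None
    return resolve(INTERNAL_ROUTER, port)
```

Note that `range(20555, 21555)` covers ports 20555 through 21554 inclusive, matching the range assigned to Client 1.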