Intel® Deep Learning SDK Beta: Release Notes
9 December 2016
Revision History
This document is updated for the main releases of Intel® Deep Learning SDK.
Build Numbers
Beta release of Intel® Deep Learning SDK Training Tool build number: 1.0.520
Beta release of Intel® Deep Learning SDK Deployment Tool build number: 1.0.861
Customer Support
For technical support, including answers to questions not addressed in this document, visit the technical support forum,
FAQs, and other support information at: software.intel.com/en-us/deep-learning-sdk-support.
Product Description
The Intel® Deep Learning SDK is a set of tools for data scientists and software developers to develop, train, and deploy
deep learning solutions. The SDK encompasses a training tool and a deployment tool that can be used separately or
together in a complete deep learning workflow.
Training Tool
Easily prepare training data, design models, and train models with automated experiments and advanced
visualizations
Simplify the installation and usage of popular deep learning frameworks optimized for Intel® platforms
Deployment Tool
Optimize trained deep learning models through model compression and weight quantization, which are tailored to
end-point device characteristics
Deliver a unified API to integrate the inference with application logic
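To illustrate the weight-quantization idea mentioned above, the sketch below maps floating-point weights onto 8-bit integer levels and back. This is a generic linear-quantization example, not the SDK's actual implementation; all names are illustrative.

```python
# Illustrative sketch of linear (affine) weight quantization to 8 bits.
# Not the SDK's implementation; shown only to explain the general technique.

def quantize(weights, bits=8):
    """Map floats onto integer levels in [0, 2**bits - 1]."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # avoid a zero step size
    levels = [round((w - lo) / scale) for w in weights]
    return levels, scale, lo

def dequantize(levels, scale, lo):
    """Recover approximate floats from the integer levels."""
    return [q * scale + lo for q in levels]

weights = [0.1, -0.5, 0.75, 0.0]
levels, scale, zero = quantize(weights)
restored = dequantize(levels, scale, zero)
# Each restored weight lies within half a quantization step of the original.
```

Storing 8-bit levels plus a per-tensor scale and offset cuts model size roughly fourfold versus FP32 weights, at the cost of a bounded rounding error per weight.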
This document provides system requirements, installation instructions, issues and limitations, and legal information.
To learn more about this product, see:
New features listed in the New in this Release section below, or in the help.
Reference documentation listed in the Related Documentation section below.
Installation instructions can be found in the Installation Guide on the Intel® Deep Learning SDK documentation
webpage.
For more information, see the User Guide on the Intel® Deep Learning SDK documentation webpage.
Dashboard
The new dashboard screen helps users gain better control and visibility of the tool status:
"Quick links" provide easy navigation directly to the action screens (upload, dataset/model creation)
"Active jobs" presents all active actions inside the tool
The notifications and history area enables users to view past activities and actions.
Dataset creation
Option to select an archive from the completed uploads to use in the dataset creation.
Define test data as part of your training uploaded archive or from a new archive.
Review one of the pre-made templates to learn how to use the Caffe* Python code
Create your own notebook to practice, or customize your training flow to fit your needs
Use this feature to "compress" an existing model using L.R.A.
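If L.R.A. here refers to low-rank approximation (an assumption; the notes do not expand the abbreviation), the core idea is to replace a large weight matrix with the product of two smaller factors obtained from a truncated SVD. A minimal NumPy sketch of that technique:

```python
# Sketch of low-rank approximation via truncated SVD, assuming that is
# what "L.R.A." abbreviates; not the SDK's actual compression code.
import numpy as np

def low_rank_approx(W, k):
    """Keep only the top-k singular components of W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Storing U[:, :k] * s[:k] and Vt[:k] takes k*(m+n) numbers
    # instead of the m*n numbers of the original matrix.
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

W = np.arange(12.0).reshape(3, 4)   # this matrix happens to have rank 2
W2 = low_rank_approx(W, 2)          # rank-2 factors reproduce it exactly
```

For a layer whose weight matrix has rapidly decaying singular values, a small k keeps accuracy while shrinking storage and multiply-accumulate work.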
Inference Engine
Inference Engine is now optimized for Intel® Xeon® and Intel® Core™ Processors for FP32 inference on Linux*.
Supports fusion of Conv and ReLU.
Supports acceleration through parallelism and vectorization using the Intel® Math Kernel Library for Deep Neural
Networks (Intel® MKL-DNN).
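Conv+ReLU fusion means applying the activation as each convolution output is produced, rather than in a separate pass over memory. A pure-Python 1-D sketch of the idea follows; the Inference Engine itself performs this fusion inside optimized Intel® MKL-DNN primitives.

```python
# Conceptual sketch of Conv+ReLU fusion; illustrative only.

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in deep learning)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def conv_then_relu(signal, kernel):
    # Unfused: conv outputs are materialized, then re-read to apply ReLU.
    return [max(v, 0.0) for v in conv1d(signal, kernel)]

def fused_conv_relu(signal, kernel):
    # Fused: ReLU is applied as each output element is computed (one pass,
    # better cache locality, no intermediate buffer).
    k = len(kernel)
    return [max(sum(signal[i + j] * kernel[j] for j in range(k)), 0.0)
            for i in range(len(signal) - k + 1)]

signal = [1.0, -2.0, 3.0, -4.0, 5.0]
kernel = [0.5, -0.5]
```

Both variants compute identical results; the fused form simply avoids writing and re-reading the intermediate convolution output.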
Known Issues and Workarounds
Issue: Bootstrapping a machine on the Amazon cloud.
Workaround: After the AWS* machine is created, manually restrict SSH access to the machines so that it is open only
from the machine where the installer was launched.
Issue: A model with an error notification is opened immediately after creation.
Workaround: Minimize the model menu and re-open it. The model should appear with an "error" status. (The model
should not display any information on screen.)
External Dependencies
The Model Optimizer must be installed on a system with Intel® Distribution for Caffe* or Berkeley's main Caffe*
branch.
Supported Hardware
Training Tool
o Intel® Deep Learning SDK is optimized for Intel® Xeon® processors (formerly Intel® microarchitecture
code names Broadwell and Haswell) and Intel® Xeon Phi™ processors (formerly Knights Landing).
Deployment Tool
o Intel® Deep Learning SDK Inference Engine is optimized for all Intel® Xeon® processors and Intel®
Core™ processors.
Prerequisites
Training Tool
o The tool should be installed on a machine running a supported operating system, accessible through an
SSH connection, with root privileges to run the installation script on the Linux* machine.
o The tool's web user interface can be accessed from any computer with Google Chrome* browser version 50
or higher.
o The installer is supported on Microsoft Windows* and Apple macOS* computers (GUI installer) and on
Linux* (script installer).
Deployment Tool
o Intel® Distribution for Caffe* or Berkeley's main Caffe* branch.
Supported Topologies
All standard Image Classification Networks such as: AlexNet, GoogLeNet v1 and v2, VGG, ResNet
All standard Image Segmentation Networks such as: FCN
Supported Layers:
o Convolution
o Fully Connected
Acronym/Term: Description
DL: Deep Learning
Technical Preview: A package that has limited functionality and is not intended for production use.
A "Mission Critical Application" is any application in which failure of the Intel® Product could result, directly or indirectly, in
personal injury or death. SHOULD YOU PURCHASE OR USE INTEL®'S PRODUCTS FOR ANY SUCH MISSION
CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL® AND ITS SUBSIDIARIES,
SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH,
HARMLESS AGAINST ALL CLAIMS, COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES
ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR
DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL® OR
ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL®
PRODUCT OR ANY OF ITS PARTS.
Intel® may make changes to specifications and product descriptions at any time, without notice. Designers must not rely
on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel® reserves these
for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future
changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the
product to deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel® sales office or your distributor to obtain the latest specifications and before placing your product
order.
Copies of documents which have an order number and are referenced in this document, or other Intel® literature, may be
obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm