Releases: intel/caffe
Caffe_v1.1.6
- Optimize the inference performance of the first-layer INT8 convolution
- Support multi-instance inference with weight sharing
- Add Windows support for single-node training and inference
- Fix bugs in INT8 fully-connected (FC) layers, LARS, SSD detection, and the calibration tool
 
Caffe_v1.1.5
- Support memory optimization for inference
- Enable the INT8 InnerProduct layer and its calibration support (see the sketch after this list)
- Release the full INT8 model of ResNet-50 v1.0
- Fix in-place concat for INT8 inference with batch size 1
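As background for the INT8 InnerProduct item above, here is a minimal NumPy sketch of how an INT8 fully-connected (InnerProduct) computation works in principle: weights and activations are quantized to int8 with per-tensor scales, the product is accumulated in int32, and the result is dequantized by the combined scale. The shapes and the max-abs scale choice are illustrative assumptions, not Intel Caffe's actual implementation.

```python
import numpy as np

def quantize_s8(x, scale):
    """Symmetric int8 quantization: real value x ~= q / scale."""
    return np.clip(np.rint(x * scale), -128, 127).astype(np.int8)

# Illustrative FP32 activations and weights for a tiny InnerProduct layer.
x = np.random.randn(4, 64).astype(np.float32)    # batch x input features
w = np.random.randn(32, 64).astype(np.float32)   # output x input features

# Per-tensor scales derived from the observed absolute maxima (an assumption).
x_scale = 127.0 / np.abs(x).max()
w_scale = 127.0 / np.abs(w).max()

xq = quantize_s8(x, x_scale)
wq = quantize_s8(w, w_scale)

# Integer matmul accumulated in int32, then dequantized by the combined scale.
acc = xq.astype(np.int32) @ wq.T.astype(np.int32)
y_int8_path = acc.astype(np.float32) / (x_scale * w_scale)

# Compare against the plain FP32 InnerProduct result.
y_fp32 = x @ w.T
print("max abs error:", np.abs(y_int8_path - y_fp32).max())
```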
 
Caffe_v1.1.4
- Enabled single-node VNet training and inference
- Enhanced full convolution calibration to support models with a customized data layer
- Enabled inference benchmarking scripts with multi-instance inference support
- Supported INT8 accuracy testing in the docker image
 
Caffe_v1.1.3
- Upgraded to MKLDNN v0.17
- Supported INT8 convolution with signed input
- Added support for more 3D layers
 
Caffe_v1.1.2a
- Features
  - Support multi-node inference
 
Caffe_v1.1.2
- Features
  - INT8 inference
    - Inference speed improved with the upgraded MKL-DNN library
    - In-place concat for latency improvement with batch size 1
    - Scale unification for concat for better performance, with support added in the calibration tool as well (see the sketch after this list)
  - FP32 inference
    - Performance of the DetectionOutput layer improved by ~3x
    - Added MKL-DNN 3D convolution support
  - Multi-node training
    - SSD-VGG16 multi-node training is supported
  - New models
    - Support training of the R-FCN object detection model
    - Support training of the YOLO-v2 object detection model
    - Support inference of the SSD-MobileNet object detection model
    - Added the SSD-VGG16 multi-node model that converges to SOTA
  - Build improvements
    - Fixed compiler warnings when building with GCC 7+
  - Misc
    - MKLML upgraded to mklml_lnx_2019.0.20180710
    - MKL-DNN upgraded to v0.16+ (4e333787e0d66a1dca1218e99a891d493dbc8ef1)
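To illustrate what the concat scale unification above refers to, here is a minimal NumPy sketch: two int8 tensors quantized with different scales cannot be concatenated byte-for-byte, so one input is requantized to a common ("unified") scale, after which the concat itself is a plain copy and the output carries a single scale. The tensors, scales, and the min-scale unification rule are illustrative assumptions, not the library's actual algorithm.

```python
import numpy as np

def quantize_s8(x, scale):
    return np.clip(np.rint(x * scale), -128, 127).astype(np.int8)

def requantize(q, old_scale, new_scale):
    """Rescale int8 values quantized with old_scale so they use new_scale."""
    return np.clip(np.rint(q.astype(np.float32) * (new_scale / old_scale)),
                   -128, 127).astype(np.int8)

# Two concat inputs with different dynamic ranges, hence different scales.
a = np.random.uniform(-1.0, 1.0, size=(1, 8)).astype(np.float32)
b = np.random.uniform(-4.0, 4.0, size=(1, 8)).astype(np.float32)
a_scale = 127.0 / np.abs(a).max()
b_scale = 127.0 / np.abs(b).max()
aq, bq = quantize_s8(a, a_scale), quantize_s8(b, b_scale)

# Unify on the smaller scale (the wider range) so neither input overflows;
# the concatenated output then carries a single scale factor.
unified = min(a_scale, b_scale)
cq = np.concatenate([requantize(aq, a_scale, unified),
                     requantize(bq, b_scale, unified)], axis=1)

print("dequantized concat:", cq.astype(np.float32) / unified)
print("fp32 reference:    ", np.concatenate([a, b], axis=1))
```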
- Known issues
  - INT8 inference accuracy drop for convolutions whose output channel count is not divisible by 16
  - FP32 training cannot reach SOTA accuracy with Winograd convolution
 
Caffe_v1.1.1a
- Features
  - Update the batch size for the benchmark scripts
- Bug fixes
  - Fix the docker image build target cpu-ubuntu
 
Caffe_v1.1.1
- Features
  - INT8 inference
    - Inference speed improved with the upgraded MKL-DNN library
    - Accuracy improved with channel-wise scaling factors, with support added in the calibration tool as well
  - Multi-node training
    - Better training scalability on 10GbE with prioritized communication in the gradient all-reduce
    - Support Python bindings for multi-node training in pycaffe
    - The default build now includes the multi-node training feature
  - Layer performance optimization: dilated convolution and softmax
  - Auxiliary scripts
    - Added scripts to parse the training log and plot loss trends (tools/extra/caffe_log_parser.py and tools/extra/plot_loss_trends.py); see the sketch at the end of this release's notes
    - Added a script to identify the batch size for optimal throughput given a model (scripts/obtain_optimal_batch_size.py)
    - Improved benchmark scripts to support Inception-V3 and VGG-16
  - New models
    - Support inference of the R-FCN object detection model
    - Added the Inception-V3 multi-node model that converges to SOTA
  - Build improvements
    - Merged PR#167 "Extended cmake install package script for MKL"
    - Fixed all ICC/GCC compiler warnings and enabled warnings as errors
    - Added build options to turn off each inference model optimization
    - Do not try to download MKL-DNN when there is no network connection
- Misc
  - MLSL upgraded to 2018-Preview
  - MKL-DNN upgraded to 464c268e544bae26f9b85a2acb9122c766a4c396
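The bundled scripts' command-line options are not listed here, so the following is a minimal standalone sketch of the same idea they implement: pull iteration/loss pairs out of a standard Caffe solver log and plot the loss trend. The log file name `train.log` and the output image name are assumptions.

```python
import re
import matplotlib.pyplot as plt

# Caffe solver output lines look like:
#   I0409 13:24:01.123456  1234 solver.cpp:243] Iteration 100, loss = 0.6931
LOSS_RE = re.compile(r"Iteration (\d+).*loss = ([0-9.eE+-]+)")

iterations, losses = [], []
with open("train.log") as f:              # hypothetical training log path
    for line in f:
        m = LOSS_RE.search(line)
        if m:
            iterations.append(int(m.group(1)))
            losses.append(float(m.group(2)))

plt.plot(iterations, losses)
plt.xlabel("iteration")
plt.ylabel("training loss")
plt.title("Loss trend parsed from the Caffe training log")
plt.savefig("loss_trend.png")
```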
 
Caffe_v1.1.0
- Features
  - Support INT8 inference; a calibration tool is provided to transform FP32 models into INT8 models (see the sketch after this list)
  - Support convolution and element-wise sum fusion, boosting inference performance (e.g., on ResNet-50)
  - Support SSD training and inference with the pure MKLDNN engine
  - Enhance the MSRA weight filler with a scale parameter
  - Support performance collection on a single node in the same way as multi-node
  - Set CPU_ONLY as the default in the CMake configuration
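As a rough illustration of what such a calibration step does (not the tool's actual algorithm or interface, which are not described here), the sketch below runs a few FP32 "calibration batches" through a stand-in layer, records the maximum absolute activation value, and derives a fixed int8 scale from it; at inference time activations are quantized with that calibrated scale. All names, shapes, and the max-abs calibration rule are assumptions.

```python
import numpy as np

def relu_layer(x, w):
    """Stand-in FP32 layer used only to produce activations for calibration."""
    return np.maximum(x @ w, 0.0)

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32)).astype(np.float32)

# --- Calibration: observe activation ranges on representative batches ---
max_abs = 0.0
for _ in range(10):                        # 10 calibration batches (assumption)
    x = rng.standard_normal((8, 64)).astype(np.float32)
    act = relu_layer(x, w)
    max_abs = max(max_abs, float(np.abs(act).max()))

act_scale = 127.0 / max_abs                # fixed scale stored with the INT8 model

# --- Inference: quantize activations with the calibrated scale ---
x = rng.standard_normal((8, 64)).astype(np.float32)
act = relu_layer(x, w)
act_q = np.clip(np.rint(act * act_scale), -128, 127).astype(np.int8)
act_dq = act_q.astype(np.float32) / act_scale

print("max quantization error:", np.abs(act_dq - act).max())
```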
 
- Bug fixes
  - Fix a correctness issue in layers with various engines
  - Sync the sampling bug fix 96175b from Wei Liu’s SSD branch
  - Fix a multi-node crash when running from pycaffe
  - Correct the link library of MLSL for multi-node builds
  - Fix a build issue with weight quantization
- Misc
  - Upgrade MKLML to 2018.0.1.20171227 and MKLDNN to v0.12
  - Update models for multi-node training
  - Enhance installation and benchmarking scripts