
update code to use latest caffe and cudnn #19

Open · wants to merge 787 commits into master
Conversation

@xizero00 commented Jul 22, 2017

Hi Tomas,
I have updated Caffe as of Jun 21, 2017 to support the latest cuDNN and new features.
Would you please merge it?
Thanks :)

Feng.

intelfx and others added 30 commits August 31, 2016 15:20
Although Caffe itself does not use OpenMP, explicit linking to OpenMP
is needed when one statically links against a BLAS library that uses
OpenMP internally and does not provide proper CMake imported targets
with proper dependencies (no BLAS library does this so far).
Rationale: these are duplicated in CMakeLists code, and they cannot be
removed from there because many definitions need to be exported to the
library clients. See issue #4625.
Benchmarking should not impact perf until timer is read
A bias/scaling can be applied wherever desired by defining the
respective layers, and `ScaleLayer` can handle both as a memory
optimization.
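As a sketch of the memory optimization described above (layer and blob names are illustrative, not from this PR), a separate bias layer can be folded into `ScaleLayer` via `bias_term` in the prototxt:

```protobuf
# Instead of separate Scale + Bias layers, one Scale layer handles both:
layer {
  name: "scale1"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"                      # in-place, saving a blob allocation
  scale_param { bias_term: true }   # ScaleLayer applies scale and bias
}
```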
Document that Ubuntu 16.04 Requires CUDA 8
Batch norm statistics are not learnable parameters subject to solver
updates, so they must be shielded from the solver. The `BatchNorm` layer now
masks its statistics itself by zeroing parameter learning rates
instead of relying on the layer definition.

N.B. declaring `param`s for batch norm layers is no longer allowed.
Automatically strip old batch norm layer definitions, including `param`
messages. The batch norm layer used to require manually masking its
state from the solver by setting `param { lr_mult: 0 }` messages for
each of its statistics; this is now handled automatically by the layer.
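The simplification above can be illustrated with a minimal prototxt sketch (layer and blob names are illustrative):

```protobuf
# Old style: each statistic blob had to be masked from the solver manually.
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  param { lr_mult: 0 }  # running mean
  param { lr_mult: 0 }  # running variance
  param { lr_mult: 0 }  # moving-average factor
}

# New style: the layer zeroes these learning rates itself,
# so no `param` messages are declared.
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
}
```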
[examples] Fixed typos in examples/cpp_classification/readme
Batch Norm: Further Documentation and Simplified Definition
fix layerSetUp of scale_layer to not add bias blob when already present
[TravisCI] google/protobuf renamed the 3.0 branch
fixes the broken glog link in yum_install.md, which currently returns a 404.
fix typo in pascal_multilabel_datalayers.py
shelhamer and others added 30 commits April 14, 2017 13:24
dev has diffused into the community from the original Caffe core
Test for python forward and backward with start and end layer
Explicit std::string to bp::object conversion
Handling destruction of empty Net objects
Rewrite crop layer GPU implementation
Downgrade boost requirement from 1.55 to 1.54
docs/debian guide: update compiler combination table
cmake: rename libproto.a -> libcaffeproto.a