
Commit

Merge pull request jikexueyuanwiki#1 from jikexueyuanwiki/master
Sync fork with origin
linbojin committed Nov 14, 2015
2 parents 2e3f98c + 04cc390 commit e2085c6
Showing 5 changed files with 94 additions and 148 deletions.
77 changes: 21 additions & 56 deletions README.md
@@ -40,6 +40,7 @@ PS: To discuss TensorFlow, you can join the "TensorFlow技术交流群" QQ group: 495115006
* Fork the main repository (<https://github.com/jikexueyuanwiki/tensorflow-zh>)
* Claim a chapter to translate (one chapter per request): find a chapter in the `README.md` below that no one has claimed yet, add (@your-github-username) next to it, and open a pull request against the main repository's `master` branch;
* Once that pull request is confirmed and merged into the main repository, your *claim* on the chapter is complete and you can start translating;
* Translate the md files referenced in README.md or TOC.md; please do not translate the index.md files inside the individual folders;
* Follow the *translation collaboration guidelines* while translating; when finished, submit a pull request to the main repository's `master` branch;
* After proofreading is complete, the work is merged from the main repository's `master` branch into the `publish` branch;
* When the whole translation is finished, PDF/ePub documents are generated and published on the Jikexueyuan (极客学院) Wiki platform;
@@ -61,83 +62,47 @@ PS: To discuss TensorFlow, you can join the "TensorFlow技术交流群" QQ group: 495115006

## Participants (ordered by claimed chapter)

### Translation & Proofreading

- Getting Started
  - [Introduction](SOURCE/get_started/introduction.md) Translation: ([@PFZheng](https://github.com/PFZheng)) Proofreading: ([@yangtze](https://github.com/sstruct))
  - [Download and Setup](SOURCE/get_started/os_setup.md) Translation: ([@PFZheng](https://github.com/PFZheng)) Proofreading: ([@yangtze](https://github.com/sstruct))
  - [Basic Usage](SOURCE/get_started/basic_usage.md) Translation: ([@PFZheng](https://github.com/PFZheng)) Proofreading: ([@yangtze](https://github.com/sstruct))
- Tutorials
  - [Overview](SOURCE/tutorials/overview.md) Translation: ([@PFZheng](https://github.com/PFZheng)) Proofreading: ([@ericxk](https://github.com/ericxk))
  - [MNIST For ML Beginners](SOURCE/tutorials/mnist_beginners.md) Translation: ([@Tony Jin](https://github.com/linbojin)) Proofreading: ([@ericxk](https://github.com/ericxk))
  - [Deep MNIST for Experts](SOURCE/tutorials/mnist_pros.md) Translation: ([@chenweican](https://github.com/chenweican))
  - [TensorFlow Mechanics 101](SOURCE/tutorials/mnist_tf.md) Translation: ([@bingjin](https://github.com/bingjin))
  - [Convolutional Neural Networks](SOURCE/tutorials/deep_cnn.md) Translation: ([@oskycar](https://github.com/oskycar)) Proofreading: ([@zhyhooo](https://github.com/zhyhooo))
  - [Vector Representations of Words](SOURCE/tutorials/word2vec.md) Translation: ([@xyang40](https://github.com/xyang40))
  - [Recurrent Neural Networks](SOURCE/tutorials/recurrent.md) Translation: ([@Warln](https://github.com/Warln))
  - [Mandelbrot Set](SOURCE/tutorials/mandelbrot.md) Translation: ([@ericxk](https://github.com/ericxk)) √
  - [Partial Differential Equations](SOURCE/tutorials/pdes.md)
  - [MNIST Data Download](SOURCE/tutorials/mnist_download.md) Translation: ([@JoyLiu](https://github.com/fengsehng))
- How-Tos
  - [Overview](SOURCE/how_tos/overview.md)
  - [Variables: Creation, Initialization, Saving, and Loading](SOURCE/how_tos/variables.md) Translation: ([@zhyhooo](https://github.com/zhyhooo))
  - [TensorBoard: Visualizing Learning](SOURCE/how_tos/summaries_and_tensorboard.md) Translation: ([@thylaco1eo](https://github.com/thylaco1eo))
  - [TensorBoard: Graph Visualization](SOURCE/how_tos/graph_viz.md)
  - [Reading Data](SOURCE/how_tos/reading_data.md)
  - [Threading and Queues](SOURCE/how_tos/threading_and_queues.md)
  - [Adding a New Op](SOURCE/how_tos/adding_an_op.md)
  - [Custom Data Readers](SOURCE/how_tos/new_data_formats.md)
  - [Using GPUs](SOURCE/how_tos/using_gpu.md) Translation: ([@lianghyv](https://github.com/lianghyv)) √
  - [Sharing Variables](SOURCE/how_tos/variable_scope.md)
- Resources
  - [Overview](SOURCE/resources/overview.md)
  - [BibTex Citation](SOURCE/resources/bib.md)
  - [Example Uses](SOURCE/resources/uses.md) Translation: ([@andyiac](https://github.com/andyiac))
  - [FAQ](SOURCE/resources/faq.md)
  - [Glossary](SOURCE/resources/glossary.md)
  - [Tensor Ranks, Shapes, and Types](SOURCE/resources/dims_types.md)


## Progress Log

- 2015-11-10: Google released and open-sourced its new artificial intelligence system TensorFlow; the Jikexueyuan (极客学院) Wiki launched this collaborative translation, created the GitHub repository, and drew up the collaboration guidelines



## Thanks for the Support
101 changes: 45 additions & 56 deletions SOURCE/how_tos/using_gpu.md
@@ -1,38 +1,33 @@
# Using GPUs <a class="md-anchor" id="AUTOGENERATED-using-gpus"></a>

## Supported devices <a class="md-anchor" id="AUTOGENERATED-supported-devices"></a>

On a typical system, there are multiple computing devices. In TensorFlow, the
supported device types are `CPU` and `GPU`. They are represented as
`strings`. For example:

* `"/cpu:0"`: The CPU of your machine.
* `"/gpu:0"`: The GPU of your machine, if you have one.
* `"/gpu:1"`: The second GPU of your machine, etc.

If a TensorFlow operation has both CPU and GPU implementations, the
GPU devices will be given priority when the operation is assigned to
a device. For example, `matmul` has both CPU and GPU kernels. On a
system with devices `cpu:0` and `gpu:0`, `gpu:0` will be selected to run
`matmul`.

## Logging Device placement <a class="md-anchor" id="AUTOGENERATED-logging-device-placement"></a>

To find out which devices your operations and tensors are assigned to, create
the session with `log_device_placement` configuration option set to `True`.

```python
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print sess.run(c)
```

You should see the following output:

```
Device mapping:
@@ -46,26 +41,24 @@ MatMul: /job:localhost/replica:0/task:0/gpu:0
```

## Manual device placement <a class="md-anchor" id="AUTOGENERATED-manual-device-placement"></a>

If you would like a particular operation to run on a device of your
choice instead of what's automatically selected for you, you can use
`with tf.device` to create a device context such that all the operations
within that context will have the same device assignment.

```python
# Creates a graph.
with tf.device('/cpu:0'):
  a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
  b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print sess.run(c)
```

You will see that now `a` and `b` are assigned to `cpu:0`.

```
Device mapping:
@@ -78,26 +71,24 @@ MatMul: /job:localhost/replica:0/task:0/gpu:0
[ 49. 64.]]
```

## Using a single GPU on a multi-GPU system <a class="md-anchor" id="AUTOGENERATED-using-a-single-gpu-on-a-multi-gpu-system"></a>

If you have more than one GPU in your system, the GPU with the lowest ID will be
selected by default. If you would like to run on a different GPU, you will need
to specify the preference explicitly:

```python
# Creates a graph.
with tf.device('/gpu:2'):
  a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
  b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
  c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print sess.run(c)
```

If the device you have specified does not exist, you will get
`InvalidArgumentError`:

```
InvalidArgumentError: Invalid argument: Cannot assign a device to node 'b':
@@ -106,33 +97,29 @@ Could not satisfy explicit device specification '/gpu:2'
values: 1 2 3...>, _device="/gpu:2"]()]]
```

If you would like TensorFlow to automatically choose an existing and
supported device to run the operations in case the specified one doesn't
exist, you can set `allow_soft_placement` to `True` in the configuration
option when creating the session.

```python
# Creates a graph.
with tf.device('/gpu:2'):
  a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
  b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
  c = tf.matmul(a, b)
# Creates a session with allow_soft_placement and log_device_placement set
# to True.
sess = tf.Session(config=tf.ConfigProto(
    allow_soft_placement=True, log_device_placement=True))
# Runs the op.
print sess.run(c)
```
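The fallback behavior can be pictured in plain Python. The helper below is a hypothetical sketch of the idea behind `allow_soft_placement`, not part of the TensorFlow API: given a preferred device string and the devices that actually exist, it falls back to the CPU when the preference cannot be satisfied.

```python
# Hypothetical sketch of soft-placement fallback logic; pick_device is an
# illustration only, not a TensorFlow API.
def pick_device(preferred, available, fallback='/cpu:0'):
    """Return `preferred` if it exists on this machine, else `fallback`."""
    return preferred if preferred in available else fallback

# A machine exposing one CPU and one GPU cannot satisfy '/gpu:2'.
devices = ['/cpu:0', '/gpu:0']
print(pick_device('/gpu:2', devices))  # -> /cpu:0
print(pick_device('/gpu:0', devices))  # -> /gpu:0
```

TensorFlow applies this kind of substitution per operation at graph-placement time; the sketch only captures the decision rule.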

## Using multiple GPUs <a class="md-anchor" id="AUTOGENERATED-using-multiple-gpus"></a>

If you would like to run TensorFlow on multiple GPUs, you can construct your
model in a multi-tower fashion where each tower is assigned to a different GPU.
For example:

```python
# Creates a graph.
c = []
for d in ['/gpu:2', '/gpu:3']:
  with tf.device(d):
@@ -141,13 +128,13 @@ for d in ['/gpu:2', '/gpu:3']:
    c.append(tf.matmul(a, b))
with tf.device('/cpu:0'):
  sum = tf.add_n(c)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print sess.run(sum)
```

You will see the following output.

```
Device mapping:
@@ -170,5 +157,7 @@ AddN: /job:localhost/replica:0/task:0/cpu:0
[ 98. 128.]]
```

The [cifar10 tutorial](../../tutorials/deep_cnn/index.md) is a good example
demonstrating how to do training with multiple GPUs.
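The tower loop above hard-codes its device strings; for a variable number of towers, a small round-robin helper keeps the loop generic. `tower_devices` below is a hypothetical illustration, not a TensorFlow utility:

```python
# Hypothetical helper for multi-tower construction: cycle through the
# available GPU ids, producing one device string per tower
# (an illustration only, not part of the TensorFlow API).
def tower_devices(num_towers, gpu_ids):
    """Assign each tower a GPU device string, round-robin over `gpu_ids`."""
    return ['/gpu:%d' % gpu_ids[i % len(gpu_ids)] for i in range(num_towers)]

# Four towers spread over GPUs 2 and 3, as in the example above.
print(tower_devices(4, [2, 3]))  # -> ['/gpu:2', '/gpu:3', '/gpu:2', '/gpu:3']
```

Each returned string can then be passed to `tf.device(d)` inside the tower loop, so adding a tower or a GPU only changes the arguments, not the loop body.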

> Original: [using_gpu](http://tensorflow.org/how_tos/using_gpu/index.md)
Translation: [@lianghyv](https://github.com/lianghyv) Proofreading: [](https://github.com/)
Binary file added SOURCE/images/mandelbrot_output.jpg
