[ja] cs-229-deep-learning #96
Conversation
The first draft is done. I would love for someone to have a look at it.
Thank you for your translation @taixhi! If you know a native speaker who could review your work, please feel free to spread the word about this PR.
I would like to check this ja version today, and will comment here.
Hi @taniokah, let me know if you have had a look at it :)
Here are my five cents. (From the MLT Slack.)
ja/cheatsheet-deep-learning.md (Outdated)

**3. Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.**

⟶ ニューラルネットワークとは複数の層を用いて組まれる数学モデルです。代表的なネットワークとして畳み込みと再帰型ニューラルネットワークが挙げられます。
According to Wikipedia [0], "Recurrent Neural Network" can be translated as either 回帰型ニューラルネットワーク or 再帰型ニューラルネットワーク, but the Wikipedia page also distinguishes the two, using 回帰型 for "Recurrent" and 再帰型 for "Recursive" neural networks. 回帰 translates to "regression" and 再帰 to "recursion".
But a quick search for "Recurrent Neural Network" on Japanese Google gives mixed results: some people use 再帰型 while others use 回帰型 for "Recurrent" neural networks. Is there any definitive source that explains why to use one over the other (other than Wikipedia)?
If not, to reduce confusion, just using katakana, リカレントニューラルネットワーク, might also be an option.
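Side note for anyone new to the material: item 3's "class of models built with layers" just means a composition of simple functions. A minimal NumPy sketch (the layer sizes and input values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # layer 1 parameters (sizes invented)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # layer 2 parameters

def forward(x):
    h = np.tanh(W1 @ x + b1)  # hidden layer followed by a Tanh non-linearity
    return W2 @ h + b2        # linear output layer

print(forward(np.array([1.0, 0.5, -0.2])))
```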
ja/cheatsheet-deep-learning.md (Outdated)

**7. where we note w, b, z the weight, bias and output respectively.**

⟶ この場合重み付けをw、バイアス項をb、出力をzとします。
この場合重み付けをw、バイアス項をb、出力をzとします。 → この場合重みをw、バイアス項をb、出力をzとします。
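For context, item 7 is just the affine output of a single neuron. A minimal NumPy sketch, with the values invented and the names w, b, z following the cheatsheet's notation:

```python
import numpy as np

x = np.array([0.5, -1.0, 2.0])  # inputs
w = np.array([0.1, 0.4, -0.2])  # weights (重み)
b = 0.3                         # bias term (バイアス項)

z = w @ x + b                   # output (出力): z = w^T x + b
print(z)                        # -0.45
```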
ja/cheatsheet-deep-learning.md (Outdated)

**9. [Sigmoid, Tanh, ReLU, Leaky ReLU]**

⟶ [Sigmoid(シグモイド関数), Tanh(双曲線関数), ReLU(ランプ関数), Leaky ReLU]
正規化線形ユニット (rectified linear unit, ReLU) [0]
ランプ関数 is by no means wrong, but it translates to "ramp function" and is not the preferred term in machine learning, I think.
漏洩ReLU = Leaky ReLU [0]
[0] https://ja.wikipedia.org/wiki/%E6%AD%A3%E8%A6%8F%E5%8C%96%E7%B7%9A%E5%BD%A2%E9%96%A2%E6%95%B0
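For readers following along, here is a quick sketch of the two activations under discussion (NumPy; the 0.01 negative slope is a common but not universal default for Leaky ReLU):

```python
import numpy as np

def relu(z):
    # 正規化線形ユニット: max(0, z)
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # 漏洩ReLU: a small slope alpha for negative inputs instead of zero
    return np.where(z > 0, z, alpha * z)

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))        # [0. 0. 3.]
print(leaky_relu(z))  # [-0.02  0.    3.  ]
```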
ja/cheatsheet-deep-learning.md (Outdated)

**18. Step 4: Use the gradients to update the weights of the network.**

⟶ 傾斜を使い誤差が小さくなるように重みを調整する。
勾配 = gradient [0]
While you do use the gradients to update the weights so as to reduce the error, the English sentence only says to use the gradients to update the weights of the network. So a more direct translation would probably be:
「勾配を使いネットワークの重みを更新する。」 (or 調整 instead of 更新)
**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.**

⟶ これは通常、学習率を高め、初期値への依存性を減らすことを目的でFully Connected層と畳み込み層の後、非線形化を行う前に行われます。
非線形化を行う層の前に行われます。
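For concreteness, a sketch of the layer ordering being discussed, written with PyTorch-style modules (the sizes are arbitrary; batch normalization sits after the fully connected layer and before the non-linearity, as item 23 says):

```python
import torch.nn as nn

# fully connected layer -> batch normalization -> non-linearity
block = nn.Sequential(
    nn.Linear(128, 64),  # 全結合層 (fully connected)
    nn.BatchNorm1d(64),  # batch normalization, applied before the activation
    nn.ReLU(),           # 非線形化層 (non-linearity)
)
```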
**24. Recurrent Neural Networks**
See first comment about: 回帰型ニューラルネットワーク
ja/cheatsheet-deep-learning.md (Outdated)

**38. Policy ― A policy π is a function π:S⟶A that maps states to actions.**

⟶ 政策 - 政策πは状態と行動を写像する関数π:S⟶A
"強化学習は一連の行動を通じて報酬が最も多く得られるような方策(policy)を学習する。"[0]
I would prefer 方策 over 政策, since the latter indicates a "political" context.
[0] https://ja.wikipedia.org/wiki/%E5%BC%B7%E5%8C%96%E5%AD%A6%E7%BF%92
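Concretely, the function π:S⟶A of item 38 can be pictured as a lookup from states to actions. A toy Python sketch (the state and action names are made up):

```python
# 方策 (policy): a function pi: S -> A mapping states to actions
policy = {
    "low_battery": "recharge",
    "full_battery": "explore",
}

def pi(state):
    return policy[state]

print(pi("low_battery"))  # recharge
```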
ja/cheatsheet-deep-learning.md (Outdated)

**54. [Reinforcement learning, Markov decision processes, Value/policy iteration, Approximate dynamic programming, Policy search]**

⟶ [強化学習, マルコフ決定過程, バリュー/ポリシー反復, 近似動的計画法, ポリシーサーチ]
バリュー/ポリシー反復 → 価値/方策反復
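For reference, a minimal value-iteration (価値反復) sketch on a toy MDP in Python (the transition table, rewards, and discount factor are all invented for illustration):

```python
# Toy MDP: two states, two actions, deterministic transitions
states = ["s0", "s1"]
actions = ["stay", "move"]
T = {("s0", "stay"): "s0", ("s0", "move"): "s1",
     ("s1", "stay"): "s1", ("s1", "move"): "s0"}
R = {("s0", "stay"): 0.0, ("s0", "move"): 1.0,
     ("s1", "stay"): 2.0, ("s1", "move"): 0.0}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in states}
for _ in range(100):
    # Bellman update: V(s) <- max_a [ R(s,a) + gamma * V(T(s,a)) ]
    V = {s: max(R[s, a] + gamma * V[T[s, a]] for a in actions) for s in states}

print(V)  # s1's self-loop reward of 2.0 dominates: V(s1) ~= 20, V(s0) ~= 19
```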
ja/cheatsheet-deep-learning.md (Outdated)

**53. [Recurrent Neural Networks, Gates, LSTM]**

⟶ [再帰型ニューラルネットワーク, ゲート, LSTM]
See first comment about: 回帰型ニューラルネットワーク
ja/cheatsheet-deep-learning.md (Outdated)

**42. Remark: we note that the optimal policy π∗ for a given state s is such that:**

⟶ 備考: 与えられた状態sに対する最適方針π*はこのようになります:
方針 and 政策 are mixed across the translations. I'd suggest picking either 方策 or 方針 and sticking with it.
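For item 42, the remark presumably continues with the standard identity from the CS-229 notes, reconstructed here in the usual notation (Psa are the transition probabilities, V∗ the optimal value function):

```latex
\pi^{*}(s) = \underset{a \in A}{\operatorname{argmax}} \sum_{s' \in S} P_{sa}(s')\, V^{*}(s')
```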
Sorry, I stalled partway through...
Hi @taixhi, please feel free to incorporate the current suggestions into your work if you feel they are appropriate. Looking forward to seeing the great work of this PR merged!
Oh wow, completely forgot about this, will take a look at it in the next couple of days!
Reviewed 50. to 54.
ja/cheatsheet-deep-learning.md (Outdated)

**3. Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.**

⟶ ニューラルネットワークとは複数の層を用いて組まれる数学モデルです。代表的なネットワークとして畳み込みと再帰型ニューラルネットワークが挙げられます。
We chose リカレントニューラルネットワーク when we translated this document: https://stanford.edu/~shervine/l/ja/teaching/cs-230/cheatsheet-recurrent-neural-networks
Reviewed 29. to 49.
Reviewed 24. to 28.
**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.**

⟶ これは通常、学習率を高め、初期値への依存性を減らすことを目的でFully Connected層と畳み込み層の後、非線形化を行う前に行われます。
⟶ これは通常、学習率を高め、初期値への依存性を減らすことを目的でFully Connected層と畳み込み層の後、非線形化を行う前に行われます。
→ ⟶ これは通常、学習率を高め、初期値への強い依存性を減らすことを目的として、全結合層もしくは畳み込み層の後、非線形化層の前で行われます。
ja/cheatsheet-deep-learning.md (Outdated)

**29. Reinforcement Learning and Control**

⟶ 強化学習とコントロール
⟶ 強化学習とコントロール
→ ⟶ 強化学習と制御
Hello @taixhi, a team from Machine Learning Tokyo has completed reviewing your translation and added some suggestions. Could you check and incorporate our suggestions? Here is how to incorporate suggestions:
Hello @taixhi, this is a friendly reminder that we completed reviewing your pull request. Could you take a look at our suggestions?
I have gone through the review. The suggestions are all reasonable and I learned a lot from them, thank you. To everyone at MLT: thank you for your careful review of this immature translation, which I contributed on GitHub on a whim two years ago. I apologize that my very slow response contributed to the delay in publishing the Japanese version. Co-Authored-By: for_tokyo <[email protected]> Co-Authored-By: Yoshiyuki Nakai 中井喜之 <[email protected]>
Sorry about the delay, everyone, just reviewed... Looks good to me! It was a good learning opportunity for me as well :)
Hello @shervinea, we completed the translation and review. Could you check if you can merge this pull request?
Thank you @yoshiyukinakai and everyone else who contributed to this translation. Your work is really appreciated! I'll go ahead and proceed to the merge as well as fill out the