[ja] cs-229-deep-learning #96

Merged
merged 6 commits into shervinea:master on Feb 9, 2020

Conversation

@taixhi (Contributor) commented Nov 15, 2018

work in progress.

@taixhi (Contributor, Author) commented Nov 15, 2018

The first draft is done. I would love for someone to have a look at it.

@shervinea added the "reviewer wanted" label Nov 16, 2018
@shervinea (Owner) commented

Thank you for your translation @taixhi! If you know a native speaker to review your work, please feel free to spread the word about this PR.

@taniokah (Contributor) commented

I would like to check this ja version today, and will comment here.

@taixhi (Contributor, Author) commented Nov 22, 2018

Hi @taniokah, let me know if you have had a look at it :)

@shervinea mentioned this pull request Jun 3, 2019
@Harimus left a comment

Here's my 5 cents. (From the MLT Slack.)


**3. Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.**

⟶ ニューラルネットワークとは複数の層を用いて組まれる数学モデルです。代表的なネットワークとして畳み込みと再帰型ニューラルネットワークが挙げられます。

According to Wikipedia [0], "Recurrent Neural Network" can be translated as either 回帰型ニューラルネットワーク or 再帰型ニューラルネットワーク, but the Wikipedia page also distinguishes the two, using 回帰型 for "Recurrent" and 再帰型 for "Recursive" Neural Network. 回帰 translates to "Regression" and 再帰 to "Recursion".

But a quick search for "Recurrent Neural Network" on Japanese Google gives mixed results: some people use 再帰型 while others use 回帰型 for "Recurrent" Neural Network. Is there any definitive source that explains why to use one over the other (other than Wikipedia)?

If not, to reduce confusion, just using the カタカナ form, リカレントニューラルネットワーク, might also be an option.

[0] https://ja.wikipedia.org/wiki/回帰型ニューラルネットワーク


**7. where we note w, b, z the weight, bias and output respectively.**

⟶ この場合重み付けをw、バイアス項をb、出力をzとします。

この場合重み付けをw、バイアス項をb、出力をzとします。 > この場合重みをw、バイアス項をb、出力をzとします。
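(For reference, these three symbols are tied together in the cheatsheet by the usual affine step of a single neuron, in standard notation:)

```latex
z = w^{\top} x + b
```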


**9. [Sigmoid, Tanh, ReLU, Leaky ReLU]**

⟶ [Sigmoid(シグモイド関数), Tanh(双曲線関数), ReLU(ランプ関数), Leaky ReLU]

正規化線形ユニット(rectified linear unit、ReLU) [0]
ランプ関数 is by no means wrong, but it translates to "ramp function" and is not the preferred term in machine learning, I think.
漏洩ReLU = Leaky ReLU [0]

[0] https://ja.wikipedia.org/wiki/%E6%AD%A3%E8%A6%8F%E5%8C%96%E7%B7%9A%E5%BD%A2%E9%96%A2%E6%95%B0
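For reference, the two activations under discussion are simple piecewise-linear functions. A minimal NumPy sketch (the 0.01 negative slope is a common but arbitrary choice):

```python
import numpy as np

def relu(x):
    # 正規化線形ユニット (rectified linear unit): max(0, x)
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # 漏洩ReLU (Leaky ReLU): keeps a small slope alpha on the negative side
    return np.where(x > 0, x, alpha * x)
```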


**18. Step 4: Use the gradients to update the weights of the network.**

⟶ 傾斜を使い誤差が小さくなるように重みを調整する。

勾配 = Gradient [0]

While you do use the gradients to update the weights in order to reduce the error, the English sentence only says to use the gradients to update the weights of the network. So a more direct translation would probably be
"勾配を使いネットワークの重みを更新する。" (or 調整 instead of 更新)

[0] https://ja.wikipedia.org/wiki/%E5%8B%BE%E9%85%8D_(%E3%83%99%E3%82%AF%E3%83%88%E3%83%AB%E8%A7%A3%E6%9E%90)
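For reference, the step being translated is the plain gradient-descent update. A minimal NumPy sketch (the weights, gradients, and learning rate are made up for illustration):

```python
import numpy as np

# Toy weights and their gradients (勾配) as computed by backpropagation.
w = np.array([0.5, -0.3])
grad_w = np.array([0.2, 0.1])

learning_rate = 0.1
# Step 4: use the gradients to update (更新) the weights of the network.
w -= learning_rate * grad_w
```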


**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.**

⟶ これは通常、学習率を高め、初期値への依存性を減らすことを目的でFully Connected層と畳み込み層の後、非線形化を行う前に行われます。

非線形化を行う層の前に行われます。



**24. Recurrent Neural Networks**

See first comment about: 回帰型ニューラルネットワーク


**38. Policy ― A policy π is a function π:S⟶A that maps states to actions.**

⟶ 政策 - 政策πは状態と行動を写像する関数π:S⟶A

"強化学習は一連の行動を通じて報酬が最も多く得られるような方策(policy)を学習する。"[0]
I would prefer 方策 over 政策 since the latter indicates a "political" context.

[0] https://ja.wikipedia.org/wiki/%E5%BC%B7%E5%8C%96%E5%AD%A6%E7%BF%92
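For context, the object being named is just a state-to-action map. A toy sketch, with made-up states and actions:

```python
# A policy π: S ⟶ A as a plain mapping from states to actions.
policy = {"s0": "left", "s1": "right", "s2": "stay"}

def pi(state: str) -> str:
    return policy[state]

assert pi("s1") == "right"
```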


**54. [Reinforcement learning, Markov decision processes, Value/policy iteration, Approximate dynamic programming, Policy search]**

⟶ [強化学習, マルコフ決定過程, バリュー/ポリシー反復, 近似動的計画法, ポリシーサーチ]

バリュー/ポリシー反復, -> 価値/方策反復
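For context, 価値反復 (value iteration) repeatedly applies the Bellman backup until the value function converges. A minimal sketch on a made-up two-state, two-action MDP:

```python
import numpy as np

# P[a, s, s'] = transition probabilities, R[s] = rewards (all illustrative).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([0.0, 1.0])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(100):
    # Bellman backup: V(s) <- R(s) + γ max_a Σ_s' P[a, s, s'] V(s')
    V = R + gamma * np.max(P @ V, axis=0)

# Greedy 方策 (policy) once V has converged: best action per state.
policy = np.argmax(P @ V, axis=0)
```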


**53. [Recurrent Neural Networks, Gates, LSTM]**

⟶ [再帰型ニューラルネットワーク, ゲート, LSTM]

See first comment about: 回帰型ニューラルネットワーク


**42. Remark: we note that the optimal policy π∗ for a given state s is such that:**

⟶ 備考: 与えられた状態sに対する最適方針π*はこのようになります:

方針 and 政策 are mixed across the translations. I'd suggest using either 方策 or 方針 and sticking with it.
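(For reference, the remark in sentence 42 is truncated here; in the CS229 material it concludes with the standard expression:)

```latex
\pi^{*}(s) = \underset{a \in \mathcal{A}}{\arg\max} \sum_{s' \in \mathcal{S}} P_{sa}(s') \, V^{*}(s')
```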

@taniokah (Contributor) commented

> Hi @taniokah, let me know if you have had a look at it :)

Sorry, my work had stalled...
I will check the whole content today, 29/10/2019 JST.

@shervinea (Owner) commented

Hi @taixhi, please feel free to incorporate current suggestions to your work if you feel it would be appropriate. Looking forward to seeing the great work of this PR merged!

@taixhi (Contributor, Author) commented Nov 19, 2019

oh wow, completely forgot about this, will take a look at it in the next couple of days!

@yoshiyukinakai (Contributor) left a comment

Reviewed 50. to 54.


**3. Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.**

⟶ ニューラルネットワークとは複数の層を用いて組まれる数学モデルです。代表的なネットワークとして畳み込みと再帰型ニューラルネットワークが挙げられます。

@yoshiyukinakai (Contributor) commented Nov 21, 2019

We chose リカレントニューラルネットワーク when we translated this document
https://stanford.edu/~shervine/l/ja/teaching/cs-230/cheatsheet-recurrent-neural-networks

(2 review threads on ja/cheatsheet-deep-learning.md, resolved and marked outdated)
@yoshiyukinakai (Contributor) left a comment

Reviewed 29. to 49.

(7 review threads on ja/cheatsheet-deep-learning.md, resolved and marked outdated)
@yoshiyukinakai (Contributor) left a comment

Reviewed 24. to 28.

(10 review threads on ja/cheatsheet-deep-learning.md, resolved and marked outdated)

**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.**

⟶ これは通常、学習率を高め、初期値への依存性を減らすことを目的でFully Connected層と畳み込み層の後、非線形化を行う前に行われます。

Suggested change
⟶ これは通常、学習率を高め、初期値への依存性を減らすことを目的でFully Connected層と畳み込み層の後、非線形化を行う前に行われます
⟶ これは通常、学習率を高め、初期値への強い依存性を減らすことを目的として、全結合層もしくは畳み込み層の後、非線形化層の前で行われます
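For reference, the ordering described in sentence 23 (全結合層もしくは畳み込み層, then batch normalization, then 非線形化層) looks like this in a minimal PyTorch sketch; the layer sizes are arbitrary:

```python
import torch.nn as nn

block = nn.Sequential(
    nn.Linear(128, 64),   # fully connected (全結合) layer
    nn.BatchNorm1d(64),   # batch normalization, before the non-linearity
    nn.ReLU(),            # non-linearity (非線形化) layer
)
```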


**29. Reinforcement Learning and Control**

⟶ 強化学習とコントロール

Suggested change
⟶ 強化学習とコントロール
⟶ 強化学習と制御

(4 review threads on ja/cheatsheet-deep-learning.md, resolved and marked outdated)
@yoshiyukinakai (Contributor) commented

Hello @taixhi, a team from Machine Learning Tokyo has completed reviewing your translation and added some suggestions. Could you check and incorporate them?

Here is how to incorporate suggestions:
https://help.github.com/ja/github/collaborating-with-issues-and-pull-requests/incorporating-feedback-in-your-pull-request

@yoshiyukinakai (Contributor) commented

Hello @taixhi, this is a friendly reminder that we completed reviewing your pull request. Could you take a look at our suggestions?

I have gone through the review. The suggestions were sound and I learned from them, thank you.

To everyone at MLT: thank you for your careful review of the immature translation I contributed on GitHub on a whim two years ago. I apologize that my very slow response contributed to delaying the release of the Japanese version.

Co-Authored-By: for_tokyo <[email protected]>
Co-Authored-By: Yoshiyuki Nakai 中井喜之 <[email protected]>
@taixhi (Contributor, Author) commented Feb 6, 2020

Sorry about the delay guys, just reviewed... Looks good to me! It was a good learning opportunity for me as well :)

@yoshiyukinakai (Contributor) commented

Hello @shervinea, we have completed the translation and review. Could you check if you can merge this pull request?

@shervinea (Owner) commented

Thank you @yoshiyukinakai and everyone else who contributed to this translation. Your work is really appreciated! I'll go ahead and proceed to the merge as well as fill out the CONTRIBUTORS file accordingly.

@shervinea merged commit cbc79c5 into shervinea:master Feb 9, 2020
shervinea added a commit that referenced this pull request Jun 30, 2020
@shervinea changed the title from [ja] Deep Learning to [ja] cs-229-deep-learning Oct 6, 2020