
[ZH] 13-3 Inline latex broken #618

Closed

xcastilla opened this issue Sep 23, 2020 · 9 comments

@xcastilla
Contributor

Hi @JonathanSum! Just for your info, there seems to be some broken inline LaTeX in lecture 13-3:

[Screenshot, 2020-09-23: rendering of the broken inline LaTeX on the 13-3 page]

The rest of the lectures I've checked seem to be fine.

@JonathanSum
Contributor

JonathanSum commented Sep 23, 2020

I tried to fix it, but it looks like the LaTeX is the same as the one in the English version.

@JonathanSum
Copy link
Contributor

JonathanSum commented Sep 23, 2020

Chinese version:

在自我注意力机制中,我们有一个输入集$\lbrace\boldsymbol{x}_{i}\rbrace^{t}_{i=1}$。不像序列那样,它没有顺序的。
隐藏向量$\boldsymbol{h}$是由集之中的向量的线性组合得出来的。
我们可以用矩阵向量乘法来以$\boldsymbol{X}\boldsymbol{a}$去表达这个东西,这里$\boldsymbol{a}$包含一些会缩放向量$\boldsymbol{x}_{i}$的系数。

English version

In self-attention, we have a set of input $\lbrace\boldsymbol{x}_{i}\rbrace^{t}_{i=1}$.
Unlike a sequence, it does not have an order.
Hidden vector $\boldsymbol{h}$ is given by linear combination of the vectors in the set.
We can express this as $\boldsymbol{X}\boldsymbol{a}$ using matrix vector multiplication, where $\boldsymbol{a}$ contains coefficients that scale the input vector $\boldsymbol{x}_{i}$.
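
For context, the passage above just describes computing $\boldsymbol{h} = \boldsymbol{X}\boldsymbol{a}$, a linear combination of the vectors in the set. A minimal PyTorch sketch of that computation (shapes and variable names are illustrative, not taken from the course notebooks):

```python
import torch

# A set of t input vectors x_i, stacked as the columns of X (illustrative sizes).
t, d = 5, 3                        # number of set elements, feature dimension
X = torch.randn(d, t)              # X = [x_1, ..., x_t]

# Coefficients a that scale each x_i (e.g. attention weights that sum to 1).
a = torch.softmax(torch.randn(t), dim=0)

# Hidden vector h = X a: a linear combination of the vectors in the set.
h = X @ a                          # shape: (d,)
print(h.shape)                     # torch.Size([3])
```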

The following LaTeX has issues in the Chinese version:

$\lbrace\boldsymbol{x}_{i}\rbrace^{t}_{i=1}$

$\boldsymbol{h}$

$\boldsymbol{X}\boldsymbol{a}$

$\boldsymbol{a}$

$\boldsymbol{x}_{i}$

JonathanSum added a commit to JonathanSum/pytorch-Deep-Learning that referenced this issue Sep 23, 2020
@JonathanSum
Contributor

JonathanSum commented Sep 23, 2020

Pull request to fix this issue: #619

I will follow this guideline to fix it when I have time: #576


But I think the reason xcastilla posted this issue here is that the LaTeX is the same as the English one, yet the Chinese version still has an issue.

@Atcold
Owner

Atcold commented Sep 23, 2020

@JonathanSum, you may have to escape the _ like \_. Let me know if it works.
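
For reference, this is roughly what the suggested escaping would look like when applied to the first broken expression (a sketch of the source Markdown, not the exact change in the PR):

```
Before (the underscore gets consumed by the Markdown processor):
$\lbrace\boldsymbol{x}_{i}\rbrace^{t}_{i=1}$

After (underscores escaped so they reach the math renderer intact):
$\lbrace\boldsymbol{x}\_{i}\rbrace^{t}\_{i=1}$
```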

Atcold pushed a commit that referenced this issue Sep 26, 2020: Update 13-3.md (#618)
@JonathanSum
Contributor

JonathanSum commented Oct 1, 2020

I am still fixing it and testing it on my repo:
https://jonathansum.github.io/pytorch-Deep-Learning/zh/week13/13-3/

Update: I tried changing `_` to `\_`.

@Atcold
Owner

Atcold commented Nov 25, 2020

Was this fixed?

@JonathanSum
Contributor

JonathanSum commented Nov 25, 2020

Let's hope this pull request can fix them all. I checked my repo, and it looks like the one above is fixed. There is another one; I will check whether this pull request fixes them all or not: #697

@Atcold
Owner

Atcold commented Dec 11, 2020

Thanks, going over the PRs right now.

@JonathanSum
Contributor

Problems are solved. I think we can close it.
You can see it here: https://atcold.github.io/pytorch-Deep-Learning/zh/week13/13-3/

Atcold closed this as completed Dec 11, 2020