From 089cc3799a27593b81c8702343a575a1296be50f Mon Sep 17 00:00:00 2001 From: Redouane Lguensat Date: Sun, 21 Oct 2018 14:10:51 +0200 Subject: [PATCH 01/10] [ar] Unsupervised Learning --- ar/cheatsheet-unsupervised-learning.md | 44 ++++++++++++++++---------- 1 file changed, 28 insertions(+), 16 deletions(-) diff --git a/ar/cheatsheet-unsupervised-learning.md b/ar/cheatsheet-unsupervised-learning.md index 1d80c47b5..e47df827c 100644 --- a/ar/cheatsheet-unsupervised-learning.md +++ b/ar/cheatsheet-unsupervised-learning.md @@ -1,61 +1,73 @@ **1. Unsupervised Learning cheatsheet** -⟶ +
+ورقة مراجعة للتعلم بدون إشراف +

**2. Introduction to Unsupervised Learning** -⟶ +
+ مقدمة للتعلم بدون إشراف +

**3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** -⟶ +
+ {x(1),...,x(m)} الحافز ― الهدف من التعلم بدون إشراف هو إيجاد الأنماط الخفية في البيانات الغير موسومة +

**4. Jensen's inequality ― Let f be a convex function and X a random variable. We have the following inequality:** -⟶ +
+متباينة جينسن ― لتكن f دالة محدبة و X متغير عشوائي. لدينا المتفاوتة التالية +: +

**5. Clustering** -⟶ - +
+ تجميع +

**6. Expectation-Maximization** -⟶ - +
+تحقيق أقصى قدر للتوقع +

**7. Latent variables ― Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Here are the most common settings where there are latent variables:** -⟶ - +
+المتغيرات الكامنة ― المتغيرات الكامنة هي متغيرات باطنية/غير معاينة تزيد من صعوبة مشاكل التقدير، غالبا ما ترمز بالحرف z. في مايلي الإعدادات الشائعة التي تحتوي على متغيرات كامنة.

**8. [Setting, Latent variable z, Comments]** -⟶ - +
+إعداد، متغير كامن z، تعاليق

**9. [Mixture of k Gaussians, Factor analysis]** -⟶ - +
+مزيج من k غاوسيات، تحليل العوامل

**10. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:** -⟶ - +
+خوارزمية ― خوارزمية تحقيق أقصى قدر للتوقع هي عبارة عن طريقة فعالة لتقدير المعامل θ عبر تقدير الاحتمال الأرجح، و يتم ذلك بشكل تكراري حيث يتم إيجاد حد أدنى لدالة الإمكان (الخطوة E) ثم يتم استمثال ذلك الحد الأدنى (الخطوة M) كما يلي:
+<br>

**11. E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:** From e3be0da13a14851d51bbf71a8b4cdce901183ce2 Mon Sep 17 00:00:00 2001 From: Redouane Lguensat Date: Wed, 24 Oct 2018 16:18:54 +0200 Subject: [PATCH 02/10] Update cheatsheet-unsupervised-learning.md --- ar/cheatsheet-unsupervised-learning.md | 27 +++++++++++++++----------- 1 file changed, 16 insertions(+), 11 deletions(-) diff --git a/ar/cheatsheet-unsupervised-learning.md b/ar/cheatsheet-unsupervised-learning.md index e47df827c..8c91dabd3 100644 --- a/ar/cheatsheet-unsupervised-learning.md +++ b/ar/cheatsheet-unsupervised-learning.md @@ -17,7 +17,7 @@ **3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.**
- {x(1),...,x(m)} الحافز ― الهدف من التعلم بدون إشراف هو إيجاد الأنماط الخفية في البيانات الغير موسومة + {x(1),...,x(m)} الحافز ― الهدف من التعلم بدون إشراف هو إيجاد الأنماط الخفية في البيانات غير الموسومة

@@ -72,32 +72,37 @@ **11. E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:** -⟶ - +
+الخطوة E : حساب الاحتمال البعدي Qi(z(i)) بأن تصدر كل نقطة x(i) من التجمع z(i) كما يلي: +

**12. M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:** -⟶ - +
+الخطوة M : يتم استعمال الاحتمالات البعدية Qi(z(i)) كأثقال خاصة لكل تجمع على النقط x(i) ، لكي يتم تقدير نموذج لكل تجمع بشكل منفصل، و ذلك كما يلي: +

**13. [Gaussians initialization, Expectation step, Maximization step, Convergence]** -⟶ - +
+[تهيئة غاوسية، خطوة التوقع، خطوة التعظيم، التقاء] +

**14. k-means clustering** -⟶ - +
+تجميع k-أوساط +

**15. We note c(i) the cluster of data point i and μj the center of cluster j.** -⟶ - +
+نرمز تجمع النقط i ب c(i) ، و نرمز ب μj j مركز التجمع +

**16. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** From 248dfbdc8e6a1acb3f84bd7e78fad9d1582dd7ff Mon Sep 17 00:00:00 2001 From: Redouane Lguensat Date: Sun, 11 Nov 2018 01:11:19 +0100 Subject: [PATCH 03/10] Update cheatsheet-unsupervised-learning.md --- ar/cheatsheet-unsupervised-learning.md | 28 +++++++++++++++----------- 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/ar/cheatsheet-unsupervised-learning.md b/ar/cheatsheet-unsupervised-learning.md index 8c91dabd3..b7ffe6002 100644 --- a/ar/cheatsheet-unsupervised-learning.md +++ b/ar/cheatsheet-unsupervised-learning.md @@ -87,14 +87,14 @@ **13. [Gaussians initialization, Expectation step, Maximization step, Convergence]**
-[تهيئة غاوسية، خطوة التوقع، خطوة التعظيم، التقاء] +[ استهلالات غاوسية، خطوة التوقع، خطوة التعظيم، تقارب]

**14. k-means clustering**
-تجميع k-أوساط +تجميع k-متوسطات

@@ -107,32 +107,36 @@ **16. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** -⟶ - +
+بعد الاستهلال العشوائي لمتوسطات التجمعات μ1,μ2,...,μk∈Rn، خوارزمية تجميع k-متوسطات تكرر الخطوة التالية حتى التقارب +

**17. [Means initialization, Cluster assignment, Means update, Convergence]** -⟶ - +
+[استهلال المتوسطات، تعيين تجمع، تحديث المتوسطات، التقارب]

**18. Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:** -⟶ - +
+ دالة التشويه - لكي نتأكد من أن الخوارزمية تقاربت، ننظر إلى دالة التشويه المعرفة كما يلي: +

**19. Hierarchical clustering** -⟶ - +
+ التجميع الهرمي +

**20. Algorithm ― It is a clustering algorithm with an agglomerative hierarchical approach that build nested clusters in a successive manner.** -⟶ - +
+ خوارزمية - هي عبارة عن خوارزمية تجميع تعتمد على طريقة تجميعية هرمية تبني مجموعات متداخلة بشكل متتال +

**21. Types ― There are different sorts of hierarchical clustering algorithms that aims at optimizing different objective functions, which is summed up in the table below:** From c784af3b29bab655ccd8fe89921c1c20ef2e6baf Mon Sep 17 00:00:00 2001 From: Redouane Lguensat Date: Sun, 11 Nov 2018 13:28:54 +0100 Subject: [PATCH 04/10] Update cheatsheet-unsupervised-learning.md --- ar/cheatsheet-unsupervised-learning.md | 69 +++++++++++++++----------- 1 file changed, 39 insertions(+), 30 deletions(-) diff --git a/ar/cheatsheet-unsupervised-learning.md b/ar/cheatsheet-unsupervised-learning.md index b7ffe6002..05dce41e0 100644 --- a/ar/cheatsheet-unsupervised-learning.md +++ b/ar/cheatsheet-unsupervised-learning.md @@ -141,92 +141,101 @@ **21. Types ― There are different sorts of hierarchical clustering algorithms that aims at optimizing different objective functions, which is summed up in the table below:** -⟶ - +
+أنواع هنالك عدة أنواع من خوارزميات التجميع الهرمي التي ترمي إلى تحسين دوال هدف مختلفة، هاته الأنواع ملخصة في الجدول أسفله +

**22. [Ward linkage, Average linkage, Complete linkage]** -⟶ - +
+[الربط البَينِي، الربط المتوسط، الربط الكامل]

**23. [Minimize within cluster distance, Minimize average distance between cluster pairs, Minimize maximum distance of between cluster pairs]** -⟶ - +
+[تقليل داخل مسافة التجمع، تقليل متوسط المسافات بين أزواج التجمعات، تقليل المسافة القصوى بين أزواج التجمعات]

**24. Clustering assessment metrics** -⟶ - +
+مقاييس تقدير التجميع +

**25. In an unsupervised learning setting, it is often hard to assess the performance of a model since we don't have the ground truth labels as was the case in the supervised learning setting.** -⟶ - +
+في إعداد للتعلم بدون إشراف، من الصعب غالبا تقدير أداء نموذج ما لأننا لا نتوفر على القيم الحقيقية كما كان الحال في إعداد التعلم تحت إشراف +

**26. Silhouette coefficient ― By noting a and b the mean distance between a sample and all other points in the same class, and between a sample and all other points in the next nearest cluster, the silhouette coefficient s for a single sample is defined as follows:** -⟶ - +
+المعامل الظِلِّي - إذا رمزنا aو b متوسط المسافة بين عينة و كل النقط المنتمية لنفس الصنف، و بين عينة و كل النقط المنتمية لأقرب صنف، المعامل الظِلِّي s لعينة وحيدة معرف كالتالي: +

**27. Calinski-Harabaz index ― By noting k the number of clusters, Bk and Wk the between and within-clustering dispersion matrices respectively defined as** -⟶ - +
+مؤشر كالينسكي هاراباز - إذا رمزنا بk لعدد التجمعات، Bk و Wk مصفوفات التشتت بين التجمعات و داخلها معرفة كالتالي:

**28. the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:** -⟶ - +
+مؤشر كالينسكي هاراباز s(k) يعطي تقييما للتجمعات الناتجة عن نموذج تجميعي، بحيث كلما كان التقييم أعلى كلما دل ذلك على أن التجمعات أكثر كثافة و أكثر انفصالا. هذا المؤشر معرّف كالتالي

**29. Dimension reduction** -⟶ - +
+تخفيض الأبعاد

**30. Principal component analysis** -⟶ - +
+تحليل المكون الرئيسي +

**31. It is a dimension reduction technique that finds the variance maximizing directions onto which to project the data.** -⟶ - +
+إنها تقنية لخفض الأبعاد ترمي إلى إيجاد الاتجاهات المكبرة للتباين و التي تسقط عليها البيانات +

**32. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** -⟶ - +
+ قيمة ذاتية، متجه ذاتي - لتكن A∈Rn×n مصفوفة ، نقول أن λ قيمة ذاتية للمصفوفة A إذا وُجِد متجه z∈Rn∖{0} يسمى متجها ذاتيا، بحيث: +

**33. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** -⟶ - +
+ نظرية الطّيف لتكن A∈Rn×n. إذا كانت A متماثلة فإنها شبه قطرية بمصفوفة متعامدة U∈Rn×n. إذا رمزنا Λ=diag(λ1,...,λn) ، لدينا: +

**34. diagonal** -⟶ - +
+قطري +

**35. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** -⟶ - +
+ملحوظة: المتجه الذاتي المرتبط بأكبر قيمة ذاتية يسمى بالمتجه الذاتي الرئيسي للمصفوفة A

**36. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k From bfddc369f062ace5c2e4998fd77f3600840280fa Mon Sep 17 00:00:00 2001 From: Redouane Lguensat Date: Thu, 13 Dec 2018 21:35:45 +0100 Subject: [PATCH 05/10] Update cheatsheet-unsupervised-learning.md --- ar/cheatsheet-unsupervised-learning.md | 125 +++++++++++++++---------- 1 file changed, 78 insertions(+), 47 deletions(-) diff --git a/ar/cheatsheet-unsupervised-learning.md b/ar/cheatsheet-unsupervised-learning.md index 05dce41e0..300fd8dfd 100644 --- a/ar/cheatsheet-unsupervised-learning.md +++ b/ar/cheatsheet-unsupervised-learning.md @@ -235,136 +235,167 @@ **35. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.**
-ملحوظة: المتجه الذاتي المرتبط بأكبر قيمة ذاتية يسمى بالمتجه الذاتي الرئيسي للمصفوفة A
+ملحوظة: المتجه الذاتي المرتبط بأكبر قيمة ذاتية يسمى بالمتجه الذاتي الرئيسي للمصفوفة A +
**36. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k dimensions by maximizing the variance of the data as follows:** -⟶ - +
+خوارزمية - تحليل المكون الرئيسي تقنية لخفض الأبعاد تهدف إلى إسقاط البيانات على k بعد بحيث يتم تكبير التباين، خطواتها كالتالي:

**37. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** -⟶ - -
+
+الخطوة 1: تسوية البيانات بحيث تصبح ذات متوسط يساوي صفر و انحراف معياري يساوي واحد +
+
**38. Step 2: Compute Σ=1mm∑i=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.** -⟶ - +
+الخطوة 2: حساب Σ=1mm∑i=1x(i)x(i)T∈Rn×n ، و هي متماثلة و ذات قيم ذاتية حقيقية +

**39. Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.** -⟶ - -
+
+الخطوة 3: حساب u1,...,uk∈Rn المتجهات الذاتية الرئيسية المتعامدة لΣ و عددها k ، يعني k من المتجهات الذاتية المتعامدة ذات القيم الذاتية الأكبر +
+
**40. Step 4: Project the data on spanR(u1,...,uk).** -⟶ - +
+الخطوة 4: إسقاط البيانات على spanR(u1,...,uk) +

**41. This procedure maximizes the variance among all k-dimensional spaces.** -⟶ - +
+هذا الإجراء يضخم التباين بين كل الفضاءات البعدية +

**42. [Data in feature space, Find principal components, Data in principal components space]** -⟶ - +
+[بيانات في فضاء الخصائص, أوجد المكونات الرئيسية, بيانات في فضاء المكونات الرئيسية] +

**43. Independent component analysis** -⟶ - +
+تحليل المكونات المستقلة +

**44. It is a technique meant to find the underlying generating sources.** -⟶ - +
+هي تقنية تهدف إلى إيجاد المصادر التوليدية الكامنة +

**45. Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:** -⟶ - +
+افتراضات - لنفترض أن بياناتنا x تم توليدها من طرف s=(s1,...,sn) المصدر المتجهي ال n بعدي، بحيث متغيرات عشوائية مستقلة، و ذلك عبر مصفوفة خلط غير منفردة A +كالتالي +

**46. The goal is to find the unmixing matrix W=A−1.** -⟶ - +
+الهدف هو العثور على مصفوفة الفصل W=A−1

**47. Bell and Sejnowski ICA algorithm ― This algorithm finds the unmixing matrix W by following the steps below:** - -⟶ - +
+خوارزمية ICA +Bell و Sejnowski ل +هاته الخوارزمية تجد مصفوفة الفصل W عن طريق الخطوات التالية +

**48. Write the probability of x=As=W−1s as:** -⟶ - +
+اكتب احتمال x=As=W−1s كالتالي

**49. Write the log likelihood given our training data {x(i),i∈[[1,m]]} and by noting g the sigmoid function as:** -⟶ - +
+ لتكن {x(i),i∈[[1,m]]} +بيانات التمرن +و g دالة سيجمويد +اكتب الأرجحية اللوغاريتمية كالتالي +

**50. Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:** -⟶ - +
+و منه، قاعدة التعلم للصعود التفاضلي العشوائي تقتضي أن لكل مثال تمرين x(i) ، نقوم بتحديث W كما يلي +

**51. The Machine Learning cheatsheets are now available in Arabic.** -⟶ - +
+ورقات المراجعة للتعلم الآلي متوفرة حاليا باللغة العربية +

**52. Original authors** -⟶ - +
+المحررون الأصليون +

**53. Translated by X, Y and Z** -⟶ - +
+تم ترجمته بواسطة X, Y و Z
+
**54. Reviewed by X, Y and Z** -⟶ - +
+تم مراجعته بواسطة X, Y و Z

**55. [Introduction, Motivation, Jensen's inequality]** -⟶ - +
+[تقديم، تحفيز، متفاوتة جنسن] +

**56. [Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]** -⟶ - +
+[تجميع, + التوقع-التعظيم + , k-متوسطات + , التجميع الهرمي + , مقاييس] + +

**57. [Dimension reduction, PCA, ICA]** -⟶ +
+[خفض الأبعاد, PCA, ICA] +
+
From 199aef6db8cf5a7b4f0b05e6c24ad45a990239eb Mon Sep 17 00:00:00 2001 From: Redouane Lguensat Date: Thu, 13 Dec 2018 21:40:59 +0100 Subject: [PATCH 06/10] Update CONTRIBUTORS --- CONTRIBUTORS | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/CONTRIBUTORS b/CONTRIBUTORS index afd1d1f12..8dfa394f9 100644 --- a/CONTRIBUTORS +++ b/CONTRIBUTORS @@ -1,5 +1,7 @@ --ar - + + Redouane Lguensat (translation of unsupervised learning) + --de --es From 49c436d9ebf8589629fb4845718fe64846e33b91 Mon Sep 17 00:00:00 2001 From: Redouane Lguensat Date: Thu, 26 Sep 2019 11:15:29 +0200 Subject: [PATCH 07/10] Update cheatsheet-unsupervised-learning.md --- ar/cheatsheet-unsupervised-learning.md | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/ar/cheatsheet-unsupervised-learning.md b/ar/cheatsheet-unsupervised-learning.md index 300fd8dfd..b3471ab4e 100644 --- a/ar/cheatsheet-unsupervised-learning.md +++ b/ar/cheatsheet-unsupervised-learning.md @@ -1,7 +1,7 @@ **1. Unsupervised Learning cheatsheet**
-ورقة مراجعة للتعلم بدون إشراف + مرجع سريع للتعلّم غير المُوَجَّه

@@ -9,7 +9,7 @@ **2. Introduction to Unsupervised Learning**
- مقدمة للتعلم بدون إشراف + مقدمة للتعلّم غير المُوَجَّه

@@ -17,7 +17,7 @@ **3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.**
- {x(1),...,x(m)} الحافز ― الهدف من التعلم بدون إشراف هو إيجاد الأنماط الخفية في البيانات غير الموسومة + {x(1),...,x(m)} الحافز ― الهدف من التعلّم غير المُوَجَّه هو إيجاد الأنماط الخفية في البيانات غير المٌعلمّة

@@ -25,8 +25,7 @@ **4. Jensen's inequality ― Let f be a convex function and X a random variable. We have the following inequality:**
-متباينة جينسن ― لتكن f دالة محدبة و X متغير عشوائي. لدينا المتفاوتة التالية -: +متباينة جينسن ― لتكن f دالة محدبة و X متغير عشوائي. لدينا المتباينة التالية:

@@ -34,14 +33,14 @@ **5. Clustering**
- تجميع + التجميع

**6. Expectation-Maximization**
-تحقيق أقصى قدر للتوقع +

From cdb5aba0cc3f052312d2fb078fb515b977eac462 Mon Sep 17 00:00:00 2001 From: Redouane Lguensat Date: Thu, 26 Sep 2019 11:35:50 +0200 Subject: [PATCH 08/10] Approved reviews by @qunaieer --- ar/cheatsheet-unsupervised-learning.md | 123 ++++++++++++------------- 1 file changed, 61 insertions(+), 62 deletions(-) diff --git a/ar/cheatsheet-unsupervised-learning.md b/ar/cheatsheet-unsupervised-learning.md index b3471ab4e..d98e37ea2 100644 --- a/ar/cheatsheet-unsupervised-learning.md +++ b/ar/cheatsheet-unsupervised-learning.md @@ -40,87 +40,91 @@ **6. Expectation-Maximization**
- + تعظيم القيمة المتوقعة (Expectation-Maximization)

**7. Latent variables ― Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Here are the most common settings where there are latent variables:**
المتغيرات الكامنة ― المتغيرات الكامنة هي متغيرات مخفية/غير معاينة تزيد من صعوبة مشاكل التقدير، غالباً ما يُرمز لها بالحرف z. فيما يلي الإعدادات الشائعة التي تحتوي على متغيرات كامنة:
+<br>
+ المتغيرات الكامنة ― المتغيرات الكامنة هي متغيرات مخفية/غير معاينة تزيد من صعوبة مشاكل التقدير، غالباً ما ترمز بالحرف z. في مايلي الإعدادات الشائعة التي تحتوي على متغيرات كامنة: +
**8. [Setting, Latent variable z, Comments]**
-إعداد، متغير كامن z، تعاليق
+ [الإعداد، المتغير الكامن z، ملاحظات] +
**9. [Mixture of k Gaussians, Factor analysis]**
-مزيج من k غاوسيات، تحليل العوامل
+ [خليط من k توزيع جاوسي، تحليل عاملي] +
**10. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:**
-خوارزمية ― خوارزمية تحقيق أقصى قدر للتوقع هي عبارة عن طريقة فعالة لتقدير المعامل θ عبر تقدير الاحتمال الأرجح، و يتم ذلك بشكل تكراري حيث يتم إيجاد حد أدنى لدالة الإمكان (الخطوة M) ثم يتم استمثال ذلك الحد الأدنى (الخطوة E) كما يلي: +خوارزمية ― تعظيم القيمة المتوقعة (Expectation-Maximization) هي عبارة عن طريقة فعالة لتقدير المُدخل θ عبر تقدير تقدير الأرجحية الأعلى (maximum likelihood estimation)، ويتم ذلك بشكل تكراري حيث يتم إيجاد حد أدنى للأرجحية (الخطوة M)، ثم يتم تحسين (optimizing) ذلك الحد الأدنى (الخطوة E)، كما يلي:

**11. E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:**
-الخطوة E : حساب الاحتمال البعدي Qi(z(i)) بأن تصدر كل نقطة x(i) من التجمع z(i) كما يلي: +الخطوة E : حساب الاحتمال البعدي Qi(z(i)) بأن تصدر كل نقطة x(i) من مجموعة (cluster) z(i) كما يلي:

**12. M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:**
-الخطوة M : يتم استعمال الاحتمالات البعدية Qi(z(i)) كأثقال خاصة لكل تجمع على النقط x(i) ، لكي يتم تقدير نموذج لكل تجمع بشكل منفصل، و ذلك كما يلي: + الخطوة M : يتم استعمال الاحتمالات البعدية Qi(z(i)) كأوزان خاصة لكل مجموعة (cluster) على النقط x(i)، لكي يتم تقدير نموذج لكل مجموعة بشكل منفصل، و ذلك كما يلي:

**13. [Gaussians initialization, Expectation step, Maximization step, Convergence]**
-[ استهلالات غاوسية، خطوة التوقع، خطوة التعظيم، تقارب] +[استهلالات جاوسية، خطوة القيمة المتوقعة، خطوة التعظيم، التقارب]

**14. k-means clustering**
-تجميع k-متوسطات +التجميع بالمتوسطات k (k-mean clustering)

**15. We note c(i) the cluster of data point i and μj the center of cluster j.**
-نرمز تجمع النقط i ب c(i) ، و نرمز ب μj j مركز التجمع +نرمز لمجموعة النقط i بـ c(i)، ونرمز بـ μj مركز المجموعات j.

**16. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:**
-بعد الاستهلال العشوائي لمتوسطات التجمعات μ1,μ2,...,μk∈Rn، خوارزمية تجميع k-متوسطات تكرر الخطوة التالية حتى التقارب +خوارزمية - بعد الاستهلال العشوائي للنقاط المركزية (centroids) للمجوعات μ1,μ2,...,μk∈Rn، التجميع بالمتوسطات k تكرر الخطوة التالية حتى التقارب:

**17. [Means initialization, Cluster assignment, Means update, Convergence]**
-[استهلال المتوسطات، تعيين تجمع، تحديث المتوسطات، التقارب]
+[استهلال المتوسطات، تعيين المجموعات، تحديث المتوسطات، التقارب] +
**18. Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:**
- دالة التشويه - لكي نتأكد من أن الخوارزمية تقاربت، ننظر إلى دالة التشويه المعرفة كما يلي: +دالة التحريف (distortion function) - لكي نتأكد من أن الخوارزمية تقاربت، ننظر إلى دالة التحريف المعرفة كما يلي:

@@ -134,93 +138,95 @@ **20. Algorithm ― It is a clustering algorithm with an agglomerative hierarchical approach that build nested clusters in a successive manner.**
- خوارزمية - هي عبارة عن خوارزمية تجميع تعتمد على طريقة تجميعية هرمية تبني مجموعات متداخلة بشكل متتال +خوارزمية - هي عبارة عن خوارزمية تجميع تعتمد على طريقة تجميع هرمية تبني مجموعات متداخلة بشكل متتال.

**21. Types ― There are different sorts of hierarchical clustering algorithms that aims at optimizing different objective functions, which is summed up in the table below:**
-أنواع هنالك عدة أنواع من خوارزميات التجميع الهرمي التي ترمي إلى تحسين دوال هدف مختلفة، هاته الأنواع ملخصة في الجدول أسفله +الأنواع - هنالك عدة أنواع من خوارزميات التجميع الهرمي التي ترمي إلى تحسين دوال هدف (objective functions) مختلفة، هذه الأنواع ملخصة في الجدول التالي:

**22. [Ward linkage, Average linkage, Complete linkage]**
-[الربط البَينِي، الربط المتوسط، الربط الكامل]
+[ربط وارْد (ward linkage)، الربط المتوسط، الربط الكامل] +
**23. [Minimize within cluster distance, Minimize average distance between cluster pairs, Minimize maximum distance of between cluster pairs]**
-[تقليل داخل مسافة التجمع، تقليل متوسط المسافات بين أزواج التجمعات، تقليل المسافة القصوى بين أزواج التجمعات]
+[تصغير المسافة داخل المجموعة، تصغير متوسط المسافة بين أزواج المجموعات، تصغير المسافة العظمى بين أزواج المجموعات]
**24. Clustering assessment metrics**
-مقاييس تقدير التجميع +مقاييس تقدير المجموعات

**25. In an unsupervised learning setting, it is often hard to assess the performance of a model since we don't have the ground truth labels as was the case in the supervised learning setting.**
-في إعداد للتعلم بدون إشراف، من الصعب غالبا تقدير أداء نموذج ما لأننا لا نتوفر على القيم الحقيقية كما كان الحال في إعداد التعلم تحت إشراف -
+في التعلّم غير المُوَجَّه من الصعب غالباً تقدير أداء نموذج ما، لأن القيم الحقيقية تكون غير متوفرة كما هو الحال في التعلًم المُوَجَّه.
**26. Silhouette coefficient ― By noting a and b the mean distance between a sample and all other points in the same class, and between a sample and all other points in the next nearest cluster, the silhouette coefficient s for a single sample is defined as follows:**
-المعامل الظِلِّي - إذا رمزنا aو b متوسط المسافة بين عينة و كل النقط المنتمية لنفس الصنف، و بين عينة و كل النقط المنتمية لأقرب صنف، المعامل الظِلِّي s لعينة وحيدة معرف كالتالي: +معامل الظّل (silhouette coefficient) - إذا رمزنا a و b لمتوسط المسافة بين عينة وكل النقط المنتمية لنفس الصنف، و بين عينة وكل النقط المنتمية لأقرب مجموعة، المعامل الظِلِّي s لعينة واحدة معرف كالتالي:

**27. Calinski-Harabaz index ― By noting k the number of clusters, Bk and Wk the between and within-clustering dispersion matrices respectively defined as**
-مؤشر كالينسكي هاراباز - إذا رمزنا بk لعدد التجمعات، Bk و Wk مصفوفات التشتت بين التجمعات و داخلها معرفة كالتالي:
+مؤشر كالينسكي-هارباز (Calinski-Harabaz index) - إذا رمزنا بـ k لعدد المجموعات، فإن Bk و Wk مصفوفتي التشتت بين المجموعات وداخلها تعرف كالتالي: +
**28. the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:**
-مؤشر كالينسكي هاراباز s(k) يعطي تقييما للتجمعات الناتجة عن نموذج تجميعي، بحيث كلما كان التقييم أعلى كلما دل ذلك على أن التجمعات أكثر كثافة و أكثر انفصالا. هذا المؤشر معرّف كالتالي
+مؤشر كالينسكي-هارباز s(k) يشير إلى جودة نموذج تجميعي في تعريف مجموعاته، بحيث كلما كانت النتيجة أعلى كلما دل ذلك على أن المجموعات أكثر كثافة وأكثر انفصالاً فيما بينها. هذا المؤشر معرّف كالتالي: +
**29. Dimension reduction**
-تخفيض الأبعاد
+تقليص الأبعاد
**30. Principal component analysis**
-تحليل المكون الرئيسي +تحليل المكون الرئيس

**31. It is a dimension reduction technique that finds the variance maximizing directions onto which to project the data.**
-إنها تقنية لخفض الأبعاد ترمي إلى إيجاد الاتجاهات المكبرة للتباين و التي تسقط عليها البيانات +إنها طريقة لتقليص الأبعاد ترمي إلى إيجاد الاتجاهات المعظمة للتباين من أجل إسقاط البيانات عليها.

**32. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:**
- قيمة ذاتية، متجه ذاتي - لتكن A∈Rn×n مصفوفة ، نقول أن λ قيمة ذاتية للمصفوفة A إذا وُجِد متجه z∈Rn∖{0} يسمى متجها ذاتيا، بحيث: +قيمة ذاتية (eigenvalue)، متجه ذاتي (eigenvector) - لتكن A∈Rn×n مصفوفة، نقول أن λ قيمة ذاتية للمصفوفة A إذا وُجِد متجه z∈Rn∖{0} يسمى متجهاً ذاتياً، بحيث:

**33. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:**
- نظرية الطّيف لتكن A∈Rn×n. إذا كانت A متماثلة فإنها شبه قطرية بمصفوفة متعامدة U∈Rn×n. إذا رمزنا Λ=diag(λ1,...,λn) ، لدينا: +مبرهنة الطّيف (Spectral theorem) - لتكن A∈Rn×n. إذا كانت A متناظرة فإنها يمكن أن تكون شبه قطرية عن طريق مصفوفة متعامدة حقيقية U∈Rn×n. إذا رمزنا Λ=diag(λ1,...,λn) ، لدينا:

@@ -234,7 +240,7 @@ **35. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.**
-ملحوظة: المتجه الذاتي المرتبط بأكبر قيمة ذاتية يسمى بالمتجه الذاتي الرئيسي للمصفوفة A +ملحوظة: المتجه الذاتي المرتبط بأكبر قيمة ذاتية يسمى بالمتجه الذاتي الرئيسي (principal eigenvector) للمصفوفة A.

@@ -242,41 +248,42 @@ dimensions by maximizing the variance of the data as follows:**
-خوارزمية - تحليل المكون الرئيسي تقنية لخفض الأبعاد تهدف إلى إسقاط البيانات على k بعد بحيث يتم تكبير التباين، خطواتها كالتالي:
+خوارزمية - تحليل المكون الرئيس (Principal Component Analysis (PCA)) طريقة لخفض الأبعاد تهدف إلى إسقاط البيانات على k بُعد بحيث يتم تعظيم التباين (variance)، خطواتها كالتالي:
+<br>
**37. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.**
-الخطوة 1: تسوية البيانات بحيث تصبح ذات متوسط يساوي صفر و انحراف معياري يساوي واحد -
+الخطوة 1: تسوية البيانات بحيث تصبح ذات متوسط يساوي صفر وانحراف معياري يساوي واحد. +
**38. Step 2: Compute Σ=1mm∑i=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.**
-الخطوة 2: حساب Σ=1mm∑i=1x(i)x(i)T∈Rn×n ، و هي متماثلة و ذات قيم ذاتية حقيقية +الخطوة 2: حساب Σ=1mm∑i=1x(i)x(i)T∈Rn×n، وهي متناظرة وذات قيم ذاتية حقيقية.

**39. Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.**
-الخطوة 3: حساب u1,...,uk∈Rn المتجهات الذاتية الرئيسية المتعامدة لΣ و عددها k ، يعني k من المتجهات الذاتية المتعامدة ذات القيم الذاتية الأكبر -
-
+الخطوة 3: حساب u1,...,uk∈Rn المتجهات الذاتية الرئيسية المتعامدة لـ Σ وعددها k ، بعبارة أخرى، k من المتجهات الذاتية المتعامدة ذات القيم الذاتية الأكبر. + +
**40. Step 4: Project the data on spanR(u1,...,uk).**
-الخطوة 4: إسقاط البيانات على spanR(u1,...,uk) +الخطوة 4: إسقاط البيانات على spanR(u1,...,uk).

**41. This procedure maximizes the variance among all k-dimensional spaces.**
+هذا الإجراء يعظم التباين من بين جميع الفضاءات ذات k بُعد.

@@ -297,59 +304,55 @@ dimensions by maximizing the variance of the data as follows:** **44. It is a technique meant to find the underlying generating sources.**
-هي تقنية تهدف إلى إيجاد المصادر التوليدية الكامنة +هي طريقة تهدف إلى إيجاد المصادر التوليدية الكامنة.

**45. Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:**
-افتراضات - لنفترض أن بياناتنا x تم توليدها من طرف s=(s1,...,sn) المصدر المتجهي ال n بعدي، بحيث متغيرات عشوائية مستقلة، و ذلك عبر مصفوفة خلط غير منفردة A -كالتالي +افتراضات - لنفترض أن بياناتنا x تم توليدها عن طريق المتجه المصدر s=(s1,...,sn) ذا n بُعد، حيث si متغيرات عشوائية مستقلة، وذلك عبر مصفوفة خلط غير منفردة (mixing and non-singular) A كالتالي:

**46. The goal is to find the unmixing matrix W=A−1.**
-الهدف هو العثور على مصفوفة الفصل W=A−1
+الهدف هو العثور على مصفوفة الفصل W=A−1. +
**47. Bell and Sejnowski ICA algorithm ― This algorithm finds the unmixing matrix W by following the steps below:**
-خوارزمية ICA -Bell و Sejnowski ل -هاته الخوارزمية تجد مصفوفة الفصل W عن طريق الخطوات التالية +خوارزمية تحليل المكونات المستقلة (ICA) لبيل وسجنوسكي (Bell and Sejnowski) - هذه الخوارزمية تجد مصفوفة الفصل W عن طريق الخطوات التالية:

**48. Write the probability of x=As=W−1s as:**
-اكتب احتمال x=As=W−1s كالتالي
+اكتب الاحتمال لـ x=As=W−1s كالتالي: +
**49. Write the log likelihood given our training data {x(i),i∈[[1,m]]} and by noting g the sigmoid function as:**
- لتكن {x(i),i∈[[1,m]]} -بيانات التمرن -و g دالة سيجمويد -اكتب الأرجحية اللوغاريتمية كالتالي +لتكن {x(i),i∈[[1,m]]} بيانات التمرن و g دالة سيجمويد، اكتب الأرجحية اللوغاريتمية (log likelihood) كالتالي:

**50. Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:**
-و منه، قاعدة التعلم للصعود التفاضلي العشوائي تقتضي أن لكل مثال تمرين x(i) ، نقوم بتحديث W كما يلي +هكذا، باستخدام الصعود الاشتقاقي العشوائي (stochastic gradient ascent)، لكل عينة تدريب x(i) نقوم بتحديث W كما يلي:

**51. The Machine Learning cheatsheets are now available in Arabic.**
-ورقات المراجعة للتعلم الآلي متوفرة حاليا باللغة العربية +المرجع السريع لتعلم الآلة متوفر الآن باللغة العربية.

@@ -363,38 +366,34 @@ Bell و Sejnowski ل **53. Translated by X, Y and Z**
-تم ترجمته بواسطة X, Y و Z
+تمت الترجمة بواسطة X، Y و Z
<br>
**54. Reviewed by X, Y and Z**
-تم مراجعته بواسطة X, Y و Z
+تمت المراجعة بواسطة X، Y و Z
+<br>
**55. [Introduction, Motivation, Jensen's inequality]**
-[تقديم، تحفيز، متفاوتة جنسن] +[مقدمة، الحافز، متباينة جينسن]

**56. [Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]**
-[تجميع, - التوقع-التعظيم - , k-متوسطات - , التجميع الهرمي - , مقاييس] - +[التجميع، تعظيم القيمة المتوقعة، تجميع k-متوسطات، التجميع الهرمي، مقاييس]

**57. [Dimension reduction, PCA, ICA]**
-[خفض الأبعاد, PCA, ICA] +[تقليص الأبعاد، تحليل المكون الرئيس (PCA)، تحليل المكونات المستقلة (ICA)]

From 70ba0e4969d7f9fd3b9493e737e1d97ccff9b0c0 Mon Sep 17 00:00:00 2001 From: Shervine Amidi Date: Mon, 4 Nov 2019 22:18:06 -0800 Subject: [PATCH 09/10] Add reviewer to contributors --- CONTRIBUTORS | 128 +++++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 123 insertions(+), 5 deletions(-) diff --git a/CONTRIBUTORS b/CONTRIBUTORS index 8dfa394f9..ccff563f5 100644 --- a/CONTRIBUTORS +++ b/CONTRIBUTORS @@ -1,6 +1,19 @@ ---ar + --ar + Amjad Khatabi (translation of deep learning) + Zaid Alyafeai (review of deep learning) + + Zaid Alyafeai (translation of linear algebra) + Amjad Khatabi (review of linear algebra) + Mazen Melibari (review of linear algebra) + + Fares Al-Quaneier (translation of machine learning tips and tricks) + Zaid Alyafeai (review of machine learning tips and tricks) + + Fares Al-Quaneier (translation of supervised learning) + Zaid Alyafeai (review of supervised learning) Redouane Lguensat (translation of unsupervised learning) + Fares Al-Quaneier (review of unsupervised learning) --de @@ -38,9 +51,16 @@ Fernando Diaz (review of unsupervised learning) --fa + AlisterTA (translation of convolutional neural networks) + Ehsan Kermani (translation of convolutional neural networks) + Erfan Noury (review of convolutional neural networks) + AlisterTA (translation of deep learning) Mohammad Karimi (review of deep learning) Erfan Noury (review of deep learning) + + AlisterTA (translation of deep learning tips and tricks) + Erfan Noury (review of deep learning tips and tricks) Erfan Noury (translation of linear algebra) Mohammad Karimi (review of linear algebra) @@ -52,7 +72,10 @@ Erfan Noury (translation of probabilities and statistics) Mohammad Karimi (review of probabilities and statistics) - + + AlisterTA (translation of recurrent neural networks) + Erfan Noury (review of recurrent neural networks) + Amirhosein Kazemnejad (translation of supervised learning) Erfan Noury (review of supervised learning) Mohammad Karimi (review of supervised 
learning) @@ -67,17 +90,59 @@ --hi ---ja +--id + Prasetia Utama Putra (translation of convolutional neural networks) + Gunawan Tri (review of convolutional neural networks) +--ko + Wooil Jeong (translation of machine learning tips and tricks) + + Wooil Jeong (translation of probabilities and statistics) + + Kwang Hyeok Ahn (translation of unsupervised learning) + +--ja + Tran Tuan Anh (translation of convolutional neural networks) + Yoshiyuki Nakai (review of convolutional neural networks) + Linh Dang (review of convolutional neural networks) + + Kamuela Lau (translation of deep learning tips and tricks) + Yoshiyuki Nakai (review of deep learning tips and tricks) + Hiroki Mori (review of deep learning tips and tricks) + + Robert Altena (translation of linear algebra) + Kamuela Lau (review of linear algebra) + + Takatoshi Nao (translation of probabilities and statistics) + Yuta Kanzawa (review of probabilities and statistics) + + H. Hamano (translation of recurrent neural networks) + Yoshiyuki Nakai (review of recurrent neural networks) + + Yuta Kanzawa (translation of supervised learning) + Tran Tuan Anh (review of supervised learning) + + Tran Tuan Anh (translation of unsupervised learning) + Yoshiyuki Nakai (review of unsupervised learning) + Yuta Kanzawa (review of unsupervised learning) + Dan Lillrank (review of unsupervised learning) + --pt + Leticia Portella (translation of convolutional neural networks) + Gabriel Aparecido Fonseca (review of convolutional neural networks) + Gabriel Fonseca (translation of deep learning) Leticia Portella (review of deep learning) Gabriel Fonseca (translation of linear algebra) Leticia Portella (review of linear algebra) + + Fernando Santos (translation of machine learning tips and tricks) + Leticia Portella (review of machine learning tips and tricks) + Gabriel Fonseca (review of machine learning tips and tricks) - Leticia Portella (translation of probability) - Flavio Clesio (review of probability) + Leticia Portella 
(translation of probabilities and statistics) + Flavio Clesio (review of probabilities and statistics) Leticia Portella (translation of supervised learning) Gabriel Fonseca (review of supervised learning) @@ -87,12 +152,50 @@ Tiago Danin (review of unsupervised learning) --tr + Ayyüce Kızrak (translation of convolutional neural networks) + Yavuz Kömeçoğlu (review of convolutional neural networks) + Ekrem Çetinkaya (translation of deep learning) Omer Bukte (review of deep learning) + Ayyüce Kızrak (translation of deep learning tips and tricks) + Yavuz Kömeçoğlu (review of deep learning tips and tricks) + Kadir Tekeli (translation of linear algebra) Ekrem Çetinkaya (review of linear algebra) + Ayyüce Kızrak (translation of logic-based models) + Başak Buluz (review of logic-based models) + + Seray Beşer (translation of machine learning tips and tricks) + Ayyüce Kızrak (review of machine learning tips and tricks) + Yavuz Kömeçoğlu (review of machine learning tips and tricks) + + Ayyüce Kızrak (translation of probabilities and statistics) + Başak Buluz (review of probabilities and statistics) + + Başak Buluz (translation of recurrent neural networks) + Yavuz Kömeçoğlu (review of recurrent neural networks) + + Yavuz Kömeçoğlu (translation of reflex-based models) + Ayyüce Kızrak (review of reflex-based models) + + Cemal Gurpinar (translation of states-based models) + Başak Buluz (review of states-based models) + + Başak Buluz (translation of supervised learning) + Ayyüce Kızrak (review of supervised learning) + + Yavuz Kömeçoğlu (translation of unsupervised learning) + Başak Buluz (review of unsupervised learning) + + Başak Buluz (translation of variables-based models) + Ayyüce Kızrak (review of variables-based models) + +--uk + Gregory Reshetniak (translation of probabilities and statistics) + Denys (review of probabilities and statistics) + --zh Wang Hongnian (translation of supervised learning) Xiaohu Zhu (朱小虎) (review of supervised learning) @@ -102,3 +205,18 @@ 
kevingo (translation of deep learning) TobyOoO (review of deep learning) + kevingo (translation of linear algebra) + Miyaya (review of linear algebra) + + kevingo (translation of probabilities and statistics) + johnnychhsu (review of probabilities and statistics) + + kevingo (translation of supervised learning) + accelsao (review of supervised learning) + + kevingo (translation of unsupervised learning) + imironhead (review of unsupervised learning) + johnnychhsu (review of unsupervised learning) + + kevingo (translation of machine learning tips and tricks) + kentropy (review of machine learning tips and tricks) From 9b0f8187cfa246b29d297b269a17570ee3558a7e Mon Sep 17 00:00:00 2001 From: shervinea Date: Mon, 4 Nov 2019 22:37:13 -0800 Subject: [PATCH 10/10] Synchronize branch --- .DS_Store | Bin 0 -> 6148 bytes CONTRIBUTORS | 74 +- README.md | 126 ++- ...tsheet-machine-learning-tips-and-tricks.md | 285 ----- ar/cheatsheet-supervised-learning.md | 567 ---------- ar/cs-229-deep-learning.md | 323 ++++++ ar/cs-229-linear-algebra.md | 413 ++++++++ ar/cs-229-machine-learning-tips-and-tricks.md | 338 ++++++ ar/cs-229-supervised-learning.md | 663 ++++++++++++ ...ing.md => cs-229-unsupervised-learning.md} | 12 +- ar/refresher-linear-algebra.md | 339 ------ ar/refresher-probability.md | 381 ------- de/cheatsheet-deep-learning.md | 321 ------ ...tsheet-machine-learning-tips-and-tricks.md | 285 ----- de/cheatsheet-unsupervised-learning.md | 340 ------ ...ep-learning.md => cs-229-deep-learning.md} | 0 ...ar-algebra.md => cs-229-linear-algebra.md} | 0 ...s-229-machine-learning-tips-and-tricks.md} | 0 ...r-probability.md => cs-229-probability.md} | 0 ...rning.md => cs-229-supervised-learning.md} | 0 ...ing.md => cs-229-unsupervised-learning.md} | 0 ...ep-learning.md => cs-229-deep-learning.md} | 0 ...ar-algebra.md => cs-229-linear-algebra.md} | 0 ...s-229-machine-learning-tips-and-tricks.md} | 0 ...r-probability.md => cs-229-probability.md} | 0 ...rning.md => 
cs-229-supervised-learning.md} | 0 ...ing.md => cs-229-unsupervised-learning.md} | 0 fa/cs-230-convolutional-neural-networks.md | 923 +++++++++++++++++ fa/cs-230-deep-learning-tips-and-tricks.md | 586 +++++++++++ fa/cs-230-recurrent-neural-networks.md | 868 ++++++++++++++++ fr/cs-221-logic-models.md | 462 +++++++++ fr/cs-221-reflex-models.md | 539 ++++++++++ fr/cs-221-states-models.md | 980 ++++++++++++++++++ fr/cs-221-variables-models.md | 617 +++++++++++ ...ep-learning.md => cs-229-deep-learning.md} | 4 +- ...ar-algebra.md => cs-229-linear-algebra.md} | 12 +- ...s-229-machine-learning-tips-and-tricks.md} | 2 +- ...r-probability.md => cs-229-probability.md} | 4 +- ...rning.md => cs-229-supervised-learning.md} | 12 +- ...ing.md => cs-229-unsupervised-learning.md} | 14 +- fr/cs-230-convolutional-neural-networks.md | 716 +++++++++++++ fr/cs-230-deep-learning-tips-and-tricks.md | 457 ++++++++ fr/cs-230-recurrent-neural-networks.md | 678 ++++++++++++ he/cheatsheet-deep-learning.md | 321 ------ ...tsheet-machine-learning-tips-and-tricks.md | 285 ----- he/cheatsheet-supervised-learning.md | 567 ---------- he/refresher-probability.md | 381 ------- hi/cheatsheet-deep-learning.md | 321 ------ hi/cheatsheet-supervised-learning.md | 567 ---------- hi/cheatsheet-unsupervised-learning.md | 340 ------ hi/refresher-linear-algebra.md | 339 ------ hi/refresher-probability.md | 381 ------- id/cs-230-convolutional-neural-networks.md | 715 +++++++++++++ .../cs-229-linear-algebra.md | 115 +- ja/cs-229-probability.md | 381 +++++++ ja/cs-229-supervised-learning.md | 567 ++++++++++ ja/cs-229-unsupervised-learning.md | 339 ++++++ ja/cs-230-convolutional-neural-networks.md | 717 +++++++++++++ ja/cs-230-deep-learning-tips-and-tricks.md | 457 ++++++++ ja/cs-230-recurrent-neural-networks.md | 678 ++++++++++++ ko/cs-229-linear-algebra.md | 340 ++++++ ko/cs-229-machine-learning-tips-and-tricks.md | 285 +++++ ko/cs-229-probability.md | 381 +++++++ ko/cs-229-unsupervised-learning.md | 340 ++++++ 
...tsheet-machine-learning-tips-and-tricks.md | 285 ----- ...ep-learning.md => cs-229-deep-learning.md} | 0 ...ar-algebra.md => cs-229-linear-algebra.md} | 0 pt/cs-229-machine-learning-tips-and-tricks.md | 284 +++++ ...r-probability.md => cs-229-probability.md} | 0 ...rning.md => cs-229-supervised-learning.md} | 0 ...ing.md => cs-229-unsupervised-learning.md} | 0 pt/cs-230-convolutional-neural-networks.md | 718 +++++++++++++ ru/cheatsheet-deep-learning.md | 321 ------ ...tsheet-machine-learning-tips-and-tricks.md | 285 ----- ru/cheatsheet-supervised-learning.md | 567 ---------- ru/cheatsheet-unsupervised-learning.md | 340 ------ ru/refresher-linear-algebra.md | 339 ------ ru/refresher-probability.md | 381 ------- template/cheatsheet-deep-learning.md | 321 ------ ...tsheet-machine-learning-tips-and-tricks.md | 285 ----- template/cheatsheet-supervised-learning.md | 567 ---------- template/cheatsheet-unsupervised-learning.md | 340 ------ template/cs-221-logic-models.md | 462 +++++++++ template/cs-221-reflex-models.md | 539 ++++++++++ template/cs-221-states-models.md | 980 ++++++++++++++++++ template/cs-221-variables-models.md | 617 +++++++++++ .../cs-229-deep-learning.md | 4 + .../cs-229-linear-algebra.md | 4 + ...cs-229-machine-learning-tips-and-tricks.md | 4 + .../cs-229-probability.md | 4 + .../cs-229-supervised-learning.md | 4 + .../cs-229-unsupervised-learning.md | 6 +- .../cs-230-convolutional-neural-networks.md | 716 +++++++++++++ .../cs-230-deep-learning-tips-and-tricks.md | 457 ++++++++ template/cs-230-recurrent-neural-networks.md | 677 ++++++++++++ template/refresher-linear-algebra.md | 339 ------ template/refresher-probability.md | 381 ------- ...tsheet-machine-learning-tips-and-tricks.md | 285 ----- tr/cheatsheet-supervised-learning.md | 567 ---------- tr/cheatsheet-unsupervised-learning.md | 340 ------ tr/cs-221-logic-models.md | 462 +++++++++ tr/cs-221-reflex-models.md | 538 ++++++++++ tr/cs-221-states-models.md | 980 ++++++++++++++++++ 
tr/cs-221-variables-models.md | 617 +++++++++++ ...ep-learning.md => cs-229-deep-learning.md} | 16 +- ...ar-algebra.md => cs-229-linear-algebra.md} | 0 tr/cs-229-machine-learning-tips-and-tricks.md | 290 ++++++ tr/cs-229-probability.md | 381 +++++++ tr/cs-229-supervised-learning.md | 567 ++++++++++ tr/cs-229-unsupervised-learning.md | 340 ++++++ tr/cs-230-convolutional-neural-networks.md | 712 +++++++++++++ tr/cs-230-deep-learning-tips-and-tricks.md | 450 ++++++++ tr/cs-230-recurrent-neural-networks.md | 674 ++++++++++++ tr/refresher-probability.md | 381 ------- uk/cs-229-probability.md | 381 +++++++ ...ep-learning.md => cs-229-deep-learning.md} | 0 .../cs-229-linear-algebra.md | 115 +- ...cs-229-machine-learning-tips-and-tricks.md | 116 +-- .../cs-229-probability.md | 139 +-- zh-tw/cs-229-supervised-learning.md | 352 +++++++ .../cs-229-unsupervised-learning.md | 141 +-- zh/cheatsheet-deep-learning.md | 321 ------ ...rning.md => cs-229-supervised-learning.md} | 0 123 files changed, 26426 insertions(+), 13124 deletions(-) create mode 100644 .DS_Store delete mode 100644 ar/cheatsheet-machine-learning-tips-and-tricks.md delete mode 100644 ar/cheatsheet-supervised-learning.md create mode 100644 ar/cs-229-deep-learning.md create mode 100644 ar/cs-229-linear-algebra.md create mode 100644 ar/cs-229-machine-learning-tips-and-tricks.md create mode 100644 ar/cs-229-supervised-learning.md rename ar/{cheatsheet-unsupervised-learning.md => cs-229-unsupervised-learning.md} (99%) delete mode 100644 ar/refresher-linear-algebra.md delete mode 100644 ar/refresher-probability.md delete mode 100644 de/cheatsheet-deep-learning.md delete mode 100644 de/cheatsheet-machine-learning-tips-and-tricks.md delete mode 100644 de/cheatsheet-unsupervised-learning.md rename es/{cheatsheet-deep-learning.md => cs-229-deep-learning.md} (100%) rename es/{refresher-linear-algebra.md => cs-229-linear-algebra.md} (100%) rename es/{cheatsheet-machine-learning-tips-and-tricks.md => 
cs-229-machine-learning-tips-and-tricks.md} (100%) rename es/{refresher-probability.md => cs-229-probability.md} (100%) rename es/{cheatsheet-supervised-learning.md => cs-229-supervised-learning.md} (100%) rename es/{cheatsheet-unsupervised-learning.md => cs-229-unsupervised-learning.md} (100%) rename fa/{cheatsheet-deep-learning.md => cs-229-deep-learning.md} (100%) rename fa/{refresher-linear-algebra.md => cs-229-linear-algebra.md} (100%) rename fa/{cheatsheet-machine-learning-tips-and-tricks.md => cs-229-machine-learning-tips-and-tricks.md} (100%) rename fa/{refresher-probability.md => cs-229-probability.md} (100%) rename fa/{cheatsheet-supervised-learning.md => cs-229-supervised-learning.md} (100%) rename fa/{cheatsheet-unsupervised-learning.md => cs-229-unsupervised-learning.md} (100%) create mode 100644 fa/cs-230-convolutional-neural-networks.md create mode 100644 fa/cs-230-deep-learning-tips-and-tricks.md create mode 100644 fa/cs-230-recurrent-neural-networks.md create mode 100644 fr/cs-221-logic-models.md create mode 100644 fr/cs-221-reflex-models.md create mode 100644 fr/cs-221-states-models.md create mode 100644 fr/cs-221-variables-models.md rename fr/{cheatsheet-deep-learning.md => cs-229-deep-learning.md} (95%) rename fr/{refresher-linear-algebra.md => cs-229-linear-algebra.md} (92%) rename fr/{cheatsheet-machine-learning-tips-and-tricks.md => cs-229-machine-learning-tips-and-tricks.md} (99%) rename fr/{refresher-probability.md => cs-229-probability.md} (98%) rename fr/{cheatsheet-supervised-learning.md => cs-229-supervised-learning.md} (96%) rename fr/{cheatsheet-unsupervised-learning.md => cs-229-unsupervised-learning.md} (95%) create mode 100644 fr/cs-230-convolutional-neural-networks.md create mode 100644 fr/cs-230-deep-learning-tips-and-tricks.md create mode 100644 fr/cs-230-recurrent-neural-networks.md delete mode 100644 he/cheatsheet-deep-learning.md delete mode 100644 he/cheatsheet-machine-learning-tips-and-tricks.md delete mode 100644 
he/cheatsheet-supervised-learning.md delete mode 100644 he/refresher-probability.md delete mode 100644 hi/cheatsheet-deep-learning.md delete mode 100644 hi/cheatsheet-supervised-learning.md delete mode 100644 hi/cheatsheet-unsupervised-learning.md delete mode 100644 hi/refresher-linear-algebra.md delete mode 100644 hi/refresher-probability.md create mode 100644 id/cs-230-convolutional-neural-networks.md rename he/refresher-linear-algebra.md => ja/cs-229-linear-algebra.md (51%) create mode 100644 ja/cs-229-probability.md create mode 100644 ja/cs-229-supervised-learning.md create mode 100644 ja/cs-229-unsupervised-learning.md create mode 100644 ja/cs-230-convolutional-neural-networks.md create mode 100644 ja/cs-230-deep-learning-tips-and-tricks.md create mode 100644 ja/cs-230-recurrent-neural-networks.md create mode 100644 ko/cs-229-linear-algebra.md create mode 100644 ko/cs-229-machine-learning-tips-and-tricks.md create mode 100644 ko/cs-229-probability.md create mode 100644 ko/cs-229-unsupervised-learning.md delete mode 100644 pt/cheatsheet-machine-learning-tips-and-tricks.md rename pt/{cheatsheet-deep-learning.md => cs-229-deep-learning.md} (100%) rename pt/{refresher-linear-algebra.md => cs-229-linear-algebra.md} (100%) create mode 100644 pt/cs-229-machine-learning-tips-and-tricks.md rename pt/{refresher-probability.md => cs-229-probability.md} (100%) rename pt/{cheatsheet-supervised-learning.md => cs-229-supervised-learning.md} (100%) rename pt/{cheatsheet-unsupervised-learning.md => cs-229-unsupervised-learning.md} (100%) create mode 100644 pt/cs-230-convolutional-neural-networks.md delete mode 100644 ru/cheatsheet-deep-learning.md delete mode 100644 ru/cheatsheet-machine-learning-tips-and-tricks.md delete mode 100644 ru/cheatsheet-supervised-learning.md delete mode 100644 ru/cheatsheet-unsupervised-learning.md delete mode 100644 ru/refresher-linear-algebra.md delete mode 100644 ru/refresher-probability.md delete mode 100644 template/cheatsheet-deep-learning.md 
delete mode 100644 template/cheatsheet-machine-learning-tips-and-tricks.md delete mode 100644 template/cheatsheet-supervised-learning.md delete mode 100644 template/cheatsheet-unsupervised-learning.md create mode 100644 template/cs-221-logic-models.md create mode 100644 template/cs-221-reflex-models.md create mode 100644 template/cs-221-states-models.md create mode 100644 template/cs-221-variables-models.md rename ar/cheatsheet-deep-learning.md => template/cs-229-deep-learning.md (98%) rename de/refresher-linear-algebra.md => template/cs-229-linear-algebra.md (97%) rename hi/cheatsheet-machine-learning-tips-and-tricks.md => template/cs-229-machine-learning-tips-and-tricks.md (97%) rename de/refresher-probability.md => template/cs-229-probability.md (98%) rename de/cheatsheet-supervised-learning.md => template/cs-229-supervised-learning.md (98%) rename he/cheatsheet-unsupervised-learning.md => template/cs-229-unsupervised-learning.md (96%) create mode 100644 template/cs-230-convolutional-neural-networks.md create mode 100644 template/cs-230-deep-learning-tips-and-tricks.md create mode 100644 template/cs-230-recurrent-neural-networks.md delete mode 100644 template/refresher-linear-algebra.md delete mode 100644 template/refresher-probability.md delete mode 100644 tr/cheatsheet-machine-learning-tips-and-tricks.md delete mode 100644 tr/cheatsheet-supervised-learning.md delete mode 100644 tr/cheatsheet-unsupervised-learning.md create mode 100644 tr/cs-221-logic-models.md create mode 100644 tr/cs-221-reflex-models.md create mode 100644 tr/cs-221-states-models.md create mode 100644 tr/cs-221-variables-models.md rename tr/{cheatsheet-deep-learning.md => cs-229-deep-learning.md} (92%) rename tr/{refresher-linear-algebra.md => cs-229-linear-algebra.md} (100%) create mode 100644 tr/cs-229-machine-learning-tips-and-tricks.md create mode 100644 tr/cs-229-probability.md create mode 100644 tr/cs-229-supervised-learning.md create mode 100644 tr/cs-229-unsupervised-learning.md 
create mode 100644 tr/cs-230-convolutional-neural-networks.md create mode 100644 tr/cs-230-deep-learning-tips-and-tricks.md create mode 100644 tr/cs-230-recurrent-neural-networks.md delete mode 100644 tr/refresher-probability.md create mode 100644 uk/cs-229-probability.md rename zh-tw/{cheatsheet-deep-learning.md => cs-229-deep-learning.md} (100%) rename zh/refresher-linear-algebra.md => zh-tw/cs-229-linear-algebra.md (58%) rename zh/cheatsheet-machine-learning-tips-and-tricks.md => zh-tw/cs-229-machine-learning-tips-and-tricks.md (59%) rename zh/refresher-probability.md => zh-tw/cs-229-probability.md (56%) create mode 100644 zh-tw/cs-229-supervised-learning.md rename zh/cheatsheet-unsupervised-learning.md => zh-tw/cs-229-unsupervised-learning.md (59%) delete mode 100644 zh/cheatsheet-deep-learning.md rename zh/{cheatsheet-supervised-learning.md => cs-229-supervised-learning.md} (100%) diff --git a/.DS_Store b/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..5008ddfcf53c02e82d7eee2e57c38e5672ef89f6 GIT binary patch literal 6148 34. **English blabla** > > ⟶ Translated blabla @@ -49,7 +23,85 @@ Please first check for [existing pull requests](https://github.com/shervinea/che 5. Submit a [pull request](https://help.github.com/articles/creating-a-pull-request/) and call it `[code of language name] Topic name`. For example, a translation in Spanish of the deep learning cheatsheet will be called `[es] Deep learning`. -Submissions will have to be reviewed by a fellow native speaker before being accepted. +### Reviewers +1. Go to the [list of pull requests](https://github.com/shervinea/cheatsheet-translation/pulls) and filter them by your native language (e.g. `[es]` for Spanish, `[zh]` for Mandarin Chinese). + +2. Locate pull requests where help is needed.
Those contain the tag `reviewer wanted`. + +3. Review the content line per line and add comments and suggestions when necessary. + +### Important note +Please make sure to propose the translation of **only one** cheatsheet per pull request -- it simplifies a lot the review process. + +## Progression +### CS 221 (Artificial Intelligence) +| |[Reflex models](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-221-reflex-models.md)|[States models](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-221-states-models.md)|[Variables models](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-221-variables-models.md)|[Logic models](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-221-logic-models.md)| +|:---|:---:|:---:|:---:|:---:| +|**Deutsch**|not started|not started|not started|not started| +|**Español**|not started|not started|not started|not started| +|**فارسی**|not started|not started|not started|not started| +|**Français**|done|done|done|done| +|**עִבְרִית**|not started|not started|not started|not started| +|**Italiano**|not started|not started|not started|not started| +|**日本語**|not started|not started|not started|not started| +|**한국어**|not started|not started|not started|not started| +|**Português**|not started|not started|not started|not started| +|**Türkçe**|done|done|done|done| +|**Tiếng Việt**|not started|not started|not started|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/179)| +|**简体中文**|not started|not started|not started|not started| +|**繁體中文**|not started|not started|not started|not started| + +### CS 229 (Machine Learning) +| |[Deep 
learning](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-229-deep-learning.md)|[Supervised](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-229-supervised-learning.md)|[Unsupervised](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-229-unsupervised-learning.md)|[ML tips](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-229-machine-learning-tips-and-tricks.md)|[Probabilities](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-229-probability.md)|[Algebra](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-229-linear-algebra.md)| +|:---|:---:|:---:|:---:|:---:|:---:|:---:| +|**العَرَبِيَّة**|done|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/87)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/88)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/83)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/182)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/85)| +|**Català**|not started|not started|not started|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/47)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/47)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/47)| +|**Deutsch**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/106)|not started|not started|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/135)|not started|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/136)| +|**Español**|done|done|done|done|done|done| +|**فارسی**|done|done|done|done|done|done| +|**Suomi**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/34)|not started|not started|not started|not started|not started| +|**Français**|done|done|done|done|done|done| +|**עִבְרִית**|[in 
progress](https://github.com/shervinea/cheatsheet-translation/pull/156)|not started|not started|not started|not started|not started| +|**हिन्दी**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/37)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/46)|not started|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/40)|not started|not started| +|**Magyar**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/124)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/124)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/124)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/124)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/124)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/124)| +|**Bahasa Indonesia**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/154)|not started|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/139)|not started|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/151)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/150)| +|**Italiano**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/78)|not started|not started|not started|not started|not started| +|**日本語**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/96)|done|done|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/99)|done|done| +|**한국어**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/80)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/90)|done|done|done|done| +|**Polski**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/8)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/8)|not started|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/8)|not 
started|not started| +|**Português**|done|done|done|done|done|done| +|**Русский**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/21)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/21)|not started|not started|not started|not started| +|**Türkçe**|done|done|done|done|done|done| +|**Українська**|not started|not started|not started|not started|done|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/95)| +|**Tiếng Việt**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/159)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/162)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/177)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/160)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/175)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/176)| +|**简体中文**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/12)|done|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/48)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/7)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/73)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/72)| +|**繁體中文**|done|done|done|done|done|done| + +### CS 230 (Deep Learning) +| |[Convolutional Neural Networks](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-230-convolutional-neural-networks.md)|[Recurrent Neural Networks](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-230-recurrent-neural-networks.md)|[Deep Learning tips](https://github.com/shervinea/cheatsheet-translation/blob/master/template/cs-230-deep-learning-tips-and-tricks.md)| +|:---|:---:|:---:|:---:| +|**العَرَبِيَّة**|not started|not started|not started| +|**Català**|not started|not started|not started| +|**Deutsch**|not started|not 
started|not started| +|**Español**|not started|not started|not started| +|**فارسی**|done|done|done| +|**Suomi**|not started|not started|not started| +|**Français**|done|done|done| +|**עִבְרִית**|not started|not started|not started| +|**हिन्दी**|not started|not started|not started| +|**Magyar**|not started|not started|not started| +|**Bahasa Indonesia**|done|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/152)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/153)| +|**Italiano**|not started|not started|not started| +|**日本語**|done|done|done| +|**한국어**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/109)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/107)|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/108)| +|**Polski**|not started|not started|not started| +|**Português**|done|not started|not started| +|**Русский**|not started|not started|not started| +|**Türkçe**|done|done|done| +|**Українська**|not started|not started|not started| +|**Tiếng Việt**|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/180)|not started|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/178)| +|**简体中文**|not started|[in progress](https://github.com/shervinea/cheatsheet-translation/pull/181)|not started| +|**繁體中文**|not started|not started|not started| ## Acknowledgements -Thank you everyone for your help! Please do not forget to add your name to the `CONTRIBUTORS` file so that we can give you proper credit in the cheatsheets' [official website](https://stanford.edu/~shervine/teaching/cs-229.html). +Thank you everyone for your help! Please do not forget to add your name to the `CONTRIBUTORS` file so that we can give you proper credit in the cheatsheets' [official website](https://stanford.edu/~shervine/teaching). 
diff --git a/ar/cheatsheet-machine-learning-tips-and-tricks.md b/ar/cheatsheet-machine-learning-tips-and-tricks.md deleted file mode 100644 index 9712297b8..000000000 --- a/ar/cheatsheet-machine-learning-tips-and-tricks.md +++ /dev/null @@ -1,285 +0,0 @@ -**1. Machine Learning tips and tricks cheatsheet** - -⟶ - -
- -**2. Classification metrics** - -⟶ - -
- -**3. In the context of binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** - -⟶ - -
- -**4. Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:** - -⟶ - -
- -**5. [Predicted class, Actual class]** - -⟶ - -
- -**6. Main metrics ― The following metrics are commonly used to assess the performance of classification models:** - -⟶ - -
- -**7. [Metric, Formula, Interpretation]** - -⟶ - -
- -**8. Overall performance of model** - -⟶ - -
- -**9. How accurate the positive predictions are** - -⟶ - -
- -**10. Coverage of actual positive sample** - -⟶ - -
- -**11. Coverage of actual negative sample** - -⟶ - -
- -**12. Hybrid metric useful for unbalanced classes** - -⟶ - -
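The classification metrics listed in this (now deleted) template — overall performance, precision of positive predictions, coverage of actual positives/negatives, and a hybrid metric for unbalanced classes — all derive from the four confusion-matrix counts. A minimal plain-Python sketch, with made-up counts for illustration:

```python
# Classification metrics from a binary confusion matrix.
# tp, fp, fn, tn are hypothetical counts chosen for illustration only.
def classification_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)          # overall performance of model
    precision = tp / (tp + fp)                          # how accurate the positive predictions are
    recall = tp / (tp + fn)                             # coverage of actual positive sample
    specificity = tn / (tn + fp)                        # coverage of actual negative sample
    f1 = 2 * precision * recall / (precision + recall)  # hybrid metric for unbalanced classes
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

print(classification_metrics(tp=40, fp=10, fn=20, tn=30))
```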
- -**13. ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are summed up in the table below:** - -⟶ - -
- -**14. [Metric, Formula, Equivalent]** - -⟶ - -
- -**15. AUC ― The area under the receiver operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:** - -⟶ - -
- -**16. [Actual, Predicted]** - -⟶ - -
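The ROC/AUC entries above can be sketched directly: sweep the decision threshold to trace (FPR, TPR) points, then integrate the area under the curve with the trapezoidal rule. The labels and scores below are invented for illustration:

```python
# ROC curve by threshold sweep and AUC by trapezoidal integration.
def roc_auc(labels, scores):
    # Sort thresholds from high to low so FPR/TPR grow monotonically.
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    for t in thresholds:
        preds = [s >= t for s in scores]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        points.append((fp / neg, tp / pos))   # (FPR, TPR)
    points.append((1.0, 1.0))
    # Trapezoidal rule over consecutive ROC points gives the AUC.
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc

_, auc = roc_auc([1, 1, 0, 1, 0, 0], [0.9, 0.8, 0.7, 0.6, 0.4, 0.2])
print(auc)  # equals the fraction of (positive, negative) pairs ranked correctly
```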
- -**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** - -⟶ - -
- -**18. [Total sum of squares, Explained sum of squares, Residual sum of squares]** - -⟶ - -
- -**19. Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** - -⟶ - -
- -**20. Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** - -⟶ - -
- -**21. where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.** - -⟶ - -
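The regression entries above (total/residual sums of squares and the coefficient of determination) reduce to a few lines; a minimal sketch with hypothetical observations and predictions:

```python
# Sum-of-squares decomposition and coefficient of determination R^2.
# y are hypothetical observed outcomes, y_hat the model's predictions.
def r_squared(y, y_hat):
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)              # total sum of squares
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # residual sum of squares
    return 1 - ss_res / ss_tot

print(r_squared([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))
```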
- -**22. Model selection** - -⟶ - -
- -**23. Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** - -⟶ - -
- -**24. [Training set, Validation set, Testing set]** - -⟶ - -
- -**25. [Model is trained, Model is assessed, Model gives predictions]** - -⟶ - -
- -**26. [Usually 80% of the dataset, Usually 20% of the dataset]** - -⟶ - -
- -**27. [Also called hold-out or development set, Unseen data]** - -⟶ - -
- -**28. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** - -⟶ - -
- -**29. Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** - -⟶ - -
- -**30. [Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** - -⟶ - -
- -**31. [Generally k=5 or 10, Case p=1 is called leave-one-out]** - -⟶ - -
- -**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.** - -⟶ - -
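The k-fold procedure described in the string above can be sketched in plain Python. This is an illustration only, not part of the cheatsheet source: the function name is ours, and the "model" is a trivial mean predictor that ignores the features.

```python
# Minimal k-fold cross-validation sketch (illustrative names; the "model"
# is a mean predictor, and the per-fold error is mean squared error).

def kfold_cv_error(xs, ys, k=5):
    n = len(xs)
    fold_size = n // k
    errors = []
    for i in range(k):
        start, end = i * fold_size, (i + 1) * fold_size
        # Hold out fold i for validation, "train" on the k-1 other folds.
        train_y = ys[:start] + ys[end:]
        val_y = ys[start:end]
        prediction = sum(train_y) / len(train_y)   # mean model ignores xs
        mse = sum((y - prediction) ** 2 for y in val_y) / len(val_y)
        errors.append(mse)
    # Cross-validation error: the error averaged over the k folds.
    return sum(errors) / k

err = kfold_cv_error(list(range(10)), [2.0 * x for x in range(10)], k=5)
```

With a constant target the held-out error is zero on every fold, which is a quick sanity check on the fold bookkeeping.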
- -**33. Regularization ― The regularization procedure aims at preventing the model from overfitting the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:** - -⟶ - -<br>
- -**34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** - -⟶ - -
- -**35. Diagnostics** - -⟶ - -
- -**36. Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.** - -⟶ - -
- -**37. Variance ― The variance of a model is the variability of the model prediction for given data points.** - -⟶ - -
- -**38. Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** - -⟶ - -
- -**39. [Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]** - -⟶ - -
- -**40. [High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]** - -⟶ - -
- -**41. [Complexify model, Add more features, Train longer, Perform regularization, Get more data]** - -⟶ - -
- -**42. Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** - -⟶ - -
- -**43. Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** - -⟶ - -
- -**44. Regression metrics** - -⟶ - -
- -**45. [Classification metrics, confusion matrix, accuracy, precision, recall, F1 score, ROC]** - -⟶ - -
- -**46. [Regression metrics, R squared, Mallow's CP, AIC, BIC]** - -⟶ - -
- -**47. [Model selection, cross-validation, regularization]** - -⟶ - -
- -**48. [Diagnostics, Bias/variance tradeoff, error/ablative analysis]** - -⟶ diff --git a/ar/cheatsheet-supervised-learning.md b/ar/cheatsheet-supervised-learning.md deleted file mode 100644 index a6b19ea1c..000000000 --- a/ar/cheatsheet-supervised-learning.md +++ /dev/null @@ -1,567 +0,0 @@ -**1. Supervised Learning cheatsheet** - -⟶ - -
- -**2. Introduction to Supervised Learning** - -⟶ - -
- -**3. Given a set of data points {x(1),...,x(m)} associated to a set of outcomes {y(1),...,y(m)}, we want to build a classifier that learns how to predict y from x.** - -⟶ - -
- -**4. Type of prediction ― The different types of predictive models are summed up in the table below:** - -⟶ - -
- -**5. [Regression, Classifier, Outcome, Examples]** - -⟶ - -
- -**6. [Continuous, Class, Linear regression, Logistic regression, SVM, Naive Bayes]** - -⟶ - -
- -**7. Type of model ― The different models are summed up in the table below:** - -⟶ - -
- -**8. [Discriminative model, Generative model, Goal, What's learned, Illustration, Examples]** - -⟶ - -
- -**9. [Directly estimate P(y|x), Estimate P(x|y) to then deduce P(y|x), Decision boundary, Probability distributions of the data, Regressions, SVMs, GDA, Naive Bayes]** - -⟶ - -
- -**10. Notations and general concepts** - -⟶ - -
- -**11. Hypothesis ― The hypothesis is noted hθ and is the model that we choose. For a given input data x(i) the model prediction output is hθ(x(i)).** - -⟶ - -
- -**12. Loss function ― A loss function is a function L:(z,y)∈R×Y⟼L(z,y)∈R that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:** - -⟶ - -
- -**13. [Least squared error, Logistic loss, Hinge loss, Cross-entropy]** - -⟶ - -
- -**14. [Linear regression, Logistic regression, SVM, Neural Network]** - -⟶ - -
- -**15. Cost function ― The cost function J is commonly used to assess the performance of a model, and is defined with the loss function L as follows:** - -⟶ - -
- -**16. Gradient descent ― By noting α∈R the learning rate, the update rule for gradient descent is expressed with the learning rate and the cost function J as follows:** - -⟶ - -
- -**17. Remark: Stochastic gradient descent (SGD) is updating the parameter based on each training example, and batch gradient descent is on a batch of training examples.** - -⟶ - -
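The batch gradient descent update θ←θ−α∇J(θ) from the strings above can be sketched for one-parameter least squares. The example is ours and purely illustrative (function name, data, and learning rate are made up):

```python
# Batch gradient descent for J(θ) = (1/2m) Σ (θx − y)², fitting y = θx.

def gradient_descent(xs, ys, alpha=0.01, steps=1000):
    theta = 0.0
    m = len(xs)
    for _ in range(steps):
        # ∇J(θ) = (1/m) Σ (θx − y) x, computed over the whole batch.
        grad = sum((theta * x - y) * x for x, y in zip(xs, ys)) / m
        theta -= alpha * grad  # update rule θ ← θ − α ∇J(θ)
    return theta

theta = gradient_descent([1, 2, 3], [2, 4, 6])  # data generated by y = 2x
```

Replacing the sum over all m examples with a single randomly drawn example would turn this into the stochastic variant mentioned in the remark.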
- -**18. Likelihood ― The likelihood of a model L(θ) given parameters θ is used to find the optimal parameters θ through maximizing the likelihood. In practice, we use the log-likelihood ℓ(θ)=log(L(θ)) which is easier to optimize. We have:** - -⟶ - -
- -**19. Newton's algorithm ― The Newton's algorithm is a numerical method that finds θ such that ℓ′(θ)=0. Its update rule is as follows:** - -⟶ - -
- -**20. Remark: the multidimensional generalization, also known as the Newton-Raphson method, has the following update rule:** - -⟶ - -
- -**21. Linear models** - -⟶ - -
- -**22. Linear regression** - -⟶ - -
- -**23. We assume here that y|x;θ∼N(μ,σ2)** - -⟶ - -
- -**24. Normal equations ― By noting X the matrix design, the value of θ that minimizes the cost function is a closed-form solution such that:** - -⟶ - -
- -**25. LMS algorithm ― By noting α the learning rate, the update rule of the Least Mean Squares (LMS) algorithm for a training set of m data points, which is also known as the Widrow-Hoff learning rule, is as follows:** - -⟶ - -
- -**26. Remark: the update rule is a particular case of the gradient ascent.** - -⟶ - -
- -**27. LWR ― Locally Weighted Regression, also known as LWR, is a variant of linear regression that weights each training example in its cost function by w(i)(x), which is defined with parameter τ∈R as:** - -⟶ - -
- -**28. Classification and logistic regression** - -⟶ - -
- -**29. Sigmoid function ― The sigmoid function g, also known as the logistic function, is defined as follows:** - -⟶ - -
- -**30. Logistic regression ― We assume here that y|x;θ∼Bernoulli(ϕ). We have the following form:** - -⟶ - -
- -**31. Remark: there is no closed form solution for the case of logistic regressions.** - -⟶ - -
- -**32. Softmax regression ― A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. By convention, we set θK=0, which makes the Bernoulli parameter ϕi of each class i equal to:** - -⟶ - -
- -**33. Generalized Linear Models** - -⟶ - -
- -**34. Exponential family ― A class of distributions is said to be in the exponential family if it can be written in terms of a natural parameter, also called the canonical parameter or link function, η, a sufficient statistic T(y) and a log-partition function a(η) as follows:** - -⟶ - -
- -**35. Remark: we will often have T(y)=y. Also, exp(−a(η)) can be seen as a normalization parameter that will make sure that the probabilities sum to one.** - -⟶ - -
- -**36. Here are the most common exponential distributions summed up in the following table:** - -⟶ - -
- -**37. [Distribution, Bernoulli, Gaussian, Poisson, Geometric]** - -⟶ - -
- -**38. Assumptions of GLMs ― Generalized Linear Models (GLM) aim at predicting a random variable y as a function of x∈Rn+1 and rely on the following 3 assumptions:** - -⟶ - -<br>
- -**39. Remark: ordinary least squares and logistic regression are special cases of generalized linear models.** - -⟶ - -
- -**40. Support Vector Machines** - -⟶ - -
- -**41: The goal of support vector machines is to find the line that maximizes the minimum distance to the line.** - -⟶ - -
- -**42: Optimal margin classifier ― The optimal margin classifier h is such that:** - -⟶ - -
- -**43: where (w,b)∈Rn×R is the solution of the following optimization problem:** - -⟶ - -
- -**44. such that** - -⟶ - -
- -**45. support vectors** - -⟶ - -
- -**46. Remark: the line is defined as wTx−b=0.** - -⟶ - -
- -**47. Hinge loss ― The hinge loss is used in the setting of SVMs and is defined as follows:** - -⟶ - -
- -**48. Kernel ― Given a feature mapping ϕ, we define the kernel K to be defined as:** - -⟶ - -
- -**49. In practice, the kernel K defined by K(x,z)=exp(−||x−z||22σ2) is called the Gaussian kernel and is commonly used.** - -⟶ - -
- -**50. [Non-linear separability, Use of a kernel mapping, Decision boundary in the original space]** - -⟶ - -
- -**51. Remark: we say that we use the "kernel trick" to compute the cost function using the kernel because we actually don't need to know the explicit mapping ϕ, which is often very complicated. Instead, only the values K(x,z) are needed.** - -⟶ - -
- -**52. Lagrangian ― We define the Lagrangian L(w,b) as follows:** - -⟶ - -
- -**53. Remark: the coefficients βi are called the Lagrange multipliers.** - -⟶ - -
- -**54. Generative Learning** - -⟶ - -
- -**55. A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.** - -⟶ - -
- -**56. Gaussian Discriminant Analysis** - -⟶ - -
- -**57. Setting ― The Gaussian Discriminant Analysis assumes that y and x|y=0 and x|y=1 are such that:** - -⟶ - -
- -**58. Estimation ― The following table sums up the estimates that we find when maximizing the likelihood:** - -⟶ - -
- -**59. Naive Bayes** - -⟶ - -
- -**60. Assumption ― The Naive Bayes model supposes that the features of each data point are all independent:** - -⟶ - -
- -**61. Solutions ― Maximizing the log-likelihood gives the following solutions, with k∈{0,1},l∈[[1,L]]** - -⟶ - -
- -**62. Remark: Naive Bayes is widely used for text classification and spam detection.** - -⟶ - -
- -**63. Tree-based and ensemble methods** - -⟶ - -
- -**64. These methods can be used for both regression and classification problems.** - -⟶ - -
- -**65. CART ― Classification and Regression Trees (CART), commonly known as decision trees, can be represented as binary trees. They have the advantage to be very interpretable.** - -⟶ - -
- -**66. Random forest ― It is a tree-based technique that uses a high number of decision trees built out of randomly selected sets of features. Contrary to the simple decision tree, it is highly uninterpretable but its generally good performance makes it a popular algorithm.** - -⟶ - -
- -**67. Remark: random forests are a type of ensemble methods.** - -⟶ - -
- -**68. Boosting ― The idea of boosting methods is to combine several weak learners to form a stronger one. The main ones are summed up in the table below:** - -⟶ - -
- -**69. [Adaptive boosting, Gradient boosting]** - -⟶ - -
- -**70. High weights are put on errors to improve at the next boosting step** - -⟶ - -
- -**71. Weak learners trained on remaining errors** - -⟶ - -
- -**72. Other non-parametric approaches** - -⟶ - -
- -**73. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** - -⟶ - -
- -**74. Remark: The higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.** - -⟶ - -
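The k-NN prediction described above can be sketched in a few lines for 1-D classification; the data and names are illustrative, not from the cheatsheet:

```python
# k-nearest neighbors sketch: the label of x is the majority label
# among its k nearest training points (1-D, Euclidean distance).

def knn_predict(train, x, k=3):
    """train: list of (value, label); returns the majority label of the k nearest."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

train = [(0.1, "a"), (0.2, "a"), (0.3, "a"), (5.0, "b"), (5.1, "b")]
label = knn_predict(train, 0.0, k=3)  # the 3 nearest points are all "a"
```

Increasing k smooths the decision (higher bias), while small k tracks individual points (higher variance), matching the remark above.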
- -**75. Learning Theory** - -⟶ - -
- -**76. Union bound ― Let A1,...,Ak be k events. We have:** - -⟶ - -
- -**77. Hoeffding inequality ― Let Z1,..,Zm be m iid variables drawn from a Bernoulli distribution of parameter ϕ. Let ˆϕ be their sample mean and γ>0 fixed. We have:** - -⟶ - -
- -**78. Remark: this inequality is also known as the Chernoff bound.** - -⟶ - -
- -**79. Training error ― For a given classifier h, we define the training error ˆϵ(h), also known as the empirical risk or empirical error, to be as follows:** - -⟶ - -
- -**80. Probably Approximately Correct (PAC) ― PAC is a framework under which numerous results on learning theory were proved, and has the following set of assumptions:** - -⟶ - -<br>
- -**81: the training and testing sets follow the same distribution** - -⟶ - -<br>
- -**82. the training examples are drawn independently** - -⟶ - -
- -**83. Shattering ― Given a set S={x(1),...,x(d)}, and a set of classifiers H, we say that H shatters S if for any set of labels {y(1),...,y(d)}, we have:** - -⟶ - -
- -**84. Upper bound theorem ― Let H be a finite hypothesis class such that |H|=k and let δ and the sample size m be fixed. Then, with probability of at least 1−δ, we have:** - -⟶ - -
- -**85. VC dimension ― The Vapnik-Chervonenkis (VC) dimension of a given infinite hypothesis class H, noted VC(H) is the size of the largest set that is shattered by H.** - -⟶ - -
- -**86. Remark: the VC dimension of H={set of linear classifiers in 2 dimensions} is 3.** - -⟶ - -
- -**87. Theorem (Vapnik) ― Let H be given, with VC(H)=d and m the number of training examples. With probability at least 1−δ, we have:** - -⟶ - -
- -**88. [Introduction, Type of prediction, Type of model]** - -⟶ - -
- -**89. [Notations and general concepts, loss function, gradient descent, likelihood]** - -⟶ - -
- -**90. [Linear models, linear regression, logistic regression, generalized linear models]** - -⟶ - -
- -**91. [Support vector machines, Optimal margin classifier, Hinge loss, Kernel]** - -⟶ - -
- -**92. [Generative learning, Gaussian Discriminant Analysis, Naive Bayes]** - -⟶ - -
- -**93. [Trees and ensemble methods, CART, Random forest, Boosting]** - -⟶ - -
- -**94. [Other methods, k-NN]** - -⟶ - -
- -**95. [Learning theory, Hoeffding inequality, PAC, VC dimension]** - -⟶ diff --git a/ar/cs-229-deep-learning.md b/ar/cs-229-deep-learning.md new file mode 100644 index 000000000..d4cf59da6 --- /dev/null +++ b/ar/cs-229-deep-learning.md @@ -0,0 +1,323 @@ + +**1. Deep Learning cheatsheet** + +⟶ +ملخص مختصر التعلم العميق +
+ +**2. Neural Networks** + +⟶ +الشبكات العصبونية الاصطناعية (Neural Networks) +<br>
+**3. Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.** + +⟶ +الشبكات العصبونية الاصطناعية هي صنف من النماذج يُبنى من عدة طبقات، ومن أكثر أنواعها استخداماً الشبكات العصبونية الالتفافية والشبكات العصبونية المتكررة + +<br>
+ +**4. Architecture ― The vocabulary around neural networks architectures is described in the figure below:** + +⟶ +البنية - المصطلحات حول بنية الشبكة العصبونية موضحة في الشكل أدناه: +<br>
+ +**5. [Input layer, hidden layer, output layer]** + +⟶ +[طبقة ادخال, طبقة مخفية, طبقة اخراج ] +
+ +**6. By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:** + +⟶ +عبر تدوين i كالطبقة رقم i و j للدلالة على رقم الوحده الخفية في تلك الطبقة , نحصل على: +
+ +**7. where we note w, b, z the weight, bias and output respectively.** + +⟶ +حيث نعرف w, b, z كالوزن , و معامل التعديل , و الناتج حسب الترتيب. +
+ +**8. Activation function ― Activation functions are used at the end of a hidden unit to introduce non-linear complexities to the model. Here are the most common ones:** + +⟶ +دالة التفعيل (Activation function) - تستخدم دوال التفعيل في نهاية الوحدة المخفية لإضافة مكونات غير خطية للنموذج. فيما يلي أكثرها شيوعاً: +<br>
+ +**9. [Sigmoid, Tanh, ReLU, Leaky ReLU]** + +⟶ +[Sigmoid, Tanh, ReLU, Leaky ReLU] +
+ +**10. Cross-entropy loss ― In the context of neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** + +⟶ +دالة الانتروبيا التقاطعية للخسارة(Cross-entropy loss) - في سياق الشبكات العصبونية, دالة الأنتروبيا L(z,y) تستخدم و تعرف كالاتي: +
+ +**11. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. This can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.** + +⟶ +معدل التعلم(Learning rate) - معدل التعلم, يرمز , و هو مؤشر في اي تجاة يتم تحديث الاوزان. يمكن تثبيت هذا المعامل او تحديثة بشكل تأقلمي . حاليا اكثر النسب شيوعا تدعى Adam , وهي طريقة تجعل هذه النسبة سرعة التعلم بشكل تأقلمي α او η ب , +
+ +**12. Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to weight w is computed using chain rule and is of the following form:** + +⟶ +التغذية الخلفية(Backpropagation) - التغذية الخلفية هي طريقة لتحديث الاوزان في الشبكة العصبونية عبر اعتبار القيم الحقيقة للناتج مع القيمة المطلوبة للخرج. المشتقة بالنسبة للوزن w يتم حسابها باستخدام قاعدة التسلسل و تكون عبر الشكل الاتي: +
+ +**13. As a result, the weight is updated as follows:** + +⟶ +كنتيجة , الوزن سيتم تحديثة كالتالي: +
+ +**14. Updating weights ― In a neural network, weights are updated as follows:** + +⟶ +تحديث الاوزان - في الشبكات العصبونية , يتم تحديث الاوزان كما يلي: +
+ +**15. Step 1: Take a batch of training data.** + +⟶ +الخطوة 1: خذ حزمة من بيانات التدريب +
+ +**16. Step 2: Perform forward propagation to obtain the corresponding loss.** + +⟶ +الخطوة 2: قم بعملية التغذيه الامامية لحساب الخسارة الناتجة +
+ +**17. Step 3: Backpropagate the loss to get the gradients.** + +⟶ +الخطوة 3: قم بالتغذية الخلفية للخسارة للحصول على قيم الانحدار (gradients) +<br>
+ +**18. Step 4: Use the gradients to update the weights of the network.** + +⟶ +الخطوة 4: استخدم قيم الانحدار لتحديث اوزان الشبكة +
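The four steps above (batch, forward, backward, update) can be sketched for a single linear neuron with squared loss. All names and numbers here are illustrative, not from the cheatsheet source:

```python
# One training step for a single linear neuron ŷ = wx with squared loss.

def train_step(w, batch, alpha=0.1):
    # Step 1: a batch of (x, y) pairs is given.
    # Step 2: forward propagation gives loss L = (1/2m) Σ (wx − y)².
    m = len(batch)
    # Step 3: backpropagate to get the gradient dL/dw = (1/m) Σ (wx − y) x.
    grad = sum((w * x - y) * x for x, y in batch) / m
    # Step 4: use the gradient to update the weight.
    return w - alpha * grad

w = 0.0
for _ in range(100):
    w = train_step(w, [(1.0, 3.0), (2.0, 6.0)])  # data generated by y = 3x
```

Repeating the step drives w toward the slope that generated the data.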
+ +**19. Dropout ― Dropout is a technique meant at preventing overfitting the training data by dropping out units in a neural network. In practice, neurons are either dropped with probability p or kept with probability 1−p** + +⟶ +الاسقاط(Dropout) - الاسقاط هي طريقة الغرض منها منع التكيف الزائد للنموذج في بيانات التدريب عبر اسقاط بعض الواحدات في الشبكة العصبونية, العصبونات يتم اما اسقاطها باحتمالية p او الحفاظ عليها باحتمالية 1-p. +
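The drop-with-probability-p rule above can be sketched as a mask over a layer's activations. This sketch uses the "inverted dropout" convention of rescaling the kept units by 1/(1−p), which is our addition for illustration (the source string only states the drop/keep probabilities):

```python
import random

# Dropout sketch: each unit is dropped with probability p at training time;
# survivors are rescaled by 1/(1-p) (inverted-dropout convention, ours).

def dropout(activations, p=0.5, rng=None):
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = []
    for a in activations:
        if rng.random() < p:
            out.append(0.0)            # dropped with probability p
        else:
            out.append(a / (1 - p))    # kept with probability 1 - p
    return out

h = dropout([1.0, 2.0, 3.0, 4.0], p=0.5)
```

With p=0 the layer passes through unchanged; at test time dropout is disabled entirely.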
+ +**20. Convolutional Neural Networks** + +⟶ +الشبكات العصبونية الالتفافية(CNN) +
+ +**21. Convolutional layer requirement ― By noting W the input volume size, F the size of the convolutional layer neurons, P the amount of zero padding, then the number of neurons N that fit in a given volume is such that:** + +⟶ +احتياج الطبقة الالتفافية - عبر رمز w لحجم المدخل , F حجم العصبونات للطبقة الالتفافية , P عدد الحشوات الصفرية , فأن N عدد العصبونات لكل حجم معطى يحسب عبر الاتي: +
+ +**22. Batch normalization ― It is a step of hyperparameter γ,β that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of that we want to correct to the batch, it is done as follows:** + +⟶ +تنظيم الحزمة (Batch normalization) - هي خطوة بمعاملي تحسين γ,β تقوم بتنظيم الحزمة {xi}. بترميز μB,σ2B للمتوسط والتباين للحزمة التي نريد تصحيحها، يتم ذلك كالتالي: +<br>
+ +**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.** + +⟶ +في الغالب تتم بعد الطبقة المتصلة كلياً أو الالتفافية وقبل الطبقة غير الخطية، وتهدف إلى السماح بمعدلات تعلم أعلى وتقليل الاعتماد القوي على القيم الأولية. + + +<br>
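The normalize-then-scale-and-shift step described above can be sketched in plain Python; γ, β, and the small ε added for numerical stability are illustrative values, not from the source:

```python
# Batch-normalization sketch: normalize the batch by its own mean and
# variance, then apply the learnable scale γ and shift β.

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    m = len(xs)
    mu = sum(xs) / m
    var = sum((x - mu) ** 2 for x in xs) / m
    return [gamma * (x - mu) / (var + eps) ** 0.5 + beta for x in xs]

out = batch_norm([1.0, 2.0, 3.0])  # roughly zero mean, unit variance
```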
+ +**24. Recurrent Neural Networks** + +⟶ +(RNN)الشبكات العصبونية التكرارية +
+ +**25. Types of gates ― Here are the different types of gates that we encounter in a typical recurrent neural network:** + +⟶ +أنواع البوابات - فيما يلي الأنواع المختلفة للبوابات التي نواجهها في شبكة عصبونية تكرارية اعتيادية: +<br>
+ +**26. [Input gate, forget gate, gate, output gate]** + +⟶ +[بوابة ادخال, بوابة نسيان, بوابة منفذ, بوابة اخراج ] +
+ +**27. [Write to cell or not?, Erase a cell or not?, How much to write to cell?, How much to reveal cell?]** + +⟶ +[كتابة ام عدم كتابة الى الخلية؟, مسح ام عدم مسح الخلية؟, كمية الكتابة الى الخلية ؟ , مدى الافصاح عن الخلية ؟ ] +
+ +**28. LSTM ― A long short-term memory (LSTM) network is a type of RNN model that avoids the vanishing gradient problem by adding 'forget' gates.** + +⟶ +LSTM - شبكة الذاكرة الطويلة قصيرة الأمد (long short-term memory) هي نوع من نماذج RNN تتجنب مشكلة تلاشي الانحدار (vanishing gradient) عبر إضافة بوابات النسيان. +<br>
+ +**29. Reinforcement Learning and Control** + +⟶ +التعلم و التحكم المعزز(Reinforcement Learning) +
+ +**30. The goal of reinforcement learning is for an agent to learn how to evolve in an environment.** + +⟶ +الهدف من التعلم المعزز هو أن يتعلم العميل كيفية التطور في بيئة ما. +<br>
+ +**31. Definitions** + +⟶ +تعريفات +
+ +**32. Markov decision processes ― A Markov decision process (MDP) is a 5-tuple (S,A,{Psa},γ,R) where:** + +⟶ +عملية ماركوف لاتخاذ القرار - عملية ماركوف لاتخاذ القرار هي سلسلة خماسية (S,A,{Psa},γ,R) حيث + +
+**33. S is the set of states** + +⟶ + S هي مجموعة من حالات البيئة +
+ +**34. A is the set of actions** + +⟶ +A هي مجموعة الإجراءات +<br>
+**35. {Psa} are the state transition probabilities for s∈S and a∈A** + +⟶ +{Psa} هي احتمالات الانتقال بين الحالات لكل s∈S و a∈A +<br>
+ +**36. γ∈[0,1[ is the discount factor** + +⟶ +γ∈[0,1[ هي عامل الخصم +
+ +**37. R:S×A⟶R or R:S⟶R is the reward function that the algorithm wants to maximize** + +⟶ +R:S×A⟶R or R:S⟶R هي دالة المكافأة والتي تعمل الخوارزمية على جعلها اعلى قيمة +
+ +**38. Policy ― A policy π is a function π:S⟶A that maps states to actions.** + +⟶ +دالة القواعد - دالة القواعد π:S⟶A هي التي تقوم بترجمة الحالات الى اجراءات. +
+ +**39. Remark: we say that we execute a given policy π if given a state s we take the action a=π(s).** + +⟶ +ملاحظة: نقول ان النموذج ينفذ القاعدة المعينه π للحالة المعطاة s ان نتخذ الاجراءa=π(s). +
+ +**40. Value function ― For a given policy π and a given state s, we define the value function Vπ as follows:** + +⟶ +دالة القيمة - لأي قاعدة معطاة π وحالة s، نقوم بتعريف دالة القيمة Vπ كما يلي: +<br>
+ +**41. Bellman equation ― The optimal Bellman equations characterizes the value function Vπ∗ of the optimal policy π∗:** + +⟶ +معادلة بيلمان - معادلات بيلمان المثلى تميّز دالة القيمة Vπ∗ للقاعدة المثلى π∗: +<br>
+ +**42. Remark: we note that the optimal policy π∗ for a given state s is such that:** + +⟶ +ملاحظة: نلاحظ أن القاعدة المثلى π∗ للحالة المعطاة s تعطى كالتالي: +<br>
+ +**43. Value iteration algorithm ― The value iteration algorithm is in two steps:** + +⟶ +خوارزمية تكرار القيمة(Value iteration algorithm) - خوارزمية تكرار القيمة تكون في خطوتين: +
+ +**44. 1) We initialize the value:** + +⟶ + 1) نقوم بوضع قيمة اولية: +
+ +**45. 2) We iterate the value based on the values before:** + +⟶ +2) نقوم بتكرير القيمة حسب القيم السابقة: + +
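The two steps above (initialize, then iterate using the previous values) can be sketched on a toy two-state MDP. Every number here, the transition table, and the rewards are made up for illustration:

```python
# Value-iteration sketch: V(s) ← R(s) + γ max_a Σ_s' P(s'|s,a) V(s').

P = {  # P[s][a] = list of (probability, next_state); a toy deterministic MDP
    0: {"stay": [(1.0, 0)], "go": [(1.0, 1)]},
    1: {"stay": [(1.0, 1)], "go": [(1.0, 0)]},
}
R = {0: 0.0, 1: 1.0}
gamma = 0.9

V = {0: 0.0, 1: 0.0}        # step 1: initialize the value
for _ in range(200):         # step 2: iterate based on the previous values
    V = {
        s: R[s] + gamma * max(
            sum(p * V[s2] for p, s2 in P[s][a]) for a in P[s]
        )
        for s in P
    }
```

For this toy MDP the iteration converges to V(1)=10 and V(0)=9, the fixed point of the Bellman update.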
+**46. Maximum likelihood estimate ― The maximum likelihood estimates for the state transition probabilities are as follows:** + +⟶ +تقدير الامكانية القصوى - تقديرات الامكانية القصوى (تقدير الاحتمال الأرجح) لاحتماليات انتقال الحالة تكون كما يلي: +<br>
+ +**47. times took action a in state s and got to s′** + +⟶ +عدد مرات تنفيذ الإجراء a في الحالة s والانتقال إلى s′ + +<br>
+**48. times took action a in state s** + +⟶ +عدد مرات تنفيذ الإجراء a في الحالة s +<br>
+ +**49. Q-learning ― Q-learning is a model-free estimation of Q, which is done as follows:** + +⟶ +التعلم-Q (Q-learning) - هي طريقة غير منمذجة لتقدير Q، وتتم كالآتي: +<br>
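A single tabular Q-learning update can be sketched as follows; the states, actions, reward, and step size are all hypothetical:

```python
# One tabular Q-learning update:
# Q(s,a) ← Q(s,a) + α [ r + γ max_a' Q(s',a') − Q(s,a) ]

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    best_next = max(Q[s_next].values())         # max_a' Q(s', a')
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

Q = {"s0": {"left": 0.0, "right": 0.0}, "s1": {"left": 0.0, "right": 0.0}}
q_update(Q, "s0", "right", r=1.0, s_next="s1")  # Q["s0"]["right"] becomes 0.5
```

The update is model-free: it never touches the transition probabilities, only observed (s, a, r, s′) tuples.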
+**50. View PDF version on GitHub** + +⟶ +قم باستعراض نسخة ال PDF على GitHub +
+ +**51. [Neural Networks, Architecture, Activation function, Backpropagation, Dropout]** + +⟶ + [شبكات عصبونية, البنية , دالة التفعيل , التغذية الخلفية , الاسقاط ] +
+ +**52. [Convolutional Neural Networks, Convolutional layer, Batch normalization]** + +⟶ +[ الشبكة العصبونية الالتفافية , طبقة التفافية , تنظيم الحزمة ] +
+ +**53. [Recurrent Neural Networks, Gates, LSTM]** + +⟶ +[الشبكة العصبونية التكرارية , البوابات , LSTM] +
+ +**54. [Reinforcement learning, Markov decision processes, Value/policy iteration, Approximate dynamic programming, Policy search]** + +⟶ +[التعلم المعزز , عملية ماركوف لاتخاذ القرار , تكرير القيمة / القاعدة , البرمجة الديناميكية التقريبية , بحث القاعدة] diff --git a/ar/cs-229-linear-algebra.md b/ar/cs-229-linear-algebra.md new file mode 100644 index 000000000..d0e88a543 --- /dev/null +++ b/ar/cs-229-linear-algebra.md @@ -0,0 +1,413 @@ +**1. Linear Algebra and Calculus refresher** + +<br>
+ملخص الجبر الخطي و التفاضل و التكامل +
+
+ +**2. General notations** +
+الرموز العامة +
+ +
+ +**3. Definitions** + +
+التعريفات +
+ +
+ +**4. Vector ― We note x∈Rn a vector with n entries, where xi∈R is the ith entry:** +
متجه (vector) - نرمز ل $x \in \mathbb{R}^n$ متجه يحتوي على $n$ مدخلات، حيث $x_i \in \mathbb{R}$ يعتبر المدخل رقم $i$. +<br>
+
+ +**5. Matrix ― We note A∈Rm×n a matrix with m rows and n columns, where Ai,j∈R is the entry located in the ith row and jth column:** + +
مصفوفة (Matrix) - نرمز ل $A \in \mathbb{R}^{m \times n}$ مصفوفة تحتوي على $m$ صفوف و $n$ أعمدة، حيث $A_{i,j}$ يرمز للمدخل في الصف $i$ والعمود $j$: + +<br>
+ +
+ +**6. Remark: the vector x defined above can be viewed as a n×1 matrix and is more particularly called a column-vector.** +
+ملاحظة : المتجه $x$ المعرف مسبقا يمكن اعتباره مصفوفة من الشكل $n \times 1$ والذي يسمى ب مصفوفة من عمود واحد. +
+ +
+ +**7. Main matrices** + +
+المصفوفات الأساسية +
+
+ +**8. Identity matrix ― The identity matrix I∈Rn×n is a square matrix with ones in its diagonal and zero everywhere else:** +
مصفوفة الوحدة (Identity) - مصفوفة الوحدة $I \in \mathbb{R}^{n \times n}$ تعتبر مصفوفة مربعة تحتوي على المدخل 1 في قطر المصفوفة و 0 في بقية المدخلات: + +<br>
+
+ +**9. Remark: for all matrices A∈Rn×n, we have A×I=I×A=A.** + +
+ملاحظة: لجميع المصفوفات من الشكل $A \in \mathbb{R}^{n \times n}$ لدينا $A \times I = I \times A = A$. +<br>
+
+ +**10. Diagonal matrix ― A diagonal matrix D∈Rn×n is a square matrix with nonzero values in its diagonal and zero everywhere else:** +
+مصفوفة قطرية (diagonal) - المصفوفة القطرية هي مصفوفة من الشكل + $D \in \mathbb{R}^{n\times n}$ حيث أن جميع العناصر الواقعة خارج القطر الرئيسي تساوي الصفر والعناصر على القطر الرئيسي تحتوي أعداد لاتساوي الصفر. +
+
+ +**11. Remark: we also note D as diag(d1,...,dn).** + +
+ملاحظة: نرمز كذلك ل $D$ ب $\text{diag}(d_1, \dots, d_n)$. +<br>
+
+ +**12. Matrix operations** + +
+ عمليات المصفوفات +
+ +
+ +**13. Multiplication** + +
+ الضرب +
+ +
+ +**14. Vector-vector ― There are two types of vector-vector products:** + +
+ ضرب المتجهات - توجد طريقتين لضرب متجه بمتجه : +
+ +
+ +**15. inner product: for x,y∈Rn, we have:** + +
+ ضرب داخلي (inner product): ل $x,y \in \mathbb{R}^n$ نستنتج : +
+ +
+ +**16. outer product: for x∈Rm,y∈Rn, we have:** + +
ضرب خارجي (outer product): ل $x \in \mathbb{R}^m, y \in \mathbb{R}^n$ نستنتج: +<br>
+ +
+ +**17. Matrix-vector ― The product of matrix A∈Rm×n and vector x∈Rn is a vector of size Rn, such that:** + +
مصفوفة - متجه: ضرب المصفوفة $A \in \mathbb{R}^{m \times n}$ والمتجه $x \in \mathbb{R}^n$ ينتج عنه متجه من الفضاء $\mathbb{R}^n$ حيث: +<br>
+
+ +**18. where aTr,i are the vector rows and ac,j are the vector columns of A, and xi are the entries of x.** + +
+ حيث $a^{T}_{r,i}$ يعتبر متجه الصفوف و $a_{c,j}$ يعتبر متجه الأعمدة ل $A$ كذلك $x_i$ يرمز لعناصر $x$. +
+ +
+ +**19. Matrix-matrix ― The product of matrices A∈Rm×n and B∈Rn×p is a matrix of size Rn×p, such that:** + +
ضرب مصفوفة ومصفوفة - ضرب المصفوفتين $A \in \mathbb{R}^{m \times n}$ و $B \in \mathbb{R}^{n \times p}$ ينتج عنه مصفوفة من الفضاء $\mathbb{R}^{n \times p}$ حيث أن: +<br>
+ +
+ +**20. where aTr,i,bTr,i are the vector rows and ac,j,bc,j are the vector columns of A and B respectively** + +
حيث $a^T_{r,i}$ و $b^T_{r,i}$ متجها الصفوف و $a_{c,j}$ و $b_{c,j}$ متجها الأعمدة للمصفوفتين $A$ و $B$ على التوالي. +<br>
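The row-by-column formula above, (AB)ij = Σk Aik Bkj, can be sketched in plain Python; this snippet is illustrative and not part of the cheatsheet source:

```python
# Matrix-matrix product sketch: an m×n matrix times an n×p matrix
# gives an m×p matrix with entries (AB)ij = Σ_k A[i][k] * B[k][j].

def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

C = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # [[19, 22], [43, 50]]
```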
+ +
+ +**21. Other operations** + +
+ عمليات أخرى +
+ +
+ +**22. Transpose ― The transpose of a matrix A∈Rm×n, noted AT, is such that its entries are flipped:** + +
المنقول (Transpose) - منقول المصفوفة $A \in \mathbb{R}^{m \times n}$ يرمز له ب $A^T$ حيث يتم تبديل الصفوف مع الأعمدة: +<br>
+ +
+ +**23. Remark: for matrices A,B, we have (AB)T=BTAT** + +
+ ملاحظة: لأي مصفوفتين $A$ و $B$، نستنتج $(AB)^T = B^T A^T$. +
+
+ +**24. Inverse ― The inverse of an invertible square matrix A is noted A−1 and is the only matrix such that:** + +
+ المعكوس (Inverse)- معكوس أي مصفوفة $A$ قابلة للعكس (Invertible) يرمز له ب $A^{-1}$ ويعتبر المعكوس المصفوفة الوحيدة التي لديها الخاصية التالية : +
+
+ +**25. Remark: not all square matrices are invertible. Also, for matrices A,B, we have (AB)−1=B−1A−1** + +
+ملاحظة: ليس جميع المصفوفات يمكن إيجاد معكوس لها. كذلك لأي مصفوفتين $A$ و $B$ نستنتج $(AB)^{-1} = B^{-1} A^{-1}$. +
+ +
+ +**26. Trace ― The trace of a square matrix A, noted tr(A), is the sum of its diagonal entries:** + +
+أثر المصفوفة (Trace) - أثر أي مصفوفة مربعة $A$ يرمز له ب $tr(A)$ يعتبر مجموع العناصر التي في القطر: +
+
+ +**27. Remark: for matrices A,B, we have tr(AT)=tr(A) and tr(AB)=tr(BA)** + +
+ ملاحظة : لأي مصفوفتين $A$ و $B$ لدينا $tr(A^T) = tr(A)$ و $tr(AB) = tr(BA)$. +
+
+ +**28. Determinant ― The determinant of a square matrix A∈Rn×n, noted |A| or det(A) is expressed recursively in terms of A∖i,∖j, which is the matrix A without its ith row and jth column, as follows:** + +
المحدد (Determinant) - المحدد لأي مصفوفة مربعة من الشكل $A \in \mathbb{R}^{n \times n}$ يرمز له ب $|A|$ أو $\det(A)$ ويتم تعريفه تكرارياً بإستخدام $A_{\setminus i, \setminus j}$ والتي تعتبر المصفوفة $A$ مع حذف الصف $i$ والعمود $j$، كالتالي: +<br>
+
+ +**29. Remark: A is invertible if and only if |A|≠0. Also, |AB|=|A||B| and |AT|=|A|.** + +
ملاحظة: $A$ يكون لها معكوس إذا وفقط إذا $|A| \neq 0$. كذلك $|AB| = |A||B|$ و $|A^T| = |A|$. +<br>
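The identity |AB| = |A||B| from the remark above can be checked numerically for 2×2 matrices; the matrices chosen here are arbitrary examples:

```python
# 2x2 determinant sketch, verifying the identity |AB| = |A||B|.

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]   # det = -2
B = [[0, 1], [1, 0]]   # det = -1 (a row swap)
ok = det2(matmul2(A, B)) == det2(A) * det2(B)  # True
```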
+
+ +**30. Matrix properties** + +
+خواص المصفوفات +
+
+ +**31. Definitions** + +
+التعريفات +
+
+ +**32. Symmetric decomposition ― A given matrix A can be expressed in terms of its symmetric and antisymmetric parts as follows:** + +
التفكيك المتماثل (Symmetric Decomposition) - المصفوفة $A$ يمكن التعبير عنها بإستخدام جزأين متماثل (Symmetric) وغير متماثل (Antisymmetric) كالتالي: +<br>
+
+ +**33. [Symmetric, Antisymmetric]** + +
+[متماثل، غير متماثل] +
+ +
+ +**34. Norm ― A norm is a function N:V⟶[0,+∞[ where V is a vector space, and such that for all x,y∈V, we have:** + +
المعيار (Norm) - المعيار يعتبر دالة $N: V \to [0, +\infty)$ حيث $V$ فضاء متجهي (Vector Space)، حيث أن لكل $x, y \in V$ لدينا: +<br>
+
+ +**35. N(ax)=|a|N(x) for a scalar** + +
+لأي عدد $a$ فإن $N(ax) = |a| N(x)$ +
+
+ +**36. if N(x)=0, then x=0** + +
+$N(x) =0 \implies x = 0$ +
+
+ +**37. For x∈V, the most commonly used norms are summed up in the table below:** + +
+لأي $x \in V$ المعايير الأكثر إستخداماً ملخصة في الجدول التالي: +
+
+ +**38. [Norm, Notation, Definition, Use case]** + +
+[المعيار، الرمز، التعريف، مثال للإستخدام] +
+
+ +**39. Linearly dependence ― A set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others.** + +
الارتباط الخطي (Linear Dependence) - مجموعة من المتجهات تعتبر مرتبطة خطياً إذا أمكن كتابة أحد متجهاتها كتركيبة خطية من المتجهات الأخرى. +<br>
+
+ +**40. Remark: if no vector can be written this way, then the vectors are said to be linearly independent** + +
+ملاحظة: إذا لم يتحقق هذا الشرط فإنها تسمى مستقلة خطياً . +
+
+ +**41. Matrix rank ― The rank of a given matrix A is noted rank(A) and is the dimension of the vector space generated by its columns. This is equivalent to the maximum number of linearly independent columns of A.** + +
+رتبة المصفوفة (Rank) - رتبة المصفوفة $A$ يرمز لها بـ $\text{rank}(A)$ وهي بُعد الفضاء المتجهي المولَّد من أعمدة المصفوفة. ويمكن وصفها كذلك بأنها أقصى عدد من أعمدة $A$ المستقلة خطياً.
+</div>
+
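
<div dir="rtl">
مثال توضيحي مبسّط: مصفوفة عمودها الثالث هو مجموع العمودين الأولين (ارتباط خطي)، فتكون رتبتها 2:
</div>

```python
import numpy as np

# العمود الثالث = العمود الأول + العمود الثاني
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

# عمودان فقط مستقلان خطياً
assert np.linalg.matrix_rank(A) == 2
```
<br>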
+ +**42. Positive semi-definite matrix ― A matrix A∈Rn×n is positive semi-definite (PSD) and is noted A⪰0 if we have:** + +
+مصفوفة شبه معرّفة موجبة (Positive semi-definite) - المصفوفة $A \in \mathbb{R}^{n \times n}$ تعتبر شبه معرّفة موجبة (PSD) ويرمز لها بالرمز $A \succeq 0$ إذا كان لدينا:
+</div>
+
+ +**43. Remark: similarly, a matrix A is said to be positive definite, and is noted A≻0, if it is a PSD matrix which satisfies for all non-zero vector x, xTAx>0.** + +
+ملاحظة: بشكل مشابه، المصفوفة $A$ تعتبر معرّفة موجبة (positive definite)، ويرمز لها بالرمز $A \succ 0$، إذا كانت مصفوفة PSD وتحقق لكل متجه غير صفري $x$ الشرط $x^T A x > 0$.
+</div>
+
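
<div dir="rtl">
يمكن التحقق عددياً من أن مصفوفة متماثلة شبه معرّفة موجبة عبر قيمها الذاتية (مثال توضيحي، مع ملاحظة أن أي مصفوفة من الشكل MᵀM تكون شبه معرّفة موجبة):
</div>

```python
import numpy as np

# M^T M تكون دائماً شبه معرّفة موجبة
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A = M.T @ M

eigvals = np.linalg.eigvalsh(A)   # القيم الذاتية لمصفوفة متماثلة
assert np.all(eigvals >= -1e-10)  # جميعها غير سالبة => A شبه معرّفة موجبة

# التحقق من الشرط x^T A x >= 0 لمتجه اعتباطي
x = np.array([-1.0, 2.0])
assert x @ A @ x >= 0
```
<br>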
+ +**44. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** + +
+القيمة الذاتية (eigenvalue)، المتجه الذاتي (eigenvector) - إذا كان لدينا مصفوفة $A \in \mathbb{R}^{n \times n}$، فإن القيمة $\lambda$ تعتبر قيمة ذاتية للمصفوفة $A$ إذا وُجد متجه $z \in \mathbb{R}^n \setminus \{0\}$، يسمى متجهاً ذاتياً، بحيث أن:
+</div>
+
+ +**45. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** + +
+النظرية الطيفية (Spectral theorem) - لتكن $A \in \mathbb{R}^{n \times n}$. إذا كانت $A$ متماثلة، فإنه يمكن تحويلها إلى مصفوفة قطرية بواسطة مصفوفة متعامدة (orthogonal) حقيقية $U \in \mathbb{R}^{n \times n}$. وبرمز $\Lambda = \text{diag}(\lambda_1, \dots, \lambda_n)$ يكون لدينا:
+</div>
+
+ +**46. diagonal** + +
+ قطرية +
+
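
<div dir="rtl">
مثال توضيحي مبسّط للنظرية الطيفية وتعريف القيم الذاتية، باستخدام np.linalg.eigh لمصفوفة متماثلة اعتباطية:
</div>

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # مصفوفة متماثلة

lam, U = np.linalg.eigh(A)   # القيم الذاتية (تصاعدياً) والمتجهات الذاتية
Lam = np.diag(lam)

assert np.allclose(U @ Lam @ U.T, A)     # A = U Λ U^T
assert np.allclose(U.T @ U, np.eye(2))   # U متعامدة

# تعريف القيمة الذاتية: A z = λ z
z = U[:, 0]
assert np.allclose(A @ z, lam[0] * z)
```
<br>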
+ +**47. Singular-value decomposition ― For a given matrix A of dimensions m×n, the singular-value decomposition (SVD) is a factorization technique that guarantees the existence of U m×m unitary, Σ m×n diagonal and V n×n unitary matrices, such that:** + +
+تفكيك القيمة المنفردة (Singular-value decomposition) - لأي مصفوفة $A$ من الشكل $m \times n$، يعتبر تفكيك القيمة المنفردة (SVD) طريقة تحليل تضمن وجود مصفوفة واحدية (unitary) $U \in \mathbb{R}^{m \times m}$، ومصفوفة قطرية $\Sigma \in \mathbb{R}^{m \times n}$، ومصفوفة واحدية $V \in \mathbb{R}^{n \times n}$ بحيث أن:
+</div>
+
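
<div dir="rtl">
مثال توضيحي لتفكيك SVD باستخدام np.linalg.svd، مع إعادة بناء $\Sigma$ بالشكل m×n (المصفوفة أدناه اعتباطية للتوضيح):
</div>

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])   # مصفوفة 2×3

U, s, Vt = np.linalg.svd(A)       # U: 2×2، s: القيم المنفردة، Vt: 3×3
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)   # بناء Σ بالشكل m×n

assert np.allclose(U @ Sigma @ Vt, A)      # A = U Σ V^T
assert np.allclose(U.T @ U, np.eye(2))     # U واحدية (متعامدة)
assert np.allclose(Vt @ Vt.T, np.eye(3))   # V واحدية
```
<br>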
+ +**48. Matrix calculus** + +
+ حساب المصفوفات +
+
+ +**49. Gradient ― Let f:Rm×n→R be a function and A∈Rm×n be a matrix. The gradient of f with respect to A is a m×n matrix, noted ∇Af(A), such that:** + +
+المشتقة في فضاءات عالية (Gradient) - لتكن $f: \mathbb{R}^{m \times n} \rightarrow \mathbb{R}$ دالة و $A \in \mathbb{R}^{m \times n}$ مصفوفة. المشتقة العليا لـ $f$ بالنسبة لـ $A$ هي مصفوفة من الشكل $m \times n$ يرمز لها بـ $\nabla_A f(A)$ بحيث أن:
+</div>
+
+ +**50. Remark: the gradient of f is only defined when f is a function that returns a scalar.** + +
+ملاحظة: المشتقة العليا معرّفة فقط إذا كانت الدالة $f$ تُرجع قيمة عددية (scalar).
+</div>
+
+ +**51. Hessian ― Let f:Rn→R be a function and x∈Rn be a vector. The hessian of f with respect to x is a n×n symmetric matrix, noted ∇2xf(x), such that:** + +
+هيشيان (Hessian) - لتكن $f: \mathbb{R}^n \rightarrow \mathbb{R}$ دالة و $x \in \mathbb{R}^n$ متجهاً. الهيشيان لـ $f$ بالنسبة لـ $x$ هو مصفوفة متماثلة من الشكل $n \times n$ يرمز لها بالرمز $\nabla^2_x f(x)$ بحيث أن:
+</div>
+
+ +**52. Remark: the hessian of f is only defined when f is a function that returns a scalar** + +
+ملاحظة: الهيشيان معرّف فقط إذا كانت الدالة $f$ تُرجع قيمة عددية (scalar).
+</div>
+
+
+ +**53. Gradient operations ― For matrices A,B,C, the following gradient properties are worth having in mind:** + +
+عمليات المشتقة (Gradient operations) - لأي مصفوفات $A, B, C$ فإن خواص المشتقة التالية جديرة بالملاحظة:
+</div>
+
+
+ +**54. [General notations, Definitions, Main matrices]** + +
+ [الرموز العامة، التعاريف، المصفوفات الرئيسية] +
+ +
+ +**55. [Matrix operations, Multiplication, Other operations]** + +
+ [عمليات المصفوفات، الضرب، عمليات أخرى] +
+
+ +**56. [Matrix properties, Norm, Eigenvalue/Eigenvector, Singular-value decomposition]** + +
+ [خواص المصفوفات، المعيار، قيمة ذاتية/متجه ذاتي، تفكيك القيمة المنفردة] +
+
+ +**57. [Matrix calculus, Gradient, Hessian, Operations]** + +
+ [حساب المصفوفات، مشتقة الفضاءات العالية، الهيشيان، العمليات] +
diff --git a/ar/cs-229-machine-learning-tips-and-tricks.md b/ar/cs-229-machine-learning-tips-and-tricks.md new file mode 100644 index 000000000..d48445a75 --- /dev/null +++ b/ar/cs-229-machine-learning-tips-and-tricks.md @@ -0,0 +1,338 @@ +**Machine Learning tips and tricks translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-229/cheatsheet-machine-learning-tips-and-tricks) + +
+ +**1. Machine Learning tips and tricks cheatsheet** + +
+مرجع سريع لنصائح وحيل تعلّم الآلة +
+
+ +**2. Classification metrics** + +
+مقاييس التصنيف +
+
+ +**3. In a context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** + +
+في سياق التصنيف الثنائي، هذه هي المقاييس (metrics) الأساسية التي يجدر مراقبتها من أجل تقييم أداء النموذج.
+</div>
+
+ +**4. Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:** + +
+مصفوفة الدقّة (confusion matrix) - تستخدم مصفوفة الدقّة لأخذ تصور شامل عند تقييم أداء النموذج. وهي تعرّف كالتالي: +
+
+ +**5. [Predicted class, Actual class]** + +
+[التصنيف المتوقع، التصنيف الفعلي] +
+
+ +**6. Main metrics ― The following metrics are commonly used to assess the performance of classification models:** + +
+المقاييس الأساسية - المقاييس التالية تستخدم في العادة لتقييم أداء نماذج التصنيف: +
+
+ +**7. [Metric, Formula, Interpretation]** + +
+[المقياس، المعادلة، التفسير] +
+
+ +**8. Overall performance of model** + +
+الأداء العام للنموذج +
+
+ +**9. How accurate the positive predictions are** + +
+دقّة التوقعات الإيجابية (positive) +
+
+ +**10. Coverage of actual positive sample** + +
+تغطية عينات التوقعات الإيجابية الفعلية +
+
+ +**11. Coverage of actual negative sample** + +
+تغطية عينات التوقعات السلبية الفعلية +
+
+ +**12. Hybrid metric useful for unbalanced classes** + +
+مقياس هجين مفيد للأصناف غير المتوازنة (unbalanced) +
+
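
<div dir="rtl">
المقاييس أعلاه يمكن حسابها مباشرة من عناصر مصفوفة الدقّة (TP، FP، TN، FN). الأعداد في المثال التالي افتراضية للتوضيح فقط:
</div>

```python
# حساب المقاييس الأساسية من مصفوفة الدقّة (قيم افتراضية)
TP, FP, TN, FN = 40, 10, 45, 5

accuracy    = (TP + TN) / (TP + TN + FP + FN)  # الأداء العام للنموذج
precision   = TP / (TP + FP)                   # دقّة التوقعات الإيجابية
recall      = TP / (TP + FN)                   # تغطية العينات الإيجابية (TPR)
specificity = TN / (TN + FP)                   # تغطية العينات السلبية (TNR)
f1          = 2 * precision * recall / (precision + recall)  # مقياس هجين
```
<br>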
+ +**13. ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are are summed up in the table below:** + +
+منحنى دقّة الأداء (ROC) - منحنى دقّة الآداء، ويطلق عليه ROC، هو رسمة لمعدل التصنيفات الإيجابية الصحيحة (TPR) مقابل معدل التصنيفات الإيجابية الخاطئة (FPR) باستخدام قيم حد (threshold) متغيرة. هذه المقاييس ملخصة في الجدول التالي: +
+
+ +**14. [Metric, Formula, Equivalent]** + +
+[المقياس، المعادلة، مرادف] +
+
+ +**15. AUC ― The area under the receiving operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:** + +
+المساحة تحت منحنى دقة الأداء (AUC) - المساحة تحت منحنى دقة الأداء، ويطلق عليها AUC أو AUROC، هي المساحة تحت منحنى ROC كما هو موضح في الرسمة التالية:
+</div>
+
+ +**16. [Actual, Predicted]** + +
+[الفعلي، المتوقع] +
+
+ +**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** + +
+المقاييس الأساسية - إذا كان لدينا نموذج الانحدار f، فإن المقاييس التالية غالباً ما تستخدم لتقييم أداء النموذج: +
+
+ +**18. [Total sum of squares, Explained sum of squares, Residual sum of squares]** + +
+[المجموع الكلي للمربعات، مجموع المربعات المُفسَّر، مجموع المربعات المتبقي] +
+
+ +**19. Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** + +
+مُعامل التحديد (Coefficient of determination) - مُعامل التحديد، وغالباً يرمز له بـ R2 أو r2، يعطي قياس لمدى مطابقة النموذج للنتائج الملحوظة، ويعرف كما يلي: +
+
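
<div dir="rtl">
مثال مبسّط لحساب R² يدوياً وفق التعريف (قيم y الحقيقية والمتوقعة افتراضية للتوضيح):
</div>

```python
# حساب مُعامل التحديد R² يدوياً
y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.5, 5.0, 7.5, 9.0]

mean_y = sum(y_true) / len(y_true)
ss_tot = sum((y - mean_y) ** 2 for y in y_true)             # المجموع الكلي للمربعات
ss_res = sum((y - f) ** 2 for y, f in zip(y_true, y_pred))  # مجموع المربعات المتبقي

r2 = 1 - ss_res / ss_tot   # كلما اقترب من 1 كان النموذج أفضل مطابقة
```
<br>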
+ +**20. Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** + +
+المقاييس الرئيسية - المقاييس التالية تستخدم غالباً لتقييم أداء نماذج الانحدار، وذلك بأن يتم الأخذ في الحسبان عدد المتغيرات n المستخدمة فيها: +
+
+ +**21. where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.** + +
+حيث L هو الأرجحية، و ˆσ2 تقدير التباين الخاص بكل نتيجة. +
+
+ +**22. Model selection** + +
+اختيار النموذج +
+
+ +**23. Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** + +
+مفردات - عند اختيار النموذج، نفرق بين 3 أجزاء من البيانات التي لدينا كالتالي: +
+
+ +**24. [Training set, Validation set, Testing set]** + +
+[مجموعة تدريب، مجموعة تحقق، مجموعة اختبار] +
+
+ +**25. [Model is trained, Model is assessed, Model gives predictions]** + +
+[يتم تدريب النموذج، يتم تقييم النموذج، النموذج يعطي التوقعات] +
+
+ +**26. [Usually 80% of the dataset, Usually 20% of the dataset]** + +
+[غالباً 80% من مجموعة البيانات، غالباً 20% من مجموعة البيانات] +
+
+ +**27. [Also called hold-out or development set, Unseen data]** + +
+[يطلق عليها كذلك المجموعة المُجنّبة أو مجموعة التطوير، بيانات لم يسبق رؤيتها من قبل] +
+
+ +**28. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** + +
+بمجرد اختيار النموذج، يتم تدريبه على مجموعة البيانات بالكامل ثم يتم اختباره على مجموعة اختبار لم يسبق رؤيتها من قبل. كما هو موضح في الشكل التالي: +
+
+ +**29. Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** + +
+التحقق المتقاطع (Cross-validation) - التحقق المتقاطع، وكذلك يختصر بـ CV، هو طريقة تستخدم لاختيار نموذج بحيث لا يعتمد بشكل كبير على مجموعة بيانات التدريب المبدأية. أنواع التحقق المتقاطع المختلفة ملخصة في الجدول التالي: +
+
+ +**30. [Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** + +
+[التدريب على k-1 جزء والتقييم باستخدام الجزء الباقي، التدريب على n−p عينة والتقييم باستخدام الـ p عينات المتبقية] +
+
+ +**31. [Generally k=5 or 10, Case p=1 is called leave-one-out]** + +
+[بشكل عام k=5 أو 10، الحالة p=1 يطلق عليها الإبقاء على واحد (leave-one-out)] +
+
+ +**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.** + +
+الطريقة الأكثر استخداماً يطلق عليها التحقق المتقاطع ذو الأجزاء k (k-fold)، ويتم فيها تقسيم بيانات التدريب إلى k جزءاً، بحيث يتم تدريب النموذج باستخدام k−1 جزءاً والتحقق باستخدام الجزء المتبقي، ويتم تكرار ذلك k مرة. يتم بعد ذلك حساب متوسط الخطأ على الأجزاء k ويسمى خطأ التحقق المتقاطع.
+</div>
+
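
<div dir="rtl">
رسم تخطيطي مبسّط لتقسيم مؤشرات البيانات إلى k أجزاء (الدالة kfold_indices هنا للتوضيح فقط وليست من مكتبة معينة، وبدون خلط عشوائي):
</div>

```python
# تقسيم مبسّط لمؤشرات n عينة إلى k أجزاء متساوية تقريباً
def kfold_indices(n, k):
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(10, 5)
# في كل تكرار: جزء واحد للتحقق والأجزاء k−1 المتبقية للتدريب
for i, val_idx in enumerate(folds):
    train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
    assert len(val_idx) + len(train_idx) == 10
```
<br>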
+ +**33. Regularization ― The regularization procedure aims at avoiding the model to overfit the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:** + +
+الضبط (Regularization) - عملية الضبط تهدف إلى تفادي فرط التخصيص (overfit) للنموذج، وهي بذلك تتعامل مع مشاكل التباين العالي. الجدول التالي يلخص أنواع وطرق الضبط الأكثر استخداماً:
+</div>
+
+ +**34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** + +
+[يقلص المُعاملات إلى 0، جيد لاختيار المتغيرات، يجعل المُعاملات أصغر، المفاضلة بين اختيار المتغيرات والمُعاملات الصغيرة] +
+
+ +**35. Diagnostics** + +
+التشخيصات +
+
+ +**36. Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.** + +
+الانحياز (Bias) - الانحياز للنموذج هو الفرق بين التنبؤ المتوقع والنموذج الحقيقي الذي نحاول تنبؤه للبيانات المعطاة. +
+
+ +**37. Variance ― The variance of a model is the variability of the model prediction for given data points.** + +
+التباين (Variance) - تباين النموذج هو مقدار التغير في تنبؤ النموذج لنقاط البيانات المعطاة. +
+
+ +**38. Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** + +
+موازنة الانحياز/التباين (Bias/variance tradeoff) - كلما زادت بساطة النموذج، زاد الانحياز، وكلما زاد تعقيد النموذج، زاد التباين. +
+
+ +**39. [Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]** + +
+[الأعراض، توضيح الانحدار، توضيح التصنيف، توضيح التعلم العميق، العلاجات الممكنة] +
+
+ +**40. [High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]** + +
+[خطأ التدريب عالي، خطأ التدريب قريب من خطأ الاختبار، انحياز عالي، خطأ التدريب أقل بقليل من خطأ الاختبار، خطأ التدريب منخفض جداً، خطأ التدريب أقل بكثير من خطأ الاختبار، تباين عالي] +
+
+ +**41. [Complexify model, Add more features, Train longer, Perform regularization, Get more data]** + +
+[زيادة تعقيد النموذج، إضافة المزيد من الخصائص، تدريب لمدة أطول، إجراء الضبط (regularization)، الحصول على المزيد من البيانات] +
+
+ +**42. Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** + +
+تحليل الخطأ - تحليل الخطأ هو تحليل السبب الرئيسي للفرق في الأداء بين النماذج الحالية والنماذج المثالية. +
+
+ +**43. Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** + +
+تحليل استئصالي (Ablative analysis) - التحليل الاستئصالي هو تحليل السبب الرئيسي للفرق في الأداء بين النماذج الحالية والنماذج المبدئية (baseline). +
+
+ +**44. Regression metrics** + +
+مقاييس الانحدار +
+
+ +**45. [Classification metrics, confusion matrix, accuracy, precision, recall, F1 score, ROC]** + +
+[مقاييس التصنيف، مصفوفة الدقّة، الضبط (accuracy)، الدقة (precision)، الاستدعاء (recall)، درجة F1] +
+
+ +**46. [Regression metrics, R squared, Mallow's CP, AIC, BIC]** + +
+[مقاييس الانحدار، مربع R، معيار معامل مالوس (Mallow's)، معيار آكياك المعلوماتي (AIC)، معيار المعلومات البايزي (BIC)] +
+
+ +**47. [Model selection, cross-validation, regularization]** + +
+[اختيار النموذج، التحقق المتقاطع، الضبط] +
+
+ +**48. [Diagnostics, Bias/variance tradeoff, error/ablative analysis]** + +
+[التشخيصات، موازنة الانحياز/التباين، تحليل الخطأ/التحليل الاستئصالي] +
diff --git a/ar/cs-229-supervised-learning.md b/ar/cs-229-supervised-learning.md new file mode 100644 index 000000000..9104d46a1 --- /dev/null +++ b/ar/cs-229-supervised-learning.md @@ -0,0 +1,663 @@ +**1. Supervised Learning cheatsheet** + +
+مرجع سريع للتعلّم المُوَجَّه +
+
+ +**2. Introduction to Supervised Learning** + +
+مقدمة للتعلّم المُوَجَّه +
+
+ +**3. Given a set of data points {x(1),...,x(m)} associated to a set of outcomes {y(1),...,y(m)}, we want to build a classifier that learns how to predict y from x.** + +
+إذا كان لدينا مجموعة من نقاط البيانات {x(1),...,x(m)} مرتبطة بمجموعة مخرجات {y(1),...,y(m)}، نريد أن نبني مُصَنِّف يتعلم كيف يتوقع y من x. +
+
+ +**4. Type of prediction ― The different types of predictive models are summed up in the table below:** + +
+نوع التوقّع - أنواع نماذج التوقّع المختلفة موضحة في الجدول التالي: +
+
+ +**5. [Regression, Classifier, Outcome, Examples]** + +
+[الانحدار (Regression)، التصنيف (Classification)، المُخرَج، أمثلة] +
+
+ +**6. [Continuous, Class, Linear regression, Logistic regression, SVM, Naive Bayes]** + +
+[مستمر، صنف، انحدار خطّي (Linear regression)، انحدار لوجستي (Logistic regression)، آلة المتجهات الداعمة (SVM)، بايز البسيط (Naive Bayes)] +
+
+ +**7. Type of model ― The different models are summed up in the table below:** + +
+نوع النموذج - أنواع النماذج المختلفة موضحة في الجدول التالي: +
+
+ +**8. [Discriminative model, Generative model, Goal, What's learned, Illustration, Examples]** + +
+[نموذج تمييزي (discriminative)، نموذج توليدي (Generative)، الهدف، ماذا يتعلم، توضيح، أمثلة] +
+
+ +**9. [Directly estimate P(y|x), Estimate P(x|y) to then deduce P(y|x), Decision boundary, Probability distributions of the data, Regressions, آلة المتجهات الداعمة (SVM), GDA, Naive Bayes]** + +
+[التقدير المباشر لـ P(y|x)، تقدير P(x|y) ثم استنتاج P(y|x)، حدود القرار، التوزيع الاحتمالي للبيانات، الانحدار (Regression)، آلة المتجهات الداعمة (SVM)، GDA، بايز البسيط (Naive Bayes)] +
+
+ +**10. Notations and general concepts** + +
+الرموز ومفاهيم أساسية +
+
+ +**11. Hypothesis ― The hypothesis is noted hθ and is the model that we choose. For a given input data x(i) the model prediction output is hθ(x(i)).** + +
+الفرضية (Hypothesis) - الفرضية، ويرمز لها بـ hθ، هي النموذج الذي نختاره. إذا كان لدينا المدخل x(i)، فإن المخرج الذي سيتوقعه النموذج هو hθ(x(i)). +
+
+ +**12. Loss function ― A loss function is a function L:(z,y)∈R×Y⟼L(z,y)∈R that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:** + +
+دالة الخسارة (Loss function) - دالة الخسارة هي الدالة L:(z,y)∈R×Y⟼L(z,y)∈R التي تأخذ كمدخلات القيمة المتوقعة z والقيمة الحقيقية y وتعطينا الاختلاف بينهما. الجدول التالي يحتوي على بعض دوال الخسارة الشائعة: +
+
+ +**13. [Least squared error, Logistic loss, Hinge loss, Cross-entropy]** + +
+[خطأ أصغر تربيع (Least squared error)، خسارة لوجستية (Logistic loss)، خسارة مفصلية (Hinge loss)، الانتروبيا التقاطعية (Cross-entropy)] +
+
+ +**14. [Linear regression, Logistic regression, SVM, Neural Network]** + +
+[الانحدار الخطّي (Linear regression)، الانحدار اللوجستي (Logistic regression)، آلة المتجهات الداعمة (SVM)، الشبكات العصبية (Neural Network)] +
+
+ +**15. Cost function ― The cost function J is commonly used to assess the performance of a model, and is defined with the loss function L as follows:** + +
+دالة التكلفة (Cost function) - دالة التكلفة J تستخدم عادة لتقييم أداء نموذج ما، ويتم تعريفها مع دالة الخسارة L كالتالي: +
+
+ +**16. Gradient descent ― By noting α∈R the learning rate, the update rule for gradient descent is expressed with the learning rate and the cost function J as follows:** + +
+النزول الاشتقاقي (Gradient descent) - لنعرّف معدل التعلّم α∈R، يمكن تعريف القانون الذي يتم تحديث خوارزمية النزول الاشتقاقي من خلاله باستخدام معدل التعلّم ودالة التكلفة J كالتالي: +
+
+ +**17. Remark: Stochastic gradient descent (SGD) is updating the parameter based on each training example, and batch gradient descent is on a batch of training examples.** + +
+ملاحظة: في النزول الاشتقاقي العشوائي (Stochastic gradient descent (SGD)) يتم تحديث المُعاملات (parameters) بناءاً على كل عينة تدريب على حدة، بينما في النزول الاشتقاقي الحُزَمي (batch gradient descent) يتم تحديثها باستخدام حُزَم من عينات التدريب. +
+
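
<div dir="rtl">
مثال مبسّط لقانون التحديث في النزول الاشتقاقي الحُزَمي لتقدير مُعامل واحد θ في نموذج خطي بسيط y = θx (البيانات اصطناعية مولّدة من θ = 2 للتوضيح فقط):
</div>

```python
# نزول اشتقاقي حُزَمي على بيانات اصطناعية
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # مولّدة من y = 2x

theta, alpha = 0.0, 0.01    # قيمة مبدئية ومعدل تعلّم
for _ in range(1000):
    # اشتقاق دالة التكلفة J(θ) = (1/2m) Σ (θx − y)²
    grad = sum((theta * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    theta -= alpha * grad   # قانون التحديث: θ ← θ − α ∇J(θ)

assert abs(theta - 2.0) < 1e-3   # يقترب من المُعامل الحقيقي
```
<br>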
+ +**18. Likelihood ― The likelihood of a model L(θ) given parameters θ is used to find the optimal parameters θ through maximizing the likelihood. In practice, we use the log-likelihood ℓ(θ)=log(L(θ)) which is easier to optimize. We have:** + +
+الأرجحية (Likelihood) - تستخدم أرجحية النموذج L(θ)، حيث أن θ هي المُدخلات، للبحث عن المُدخلات θ الأحسن عن طريق تعظيم (maximizing) الأرجحية. عملياً يتم استخدام الأرجحية اللوغاريثمية (log-likelihood) ℓ(θ)=log(L(θ)) حيث أنها أسهل في التحسين (optimize). فيكون لدينا: +
+
+ +**19. Newton's algorithm ― The Newton's algorithm is a numerical method that finds θ such that ℓ′(θ)=0. Its update rule is as follows:** + +
+خوارزمية نيوتن (Newton's algorithm) - خوارزمية نيوتن هي طريقة حسابية للعثور على θ بحيث يكون ℓ′(θ)=0. قاعدة التحديث للخوارزمية كالتالي: +
+
+ +**20. Remark: the multidimensional generalization, also known as the Newton-Raphson method, has the following update rule:** + +
+ملاحظة: هناك خوارزمية أعم وهي متعددة الأبعاد (multidimensional)، يطلق عليها خوارزمية نيوتن-رافسون (Newton-Raphson)، ويتم تحديثها عبر القانون التالي: +
+
+ +**21. Linear models** + +
+النماذج الخطيّة (Linear models) +
+
+ +**22. Linear regression** + +
+الانحدار الخطّي (Linear regression) +
+
+ +**23. We assume here that y|x;θ∼N(μ,σ2)** + +
+هنا نفترض أن y|x;θ∼N(μ,σ2) +
+
+ +**24. Normal equations ― By noting X the matrix design, the value of θ that minimizes the cost function is a closed-form solution such that:** + +
+المعادلة الطبيعية/الناظمية (Normal) - إذا كان لدينا المصفوفة X، القيمة θ التي تقلل من دالة التكلفة يمكن حلها رياضياً بشكل مغلق (closed-form) عن طريق: +
+
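
<div dir="rtl">
مثال توضيحي للحل المغلق θ = (XᵀX)⁻¹Xᵀy باستخدام NumPy (بيانات اصطناعية مولّدة من y = 1 + 2x، مع عمود من الواحدات لمُعامل التقاطع):
</div>

```python
import numpy as np

# مصفوفة التصميم: العمود الأول واحدات (التقاطع)، والثاني قيم x
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])   # y = 1 + 2x

theta = np.linalg.inv(X.T @ X) @ X.T @ y   # الحل المغلق θ = (XᵀX)⁻¹ Xᵀ y

assert np.allclose(theta, [1.0, 2.0])
```
<br>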
+ +**25. LMS algorithm ― By noting α the learning rate, the update rule of the Least Mean Squares (LMS) algorithm for a training set of m data points, which is also known as the Widrow-Hoff learning rule, is as follows:** + +
+خوارزمية أصغر معدل تربيع LMS - إذا كان لدينا معدل التعلّم α، فإن قانون التحديث لخوارزمية أصغر معدل تربيع (Least Mean Squares (LMS)) لمجموعة بيانات من m عينة، ويطلق عليه قانون تعلم ويدرو-هوف (Widrow-Hoff)، كالتالي: +
+
+ +**26. Remark: the update rule is a particular case of the gradient ascent.** + +
+ملاحظة: قانون التحديث هذا يعتبر حالة خاصة من الصعود الاشتقاقي (gradient ascent).
+</div>
+
+ +**27. LWR ― Locally Weighted Regression, also known as LWR, is a variant of linear regression that weights each training example in its cost function by w(i)(x), which is defined with parameter τ∈R as:** + +
+الانحدار الموزون محليّاً (LWR) - الانحدار الموزون محليّاً (Locally Weighted Regression)، ويعرف بـ LWR، هو نوع من الانحدار الخطي يَزِن كل عينة تدريب أثناء حساب دالة التكلفة باستخدام w(i)(x)، التي يمكن تعريفها باستخدام المُدخل (parameter) τ∈R كالتالي: +
+
+ +**28. Classification and logistic regression** + +
+التصنيف والانحدار اللوجستي +
+
+ +**29. Sigmoid function ― The sigmoid function g, also known as the logistic function, is defined as follows:** + +
+دالة سيجمويد (Sigmoid) - دالة سيجمويد g، وتعرف كذلك بالدالة اللوجستية، تعرّف كالتالي: +
+
+ +**30. Logistic regression ― We assume here that y|x;θ∼Bernoulli(ϕ). We have the following form:** + +
+الانحدار اللوجستي (Logistic regression) - نفترض هنا أن y|x;θ∼Bernoulli(ϕ). فيكون لدينا: +
+
+ +**31. Remark: there is no closed form solution for the case of logistic regressions.** + +
+ملاحظة: ليس هناك حل رياضي مغلق للانحدار اللوجستي. +
+
+ +**32. Softmax regression ― A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. By convention, we set θK=0, which makes the Bernoulli parameter ϕi of each class i equal to:** + +
+انحدار سوفت ماكس (Softmax) - ويطلق عليه الانحدار اللوجستي متعدد الأصناف (multiclass logistic regression)، يستخدم لتعميم الانحدار اللوجستي إذا كان لدينا أكثر من صنفين. في العرف يتم تعيين θK=0، بحيث تجعل مُدخل بيرنوللي (Bernoulli) ϕi لكل فئة i يساوي: +
+
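
<div dir="rtl">
رسم مبسّط لدالة سوفت ماكس (يتم فيه طرح القيمة القصوى قبل الأُسّ للاستقرار العددي، وهي حيلة شائعة وليست جزءاً من التعريف):
</div>

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # طرح القيمة القصوى للاستقرار
    total = sum(exps)
    return [e / total for e in exps]

# الدرجة الأخيرة تقابل تثبيت θ_K = 0
phi = softmax([2.0, 1.0, 0.0])
assert abs(sum(phi) - 1.0) < 1e-12   # الاحتمالات مجموعها واحد
assert phi[0] > phi[1] > phi[2]      # الدرجة الأعلى تعطي الاحتمال الأكبر
```
<br>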
+ +**33. Generalized Linear Models** + +
+النماذج الخطية العامة (Generalized Linear Models - GLM) +
+
+ +**34. Exponential family ― A class of distributions is said to be in the exponential family if it can be written in terms of a natural parameter, also called the canonical parameter or link function, η, a sufficient statistic T(y) and a log-partition function a(η) as follows:** + +
+العائلة الأُسيّة (Exponential family) - يطلق على صنف من التوزيعات (distributions) بأنها تنتمي إلى العائلة الأسيّة إذا كان يمكن كتابتها بواسطة مُدخل طبيعي (natural parameter)، ويطلق عليه كذلك المُدخل القانوني (canonical parameter) أو دالة الربط (link function)، η، وإحصاء كافٍ (sufficient statistic) T(y)، ودالة تجزئة لوغاريثمية a(η)، كالتالي:
+</div>
+
+ +**35. Remark: we will often have T(y)=y. Also, exp(−a(η)) can be seen as a normalization parameter that will make sure that the probabilities sum to one.** + +
+ملاحظة: كثيراً ما سيكون T(y)=y. كذلك فإن exp(−a(η)) يمكن أن تفسر كمُدخل تسوية (normalization) للتأكد من أن الاحتمالات يكون حاصل جمعها يساوي واحد. +
+
+ +**36. Here are the most common exponential distributions summed up in the following table:** + +
+تم تلخيص أكثر التوزيعات الأسيّة استخداماً في الجدول التالي: +
+
+ +**37. [Distribution, Bernoulli, Gaussian, Poisson, Geometric]** + +
+[التوزيع، بِرنوللي (Bernoulli)، جاوسي (Gaussian)، بواسون (Poisson)، هندسي (Geometric)] +
+
+ +**38. Assumptions of GLMs ― Generalized Linear Models (GLM) aim at predicting a random variable y as a function fo x∈Rn+1 and rely on the following 3 assumptions:** + +
+افتراضات GLMs - تهدف النماذج الخطيّة العامة (GLM) إلى توقع المتغير العشوائي y كدالة لـ x∈Rn+1، وتستند إلى ثلاثة افتراضات: +
+
+ +**39. Remark: ordinary least squares and logistic regression are special cases of generalized linear models.** + +
+ملاحظة: أصغر تربيع (least squares) الاعتيادي و الانحدار اللوجستي يعتبران من الحالات الخاصة للنماذج الخطيّة العامة. +
+
+ +**40. Support Vector Machines** + +
+آلة المتجهات الداعمة (Support Vector Machines) +
+
+ +**41: The goal of support vector machines is to find the line that maximizes the minimum distance to the line.** + +
+تهدف آلة المتجهات الداعمة (SVM) إلى العثور على الخط الذي يعظم أصغر مسافة إليه: +
+
+ +**42: Optimal margin classifier ― The optimal margin classifier h is such that:** + +
+مُصنِّف الهامش الأحسن (Optimal margin classifier) - يعرَّف مُصنِّف الهامش الأحسن h كالتالي: +
+
+ +**43: where (w,b)∈Rn×R is the solution of the following optimization problem:** + +
+حيث (w,b)∈Rn×R هو الحل لمشكلة التحسين (optimization) التالية: +
+
+ +**44. such that** + +
+بحيث أن +
+
+ +**45. support vectors** + +
+المتجهات الداعمة (support vectors) +
+
+ +**46. Remark: the line is defined as wTx−b=0.** + +
+ملاحظة: يتم تعريف الخط بهذه المعادلة wTx−b=0. +
+
+ +**47. Hinge loss ― The hinge loss is used in the setting of SVMs and is defined as follows:** + +
+الخسارة المفصلية (Hinge loss) - تستخدم الخسارة المفصلية في حل SVM ويعرف على النحو التالي: +
+
+ +**48. Kernel ― Given a feature mapping ϕ, we define the kernel K to be defined as:** + +
+النواة (Kernel) - إذا كان لدينا دالة ربط الخصائص (features) ϕ، يمكننا تعريف النواة K كالتالي: +
+
+ +**49. In practice, the kernel K defined by K(x,z)=exp(−||x−z||22σ2) is called the Gaussian kernel and is commonly used.** + +
+عملياً، يمكن أن تُعَرَّف الدالة K عن طريق المعادلة K(x,z)=exp(−||x−z||²/(2σ²))، ويطلق عليها النواة الجاوسية (Gaussian kernel)، وهي تستخدم بكثرة.
+</div>
+
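
<div dir="rtl">
يمكن كتابة النواة الجاوسية مباشرة وفق تعريفها كالتالي (مثال توضيحي):
</div>

```python
import math

# النواة الجاوسية K(x, z) = exp(−‖x−z‖² / (2σ²))
def gaussian_kernel(x, z, sigma=1.0):
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-sq_dist / (2 * sigma ** 2))

assert gaussian_kernel([1.0, 2.0], [1.0, 2.0]) == 1.0    # K(x, x) = 1
k = gaussian_kernel([0.0, 0.0], [3.0, 4.0], sigma=5.0)   # ‖x−z‖² = 25
assert abs(k - math.exp(-0.5)) < 1e-12
```
<br>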
+ +**50. [Non-linear separability, Use of a kernel mapping, Decision boundary in the original space]** + +
+[قابلية الفصل غير الخطي، استخدام ربط النواة، حد القرار في الفضاء الأصلي] +
+
+ +**51. Remark: we say that we use the "kernel trick" to compute the cost function using the kernel because we actually don't need to know the explicit mapping ϕ, which is often very complicated. Instead, only the values K(x,z) are needed.** + +
+ملاحظة: نقول أننا نستخدم "حيلة النواة" (kernel trick) لحساب دالة التكلفة عند استخدام النواة لأننا في الحقيقة لا نحتاج أن نعرف التحويل الصريح ϕ، الذي يكون في الغالب شديد التعقيد. بدلاً من ذلك، نحتاج فقط أن نحسب القيم K(x,z).
+</div>
+
+ +**52. Lagrangian ― We define the Lagrangian L(w,b) as follows:** + +
+اللّاغرانجي (Lagrangian) - يتم تعريف اللّاغرانجي L(w,b) على النحو التالي: +
+
+ +**53. Remark: the coefficients βi are called the Lagrange multipliers.** + +
+ملاحظة: المعامِلات (coefficients) βi يطلق عليها مضروبات لاغرانج (Lagrange multipliers). +
+
+ +**54. Generative Learning** + +
+التعلم التوليدي (Generative Learning) +
+
+ +**55. A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.** + +
+النموذج التوليدي في البداية يحاول أن يتعلم كيف تم توليد البيانات عن طريق تقدير P(x|y)، التي يمكن حينها استخدامها لتقدير P(y|x) باستخدام قانون بايز (Bayes' rule). +
+
+ +**56. Gaussian Discriminant Analysis** + +
+تحليل التمايز الجاوسي (Gaussian Discriminant Analysis) +
+
+ +**57. Setting ― The Gaussian Discriminant Analysis assumes that y and x|y=0 and x|y=1 are such that:** + +
+الإطار - تحليل التمايز الجاوسي يفترض أن y و x|y=0 و x|y=1 بحيث يكونوا كالتالي: +
+
+ +**58. Estimation ― The following table sums up the estimates that we find when maximizing the likelihood:** + +
+التقدير - الجدول التالي يلخص التقديرات التي يمكننا التوصل لها عند تعظيم الأرجحية (likelihood): +
+
+ +**59. Naive Bayes** + +
+بايز البسيط (Naive Bayes) +
+
+ +**60. Assumption ― The Naive Bayes model supposes that the features of each data point are all independent:** + +
+الافتراض - يفترض نموذج بايز البسيط أن جميع الخصائص لكل عينة بيانات مستقلة (independent): +
+
+ +**61. Solutions ― Maximizing the log-likelihood gives the following solutions, with k∈{0,1},l∈[[1,L]]** + +
+الحل - تعظيم الأرجحية اللوغاريثمية (log-likelihood) يعطينا الحلول التالية إذا كان k∈{0,1}، l∈[[1,L]]: +
+
+ +**62. Remark: Naive Bayes is widely used for text classification and spam detection.** + +
+ملاحظة: بايز البسيط يستخدم بشكل واسع لتصنيف النصوص واكتشاف البريد الإلكتروني المزعج. +
+
+ +**63. Tree-based and ensemble methods** + +
+الطرق الشجرية (tree-based) والتجميعية (ensemble) +
+
+ +**64. These methods can be used for both regression and classification problems.** + +
+هذه الطرق يمكن استخدامها لكلٍ من مشاكل الانحدار (regression) والتصنيف (classification). +
+
+ +**65. CART ― Classification and Regression Trees (CART), commonly known as decision trees, can be represented as binary trees. They have the advantage to be very interpretable.** + +
+التصنيف والانحدار الشجري (CART) - والاسم الشائع له أشجار القرار (decision trees)، يمكن أن يمثل كأشجار ثنائية (binary trees). من المزايا لهذه الطريقة إمكانية تفسيرها بسهولة. +
+
+ +**66. Random forest ― It is a tree-based technique that uses a high number of decision trees built out of randomly selected sets of features. Contrary to the simple decision tree, it is highly uninterpretable but its generally good performance makes it a popular algorithm.** + +
+الغابة العشوائية (Random forest) - هي إحدى الطرق الشجرية التي تستخدم عدداً كبيراً من أشجار القرار المبنية باستخدام مجموعات عشوائية من الخصائص. بخلاف شجرة القرار البسيطة لا يمكن تفسير النموذج بسهولة، ولكن أداءها الجيد عموماً جعلها من الخوارزميات المشهورة.
+</div>
+
+ +**67. Remark: random forests are a type of ensemble methods.** + +
+ملاحظة: الغابات العشوائية نوع من الخوارزميات التجميعية (ensemble).
+</div>
+
+ +**68. Boosting ― The idea of boosting methods is to combine several weak learners to form a stronger one. The main ones are summed up in the table below:** + +
+التعزيز (Boosting) - فكرة خوارزميات التعزيز هي دمج عدة خوارزميات تعلم ضعيفة لتكوين نموذج قوي. الطرق الأساسية ملخصة في الجدول التالي: +
+
+ +**69. [Adaptive boosting, Gradient boosting]** + +
+[التعزيز التَكَيُّفي (Adaptive boosting)، التعزيز الاشتقاقي (Gradient boosting)] +
+
+ +**70. High weights are put on errors to improve at the next boosting step** + +
+يتم التركيز على مواطن الخطأ لتحسين النتيجة في الخطوة التالية. +
+
+ +**71. Weak learners trained on remaining errors** + +
+يتم تدريب خوارزميات التعلم الضعيفة على الأخطاء المتبقية. +
+
+ +**72. Other non-parametric approaches** + +
+طرق أخرى غير بارامترية (non-parametric) +
+
+ +**73. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** + +
+خوارزمية أقرب الجيران (k-nearest neighbors) - تعتبر خوارزمية أقرب الجيران، وتعرف بـ k-NN، طريقة غير بارامترية، حيث يتم تحديد نتيجة عينة من البيانات من خلال عدد k من البيانات المجاورة في مجموعة التدريب. ويمكن استخدامها للتصنيف والانحدار. +
+
+ +**74. Remark: The higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.** + +
+ملاحظة: كلما زاد المُدخل k، كلما زاد الانحياز (bias)، وكلما نقص k، زاد التباين (variance). +
+
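
<div dir="rtl">
رسم مبسّط لخوارزمية k-NN للتصنيف بمسافة إقليدية وتصويت بالأغلبية (الدالة knn_predict والبيانات أدناه افتراضية للتوضيح فقط):
</div>

```python
from collections import Counter

# تصنيف k أقرب الجيران بأبسط صورة
def knn_predict(train_X, train_y, x, k=3):
    # ترتيب عينات التدريب حسب مربع المسافة الإقليدية إلى x
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(p, x)), label)
        for p, label in zip(train_X, train_y)
    )
    # تصويت بالأغلبية بين أقرب k جيران
    top_labels = [label for _, label in dists[:k]]
    return Counter(top_labels).most_common(1)[0][0]

train_X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (5.0, 5.0), (5.0, 6.0), (6.0, 5.0)]
train_y = ["a", "a", "a", "b", "b", "b"]

assert knn_predict(train_X, train_y, (0.5, 0.5)) == "a"
assert knn_predict(train_X, train_y, (5.5, 5.5)) == "b"
```
<br>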
+ +**75. Learning Theory** + +
+نظرية التعلُّم +
+
+ +**76. Union bound ― Let A1,...,Ak be k events. We have:** + +
+حد الاتحاد (Union bound) - لنجعل A1,...,Ak تمثل k حدث. فيكون لدينا: +
+
+ +**77. Hoeffding inequality ― Let Z1,..,Zm be m iid variables drawn from a Bernoulli distribution of parameter ϕ. Let ˆϕ be their sample mean and γ>0 fixed. We have:** + +
+متراجحة هوفدينج (Hoeffding) - لنجعل Z1,..,Zm تمثل m متغير مستقلة وموزعة بشكل مماثل (iid) مأخوذة من توزيع بِرنوللي (Bernoulli distribution) ذا مُدخل ϕ. لنجعل ˆϕ متوسط العينة (sample mean) و γ>0 ثابت. فيكون لدينا: +
+
+ +**78. Remark: this inequality is also known as the Chernoff bound.** + +
+ملاحظة: هذه المتراجحة تعرف كذلك بحد تشرنوف (Chernoff bound). +
+
+ +**79. Training error ― For a given classifier h, we define the training error ˆϵ(h), also known as the empirical risk or empirical error, to be as follows:** + +
+خطأ التدريب - ليكن لدينا المُصنِّف h، نعرّف خطأ التدريب ˆϵ(h)، الذي يعرف كذلك بالخطر التجريبي أو الخطأ التجريبي، كالتالي:
+
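The empirical risk is just the fraction of misclassified training points; a minimal sketch (classifier and labels invented for the example):

```python
import numpy as np

def empirical_risk(h, X, y):
    # fraction of training points on which h disagrees with the label
    return np.mean(h(X) != y)

X = np.array([-2., -1., 0.5, 1., 3.])
y = np.array([0, 1, 1, 1, 1])
h = lambda x: (x > 0).astype(int)     # a simple threshold classifier
print(empirical_risk(h, X, y))        # → 0.2 (one mistake out of five)
```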
+
+ +**80. Probably Approximately Correct (PAC) ― PAC is a framework under which numerous results on learning theory were proved, and has the following set of assumptions: ** + +
+تقريباً صحيح احتمالياً (Probably Approximately Correct (PAC)) - هو إطار يتم من خلاله إثبات العديد من نظريات التعلم، ويحتوي على الافتراضات التالية: +
+
+ +**81: the training and testing sets follow the same distribution ** + +
+مجموعتا التدريب والاختبار تتبعان نفس التوزيع.
+
+
+ +**82. the training examples are drawn independently** + +
+عينات التدريب تؤخذ بشكل مستقل. +
+
+ +**83. Shattering ― Given a set S={x(1),...,x(d)}, and a set of classifiers H, we say that H shatters S if for any set of labels {y(1),...,y(d)}, we have:** + +
+التكسير (Shattering) - إذا كان لدينا المجموعة S={x(1),...,x(d)}، ومجموعة مُصنِّفات H، نقول إن H تكسر S (H shatters S) إذا كان لكل مجموعة علامات (labels) {y(1),...,y(d)} لدينا:
+
+
+ +**84. Upper bound theorem ― Let H be a finite hypothesis class such that |H|=k and let δ and the sample size m be fixed. Then, with probability of at least 1−δ, we have:** + +
+مبرهنة الحد الأعلى (Upper bound theorem) - لنجعل H فئة فرضية محدودة (finite hypothesis class) بحيث |H|=k، و δ وحجم العينة m ثابتين. حينها سيكون لدينا، مع احتمال على الأقل 1−δ، التالي: +
+
+ +**85. VC dimension ― The Vapnik-Chervonenkis (VC) dimension of a given infinite hypothesis class H, noted VC(H) is the size of the largest set that is shattered by H.** + +
+بُعْد فابنيك-تشرفونيكس (Vapnik-Chervonenkis - VC) لفئة فرضية غير محدودة (infinite hypothesis class) H، ويرمز له بـ VC(H)، هو حجم أكبر مجموعة (set) التي تم تكسيرها بواسطة H (shattered by H). +
+
+ +**86. Remark: the VC dimension of H={set of linear classifiers in 2 dimensions} is 3.** + +
+ملاحظة: بُعْد فابنيك-تشرفونيكس VC لـ H = {مجموعة التصنيفات الخطية في بُعدين} يساوي 3. +
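The remark can be checked by brute force. A sketch (assuming a near-hard-margin linear SVM with large C as a stand-in for "some linear classifier realizes this labeling") verifying that all 2³ labelings of three non-collinear points are achievable:

```python
import numpy as np
from itertools import product
from sklearn.svm import LinearSVC

S = np.array([[0., 0.], [1., 0.], [0., 1.]])   # three non-collinear points

def realizable(points, labels):
    if len(set(labels)) < 2:                   # constant labelings: trivially realizable
        return True
    clf = LinearSVC(C=1e6).fit(points, labels) # near-hard-margin linear classifier
    return clf.score(points, labels) == 1.0

# H shatters S iff every one of the 2^3 labelings is achieved by some h in H
print(all(realizable(S, list(lab)) for lab in product([0, 1], repeat=3)))  # → True
```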
+
+ +**87. Theorem (Vapnik) ― Let H be given, with VC(H)=d and m the number of training examples. With probability at least 1−δ, we have:** + +
+مبرهنة فابنيك (Vapnik theorem) - ليكن لدينا H، مع VC(H)=d وعدد عيّنات التدريب m. سيكون لدينا، مع احتمال على الأقل 1−δ، التالي: +
+
+ +**88. [Introduction, Type of prediction, Type of model]** + +
+[مقدمة، نوع التوقع، نوع النموذج] +
+
+ +**89. [Notations and general concepts, loss function, gradient descent, likelihood]** + +
+[الرموز ومفاهيم أساسية، دالة الخسارة، النزول الاشتقاقي، الأرجحية] +
+
+ +**90. [Linear models, linear regression, logistic regression, generalized linear models]** + +
+[النماذج الخطيّة، الانحدار الخطّي، الانحدار اللوجستي، النماذج الخطية العامة] +
+
+ +**91. [Support vector machines, Optimal margin classifier, Hinge loss, Kernel]** + +
+[آلة المتجهات الداعمة (SVM)، مُصنِّف الهامش الأحسن، الفرق المفصلي، النواة] +
+
+ +**92. [Generative learning, Gaussian Discriminant Analysis, Naive Bayes]** + +
+[التعلم التوليدي، تحليل التمايز الجاوسي، بايز البسيط] +
+
+ +**93. [Trees and ensemble methods, CART, Random forest, Boosting]** + +
+[الطرق الشجرية والتجميعية، التصنيف والانحدار الشجري (CART)، الغابة العشوائية (Random forest)، التعزيز (Boosting)] +
+
+ +**94. [Other methods, k-NN]** + +
+[طرق أخرى، خوارزمية أقرب الجيران (k-NN)] +
+
+ +**95. [Learning theory, Hoeffding inequality, PAC, VC dimension]** + +
+[نظرية التعلُّم، متراجحة هوفدينج (Hoeffding)، تقريباً صحيح احتمالياً (PAC)، بُعْد فابنيك-تشرفونيكس (VC dimension)]
+
diff --git a/ar/cheatsheet-unsupervised-learning.md b/ar/cs-229-unsupervised-learning.md similarity index 99% rename from ar/cheatsheet-unsupervised-learning.md rename to ar/cs-229-unsupervised-learning.md index d98e37ea2..6e309b36d 100644 --- a/ar/cheatsheet-unsupervised-learning.md +++ b/ar/cs-229-unsupervised-learning.md @@ -8,7 +8,7 @@ **2. Introduction to Unsupervised Learning** -
+
مقدمة للتعلّم غير المُوَجَّه
@@ -16,9 +16,9 @@ **3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** -
- {x(1),...,x(m)} الحافز ― الهدف من التعلّم غير المُوَجَّه هو إيجاد الأنماط الخفية في البيانات غير المٌعلمّة -
+
+ {x(1),...,x(m)} الحافز ― الهدف من التعلّم غير المُوَجَّه هو إيجاد الأنماط الخفية في البيانات غير المٌعلمّة +

@@ -269,7 +269,7 @@ dimensions by maximizing the variance of the data as follows:** **39. Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.**
-الخطوة 3: حساب u1,...,uk∈Rn المتجهات الذاتية الرئيسية المتعامدة لـ Σ وعددها k ، بعبارة أخرى، k من المتجهات الذاتية المتعامدة ذات القيم الذاتية الأكبر. +الخطوة 3: حساب u1,...,uk∈Rn المتجهات الذاتية الرئيسية المتعامدة لـ Σ وعددها k ، بعبارة أخرى، k من المتجهات الذاتية المتعامدة ذات القيم الذاتية الأكبر.
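The eigenvector step above can be sketched with NumPy (synthetic data, k=2 chosen for the example); `numpy.linalg.eigh` returns eigenvalues in ascending order, so the last k columns are the k orthogonal principal eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # steps 1-2: center and normalize

Sigma = X.T @ X / X.shape[0]               # empirical covariance matrix
eigvals, U = np.linalg.eigh(Sigma)         # orthogonal eigenvectors, ascending eigenvalues

k = 2
U_k = U[:, -k:]                            # step 3: eigenvectors of the k largest eigenvalues
Z = X @ U_k                                # step 4: project the data onto k dimensions
print(Z.shape)                             # → (100, 2)
```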

@@ -387,7 +387,7 @@ dimensions by maximizing the variance of the data as follows:** **56. [Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]**
-[التجميع، تعظيم القيمة المتوقعة، تجميع k-متوسطات، التجميع الهرمي، مقاييس] +[التجميع، تعظيم القيمة المتوقعة، تجميع k-متوسطات، التجميع الهرمي، مقاييس]

diff --git a/ar/refresher-linear-algebra.md b/ar/refresher-linear-algebra.md deleted file mode 100644 index a6b440d1e..000000000 --- a/ar/refresher-linear-algebra.md +++ /dev/null @@ -1,339 +0,0 @@ -**1. Linear Algebra and Calculus refresher** - -⟶ - -
- -**2. General notations** - -⟶ - -
- -**3. Definitions** - -⟶ - -
- -**4. Vector ― We note x∈Rn a vector with n entries, where xi∈R is the ith entry:** - -⟶ - -
- -**5. Matrix ― We note A∈Rm×n a matrix with m rows and n columns, where Ai,j∈R is the entry located in the ith row and jth column:** - -⟶ - -
- -**6. Remark: the vector x defined above can be viewed as a n×1 matrix and is more particularly called a column-vector.** - -⟶ - -
- -**7. Main matrices** - -⟶ - -
- -**8. Identity matrix ― The identity matrix I∈Rn×n is a square matrix with ones in its diagonal and zero everywhere else:** - -⟶ - -
- -**9. Remark: for all matrices A∈Rn×n, we have A×I=I×A=A.** - -⟶ - -
- -**10. Diagonal matrix ― A diagonal matrix D∈Rn×n is a square matrix with nonzero values in its diagonal and zero everywhere else:** - -⟶ - -
- -**11. Remark: we also note D as diag(d1,...,dn).** - -⟶ - -
- -**12. Matrix operations** - -⟶ - -
- -**13. Multiplication** - -⟶ - -
- -**14. Vector-vector ― There are two types of vector-vector products:** - -⟶ - -
- -**15. inner product: for x,y∈Rn, we have:** - -⟶ - -
- -**16. outer product: for x∈Rm,y∈Rn, we have:** - -⟶ - -
- -**17. Matrix-vector ― The product of matrix A∈Rm×n and vector x∈Rn is a vector of size Rn, such that:** - -⟶ - -
- -**18. where aTr,i are the vector rows and ac,j are the vector columns of A, and xi are the entries of x.** - -⟶ - -
- -**19. Matrix-matrix ― The product of matrices A∈Rm×n and B∈Rn×p is a matrix of size Rn×p, such that:** - -⟶ - -
- -**20. where aTr,i,bTr,i are the vector rows and ac,j,bc,j are the vector columns of A and B respectively** - -⟶ - -
- -**21. Other operations** - -⟶ - -
- -**22. Transpose ― The transpose of a matrix A∈Rm×n, noted AT, is such that its entries are flipped:** - -⟶ - -
- -**23. Remark: for matrices A,B, we have (AB)T=BTAT** - -⟶ - -
- -**24. Inverse ― The inverse of an invertible square matrix A is noted A−1 and is the only matrix such that:** - -⟶ - -
- -**25. Remark: not all square matrices are invertible. Also, for matrices A,B, we have (AB)−1=B−1A−1** - -⟶ - -
- -**26. Trace ― The trace of a square matrix A, noted tr(A), is the sum of its diagonal entries:** - -⟶ - -
- -**27. Remark: for matrices A,B, we have tr(AT)=tr(A) and tr(AB)=tr(BA)** - -⟶ - -
- -**28. Determinant ― The determinant of a square matrix A∈Rn×n, noted |A| or det(A) is expressed recursively in terms of A∖i,∖j, which is the matrix A without its ith row and jth column, as follows:** - -⟶ - -
- -**29. Remark: A is invertible if and only if |A|≠0. Also, |AB|=|A||B| and |AT|=|A|.** - -⟶ - -
- -**30. Matrix properties** - -⟶ - -
- -**31. Definitions** - -⟶ - -
- -**32. Symmetric decomposition ― A given matrix A can be expressed in terms of its symmetric and antisymmetric parts as follows:** - -⟶ - -
- -**33. [Symmetric, Antisymmetric]** - -⟶ - -
- -**34. Norm ― A norm is a function N:V⟶[0,+∞[ where V is a vector space, and such that for all x,y∈V, we have:** - -⟶ - -
- -**35. N(ax)=|a|N(x) for a scalar** - -⟶ - -
- -**36. if N(x)=0, then x=0** - -⟶ - -
- -**37. For x∈V, the most commonly used norms are summed up in the table below:** - -⟶ - -
- -**38. [Norm, Notation, Definition, Use case]** - -⟶ - -
- -**39. Linearly dependence ― A set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others.** - -⟶ - -
- -**40. Remark: if no vector can be written this way, then the vectors are said to be linearly independent** - -⟶ - -
- -**41. Matrix rank ― The rank of a given matrix A is noted rank(A) and is the dimension of the vector space generated by its columns. This is equivalent to the maximum number of linearly independent columns of A.** - -⟶ - -
- -**42. Positive semi-definite matrix ― A matrix A∈Rn×n is positive semi-definite (PSD) and is noted A⪰0 if we have:** - -⟶ - -
- -**43. Remark: similarly, a matrix A is said to be positive definite, and is noted A≻0, if it is a PSD matrix which satisfies for all non-zero vector x, xTAx>0.** - -⟶ - -
- -**44. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** - -⟶ - -
- -**45. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** - -⟶ - -
- -**46. diagonal** - -⟶ - -
- -**47. Singular-value decomposition ― For a given matrix A of dimensions m×n, the singular-value decomposition (SVD) is a factorization technique that guarantees the existence of U m×m unitary, Σ m×n diagonal and V n×n unitary matrices, such that:** - -⟶ - -
- -**48. Matrix calculus** - -⟶ - -
- -**49. Gradient ― Let f:Rm×n→R be a function and A∈Rm×n be a matrix. The gradient of f with respect to A is a m×n matrix, noted ∇Af(A), such that:** - -⟶ - -
- -**50. Remark: the gradient of f is only defined when f is a function that returns a scalar.** - -⟶ - -
- -**51. Hessian ― Let f:Rn→R be a function and x∈Rn be a vector. The hessian of f with respect to x is a n×n symmetric matrix, noted ∇2xf(x), such that:** - -⟶ - -
- -**52. Remark: the hessian of f is only defined when f is a function that returns a scalar** - -⟶ - -
- -**53. Gradient operations ― For matrices A,B,C, the following gradient properties are worth having in mind:** - -⟶ - -
- -**54. [General notations, Definitions, Main matrices]** - -⟶ - -
- -**55. [Matrix operations, Multiplication, Other operations]** - -⟶ - -
- -**56. [Matrix properties, Norm, Eigenvalue/Eigenvector, Singular-value decomposition]** - -⟶ - -
- -**57. [Matrix calculus, Gradient, Hessian, Operations]** - -⟶ diff --git a/ar/refresher-probability.md b/ar/refresher-probability.md deleted file mode 100644 index 5c9b34656..000000000 --- a/ar/refresher-probability.md +++ /dev/null @@ -1,381 +0,0 @@ -**1. Probabilities and Statistics refresher** - -⟶ - -
- -**2. Introduction to Probability and Combinatorics** - -⟶ - -
- -**3. Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by S.** - -⟶ - -
- -**4. Event ― Any subset E of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.** - -⟶ - -
- -**5. Axioms of probability For each event E, we denote P(E) as the probability of event E occuring.** - -⟶ - -
- -**6. Axiom 1 ― Every probability is between 0 and 1 included, i.e:** - -⟶ - -
- -**7. Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** - -⟶ - -
- -**8. Axiom 3 ― For any sequence of mutually exclusive events E1,...,En, we have:** - -⟶ - -
- -**9. Permutation ― A permutation is an arrangement of r objects from a pool of n objects, in a given order. The number of such arrangements is given by P(n,r), defined as:** - -⟶ - -
- -**10. Combination ― A combination is an arrangement of r objects from a pool of n objects, where the order does not matter. The number of such arrangements is given by C(n,r), defined as:** - -⟶ - -
- -**11. Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)** - -⟶ - -
- -**12. Conditional Probability** - -⟶ - -
- -**13. Bayes' rule ― For events A and B such that P(B)>0, we have:** - -⟶ - -
- -**14. Remark: we have P(A∩B)=P(A)P(B|A)=P(A|B)P(B)** - -⟶ - -
- -**15. Partition ― Let {Ai,i∈[[1,n]]} be such that for all i, Ai≠∅. We say that {Ai} is a partition if we have:** - -⟶ - -
- -**16. Remark: for any event B in the sample space, we have P(B)=n∑i=1P(B|Ai)P(Ai).** - -⟶ - -
- -**17. Extended form of Bayes' rule ― Let {Ai,i∈[[1,n]]} be a partition of the sample space. We have:** - -⟶ - -
- -**18. Independence ― Two events A and B are independent if and only if we have:** - -⟶ - -
- -**19. Random Variables** - -⟶ - -
- -**20. Definitions** - -⟶ - -
- -**21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.** - -⟶ - -
- -**22. Cumulative distribution function (CDF) ― The cumulative distribution function F, which is monotonically non-decreasing and is such that limx→−∞F(x)=0 and limx→+∞F(x)=1, is defined as:** - -⟶ - -
- -**23. Remark: we have P(a - -**24. Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.** - -⟶ - -
- -**25. Relationships involving the PDF and CDF ― Here are the important properties to know in the discrete (D) and the continuous (C) cases.** - -⟶ - -
- -**26. [Case, CDF F, PDF f, Properties of PDF]** - -⟶ - -
- -**27. Expectation and Moments of the Distribution ― Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[Xk] and characteristic function ψ(ω) for the discrete and continuous cases:** - -⟶ - -
- -**28. Variance ― The variance of a random variable, often noted Var(X) or σ2, is a measure of the spread of its distribution function. It is determined as follows:** - -⟶ - -
- -**29. Standard deviation ― The standard deviation of a random variable, often noted σ, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:** - -⟶ - -
- -**30. Transformation of random variables ― Let the variables X and Y be linked by some function. By noting fX and fY the distribution function of X and Y respectively, we have:** - -⟶ - -
- -**31. Leibniz integral rule ― Let g be a function of x and potentially c, and a,b boundaries that may depend on c. We have:** - -⟶ - -
- -**32. Probability Distributions** - -⟶ - -
- -**33. Chebyshev's inequality ― Let X be a random variable with expected value μ. For k,σ>0, we have the following inequality:** - -⟶ - -
- -**34. Main distributions ― Here are the main distributions to have in mind:** - -⟶ - -
- -**35. [Type, Distribution]** - -⟶ - -
- -**36. Jointly Distributed Random Variables** - -⟶ - -
- -**37. Marginal density and cumulative distribution ― From the joint density probability function fXY , we have** - -⟶ - -
- -**38. [Case, Marginal density, Cumulative function]** - -⟶ - -
- -**39. Conditional density ― The conditional density of X with respect to Y, often noted fX|Y, is defined as follows:** - -⟶ - -
- -**40. Independence ― Two random variables X and Y are said to be independent if we have:** - -⟶ - -
- -**41. Covariance ― We define the covariance of two random variables X and Y, that we note σ2XY or more commonly Cov(X,Y), as follows:** - -⟶ - -
- -**42. Correlation ― By noting σX,σY the standard deviations of X and Y, we define the correlation between the random variables X and Y, noted ρXY, as follows:** - -⟶ - -
- -**43. Remark 1: we note that for any random variables X,Y, we have ρXY∈[−1,1].** - -⟶ - -
- -**44. Remark 2: If X and Y are independent, then ρXY=0.** - -⟶ - -
- -**45. Parameter estimation** - -⟶ - -
- -**46. Definitions** - -⟶ - -
- -**47. Random sample ― A random sample is a collection of n random variables X1,...,Xn that are independent and identically distributed with X.** - -⟶ - -
- -**48. Estimator ― An estimator is a function of the data that is used to infer the value of an unknown parameter in a statistical model.** - -⟶ - -
- -**49. Bias ― The bias of an estimator ^θ is defined as being the difference between the expected value of the distribution of ^θ and the true value, i.e.:** - -⟶ - -
- -**50. Remark: an estimator is said to be unbiased when we have E[^θ]=θ.** - -⟶ - -
- -**51. Estimating the mean** - -⟶ - -
- -**52. Sample mean ― The sample mean of a random sample is used to estimate the true mean μ of a distribution, is often noted ¯¯¯¯¯X and is defined as follows:** - -⟶ - -
- -**53. Remark: the sample mean is unbiased, i.e E[¯¯¯¯¯X]=μ.** - -⟶ - -
- -**54. Central Limit Theorem ― Let us have a random sample X1,...,Xn following a given distribution with mean μ and variance σ2, then we have:** - -⟶ - -
- -**55. Estimating the variance** - -⟶ - -
- -**56. Sample variance ― The sample variance of a random sample is used to estimate the true variance σ2 of a distribution, is often noted s2 or ^σ2 and is defined as follows:** - -⟶ - -
- -**57. Remark: the sample variance is unbiased, i.e E[s2]=σ2.** - -⟶ - -
- -**58. Chi-Squared relation with sample variance ― Let s2 be the sample variance of a random sample. We have:** - -⟶ - -
- -**59. [Introduction, Sample space, Event, Permutation]** - -⟶ - -
- -**60. [Conditional probability, Bayes' rule, Independence]** - -⟶ - -
- -**61. [Random variables, Definitions, Expectation, Variance]** - -⟶ - -
- -**62. [Probability distributions, Chebyshev's inequality, Main distributions]** - -⟶ - -
- -**63. [Jointly distributed random variables, Density, Covariance, Correlation]** - -⟶ - -
- -**64. [Parameter estimation, Mean, Variance]** - -⟶ diff --git a/de/cheatsheet-deep-learning.md b/de/cheatsheet-deep-learning.md deleted file mode 100644 index a5aa3756c..000000000 --- a/de/cheatsheet-deep-learning.md +++ /dev/null @@ -1,321 +0,0 @@ -**1. Deep Learning cheatsheet** - -⟶ - -
- -**2. Neural Networks** - -⟶ - -
- -**3. Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.** - -⟶ - -
- -**4. Architecture ― The vocabulary around neural networks architectures is described in the figure below:** - -⟶ - -
- -**5. [Input layer, hidden layer, output layer]** - -⟶ - -
- -**6. By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:** - -⟶ - -
- -**7. where we note w, b, z the weight, bias and output respectively.** - -⟶ - -
- -**8. Activation function ― Activation functions are used at the end of a hidden unit to introduce non-linear complexities to the model. Here are the most common ones:** - -⟶ - -
- -**9. [Sigmoid, Tanh, ReLU, Leaky ReLU]** - -⟶ - -
- -**10. Cross-entropy loss ― In the context of neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** - -⟶ - -
- -**11. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. This can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.** - -⟶ - -
- -**12. Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to weight w is computed using chain rule and is of the following form:** - -⟶ - -
- -**13. As a result, the weight is updated as follows:** - -⟶ - -
- -**14. Updating weights ― In a neural network, weights are updated as follows:** - -⟶ - -
- -**15. Step 1: Take a batch of training data.** - -⟶ - -
- -**16. Step 2: Perform forward propagation to obtain the corresponding loss.** - -⟶ - -
- -**17. Step 3: Backpropagate the loss to get the gradients.** - -⟶ - -
- -**18. Step 4: Use the gradients to update the weights of the network.** - -⟶ - -
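The four steps above can be sketched end-to-end with NumPy (a tiny invented 2-layer network on synthetic data; an illustrative sketch, not the cheatsheet's own notation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 3))                          # step 1: a batch of training data
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy labels

W1, b1 = 0.1 * rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = 0.1 * rng.normal(size=(4, 1)), np.zeros(1)
alpha = 0.5                                           # learning rate

for _ in range(200):
    # step 2: forward propagation (tanh hidden layer, sigmoid output)
    h = np.tanh(X @ W1 + b1)
    z = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # step 3: backpropagate the cross-entropy loss via the chain rule
    dlogit = (z - y) / len(X)                         # dL/d(pre-sigmoid)
    dW2, db2 = h.T @ dlogit, dlogit.sum(axis=0)
    dpre1 = (dlogit @ W2.T) * (1.0 - h**2)
    dW1, db1 = X.T @ dpre1, dpre1.sum(axis=0)
    # step 4: gradient step on the weights
    W2 -= alpha * dW2; b2 -= alpha * db2
    W1 -= alpha * dW1; b1 -= alpha * db1

print(((z > 0.5) == y).mean())                        # training accuracy after 200 steps
```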
- -**19. Dropout ― Dropout is a technique meant at preventing overfitting the training data by dropping out units in a neural network. In practice, neurons are either dropped with probability p or kept with probability 1−p** - -⟶ - -
- -**20. Convolutional Neural Networks** - -⟶ - -
- -**21. Convolutional layer requirement ― By noting W the input volume size, F the size of the convolutional layer neurons, P the amount of zero padding, then the number of neurons N that fit in a given volume is such that:** - -⟶ - -
- -**22. Batch normalization ― It is a step of hyperparameter γ,β that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of that we want to correct to the batch, it is done as follows:** - -⟶ - -
- -**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.** - -⟶ - -
- -**24. Recurrent Neural Networks** - -⟶ - -
- -**25. Types of gates ― Here are the different types of gates that we encounter in a typical recurrent neural network:** - -⟶ - -
- -**26. [Input gate, forget gate, gate, output gate]** - -⟶ - -
- -**27. [Write to cell or not?, Erase a cell or not?, How much to write to cell?, How much to reveal cell?]** - -⟶ - -
- -**28. LSTM ― A long short-term memory (LSTM) network is a type of RNN model that avoids the vanishing gradient problem by adding 'forget' gates.** - -⟶ - -
- -**29. Reinforcement Learning and Control** - -⟶ - -
- -**30. The goal of reinforcement learning is for an agent to learn how to evolve in an environment.** - -⟶ - -
- -**31. Definitions** - -⟶ - -
- -**32. Markov decision processes ― A Markov decision process (MDP) is a 5-tuple (S,A,{Psa},γ,R) where:** - -⟶ - -
- -**33. S is the set of states** - -⟶ - -
- -**34. A is the set of actions** - -⟶ - -
- -**35. {Psa} are the state transition probabilities for s∈S and a∈A** - -⟶ - -
- -**36. γ∈[0,1[ is the discount factor** - -⟶ - -
- -**37. R:S×A⟶R or R:S⟶R is the reward function that the algorithm wants to maximize** - -⟶ - -
- -**38. Policy ― A policy π is a function π:S⟶A that maps states to actions.** - -⟶ - -
- -**39. Remark: we say that we execute a given policy π if given a state s we take the action a=π(s).** - -⟶ - -
- -**40. Value function ― For a given policy π and a given state s, we define the value function Vπ as follows:** - -⟶ - -
- -**41. Bellman equation ― The optimal Bellman equations characterizes the value function Vπ∗ of the optimal policy π∗:** - -⟶ - -
- -**42. Remark: we note that the optimal policy π∗ for a given state s is such that:** - -⟶ - -
- -**43. Value iteration algorithm ― The value iteration algorithm is in two steps:** - -⟶ - -
- -**44. 1) We initialize the value:** - -⟶ - -
- -**45. 2) We iterate the value based on the values before:** - -⟶ - -
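The two steps can be sketched on a small invented MDP (3 states, 2 actions, reward only in the last state):

```python
import numpy as np

# P[a, s, s'] = transition probabilities, R[s] = rewards; all values invented
P = np.array([[[0.9, 0.1, 0.0], [0.1, 0.9, 0.0], [0.0, 0.1, 0.9]],
              [[0.1, 0.9, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]]])
R = np.array([0.0, 0.0, 1.0])
gamma = 0.9

V = np.zeros(3)                        # 1) initialize the value
for _ in range(200):                   # 2) iterate with the Bellman optimality update
    V = R + gamma * np.max(P @ V, axis=0)

policy = np.argmax(P @ V, axis=0)      # greedy policy read off the converged V
print(np.round(V, 2), policy)
```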
- -**46. Maximum likelihood estimate ― The maximum likelihood estimates for the state transition probabilities are as follows:** - -⟶ - -
- -**47. times took action a in state s and got to s′** - -⟶ - -
- -**48. times took action a in state s** - -⟶ - -
- -**49. Q-learning ― Q-learning is a model-free estimation of Q, which is done as follows:** - -⟶ - -
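A minimal model-free sketch on an invented 4-state chain (action 1 moves right, reward on reaching the last state); each update nudges Q(s,a) towards r + γ max_a' Q(s',a') without ever using the transition probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma, alpha = 4, 0.9, 0.1
Q = np.zeros((n_states, 2))

def step(s, a):
    # hypothetical chain environment: a=1 moves right, a=0 moves left
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(2000):
    s, a = rng.integers(n_states), rng.integers(2)   # explore at random
    s2, r = step(s, a)
    # model-free update: bootstrap from the current estimate of Q
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

print(Q.argmax(axis=1))   # greedy policy: move right in every state
```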
- -**50. View PDF version on GitHub** - -⟶ - -
- -**51. [Neural Networks, Architecture, Activation function, Backpropagation, Dropout]** - -⟶ - -
- -**52. [Convolutional Neural Networks, Convolutional layer, Batch normalization]** - -⟶ - -
- -**53. [Recurrent Neural Networks, Gates, LSTM]** - -⟶ - -
- -**54. [Reinforcement learning, Markov decision processes, Value/policy iteration, Approximate dynamic programming, Policy search]** - -⟶ diff --git a/de/cheatsheet-machine-learning-tips-and-tricks.md b/de/cheatsheet-machine-learning-tips-and-tricks.md deleted file mode 100644 index 9712297b8..000000000 --- a/de/cheatsheet-machine-learning-tips-and-tricks.md +++ /dev/null @@ -1,285 +0,0 @@ -**1. Machine Learning tips and tricks cheatsheet** - -⟶ - -
- -**2. Classification metrics** - -⟶ - -
- -**3. In a context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** - -⟶ - -
- -**4. Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:** - -⟶ - -
- -**5. [Predicted class, Actual class]** - -⟶ - -
- -**6. Main metrics ― The following metrics are commonly used to assess the performance of classification models:** - -⟶ - -
- -**7. [Metric, Formula, Interpretation]** - -⟶ - -
- -**8. Overall performance of model** - -⟶ - -
- -**9. How accurate the positive predictions are** - -⟶ - -
- -**10. Coverage of actual positive sample** - -⟶ - -
- -**11. Coverage of actual negative sample** - -⟶ - -
- -**12. Hybrid metric useful for unbalanced classes** - -⟶ - -
- -**13. ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are are summed up in the table below:** - -⟶ - -
- -**14. [Metric, Formula, Equivalent]** - -⟶ - -
- -**15. AUC ― The area under the receiving operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:** - -⟶ - -
- -**16. [Actual, Predicted]** - -⟶ - -
- -**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** - -⟶ - -
- -**18. [Total sum of squares, Explained sum of squares, Residual sum of squares]** - -⟶ - -
- -**19. Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** - -⟶ - -
- -**20. Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** - -⟶ - -
- -**21. where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.** - -⟶ - -
- -**22. Model selection** - -⟶ - -
- -**23. Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** - -⟶ - -
- -**24. [Training set, Validation set, Testing set]** - -⟶ - -
- -**25. [Model is trained, Model is assessed, Model gives predictions]** - -⟶ - -
- -**26. [Usually 80% of the dataset, Usually 20% of the dataset]** - -⟶ - -
- -**27. [Also called hold-out or development set, Unseen data]** - -⟶ - -
- -**28. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** - -⟶ - -
- -**29. Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** - -⟶ - -
- -**30. [Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** - -⟶ - -
- -**31. [Generally k=5 or 10, Case p=1 is called leave-one-out]** - -⟶ - -
- -**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.** - -⟶ - -
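As an illustrative sketch (dataset and model invented for the example), scikit-learn's `cross_val_score` performs exactly this k-fold procedure:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, random_state=0)
# k=5: train on 4 folds, validate on the held-out fold, repeated 5 times
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())   # average validation accuracy over the k folds
```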
- -**33. Regularization ― The regularization procedure aims at avoiding the model to overfit the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:** - -⟶ - -
- -**34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** - -⟶ - -
- -**35. Diagnostics** - -⟶ - -
- -**36. Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.** - -⟶ - -
- -**37. Variance ― The variance of a model is the variability of the model prediction for given data points.** - -⟶ - -
- -**38. Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** - -⟶ - -
- -**39. [Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]** - -⟶ - -
- -**40. [High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]** - -⟶ - -
- -**41. [Complexify model, Add more features, Train longer, Perform regularization, Get more data]** - -⟶ - -
- -**42. Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** - -⟶ - -
- -**43. Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** - -⟶ - -
- -**44. Regression metrics** - -⟶ - -
- -**45. [Classification metrics, confusion matrix, accuracy, precision, recall, F1 score, ROC]** - -⟶ - -
- -**46. [Regression metrics, R squared, Mallow's CP, AIC, BIC]** - -⟶ - -
- -**47. [Model selection, cross-validation, regularization]** - -⟶ - -
- -**48. [Diagnostics, Bias/variance tradeoff, error/ablative analysis]** - -⟶ diff --git a/de/cheatsheet-unsupervised-learning.md b/de/cheatsheet-unsupervised-learning.md deleted file mode 100644 index 1bf117d72..000000000 --- a/de/cheatsheet-unsupervised-learning.md +++ /dev/null @@ -1,340 +0,0 @@ -**1. Unsupervised Learning cheatsheet** - -⟶ - -
- -**2. Introduction to Unsupervised Learning** - -⟶ - -
- -**3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** - -⟶ - -
- -**4. Jensen's inequality ― Let f be a convex function and X a random variable. We have the following inequality:** - -⟶ - -
- -**5. Clustering** - -⟶ - -
- -**6. Expectation-Maximization** - -⟶ - -
- -**7. Latent variables ― Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Here are the most common settings where there are latent variables:** - -⟶ - -
- -**8. [Setting, Latent variable z, Comments]** - -⟶ - -
- -**9. [Mixture of k Gaussians, Factor analysis]** - -⟶ - -
- -**10. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:** - -⟶ - -
- -**11. E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:** - -⟶ - -
- -**12. M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:** - -⟶ - -
- -**13. [Gaussians initialization, Expectation step, Maximization step, Convergence]** - -⟶ - -
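The E/M alternation can be sketched for a 1-D mixture of two Gaussians (data and initialization invented for the example); the Gaussian normalizing constant is dropped since it cancels in the posterior:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])

phi = np.array([0.5, 0.5])                 # mixture weights
mu, sigma = np.array([-1., 1.]), np.array([1., 1.])

for _ in range(50):
    # E-step: posterior Q_i(z) that each point came from each Gaussian
    dens = phi / sigma * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2))
    w = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate each component with posterior-weighted points
    phi = w.mean(axis=0)
    mu = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    sigma = np.sqrt((w * (x[:, None] - mu) ** 2).sum(axis=0) / w.sum(axis=0))

print(np.round(mu, 1))   # means should land near the true -2 and 3
```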
- -**14. k-means clustering** - -⟶ - -
- -**15. We note c(i) the cluster of data point i and μj the center of cluster j.** - -⟶ - -
- -**16. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** - -⟶ - -
- -**17. [Means initialization, Cluster assignment, Means update, Convergence]** - -⟶ - -
- -**18. Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:** - -⟶ - -
- -**19. Hierarchical clustering** - -⟶ - -
- -**20. Algorithm ― It is a clustering algorithm with an agglomerative hierarchical approach that build nested clusters in a successive manner.** - -⟶ - -
- -**21. Types ― There are different sorts of hierarchical clustering algorithms that aims at optimizing different objective functions, which is summed up in the table below:** - -⟶ - -
- -**22. [Ward linkage, Average linkage, Complete linkage]** - -⟶ - -
- -**23. [Minimize within cluster distance, Minimize average distance between cluster pairs, Minimize maximum distance of between cluster pairs]** - -⟶ - -
- -**24. Clustering assessment metrics** - -⟶ - -
- -**25. In an unsupervised learning setting, it is often hard to assess the performance of a model since we don't have the ground truth labels as was the case in the supervised learning setting.** - -⟶ - -
- -**26. Silhouette coefficient ― By noting a and b the mean distance between a sample and all other points in the same class, and between a sample and all other points in the next nearest cluster, the silhouette coefficient s for a single sample is defined as follows:** - -⟶ - -
- -**27. Calinski-Harabaz index ― By noting k the number of clusters, Bk and Wk the between and within-clustering dispersion matrices respectively defined as** - -⟶ - -
- -**28. the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:** - -⟶ - -
- -**29. Dimension reduction** - -⟶ - -
- -**30. Principal component analysis** - -⟶ - -
- -**31. It is a dimension reduction technique that finds the variance maximizing directions onto which to project the data.** - -⟶ - -
- -**32. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** - -⟶ - -
- -**33. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** - -⟶ - -
- -**34. diagonal** - -⟶ - -
- -**35. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** - -⟶ - -
- -**36. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k -dimensions by maximizing the variance of the data as follows:** - -⟶ - -
- -**37. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** - -⟶ - -
- -**38. Step 2: Compute Σ=1mm∑i=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.** - -⟶ - -
- -**39. Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.** - -⟶ - -
- -**40. Step 4: Project the data on spanR(u1,...,uk).** - -⟶ - -
- -**41. This procedure maximizes the variance among all k-dimensional spaces.** - -⟶ - -
- -**42. [Data in feature space, Find principal components, Data in principal components space]** - -⟶ - -
- -**43. Independent component analysis** - -⟶ - -
- -**44. It is a technique meant to find the underlying generating sources.** - -⟶ - -
- -**45. Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:** - -⟶ - -
- -**46. The goal is to find the unmixing matrix W=A−1.** - -⟶ - -
- -**47. Bell and Sejnowski ICA algorithm ― This algorithm finds the unmixing matrix W by following the steps below:** - -⟶ - -
- -**48. Write the probability of x=As=W−1s as:** - -⟶ - -
- -**49. Write the log likelihood given our training data {x(i),i∈[[1,m]]} and by noting g the sigmoid function as:** - -⟶ - -
- -**50. Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:** - -⟶ - -
- -**51. The Machine Learning cheatsheets are now available in German.** - -⟶ - -
- -**52. Original authors** - -⟶ - -
- -**53. Translated by X, Y and Z** - -⟶ - -
- -**54. Reviewed by X, Y and Z** - -⟶ - -
- -**55. [Introduction, Motivation, Jensen's inequality]** - -⟶ - -
- -**56. [Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]** - -⟶ - -
- -**57. [Dimension reduction, PCA, ICA]** - -⟶ diff --git a/es/cheatsheet-deep-learning.md b/es/cs-229-deep-learning.md similarity index 100% rename from es/cheatsheet-deep-learning.md rename to es/cs-229-deep-learning.md diff --git a/es/refresher-linear-algebra.md b/es/cs-229-linear-algebra.md similarity index 100% rename from es/refresher-linear-algebra.md rename to es/cs-229-linear-algebra.md diff --git a/es/cheatsheet-machine-learning-tips-and-tricks.md b/es/cs-229-machine-learning-tips-and-tricks.md similarity index 100% rename from es/cheatsheet-machine-learning-tips-and-tricks.md rename to es/cs-229-machine-learning-tips-and-tricks.md diff --git a/es/refresher-probability.md b/es/cs-229-probability.md similarity index 100% rename from es/refresher-probability.md rename to es/cs-229-probability.md diff --git a/es/cheatsheet-supervised-learning.md b/es/cs-229-supervised-learning.md similarity index 100% rename from es/cheatsheet-supervised-learning.md rename to es/cs-229-supervised-learning.md diff --git a/es/cheatsheet-unsupervised-learning.md b/es/cs-229-unsupervised-learning.md similarity index 100% rename from es/cheatsheet-unsupervised-learning.md rename to es/cs-229-unsupervised-learning.md diff --git a/fa/cheatsheet-deep-learning.md b/fa/cs-229-deep-learning.md similarity index 100% rename from fa/cheatsheet-deep-learning.md rename to fa/cs-229-deep-learning.md diff --git a/fa/refresher-linear-algebra.md b/fa/cs-229-linear-algebra.md similarity index 100% rename from fa/refresher-linear-algebra.md rename to fa/cs-229-linear-algebra.md diff --git a/fa/cheatsheet-machine-learning-tips-and-tricks.md b/fa/cs-229-machine-learning-tips-and-tricks.md similarity index 100% rename from fa/cheatsheet-machine-learning-tips-and-tricks.md rename to fa/cs-229-machine-learning-tips-and-tricks.md diff --git a/fa/refresher-probability.md b/fa/cs-229-probability.md similarity index 100% rename from fa/refresher-probability.md rename to fa/cs-229-probability.md diff 
--git a/fa/cheatsheet-supervised-learning.md b/fa/cs-229-supervised-learning.md similarity index 100% rename from fa/cheatsheet-supervised-learning.md rename to fa/cs-229-supervised-learning.md diff --git a/fa/cheatsheet-unsupervised-learning.md b/fa/cs-229-unsupervised-learning.md similarity index 100% rename from fa/cheatsheet-unsupervised-learning.md rename to fa/cs-229-unsupervised-learning.md diff --git a/fa/cs-230-convolutional-neural-networks.md b/fa/cs-230-convolutional-neural-networks.md new file mode 100644 index 000000000..ee4201100 --- /dev/null +++ b/fa/cs-230-convolutional-neural-networks.md @@ -0,0 +1,923 @@ +**Convolutional Neural Networks translation** + +
+ +**1. Convolutional Neural Networks cheatsheet** + +
+راهنمای کوتاه شبکه‌های عصبی پیچشی (کانولوشنی) +
+ +
+ + +**2. CS 230 - Deep Learning** + +
+کلاس CS 230 - یادگیری عمیق +
+
+ +
+ + +**3. [Overview, Architecture structure]** + +
+[نمای کلی، ساختار معماری] +
+ +
+ + +**4. [Types of layer, Convolution, Pooling, Fully connected]** + +
+[انواع لایه، کانولوشنی، ادغام، تمام‌متصل] +
+ +
+ + +**5. [Filter hyperparameters, Dimensions, Stride, Padding]** + +
+[ابرفراسنج‌های فیلتر، ابعاد، گام، حاشیه] +
+
+ +
+ + +**6. [Tuning hyperparameters, Parameter compatibility, Model complexity, Receptive field]** + +
+[تنظیم ابرفراسنج‌ها، سازش‌پذیری فراسنج، پیچیدگی مدل، ناحیه‌ی تاثیر] +
+ +
+ + +**7. [Activation functions, Rectified Linear Unit, Softmax]** + +
+[توابع فعال‌سازی، تابع یکسوساز خطی، تابع بیشینه‌ی هموار] +
+ +
+ + +**8. [Object detection, Types of models, Detection, Intersection over Union, Non-max suppression, YOLO, R-CNN]** + +
+[شناسایی شیء، انواع مدل‌ها، شناسایی، نسبت هم‌پوشانی اشتراک به اجتماع، فروداشت غیربیشینه، YOLO، R-CNN] +
+ +
+ + +**9. [Face verification/recognition, One shot learning, Siamese network, Triplet loss]** + +
+[تایید/بازشناسایی چهره، یادگیری یک‌باره‌ای (One shot)، شبکه‌ی Siamese، خطای سه‌گانه] +
+ +
+ + +**10. [Neural style transfer, Activation, Style matrix, Style/content cost function]** + +
+[انتقالِ سبکِ عصبی، فعال سازی، ماتریسِ سبک، تابع هزینه‌ی محتوا/سبک] +
+ +
+ + +**11. [Computational trick architectures, Generative Adversarial Net, ResNet, Inception Network]** + +
+[معماری‌های با ترفندهای محاسباتی، شبکه‌ی هم‌آوردِ مولد، ResNet، شبکه‌ی Inception] +
+ +
+ + +**12. Overview** + +
+نمای کلی +
+ +
+ + +**13. Architecture of a traditional CNN ― Convolutional neural networks, also known as CNNs, are a specific type of neural networks that are generally composed of the following layers:** + +
+معماری یک CNN سنتی – شبکه‌های عصبی مصنوعی پیچشی، که همچنین با عنوان CNN شناخته می شوند، یک نوع خاص از شبکه های عصبی هستند که عموما از لایه‌های زیر تشکیل شده‌اند: +
+ +
+ + +**14. The convolution layer and the pooling layer can be fine-tuned with respect to hyperparameters that are described in the next sections.** + +
+لایه‌ی کانولوشنی و لایه‌ی ادغام می‌توانند به نسبت ابرفراسنج‌هایی که در بخش‌های بعدی بیان شده‌اند تنظیم و تعدیل شوند. +
+ +
+ + +**15. Types of layer** + +
+انواع لایه‌ها +
+ +
+ + +**16. Convolution layer (CONV) ― The convolution layer (CONV) uses filters that perform convolution operations as it is scanning the input I with respect to its dimensions. Its hyperparameters include the filter size F and stride S. The resulting output O is called feature map or activation map.** + +
+لایه کانولوشنی (CONV) - لایه کانولوشنی (CONV) از فیلترهایی استفاده می‌کند که عملیات کانولوشنی را در هنگام پویش ورودی I به نسبت ابعادش، اجرا می‌کند. ابرفراسنج‌های آن شامل اندازه فیلتر F و گام S هستند. خروجی حاصل شده O نگاشت ویژگی یا نگاشت فعال‌سازی نامیده می‌شود. +
+ +
+ + +**17. Remark: the convolution step can be generalized to the 1D and 3D cases as well.** + +
+نکته: مرحله کانولوشنی همچنین می‌تواند به موارد یک بُعدی و سه بُعدی تعمیم داده شود. +
+ +
+ + +**18. Pooling (POOL) ― The pooling layer (POOL) is a downsampling operation, typically applied after a convolution layer, which does some spatial invariance. In particular, max and average pooling are special kinds of pooling where the maximum and average value is taken, respectively.** + +
+لایه ادغام (POOL) - لایه ادغام (POOL) یک عمل نمونه‌کاهی است، که معمولا بعد از یک لایه کانولوشنی اعمال می‌شود، که تا حدی منجر به ناوردایی مکانی می‌شود. به طور خاص، ادغام بیشینه و میانگین انواع خاص ادغام هستند که به ترتیب مقدار بیشینه و میانگین گرفته می‌شود. +
+ +
+ + +**19. [Type, Purpose, Illustration, Comments]** + +
+[نوع، هدف، نگاره، توضیحات] +
+ +
+ + +**20. [Max pooling, Average pooling, Each pooling operation selects the maximum value of the current view, Each pooling operation averages the values of the current view]** + +
+[ادغام بیشینه، ادغام میانگین، هر عمل ادغام مقدار بیشینه‌ی نمای فعلی را انتخاب می‌کند، هر عمل ادغام مقدار میانگینِ نمای فعلی را انتخاب می‌کند] +
+ +
+ + +**21. [Preserves detected features, Most commonly used, Downsamples feature map, Used in LeNet]** + +
+[ویژگی‌های شناسایی شده را حفظ می‌کند، اغلب مورد استفاده قرار می‌گیرد، کاستن نگاشت ویژگی، در (معماری) LeNet استفاده شده است] +
+ +
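The two pooling operations above can be sketched in a few lines of NumPy (an illustrative sketch, assuming a square input and non-overlapping windows, i.e. stride S equal to the filter size F; `pool2d` is a made-up helper name):

```python
import numpy as np

def pool2d(x, f, mode="max"):
    """Slide a non-overlapping f x f window over x (stride S = F) and keep
    the maximum or the average of each view."""
    h, w = x.shape
    out = np.empty((h // f, w // f))
    for i in range(h // f):
        for j in range(w // f):
            view = x[i*f:(i+1)*f, j*f:(j+1)*f]
            out[i, j] = view.max() if mode == "max" else view.mean()
    return out

x = np.arange(1.0, 17.0).reshape(4, 4)   # [[1..4], [5..8], [9..12], [13..16]]
max_pooled = pool2d(x, 2, "max")
avg_pooled = pool2d(x, 2, "average")
```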
+ + +**22. Fully Connected (FC) ― The fully connected layer (FC) operates on a flattened input where each input is connected to all neurons. If present, FC layers are usually found towards the end of CNN architectures and can be used to optimize objectives such as class scores.** + +
+تمام‌متصل (FC) - لایه‌ی تمام‌متصل (FC) بر روی یک ورودی مسطح به طوری ‌که هر ورودی به تمامی نورون‌ها متصل است، عمل می‌کند. در صورت وجود، لایه‌های FC معمولا در انتهای معماری‌های CNN یافت می‌شوند و می‌توان آن‌ها را برای بهینه‌سازی اهدافی مثل امتیازات کلاس به‌ کار برد. +
+
+ + +**23. Filter hyperparameters** + +
+ابرفراسنج‌های فیلتر +
+ +
+ + +**24. The convolution layer contains filters for which it is important to know the meaning behind its hyperparameters.** + +
+لایه کانولوشنی شامل فیلترهایی است که دانستن مفهوم نهفته در فراسنج‌های آن اهمیت دارد. +
+ +
+ + +**25. Dimensions of a filter ― A filter of size F×F applied to an input containing C channels is a F×F×C volume that performs convolutions on an input of size I×I×C and produces an output feature map (also called activation map) of size O×O×1.** + +
+ابعاد یک فیلتر - یک فیلتر به اندازه F×F اعمال شده بر روی یک ورودیِ حاوی C کانال، یک توده F×F×C است که (عملیات) پیچشی بر روی یک ورودی به اندازه I×I×C اعمال می‌کند و یک نگاشت ویژگی خروجی (که همچنین نگاشت فعال‌سازی نامیده می‌شود) به اندازه O×O×1 تولید می‌کند. +
+ +
+ + +**26. Filter** + +
+فیلتر +
+ +
+ + +**27. Remark: the application of K filters of size F×F results in an output feature map of size O×O×K.** + +
+نکته: اعمال K فیلتر به اندازه‌ی F×F، منتج به یک نگاشت ویژگی خروجی به اندازه O×O×K می‌شود. +
+ +
+ + +**28. Stride ― For a convolutional or a pooling operation, the stride S denotes the number of pixels by which the window moves after each operation.** + +
+گام – در یک عملیات ادغام یا پیچشی، اندازه گام S به تعداد پیکسل‌هایی که پنجره بعد از هر عملیات جابه‌جا می‌شود، اشاره دارد. +
+ +
+ + +**29. Zero-padding ― Zero-padding denotes the process of adding P zeroes to each side of the boundaries of the input. This value can either be manually specified or automatically set through one of the three modes detailed below:** + +
+حاشیه‌ی صفر – حاشیه‌ی صفر به فرآیند افزودن P صفر به هر طرف از کرانه‌های ورودی اشاره دارد. این مقدار می‌تواند به طور دستی مشخص شود یا به طور خودکار به سه روش زیر تعیین گردد: +
+ +
+ + +**30. [Mode, Value, Illustration, Purpose, Valid, Same, Full]** + +
+[نوع، مقدار، نگاره، هدف، Valid، Same، Full] +
+ +
+ + +**31. [No padding, Drops last convolution if dimensions do not match, Padding such that feature map size has size ⌈IS⌉, Output size is mathematically convenient, Also called 'half' padding, Maximum padding such that end convolutions are applied on the limits of the input, Filter 'sees' the input end-to-end]** + +
+[فاقد حاشیه، اگر ابعاد مطابقت ندارد آخرین کانولوشنی را رها کن، (اعمال) حاشیه به طوری که اندازه نگاشت ویژگی ⌈IS⌉ باشد، (محاسبه) اندازه خروجی به لحاظ ریاضیاتی آسان است، همچنین حاشیه‌ی 'نیمه' نامیده می‌شود، بالاترین حاشیه (اعمال می‌شود) به طوری که (عملیات) کانولوشنی انتهایی بر روی مرزهای ورودی اعمال می‌شود، فیلتر ورودی را به صورت پکپارچه 'می‌پیماید'] +
+ +
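The three modes in the table can be expressed as the total number of zeroes Pstart+Pend added along one dimension (a sketch assuming the output sizes stated above: 'valid' adds none, 'same' targets an output of ⌈I/S⌉, 'full' pads F−1 on each side; `padding_total` is an illustrative name):

```python
import math

def padding_total(mode, I, F, S):
    """Total zero padding P_start + P_end along one dimension for each mode."""
    if mode == "valid":
        return 0                        # no padding at all
    if mode == "same":
        O = math.ceil(I / S)            # output size targeted by 'same'
        return max((O - 1) * S + F - I, 0)
    if mode == "full":
        return 2 * (F - 1)              # F - 1 zeroes on each side
    raise ValueError(mode)

p_same = padding_total("same", 5, 3, 1)   # 2 zeroes in total, i.e. 1 per side
```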
+ + +**32. Tuning hyperparameters** + +
+تنظیم ابرفراسنج‌ها +
+ +
+ + +**33. Parameter compatibility in convolution layer ― By noting I the length of the input volume size, F the length of the filter, P the amount of zero padding, S the stride, then the output size O of the feature map along that dimension is given by:** + +
+سازش‌پذیری فراسنج در لایه کانولوشنی – با ذکر I به عنوان طول اندازه توده ورودی، F طول فیلتر، P میزان حاشیه‌ی صفر، S گام، اندازه خروجی نگاشت ویژگی O در امتداد ابعاد خواهد بود: +
+ +
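The formula above, O=(I−F+Pstart+Pend)/S+1, can be checked with a short helper (illustrative names; integer division assumes the parameters are compatible):

```python
def conv_output_size(I, F, S, P_start=0, P_end=0):
    """O = (I - F + P_start + P_end) / S + 1 along one spatial dimension."""
    return (I - F + P_start + P_end) // S + 1

o_valid = conv_output_size(32, 5, 1)         # 32-pixel input, 5x5 filter, no padding
o_same = conv_output_size(32, 3, 1, 1, 1)    # 3x3 filter with 1 zero on each side
```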
+ + +**34. [Input, Filter, Output]** + +
+[ورودی، فیلتر، خروجی] +
+ +
+ + +**35. Remark: often times, Pstart=Pend≜P, in which case we can replace Pstart+Pend by 2P in the formula above.** + +
+نکته: اغلب Pstart=Pend≜P است، در این صورت Pstart+Pend را می‌توان با 2 Pدر فرمول بالا جایگزین کرد. +
+ +
+ + +**36. Understanding the complexity of the model ― In order to assess the complexity of a model, it is often useful to determine the number of parameters that its architecture will have. In a given layer of a convolutional neural network, it is done as follows:** + +
+درک پیچیدگی مدل – برای برآورد پیچیدگی مدل، اغلب تعیین تعداد فراسنج‌هایی که معماری آن می‌تواند داشته باشد، مفید است. در یک لایه مفروض شبکه پیچشی عصبی این امر به صورت زیر انجام می‌شود: +
+ +
+ + +**37. [Illustration, Input size, Output size, Number of parameters, Remarks]** + +
+[نگاره، اندازه ورودی، اندازه خروجی، تعداد فراسنج‌ها، ملاحظات] +
+ +
+ + +**38. [One bias parameter per filter, In most cases, S<F, A common choice for K is 2C]** + +
 +[یک پیش‌قدر به ازای هر فیلتر، در بیشتر موارد S<F است، یک انتخاب رایج برای K، 2C است] +
+ + +
+ + +**39. [Pooling operation done channel-wise, In most cases, S=F]** + +
+[عملیات ادغام به صورت کانال‌به‌کانال انجام می‌شود، در بیشتر موارد S=F است] +
+ +
+ +**40. [Input is flattened, One bias parameter per neuron, The number of FC neurons is free of structural constraints]** + +
+[ورودی مسطح شده است، یک پیش‌قدر به ازای هر نورون، تعداد نورون‌های FC فاقد محدودیت‌های ساختاری‌ست] +
+ +
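The parameter counts in the table above can be sketched as follows (illustrative helper names; a CONV layer has one bias per filter, an FC layer one bias per neuron, and a pooling layer no learnable parameters at all):

```python
def conv_params(F, C, K):
    """K filters of size F x F x C, plus one bias per filter."""
    return (F * F * C + 1) * K

def fc_params(n_in, n_out):
    """One weight per (input, neuron) pair plus one bias per neuron."""
    return (n_in + 1) * n_out

pool_params = 0   # pooling layers introduce no learnable parameters

conv_p = conv_params(3, 3, 16)   # e.g. 16 filters of 3x3 over an RGB input
fc_p = fc_params(120, 10)
```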
+ + +**41. Receptive field ― The receptive field at layer k is the area denoted Rk×Rk of the input that each pixel of the k-th activation map can 'see'. By calling Fj the filter size of layer j and Si the stride value of layer i and with the convention S0=1, the receptive field at layer k can be computed with the formula:** + +
+ناحیه تاثیر – ناحیه تاثیر در لایه k محدوده‌ای از ورودی Rk×Rk است که هر پیکسلِ kاٌم نگاشت ویژگی می‌تواند 'ببیند'. با ذکر Fj به عنوان اندازه فیلتر لایه j و Si مقدار گام لایه i و با این توافق که S0=1 است، ناحیه تاثیر در لایه k با فرمول زیر محاسبه می‌شود: +
+ +
+ + +**42. In the example below, we have F1=F2=3 and S1=S2=1, which gives R2=1+2⋅1+2⋅1=5.** + +
+در مثال زیر داریم، F1=F2=3 و S1=S2=1 که منتج به R2=1+2⋅1+2⋅1=5 می‌شود. +
+ +
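The receptive-field formula and the worked example above can be sketched as (an illustrative `receptive_field` helper, with the stated convention S0=1):

```python
def receptive_field(filters, strides):
    """R_k = 1 + sum_j (F_j - 1) * prod_{i<j} S_i, with the convention S_0 = 1."""
    R, jump = 1, 1        # jump accumulates the product of earlier strides
    for F, S in zip(filters, strides):
        R += (F - 1) * jump
        jump *= S
    return R

r2 = receptive_field([3, 3], [1, 1])   # the worked example: 1 + 2*1 + 2*1 = 5
```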
+ + +**43. Commonly used activation functions** + +
+توابع فعال‌سازی پرکاربرد +
+ +
+ + +**44. Rectified Linear Unit ― The rectified linear unit layer (ReLU) is an activation function g that is used on all elements of the volume. It aims at introducing non-linearities to the network. Its variants are summarized in the table below:** + +
+تابع یکسوساز خطی – تابع یکسوساز خطی (ReLU) یک تابع فعال‌سازی g است که بر روی تمامی عناصر توده اعمال می‌شود. هدف آن ارائه (رفتار) غیرخطی به شبکه است. انواع آن در جدول زیر به‌صورت خلاصه آمده‌اند: +
+ +
+ + +**45. [ReLU, Leaky ReLU, ELU, with]** + +
+[ReLU ، ReLUنشت‌دار، ELU، با] +
+ +
+ + +**46. [Non-linearity complexities biologically interpretable, Addresses dying ReLU issue for negative values, Differentiable everywhere]** + +
+[پیچیدگی‌های غیر خطی که از دیدگاه زیستی قابل تفسیر هستند، مسئله افول ReLU برای مقادیر منفی را مهار می‌کند، در تمامی نقاط مشتق‌پذیر است] +
+ +
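A NumPy sketch of the three variants in the table (the values ε=0.01 and α=1 are typical illustrative choices; the cheatsheet leaves them as free parameters):

```python
import numpy as np

def relu(z):
    """g(z) = max(z, 0)."""
    return np.maximum(z, 0)

def leaky_relu(z, eps=0.01):
    """g(z) = z for z > 0, eps * z otherwise (keeps a small negative slope)."""
    return np.where(z > 0, z, eps * z)

def elu(z, alpha=1.0):
    """g(z) = z for z > 0, alpha * (exp(z) - 1) otherwise (smooth everywhere)."""
    return np.where(z > 0, z, alpha * (np.exp(z) - 1))

z = np.array([-2.0, 0.0, 3.0])
```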
+ + +**47. Softmax ― The softmax step can be seen as a generalized logistic function that takes as input a vector of scores x∈Rn and outputs a vector of output probability p∈Rn through a softmax function at the end of the architecture. It is defined as follows:** + +
+بیشینه‌ی هموار – مرحله بیشینه‌ی هموار را می‌توان به عنوان یک تابع لجستیکی تعمیم داده شده که یک بردار x∈Rn را از ورودی می‌گیرد و یک بردار خروجی احتمال p∈Rn، به‌واسطه‌ی تابع بیشینه‌ی هموار در انتهای معماری، تولید می‌کند. این تابع به‌صورت زیر تعریف می‌شود: +
+ +
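A minimal sketch of the softmax step, p_i = e^{x_i}/∑_j e^{x_j} (subtracting the maximum is a standard numerical-stability trick, not part of the definition):

```python
import numpy as np

def softmax(x):
    """p_i = exp(x_i) / sum_j exp(x_j); the max-shift leaves the result unchanged."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))
```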
+ + +**48. where** + +
+که +
+ +
+ + +**49. Object detection** + +
+شناسایی شیء +
+ +
+ + +**50. Types of models ― There are 3 main types of object recognition algorithms, for which the nature of what is predicted is different. They are described in the table below:** + +
+انواع مدل‌ها – سه نوع اصلی از الگوریتم‌های بازشناسایی شیء وجود دارد که ماهیت آنچه پیش‌بینی می‌شود در آنها متفاوت است. این الگوریتم‌ها در جدول زیر توضیح داده شده‌اند: +
+ +
+ + +**51. [Image classification, Classification w. localization, Detection]** + +
+[دسته‌بندی تصویر، دسته‌بندی با موقعیت‌یابی، شناسایی] +
+ +
+ + +**52. [Teddy bear, Book]** + +
+[خرس تدی، کتاب] +
+ +
+ + +**53. [Classifies a picture, Predicts probability of object, Detects an object in a picture, Predicts probability of object and where it is located, Detects up to several objects in a picture, Predicts probabilities of objects and where they are located]** + +
+[یک عکس را دسته‌بندی می‌کند، احتمال شیء را پیش‌بینی می‌کند، یک شیء را در یک عکس شناسایی می‌کند، احتمال یک شیء و موقعیت آن را پیش‌بینی میکند، چندین شیء در یک عکس را شناسایی می‌کند، احتمال اشیاء و موقعیت آنها را پیش‌بینی می‌کند] +
+ +
+ + +**54. [Traditional CNN, Simplified YOLO, R-CNN, YOLO, R-CNN]** + +
+[CNN سنتی، YOLO ساده شده، R-CNN، YOLO، R-CNN] +
+ +
+ + +**55. Detection ― In the context of object detection, different methods are used depending on whether we just want to locate the object or detect a more complex shape in the image. The two main ones are summed up in the table below:** + +
+شناسایی – در مضمون شناسایی شیء، روشهای مختلفی بسته به اینکه آیا فقط می‌خواهیم موقعیت قرارگیری شیء را پیدا کنیم یا شکل پیچیده‌تری در تصویر را شناسایی کنیم، استفاده می‌شوند. دو مورد از اصلی ترین آنها در جدول زیر به‌صورت خلاصه آورده‌ شده‌اند: +
+ +
+ + +**56. [Bounding box detection, Landmark detection]** + +
+[شناسایی کادر محصورکننده، شناسایی نقاط برجسته] +
+ +
+ + +**57. [Detects the part of the image where the object is located, Detects a shape or characteristics of an object (e.g. eyes), More granular]** + +
+[بخشی از تصویر که شیء در آن قرار گرفته را شناسایی می‌کند، یک شکل یا مشخصات یک شیء (مثل چشم‌ها) را شناسایی می‌کند، موشکافانه‌تر] +
+ +
+ + +**58. [Box of center (bx,by), height bh and width bw, Reference points (l1x,l1y), ..., (lnx,lny)]** + +
+[مرکزِ کادر (bx,by)، ارتفاع bh و عرض bw، نقاط مرجع (l1x,l1y), ..., (lnx,lny)] +
+ +
+ + +**59. Intersection over Union ― Intersection over Union, also known as IoU, is a function that quantifies how correctly positioned a predicted bounding box Bp is over the actual bounding box Ba. It is defined as:** + +
+نسبت هم‌پوشانی اشتراک به اجتماع - نسبت هم‌پوشانی اشتراک به اجتماع، همچنین به عنوان IoU شناخته می‌شود، تابعی‌ است که میزان موقعیت دقیق کادر محصورکننده Bp نسبت به کادر محصورکننده حقیقی Ba را می‌سنجد. این تابع به‌صورت زیر تعریف می‌شود: +
+ +
+ + +**60. Remark: we always have IoU∈[0,1]. By convention, a predicted bounding box Bp is considered as being reasonably good if IoU(Bp,Ba)⩾0.5.** + +
+نکته: همواره داریم IoU∈[0,1]. به صورت قرارداد، یک کادر محصورکننده Bp را می‌توان نسبتا خوب در نظر گرفت اگر IoU(Bp,Ba)⩾0.5 باشد. +
+ +
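IoU can be computed directly from box coordinates (a sketch assuming boxes are given as (x1, y1, x2, y2) corners rather than the center/width/height parametrization used elsewhere in this sheet):

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

score = iou((0, 0, 2, 2), (1, 1, 3, 3))   # intersection 1, union 7
```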
+ + +**61. Anchor boxes ― Anchor boxing is a technique used to predict overlapping bounding boxes. In practice, the network is allowed to predict more than one box simultaneously, where each box prediction is constrained to have a given set of geometrical properties. For instance, the first prediction can potentially be a rectangular box of a given form, while the second will be another rectangular box of a different geometrical form.** + +
+کادرهای محوری – کادر بندی محوری روشی است که برای پیش‌بینی کادرهای محصورکننده هم‌پوشان استفاده می‌شود. در عمل، شبکه این اجازه را دارد که بیش از یک کادر به‌صورت هم‌زمان پیش‌بینی کند جایی‌که هر پیش‌بینی کادر مقید به داشتن یک مجموعه خصوصیات هندسی مفروض است. به عنوان مثال، اولین پیش‌بینی می‌تواند یک کادر مستطیلی با قالب خاص باشد حال آنکه کادر دوم، یک کادر مستطیلی محوری با قالب هندسی متفاوتی خواهد بود. +
+ +
+ + +**62. Non-max suppression ― The non-max suppression technique aims at removing duplicate overlapping bounding boxes of a same object by selecting the most representative ones. After having removed all boxes having a probability prediction lower than 0.6, the following steps are repeated while there are boxes remaining:** + +
+فروداشت غیربیشینه – هدف روش فروداشت غیربیشینه، حذف کادرهای محصورکننده هم‌پوشان تکراریِ دسته یکسان با انتخاب معرف‌ترین‌ها است. بعد از حذف همه کادرهایی که احتمال پیش‌بینی پایین‌تر از 0.6 دارند، مراحل زیر با وجود آنکه کادرهایی باقی می‌مانند، تکرار می‌شوند: +
+ +
+ + +**63. [For a given class, Step 1: Pick the box with the largest prediction probability., Step 2: Discard any box having an IoU⩾0.5 with the previous box.]** + +
+[برای یک دسته مفروض، گام اول: کادر با بالاترین احتمال پیش‌بینی را انتخاب کن، گام دوم: هر کادری که IoU≥0.5 نسبت به کادر پیشین دارد را رها کن.] +
+ +
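The two steps above, together with the 0.6 probability cut-off, can be sketched for a single class as follows (illustrative names; boxes as (x1, y1, x2, y2) corners):

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

def non_max_suppression(boxes, probs, p_min=0.6, iou_max=0.5):
    """Keep only the most representative box of each overlap group."""
    cands = [(p, b) for p, b in zip(probs, boxes) if p >= p_min]
    cands.sort(reverse=True)          # highest prediction probability first
    kept = []
    while cands:
        p, best = cands.pop(0)        # Step 1: largest prediction probability
        kept.append(best)
        # Step 2: discard remaining boxes with IoU >= iou_max against it
        cands = [(q, b) for q, b in cands if iou(best, b) < iou_max]
    return kept

boxes = [(0, 0, 2, 2), (0, 0, 2, 2.2), (5, 5, 6, 6), (0, 0, 1, 1)]
probs = [0.9, 0.8, 0.7, 0.3]          # the 0.3 box is dropped by the cut-off
kept = non_max_suppression(boxes, probs)
```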
+ + +**64. [Box predictions, Box selection of maximum probability, Overlap removal of same class, Final bounding boxes]** + +
+[پیش‌بینی کادرها، انتخاب کادرِ با احتمال بیشینه، حذف (کادر) همپوشان دسته یکسان، کادرهای محصورکننده نهایی] +
+ +
+ + +**65. YOLO ― You Only Look Once (YOLO) is an object detection algorithm that performs the following steps:** + +
+YOLO - «شما فقط یک‌بار نگاه می‌کنید» (YOLO) یک الگوریتم شناسایی شیء است که مراحل زیر را اجرا می‌کند: +
+ +
+ + +**66. [Step 1: Divide the input image into a G×G grid., Step 2: For each grid cell, run a CNN that predicts y of the following form:, repeated k times]** + +
+[گام اول: تصویر ورودی را به یک مشبک G×G تقسیم کن، گام دوم: برای هر سلول مشبک، یک CNN که y را به شکل زیر پیش‌بینی می‌کند، اجرا کن:، k مرتبه تکرارشده] +
+ +
+ + +**67. where pc is the probability of detecting an object, bx,by,bh,bw are the properties of the detected bounding box, c1,...,cp is a one-hot representation of which of the p classes were detected, and k is the number of anchor boxes.** + +
+که pc احتمال شناسایی یک شیء است، bx,by,bh,bw خصوصیات کادر محصورکننده‌ی شناسایی‌شده هستند، c1,...,cp نمایش «تک‌فعال» دسته‌ای از p دسته است که شناسایی شده، و k تعداد کادرهای محوری است. + +
+ +
+ + +**68. Step 3: Run the non-max suppression algorithm to remove any potential duplicate overlapping bounding boxes.** + +
+گام سوم: الگوریتم فروداشت غیربیشینه را برای حذف هر کادر محصورکننده هم‌پوشان تکراری بالقوه، اجرا کن. +
+ +
+ + +**69. [Original image, Division in GxG grid, Bounding box prediction, Non-max suppression]** + +
+[تصویر اصلی، تقسیم به GxG مشبک، پیش‌بینی کادر محصورکننده، فروداشت غیربیشینه] +
+ +
+ + +**70. Remark: when pc=0, then the network does not detect any object. In that case, the corresponding predictions bx,...,cp have to be ignored.** + +
+نکته: زمانی‌که pc=0 است، شبکه هیچ شیئی را شناسایی نمی‌کند. در چنین حالتی، پیش‌بینی‌های متناظر bx,…,cp بایستی نادیده گرفته شوند. +
+ +
+ + +**71. R-CNN ― Region with Convolutional Neural Networks (R-CNN) is an object detection algorithm that first segments the image to find potential relevant bounding boxes and then run the detection algorithm to find most probable objects in those bounding boxes.** + +
+R-CNN - ناحیه با شبکه‌های عصبی پیچشی (R-CNN) یک الگوریتم شناسایی شیء است که ابتدا تصویر را برای یافتن کادرهای محصورکننده مربوط بالقوه قطعه‌بندی می‌کند و سپس الگوریتم شناسایی را برای یافتن محتمل‌ترین اشیاء در این کادرهای محصور کننده اجرا می‌کند. +
+ +
+ + +**72. [Original image, Segmentation, Bounding box prediction, Non-max suppression]** + +
+[تصویر اصلی، قطعه بندی، پیش‌بینی کادر محصور کننده، فروداشت غیربیشینه] +
+ +
+ + +**73. Remark: although the original algorithm is computationally expensive and slow, newer architectures enabled the algorithm to run faster, such as Fast R-CNN and Faster R-CNN.** + +
+نکته: هرچند الگوریتم اصلی به لحاظ محاسباتی پرهزینه و کند است، معماری‌های جدید از قبیل Fast R-CNN و Faster R-CNN باعث شدند که الگوریتم سریعتر اجرا شود. +
+ +
+ + +**74. Face verification and recognition** + +
+تایید چهره و بازشناسایی +
+ +
+ + +**75. Types of models ― Two main types of model are summed up in table below:** + +
+انواع مدل – دو نوع اصلی از مدل در جدول زیر به‌صورت خلاصه آورده‌ شده‌اند: +
+ +
+ + +**76. [Face verification, Face recognition, Query, Reference, Database]** + +
+[تایید چهره، بازشناسایی چهره، جستار، مرجع، پایگاه داده] +
+ +
+ + +**77. [Is this the correct person?, One-to-one lookup, Is this one of the K persons in the database?, One-to-many lookup]** + +
+[فرد مورد نظر است؟، جستجوی یک‌به‌یک، این فرد یکی از K فرد پایگاه داده است؟، جستجوی یک‌به‌چند] +
+ +
+ + +**78. One Shot Learning ― One Shot Learning is a face verification algorithm that uses a limited training set to learn a similarity function that quantifies how different two given images are. The similarity function applied to two images is often noted d(image 1,image 2).** + +
+یادگیری یک‌باره‌ای – یادگیری یک‌باره‌ای یک الگوریتم تایید چهره است که از یک مجموعه آموزشی محدود برای یادگیری یک تابع مشابهت که میزان اختلاف دو تصویر مفروض را تعیین می‌کند، بهره می‌برد. تابع مشابهت اعمال‌شده بر روی دو تصویر اغلب با نماد d(image 1, image 2) نمایش داده می‌شود. +
+ +
+ + +**79. Siamese Network ― Siamese Networks aim at learning how to encode images to then quantify how different two images are. For a given input image x(i), the encoded output is often noted as f(x(i)).** + +
+شبکه‌ی Siamese - هدف شبکه‌ی Siamese یادگیری طریقه رمزنگاری تصاویر و سپس تعیین اختلاف دو تصویر است. برای یک تصویر مفروض ورودی x(i)، خروجی رمزنگاری شده اغلب با نماد f(x(i)) نمایش داده می‌شود. +
+ +
+ + +**80. Triplet loss ― The triplet loss ℓ is a loss function computed on the embedding representation of a triplet of images A (anchor), P (positive) and N (negative). The anchor and the positive example belong to a same class, while the negative example to another one. By calling α∈R+ the margin parameter, this loss is defined as follows:** + +
+خطای سه‌گانه – خطای سه‌گانه ℓ یک تابع خطا است که بر روی بازنمایی تعبیه‌ی سه‌گانه‌ی تصاویر A (محور)، P (مثبت) و N (منفی) محاسبه می‌شود. نمونه‌های محور (anchor) و مثبت به دسته یکسانی تعلق دارند، حال آنکه نمونه منفی به دسته دیگری تعلق دارد. با نامیدن α∈R+ (به عنوان) فراسنج حاشیه، این خطا به‌صورت زیر تعریف می‌شود: +
+ +
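A sketch of the triplet loss on already-computed embeddings f(A), f(P), f(N), taking d to be the squared Euclidean distance (a common choice; the margin α=0.2 and the embeddings are illustrative):

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """l = max(d(A,P) - d(A,N) + alpha, 0), pulling the positive closer
    to the anchor than the negative by at least the margin alpha."""
    d_ap = np.sum((f_a - f_p) ** 2)
    d_an = np.sum((f_a - f_n) ** 2)
    return max(d_ap - d_an + alpha, 0.0)

f_a = np.array([0.0, 0.0])   # anchor embedding
f_p = np.array([0.1, 0.0])   # positive: same identity, close to the anchor
f_n = np.array([1.0, 1.0])   # negative: other identity, far from the anchor
loss = triplet_loss(f_a, f_p, f_n)   # margin satisfied here
```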
+ + +**81. Neural style transfer** + +
+انتقالِ سبک عصبی +
+ +
+ + +**82. Motivation ― The goal of neural style transfer is to generate an image G based on a given content C and a given style S.** + +
+انگیزه – هدف انتقالِ سبک عصبی تولید یک تصویر G بر مبنای یک محتوای مفروض C و سبک مفروض S است. +
+ +
+ + +**83. [Content C, Style S, Generated image G]** + +
+[محتوای C، سبک S، تصویر تولیدشده‌ی G] +
+ +
+ + +**84. Activation ― In a given layer l, the activation is noted a[l] and is of dimensions nH×nw×nc** + +
+فعال‌سازی – در یک لایه مفروض l، فعال‌سازی با a[l] نمایش داده می‌شود و به ابعاد nH×nw×nc است +
+ +
+ + +**85. Content cost function ― The content cost function Jcontent(C,G) is used to determine how the generated image G differs from the original content image C. It is defined as follows:** + +
+تابع هزینه‌ی محتوا – تابع هزینه‌ی محتوا Jcontent(C,G) برای تعیین میزان اختلاف تصویر تولیدشده G از تصویر اصلی C استفاده می‌شود. این تابع به‌صورت زیر تعریف می‌شود: +
+ +
+ + +**86. Style matrix ― The style matrix G[l] of a given layer l is a Gram matrix where each of its elements G[l]kk′ quantifies how correlated the channels k and k′ are. It is defined with respect to activations a[l] as follows:** + +
+ماتریسِ سبک - ماتریسِ سبک G[l] یک لایه مفروض l، یک ماتریس گرَم (Gram) است که هر کدام از عناصر G[l]kk′ میزان همبستگی کانال‌های k و k′ را می‌سنجند. این ماتریس نسبت به فعال‌سازی‌های a[l] به‌صورت زیر محاسبه می‌شود: +
+ +
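The style matrix is a plain Gram matrix over channels, G[k,k′]=∑i,j a[i,j,k]·a[i,j,k′], and can be sketched as (illustrative shapes):

```python
import numpy as np

def style_matrix(a):
    """Gram matrix of an activation volume a of shape (nH, nW, nC):
    G[k, k'] = sum over i, j of a[i, j, k] * a[i, j, k']; result is nC x nC."""
    nH, nW, nC = a.shape
    flat = a.reshape(nH * nW, nC)   # one row per spatial position
    return flat.T @ flat

a = np.random.default_rng(0).standard_normal((4, 4, 3))
G = style_matrix(a)
```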
+ + +**87. Remark: the style matrix for the style image and the generated image are noted G[l] (S) and G[l] (G) respectively.** + +
+نکته: ماتریس سبک برای تصویر سبک و تصویر تولید شده، به ترتیب با G[l] (S) و G[l] (G) نمایش داده می‌شوند. +
+ +
+ + +**88. Style cost function ― The style cost function Jstyle(S,G) is used to determine how the generated image G differs from the style S. It is defined as follows:** + +
+تابع هزینه‌ی سبک – تابع هزینه‌ی سبک Jstyle(S,G) برای تعیین میزان اختلاف تصویر تولیدشده G و سبک S استفاده می‌شود. این تابع به صورت زیر تعریف می‌شود: +
+ +
+ + +**89. Overall cost function ― The overall cost function is defined as being a combination of the content and style cost functions, weighted by parameters α,β, as follows:** + +
+تابع هزینه‌ی کل – تابع هزینه‌ی کل به صورت ترکیبی از توابع هزینه‌ی سبک و محتوا تعریف شده است که با فراسنج‌های α,β, به شکل زیر وزن‌دار شده است: +
+ +
+ + +**90. Remark: a higher value of α will make the model care more about the content while a higher value of β will make it care more about the style.** + +
+نکته: مقدار بیشتر α مدل را به توجه بیشتر به محتوا وا می‌دارد حال آنکه مقدار بیشتر β مدل را به توجه بیشتر به سبک وا می‌دارد. +
+ +
+ + +**91. Architectures using computational tricks** + +
+معماری‌هایی که از ترفندهای محاسباتی استفاده می‌کنند +
+ +
+ + +**92. Generative Adversarial Network ― Generative adversarial networks, also known as GANs, are composed of a generative and a discriminative model, where the generative model aims at generating the most truthful output that will be fed into the discriminative which aims at differentiating the generated and true image.** + +
+شبکه‌های هم‌آوردِ مولد – شبکه‌های هم‌آوردِ مولد، که با نام GANs نیز شناخته می‌شوند، از یک مدل مولد و یک مدل تمیزدهنده تشکیل شده‌اند؛ مدل مولد هدفش تولید واقعی‌ترین خروجی‌ای است که به (مدل) تمیزدهنده تغذیه می‌شود و هدف این (مدل) تفکیک بین تصویر تولیدشده و واقعی است.
+</div>
+ +
+ + +**93. [Training, Noise, Real-world image, Generator, Discriminator, Real Fake]** + +
+[آموزش، نویز، تصویر دنیای واقعی، مولد، تمیز دهنده، واقعی بدلی] +
+ +
+ + +**94. Remark: use cases using variants of GANs include text to image, music generation and synthesis.** + +
+نکته: موارد استفاده‌ی گونه‌های مختلف GANها شامل تبدیل متن به تصویر، و تولید و ترکیب (سنتز) موسیقی است.
+</div>
+ +
+ + +**95. ResNet ― The Residual Network architecture (also called ResNet) uses residual blocks with a high number of layers meant to decrease the training error. The residual block has the following characterizing equation:** + +
+ResNet – معماری شبکه‌ی پسماند (همچنین با عنوان ResNet شناخته می‌شود) از بلاک‌های پسماند با تعداد لایه‌های زیاد به منظور کاهش خطای آموزش استفاده می‌کند. بلاک پسماند معادله‌ای با خصوصیات زیر دارد: +
+ +
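The characterizing equation can be illustrated with a scalar toy sketch (hypothetical values, not the original's code): a[l+2]=g(z[l+2]+a[l]), where the skip connection adds the block input back in before the activation g.

```python
# Scalar sketch of a residual block with g = ReLU: the skip connection adds
# the block input a_l to the block's pre-activation z_l2 before activating.
def relu(x):
    return max(0.0, x)

def residual_block(a_l, z_l2):
    # z_l2 stands for the pre-activation produced by the block's stacked layers
    return relu(z_l2 + a_l)

out = residual_block(1.0, -0.25)  # 0.75: the skip connection keeps the signal alive
```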
+ + +**96. Inception Network ― This architecture uses inception modules and aims at giving a try at different convolutions in order to increase its performance through features diversification. In particular, it uses the 1×1 convolution trick to limit the computational burden.** + +
+شبکه‌ی Inception – این معماری از ماژول‌های inception استفاده می‌کند و هدفش فرصت دادن به (عملیات) کانولوشنی مختلف برای افزایش کارایی از طریق تنوع‌بخشی ویژگی‌ها است. به طور خاص، این معماری از ترفند کانولوشنی 1×1 برای محدود سازی بار محاسباتی استفاده می‌کند. +
+ +
+ + +**97. The Deep Learning cheatsheets are now available in [target language].** + +
+راهنمای یادگیری عمیق هم اکنون به زبان [فارسی] در دسترس است.
+</div>
+ +
+ + +**98. Original authors** + +
+نویسندگان اصلی +
+ +
+ + +**99. Translated by X, Y and Z** + +
+ترجمه شده توسط X،Y و Z +
+ +
+ + +**100. Reviewed by X, Y and Z** + +
+بازبینی شده توسط X، Y و Z
+</div>
+ +
+ + +**101. View PDF version on GitHub** + +
+نسخه پی‌دی‌اف را در گیت‌هاب ببینید +
+ +
+ + +**102. By X and Y** + +
+توسط X و Y +
+ +
+ diff --git a/fa/cs-230-deep-learning-tips-and-tricks.md b/fa/cs-230-deep-learning-tips-and-tricks.md new file mode 100644 index 000000000..1248a06bf --- /dev/null +++ b/fa/cs-230-deep-learning-tips-and-tricks.md @@ -0,0 +1,586 @@ + +**Deep Learning Tips and Tricks translation** + +
+ +**1. Deep Learning Tips and Tricks cheatsheet** + +
+راهنمای کوتاه نکات و ترفندهای یادگیری عمیق +
+ +
+ + +**2. CS 230 - Deep Learning** + +
+کلاس CS 230 - یادگیری عمیق +
+ +
+ + +**3. Tips and tricks** + +
+نکات و ترفندها +
+ +
+ + +**4. [Data processing, Data augmentation, Batch normalization]** + +
+[پردازش داده، داده‌افزایی، نرمال‌سازی دسته‌ای] +
+ +
+ + +**5. [Training a neural network, Epoch, Mini-batch, Cross-entropy loss, Backpropagation, Gradient descent, Updating weights, Gradient checking]** + +
+[آموزش یک شبکه‌ی عصبی، تکرار(Epoch)، دسته‌ی کوچک، خطای آنتروپی متقاطع، انتشار معکوس، گرادیان نزولی، به‌روزرسانی وزن‌ها، وارسی گرادیان] +
+ +
+ + +**6. [Parameter tuning, Xavier initialization, Transfer learning, Learning rate, Adaptive learning rates]** + +
+[تنظیم فراسنج، مقداردهی اولیه Xavier، یادگیری انتقالی، نرخ یادگیری، نرخ یادگیری سازگارشونده]
+</div>
+ +
+ + +**7. [Regularization, Dropout, Weight regularization, Early stopping]** + +
+[نظام‌بخشی، برون‌اندازی، نظام‌بخشی وزن، توقف زودهنگام] +
+ +
+ + +**8. [Good practices, Overfitting small batch, Gradient checking]** + +
+[عادت‌های خوب، بیش‌برارزش دسته‌ی کوچک، وارسی گرادیان] +
+ +
+ + +**9. View PDF version on GitHub** + +
+نسخه پی‌دی‌اف را در گیت‌هاب ببینید +
+ +
+ + +**10. Data processing** + +
+پردازش داده +
+ +
+ + +**11. Data augmentation ― Deep learning models usually need a lot of data to be properly trained. It is often useful to get more data from the existing ones using data augmentation techniques. The main ones are summed up in the table below. More precisely, given the following input image, here are the techniques that we can apply:** + +
+داده‌افزایی ― مدل‌های یادگیری عمیق معمولا به داده‌های زیادی نیاز دارند تا بتوانند به خوبی آموزش ببینند. اغلب، استفاده از روش‌های داده‌افزایی برای گرفتن داده‌ی بیشتر از داده‌های موجود، مفید است. اصلی‌ترین آنها در جدول زیر به اختصار آمده‌اند. به عبارت دقیق‌تر، با در نظر گرفتن تصویر ورودی زیر، روش‌هایی که می‌توان اعمال کرد بدین شرح هستند: +
+ +
+ +**12. [Original, Flip, Rotation, Random crop]** + +
+[تصویر اصلی، قرینه، چرخش، برش تصادفی] +
+ +
+ + +**13. [Image without any modification, Flipped with respect to an axis for which the meaning of the image is preserved, Rotation with a slight angle, Simulates incorrect horizon calibration, Random focus on one part of the image, Several random crops can be done in a row]** + +
+[تصویر (آغازین) بدون هیچ‌گونه تغییری، قرینه‌شده نسبت به محوری که معنای (محتوای) تصویر را حفظ می‌کند، چرخش با زاویه‌ی اندک، خط افق نادرست را شبیه‌سازی می‌کند، روی ناحیه‌ای تصادفی از تصویر متمرکز می‌شود، چندین برش تصادفی را میتوان پشت‌سرهم انجام داد] +
+
+ + +**14. [Color shift, Noise addition, Information loss, Contrast change]** + +
+[تغییر رنگ، اضافه‌کردن نویز، هدررفت اطلاعات، تغییر تباین(کُنتراست)] +
+ +
+ + +**15. [Nuances of RGB is slightly changed, Captures noise that can occur with light exposure, Addition of noise, More tolerance to quality variation of inputs, Parts of image ignored, Mimics potential loss of parts of image, Luminosity changes, Controls difference in exposition due to time of day]** + +
+[عناصر RGB کمی تغییر کرده است، نویزی که در هنگام مواجه شدن با نور رخ می‌دهد را شبیه‌سازی می‌کند، افزودگی نویز، مقاومت بیشتر نسبت به تغییر کیفیت تصاویر ورودی، بخش‌هایی از تصویر نادیده گرفته می‌شوند، تقلید (شبیه سازی) هدررفت بالقوه بخش‌هایی از تصویر، تغییر درخشندگی، با توجه به زمان روز تفاوت نمایش (تصویر) را کنترل می‌کند] +
+ +
+ + +**16. Remark: data is usually augmented on the fly during training.** + +
+نکته: داده‌ها معمولا در فرآیند آموزش (به صورت درجا) افزایش پیدا می‌کنند. +
+ +
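Two of the augmentations in the table reduce to simple index manipulation, sketched here on a tiny 2D "image" given as a list of rows (an illustrative toy; real pipelines use image libraries):

```python
import random

# Horizontal flip: reverse each row of the image.
def flip_horizontal(img):
    return [list(reversed(row)) for row in img]

# Random crop: pick a random size x size window of the image.
def random_crop(img, size, rng=random.Random(0)):
    top = rng.randrange(len(img) - size + 1)
    left = rng.randrange(len(img[0]) - size + 1)
    return [row[left:left + size] for row in img[top:top + size]]

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
flipped = flip_horizontal(img)  # [[3, 2, 1], [6, 5, 4], [9, 8, 7]]
crop = random_crop(img, 2)      # some 2x2 window of img
```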
+ + +**17. Batch normalization ― It is a step of hyperparameter γ,β that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of that we want to correct to the batch, it is done as follows:** + +
+نرمال‌سازی دسته‌ای ― گامی با فراسنج‌های γ و β است که دسته‌ی {xi} را نرمال می‌کند. با نمایش میانگین و وردایی دسته‌ای که می‌خواهیم اصلاح کنیم با نمادهای μB و σ2B، این کار به صورت زیر انجام می‌شود:
+</div>
+ +
+ + +**18. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.** + +
+معمولا بعد از یک لایه‌ی تمام‌متصل یا لایه‌ی کانولوشنی و قبل از یک لایه‌ی غیرخطی اعمال می‌شود و امکان استفاده از نرخ یادگیری بالاتر را می‌دهد و همچنین باعث می‌شود که وابستگی شدید مدل به مقداردهی اولیه کاهش یابد. +
+ +
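As a minimal sketch of the batch-norm equations for a 1D batch {xi} (plain Python, values illustrative): normalize with the batch mean μB and variance σ²B, then scale and shift with γ and β.

```python
import math

# Batch normalization of a 1D batch: subtract the batch mean, divide by the
# batch standard deviation (with a small eps for stability), then apply γ, β.
def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-8):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mu) / math.sqrt(var + eps) + beta for x in xs]

out = batch_norm([1.0, 3.0])  # approximately [-1.0, 1.0] for γ=1, β=0
```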
+ + +**19. Training a neural network** + +
+آموزش یک شبکه‌ی عصبی +
+ +
+ + +**20. Definitions** + +
+تعاریف +
+ +
+ + +**21. Epoch ― In the context of training a model, epoch is a term used to refer to one iteration where the model sees the whole training set to update its weights.** + +
+تکرار (epoch) ― در مضمون آموزش یک مدل، تکرار اصطلاحی است که به یک دور کامل اشاره دارد که در آن مدل تمامی نمونه‌های مجموعه‌ی آموزش را برای به‌روزرسانی وزن‌های خود می‌بیند.
+</div>
+ +
+ + +**22. Mini-batch gradient descent ― During the training phase, updating weights is usually not based on the whole training set at once due to computation complexities or one data point due to noise issues. Instead, the update step is done on mini-batches, where the number of data points in a batch is a hyperparameter that we can tune.** + +
+گرادیان نزولی دسته‌ی‌کوچک ― در فاز آموزش، به‌روزرسانی وزن‌ها معمولا نه بر مبنای کل مجموعه‌ی آموزش (به علت پیچیدگی‌های محاسباتی) و نه بر مبنای یک نمونه‌ی داده (به علت مشکل نویز) انجام می‌شود. در عوض، گام به‌روزرسانی بر روی دسته‌های کوچک انجام می‌شود، که تعداد نمونه‌های داده در یک دسته فراسنجی است که می‌توان آن را تنظیم کرد.
+</div>
+ +
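Splitting the training set into mini-batches of a tunable size can be sketched as follows (illustrative names, not from the original cheatsheet):

```python
# Split a dataset into consecutive mini-batches; batch_size is the tunable
# hyperparameter mentioned above (the last batch may be smaller).
def mini_batches(data, batch_size):
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

batches = mini_batches(list(range(10)), 4)  # batch sizes: 4, 4, 2
```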
+ + +**23. Loss function ― In order to quantify how a given model performs, the loss function L is usually used to evaluate to what extent the actual outputs y are correctly predicted by the model outputs z.** + +
+تابع خطا ― به منظور سنجش کارایی یک مدل مفروض، معمولا از تابع خطای L برای ارزیابی اینکه تا چه حد خروجی حقیقی y به شکل صحیح توسط خروجی z مدل پیش‌بینی شده‌اند، استفاده می‌شود. +
+ +
+ + +**24. Cross-entropy loss ― In the context of binary classification in neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** + +
+خطای آنتروپی متقاطع – در مضمون دسته‌بندی دودویی در شبکه‌های عصبی، عموما از تابع خطای آنتروپی متقاطع L(z,y) استفاده و به صورت زیر تعریف میشود: +
+ +
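The formula above transcribes directly to code, with z the model output in (0, 1) and y the true label:

```python
import math

# Binary cross-entropy: L(z, y) = -[y log(z) + (1 - y) log(1 - z)].
def cross_entropy(z, y):
    return -(y * math.log(z) + (1 - y) * math.log(1 - z))

loss = cross_entropy(0.9, 1)  # small loss for a confident, correct prediction
```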
+ + +**25. Finding optimal weights** + +
+یافتن وزن‌های بهینه +
+ +
+ + +**26. Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to each weight w is computed using the chain rule.** + +
+انتشار معکوس ― انتشار معکوس روشی برای به‌روزرسانی وزن‌ها با توجه به خروجی واقعی و خروجی مورد انتظار در شبکه‌ی عصبی است. مشتق نسبت به هر وزن w توسط قاعده‌ی زنجیری محاسبه می‌شود. +
+ +
+ + +**27. Using this method, each weight is updated with the rule:** + +
+با استفاده از این روش، هر وزن با قانون زیر به‌روزرسانی می‌شود: +
+ +
+ + +**28. Updating weights ― In a neural network, weights are updated as follows:** + +
+به‌روزرسانی وزن‌ها – در یک شبکه‌ی عصبی، وزن‌ها به شکل زیر به‌روزرسانی می‌شوند: +
+ +
+ + +**29. [Step 1: Take a batch of training data and perform forward propagation to compute the loss, Step 2: Backpropagate the loss to get the gradient of the loss with respect to each weight, Step 3: Use the gradients to update the weights of the network.]** + +
+[گام 1: یک دسته از داده‌های آموزشی گرفته شده و با استفاده از انتشار مستقیم خطا محاسبه می‌شود، گام 2: با استفاده از انتشار معکوس مشتق خطا نسبت به هر وزن محاسبه می‌شود، گام 3: با استفاده از مشتقات، وزن‌های شبکه به‌روزرسانی می‌شوند.] +
+ +
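The three steps above can be sketched on a one-parameter toy model z = w·x with squared loss L = (z − y)² (names and values are illustrative, not from the original cheatsheet):

```python
# One training step on a single example for the toy model z = w * x.
def train_step(w, x, y, alpha=0.1):
    z = w * x                  # Step 1: forward propagation, compute the prediction
    grad = 2 * (z - y) * x     # Step 2: backpropagate, dL/dw via the chain rule
    return w - alpha * grad    # Step 3: gradient-based weight update

w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, y=2.0)
# w converges toward the value 2.0 that makes z match y
```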
+ + +**30. [Forward propagation, Backpropagation, Weights update]** + +
+[انتشار مستقیم، انتشار معکوس، به‌روزرسانی وزنها] +
+ +
+ + +**31. Parameter tuning** + +
+تنظیم فراسنج +
+ +
+ + +**32. Weights initialization** + +
+مقداردهی اولیه‌ی وزن‌ها +
+ +
+ + +**33. Xavier initialization ― Instead of initializing the weights in a purely random manner, Xavier initialization enables to have initial weights that take into account characteristics that are unique to the architecture.** + +
+مقداردهی‌ اولیه Xavier ― به‌جای مقداردهی اولیه‌ی وزن‌ها به شیوه‌ی کاملا تصادفی، مقداردهی اولیه Xavier این امکان را فراهم می‌سازد تا وزن‌های اولیه‌ای داشته باشیم که ویژگی‌های منحصر به فرد معماری را به حساب می‌آورند. +
+ +
+ + +**34. Transfer learning ― Training a deep learning model requires a lot of data and more importantly a lot of time. It is often useful to take advantage of pre-trained weights on huge datasets that took days/weeks to train, and leverage it towards our use case. Depending on how much data we have at hand, here are the different ways to leverage this:** + +
+یادگیری انتقالی ― آموزش یک مدل یادگیری عمیق به داده‌های زیاد و مهم‌تر از آن به زمان زیادی احتیاج دارد. اغلب بهتر است که از وزن‌های پیش‌آموخته روی پایگاه داده‌های عظیم که آموزش بر روی آن‌ها روزها یا هفته‌ها طول می‌کشند استفاده کرد، و آن‌ها را برای موارد استفاده‌ی خود به کار برد. بسته به میزان داده‌هایی که در اختیار داریم، در زیر روش‌های مختلفی که می‌توان از آنها بهره جست آورده شده‌اند: +
+ +
+ + +**35. [Training size, Illustration, Explanation]** + +
+[تعداد داده‌های آموزش، نگاره، توضیح] +
+ +
+ + +**36. [Small, Medium, Large]** + +
+[کوچک، متوسط، بزرگ] +
+ +
+ + +**37. [Freezes all layers, trains weights on softmax, Freezes most layers, trains weights on last layers and softmax, Trains weights on layers and softmax by initializing weights on pre-trained ones]** + +
+[منجمد کردن تمامی لایه‌ها، آموزش وزن‌ها در بیشینه‌ی هموار، منجمد کردن اکثر لایه‌ها، آموزش وزن‌ها در لایه‌های آخر و بیشینه‌ی هموار، آموزش وزن‌ها در (تمامی) لایه‌ها و بیشینه‌ی هموار با مقداردهی‌اولیه‌ی وزن‌ها بر طبق مقادیر پیش‌آموخته] +
+ +
+ + +**38. Optimizing convergence** + +
+بهینه‌سازی همگرایی +
+ +
+ + +**39. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. It can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate. +** + +
+نرخ یادگیری – نرخ یادگیری اغلب با نماد α و گاهی اوقات با نماد η نمایش داده می‌شود و بیانگر سرعت (گام) به‌روزرسانی وزن‌ها است که می‌تواند مقداری ثابت داشته باشد یا به صورت سازگارشونده تغییر کند. محبوب‌ترین روش حال حاضر Adam نام دارد؛ روشی که نرخ یادگیری را در حین فرآیند آموزش تنظیم می‌کند.
+</div>
+ +
+ + +**40. Adaptive learning rates ― Letting the learning rate vary when training a model can reduce the training time and improve the numerical optimal solution. While Adam optimizer is the most commonly used technique, others can also be useful. They are summed up in the table below:** + +
+نرخ‌های یادگیری سازگارشونده ― داشتن نرخ یادگیری متغیر در فرآیند آموزش یک مدل، می‌تواند زمان آموزش را کاهش دهد و راه‌حل بهینه عددی را بهبود ببخشد. با آنکه بهینه‌ساز Adam محبوب‌ترین روش مورد استفاده است، دیگر روش‌ها نیز می‌توانند مفید باشند. این روش‌ها در جدول زیر به اختصار آمده‌اند: +
+ +
+ + +**41. [Method, Explanation, Update of w, Update of b]** + +
+[روش، توضیح، به‌روزرسانی w، به‌روزرسانی b] +
+ +
+ + +**42. [Momentum, Dampens oscillations, Improvement to SGD, 2 parameters to tune]** + +
+[تکانه، نوسانات را تعدیل می‌کند، بهبود SGD، دو فراسنج که نیاز به تنظیم دارند]
+</div>
+ +
+ + +**43. [RMSprop, Root Mean Square propagation, Speeds up learning algorithm by controlling oscillations]** + +
+[RMSprop، انتشار جذر میانگین مربعات، سرعت بخشیدن به الگوریتم یادگیری با کنترل نوسانات] +
+ +
+ + +**44. [Adam, Adaptive Moment estimation, Most popular method, 4 parameters to tune]** + +
+[Adam، تخمین سازگارشونده ممان، محبوب‌ترین روش، چهار فراسنج که نیاز به تنظیم دارند] +
+ +
+ + +**45. Remark: other methods include Adadelta, Adagrad and SGD.** + +
+نکته: سایر متدها شامل Adadelta، Adagrad و SGD هستند. +
+ +
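A scalar sketch of the Adam update from the table above (hyperparameter values are the common defaults; the setup is a hypothetical toy, not the original's code): m and v track exponential moving averages of the gradient and squared gradient, bias-corrected at step t.

```python
import math

# One Adam step for a single scalar parameter w with gradient dw.
def adam_step(w, dw, m, v, t, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * dw            # first-moment estimate
    v = beta2 * v + (1 - beta2) * dw ** 2       # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    return w - alpha * m_hat / (math.sqrt(v_hat) + eps), m, v

w, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    w, m, v = adam_step(w, 2 * w, m, v, t)      # gradient of L(w) = w^2
# w is driven toward the minimizer 0
```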
+ + +**46. Regularization** + +
+نظام‌بخشی +
+ +
+ + +**47. Dropout ― Dropout is a technique used in neural networks to prevent overfitting the training data by dropping out neurons with probability p>0. It forces the model to avoid relying too much on particular sets of features.** + +
+برون‌اندازی – برون‌اندازی روشی است که در شبکه‌های عصبی برای جلوگیری از بیش‌برارزش بر روی داده‌های آموزشی با حذف تصادفی نورون‌ها با احتمال p>0 استفاده می‌شود. این روش مدل را مجبور می‌کند تا از تکیه کردن بیش‌از‌حد بر روی مجموعه خاصی از ویژگی‌ها خودداری کند. +
+ +
+ + +**48. Remark: most deep learning frameworks parametrize dropout through the 'keep' parameter 1−p.** + +
+نکته: بیشتر کتابخانه‌های یادگیری عمیق برون‌اندازی را با استفاده از فراسنج 'نگه‌داشتن' 1-p کنترل می‌کنند. +
+ +
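Inverted dropout, which uses the 'keep' parameter 1−p directly, can be sketched as follows (illustrative, not from the original cheatsheet): keep each activation with probability keep and rescale by 1/keep so the expected activation is unchanged.

```python
import random

# Inverted dropout on a list of activations.
def dropout(activations, keep=0.8, rng=random.Random(0)):
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

out = dropout([1.0] * 1000)  # roughly 80% of entries survive, scaled to 1/0.8
```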
+ + +**49. Weight regularization ― In order to make sure that the weights are not too large and that the model is not overfitting the training set, regularization techniques are usually performed on the model weights. The main ones are summed up in the table below:** + +
+نظام‌بخشی وزن – برای اطمینان از اینکه (مقادیر) وزن‌ها بیش‌ازحد بزرگ نیستند و مدل به مجموعه‌ی آموزش بیش‌برارزش نمی‌کند، روشهای نظام‌بخشی معمولا بر روی وزن‌های مدل اجرا می‌شوند. اصلی‌ترین آنها در جدول زیر به اختصار آمده‌اند: +
+ +
+ + +**50. [LASSO, Ridge, Elastic Net]** + +
+[LASSO, Ridge, Elastic Net] +
+
+
+**50 bis. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]**
+
+<div dir="rtl">
+[ضرایب را تا صفر کاهش می‌دهد، برای انتخاب متغیر مناسب است، ضرایب را کوچک‌تر می‌کند، بین انتخاب متغیر و ضرایب کوچک مصالحه می‌کند]
+</div>
+ +
+ +**51. Early stopping ― This regularization technique stops the training process as soon as the validation loss reaches a plateau or starts to increase.** + +
+توقف زودهنگام ― این روش نظام‌بخشی، فرآیند آموزش را به محض اینکه خطای اعتبارسنجی ثابت می‌شود یا شروع به افزایش پیدا کند، متوقف می‌کند. +
+ +
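A common way to implement this is with a patience counter (a hypothetical sketch; the threshold and values are illustrative): training stops once the validation loss has not improved for `patience` consecutive epochs.

```python
# Return the epoch index at which early stopping halts training.
def early_stop_epoch(val_losses, patience=2):
    best, waited = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, waited = loss, 0      # improvement: reset the counter
        else:
            waited += 1                 # no improvement this epoch
            if waited >= patience:
                return epoch
    return len(val_losses) - 1

stop = early_stop_epoch([1.0, 0.8, 0.7, 0.72, 0.75, 0.80])  # stops at epoch 4
```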
+ + +**52. [Error, Validation, Training, early stopping, Epochs]** + +
+[خطا، اعتبارسنجی، آموزش، توقف زودهنگام، تکرارها] +
+ +
+ + +**53. Good practices** + +
+عادت‌های خوب +
+ +
+ + +**54. Overfitting small batch ― When debugging a model, it is often useful to make quick tests to see if there is any major issue with the architecture of the model itself. In particular, in order to make sure that the model can be properly trained, a mini-batch is passed inside the network to see if it can overfit on it. If it cannot, it means that the model is either too complex or not complex enough to even overfit on a small batch, let alone a normal-sized training set.** + +
+بیش‌برارزش روی دسته‌ی ‌کوچک ― هنگام اشکال‌زدایی یک مدل، اغلب مفید است که یک سری آزمایش‌های سریع برای اطمینان از اینکه هیچ مشکل عمده‌ای در معماری مدل وجود ندارد، انجام شود. به طورخاص، برای اطمینان از اینکه مدل می‌تواند به شکل صحیح آموزش ببیند، یک دسته‌ی‌ کوچک (از داده‌ها) به شبکه داده می‌شود تا دریابیم که مدل می‌تواند به آنها بیش‌برارزش کند. اگر نتواند، بدین معناست که مدل از پیچیدگی بالایی برخوردار است یا پیچیدگی کافی برای بیش‌برارزش شدن روی دسته‌ی‌ کوچک ندارد، چه برسد به یک مجموعه آموزشی با اندازه عادی. +
+ +
+ + +**55. Gradient checking ― Gradient checking is a method used during the implementation of the backward pass of a neural network. It compares the value of the analytical gradient to the numerical gradient at given points and plays the role of a sanity-check for correctness.** + +
+وارسی گرادیان – وارسی گرادیان روشی است که در طول پیاده‌سازی گذر روبه‌عقبِ یک شبکه‌ی عصبی استفاده می‌شود. این روش مقدار گرادیان تحلیلی را با گرادیان عددی در نقطه‌های مفروض مقایسه می‌کند و نقش بررسی درستی را ایفا میکند. +
+ +
+ + +**56. [Type, Numerical gradient, Analytical gradient]** + +
+[نوع، گرادیان عددی، گرادیان تحلیلی] +
+ +
+ + +**57. [Formula, Comments]** + +
+[فرمول، توضیحات] +
+ +
+ + +**58. [Expensive; loss has to be computed two times per dimension, Used to verify correctness of analytical implementation, Trade-off in choosing h not too small (numerical instability) nor too large (poor gradient approximation)]** + +
+[پرهزینه (از نظر محاسباتی)، خطا باید دو بار برای هر بُعد محاسبه شود، برای تایید صحت پیاده‌سازی تحلیلی استفاده می‌شود، مصالحه در انتخاب h: نه بسیار کوچک (ناپایداری عددی) و نه خیلی بزرگ (تخمین گرادیان ضعیف) باشد] +
+ +
+ + +**59. ['Exact' result, Direct computation, Used in the final implementation]** + +
+[نتیجه 'عینی'، محاسبه مستقیم، در پیاده‌سازی نهایی استفاده می‌شود] +
+ +
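The comparison in the table can be sketched on a function whose analytical gradient is known, here f(w) = w² with gradient 2w (a toy check, not the original's code):

```python
# Centered-difference numerical gradient: (f(w + h) - f(w - h)) / (2h).
def numerical_grad(f, w, h=1e-5):
    return (f(w + h) - f(w - h)) / (2 * h)

f = lambda w: w ** 2
num = numerical_grad(f, 3.0)  # close to the analytical gradient 2 * 3.0 = 6.0
```

Choosing h embodies the trade-off from the table: too small causes numerical instability, too large a poor approximation.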
+ + +**60. The Deep Learning cheatsheets are now available in [target language].** + +
+راهنمای یادگیری عمیق هم اکنون به زبان [فارسی] در دسترس است. +
+ +**61. Original authors** + +
+نویسندگان اصلی +
+ +
+ +**62.Translated by X, Y and Z** + +
+ترجمه شده توسط X،Y و Z +
+ +
+ +**63.Reviewed by X, Y and Z** + +
+بازبینی شده توسط X، Y و Z
+</div>
+ +
+ +**64.View PDF version on GitHub** + +
+نسخه پی‌دی‌اف را در گیت‌هاب ببینید +
+ +
+ +**65.By X and Y** + +
+توسط X و Y +
+ +
diff --git a/fa/cs-230-recurrent-neural-networks.md b/fa/cs-230-recurrent-neural-networks.md new file mode 100644 index 000000000..22a1e2106 --- /dev/null +++ b/fa/cs-230-recurrent-neural-networks.md @@ -0,0 +1,868 @@ +**Recurrent Neural Networks translation** + +
+ +**1. Recurrent Neural Networks cheatsheet** + +
+راهنمای کوتاه شبکه‌های عصبی برگشتی +
+ +
+ + +**2. CS 230 - Deep Learning** + +
+کلاس CS 230 - یادگیری عمیق +
+ +
+ + +**3. [Overview, Architecture structure, Applications of RNNs, Loss function, Backpropagation]** + +
+[نمای کلی، ساختار معماری، کاربردهایRNN ها، تابع خطا، انتشار معکوس] +
+ +
+ + +**4. [Handling long term dependencies, Common activation functions, Vanishing/exploding gradient, Gradient clipping, GRU/LSTM, Types of gates, Bidirectional RNN, Deep RNN]** + +
+[کنترل وابستگی‌های بلندمدت، توابع فعال‌سازی رایج، مشتق صفرشونده/منفجرشونده، برش گرادیان، GRU/LSTM، انواع دروازه، RNN دوسویه، RNN عمیق] +
+ +
+ + +**5. [Learning word representation, Notations, Embedding matrix, Word2vec, Skip-gram, Negative sampling, GloVe]** + +
+[یادگیری بازنمائی کلمه، نمادها، ماتریس تعبیه، Word2vec، skip-gram، نمونه‌برداری منفی، GloVe]
+</div>
+ +
+ + +**6. [Comparing words, Cosine similarity, t-SNE]** + +
+[مقایسه‌ی کلمات، شباهت کسینوسی، t-SNE] +
+ +
+ + +**7. [Language model, n-gram, Perplexity]** + +
+[مدل زبانی، ان‌گرام، سرگشتگی]
+</div>
+ +
+ + +**8. [Machine translation, Beam search, Length normalization, Error analysis, Bleu score]** + +
+[ترجمه‌ی ماشینی، جستجوی پرتو، نرمال‌سازی طول، تحلیل خطا، امتیاز Bleu] +
+ +
+ + +**9. [Attention, Attention model, Attention weights]** + +
+[ژرف‌نگری، مدل ژرف‌نگری، وزن‌های ژرف‌نگری] +
+ +
+ + +**10. Overview** + +
+نمای کلی +
+ +
+ + +**11. Architecture of a traditional RNN ― Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. They are typically as follows:** + +
+معماری RNN سنتی ــ شبکه‌های عصبی برگشتی که همچنین با عنوان RNN شناخته می‌شوند، دسته‌ای از شبکه‌های عصبی‌اند که این امکان را می‌دهند خروجی‌های قبلی به‌عنوان ورودی استفاده شوند و در عین حال حالت‌های نهان داشته باشند. این شبکه‌ها به‌طور معمول عبارت‌اند از:
+ +
+ + +**12. For each timestep t, the activation a and the output y are expressed as follows:** + +
+به‌ازای هر گام زمانی t، فعال‌سازی a و خروجی y به‌صورت زیر بیان می‌شود: +
+ +
+ + +**13. and** + +
+و +
+ +
+ + +**14. where Wax,Waa,Wya,ba,by are coefficients that are shared temporally and g1,g2 activation functions.** + +
+که در آن Wax,Waa,Wya,ba,by ضرایبی‌اند که در راستای زمان به ‌اشتراک گذاشته می‌شوند و g1، g2 توابع فعال‌سازی‌ هستند. +
+ +
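A scalar sketch of one timestep may help (real layers use the matrices Wax, Waa, Wya; the coefficient values here are hypothetical): a⟨t⟩ = g1(waa·a⟨t−1⟩ + wax·x⟨t⟩ + ba), y⟨t⟩ = g2(wya·a⟨t⟩ + by).

```python
import math

# One RNN timestep with scalar state and input.
def rnn_step(a_prev, x_t, waa=0.5, wax=1.0, wya=2.0, ba=0.0, by=0.0):
    a_t = math.tanh(waa * a_prev + wax * x_t + ba)  # g1 = tanh
    y_t = wya * a_t + by                            # g2 = identity, for simplicity
    return a_t, y_t

a = 0.0
for x in [1.0, 0.5, -1.0]:  # the same coefficients are shared across timesteps
    a, y = rnn_step(a, x)
```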
+ + +**15. The pros and cons of a typical RNN architecture are summed up in the table below:** + +
+مزایا و معایب معماری RNN به‌صورت خلاصه در جدول زیر آورده شده‌اند: +
+ +
+ + +**16. [Advantages, Possibility of processing input of any length, Model size not increasing with size of input, Computation takes into account historical information, Weights are shared across time]** + +
+[مزایا، امکان پردازش ورودی با هر طولی، اندازه‌ی مدل مطابق با اندازه‌ی ورودی افزایش نمی‌یابد، اطلاعات (زمان‌های) گذشته در محاسبه در نظر گرفته می‌شود، وزن‌ها در طول زمان به‌ اشتراک گذاشته می‌شوند]
+</div>
+ +
+ + +**17. [Drawbacks, Computation being slow, Difficulty of accessing information from a long time ago, Cannot consider any future input for the current state]** + +
+[معایب، محاسبه کند می‌شود، دشوار بودن دسترسی به اطلاعات مدت‌ها پیش، در نظر نگرفتن ورودی‌های بعدی در وضعیت جاری] +
+ +
+ + +**18. Applications of RNNs ― RNN models are mostly used in the fields of natural language processing and speech recognition. The different applications are summed up in the table below:** + +
+کاربردهایRNN ها ــ مدل‌های RNN غالباً در حوزه‌ی پردازش زبان طبیعی و حوزه‌ی بازشناسایی گفتار به کار می‌روند. کاربردهای مختلف آنها به صورت خلاصه در جدول زیر آورده شده‌اند: +
+ +
+ + +**19. [Type of RNN, Illustration, Example]** + +
+[نوع RNN، نگاره، مثال] +
+ +
+ + +**20. [One-to-one, One-to-many, Many-to-one, Many-to-many]** + +
+[یک به یک، یک به چند، چند به یک، چند به چند] +
+ +
+ + +**21. [Traditional neural network, Music generation, Sentiment classification, Name entity recognition, Machine translation]** + +
+[شبکه‌ی عصبی سنتی، تولید موسیقی، دسته‌بندی حالت احساسی، بازشناسایی موجودیت اسمی، ترجمه ماشینی] +
+ +
+ + +**22. Loss function ― In the case of a recurrent neural network, the loss function L of all time steps is defined based on the loss at every time step as follows:** + +
+تابع خطا ــ در شبکه عصبی برگشتی، تابع خطا L برای همه‌ی گام‌های زمانی براساس خطا در هر گام به صورت زیر محاسبه می‌شود: +
+ +
+ + +**23. Backpropagation through time ― Backpropagation is done at each point in time. At timestep T, the derivative of the loss L with respect to weight matrix W is expressed as follows:** + +
+انتشار معکوس در طول زمان ـــ انتشار معکوس در هر نقطه از زمان انجام می‌شود. در گام زمانی T، مشتق خطا L با توجه به ماتریس وزن W به‌صورت زیر بیان می‌شود: +
+ +
+ + +**24. Handling long term dependencies** + +
+کنترل وابستگی‌های بلندمدت +
+ +
+ + +**25. Commonly used activation functions ― The most common activation functions used in RNN modules are described below:** + +
+توابع فعال‌سازی پرکاربرد ـــ رایج‌ترین توابع فعال‌سازی به‌کاررفته در ماژول‌های RNN به شرح زیر است: +
+ +
+ + +**26. [Sigmoid, Tanh, RELU]** + +
+[سیگموید، تانژانت هذلولوی، یکسو ساز] +
+ +
+ + +**27. Vanishing/exploding gradient ― The vanishing and exploding gradient phenomena are often encountered in the context of RNNs. The reason why they happen is that it is difficult to capture long term dependencies because of multiplicative gradient that can be exponentially decreasing/increasing with respect to the number of layers.** + +
+مشتق صفرشونده/منفجرشونده ― پدیده‌های مشتق صفرشونده و منفجرشونده غالبا در بستر RNNها رخ می‌دهند. علت رخ‌دادن آنها این است که به دلیل گرادیان ضربی، که می‌تواند نسبت به تعداد لایه‌ها به صورت نمایی کاهش/افزایش یابد، به‌دست آوردن وابستگی‌های بلندمدت دشوار است.
+</div>
+ + +
+ + +**28. Gradient clipping ― It is a technique used to cope with the exploding gradient problem sometimes encountered when performing backpropagation. By capping the maximum value for the gradient, this phenomenon is controlled in practice.** + +
+برش گرادیان ــ یک روش برای مقابله با انفجار گرادیان است که گاهی اوقات هنگام انتشار معکوس رخ می‌دهد. با تعیین حداکثر مقدار برای گرادیان، این پدیده در عمل کنترل می‌شود. +
+ +
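Capping the gradient's norm can be sketched as follows (a minimal illustration; the cap value is hypothetical):

```python
import math

# Rescale the gradient vector whenever its L2 norm exceeds max_norm.
def clip_gradient(grad, max_norm=1.0):
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > max_norm:
        return [g * max_norm / norm for g in grad]
    return grad

clipped = clip_gradient([3.0, 4.0])  # norm 5.0 is rescaled down to norm 1.0
```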
+ + +**29. clipped** + +
+برش ‌داده‌شده +
+ +
+ + +**30. Types of gates ― In order to remedy the vanishing gradient problem, specific gates are used in some types of RNNs and usually have a well-defined purpose. They are usually noted Γ and are equal to:** + +
+انواع دروازه ـــ برای حل مشکل مشتق صفرشونده/منفجرشونده، در برخی از انواع RNN ها، دروازه‌های خاصی استفاده می‌شود و این دروازه‌ها عموما هدف معینی دارند. این دروازه‌ها عموما با نمادΓ نمایش داده می‌شوند و برابرند با: +
+ +
+ + +**31. where W,U,b are coefficients specific to the gate and σ is the sigmoid function. The main ones are summed up in the table below:** + +
+که W,U,b ضرایب خاص دروازه و σ تابع سیگموید است. دروازه‌های اصلی به صورت خلاصه در جدول زیر آورده شده‌اند: +
+ +
+ + +**32. [Type of gate, Role, Used in]** + +
+[نوع دروازه، نقش، به‌کار رفته در] +
+ +
+ + +**33. [Update gate, Relevance gate, Forget gate, Output gate]** + +
+[دروازه‌ی به‌روزرسانی، دروازه‌ی ربط (میزان اهمیت)، دروازه‌ی فراموشی، دروازه‌ی خروجی]
+</div>
+ +
+ + +**34. [How much past should matter now?, Drop previous information?, Erase a cell or not?, How much to reveal of a cell?]** + +
+[چه میزان از گذشته اکنون اهمیت دارد؟ اطلاعات گذشته رها شوند؟ سلول حذف شود یا خیر؟ چه میزان از (محتوای) سلول آشکار شود؟]
+</div>
+ +
+ + +**35. [LSTM, GRU]** + +
+[LSTM، GRU] +
+ +
+ + +**36. GRU/LSTM ― Gated Recurrent Unit (GRU) and Long Short-Term Memory units (LSTM) deal with the vanishing gradient problem encountered by traditional RNNs, with LSTM being a generalization of GRU. Below is a table summing up the characterizing equations of each architecture:** + +
+GRU/LSTM ـــ واحد برگشتی دروازه‌دار (GRU) و واحدهای حافظه‌ی کوتاه‌-مدت طولانی (LSTM) مشکل مشتق صفرشونده که در RNNهای سنتی رخ می‌دهد، را بر طرف می‌کنند، درحالی‌که LSTM شکل عمومی‌تر GRU است. در جدول زیر، معادله‌های توصیف‌کنندهٔ هر معماری به صورت خلاصه آورده شده‌اند: +
+ +
+ + +**37. [Characterization, Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Dependencies]** + +
+[توصیف، واحد برگشتی دروازه‌دار (GRU)، حافظه‌ی کوتاه-مدت طولانی (LSTM)، وابستگی‌ها]
+</div>
+
+ + +**38. Remark: the sign ⋆ denotes the element-wise multiplication between two vectors.** + +
+نکته: نشانه‌ی * نمایان‌گر ضرب عنصربه‌عنصر دو بردار است. +
+ +
+ + +**39. Variants of RNNs ― The table below sums up the other commonly used RNN architectures:** + +
+انواع RNN ها ــ جدول زیر سایر معماری‌های پرکاربرد RNN را به صورت خلاصه نشان می‌دهد. +
+ +
+ + +**40. [Bidirectional (BRNN), Deep (DRNN)]** + +
+[دوسویه (BRNN)، عمیق (DRNN)] +
+ +
+ + +**41. Learning word representation** + +
+یادگیری بازنمائی کلمه +
+ +
+ + +**42. In this section, we note V the vocabulary and |V| its size.** + +
+در این بخش، برای اشاره به واژگان از V و برای اشاره به اندازه‌ی آن از |V| استفاده می‌کنیم. +
+ +
+ + +**43. Motivation and notations** + +
+انگیزه و نمادها +
+ +
+ + +**44. Representation techniques ― The two main ways of representing words are summed up in the table below:** + +
+روش‌های بازنمائی ― دو روش اصلی برای بازنمائی کلمات به صورت خلاصه در جدول زیر آورده شده‌اند: +
+ +
+ + +**45. [1-hot representation, Word embedding]** + +
+[بازنمائی تک‌فعال، تعبیه‌ی کلمه] +
+ +
+ + +**46. [teddy bear, book, soft]** + +
+[خرس تدی، کتاب، نرم] +
+ +
+ + +**47. [Noted ow, Naive approach, no similarity information, Noted ew, Takes into account words similarity]** + +
+[نشان داده شده با نماد ow، رویکرد ساده، فاقد اطلاعات تشابه، نشان داده شده با نماد ew، به‌حساب‌آوردن تشابه کلمات] +
+ +
+ + +**48. Embedding matrix ― For a given word w, the embedding matrix E is a matrix that maps its 1-hot representation ow to its embedding ew as follows:** + +
+ماتریس تعبیه ـــ به‌ ازای کلمه‌ی مفروض w ، ماتریس تعبیه E ماتریسی است که بازنمائی تک‌فعال ow را به نمایش تعبیه‌ی ew نگاشت می‌دهد: +
+ +
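With a toy 2-word vocabulary (hypothetical values), multiplying E by the 1-hot vector ow just selects one word's embedding, which in practice is implemented as an index lookup:

```python
# Rows of E are the embeddings of the vocabulary words.
E = [[0.1, 0.2, 0.3],   # embedding e of word 0
     [0.4, 0.5, 0.6]]   # embedding e of word 1

# Map a 1-hot vector ow to its embedding ew = E^T ow.
def embed(one_hot):
    return [sum(E[w][d] * one_hot[w] for w in range(len(E)))
            for d in range(len(E[0]))]

ew = embed([0, 1])  # the embedding of word 1: [0.4, 0.5, 0.6]
```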
+ + +**49. Remark: learning the embedding matrix can be done using target/context likelihood models.** + +
+نکته: یادگیری ماتریس تعبیه را می‌توان با استفاده از مدل‌های درست‌نمایی هدف/متن(زمینه) انجام داد. +
+ +
+ + +**50. Word embeddings** + +
+(نمایش) تعبیه‌ی کلمه +
+ +
+ + +**51. Word2vec ― Word2vec is a framework aimed at learning word embeddings by estimating the likelihood that a given word is surrounded by other words. Popular models include skip-gram, negative sampling and CBOW.** + +
+Word2vec ― Word2vec چهارچوبی است که با محاسبه‌ی احتمال قرار گرفتن یک کلمه‌ی خاص در میان سایر کلمات، تعبیه‌های کلمه را یاد می‌گیرد. مدل‌های متداول شامل Skip-gram، نمونه‌برداری منفی و CBOW هستند. +
+ +
+ + +**52. [A cute teddy bear is reading, teddy bear, soft, Persian poetry, art]** + +
+[یک خرس تدی بامزه در حال مطالعه است، خرس تدی، نرم، شعر فارسی، هنر] +
+ +
+ + +**53. [Train network on proxy task, Extract high-level representation, Compute word embeddings]** + +
+[آموزش شبکه بر روی مسئله‌ی جایگزین، استخراج بازنمائی سطح بالا، محاسبه‌ی نمایش تعبیه‌ی کلمات] +
+ +
+ + +**54. Skip-gram ― The skip-gram word2vec model is a supervised learning task that learns word embeddings by assessing the likelihood of any given target word t happening with a context word c. By noting θt a parameter associated with t, the probability P(t|c) is given by:** + +
+Skip-gram ــ مدل اسکیپ‌گرام word2vec یک وظیفه‌ی یادگیری بانظارت است که تعبیه‌های کلمه را با ارزیابی احتمال وقوع کلمه‌ی t هدف با کلمه‌ی زمینه c یاد می‌گیرد. با توجه به اینکه نماد θt پارامتری مرتبط با t است، احتمال P(t|c) به‌صورت زیر به‌دست می‌آید: +
+ +
+ + +**55. Remark: summing over the whole vocabulary in the denominator of the softmax part makes this model computationally expensive. CBOW is another word2vec model using the surrounding words to predict a given word.** + +
+نکته: جمع روی کل واژگان در مخرجِ بخش بیشینه‌ی‌هموار باعث می‌شود که این مدل از لحاظ محاسباتی گران شود. مدل CBOW مدل word2vec دیگری است که از کلمات اطراف برای پیش‌بینی یک کلمه‌ی مفروض استفاده می‌کند.
+</div>
+ +
+ + +**56. Negative sampling ― It is a set of binary classifiers using logistic regressions that aim at assessing how a given context and a given target words are likely to appear simultaneously, with the models being trained on sets of k negative examples and 1 positive example. Given a context word c and a target word t, the prediction is expressed by:** + +
+نمونه‌برداری منفی ― مجموعه‌ای از دسته‌بندهای دودویی با استفاده از رگرسیون لجستیک است که مقصودش ارزیابی احتمال ظهور همزمان کلمه‌ی مفروض هدف و کلمه‌ی مفروض زمینه است، که در اینجا مدل‌ها بر اساس مجموعه‌های شامل k مثال منفی و 1 مثال مثبت آموزش می‌بینند. با توجه به کلمه‌ی زمینه‌ی c و کلمه‌ی هدف t، پیش‌بینی به صورت زیر بیان می‌شود:
+</div>
+ +
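The negative-sampling prediction of section 56 reduces to a sigmoid of the score θt⋅ec. A sketch with made-up vectors:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neg_sampling_pred(theta_t, e_c):
    """P(y=1 | c, t) = sigma(theta_t . e_c): probability that t and c co-occur."""
    return sigmoid(sum(a * b for a, b in zip(theta_t, e_c)))

# Made-up vectors: an aligned (positive) pair and an opposed (negative) pair.
p_pos = neg_sampling_pred([1.0, 2.0], [0.5, 0.5])
p_neg = neg_sampling_pred([-1.0, -2.0], [0.5, 0.5])
```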
+ + +**57. Remark: this method is less computationally expensive than the skip-gram model.** + +
+نکته: این روش از لحاظ محاسباتی ارزان‌تر از مدل skip-gram است. +
+ +
+
+
+**57bis. GloVe ― The GloVe model, short for global vectors for word representation, is a word embedding technique that uses a co-occurrence matrix X where each Xi,j denotes the number of times that a target i occurred with a context j. Its cost function J is as follows:**
+
+<br>
+GloVe ― مدل GloVe، مخفف بردارهای سراسری بازنمائی کلمه، یکی از روش‌های تعبیه کلمه است که از ماتریس هم‌رویدادی X استفاده می‌کند که در آن هر Xi,j به تعداد دفعاتی اشاره دارد که هدف i با زمینهٔ j رخ می‌دهد. تابع هزینه‌ی J به‌صورت زیر است: +
+ +
+ + +**58. where f is a weighting function such that Xi,j=0⟹f(Xi,j)=0. +Given the symmetry that e and θ play in this model, the final word embedding e(final)w is given by:** + +
+که در آن f تابع وزن‌دهی است، به‌طوری که Xi,j=0⟹f(Xi,j)=0. با توجه به تقارنی که e و θ در این مدل دارند، نمایش تعبیه‌ی نهایی کلمه‌ e(final)w به صورت زیر محاسبه می‌شود: +
+ +
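Sections 57bis-58 can be sketched as follows. The weighting function f below, (x/xmax)^α capped at 1 with xmax=100 and α=0.75, is the common GloVe choice and an assumption on top of the text, which only requires f(0)=0; the final embedding uses the e/θ symmetry:

```python
import math

def glove_weight(x, x_max=100.0, alpha=0.75):
    """Weighting f with f(0) = 0, capped at 1 (standard GloVe choice, assumed here)."""
    return (x / x_max) ** alpha if x < x_max else 1.0

def glove_cost(X, theta, e, b, b_e):
    """J = 1/2 * sum_ij f(Xij) * (theta_i . e_j + b_i + b'_j - log Xij)^2."""
    total = 0.0
    for i, row in enumerate(X):
        for j, x in enumerate(row):
            if x == 0:               # f(0)=0: zero co-occurrences contribute nothing
                continue
            dot = sum(a * c for a, c in zip(theta[i], e[j]))
            total += glove_weight(x) * (dot + b[i] + b_e[j] - math.log(x)) ** 2
    return 0.5 * total

def final_embedding(e_w, theta_w):
    """e(final)_w: average of e_w and theta_w, given their symmetric roles."""
    return [(a + c) / 2 for a, c in zip(e_w, theta_w)]
```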
+ + +**59. Remark: the individual components of the learned word embeddings are not necessarily interpretable.** + +
+تذکر: مولفه‌های مجزا در نمایش تعبیه‌ی یادگرفته‌شده‌ی کلمه الزاما قابل تفسیر نیستند. +
+ +
+ + +**60. Comparing words** + +
+مقایسه‌ی کلمات +
+ +
+ + +**61. Cosine similarity ― The cosine similarity between words w1 and w2 is expressed as follows:** + +
+شباهت کسینوسی - شباهت کسینوسی بین کلمات w1 و w2 به ‌صورت زیر بیان می‌شود: +
+ +
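The cosine similarity of section 61 can be sketched directly from its definition (the result is the cosine of the angle θ between the two word vectors, as the next remark notes):

```python
import math

def cosine_similarity(w1, w2):
    """cos(theta) = (w1 . w2) / (||w1|| * ||w2||)."""
    dot = sum(a * b for a, b in zip(w1, w2))
    n1 = math.sqrt(sum(a * a for a in w1))
    n2 = math.sqrt(sum(b * b for b in w2))
    return dot / (n1 * n2)
```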
+ + +**62. Remark: θ is the angle between words w1 and w2.** + +
+نکته: θ زاویهٔ بین کلمات w1 و w2 است. +
+ +
+ + +**63. t-SNE ― t-SNE (t-distributed Stochastic Neighbor Embedding) is a technique aimed at reducing high-dimensional embeddings into a lower dimensional space. In practice, it is commonly used to visualize word vectors in the 2D space.** + +
+t-SNE ― t-SNE (نمایش تعبیه‌ی همسایه‌ی تصادفی توزیع‌شده توسط توزیع t) روشی است که هدف آن کاهش تعبیه‌های ابعاد بالا به فضایی با ابعاد پایین‌تر است. این روش در تصویرسازی بردارهای کلمه در فضای 2 بعدی کاربرد فراوانی دارد. +
+ +
+ + +**64. [literature, art, book, culture, poem, reading, knowledge, entertaining, loveable, childhood, kind, teddy bear, soft, hug, cute, adorable]** + +
+[ادبیات، هنر، کتاب، فرهنگ، شعر، دانش، مفرح، دوست‌داشتنی، دوران کودکی، مهربان، خرس تدی، نرم، آغوش، بامزه، ناز] +
+ +
+ + +**65. Language model** + +
+مدل زبانی +
+ +
+ + +**66. Overview ― A language model aims at estimating the probability of a sentence P(y).** + +
+نمای کلی ـــ هدف مدل زبان تخمین احتمال جمله‌ی P(y) است. +
+ +
+
+
+**67. n-gram model ― This model is a naive approach aiming at quantifying the probability that an expression appears in a corpus by counting its number of appearances in the training data.**
+
+<br>
+مدل ان‌گرام ــ این مدل یک رویکرد ساده با هدف اندازه‌گیری احتمال ظهور یک عبارت در یک پیکره است که با شمارش دفعات تکرار آن در داده‌های آموزشی محاسبه می‌شود.
+<br>
+ +
+
+
+**68. Perplexity ― Language models are commonly assessed using the perplexity metric, also known as PP, which can be interpreted as the inverse probability of the dataset normalized by the number of words T. The lower the perplexity, the better; it is defined as follows:**
+
+<br>
+سرگشتگی ـــ مدل‌های زبانی معمولاً با معیار سرگشتگی، که با PP هم نمایش داده می‌شود، سنجیده می‌شوند، که مقدار آن معکوس احتمال یک مجموعه‌ داده است که تقسیم بر تعداد کلمات T می‌شود. هر چه سرگشتگی کمتر باشد بهتر است و به صورت زیر تعریف می‌شود:
+<br>
+ +
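The perplexity of section 68 — the inverse probability of the dataset normalized by the number of words T — can be computed in log-space to avoid underflow. A sketch assuming the per-word probabilities assigned by the model are given:

```python
import math

def perplexity(word_probs):
    """PP = (prod_t 1/p_t)^(1/T): lower is better."""
    T = len(word_probs)
    log_inv = sum(-math.log(p) for p in word_probs)  # log of prod(1/p_t)
    return math.exp(log_inv / T)
```

On a uniform model that assigns 1/V to every word, PP recovers the vocabulary size V.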
+ + +**69. Remark: PP is commonly used in t-SNE.** + +
+نکته: PP عموما در t-SNE کاربرد دارد. +
+ +
+ + +**70. Machine translation** + +
+ترجمه ماشینی +
+ +
+
+
+**71. Overview ― A machine translation model is similar to a language model except it has an encoder network placed before. For this reason, it is sometimes referred to as a conditional language model. The goal is to find a sentence y such that:**
+
+<br>
+نمای کلی ― مدل ترجمه‌ی ماشینی مشابه مدل زبانی است با این تفاوت که یک شبکه‌ی رمزنگار قبل از آن قرار گرفته است. به همین دلیل، گاهی اوقات به آن مدل زبان شرطی می‌گویند. هدف آن یافتن جمله y است بطوری که: +
+ +
+ + +**72. Beam search ― It is a heuristic search algorithm used in machine translation and speech recognition to find the likeliest sentence y given an input x.** + +
+جستجوی پرتو ― یک الگوریتم جستجوی اکتشافی است که در ترجمه‌ی ماشینی و بازتشخیص گفتار برای یافتن محتمل‌ترین جمله‌ی y باتوجه به ورودی مفروض x بکار برده می‌شود. +
+ +
+ + +**73. [Step 1: Find top B likely words y<1>, Step 2: Compute conditional probabilities y|x,y<1>,...,y, Step 3: Keep top B combinations x,y<1>,...,y, End process at a stop word]** + +
+[گام 1: یافتن B کلمه‌ی محتمل برتر y<1>، گام 2: محاسبه احتمالات شرطی y|x,y<1>,...,y، گام 3: نگه‌داشتن B ترکیب برتر x,y<1>,…,y، خاتمه فرآیند با کلمه‌ی توقف] +
+ +
+ + +**74. Remark: if the beam width is set to 1, then this is equivalent to a naive greedy search.** + +
+نکته: اگر پهنای پرتو 1 باشد، آنگاه با جست‌وجوی حریصانهٔ ساده برابر خواهد بود. +
+ +
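The three steps of section 73 and the remark of section 74 can be sketched as follows, using a toy conditional distribution with made-up probabilities; setting B=1 degenerates to naive greedy search:

```python
import math

def beam_search(cond_prob, vocab, B, max_len, stop="</s>"):
    """Keep the B best partial sentences at each step; scores are sums of log-probabilities."""
    beams = [([], 0.0)]                       # (words so far, log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for words, lp in beams:
            for w in vocab:
                p = cond_prob(words, w)       # P(next word = w | x, words so far)
                if p > 0:
                    candidates.append((words + [w], lp + math.log(p)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for words, lp in candidates[:B]:      # keep top B combinations
            (finished if words[-1] == stop else beams).append((words, lp))
        if not beams:                         # every kept beam ended at the stop word
            break
    best = max(finished + beams, key=lambda c: c[1])
    return best[0]

# Toy model (made-up probabilities): prefers "a", then "b", then the stop word.
def toy(words, w):
    table = {0: {"a": 0.6, "b": 0.3, "</s>": 0.1},
             1: {"a": 0.2, "b": 0.5, "</s>": 0.3},
             2: {"a": 0.1, "b": 0.1, "</s>": 0.8}}
    return table[min(len(words), 2)].get(w, 0.0)

sentence = beam_search(toy, ["a", "b", "</s>"], B=2, max_len=5)
```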
+
+
+**75. Beam width ― The beam width B is a parameter for beam search. Large values of B yield better results but with slower performance and increased memory. Small values of B lead to worse results but are less computationally intensive. A standard value for B is around 10.**
+
+<br>
+پهنای پرتو ـــ پهنای پرتوی B پارامتری برای جستجوی پرتو است. مقادیر بزرگ B به نتیجه بهتر منتهی می‌شوند اما عملکرد آهسته‌تری دارند و حافظه را افزایش می‌دهند. مقادیر کوچک B به نتایج بدتر منتهی می‌شوند اما بار محاسباتی پایین‌تری دارند. مقدار استاندارد B حدود 10 است. +
+ +
+ + +**76. Length normalization ― In order to improve numerical stability, beam search is usually applied on the following normalized objective, often called the normalized log-likelihood objective, defined as:** + +
+نرمال‌سازی طول ―‌ برای بهبود ثبات عددی، جستجوی پرتو معمولا با تابع هدف نرمال‌شده‌ی زیر اعمال می‌شود، که اغلب اوقات هدف درست‌نمایی لگاریتمی نرمال‌شده نامیده می‌شود و به‌صورت زیر تعریف می‌شود: +
+ +
+ + +**77. Remark: the parameter α can be seen as a softener, and its value is usually between 0.5 and 1.** + +
+تذکر: پارامتر α را می‌توان تعدیل‌کننده نامید و مقدارش معمولا بین 0.5 و 1 است. +
+ +
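The normalized objective of section 76, with the softener α of section 77, can be sketched on a list of per-step log-probabilities (α=0.7 below is an illustrative default inside the usual 0.5-1 range):

```python
def normalized_log_likelihood(log_probs, alpha=0.7):
    """(1 / Ty^alpha) * sum_t log P(y<t> | x, y<1>, ..., y<t-1>)."""
    Ty = len(log_probs)
    return sum(log_probs) / (Ty ** alpha)
```

With α=1 this is the per-word average; with α=0 it falls back to the raw sum, which unfairly penalizes longer sentences.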
+ + +**78. Error analysis ― When obtaining a predicted translation ˆy that is bad, one can wonder why we did not get a good translation y∗ by performing the following error analysis:** + +
+تحلیل خطا ― زمانی‌که ترجمه‌ی پیش‌بینی‌شده‌ی ^y به‌دست می‌آید که مطلوب نیست، می‌توان با انجام تحلیل خطای زیر از خود پرسید که چرا به ترجمه‌ی خوب y∗ نرسیده‌ایم:
+<br>
+ +
+ + +**79. [Case, Root cause, Remedies]** + +
+[قضیه، ریشه‌ی مشکل، راه‌حل] +
+ +
+ + +**80. [Beam search faulty, RNN faulty, Increase beam width, Try different architecture, Regularize, Get more data]** + +
+[جستجوی پرتوی معیوب، RNN معیوب، افزایش پهنای پرتو، امتحان معماری‌های مختلف، استفاده از تنظیم‌کننده، جمع‌آوری داده‌های بیشتر]
+<br>
+ +
+ + +**81. Bleu score ― The bilingual evaluation understudy (bleu) score quantifies how good a machine translation is by computing a similarity score based on n-gram precision. It is defined as follows:** + +
+امتیاز Bleu ― جایگزین ارزشیابی دوزبانه (bleu) میزان خوب بودن ترجمه ماشینی را با محاسبه‌ی امتیاز تشابه برمبنای دقت ان‌گرام اندازه‌گیری می‌کند. (این امتیاز) به صورت زیر تعریف می‌شود: +
+ +
+ + +**82. where pn is the bleu score on n-gram only defined as follows:** + +
+که pn امتیاز bleu تنها براساس ان‌گرام است و به صورت زیر تعریف می‌شود: +
+ +
+ + +**83. Remark: a brevity penalty may be applied to short predicted translations to prevent an artificially inflated bleu score.** + +
+تذکر: ممکن است برای پیشگیری از امتیاز اغراق‌آمیز تصنعی bleu، برای ترجمه‌های پیش‌بینی‌شده‌ی کوتاه از جریمه‌ی اختصار استفاده شود.
+<br>
+ +
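Sections 81-83 can be sketched for a single reference sentence; the clipped n-gram counts, the geometric mean over p1..pk, and the exponential brevity penalty below follow the standard BLEU formulation, which is an assumption where the text leaves details out:

```python
import math
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision p_n of candidate against one reference."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    overlap = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

def bleu(candidate, reference, k=4):
    """Geometric mean of p_1..p_k times a brevity penalty for short candidates."""
    ps = [ngram_precision(candidate, reference, n) for n in range(1, k + 1)]
    if min(ps) == 0:
        return 0.0
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in ps) / k)
```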
+ + +**84. Attention** + +
+ژرف‌نگری +
+ +
+ + +**85. Attention model ― This model allows an RNN to pay attention to specific parts of the input that is considered as being important, which improves the performance of the resulting model in practice. By noting α the amount of attention that the output y should pay to the activation a and c the context at time t, we have:** + +
+مدل ژرف‌نگری ― این مدل به RNN این امکان را می‌دهد که به بخش‌های خاصی از ورودی که حائز اهمیت هستند توجه نشان دهد که در عمل باعث بهبود عملکرد مدل حاصل‌شده خواهد شد. اگر α به معنای مقدار توجهی باشد که خروجی y باید به فعال‌سازی a داشته باشد و c نشان‌دهنده‌ی زمینه (متن) در زمان t باشد، داریم: +
+ +
+ + +**86. with** + +
+با +
+ +
+ + +**87. Remark: the attention scores are commonly used in image captioning and machine translation.** + +
+نکته: امتیازات ژرف‌نگری عموما در عنوان‌سازی متنی برای تصویر (image captioning) و ترجمه ماشینی کاربرد دارد. +
+ +
+ + +**88. A cute teddy bear is reading Persian literature.** + +
+یک خرس تدی بامزه در حال خواندن ادبیات فارسی است. +
+ +
+ + +**89. Attention weight ― The amount of attention that the output y should pay to the activation a is given by α computed as follows:** + +
+وزن ژرف‌نگری ― مقدار توجهی که خروجی y باید به فعال‌سازی a داشته باشد به‌وسیله‌ی α به‌دست می‌آید که به‌صورت زیر محاسبه می‌شود: +
+ +
+ + +**90. Remark: computation complexity is quadratic with respect to Tx.** + +
+نکته: پیچیدگی محاسباتی به نسبت Tx از نوع درجه‌ی دوم است. +
+ +
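The attention weights of sections 85 and 89 are a softmax over the Tx input positions, and the context is the weighted sum of the activations. A sketch with made-up energies and activations:

```python
import math

def attention(energies, activations):
    """alpha = softmax of the energies over Tx positions; context c = sum_t' alpha<t'> * a<t'>."""
    m = max(energies)
    exps = [math.exp(e - m) for e in energies]   # shift by the max for stability
    z = sum(exps)
    alphas = [x / z for x in exps]
    dim = len(activations[0])
    c = [sum(a_w * a[i] for a_w, a in zip(alphas, activations)) for i in range(dim)]
    return alphas, c

# Made-up energies and 2-D activations for Tx = 3 input positions.
alphas, c = attention([2.0, 0.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
```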
+ + +**91. The Deep Learning cheatsheets are now available in [target language].** + +
+راهنمای یادگیری عمیق هم اکنون به زبان [فارسی] در دسترس است. +
+ +
+ +**92. Original authors** + +
+نویسندگان اصلی +
+ +
+ +**93. Translated by X, Y and Z** + +
+ترجمه شده توسط X، Y و Z
+<br>
+ +
+ +**94. Reviewed by X, Y and Z** + +
+بازبینی شده توسط X، Y و Z
+<br>
+ +
+ +**95. View PDF version on GitHub** + +
+نسخه پی‌دی‌اف را در گیت‌هاب ببینید +
+ +
+ +**96. By X and Y** + +
+توسط X و Y +
+ +
diff --git a/fr/cs-221-logic-models.md b/fr/cs-221-logic-models.md new file mode 100644 index 000000000..aa03a9b9a --- /dev/null +++ b/fr/cs-221-logic-models.md @@ -0,0 +1,462 @@ +**Logic-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-logic-models) + +
+ +**1. Logic-based models with propositional and first-order logic** + +⟶ Modèles basés sur la logique : logique propositionnelle et calcul des prédicats du premier ordre + +
+ + +**2. Basics** + +⟶ Bases + +
+ + +**3. Syntax of propositional logic ― By noting f,g formulas, and ¬,∧,∨,→,↔ connectives, we can write the following logical expressions:** + +⟶ Syntaxe de la logique propositionnelle - En notant f et g formules et ¬,∧,∨,→,↔ opérateurs, on peut écrire les expressions logiques suivantes : + +
+ + +**4. [Name, Symbol, Meaning, Illustration]** + +⟶ [Nom, Symbole, Signification, Illustration] + +
+ + +**5. [Affirmation, Negation, Conjunction, Disjunction, Implication, Biconditional]** + +⟶ [Affirmation, Négation, Conjonction, Disjonction, Implication, Biconditionnel] + +
+ + +**6. [not f, f and g, f or g, if f then g, f, that is to say g]** + +⟶ [non f, f et g, f ou g, si f alors g, f, c'est à dire g] + +
+ + +**7. Remark: formulas can be built up recursively out of these connectives.** + +⟶ Remarque : n'importe quelle formule peut être construite de manière récursive à partir de ces opérateurs. + +
+
+
+**8. Model ― A model w denotes an assignment of binary weights to propositional symbols.**
+
+⟶ Modèle - Un modèle w dénote une combinaison de valeurs binaires liées à des symboles propositionnels.
+
+<br>
+ + +**9. Example: the set of truth values w={A:0,B:1,C:0} is one possible model to the propositional symbols A, B and C.** + +⟶ Exemple : l'ensemble de valeurs de vérité w={A:0,B:1,C:0} est un modèle possible pour les symboles propositionnels A, B et C. + +
+ + +**10. Interpretation function ― The interpretation function I(f,w) outputs whether model w satisfies formula f:** + +⟶ Interprétation - L'interprétation I(f,w) nous renseigne si le modèle w satisfait la formule f : + +
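The interpretation function I(f,w) of item 10 can be sketched as a recursive evaluator over the connectives of item 3; encoding formulas as nested tuples is an illustrative choice, not part of the cheatsheet:

```python
def interpret(f, w):
    """I(f, w): evaluates formula f in model w (dict mapping each symbol to 0 or 1)."""
    if not isinstance(f, tuple):
        return w[f]                                  # propositional symbol
    op = f[0]
    if op == "not":
        return 1 - interpret(f[1], w)
    if op == "and":
        return interpret(f[1], w) & interpret(f[2], w)
    if op == "or":
        return interpret(f[1], w) | interpret(f[2], w)
    if op == "implies":                              # f -> g is equivalent to (not f) or g
        return max(1 - interpret(f[1], w), interpret(f[2], w))
    if op == "iff":
        return int(interpret(f[1], w) == interpret(f[2], w))
    raise ValueError("unknown connective: " + op)

w = {"A": 0, "B": 1, "C": 0}                         # the model of item 9
```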
+ + +**11. Set of models ― M(f) denotes the set of models w that satisfy formula f. Mathematically speaking, we define it as follows:** + +⟶ Ensemble de modèles - M(f) dénote l'ensemble des modèles w qui satisfont la formule f. Sa définition mathématique est donnée par : + +
+ + +**12. Knowledge base** + +⟶ Base de connaissance + +
+ + +**13. Definition ― The knowledge base KB is the conjunction of all formulas that have been considered so far. The set of models of the knowledge base is the intersection of the set of models that satisfy each formula. In other words:** + +⟶ Définition - La base de connaissance KB est la conjonction de toutes les formules considérées jusqu'à présent. L'ensemble des modèles de la base de connaissance est l'intersection de l'ensemble des modèles satisfaisant chaque formule. En d'autres termes : + +
+ + +**14. Probabilistic interpretation ― The probability that query f is evaluated to 1 can be seen as the proportion of models w of the knowledge base KB that satisfy f, i.e.:** + +⟶ Interprétation en termes de probabilités - La probabilité que la requête f soit évaluée à 1 peut être vue comme la proportion des modèles w de la base de connaissance KB qui satisfait f, i.e. : + +
+ + +**15. Satisfiability ― The knowledge base KB is said to be satisfiable if at least one model w satisfies all its constraints. In other words:** + +⟶ Satisfaisabilité - La base de connaissance KB est dite satisfaisable si au moins un modèle w satisfait toutes ses contraintes. En d'autres termes : + +
+ + +**16. satisfiable** + +⟶ satisfaisable + +
+ + +**17. Remark: M(KB) denotes the set of models compatible with all the constraints of the knowledge base.** + +⟶ Remarque : M(KB) dénote l'ensemble des modèles compatibles avec toutes les contraintes de la base de connaissance. + +
+ + +**18. Relation between formulas and knowledge base - We define the following properties between the knowledge base KB and a new formula f:** + +⟶ Relation entre formules et base de connaissance - On définit les propriétés suivantes entre la base de connaissance KB et une nouvelle formule f : + +
+ + +**19. [Name, Mathematical formulation, Illustration, Notes]** + +⟶ [Nom, Formulation mathématique, Illustration, Notes] + +
+ + +**20. [KB entails f, KB contradicts f, f contingent to KB]** + +⟶ [KB déduit f, KB contredit f, f est contingent à KB] + +
+ + +**21. [f does not bring any new information, Also written KB⊨f, No model satisfies the constraints after adding f, Equivalent to KB⊨¬f, f does not contradict KB, f adds a non-trivial amount of information to KB]** + +⟶ [f n'apporte aucune nouvelle information, Aussi écrit KB⊨f, Aucun modèle ne satisfait les contraintes après l'ajout de f, Équivalent à KB⊨¬f, f ne contredit pas KB, f ajoute une quantité non-triviale d'information à KB] + +
+ + +**22. Model checking ― A model checking algorithm takes as input a knowledge base KB and outputs whether it is satisfiable or not.** + +⟶ Vérification de modèles - Un algorithme de vérification de modèles (model checking en anglais) prend comme argument une base de connaissance KB et nous renseigne si celle-ci est satisfaisable ou pas. + +
+ + +**23. Remark: popular model checking algorithms include DPLL and WalkSat.** + +⟶ Remarque : DPLL et WalkSat sont des exemples populaires d'algorithmes de vérification de modèles. + +
+ + +**24. Inference rule ― An inference rule of premises f1,...,fk and conclusion g is written:** + +⟶ Règle d'inférence - Une règle d'inférence de prémisses f1,...,fk et de conclusion g s'écrit : + +
+ + +**25. Forward inference algorithm ― From a set of inference rules Rules, this algorithm goes through all possible f1,...,fk and adds g to the knowledge base KB if a matching rule exists. This process is repeated until no more additions can be made to KB.** + +⟶ Algorithme de chaînage avant - Partant d'un ensemble de règles d'inférence Rules, l'algorithme de chaînage avant (en anglais forward inference algorithm) parcourt tous les f1,...,fk et ajoute g à la base de connaissance KB si une règle parvient à une telle conclusion. Cette démarche est répétée jusqu'à ce qu'aucun autre ajout ne puisse être fait à KB. + +
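The forward inference loop of item 25 can be sketched with rules given as (premises, conclusion) pairs; the rain/wet/ice symbols below are hypothetical toy examples:

```python
def forward_inference(kb, rules):
    """Repeatedly applies inference rules (premises, conclusion) until KB stops growing."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= kb and conclusion not in kb:
                kb.add(conclusion)        # a matching rule adds g to the knowledge base
                changed = True
    return kb

# Hypothetical toy rules: rain -> wet ; wet and cold -> ice.
rules = [(("rain",), "wet"), (("wet", "cold"), "ice")]
derived = forward_inference({"rain", "cold"}, rules)
```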
+ + +**26. Derivation ― We say that KB derives f (written KB⊢f) with rules Rules if f already is in KB or gets added during the forward inference algorithm using the set of rules Rules.** + +⟶ Dérivation - On dit que KB dérive f (noté KB⊢f) par le biais des règles Rules soit si f est déjà dans KB ou si elle se fait ajouter pendant l'application du chaînage avant utilisant les règles Rules. + +
+ + +**27. Properties of inference rules ― A set of inference rules Rules can have the following properties:** + +⟶ Propriétés des règles d'inférence - Un ensemble de règles d'inférence Rules peut avoir les propriétés suivantes : + +
+ + +**28. [Name, Mathematical formulation, Notes]** + +⟶ [Nom, Formulation mathématique, Notes] + +
+ + +**29. [Soundness, Completeness]** + +⟶ [Validité, Complétude] + +
+ + +**30. [Inferred formulas are entailed by KB, Can be checked one rule at a time, "Nothing but the truth", Formulas entailing KB are either already in the knowledge base or inferred from it, "The whole truth"]** + +⟶ [Les formules inférées sont déduites par KB, Peut être vérifiée une règle à la fois, "Rien que la vérité", Les formules déduites par KB sont soit déjà dans la base de connaissance, soit inférées de celle-ci, "La vérité dans sa totalité"] + +
+ + +**31. Propositional logic** + +⟶ Logique propositionnelle + +
+ + +**32. In this section, we will go through logic-based models that use logical formulas and inference rules. The idea here is to balance expressivity and computational efficiency.** + +⟶ Dans cette section, nous allons parcourir les modèles logiques utilisant des formules logiques et des règles d'inférence. L'idée est de trouver le juste milieu entre expressivité et efficacité. + +
+ + +**33. Horn clause ― By noting p1,...,pk and q propositional symbols, a Horn clause has the form:** + +⟶ Clause de Horn - En notant p1,...,pk et q des symboles propositionnels, une clause de Horn s'écrit : + +
+ + +**34. Remark: when q=false, it is called a "goal clause", otherwise we denote it as a "definite clause".** + +⟶ Remarque : quand q=false, cette clause de Horn est "négative", autrement elle est appelée "stricte". + +
+ + +**35. Modus ponens ― For propositional symbols f1,...,fk and p, the modus ponens rule is written:** + +⟶ Modus ponens - Sur les symboles propositionnels f1,...,fk et p, la règle de modus ponens est écrite : + +
+ + +**36. Remark: it takes linear time to apply this rule, as each application generate a clause that contains a single propositional symbol.** + +⟶ Remarque : l'application de cette règle se fait en temps linéaire, puisque chaque exécution génère une clause contenant un symbole propositionnel. + +
+ + +**37. Completeness ― Modus ponens is complete with respect to Horn clauses if we suppose that KB contains only Horn clauses and p is an entailed propositional symbol. Applying modus ponens will then derive p.** + +⟶ Complétude - Modus ponens est complet lorsqu'on le munit des clauses de Horn si l'on suppose que KB contient uniquement des clauses de Horn et que p est un symbole propositionnel qui est déduit. L'application de modus ponens dérivera alors p. + +
+ + +**38. Conjunctive normal form ― A conjunctive normal form (CNF) formula is a conjunction of clauses, where each clause is a disjunction of atomic formulas.** + +⟶ Forme normale conjonctive - La forme normale conjonctive (en anglais conjunctive normal form ou CNF) d'une formule est une conjonction de clauses, chacune d'entre elles étant une disjonction de formules atomiques. + +
+ + +**39. Remark: in other words, CNFs are ∧ of ∨.** + +⟶ Remarque : en d'autres termes, les CNFs sont des ∧ de ∨. + +
+ + +**40. Equivalent representation ― Every formula in propositional logic can be written into an equivalent CNF formula. The table below presents general conversion properties:** + +⟶ Représentation équivalente - Chaque formule en logique propositionnelle peut être écrite de manière équivalente sous la forme d'une formule CNF. Le tableau ci-dessous présente les propriétés principales permettant une telle conversion : + +
+ + +**41. [Rule name, Initial, Converted, Eliminate, Distribute, over]** + +⟶ [Nom de la règle, Début, Résultat, Élimine, Distribue, sur] + +
+ + +**42. Resolution rule ― For propositional symbols f1,...,fn, and g1,...,gm as well as p, the resolution rule is written:** + +⟶ Règle de résolution - Pour des symboles propositionnels f1,...,fn, et g1,...,gm ainsi que p, la règle de résolution s'écrit : + +
+ + +**43. Remark: it can take exponential time to apply this rule, as each application generates a clause that has a subset of the propositional symbols.** + +⟶ Remarque : l'application de cette règle peut prendre un temps exponentiel, vu que chaque itération génère une clause constituée d'une partie des symboles propositionnels. + +
+ + +**44. [Resolution-based inference ― The resolution-based inference algorithm follows the following steps:, Step 1: Convert all formulas into CNF, Step 2: Repeatedly apply resolution rule, Step 3: Return unsatisfiable if and only if False, is derived]** + +⟶ [Inférence basée sur la règle de résolution - L'algorithme d'inférence basée sur la règle de résolution se déroule en plusieurs étapes :, Étape 1 : Conversion de toutes les formules vers leur forme CNF, Étape 2 : Application répétée de la règle de résolution, Étape 3 : Renvoyer "non satisfaisable" si et seulement si False est dérivé] + +
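The propositional resolution step applied repeatedly by the algorithm of item 44 can be sketched on CNF clauses represented as sets of literals; the "-" prefix for negation is an illustrative convention:

```python
def resolve(c1, c2):
    """All resolvents of clauses c1, c2 (frozensets of literals; '-x' negates 'x')."""
    def neg(lit):
        return lit[1:] if lit.startswith("-") else "-" + lit
    resolvents = []
    for lit in c1:
        if neg(lit) in c2:                # resolve on the complementary pair
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {neg(lit)})))
    return resolvents

# CNF of (A or B) and (not A or C): resolving on A yields (B or C).
out = resolve(frozenset({"A", "B"}), frozenset({"-A", "C"}))
```

Deriving the empty clause (from A and not A) is exactly the unsatisfiability signal of step 3.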
+ + +**45. First-order logic** + +⟶ Calcul des prédicats du premier ordre + +
+ + +**46. The idea here is to use variables to yield more compact knowledge representations.** + +⟶ L'idée ici est d'utiliser des variables et ainsi permettre une représentation des connaissances plus compacte. + +
+ + +**47. [Model ― A model w in first-order logic maps:, constant symbols to objects, predicate symbols to tuple of objects]** + +⟶ [Modèle - Un modèle w en calcul des prédicats du premier ordre lie :, des symboles constants à des objets, des prédicats à n-uplets d'objets] + +
+ + +**48. Horn clause ― By noting x1,...,xn variables and a1,...,ak,b atomic formulas, the first-order logic version of a horn clause has the form:** + +⟶ Clause de Horn - En notant x1,...,xn variables et a1,...,ak,b formules atomiques, une clause de Horn pour le calcul des prédicats du premier ordre a la forme : + +
+ + +**49. Substitution ― A substitution θ maps variables to terms and Subst[θ,f] denotes the result of substitution θ on f.** + +⟶ Substitution - Une substitution θ lie les variables aux termes et Subst[θ,f] désigne le résultat de la substitution θ sur f. + +
+ + +**50. Unification ― Unification takes two formulas f and g and returns the most general substitution θ that makes them equal:** + +⟶ Unification - Une unification prend deux formules f et g et renvoie la substitution θ la plus générale les rendant égales : + +
+ + +**51. such that** + +⟶ tel que + +
+ + +**52. Note: Unify[f,g] returns Fail if no such θ exists.** + +⟶ Note : Unify[f,g] renvoie Fail si un tel θ n'existe pas. + +
+ + +**53. Modus ponens ― By noting x1,...,xn variables, a1,...,ak and a′1,...,a′k atomic formulas and by calling θ=Unify(a′1∧...∧a′k,a1∧...∧ak) the first-order logic version of modus ponens can be written:** + +⟶ Modus ponens - En notant x1,...,xn variables, a1,...,ak et a′1,...,a′k formules atomiques et en notant θ=Unify(a′1∧...∧a′k,a1∧...∧ak), modus ponens pour le calcul des prédicats du premier ordre s'écrit : + +
+ + +**54. Completeness ― Modus ponens is complete for first-order logic with only Horn clauses.** + +⟶ Complétude - Modus ponens est complet pour le calcul des prédicats du premier ordre lorsqu'il agit uniquement sur les clauses de Horn. + +
+
+
+**55. Resolution rule ― By noting f1,...,fn, g1,...,gm, p, q formulas and by calling θ=Unify(p,q), the first-order logic version of the resolution rule can be written:**
+
+⟶ Règle de résolution - En notant f1,...,fn, g1,...,gm, p, q formules et en posant θ=Unify(p,q), la règle de résolution pour le calcul des prédicats du premier ordre s'écrit :
+
+<br>
+ + +**56. [Semi-decidability ― First-order logic, even restricted to only Horn clauses, is semi-decidable., if KB⊨f, forward inference on complete inference rules will prove f in finite time, if KB⊭f, no algorithm can show this in finite time]** + +⟶ [Semi-décidabilité - Le calcul des prédicats du premier ordre, même restreint aux clauses de Horn, n'est que semi-décidable., si KB⊨f, l'algorithme de chaînage avant sur des règles d'inférence complètes prouvera f en temps fini, si KB⊭f, aucun algorithme ne peut le prouver en temps fini] + +
+ + +**57. [Basics, Notations, Model, Interpretation function, Set of models]** + +⟶ [Bases, Notations, Modèle, Interprétation, Ensemble de modèles] + +
+ + +**58. [Knowledge base, Definition, Probabilistic interpretation, Satisfiability, Relationship with formulas, Forward inference, Rule properties]** + +⟶ [Base de connaissance, Définition, Interprétation en termes de probabilité, Satisfaisabilité, Lien avec les formules, Chaînage en avant, Propriétés des règles] + +
+ + +**59. [Propositional logic, Clauses, Modus ponens, Conjunctive normal form, Representation equivalence, Resolution]** + +⟶ [Logique propositionnelle, Clauses, Modus ponens, Forme normale conjonctive, Représentation équivalente, Résolution] + +
+ + +**60. [First-order logic, Substitution, Unification, Resolution rule, Modus ponens, Resolution, Semi-decidability]** + +⟶ [Calcul des prédicats du premier ordre, Substitution, Unification, Règle de résolution, Modus ponens, Résolution, Semi-décidabilité] + +
+ + +**61. View PDF version on GitHub** + +⟶ Voir la version PDF sur GitHub + +
+ + +**62. Original authors** + +⟶ Auteurs originaux. + +
+ + +**63. Translated by X, Y and Z** + +⟶ Traduit par X, Y et Z. + +
+ + +**64. Reviewed by X, Y and Z** + +⟶ Revu par X, Y et Z. + +
+ + +**65. By X and Y** + +⟶ Par X et Y. + +
+ + +**66. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶ Les pense-bêtes d'intelligence artificielle sont maintenant disponibles en français. diff --git a/fr/cs-221-reflex-models.md b/fr/cs-221-reflex-models.md new file mode 100644 index 000000000..7a7a489e1 --- /dev/null +++ b/fr/cs-221-reflex-models.md @@ -0,0 +1,539 @@ +**Reflex-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-reflex-models) + +
+
+**1. Reflex-based models with Machine Learning**
+
+⟶ Modèles basés sur le réflexe : apprentissage automatique
+
+<br>
+ + +**2. Linear predictors** + +⟶ Prédicteurs linéaires + +
+
+
+**3. In this section, we will go through reflex-based models that can improve with experience, by going through samples that have input-output pairs.**
+
+⟶ Dans cette section, nous allons explorer les modèles basés sur le réflexe qui peuvent s'améliorer avec l'expérience en s'appuyant sur des données ayant une correspondance entrée-sortie.
+
+<br>
+ + +**4. Feature vector ― The feature vector of an input x is noted ϕ(x) and is such that:** + +⟶ Vecteur caractéristique - Le vecteur caractéristique (en anglais feature vector) d'une entrée x est noté ϕ(x) et se décompose en : + +
+ + +**5. Score ― The score s(x,w) of an example (ϕ(x),y)∈Rd×R associated to a linear model of weights w∈Rd is given by the inner product:** + +⟶ Score - Le score s(x,w) d'un exemple (ϕ(x),y)∈Rd×R associé à un modèle linéaire de paramètres w∈Rd est donné par le produit scalaire : + +
+ + +**6. Classification** + +⟶ Classification + +
+ + +**7. Linear classifier ― Given a weight vector w∈Rd and a feature vector ϕ(x)∈Rd, the binary linear classifier fw is given by:** + +⟶ Classifieur linéaire - Étant donnés un vecteur de paramètres w∈Rd et un vecteur caractéristique ϕ(x)∈Rd, le classifieur linéaire binaire est donné par : + +
+ + +**8. if** + +⟶ si + +
+ + +**9. Margin ― The margin m(x,y,w)∈R of an example (ϕ(x),y)∈Rd×{−1,+1} associated to a linear model of weights w∈Rd quantifies the confidence of the prediction: larger values are better. It is given by:** + +⟶ Marge - La marge (en anglais margin) m(x,y,w)∈R d'un exemple (ϕ(x),y)∈Rd×{−1,+1} associée à un modèle linéaire de paramètre w∈Rd quantifie la confiance associée à une prédiction : plus cette valeur est grande, mieux c'est. Cette quantité est donnée par : + +
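Items 5-9 can be sketched in a few lines — score, sign classifier, and margin — with made-up weights standing in for a trained w:

```python
def score(phi_x, w):
    """s(x, w): inner product of the feature vector phi(x) with the weight vector w."""
    return sum(wi * xi for wi, xi in zip(w, phi_x))

def classify(phi_x, w):
    """Binary linear classifier fw(x) = sign(s(x, w)), in {-1, +1}."""
    return 1 if score(phi_x, w) >= 0 else -1

def margin(phi_x, y, w):
    """m(x, y, w) = s(x, w) * y: larger values mean more confident correct predictions."""
    return score(phi_x, w) * y

w = [2.0, -1.0]   # made-up weights
```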
+ + +**10. Regression** + +⟶ Régression + +
+ + +**11. Linear regression ― Given a weight vector w∈Rd and a feature vector ϕ(x)∈Rd, the output of a linear regression of weights w denoted as fw is given by:** + +⟶ Régression linéaire - Étant donnés un vecteur de paramètres w∈Rd et un vecteur caractéristique ϕ(x)∈Rd, le résultat d'une régression linéaire de paramètre w, notée fw, est donné par : + +
+ + +**12. Residual ― The residual res(x,y,w)∈R is defined as being the amount by which the prediction fw(x) overshoots the target y:** + +⟶ Résidu - Le résidu res(x,y,w)∈R est défini comme étant la différence entre la prédiction fw(x) et la vraie valeur y. + +
+ + +**13. Loss minimization** + +⟶ Minimisation de la fonction objectif + +
+ + +**14. Loss function ― A loss function Loss(x,y,w) quantifies how unhappy we are with the weights w of the model in the prediction task of output y from input x. It is a quantity we want to minimize during the training process.** + +⟶ Fonction objectif - Une fonction objectif (en anglais loss function) Loss(x,y,w) traduit notre niveau d'insatisfaction avec les paramètres w du modèle dans la tâche de prédiction de la sortie y à partir de l'entrée x. C'est une quantité que l'on souhaite minimiser pendant la phase d'entraînement. + +
+ + +**15. Classification case - The classification of a sample x of true label y∈{−1,+1} with a linear model of weights w can be done with the predictor fw(x)≜sign(s(x,w)). In this situation, a metric of interest quantifying the quality of the classification is given by the margin m(x,y,w), and can be used with the following loss functions:** + +⟶ Cas de la classification - Trouver la classe d'un exemple x appartenant à y∈{−1,+1} peut être faite par le biais d'un modèle linéaire de paramètre w à l'aide du prédicteur fw(x)≜sign(s(x,w)). La qualité de cette prédiction peut alors être évaluée au travers de la marge m(x,y,w) intervenant dans les fonctions objectif suivantes : + +
+ + +**16. [Name, Illustration, Zero-one loss, Hinge loss, Logistic loss]** + +⟶ [Nom, Illustration, Fonction objectif zéro-un, Fonction objectif de Hinge, Fonction objectif logistique] + +
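The three classification losses of items 15-16, written as functions of the margin m:

```python
import math

def zero_one_loss(m):
    """1{m <= 0}: counts a mistake, ignores confidence."""
    return 1.0 if m <= 0 else 0.0

def hinge_loss(m):
    """max(1 - m, 0): convex surrogate used by SVMs."""
    return max(1.0 - m, 0.0)

def logistic_loss(m):
    """log(1 + e^(-m)): smooth surrogate used by logistic regression."""
    return math.log(1.0 + math.exp(-m))
```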
+ + +**17. Regression case - The prediction of a sample x of true label y∈R with a linear model of weights w can be done with the predictor fw(x)≜s(x,w). In this situation, a metric of interest quantifying the quality of the regression is given by the margin res(x,y,w) and can be used with the following loss functions:** + +⟶ Cas de la régression - Prédire la valeur y∈R associée à l'exemple x peut être faite par le biais d'un modèle linéaire de paramètre w à l'aide du prédicteur fw(x)≜s(x,w). La qualité de cette prédiction peut alors être évaluée au travers du résidu res(x,y,w) intervenant dans les fonctions objectif suivantes : + +
+ + +**18. [Name, Squared loss, Absolute deviation loss, Illustration]** + +⟶ [Nom, Erreur quadratique, Erreur absolue, Illustration] + +
+
+
+**19. Loss minimization framework ― In order to train a model, we want to minimize the training loss, which is defined as follows:**
+
+⟶ Processus de minimisation de la fonction objectif - Lors de l'entraînement d'un modèle, on souhaite minimiser la valeur de la fonction objectif évaluée sur l'ensemble d'entraînement :
+
+<br>
+ + +**20. Non-linear predictors** + +⟶ Prédicteurs non linéaires + +
+ + +**21. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** + +⟶ k plus proches voisins - L'algorithme des k plus proches voisins (en anglais k-nearest neighbors ou k-NN) est une approche non paramétrique où la réponse associée à un exemple est déterminée par la nature de ses k plus proches voisins de l'ensemble d'entraînement. Cette démarche peut être utilisée pour la classification et la régression. + +
+ + +**22. Remark: the higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.** + +⟶ Remarque : plus le paramètre k est grand, plus le biais est élevé. À l'inverse, la variance devient plus élevée lorsque l'on réduit la valeur k. + +
+
+
+**23. Neural networks ― Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks. The vocabulary around neural networks architectures is described in the figure below:**
+
+⟶ Réseaux de neurones - Les réseaux de neurones (en anglais neural networks) constituent une classe de modèles construits à partir de couches (en anglais layers). Parmi les types de réseaux populaires, on peut compter les réseaux de neurones convolutionnels et récurrents (abrégés respectivement en CNN et RNN en anglais). Une partie du vocabulaire associé aux réseaux de neurones est détaillée dans la figure ci-dessous :
+
+<br>
+ + +**24. [Input layer, Hidden layer, Output layer]** + +⟶ [Couche d'entrée, Couche cachée, Couche de sortie] + +
+ + +**25. By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:** + +⟶ En notant i la i-ème couche du réseau et j son j-ième neurone, on a : + +
+
+
+**26. where we note w, b, x, z the weight, bias, input and non-activated output of the neuron respectively.**
+
+⟶ où l'on note w, b, x, z respectivement le coefficient, le biais, l'entrée et la sortie non activée du neurone.
+
+<br>
+ + +**27. For a more detailed overview of the concepts above, check out the Supervised Learning cheatsheets!** + +⟶ Pour un aperçu plus détaillé des concepts ci-dessus, rendez-vous sur le pense-bête d'apprentissage supervisé ! + +
+ + +**28. Stochastic gradient descent** + +⟶ Algorithme du gradient stochastique + +
+ + +**29. Gradient descent ― By noting η∈R the learning rate (also called step size), the update rule for gradient descent is expressed with the learning rate and the loss function Loss(x,y,w) as follows:** + +⟶ Descente de gradient - En notant η∈R le taux d'apprentissage (en anglais learning rate ou step size), la règle de mise à jour des coefficients pour cet algorithme utilise la fonction objectif Loss(x,y,w) de la manière suivante : + +
+ + +**30. Stochastic updates ― Stochastic gradient descent (SGD) updates the parameters of the model one training example (ϕ(x),y)∈Dtrain at a time. This method leads to sometimes noisy, but fast updates.** + +⟶ Mises à jour stochastiques - L'algorithme du gradient stochastique (en anglais stochastic gradient descent ou SGD) met à jour les paramètres du modèle en parcourant les exemples (ϕ(x),y)∈Dtrain de l'ensemble d'entraînement un à un. Cette méthode engendre des mises à jour rapides à calculer mais qui manquent parfois de robustesse. + +
+
+
+**31. Batch updates ― Batch gradient descent (BGD) updates the parameters of the model one batch of examples (e.g. the entire training set) at a time. This method computes stable update directions, at a greater computational cost.**
+
+⟶ Mises à jour par lot - L'algorithme du gradient par lot (en anglais batch gradient descent ou BGD) met à jour les paramètres du modèle en utilisant des lots entiers d'exemples (e.g. la totalité de l'ensemble d'entraînement) à la fois. Cette méthode calcule des directions de mise à jour des coefficients plus stables, au prix d'un plus grand nombre de calculs.
+
+<br>
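Les deux types de mises à jour peuvent s'esquisser sur une régression linéaire jouet (exemple hypothétique : perte quadratique, taux d'apprentissage et nombre d'époques fixés arbitrairement) :

```python
import numpy as np

# Données jouets : y = 2x (modèle linéaire à un seul coefficient w, sans biais)
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * X

def sgd(w, eta=0.01, epochs=200):
    # Mise à jour stochastique : un exemple (x, y) à la fois
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            grad = 2 * (w * xi - yi) * xi  # gradient de la perte quadratique
            w -= eta * grad
    return w

def bgd(w, eta=0.01, epochs=200):
    # Mise à jour par lot : gradient moyen sur tout l'ensemble d'entraînement
    for _ in range(epochs):
        grad = np.mean(2 * (w * X - y) * X)
        w -= eta * grad
    return w

print(sgd(0.0), bgd(0.0))  # les deux convergent vers w ≈ 2
```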
+ + +**32. Fine-tuning models** + +⟶ Peaufinage de modèle + +
+
+
+**33. Hypothesis class ― A hypothesis class F is the set of possible predictors with a fixed ϕ(x) and varying w:**
+
+⟶ Classe d'hypothèses - Une classe d'hypothèses F est l'ensemble des prédicteurs candidats ayant un ϕ(x) fixé et dont le paramètre w peut varier :
+
+<br>
+ + +**34. Logistic function ― The logistic function σ, also called the sigmoid function, is defined as:** + +⟶ Fonction logistique - La fonction logistique σ, aussi appelée en anglais sigmoid function, est définie par : + +
+ + +**35. Remark: we have σ′(z)=σ(z)(1−σ(z)).** + +⟶ Remarque : la dérivée de cette fonction s'écrit σ′(z)=σ(z)(1−σ(z)). + +
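On peut vérifier numériquement cette identité par différences finies (esquisse Python, valeurs de test arbitraires) :

```python
import numpy as np

def sigma(z):
    # Fonction logistique (sigmoïde)
    return 1 / (1 + np.exp(-z))

# Vérification numérique de σ′(z) = σ(z)(1 − σ(z)) par différences finies centrées
z = np.linspace(-3, 3, 7)
h = 1e-6
deriv_num = (sigma(z + h) - sigma(z - h)) / (2 * h)
print(np.max(np.abs(deriv_num - sigma(z) * (1 - sigma(z)))))  # ≈ 0
```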
+
+
+**36. Backpropagation ― The forward pass is done through fi, which is the value for the subexpression rooted at i, while the backward pass is done through gi=∂out/∂fi and represents how fi influences the output.**
+
+⟶ Rétropropagation du gradient (en anglais backpropagation) - La propagation avant (en anglais forward pass) est effectuée via fi, valeur de la sous-expression enracinée en i. La propagation de l'erreur vers l'arrière (en anglais backward pass) se fait via gi=∂out/∂fi et décrit la manière dont fi agit sur la sortie du réseau.
+
+<br>
+
+
+**37. Approximation and estimation error ― The approximation error ϵapprox represents how far the entire hypothesis class F is from the target predictor g∗, while the estimation error ϵest quantifies how good the predictor ^f is with respect to the best predictor f∗ of the hypothesis class F.**
+
+⟶ Erreur d'approximation et d'estimation - L'erreur d'approximation ϵapprox représente la distance entre la classe d'hypothèses F et le prédicteur optimal g∗. De son côté, l'erreur d'estimation ϵest quantifie la qualité du prédicteur ^f par rapport au meilleur prédicteur f∗ de la classe d'hypothèses F.
+
+<br>
+
+
+**38. Regularization ― The regularization procedure aims at avoiding the model to overfit the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:**
+
+⟶ Régularisation - Le but de la régularisation est d'empêcher le modèle de surapprendre (en anglais overfit) les données et ainsi de traiter les problèmes de variance élevée. La table suivante résume les différents types de régularisation couramment utilisés :
+
+<br>
+
+
+**39. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]**
+
+⟶ [Réduit les coefficients à 0, Bénéfique pour la sélection de variables, Rapetisse les coefficients, Compromis entre sélection de variables et coefficients de faible magnitude]
+
+<br>
+
+
+**40. Hyperparameters ― Hyperparameters are the properties of the learning algorithm, and include features, regularization parameter λ, number of iterations T, step size η, etc.**
+
+⟶ Hyperparamètres - Les hyperparamètres sont les paramètres de l'algorithme d'apprentissage et incluent entre autres le type de caractéristiques utilisé, le paramètre de régularisation λ, le nombre d'itérations T et le taux d'apprentissage η.
+
+<br>
+ + +**41. Sets vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** + +⟶ Vocabulaire ― Lors de la sélection d'un modèle, on divise les données en 3 différentes parties : + +
+ + +**42. [Training set, Validation set, Testing set]** + +⟶ [Données d'entraînement, Données de validation, Données de test] + +
+
+
+**43. [Model is trained, Usually 80% of the dataset, Model is assessed, Usually 20% of the dataset, Also called hold-out or development set, Model gives predictions, Unseen data]**
+
+⟶ [Le modèle est entraîné, Constitue normalement 80% du jeu de données, Le modèle est évalué, Constitue normalement 20% du jeu de données, Aussi appelé données de développement (en anglais hold-out ou development set), Le modèle donne ses prédictions, Données jamais observées]
+
+<br>
+
+
+**44. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:**
+
+⟶ Une fois que le modèle a été choisi, il est entraîné sur le jeu de données entier et testé sur l'ensemble de test (qui n'a jamais été vu). Ces derniers sont représentés dans la figure ci-dessous :
+
+<br>
+ + +**45. [Dataset, Unseen data, train, validation, test]** + +⟶ [Jeu de données, Données inconnues, entrainement, validation, test] + +
+ + +**46. For a more detailed overview of the concepts above, check out the Machine Learning tips and tricks cheatsheets!** + +⟶ Pour un aperçu plus détaillé des concepts ci-dessus, rendez-vous sur le pense-bête de petites astuces d'apprentissage automatique ! + +
+ + +**47. Unsupervised Learning** + +⟶ Apprentissage non supervisé + +
+ + +**48. The class of unsupervised learning methods aims at discovering the structure of the data, which may have of rich latent structures.** + +⟶ Les méthodes d'apprentissage non supervisé visent à découvrir la structure (parfois riche) des données. + +
+ + +**49. k-means** + +⟶ k-moyennes (en anglais k-means) + +
+ + +**50. Clustering ― Given a training set of input points Dtrain, the goal of a clustering algorithm is to assign each point ϕ(xi) to a cluster zi∈{1,...,k}** + +⟶ Partitionnement - Étant donné un ensemble d'entraînement Dtrain, le but d'un algorithme de partitionnement (en anglais clustering) est d'assigner chaque point ϕ(xi) à une partition zi∈{1,...,k}. + +
+
+
+**51. Objective function ― The loss function for one of the main clustering algorithms, k-means, is given by:**
+
+⟶ Fonction objectif - La fonction objectif d'un des principaux algorithmes de partitionnement, k-moyennes, est donnée par :
+
+<br>
+
+
+**52. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:**
+
+⟶ Algorithme ― Après avoir aléatoirement initialisé les centroïdes de partitions μ1,μ2,...,μk∈Rn, l'algorithme k-moyennes répète l'étape suivante jusqu'à convergence :
+
+<br>
+ + +**53. and** + +⟶ et + +
+ + +**54. [Means initialization, Cluster assignment, Means update, Convergence]** + +⟶ [Initialisation des moyennes, Assignation de partition, Mise à jour des moyennes, Convergence] + +
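Les étapes d'initialisation, d'assignation et de mise à jour ci-dessus peuvent s'esquisser ainsi en Python (données jouets et noms hypothétiques) :

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialisation des moyennes : k points de données choisis au hasard
    mu = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignation : chaque point rejoint la partition du centroïde le plus proche
        z = np.argmin(((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1), axis=1)
        # Mise à jour : chaque centroïde devient la moyenne de sa partition
        mu = np.array([X[z == j].mean(axis=0) if np.any(z == j) else mu[j]
                       for j in range(k)])
    return z, mu

X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
z, mu = kmeans(X, k=2)
print(z)  # les deux premiers points partagent une partition, les deux derniers l'autre
```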
+ + +**55. Principal Component Analysis** + +⟶ Analyse des composantes principales + +
+
+
+**56. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:**
+
+⟶ Valeur propre, vecteur propre ― Étant donnée une matrice A∈Rn×n, λ est appelée valeur propre de A s'il existe un vecteur z∈Rn∖{0}, appelé vecteur propre, tel que :
+
+<br>
+ + +**57. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** + +⟶ Théorème spectral ― Soit A∈Rn×n. Si A est symétrique, alors A est diagonalisable par une matrice réelle orthogonale U∈Rn×n. En notant Λ=diag(λ1,...,λn), on a : + +
+ + +**58. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** + +⟶ Remarque : le vecteur propre associé à la plus grande valeur propre est appelé le vecteur propre principal de la matrice A. + +
+ + +**59. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k dimensions by maximizing the variance of the data as follows:** + +⟶ Algorithme ― La procédure d'analyse des composantes principales (en anglais PCA - Principal Component Analysis) est une technique de réduction de dimension qui projette les données sur k dimensions en maximisant la variance des données de la manière suivante : + +
+ + +**60. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** + +⟶ Étape 1: Normaliser les données pour avoir une moyenne de 0 et un écart-type de 1. + +
+ + +**61. [where, and]** + +⟶ [où, et] + +
+
+
+**62. [Step 2: Compute Σ=1mm∑i=1ϕ(xi)ϕ(xi)T∈Rn×n, which is symmetric with real eigenvalues., Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues., Step 4: Project the data on spanR(u1,...,uk).]**
+
+⟶ [Étape 2: Calculer Σ=1mm∑i=1ϕ(xi)ϕ(xi)T∈Rn×n, qui est symétrique avec des valeurs propres réelles., Étape 3: Calculer u1,...,uk∈Rn les k vecteurs propres principaux orthogonaux de Σ, i.e. les vecteurs propres orthogonaux associés aux k plus grandes valeurs propres., Étape 4: Projeter les données sur spanR(u1,...,uk).]
+
+<br>
+ + +**63. This procedure maximizes the variance among all k-dimensional spaces.** + +⟶ Cette procédure maximise la variance sur tous les espaces à k dimensions. + +
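Les étapes 1 à 4 peuvent s'esquisser ainsi (code hypothétique s'appuyant sur numpy.linalg.eigh, qui renvoie les valeurs propres d'une matrice symétrique en ordre croissant) :

```python
import numpy as np

def pca(X, k):
    # Étape 1 : normalisation (moyenne 0, écart-type 1)
    Xn = (X - X.mean(axis=0)) / X.std(axis=0)
    # Étape 2 : Σ = (1/m) Σi ϕ(xi)ϕ(xi)T, symétrique à valeurs propres réelles
    sigma = Xn.T @ Xn / len(Xn)
    # Étape 3 : vecteurs propres associés aux k plus grandes valeurs propres
    vals, vecs = np.linalg.eigh(sigma)   # eigh : valeurs propres croissantes
    U = vecs[:, -k:][:, ::-1]
    # Étape 4 : projection des données sur span(u1, ..., uk)
    return Xn @ U

X = np.array([[1.0, 2.0], [2.0, 4.1], [3.0, 6.0], [4.0, 7.9]])
print(pca(X, k=1).shape)  # (4, 1)
```

La variance de la projection vaut la plus grande valeur propre de Σ, d'où la maximisation de la variance évoquée ci-dessus.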
+ + +**64. [Data in feature space, Find principal components, Data in principal components space]** + +⟶ [Données dans l'espace initial, Trouve les composantes principales, Données dans l'espace des composantes principales] + +
+ + +**65. For a more detailed overview of the concepts above, check out the Unsupervised Learning cheatsheets!** + +⟶ Pour un aperçu plus détaillé des concepts ci-dessus, rendez-vous sur le pense-bête d'apprentissage non supervisé ! + +
+ + +**66. [Linear predictors, Feature vector, Linear classifier/regression, Margin]** + +⟶ [Prédicteurs linéaires, Vecteur caractéristique, Classification/régression linéaire, Marge] + +
+ + +**67. [Loss minimization, Loss function, Framework]** + +⟶ [Minimisation de la fonction objectif, Fonction objectif, Cadre] + +
+ + +**68. [Non-linear predictors, k-nearest neighbors, Neural networks]** + +⟶ [Prédicteurs non linéaires, k plus proches voisins, Réseaux de neurones] + +
+ + +**69. [Stochastic gradient descent, Gradient, Stochastic updates, Batch updates]** + +⟶ [Algorithme du gradient stochastique, Gradient, Mises à jour stochastiques, Mises à jour par lots] + +
+
+
+**71. [Unsupervised Learning, k-means, Principal components analysis]**
+
+⟶ [Apprentissage non supervisé, k-moyennes, Analyse des composantes principales]
+
+<br>
+ + +**71. [Unsupervised Learning, k-means, Principal components analysis]** + +⟶ [Apprentissage non supervisé, k-means, Analyse des composantes principales] + +
+ + +**72. View PDF version on GitHub** + +⟶ Voir la version PDF sur GitHub + +
+ + +**73. Original authors** + +⟶ Auteurs d'origine + +
+ + +**74. Translated by X, Y and Z** + +⟶ Traduit par X, Y et Z + +
+ + +**75. Reviewed by X, Y and Z** + +⟶ Revu par X, Y et Z + +
+ + +**76. By X and Y** + +⟶ De X et Y + +
+ + +**77. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶ Les pense-bêtes d'intelligence artificielle sont maintenant disponibles en français. diff --git a/fr/cs-221-states-models.md b/fr/cs-221-states-models.md new file mode 100644 index 000000000..20be6ebb7 --- /dev/null +++ b/fr/cs-221-states-models.md @@ -0,0 +1,980 @@ +**States-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-states-models) + +
+ +**1. States-based models with search optimization and MDP** + +⟶ Modèles basés sur les états : optimisation de parcours et MDPs + +
+ + +**2. Search optimization** + +⟶ Optimisation de parcours + +
+
+
+**3. In this section, we assume that by accomplishing action a from state s, we deterministically arrive in state Succ(s,a). The goal here is to determine a sequence of actions (a1,a2,a3,a4,...) that starts from an initial state and leads to an end state. In order to solve this kind of problem, our objective will be to find the minimum cost path by using states-based models.**
+
+⟶ Dans cette section, nous supposons qu'en effectuant une action a à partir d'un état s, on arrive de manière déterministe à l'état Succ(s,a). Le but de cette étude est de déterminer une séquence d'actions (a1,a2,a3,a4,...) démarrant d'un état initial et aboutissant à un état final. Pour y parvenir, notre objectif est de minimiser le coût associé à ces actions à l'aide de modèles basés sur les états (en anglais states-based models).
+
+<br>
+ + +**4. Tree search** + +⟶ Parcours d'arbre + +
+
+
+**5. This category of states-based algorithms explores all possible states and actions. It is quite memory efficient, and is suitable for huge state spaces but the runtime can become exponential in the worst cases.**
+
+⟶ Cette catégorie d'algorithmes explore tous les états et actions possibles. Leur consommation en mémoire est raisonnable et ils peuvent supporter des espaces d'états de très grande taille, mais ce type d'algorithmes est néanmoins susceptible d'engendrer des complexités en temps exponentielles dans le pire des cas.
+
+<br>
+ + +**6. [Self-loop, More than a parent, Cycle, More than a root, Valid tree]** + +⟶ [Boucle, Plus d'un parent, Cycle, Plus d'une racine, Arbre valide] + +
+ + +**7. [Search problem ― A search problem is defined with:, a starting state sstart, possible actions Actions(s) from state s, action cost Cost(s,a) from state s with action a, successor Succ(s,a) of state s after action a, whether an end state was reached IsEnd(s)]** + +⟶ [Problème de recherche - Un problème de recherche est défini par :, un état de départ sstart, des actions Actions(s) pouvant être effectuées depuis l'état s, le coût de l'action Cost(s,a) depuis l'état s pour effectuer l'action a, le successeur Succ(s,a) de l'état s après avoir effectué l'action a, la connaissance d'avoir atteint ou non un état final IsEnd(s)] + +
+ + +**8. The objective is to find a path that minimizes the cost.** + +⟶ L'objectif est de trouver un chemin minimisant le coût total des actions utilisées. + +
+
+
+**9. Backtracking search ― Backtracking search is a naive recursive algorithm that tries all possibilities to find the minimum cost path. Here, action costs can be either positive or negative.**
+
+⟶ Retour sur trace - L'algorithme de retour sur trace (en anglais backtracking search) est un algorithme récursif explorant naïvement toutes les possibilités jusqu'à trouver le chemin de coût minimal. Ici, les coûts des actions peuvent être positifs ou négatifs.
+
+<br>
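Une esquisse de retour sur trace sur un problème de recherche jouet (états, actions et coûts hypothétiques) :

```python
# Problème jouet : aller de l'état 0 à l'état 4 ; depuis s, l'action "+1"
# mène à s+1 (coût 2) et l'action "+2" mène à s+2 (coût 3).
def succ_and_cost(s):
    return [c for c in [("+1", s + 1, 2), ("+2", s + 2, 3)] if c[1] <= 4]

def backtracking(s):
    if s == 4:                       # IsEnd(s)
        return 0, []
    best = (float("inf"), None)
    for a, s2, cost in succ_and_cost(s):
        future, chemin = backtracking(s2)   # exploration récursive exhaustive
        if cost + future < best[0]:
            best = (cost + future, [a] + chemin)
    return best

print(backtracking(0))  # chemin de coût minimal : (6, ['+2', '+2'])
```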
+
+
+**10. Breadth-first search (BFS) ― Breadth-first search is a graph search algorithm that does a level-by-level traversal. We can implement it iteratively with the help of a queue that stores at each step future nodes to be visited. For this algorithm, we can assume action costs to be equal to a constant c⩾0.**
+
+⟶ Parcours en largeur (BFS) - L'algorithme de parcours en largeur (en anglais breadth-first search ou BFS) est un algorithme de parcours de graphe traversant chaque niveau de manière successive. On peut le coder de manière itérative à l'aide d'une file (en anglais queue) stockant à chaque étape les prochains nœuds à visiter. Cet algorithme suppose que le coût de toutes les actions est égal à une constante c⩾0.
+
+<br>
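Une esquisse de BFS sur un graphe jouet hypothétique : les coûts d'action étant constants, le premier chemin atteignant l'état final est celui qui comporte le moins d'arêtes.

```python
from collections import deque

graphe = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"], "E": []}

def bfs(depart, arrivee):
    file = deque([[depart]])        # file des chemins à étendre, niveau par niveau
    vus = {depart}
    while file:
        chemin = file.popleft()
        s = chemin[-1]
        if s == arrivee:
            return chemin
        for s2 in graphe[s]:
            if s2 not in vus:       # ne pas revisiter un état déjà rencontré
                vus.add(s2)
                file.append(chemin + [s2])
    return None

print(bfs("A", "E"))  # ['A', 'C', 'E']
```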
+ + +**11. Depth-first search (DFS) ― Depth-first search is a search algorithm that traverses a graph by following each path as deep as it can. We can implement it recursively, or iteratively with the help of a stack that stores at each step future nodes to be visited. For this algorithm, action costs are assumed to be equal to 0.** + +⟶ Parcours en profondeur (DFS) - L'algorithme de parcours en profondeur (en anglais depth-first search ou DFS) est un algorithme de parcours de graphe traversant chaque chemin qu'il emprunte aussi loin que possible. On peut le coder de manière récursive, ou itérative à l'aide d'une pile qui stocke à chaque étape les prochains nœuds à visiter. Cet algorithme suppose que le coût de toutes les actions est égal à 0. + +
+ + +**12. Iterative deepening ― The iterative deepening trick is a modification of the depth-first search algorithm so that it stops after reaching a certain depth, which guarantees optimality when all action costs are equal. Here, we assume that action costs are equal to a constant c⩾0.** + +⟶ Approfondissement itératif - L'astuce de l'approfondissement itératif (en anglais iterative deepening) est une modification de l'algorithme de DFS qui l'arrête après avoir atteint une certaine profondeur, garantissant l'optimalité de la solution trouvée quand toutes les actions ont un même coût constant c⩾0. + +
+ + +**13. Tree search algorithms summary ― By noting b the number of actions per state, d the solution depth, and D the maximum depth, we have:** + +⟶ Récapitulatif des algorithmes de parcours d'arbre - En notant b le nombre d'actions par état, d la profondeur de la solution et D la profondeur maximale, on a : + +
+ + +**14. [Algorithm, Action costs, Space, Time]** + +⟶ [Algorithme, Coût des actions, Espace, Temps] + +
+ + +**15. [Backtracking search, any, Breadth-first search, Depth-first search, DFS-Iterative deepening]** + +⟶ [Retour sur trace, peu importe, Parcours en largeur, Parcours en profondeur, DFS-approfondissement itératif] + +
+ + +**16. Graph search** + +⟶ Parcours de graphe + +
+ + +**17. This category of states-based algorithms aims at constructing optimal paths, enabling exponential savings. In this section, we will focus on dynamic programming and uniform cost search.** + +⟶ Cette catégorie d'algorithmes basés sur les états vise à trouver des chemins optimaux avec une complexité moins grande qu'exponentielle. Dans cette section, nous allons nous concentrer sur la programmation dynamique et la recherche à coût uniforme. + +
+
+
+**18. Graph ― A graph is comprised of a set of vertices V (also called nodes) as well as a set of edges E (also called links).**
+
+⟶ Graphe - Un graphe se compose d'un ensemble de sommets V (aussi appelés nœuds) et d'arêtes E (appelés arcs lorsque le graphe est orienté).
+
+<br>
+
+
+**19. Remark: a graph is said to be acyclic when there is no cycle.**
+
+⟶ Remarque : un graphe est dit acyclique lorsqu'il ne contient pas de cycle.
+
+<br>
+ + +**20. State ― A state is a summary of all past actions sufficient to choose future actions optimally.** + +⟶ État - Un état contient le résumé des actions passées suffisant pour choisir les actions futures de manière optimale. + +
+ + +**21. Dynamic programming ― Dynamic programming (DP) is a backtracking search algorithm with memoization (i.e. partial results are saved) whose goal is to find a minimum cost path from state s to an end state send. It can potentially have exponential savings compared to traditional graph search algorithms, and has the property to only work for acyclic graphs. For any given state s, the future cost is computed as follows:** + +⟶ Programmation dynamique - La programmation dynamique (en anglais dynamic programming ou DP) est un algorithme de recherche de type retour sur trace qui utilise le principe de mémoïsation (i.e. les résultats intermédiaires sont enregistrés) et ayant pour but de trouver le chemin à coût minimal allant de l'état s à l'état final send. Cette procédure peut potentiellement engendrer des économies exponentielles si on la compare aux algorithmes de parcours de graphe traditionnels, et a la propriété de ne marcher que dans le cas de graphes acycliques. Pour un état s donné, le coût futur est calculé de la manière suivante : + +
+ + +**22. [if, otherwise]** + +⟶ [si, sinon] + +
+ + +**23. Remark: the figure above illustrates a bottom-to-top approach whereas the formula provides the intuition of a top-to-bottom problem resolution.** + +⟶ Remarque : la figure ci-dessus illustre une approche ascendante alors que la formule nous donne l'intuition d'une résolution avec une approche descendante. + +
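La récurrence ci-dessus peut s'esquisser en Python sur un graphe acyclique jouet (états et coûts hypothétiques), la mémoïsation étant assurée ici par functools.lru_cache :

```python
import functools

successeurs = {  # s -> liste de (Succ(s, a), Cost(s, a))
    "s": [("a", 1), ("b", 4)],
    "a": [("b", 2), ("end", 6)],
    "b": [("end", 1)],
    "end": [],
}

@functools.lru_cache(maxsize=None)
def future_cost(s):
    # FutureCost(s) = 0 si IsEnd(s), sinon min sur a de Cost(s,a) + FutureCost(Succ(s,a))
    if s == "end":
        return 0
    return min(cost + future_cost(s2) for s2, cost in successeurs[s])

print(future_cost("s"))  # 4 : s -> a -> b -> end
```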
+ + +**24. Types of states ― The table below presents the terminology when it comes to states in the context of uniform cost search:** + +⟶ Types d'états - La table ci-dessous présente la terminologie relative aux états dans le contexte de la recherche à coût uniforme : + +
+ + +**25. [State, Explanation]** + +⟶ [État, Explication] + +
+ + +**26. [Explored, Frontier, Unexplored]** + +⟶ [Exploré, Frontière, Inexploré] + +
+ + +**27. [States for which the optimal path has already been found, States seen for which we are still figuring out how to get there with the cheapest cost, States not seen yet]** + +⟶ [États pour lesquels le chemin optimal a déjà été trouvé, États rencontrés mais pour lesquels on se demande toujours comment s'y rendre avec un coût minimal, États non rencontrés jusqu'à présent] + +
+ + +**28. Uniform cost search ― Uniform cost search (UCS) is a search algorithm that aims at finding the shortest path from a state sstart to an end state send. It explores states s in increasing order of PastCost(s) and relies on the fact that all action costs are non-negative.** + +⟶ Recherche à coût uniforme - La recherche à coût uniforme (uniform cost search ou UCS en anglais) est un algorithme de recherche qui a pour but de trouver le chemin le plus court entre les états sstart et send. Celui-ci explore les états s en les triant par coût croissant de PastCost(s) et repose sur le fait que toutes les actions ont un coût non négatif. + +
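Une esquisse d'UCS sur un graphe jouet hypothétique, à l'aide d'un tas de priorités : les états sortent de la frontière par PastCost croissant.

```python
import heapq

graphe = {  # s -> liste de (s', Cost(s, a)), coûts non négatifs
    "start": [("a", 1), ("b", 5)],
    "a": [("b", 1), ("end", 10)],
    "b": [("end", 1)],
    "end": [],
}

def ucs(depart, arrivee):
    frontiere = [(0, depart)]           # tas de priorités (PastCost, état)
    explores = set()
    while frontiere:
        past_cost, s = heapq.heappop(frontiere)
        if s in explores:
            continue
        explores.add(s)                 # ici past_cost = PastCost(s) (théorème)
        if s == arrivee:
            return past_cost
        for s2, cost in graphe[s]:
            if s2 not in explores:
                heapq.heappush(frontiere, (past_cost + cost, s2))
    return float("inf")

print(ucs("start", "end"))  # 3 : start -> a -> b -> end
```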
+ + +**29. Remark 1: the UCS algorithm is logically equivalent to Dijkstra's algorithm.** + +⟶ Remarque 1 : UCS fonctionne de la même manière que l'algorithme de Dijkstra. + +
+
+
+**30. Remark 2: the algorithm would not work for a problem with negative action costs, and adding a positive constant to make them non-negative would not solve the problem since this would end up being a different problem.**
+
+⟶ Remarque 2 : cet algorithme ne marche pas sur une configuration contenant des actions à coût négatif. On pourrait penser à ajouter une constante positive à tous les coûts, mais cela ne résoudrait rien puisque le problème résultant serait différent.
+
+<br>
+ + +**31. Correctness theorem ― When a state s is popped from the frontier F and moved to explored set E, its priority is equal to PastCost(s) which is the minimum cost path from sstart to s.** + +⟶ Théorème de correction - Lorsqu'un état s passe de la frontière F à l'ensemble exploré E, sa priorité est égale à PastCost(s), représentant le chemin de coût minimal allant de sstart à s. + +
+ + +**32. Graph search algorithms summary ― By noting N the number of total states, n of which are explored before the end state send, we have:** + +⟶ Récapitulatif des algorithmes de parcours de graphe - En notant N le nombre total d'états dont n sont explorés avant l'état final send, on a : + +
+ + +**33. [Algorithm, Acyclicity, Costs, Time/space]** + +⟶ [Algorithme, Acyclicité, Coûts, Temps/Espace] + +
+ + +**34. [Dynamic programming, Uniform cost search]** + +⟶ [Programmation dynamique, Recherche à coût uniforme] + +
+
+
+**35. Remark: the complexity countdown supposes the number of possible actions per state to be constant.**
+
+⟶ Remarque : ce calcul de complexité suppose que le nombre d'actions possibles à partir de chaque état est constant.
+
+<br>
+ + +**36. Learning costs** + +⟶ Apprendre les coûts + +
+
+
+**37. Suppose we are not given the values of Cost(s,a), we want to estimate these quantities from a training set of minimizing-cost-path sequence of actions (a1,a2,...,ak).**
+
+⟶ Supposons que les valeurs de Cost(s,a) ne nous soient pas données. Nous souhaitons estimer ces quantités à partir d'un ensemble d'apprentissage de séquences d'actions (a1,a2,...,ak) formant des chemins à coût minimal.
+
+<br>
+ + +**38. [Structured perceptron ― The structured perceptron is an algorithm aiming at iteratively learning the cost of each state-action pair. At each step, it:, decreases the estimated cost of each state-action of the true minimizing path y given by the training data, increases the estimated cost of each state-action of the current predicted path y' inferred from the learned weights.]** + +⟶ [Perceptron structuré - L'algorithme du perceptron structuré vise à apprendre de manière itérative les coûts des paires état-action. À chaque étape, il :, fait décroître le coût estimé de chaque état-action du vrai chemin minimisant y donné par la base d'apprentissage, fait croître le coût estimé de chaque état-action du chemin y' prédit comme étant minimisant par les paramètres appris par l'algorithme.] + +
+
+
+**39. Remark: there are several versions of the algorithm, one of which simplifies the problem to only learning the cost of each action a, and the other parametrizes Cost(s,a) to a feature vector of learnable weights.**
+
+⟶ Remarque : plusieurs versions de cet algorithme existent, l'une d'elles réduisant ce problème à l'apprentissage du coût de chaque action a et l'autre paramétrant Cost(s,a) par un vecteur de caractéristiques aux coefficients apprenables.
+
+<br>
+ + +**40. A* search** + +⟶ Algorithme A* + +
+ + +**41. Heuristic function ― A heuristic is a function h over states s, where each h(s) aims at estimating FutureCost(s), the cost of the path from s to send.** + +⟶ Fonction heuristique - Une heuristique est une fonction h opérant sur les états s, où chaque h(s) vise à estimer FutureCost(s), le coût du chemin optimal allant de s à send. + +
+ + +**42. Algorithm ― A∗ is a search algorithm that aims at finding the shortest path from a state s to an end state send. It explores states s in increasing order of PastCost(s)+h(s). It is equivalent to a uniform cost search with edge costs Cost′(s,a) given by:** + +⟶ Algorithme - A* est un algorithme de recherche visant à trouver le chemin le plus court entre un état s et un état final send. Il le fait en explorant les états s triés par ordre croissant de PastCost(s)+h(s). Cela revient à utiliser l'algorithme UCS où chaque arête est associée au coût Cost′(s,a) donné par : + +
+ + +**43. Remark: this algorithm can be seen as a biased version of UCS exploring states estimated to be closer to the end state.** + +⟶ Remarque : cet algorithme peut être vu comme une version biaisée de UCS explorant les états estimés comme étant plus proches de l'état final. + +
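Une esquisse d'A* sur une grille jouet hypothétique, avec pour heuristique la distance de Manhattan (consistante pour des déplacements unitaires) : les états sont explorés par PastCost(s)+h(s) croissant.

```python
import heapq

def a_star(depart, arrivee, taille=5):
    def h(s):  # heuristique : distance de Manhattan jusqu'à l'arrivée
        return abs(s[0] - arrivee[0]) + abs(s[1] - arrivee[1])
    frontiere = [(h(depart), 0, depart)]   # tas trié par (PastCost + h, PastCost)
    explores = set()
    while frontiere:
        _, past_cost, s = heapq.heappop(frontiere)
        if s == arrivee:
            return past_cost
        if s in explores:
            continue
        explores.add(s)
        x, y = s
        for s2 in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= s2[0] < taille and 0 <= s2[1] < taille and s2 not in explores:
                heapq.heappush(frontiere, (past_cost + 1 + h(s2), past_cost + 1, s2))
    return float("inf")

print(a_star((0, 0), (3, 2)))  # coût minimal : 5 (distance de Manhattan)
```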
+ + +**44. [Consistency ― A heuristic h is said to be consistent if it satisfies the two following properties:, For all states s and actions a, The end state verifies the following:]** + +⟶ [Consistance - Une heuristique h est dite consistante si elle satisfait les deux propriétés suivantes :, Pour tous états s et actions a, L'état final vérifie la propriété :] + +
+ + +**45. Correctness ― If h is consistent, then A∗ returns the minimum cost path.** + +⟶ Correction - Si h est consistante, alors A* renvoie le chemin de coût minimal. + +
+ + +**46. Admissibility ― A heuristic h is said to be admissible if we have:** + +⟶ Admissibilité - Une heuristique est dite admissible si l'on a : + +
+ + +**47. Theorem ― Let h(s) be a given heuristic. We have:** + +⟶ Théorème - Soit h(s) une heuristique. On a : + +
+ + +**48. [consistent, admissible]** + +⟶ [consistante, admissible] + +
+ + +**49. Efficiency ― A* explores all states s satisfying the following equation:** + +⟶ Efficacité - A* explore les états s satisfaisant l'équation : + +
+ + +**50. Remark: larger values of h(s) is better as this equation shows it will restrict the set of states s going to be explored.** + +⟶ Remarque : avoir h(s) élevé est préférable puisque cette équation montre que le nombre d'états s à explorer est alors réduit. + +
+ + +**51. Relaxation** + +⟶ Relaxation + +
+ + +**52. It is a framework for producing consistent heuristics. The idea is to find closed-form reduced costs by removing constraints and use them as heuristics.** + +⟶ C'est un type de procédure permettant de produire des heuristiques consistantes. L'idée est de trouver une fonction de coût facile à exprimer en enlevant des contraintes au problème, et ensuite l'utiliser en tant qu'heuristique. + +
+
+
+**53. Relaxed search problem ― The relaxation of search problem P with costs Cost is noted Prel with costs Costrel, and satisfies the identity:**
+
+⟶ Relaxation d'un problème de recherche - La relaxation d'un problème de recherche P aux coûts Cost est notée Prel avec coûts Costrel, et vérifie la relation :
+
+<br>
+ + +**54. Relaxed heuristic ― Given a relaxed search problem Prel, we define the relaxed heuristic h(s)=FutureCostrel(s) as the minimum cost path from s to an end state in the graph of costs Costrel(s,a).** + +⟶ Relaxation d'une heuristique - Étant donné la relaxation d'un problème de recherche Prel, on définit l'heuristique relaxée h(s)=FutureCostrel(s) comme étant le chemin de coût minimal allant de s à un état final dans le graphe de fonction de coût Costrel(s,a). + +
+ + +**55. Consistency of relaxed heuristics ― Let Prel be a given relaxed problem. By theorem, we have:** + +⟶ Consistance de la relaxation d'heuristiques - Soit Prel une relaxation d'un problème de recherche. Par théorème, on a : + +
+ + +**56. consistent** + +⟶ consistante + +
+
+
+**57. [Tradeoff when choosing heuristic ― We have to balance two aspects in choosing a heuristic:, Computational efficiency: h(s)=FutureCostrel(s) must be easy to compute. It has to produce a closed form, easier search and independent subproblems., Good enough approximation: the heuristic h(s) should be close to FutureCost(s) and we have thus to not remove too many constraints.]**
+
+⟶ [Compromis lors du choix d'heuristique - Le choix d'heuristique repose sur un compromis entre :, Complexité de calcul : h(s)=FutureCostrel(s) doit être facile à calculer. De manière préférable, cette fonction peut s'exprimer de manière explicite et elle permet de diviser le problème en sous-parties indépendantes., Approximation suffisamment bonne : l'heuristique h(s) doit être proche de FutureCost(s), il ne faut donc pas enlever trop de contraintes.]
+
+<br>
+ + +**58. Max heuristic ― Let h1(s), h2(s) be two heuristics. We have the following property:** + +⟶ Heuristique max - Soient h1(s) et h2(s) deux heuristiques. On a la propriété suivante : + +
+ + +**59. Markov decision processes** + +⟶ Processus de décision markovien + +
+ + +**60. In this section, we assume that performing action a from state s can lead to several states s′1,s′2,... in a probabilistic manner. In order to find our way between an initial state and an end state, our objective will be to find the maximum value policy by using Markov decision processes that help us cope with randomness and uncertainty.** + +⟶ Dans cette section, on suppose qu'effectuer l'action a à partir de l'état s peut mener de manière probabiliste à plusieurs états s′1,s′2,... Dans le but de trouver ce qu'il faudrait faire entre un état initial et un état final, on souhaite trouver une politique maximisant la quantité de récompenses en utilisant un outil adapté à l'imprévisibilité et l'incertitude : les processus de décision markoviens. + +
+ + +**61. Notations** + +⟶ Notations + +
+ + +**62. [Definition ― The objective of a Markov decision process is to maximize rewards. It is defined with:, a starting state sstart, possible actions Actions(s) from state s, transition probabilities T(s,a,s′) from s to s′ with action a, rewards Reward(s,a,s′) from s to s′ with action a, whether an end state was reached IsEnd(s), a discount factor 0⩽γ⩽1]** + +⟶ [Définition - L'objectif d'un processus de décision markovien (en anglais Markov decision process ou MDP) est de maximiser la quantité de récompenses. Un tel problème est défini par :, un état de départ sstart, l'ensemble des actions Actions(s) pouvant être effectuées à partir de l'état s, la probabilité de transition T(s,a,s′) de l'état s vers l'état s' après avoir pris l'action a, la récompense Reward(s,a,s′) pour être passé de l'état s à l'état s' après avoir pris l'action a, la connaissance d'avoir atteint ou non un état final IsEnd(s), un facteur de dévaluation 0⩽γ⩽1] + +
+ + +**63. Transition probabilities ― The transition probability T(s,a,s′) specifies the probability of going to state s′ after action a is taken in state s. Each s′↦T(s,a,s′) is a probability distribution, which means that:** + +⟶ Probabilités de transitions - La probabilité de transition T(s,a,s′) représente la probabilité de transitionner vers l'état s' après avoir effectué l'action a en étant dans l'état s. Chaque s′↦T(s,a,s′) est une loi de probabilité : + +
+ + +**64. states** + +⟶ états + +
+ + +**65. Policy ― A policy π is a function that maps each state s to an action a, i.e.** + +⟶ Politique - Une politique π est une fonction liant chaque état s à une action a, i.e. : + +
+ + +**66. Utility ― The utility of a path (s0,...,sk) is the discounted sum of the rewards on that path. In other words,** + +⟶ Utilité - L'utilité d'un chemin (s0,...,sk) est la somme des récompenses dévaluées récoltées sur ce chemin. En d'autres termes, + +
+ + +**67. The figure above is an illustration of the case k=4.** + +⟶ La figure ci-dessus illustre le cas k=4. + +
+ + +**68. Q-value ― The Q-value of a policy π at state s with action a, also noted Qπ(s,a), is the expected utility from state s after taking action a and then following policy π. It is defined as follows:** + +⟶ Q-value - La fonction de valeur des états-actions (Q-value en anglais) d'une politique π évaluée à l'état s avec l'action a, aussi notée Qπ(s,a), est l'espérance de l'utilité partant de l'état s avec l'action a et adoptant ensuite la politique π. Cette fonction est définie par : + +
+ + +**69. Value of a policy ― The value of a policy π from state s, also noted Vπ(s), is the expected utility by following policy π from state s over random paths. It is defined as follows:** + +⟶ Fonction de valeur des états d'une politique - La fonction de valeur des états d'une politique π évaluée à l'état s, aussi notée Vπ(s), est l'espérance de l'utilité partant de l'état s et adoptant ensuite la politique π. Cette fonction est définie par : + +
+ + +**70. Remark: Vπ(s) is equal to 0 if s is an end state.** + +⟶ Remarque : Vπ(s) vaut 0 si s est un état final. + +
+ + +**71. Applications** + +⟶ Applications + +
+ + +**72. [Policy evaluation ― Given a policy π, policy evaluation is an iterative algorithm that aims at estimating Vπ. It is done as follows:, Initialization: for all states s, we have:, Iteration: for t from 1 to TPE, we have, with]** + +⟶ [Évaluation d'une politique - Étant donnée une politique π, on peut utiliser l'algorithme itératif d'évaluation de politiques (en anglais policy evaluation) pour estimer Vπ :, Initialisation : pour tous les états s, on a, Itération : pour t allant de 1 à TPE, on a, avec] + +
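To make the iteration above concrete, here is a minimal Python sketch of policy evaluation; the two-state toy MDP (transition probabilities, rewards, γ=0.9) is invented for illustration and is not part of the cheatsheet's material.

```python
# Toy MDP: states 0 and 1; state 2 is an end state (V = 0 there).
# transitions[s][a] = list of (s_next, prob, reward) -- invented numbers.
transitions = {
    0: {"go": [(1, 0.8, 4.0), (0, 0.2, 0.0)]},
    1: {"go": [(2, 1.0, 10.0)]},
}
gamma = 0.9
policy = {0: "go", 1: "go"}

V = {0: 0.0, 1: 0.0, 2: 0.0}        # initialization: V(s) = 0 for all s
for _ in range(50):                  # iterate t = 1..T_PE
    new_V = dict(V)
    for s, a in policy.items():
        # V_pi(s) = sum over s' of T(s,a,s') * (Reward(s,a,s') + gamma * V_pi(s'))
        new_V[s] = sum(p * (r + gamma * V[sp]) for sp, p, r in transitions[s][a])
    V = new_V

print(V[1])   # state 1 moves surely to the end state: V(1) = 10.0
```

Each sweep costs O(S S′) here, matching the O(T_PE·S·S′) complexity noted below.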
+ + +**73. Remark: by noting S the number of states, A the number of actions per state, S′ the number of successors and T the number of iterations, then the time complexity is of O(TPESS′).** + +⟶ Remarque : en notant S le nombre d'états, A le nombre d'actions par états, S' le nombre de successeurs et T le nombre d'itérations, la complexité en temps est alors de O(TPESS′). + +
+ + +**74. Optimal Q-value ― The optimal Q-value Qopt(s,a) of state s with action a is defined to be the maximum Q-value attained by any policy. It is computed as follows:** + +⟶ Q-value optimale - La Q-value optimale Qopt(s,a) d'un état s avec l'action a est définie comme étant la Q-value maximale atteinte avec n'importe quelle politique. Elle est calculée avec la formule : + +
+ + +**75. Optimal value ― The optimal value Vopt(s) of state s is defined as being the maximum value attained by any policy. It is computed as follows:** + +⟶ Valeur optimale - La valeur optimale Vopt(s) d'un état s est définie comme étant la valeur maximum atteinte par n'importe quelle politique. Elle est calculée avec la formule : + +
+ + +**76. actions** + +⟶ actions + +
+ + +**77. Optimal policy ― The optimal policy πopt is defined as being the policy that leads to the optimal values. It is defined by:** + +⟶ Politique optimale - La politique optimale πopt est définie comme étant la politique liée aux valeurs optimales. Elle est définie par : + +
+ + +**78. [Value iteration ― Value iteration is an algorithm that finds the optimal value Vopt as well as the optimal policy πopt. It is done as follows:, Initialization: for all states s, we have:, Iteration: for t from 1 to TVI, we have:, with]** + +⟶ [Itération sur la valeur - L'algorithme d'itération sur la valeur (en anglais value iteration) vise à trouver la valeur optimale Vopt ainsi que la politique optimale πopt en deux temps :, Initialisation : pour tout état s, on a, Itération : pour t allant de 1 à TVI, on a, avec] + +
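The two steps of value iteration can be sketched as follows; the two-state MDP below is made up for the example (γ=1 with an acyclic graph, so convergence is guaranteed per the remark that follows).

```python
# Invented toy MDP: transitions[s][a] = list of (s_next, prob, reward);
# state 2 is terminal.
transitions = {
    0: {"safe":  [(2, 1.0, 1.0)],                   # small sure reward, then end
        "risky": [(1, 0.5, 0.0), (2, 0.5, 0.0)]},
    1: {"go":    [(2, 1.0, 10.0)]},
}
gamma = 1.0
states = [0, 1]

V = {0: 0.0, 1: 0.0, 2: 0.0}         # initialization
for _ in range(20):                   # iterate t = 1..T_VI
    for s in states:
        # V_opt(s) = max over a of sum_{s'} T(s,a,s') (Reward + gamma V_opt(s'))
        V[s] = max(
            sum(p * (r + gamma * V[sp]) for sp, p, r in acts)
            for acts in transitions[s].values()
        )

# Extract the optimal (greedy) policy from V_opt:
policy = {
    s: max(transitions[s],
           key=lambda a: sum(p * (r + gamma * V[sp])
                             for sp, p, r in transitions[s][a]))
    for s in states
}
print(V[0], policy[0])   # 5.0 risky
```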
+ + +**79. Remark: if we have either γ<1 or the MDP graph being acyclic, then the value iteration algorithm is guaranteed to converge to the correct answer.** + +⟶ Remarque : si γ<1 ou si le graphe associé au processus de décision markovien est acyclique, alors l'algorithme d'itération sur la valeur est garanti de converger vers la bonne solution. + +
+ + +**80. When unknown transitions and rewards** + +⟶ Cas des transitions et récompenses inconnues + +
+ + +**81. Now, let's assume that the transition probabilities and the rewards are unknown.** + +⟶ On suppose maintenant que les probabilités de transition et les récompenses sont inconnues. + +
+ + +**82. Model-based Monte Carlo ― The model-based Monte Carlo method aims at estimating T(s,a,s′) and Reward(s,a,s′) using Monte Carlo simulation with:** + +⟶ Monte-Carlo basé sur modèle - La méthode de Monte-Carlo basée sur modèle (en anglais model-based Monte Carlo) vise à estimer T(s,a,s′) et Reward(s,a,s′) en utilisant des simulations de Monte-Carlo avec : + +
+ + +**83. [# times (s,a,s′) occurs, and]** + +⟶ [# de fois où (s,a,s′) se produit, et] + +
+ + +**84. These estimations will be then used to deduce Q-values, including Qπ and Qopt.** + +⟶ Ces estimations sont ensuite utilisées pour trouver les Q-values, ainsi que Qπ et Qopt. + +
+ + +**85. Remark: model-based Monte Carlo is said to be off-policy, because the estimation does not depend on the exact policy.** + +⟶ Remarque : la méthode de Monte-Carlo basée sur modèle est dite "hors politique" (en anglais "off-policy") car l'estimation produite ne dépend pas de la politique utilisée. + +
+ + +**86. Model-free Monte Carlo ― The model-free Monte Carlo method aims at directly estimating Qπ, as follows:** + +⟶ Monte-Carlo sans modèle - La méthode de Monte-Carlo sans modèle (en anglais model-free Monte Carlo) vise à directement estimer Qπ de la manière suivante : + +
+ + +**87. Qπ(s,a)=average of ut where st−1=s,at=a** + +⟶ Qπ(s,a)=moyenne de ut où st−1=s,at=a + +
+ + +**88. where ut denotes the utility starting at step t of a given episode.** + +⟶ où ut désigne l'utilité à partir de l'étape t d'un épisode donné. + +
+ + +**89. Remark: model-free Monte Carlo is said to be on-policy, because the estimated value is dependent on the policy π used to generate the data.** + +⟶ Remarque : la méthode de Monte-Carlo sans modèle est dite "sur politique" (en anglais "on-policy") car l'estimation produite dépend de la politique π utilisée pour générer les données. + +
+ + +**90. Equivalent formulation - By introducing the constant η=1/(1+(#updates to (s,a))) and for each (s,a,u) of the training set, the update rule of model-free Monte Carlo has a convex combination formulation:** + +⟶ Formulation équivalente - En introduisant la constante η=1/(1+(#mises à jour de (s,a))) et pour chaque triplet (s,a,u) de la base d'apprentissage, la formule de récurrence de la méthode de Monte-Carlo sans modèle s'écrit à l'aide de la combinaison convexe : + +
+ + +**91. as well as a stochastic gradient formulation:** + +⟶ ainsi qu'une formulation de type gradient stochastique : + +
+ + +**92. SARSA ― State-action-reward-state-action (SARSA) is a bootstrapping method estimating Qπ by using both raw data and estimates as part of the update rule. For each (s,a,r,s′,a′), we have:** + +⟶ SARSA - État-action-récompense-état-action (en anglais state-action-reward-state-action ou SARSA) est une méthode de bootstrap qui estime Qπ en utilisant à la fois des données réelles et estimées dans sa formule de mise à jour. Pour chaque (s,a,r,s′,a′), on a : + +
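A single SARSA update can be sketched in a few lines; the learning rate η, discount γ and the state/action names are illustrative choices, not part of the cheatsheet.

```python
# One SARSA update on a dict of Q-value estimates (bootstrapped target).
def sarsa_update(Q, s, a, r, s_next, a_next, eta=0.1, gamma=0.9):
    """Q(s,a) <- (1 - eta) Q(s,a) + eta (r + gamma Q(s',a'))."""
    target = r + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = (1 - eta) * Q.get((s, a), 0.0) + eta * target

Q = {}
sarsa_update(Q, "s0", "a0", 1.0, "s1", "a1")
print(Q[("s0", "a0")])   # 0.9 * 0 + 0.1 * (1.0 + 0.9 * 0) = 0.1
```

Note that the target mixes a raw observation r with the current estimate Q(s′,a′), which is what allows the on-the-fly updates mentioned in the remark below.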
+ + +**93. Remark: the SARSA estimate is updated on the fly as opposed to the model-free Monte Carlo one where the estimate can only be updated at the end of the episode.** + +⟶ Remarque : l'estimation donnée par SARSA est mise à jour à la volée contrairement à celle donnée par la méthode de Monte-Carlo sans modèle où la mise à jour est uniquement effectuée à la fin de l'épisode. + +
+ + +**94. Q-learning ― Q-learning is an off-policy algorithm that produces an estimate for Qopt. On each (s,a,r,s′,a′), we have:** + +⟶ Q-learning - Le Q-apprentissage (en anglais Q-learning) est un algorithme hors politique (en anglais off-policy) donnant une estimation de Qopt. Pour chaque (s,a,r,s′,a′), on a : + +
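The Q-learning update differs from SARSA only in its target, which maximizes over next actions instead of using the action actually taken. A minimal sketch, with invented states, actions and hyperparameters:

```python
# One Q-learning update: off-policy, the target uses max over a'.
def q_learning_update(Q, actions, s, a, r, s_next, eta=0.5, gamma=1.0):
    """Q_opt(s,a) <- (1 - eta) Q_opt(s,a) + eta (r + gamma max_a' Q_opt(s',a'))."""
    v_next = max((Q.get((s_next, ap), 0.0) for ap in actions(s_next)), default=0.0)
    Q[(s, a)] = (1 - eta) * Q.get((s, a), 0.0) + eta * (r + gamma * v_next)

def actions(s):                      # hypothetical action set, same everywhere
    return ["left", "right"]

Q = {("s1", "right"): 4.0}
q_learning_update(Q, actions, "s0", "right", 2.0, "s1")
print(Q[("s0", "right")])   # 0.5 * 0 + 0.5 * (2.0 + 4.0) = 3.0
```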
+ + +**95. Epsilon-greedy ― The epsilon-greedy policy is an algorithm that balances exploration with probability ϵ and exploitation with probability 1−ϵ. For a given state s, the policy πact is computed as follows:** + +⟶ Epsilon-glouton - La politique epsilon-gloutonne (en anglais epsilon-greedy) est un algorithme essayant de trouver un compromis entre l'exploration avec probabilité ϵ et l'exploitation avec probabilité 1-ϵ. Pour un état s, la politique πact est calculée par : + +
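The epsilon-greedy policy itself is short enough to sketch directly; the Q-values and action names below are invented for the example.

```python
import random

# Explore with probability eps (random action), exploit otherwise (argmax of Q).
def epsilon_greedy(Q, actions, s, eps=0.1):
    if random.random() < eps:                        # exploration
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))   # exploitation

Q = {("s0", "up"): 1.0, ("s0", "down"): 3.0}
random.seed(0)
choices = [epsilon_greedy(Q, ["up", "down"], "s0", eps=0.1) for _ in range(100)]
print(choices.count("down"))   # mostly "down", with occasional random picks
```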
+ + +**96. [with probability, random from Actions(s)]** + +⟶ [avec probabilité, aléatoire venant d'Actions(s)] + +
+ + +**97. Game playing** + +⟶ Jeux + +
+ + +**98. In games (e.g. chess, backgammon, Go), other agents are present and need to be taken into account when constructing our policy.** + +⟶ Dans les jeux (e.g. échecs, backgammon, Go), d'autres agents sont présents et doivent être pris en compte au moment d'élaborer une politique. + +
+ + +**99. Game tree ― A game tree is a tree that describes the possibilities of a game. In particular, each node is a decision point for a player and each root-to-leaf path is a possible outcome of the game.** + +⟶ Arbre de jeu - Un arbre de jeu est un arbre détaillant toutes les issues possibles d'un jeu. En particulier, chaque noeud représente un point de décision pour un joueur et chaque chemin liant la racine à une des feuilles traduit une possible instance du jeu. + +
+ + +**100. [Two-player zero-sum game ― It is a game where each state is fully observed and such that players take turns. It is defined with:, a starting state sstart, possible actions Actions(s) from state s, successors Succ(s,a) from states s with actions a, whether an end state was reached IsEnd(s), the agent's utility Utility(s) at end state s, the player Player(s) who controls state s]** + +⟶ [Jeu à somme nulle à deux joueurs - C'est un type de jeu où chaque état est entièrement observé et où les joueurs jouent à tour de rôle. On le définit par :, un état de départ sstart, les actions possibles Actions(s) à partir de l'état s, les successeurs Succ(s,a) de l'état s après avoir effectué l'action a, la connaissance d'avoir atteint ou non un état final IsEnd(s), l'utilité de l'agent Utility(s) à l'état final s, le joueur Player(s) qui contrôle l'état s] + +
+ + +**101. Remark: we will assume that the utility of the agent has the opposite sign of the one of the opponent.** + +⟶ Remarque : nous supposerons que l'utilité de l'agent est de signe opposé à celle de son adversaire. + +
+ + +**102. [Types of policies ― There are two types of policies:, Deterministic policies, noted πp(s), which are actions that player p takes in state s., Stochastic policies, noted πp(s,a)∈[0,1], which are probabilities that player p takes action a in state s.]** + +⟶ [Types de politiques - Il y a deux types de politiques :, Les politiques déterministes, notées πp(s), qui représentent pour tout s l'action que le joueur p prend dans l'état s., Les politiques stochastiques, notées πp(s,a)∈[0,1], qui sont décrites pour tout s et a par la probabilité que le joueur p prenne l'action a dans l'état s.] + +
+ + +**103. Expectimax ― For a given state s, the expectimax value Vexptmax(s) is the maximum expected utility of any agent policy when playing with respect to a fixed and known opponent policy πopp. It is computed as follows:** + +⟶ Expectimax - Pour un état donné s, la valeur d'expectimax Vexptmax(s) est l'espérance maximale de l'utilité sur l'ensemble des politiques de l'agent lorsque celui-ci joue face à un adversaire de politique fixée et connue πopp. Cette valeur est calculée de la manière suivante : + +
+ + +**104. Remark: expectimax is the analog of value iteration for MDPs.** + +⟶ Remarque : expectimax est l'analogue de l'algorithme d'itération sur la valeur pour les MDPs. + +
+ + +**105. Minimax ― The goal of minimax policies is to find an optimal policy against an adversary by assuming the worst case, i.e. that the opponent is doing everything to minimize the agent's utility. It is done as follows:** + +⟶ Minimax - Le but des politiques minimax est de trouver une politique optimale contre un adversaire dont on suppose qu'il joue au pire, i.e. qu'il fait tout pour minimiser l'utilité de l'agent. La valeur correspondante est calculée par : + +
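The minimax recursion can be sketched on a tiny hand-built game tree (a nested list whose leaves carry the agent's utility); the tree below is invented for the example.

```python
# Minimax value: the agent maximizes, the opponent minimizes.
def minimax(node, player):
    if isinstance(node, (int, float)):          # IsEnd(s): leaf utility
        return node
    values = [minimax(child, "opp" if player == "agent" else "agent")
              for child in node]
    return max(values) if player == "agent" else min(values)

tree = [[3, 5], [2, 9]]          # depth-2 tree: agent moves, then opponent
print(minimax(tree, "agent"))    # max(min(3, 5), min(2, 9)) = max(3, 2) = 3
```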
+ + +**106. Remark: we can extract πmax and πmin from the minimax value Vminimax.** + +⟶ Remarque : on peut déduire πmax et πmin à partir de la valeur minimax Vminimax. + +
+ + +**107. Minimax properties ― By noting V the value function, there are 3 properties around minimax to have in mind:** + +⟶ Propriétés de minimax - En notant V la fonction de valeur, il y a 3 propriétés sur minimax qu'il faut avoir à l'esprit : + +
+ + +**108. Property 1: if the agent were to change its policy to any πagent, then the agent would be no better off.** + +⟶ Propriété 1 : si l'agent changeait sa politique en un quelconque πagent, alors il ne s'en sortirait pas mieux. + +
+ + +**109. Property 2: if the opponent changes its policy from πmin to πopp, then he will be no better off.** + +⟶ Propriété 2 : si son adversaire change sa politique de πmin à πopp, alors il ne s'en sortira pas mieux. + +
+ + +**110. Property 3: if the opponent is known to be not playing the adversarial policy, then the minimax policy might not be optimal for the agent.** + +⟶ Propriété 3 : si l'on sait que son adversaire ne joue pas les pires actions possibles, alors la politique minimax peut ne pas être optimale pour l'agent. + +
+ + +**111. In the end, we have the following relationship:** + +⟶ À la fin, on a la relation suivante : + +
+ + +**112. Speeding up minimax** + +⟶ Accélération de minimax + +
+ + +**113. Evaluation function ― An evaluation function is a domain-specific and approximate estimate of the value Vminimax(s). It is noted Eval(s).** + +⟶ Fonction d'évaluation - Une fonction d'évaluation estime de manière approximative la valeur Vminimax(s) selon les paramètres du problème. Elle est notée Eval(s). + +
+ + +**114. Remark: FutureCost(s) is an analogy for search problems.** + +⟶ Remarque : l'analogue de cette fonction utilisé dans les problèmes de recherche est FutureCost(s). + +
+ + +**115. Alpha-beta pruning ― Alpha-beta pruning is a domain-general exact method optimizing the minimax algorithm by avoiding the unnecessary exploration of parts of the game tree. To do so, each player keeps track of the best value they can hope for (stored in α for the maximizing player and in β for the minimizing player). At a given step, the condition β<α means that the optimal path is not going to be in the current branch as the earlier player had a better option at their disposal.** + +⟶ Élagage alpha-bêta - L'élagage alpha-bêta (en anglais alpha-beta pruning) est une méthode exacte d'optimisation employée sur l'algorithme de minimax et a pour but d'éviter l'exploration de parties inutiles de l'arbre de jeu. Pour ce faire, chaque joueur garde en mémoire la meilleure valeur qu'il puisse espérer (appelée α chez le joueur maximisant et β chez le joueur minimisant). À une étape donnée, la condition β<α signifie que le chemin optimal ne peut pas passer par la branche actuelle puisque le joueur qui précédait avait une meilleure option à sa disposition. + +
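A minimal sketch of alpha-beta pruning on the same style of tiny nested-list game tree (invented for the example); it returns the exact minimax value while skipping branches once β⩽α.

```python
# Alpha-beta pruning: alpha = best value the max player can guarantee so far,
# beta = best value the min player can guarantee so far.
def alphabeta(node, player, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):           # leaf utility
        return node
    if player == "max":
        v = float("-inf")
        for child in node:
            v = max(v, alphabeta(child, "min", alpha, beta))
            alpha = max(alpha, v)
            if beta <= alpha:     # the min player already had a better option
                break             # prune the remaining children
        return v
    v = float("inf")
    for child in node:
        v = min(v, alphabeta(child, "max", alpha, beta))
        beta = min(beta, v)
        if beta <= alpha:
            break
    return v

tree = [[3, 5], [2, 9]]
print(alphabeta(tree, "max"))   # 3; the leaf 9 is never examined (2 <= alpha=3)
```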
+ + +**116. TD learning ― Temporal difference (TD) learning is used when we don't know the transitions/rewards. The value is based on exploration policy. To be able to use it, we need to know rules of the game Succ(s,a). For each (s,a,r,s′), the update is done as follows:** + +⟶ TD learning - L'apprentissage par différence de temps (en anglais temporal difference learning ou TD learning) est une méthode utilisée lorsque l'on ne connaît pas les transitions/récompenses. La valeur est alors basée sur la politique d'exploration. Pour pouvoir l'utiliser, on a besoin de connaître les règles du jeu Succ(s,a). Pour chaque (s,a,r,s′), la mise à jour des coefficients est faite de la manière suivante : + +
+ + +**117. Simultaneous games** + +⟶ Jeux simultanés + +
+ + +**118. Contrary to turn-based games, there is no ordering on the players' moves.** + +⟶ Contrairement aux jeux joués tour à tour, il n'y a pas ici d'ordre prédéterminé sur les mouvements des joueurs. + +
+ + +**119. Single-move simultaneous game ― Let there be two players A and B, with given possible actions. We note V(a,b) to be A's utility if A chooses action a, B chooses action b. V is called the payoff matrix.** + +⟶ Jeu simultané à un mouvement - Soient deux joueurs A et B, munis chacun d'un ensemble d'actions possibles. On note V(a,b) l'utilité de A si A choisit l'action a et B l'action b. V est appelée la matrice de profit (en anglais payoff matrix). + +
+ + +**120. [Strategies ― There are two main types of strategies:, A pure strategy is a single action:, A mixed strategy is a probability distribution over actions:]** + +⟶ [Stratégies - Il y a principalement deux types de stratégies :, Une stratégie pure est une seule action, Une stratégie mixte est une loi de probabilité sur les actions :] + +
+ + +**121. Game evaluation ― The value of the game V(πA,πB) when player A follows πA and player B follows πB is such that:** + +⟶ Évaluation de jeu - La valeur d'un jeu V(πA,πB) quand le joueur A suit πA et le joueur B suit πB est telle que : + +
+ + +**122. Minimax theorem ― By noting πA,πB ranging over mixed strategies, for every simultaneous two-player zero-sum game with a finite number of actions, we have:** + +⟶ Théorème Minimax - Soient πA et πB des stratégies mixtes. Pour chaque jeu à somme nulle à deux joueurs ayant un nombre fini d'actions, on a : + +
+ + +**123. Non-zero-sum games** + +⟶ Jeux à somme non nulle + +
+ + +**124. Payoff matrix ― We define Vp(πA,πB) to be the utility for player p.** + +⟶ Matrice de profit - On définit Vp(πA,πB) l'utilité du joueur p. + +
+ + +**125. Nash equilibrium ― A Nash equilibrium is (π∗A,π∗B) such that no player has an incentive to change its strategy. We have:** + +⟶ Équilibre de Nash - Un équilibre de Nash est défini par (π∗A,π∗B) tel qu'aucun joueur n'a d'intérêt de changer sa stratégie. On a : + +
+ + +**126. and** + +⟶ et + +
+ + +**127. Remark: in any finite-player game with finite number of actions, there exists at least one Nash equilibrium.** + +⟶ Remarque : dans un jeu à nombre de joueurs et d'actions finis, il existe au moins un équilibre de Nash. + +
+ + +**128. [Tree search, Backtracking search, Breadth-first search, Depth-first search, Iterative deepening]** + +⟶ [Parcours d'arbre, Retour sur trace, Parcours en largeur, Parcours en profondeur, Approfondissement itératif] + +
+ + +**129. [Graph search, Dynamic programming, Uniform cost search]** + +⟶ [Parcours de graphe, Programmation dynamique, Recherche à coût uniforme] + +
+ + +**130. [Learning costs, Structured perceptron]** + +⟶ [Apprendre les coûts, Perceptron structuré] + +
+ + +**131. [A star search, Heuristic function, Algorithm, Consistency, correctness, Admissibility, efficiency]** + +⟶ [A étoile, Fonction heuristique, Algorithme, Consistance, Correction, Admissibilité, Efficacité] + +
+ + +**132. [Relaxation, Relaxed search problem, Relaxed heuristic, Max heuristic]** + +⟶ [Relaxation, Relaxation d'un problème de recherche, Relaxation d'une heuristique, Heuristique max] + +
+ + +**133. [Markov decision processes, Overview, Policy evaluation, Value iteration, Transitions, rewards]** + +⟶ [Processus de décision markovien, Aperçu, Évaluation d'une politique, Itération sur la valeur, Transitions, Récompenses] + +
+ + +**134. [Game playing, Expectimax, Minimax, Speeding up minimax, Simultaneous games, Non-zero-sum games]** + +⟶ [Jeux, Expectimax, Minimax, Accélération de minimax, Jeux simultanés, Jeux à somme non nulle] + +
+ + +**135. View PDF version on GitHub** + +⟶ Voir la version PDF sur GitHub. + +
+ + +**136. Original authors** + +⟶ Auteurs d'origine. + +
+ + +**137. Translated by X, Y and Z** + +⟶ Traduit de l'anglais par X, Y et Z. + +
+ + +**138. Reviewed by X, Y and Z** + +⟶ Revu par X, Y et Z. + +
+ + +**139. By X and Y** + +⟶ De X et Y. + +
+ + +**140. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶ Les pense-bêtes d'intelligence artificielle sont maintenant disponibles en français ! diff --git a/fr/cs-221-variables-models.md b/fr/cs-221-variables-models.md new file mode 100644 index 000000000..9c802583b --- /dev/null +++ b/fr/cs-221-variables-models.md @@ -0,0 +1,617 @@ +**Variables-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-variables-models) + +
+ +**1. Variables-based models with CSP and Bayesian networks** + +⟶ Modèles basés sur les variables : CSP et réseaux bayésiens + +
+ + +**2. Constraint satisfaction problems** + +⟶ Problèmes de satisfaction de contraintes + +
+ + +**3. In this section, our objective is to find maximum weight assignments of variable-based models. One advantage compared to states-based models is that these algorithms are more convenient to encode problem-specific constraints.** + +⟶ Dans cette section, notre but est de trouver des affectations de poids maximisants dans des problèmes impliquant des modèles basés sur les variables. Un avantage comparé aux modèles basés sur les états est que ces algorithmes sont plus commodes lorsqu'il s'agit de transcrire des contraintes spécifiques à certains problèmes. + +
+ + +**4. Factor graphs** + +⟶ Graphes de facteurs + +
+ + +**5. Definition ― A factor graph, also referred to as a Markov random field, is a set of variables X=(X1,...,Xn) where Xi∈Domaini and m factors f1,...,fm with each fj(X)⩾0.** + +⟶ Définition - Un graphe de facteurs, aussi appelé champ aléatoire de Markov, est un ensemble de variables X=(X1,...,Xn) où Xi∈Domaini muni de m facteurs f1,...,fm où chaque fj(X)⩾0. + +
+ + +**6. Domain** + +⟶ Domaine + +
+ + +**7. Scope and arity ― The scope of a factor fj is the set of variables it depends on. The size of this set is called the arity.** + +⟶ Portée et arité - La portée (en anglais scope) d'un facteur fj est l'ensemble des variables dont il dépend. La taille de cet ensemble est appelée l'arité de fj. + +
+ + +**8. Remark: factors of arity 1 and 2 are called unary and binary respectively.** + +⟶ Remarque : les facteurs d'arité 1 et 2 sont respectivement appelés unaire et binaire. + +
+ + +**9. Assignment weight ― Each assignment x=(x1,...,xn) yields a weight Weight(x) defined as being the product of all factors fj applied to that assignment. Its expression is given by:** + +⟶ Affectation de poids - Chaque affectation x=(x1,...,xn) donne un poids Weight(x) défini comme étant le produit de tous les facteurs fj appliqués à cette affectation. Son expression est donnée par : + +
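The product-of-factors definition can be sketched on a tiny two-variable factor graph; the three factors below (two unary, one binary) are invented for illustration.

```python
# Weight(x) = product of all factors f_j evaluated at assignment x.
factors = [
    lambda x: 2.0 if x[0] == 1 else 1.0,        # unary factor on X1
    lambda x: 3.0 if x[1] == 0 else 1.0,        # unary factor on X2
    lambda x: 1.0 if x[0] != x[1] else 0.5,     # binary factor on (X1, X2)
]

def weight(x):
    w = 1.0
    for f in factors:
        w *= f(x)
    return w

print(weight((1, 0)))   # 2.0 * 3.0 * 1.0 = 6.0
```

If every factor returned only 0 or 1, this graph would be a CSP and `weight(x) == 1` would mean the assignment is consistent, as defined next.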
+ + +**10. Constraint satisfaction problem ― A constraint satisfaction problem (CSP) is a factor graph where all factors are binary; we call them to be constraints:** + +⟶ Problème de satisfaction de contraintes - Un problème de satisfaction de contraintes (en anglais constraint satisfaction problem ou CSP) est un graphe de facteurs où tous les facteurs sont binaires ; on les appelle "contraintes". + +
+ + +**11. Here, the constraint j with assignment x is said to be satisfied if and only if fj(x)=1.** + +⟶ Ici, on dit que l'affectation x satisfait la contrainte j si et seulement si fj(x)=1. + +
+ + +**12. Consistent assignment ― An assignment x of a CSP is said to be consistent if and only if Weight(x)=1, i.e. all constraints are satisfied.** + +⟶ Affectation consistante - Une affectation x d'un CSP est dite consistante si et seulement si Weight(x)=1, i.e. toutes les contraintes sont satisfaites. + +
+ + +**13. Dynamic ordering** + +⟶ Mise en ordre dynamique + +
+ + +**14. Dependent factors ― The set of dependent factors of variable Xi with partial assignment x is called D(x,Xi), and denotes the set of factors that link Xi to already assigned variables.** + +⟶ Facteurs dépendants - L'ensemble des facteurs dépendants de la variable Xi dont l'affectation partielle est x est appelé D(x,Xi) et désigne l'ensemble des facteurs liant Xi à des variables déjà affectées. + +
+ + +**15. Backtracking search ― Backtracking search is an algorithm used to find maximum weight assignments of a factor graph. At each step, it chooses an unassigned variable and explores its values by recursion. Dynamic ordering (i.e. choice of variables and values) and lookahead (i.e. early elimination of inconsistent options) can be used to explore the graph more efficiently, although the worst-case runtime stays exponential: O(|Domain|n).** + +⟶ Recherche avec retour sur trace - L'algorithme de recherche avec retour sur trace (en anglais backtracking search) est utilisé pour trouver l'affectation de poids maximum d'un graphe de facteurs. À chaque étape, une variable non assignée est choisie et ses valeurs sont explorées par récursivité. On peut utiliser un processus de mise en ordre dynamique sur le choix des variables et valeurs et/ou d'anticipation (i.e. élimination précoce d'options non consistantes) pour explorer le graphe de manière plus efficace. La complexité temporelle dans tous les cas reste néanmoins exponentielle : O(|Domaine|n). + +
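The recursion described above can be sketched on a tiny CSP: 3-coloring a triangle graph, with static variable ordering and no lookahead for simplicity. The graph and colors are invented for the example.

```python
# Backtracking search for a consistent assignment of a small CSP.
# Binary constraint: adjacent vertices must get different colors.
edges = [(0, 1), (1, 2), (0, 2)]
colors = ["R", "G", "B"]

def consistent(assignment, var, value):
    # All constraints linking var to already-assigned neighbors must hold.
    return all(assignment.get(other) != value
               for a, b in edges
               for other in ([b] if a == var else [a] if b == var else []))

def backtrack(assignment, variables):
    if len(assignment) == len(variables):
        return dict(assignment)
    var = variables[len(assignment)]            # static ordering for simplicity
    for value in colors:
        if consistent(assignment, var, value):
            assignment[var] = value
            result = backtrack(assignment, variables)
            if result is not None:
                return result
            del assignment[var]                 # undo the assignment, backtrack
    return None

solution = backtrack({}, [0, 1, 2])
print(solution)   # a proper 3-coloring, e.g. {0: 'R', 1: 'G', 2: 'B'}
```

Dynamic ordering (MCV/LCV) and forward checking, described next, would replace the static `variables[len(assignment)]` choice and prune domains eagerly.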
+ + +**16. [Forward checking ― It is a one-step lookahead heuristic that preemptively removes inconsistent values from the domains of neighboring variables. It has the following characteristics:, After assigning a variable Xi, it eliminates inconsistent values from the domains of all its neighbors., If any of these domains becomes empty, we stop the local backtracking search., If we un-assign a variable Xi, we have to restore the domain of its neighbors.]** + +⟶ [Vérification en avant - La vérification en avant (forward checking en anglais) est une heuristique d'anticipation à une étape qui enlève des variables voisines les valeurs impossibles de manière préemptive. Cette méthode a les caractéristiques suivantes :, Après l'affectation d'une variable Xi, les valeurs non consistantes sont éliminées du domaine de tous ses voisins., Si l'un de ces domaines devient vide, la recherche locale s'arrête., Si l'on enlève l'affectation d'une valeur Xi, on doit restaurer le domaine de ses voisins.] + +
+ + +**17. Most constrained variable ― It is a variable-level ordering heuristic that selects the next unassigned variable that has the fewest consistent values. This has the effect of making inconsistent assignments to fail earlier in the search, which enables more efficient pruning.** + +⟶ Variable la plus contrainte - L'heuristique de la variable la plus contrainte (en anglais most constrained variable ou MCV) sélectionne la prochaine variable sans affectation ayant le moins de valeurs consistantes. Cette procédure a pour effet de faire échouer les affectations impossibles plus tôt dans la recherche, permettant un élagage plus efficace. + +
+ + +**18. Least constrained value ― It is a value-level ordering heuristic that assigns the next value that yields the highest number of consistent values of neighboring variables. Intuitively, this procedure chooses first the values that are most likely to work.** + +⟶ Valeur la moins contraignante - L'heuristique de la valeur la moins contraignante (en anglais least constrained value ou LCV) sélectionne pour une variable donnée la prochaine valeur maximisant le nombre de valeurs consistantes chez les variables voisines. De manière intuitive, on peut dire que cette procédure choisit en premier les valeurs qui sont le plus susceptible de marcher. + +
+ + +**19. Remark: in practice, this heuristic is useful when all factors are constraints.** + +⟶ Remarque : en pratique, cette heuristique est utile quand tous les facteurs sont des contraintes. + +
+ + +**20. The example above is an illustration of the 3-color problem with backtracking search coupled with most constrained variable exploration and least constrained value heuristic, as well as forward checking at each step.** + +⟶ L'exemple ci-dessus est une illustration du problème de coloration de graphe à 3 couleurs en utilisant l'algorithme de recherche avec retour sur trace couplé avec les heuristiques de MCV, de LCV ainsi que de vérification en avant à chaque étape. + +
+ + +**21. [Arc consistency ― We say that arc consistency of variable Xl with respect to Xk is enforced when for each xl∈Domainl:, unary factors of Xl are non-zero, there exists at least one xk∈Domaink such that any factor between Xl and Xk is non-zero.]** + +⟶ [Arc-consistance - On dit que l'arc-consistance de la variable Xl par rapport à Xk est vérifiée lorsque pour tout xl∈Domainl :, les facteurs unaires de Xl sont non-nuls, il existe au moins un xk∈Domaink tel que n'importe quel facteur entre Xl et Xk est non nul.] + +
+ + +**22. AC-3 ― The AC-3 algorithm is a multi-step lookahead heuristic that applies forward checking to all relevant variables. After a given assignment, it performs forward checking and then successively enforces arc consistency with respect to the neighbors of variables for which the domain change during the process.** + +⟶ AC-3 - L'algorithme d'AC-3 est une heuristique qui applique le principe de vérification en avant à toutes les variables susceptibles d'être concernées. Après l'affectation d'une variable, cet algorithme effectue une vérification en avant et applique successivement l'arc-consistance avec tous les voisins de variables pour lesquels le domaine change. + +
+ + +**23. Remark: AC-3 can be implemented both iteratively and recursively.** + +⟶ Remarque : AC-3 peut être codé de manière itérative ou récursive. + +
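Une esquisse minimale (itérative) d'AC-3 en Python peut aider à fixer les idées. Les noms `revise` et `ac3`, la représentation des domaines par un dictionnaire de variable vers ensemble de valeurs et l'unique contrainte binaire `constraint` sont des hypothèses de simplification pour l'illustration :

```python
from collections import deque

def revise(domains, constraint, xi, xj):
    """Retire de Domain(xi) toute valeur sans support dans Domain(xj)."""
    removed = False
    for u in list(domains[xi]):
        if not any(constraint(u, v) for v in domains[xj]):
            domains[xi].remove(u)
            removed = True
    return removed

def ac3(domains, arcs, constraint):
    """Applique l'arc-consistance sur tous les arcs (xi, xj) donnés."""
    queue = deque(arcs)
    neighbors = {}
    for (xi, xj) in arcs:                 # xi est un voisin pointant vers xj
        neighbors.setdefault(xj, set()).add(xi)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraint, xi, xj):
            if not domains[xi]:
                return False              # domaine vide : pas de solution
            for xk in neighbors.get(xi, set()) - {xj}:
                queue.append((xk, xi))    # le domaine de xi a changé
    return True
```

Par exemple, pour une contrainte d'inégalité (coloration) avec Domain(A)={1} et Domain(B)={1,2}, l'algorithme réduit Domain(B) à {2}.

<br>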
+ + +**24. Approximate methods** + +⟶ Méthodes approximatives + +
+


**25. Beam search ― Beam search is an approximate algorithm that extends partial assignments of n variables of branching factor b=|Domain| by exploring the K top paths at each step. The beam size K∈{1,...,bn} controls the tradeoff between efficiency and accuracy. This algorithm has a time complexity of O(n⋅Kblog(Kb)).**

⟶ Recherche en faisceau - L'algorithme de recherche en faisceau (en anglais beam search) est une technique approximative qui étend les affectations partielles de n variables de facteur de branchement b=|Domain| en explorant les K meilleurs chemins qui s'offrent à chaque étape. La largeur du faisceau K∈{1,...,bn} détermine le compromis entre efficacité et précision de l'algorithme. Sa complexité en temps est de O(n⋅Kblog(Kb)).

<br>
+ + +**26. The example below illustrates a possible beam search of parameters K=2, b=3 and n=5.** + +⟶ L'exemple ci-dessous illustre une recherche en faisceau de paramètres K=2, b=3 et n=5. + +
+ + +**27. Remark: K=1 corresponds to greedy search whereas K→+∞ is equivalent to BFS tree search.** + +⟶ Remarque : K=1 correspond à la recherche gloutonne alors que K→+∞ est équivalent à effectuer un parcours en largeur. + +
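Une esquisse de recherche en faisceau en Python : la fonction `weight`, qui évalue le poids incrémental apporté par la dernière valeur d'une affectation partielle, est une hypothèse d'interface pour l'illustration :

```python
def beam_search(variables, domain, weight, K):
    """Recherche en faisceau : garde à chaque étape les K meilleures affectations partielles."""
    beam = [((), 1.0)]                    # couples (affectation partielle, poids cumulé)
    for _ in variables:
        candidates = []
        for assign, w in beam:
            for v in domain:              # facteur de branchement b = |Domain|
                extended = assign + (v,)
                candidates.append((extended, w * weight(extended)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam = candidates[:K]             # on ne garde que les K meilleurs chemins
    return beam
```

Avec K=1 on retrouve bien la recherche gloutonne ; un K très grand tend vers un parcours en largeur.

<br>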
+ + +**28. Iterated conditional modes ― Iterated conditional modes (ICM) is an iterative approximate algorithm that modifies the assignment of a factor graph one variable at a time until convergence. At step i, we assign to Xi the value v that maximizes the product of all factors connected to that variable.** + +⟶ Modes conditionnels itérés - L'algorithme des modes conditionnels itérés (en anglais iterated conditional modes ou ICM) est une technique itérative et approximative qui modifie l'affectation d'un graphe de facteurs une variable à la fois jusqu'à convergence. À l'étape i, Xi prend la valeur v qui maximise le produit de tous les facteurs connectés à cette variable. + +
+ + +**29. Remark: ICM may get stuck in local minima.** + +⟶ Remarque : il est possible qu'ICM reste bloqué dans un minimum local. + +
+ + +**30. [Gibbs sampling ― Gibbs sampling is an iterative approximate method that modifies the assignment of a factor graph one variable at a time until convergence. At step i:, we assign to each element u∈Domaini a weight w(u) that is the product of all factors connected to that variable, we sample v from the probability distribution induced by w and assign it to Xi.]** + +⟶ [Échantillonnage de Gibbs - La méthode d'échantillonnage de Gibbs (en anglais Gibbs sampling) est une technique itérative et approximative qui modifie les affectations d'un graphe de facteurs une variable à la fois jusqu'à convergence. À l'étape i :, on assigne à chaque élément u∈Domaini un poids w(u) qui est le produit de tous les facteurs connectés à cette variable, on échantillonne v de la loi de probabilité engendrée par w et on l'associe à Xi.] + +
+


**31. Remark: Gibbs sampling can be seen as the probabilistic counterpart of ICM. It has the advantage to be able to escape local minima in most cases.**

⟶ Remarque : la méthode d'échantillonnage de Gibbs peut être vue comme étant la version probabiliste de ICM. Cette méthode a l'avantage de pouvoir échapper aux éventuels minima locaux dans la plupart des situations.

<br>
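Un pas d'échantillonnage de Gibbs sur une variable i peut s'esquisser ainsi en Python. La structure `factors[i]` (liste des facteurs connectés à la variable i, chacun évaluant une affectation complète) est une hypothèse de représentation pour l'illustration :

```python
import random

def gibbs_step(x, domains, factors, i, rng):
    """Ré-échantillonne Xi selon les poids w(u) induits par ses facteurs."""
    weights = []
    for u in domains[i]:
        y = dict(x)
        y[i] = u
        w = 1.0
        for f in factors[i]:              # produit des facteurs connectés à la variable i
            w *= f(y)
        weights.append(w)
    # on échantillonne v selon la loi engendrée par w, puis on pose Xi = v
    x[i] = rng.choices(domains[i], weights=weights, k=1)[0]
    return x
```

Répété sur toutes les variables jusqu'à convergence, ce pas réalise la boucle décrite ci-dessus.

<br>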
+ + +**32. Factor graph transformations** + +⟶ Transformations sur les graphes de facteurs + +
+ + +**33. Independence ― Let A,B be a partitioning of the variables X. We say that A and B are independent if there are no edges between A and B and we write:** + +⟶ Indépendance - Soit A, B une partition des variables X. On dit que A et B sont indépendants s'il n'y a pas d'arête connectant A et B et on écrit : + +
+ + +**34. Remark: independence is the key property that allows us to solve subproblems in parallel.** + +⟶ Remarque : l'indépendance est une propriété importante car elle nous permet de décomposer la situation en sous-problèmes que l'on peut résoudre en parallèle. + +
+ + +**35. Conditional independence ― We say that A and B are conditionally independent given C if conditioning on C produces a graph in which A and B are independent. In this case, it is written:** + +⟶ Indépendance conditionnelle - On dit que A et B sont conditionnellement indépendants par rapport à C si le fait de conditionner sur C produit un graphe dans lequel A et B sont indépendants. Dans ce cas, on écrit : + +
+


**36. [Conditioning ― Conditioning is a transformation aiming at making variables independent that breaks up a factor graph into smaller pieces that can be solved in parallel and can use backtracking. In order to condition on a variable Xi=v, we do as follows:, Consider all factors f1,...,fk that depend on Xi, Remove Xi and f1,...,fk, Add gj(x) for j∈{1,...,k} defined as:]**

⟶ [Conditionnement - Le conditionnement est une transformation visant à rendre des variables indépendantes et ainsi diviser un graphe de facteurs en pièces plus petites qui peuvent être traitées en parallèle et utiliser le retour sur trace. Pour conditionner par rapport à une variable Xi=v, on :, considère tous les facteurs f1,...,fk qui dépendent de Xi, enlève Xi et f1,...,fk, ajoute gj(x) pour j∈{1,...,k} défini par :]

<br>
+ + +**37. Markov blanket ― Let A⊆X be a subset of variables. We define MarkovBlanket(A) to be the neighbors of A that are not in A.** + +⟶ Couverture de Markov - Soit A⊆X une partie des variables. On définit MarkovBlanket(A) comme étant les voisins de A qui ne sont pas dans A. + +
+ + +**38. Proposition ― Let C=MarkovBlanket(A) and B=X∖(A∪C). Then we have:** + +⟶ Proposition - Soit C=MarkovBlanket(A) et B=X∖(A∪C). On a alors : + +
+ + +**39. [Elimination ― Elimination is a factor graph transformation that removes Xi from the graph and solves a small subproblem conditioned on its Markov blanket as follows:, Consider all factors fi,1,...,fi,k that depend on Xi, Remove Xi +and fi,1,...,fi,k, Add fnew,i(x) defined as:]** + +⟶ [Élimination - L'élimination est une transformation consistant à enlever Xi d'un graphe de facteurs pour ensuite résoudre un sous-problème conditionné sur sa couverture de Markov où l'on :, considère tous les facteurs fi,1,...,fi,k qui dépendent de Xi, enlève Xi et fi,1,...,fi,k, ajoute fnew,i(x) défini par :] + +
+


**40. Treewidth ― The treewidth of a factor graph is the maximum arity of any factor created by variable elimination with the best variable ordering. In other words,**

⟶ Largeur arborescente - La largeur arborescente (en anglais treewidth) d'un graphe de facteurs est l'arité maximale de n'importe quel facteur créé par élimination de variables avec le meilleur ordre de variables. En d'autres termes,

<br>
+ + +**41. The example below illustrates the case of a factor graph of treewidth 3.** + +⟶ L'exemple ci-dessous illustre le cas d'un graphe de facteurs ayant une largeur arborescente égale à 3. + +
+ + +**42. Remark: finding the best variable ordering is a NP-hard problem.** + +⟶ Remarque : trouver le meilleur ordre de variable est un problème NP-difficile. + +
+ + +**43. Bayesian networks** + +⟶ Réseaux bayésiens + +
+


**44. In this section, our goal will be to compute conditional probabilities. What is the probability of a query given evidence?**

⟶ Dans cette section, notre but est de calculer des probabilités conditionnelles. Quelle est la probabilité d'une requête étant donné des observations ?

<br>
+ + +**45. Introduction** + +⟶ Introduction + +
+


**46. Explaining away ― Suppose causes C1 and C2 influence an effect E. Conditioning on the effect E and on one of the causes (say C1) changes the probability of the other cause (say C2). In this case, we say that C1 has explained away C2.**

⟶ Explication (en anglais explaining away) - Supposons que les causes C1 et C2 influencent un effet E. Le conditionnement sur l'effet E et une des causes (disons C1) change la probabilité de l'autre cause (disons C2). Dans ce cas, on dit que C1 a expliqué C2.

<br>
+ + +**47. Directed acyclic graph ― A directed acyclic graph (DAG) is a finite directed graph with no directed cycles.** + +⟶ Graphe orienté acyclique - Un graphe orienté acyclique (en anglais directed acyclic graph ou DAG) est un graphe orienté fini sans cycle orienté. + +
+ + +**48. Bayesian network ― A Bayesian network is a directed acyclic graph (DAG) that specifies a joint distribution over random variables X=(X1,...,Xn) as a product of local conditional distributions, one for each node:** + +⟶ Réseau bayésien - Un réseau bayésien (en anglais Bayesian network) est un DAG qui définit une loi de probabilité jointe sur les variables aléatoires X=(X1,...,Xn) comme étant le produit des lois de probabilités conditionnelles locales (une pour chaque noeud) : + +
+ + +**49. Remark: Bayesian networks are factor graphs imbued with the language of probability.** + +⟶ Remarque : les réseaux bayésiens sont des graphes de facteurs imprégnés de concepts de probabilité. + +
+ + +**50. Locally normalized ― For each xParents(i), all factors are local conditional distributions. Hence they have to satisfy:** + +⟶ Normalisation locale - Pour chaque xParents(i), tous les facteurs sont localement des lois de probabilité conditionnelles. Elles doivent donc vérifier : + +
+ + +**51. As a result, sub-Bayesian networks and conditional distributions are consistent.** + +⟶ De ce fait, les sous-réseaux bayésiens et les distributions conditionnelles sont consistants. + +
+ + +**52. Remark: local conditional distributions are the true conditional distributions.** + +⟶ Remarque : les lois locales de probabilité conditionnelles sont de vraies lois de probabilité conditionnelles. + +
+


**53. Marginalization ― The marginalization of a leaf node yields a Bayesian network without that node.**

⟶ Marginalisation - La marginalisation d'un noeud sans enfant donne un réseau bayésien sans ce noeud.

<br>
+ + +**54. Probabilistic programs** + +⟶ Programmes probabilistes + +
+ + +**55. Concept ― A probabilistic program randomizes variables assignment. That way, we can write down complex Bayesian networks that generate assignments without us having to explicitly specify associated probabilities.** + +⟶ Concept - Un programme probabiliste rend aléatoire l'affectation de variables. De ce fait, on peut imaginer des réseaux bayésiens compliqués pour la génération d'affectations sans avoir à écrire de manière explicite les probabilités associées. + +
+


**56. Remark: examples of probabilistic programs include Hidden Markov model (HMM), factorial HMM, naive Bayes, latent Dirichlet allocation, diseases and symptoms and stochastic block models.**

⟶ Remarque : quelques exemples de programmes probabilistes incluent entre autres le modèle de Markov caché (en anglais hidden Markov model ou HMM), le HMM factoriel, le modèle bayésien naïf (en anglais naive Bayes), l'allocation de Dirichlet latente (en anglais latent Dirichlet allocation ou LDA), le modèle maladies et symptômes et le modèle à blocs stochastiques (en anglais stochastic block model).

<br>
+ + +**57. Summary ― The table below summarizes the common probabilistic programs as well as their applications:** + +⟶ Récapitulatif - La table ci-dessous résume les programmes probabilistes les plus fréquents ainsi que leur champ d'application associé : + +
+ + +**58. [Program, Algorithm, Illustration, Example]** + +⟶ [Programme, Algorithme, Illustration, Exemple] + +
+ + +**59. [Markov Model, Hidden Markov Model (HMM), Factorial HMM, Naive Bayes, Latent Dirichlet Allocation (LDA)]** + +⟶ [Modèle de Markov, Modèle de Markov caché (HMM), HMM factoriel, Bayésien naïf, Allocation de Dirichlet latente (LDA)] + +
+ + +**60. [Generate, distribution]** + +⟶ [Génère, distribution] + +
+ + +**61. [Language modeling, Object tracking, Multiple object tracking, Document classification, Topic modeling]** + +⟶ [Modélisation du langage, Suivi d'objet, Suivi de plusieurs objets, Classification de document, Modélisation de sujet] + +
+ + +**62. Inference** + +⟶ Inférence + +
+ + +**63. [General probabilistic inference strategy ― The strategy to compute the probability P(Q|E=e) of query Q given evidence E=e is as follows:, Step 1: Remove variables that are not ancestors of the query Q or the evidence E by marginalization, Step 2: Convert Bayesian network to factor graph, Step 3: Condition on the evidence E=e, Step 4: Remove nodes disconnected from the query Q by marginalization, Step 5: Run a probabilistic inference algorithm (manual, variable elimination, Gibbs sampling, particle filtering)]** + +⟶ [Stratégie générale pour l'inférence probabiliste - La stratégie que l'on utilise pour calculer la probabilité P(Q|E=e) d'une requête Q étant donnée l'observation E=e est la suivante :, Étape 1 : on enlève les variables qui ne sont pas les ancêtres de la requête Q ou de l'observation E par marginalisation, Étape 2 : on convertit le réseau bayésien en un graphe de facteurs, Étape 3 : on conditionne sur l'observation E=e, Étape 4 : on enlève les noeuds déconnectés de la requête Q par marginalisation, Étape 5 : on lance un algorithme d'inférence probabiliste (manuel, élimination de variables, échantillonnage de Gibbs, filtrage particulaire)] + +
+


**64. Forward-backward algorithm ― This algorithm computes the exact value of P(H=hk|E=e) (smoothing query) for any k∈{1,...,L} in the case of an HMM of size L. To do so, we proceed in 3 steps:**

⟶ Algorithme progressif-rétrogressif - L'algorithme progressif-rétrogressif (en anglais forward-backward) calcule la valeur exacte de P(H=hk|E=e) (requête de lissage) pour chaque k∈{1,...,L} dans le cas d'un HMM de taille L. Pour ce faire, on procède en 3 étapes :

<br>
+ + +**65. Step 1: for ..., compute ...** + +⟶ Étape 1 : pour ..., calculer ... + +
+ + +**66. with the convention F0=BL+1=1. From this procedure and these notations, we get that** + +⟶ avec la convention F0=BL+1=1. À partir de cette procédure et avec ces notations, on obtient + +
+ + +**67. Remark: this algorithm interprets each assignment to be a path where each edge hi−1→hi is of weight p(hi|hi−1)p(ei|hi).** + +⟶ Remarque : cet algorithme interprète une affectation comme étant un chemin où chaque arête hi−1→hi a un poids p(hi|hi−1)p(ei|hi). + +
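Les 3 étapes ci-dessus peuvent s'esquisser en Python pour un HMM discret. Hypothèses d'illustration : loi initiale uniforme, matrice de transition T[g][h]=p(h|g) et matrice d'émission E[h][e]=p(e|h) :

```python
def forward_backward(T, E, obs, n_states):
    """Calcule P(Hk = h | e1..eL) pour chaque k (lissage) dans un HMM discret."""
    L = len(obs)
    F = [[0.0] * n_states for _ in range(L)]   # passes progressives Fk
    B = [[0.0] * n_states for _ in range(L)]   # passes rétrogressives Bk
    for h in range(n_states):                  # étape 1 : initialisation progressive (loi uniforme)
        F[0][h] = (1.0 / n_states) * E[h][obs[0]]
    for k in range(1, L):
        for h in range(n_states):
            F[k][h] = sum(F[k-1][g] * T[g][h] for g in range(n_states)) * E[h][obs[k]]
    for h in range(n_states):                  # étape 2 : initialisation rétrogressive (BL = 1)
        B[L-1][h] = 1.0
    for k in range(L - 2, -1, -1):
        for h in range(n_states):
            B[k][h] = sum(T[h][g] * E[g][obs[k+1]] * B[k+1][g] for g in range(n_states))
    posts = []                                 # étape 3 : normalisation de Fk(h)Bk(h)
    for k in range(L):
        s = [F[k][h] * B[k][h] for h in range(n_states)]
        z = sum(s)
        posts.append([v / z for v in s])
    return posts
```

Chaque posterior normalisé posts[k] est proportionnel au produit Fk(h)Bk(h), conformément à la formule du texte.

<br>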
+ + +**68. [Gibbs sampling ― This algorithm is an iterative approximate method that uses a small set of assignments (particles) to represent a large probability distribution. From a random assignment x, Gibbs sampling performs the following steps for i∈{1,...,n} until convergence:, For all u∈Domaini, compute the weight w(u) of assignment x where Xi=u, Sample v from the probability distribution induced by w: v∼P(Xi=v|X−i=x−i), Set Xi=v]** + +⟶ [Échantillonnage de Gibbs - L'algorithme d'échantillonnage de Gibbs (en anglais Gibbs sampling) est une méthode itérative et approximative qui utilise un petit ensemble d'affectations (particules) pour représenter une loi de probabilité. Pour une affectation aléatoire x, l'échantillonnage de Gibbs effectue les étapes suivantes pour i∈{1,...,n} jusqu'à convergence :, Pour tout u∈Domaini, on calcule le poids w(u) de l'affectation x où Xi=u, On échantillonne v de la loi de probabilité engendrée par w : v∼P(Xi=v|X−i=x−i), On pose Xi=v] + +
+


**69. Remark: X−i denotes X∖{Xi} and x−i represents the corresponding assignment.**

⟶ Remarque : X−i désigne X∖{Xi} et x−i représente l'affectation correspondante.

<br>
+


**70. [Particle filtering ― This algorithm approximates the posterior density of state variables given the evidence of observation variables by keeping track of K particles at a time. Starting from a set of particles C of size K, we run the following 3 steps iteratively:, Step 1: proposal - For each old particle xt−1∈C, sample x from the transition probability distribution p(x|xt−1) and add x to a set C′., Step 2: weighting - Weigh each x of the set C′ by w(x)=p(et|x), where et is the evidence observed at time t., Step 3: resampling - Sample K elements from the set C′ using the probability distribution induced by w and store them in C: these are the current particles xt.]**

⟶ [Filtrage particulaire - L'algorithme de filtrage particulaire (en anglais particle filtering) approxime la densité postérieure de variables d'états à partir des variables observées en suivant K particules à la fois. En commençant avec un ensemble de particules C de taille K, on répète les 3 étapes suivantes :, Étape 1 : proposition - Pour chaque ancienne particule xt−1∈C, on échantillonne x selon la loi de probabilité p(x|xt−1) et on ajoute x à un ensemble C′., Étape 2 : pondération - On associe chaque x de l'ensemble C′ au poids w(x)=p(et|x), où et est l'observation vue à l'instant t., Étape 3 : ré-échantillonnage - On échantillonne K éléments de l'ensemble C′ en utilisant la loi de probabilité engendrée par w et on les met dans C : ce sont les particules actuelles xt.]

<br>
+


**71. Remark: a more expensive version of this algorithm also keeps track of past particles in the proposal step.**

⟶ Remarque : une version plus coûteuse de cet algorithme tient aussi compte des particules passées à l'étape de proposition.

<br>
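Une itération (proposition, pondération, ré-échantillonnage) du filtrage particulaire peut s'esquisser ainsi ; les fonctions `transition_sample` (échantillonne p(x|xt−1)) et `emission` (évalue p(et|x)) sont des hypothèses d'interface :

```python
import random

def particle_filter_step(C, transition_sample, emission, e_t, K, rng):
    """Une itération du filtrage particulaire sur l'ensemble de particules C."""
    proposed = [transition_sample(x, rng) for x in C]    # étape 1 : proposition -> C'
    weights = [emission(e_t, x) for x in proposed]       # étape 2 : pondération w(x) = p(et|x)
    return rng.choices(proposed, weights=weights, k=K)   # étape 3 : ré-échantillonnage de K particules
```

On enchaîne cette fonction à chaque instant t avec l'observation et courante pour suivre les particules.

<br>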
+ + +**72. Maximum likelihood ― If we don't know the local conditional distributions, we can learn them using maximum likelihood.** + +⟶ Maximum de vraisemblance - Si l'on ne connaît pas les lois de probabilité locales, on peut les trouver en utilisant le maximum de vraisemblance. + +
+ + +**73. Laplace smoothing ― For each distribution d and partial assignment (xParents(i),xi), add λ to countd(xParents(i),xi), then normalize to get probability estimates.** + +⟶ Lissage de Laplace - Pour chaque loi de probabilité d et affectation partielle (xParents(i),xi), on ajoute λ à countd(xParents(i),xi) et on normalise ensuite pour obtenir des probabilités. + +
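Dans le cas simplifié d'une seule loi sur une variable (sans parents), le lissage de Laplace décrit ci-dessus peut se coder ainsi :

```python
def laplace_estimate(counts, domain, lam):
    """Estime p(x) = (count(x) + λ) / (Σx' count(x') + λ|Domain|)."""
    total = sum(counts.get(x, 0) for x in domain) + lam * len(domain)
    return {x: (counts.get(x, 0) + lam) / total for x in domain}
```

Par exemple, avec les comptes {a: 3} sur le domaine {a, b} et λ=1, on obtient p(a)=4/5 et p(b)=1/5 : aucune valeur du domaine ne reçoit une probabilité nulle.

<br>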
+ + +**74. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:** + +⟶ Espérance-maximisation - L'algorithme d'espérance-maximisation (en anglais expectation-maximization ou EM) est une méthode efficace utilisée pour estimer le paramètre θ via l'estimation du maximum de vraisemblance en construisant de manière répétée une borne inférieure de la vraisemblance (étape E) et en optimisant cette borne inférieure (étape M) : + +
+ + +**75. [E-step: Evaluate the posterior probability q(h) that each data point e came from a particular cluster h as follows:, M-step: Use the posterior probabilities q(h) as cluster specific weights on data points e to determine θ through maximum likelihood.]** + +⟶ [Étape E : on évalue la probabilité postérieure q(h) que chaque point e vienne d'une partition particulière h avec :, Étape M : on utilise la probabilité postérieure q(h) en tant que poids de la partition h sur les points e pour déterminer θ via maximum de vraisemblance] + +
+ + +**76. [Factor graphs, Arity, Assignment weight, Constraint satisfaction problem, Consistent assignment]** + +⟶ [Graphe de facteurs, Arité, Poids, Satisfaction de contraintes, Affectation consistante] + +
+ + +**77. [Dynamic ordering, Dependent factors, Backtracking search, Forward checking, Most constrained variable, Least constrained value]** + +⟶ [Mise en ordre dynamique, Facteurs dépendants, Retour sur trace, Vérification en avant, Variable la plus contrainte, Valeur la moins contraignante] + +
+ + +**78. [Approximate methods, Beam search, Iterated conditional modes, Gibbs sampling]** + +⟶ [Méthodes approximatives, Recherche en faisceau, Modes conditionnels itérés, Échantillonnage de Gibbs] + +
+ + +**79. [Factor graph transformations, Conditioning, Elimination]** + +⟶ [Transformations de graphes de facteurs, Conditionnement, Élimination] + +
+ + +**80. [Bayesian networks, Definition, Locally normalized, Marginalization]** + +⟶ [Réseaux bayésiens, Définition, Normalisé localement, Marginalisation] + +
+ + +**81. [Probabilistic program, Concept, Summary]** + +⟶ [Programme probabiliste, Concept, Récapitulatif] + +
+ + +**82. [Inference, Forward-backward algorithm, Gibbs sampling, Laplace smoothing]** + +⟶ [Inférence, Algorithme progressif-rétrogressif, Échantillonnage de Gibbs, Lissage de Laplace] + +
+ + +**83. View PDF version on GitHub** + +⟶ Voir la version PDF sur GitHub. + +
+ + +**84. Original authors** + +⟶ Auteurs d'origine. + +
+ + +**85. Translated by X, Y and Z** + +⟶ Traduit de l'anglais par X, Y et Z. + +
+ + +**86. Reviewed by X, Y and Z** + +⟶ Revu par X, Y et Z. + +
+ + +**87. By X and Y** + +⟶ De X et Y. + +
+ + +**88. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶ Les pense-bêtes d'intelligence artificielle sont maintenant disponibles en français ! diff --git a/fr/cheatsheet-deep-learning.md b/fr/cs-229-deep-learning.md similarity index 95% rename from fr/cheatsheet-deep-learning.md rename to fr/cs-229-deep-learning.md index 4045d723c..56073a5e8 100644 --- a/fr/cheatsheet-deep-learning.md +++ b/fr/cs-229-deep-learning.md @@ -120,7 +120,7 @@ **21. Convolutional layer requirement ― By noting W the input volume size, F the size of the convolutional layer neurons, P the amount of zero padding, then the number of neurons N that fit in a given volume is such that:** -⟶ Pré-requis de la couche convolutionelle ― Si l'on note W la taille du volume d'entrée, F la taille de la couche de neurones convolutionelle, P la quantité de zero padding, alors le nombre de neurones N qui tient dans un volume donné est tel que : +⟶ Pré-requis de la couche convolutionnelle ― Si l'on note W la taille du volume d'entrée, F la taille de la couche de neurones convolutionnelle, P la quantité de zero padding, alors le nombre de neurones N qui tient dans un volume donné est tel que :
@@ -132,7 +132,7 @@ **23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.** -⟶ Cela est normalement effectué après une couche fully-connected/couche convolutionelle et avant une couche de non-linéarité et a pour but de permettre un taux d'apprentissage plus grand et de réduire une dépendance trop forte à l'initialisation. +⟶ Cela est normalement effectué après une couche fully-connected/couche convolutionnelle et avant une couche de non-linéarité et a pour but de permettre un taux d'apprentissage plus grand et de réduire une dépendance trop forte à l'initialisation.
diff --git a/fr/refresher-linear-algebra.md b/fr/cs-229-linear-algebra.md similarity index 92% rename from fr/refresher-linear-algebra.md rename to fr/cs-229-linear-algebra.md index 37329faa3..f1aea7efd 100644 --- a/fr/refresher-linear-algebra.md +++ b/fr/cs-229-linear-algebra.md @@ -42,7 +42,7 @@ **8. Identity matrix ― The identity matrix I∈Rn×n is a square matrix with ones in its diagonal and zero everywhere else:** -⟶ Matrice identitée ― La matrice identitée I∈Rn×n est une matrice carrée avec des 1 sur sa diagonale et des 0 partout ailleurs : +⟶ Matrice identité ― La matrice identité I∈Rn×n est une matrice carrée avec des 1 sur sa diagonale et des 0 partout ailleurs :
@@ -150,7 +150,7 @@ **26. Trace ― The trace of a square matrix A, noted tr(A), is the sum of its diagonal entries:** -⟶ Trace ― La trace d'une matrice carée A, notée tr(A), est définie comme la somme de ses coefficients diagonaux: +⟶ Trace ― La trace d'une matrice carrée A, notée tr(A), est définie comme la somme de ses coefficients diagonaux:
@@ -186,7 +186,7 @@ **32. Symmetric decomposition ― A given matrix A can be expressed in terms of its symmetric and antisymmetric parts as follows:** -⟶ Décomposition symmétrique ― Une matrice donnée A peut être exprimée en termes de ses parties symétrique et antisymétrique de la manière suivante : +⟶ Décomposition symétrique ― Une matrice donnée A peut être exprimée en termes de ses parties symétrique et antisymétrique de la manière suivante :
@@ -252,7 +252,7 @@ **43. Remark: similarly, a matrix A is said to be positive definite, and is noted A≻0, if it is a PSD matrix which satisfies for all non-zero vector x, xTAx>0.** -⟶ Remarque : de manière similaire, une matrice A est dite définie positive et est notée A≻0 si elle est semi-définie positive et que pour tout vector x non-nul, on a xTAx>0. +⟶ Remarque : de manière similaire, une matrice A est dite définie positive et est notée A≻0 si elle est semi-définie positive et que pour tout vecteur x non-nul, on a xTAx>0.
@@ -264,7 +264,7 @@ **45. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** -⟶ Théorème spectral ― Soit A∈Rn×n. Si A est symmétrique, alors A est diagonalisable par une matrice orthogonale réelle U∈Rn×n. En notant Λ=diag(λ1,...,λn), on a : +⟶ Théorème spectral ― Soit A∈Rn×n. Si A est symétrique, alors A est diagonalisable par une matrice orthogonale réelle U∈Rn×n. En notant Λ=diag(λ1,...,λn), on a :
@@ -300,7 +300,7 @@ **51. Hessian ― Let f:Rn→R be a function and x∈Rn be a vector. The hessian of f with respect to x is a n×n symmetric matrix, noted ∇2xf(x), such that:** -⟶ Hessienne ― Soit f:Rn→R une fonction et x∈Rn un vecteur. La hessienne de f par rapport à x est une matrice symmetrique n×n, notée ∇2xf(x), telle que : +⟶ Hessienne ― Soit f:Rn→R une fonction et x∈Rn un vecteur. La hessienne de f par rapport à x est une matrice symétrique n×n, notée ∇2xf(x), telle que :
diff --git a/fr/cheatsheet-machine-learning-tips-and-tricks.md b/fr/cs-229-machine-learning-tips-and-tricks.md similarity index 99% rename from fr/cheatsheet-machine-learning-tips-and-tricks.md rename to fr/cs-229-machine-learning-tips-and-tricks.md index d74182df0..2adf1db50 100644 --- a/fr/cheatsheet-machine-learning-tips-and-tricks.md +++ b/fr/cs-229-machine-learning-tips-and-tricks.md @@ -198,7 +198,7 @@ **34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** -⟶ [Réduit les coefficients à 0, Bon pour la sélection de variables, Rend les coefficients plus petits, Compromis entre la selection de variables et la réduction de coefficients] +⟶ [Réduit les coefficients à 0, Bon pour la sélection de variables, Rend les coefficients plus petits, Compromis entre la sélection de variables et la réduction de coefficients]
diff --git a/fr/refresher-probability.md b/fr/cs-229-probability.md similarity index 98% rename from fr/refresher-probability.md rename to fr/cs-229-probability.md index fe4562f80..8e407b9b2 100644 --- a/fr/refresher-probability.md +++ b/fr/cs-229-probability.md @@ -36,7 +36,7 @@ **7. Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** -⟶ Axiome 2 ― La probabilité qu'au moins un des évènements élementaires de tout l'univers se produise est 1, i.e. +⟶ Axiome 2 ― La probabilité qu'au moins un des évènements élémentaires de tout l'univers se produise est 1, i.e.
@@ -120,7 +120,7 @@ **21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.** -⟶ Variable aléatoire ― Une variable aléatoire, souvent notée X, est une fonction qui associe chaque élement de l'univers de probabilité à la droite des réels. +⟶ Variable aléatoire ― Une variable aléatoire, souvent notée X, est une fonction qui associe chaque élément de l'univers de probabilité à la droite des réels.
diff --git a/fr/cheatsheet-supervised-learning.md b/fr/cs-229-supervised-learning.md similarity index 96% rename from fr/cheatsheet-supervised-learning.md rename to fr/cs-229-supervised-learning.md index 2f4850d1f..b79583323 100644 --- a/fr/cheatsheet-supervised-learning.md +++ b/fr/cs-229-supervised-learning.md @@ -42,7 +42,7 @@ **8. [Discriminative model, Generative model, Goal, What's learned, Illustration, Examples]** -⟶ [Modèle discriminatif, Modèle génératif, But, Ce qui est appris, Illustration, Exemples] +⟶ [Modèle discriminant, Modèle génératif, But, Ce qui est appris, Illustration, Exemples]
@@ -66,7 +66,7 @@ **12. Loss function ― A loss function is a function L:(z,y)∈R×Y⟼L(z,y)∈R that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:** -⟶ Fonction de loss ― Une fonction de loss est une fonction L:(z,y)∈R×Y⟼L(z,y)∈R prennant comme entrée une valeur prédite z correspondant à une valeur réelle y, et nous renseigne sur la ressemblance de ces deux valeurs. Les fonctions de loss courantes sont récapitulées dans le tableau ci-dessous : +⟶ Fonction de loss ― Une fonction de loss est une fonction L:(z,y)∈R×Y⟼L(z,y)∈R prenant comme entrée une valeur prédite z correspondant à une valeur réelle y, et nous renseigne sur la ressemblance de ces deux valeurs. Les fonctions de loss courantes sont récapitulées dans le tableau ci-dessous :
@@ -138,7 +138,7 @@ **24. Normal equations ― By noting X the matrix design, the value of θ that minimizes the cost function is a closed-form solution such that:** -⟶ Équations normales ― En notant X la matrice de design, la valeur de θ qui minimize la fonction de cost a une solution de forme fermée tel que : +⟶ Équations normales ― En notant X la matrice de design, la valeur de θ qui minimise la fonction de cost a une solution de forme fermée tel que :
@@ -186,7 +186,7 @@ **32. Softmax regression ― A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. By convention, we set θK=0, which makes the Bernoulli parameter ϕi of each class i equal to:** -⟶ Régression softmax ― Une régression softmax, aussi appelée un régression logistique multiclasse, est utilisée pour généraliser la régression logistique lorsqu'il y a plus de 2 classes à prédire. Par convention, on fixe θK=0, ce qui oblige le paramètre de Bernoulli ϕi de chaque classe i à être égal à : +⟶ Régression softmax ― Une régression softmax, aussi appelée un régression logistique multi-classe, est utilisée pour généraliser la régression logistique lorsqu'il y a plus de 2 classes à prédire. Par convention, on fixe θK=0, ce qui oblige le paramètre de Bernoulli ϕi de chaque classe i à être égal à :
@@ -210,7 +210,7 @@ **36. Here are the most common exponential distributions summed up in the following table:** -⟶ Les distributions exponentielles les plus communémment rencontrées sont récapitulées dans le tableau ci-dessous : +⟶ Les distributions exponentielles les plus communément rencontrées sont récapitulées dans le tableau ci-dessous :
@@ -324,7 +324,7 @@ **55. A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.** -⟶ Un modèle génératif essaie d'abord d'apprendre comment les données sont générées en estimant P(x|y), nous permettant ensuite d'estimer P(y|x) par le biais du théorème de Bayes. +⟶ Un modèle génératif essaie d'abord d'apprendre comment les données sont générées en estimant P(x|y), nous permettant ensuite d'estimer P(y|x) par le biais du théorème de Bayes.
diff --git a/fr/cheatsheet-unsupervised-learning.md b/fr/cs-229-unsupervised-learning.md similarity index 95% rename from fr/cheatsheet-unsupervised-learning.md rename to fr/cs-229-unsupervised-learning.md index f64268a4b..7757f9539 100644 --- a/fr/cheatsheet-unsupervised-learning.md +++ b/fr/cs-229-unsupervised-learning.md @@ -12,7 +12,7 @@ **3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** -⟶ Motivation ― Le but de l'apprentissage non-supervisé est de trouver des formes cachées dans un jeu de données non-labelées {x(1),...,x(m)}. +⟶ Motivation ― Le but de l'apprentissage non-supervisé est de trouver des formes cachées dans un jeu de données non annotées {x(1),...,x(m)}.
@@ -66,7 +66,7 @@ **12. M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:** -⟶ M-step : Utiliser les probabilités postérieures Qi(z(i)) en tant que coefficients propres aux partitions sur les points x(i) pour ré-estimer séparemment chaque modèle de partition de la manière suivante : +⟶ M-step : Utiliser les probabilités postérieures Qi(z(i)) en tant que coefficients propres aux partitions sur les points x(i) pour ré-estimer séparément chaque modèle de partition de la manière suivante :
@@ -102,7 +102,7 @@ **18. Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:** -⟶ Fonction de distortion ― Pour voir si l'algorithme converge, on regarde la fonction de distortion définie de la manière suivante : +⟶ Fonction de distorsion ― Pour voir si l'algorithme converge, on regarde la fonction de distorsion définie de la manière suivante :
@@ -192,7 +192,7 @@ **33. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** -⟶ Théorème spectral ― Soit A∈Rn×n. Si A est symmétrique, alors A est diagonalisable par une matrice réelle orthogonale U∈Rn×n. En notant Λ=diag(λ1,...,λn), on a : +⟶ Théorème spectral ― Soit A∈Rn×n. Si A est symétrique, alors A est diagonalisable par une matrice réelle orthogonale U∈Rn×n. En notant Λ=diag(λ1,...,λn), on a :
@@ -222,7 +222,7 @@ **38. Step 2: Compute Σ=1mm∑i=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.** -⟶ Étape 2 : Calculer Σ=1mm∑i=1x(i)x(i)T∈Rn×n, qui est symmétrique et aux valeurs propres réelles. +⟶ Étape 2 : Calculer Σ=1mm∑i=1x(i)x(i)T∈Rn×n, qui est symétrique et aux valeurs propres réelles.
@@ -264,7 +264,7 @@ **45. Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:** -⟶ Hypothèses ― On suppose que nos données x ont été générées par un vecteur source à n dimensions s=(s1,...,sn), où les si sont des variables aléatoires indépendantes, par le biais d'une matrice de mélange et inversible A de la manière suivante : +⟶ Hypothèses ― On suppose que nos données x ont été générées par un vecteur source à n dimensions s=(s1,...,sn), où les si sont des variables aléatoires indépendantes, par le biais d'une matrice de mélange et inversible A de la manière suivante :
@@ -294,4 +294,4 @@

**50. Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:**

-⟶ Par conséquent, l'algorithme du gradient stochastique est tel que pour chaque example de ensemble d'apprentissage x(i), on met à jour W de la manière suivante :
+⟶ Par conséquent, l'algorithme du gradient stochastique est tel que pour chaque exemple de l'ensemble d'apprentissage x(i), on met à jour W de la manière suivante :
diff --git a/fr/cs-230-convolutional-neural-networks.md b/fr/cs-230-convolutional-neural-networks.md
new file mode 100644
index 000000000..29cca030e
--- /dev/null
+++ b/fr/cs-230-convolutional-neural-networks.md
@@ -0,0 +1,716 @@
+**Convolutional Neural Networks translation**
+
+<br>
+ +**1. Convolutional Neural Networks cheatsheet** + +⟶ Pense-bête de réseaux de neurones convolutionnels + +
+ + +**2. CS 230 - Deep Learning** + +⟶ CS 230 - Apprentissage profond + +
+ + +**3. [Overview, Architecture structure]** + +⟶ [Vue d'ensemble, Structure de l'architecture] + +
+ + +**4. [Types of layer, Convolution, Pooling, Fully connected]** + +⟶ [Types de couche, Convolution, Pooling, Fully connected] + +
+ + +**5. [Filter hyperparameters, Dimensions, Stride, Padding]** + +⟶ [Paramètres du filtre, Dimensions, Stride, Padding] + +
+ + +**6. [Tuning hyperparameters, Parameter compatibility, Model complexity, Receptive field]** + +⟶ [Réglage des paramètres, Compatibilité des paramètres, Complexité du modèle, Champ récepteur] + +
+ + +**7. [Activation functions, Rectified Linear Unit, Softmax]** + +⟶ [Fonction d'activation, Unité linéaire rectifiée, Softmax] + +
+ + +**8. [Object detection, Types of models, Detection, Intersection over Union, Non-max suppression, YOLO, R-CNN]** + +⟶ [Détection d'objet, Types de modèle, Détection, Intersection sur union, Suppression non-max, YOLO, R-CNN] + +

**9. [Face verification/recognition, One shot learning, Siamese network, Triplet loss]**

⟶ [Vérification/reconnaissance de visage, Apprentissage en un coup, Réseau siamois, Loss triple]

<br>

**10. [Neural style transfer, Activation, Style matrix, Style/content cost function]**

⟶ [Transfert de style neuronal, Activation, Matrice de style, Fonction de coût de style/contenu]

<br>
+ + +**11. [Computational trick architectures, Generative Adversarial Net, ResNet, Inception Network]** + +⟶ [Architectures à astuces calculatoires, Generative Adversarial Net, ResNet, Inception Network] + +
+ + +**12. Overview** + +⟶ Vue d'ensemble + +
+ + +**13. Architecture of a traditional CNN ― Convolutional neural networks, also known as CNNs, are a specific type of neural networks that are generally composed of the following layers:** + +⟶ Architecture d'un CNN traditionnel ― Les réseaux de neurones convolutionnels (en anglais Convolutional neural networks), aussi connus sous le nom de CNNs, sont un type spécifique de réseaux de neurones qui sont généralement composés des couches suivantes : + +

**14. The convolution layer and the pooling layer can be fine-tuned with respect to hyperparameters that are described in the next sections.**

⟶ La couche convolutionnelle et la couche de pooling peuvent être ajustées en utilisant des paramètres qui sont décrits dans les sections suivantes.

<br>
+ + +**15. Types of layer** + +⟶ Types de couche + +
+ + +**16. Convolution layer (CONV) ― The convolution layer (CONV) uses filters that perform convolution operations as it is scanning the input I with respect to its dimensions. Its hyperparameters include the filter size F and stride S. The resulting output O is called feature map or activation map.** + +⟶ Couche convolutionnelle (CONV) ― La couche convolutionnelle (en anglais convolution layer) (CONV) utilise des filtres qui scannent l'entrée I suivant ses dimensions en effectuant des opérations de convolution. Elle peut être réglée en ajustant la taille du filtre F et le stride S. La sortie O de cette opération est appelée *feature map* ou aussi *activation map*. + +
+ + +**17. Remark: the convolution step can be generalized to the 1D and 3D cases as well.** + +⟶ Remarque : l'étape de convolution peut aussi être généralisée dans les cas 1D et 3D. + +
+ + +**18. Pooling (POOL) ― The pooling layer (POOL) is a downsampling operation, typically applied after a convolution layer, which does some spatial invariance. In particular, max and average pooling are special kinds of pooling where the maximum and average value is taken, respectively.** + +⟶ Pooling (POOL) ― La couche de pooling (en anglais pooling layer) (POOL) est une opération de sous-échantillonnage typiquement appliquée après une couche convolutionnelle. En particulier, les types de pooling les plus populaires sont le max et l'average pooling, où les valeurs maximales et moyennes sont prises, respectivement. + +
+ + +**19. [Type, Purpose, Illustration, Comments]** + +⟶ [Type, But, Illustration, Commentaires] + +
+ + +**20. [Max pooling, Average pooling, Each pooling operation selects the maximum value of the current view, Each pooling operation averages the values of the current view]** + +⟶ [Max pooling, Average pooling, Chaque opération de pooling sélectionne la valeur maximale de la surface. Chaque opération de pooling sélectionne la valeur moyenne de la surface.] + +
+ + +**21. [Preserves detected features, Most commonly used, Downsamples feature map, Used in LeNet]** + +⟶ [Garde les caractéristiques détectées. Plus communément utilisé, Sous-échantillonne la feature map, Utilisé dans LeNet] + +
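To make the two pooling operations above concrete, here is a minimal NumPy sketch (not part of the original cheatsheet) of non-overlapping F×F pooling with stride S=F, the most common setting mentioned in the table; the function name is our own:

```python
import numpy as np

def pool2d(x, F, mode="max"):
    """Non-overlapping FxF pooling (stride S = F) on a 2D array.
    Illustrative sketch; assumes both sides of x are multiples of F."""
    H, W = x.shape
    # Reshape into (H/F, F, W/F, F) blocks, then reduce each FxF block.
    blocks = x.reshape(H // F, F, W // F, F)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

x = np.array([[1., 3., 2., 1.],
              [4., 2., 1., 5.],
              [0., 1., 3., 2.],
              [2., 2., 1., 0.]])
print(pool2d(x, 2, "max"))  # [[4. 5.] [2. 3.]]
print(pool2d(x, 2, "avg"))  # [[2.5  2.25] [1.25 1.5 ]]
```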
+ + +**22. Fully Connected (FC) ― The fully connected layer (FC) operates on a flattened input where each input is connected to all neurons. If present, FC layers are usually found towards the end of CNN architectures and can be used to optimize objectives such as class scores.** + +⟶ Fully Connected (FC) ― La couche de fully connected (en anglais fully connected layer) (FC) s'applique sur une entrée préalablement aplatie où chaque entrée est connectée à tous les neurones. Les couches de fully connected sont typiquement présentes à la fin des architectures de CNN et peuvent être utilisées pour optimiser des objectifs tels que les scores de classe. + +
+ + +**23. Filter hyperparameters** + +⟶ Paramètres du filtre + +

**24. The convolution layer contains filters for which it is important to know the meaning behind its hyperparameters.**

⟶ La couche convolutionnelle contient des filtres pour lesquels il est important de connaître la signification de leurs paramètres.

<br>
+ + +**25. Dimensions of a filter ― A filter of size F×F applied to an input containing C channels is a F×F×C volume that performs convolutions on an input of size I×I×C and produces an output feature map (also called activation map) of size O×O×1.** + +⟶ Dimensions d'un filtre ― Un filtre de taille F×F appliqué à une entrée contenant C canaux est un volume de taille F×F×C qui effectue des convolutions sur une entrée de taille I×I×C et qui produit un feature map de sortie (aussi appelé activation map) de taille O×O×1. + +
+ + +**26. Filter** + +⟶ Filtre + +
+ + +**27. Remark: the application of K filters of size F×F results in an output feature map of size O×O×K.** + +⟶ Remarque : appliquer K filtres de taille F×F engendre un feature map de sortie de taille O×O×K. + +

**28. Stride ― For a convolutional or a pooling operation, the stride S denotes the number of pixels by which the window moves after each operation.**

⟶ Stride ― Dans le contexte d'une opération de convolution ou de pooling, le stride S est un paramètre qui dénote le nombre de pixels par lesquels la fenêtre se déplace après chaque opération.

<br>

**29. Zero-padding ― Zero-padding denotes the process of adding P zeroes to each side of the boundaries of the input. This value can either be manually specified or automatically set through one of the three modes detailed below:**

⟶ Zero-padding ― Le zero-padding est une technique consistant à ajouter P zéros de chaque côté des frontières de l'entrée. Cette valeur peut être spécifiée soit manuellement, soit automatiquement par le biais d'une des trois configurations détaillées ci-dessous :

<br>
+ + +**30. [Mode, Value, Illustration, Purpose, Valid, Same, Full]** + +⟶ [Configuration, Valeur, Illustration, But, Valide, Pareil, Total] + +
+ + +**31. [No padding, Drops last convolution if dimensions do not match, Padding such that feature map size has size ⌈IS⌉, Output size is mathematically convenient, Also called 'half' padding, Maximum padding such that end convolutions are applied on the limits of the input, Filter 'sees' the input end-to-end]** + +⟶ [Pas de padding, Enlève la dernière opération de convolution si les dimensions ne collent pas, Le padding tel que la feature map est de taille ⌈IS⌉, La taille de sortie est mathématiquement satisfaisante, Aussi appelé 'demi' padding, Padding maximum tel que les dernières convolutions sont appliquées sur les bords de l'entrée, Le filtre 'voit' l'entrée du début à la fin] + +
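The three padding modes above can be sketched as follows. This is an illustrative helper (the function and variable names are our own); for 'same' padding it assumes the output size is ⌈I/S⌉ as stated in the table, and splitting an odd total padding between start and end is a common convention rather than something the cheatsheet specifies:

```python
import math

def padding(I, F, S, mode):
    """Zero-padding (p_start, p_end) along one dimension (sketch)."""
    if mode == "valid":  # no padding
        return 0, 0
    if mode == "same":   # output size is ceil(I / S)
        total = max((math.ceil(I / S) - 1) * S + F - I, 0)
        return total // 2, total - total // 2
    if mode == "full":   # filter 'sees' the input end-to-end
        return F - 1, F - 1

print(padding(7, 3, 2, "valid"))  # (0, 0)
print(padding(7, 3, 2, "same"))   # (1, 1)
print(padding(7, 3, 2, "full"))   # (2, 2)
```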
+ + +**32. Tuning hyperparameters** + +⟶ Ajuster les paramètres + +

**33. Parameter compatibility in convolution layer ― By noting I the length of the input volume size, F the length of the filter, P the amount of zero padding, S the stride, then the output size O of the feature map along that dimension is given by:**

⟶ Compatibilité des paramètres dans la couche convolutionnelle ― En notant I le côté du volume d'entrée, F la taille du filtre, P la quantité de zero-padding, S le stride, la taille O de la feature map de sortie suivant cette dimension est telle que :

<br>
+ + +**34. [Input, Filter, Output]** + +⟶ [Entrée, Filtre, Sortie] + +
+ + +**35. Remark: often times, Pstart=Pend≜P, in which case we can replace Pstart+Pend by 2P in the formula above.** + +⟶ Remarque : on a souvent Pstart=Pend≜P, auquel cas on remplace Pstart+Pend par 2P dans la formule au-dessus. + +
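As a quick sanity check, the standard output-size formula O = (I − F + Pstart + Pend)/S + 1 (whose equation image is not reproduced in this text-only version) can be computed directly; the function name is our own:

```python
def conv_output_size(I, F, S, p_start, p_end):
    """O = (I - F + p_start + p_end) / S + 1 along one dimension (sketch).
    Assumes the parameters are compatible, i.e. the division is exact."""
    return (I - F + p_start + p_end) // S + 1

# 32x32 input, 5x5 filter, stride 1, padding 2 on each side -> 'same' output
print(conv_output_size(32, 5, 1, 2, 2))  # 32
# Same setting without padding ('valid') -> 28
print(conv_output_size(32, 5, 1, 0, 0))  # 28
```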
+ + +**36. Understanding the complexity of the model ― In order to assess the complexity of a model, it is often useful to determine the number of parameters that its architecture will have. In a given layer of a convolutional neural network, it is done as follows:** + +⟶ Comprendre la complexité du modèle ― Pour évaluer la complexité d'un modèle, il est souvent utile de déterminer le nombre de paramètres que l'architecture va avoir. Dans une couche donnée d'un réseau de neurones convolutionnels, on a : + +
+ + +**37. [Illustration, Input size, Output size, Number of parameters, Remarks]** + +⟶ [Illustration, Taille d'entrée, Taille de sortie, Nombre de paramètres, Remarques] + +

**38. [One bias parameter per filter, In most cases, S&lt;F]**

⟶ [Un paramètre de biais par filtre, Dans la plupart des cas, S&lt;F]

<br>


**39. [Pooling operation done channel-wise, In most cases, S=F]**

⟶ [L'opération de pooling est effectuée pour chaque canal, Dans la plupart des cas, S=F]

<br>
+ + +**40. [Input is flattened, One bias parameter per neuron, The number of FC neurons is free of structural constraints]** + +⟶ [L'entrée est aplatie, Un paramètre de biais par neurone, Le choix du nombre de neurones de FC est libre] + +
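The parameter counts implied by the table above — one bias per filter for CONV, no parameters for POOL, one bias per neuron for FC — can be checked with a short helper (illustrative sketch, our own function names):

```python
def conv_params(F, C, K):
    # K filters, each with F*F*C weights plus one bias parameter
    return (F * F * C + 1) * K

def fc_params(n_in, n_out):
    # one weight per input for each neuron, plus one bias per neuron
    return (n_in + 1) * n_out

print(conv_params(3, 3, 64))  # (3*3*3 + 1) * 64 = 1792
print(fc_params(4096, 1000))  # (4096 + 1) * 1000 = 4097000
```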
+ + +**41. Receptive field ― The receptive field at layer k is the area denoted Rk×Rk of the input that each pixel of the k-th activation map can 'see'. By calling Fj the filter size of layer j and Si the stride value of layer i and with the convention S0=1, the receptive field at layer k can be computed with the formula:** + +⟶ Champ récepteur ― Le champ récepteur à la couche k est la surface notée Rk×Rk de l'entrée que chaque pixel de la k-ième activation map peut 'voir'. En notant Fj la taille du filtre de la couche j et Si la valeur de stride de la couche i et avec la convention S0=1, le champ récepteur à la couche k peut être calculé de la manière suivante : + +
+ + +**42. In the example below, we have F1=F2=3 and S1=S2=1, which gives R2=1+2⋅1+2⋅1=5.** + +⟶ Dans l'exemple ci-dessous, on a F1=F2=3 et S1=S2=1, ce qui donne R2=1+2⋅1+2⋅1=5. + +
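The receptive field formula above can be sketched in a few lines; running it on the example F1=F2=3, S1=S2=1 reproduces R2=5:

```python
def receptive_field(F, S):
    """R_k = 1 + sum_j (F_j - 1) * prod_{i<j} S_i, with S_0 = 1 (sketch).
    F and S are the per-layer filter sizes and strides."""
    R, jump = 1, 1  # jump accumulates the product of strides (S_0 = 1)
    for Fj, Sj in zip(F, S):
        R += (Fj - 1) * jump
        jump *= Sj
    return R

# Example from the text: F1=F2=3 and S1=S2=1 give R2 = 1 + 2 + 2 = 5
print(receptive_field([3, 3], [1, 1]))  # 5
```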
+ + +**43. Commonly used activation functions** + +⟶ Fonctions d'activation communément utilisées + +
+ + +**44. Rectified Linear Unit ― The rectified linear unit layer (ReLU) is an activation function g that is used on all elements of the volume. It aims at introducing non-linearities to the network. Its variants are summarized in the table below:** + +⟶ Unité linéaire rectifiée ― La couche d'unité linéaire rectifiée (en anglais rectified linear unit layer) (ReLU) est une fonction d'activation g qui est utilisée sur tous les éléments du volume. Elle a pour but d'introduire des complexités non-linéaires au réseau. Ses variantes sont récapitulées dans le tableau suivant : + +
+ + +**45. [ReLU, Leaky ReLU, ELU, with]** + +⟶ [ReLU, Leaky ReLU, ELU, avec] + +

**46. [Non-linearity complexities biologically interpretable, Addresses dying ReLU issue for negative values, Differentiable everywhere]**

⟶ [Complexités non-linéaires interprétables d'un point de vue biologique, Répond au problème de dying ReLU pour les valeurs négatives, Dérivable partout]

<br>
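As an illustration of the three variants above (a sketch, not part of the original cheatsheet; the ELU is written in its usual piecewise form, and the default ε and α values are common choices rather than prescribed ones):

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def leaky_relu(z, eps=0.01):
    # small slope eps for negative values avoids 'dying' units
    return np.maximum(eps * z, z)

def elu(z, alpha=1.0):
    # z for z >= 0, alpha * (e^z - 1) otherwise: differentiable everywhere
    return np.where(z >= 0, z, alpha * (np.exp(z) - 1))

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))        # [0. 0. 3.]
print(leaky_relu(z))  # [-0.02  0.    3.  ]
print(elu(z))         # approximately [-0.865  0.  3.]
```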
+ + +**47. Softmax ― The softmax step can be seen as a generalized logistic function that takes as input a vector of scores x∈Rn and outputs a vector of output probability p∈Rn through a softmax function at the end of the architecture. It is defined as follows:** + +⟶ Softmax ― L'étape softmax peut être vue comme une généralisation de la fonction logistique qui prend comme argument un vecteur de scores x∈Rn et qui renvoie un vecteur de probabilités p∈Rn à travers une fonction softmax à la fin de l'architecture. Elle est définie de la manière suivante : + +
+ + +**48. where** + +⟶ où + +
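A minimal implementation of the softmax defined above (subtracting the maximum before exponentiating is a standard numerical-stability trick that leaves the result unchanged):

```python
import numpy as np

def softmax(x):
    # p_i = exp(x_i) / sum_j exp(x_j), computed stably
    e = np.exp(x - np.max(x))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p)        # approximately [0.659 0.242 0.099]
print(p.sum())  # 1.0 by construction
```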
+ + +**49. Object detection** + +⟶ Détection d'objet + +

**50. Types of models ― There are 3 main types of object recognition algorithms, for which the nature of what is predicted is different. They are described in the table below:**

⟶ Types de modèles ― Il y a 3 principaux types d'algorithme de reconnaissance d'objet, pour lesquels la nature de ce qui est prédit est différente. Ils sont décrits dans le tableau ci-dessous :

<br>
+ + +**51. [Image classification, Classification w. localization, Detection]** + +⟶ [Classification d'image, Classification avec localisation, Détection] + +
+ + +**52. [Teddy bear, Book]** + +⟶ [Ours en peluche, Livre] + +
+ + +**53. [Classifies a picture, Predicts probability of object, Detects an object in a picture, Predicts probability of object and where it is located, Detects up to several objects in a picture, Predicts probabilities of objects and where they are located]** + +⟶ [Classifie une image, Prédit la probabilité d'un objet, Détecte un objet dans une image, Prédit la probabilité de présence d'un objet et où il est situé, Peut détecter plusieurs objets dans une image, Prédit les probabilités de présence des objets et où ils sont situés] + +
+ + +**54. [Traditional CNN, Simplified YOLO, R-CNN, YOLO, R-CNN]** + +⟶ [CNN traditionnel, YOLO simplifié, R-CNN, YOLO, R-CNN] + +
+ + +**55. Detection ― In the context of object detection, different methods are used depending on whether we just want to locate the object or detect a more complex shape in the image. The two main ones are summed up in the table below:** + +⟶ Détection ― Dans le contexte de la détection d'objet, des méthodes différentes sont utilisées selon si l'on veut juste localiser l'objet ou alors détecter une forme plus complexe dans l'image. Les deux méthodes principales sont résumées dans le tableau ci-dessous : + +
+ + +**56. [Bounding box detection, Landmark detection]** + +⟶ [Détection de zone délimitante, Détection de forme complexe] + +
+ + +**57. [Detects the part of the image where the object is located, Detects a shape or characteristics of an object (e.g. eyes), More granular]** + +⟶ [Détecte la partie de l'image où l'objet est situé, Détecte la forme ou les caractéristiques d'un objet (e.g. yeux), Plus granulaire] + +
+ + +**58. [Box of center (bx,by), height bh and width bw, Reference points (l1x,l1y), ..., (lnx,lny)]** + +⟶ [Zone de centre (bx,by), hauteur bh et largeur bw, Points de référence (l1x,l1y), ..., (lnx,lny)] + +
+ + +**59. Intersection over Union ― Intersection over Union, also known as IoU, is a function that quantifies how correctly positioned a predicted bounding box Bp is over the actual bounding box Ba. It is defined as:** + +⟶ Intersection sur Union ― Intersection sur Union (en anglais Intersection over Union), aussi appelé IoU, est une fonction qui quantifie à quel point la zone délimitante prédite Bp est correctement positionnée par rapport à la zone délimitante vraie Ba. Elle est définie de la manière suivante : + +
+ + +**60. Remark: we always have IoU∈[0,1]. By convention, a predicted bounding box Bp is considered as being reasonably good if IoU(Bp,Ba)⩾0.5.** + +⟶ Remarque : on a toujours IoU∈[0,1]. Par convention, la prédiction Bp d'une zone délimitante est considérée comme étant satisfaisante si l'on a IoU(Bp,Ba)⩾0.5. + +
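The IoU definition above can be computed directly; this sketch assumes boxes given by their (x1, y1, x2, y2) corners rather than the center/height/width parametrization used elsewhere in the text:

```python
def iou(bp, ba):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # intersection rectangle (empty if the boxes do not overlap)
    ix = max(0, min(bp[2], ba[2]) - max(bp[0], ba[0]))
    iy = max(0, min(bp[3], ba[3]) - max(bp[1], ba[1]))
    inter = ix * iy
    area_p = (bp[2] - bp[0]) * (bp[3] - bp[1])
    area_a = (ba[2] - ba[0]) * (ba[3] - ba[1])
    return inter / (area_p + area_a - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143
```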
+ + +**61. Anchor boxes ― Anchor boxing is a technique used to predict overlapping bounding boxes. In practice, the network is allowed to predict more than one box simultaneously, where each box prediction is constrained to have a given set of geometrical properties. For instance, the first prediction can potentially be a rectangular box of a given form, while the second will be another rectangular box of a different geometrical form.** + +⟶ Zone d'accroche ― La technique des zones d'accroche (en anglais anchor boxing) sert à prédire des zones délimitantes qui se chevauchent. En pratique, on permet au réseau de prédire plus d'une zone délimitante simultanément, où chaque zone prédite doit respecter une forme géométrique particulière. Par exemple, la première prédiction peut potentiellement être une zone rectangulaire d'une forme donnée, tandis qu'une seconde prédiction doit être une zone rectangulaire d'une autre forme. + +

**62. Non-max suppression ― The non-max suppression technique aims at removing duplicate overlapping bounding boxes of a same object by selecting the most representative ones. After having removed all boxes having a probability prediction lower than 0.6, the following steps are repeated while there are boxes remaining:**

⟶ Suppression non-max ― La technique de suppression non-max (en anglais non-max suppression) a pour but d'enlever des zones délimitantes qui se chevauchent et qui prédisent un seul et même objet, en sélectionnant les zones les plus représentatives. Après avoir enlevé toutes les zones ayant une probabilité prédite de moins de 0.6, les étapes suivantes sont répétées tant qu'il reste des zones :

<br>
+ + +**63. [For a given class, Step 1: Pick the box with the largest prediction probability., Step 2: Discard any box having an IoU⩾0.5 with the previous box.]** + +⟶ [Pour une classe donnée, Étape 1 : Choisir la zone ayant la plus grande probabilité de prédiction., Étape 2 : Enlever toute zone ayant IoU⩾0.5 avec la zone choisie précédemment.] + +
+ + +**64. [Box predictions, Box selection of maximum probability, Overlap removal of same class, Final bounding boxes]** + +⟶ [Zones prédites, Sélection de la zone de probabilité maximum, Suppression des chevauchements d'une même classe, Zones délimitantes finales] + +
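The two steps above translate into a short loop. This is a sketch for a single class, with a self-contained corner-based IoU helper; the function names and the box representation (probability, (x1, y1, x2, y2)) are our own:

```python
def iou(a, b):
    # intersection over union of (x1, y1, x2, y2) boxes
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

def non_max_suppression(boxes, iou_thresh=0.5, p_min=0.6):
    """boxes: list of (probability, (x1, y1, x2, y2)) for a given class."""
    # first discard boxes whose prediction probability is below p_min
    boxes = sorted([b for b in boxes if b[0] >= p_min], reverse=True)
    kept = []
    while boxes:
        best = boxes.pop(0)            # step 1: largest prediction probability
        kept.append(best)
        boxes = [b for b in boxes      # step 2: discard IoU >= threshold overlaps
                 if iou(b[1], best[1]) < iou_thresh]
    return kept

preds = [(0.9, (0, 0, 10, 10)), (0.8, (1, 1, 10, 10)),
         (0.7, (20, 20, 30, 30)), (0.4, (0, 0, 5, 5))]
# keeps the 0.9 and 0.7 boxes: 0.8 overlaps the 0.9 box, 0.4 is below 0.6
print(non_max_suppression(preds))
```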
+ + +**65. YOLO ― You Only Look Once (YOLO) is an object detection algorithm that performs the following steps:** + +⟶ YOLO ― L'algorithme You Only Look Once (YOLO) est un algorithme de détection d'objet qui fonctionne de la manière suivante : + +
+ + +**66. [Step 1: Divide the input image into a G×G grid., Step 2: For each grid cell, run a CNN that predicts y of the following form:, repeated k times]** + +⟶ [Étape 1 : Diviser l'image d'entrée en une grille de taille G×G., Étape 2 : Pour chaque cellule, faire tourner un CNN qui prédit y de la forme suivante :, répété k fois] + +

**67. where pc is the probability of detecting an object, bx,by,bh,bw are the properties of the detected bounding box, c1,...,cp is a one-hot representation of which of the p classes were detected, and k is the number of anchor boxes.**

⟶ où pc est la probabilité de détecter un objet, bx,by,bh,bw sont les propriétés de la zone délimitante détectée, c1,...,cp est une représentation binaire (en anglais one-hot representation) indiquant laquelle des p classes a été détectée, et k est le nombre de zones d'accroche.

<br>

**68. Step 3: Run the non-max suppression algorithm to remove any potential duplicate overlapping bounding boxes.**

⟶ Étape 3 : Faire tourner l'algorithme de suppression non-max pour enlever les éventuelles zones délimitantes redondantes qui se chevauchent.

<br>
+ + +**69. [Original image, Division in GxG grid, Bounding box prediction, Non-max suppression]** + +⟶ [Image originale, Division en une grille de taille GxG, Prédiction de zone délimitante, Suppression non-max] + +

**70. Remark: when pc=0, then the network does not detect any object. In that case, the corresponding predictions bx,...,cp have to be ignored.**

⟶ Remarque : lorsque pc=0, le réseau ne détecte aucun objet. Dans ce cas, les prédictions correspondantes bx,...,cp doivent être ignorées.

<br>

**71. R-CNN ― Region with Convolutional Neural Networks (R-CNN) is an object detection algorithm that first segments the image to find potential relevant bounding boxes and then run the detection algorithm to find most probable objects in those bounding boxes.**

⟶ R-CNN ― L'algorithme de région avec des réseaux de neurones convolutionnels (en anglais Region with Convolutional Neural Networks) (R-CNN) est un algorithme de détection d'objet qui segmente d'abord l'image d'entrée pour trouver des zones délimitantes pertinentes, puis fait tourner un algorithme de détection pour trouver les objets les plus susceptibles de se trouver dans ces zones délimitantes.

<br>
+ + +**72. [Original image, Segmentation, Bounding box prediction, Non-max suppression]** + +⟶ [Image originale, Segmentation, Prédiction de zone délimitante, Suppression non-max] + +
+ + +**73. Remark: although the original algorithm is computationally expensive and slow, newer architectures enabled the algorithm to run faster, such as Fast R-CNN and Faster R-CNN.** + +⟶ Remarque : bien que l'algorithme original soit lent et coûteux en temps de calcul, de nouvelles architectures ont permis de faire tourner l'algorithme plus rapidement, tels que le Fast R-CNN et le Faster R-CNN. + +
+ + +**74. Face verification and recognition** + +⟶ Vérification et reconnaissance de visage + +
+ + +**75. Types of models ― Two main types of model are summed up in table below:** + +⟶ Types de modèles ― Deux principaux types de modèle sont récapitulés dans le tableau ci-dessous : + +
+ + +**76. [Face verification, Face recognition, Query, Reference, Database]** + +⟶ [Vérification de visage, Reconnaissance de visage, Requête, Référence, Base de données] + +

**77. [Is this the correct person?, One-to-one lookup, Is this one of the K persons in the database?, One-to-many lookup]**

⟶ [Est-ce la bonne personne ?, Recherche un-à-un, Est-ce une des K personnes dans la base de données ?, Recherche un-à-plusieurs]

<br>

**78. One Shot Learning ― One Shot Learning is a face verification algorithm that uses a limited training set to learn a similarity function that quantifies how different two given images are. The similarity function applied to two images is often noted d(image 1,image 2).**

⟶ Apprentissage en un coup ― L'apprentissage en un coup (en anglais One Shot Learning) est un algorithme de vérification de visage qui utilise un training set de petite taille pour apprendre une fonction de similarité qui quantifie à quel point deux images données sont différentes. La fonction de similarité appliquée à deux images est souvent notée d(image 1,image 2).

<br>

**79. Siamese Network ― Siamese Networks aim at learning how to encode images to then quantify how different two images are. For a given input image x(i), the encoded output is often noted as f(x(i)).**

⟶ Réseaux siamois ― Les réseaux siamois (en anglais Siamese Networks) ont pour but d'apprendre comment encoder des images pour quantifier le degré de différence de deux images données. Pour une image d'entrée donnée x(i), l'encodage de sortie est souvent noté f(x(i)).

<br>
+ + +**80. Triplet loss ― The triplet loss ℓ is a loss function computed on the embedding representation of a triplet of images A (anchor), P (positive) and N (negative). The anchor and the positive example belong to a same class, while the negative example to another one. By calling α∈R+ the margin parameter, this loss is defined as follows:** + +⟶ Loss triple ― Le loss triple (en anglais triplet loss) ℓ est une fonction de loss calculée sur une représentation encodée d'un triplet d'images A (accroche), P (positif), et N (négatif). L'exemple d'accroche et l'exemple positif appartiennent à la même classe, tandis que l'exemple négatif appartient à une autre. En notant α∈R+ le paramètre de marge, le loss est défini de la manière suivante : + +
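Taking d to be the squared Euclidean distance between encodings (a common choice, not specified above), the triplet loss ℓ = max(d(A,P) − d(A,N) + α, 0) can be sketched as:

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """Triplet loss on encodings f(A), f(P), f(N) with margin alpha (sketch)."""
    d_ap = np.sum((f_a - f_p) ** 2)  # anchor-positive squared distance
    d_an = np.sum((f_a - f_n) ** 2)  # anchor-negative squared distance
    return max(d_ap - d_an + alpha, 0.0)

f_a = np.array([0.0, 0.0])
f_p = np.array([0.1, 0.0])  # same class: close to the anchor
f_n = np.array([1.0, 1.0])  # other class: far from the anchor
print(triplet_loss(f_a, f_p, f_n))  # 0.0: the margin is already satisfied
```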
+ + +**81. Neural style transfer** + +⟶ Transfert de style neuronal + +
+ + +**82. Motivation ― The goal of neural style transfer is to generate an image G based on a given content C and a given style S.** + +⟶ Motivation ― Le but du transfert de style neuronal (en anglais neural style transfer) est de générer une image G à partir d'un contenu C et d'un style S. + +
+ + +**83. [Content C, Style S, Generated image G]** + +⟶ [Contenu C, Style S, Image générée G] + +
+ + +**84. Activation ― In a given layer l, the activation is noted a[l] and is of dimensions nH×nw×nc** + +⟶ Activation ― Dans une couche l donnée, l'activation est notée a[l] et est de dimensions nH×nw×nc + +
+ + +**85. Content cost function ― The content cost function Jcontent(C,G) is used to determine how the generated image G differs from the original content image C. It is defined as follows:** + +⟶ Fonction de coût de contenu ― La fonction de coût de contenu (en anglais content cost function), notée Jcontenu(C,G), est utilisée pour quantifier à quel point l'image générée G diffère de l'image de contenu original C. Elle est définie de la manière suivante : + +

**86. Style matrix ― The style matrix G[l] of a given layer l is a Gram matrix where each of its elements G[l]kk′ quantifies how correlated the channels k and k′ are. It is defined with respect to activations a[l] as follows:**

⟶ Matrice de style ― La matrice de style (en anglais style matrix) G[l] d'une couche l donnée est une matrice de Gram dans laquelle chacun des éléments G[l]kk′ quantifie le degré de corrélation des canaux k et k′. Elle est définie en fonction des activations a[l] de la manière suivante :

<br>

**87. Remark: the style matrix for the style image and the generated image are noted G[l] (S) and G[l] (G) respectively.**

⟶ Remarque : les matrices de style de l'image de style et de l'image générée sont notées G[l] (S) et G[l] (G) respectivement.

<br>
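The style matrix amounts to a channel-by-channel dot product over all spatial positions: G[l]kk′ = Σi,j a[l]ijk a[l]ijk′. For an activation volume of shape (nH, nW, nC) it can be computed as (illustrative sketch):

```python
import numpy as np

def gram_matrix(a):
    """a: activation volume of shape (n_H, n_W, n_C).
    Returns G with G[k, k'] = sum over i, j of a[i, j, k] * a[i, j, k']."""
    n_H, n_W, n_C = a.shape
    flat = a.reshape(n_H * n_W, n_C)  # one row per spatial position (i, j)
    return flat.T @ flat              # (n_C, n_C) channel correlations

a = np.random.rand(4, 4, 3)
G = gram_matrix(a)
print(G.shape)              # (3, 3)
print(np.allclose(G, G.T))  # True: a Gram matrix is symmetric
```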
+ + +**88. Style cost function ― The style cost function Jstyle(S,G) is used to determine how the generated image G differs from the style S. It is defined as follows:** + +⟶ Fonction de coût de style ― La fonction de coût de style (en anglais style cost function), notée Jstyle(S,G), est utilisée pour quantifier à quel point l'image générée G diffère de l'image de style S. Elle est définie de la manière suivante : + +
+ + +**89. Overall cost function ― The overall cost function is defined as being a combination of the content and style cost functions, weighted by parameters α,β, as follows:** + +⟶ Fonction de coût totale ― La fonction de coût totale (en anglais overall cost function) est définie comme étant une combinaison linéaire des fonctions de coût de contenu et de style, pondérées par les paramètres α,β, de la manière suivante : + +
+ + +**90. Remark: a higher value of α will make the model care more about the content while a higher value of β will make it care more about the style.** + +⟶ Remarque : plus α est grand, plus le modèle privilégiera le contenu et plus β est grand, plus le modèle sera fidèle au style. + +
+ + +**91. Architectures using computational tricks** + +⟶ Architectures utilisant des astuces de calcul + +
+ + +**92. Generative Adversarial Network ― Generative adversarial networks, also known as GANs, are composed of a generative and a discriminative model, where the generative model aims at generating the most truthful output that will be fed into the discriminative which aims at differentiating the generated and true image.** + +⟶ Réseau antagoniste génératif ― Les réseaux antagonistes génératifs (en anglais generative adversarial networks), aussi connus sous le nom de GANs, sont composés d'un modèle génératif et d'un modèle discriminatif, où le modèle génératif a pour but de générer des prédictions aussi réalistes que possibles, qui seront ensuite envoyées dans un modèle discriminatif qui aura pour but de différencier une image générée d'une image réelle. + +
+ + +**93. [Training, Noise, Real-world image, Generator, Discriminator, Real Fake]** + +⟶ [Training, Bruit, Image réelle, Générateur, Discriminant, Vrai faux] + +

**94. Remark: use cases using variants of GANs include text to image, music generation and synthesis.**

⟶ Remarque : les cas d'utilisation des variantes de GANs incluent la génération d'image à partir de texte ainsi que la génération et la synthèse de musique.

<br>
+ + +**95. ResNet ― The Residual Network architecture (also called ResNet) uses residual blocks with a high number of layers meant to decrease the training error. The residual block has the following characterizing equation:** + +⟶ ResNet ― L'architecture du réseau résiduel (en anglais Residual Network), aussi appelé ResNet, utilise des blocs résiduels avec un nombre élevé de couches et a pour but de réduire l'erreur de training. Le bloc résiduel est caractérisé par l'équation suivante : + +
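A minimal dense sketch of the residual block's characterizing equation, a[l+2] = g(z[l+2] + a[l]): the skip connection adds the block's input back before the final non-linearity. Real ResNets use convolutional layers and batch normalization; the weight matrices here are our own illustrative stand-ins:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def residual_block(a, w1, w2):
    """a[l+2] = g(z[l+2] + a[l]) for a two-layer dense block (sketch)."""
    z1 = relu(w1 @ a)    # first layer with its activation
    z2 = w2 @ z1         # second layer, before the activation
    return relu(z2 + a)  # shortcut: add the block's input a[l], then g

a = np.array([1.0, -2.0, 3.0])
w = np.zeros((3, 3))
# With zero weights the block reduces to relu(a): the shortcut makes it easy
# for deep stacks of such blocks to learn the identity, lowering training error.
print(residual_block(a, w, w))  # [1. 0. 3.]
```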
+ + +**96. Inception Network ― This architecture uses inception modules and aims at giving a try at different convolutions in order to increase its performance through features diversification. In particular, it uses the 1×1 convolution trick to limit the computational burden.** + +⟶ Inception Network ― Cette architecture utilise des modules d'inception et a pour but de tester toute sorte de configuration de convolution pour améliorer sa performance en diversifiant ses attributs. En particulier, elle utilise l'astuce de la convolution 1x1 pour limiter sa complexité de calcul. + +
+ + +**97. The Deep Learning cheatsheets are now available in [target language].** + +⟶ Les pense-bêtes d'apprentissage profond sont maintenant disponibles en français. + +
+ + +**98. Original authors** + +⟶ Auteurs + +
+ + +**99. Translated by X, Y and Z** + +⟶ Traduit par X, Y et Z + +
+ + +**100. Reviewed by X, Y and Z** + +⟶ Relu par X, Y et Z + +
+ + +**101. View PDF version on GitHub** + +⟶ Voir la version PDF sur GitHub + +
+ + +**102. By X and Y** + +⟶ Par X et Y + +
diff --git a/fr/cs-230-deep-learning-tips-and-tricks.md b/fr/cs-230-deep-learning-tips-and-tricks.md new file mode 100644 index 000000000..4c84b51f4 --- /dev/null +++ b/fr/cs-230-deep-learning-tips-and-tricks.md @@ -0,0 +1,457 @@ +**Deep Learning Tips and Tricks translation** + +
+ +**1. Deep Learning Tips and Tricks cheatsheet** + +⟶ Pense-bête de petites astuces d'apprentissage profond + +
+ + +**2. CS 230 - Deep Learning** + +⟶ CS 230 - Apprentissage profond + +
+ + +**3. Tips and tricks** + +⟶ Petites astuces + +
+ + +**4. [Data processing, Data augmentation, Batch normalization]** + +⟶ [Traitement des données, Augmentation des données, Normalisation de lot] + +
+
+
+**5. [Training a neural network, Epoch, Mini-batch, Cross-entropy loss, Backpropagation, Gradient descent, Updating weights, Gradient checking]**
+
+⟶ [Entraînement d'un réseau de neurones, Epoch, Mini-lot, Entropie croisée, Rétropropagation du gradient, Algorithme du gradient, Mise à jour des coefficients, Vérification de gradient]
+
+<br>
+ + +**6. [Parameter tuning, Xavier initialization, Transfer learning, Learning rate, Adaptive learning rates]** + +⟶ [Ajustement de paramètres, Initialisation de Xavier, Apprentissage par transfert, Taux d'apprentissage, Taux d'apprentissage adaptatifs] + +
+ + +**7. [Regularization, Dropout, Weight regularization, Early stopping]** + +⟶ [Régularisation, Abandon, Régularisation des coefficients, Arrêt prématuré] + +
+ + +**8. [Good practices, Overfitting small batch, Gradient checking]** + +⟶ [Bonnes pratiques, Surapprentissage d'un mini-lot, Vérification de gradient] + +
+ + +**9. View PDF version on GitHub** + +⟶ Voir la version PDF sur GitHub + +
+ + +**10. Data processing** + +⟶ Traitement des données + +
+
+
+**11. Data augmentation ― Deep learning models usually need a lot of data to be properly trained. It is often useful to get more data from the existing ones using data augmentation techniques. The main ones are summed up in the table below. More precisely, given the following input image, here are the techniques that we can apply:**
+
+⟶ Augmentation des données ― Les modèles d'apprentissage profond ont typiquement besoin de beaucoup de données afin d'être entraînés convenablement. Il est souvent utile de générer plus de données à partir de celles déjà existantes à l'aide de techniques d'augmentation de données. Les plus utilisées sont résumées dans le tableau ci-dessous. Plus précisément, étant donnée l'image d'entrée suivante, voici les techniques que l'on peut appliquer :
+
+<br>
+ + +**12. [Original, Flip, Rotation, Random crop]** + +⟶ [Original, Symétrie axiale, Rotation, Recadrage aléatoire] + +
+
+
+**13. [Image without any modification, Flipped with respect to an axis for which the meaning of the image is preserved, Rotation with a slight angle, Simulates incorrect horizon calibration, Random focus on one part of the image, Several random crops can be done in a row]**
+
+⟶ [Image sans aucune modification, Symétrie par rapport à un axe pour lequel le sens de l'image est conservé, Rotation avec un petit angle, Reproduit une calibration imparfaite de l'horizon, Concentration aléatoire sur une partie de l'image, Plusieurs recadrages aléatoires peuvent être faits à la suite]
+
+<br>
+ + +**14. [Color shift, Noise addition, Information loss, Contrast change]** + +⟶ [Changement de couleur, Addition de bruit, Perte d'information, Changement de contraste] + +
+
+
+**15. [Nuances of RGB is slightly changed, Captures noise that can occur with light exposure, Addition of noise, More tolerance to quality variation of inputs, Parts of image ignored, Mimics potential loss of parts of image, Luminosity changes, Controls difference in exposition due to time of day]**
+
+⟶ [Les nuances de RGB sont légèrement changées, Capture le bruit qui peut survenir avec de l'exposition lumineuse, Addition de bruit, Plus de tolérance envers la variation de la qualité de l'entrée, Parties de l'image ignorées, Imite des pertes potentielles de parties de l'image, Changement de luminosité, Contrôle la différence d'exposition due à l'heure de la journée]
+
+<br>
+ + +**16. Remark: data is usually augmented on the fly during training.** + +⟶ Remarque : les données sont normalement augmentées à la volée durant l'étape de training. + +
+ + +**17. Batch normalization ― It is a step of hyperparameter γ,β that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of that we want to correct to the batch, it is done as follows:** + +⟶ Normalisation de lot ― La normalisation de lot (en anglais batch normalization) est une étape qui normalise le lot {xi} avec un choix de paramètres γ,β. En notant μB,σ2B la moyenne et la variance de ce que l'on veut corriger du lot, on a : + +
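The normalization step described above can be sketched directly from its formula. This is a minimal pure-Python version for a batch of scalar activations (real frameworks operate per feature channel and keep running statistics for inference):

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    # x_i <- gamma * (x_i - mu_B) / sqrt(sigma2_B + eps) + beta
    m = len(batch)
    mu = sum(batch) / m
    var = sum((x - mu) ** 2 for x in batch) / m
    return [gamma * (x - mu) / math.sqrt(var + eps) + beta for x in batch]

normalized = batch_norm([1.0, 2.0, 3.0, 4.0])
# The output has (approximately) zero mean and unit variance;
# gamma and beta then rescale and shift it, and are learned.
```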
+
+
+**18. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.**
+
+⟶ Ceci est couramment fait après une couche fully connected/de convolution et avant une couche non-linéaire. Elle vise à permettre d'avoir de plus grands taux d'apprentissage et de réduire la forte dépendance à l'initialisation.
+
+<br>
+ + +**19. Training a neural network** + +⟶ Entraîner un réseau de neurones + +
+ + +**20. Definitions** + +⟶ Définitions + +
+ + +**21. Epoch ― In the context of training a model, epoch is a term used to refer to one iteration where the model sees the whole training set to update its weights.** + +⟶ Epoch ― Dans le contexte de l'entraînement d'un modèle, l'epoch est un terme utilisé pour référer à une itération où le modèle voit tout le training set pour mettre à jour ses coefficients. + +
+
+
+**22. Mini-batch gradient descent ― During the training phase, updating weights is usually not based on the whole training set at once due to computation complexities or one data point due to noise issues. Instead, the update step is done on mini-batches, where the number of data points in a batch is a hyperparameter that we can tune.**
+
+⟶ Algorithme du gradient sur mini-lots ― Durant la phase d'entraînement, la mise à jour des coefficients n'est souvent basée ni sur tout le training set d'un coup à cause de temps de calculs coûteux, ni sur un seul point à cause de bruits potentiels. À la place de cela, l'étape de mise à jour est faite sur des mini-lots, où le nombre de points dans un lot est un hyperparamètre que l'on peut régler.
+
+<br>
+ + +**23. Loss function ― In order to quantify how a given model performs, the loss function L is usually used to evaluate to what extent the actual outputs y are correctly predicted by the model outputs z.** + +⟶ Fonction de loss ― Pour pouvoir quantifier la performance d'un modèle donné, la fonction de loss (en anglais loss function) L est utilisée pour évaluer la mesure dans laquelle les sorties vraies y sont correctement prédites par les prédictions du modèle z. + +
+ + +**24. Cross-entropy loss ― In the context of binary classification in neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** + +⟶ Entropie croisée ― Dans le contexte de la classification binaire d'un réseau de neurones, l'entropie croisée (en anglais cross-entropy loss) L(z,y) est couramment utilisée et est définie par : + +
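The cross-entropy formula above is easy to evaluate directly. A minimal sketch, illustrating that confident wrong predictions are penalized much more than confident right ones:

```python
import math

def cross_entropy(z, y):
    # L(z, y) = -[ y log(z) + (1 - y) log(1 - z) ]
    return -(y * math.log(z) + (1 - y) * math.log(1 - z))

loss_good = cross_entropy(0.99, 1)  # confident and right: small loss
loss_bad = cross_entropy(0.01, 1)   # confident and wrong: large loss
```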
+ + +**25. Finding optimal weights** + +⟶ Recherche de coefficients optimaux + +
+ + +**26. Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to each weight w is computed using the chain rule.** + +⟶ Backpropagation ― La backpropagation est une méthode de mise à jour des coefficients d'un réseau de neurones en prenant en compte les sorties vraies et désirées. La dérivée par rapport à chaque coefficient w est calculée en utilisant la règle de la chaîne. + +
+ + +**27. Using this method, each weight is updated with the rule:** + +⟶ En utilisant cette méthode, chaque coefficient est mis à jour par : + +
+ + +**28. Updating weights ― In a neural network, weights are updated as follows:** + +⟶ Mettre à jour les coefficients ― Dans un réseau de neurones, les coefficients sont mis à jour par : + +
+ + +**29. [Step 1: Take a batch of training data and perform forward propagation to compute the loss, Step 2: Backpropagate the loss to get the gradient of the loss with respect to each weight, Step 3: Use the gradients to update the weights of the network.]** + +⟶ [Étape 1 : Prendre un lot de training data et effectuer une forward propagation pour calculer le loss, Étape 2 : Backpropaguer le loss pour obtenir le gradient du loss par rapport à chaque coefficient, Étape 3 : Utiliser les gradients pour mettre à jour les coefficients du réseau.] + +
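The three steps above can be sketched on a toy one-weight model. The dataset and learning rate here are made up for illustration; the model is y = w·x with squared loss, whose gradient is derived by hand in place of a real backward pass:

```python
# Toy model y = w * x trained on a tiny (hypothetical) dataset
# generated by y = 2x, using squared loss.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, alpha = 0.0, 0.05

for _ in range(200):
    # Step 1: forward propagation, L = mean of (w*x - y)^2 over the batch.
    # Step 2: backpropagation, dL/dw = mean of 2*(w*x - y)*x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Step 3: use the gradient to update the weight.
    w -= alpha * grad
# w converges toward the true slope 2.0
```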
+ + +**30. [Forward propagation, Backpropagation, Weights update]** + +⟶ [Forward propagation, Backpropagation, Mise à jour des coefficients] + +
+ + +**31. Parameter tuning** + +⟶ Réglage des paramètres + +
+ + +**32. Weights initialization** + +⟶ Initialisation des coefficients + +
+ + +**33. Xavier initialization ― Instead of initializing the weights in a purely random manner, Xavier initialization enables to have initial weights that take into account characteristics that are unique to the architecture.** + +⟶ Initialization de Xavier ― Au lieu de laisser les coefficients s'initialiser de manière purement aléatoire, l'initialisation de Xavier permet d'avoir des coefficients initiaux qui prennent en compte les caractéristiques uniques de l'architecture. + +
+
+
+**33. Xavier initialization ― Instead of initializing the weights in a purely random manner, Xavier initialization enables to have initial weights that take into account characteristics that are unique to the architecture.**
+
+⟶ Initialisation de Xavier ― Au lieu de laisser les coefficients s'initialiser de manière purement aléatoire, l'initialisation de Xavier permet d'avoir des coefficients initiaux qui prennent en compte les caractéristiques uniques de l'architecture.
+
+<br>
+
+
+**34. Transfer learning ― Training a deep learning model requires a lot of data and more importantly a lot of time. It is often useful to take advantage of pre-trained weights on huge datasets that took days/weeks to train, and leverage it towards our use case. Depending on how much data we have at hand, here are the different ways to leverage this:**
+
+⟶ Apprentissage par transfert ― Entraîner un modèle d'apprentissage profond requiert beaucoup de données et surtout beaucoup de temps. Il est souvent utile de profiter de coefficients pré-entraînés sur d'énormes jeux de données qui ont pris des jours/semaines à entraîner, et d'en tirer parti pour notre cas d'usage. Selon la quantité de données que l'on a sous la main, voici différentes manières d'utiliser cette méthode :
+
+<br>
+ + +**36. [Small, Medium, Large]** + +⟶ [Petit, Moyen, Grand] + +
+ + +**37. [Freezes all layers, trains weights on softmax, Freezes most layers, trains weights on last layers and softmax, Trains weights on layers and softmax by initializing weights on pre-trained ones]** + +⟶ [Gèle toutes les couches, entraîne les coefficients du softmax, Gèle la plupart des couches, entraîne les coefficients des dernières couches et du softmax, Entraîne les coefficients des couches et du softmax en initialisant les coefficients sur ceux qui ont été pré-entraînés] + +
+ + +**38. Optimizing convergence** + +⟶ Optimisation de la convergence + +
+ + +**39. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. It can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.** + +⟶ Taux d'apprentissage ― Le taux d'apprentissage (en anglais learning rate), souvent noté α ou η, indique la vitesse à laquelle les coefficients sont mis à jour. Il peut être fixe ou variable. La méthode actuelle la plus populaire est appelée Adam, qui est une méthode faisant varier le taux d'apprentissage. + +
+ + +**40. Adaptive learning rates ― Letting the learning rate vary when training a model can reduce the training time and improve the numerical optimal solution. While Adam optimizer is the most commonly used technique, others can also be useful. They are summed up in the table below:** + +⟶ Taux d'apprentissage adaptatifs ― Laisser le taux d'apprentissage varier pendant la phase d'entraînement du modèle peut réduire le temps d'entraînement et améliorer la qualité de la solution numérique optimale. Bien que la méthode d'Adam est la plus utilisée, d'autres peuvent aussi être utiles. Les différentes méthodes sont récapitulées dans le tableau ci-dessous : + +
+
+
+**40. Adaptive learning rates ― Letting the learning rate vary when training a model can reduce the training time and improve the numerical optimal solution. While Adam optimizer is the most commonly used technique, others can also be useful. They are summed up in the table below:**
+
+⟶ Taux d'apprentissage adaptatifs ― Laisser le taux d'apprentissage varier pendant la phase d'entraînement du modèle peut réduire le temps d'entraînement et améliorer la qualité de la solution numérique optimale. Bien que la méthode d'Adam soit la plus utilisée, d'autres peuvent aussi être utiles. Les différentes méthodes sont récapitulées dans le tableau ci-dessous :
+
+<br>
+
+
+**41. [Method, Explanation, Update of w, Update of b]**
+
+⟶ [Méthode, Explication, Mise à jour de w, Mise à jour de b]
+
+<br>
+ + +**43. [RMSprop, Root Mean Square propagation, Speeds up learning algorithm by controlling oscillations]** + +⟶ [RMSprop, Root Mean Square propagation, Accélère l'algorithme d'apprentissage en contrôlant les oscillations] + +
+ + +**44. [Adam, Adaptive Moment estimation, Most popular method, 4 parameters to tune]** + +⟶ [Adam, Adaptive Moment estimation, Méthode la plus populaire, 4 paramètres à régler] + +
+ + +**45. Remark: other methods include Adadelta, Adagrad and SGD.** + +⟶ Remarque : parmi les autres méthodes existantes, on trouve Adadelta, Adagrad et SGD. + +
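To make the Adam row above concrete, here is a scalar sketch of one Adam update, using the commonly cited default hyperparameters (the learning rate and the toy loss L(w) = w² are chosen for illustration only):

```python
import math

def adam_step(w, grad, m, v, t, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponentially decayed first/second moment estimates of the gradient,
    # followed by bias correction, then the parameter update.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - alpha * m_hat / (math.sqrt(v_hat) + eps), m, v

# Minimize L(w) = w^2 (gradient 2w) starting from w = 1.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 301):
    w, m, v = adam_step(w, 2 * w, m, v, t)
# w approaches the minimum at 0
```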
+ + +**46. Regularization** + +⟶ Régularisation + +
+
+
+**47. Dropout ― Dropout is a technique used in neural networks to prevent overfitting the training data by dropping out neurons with probability p>0. It forces the model to avoid relying too much on particular sets of features.**
+
+⟶ Dropout ― Le dropout est une technique destinée à empêcher le sur-ajustement sur les données de training en abandonnant des unités dans un réseau de neurones avec une probabilité p>0. Cela force le modèle à éviter de trop s'appuyer sur un ensemble particulier de features.
+
+<br>
+ + +**48. Remark: most deep learning frameworks parametrize dropout through the 'keep' parameter 1−p.** + +⟶ Remarque : la plupart des frameworks d'apprentissage profond paramétrisent le dropout à travers le paramètre 'garder' 1-p. + +
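A minimal sketch of inverted dropout, the variant behind the 'keep' parameter 1−p mentioned above: surviving units are rescaled by the keep probability so that expected activations are unchanged, and nothing is dropped at test time:

```python
import random

def dropout(activations, p_drop, training=True):
    # Inverted dropout: drop each unit with probability p_drop and divide
    # survivors by keep = 1 - p_drop (the 'keep' parameter) so that the
    # expected activation is unchanged. Disabled at test time.
    if not training or p_drop == 0.0:
        return list(activations)
    keep = 1.0 - p_drop
    return [a / keep if random.random() < keep else 0.0 for a in activations]

random.seed(0)
out = dropout([1.0] * 10000, p_drop=0.3)
kept_fraction = sum(1 for a in out if a != 0.0) / len(out)  # close to 0.7
```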
+ + +**49. Weight regularization ― In order to make sure that the weights are not too large and that the model is not overfitting the training set, regularization techniques are usually performed on the model weights. The main ones are summed up in the table below:** + +⟶ Régularisation de coefficient ― Pour s'assurer que les coefficients ne sont pas trop grands et que le modèle ne sur-ajuste pas sur le training set, on utilise des techniques de régularisation sur les coefficients du modèle. Les techniques principales sont résumées dans le tableau suivant : + +
+ + +**50. [LASSO, Ridge, Elastic Net]** + +⟶ [LASSO, Ridge, Elastic Net] + +
+ +**50 bis. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** + +⟶ [Réduit les coefficients à 0, Bon pour la sélection de variables, Rend les coefficients plus petits, Compromis entre la sélection de variables et la réduction de la taille des coefficients] + +
+ +**51. Early stopping ― This regularization technique stops the training process as soon as the validation loss reaches a plateau or starts to increase.** + +⟶ Arrêt prématuré ― L'arrêt prématuré (en anglais early stopping) est une technique de régularisation qui consiste à stopper l'étape d'entraînement dès que le loss de validation atteint un plateau ou commence à augmenter. + +
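The stopping rule above can be sketched as a short loop. The validation losses below are hypothetical, and `patience` (how many non-improving epochs to tolerate) is a common practical refinement:

```python
# Minimal early-stopping loop on hypothetical validation losses:
# stop once the loss has not improved for `patience` consecutive epochs.
val_losses = [0.9, 0.7, 0.6, 0.55, 0.56, 0.57, 0.58, 0.59]
patience = 2

best, wait, stopped_at = float("inf"), 0, None
for epoch, loss in enumerate(val_losses):
    if loss < best:
        best, wait = loss, 0    # improvement: reset the counter
    else:
        wait += 1
        if wait >= patience:
            stopped_at = epoch  # plateau or increase: stop training here
            break
```

In practice one also restores the weights from the best epoch rather than the last one.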
+ + +**52. [Error, Validation, Training, early stopping, Epochs]** + +⟶ [Erreur, Validation, Training, arrêt prématuré, Epochs] + +
+ + +**53. Good practices** + +⟶ Bonnes pratiques + +
+
+
+**54. Overfitting small batch ― When debugging a model, it is often useful to make quick tests to see if there is any major issue with the architecture of the model itself. In particular, in order to make sure that the model can be properly trained, a mini-batch is passed inside the network to see if it can overfit on it. If it cannot, it means that the model is either too complex or not complex enough to even overfit on a small batch, let alone a normal-sized training set.**
+
+⟶ Sur-ajuster un mini-lot ― Lorsque l'on débugge un modèle, il est souvent utile de faire de petits tests pour voir s'il y a un gros souci avec l'architecture du modèle lui-même. En particulier, pour s'assurer que le modèle peut être entraîné correctement, un mini-lot est passé dans le réseau pour voir s'il peut sur-ajuster sur lui. Si le modèle ne peut pas le faire, cela signifie qu'il est soit trop complexe, soit pas assez complexe pour sur-ajuster ne serait-ce qu'un mini-lot, et encore moins un training set de taille normale.
+
+<br>
+ + +**55. Gradient checking ― Gradient checking is a method used during the implementation of the backward pass of a neural network. It compares the value of the analytical gradient to the numerical gradient at given points and plays the role of a sanity-check for correctness.** + +⟶ Gradient checking ― La méthode de gradient checking est utilisée durant l'implémentation d'un backward pass d'un réseau de neurones. Elle compare la valeur du gradient analytique par rapport au gradient numérique au niveau de certains points et joue un rôle de vérification élémentaire. + +
+ + +**56. [Type, Numerical gradient, Analytical gradient]** + +⟶ [Type, Gradient numérique, Gradient analytique] + +
+ + +**57. [Formula, Comments]** + +⟶ [Formule, Commentaires] + +
+ + +**58. [Expensive; loss has to be computed two times per dimension, Used to verify correctness of analytical implementation, Trade-off in choosing h not too small (numerical instability) nor too large (poor gradient approximation)]** + +⟶ [Coûteux; le loss doit être calculé deux fois par dimension, Utilisé pour vérifier l'exactitude d'une implémentation analytique, Compromis dans le choix de h entre pas trop petit (instabilité numérique) et pas trop grand (estimation du gradient approximative)] + +
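The comparison in the table above can be sketched on a one-dimensional toy loss, where the analytic gradient is derived by hand and checked against the centered-difference formula:

```python
def loss(x):
    return x ** 3            # toy loss as a function of one weight

def analytic_grad(x):
    return 3 * x ** 2        # hand-derived 'backward pass'

def numeric_grad(x, h=1e-5):
    # Centered difference: the loss is computed twice per dimension,
    # with h neither too small (instability) nor too large (bias).
    return (loss(x + h) - loss(x - h)) / (2 * h)

x = 2.0
rel_error = abs(analytic_grad(x) - numeric_grad(x)) / abs(analytic_grad(x))
# A tiny rel_error is the sanity check that the analytic gradient is right.
```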
+ + +**59. ['Exact' result, Direct computation, Used in the final implementation]** + +⟶ [Résultat 'exact', Calcul direct, Utilisé dans l'implémentation finale] + +
+ + +**60. The Deep Learning cheatsheets are now available in [target language].** + +⟶ Les pense-bêtes d'apprentissage profond sont maintenant disponibles en français. + +
+ +**61. Original authors** + +⟶ Auteurs + +
+ +**62. Translated by X, Y and Z** + +⟶ Traduit par X, Y et Z + +
+ +**63. Reviewed by X, Y and Z** + +⟶ Relu par X, Y et Z + +
+ +**64. View PDF version on GitHub** + +⟶ Voir la version PDF sur GitHub + +
+ +**65. By X and Y** + +⟶ Par X et Y + +
diff --git a/fr/cs-230-recurrent-neural-networks.md b/fr/cs-230-recurrent-neural-networks.md new file mode 100644 index 000000000..e7d8f5343 --- /dev/null +++ b/fr/cs-230-recurrent-neural-networks.md @@ -0,0 +1,678 @@ +**Recurrent Neural Networks translation** + +
+ +**1. Recurrent Neural Networks cheatsheet** + +⟶ Pense-bête de réseaux de neurones récurrents + +
+ + +**2. CS 230 - Deep Learning** + +⟶ CS 230 - Apprentissage profond + +
+ + +**3. [Overview, Architecture structure, Applications of RNNs, Loss function, Backpropagation]** + +⟶ [Vue d'ensemble, Structure d'architecture, Applications des RNNs, Fonction de loss, Backpropagation] + +
+ + +**4. [Handling long term dependencies, Common activation functions, Vanishing/exploding gradient, Gradient clipping, GRU/LSTM, Types of gates, Bidirectional RNN, Deep RNN]** + +⟶ [Dépendances à long terme, Fonctions d'activation communes, Gradient qui disparait/explose, Coupure de gradient, GRU/LSTM, Types de porte, RNN bi-directionnel, RNN profond] + +
+ + +**5. [Learning word representation, Notations, Embedding matrix, Word2vec, Skip-gram, Negative sampling, GloVe]** + +⟶ [Apprentissage de la représentation de mots, Notations, Matrice de représentation, Word2vec, Skip-gram, Échantillonnage négatif, GloVe] + +
+ + +**6. [Comparing words, Cosine similarity, t-SNE]** + +⟶ [Comparaison des mots, Similarité cosinus, t-SNE] + +
+ + +**7. [Language model, n-gram, Perplexity]** + +⟶ [Modèle de langage, n-gram, Perplexité] + +
+ + +**8. [Machine translation, Beam search, Length normalization, Error analysis, Bleu score]** + +⟶ [Traduction machine, Recherche en faisceau, Normalisation de longueur, Analyse d'erreur, Score bleu] + +
+ + +**9. [Attention, Attention model, Attention weights]** + +⟶ [Attention, Modèle d'attention, Coefficients d'attention] + +
+ + +**10. Overview** + +⟶ Vue d'ensemble + +
+ + +**11. Architecture of a traditional RNN ― Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. They are typically as follows:** + +⟶ Architecture d'un RNN traditionnel ― Les réseaux de neurones récurrents (en anglais recurrent neural networks), aussi appelés RNNs, sont une classe de réseaux de neurones qui permettent aux prédictions antérieures d'être utilisées comme entrées, par le biais d'états cachés (en anglais hidden states). Ils sont de la forme suivante : + +
+ + +**12. For each timestep t, the activation a and the output y are expressed as follows:** + +⟶ À l'instant t, l'activation a et la sortie y sont de la forme suivante : + +
+ + +**13. and** + +⟶ et + +
+ + +**14. where Wax,Waa,Wya,ba,by are coefficients that are shared temporally and g1,g2 activation functions.** + +⟶ où Wax,Waa,Wya,ba,by sont des coefficients indépendants du temps et où g1,g2 sont des fonctions d'activation. + +
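A scalar sketch of the two formulas above, with tanh as g1, the identity as g2, and arbitrary illustrative values for the shared coefficients Wax, Waa, Wya, ba, by:

```python
import math

def rnn_step(x_t, a_prev, Wax, Waa, Wya, ba, by):
    # a<t> = g1(Waa a<t-1> + Wax x<t> + ba),  y<t> = g2(Wya a<t> + by)
    # Scalar sketch: g1 = tanh, g2 = identity; weights are shared over time.
    a_t = math.tanh(Waa * a_prev + Wax * x_t + ba)
    y_t = Wya * a_t + by
    return a_t, y_t

a, outputs = 0.0, []
for x in [1.0, 0.5, -0.3]:   # the same coefficients at every timestep
    a, y = rnn_step(x, a, Wax=0.8, Waa=0.5, Wya=1.2, ba=0.0, by=0.1)
    outputs.append(y)
```

The hidden state `a` carries information forward between timesteps, which is what distinguishes the RNN from a feedforward network.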
+ + +**15. The pros and cons of a typical RNN architecture are summed up in the table below:** + +⟶ Les avantages et inconvénients des architectures de RNN traditionnelles sont résumés dans le tableau ci-dessous : + +
+ + +**16. [Advantages, Possibility of processing input of any length, Model size not increasing with size of input, Computation takes into account historical information, Weights are shared across time]** + +⟶ [Avantages, Possibilité de prendre en compte des entrées de toute taille, La taille du modèle n'augmente pas avec la taille de l'entrée, Les calculs prennent en compte les informations antérieures, Les coefficients sont indépendants du temps] + +
+
+
+**17. [Drawbacks, Computation being slow, Difficulty of accessing information from a long time ago, Cannot consider any future input for the current state]**
+
+⟶ [Inconvénients, Le temps de calcul est long, Difficulté d'accéder à des informations d'un passé lointain, Impossibilité de prendre en compte des informations futures pour un état donné]
+
+<br>
+ + +**18. Applications of RNNs ― RNN models are mostly used in the fields of natural language processing and speech recognition. The different applications are summed up in the table below:** + +⟶ Applications des RNNs ― Les modèles RNN sont surtout utilisés dans les domaines du traitement automatique du langage naturel et de la reconnaissance vocale. Le tableau suivant détaille les applications principales à retenir : + +
+ + +**19. [Type of RNN, Illustration, Example]** + +⟶ [Type de RNN, Illustration, Exemple] + +
+ + +**20. [One-to-one, One-to-many, Many-to-one, Many-to-many]** + +⟶ [Un à un, Un à plusieurs, Plusieurs à un, Plusieurs à plusieurs] + +
+ + +**21. [Traditional neural network, Music generation, Sentiment classification, Name entity recognition, Machine translation]** + +⟶ [Réseau de neurones traditionnel, Génération de musique, Classification de sentiment, Reconnaissance d'entité, Traduction machine] + +
+
+
+**22. Loss function ― In the case of a recurrent neural network, the loss function L of all time steps is defined based on the loss at every time step as follows:**
+
+⟶ Fonction de loss ― Dans le contexte des réseaux de neurones récurrents, la fonction de loss L prend en compte le loss à chaque instant t de la manière suivante :
+
+<br>
+ + +**23. Backpropagation through time ― Backpropagation is done at each point in time. At timestep T, the derivative of the loss L with respect to weight matrix W is expressed as follows:** + +⟶ Backpropagation temporelle ― L'étape de backpropagation est appliquée dans la dimension temporelle. À l'instant T, la dérivée du loss L par rapport à la matrice de coefficients W est donnée par : + +
+ + +**24. Handling long term dependencies** + +⟶ Dépendances à long terme + +
+
+
+**25. Commonly used activation functions ― The most common activation functions used in RNN modules are described below:**
+
+⟶ Fonctions d'activation communément utilisées ― Les fonctions d'activation les plus utilisées dans les RNNs sont décrites ci-dessous :
+
+<br>
+ + +**26. [Sigmoid, Tanh, RELU]** + +⟶ [Sigmoïde, Tanh, RELU] + +
+ + +**27. Vanishing/exploding gradient ― The vanishing and exploding gradient phenomena are often encountered in the context of RNNs. The reason why they happen is that it is difficult to capture long term dependencies because of multiplicative gradient that can be exponentially decreasing/increasing with respect to the number of layers.** + +⟶ Gradient qui disparait/explose ― Les phénomènes de gradient qui disparait et qui explose (en anglais vanishing gradient et exploding gradient) sont souvent rencontrés dans le contexte des RNNs. Ceci est dû au fait qu'il est difficile de capturer des dépendances à long terme à cause du gradient multiplicatif qui peut décroître/croître de manière exponentielle en fonction du nombre de couches. + +
+ + +**28. Gradient clipping ― It is a technique used to cope with the exploding gradient problem sometimes encountered when performing backpropagation. By capping the maximum value for the gradient, this phenomenon is controlled in practice.** + +⟶ Coupure de gradient ― Cette technique est utilisée pour atténuer le phénomène de gradient qui explose qui peut être rencontré lors de l'étape de backpropagation. En plafonnant la valeur qui peut être prise par le gradient, ce phénomène est maîtrisé en pratique. + +
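A minimal sketch of norm-based clipping, the usual way of capping the gradient's maximum value described above (the threshold here is arbitrary):

```python
import math

def clip_by_norm(grad, max_norm):
    # Cap the gradient norm at max_norm while preserving its direction.
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > max_norm:
        return [g * max_norm / norm for g in grad]
    return grad

clipped = clip_by_norm([30.0, 40.0], max_norm=5.0)  # norm 50 -> norm 5
small = clip_by_norm([0.3, 0.4], max_norm=5.0)      # left untouched
```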
+ + +**29. clipped** + +⟶ coupé + +
+ + +**30. Types of gates ― In order to remedy the vanishing gradient problem, specific gates are used in some types of RNNs and usually have a well-defined purpose. They are usually noted Γ and are equal to:** + +⟶ Types de porte ― Pour remédier au problème du gradient qui disparait, certains types de porte sont spécifiquement utilisés dans des variantes de RNNs et ont un but bien défini. Les portes sont souvent notées Γ et sont telles que : + +
+ + +**31. where W,U,b are coefficients specific to the gate and σ is the sigmoid function. The main ones are summed up in the table below:** + +⟶ où W,U,b sont des coefficients spécifiques à la porte et σ est une sigmoïde. Les portes à retenir sont récapitulées dans le tableau ci-dessous : + +
+ + +**32. [Type of gate, Role, Used in]** + +⟶ [Type de porte, Rôle, Utilisée dans] + +
+ + +**33. [Update gate, Relevance gate, Forget gate, Output gate]** + +⟶ [Porte d'actualisation, Porte de pertinence, Porte d'oubli, Porte de sortie] + +
+ + +**34. [How much past should matter now?, Drop previous information?, Erase a cell or not?, How much to reveal of a cell?]** + +⟶ [Dans quelle mesure le passé devrait être important ?, Enlever les informations précédentes ?, Enlever une cellule ?, Combien devrait-on révéler d'une cellule ?] + +
+ + +**35. [LSTM, GRU]** + +⟶ [LSTM, GRU] + +
+
+
+**36. GRU/LSTM ― Gated Recurrent Unit (GRU) and Long Short-Term Memory units (LSTM) deal with the vanishing gradient problem encountered by traditional RNNs, with LSTM being a generalization of GRU. Below is a table summing up the characterizing equations of each architecture:**
+
+⟶ GRU/LSTM ― Les unités de porte récurrente (en anglais Gated Recurrent Unit) (GRU) et les unités de mémoire à long/court terme (en anglais Long Short-Term Memory units) (LSTM) atténuent le problème du gradient qui disparait rencontré par les RNNs traditionnels, le LSTM pouvant être vu comme une généralisation du GRU. Le tableau ci-dessous résume les équations caractéristiques de chacune de ces architectures :
+
+<br>
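A scalar sketch of a GRU step built from the gates defined above, with arbitrary illustrative coefficients (real GRUs use weight matrices over vectors; this keeps only the structure of the equations):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gru_step(x, a_prev, p):
    # Scalar GRU sketch with relevance gate Gr and update gate Gu:
    #   Gr = sigmoid(Wr x + Ur a<t-1> + br)
    #   Gu = sigmoid(Wu x + Uu a<t-1> + bu)
    #   a~ = tanh(W x + U (Gr * a<t-1>) + b)
    #   a  = Gu * a~ + (1 - Gu) * a<t-1>
    gr = sigmoid(p["Wr"] * x + p["Ur"] * a_prev + p["br"])
    gu = sigmoid(p["Wu"] * x + p["Uu"] * a_prev + p["bu"])
    a_tilde = math.tanh(p["W"] * x + p["U"] * (gr * a_prev) + p["b"])
    return gu * a_tilde + (1 - gu) * a_prev

# Arbitrary (hypothetical) scalar coefficients for illustration.
params = dict(Wr=0.5, Ur=0.1, br=0.0, Wu=0.5, Uu=0.1, bu=0.0,
              W=1.0, U=0.5, b=0.0)
a = 0.0
for x in [1.0, -0.5, 0.2]:
    a = gru_step(x, a, params)
```

When the update gate is near 0, the state is copied through unchanged, which is what lets gradients survive over long time spans.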
+ + +**37. [Characterization, Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Dependencies]** + +⟶ [Caractérisation, Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Dépendances] + +
+ + +**38. Remark: the sign ⋆ denotes the element-wise multiplication between two vectors.** + +⟶ Remarque : le signe ⋆ dénote le produit de Hadamard entre deux vecteurs. + +
+ + +**39. Variants of RNNs ― The table below sums up the other commonly used RNN architectures:** + +⟶ Variantes des RNNs ― Le tableau ci-dessous récapitule les autres architectures RNN communément utilisées : + +
+ + +**40. [Bidirectional (BRNN), Deep (DRNN)]** + +⟶ [Bi-directionnel (BRNN), Profond (DRNN)] + +
+ + +**41. Learning word representation** + +⟶ Apprentissage de la représentation de mots + +
+ + +**42. In this section, we note V the vocabulary and |V| its size.** + +⟶ Dans cette section, on note V le vocabulaire et |V| sa taille. + +
+ + +**43. Motivation and notations** + +⟶ Motivation et notations + +
+ + +**44. Representation techniques ― The two main ways of representing words are summed up in the table below:** + +⟶ Techniques de représentation ― Les deux manières principales de représenter des mots sont décrits dans le tableau suivant : + +
+
+
+**44. Representation techniques ― The two main ways of representing words are summed up in the table below:**
+
+⟶ Techniques de représentation ― Les deux manières principales de représenter des mots sont décrites dans le tableau suivant :
+
+<br>
+ + +**46. [teddy bear, book, soft]** + +⟶ [ours en peluche, livre, doux] + +
+ + +**47. [Noted ow, Naive approach, no similarity information, Noted ew, Takes into account words similarity]** + +⟶ [Noté ow, Approche naïve, pas d'information de similarité, Noté ew, Prend en compte la similarité des mots] + +
+ + +**48. Embedding matrix ― For a given word w, the embedding matrix E is a matrix that maps its 1-hot representation ow to its embedding ew as follows:** + +⟶ Matrice de représentation ― Pour un mot donné w, la matrice de représentation (en anglais embedding matrix) E est une matrice qui relie une représentation binaire ow à sa représentation correspondante ew de la manière suivante : + +
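The mapping ew = E⊤ow amounts to a row lookup, since the 1-hot vector selects a single row of E. A toy sketch with a hypothetical 4-word vocabulary and made-up embedding values:

```python
# Toy embedding matrix E for a 4-word vocabulary (hypothetical values):
# each row is the learned embedding of one word.
E = [
    [0.1, 0.3],   # word 0
    [0.7, 0.2],   # word 1
    [0.4, 0.9],   # word 2
    [0.0, 0.5],   # word 3
]

def embed(one_hot):
    # Multiplying E^T by a 1-hot vector just picks out the matching row.
    return E[one_hot.index(1)]

e_w = embed([0, 1, 0, 0])
```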
+ + +**49. Remark: learning the embedding matrix can be done using target/context likelihood models.** + +⟶ Remarque : l'apprentissage d'une matrice de représentation peut être effectuée en utilisant des modèles probabilistiques de cible/contexte. + +
+ + +**50. Word embeddings** + +⟶ Représentation de mots + +
+
+
+**51. Word2vec ― Word2vec is a framework aimed at learning word embeddings by estimating the likelihood that a given word is surrounded by other words. Popular models include skip-gram, negative sampling and CBOW.**
+
+⟶ Word2vec ― Word2vec est un ensemble de techniques visant à apprendre comment représenter les mots en estimant la probabilité qu'un mot donné a d'être entouré par d'autres mots. Le skip-gram, l'échantillonnage négatif et le CBOW font partie des modèles les plus populaires.
+
+<br>
+ + +**52. [A cute teddy bear is reading, teddy bear, soft, Persian poetry, art]** + +⟶ [Un ours en peluche mignon est en train de lire, ours en peluche, doux, poésie persane, art] + +
+
+
+**53. [Train network on proxy task, Extract high-level representation, Compute word embeddings]**
+
+⟶ [Entraîner le réseau sur une tâche intermédiaire, Extraire une représentation de haut niveau, Calculer une représentation des mots]
+
+<br>
+
+
+**54. Skip-gram ― The skip-gram word2vec model is a supervised learning task that learns word embeddings by assessing the likelihood of any given target word t happening with a context word c. By noting θt a parameter associated with t, the probability P(t|c) is given by:**
+
+⟶ Skip-gram ― Le skip-gram est un modèle word2vec de type supervisé qui apprend à représenter les mots en évaluant la probabilité qu'un mot cible t donné apparaisse avec un mot contexte c. En notant θt le paramètre associé à t, la probabilité P(t|c) est donnée par :
+
+<br>
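A hedged numerical sketch of the softmax above, with an invented 5-word vocabulary and 3-dimensional embeddings (θ and ec are random stand-ins for learned parameters):

```python
import numpy as np

def skipgram_probs(theta, e_c):
    """P(t|c) for every target t: softmax of the scores θ_t · e_c."""
    scores = theta @ e_c        # one score per vocabulary word
    scores -= scores.max()      # numerical stability
    exp_s = np.exp(scores)
    return exp_s / exp_s.sum()

rng = np.random.default_rng(1)
theta = rng.standard_normal((5, 3))  # θ_t for each of 5 vocabulary words
e_c = rng.standard_normal(3)         # embedding of the context word
p = skipgram_probs(theta, e_c)       # a valid distribution over targets
```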
+
+
+**55. Remark: summing over the whole vocabulary in the denominator of the softmax part makes this model computationally expensive. CBOW is another word2vec model using the surrounding words to predict a given word.**
+
+⟶ Remarque : le fait de sommer sur tout le vocabulaire dans le dénominateur du softmax rend ce modèle coûteux en temps de calcul. CBOW est un autre modèle word2vec utilisant les mots avoisinants pour prédire un mot donné.
+
+<br>
+
+
+**56. Negative sampling ― It is a set of binary classifiers using logistic regressions that aim at assessing how a given context and a given target words are likely to appear simultaneously, with the models being trained on sets of k negative examples and 1 positive example. Given a context word c and a target word t, the prediction is expressed by:**
+
+⟶ Échantillonnage négatif ― Cette méthode utilise un ensemble de classifieurs binaires basés sur des régressions logistiques qui visent à évaluer dans quelle mesure un mot contexte et un mot cible donnés sont susceptibles d'apparaître simultanément, les modèles étant entraînés sur des ensembles de k exemples négatifs et 1 exemple positif. Étant donnés un mot contexte c et un mot cible t, la prédiction est donnée par :
+
+<br>
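A tiny sketch of the logistic prediction σ(θt⊤ec) for one (context, target) pair, with invented 2-dimensional vectors:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Invented 2-d embeddings for a (context, target) pair.
theta_t = [0.4, -0.2]
e_c = [1.0, 0.5]
score = sum(a * b for a, b in zip(theta_t, e_c))  # θ_t · e_c = 0.3
p_positive = sigmoid(score)  # probability the pair truly co-occurs ≈ 0.574
```

Training pushes p_positive toward 1 for the observed pair and toward 0 for the k sampled negative pairs.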
+ + +**57. Remark: this method is less computationally expensive than the skip-gram model.** + +⟶ Remarque : cette méthode est moins coûteuse en calcul par rapport au modèle skip-gram. + +
+ + +**57bis. GloVe ― The GloVe model, short for global vectors for word representation, is a word embedding technique that uses a co-occurence matrix X where each Xi,j denotes the number of times that a target i occurred with a context j. Its cost function J is as follows:** + +⟶ GloVe ― Le modèle GloVe (en anglais global vectors for word representation) est une technique de représentation des mots qui utilise une matrice de co-occurrence X où chaque Xi,j correspond au nombre de fois qu'une cible i se produit avec un contexte j. Sa fonction de coût J est telle que : + +
+
+
+**58. where f is a weighting function such that Xi,j=0⟹f(Xi,j)=0.
+Given the symmetry that e and θ play in this model, the final word embedding e(final)w is given by:**
+
+⟶ où f est une fonction de pondération telle que Xi,j=0⟹f(Xi,j)=0.
+Étant donné la symétrie que e et θ jouent dans ce modèle, la représentation finale du mot e(final)w est donnée par :
+
+<br>
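As an illustrative sketch of the objective (one common form of the weighting f and of the cost; the toy co-occurrence counts are invented, and x_max=100, α=0.75 are the values usually reported):

```python
import numpy as np

def f(x, x_max=100.0, alpha=0.75):
    """Weighting function with f(0) = 0, capping frequent co-occurrences."""
    return 0.0 if x == 0 else min((x / x_max) ** alpha, 1.0)

def glove_cost(theta, e, b, b_prime, X):
    """J = 1/2 Σ f(Xij) (θi·ej + bi + b'j − log Xij)² over nonzero Xij."""
    J = 0.0
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            if X[i, j] > 0:
                err = theta[i] @ e[j] + b[i] + b_prime[j] - np.log(X[i, j])
                J += 0.5 * f(X[i, j]) * err ** 2
    return J

# Invented toy data: 3 words, 2-dimensional embeddings.
rng = np.random.default_rng(2)
theta, e = rng.standard_normal((3, 2)), rng.standard_normal((3, 2))
b, b_prime = rng.standard_normal(3), rng.standard_normal(3)
X = np.array([[0, 4, 1], [4, 0, 2], [1, 2, 0]])  # co-occurrence counts

J = glove_cost(theta, e, b, b_prime, X)
e_final = (e + theta) / 2  # symmetry: e(final)w = (e_w + θ_w) / 2
```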
+
+
+**59. Remark: the individual components of the learned word embeddings are not necessarily interpretable.**
+
+⟶ Remarque : les composantes individuelles des représentations de mots apprises ne sont pas nécessairement interprétables.
+
+<br>
+ + +**60. Comparing words** + +⟶ Comparaison de mots + +
+ + +**61. Cosine similarity ― The cosine similarity between words w1 and w2 is expressed as follows:** + +⟶ Similarité cosinus ― La similarité cosinus (en anglais cosine similarity) entre les mots w1 et w2 est donnée par : + +
+ + +**62. Remark: θ is the angle between words w1 and w2.** + +⟶ Remarque : θ est l'angle entre les mots w1 et w2. + +
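A minimal sketch of the formula cos(θ) = w1·w2 / (‖w1‖‖w2‖), on invented 2-d vectors:

```python
import numpy as np

def cosine_similarity(w1, w2):
    """cos(θ) = w1·w2 / (‖w1‖ ‖w2‖), in [-1, 1]."""
    return float(w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2)))

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])
sim = cosine_similarity(a, b)  # → ≈ 0.7071, i.e. cos(45°)
```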
+
+
+**63. t-SNE ― t-SNE (t-distributed Stochastic Neighbor Embedding) is a technique aimed at reducing high-dimensional embeddings into a lower dimensional space. In practice, it is commonly used to visualize word vectors in the 2D space.**
+
+⟶ t-SNE ― La méthode t-SNE (en anglais t-distributed Stochastic Neighbor Embedding) est une technique visant à réduire une représentation en haute dimension vers un espace de plus faible dimension. En pratique, elle est communément utilisée pour visualiser les vecteurs-mots dans un espace 2D.
+
+<br>
+ + +**64. [literature, art, book, culture, poem, reading, knowledge, entertaining, loveable, childhood, kind, teddy bear, soft, hug, cute, adorable]** + +⟶ [littérature, art, livre, culture, poème, lecture, connaissance, divertissant, aimable, enfance, gentil, ours en peluche, doux, câlin, mignon, adorable] + +
+ + +**65. Language model** + +⟶ Modèle de langage + +
+ + +**66. Overview ― A language model aims at estimating the probability of a sentence P(y).** + +⟶ Vue d'ensemble ― Un modèle de langage vise à estimer la probabilité d'une phrase P(y). + +
+
+
+**67. n-gram model ― This model is a naive approach aiming at quantifying the probability that an expression appears in a corpus by counting its number of appearance in the training data.**
+
+⟶ Modèle n-gram ― Ce modèle consiste en une approche naïve qui vise à quantifier la probabilité qu'une expression apparaisse dans un corpus en comptant son nombre d'apparitions dans les données d'entraînement.
+
+<br>
+
+
+**68. Perplexity ― Language models are commonly assessed using the perplexity metric, also known as PP, which can be interpreted as the inverse probability of the dataset normalized by the number of words T. The perplexity is such that the lower, the better and is defined as follows:**
+
+⟶ Perplexité ― Les modèles de langage sont communément évalués en utilisant la perplexité, aussi notée PP, qui peut être interprétée comme étant la probabilité inverse des données normalisée par le nombre de mots T. La perplexité est telle que plus elle est faible, mieux c'est. Elle est définie de la manière suivante :
+
+<br>
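Equivalently PP = (Π 1/pt)^(1/T) = exp(−(1/T) Σ log pt); a minimal sketch on invented per-word probabilities:

```python
import math

def perplexity(word_probs):
    """PP = (Π 1/p_t)^(1/T) = exp(-(1/T) Σ log p_t); lower is better."""
    T = len(word_probs)
    return math.exp(-sum(math.log(p) for p in word_probs) / T)

# A model assigning probability 0.25 to each of 4 words has PP = 4:
# it is exactly as "perplexed" as a uniform choice among 4 words.
pp = perplexity([0.25] * 4)
```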
+ + +**69. Remark: PP is commonly used in t-SNE.** + +⟶ Remarque : PP est souvent utilisée dans le cadre du t-SNE. + +
+ + +**70. Machine translation** + +⟶ Traduction machine + +
+
+
+**71. Overview ― A machine translation model is similar to a language model except it has an encoder network placed before. For this reason, it is sometimes referred as a conditional language model. The goal is to find a sentence y such that:**
+
+⟶ Vue d'ensemble ― Un modèle de traduction machine est similaire à un modèle de langage, avec un réseau encodeur placé en amont. Pour cette raison, ce modèle est parfois appelé modèle de langage conditionnel. Le but est de trouver une phrase y telle que :
+
+<br>
+ + +**72. Beam search ― It is a heuristic search algorithm used in machine translation and speech recognition to find the likeliest sentence y given an input x.** + +⟶ Recherche en faisceau ― Cette technique (en anglais beam search) est un algorithme de recherche heuristique, utilisé dans le cadre de la traduction machine et de la reconnaissance vocale, qui vise à trouver la phrase la plus probable y sachant l'entrée x. + +
+ + +**73. [Step 1: Find top B likely words y<1>, Step 2: Compute conditional probabilities y|x,y<1>,...,y, Step 3: Keep top B combinations x,y<1>,...,y, End process at a stop word]** + +⟶ [Étape 1 : Trouver les B mots les plus probables y<1>, Étape 2 : Calculer les probabilités conditionnelles y|x,y<1>,...,y, Étape 3 : Garder les B combinaisons les plus probables x,y<1>,...,y, Arrêter la procédure à un mot stop] + +
+ + +**74. Remark: if the beam width is set to 1, then this is equivalent to a naive greedy search.** + +⟶ Remarque : si la largeur du faisceau est prise égale à 1, alors ceci est équivalent à un algorithme glouton. + +
+
+
+**75. Beam width ― The beam width B is a parameter for beam search. Large values of B yield to better result but with slower performance and increased memory. Small values of B lead to worse results but is less computationally intensive. A standard value for B is around 10.**
+
+⟶ Largeur du faisceau ― La largeur du faisceau (en anglais beam width) B est un paramètre de la recherche en faisceau. De grandes valeurs de B conduisent à de meilleurs résultats, mais avec un coût mémoire plus important et un temps de calcul plus long. De faibles valeurs de B conduisent à de moins bons résultats mais avec un coût de calcul plus faible. Une valeur de B égale à 10 est standard et est souvent utilisée.
+
+<br>
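The three steps above can be sketched as follows (the `step_probs` table is an invented stand-in for the decoder's conditional distribution P(y<t>|x,y<1>,…); B=2 here):

```python
import math

def beam_search(step_probs, B, eos="<eos>", max_len=10):
    """Keep the B most likely prefixes (by log-probability) at each step.
    `step_probs(prefix)` returns {word: P(word | x, prefix)} — a toy
    stand-in for the decoder network."""
    beams = [((), 0.0)]          # (prefix, log-probability)
    finished = []                # hypotheses that reached the stop word
    for _ in range(max_len):
        candidates = []
        for prefix, lp in beams:
            for w, p in step_probs(prefix).items():
                cand = (prefix + (w,), lp + math.log(p))
                (finished if w == eos else candidates).append(cand)
        if not candidates:
            break
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:B]
    return max(finished + beams, key=lambda c: c[1])

# Invented toy distribution over a 2-word vocabulary plus <eos>.
table = {
    (): {"le": 0.6, "un": 0.4},
    ("le",): {"chat": 0.7, "<eos>": 0.3},
    ("un",): {"chat": 0.9, "<eos>": 0.1},
}
best, logp = beam_search(lambda p: table.get(p, {"<eos>": 1.0}), B=2)
# → ("le", "chat", "<eos>") with probability 0.6 × 0.7 = 0.42
```

With B=1 the same loop reduces to the naive greedy search mentioned in the remark above.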
+
+
+**76. Length normalization ― In order to improve numerical stability, beam search is usually applied on the following normalized objective, often called the normalized log-likelihood objective, defined as:**
+
+⟶ Normalisation de longueur ― Afin d'améliorer la stabilité numérique, la recherche en faisceau est souvent appliquée à l'objectif normalisé suivant, souvent appelé objectif de log-vraisemblance normalisé, défini par :
+
+<br>
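A minimal sketch of the normalized objective (1/T^α) Σ log P(y<t>|x,y<1>,…); α=0.7 is just an assumed value inside the typical 0.5–1 range:

```python
import math

def normalized_log_likelihood(word_probs, alpha=0.7):
    """(1/T^α) Σ log p_t — dividing by T^α softens the penalty
    that raw log-likelihood puts on longer sentences."""
    T = len(word_probs)
    return sum(math.log(p) for p in word_probs) / (T ** alpha)

probs = [0.5, 0.5, 0.5, 0.5]          # invented per-word probabilities
raw = sum(math.log(p) for p in probs)  # un-normalized objective
norm = normalized_log_likelihood(probs)
```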
+
+
+**77. Remark: the parameter α can be seen as a softener, and its value is usually between 0.5 and 1.**
+
+⟶ Remarque : le paramètre α peut être vu comme un adoucisseur, et sa valeur est souvent comprise entre 0.5 et 1.
+
+<br>
+ + +**78. Error analysis ― When obtaining a predicted translation ˆy that is bad, one can wonder why we did not get a good translation y∗ by performing the following error analysis:** + +⟶ Analyse d'erreur ― Lorsque l'on obtient une mauvaise traduction prédite ˆy, on peut se demander la raison pour laquelle l'algorithme n'a pas obtenu une bonne traduction y∗ en faisant une analyse d'erreur de la manière suivante : + +
+ + +**79. [Case, Root cause, Remedies]** + +⟶ [Cas, Cause, Remèdes] + +
+
+
+**80. [Beam search faulty, RNN faulty, Increase beam width, Try different architecture, Regularize, Get more data]**
+
+⟶ [Recherche en faisceau défectueuse, RNN défectueux, Augmenter la largeur du faisceau, Essayer une architecture différente, Régulariser, Obtenir plus de données]
+
+<br>
+ + +**81. Bleu score ― The bilingual evaluation understudy (bleu) score quantifies how good a machine translation is by computing a similarity score based on n-gram precision. It is defined as follows:** + +⟶ Score bleu ― Le score bleu (en anglais bilingual evaluation understudy) a pour but de quantifier à quel point une traduction est bonne en calculant un score de similarité basé sur une précision n-gram. Il est défini de la manière suivante : + +
+ + +**82. where pn is the bleu score on n-gram only defined as follows:** + +⟶ où pn est le score bleu uniquement basé sur les n-gram, défini par : + +
+ + +**83. Remark: a brevity penalty may be applied to short predicted translations to prevent an artificially inflated bleu score.** + +⟶ Remarque : une pénalité de brièveté peut être appliquée aux traductions prédites courtes pour empêcher que le score bleu soit artificiellement haut. + +
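A hedged sketch of the clipped n-gram precision pn and of one common form of the brevity penalty (the example sentences are invented):

```python
import math
from collections import Counter

def ngram_precision(candidate, reference, n):
    """p_n: clipped count of candidate n-grams present in the reference,
    divided by the total number of candidate n-grams."""
    def ngrams(words):
        return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

def brevity_penalty(cand_len, ref_len):
    """Penalize candidates shorter than the reference."""
    return 1.0 if cand_len > ref_len else math.exp(1 - ref_len / cand_len)

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
p1 = ngram_precision(cand, ref, 1)  # 5 of the 6 unigrams match → 5/6
```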
+ + +**84. Attention** + +⟶ Attention + +
+
+
+**85. Attention model ― This model allows an RNN to pay attention to specific parts of the input that is considered as being important, which improves the performance of the resulting model in practice. By noting α the amount of attention that the output y should pay to the activation a and c the context at time t, we have:**
+
+⟶ Modèle d'attention ― Le modèle d'attention (en anglais attention model) permet à un RNN de porter attention à des parties spécifiques de l'entrée considérées comme importantes, ce qui améliore en pratique la performance du modèle final. En notant α la quantité d'attention que la sortie y devrait porter à l'activation a et c le contexte à l'instant t, on a :
+
+<br>
+ + +**86. with** + +⟶ avec + +
+ + +**87. Remark: the attention scores are commonly used in image captioning and machine translation.** + +⟶ Remarque : les scores d'attention sont communément utilisés dans la génération de légende d'image ainsi que dans la traduction machine. + +
+ + +**88. A cute teddy bear is reading Persian literature.** + +⟶ Un ours en peluche mignon est en train de lire de la littérature persane. + +
+
+
+**89. Attention weight ― The amount of attention that the output y should pay to the activation a is given by α computed as follows:**
+
+⟶ Coefficient d'attention ― La quantité d'attention que la sortie y devrait porter à l'activation a est donnée par α, calculé de la manière suivante :
+
+<br>
+ + +**90. Remark: computation complexity is quadratic with respect to Tx.** + +⟶ Remarque : la complexité de calcul est quadratique par rapport à Tx. + +
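A minimal sketch of the attention computation: softmax the energies e<t,t'> over the Tx input positions to get the weights α, then form the context as the α-weighted sum of the activations (the energies and activations here are invented):

```python
import numpy as np

def attention_context(energies, activations):
    """α<t'> = softmax over e<t,t'>; context c<t> = Σ α<t'> a<t'>."""
    e = np.asarray(energies, dtype=float)
    alpha = np.exp(e - e.max())    # stabilized softmax
    alpha /= alpha.sum()
    c = alpha @ np.asarray(activations, dtype=float)
    return alpha, c

# Invented energies over Tx=3 input positions, 2-dimensional activations.
alpha, c = attention_context(
    [1.0, 1.0, 1.0],
    [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
)
# Equal energies → uniform weights α = 1/3, so c is the mean activation.
```

The double loop over output time t and input positions t' is what makes the computation quadratic in Tx, as the remark above notes.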
+ + +**91. The Deep Learning cheatsheets are now available in [target language].** + +⟶ Les pense-bêtes d'apprentissage profond sont maintenant disponibles en français. + +
+ +**92. Original authors** + +⟶ Auteurs + +
+ +**93. Translated by X, Y and Z** + +⟶ Traduit par X, Y et Z + +
+ +**94. Reviewed by X, Y and Z** + +⟶ Relu par X, Y et Z + +
+ +**95. View PDF version on GitHub** + +⟶ Voir la version PDF sur GitHub + +
+ +**96. By X and Y** + +⟶ Par X et Y + +
diff --git a/he/cheatsheet-deep-learning.md b/he/cheatsheet-deep-learning.md deleted file mode 100644 index a5aa3756c..000000000 --- a/he/cheatsheet-deep-learning.md +++ /dev/null @@ -1,321 +0,0 @@ -**1. Deep Learning cheatsheet** - -⟶ - -
- -**2. Neural Networks** - -⟶ - -
- -**3. Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.** - -⟶ - -
- -**4. Architecture ― The vocabulary around neural networks architectures is described in the figure below:** - -⟶ - -
- -**5. [Input layer, hidden layer, output layer]** - -⟶ - -
- -**6. By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:** - -⟶ - -
- -**7. where we note w, b, z the weight, bias and output respectively.** - -⟶ - -
- -**8. Activation function ― Activation functions are used at the end of a hidden unit to introduce non-linear complexities to the model. Here are the most common ones:** - -⟶ - -
- -**9. [Sigmoid, Tanh, ReLU, Leaky ReLU]** - -⟶ - -
- -**10. Cross-entropy loss ― In the context of neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** - -⟶ - -
- -**11. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. This can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.** - -⟶ - -
- -**12. Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to weight w is computed using chain rule and is of the following form:** - -⟶ - -
- -**13. As a result, the weight is updated as follows:** - -⟶ - -
- -**14. Updating weights ― In a neural network, weights are updated as follows:** - -⟶ - -
- -**15. Step 1: Take a batch of training data.** - -⟶ - -
- -**16. Step 2: Perform forward propagation to obtain the corresponding loss.** - -⟶ - -
- -**17. Step 3: Backpropagate the loss to get the gradients.** - -⟶ - -
- -**18. Step 4: Use the gradients to update the weights of the network.** - -⟶ - -
- -**19. Dropout ― Dropout is a technique meant at preventing overfitting the training data by dropping out units in a neural network. In practice, neurons are either dropped with probability p or kept with probability 1−p** - -⟶ - -
- -**20. Convolutional Neural Networks** - -⟶ - -
- -**21. Convolutional layer requirement ― By noting W the input volume size, F the size of the convolutional layer neurons, P the amount of zero padding, then the number of neurons N that fit in a given volume is such that:** - -⟶ - -
- -**22. Batch normalization ― It is a step of hyperparameter γ,β that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of that we want to correct to the batch, it is done as follows:** - -⟶ - -
- -**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.** - -⟶ - -
- -**24. Recurrent Neural Networks** - -⟶ - -
- -**25. Types of gates ― Here are the different types of gates that we encounter in a typical recurrent neural network:** - -⟶ - -
- -**26. [Input gate, forget gate, gate, output gate]** - -⟶ - -
- -**27. [Write to cell or not?, Erase a cell or not?, How much to write to cell?, How much to reveal cell?]** - -⟶ - -
- -**28. LSTM ― A long short-term memory (LSTM) network is a type of RNN model that avoids the vanishing gradient problem by adding 'forget' gates.** - -⟶ - -
- -**29. Reinforcement Learning and Control** - -⟶ - -
- -**30. The goal of reinforcement learning is for an agent to learn how to evolve in an environment.** - -⟶ - -
- -**31. Definitions** - -⟶ - -
- -**32. Markov decision processes ― A Markov decision process (MDP) is a 5-tuple (S,A,{Psa},γ,R) where:** - -⟶ - -
- -**33. S is the set of states** - -⟶ - -
- -**34. A is the set of actions** - -⟶ - -
- -**35. {Psa} are the state transition probabilities for s∈S and a∈A** - -⟶ - -
- -**36. γ∈[0,1[ is the discount factor** - -⟶ - -
- -**37. R:S×A⟶R or R:S⟶R is the reward function that the algorithm wants to maximize** - -⟶ - -
- -**38. Policy ― A policy π is a function π:S⟶A that maps states to actions.** - -⟶ - -
- -**39. Remark: we say that we execute a given policy π if given a state s we take the action a=π(s).** - -⟶ - -
- -**40. Value function ― For a given policy π and a given state s, we define the value function Vπ as follows:** - -⟶ - -
- -**41. Bellman equation ― The optimal Bellman equations characterizes the value function Vπ∗ of the optimal policy π∗:** - -⟶ - -
- -**42. Remark: we note that the optimal policy π∗ for a given state s is such that:** - -⟶ - -
- -**43. Value iteration algorithm ― The value iteration algorithm is in two steps:** - -⟶ - -
- -**44. 1) We initialize the value:** - -⟶ - -
- -**45. 2) We iterate the value based on the values before:** - -⟶ - -
- -**46. Maximum likelihood estimate ― The maximum likelihood estimates for the state transition probabilities are as follows:** - -⟶ - -
- -**47. times took action a in state s and got to s′** - -⟶ - -
- -**48. times took action a in state s** - -⟶ - -
- -**49. Q-learning ― Q-learning is a model-free estimation of Q, which is done as follows:** - -⟶ - -
- -**50. View PDF version on GitHub** - -⟶ - -
- -**51. [Neural Networks, Architecture, Activation function, Backpropagation, Dropout]** - -⟶ - -
- -**52. [Convolutional Neural Networks, Convolutional layer, Batch normalization]** - -⟶ - -
- -**53. [Recurrent Neural Networks, Gates, LSTM]** - -⟶ - -
- -**54. [Reinforcement learning, Markov decision processes, Value/policy iteration, Approximate dynamic programming, Policy search]** - -⟶ diff --git a/he/cheatsheet-machine-learning-tips-and-tricks.md b/he/cheatsheet-machine-learning-tips-and-tricks.md deleted file mode 100644 index 9712297b8..000000000 --- a/he/cheatsheet-machine-learning-tips-and-tricks.md +++ /dev/null @@ -1,285 +0,0 @@ -**1. Machine Learning tips and tricks cheatsheet** - -⟶ - -
- -**2. Classification metrics** - -⟶ - -
- -**3. In a context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** - -⟶ - -
- -**4. Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:** - -⟶ - -
- -**5. [Predicted class, Actual class]** - -⟶ - -
- -**6. Main metrics ― The following metrics are commonly used to assess the performance of classification models:** - -⟶ - -
- -**7. [Metric, Formula, Interpretation]** - -⟶ - -
- -**8. Overall performance of model** - -⟶ - -
- -**9. How accurate the positive predictions are** - -⟶ - -
- -**10. Coverage of actual positive sample** - -⟶ - -
- -**11. Coverage of actual negative sample** - -⟶ - -
- -**12. Hybrid metric useful for unbalanced classes** - -⟶ - -
- -**13. ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are are summed up in the table below:** - -⟶ - -
- -**14. [Metric, Formula, Equivalent]** - -⟶ - -
- -**15. AUC ― The area under the receiving operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:** - -⟶ - -
- -**16. [Actual, Predicted]** - -⟶ - -
- -**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** - -⟶ - -
- -**18. [Total sum of squares, Explained sum of squares, Residual sum of squares]** - -⟶ - -
- -**19. Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** - -⟶ - -
- -**20. Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** - -⟶ - -
- -**21. where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.** - -⟶ - -
- -**22. Model selection** - -⟶ - -
- -**23. Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** - -⟶ - -
- -**24. [Training set, Validation set, Testing set]** - -⟶ - -
- -**25. [Model is trained, Model is assessed, Model gives predictions]** - -⟶ - -
- -**26. [Usually 80% of the dataset, Usually 20% of the dataset]** - -⟶ - -
- -**27. [Also called hold-out or development set, Unseen data]** - -⟶ - -
- -**28. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** - -⟶ - -
- -**29. Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** - -⟶ - -
- -**30. [Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** - -⟶ - -
- -**31. [Generally k=5 or 10, Case p=1 is called leave-one-out]** - -⟶ - -
- -**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.** - -⟶ - -
- -**33. Regularization ― The regularization procedure aims at avoiding the model to overfit the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:** - -⟶ - -
- -**34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** - -⟶ - -
- -**35. Diagnostics** - -⟶ - -
- -**36. Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.** - -⟶ - -
- -**37. Variance ― The variance of a model is the variability of the model prediction for given data points.** - -⟶ - -
- -**38. Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** - -⟶ - -
- -**39. [Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]** - -⟶ - -
- -**40. [High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]** - -⟶ - -
- -**41. [Complexify model, Add more features, Train longer, Perform regularization, Get more data]** - -⟶ - -
- -**42. Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** - -⟶ - -
- -**43. Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** - -⟶ - -
- -**44. Regression metrics** - -⟶ - -
- -**45. [Classification metrics, confusion matrix, accuracy, precision, recall, F1 score, ROC]** - -⟶ - -
- -**46. [Regression metrics, R squared, Mallow's CP, AIC, BIC]** - -⟶ - -
- -**47. [Model selection, cross-validation, regularization]** - -⟶ - -
- -**48. [Diagnostics, Bias/variance tradeoff, error/ablative analysis]** - -⟶ diff --git a/he/cheatsheet-supervised-learning.md b/he/cheatsheet-supervised-learning.md deleted file mode 100644 index a6b19ea1c..000000000 --- a/he/cheatsheet-supervised-learning.md +++ /dev/null @@ -1,567 +0,0 @@ -**1. Supervised Learning cheatsheet** - -⟶ - -
- -**2. Introduction to Supervised Learning** - -⟶ - -
- -**3. Given a set of data points {x(1),...,x(m)} associated to a set of outcomes {y(1),...,y(m)}, we want to build a classifier that learns how to predict y from x.** - -⟶ - -
- -**4. Type of prediction ― The different types of predictive models are summed up in the table below:** - -⟶ - -
- -**5. [Regression, Classifier, Outcome, Examples]** - -⟶ - -
- -**6. [Continuous, Class, Linear regression, Logistic regression, SVM, Naive Bayes]** - -⟶ - -
- -**7. Type of model ― The different models are summed up in the table below:** - -⟶ - -
- -**8. [Discriminative model, Generative model, Goal, What's learned, Illustration, Examples]** - -⟶ - -
- -**9. [Directly estimate P(y|x), Estimate P(x|y) to then deduce P(y|x), Decision boundary, Probability distributions of the data, Regressions, SVMs, GDA, Naive Bayes]** - -⟶ - -
- -**10. Notations and general concepts** - -⟶ - -
- -**11. Hypothesis ― The hypothesis is noted hθ and is the model that we choose. For a given input data x(i) the model prediction output is hθ(x(i)).** - -⟶ - -
- -**12. Loss function ― A loss function is a function L:(z,y)∈R×Y⟼L(z,y)∈R that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:** - -⟶ - -
- -**13. [Least squared error, Logistic loss, Hinge loss, Cross-entropy]** - -⟶ - -
- -**14. [Linear regression, Logistic regression, SVM, Neural Network]** - -⟶ - -
- -**15. Cost function ― The cost function J is commonly used to assess the performance of a model, and is defined with the loss function L as follows:** - -⟶ - -
- -**16. Gradient descent ― By noting α∈R the learning rate, the update rule for gradient descent is expressed with the learning rate and the cost function J as follows:** - -⟶ - -
- -**17. Remark: Stochastic gradient descent (SGD) is updating the parameter based on each training example, and batch gradient descent is on a batch of training examples.** - -⟶ - -
- -**18. Likelihood ― The likelihood of a model L(θ) given parameters θ is used to find the optimal parameters θ through maximizing the likelihood. In practice, we use the log-likelihood ℓ(θ)=log(L(θ)) which is easier to optimize. We have:** - -⟶ - -
- -**19. Newton's algorithm ― The Newton's algorithm is a numerical method that finds θ such that ℓ′(θ)=0. Its update rule is as follows:** - -⟶ - -
- -**20. Remark: the multidimensional generalization, also known as the Newton-Raphson method, has the following update rule:** - -⟶ - -
- -**21. Linear models** - -⟶ - -
- -**22. Linear regression** - -⟶ - -
- -**23. We assume here that y|x;θ∼N(μ,σ2)** - -⟶ - -
- -**24. Normal equations ― By noting X the matrix design, the value of θ that minimizes the cost function is a closed-form solution such that:** - -⟶ - -
- -**25. LMS algorithm ― By noting α the learning rate, the update rule of the Least Mean Squares (LMS) algorithm for a training set of m data points, which is also known as the Widrow-Hoff learning rule, is as follows:** - -⟶ - -
- -**26. Remark: the update rule is a particular case of the gradient ascent.** - -⟶ - -
- -**27. LWR ― Locally Weighted Regression, also known as LWR, is a variant of linear regression that weights each training example in its cost function by w(i)(x), which is defined with parameter τ∈R as:** - -⟶ - -
- -**28. Classification and logistic regression** - -⟶ - -
- -**29. Sigmoid function ― The sigmoid function g, also known as the logistic function, is defined as follows:** - -⟶ - -
- -**30. Logistic regression ― We assume here that y|x;θ∼Bernoulli(ϕ). We have the following form:** - -⟶ - -
- -**31. Remark: there is no closed form solution for the case of logistic regressions.** - -⟶ - -
- -**32. Softmax regression ― A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. By convention, we set θK=0, which makes the Bernoulli parameter ϕi of each class i equal to:** - -⟶ - -
- -**33. Generalized Linear Models** - -⟶ - -
- -**34. Exponential family ― A class of distributions is said to be in the exponential family if it can be written in terms of a natural parameter, also called the canonical parameter or link function, η, a sufficient statistic T(y) and a log-partition function a(η) as follows:** - -⟶ - -
- -**35. Remark: we will often have T(y)=y. Also, exp(−a(η)) can be seen as a normalization parameter that will make sure that the probabilities sum to one.** - -⟶ - -
- -**36. Here are the most common exponential distributions summed up in the following table:** - -⟶ - -
- -**37. [Distribution, Bernoulli, Gaussian, Poisson, Geometric]** - -⟶ - -
- -**38. Assumptions of GLMs ― Generalized Linear Models (GLM) aim at predicting a random variable y as a function fo x∈Rn+1 and rely on the following 3 assumptions:** - -⟶ - -
- -**39. Remark: ordinary least squares and logistic regression are special cases of generalized linear models.** - -⟶ - -
- -**40. Support Vector Machines** - -⟶ - -
- -**41: The goal of support vector machines is to find the line that maximizes the minimum distance to the line.** - -⟶ - -
- -**42: Optimal margin classifier ― The optimal margin classifier h is such that:** - -⟶ - -
- -**43: where (w,b)∈Rn×R is the solution of the following optimization problem:** - -⟶ - -
- -**44. such that** - -⟶ - -
- -**45. support vectors** - -⟶ - -
- -**46. Remark: the line is defined as wTx−b=0.** - -⟶ - -
- -**47. Hinge loss ― The hinge loss is used in the setting of SVMs and is defined as follows:** - -⟶ - -
- -**48. Kernel ― Given a feature mapping ϕ, we define the kernel K to be defined as:** - -⟶ - -
- -**49. In practice, the kernel K defined by K(x,z)=exp(−||x−z||22σ2) is called the Gaussian kernel and is commonly used.** - -⟶ - -
- -**50. [Non-linear separability, Use of a kernel mapping, Decision boundary in the original space]** - -⟶ - -
- -**51. Remark: we say that we use the "kernel trick" to compute the cost function using the kernel because we actually don't need to know the explicit mapping ϕ, which is often very complicated. Instead, only the values K(x,z) are needed.** - -⟶ - -
- -**52. Lagrangian ― We define the Lagrangian L(w,b) as follows:** - -⟶ - -
- -**53. Remark: the coefficients βi are called the Lagrange multipliers.** - -⟶ - -
- -**54. Generative Learning** - -⟶ - -
- -**55. A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.** - -⟶ - -
- -**56. Gaussian Discriminant Analysis** - -⟶ - -
- -**57. Setting ― The Gaussian Discriminant Analysis assumes that y and x|y=0 and x|y=1 are such that:** - -⟶ - -
- -**58. Estimation ― The following table sums up the estimates that we find when maximizing the likelihood:** - -⟶ - -
- -**59. Naive Bayes** - -⟶ - -
- -**60. Assumption ― The Naive Bayes model supposes that the features of each data point are all independent:** - -⟶ - -
- -**61. Solutions ― Maximizing the log-likelihood gives the following solutions, with k∈{0,1},l∈[[1,L]]** - -⟶ - -
- -**62. Remark: Naive Bayes is widely used for text classification and spam detection.** - -⟶ - -
- -**63. Tree-based and ensemble methods** - -⟶ - -
- -**64. These methods can be used for both regression and classification problems.** - -⟶ - -
- -**65. CART ― Classification and Regression Trees (CART), commonly known as decision trees, can be represented as binary trees. They have the advantage to be very interpretable.** - -⟶ - -
- -**66. Random forest ― It is a tree-based technique that uses a high number of decision trees built out of randomly selected sets of features. Contrary to the simple decision tree, it is highly uninterpretable but its generally good performance makes it a popular algorithm.** - -⟶ - -
-
-**67. Remark: random forests are a type of ensemble method.**
-
-⟶
-
-<br>
- -**68. Boosting ― The idea of boosting methods is to combine several weak learners to form a stronger one. The main ones are summed up in the table below:** - -⟶ - -
- -**69. [Adaptive boosting, Gradient boosting]** - -⟶ - -
- -**70. High weights are put on errors to improve at the next boosting step** - -⟶ - -
- -**71. Weak learners trained on remaining errors** - -⟶ - -
- -**72. Other non-parametric approaches** - -⟶ - -
- -**73. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** - -⟶ - -
- -**74. Remark: The higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.** - -⟶ - -
- -**75. Learning Theory** - -⟶ - -
- -**76. Union bound ― Let A1,...,Ak be k events. We have:** - -⟶ - -
- -**77. Hoeffding inequality ― Let Z1,..,Zm be m iid variables drawn from a Bernoulli distribution of parameter ϕ. Let ˆϕ be their sample mean and γ>0 fixed. We have:** - -⟶ - -
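The right-hand side of the inequality, 2exp(−2γ²m), can be computed directly; this sketch assumes that standard form:

```python
import math

def hoeffding_bound(m, gamma):
    # upper bound on P(|phi_hat - phi| > gamma) for m iid Bernoulli draws
    return 2 * math.exp(-2 * gamma ** 2 * m)
```

The bound shrinks exponentially in the sample size m, which is what makes it useful in learning-theory proofs.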
- -**78. Remark: this inequality is also known as the Chernoff bound.** - -⟶ - -
- -**79. Training error ― For a given classifier h, we define the training error ˆϵ(h), also known as the empirical risk or empirical error, to be as follows:** - -⟶ - -
-
-**80. Probably Approximately Correct (PAC) ― PAC is a framework under which numerous results on learning theory were proved, and has the following set of assumptions:**
-
-⟶
-
-<br>
-
-**81. the training and testing sets follow the same distribution**
-
-⟶
-
-<br>
- -**82. the training examples are drawn independently** - -⟶ - -
- -**83. Shattering ― Given a set S={x(1),...,x(d)}, and a set of classifiers H, we say that H shatters S if for any set of labels {y(1),...,y(d)}, we have:** - -⟶ - -
- -**84. Upper bound theorem ― Let H be a finite hypothesis class such that |H|=k and let δ and the sample size m be fixed. Then, with probability of at least 1−δ, we have:** - -⟶ - -
- -**85. VC dimension ― The Vapnik-Chervonenkis (VC) dimension of a given infinite hypothesis class H, noted VC(H) is the size of the largest set that is shattered by H.** - -⟶ - -
- -**86. Remark: the VC dimension of H={set of linear classifiers in 2 dimensions} is 3.** - -⟶ - -
- -**87. Theorem (Vapnik) ― Let H be given, with VC(H)=d and m the number of training examples. With probability at least 1−δ, we have:** - -⟶ - -
- -**88. [Introduction, Type of prediction, Type of model]** - -⟶ - -
- -**89. [Notations and general concepts, loss function, gradient descent, likelihood]** - -⟶ - -
- -**90. [Linear models, linear regression, logistic regression, generalized linear models]** - -⟶ - -
- -**91. [Support vector machines, Optimal margin classifier, Hinge loss, Kernel]** - -⟶ - -
- -**92. [Generative learning, Gaussian Discriminant Analysis, Naive Bayes]** - -⟶ - -
- -**93. [Trees and ensemble methods, CART, Random forest, Boosting]** - -⟶ - -
- -**94. [Other methods, k-NN]** - -⟶ - -
- -**95. [Learning theory, Hoeffding inequality, PAC, VC dimension]** - -⟶ diff --git a/he/refresher-probability.md b/he/refresher-probability.md deleted file mode 100644 index 5c9b34656..000000000 --- a/he/refresher-probability.md +++ /dev/null @@ -1,381 +0,0 @@ -**1. Probabilities and Statistics refresher** - -⟶ - -
- -**2. Introduction to Probability and Combinatorics** - -⟶ - -
- -**3. Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by S.** - -⟶ - -
- -**4. Event ― Any subset E of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.** - -⟶ - -
-
-**5. Axioms of probability ― For each event E, we denote P(E) as the probability of event E occurring.**
-
-⟶
-
-<br>
- -**6. Axiom 1 ― Every probability is between 0 and 1 included, i.e:** - -⟶ - -
- -**7. Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** - -⟶ - -
- -**8. Axiom 3 ― For any sequence of mutually exclusive events E1,...,En, we have:** - -⟶ - -
- -**9. Permutation ― A permutation is an arrangement of r objects from a pool of n objects, in a given order. The number of such arrangements is given by P(n,r), defined as:** - -⟶ - -
- -**10. Combination ― A combination is an arrangement of r objects from a pool of n objects, where the order does not matter. The number of such arrangements is given by C(n,r), defined as:** - -⟶ - -
- -**11. Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)** - -⟶ - -
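The two counting formulas above can be sketched directly from their factorial definitions:

```python
import math

def permutations(n, r):
    # P(n, r) = n! / (n - r)!
    return math.factorial(n) // math.factorial(n - r)

def combinations(n, r):
    # C(n, r) = P(n, r) / r!  (order no longer matters)
    return permutations(n, r) // math.factorial(r)
```

Dividing P(n,r) by r! collapses the r! orderings of each selection into one, which is also why P(n,r) ⩾ C(n,r) for 0 ⩽ r ⩽ n.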
- -**12. Conditional Probability** - -⟶ - -
- -**13. Bayes' rule ― For events A and B such that P(B)>0, we have:** - -⟶ - -
- -**14. Remark: we have P(A∩B)=P(A)P(B|A)=P(A|B)P(B)** - -⟶ - -
- -**15. Partition ― Let {Ai,i∈[[1,n]]} be such that for all i, Ai≠∅. We say that {Ai} is a partition if we have:** - -⟶ - -
- -**16. Remark: for any event B in the sample space, we have P(B)=n∑i=1P(B|Ai)P(Ai).** - -⟶ - -
- -**17. Extended form of Bayes' rule ― Let {Ai,i∈[[1,n]]} be a partition of the sample space. We have:** - -⟶ - -
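The extended form of Bayes' rule over a partition {A_i} can be sketched as follows (priors and likelihoods passed as parallel lists is an implementation choice, not part of the text):

```python
def bayes_posterior(priors, likelihoods, k):
    # P(A_k | B) = P(B | A_k) P(A_k) / sum_i P(B | A_i) P(A_i)
    total = sum(p * l for p, l in zip(priors, likelihoods))
    return likelihoods[k] * priors[k] / total
```

For example, with a rare event of prior 1% and a test with P(B|A₁)=0.99, P(B|A₂)=0.05, the posterior of the rare event given a positive test is only 0.0099/0.0594 ≈ 0.167.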
- -**18. Independence ― Two events A and B are independent if and only if we have:** - -⟶ - -
- -**19. Random Variables** - -⟶ - -
- -**20. Definitions** - -⟶ - -
- -**21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.** - -⟶ - -
-
-**23. Remark: we have P(a<X⩽b)=F(b)−F(a)**
-
-⟶
-
-<br>
-
-**24. Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.**
-
-⟶
-
-<br>
- -**23. Remark: we have P(a - -**24. Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.** - -⟶ - -
- -**25. Relationships involving the PDF and CDF ― Here are the important properties to know in the discrete (D) and the continuous (C) cases.** - -⟶ - -
- -**26. [Case, CDF F, PDF f, Properties of PDF]** - -⟶ - -
- -**27. Expectation and Moments of the Distribution ― Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[Xk] and characteristic function ψ(ω) for the discrete and continuous cases:** - -⟶ - -
- -**28. Variance ― The variance of a random variable, often noted Var(X) or σ2, is a measure of the spread of its distribution function. It is determined as follows:** - -⟶ - -
- -**29. Standard deviation ― The standard deviation of a random variable, often noted σ, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:** - -⟶ - -
- -**30. Transformation of random variables ― Let the variables X and Y be linked by some function. By noting fX and fY the distribution function of X and Y respectively, we have:** - -⟶ - -
- -**31. Leibniz integral rule ― Let g be a function of x and potentially c, and a,b boundaries that may depend on c. We have:** - -⟶ - -
- -**32. Probability Distributions** - -⟶ - -
- -**33. Chebyshev's inequality ― Let X be a random variable with expected value μ. For k,σ>0, we have the following inequality:** - -⟶ - -
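Chebyshev's inequality states P(|X−μ|⩾kσ) ⩽ 1/k²; a small sketch that checks this empirically on a finite sample (treating the sample as the whole distribution):

```python
def chebyshev_holds(data, k):
    # fraction of points with |x - mu| >= k*sigma must not exceed 1/k^2
    mu = sum(data) / len(data)
    var = sum((x - mu) ** 2 for x in data) / len(data)
    sd = var ** 0.5
    frac = sum(abs(x - mu) >= k * sd for x in data) / len(data)
    return frac <= 1.0 / k ** 2
```

The bound holds for any distribution with finite variance, even ones with extreme outliers.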
- -**34. Main distributions ― Here are the main distributions to have in mind:** - -⟶ - -
- -**35. [Type, Distribution]** - -⟶ - -
- -**36. Jointly Distributed Random Variables** - -⟶ - -
-
-**37. Marginal density and cumulative distribution ― From the joint probability density function fXY, we have**
-
-⟶
-
-<br>
- -**38. [Case, Marginal density, Cumulative function]** - -⟶ - -
- -**39. Conditional density ― The conditional density of X with respect to Y, often noted fX|Y, is defined as follows:** - -⟶ - -
- -**40. Independence ― Two random variables X and Y are said to be independent if we have:** - -⟶ - -
- -**41. Covariance ― We define the covariance of two random variables X and Y, that we note σ2XY or more commonly Cov(X,Y), as follows:** - -⟶ - -
- -**42. Correlation ― By noting σX,σY the standard deviations of X and Y, we define the correlation between the random variables X and Y, noted ρXY, as follows:** - -⟶ - -
- -**43. Remark 1: we note that for any random variables X,Y, we have ρXY∈[−1,1].** - -⟶ - -
- -**44. Remark 2: If X and Y are independent, then ρXY=0.** - -⟶ - -
- -**45. Parameter estimation** - -⟶ - -
- -**46. Definitions** - -⟶ - -
- -**47. Random sample ― A random sample is a collection of n random variables X1,...,Xn that are independent and identically distributed with X.** - -⟶ - -
- -**48. Estimator ― An estimator is a function of the data that is used to infer the value of an unknown parameter in a statistical model.** - -⟶ - -
-
-**49. Bias ― The bias of an estimator ^θ is defined as the difference between the expected value of the distribution of ^θ and the true value, i.e.:**
-
-⟶
-
-<br>
- -**50. Remark: an estimator is said to be unbiased when we have E[^θ]=θ.** - -⟶ - -
- -**51. Estimating the mean** - -⟶ - -
-
-**52. Sample mean ― The sample mean of a random sample is used to estimate the true mean μ of a distribution, is often noted ¯X and is defined as follows:**
-
-⟶
-
-<br>
-
-**53. Remark: the sample mean is unbiased, i.e. E[¯X]=μ.**
-
-⟶
-
-<br>
- -**54. Central Limit Theorem ― Let us have a random sample X1,...,Xn following a given distribution with mean μ and variance σ2, then we have:** - -⟶ - -
- -**55. Estimating the variance** - -⟶ - -
- -**56. Sample variance ― The sample variance of a random sample is used to estimate the true variance σ2 of a distribution, is often noted s2 or ^σ2 and is defined as follows:** - -⟶ - -
-
-**57. Remark: the sample variance is unbiased, i.e. E[s2]=σ2.**
-
-⟶
-
-<br>
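The two estimators above can be sketched together; dividing by n−1 rather than n is what makes the sample variance unbiased:

```python
def sample_mean(xs):
    return sum(xs) / len(xs)

def sample_variance(xs):
    # unbiased estimator: divide by n - 1 rather than n
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
```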
- -**58. Chi-Squared relation with sample variance ― Let s2 be the sample variance of a random sample. We have:** - -⟶ - -
- -**59. [Introduction, Sample space, Event, Permutation]** - -⟶ - -
- -**60. [Conditional probability, Bayes' rule, Independence]** - -⟶ - -
- -**61. [Random variables, Definitions, Expectation, Variance]** - -⟶ - -
- -**62. [Probability distributions, Chebyshev's inequality, Main distributions]** - -⟶ - -
- -**63. [Jointly distributed random variables, Density, Covariance, Correlation]** - -⟶ - -
- -**64. [Parameter estimation, Mean, Variance]** - -⟶ diff --git a/hi/cheatsheet-deep-learning.md b/hi/cheatsheet-deep-learning.md deleted file mode 100644 index a5aa3756c..000000000 --- a/hi/cheatsheet-deep-learning.md +++ /dev/null @@ -1,321 +0,0 @@ -**1. Deep Learning cheatsheet** - -⟶ - -
- -**2. Neural Networks** - -⟶ - -
- -**3. Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.** - -⟶ - -
- -**4. Architecture ― The vocabulary around neural networks architectures is described in the figure below:** - -⟶ - -
- -**5. [Input layer, hidden layer, output layer]** - -⟶ - -
- -**6. By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:** - -⟶ - -
- -**7. where we note w, b, z the weight, bias and output respectively.** - -⟶ - -
- -**8. Activation function ― Activation functions are used at the end of a hidden unit to introduce non-linear complexities to the model. Here are the most common ones:** - -⟶ - -
- -**9. [Sigmoid, Tanh, ReLU, Leaky ReLU]** - -⟶ - -
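The four activation functions listed above can be sketched in a few lines (the leaky-ReLU slope 0.01 is a common default, not specified by the text):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    return math.tanh(x)

def relu(x):
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # alpha is the small slope kept on the negative side
    return x if x > 0 else alpha * x
```

Sigmoid and tanh saturate for large |x|, while ReLU and leaky ReLU keep a nonzero gradient for positive inputs.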
- -**10. Cross-entropy loss ― In the context of neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** - -⟶ - -
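A sketch of the binary cross-entropy loss, assuming its standard form L(z,y)=−[y log(z)+(1−y) log(1−z)] with z∈(0,1) the predicted probability and y∈{0,1} the label:

```python
import math

def cross_entropy(z, y):
    # L(z, y) = -[ y*log(z) + (1 - y)*log(1 - z) ]
    return -(y * math.log(z) + (1 - y) * math.log(1 - z))
```

The loss goes to 0 as the prediction z approaches the true label and blows up as it approaches the wrong one.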
- -**11. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. This can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.** - -⟶ - -
- -**12. Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to weight w is computed using chain rule and is of the following form:** - -⟶ - -
- -**13. As a result, the weight is updated as follows:** - -⟶ - -
- -**14. Updating weights ― In a neural network, weights are updated as follows:** - -⟶ - -
- -**15. Step 1: Take a batch of training data.** - -⟶ - -
- -**16. Step 2: Perform forward propagation to obtain the corresponding loss.** - -⟶ - -
- -**17. Step 3: Backpropagate the loss to get the gradients.** - -⟶ - -
- -**18. Step 4: Use the gradients to update the weights of the network.** - -⟶ - -
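Steps 2–4 above can be sketched for the smallest possible network, a single linear neuron; the squared loss (z−y)² used here is an illustrative assumption:

```python
def train_step(w, b, x, y, lr=0.1):
    # step 2: forward propagation
    z = w * x + b
    # step 3: backpropagate the squared loss (z - y)^2 via the chain rule
    grad_w = 2 * (z - y) * x
    grad_b = 2 * (z - y)
    # step 4: gradient update of the weights
    return w - lr * grad_w, b - lr * grad_b
```

Repeating this step (step 1 would supply a fresh batch each time) drives the prediction toward the target.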
-
-**19. Dropout ― Dropout is a technique meant to prevent overfitting the training data by dropping out units in a neural network. In practice, neurons are either dropped with probability p or kept with probability 1−p**
-
-⟶
-
-<br>
- -**20. Convolutional Neural Networks** - -⟶ - -
- -**21. Convolutional layer requirement ― By noting W the input volume size, F the size of the convolutional layer neurons, P the amount of zero padding, then the number of neurons N that fit in a given volume is such that:** - -⟶ - -
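A sketch of the fitting condition, assuming the usual formula N=(W−F+2P)/S+1; the stride S is not named in the statement above, so it is taken as 1 by default here:

```python
def conv_output_size(W, F, P, S=1):
    # N = (W - F + 2P) / S + 1 neurons fit along one spatial dimension
    return (W - F + 2 * P) // S + 1
```

For instance, a 32-wide input with 5-wide filters and padding 2 preserves the width (N = 32).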
-
-**22. Batch normalization ― It is a step of hyperparameters γ,β that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of the batch that we want to correct, it is done as follows:**
-
-⟶
-
-<br>
- -**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.** - -⟶ - -
- -**24. Recurrent Neural Networks** - -⟶ - -
- -**25. Types of gates ― Here are the different types of gates that we encounter in a typical recurrent neural network:** - -⟶ - -
- -**26. [Input gate, forget gate, gate, output gate]** - -⟶ - -
- -**27. [Write to cell or not?, Erase a cell or not?, How much to write to cell?, How much to reveal cell?]** - -⟶ - -
- -**28. LSTM ― A long short-term memory (LSTM) network is a type of RNN model that avoids the vanishing gradient problem by adding 'forget' gates.** - -⟶ - -
- -**29. Reinforcement Learning and Control** - -⟶ - -
- -**30. The goal of reinforcement learning is for an agent to learn how to evolve in an environment.** - -⟶ - -
- -**31. Definitions** - -⟶ - -
- -**32. Markov decision processes ― A Markov decision process (MDP) is a 5-tuple (S,A,{Psa},γ,R) where:** - -⟶ - -
- -**33. S is the set of states** - -⟶ - -
- -**34. A is the set of actions** - -⟶ - -
- -**35. {Psa} are the state transition probabilities for s∈S and a∈A** - -⟶ - -
- -**36. γ∈[0,1[ is the discount factor** - -⟶ - -
- -**37. R:S×A⟶R or R:S⟶R is the reward function that the algorithm wants to maximize** - -⟶ - -
- -**38. Policy ― A policy π is a function π:S⟶A that maps states to actions.** - -⟶ - -
- -**39. Remark: we say that we execute a given policy π if given a state s we take the action a=π(s).** - -⟶ - -
- -**40. Value function ― For a given policy π and a given state s, we define the value function Vπ as follows:** - -⟶ - -
-
-**41. Bellman equation ― The optimal Bellman equations characterize the value function Vπ∗ of the optimal policy π∗:**
-
-⟶
-
-<br>
- -**42. Remark: we note that the optimal policy π∗ for a given state s is such that:** - -⟶ - -
- -**43. Value iteration algorithm ― The value iteration algorithm is in two steps:** - -⟶ - -
- -**44. 1) We initialize the value:** - -⟶ - -
- -**45. 2) We iterate the value based on the values before:** - -⟶ - -
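The two steps above can be sketched for a tiny MDP; the transition structure `P[s][a] = [(prob, next_state), ...]` and reward `R[s]` are implementation assumptions:

```python
def value_iteration(P, R, gamma=0.9, iters=200):
    # P[s][a]: list of (probability, next_state); R[s]: reward in state s
    n = len(R)
    V = [0.0] * n                       # step 1: initialize the value
    for _ in range(iters):              # step 2: iterate on previous values
        V = [R[s] + gamma * max(sum(p * V[t] for p, t in P[s][a])
                                for a in range(len(P[s])))
             for s in range(n)]
    return V
```

For a single absorbing state with reward 1, the value converges to 1/(1−γ) = 10 when γ = 0.9.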
- -**46. Maximum likelihood estimate ― The maximum likelihood estimates for the state transition probabilities are as follows:** - -⟶ - -
- -**47. times took action a in state s and got to s′** - -⟶ - -
- -**48. times took action a in state s** - -⟶ - -
- -**49. Q-learning ― Q-learning is a model-free estimation of Q, which is done as follows:** - -⟶ - -
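A sketch of a single Q-learning update, assuming the standard temporal-difference rule with learning rate α (the statement above does not fix these details):

```python
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    # Q(s,a) <- Q(s,a) + alpha * [ r + gamma * max_a' Q(s',a') - Q(s,a) ]
    target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (target - Q[s][a])
    return Q
```

No model of the transition probabilities is needed: only observed tuples (s, a, r, s′) drive the updates, which is what "model-free" means here.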
- -**50. View PDF version on GitHub** - -⟶ - -
- -**51. [Neural Networks, Architecture, Activation function, Backpropagation, Dropout]** - -⟶ - -
- -**52. [Convolutional Neural Networks, Convolutional layer, Batch normalization]** - -⟶ - -
- -**53. [Recurrent Neural Networks, Gates, LSTM]** - -⟶ - -
- -**54. [Reinforcement learning, Markov decision processes, Value/policy iteration, Approximate dynamic programming, Policy search]** - -⟶ diff --git a/hi/cheatsheet-supervised-learning.md b/hi/cheatsheet-supervised-learning.md deleted file mode 100644 index a6b19ea1c..000000000 --- a/hi/cheatsheet-supervised-learning.md +++ /dev/null @@ -1,567 +0,0 @@ -**1. Supervised Learning cheatsheet** - -⟶ - -
- -**2. Introduction to Supervised Learning** - -⟶ - -
- -**3. Given a set of data points {x(1),...,x(m)} associated to a set of outcomes {y(1),...,y(m)}, we want to build a classifier that learns how to predict y from x.** - -⟶ - -
- -**4. Type of prediction ― The different types of predictive models are summed up in the table below:** - -⟶ - -
- -**5. [Regression, Classifier, Outcome, Examples]** - -⟶ - -
- -**6. [Continuous, Class, Linear regression, Logistic regression, SVM, Naive Bayes]** - -⟶ - -
- -**7. Type of model ― The different models are summed up in the table below:** - -⟶ - -
- -**8. [Discriminative model, Generative model, Goal, What's learned, Illustration, Examples]** - -⟶ - -
- -**9. [Directly estimate P(y|x), Estimate P(x|y) to then deduce P(y|x), Decision boundary, Probability distributions of the data, Regressions, SVMs, GDA, Naive Bayes]** - -⟶ - -
- -**10. Notations and general concepts** - -⟶ - -
- -**11. Hypothesis ― The hypothesis is noted hθ and is the model that we choose. For a given input data x(i) the model prediction output is hθ(x(i)).** - -⟶ - -
- -**12. Loss function ― A loss function is a function L:(z,y)∈R×Y⟼L(z,y)∈R that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:** - -⟶ - -
- -**13. [Least squared error, Logistic loss, Hinge loss, Cross-entropy]** - -⟶ - -
- -**14. [Linear regression, Logistic regression, SVM, Neural Network]** - -⟶ - -
- -**15. Cost function ― The cost function J is commonly used to assess the performance of a model, and is defined with the loss function L as follows:** - -⟶ - -
- -**16. Gradient descent ― By noting α∈R the learning rate, the update rule for gradient descent is expressed with the learning rate and the cost function J as follows:** - -⟶ - -
- -**17. Remark: Stochastic gradient descent (SGD) is updating the parameter based on each training example, and batch gradient descent is on a batch of training examples.** - -⟶ - -
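The update rule θ ← θ − α∇J(θ) can be sketched in one loop; the quadratic example in the test is purely illustrative:

```python
def gradient_descent(grad, theta, alpha=0.1, steps=200):
    # repeatedly step against the gradient of the cost function
    for _ in range(steps):
        theta = theta - alpha * grad(theta)
    return theta
```

SGD would call this with the gradient of a single example at a time, batch gradient descent with the gradient averaged over a batch.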
- -**18. Likelihood ― The likelihood of a model L(θ) given parameters θ is used to find the optimal parameters θ through maximizing the likelihood. In practice, we use the log-likelihood ℓ(θ)=log(L(θ)) which is easier to optimize. We have:** - -⟶ - -
- -**19. Newton's algorithm ― The Newton's algorithm is a numerical method that finds θ such that ℓ′(θ)=0. Its update rule is as follows:** - -⟶ - -
- -**20. Remark: the multidimensional generalization, also known as the Newton-Raphson method, has the following update rule:** - -⟶ - -
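A sketch of one Newton update θ ← θ − ℓ′(θ)/ℓ″(θ) for the one-dimensional case described above:

```python
def newton_step(theta, dl, d2l):
    # dl, d2l: first and second derivatives of the log-likelihood
    return theta - dl(theta) / d2l(theta)
```

On a quadratic log-likelihood such as ℓ(θ) = −(θ−2)², a single step lands exactly on the stationary point.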
- -**21. Linear models** - -⟶ - -
- -**22. Linear regression** - -⟶ - -
- -**23. We assume here that y|x;θ∼N(μ,σ2)** - -⟶ - -
-
-**24. Normal equations ― By noting X the design matrix, the value of θ that minimizes the cost function is a closed-form solution such that:**
-
-⟶
-
-<br>
- -**25. LMS algorithm ― By noting α the learning rate, the update rule of the Least Mean Squares (LMS) algorithm for a training set of m data points, which is also known as the Widrow-Hoff learning rule, is as follows:** - -⟶ - -
- -**26. Remark: the update rule is a particular case of the gradient ascent.** - -⟶ - -
-
-**26. Remark: the update rule is a particular case of gradient ascent.**
-
-⟶
-
-<br>
- -**28. Classification and logistic regression** - -⟶ - -
- -**29. Sigmoid function ― The sigmoid function g, also known as the logistic function, is defined as follows:** - -⟶ - -
- -**30. Logistic regression ― We assume here that y|x;θ∼Bernoulli(ϕ). We have the following form:** - -⟶ - -
- -**31. Remark: there is no closed form solution for the case of logistic regressions.** - -⟶ - -
- -**32. Softmax regression ― A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. By convention, we set θK=0, which makes the Bernoulli parameter ϕi of each class i equal to:** - -⟶ - -
- -**33. Generalized Linear Models** - -⟶ - -
- -**34. Exponential family ― A class of distributions is said to be in the exponential family if it can be written in terms of a natural parameter, also called the canonical parameter or link function, η, a sufficient statistic T(y) and a log-partition function a(η) as follows:** - -⟶ - -
- -**35. Remark: we will often have T(y)=y. Also, exp(−a(η)) can be seen as a normalization parameter that will make sure that the probabilities sum to one.** - -⟶ - -
- -**36. Here are the most common exponential distributions summed up in the following table:** - -⟶ - -
- -**37. [Distribution, Bernoulli, Gaussian, Poisson, Geometric]** - -⟶ - -
-
-**38. Assumptions of GLMs ― Generalized Linear Models (GLM) aim at predicting a random variable y as a function of x∈Rn+1 and rely on the following 3 assumptions:**
-
-⟶
-
-<br>
- -**39. Remark: ordinary least squares and logistic regression are special cases of generalized linear models.** - -⟶ - -
- -**40. Support Vector Machines** - -⟶ - -
-
-**41. The goal of support vector machines is to find the line that maximizes the minimum distance to the line.**
-
-⟶
-
-<br>
-
-**42. Optimal margin classifier ― The optimal margin classifier h is such that:**
-
-⟶
-
-<br>
-
-**43. where (w,b)∈Rn×R is the solution of the following optimization problem:**
-
-⟶
-
-<br>
- -**44. such that** - -⟶ - -
- -**45. support vectors** - -⟶ - -
- -**46. Remark: the line is defined as wTx−b=0.** - -⟶ - -
- -**47. Hinge loss ― The hinge loss is used in the setting of SVMs and is defined as follows:** - -⟶ - -
- -**48. Kernel ― Given a feature mapping ϕ, we define the kernel K to be defined as:** - -⟶ - -
-
-**49. In practice, the kernel K defined by K(x,z)=exp(−||x−z||²/(2σ²)) is called the Gaussian kernel and is commonly used.**
-
-⟶
-
-<br>
- -**50. [Non-linear separability, Use of a kernel mapping, Decision boundary in the original space]** - -⟶ - -
- -**51. Remark: we say that we use the "kernel trick" to compute the cost function using the kernel because we actually don't need to know the explicit mapping ϕ, which is often very complicated. Instead, only the values K(x,z) are needed.** - -⟶ - -
- -**52. Lagrangian ― We define the Lagrangian L(w,b) as follows:** - -⟶ - -
- -**53. Remark: the coefficients βi are called the Lagrange multipliers.** - -⟶ - -
- -**54. Generative Learning** - -⟶ - -
- -**55. A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.** - -⟶ - -
- -**56. Gaussian Discriminant Analysis** - -⟶ - -
- -**57. Setting ― The Gaussian Discriminant Analysis assumes that y and x|y=0 and x|y=1 are such that:** - -⟶ - -
- -**58. Estimation ― The following table sums up the estimates that we find when maximizing the likelihood:** - -⟶ - -
- -**59. Naive Bayes** - -⟶ - -
- -**60. Assumption ― The Naive Bayes model supposes that the features of each data point are all independent:** - -⟶ - -
- -**61. Solutions ― Maximizing the log-likelihood gives the following solutions, with k∈{0,1},l∈[[1,L]]** - -⟶ - -
- -**62. Remark: Naive Bayes is widely used for text classification and spam detection.** - -⟶ - -
- -**63. Tree-based and ensemble methods** - -⟶ - -
- -**64. These methods can be used for both regression and classification problems.** - -⟶ - -
-
-**65. CART ― Classification and Regression Trees (CART), commonly known as decision trees, can be represented as binary trees. They have the advantage of being very interpretable.**
-
-⟶
-
-<br>
- -**66. Random forest ― It is a tree-based technique that uses a high number of decision trees built out of randomly selected sets of features. Contrary to the simple decision tree, it is highly uninterpretable but its generally good performance makes it a popular algorithm.** - -⟶ - -
-
-**67. Remark: random forests are a type of ensemble method.**
-
-⟶
-
-<br>
- -**68. Boosting ― The idea of boosting methods is to combine several weak learners to form a stronger one. The main ones are summed up in the table below:** - -⟶ - -
- -**69. [Adaptive boosting, Gradient boosting]** - -⟶ - -
- -**70. High weights are put on errors to improve at the next boosting step** - -⟶ - -
- -**71. Weak learners trained on remaining errors** - -⟶ - -
- -**72. Other non-parametric approaches** - -⟶ - -
- -**73. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** - -⟶ - -
- -**74. Remark: The higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.** - -⟶ - -
- -**75. Learning Theory** - -⟶ - -
- -**76. Union bound ― Let A1,...,Ak be k events. We have:** - -⟶ - -
- -**77. Hoeffding inequality ― Let Z1,..,Zm be m iid variables drawn from a Bernoulli distribution of parameter ϕ. Let ˆϕ be their sample mean and γ>0 fixed. We have:** - -⟶ - -
- -**78. Remark: this inequality is also known as the Chernoff bound.** - -⟶ - -
- -**79. Training error ― For a given classifier h, we define the training error ˆϵ(h), also known as the empirical risk or empirical error, to be as follows:** - -⟶ - -
-
-**80. Probably Approximately Correct (PAC) ― PAC is a framework under which numerous results on learning theory were proved, and has the following set of assumptions:**
-
-⟶
-
-<br>
-
-**81. the training and testing sets follow the same distribution**
-
-⟶
-
-<br>
- -**82. the training examples are drawn independently** - -⟶ - -
- -**83. Shattering ― Given a set S={x(1),...,x(d)}, and a set of classifiers H, we say that H shatters S if for any set of labels {y(1),...,y(d)}, we have:** - -⟶ - -
- -**84. Upper bound theorem ― Let H be a finite hypothesis class such that |H|=k and let δ and the sample size m be fixed. Then, with probability of at least 1−δ, we have:** - -⟶ - -
- -**85. VC dimension ― The Vapnik-Chervonenkis (VC) dimension of a given infinite hypothesis class H, noted VC(H) is the size of the largest set that is shattered by H.** - -⟶ - -
- -**86. Remark: the VC dimension of H={set of linear classifiers in 2 dimensions} is 3.** - -⟶ - -
- -**87. Theorem (Vapnik) ― Let H be given, with VC(H)=d and m the number of training examples. With probability at least 1−δ, we have:** - -⟶ - -
- -**88. [Introduction, Type of prediction, Type of model]** - -⟶ - -
- -**89. [Notations and general concepts, loss function, gradient descent, likelihood]** - -⟶ - -
- -**90. [Linear models, linear regression, logistic regression, generalized linear models]** - -⟶ - -
- -**91. [Support vector machines, Optimal margin classifier, Hinge loss, Kernel]** - -⟶ - -
- -**92. [Generative learning, Gaussian Discriminant Analysis, Naive Bayes]** - -⟶ - -
- -**93. [Trees and ensemble methods, CART, Random forest, Boosting]** - -⟶ - -
- -**94. [Other methods, k-NN]** - -⟶ - -
- -**95. [Learning theory, Hoeffding inequality, PAC, VC dimension]** - -⟶ diff --git a/hi/cheatsheet-unsupervised-learning.md b/hi/cheatsheet-unsupervised-learning.md deleted file mode 100644 index d07b74750..000000000 --- a/hi/cheatsheet-unsupervised-learning.md +++ /dev/null @@ -1,340 +0,0 @@ -**1. Unsupervised Learning cheatsheet** - -⟶ - -
- -**2. Introduction to Unsupervised Learning** - -⟶ - -
- -**3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** - -⟶ - -
- -**4. Jensen's inequality ― Let f be a convex function and X a random variable. We have the following inequality:** - -⟶ - -
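Jensen's inequality states E[f(X)] ⩾ f(E[X]) for convex f; a small sketch that measures the gap for a uniform sample (the uniform weighting is an illustrative assumption):

```python
def jensen_gap(f, xs):
    # E[f(X)] - f(E[X]) for X uniform on xs; nonnegative when f is convex
    n = len(xs)
    return sum(f(x) for x in xs) / n - f(sum(xs) / n)
```

With f(x) = x² on {1, 2, 3}, the gap is 14/3 − 4 = 2/3, which is simply the variance of the sample.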
- -**5. Clustering** - -⟶ - -
- -**6. Expectation-Maximization** - -⟶ - -
- -**7. Latent variables ― Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Here are the most common settings where there are latent variables:** - -⟶ - -
- -**8. [Setting, Latent variable z, Comments]** - -⟶ - -
- -**9. [Mixture of k Gaussians, Factor analysis]** - -⟶ - -
- -**10. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:** - -⟶ - -
- -**11. E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:** - -⟶ - -
- -**12. M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:** - -⟶ - -
- -**13. [Gaussians initialization, Expectation step, Maximization step, Convergence]** - -⟶ - -
- -**14. k-means clustering** - -⟶ - -
- -**15. We note c(i) the cluster of data point i and μj the center of cluster j.** - -⟶ - -
- -**16. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** - -⟶ - -
- -**17. [Means initialization, Cluster assignment, Means update, Convergence]** - -⟶ - -
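The assignment/update loop above can be sketched in one dimension; random centroid initialization is replaced here by explicitly supplied starting centers for reproducibility:

```python
def nearest(p, centers):
    # index of the center closest to point p
    return min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)

def kmeans(points, centers, iters=20):
    for _ in range(iters):
        # cluster assignment step
        clusters = [[] for _ in centers]
        for p in points:
            clusters[nearest(p, centers)].append(p)
        # means update step (empty clusters keep their old center)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers
```

Each iteration can only decrease the distortion function, which is why the algorithm converges.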
- -**18. Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:** - -⟶ - -
- -**19. Hierarchical clustering** - -⟶ - -
-
-**20. Algorithm ― It is a clustering algorithm with an agglomerative hierarchical approach that builds nested clusters in a successive manner.**
-
-⟶
-
-<br>
-
-**21. Types ― There are different sorts of hierarchical clustering algorithms that aim at optimizing different objective functions, which are summed up in the table below:**
-
-⟶
-
-<br>
- -**22. [Ward linkage, Average linkage, Complete linkage]** - -⟶ - -
-
-**23. [Minimize within cluster distance, Minimize average distance between cluster pairs, Minimize maximum distance between cluster pairs]**
-
-⟶
-
-<br>
- -**24. Clustering assessment metrics** - -⟶ - -
- -**25. In an unsupervised learning setting, it is often hard to assess the performance of a model since we don't have the ground truth labels as was the case in the supervised learning setting.** - -⟶ - -
- -**26. Silhouette coefficient ― By noting a and b the mean distance between a sample and all other points in the same class, and between a sample and all other points in the next nearest cluster, the silhouette coefficient s for a single sample is defined as follows:** - -⟶ - -
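Given the two mean distances a and b defined above, the per-sample silhouette coefficient is a one-liner:

```python
def silhouette(a, b):
    # s = (b - a) / max(a, b); a: mean intra-cluster distance,
    # b: mean distance to the next nearest cluster
    return (b - a) / max(a, b)
```

Values near 1 indicate a well-placed sample, 0 a sample on a cluster boundary, and negative values a likely misassignment.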
- -**27. Calinski-Harabaz index ― By noting k the number of clusters, Bk and Wk the between and within-clustering dispersion matrices respectively defined as** - -⟶ - -
- -**28. the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:** - -⟶ - -
- -**29. Dimension reduction** - -⟶ - -
- -**30. Principal component analysis** - -⟶ - -
- -**31. It is a dimension reduction technique that finds the variance maximizing directions onto which to project the data.** - -⟶ - -
- -**32. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** - -⟶ - -
- -**33. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** - -⟶ - -
- -**34. diagonal** - -⟶ - -
- -**35. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** - -⟶ - -
- -**36. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k -dimensions by maximizing the variance of the data as follows:** - -⟶ - -
- -**37. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** - -⟶ - -
- -**38. Step 2: Compute Σ=1mm∑i=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.** - -⟶ - -
- -**39. Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.** - -⟶ - -
- -**40. Step 4: Project the data on spanR(u1,...,uk).** - -⟶ - -
- -**41. This procedure maximizes the variance among all k-dimensional spaces.** - -⟶ - -
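The four PCA steps above can be sketched in plain NumPy. Illustrative only — this assumes no feature has zero standard deviation:

```python
import numpy as np

def pca(X, k):
    """NumPy sketch of the four PCA steps described above."""
    # Step 1: normalize the data to mean 0 and standard deviation 1.
    Xn = (X - X.mean(axis=0)) / X.std(axis=0)
    # Step 2: Sigma = (1/m) sum_i x(i) x(i)^T, symmetric with real eigenvalues.
    Sigma = Xn.T @ Xn / len(Xn)
    # Step 3: the orthogonal eigenvectors of the k largest eigenvalues
    # (np.linalg.eigh returns eigenvalues in ascending order).
    _, eigvecs = np.linalg.eigh(Sigma)
    U = eigvecs[:, ::-1][:, :k]
    # Step 4: project the data on span(u1, ..., uk).
    return Xn @ U
```

With two perfectly correlated features, a single principal component captures all of the standardized variance (2, the top eigenvalue of Σ).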
- -**42. [Data in feature space, Find principal components, Data in principal components space]** - -⟶ - -
- -**43. Independent component analysis** - -⟶ - -
- -**44. It is a technique meant to find the underlying generating sources.** - -⟶ - -
- -**45. Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:** - -⟶ - -
- -**46. The goal is to find the unmixing matrix W=A−1.** - -⟶ - -
- -**47. Bell and Sejnowski ICA algorithm ― This algorithm finds the unmixing matrix W by following the steps below:** - -⟶ - -
- -**48. Write the probability of x=As=W−1s as:** - -⟶ - -
- -**49. Write the log likelihood given our training data {x(i),i∈[[1,m]]} and by noting g the sigmoid function as:** - -⟶ - -
- -**50. Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:** - -⟶ - -
- -**51. The Machine Learning cheatsheets are now available in Hindi.** - -⟶ - -
- -**52. Original authors** - -⟶ - -
- -**53. Translated by X, Y and Z** - -⟶ - -
- -**54. Reviewed by X, Y and Z** - -⟶ - -
- -**55. [Introduction, Motivation, Jensen's inequality]** - -⟶ - -
- -**56. [Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]** - -⟶ - -
- -**57. [Dimension reduction, PCA, ICA]** - -⟶ diff --git a/hi/refresher-linear-algebra.md b/hi/refresher-linear-algebra.md deleted file mode 100644 index a6b440d1e..000000000 --- a/hi/refresher-linear-algebra.md +++ /dev/null @@ -1,339 +0,0 @@ -**1. Linear Algebra and Calculus refresher** - -⟶ - -
- -**2. General notations** - -⟶ - -
- -**3. Definitions** - -⟶ - -
- -**4. Vector ― We note x∈Rn a vector with n entries, where xi∈R is the ith entry:** - -⟶ - -
- -**5. Matrix ― We note A∈Rm×n a matrix with m rows and n columns, where Ai,j∈R is the entry located in the ith row and jth column:** - -⟶ - -
- -**6. Remark: the vector x defined above can be viewed as a n×1 matrix and is more particularly called a column-vector.** - -⟶ - -
- -**7. Main matrices** - -⟶ - -
- -**8. Identity matrix ― The identity matrix I∈Rn×n is a square matrix with ones in its diagonal and zero everywhere else:** - -⟶ - -
- -**9. Remark: for all matrices A∈Rn×n, we have A×I=I×A=A.** - -⟶ - -
- -**10. Diagonal matrix ― A diagonal matrix D∈Rn×n is a square matrix with nonzero values in its diagonal and zero everywhere else:** - -⟶ - -
- -**11. Remark: we also note D as diag(d1,...,dn).** - -⟶ - -
- -**12. Matrix operations** - -⟶ - -
- -**13. Multiplication** - -⟶ - -
- -**14. Vector-vector ― There are two types of vector-vector products:** - -⟶ - -
- -**15. inner product: for x,y∈Rn, we have:** - -⟶ - -
- -**16. outer product: for x∈Rm,y∈Rn, we have:** - -⟶ - -
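A quick NumPy check of the two products — xTy is a scalar, xyT an m×n matrix:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

inner = x @ y            # x^T y = sum_i x_i y_i, a scalar
outer = np.outer(x, y)   # x y^T, matrix with entries x_i * y_j, shape (3, 3)
```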
- -**17. Matrix-vector ― The product of matrix A∈Rm×n and vector x∈Rn is a vector of size Rn, such that:** - -⟶ - -
- -**18. where aTr,i are the vector rows and ac,j are the vector columns of A, and xi are the entries of x.** - -⟶ - -
- -**19. Matrix-matrix ― The product of matrices A∈Rm×n and B∈Rn×p is a matrix of size Rn×p, such that:** - -⟶ - -
- -**20. where aTr,i,bTr,i are the vector rows and ac,j,bc,j are the vector columns of A and B respectively** - -⟶ - -
- -**21. Other operations** - -⟶ - -
- -**22. Transpose ― The transpose of a matrix A∈Rm×n, noted AT, is such that its entries are flipped:** - -⟶ - -
- -**23. Remark: for matrices A,B, we have (AB)T=BTAT** - -⟶ - -
- -**24. Inverse ― The inverse of an invertible square matrix A is noted A−1 and is the only matrix such that:** - -⟶ - -
- -**25. Remark: not all square matrices are invertible. Also, for matrices A,B, we have (AB)−1=B−1A−1** - -⟶ - -
- -**26. Trace ― The trace of a square matrix A, noted tr(A), is the sum of its diagonal entries:** - -⟶ - -
- -**27. Remark: for matrices A,B, we have tr(AT)=tr(A) and tr(AB)=tr(BA)** - -⟶ - -
- -**28. Determinant ― The determinant of a square matrix A∈Rn×n, noted |A| or det(A) is expressed recursively in terms of A∖i,∖j, which is the matrix A without its ith row and jth column, as follows:** - -⟶ - -
- -**29. Remark: A is invertible if and only if |A|≠0. Also, |AB|=|A||B| and |AT|=|A|.** - -⟶ - -
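The remarks on transpose, inverse, trace and determinant can all be verified numerically on random matrices (invertible with probability 1); a small illustrative check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

assert np.allclose((A @ B).T, B.T @ A.T)                 # (AB)^T = B^T A^T
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))  # (AB)^-1 = B^-1 A^-1
assert np.isclose(np.trace(A.T), np.trace(A))            # tr(A^T) = tr(A)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))      # tr(AB) = tr(BA)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))   # |AB| = |A||B|
```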
- -**30. Matrix properties** - -⟶ - -
- -**31. Definitions** - -⟶ - -
- -**32. Symmetric decomposition ― A given matrix A can be expressed in terms of its symmetric and antisymmetric parts as follows:** - -⟶ - -
- -**33. [Symmetric, Antisymmetric]** - -⟶ - -
- -**34. Norm ― A norm is a function N:V⟶[0,+∞[ where V is a vector space, and such that for all x,y∈V, we have:** - -⟶ - -
- -**35. N(ax)=|a|N(x) for a scalar** - -⟶ - -
- -**36. if N(x)=0, then x=0** - -⟶ - -
- -**37. For x∈V, the most commonly used norms are summed up in the table below:** - -⟶ - -
- -**38. [Norm, Notation, Definition, Use case]** - -⟶ - -
- -**39. Linearly dependence ― A set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others.** - -⟶ - -
- -**40. Remark: if no vector can be written this way, then the vectors are said to be linearly independent** - -⟶ - -
- -**41. Matrix rank ― The rank of a given matrix A is noted rank(A) and is the dimension of the vector space generated by its columns. This is equivalent to the maximum number of linearly independent columns of A.** - -⟶ - -
- -**42. Positive semi-definite matrix ― A matrix A∈Rn×n is positive semi-definite (PSD) and is noted A⪰0 if we have:** - -⟶ - -
- -**43. Remark: similarly, a matrix A is said to be positive definite, and is noted A≻0, if it is a PSD matrix which satisfies for all non-zero vector x, xTAx>0.** - -⟶ - -
- -**44. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** - -⟶ - -
- -**45. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** - -⟶ - -
- -**46. diagonal** - -⟶ - -
- -**47. Singular-value decomposition ― For a given matrix A of dimensions m×n, the singular-value decomposition (SVD) is a factorization technique that guarantees the existence of U m×m unitary, Σ m×n diagonal and V n×n unitary matrices, such that:** - -⟶ - -
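Both factorizations above can be checked with NumPy (random symmetric and rectangular matrices, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                        # a symmetric matrix

# Spectral theorem: A = U diag(lambda) U^T with U real orthogonal.
lam, U = np.linalg.eigh(A)
assert np.allclose(U @ np.diag(lam) @ U.T, A)
assert np.allclose(U.T @ U, np.eye(4))   # U is orthogonal

# SVD: any matrix (here 4x3) factors as B = U Sigma V^T.
B = rng.standard_normal((4, 3))
Us, sv, Vt = np.linalg.svd(B, full_matrices=False)
assert np.allclose(Us @ np.diag(sv) @ Vt, B)
```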
- -**48. Matrix calculus** - -⟶ - -
- -**49. Gradient ― Let f:Rm×n→R be a function and A∈Rm×n be a matrix. The gradient of f with respect to A is a m×n matrix, noted ∇Af(A), such that:** - -⟶ - -
- -**50. Remark: the gradient of f is only defined when f is a function that returns a scalar.** - -⟶ - -
- -**51. Hessian ― Let f:Rn→R be a function and x∈Rn be a vector. The hessian of f with respect to x is a n×n symmetric matrix, noted ∇2xf(x), such that:** - -⟶ - -
- -**52. Remark: the hessian of f is only defined when f is a function that returns a scalar** - -⟶ - -
- -**53. Gradient operations ― For matrices A,B,C, the following gradient properties are worth having in mind:** - -⟶ - -
- -**54. [General notations, Definitions, Main matrices]** - -⟶ - -
- -**55. [Matrix operations, Multiplication, Other operations]** - -⟶ - -
- -**56. [Matrix properties, Norm, Eigenvalue/Eigenvector, Singular-value decomposition]** - -⟶ - -
- -**57. [Matrix calculus, Gradient, Hessian, Operations]** - -⟶ diff --git a/hi/refresher-probability.md b/hi/refresher-probability.md deleted file mode 100644 index 5c9b34656..000000000 --- a/hi/refresher-probability.md +++ /dev/null @@ -1,381 +0,0 @@ -**1. Probabilities and Statistics refresher** - -⟶ - -
- -**2. Introduction to Probability and Combinatorics** - -⟶ - -
- -**3. Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by S.** - -⟶ - -
- -**4. Event ― Any subset E of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.** - -⟶ - -
- -**5. Axioms of probability For each event E, we denote P(E) as the probability of event E occuring.** - -⟶ - -
- -**6. Axiom 1 ― Every probability is between 0 and 1 included, i.e:** - -⟶ - -
- -**7. Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** - -⟶ - -
- -**8. Axiom 3 ― For any sequence of mutually exclusive events E1,...,En, we have:** - -⟶ - -
- -**9. Permutation ― A permutation is an arrangement of r objects from a pool of n objects, in a given order. The number of such arrangements is given by P(n,r), defined as:** - -⟶ - -
- -**10. Combination ― A combination is an arrangement of r objects from a pool of n objects, where the order does not matter. The number of such arrangements is given by C(n,r), defined as:** - -⟶ - -
- -**11. Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)** - -⟶ - -
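Python's standard library exposes both quantities directly (`math.perm` and `math.comb`, Python 3.8+), which also makes the remark easy to check:

```python
from math import comb, perm

# P(n, r) = n! / (n - r)!       ordered arrangements
# C(n, r) = n! / (r! (n - r)!)  unordered selections
assert perm(5, 2) == 20   # 5 * 4
assert comb(5, 2) == 10   # 20 / 2!

# Remark: for 0 <= r <= n, P(n, r) >= C(n, r).
assert all(perm(5, r) >= comb(5, r) for r in range(6))
```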
- -**12. Conditional Probability** - -⟶ - -
- -**13. Bayes' rule ― For events A and B such that P(B)>0, we have:** - -⟶ - -
- -**14. Remark: we have P(A∩B)=P(A)P(B|A)=P(A|B)P(B)** - -⟶ - -
- -**15. Partition ― Let {Ai,i∈[[1,n]]} be such that for all i, Ai≠∅. We say that {Ai} is a partition if we have:** - -⟶ - -
- -**16. Remark: for any event B in the sample space, we have P(B)=n∑i=1P(B|Ai)P(Ai).** - -⟶ - -
- -**17. Extended form of Bayes' rule ― Let {Ai,i∈[[1,n]]} be a partition of the sample space. We have:** - -⟶ - -
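As a worked number example of the partition remark and Bayes' rule — the screening probabilities below are hypothetical, chosen only for illustration:

```python
# Partition of the sample space: {D, not D} (has the condition / does not).
p_d = 0.01          # P(D)
p_pos_d = 0.95      # P(+ | D)
p_pos_nd = 0.05     # P(+ | not D)

# P(+) = sum_i P(+ | A_i) P(A_i) over the partition:
p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)

# Bayes' rule: P(D | +) = P(+ | D) P(D) / P(+)
p_d_pos = p_pos_d * p_d / p_pos
```

Despite the seemingly accurate test, P(D|+) comes out near 0.16, because D itself is rare.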
- -**18. Independence ― Two events A and B are independent if and only if we have:** - -⟶ - -
- -**19. Random Variables** - -⟶ - -
- -**20. Definitions** - -⟶ - -
- -**21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.** - -⟶ - -
- -**22. Cumulative distribution function (CDF) ― The cumulative distribution function F, which is monotonically non-decreasing and is such that limx→−∞F(x)=0 and limx→+∞F(x)=1, is defined as:** - -⟶ - -
-
-**23. Remark: we have P(a<X⩽b)=F(b)−F(a)**
-
-⟶
-
-<br>
-
-**24. Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.**
-
-⟶
-
-<br>
- -**25. Relationships involving the PDF and CDF ― Here are the important properties to know in the discrete (D) and the continuous (C) cases.** - -⟶ - -
- -**26. [Case, CDF F, PDF f, Properties of PDF]** - -⟶ - -
- -**27. Expectation and Moments of the Distribution ― Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[Xk] and characteristic function ψ(ω) for the discrete and continuous cases:** - -⟶ - -
- -**28. Variance ― The variance of a random variable, often noted Var(X) or σ2, is a measure of the spread of its distribution function. It is determined as follows:** - -⟶ - -
- -**29. Standard deviation ― The standard deviation of a random variable, often noted σ, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:** - -⟶ - -
- -**30. Transformation of random variables ― Let the variables X and Y be linked by some function. By noting fX and fY the distribution function of X and Y respectively, we have:** - -⟶ - -
- -**31. Leibniz integral rule ― Let g be a function of x and potentially c, and a,b boundaries that may depend on c. We have:** - -⟶ - -
- -**32. Probability Distributions** - -⟶ - -
- -**33. Chebyshev's inequality ― Let X be a random variable with expected value μ. For k,σ>0, we have the following inequality:** - -⟶ - -
- -**34. Main distributions ― Here are the main distributions to have in mind:** - -⟶ - -
- -**35. [Type, Distribution]** - -⟶ - -
- -**36. Jointly Distributed Random Variables** - -⟶ - -
- -**37. Marginal density and cumulative distribution ― From the joint density probability function fXY , we have** - -⟶ - -
- -**38. [Case, Marginal density, Cumulative function]** - -⟶ - -
- -**39. Conditional density ― The conditional density of X with respect to Y, often noted fX|Y, is defined as follows:** - -⟶ - -
- -**40. Independence ― Two random variables X and Y are said to be independent if we have:** - -⟶ - -
- -**41. Covariance ― We define the covariance of two random variables X and Y, that we note σ2XY or more commonly Cov(X,Y), as follows:** - -⟶ - -
- -**42. Correlation ― By noting σX,σY the standard deviations of X and Y, we define the correlation between the random variables X and Y, noted ρXY, as follows:** - -⟶ - -
- -**43. Remark 1: we note that for any random variables X,Y, we have ρXY∈[−1,1].** - -⟶ - -
- -**44. Remark 2: If X and Y are independent, then ρXY=0.** - -⟶ - -
- -**45. Parameter estimation** - -⟶ - -
- -**46. Definitions** - -⟶ - -
- -**47. Random sample ― A random sample is a collection of n random variables X1,...,Xn that are independent and identically distributed with X.** - -⟶ - -
- -**48. Estimator ― An estimator is a function of the data that is used to infer the value of an unknown parameter in a statistical model.** - -⟶ - -
- -**49. Bias ― The bias of an estimator ^θ is defined as being the difference between the expected value of the distribution of ^θ and the true value, i.e.:** - -⟶ - -
- -**50. Remark: an estimator is said to be unbiased when we have E[^θ]=θ.** - -⟶ - -
- -**51. Estimating the mean** - -⟶ - -
- -**52. Sample mean ― The sample mean of a random sample is used to estimate the true mean μ of a distribution, is often noted ¯¯¯¯¯X and is defined as follows:** - -⟶ - -
- -**53. Remark: the sample mean is unbiased, i.e E[¯¯¯¯¯X]=μ.** - -⟶ - -
- -**54. Central Limit Theorem ― Let us have a random sample X1,...,Xn following a given distribution with mean μ and variance σ2, then we have:** - -⟶ - -
- -**55. Estimating the variance** - -⟶ - -
- -**56. Sample variance ― The sample variance of a random sample is used to estimate the true variance σ2 of a distribution, is often noted s2 or ^σ2 and is defined as follows:** - -⟶ - -
- -**57. Remark: the sample variance is unbiased, i.e E[s2]=σ2.** - -⟶ - -
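The 1/(n−1) factor is what makes s2 unbiased, and it is exactly what NumPy's `ddof=1` computes; a small illustrative check:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
n = len(x)

x_bar = x.mean()                          # sample mean
s2 = ((x - x_bar) ** 2).sum() / (n - 1)   # sample variance s^2

assert np.isclose(s2, x.var(ddof=1))      # ddof=1 is the unbiased estimator
```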
- -**58. Chi-Squared relation with sample variance ― Let s2 be the sample variance of a random sample. We have:** - -⟶ - -
- -**59. [Introduction, Sample space, Event, Permutation]** - -⟶ - -
- -**60. [Conditional probability, Bayes' rule, Independence]** - -⟶ - -
- -**61. [Random variables, Definitions, Expectation, Variance]** - -⟶ - -
- -**62. [Probability distributions, Chebyshev's inequality, Main distributions]** - -⟶ - -
- -**63. [Jointly distributed random variables, Density, Covariance, Correlation]** - -⟶ - -
- -**64. [Parameter estimation, Mean, Variance]** - -⟶ diff --git a/id/cs-230-convolutional-neural-networks.md b/id/cs-230-convolutional-neural-networks.md new file mode 100644 index 000000000..4f22dfc35 --- /dev/null +++ b/id/cs-230-convolutional-neural-networks.md @@ -0,0 +1,715 @@ +**Convolutional Neural Networks translation** + +
+ +**1. Convolutional Neural Networks cheatsheet** + +⟶Cheatsheet Convolutional Neural Network + +
+ + +**2. CS 230 - Deep Learning** + +⟶Deep Learning + +
**3. [Overview, Architecture structure]**

⟶[Ringkasan, Struktur arsitektur]

<br>
+ + +**4. [Types of layer, Convolution, Pooling, Fully connected]** + +⟶[Jenis-jenis layer, Konvolusi, Pooling, Fully connected] + +
+ + +**5. [Filter hyperparameters, Dimensions, Stride, Padding]** + +⟶[Hiperparameter filter, Dimensi, Stride, Padding] + +
+ + +**6. [Tuning hyperparameters, Parameter compatibility, Model complexity, Receptive field]** + +⟶[Penyetelan hiperparameter, Kesesuaian parameter, Kompleksitas model, Receptive field] + +
+ + +**7. [Activation functions, Rectified Linear Unit, Softmax]** + +⟶[Fungsi-fungsi aktifasi, Rectified Linear Unit, Softmax] + +
+ + +**8. [Object detection, Types of models, Detection, Intersection over Union, Non-max suppression, YOLO, R-CNN]** + +⟶[Deteksi objek, Tipe-tipe model, Deteksi, Intersection over Union, Non-max suppression, YOLO, R-CNN] + +
+ + +**9. [Face verification/recognition, One shot learning, Siamese network, Triplet loss]** + +⟶[Verifikasi/pengenal wajah, One shot learning, Siamese network, Loss triplet] + +
+ + +**10. [Neural style transfer, Activation, Style matrix, Style/content cost function]** + +⟶[Transfer neural style, Aktifasi, Matriks style, Fungsi cost style/konten] + +
+ + +**11. [Computational trick architectures, Generative Adversarial Net, ResNet, Inception Network]** + +⟶[Arkitektur trik komputasional, Generative Adversarial Net, ResNet, Inception Network] + +
+ + +**12. Overview** + +⟶Ringkasan + +
+ + +**13. Architecture of a traditional CNN ― Convolutional neural networks, also known as CNNs, are a specific type of neural networks that are generally composed of the following layers:** + +⟶Arkitektur dari sebuah tradisional CNN - Convolutional neural network, juga dikenal sebagai CNN, adalah sebuah tipe khusus dari neural network yang secara umum terdiri dari layer-layer berikut: + +
+ + +**14. The convolution layer and the pooling layer can be fine-tuned with respect to hyperparameters that are described in the next sections.** + +⟶Layer konvolusi and layer pooling dapat disesuaikan terhadap hiperparameter yang dijelaskan pada bagian selanjutnya. + +
+ + +**15. Types of layer** + +⟶Jenis-jenis layer + +
**16. Convolution layer (CONV) ― The convolution layer (CONV) uses filters that perform convolution operations as it is scanning the input I with respect to its dimensions. Its hyperparameters include the filter size F and stride S. The resulting output O is called feature map or activation map.**

⟶Layer konvolusi (CONV) ― Layer konvolusi (CONV) menggunakan filter-filter yang melakukan operasi konvolusi saat memindai input I terhadap dimensinya. Hiperparameternya meliputi ukuran filter F dan stride S. Keluaran O yang dihasilkan disebut feature map atau activation map.

<br>
+ + +**17. Remark: the convolution step can be generalized to the 1D and 3D cases as well.** + +⟶Catatan: tahap konvolusi dapat digeneralisasi juga dalam kasus 1D dan 3D. + +
+ + +**18. Pooling (POOL) ― The pooling layer (POOL) is a downsampling operation, typically applied after a convolution layer, which does some spatial invariance. In particular, max and average pooling are special kinds of pooling where the maximum and average value is taken, respectively.** + +⟶Pooling (POOL) - Layer pooling adalah sebuah operasi downsampling, biasanya diaplikasikan setelah lapisan konvolusi, yang menyebabkan invarian spasial. Pada khususnya, pooling max dan average merupakan jenis-jenis pooling spesial di mana masing-masing nilai maksimal dan rata-rata diambil. + +
+ + +**19. [Type, Purpose, Illustration, Comments]** + +⟶[Jenis, Tujuan, Ilustrasi, Komentar] + +
+ + +**20. [Max pooling, Average pooling, Each pooling operation selects the maximum value of the current view, Each pooling operation averages the values of the current view]** + +⟶[Max pooling, Average pooling, Setiap operasi pooling mewakili nilai maksimal dari tampilan terbaru, setiap operasi pooling meratakan nilai-nilai dari tampilan terbaru] + +
+ + +**21. [Preserves detected features, Most commonly used, Downsamples feature map, Used in LeNet]** + +⟶[Mempertahankan fitur yang terdeteksi, yang paling sering digunakan, Downsamples feature map, dipakai di LeNet] + +
**22. Fully Connected (FC) ― The fully connected layer (FC) operates on a flattened input where each input is connected to all neurons. If present, FC layers are usually found towards the end of CNN architectures and can be used to optimize objectives such as class scores.**

⟶Fully Connected (FC) ― Layer fully connected (FC) beroperasi pada masukan yang diratakan (flattened), di mana setiap masukan terhubung ke seluruh neuron. Bila ada, layer-layer FC biasanya ditemukan menjelang akhir arsitektur CNN dan dapat digunakan untuk mengoptimalkan tujuan seperti skor-skor kelas (pada kasus klasifikasi).

<br>
+ + +**23. Filter hyperparameters** + +⟶Hiperparameter filter + +
**24. The convolution layer contains filters for which it is important to know the meaning behind its hyperparameters.**

⟶Layer konvolusi mengandung filter-filter yang makna hiperparameternya penting untuk dipahami.

<br>
**25. Dimensions of a filter ― A filter of size F×F applied to an input containing C channels is a F×F×C volume that performs convolutions on an input of size I×I×C and produces an output feature map (also called activation map) of size O×O×1.**

⟶Dimensi dari sebuah filter ― Sebuah filter berukuran F×F yang diaplikasikan pada masukan dengan C channel adalah sebuah volume F×F×C yang melakukan konvolusi pada masukan berukuran I×I×C dan menghasilkan sebuah feature map keluaran (juga disebut activation map) berukuran O×O×1.

<br>
+ + +**26. Filter** + +⟶Filter + +
**27. Remark: the application of K filters of size F×F results in an output feature map of size O×O×K.**

⟶Catatan: pengaplikasian K filter berukuran F×F menghasilkan sebuah feature map keluaran berukuran O×O×K.

<br>
**28. Stride ― For a convolutional or a pooling operation, the stride S denotes the number of pixels by which the window moves after each operation.**

⟶Stride ― Untuk sebuah operasi konvolusi atau pooling, stride S melambangkan jumlah piksel yang dilewati window setelah setiap operasi.

<br>
+ + +**29. Zero-padding ― Zero-padding denotes the process of adding P zeroes to each side of the boundaries of the input. This value can either be manually specified or automatically set through one of the three modes detailed below:** + +⟶Zero-padding - Zero-padding melambangkan proses penambahan P nilai 0 pada setiap sisi akhir dari masukan. Nilai dari zero-padding dapat dispesifikasikan secara manual atau secara otomatis melalui salah satu dari tiga mode yang dijelaskan dibawah ini: + +
+ + +**30. [Mode, Value, Illustration, Purpose, Valid, Same, Full]** + +⟶[Mode, Nilai, Ilustrasi, Tujuan, Valid, Same, Full] + +
+ + +**31. [No padding, Drops last convolution if dimensions do not match, Padding such that feature map size has size ⌈IS⌉, Output size is mathematically convenient, Also called 'half' padding, Maximum padding such that end convolutions are applied on the limits of the input, Filter 'sees' the input end-to-end]** + +⟶[No padding, Hapus konvolusi terakhir jika dimensi tidak sesuai, Padding yang menghasilkan feature map dengan ukuran ⌈IS⌉, Ukuran keluaran cocok secara matematis, Juga disebut 'half' padding, Maximum padding menjadikan akhir konvolusi dipasangkan pada batasan dari input, Filter 'melihat' masukan end-to-end] + +
+ + +**32. Tuning hyperparameters** + +⟶Menyetel hiperparameter + +
**33. Parameter compatibility in convolution layer ― By noting I the length of the input volume size, F the length of the filter, P the amount of zero padding, S the stride, then the output size O of the feature map along that dimension is given by:**

⟶Kompatibilitas parameter pada layer konvolusi ― Dengan menuliskan I sebagai panjang dari ukuran volume masukan, F sebagai panjang filter, P sebagai jumlah zero padding, dan S sebagai stride, maka ukuran keluaran O dari feature map sepanjang dimensi tersebut diberikan oleh:

<br>
+ + +**34. [Input, Filter, Output]** + +⟶[Masukan, Filter, Keluaran] + +
+ + +**35. Remark: often times, Pstart=Pend≜P, in which case we can replace Pstart+Pend by 2P in the formula above.** + +⟶Catatan: sering, Pstart=Pend≜P, pada kasus tersebut kita dapat mengganti Pstart+Pend dengan 2P pada formula di atas. + +
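The compatibility formula, O=(I−F+Pstart+Pend)/S+1 along one dimension, can be written as a small helper (the sizes below are hypothetical):

```python
def conv_output_size(I, F, S, P_start, P_end):
    """O = (I - F + P_start + P_end) / S + 1 along one dimension
    (integer division, i.e. the floor/'valid' convention)."""
    return (I - F + P_start + P_end) // S + 1

assert conv_output_size(32, 5, 1, 0, 0) == 28   # no padding
assert conv_output_size(32, 5, 1, 2, 2) == 32   # 'same' padding keeps the size
```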
**36. Understanding the complexity of the model ― In order to assess the complexity of a model, it is often useful to determine the number of parameters that its architecture will have. In a given layer of a convolutional neural network, it is done as follows:**

⟶Memahami kompleksitas dari model ― Untuk menilai kompleksitas dari sebuah model, sangatlah berguna untuk menentukan jumlah parameter yang akan dimiliki arsitekturnya. Pada sebuah layer dari convolutional neural network, hal tersebut dilakukan sebagai berikut:

<br>
+ + +**37. [Illustration, Input size, Output size, Number of parameters, Remarks]** + +⟶[Ilustrasi, Ukuran masukan, Ukuran keluaran, Jumlah parameter, Catatan] + +
**38. [One bias parameter per filter, In most cases, S<F, A common choice for K is 2C]**

⟶[Satu parameter bias untuk setiap filter, Pada banyak kasus, S<F, Sebuah pilihan umum untuk K adalah 2C]

<br>
+ + +**39. [Pooling operation done channel-wise, In most cases, S=F]** + +⟶[Operasi pooling yang dilakukan dengan channel-wise, Pada banyak kasus, S=F] + +
+ + +**40. [Input is flattened, One bias parameter per neuron, The number of FC neurons is free of structural constraints]** + +⟶[Masukan diratakan, satu parameter bias untuk setiap neuron, Jumlah dari neuron FC adalah terbebas dari batasan struktural] + +
**41. Receptive field ― The receptive field at layer k is the area denoted Rk×Rk of the input that each pixel of the k-th activation map can 'see'. By calling Fj the filter size of layer j and Si the stride value of layer i and with the convention S0=1, the receptive field at layer k can be computed with the formula:**

⟶Receptive field ― Receptive field pada layer k adalah area berukuran Rk×Rk dari masukan yang dapat 'dilihat' oleh setiap piksel dari activation map ke-k. Dengan menyebut Fj sebagai ukuran filter dari layer j dan Si sebagai nilai stride dari layer i, dan dengan konvensi S0=1, receptive field pada layer k dapat dihitung dengan formula:

<br>
+ + +**42. In the example below, we have F1=F2=3 and S1=S2=1, which gives R2=1+2⋅1+2⋅1=5.** + +⟶Pada contoh dibawah ini, kita memiliki F1=F2=3 dan S1=S2=1, yang menghasilkan R2=1+2⋅1+2⋅1=5. + +
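The formula Rk=1+Σkj=1(Fj−1)Πj−1i=0Si with the convention S0=1 reproduces this example; a small sketch:

```python
from math import prod

def receptive_field(filter_sizes, strides):
    """R_k = 1 + sum_{j=1}^{k} (F_j - 1) * prod_{i=0}^{j-1} S_i, with S_0 = 1."""
    S = [1] + list(strides)   # prepend the convention S_0 = 1
    return 1 + sum((F - 1) * prod(S[:j + 1])
                   for j, F in enumerate(filter_sizes))

# F1 = F2 = 3 and S1 = S2 = 1 gives R2 = 1 + 2*1 + 2*1 = 5, as above.
assert receptive_field([3, 3], [1, 1]) == 5
```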
+ + +**43. Commonly used activation functions** + +⟶Fungsi-fungsi aktifasi yang biasa dipakai + +
**44. Rectified Linear Unit ― The rectified linear unit layer (ReLU) is an activation function g that is used on all elements of the volume. It aims at introducing non-linearities to the network. Its variants are summarized in the table below:**

⟶Rectified Linear Unit ― Layer rectified linear unit (ReLU) adalah sebuah fungsi aktivasi g yang digunakan pada seluruh elemen volume. ReLU bertujuan memperkenalkan non-linearitas pada network. Variasi-variasinya dirangkum pada tabel di bawah ini:

<br>
+ + +**45. [ReLU, Leaky ReLU, ELU, with]** + +⟶[ReLU, Leaky ReLU, ELU, dengan] + +
+ + +**46. [Non-linearity complexities biologically interpretable, Addresses dying ReLU issue for negative values, Differentiable everywhere]** + +⟶[Kompleksitas non-linearitas yang dapat ditafsirkan secara biologi, Menangani permasalahan dying ReLU yang bernilai negatif, Yang dapat dibedakan di mana pun] + +
**47. Softmax ― The softmax step can be seen as a generalized logistic function that takes as input a vector of scores x∈Rn and outputs a vector of output probability p∈Rn through a softmax function at the end of the architecture. It is defined as follows:**

⟶Softmax ― Langkah softmax dapat dilihat sebagai sebuah fungsi logistik umum yang menerima masukan sebuah vektor skor x∈Rn dan mengeluarkan sebuah vektor probabilitas keluaran p∈Rn melalui sebuah fungsi softmax pada akhir arsitektur. Softmax didefinisikan sebagai berikut:

<br>
+ + +**48. where** + +⟶Di mana + +
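A NumPy sketch of the softmax step (subtracting the max is a standard numerical-stability trick, not part of the definition; it leaves the result unchanged):

```python
import numpy as np

def softmax(x):
    """p_i = exp(x_i) / sum_j exp(x_j)."""
    z = np.exp(x - x.max())   # shift by max(x) to avoid overflow
    return z / z.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
assert np.isclose(p.sum(), 1.0)   # a valid probability vector
assert p.argmax() == 0            # the ordering of the scores is preserved
```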
+ + +**49. Object detection** + +⟶Deteksi objek + +
+ + +**50. Types of models ― There are 3 main types of object recognition algorithms, for which the nature of what is predicted is different. They are described in the table below:** + +⟶Tipe-tipe model - Ada tiga tipe utama dari algoritma rekognisi objek, yang mana hakikat yang diprediksi tersebut berbeda. Tipe-tipe tersebut dijelaskan pada tabel di bawah ini: + +
+ + +**51. [Image classification, Classification w. localization, Detection]** + +⟶[Klasifikasi gambar, Klasifikasi w. lokalisasi, Deteksi] + +
+ + +**52. [Teddy bear, Book]** + +⟶[Boneka beruang, Buku] + +
+ + +**53. [Classifies a picture, Predicts probability of object, Detects an object in a picture, Predicts probability of object and where it is located, Detects up to several objects in a picture, Predicts probabilities of objects and where they are located]** + +⟶[Mengklasifikasikan sebuah gambar, Memprediksi probabilitas dari objek, Mendeteksi objek pada sebuah gambar, Memprediksi probabilitas dari objek dan lokasinya pada gambar, Mendeteksi hingga beberapa objek pada sebuah gambar, Memprediksi probabilitas dari objek-objek dan dimana lokasi mereka] + +
+ + +**54. [Traditional CNN, Simplified YOLO, R-CNN, YOLO, R-CNN]** + +⟶[CNN tradisional, Simplified YOLO, R-CNN, YOLO, R-CNN] + +
**55. Detection ― In the context of object detection, different methods are used depending on whether we just want to locate the object or detect a more complex shape in the image. The two main ones are summed up in the table below:**

⟶Deteksi ― Dalam konteks deteksi objek, metode yang berbeda digunakan tergantung apakah kita hanya ingin melokalisasi objek atau mendeteksi bentuk yang lebih rumit pada gambar. Dua metode utama dirangkum pada tabel di bawah ini:

<br>
+ + +**56. [Bounding box detection, Landmark detection]** + +⟶[Deteksi bounding box, Deteksi landmark] + +
**57. [Detects the part of the image where the object is located, Detects a shape or characteristics of an object (e.g. eyes), More granular]**

⟶[Mendeteksi bagian dari gambar di mana objek berada, Mendeteksi bentuk atau karakteristik dari sebuah objek (contoh: mata), Lebih granular]

<br>
**58. [Box of center (bx,by), height bh and width bw, Reference points (l1x,l1y), ..., (lnx,lny)]**

⟶[Pusat box (bx,by), tinggi bh dan lebar bw, Poin-poin referensi (l1x,l1y), ..., (lnx,lny)]

<br>
**59. Intersection over Union ― Intersection over Union, also known as IoU, is a function that quantifies how correctly positioned a predicted bounding box Bp is over the actual bounding box Ba. It is defined as:**

⟶Intersection over Union ― Intersection over Union, juga dikenal sebagai IoU, adalah sebuah fungsi yang mengkuantifikasi seberapa tepat posisi sebuah prediksi bounding box Bp terhadap bounding box sebenarnya Ba. IoU didefinisikan sebagai berikut:

<br>
+ + +**60. Remark: we always have IoU∈[0,1]. By convention, a predicted bounding box Bp is considered as being reasonably good if IoU(Bp,Ba)⩾0.5.** + +⟶Perlu diperhatikan: kita selalu memiliki nilai IoU∈[0,1]. Umumnya, sebuah prediksi bounding box dianggap cukup bagus jika IoU(Bp,Ba)⩾0.5. + +
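IoU as defined above, for axis-aligned boxes given here by their (x1, y1, x2, y2) corners — a hypothetical encoding for illustration; the cheatsheet itself parameterizes boxes by center, height and width:

```python
def iou(bp, ba):
    """Intersection area over union area of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(bp[0], ba[0]), max(bp[1], ba[1])
    ix2, iy2 = min(bp[2], ba[2]), min(bp[3], ba[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (bp[2] - bp[0]) * (bp[3] - bp[1])
    area_a = (ba[2] - ba[0]) * (ba[3] - ba[1])
    return inter / (area_p + area_a - inter)

assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0   # perfect overlap
assert iou((0, 0, 1, 1), (2, 2, 3, 3)) == 0.0   # disjoint boxes
```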
+ + +**61. Anchor boxes ― Anchor boxing is a technique used to predict overlapping bounding boxes. In practice, the network is allowed to predict more than one box simultaneously, where each box prediction is constrained to have a given set of geometrical properties. For instance, the first prediction can potentially be a rectangular box of a given form, while the second will be another rectangular box of a different geometrical form.** + +⟶Anchor boxes ― Anchor boxing adalah sebuah teknik yang digunakan untuk memprediksi bounding box yang overlap. Pada pengaplikasiannya, network diperbolehkan untuk memprediksi lebih dari satu box secara bersamaan, dimana setiap prediksi box dibatasi untuk memiliki kumpulan properti geometri. Contohnya, prediksi pertama dapat berupa sebuah box persegi panjang untuk sebuah bentuk, sedangkan prediksi kedua adalah persegi panjang lainnya dengan bentuk geometri yang berbeda. + +
+ + +**62. Non-max suppression ― The non-max suppression technique aims at removing duplicate overlapping bounding boxes of a same object by selecting the most representative ones. After having removed all boxes having a probability prediction lower than 0.6, the following steps are repeated while there are boxes remaining:** + +⟶Non-max suppression ― Teknik non-max suppression bertujuan untuk menghapus duplikasi bounding box yang overlap satu sama lain dari sebuah objek yang sama dengan memilih box yang paling representatif. Setelah menghapus seluruh box dengan prediksi probability lebih kecil dari 0.6, langkah berikut diulang selama terdapat box tersisa. + +
+ + +**63. [For a given class, Step 1: Pick the box with the largest prediction probability., Step 2: Discard any box having an IoU⩾0.5 with the previous box.]** + +⟶[Untuk sebuah kelas, Langkah 1: Pilih box dengan probabilitas prediksi tertinggi., Langkah 2: Singkirkan box manapun yang yang memiliki IoU⩾0.5 dengan box yang dipilih pada tahap 1.] + +
+ + +**64. [Box predictions, Box selection of maximum probability, Overlap removal of same class, Final bounding boxes]** + +⟶[Prediksi-prediksi box, Seleksi box dari probabilitas tertinggi, Penghapusan overlap pada kelas yang sama, Bounding box akhir] + +
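The two steps above can be sketched in a few lines of Python (illustrative only; boxes are assumed to be (score, (x1, y1, x2, y2)) pairs, and 0.6 / 0.5 are the thresholds quoted in the text):

```python
def iou(b1, b2):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((b1[2] - b1[0]) * (b1[3] - b1[1])
             + (b2[2] - b2[0]) * (b2[3] - b2[1]) - inter)
    return inter / union if union > 0 else 0.0

def non_max_suppression(predictions, score_thr=0.6, iou_thr=0.5):
    """predictions: list of (score, box) pairs for one class; returns kept pairs."""
    remaining = [p for p in predictions if p[0] >= score_thr]  # drop low scores
    kept = []
    while remaining:
        best = max(remaining, key=lambda p: p[0])   # Step 1: largest probability
        kept.append(best)
        remaining = [p for p in remaining           # Step 2: drop IoU >= 0.5 overlaps
                     if p is not best and iou(p[1], best[1]) < iou_thr]
    return kept

boxes = [(0.9, (0, 0, 2, 2)), (0.8, (0.1, 0.1, 2, 2)),
         (0.7, (5, 5, 7, 7)), (0.3, (0, 0, 1, 1))]
print([s for s, _ in non_max_suppression(boxes)])  # [0.9, 0.7]
```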
+ + +**65. YOLO ― You Only Look Once (YOLO) is an object detection algorithm that performs the following steps:** + +⟶YOLO - You Only Look Once (YOLO) adalah sebuah algoritma deteksi objek yang melakukan langkah-langkah berikut: + +


**66. [Step 1: Divide the input image into a G×G grid., Step 2: For each grid cell, run a CNN that predicts y of the following form:, repeated k times]**

⟶[Langkah 1: Bagi gambar masukan ke dalam sebuah grid berukuran G×G., Langkah 2: Untuk setiap sel grid, jalankan sebuah CNN yang memprediksi y dengan bentuk sebagai berikut:, diulang sebanyak k kali]

<br>


**67. where pc is the probability of detecting an object, bx,by,bh,bw are the properties of the detected bounding box, c1,...,cp is a one-hot representation of which of the p classes were detected, and k is the number of anchor boxes.**

⟶dimana pc adalah probabilitas deteksi sebuah objek, bx,by,bh,bw adalah properti dari bounding box yang terdeteksi, c1,...,cp adalah representasi one-hot dari kelas mana di antara p kelas yang terdeteksi, dan k adalah jumlah anchor box.

<br>


**68. Step 3: Run the non-max suppression algorithm to remove any potential duplicate overlapping bounding boxes.**

⟶Langkah 3: Jalankan algoritma non-max suppression untuk menghapus potensi duplikasi bounding box yang saling overlap.

<br>


**69. [Original image, Division in GxG grid, Bounding box prediction, Non-max suppression]**

⟶[Gambar asli, Pembagian ke dalam grid berukuran GxG, Prediksi bounding box, Non-max suppression]

<br>


**70. Remark: when pc=0, then the network does not detect any object. In that case, the corresponding predictions bx,...,cp have to be ignored.**

⟶Perlu diperhatikan: ketika pc=0, maka network tidak mendeteksi objek apapun. Pada kasus tersebut, prediksi yang bersangkutan bx,...,cp harus diabaikan.

<br>


**71. R-CNN ― Region with Convolutional Neural Networks (R-CNN) is an object detection algorithm that first segments the image to find potential relevant bounding boxes and then run the detection algorithm to find most probable objects in those bounding boxes.**

⟶R-CNN ― Region with Convolutional Neural Networks (R-CNN) adalah sebuah algoritma deteksi objek yang pertama-tama mensegmentasi gambar untuk menemukan bounding box yang berpotensi relevan, lalu menjalankan algoritma deteksi untuk menemukan objek yang paling mungkin pada bounding box tersebut.

<br>


**72. [Original image, Segmentation, Bounding box prediction, Non-max suppression]**

⟶[Gambar asli, Segmentasi, Prediksi bounding box, Non-max suppression]

<br>


**73. Remark: although the original algorithm is computationally expensive and slow, newer architectures enabled the algorithm to run faster, such as Fast R-CNN and Faster R-CNN.**

⟶Perlu diperhatikan: meskipun algoritma aslinya membutuhkan komputasi yang besar dan lambat, arsitektur yang lebih baru seperti Fast R-CNN dan Faster R-CNN memungkinkan algoritma ini berjalan lebih cepat.

<br>
+ + +**74. Face verification and recognition** + +⟶Verifikasi wajah dan rekognisi + +
+ + +**75. Types of models ― Two main types of model are summed up in table below:** + +⟶Jenis-jenis model - Dua jenis tipe utama dirangkum pada tabel dibawah ini: + +


**76. [Face verification, Face recognition, Query, Reference, Database]**

⟶[Verifikasi wajah, Rekognisi wajah, Query, Referensi, Database]

<br>
+ + +**77. [Is this the correct person?, One-to-one lookup, Is this one of the K persons in the database?, One-to-many lookup]** + +⟶[Apakah ini adalah orang yang sesuai?, One-to-one lookup, Apakah ini salah satu dari K orang pada database?, One-to-many lookup] + +
+ + +**78. One Shot Learning ― One Shot Learning is a face verification algorithm that uses a limited training set to learn a similarity function that quantifies how different two given images are. The similarity function applied to two images is often noted d(image 1,image 2).** + +⟶One Shot Learning ― One Shot Learning adalah sebuah algoritma verifikasi wajah yang menggunakan sebuah training set yang terbatas untuk belajar fungsi kemiripan yang mengkuantifikasi seberapa berbeda dua gambar yang diberikan. Fungsi kemiripan yang diaplikasikan pada dua gambar sering dinotasikan sebagai d(image 1,image 2). + +
+ + +**79. Siamese Network ― Siamese Networks aim at learning how to encode images to then quantify how different two images are. For a given input image x(i), the encoded output is often noted as f(x(i)).** + +⟶Siamese Network ― Siamese Networks didesain untuk mengkodekan gambar dan mengkuantifikasi seberapa berbeda dua buah gambar. Untuk sebuah gambar masukan x(i), keluaran yang dikodekan sering dinotasikan sebagai f(x(i)). + +
+ + +**80. Triplet loss ― The triplet loss ℓ is a loss function computed on the embedding representation of a triplet of images A (anchor), P (positive) and N (negative). The anchor and the positive example belong to a same class, while the negative example to another one. By calling α∈R+ the margin parameter, this loss is defined as follows:** + +⟶Loss triplet - Loss triplet adalah sebuah fungsi loss yang dihitung pada representasi embedding dari sebuah tiga pasang gambar A (anchor), P (positif) dan N (negatif). Sampel anchor dan positif berdasal dari sebuah kelas yang sama, sedangkan sampel negatif berasal dari kelas yang lain. Dengan menuliskan α∈R+ sebagai parameter margin, fungsi loss ini dapat didefinisikan sebagai berikut: + +
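As a concrete illustration of the triplet loss (a sketch, not from the cheatsheet; it assumes the squared Euclidean distance for d and plain Python lists as embeddings):

```python
def triplet_loss(a, p, n, alpha=0.2):
    """max(d(A,P) - d(A,N) + alpha, 0) on embedding vectors, with d the
    squared Euclidean distance (one common choice)."""
    d = lambda u, v: sum((ui - vi) ** 2 for ui, vi in zip(u, v))
    return max(d(a, p) - d(a, n) + alpha, 0.0)

# Anchor close to the positive, far from the negative -> zero loss.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [1.0, 0.0]))  # 0.0
```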
+ + +**81. Neural style transfer** + +⟶Transfer neural style + +


**82. Motivation ― The goal of neural style transfer is to generate an image G based on a given content C and a given style S.**

⟶Motivasi ― Tujuan dari neural style transfer adalah menghasilkan sebuah gambar G berdasarkan sebuah konten C dan sebuah style S yang diberikan.

<br>
+ + +**83. [Content C, Style S, Generated image G]** + +⟶[Konten C, Style S, gambar yang dihasilkan G] + +
+ + +**84. Activation ― In a given layer l, the activation is noted a[l] and is of dimensions nH×nw×nc** + +⟶Aktifasi - Pada sebuah layer l, aktifasi dinotasikan sebagai a[l] dan berdimensi nH×nw×nc + +
+ + +**85. Content cost function ― The content cost function Jcontent(C,G) is used to determine how the generated image G differs from the original content image C. It is defined as follows:** + +⟶Fungsi cost content - Fungsi cost content Jcontent(C,G) digunakan untuk menghitung perbedaan antara gambar yang dihasilkan dan gambar konten yang sebenarnya C. Fungsi cost content didefinsikan sebagai berikut: + +
+ + +**86. Style matrix ― The style matrix G[l] of a given layer l is a Gram matrix where each of its elements G[l]kk′ quantifies how correlated the channels k and k′ are. It is defined with respect to activations a[l] as follows:** + +⟶Matriks style - Matriks style G[l] dari sebuah layer l adalah sebuah matrix Gram dimana setiap elemennya G[l]kk′ mengkuantifikasi seberapa besar korelasi antara channel k dan k'. Matriks style didefinisikan terhadap aktifasi a[l] sebagai berikut: + +
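A direct (naive) illustration of this Gram matrix, assuming the activation volume is a nested nH×nw×nc Python list, may help make the double channel index kk′ concrete:

```python
def gram_matrix(a):
    """G[k][k2] = sum over spatial positions (i, j) of a[i][j][k] * a[i][j][k2]."""
    n_c = len(a[0][0])
    g = [[0.0] * n_c for _ in range(n_c)]
    for row in a:                       # i over height
        for cell in row:                # j over width
            for k in range(n_c):
                for k2 in range(n_c):
                    g[k][k2] += cell[k] * cell[k2]
    return g

print(gram_matrix([[[1.0, 2.0]]]))  # [[1.0, 2.0], [2.0, 4.0]]
```

By construction the result is symmetric, since G[k][k2] and G[k2][k] sum the same products.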
+ + +**87. Remark: the style matrix for the style image and the generated image are noted G[l] (S) and G[l] (G) respectively.** + +⟶Perlu diperhatikan: matriks style untuk gambar style dan gambar yang dihasilkan masing-masing dituliskan sebagai G[l] (S) dan G[l] (G). + +
+ + +**88. Style cost function ― The style cost function Jstyle(S,G) is used to determine how the generated image G differs from the style S. It is defined as follows:** + +⟶Fungsi cost style - Fungsi cost style Jstyle(S,G) digunakan untuk menentukan perbedaan antara gambar yang dihasilkan G dengan style yang diberikan S. Fungsi tersebut definisikan sebagai berikut: + +
+ + +**89. Overall cost function ― The overall cost function is defined as being a combination of the content and style cost functions, weighted by parameters α,β, as follows:** + +⟶Fungsi cost overall - Fungsi cost overall didefinisikan sebagai sebuah kombinasi dari fungsi cost konten dan syle, dibobotkan oleh parameter α,β, sebagai berikut: + +


**90. Remark: a higher value of α will make the model care more about the content while a higher value of β will make it care more about the style.**

⟶Perlu diperhatikan: semakin tinggi nilai α akan membuat model lebih memperhatikan konten, sedangkan semakin tinggi nilai β akan membuat model lebih memperhatikan style.

<br>


**91. Architectures using computational tricks**

⟶Arsitektur yang menggunakan trik komputasi

<br>


**92. Generative Adversarial Network ― Generative adversarial networks, also known as GANs, are composed of a generative and a discriminative model, where the generative model aims at generating the most truthful output that will be fed into the discriminative which aims at differentiating the generated and true image.**

⟶Generative Adversarial Network ― Generative adversarial networks, juga dikenal sebagai GANs, terdiri dari sebuah model generatif dan sebuah model diskriminatif, dimana model generatif bertujuan menghasilkan keluaran yang semirip mungkin dengan data sebenarnya, yang kemudian diberikan kepada model diskriminatif yang bertujuan membedakan gambar hasil generasi dan gambar sebenarnya.

<br>
+ + +**93. [Training, Noise, Real-world image, Generator, Discriminator, Real Fake]** + +⟶[Training, Noise, Gambar real-world, Generator, Discriminator, Real Fake] + +


**94. Remark: use cases using variants of GANs include text to image, music generation and synthesis.**

⟶Perlu diperhatikan: penggunaan varian GANs meliputi pengubahan teks ke gambar, serta pembuatan dan sintesis musik.

<br>
+ + +**95. ResNet ― The Residual Network architecture (also called ResNet) uses residual blocks with a high number of layers meant to decrease the training error. The residual block has the following characterizing equation:** + +⟶ ResNet ― Arsitektur Residual Network (juga disebut ResNet) menggunakan blok-blok residual dengan jumlah layer yang banyak untuk mengurangi training error. Blok residual memiliki karakteristik formula sebagai berikut: + +


**96. Inception Network ― This architecture uses inception modules and aims at giving a try at different convolutions in order to increase its performance through features diversification. In particular, it uses the 1×1 convolution trick to limit the computational burden.**

⟶Inception Network ― Arsitektur ini menggunakan modul inception dan bertujuan meningkatkan performa network melalui diversifikasi fitur dengan mencoba berbagai konvolusi. Khususnya, arsitektur ini menggunakan trik konvolusi 1×1 untuk membatasi beban komputasi.

<br>
+ + +**97. The Deep Learning cheatsheets are now available in [target language].** + +⟶Deep Learning cheatsheet sekarang tersedia di [Bahasa Indonesia] + +
+ + +**98. Original authors** + +⟶Penulis asli + +
+ + +**99. Translated by X, Y and Z** + +⟶Diterjemahkan oleh X, Y dan Z + +
+ + +**100. Reviewed by X, Y and Z** + +⟶Diulas oleh X, Y dan Z + +
+ + +**101. View PDF version on GitHub** + +⟶Lihat versi PDF pada GitHub + +
+ + +**102. By X and Y** + +⟶Oleh X dan Y + +
diff --git a/he/refresher-linear-algebra.md b/ja/cs-229-linear-algebra.md similarity index 51% rename from he/refresher-linear-algebra.md rename to ja/cs-229-linear-algebra.md index a6b440d1e..c806cb4ca 100644 --- a/he/refresher-linear-algebra.md +++ b/ja/cs-229-linear-algebra.md @@ -1,339 +1,342 @@ **1. Linear Algebra and Calculus refresher** ⟶ - +線形代数と微積分の復習
**2. General notations** ⟶ - +一般表記
**3. Definitions** ⟶ - +定義
**4. Vector ― We note x∈Rn a vector with n entries, where xi∈R is the ith entry:** ⟶ - +ベクトル - x∈Rn はn個の要素を持つベクトルを表し、xi∈R はi番目の要素を表します。
**5. Matrix ― We note A∈Rm×n a matrix with m rows and n columns, where Ai,j∈R is the entry located in the ith row and jth column:** ⟶ - +行列 - m行n列の行列を A∈Rm×n と表記し、Ai,j∈R はi行目のj列目の要素を指します。
**6. Remark: the vector x defined above can be viewed as a n×1 matrix and is more particularly called a column-vector.** ⟶ - +備考:上記で定義されたベクトル x は n×1 の行列と見なすことができ、列ベクトルと呼ばれます。
**7. Main matrices** ⟶ - +主な行列の種類
**8. Identity matrix ― The identity matrix I∈Rn×n is a square matrix with ones in its diagonal and zero everywhere else:** ⟶ - +単位行列 - 単位行列 I∈Rn×n は、対角成分に 1 が並び、他は全て 0 となる正方行列です。
**9. Remark: for all matrices A∈Rn×n, we have A×I=I×A=A.** ⟶ - +備考:すべての行列 A∈Rn×n に対して、A×I=I×A=A となります。
**10. Diagonal matrix ― A diagonal matrix D∈Rn×n is a square matrix with nonzero values in its diagonal and zero everywhere else:** ⟶ - +対角行列 - 対角行列 D∈Rn×n は、対角成分の値が 0 以外で、それ以外は 0 である正方行列です。
**11. Remark: we also note D as diag(d1,...,dn).** ⟶ - +備考:Dをdiag(d1,...,dn) とも表記します。
**12. Matrix operations** ⟶ - +行列演算
**13. Multiplication** ⟶ - +行列乗算
**14. Vector-vector ― There are two types of vector-vector products:** ⟶ - +ベクトル-ベクトル - ベクトル-ベクトル積には2種類あります。
**15. inner product: for x,y∈Rn, we have:** ⟶ - +内積: x,y∈Rn に対して、内積の定義は下記の通りです:
**16. outer product: for x∈Rm,y∈Rn, we have:** ⟶ - +外積: x∈Rm,y∈Rn に対して、外積の定義は下記の通りです:
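The two vector-vector products can be written out directly (a small pure-Python sketch, using lists as vectors):

```python
def inner(x, y):
    """Inner product x^T y: a scalar."""
    return sum(xi * yi for xi, yi in zip(x, y))

def outer(x, y):
    """Outer product x y^T: an m x n matrix for x in R^m, y in R^n."""
    return [[xi * yj for yj in y] for xi in x]

print(inner([1, 2], [3, 4]))     # 1*3 + 2*4 = 11
print(outer([1, 2], [3, 4, 5]))  # [[3, 4, 5], [6, 8, 10]]
```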


**17. Matrix-vector ― The product of matrix A∈Rm×n and vector x∈Rn is a vector of size Rm, such that:**

⟶
-
+行列-ベクトル - 行列 A∈Rm×n とベクトル x∈Rn の積は以下の条件を満たすようなサイズ Rm のベクトルです。

<br>
**18. where aTr,i are the vector rows and ac,j are the vector columns of A, and xi are the entries of x.** ⟶ - +上記 aTr,i は A の行ベクトルで、ac,j は A の列ベクトルです。 xi は x の要素です。


**19. Matrix-matrix ― The product of matrices A∈Rm×n and B∈Rn×p is a matrix of size Rm×p, such that:**

⟶
-
+行列-行列 - 行列 A∈Rm×n と B∈Rn×p の積は以下の条件を満たすようなサイズ Rm×p の行列です。

<br>
**20. where aTr,i,bTr,i are the vector rows and ac,j,bc,j are the vector columns of A and B respectively** ⟶ - +aTr,i,bTr,i は A と B の行ベクトルで ac,j,bc,j は A と B の列ベクトルです。
**21. Other operations** ⟶ - +その他の演算


**22. Transpose ― The transpose of a matrix A∈Rm×n, noted AT, is such that its entries are flipped:**

⟶
-
+転置 ― A∈Rm×n の転置行列は AT と表記し、A の行と列の要素を入れ替えた行列です。

<br>


**23. Remark: for matrices A,B, we have (AB)T=BTAT**

⟶
-
+備考: 行列 A,B の場合、(AB)T=BTAT となります。

<br>
**24. Inverse ― The inverse of an invertible square matrix A is noted A−1 and is the only matrix such that:** ⟶ - +逆行列 ― 可逆正方行列 A の逆行列は A-1 と表記し、 以下の条件を満たす唯一の行列です。
**25. Remark: not all square matrices are invertible. Also, for matrices A,B, we have (AB)−1=B−1A−1** ⟶ - +備考: すべての正方行列が可逆とは限りません。 行列 A,B については、(AB)−1=B−1A−1
**26. Trace ― The trace of a square matrix A, noted tr(A), is the sum of its diagonal entries:** ⟶ - +跡 - 正方行列 A の跡は、tr(A) と表記し、その対角成分の要素の和です。
**27. Remark: for matrices A,B, we have tr(AT)=tr(A) and tr(AB)=tr(BA)** ⟶ - +備考: 行列 A,B の場合: tr(AT)=tr(A) と tr(AB)=tr(BA) となります。


**28. Determinant ― The determinant of a square matrix A∈Rn×n, noted |A| or det(A) is expressed recursively in terms of A∖i,∖j, which is the matrix A without its ith row and jth column, as follows:**

⟶
-
+行列式 ― 正方行列 A∈Rn×n の行列式は |A| または det(A) と表記し、i番目の行とj番目の列を除いた行列 A∖i,∖j を用いて、以下のように再帰的に定義されます:

<br>
**29. Remark: A is invertible if and only if |A|≠0. Also, |AB|=|A||B| and |AT|=|A|.** ⟶ - +備考: |A|≠0の場合に限り、行列は可逆行列です。また |AB|=|A||B| と |AT|=|A|。
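The recursive definition translates almost verbatim into code (an illustrative sketch: cofactor expansion along the first row, fine for small matrices though it costs O(n!)):

```python
def det(a):
    """Determinant of a square matrix via cofactor expansion along row 0."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]  # drop row 0, column j
        total += (-1) ** j * a[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```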
**30. Matrix properties** ⟶ - +行列の性質
**31. Definitions** ⟶ - +定義
**32. Symmetric decomposition ― A given matrix A can be expressed in terms of its symmetric and antisymmetric parts as follows:** ⟶ - +対称分解 ― 行列Aは次のように対称および反対称的な部分で表現できます。
**33. [Symmetric, Antisymmetric]** ⟶ - +[対称、反対称]


**34. Norm ― A norm is a function N:V⟶[0,+∞[ where V is a vector space, and such that for all x,y∈V, we have:**

⟶
-
+ノルム ― ノルムとは関数 N:V⟶[0,+∞[ のことです。ここで V はベクトル空間であり、すべての x,y∈V に対して以下が成り立ちます:

<br>
**35. N(ax)=|a|N(x) for a scalar** ⟶ - +スカラー a に対して N(ax)=|a|N(x)
**36. if N(x)=0, then x=0** ⟶ - +N(x)= 0ならば x = 0
**37. For x∈V, the most commonly used norms are summed up in the table below:** ⟶ - +x∈Vに対して、最も多用されているノルムは、以下の表にまとめられています。
**38. [Norm, Notation, Definition, Use case]** ⟶ - +[ノルム、表記法、定義、使用事例]


**39. Linear dependence ― A set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others.**

⟶
-
+線形従属 ― ベクトルの集合に対して、少なくともどれか一つのベクトルを他のベクトルの線形結合として定義できる場合、その集合が線形従属であるといいます。

<br>
**40. Remark: if no vector can be written this way, then the vectors are said to be linearly independent** ⟶ - +備考:この方法でベクトルを書くことができない場合、ベクトルは線形独立していると言われます。
**41. Matrix rank ― The rank of a given matrix A is noted rank(A) and is the dimension of the vector space generated by its columns. This is equivalent to the maximum number of linearly independent columns of A.** ⟶ - +行列の階数 ― 行列Aの階数は rank(A) と表記し、列空間の次元を表します。これは、Aの線形独立の列の最大数に相当します。
**42. Positive semi-definite matrix ― A matrix A∈Rn×n is positive semi-definite (PSD) and is noted A⪰0 if we have:** ⟶ - +半正定値行列 ― 行列A、A∈Rn×nに対して、以下の式が成り立つならば、 Aを半正定値(PSD)といい、A⪰0 と表記します。
**43. Remark: similarly, a matrix A is said to be positive definite, and is noted A≻0, if it is a PSD matrix which satisfies for all non-zero vector x, xTAx>0.** ⟶ - +備考: 同様に、全ての非ゼロベクトルx、xTAx>0 に対して条件を満たすような行列Aは正定値行列といい、A≻0 と表記します。
**44. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** ⟶ - +固有値、固有ベクトル ― 行列A、A∈Rn×n に対して、以下の条件を満たすようなベクトルz、z∈Rn∖{0} が存在するならば、λ は固有値といい、z は固有ベクトルといいます。
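For a 2×2 symmetric matrix the eigenvalues can be read directly off the characteristic polynomial λ²−tr(A)λ+det(A)=0; a small sketch (illustrative, not part of the cheatsheet):

```python
import math

def eig_2x2_symmetric(a, b, d):
    """Eigenvalues of [[a, b], [b, d]] from lambda^2 - (a+d) lambda + (ad - b^2) = 0."""
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4 * det)  # non-negative for symmetric matrices
    return (tr + disc) / 2, (tr - disc) / 2

lams = eig_2x2_symmetric(2, 1, 2)
print(lams)  # (3.0, 1.0), with eigenvectors (1, 1) and (1, -1)
```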
**45. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** ⟶ - +スペクトル定理 ― A∈Rn×n とします。A が対称ならば、A は実直交行列 U∈Rn×n によって対角化可能です。Λ=diag(λ1,...,λn) と表記すると、次のように表現できます。
**46. diagonal** ⟶ - +対角
**47. Singular-value decomposition ― For a given matrix A of dimensions m×n, the singular-value decomposition (SVD) is a factorization technique that guarantees the existence of U m×m unitary, Σ m×n diagonal and V n×n unitary matrices, such that:** ⟶ - +特異値分解 ― A を m×n の行列とします。特異値分解(SVD)は、ユニタリ行列 U m×m、Σ m×n の対角行列、およびユニタリ行列 V n×n の存在を保証する因数分解手法で、以下の条件を満たします。
**48. Matrix calculus** ⟶ - +行列微積分
**49. Gradient ― Let f:Rm×n→R be a function and A∈Rm×n be a matrix. The gradient of f with respect to A is a m×n matrix, noted ∇Af(A), such that:** ⟶ - +勾配 ― f:Rm×n→R を関数とし、A∈Rm×n を行列とします。 A に対する f の勾配は m×n 行列で、∇Af(A) と表記し、次の条件を満たします。
**50. Remark: the gradient of f is only defined when f is a function that returns a scalar.** ⟶ - +備考: f の勾配は、f がスカラーを返す関数であるときに限り存在します。
**51. Hessian ― Let f:Rn→R be a function and x∈Rn be a vector. The hessian of f with respect to x is a n×n symmetric matrix, noted ∇2xf(x), such that:** ⟶ - +ヘッセ行列 ― f:Rn→R を関数とし、x∈Rn をベクトルとします。 x に対する f のヘッセ行列は、n×n 対称行列で ∇2xf(x) と表記し、以下の条件を満たします。
**52. Remark: the hessian of f is only defined when f is a function that returns a scalar** ⟶ - +備考: f のヘッセ行列は、f がスカラーを返す関数である場合に限り存在します。


**53. Gradient operations ― For matrices A,B,C, the following gradient properties are worth having in mind:**

⟶
-
+勾配演算 ― 行列 A,B,C について、特に以下の勾配の性質を覚えておく価値があります。

<br>
**54. [General notations, Definitions, Main matrices]** ⟶ - +[表記, 定義, 主な行列の種類]
**55. [Matrix operations, Multiplication, Other operations]** ⟶ - +[行列演算, 乗算, その他の演算]
**56. [Matrix properties, Norm, Eigenvalue/Eigenvector, Singular-value decomposition]** ⟶ - +[行列特性, 行列ノルム, 固有値/固有ベクトル, 特異値分解]
**57. [Matrix calculus, Gradient, Hessian, Operations]** ⟶ +[行列微積分, 勾配, ヘッセ行列, 演算] diff --git a/ja/cs-229-probability.md b/ja/cs-229-probability.md new file mode 100644 index 000000000..16fca9ea5 --- /dev/null +++ b/ja/cs-229-probability.md @@ -0,0 +1,381 @@ +**1. Probabilities and Statistics refresher** + +⟶確率と統計の復習 + +
+ +**2. Introduction to Probability and Combinatorics** + +⟶確率と組合せの導入 + +
+ +**3. Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by S.** + +⟶標本空間 - ある試行のすべての起こりうる結果の集合はその試行の標本空間として知られ、Sと表します。 + +
+ +**4. Event ― Any subset E of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.** + +⟶事象 - 標本空間の任意の部分集合Eを事象と言います。つまり、ある事象はある試行の起こりうる結果により構成された集合です。ある試行結果がEに含まれるなら、Eが起きたと言います。 + +
+ +**5. Axioms of probability ― For each event E, we denote P(E) as the probability of event E occuring.** + +⟶確率の公理 - 各事象Eに対して、事象Eが起こる確率をP(E)と書きます。 + +
+ +**6. Axiom 1 ― Every probability is between 0 and 1 included, i.e:** + +⟶公理1 - すべての確率は0と1を含んでその間にあります。すなわち: + +
+ +**7. Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** + +⟶公理2 - 標本空間全体において少なくとも一つの根元事象が起こる確率は1です。すなわち: + +
+ +**8. Axiom 3 ― For any sequence of mutually exclusive events E1,...,En, we have:** + +⟶公理3 - 互いに排反な事象の任意の数列E1,...,Enに対し、次が成り立ちます: + +
+ +**9. Permutation ― A permutation is an arrangement of r objects from a pool of n objects, in a given order. The number of such arrangements is given by P(n,r), defined as:** + +⟶順列(Permutation) - 順列とはn個のものの中からr個をある順序で並べた配列です。このような配列の数はP(n,r)と表し、次のように定義します: + +
+ +**10. Combination ― A combination is an arrangement of r objects from a pool of n objects, where the order does not matter. The number of such arrangements is given by C(n,r), defined as:** + +⟶組合せ(Combination) - 組合せはn個の中からr個の順番を勘案しない配列です。このような配列の数はC(n,r)と表し、次のように定義します: + +
+ +**11. Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)** + +⟶注釈: 0⩽r⩽nのとき、P(n,r)⩾C(n,r)となります。 + +
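Both counts are one-liners with factorials (a quick sketch; recent Python versions also ship math.perm and math.comb for the same purpose):

```python
from math import factorial

def P(n, r):
    """Ordered arrangements of r objects out of n: n! / (n-r)!"""
    return factorial(n) // factorial(n - r)

def C(n, r):
    """Unordered selections of r objects out of n: n! / (r! (n-r)!)"""
    return factorial(n) // (factorial(r) * factorial(n - r))

print(P(5, 2), C(5, 2))  # 20 10 -- and indeed P(n,r) >= C(n,r)
```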
+ +**12. Conditional Probability** + +⟶条件付き確率 + +
+ +**13. Bayes' rule ― For events A and B such that P(B)>0, we have:** + +⟶ベイズの定理 - P(B)>0であるような事象A, Bに対して、次が成り立ちます: + +
+ +**14. Remark: we have P(A∩B)=P(A)P(B|A)=P(A|B)P(B)** + +⟶注釈: P(A∩B)=P(A)P(B|A)=P(A|B)P(B)となります。 + +
+ +**15. Partition ― Let {Ai,i∈[[1,n]]} be such that for all i, Ai≠∅. We say that {Ai} is a partition if we have:** + +⟶分割(Partition) - {Ai,i∈[[1,n]]}はすべてのiに対してAi≠∅としましょう。次が成り立つとき、{Ai}は分割であると言います: + +
+ +**16. Remark: for any event B in the sample space, we have P(B)=n∑i=1P(B|Ai)P(Ai).** + +⟶注釈: 標本空間において任意の事象Bに対して、P(B)=n∑i=1P(B|Ai)P(Ai)が成り立ちます。 + +
+ +**17. Extended form of Bayes' rule ― Let {Ai,i∈[[1,n]]} be a partition of the sample space. We have:** + +⟶ベイズの定理の応用 - {Ai,i∈[[1,n]]}を標本空間の分割とすると、次が成り立ちます: + +
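A numeric sketch of the extended form, using a hypothetical two-event partition (disease / no disease) with made-up numbers:

```python
def posterior(priors, likelihoods):
    """Extended Bayes' rule over a partition {A1,...,An}:
    P(Ai|B) = P(B|Ai) P(Ai) / sum_j P(B|Aj) P(Aj)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)  # P(B), by the law of total probability
    return [j / total for j in joint]

# Hypothetical numbers: P(D)=0.01, P(+|D)=0.9, P(+|not D)=0.1.
post = posterior([0.01, 0.99], [0.9, 0.1])
print(post[0])  # P(D | +) = 0.009 / 0.108, roughly 0.083
```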
+ +**18. Independence ― Two events A and B are independent if and only if we have:** + +⟶独立性 - 次が成り立ちかつその場合に限り(必要十分)、2つの事象AとBは独立であるといいます: + +
+ +**19. Random Variables** + +⟶確率変数 + +
+ +**20. Definitions** + +⟶定義 + +
+ +**21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.** + +⟶確率変数 - 確率変数は、よくXと表記され、ある標本空間のすべての要素を実数直線に対応させる関数です。 + +
+ +**22. Cumulative distribution function (CDF) ― The cumulative distribution function F, which is monotonically non-decreasing and is such that limx→−∞F(x)=0 and limx→+∞F(x)=1, is defined as:** + +⟶累積分布関数(CDF) - 累積分布関数Fは、単調非減少かつlimx→−∞F(x)=0 and limx→+∞F(x)=1であり、次のように定義されます: + +


**23. Remark: we have P(a<X⩽b)=F(b)−F(a)**

⟶注釈: P(a<X⩽b)=F(b)−F(a) が成り立ちます。

<br>

**24. Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.**

⟶確率密度関数(PDF) - 確率密度関数fは確率変数Xが2つの隣接する実現値の間の値をとる確率です。

<br>
+ +**25. Relationships involving the PDF and CDF ― Here are the important properties to know in the discrete (D) and the continuous (C) cases.** + +⟶PDFとCDFについての関係性 - 離散値(D)と連続値(C)のそれぞれの場合について知っておくべき重要な特性をここに挙げます。 + +
+ +**26. [Case, CDF F, PDF f, Properties of PDF]** + +⟶[種類、CDF F、PDF f、PDFの特性] + +
+ +**27. Expectation and Moments of the Distribution ― Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[Xk] and characteristic function ψ(ω) for the discrete and continuous cases:** + +⟶分布の期待値と積率 - 離散値と連続値のそれぞれの場合における期待値E[X]、一般化した期待値E[g(X)]、k次の積率E[Xk]と特性関数ψ(ω)をここに挙げます: + +
+ +**28. Variance ― The variance of a random variable, often noted Var(X) or σ2, is a measure of the spread of its distribution function. It is determined as follows:** + +⟶分散(Variance) - 確率変数の分散は、よくVar(X)またはσ2と表記され、その確率変数の分布関数のばらつきの尺度です。次のように計算されます。 + +
+ +**29. Standard deviation ― The standard deviation of a random variable, often noted σ, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:** + +⟶標準偏差(Standard deviation) - 確率変数の標準偏差は、よくσと表記され、その確率変数の分布関数のばらつきの尺度であり、その確率変数の単位に則ったものです。次のように計算されます。 + +
+ +**30. Transformation of random variables ― Let the variables X and Y be linked by some function. By noting fX and fY the distribution function of X and Y respectively, we have:** + +⟶確率変数の変換 - 変数XとYはなんらかの関数により関連づけられているとします。fXとfYをそれぞれXとYの分布関数として表記すると次が成り立ちます: + +
+ +**31. Leibniz integral rule ― Let g be a function of x and potentially c, and a,b boundaries that may depend on c. We have:** + +⟶ライプニッツの積分則 - gをxと潜在的にcの関数とし、a,bをcに従属的な境界とすると、次が成り立ちます。 + +
+ +**32. Probability Distributions** + +⟶確率分布 + +
+ +**33. Chebyshev's inequality ― Let X be a random variable with expected value μ. For k,σ>0, we have the following inequality:** + +⟶チェビシェフの不等式 - Xを期待値μの確率変数とします。k,σ>0のとき次の不等式が成り立ちます: + +
+ +**34. Main distributions ― Here are the main distributions to have in mind:** + +⟶主な分布 - 覚えておくべき主な分布をここに挙げます: + +
+ +**35. [Type, Distribution]** + +⟶[種類、分布] + +
+ +**36. Jointly Distributed Random Variables** + +⟶同時分布の確率変数 + +
+ +**37. Marginal density and cumulative distribution ― From the joint density probability function fXY , we have** + +⟶周辺密度と累積分布 - 同時確率密度関数fXYから次が成り立ちます。 + +
+ +**38. [Case, Marginal density, Cumulative function]** + +⟶[種類、周辺密度、累積関数] + +
+ +**39. Conditional density ― The conditional density of X with respect to Y, often noted fX|Y, is defined as follows:** + +⟶条件付き密度(Conditional density) - Yに対するXの条件付き密度はよくfX|Yと表記され、次のように定義されます: + +
+ +**40. Independence ― Two random variables X and Y are said to be independent if we have:** + +⟶独立性(Independence) - 2つの確率変数XとYは次が成り立つとき、独立であると言います: + +
+ +**41. Covariance ― We define the covariance of two random variables X and Y, that we note σ2XY or more commonly Cov(X,Y), as follows:** + +⟶共分散(Covariance) - 2つの確率変数XとYの共分散を、σ2XYまたはより一般的にはCov(X,Y)と表記し、次のように定義します: + +
+ +**42. Correlation ― By noting σX,σY the standard deviations of X and Y, we define the correlation between the random variables X and Y, noted ρXY, as follows:** + +⟶相関係数(Correlation) - X, Yの標準偏差をσX,σYと表記し、確率変数X,Yの相関関係をρXYと表記し、次のように定義します: + +
+ +**43. Remark 1: we note that for any random variables X,Y, we have ρXY∈[−1,1].** + +⟶注釈 1: 任意の確率変数X,Yに対してρXY∈[−1,1]が成り立ちます。 + +
+ +**44. Remark 2: If X and Y are independent, then ρXY=0.** + +⟶注釈 2: XとYが独立ならば、ρXY=0です。 + +
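These two definitions, estimated from paired samples, in a short sketch (illustrative; normalization by n is assumed):

```python
import math

def covariance(xs, ys):
    """Cov(X, Y) = E[(X - E[X])(Y - E[Y])], estimated from paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def correlation(xs, ys):
    """rho_XY = Cov(X, Y) / (sigma_X sigma_Y); always lies in [-1, 1]."""
    sx = math.sqrt(covariance(xs, xs))
    sy = math.sqrt(covariance(ys, ys))
    return covariance(xs, ys) / (sx * sy)

print(correlation([1, 2, 3], [2, 4, 6]))  # ~1.0: perfectly linear, same sign
print(correlation([1, 2, 3], [6, 4, 2]))  # ~-1.0: perfectly linear, opposite sign
```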
+ +**45. Parameter estimation** + +⟶母数推定 + +
+ +**46. Definitions** + +⟶定義 + +
+ +**47. Random sample ― A random sample is a collection of n random variables X1,...,Xn that are independent and identically distributed with X.** + +⟶確率標本(Random sample) - 確率標本とはXに従う独立同分布のn個の確率変数X1,...,Xnの集合です。 + +
+ +**48. Estimator ― An estimator is a function of the data that is used to infer the value of an unknown parameter in a statistical model.** + +⟶推定量(Estimator) - 推定量とは統計モデルにおける未知のパラメータの値を推定するのに用いられるデータの関数です。 + +


**49. Bias ― The bias of an estimator ^θ is defined as being the difference between the expected value of the distribution of ^θ and the true value, i.e.:**

⟶偏り(Bias) - 推定量^θの偏りは、^θの分布の期待値と真の値との差として定義されます。すなわち:

<br>
+ +**50. Remark: an estimator is said to be unbiased when we have E[^θ]=θ.** + +⟶注釈: E[^θ]=θが成り立つとき、推定量は不偏であるといいます。 + +
+ +**51. Estimating the mean** + +⟶平均の推定 + +
+ +**52. Sample mean ― The sample mean of a random sample is used to estimate the true mean μ of a distribution, is often noted ¯¯¯¯¯X and is defined as follows:** + +⟶標本平均(Sample mean) - 確率標本の標本平均は、ある分布の真の平均μを推定するのに用いられ、よく¯¯¯¯¯Xと表記され、次のように定義されます: + +
+ +**53. Remark: the sample mean is unbiased, i.e E[¯¯¯¯¯X]=μ.** + +⟶注釈: 標本平均は不偏です。すなわちE[¯¯¯¯¯X]=μが成り立ちます。 + +
+ +**54. Central Limit Theorem ― Let us have a random sample X1,...,Xn following a given distribution with mean μ and variance σ2, then we have:** + +⟶中心極限定理 - 確率標本X1,...,Xnが平均μと分散σ2を持つある分布に従うとすると、次が成り立ちます: + +
+ +**55. Estimating the variance** + +⟶分散の推定 + +
+ +**56. Sample variance ― The sample variance of a random sample is used to estimate the true variance σ2 of a distribution, is often noted s2 or ^σ2 and is defined as follows:** + +⟶標本分散 - 確率標本の標本分散は、ある分布の真の分散σ2を推定するのに用いられ、よくs2または^σ2と表記され、次のように定義されます: + +
+ +**57. Remark: the sample variance is unbiased, i.e E[s2]=σ2.** + +⟶注釈: 標本分散は不偏です。すなわちE[s2]=σ2が成り立ちます。 + +
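A sketch of both estimators on a small made-up data set; note the n−1 divisor that makes the sample variance unbiased:

```python
def sample_mean(xs):
    """Unbiased estimator of the true mean."""
    return sum(xs) / len(xs)

def sample_variance(xs):
    """Unbiased estimator of the true variance: divide by n - 1, not n."""
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sample_mean(data), sample_variance(data))  # 5.0 and 32/7, roughly 4.571
```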
+ +**58. Chi-Squared relation with sample variance ― Let s2 be the sample variance of a random sample. We have:** + +⟶標本分散とカイ二乗分布との関係 - 確率標本の標本分散をs2とすると、次が成り立ちます: + +
+ +**59. [Introduction, Sample space, Event, Permutation]** + +⟶[導入、標本空間、事象、順列] + +
+ +**60. [Conditional probability, Bayes' rule, Independence]** + +⟶[条件付き確率、ベイズの定理、独立] + +
+ +**61. [Random variables, Definitions, Expectation, Variance]** + +⟶[確率変数、定義、期待値、分散] + +
+ +**62. [Probability distributions, Chebyshev's inequality, Main distributions]** + +⟶[確率分布、チェビシェフの不等式、主な分布] + +
+ +**63. [Jointly distributed random variables, Density, Covariance, Correlation]** + +⟶[同時分布の確率変数、密度、共分散、相関係数] + +
+ +**64. [Parameter estimation, Mean, Variance]** + +⟶[母数推定、平均、分散] diff --git a/ja/cs-229-supervised-learning.md b/ja/cs-229-supervised-learning.md new file mode 100644 index 000000000..71f63afdd --- /dev/null +++ b/ja/cs-229-supervised-learning.md @@ -0,0 +1,567 @@ +**1. Supervised Learning cheatsheet** + +⟶教師あり学習チートシート + +
+ +**2. Introduction to Supervised Learning** + +⟶教師あり学習入門 + +
+ +**3. Given a set of data points {x(1),...,x(m)} associated to a set of outcomes {y(1),...,y(m)}, we want to build a classifier that learns how to predict y from x.** + +⟶入力が{x(1),...,x(m)}、出力が{y(1),...,y(m)}であるとき、xからyを予測する分類器を構築したい。 + +
+ +**4. Type of prediction ― The different types of predictive models are summed up in the table below:** + +⟶予測の種類 ― 様々な種類の予測モデルは下表に集約される: + +


**5. [Regression, Classifier, Outcome, Examples]**

⟶[回帰、分類、出力、例]

<br>


**6. [Continuous, Class, Linear regression, Logistic regression, SVM, Naive Bayes]**

⟶[連続値、クラス、線形回帰、ロジスティック回帰、SVM、ナイーブベイズ]

<br>
+ +**7. Type of model ― The different models are summed up in the table below:** + +⟶モデルの種類 ― 様々な種類のモデルは下表に集約される: + +


**8. [Discriminative model, Generative model, Goal, What's learned, Illustration, Examples]**

⟶[判別モデル、生成モデル、目的、学習対象、イメージ図、例]

<br>


**9. [Directly estimate P(y|x), Estimate P(x|y) to then deduce P(y|x), Decision boundary, Probability distributions of the data, Regressions, SVMs, GDA, Naive Bayes]**

⟶[P(y|x)の直接推定、後にP(y|x)を推測するためのP(x|y)の推定、決定境界、データの確率分布、回帰、SVM、GDA、ナイーブベイズ]

<br>
+ +**10. Notations and general concepts** + +⟶記法と全般的な概念 + +
+ +**11. Hypothesis ― The hypothesis is noted hθ and is the model that we choose. For a given input data x(i) the model prediction output is hθ(x(i)).** + +⟶仮説 ― 仮説はhθと表され、選択されたモデルのことである。与えられた入力x(i)に対して、モデルの予測結果はhθ(x(i))である。 + +
+ +**12. Loss function ― A loss function is a function L:(z,y)∈R×Y⟼L(z,y)∈R that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:** + +⟶損失関数 ― 損失関数とは(z,y)∈R×Y⟼L(z,y)∈Rを満たす関数Lで、予測値zとそれに対応する正解データ値yを入力とし、その誤差を出力するものである。一般的な損失関数は次表に集約される: + +
+ +**13. [Least squared error, Logistic loss, Hinge loss, Cross-entropy]** + +⟶最小2乗誤差、ロジスティック損失、ヒンジ損失、交差エントロピー + +
+ +**14. [Linear regression, Logistic regression, SVM, Neural Network]** + +⟶線形回帰、ロジスティック回帰、SVM、ニューラルネットワーク + +
+ +**15. Cost function ― The cost function J is commonly used to assess the performance of a model, and is defined with the loss function L as follows:** + +⟶コスト関数 ― コスト関数Jは一般的にモデルの性能を評価するために用いられ、損失関数をLとして次のように定義される: + +
+ +**16. Gradient descent ― By noting α∈R the learning rate, the update rule for gradient descent is expressed with the learning rate and the cost function J as follows:** + +⟶勾配降下法 ― 学習率をα∈Rとし、勾配降下法における更新ルールは学習率とコスト関数Jを用いて次のように表される: + +
+
+**17. Remark: Stochastic gradient descent (SGD) is updating the parameter based on each training example, and batch gradient descent is on a batch of training examples.**
+
+⟶備考:確率的勾配降下法(SGD)は各学習標本毎にパラメータを更新し、バッチ勾配降下法は学習標本のバッチ毎に更新する。
+
+<br>
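The update rule above can be sketched in Python for a least-squares cost (an illustrative example only: the toy data, learning rate α and iteration count are made-up choices, not part of the cheatsheet):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, n_iters=1000):
    """Batch gradient descent on the mean-squared-error cost J(theta).

    X is assumed to already contain a bias column; this is an
    illustrative sketch, not reference code.
    """
    theta = np.zeros(X.shape[1])
    m = len(y)
    for _ in range(n_iters):
        grad = (X.T @ (X @ theta - y)) / m  # gradient of J at theta
        theta -= alpha * grad               # theta <- theta - alpha * grad
    return theta

# noise-free data from y = 2x + 1, so theta should approach [1, 2]
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
theta = gradient_descent(X, y)
```

Replacing the full-batch gradient with the gradient of a single example inside the loop would turn this into the stochastic variant mentioned in the remark.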
+ +**18. Likelihood ― The likelihood of a model L(θ) given parameters θ is used to find the optimal parameters θ through maximizing the likelihood. In practice, we use the log-likelihood ℓ(θ)=log(L(θ)) which is easier to optimize. We have:** + +⟶尤度 ― パラメータをθとすると、あるモデルの尤度L(θ)を最大にすることにより最適なパラメータを求められる。実際には、最適化しやすい対数尤度ℓ(θ)=log(L(θ))を用いる。すなわち: + +
+ +**19. Newton's algorithm ― The Newton's algorithm is a numerical method that finds θ such that ℓ′(θ)=0. Its update rule is as follows:** + +⟶ニュートン法 ― ニュートン法とはℓ′(θ)=0となるθを求める数値法である。その更新ルールは次の通りである: + +
+ +**20. Remark: the multidimensional generalization, also known as the Newton-Raphson method, has the following update rule:** + +⟶備考:多次元一般化またはニュートン-ラフソン法の更新ルールは次の通りである: + +
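The update θ←θ−ℓ′(θ)/ℓ″(θ) can be sketched as follows (the concave objective ℓ(θ)=−(θ−3)² and the starting point are made-up illustrations):

```python
def newton(dl, d2l, theta0, n_iters=25):
    """Newton's rule theta <- theta - l'(theta)/l''(theta) to solve l'(theta)=0.

    dl and d2l are the first and second derivatives of the log-likelihood l.
    """
    theta = theta0
    for _ in range(n_iters):
        theta -= dl(theta) / d2l(theta)
    return theta

# l(theta) = -(theta - 3)^2, so l'(theta) = -2(theta - 3) vanishes at theta = 3
theta_hat = newton(lambda t: -2.0 * (t - 3.0), lambda t: -2.0, theta0=0.0)
```

Because ℓ′ is linear here, the iteration lands on the optimum in a single step; the multidimensional Newton-Raphson version replaces the division by multiplication with the inverse Hessian.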
+ +**21. Linear models** + +⟶線形モデル + +
+ +**22. Linear regression** + +⟶線形回帰 + +
+ +**23. We assume here that y|x;θ∼N(μ,σ2)** + +⟶ここでy|x;θ∼N(μ,σ2)であるとする。 + +
+
+**24. Normal equations ― By noting X the design matrix, the value of θ that minimizes the cost function is a closed-form solution such that:**
+
+⟶正規方程式 ― Xを計画行列とすると、コスト関数を最小化するθの値は次のような閉形式の解である:
+
+<br>
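The closed-form solution θ=(XᵀX)⁻¹Xᵀy can be checked numerically; a minimal sketch with made-up data, using `np.linalg.solve` on the normal equations rather than an explicit matrix inverse for numerical stability:

```python
import numpy as np

# Design matrix with a bias column, and targets from y = 2x + 1 exactly
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])

# Solve (X^T X) theta = X^T y  instead of forming (X^T X)^{-1}
theta = np.linalg.solve(X.T @ X, X.T @ y)
```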
+
+**25. LMS algorithm ― By noting α the learning rate, the update rule of the Least Mean Squares (LMS) algorithm for a training set of m data points, which is also known as the Widrow-Hoff learning rule, is as follows:**
+
+⟶LMSアルゴリズム ― 学習率をαとすると、m個のデータ点からなる学習データに対する最小平均2乗(LMS)アルゴリズムの更新ルールは、ウィドロウ-ホフの学習規則としても知られており、次の通りである:
+
+<br>
+ +**26. Remark: the update rule is a particular case of the gradient ascent.** + +⟶備考:この更新ルールは勾配上昇法の特殊な例である。 + +
+ +**27. LWR ― Locally Weighted Regression, also known as LWR, is a variant of linear regression that weights each training example in its cost function by w(i)(x), which is defined with parameter τ∈R as:** + +⟶局所重み付き回帰 ― 局所重み付き回帰は、LWRとも呼ばれ、線形回帰の派生形である。パラメータをτ∈Rとして次のように定義されるw(i)(x)により、個々の学習標本をそのコスト関数において重み付けする: + +
+ +**28. Classification and logistic regression** + +⟶分類とロジスティック回帰 + +
+ +**29. Sigmoid function ― The sigmoid function g, also known as the logistic function, is defined as follows:** + +⟶シグモイド関数 ― シグモイド関数gは、ロジスティック関数とも呼ばれ、次のように定義される: + +
+ +**30. Logistic regression ― We assume here that y|x;θ∼Bernoulli(ϕ). We have the following form:** + +⟶ロジスティック回帰 ― ここでy|x;θ∼Bernoulli(ϕ)であるとすると、次の形式を得る: + +
+ +**31. Remark: there is no closed form solution for the case of logistic regressions.** + +⟶備考:ロジスティック回帰については閉形式の解は存在しない。 + +
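The logistic hypothesis hθ(x)=g(θᵀx) can be sketched directly (θ and x below are made-up values for illustration):

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

# h_theta(x) = g(theta^T x) is interpreted as P(y=1 | x; theta)
theta = np.array([0.0, 1.0])
x = np.array([1.0, 0.0])   # bias term plus one feature
p = sigmoid(theta @ x)     # theta^T x = 0 here, so p = g(0) = 0.5
```

Since no closed form exists, θ would in practice be fitted by an iterative method such as the gradient or Newton updates described earlier.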
+ +**32. Softmax regression ― A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. By convention, we set θK=0, which makes the Bernoulli parameter ϕi of each class i equal to:** + +⟶ソフトマックス回帰 ― ソフトマックス回帰は、多クラス分類ロジスティック回帰とも呼ばれ、3個以上の結果クラスがある場合にロジスティック回帰を一般化するためのものである。慣習的に、θK=0とすると、各クラスiのベルヌーイ分布のパラメータϕiは次と等しくなる: + +
+ +**33. Generalized Linear Models** + +⟶一般化線形モデル + +
+ +**34. Exponential family ― A class of distributions is said to be in the exponential family if it can be written in terms of a natural parameter, also called the canonical parameter or link function, η, a sufficient statistic T(y) and a log-partition function a(η) as follows:** + +⟶指数分布族 ― ある分布の集合は指数分布族と呼ばれ、正準パラメータまたはリンク関数とも呼ばれる自然パラメータη、十分統計量T(y)及び対数分配関数a(η)を用いて、次のように表される: + +
+ +**35. Remark: we will often have T(y)=y. Also, exp(−a(η)) can be seen as a normalization parameter that will make sure that the probabilities sum to one.** + +⟶備考:T(y)=yとすることが多い。また、exp(−a(η))は確率の合計が1になることを保証する正規化定数と見なせる。 + +
+ +**36. Here are the most common exponential distributions summed up in the following table:** + +⟶最も一般的な指数分布族は下表に集約される: + +
+ +**37. [Distribution, Bernoulli, Gaussian, Poisson, Geometric]** + +⟶分布、ベルヌーイ、ガウス、ポワソン、幾何 + +
+ +**38. Assumptions of GLMs ― Generalized Linear Models (GLM) aim at predicting a random variable y as a function of x∈Rn+1 and rely on the following 3 assumptions:** + +⟶GLMの仮定 ― 一般化線形モデル(GLM)はランダムな変数yをx∈Rn+1の関数として予測することを目的とし、次の3つの仮定に依拠する: + +
+ +**39. Remark: ordinary least squares and logistic regression are special cases of generalized linear models.** + +⟶備考:最小2乗回帰とロジスティック回帰は一般化線形モデルの特殊な例である。 + +
+ +**40. Support Vector Machines** + +⟶サポートベクターマシン + +
+ +**41: The goal of support vector machines is to find the line that maximizes the minimum distance to the line.** + +⟶サポートベクターマシンの目的は、データ点からの最短距離が最大となる境界線を求めることである。 + +
+ +**42: Optimal margin classifier ― The optimal margin classifier h is such that:** + +⟶最適マージン分類器 ― 最適マージン分類器hは次のようなものである: + +
+ +**43: where (w,b)∈Rn×R is the solution of the following optimization problem:** + +⟶ここで、(w,b)∈Rn×Rは次の最適化問題の解である: + +
+ +**44. such that** + +⟶ただし + +
+ +**45. support vectors** + +⟶サポートベクター + +
+ +**46. Remark: the line is defined as wTx−b=0.** + +⟶備考:直線はwTx−b=0と定義する。 + +
+ +**47. Hinge loss ― The hinge loss is used in the setting of SVMs and is defined as follows:** + +⟶ヒンジ損失 ― ヒンジ損失はSVMの設定に用いられ、次のように定義される: + +
+ +**48. Kernel ― Given a feature mapping ϕ, we define the kernel K to be defined as:** + +⟶カーネル ― 特徴写像をϕとすると、カーネルKは次のように定義される: + +
+
+**49. In practice, the kernel K defined by K(x,z)=exp(−||x−z||²/(2σ²)) is called the Gaussian kernel and is commonly used.**
+
+⟶実際には、K(x,z)=exp(−||x−z||²/(2σ²))と定義され、ガウシアンカーネルと呼ばれるカーネルKがよく使われる。
+
+<br>
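A minimal sketch of the Gaussian kernel K(x,z)=exp(−‖x−z‖²/(2σ²)); the bandwidth σ=1 is an arbitrary illustrative choice:

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """K(x, z) = exp(-||x - z||^2 / (2 * sigma^2))."""
    diff = np.asarray(x) - np.asarray(z)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

k_same = gaussian_kernel([1.0, 2.0], [1.0, 2.0])  # identical points give 1.0
k_far = gaussian_kernel([0.0], [1.0])             # decays toward 0 with distance
```

Only such kernel values K(x,z) are needed by the "kernel trick" mentioned below; the feature map ϕ is never computed explicitly.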
+ +**50. [Non-linear separability, Use of a kernel mapping, Decision boundary in the original space]** + +⟶非線形分離問題、カーネル写像の適用、元の空間における決定境界 + +
+ +**51. Remark: we say that we use the "kernel trick" to compute the cost function using the kernel because we actually don't need to know the explicit mapping ϕ, which is often very complicated. Instead, only the values K(x,z) are needed.** + +⟶備考:カーネルを用いてコスト関数を計算する「カーネルトリック」を用いる。なぜなら、明示的な写像ϕを実際には知る必要はないし、それはしばしば非常に複雑になってしまうからである。代わりに、K(x,z)の値のみが必要である。 + +
+ +**52. Lagrangian ― We define the Lagrangian L(w,b) as follows:** + +⟶ラグランジアン ― ラグランジアンL(w,b)を次のように定義する: + +
+ +**53. Remark: the coefficients βi are called the Lagrange multipliers.** + +⟶備考:係数βiはラグランジュ乗数と呼ばれる。 + +
+ +**54. Generative Learning** + +⟶生成学習 + +
+ +**55. A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.** + +⟶生成モデルは、P(x|y)を推定することによりデータがどのように生成されるのかを学習しようとする。それはベイズの定理を用いてP(y|x)を推定するために使える。 + +
+ +**56. Gaussian Discriminant Analysis** + +⟶ガウシアン判別分析 + +
+ +**57. Setting ― The Gaussian Discriminant Analysis assumes that y and x|y=0 and x|y=1 are such that:** + +⟶前提条件 ― ガウシアン判別分析はyとx|y=0とx|y=1は次のようであることを前提とする: + +
+ +**58. Estimation ― The following table sums up the estimates that we find when maximizing the likelihood:** + +⟶推定 ― 尤度を最大にすると得られる推定量は下表に集約される: + +
+ +**59. Naive Bayes** + +⟶ナイーブベイズ + +
+ +**60. Assumption ― The Naive Bayes model supposes that the features of each data point are all independent:** + +⟶仮定 ― ナイーブベイズモデルは、個々のデータ点の特徴量が全て独立であると仮定する: + +
+ +**61. Solutions ― Maximizing the log-likelihood gives the following solutions, with k∈{0,1},l∈[[1,L]]** + +⟶解 ― 対数尤度を最大にすると次の解を得る。ただし、k∈{0,1},l∈[[1,L]]とする。 + +
+ +**62. Remark: Naive Bayes is widely used for text classification and spam detection.** + +⟶備考:ナイーブベイズはテキスト分類やスパム検知に幅広く使われている。 + +
+ +**63. Tree-based and ensemble methods** + +⟶決定木とアンサンブル学習 + +
+ +**64. These methods can be used for both regression and classification problems.** + +⟶これらの方法は回帰と分類問題の両方に使える。 + +
+ +**65. CART ― Classification and Regression Trees (CART), commonly known as decision trees, can be represented as binary trees. They have the advantage to be very interpretable.** + +⟶CART ― 分類・回帰木 (CART)は、一般には決定木として知られ、二分木として表される。非常に解釈しやすいという利点がある。 + +
+ +**66. Random forest ― It is a tree-based technique that uses a high number of decision trees built out of randomly selected sets of features. Contrary to the simple decision tree, it is highly uninterpretable but its generally good performance makes it a popular algorithm.** + +⟶ランダムフォレスト ― これは決定木をベースにしたもので、ランダムに選択された特徴量の集合から構築された多数の決定木を用いる。単純な決定木と異なり、非常に解釈しにくいが、一般的に良い性能が出るのでよく使われるアルゴリズムである。 + +
+ +**67. Remark: random forests are a type of ensemble methods.** + +⟶備考:ランダムフォレストはアンサンブル学習の一種である。 + +
+ +**68. Boosting ― The idea of boosting methods is to combine several weak learners to form a stronger one. The main ones are summed up in the table below:** + +⟶ブースティング ― ブースティングの考え方は、複数の弱い学習器を束ねることで1つのより強い学習器を作るというものである。主なものは次の表に集約される: + +
+ +**69. [Adaptive boosting, Gradient boosting]** + +⟶[適応的ブースティング、勾配ブースティング] + +
+ +**70. High weights are put on errors to improve at the next boosting step** + +⟶次のブースティングステップにて改善すべき誤分類に大きい重みが課される。 + +
+ +**71. Weak learners trained on remaining errors** + +⟶残っている誤分類を弱い学習器が学習する。 + +
+ +**72. Other non-parametric approaches** + +⟶他のノンパラメトリックな手法 + +
+ +**73. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** + +⟶k近傍法 ― k近傍法は、一般的にk-NNとして知られ、あるデータ点の応答はそのk個の最近傍点の性質によって決まるノンパラメトリックな手法である。分類と回帰の両方に用いることができる。 + +
+ +**74. Remark: The higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.** + +⟶備考:パラメータkが大きくなるほど、バイアスが大きくなる。パラメータkが小さくなるほど、分散が大きくなる。 + +
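The k-NN classification rule can be sketched as follows (Euclidean distance and majority vote; the toy training set and k=3 are made-up choices):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(np.asarray(X_train) - np.asarray(x), axis=1)
    nearest = np.argsort(dists)[:k]            # indices of the k closest points
    votes = Counter(np.asarray(y_train)[nearest].tolist())
    return votes.most_common(1)[0][0]          # most frequent label wins

X_train = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]]
y_train = [0, 0, 1, 1]
label = knn_predict(X_train, y_train, [0.05, 0.05], k=3)
```

Raising k averages over more neighbors (higher bias), while k=1 follows individual points closely (higher variance), as the remark above notes.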
+ +**75. Learning Theory** + +⟶学習理論 + +
+ +**76. Union bound ― Let A1,...,Ak be k events. We have:** + +⟶和集合上界 ― A1,...,Akというk個の事象があるとき、次が成り立つ: + +
+ +**77. Hoeffding inequality ― Let Z1,..,Zm be m iid variables drawn from a Bernoulli distribution of parameter ϕ. Let ˆϕ be their sample mean and γ>0 fixed. We have:** + +⟶ヘフディング不等式 ― パラメータϕのベルヌーイ分布から得られるm個の独立同分布変数をZ1,..,Zmとする。その標本平均をˆϕとし、γは正の定数であるとすると、次が成り立つ: + +
+ +**78. Remark: this inequality is also known as the Chernoff bound.** + +⟶備考:この不等式はチェルノフ上界としても知られる。 + +
+ +**79. Training error ― For a given classifier h, we define the training error ˆϵ(h), also known as the empirical risk or empirical error, to be as follows:** + +⟶学習誤差 ― ある分類器hに対して、学習誤差、あるいは経験損失か経験誤差としても知られるˆϵ(h)を次のように定義する: + +
+
+**80. Probably Approximately Correct (PAC) ― PAC is a framework under which numerous results on learning theory were proved, and has the following set of assumptions:**
+
+⟶確率的に近似的に正しい (PAC) ― PACとは、その下で学習理論に関する様々な業績が証明されてきたフレームワークであり、次の前提がある:
+
+<br>
+
+**81: the training and testing sets follow the same distribution**
+
+⟶学習データとテストデータは同じ分布に従う。
+
+<br>
+ +**82. the training examples are drawn independently** + +⟶学習標本は独立に取得される。 + +
+ +**83. Shattering ― Given a set S={x(1),...,x(d)}, and a set of classifiers H, we say that H shatters S if for any set of labels {y(1),...,y(d)}, we have:** + +⟶細分化 ― 集合S={x(1),...,x(d)}と分類器の集合Hがあるとき、もし任意のラベル{y(1),...,y(d)}の集合に対して次が成り立つとき、HはSを細分化する: + +
+ +**84. Upper bound theorem ― Let H be a finite hypothesis class such that |H|=k and let δ and the sample size m be fixed. Then, with probability of at least 1−δ, we have:** + +⟶上界定理 ― Hを|H|=kで有限の仮説集合とし、δとサンプルサイズmは定数とする。そのとき、少なくとも1-δの確率で次が成り立つ: + +
+ +**85. VC dimension ― The Vapnik-Chervonenkis (VC) dimension of a given infinite hypothesis class H, noted VC(H) is the size of the largest set that is shattered by H.** + +⟶VC次元 ― ある仮説集合Hのヴァプニク・チェルヴォーネンキス次元 (VC)は、VC(H)と表記され、それはHによって細分化される最大の集合のサイズである。 + +
+ +**86. Remark: the VC dimension of H={set of linear classifiers in 2 dimensions} is 3.** + +⟶備考:2次元の線形分類器の集合であるHのVC次元は3である。 + +
+ +**87. Theorem (Vapnik) ― Let H be given, with VC(H)=d and m the number of training examples. With probability at least 1−δ, we have:** + +⟶定理(ヴァプニク) ― あるHについてVC(H)=dであり、mを学習標本の数とする。少なくとも1−δの確率で次が成り立つ: + +
+ +**88. [Introduction, Type of prediction, Type of model]** + +⟶[導入、予測の種類、モデルの種類] + +
+ +**89. [Notations and general concepts, loss function, gradient descent, likelihood]** + +⟶[記法と全般的な概念、損失関数、勾配降下、尤度] + +
+
+**90. [Linear models, linear regression, logistic regression, generalized linear models]**
+
+⟶[線形モデル、線形回帰、ロジスティック回帰、一般化線形モデル]
+
+<br>
+
+**91. [Support vector machines, Optimal margin classifier, Hinge loss, Kernel]**
+
+⟶[サポートベクターマシン、最適マージン分類器、ヒンジ損失、カーネル]
+
+<br>
+
+**92. [Generative learning, Gaussian Discriminant Analysis, Naive Bayes]**
+
+⟶[生成学習、ガウシアン判別分析、ナイーブベイズ]
+
+<br>
+
+**93. [Trees and ensemble methods, CART, Random forest, Boosting]**
+
+⟶[ツリーとアンサンブル学習、CART、ランダムフォレスト、ブースティング]
+
+<br>
+ +**94. [Other methods, k-NN]** + +⟶[他の手法、k近傍法] + +
+ +**95. [Learning theory, Hoeffding inequality, PAC, VC dimension]** + +⟶[学習理論、ヘフディング不等式、PAC、VC次元] diff --git a/ja/cs-229-unsupervised-learning.md b/ja/cs-229-unsupervised-learning.md new file mode 100644 index 000000000..cc8111e7c --- /dev/null +++ b/ja/cs-229-unsupervised-learning.md @@ -0,0 +1,339 @@ +**1. Unsupervised Learning cheatsheet** + +⟶教師なし学習チートシート + +
+ +**2. Introduction to Unsupervised Learning** + +⟶教師なし学習の概要 + +
+ +**3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** + +⟶モチベーション - 教師なし学習の目的はラベルのないデータ{x(1),...,x(m)}に隠されたパターンを探すことです。 + +
+ +**4. Jensen's inequality ― Let f be a convex function and X a random variable. We have the following inequality:** + +⟶イェンセンの不等式 - fを凸関数、Xを確率変数とすると、次の不等式が成り立ちます: + +
+ +**5. Clustering** + +⟶クラスタリング + +
+ +**6. Expectation-Maximization** + +⟶期待値最大化 + +
+ +**7. Latent variables ― Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Here are the most common settings where there are latent variables:** + +⟶潜在変数 - 潜在変数は推定問題を困難にする隠れた/観測されていない変数であり、多くの場合zで示されます。潜在変数がある最も一般的な設定は次のとおりです: + +
+ +**8. [Setting, Latent variable z, Comments]** + +⟶[設定、潜在変数z、コメント] + +
+ +**9. [Mixture of k Gaussians, Factor analysis]** + +⟶[k個のガウス分布の混合、因子分析] + +
+ +**10. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:** + +⟶アルゴリズム - EMアルゴリズムは次のように尤度の下限の構築(E-ステップ)と、その下限の最適化(M-ステップ)を繰り返し行うことによる最尤推定によりパラメーターθを推定する効率的な方法を提供します: + +
+ +**11. E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:** + +⟶E-ステップ: 各データポイントx(i)が特定クラスターz(i)に由来する事後確率Qi(z(i))を次のように評価します: + +
+ +**12. M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:** + +⟶M-ステップ: 事後確率Qi(z(i))をデータポイントx(i)のクラスター固有の重みとして使い、次のように各クラスターモデルを個別に再推定します: + +
+ +**13. [Gaussians initialization, Expectation step, Maximization step, Convergence]** + +⟶[ガウス分布初期化、期待値ステップ、最大化ステップ、収束] + +
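The E-step/M-step loop above can be sketched for a mixture of two 1-D Gaussians. Everything here (the data, the initialization and the iteration count) is a made-up illustration, not reference code:

```python
import math

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]          # two well-separated groups
phi, mu, sigma2 = 0.5, [0.0, 6.0], [1.0, 1.0]  # mixing weight, means, variances

def gauss(x, m, v):
    """Density of N(m, v) at x."""
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

for _ in range(30):
    # E-step: posterior Q_i(z(i)=1) that each point came from component 1
    q = []
    for x in data:
        p1 = phi * gauss(x, mu[1], sigma2[1])
        p0 = (1 - phi) * gauss(x, mu[0], sigma2[0])
        q.append(p1 / (p0 + p1))
    # M-step: re-estimate phi, mu, sigma2 with the posteriors as weights
    phi = sum(q) / len(data)
    for j, w in ((0, [1 - qi for qi in q]), (1, q)):
        s = sum(w)
        mu[j] = sum(wi * x for wi, x in zip(w, data)) / s
        sigma2[j] = sum(wi * (x - mu[j]) ** 2 for wi, x in zip(w, data)) / s
```

On this toy data the posteriors quickly become near 0/1, and the means converge to the two group centers 1.0 and 5.0.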
+ +**14. k-means clustering** + +⟶k平均法 + +
+ +**15. We note c(i) the cluster of data point i and μj the center of cluster j.** + +⟶データポイントiのクラスタをc(i)、クラスタjの中心をμjと表記します。 + +
+
+**16. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:**
+
+⟶アルゴリズム ― クラスターの重心μ1,μ2,...,μk∈Rnをランダムに初期化した後、k-meansアルゴリズムは収束するまで次のステップを繰り返します:
+
+<br>
+ +**17. [Means initialization, Cluster assignment, Means update, Convergence]** + +⟶ [平均の初期化、クラスター割り当て、平均の更新、収束] + +
+ +**18. Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:** + +⟶ひずみ関数 - アルゴリズムが収束するかどうかを確認するため、次のように定義されたひずみ関数を参照します: + +
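The assignment/update loop can be sketched as follows. For determinism this sketch initializes the centroids from the first k points instead of randomly, and the toy data is a made-up example:

```python
import numpy as np

def kmeans(X, k, n_iters=20):
    """Plain k-means sketch: initialize centroids, then alternate
    cluster assignment and means update for a fixed number of steps."""
    mu = X[:k].astype(float)  # deterministic init for illustration
    for _ in range(n_iters):
        # cluster assignment: c(i) = argmin_j ||x(i) - mu_j||
        c = np.argmin(np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2), axis=1)
        # means update: mu_j = mean of the points currently assigned to j
        mu = np.array([X[c == j].mean(axis=0) for j in range(k)])
    return c, mu

X = np.array([[0.0, 0.0], [5.0, 5.0], [0.1, 0.0],
              [5.1, 5.0], [0.0, 0.1], [5.0, 5.1]])
c, mu = kmeans(X, k=2)
```

Monitoring the distortion function J across iterations (it never increases) is the usual convergence check.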
+ +**19. Hierarchical clustering** + +⟶ 階層的クラスタリング + +
+ +**20. Algorithm ― It is a clustering algorithm with an agglomerative hierarchical approach that build nested clusters in a successive manner.** + +⟶アルゴリズム - これは入れ子になったクラスタを逐次的に構築する凝集階層アプローチによるクラスタリングアルゴリズムです。 + +
+ +**21. Types ― There are different sorts of hierarchical clustering algorithms that aims at optimizing different objective functions, which is summed up in the table below:** + +⟶ 種類 ― 様々な目的関数を最適化するための様々な種類の階層クラスタリングアルゴリズムが以下の表にまとめられています。 + +
+ +**22. [Ward linkage, Average linkage, Complete linkage]** + +⟶ [ウォードリンケージ、平均リンケージ、完全リンケージ] + +
+ +**23. [Minimize within cluster distance, Minimize average distance between cluster pairs, Minimize maximum distance of between cluster pairs]** + +⟶ [クラスター内の距離最小化、クラスターペア間の平均距離の最小化、クラスターペア間の最大距離の最小化] + +
+ +**24. Clustering assessment metrics** + +⟶ クラスタリング評価指標 + +
+ +**25. In an unsupervised learning setting, it is often hard to assess the performance of a model since we don't have the ground truth labels as was the case in the supervised learning setting.** + +⟶ 教師なし学習では、教師あり学習の場合のような正解ラベルがないため、モデルの性能を評価することが困難な場合が多くあります。 + +
+ +**26. Silhouette coefficient ― By noting a and b the mean distance between a sample and all other points in the same class, and between a sample and all other points in the next nearest cluster, the silhouette coefficient s for a single sample is defined as follows:** + +⟶ シルエット係数 ― ある1つのサンプルと同じクラス内のその他全ての点との平均距離をa、そのサンプルから最も近いクラスタ内の全ての点との平均距離をbと表記すると、そのサンプルのシルエット係数sは次のように定義されます: + +
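The coefficient s=(b−a)/max(a,b) for a single sample can be sketched directly (the points below are a made-up example; a well-separated sample should score close to 1):

```python
import numpy as np

def silhouette_sample(x, same_cluster, nearest_cluster):
    """s = (b - a) / max(a, b) for one sample, where a is the mean distance
    to the sample's own cluster and b the mean distance to the next
    nearest cluster."""
    a = np.mean([np.linalg.norm(x - p) for p in same_cluster])
    b = np.mean([np.linalg.norm(x - p) for p in nearest_cluster])
    return (b - a) / max(a, b)

x = np.array([0.0, 0.0])
same = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
other = [np.array([4.0, 4.0]), np.array([5.0, 5.0])]
s = silhouette_sample(x, same, other)  # close to 1 for this tight cluster
```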
+ +**27. Calinski-Harabaz index ― By noting k the number of clusters, Bk and Wk the between and within-clustering dispersion matrices respectively defined as** + +⟶ Calinski-Harabazインデックス ― クラスタの数をkと表記すると、クラスタ間およびクラスタ内の分散行列であるBkおよびWkはそれぞれ以下のように定義されます。 + +
+ +**28. the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:** + +⟶ Calinski-Harabazインデックスs(k)はクラスタリングモデルが各クラスタをどの程度適切に定義しているかを示します。つまり、スコアが高いほど、各クラスタはより密で、十分に分離されています。それは次のように定義されます: + +
+ +**29. Dimension reduction** + +⟶ 次元削減 + +
+ +**30. Principal component analysis** + +⟶ 主成分分析 + +
+ +**31. It is a dimension reduction technique that finds the variance maximizing directions onto which to project the data.** + +⟶ これは分散を最大にするデータの射影方向を見つける次元削減手法です。 + +
+ +**32. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** + +⟶ 固有値、固有ベクトル - 行列 A∈Rn×nが与えられたとき、次の式で固有ベクトルと呼ばれるベクトルz∈Rn∖{0}が存在した場合に、λはAの固有値と呼ばれます。 + +
+ +**33. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** + +⟶ スペクトル定理 - A∈Rn×nとする。Aが対称のとき、Aは実直交行列U∈Rn×nを用いて対角化可能です。Λ=diag(λ1,...,λn)と表記することで、次の式を得ます。 + +
+
+**34. diagonal**
+
+⟶ 対角
+
+<br>
+ +**35. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** + +⟶ 注釈: 最大固有値に対応する固有ベクトルは行列Aの第1固有ベクトルと呼ばれる。 + +
+ +**36. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k dimensions by maximizing the variance of the data as follows:** + +⟶ アルゴリズム ― 主成分分析(PCA)の過程は、次のようにデータの分散を最大化することによりデータをk次元に射影する次元削減の技術です。 + +
+ +**37. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** + +⟶ ステップ1:平均が0で標準偏差が1となるようにデータを正規化します。 + +
+
+**38. Step 2: Compute Σ=(1/m)∑mi=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.**
+
+⟶ ステップ2:Σ=(1/m)∑mi=1x(i)x(i)T∈Rn×nを計算します。これは対称行列であり、実固有値を持ちます。
+
+<br>
+
+**39. Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.**
+
+⟶ ステップ3:Σの直交する主固有ベクトルu1,...,uk∈Rn、すなわちk個の最大固有値に対応する直交固有ベクトルを計算します。
+
+<br>
+ +**40. Step 4: Project the data on spanR(u1,...,uk).** + +⟶ ステップ4:データをspanR(u1,...,uk)に射影します。 + +
+ +**41. This procedure maximizes the variance among all k-dimensional spaces.** + +⟶ この過程は全てのk次元空間の間の分散を最大化します。 + +
+ +**42. [Data in feature space, Find principal components, Data in principal components space]** + +⟶ [特徴空間内のデータ、主成分の探索、主成分空間内のデータ] + +
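Steps 1–4 above can be sketched in a few lines of NumPy (the data and the choice k=1 are made-up illustrations; `eigh` is used because Σ is symmetric):

```python
import numpy as np

# Perfectly correlated 2-D data (second column = 2 * first column)
X = np.array([[1.0, 2.0], [3.0, 6.0], [5.0, 10.0], [7.0, 14.0]])

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # step 1: mean 0, std 1
Sigma = (Z.T @ Z) / len(Z)                 # step 2: covariance of normalized data
vals, vecs = np.linalg.eigh(Sigma)         # symmetric => real eigenpairs
U = vecs[:, np.argsort(vals)[::-1][:1]]    # step 3: top-k eigenvectors (k = 1)
proj = Z @ U                               # step 4: project onto span(u1,...,uk)
```

Here the two standardized features are identical, so the top eigenvalue is 2 and the single component captures all of the variance.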
+ +**43. Independent component analysis** + +⟶ 独立成分分析 + +
+ +**44. It is a technique meant to find the underlying generating sources.** + +⟶ 隠れた生成源を見つけることを意図した技術です。 + +
+ +**45. Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:** + +⟶ 仮定 ― 混合かつ非特異行列Aを通じて、データxはn次元の元となるベクトルs=(s1,...,sn)から次のように生成されると仮定します。ただしsiは独立でランダムな変数です: + +
+ +**46. The goal is to find the unmixing matrix W=A−1.** + +⟶ 非混合行列W=A−1を見つけることが目的です。 + +
+
+**47. Bell and Sejnowski ICA algorithm ― This algorithm finds the unmixing matrix W by following the steps below:**
+
+⟶ BellとSejnowskiのICAアルゴリズム ― このアルゴリズムは非混合行列Wを次のステップによって見つけます:
+
+<br>
+ +**48. Write the probability of x=As=W−1s as:** + +⟶ x=As=W−1sの確率を次のように表します: + +
+ +**49. Write the log likelihood given our training data {x(i),i∈[[1,m]]} and by noting g the sigmoid function as:** + +⟶ 学習データを{x(i),i∈[[1,m]]}、シグモイド関数をgとし、対数尤度を次のように表します: + +
+ +**50. Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:** + +⟶ そのため、確率的勾配上昇法の学習規則は、学習サンプルx(i)に対して次のようにWを更新するものです: + +
+ +**51. The Machine Learning cheatsheets are now available in [target language].** + +⟶ 機械学習チートシートは日本語で読めます。 + +
+ +**52. Original authors** + +⟶ 原著者 + +
+ +**53. Translated by X, Y and Z** + +⟶ X・Y・Z 訳 + +
+ +**54. Reviewed by X, Y and Z** + +⟶ X・Y・Z 校正 + +
+ +**55. [Introduction, Motivation, Jensen's inequality]** + +⟶ [導入、動機、イェンセンの不等式] + +
+ +**56. [Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]** + +⟶[クラスタリング、期待値最大化法、k-means、階層クラスタリング、指標] + +
+ +**57. [Dimension reduction, PCA, ICA]** + +⟶ [次元削減、PCA、ICA] diff --git a/ja/cs-230-convolutional-neural-networks.md b/ja/cs-230-convolutional-neural-networks.md new file mode 100644 index 000000000..178592414 --- /dev/null +++ b/ja/cs-230-convolutional-neural-networks.md @@ -0,0 +1,717 @@ +**Convolutional Neural Networks translation** + +
+ +**1. Convolutional Neural Networks cheatsheet** + +⟶ 畳み込みニューラルネットワーク チートシート + +
+ + +**2. CS 230 - Deep Learning** + +⟶ CS 230 - ディープラーニング + +
+ + +**3. [Overview, Architecture structure]** + +⟶ [概要、アーキテクチャ構造] + +
+ + +**4. [Types of layer, Convolution, Pooling, Fully connected]** + +⟶ [層の種類、畳み込み、プーリング、全結合] + +
+ + +**5. [Filter hyperparameters, Dimensions, Stride, Padding]** + +⟶ [フィルタハイパーパラメータ、次元、ストライド、パディング] + +
+ + +**6. [Tuning hyperparameters, Parameter compatibility, Model complexity, Receptive field]** + +⟶ [ハイパーパラメータの調整、パラメータの互換性、モデルの複雑さ、受容野] + +
+ + +**7. [Activation functions, Rectified Linear Unit, Softmax]** + +⟶ [活性化関数、正規化線形ユニット、ソフトマックス] + +
+ + +**8. [Object detection, Types of models, Detection, Intersection over Union, Non-max suppression, YOLO, R-CNN]** + +⟶ [物体検出、モデルの種類、検出、IoU、非極大抑制、YOLO、R-CNN] + +
+ + +**9. [Face verification/recognition, One shot learning, Siamese network, Triplet loss]** + +⟶ [顔認証/認識、One shot学習、シャムネットワーク、トリプレット損失] + +
+ + +**10. [Neural style transfer, Activation, Style matrix, Style/content cost function]** + +⟶ [ニューラルスタイル変換、活性化、スタイル行列、スタイル/コンテンツコスト関数] + +
+ + +**11. [Computational trick architectures, Generative Adversarial Net, ResNet, Inception Network]** + +⟶ [計算トリックアーキテクチャ、敵対的生成ネットワーク、ResNet、インセプションネットワーク] + +
+ + +**12. Overview** + +⟶ 概要 + +
+ + +**13. Architecture of a traditional CNN ― Convolutional neural networks, also known as CNNs, are a specific type of neural networks that are generally composed of the following layers:** + +⟶ 伝統的な畳み込みニューラルネットワークのアーキテクチャ - CNNとしても知られる畳み込みニューラルネットワークは一般的に次の層で構成される特定種類のニューラルネットワークです。 + +
+ + +**14. The convolution layer and the pooling layer can be fine-tuned with respect to hyperparameters that are described in the next sections.** + +⟶ 畳み込み層とプーリング層は次のセクションで説明されるハイパーパラメータに関してファインチューニングできます。 + +
+ + +**15. Types of layer** + +⟶ 層の種類 + +
+ + +**16. Convolution layer (CONV) ― The convolution layer (CONV) uses filters that perform convolution operations as it is scanning the input I with respect to its dimensions. Its hyperparameters include the filter size F and stride S. The resulting output O is called feature map or activation map.** + +⟶ 畳み込み層 (CONV) - 畳み込み層 (CONV)は入力Iを各次元に関して走査する時に、畳み込み演算を行うフィルタを使用します。畳み込み層のハイパーパラメータにはフィルタサイズFとストライドSが含まれます。結果出力Oは特徴マップまたは活性化マップと呼ばれます。 + +
+ + +**17. Remark: the convolution step can be generalized to the 1D and 3D cases as well.** + +⟶ 注: 畳み込みステップは1次元や3次元の場合にも一般化できます。 + +
+ + +**18. Pooling (POOL) ― The pooling layer (POOL) is a downsampling operation, typically applied after a convolution layer, which does some spatial invariance. In particular, max and average pooling are special kinds of pooling where the maximum and average value is taken, respectively.** + +⟶ プーリング (POOL) - プーリング層 (POOL)は位置不変性をもつ縮小操作で、通常は畳み込み層の後に適用されます。特に、最大及び平均プーリングはそれぞれ最大と平均値が取られる特別な種類のプーリングです。 + +
+ + +**19. [Type, Purpose, Illustration, Comments]** + +⟶ [種類、目的、図、コメント] + +
+ + +**20. [Max pooling, Average pooling, Each pooling operation selects the maximum value of the current view, Each pooling operation averages the values of the current view]** + +⟶ [最大プーリング、平均プーリング、各プーリング操作は現在のビューの中から最大値を選ぶ、各プーリング操作は現在のビューに含まれる値を平均する] + +
+ + +**21. [Preserves detected features, Most commonly used, Downsamples feature map, Used in LeNet]** + +⟶ [検出された特徴の保持、最も一般的な利用、特徴マップをダウンサンプリング、LeNetでの利用] + +
+ + +**22. Fully Connected (FC) ― The fully connected layer (FC) operates on a flattened input where each input is connected to all neurons. If present, FC layers are usually found towards the end of CNN architectures and can be used to optimize objectives such as class scores.** + +⟶ 全結合 (FC) - 全結合 (FC) 層は平坦化された入力に対して演算を行います。各入力は全てのニューロンに接続されています。FC層が存在する場合、通常CNNアーキテクチャの末尾に向かって見られ、クラススコアなどの目的を最適化するため利用できます。 + +
+ + +**23. Filter hyperparameters** + +⟶ フィルタハイパーパラメータ + +
+ + +**24. The convolution layer contains filters for which it is important to know the meaning behind its hyperparameters.** + +⟶ 畳み込み層にはハイパーパラメータの背後にある意味を知ることが重要なフィルタが含まれています。 + +
+ + +**25. Dimensions of a filter ― A filter of size F×F applied to an input containing C channels is a F×F×C volume that performs convolutions on an input of size I×I×C and produces an output feature map (also called activation map) of size O×O×1.** + +⟶ フィルタの次元 - C個のチャネルを含む入力に適用されるF×Fサイズのフィルタの体積はF×F×Cで、それはI×I×Cサイズの入力に対して畳み込みを実行してO×O×1サイズの特徴マップ(活性化マップとも呼ばれる)出力を生成します。 + + +
+ + +**26. Filter** + +⟶ フィルタ + +
+ + +**27. Remark: the application of K filters of size F×F results in an output feature map of size O×O×K.** + +⟶ 注: F×FサイズのK個のフィルタを適用すると、O×O×Kサイズの特徴マップの出力を得られます。 + +
+ + +**28. Stride ― For a convolutional or a pooling operation, the stride S denotes the number of pixels by which the window moves after each operation.** + +⟶ ストライド - 畳み込みまたはプーリング操作において、ストライドSは各操作の後にウィンドウを移動させるピクセル数を表します。 + +
+ + +**29. Zero-padding ― Zero-padding denotes the process of adding P zeroes to each side of the boundaries of the input. This value can either be manually specified or automatically set through one of the three modes detailed below:** + +⟶ ゼロパディング - ゼロパディングとは入力の各境界に対してP個のゼロを追加するプロセスを意味します。この値は手動で指定することも、以下に詳述する3つのモードのいずれかを使用して自動的に設定することもできます。 + +
+ + +**30. [Mode, Value, Illustration, Purpose, Valid, Same, Full]** + +⟶ [モード、値、図、目的、Valid、Same、Full] + +
+
+
+**31. [No padding, Drops last convolution if dimensions do not match, Padding such that feature map size has size ⌈I/S⌉, Output size is mathematically convenient, Also called 'half' padding, Maximum padding such that end convolutions are applied on the limits of the input, Filter 'sees' the input end-to-end]**
+
+⟶ [パディングなし、次元が合わない場合は最後の畳み込みを破棄、特徴マップのサイズが⌈I/S⌉になるようなパディング、出力サイズは数学的に扱いやすい、「ハーフ」パディングとも呼ばれる、入力の一番端まで畳み込みが適用されるような最大パディング、フィルタは入力を端から端まで「見る」]
+
+<br>
+ + +**32. Tuning hyperparameters** + +⟶ ハイパーパラメータの調整 + +
+ + +**33. Parameter compatibility in convolution layer ― By noting I the length of the input volume size, F the length of the filter, P the amount of zero padding, S the stride, then the output size O of the feature map along that dimension is given by:** + +⟶ 畳み込み層内のパラメータ互換性 - Iを入力ボリュームサイズの長さ、Fをフィルタの長さ、Pをゼロパディングの量, Sをストライドとすると、その次元に沿った特徴マップの出力サイズOは次式で与えられます: + +
+ + +**34. [Input, Filter, Output]** + +⟶ [入力、フィルタ、出力] + +
+ + +**35. Remark: often times, Pstart=Pend≜P, in which case we can replace Pstart+Pend by 2P in the formula above.** + +⟶ 注: 多くの場合Pstart=Pend≜Pであり、上記の式のPstart+Pendを2Pに置き換える事ができます。 + +
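The output-size formula O=(I−F+Pstart+Pend)/S+1 can be sketched as a small helper (the 32-pixel input with a 5×5 filter is a made-up example; integer division assumes the parameters are compatible):

```python
def conv_output_size(I, F, S, P_start, P_end):
    """O = (I - F + P_start + P_end) / S + 1 along one spatial dimension."""
    return (I - F + P_start + P_end) // S + 1

# 32-pixel input, 5x5 filter, stride 1, 'same'-style padding of 2 on each side
o_same = conv_output_size(32, 5, 1, 2, 2)   # preserves the input size
o_valid = conv_output_size(7, 3, 2, 0, 0)   # no padding, stride 2
```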
+ + +**36. Understanding the complexity of the model ― In order to assess the complexity of a model, it is often useful to determine the number of parameters that its architecture will have. In a given layer of a convolutional neural network, it is done as follows:** + +⟶ モデルの複雑さを理解する - モデルの複雑さを評価するために、モデルのアーキテクチャが持つパラメータの数を測定することがしばしば有用です。畳み込みニューラルネットワークの各層では、以下のように行なわれます: + +
+ + +**37. [Illustration, Input size, Output size, Number of parameters, Remarks]** + +⟶ [図、入力サイズ、出力サイズ、パラメータの数、備考] + +
+
+
+**38. [One bias parameter per filter, In most cases, S<F]**
+
+⟶ [フィルタごとに1つのバイアスパラメータ、ほとんどの場合、S<F]
+
+<br>
+
+
+**39. [Pooling operation done channel-wise, In most cases, S=F]**
+
+⟶ [チャネルごとに行われるプーリング操作、ほとんどの場合、S=F]
+
+<br>
+ + +**40. [Input is flattened, One bias parameter per neuron, The number of FC neurons is free of structural constraints]** + +⟶ [入力は平坦化される、ニューロンごとにひとつのバイアスパラメータ、FCのニューロンの数には構造的制約がない] + +
+ + +**41. Receptive field ― The receptive field at layer k is the area denoted Rk×Rk of the input that each pixel of the k-th activation map can 'see'. By calling Fj the filter size of layer j and Si the stride value of layer i and with the convention S0=1, the receptive field at layer k can be computed with the formula:** + +⟶ 受容野 - 層kにおける受容野は、k番目の活性化マップの各ピクセルが「見る」ことができる入力のRk×Rkの領域です。層jのフィルタサイズをFj、層iのストライド値をSiとし、慣例に従ってS0=1とすると、層kでの受容野は次の式で計算されます: + +
+ + +**42. In the example below, we have F1=F2=3 and S1=S2=1, which gives R2=1+2⋅1+2⋅1=5.** + +⟶ 下記の例のようにF1=F2=3、S1=S2=1とすると、R2=1+2⋅1+2⋅1=5となります。 + +
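The receptive-field formula above can be sketched as follows, with F and S listing the filter sizes and strides of layers 1..k and the convention S0=1 (the stride-2 example is a made-up extra check):

```python
def receptive_field(F, S):
    """R_k = 1 + sum_j (F_j - 1) * prod_{i<j} S_i, with S_0 = 1."""
    R, jump = 1, 1                         # jump accumulates the stride product
    for f, s_prev in zip(F, [1] + S[:-1]):
        jump *= s_prev                     # prod of strides of earlier layers
        R += (f - 1) * jump
    return R

r2 = receptive_field([3, 3], [1, 1])       # the example above: R2 = 5
r2_strided = receptive_field([3, 3], [2, 2])
```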
+ + +**43. Commonly used activation functions** + +⟶ よく使われる活性化関数 + +
+ + +**44. Rectified Linear Unit ― The rectified linear unit layer (ReLU) is an activation function g that is used on all elements of the volume. It aims at introducing non-linearities to the network. Its variants are summarized in the table below:** + +⟶ 正規化線形ユニット - 正規化線形ユニット層(ReLU)はボリュームの全ての要素に利用される活性化関数gです。ReLUの目的は非線型性をネットワークに導入することです。変種は以下の表でまとめられています: + +
+ + +**45. [ReLU, Leaky ReLU, ELU, with]** + +⟶[ReLU、Leaky ReLU、ELU、ただし] + +
+
+
+**46. [Non-linearity complexities biologically interpretable, Addresses dying ReLU issue for negative values, Differentiable everywhere]**
+
+⟶ [生物学的に解釈可能な非線形複雑性、負の値に対してReLUが死んでいる問題への対処、どこでも微分可能]
+
+<br>
+ + +**47. Softmax ― The softmax step can be seen as a generalized logistic function that takes as input a vector of scores x∈Rn and outputs a vector of output probability p∈Rn through a softmax function at the end of the architecture. It is defined as follows:** + +⟶ ソフトマックス - ソフトマックスのステップは入力としてスコアx∈Rnのベクトルを取り、アーキテクチャの最後にあるソフトマックス関数を通じて確率p∈Rnのベクトルを出力する一般化されたロジスティック関数として見ることができます。次のように定義されます: + +
+ + +**48. where** + +⟶ ここで + +
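The softmax step can be sketched as follows (the score vector is a made-up example; subtracting the maximum before exponentiating is a standard numerical-stability trick and does not change the result):

```python
import numpy as np

def softmax(x):
    """p_i = e^{x_i} / sum_j e^{x_j}, computed in a numerically stable way."""
    e = np.exp(x - np.max(x))  # shift by max(x) to avoid overflow
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))  # probabilities summing to 1
```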
+ + +**49. Object detection** + +⟶ 物体検出 + +
+ + +**50. Types of models ― There are 3 main types of object recognition algorithms, for which the nature of what is predicted is different. They are described in the table below:** + +⟶ モデルの種類 - 物体認識アルゴリズムは主に3つの種類があり、予測されるものの性質は異なります。次の表で説明されています: + +
+ + +**51. [Image classification, Classification w. localization, Detection]** + +⟶ [画像分類、位置特定を伴う分類、検出] + +
+ + +**52. [Teddy bear, Book]** + +⟶ [テディベア、本] + +
+
+
+**53. [Classifies a picture, Predicts probability of object, Detects an object in a picture, Predicts probability of object and where it is located, Detects up to several objects in a picture, Predicts probabilities of objects and where they are located]**
+
+⟶ [画像の分類、物体の確率の予測、画像内の物体の検出、物体の確率とその位置の予測、画像内の複数の物体の検出、複数の物体の確率と位置の予測]
+
+<br>
+
+
+**54. [Traditional CNN, Simplified YOLO, R-CNN, YOLO, R-CNN]**
+
+⟶ [伝統的なCNN、単純化されたYOLO、R-CNN、YOLO、R-CNN]
+
+<br>
+ + +**55. Detection ― In the context of object detection, different methods are used depending on whether we just want to locate the object or detect a more complex shape in the image. The two main ones are summed up in the table below:** + +⟶ 検出 - 物体検出の文脈では、画像内の物体の位置を特定したいだけなのかあるいは複雑な形状を検出したいのかによって、異なる方法が使用されます。二つの主なものは次の表でまとめられています: + +
+ + +**56. [Bounding box detection, Landmark detection]** + +⟶ [バウンディングボックス検出、ランドマーク検出] + +
+
+
+**57. [Detects the part of the image where the object is located, Detects a shape or characteristics of an object (e.g. eyes), More granular]**
+
+⟶ [物体が配置されている画像の部分の検出、物体(たとえば目)の形状または特徴の検出、より細かい粒度]
+
+<br>
+ + +**58. [Box of center (bx,by), height bh and width bw, Reference points (l1x,l1y), ..., (lnx,lny)]** + +⟶ [中心(bx, by)、高さbh、幅bwのボックス、参照点(l1x,l1y), ..., (lnx,lny)] + +
+ + +**59. Intersection over Union ― Intersection over Union, also known as IoU, is a function that quantifies how correctly positioned a predicted bounding box Bp is over the actual bounding box Ba. It is defined as:** + +⟶ Intersection over Union - Intersection over Union (IoUとしても知られる)は予測された境界ボックスBpが実際の境界ボックスBaに対してどれだけ正しく配置されているかを定量化する関数です。次のように定義されます: + +
+ + +**60. Remark: we always have IoU∈[0,1]. By convention, a predicted bounding box Bp is considered as being reasonably good if IoU(Bp,Ba)⩾0.5.** + +⟶ 注:常にIoU∈[0,1]となります。慣例では、IoU(Bp,Ba)⩾0.5の場合、予測された境界ボックスBpはそこそこ良いと見なされます。 + +
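IoU の計算は次のようにスケッチできます (ボックスを角座標 (x1, y1, x2, y2) で表すのはこの例での仮定です):

```python
def iou(box_p, box_a):
    """予測ボックス Bp と実際のボックス Ba の Intersection over Union を返す。"""
    # 交差領域の角座標
    x1, y1 = max(box_p[0], box_a[0]), max(box_p[1], box_a[1])
    x2, y2 = min(box_p[2], box_a[2]), min(box_p[3], box_a[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    union = area_p + area_a - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 交差 1 / 和集合 7
```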
+ + +**61. Anchor boxes ― Anchor boxing is a technique used to predict overlapping bounding boxes. In practice, the network is allowed to predict more than one box simultaneously, where each box prediction is constrained to have a given set of geometrical properties. For instance, the first prediction can potentially be a rectangular box of a given form, while the second will be another rectangular box of a different geometrical form.** + +⟶ アンカーボックス - アンカーボクシングは重なり合う境界ボックスを予測するために使用される手法です。 実際には、ネットワークは同時に複数のボックスを予測することを許可されており、各ボックスの予測は特定の幾何学的属性の組み合わせを持つように制約されます。例えば、最初の予測は特定の形式の長方形のボックスになる可能性があり、2番目の予測は異なる幾何学的形式の別の長方形のボックスになります。 + +
+ + +**62. Non-max suppression ― The non-max suppression technique aims at removing duplicate overlapping bounding boxes of a same object by selecting the most representative ones. After having removed all boxes having a probability prediction lower than 0.6, the following steps are repeated while there are boxes remaining:** + +⟶ 非極大抑制 - 非極大抑制技術のねらいは、最も代表的なものを選択することによって、同じ物体の重複した重なり合う境界ボックスを除去することです。0.6未満の予測確率を持つボックスを全て除去した後、残りのボックスがある間、以下の手順が繰り返されます: + +
+ + +**63. [For a given class, Step 1: Pick the box with the largest prediction probability., Step 2: Discard any box having an IoU⩾0.5 with the previous box.]** + +⟶ [特定のクラスに対して、ステップ1: 最大の予測確率を持つボックスを選ぶ。ステップ2: そのボックスに対してIoU⩾0.5となる全てのボックスを破棄する。] + +
+ + +**64. [Box predictions, Box selection of maximum probability, Overlap removal of same class, Final bounding boxes]** + +⟶ [ボックス予測、最大確率のボックス選択、同じクラスの重複除去、最終的な境界ボックス] + +
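上記の非極大抑制の手順は、単一クラスを仮定すると次のようにスケッチできます (ボックスの表現形式と閾値は本文に合わせた仮定の例です):

```python
def iou(b1, b2):
    # ボックスは (x1, y1, x2, y2) の角座標と仮定
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((b1[2] - b1[0]) * (b1[3] - b1[1])
             + (b2[2] - b2[0]) * (b2[3] - b2[1]) - inter)
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, prob_threshold=0.6, iou_threshold=0.5):
    """boxes: (予測確率, (x1, y1, x2, y2)) のリスト。残すボックスを返す。"""
    # 前処理: 予測確率 0.6 未満のボックスを除去
    remaining = sorted((b for b in boxes if b[0] >= prob_threshold), reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)            # ステップ1: 最大の予測確率のボックスを選ぶ
        kept.append(best)
        remaining = [b for b in remaining  # ステップ2: IoU ⩾ 0.5 のボックスを破棄
                     if iou(best[1], b[1]) < iou_threshold]
    return kept
```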
+ + +**65. YOLO ― You Only Look Once (YOLO) is an object detection algorithm that performs the following steps:** + +⟶ YOLO - You Only Look Once (YOLO)は次の手順を実行する物体検出アルゴリズムです: + +
+
+
+**66. [Step 1: Divide the input image into a G×G grid., Step 2: For each grid cell, run a CNN that predicts y of the following form:, repeated k times]**
+
+⟶ [ステップ1: 入力画像をG×Gグリッドに分割する。ステップ2: 各グリッドセルに対して次の形式のyを予測するCNNを実行する:、k回繰り返す]
+
+<br>
+
+
+**67. where pc is the probability of detecting an object, bx,by,bh,bw are the properties of the detected bounding box, c1,...,cp is a one-hot representation of which of the p classes were detected, and k is the number of anchor boxes.**
+
+⟶ ここで、pcは物体を検出する確率、bx,by,bh,bwは検出された境界ボックスの属性、c1, ..., cpはp個のクラスのうちどれが検出されたかのOne-hot表現、kはアンカーボックスの数です。
+
+<br>
+ + +**68. Step 3: Run the non-max suppression algorithm to remove any potential duplicate overlapping bounding boxes.** + +⟶ ステップ3: 重複する可能性のある重なり合う境界ボックスを全て除去するため、非極大抑制アルゴリズムを実行する。 + +
+ + +**69. [Original image, Division in GxG grid, Bounding box prediction, Non-max suppression]** + +⟶ [元の画像、GxGグリッドでの分割、境界ボックス予測、非極大抑制] + +
+ + +**70. Remark: when pc=0, then the network does not detect any object. In that case, the corresponding predictions bx,...,cp have to be ignored.** + +⟶ 注: pc=0のとき、ネットワークは物体を検出しません。その場合には、対応する予測 bx, ..., cpは無視する必要があります。 + +
+ + +**71. R-CNN ― Region with Convolutional Neural Networks (R-CNN) is an object detection algorithm that first segments the image to find potential relevant bounding boxes and then run the detection algorithm to find most probable objects in those bounding boxes.** + +⟶ R-CNN - Region with Convolutional Neural Networks (R-CNN)は物体検出アルゴリズムで、最初に画像をセグメント化して潜在的に関連する境界ボックスを見つけ、次に検出アルゴリズムを実行してそれらの境界ボックス内で最も可能性の高い物体を見つけます。 + +
+ + +**72. [Original image, Segmentation, Bounding box prediction, Non-max suppression]** + +⟶ [元の画像、セグメンテーション、境界ボックス予測、非極大抑制] + +
+ + +**73. Remark: although the original algorithm is computationally expensive and slow, newer architectures enabled the algorithm to run faster, such as Fast R-CNN and Faster R-CNN.** + +⟶ 注: 元のアルゴリズムは計算コストが高くて遅いですが、Fast R-CNNやFaster R-CNNなどの、より新しいアーキテクチャではアルゴリズムをより速く実行できます。 + +
+ + +**74. Face verification and recognition** + +⟶ 顔認証及び認識 + +
+ + +**75. Types of models ― Two main types of model are summed up in table below:** + +⟶ モデルの種類 - 2種類の主要なモデルが次の表にまとめられています: + +
+ + +**76. [Face verification, Face recognition, Query, Reference, Database]** + +⟶ [顔認証、顔認識、クエリ、参照、データベース] + +
+ + +**77. [Is this the correct person?, One-to-one lookup, Is this one of the K persons in the database?, One-to-many lookup]** + +⟶ [これは正しい人ですか?、1対1検索、これはデータベース内のK人のうちの1人ですか?、1対多検索] + +
+ + +**78. One Shot Learning ― One Shot Learning is a face verification algorithm that uses a limited training set to learn a similarity function that quantifies how different two given images are. The similarity function applied to two images is often noted d(image 1,image 2).** + +⟶ ワンショット学習 - ワンショット学習は限られた学習セットを利用して、2つの与えられた画像の違いを定量化する類似度関数を学習する顔認証アルゴリズムです。2つの画像に適用される類似度関数はしばしばd(画像1, 画像2)と記されます。 + +
+ + +**79. Siamese Network ― Siamese Networks aim at learning how to encode images to then quantify how different two images are. For a given input image x(i), the encoded output is often noted as f(x(i)).** + +⟶ シャムネットワーク - シャムネットワークは画像のエンコード方法を学習して2つの画像の違いを定量化することを目的としています。与えられた入力画像x(i)に対してエンコードされた出力はしばしばf(x(i))と記されます。 + +
+ + +**80. Triplet loss ― The triplet loss ℓ is a loss function computed on the embedding representation of a triplet of images A (anchor), P (positive) and N (negative). The anchor and the positive example belong to a same class, while the negative example to another one. By calling α∈R+ the margin parameter, this loss is defined as follows:** + +⟶ トリプレット損失 - トリプレット損失ℓは3つ組の画像A(アンカー)、P(ポジティブ)、N(ネガティブ)の埋め込み表現で計算される損失関数です。アンカーとポジティブ例は同じクラスに属し、ネガティブ例は別のクラスに属します。マージンパラメータをα∈R+と呼ぶことによってこの損失は次のように定義されます: + +
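トリプレット損失 ℓ(A,P,N)=max(d(A,P)−d(A,N)+α, 0) は、埋め込みベクトルに対して次のように計算できます (距離 d を二乗ユークリッド距離とするのはこの例での仮定です):

```python
def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """f_a, f_p, f_n: アンカー・ポジティブ・ネガティブの埋め込み f(x)。"""
    def d(u, v):  # 二乗ユークリッド距離 (仮定)
        return sum((ui - vi) ** 2 for ui, vi in zip(u, v))
    return max(d(f_a, f_p) - d(f_a, f_n) + alpha, 0.0)

# アンカーとポジティブが近く、ネガティブが十分遠ければ損失は 0 になる
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0]))  # 0.0
```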
+ + +**81. Neural style transfer** + +⟶ ニューラルスタイル変換 + +
+ + +**82. Motivation ― The goal of neural style transfer is to generate an image G based on a given content C and a given style S.** + +⟶ モチベーション - ニューラルスタイル変換の目的は与えられたコンテンツCとスタイルSに基づく画像Gを生成することです。 + +
+ + +**83. [Content C, Style S, Generated image G]** + +⟶ [コンテンツC、スタイルS、生成された画像G] + +
+ + +**84. Activation ― In a given layer l, the activation is noted a[l] and is of dimensions nH×nw×nc** + +⟶ 活性化 - 層lにおける活性化はa[l]と表記され、次元はnH×nw×ncです。 + +
+ + +**85. Content cost function ― The content cost function Jcontent(C,G) is used to determine how the generated image G differs from the original content image C. It is defined as follows:** + +⟶ コンテンツコスト関数 - Jcontent(C, G)というコンテンツコスト関数は生成された画像Gと元のコンテンツ画像Cとの違いを測定するため利用されます。以下のように定義されます: + +
+ + +**86. Style matrix ― The style matrix G[l] of a given layer l is a Gram matrix where each of its elements G[l]kk′ quantifies how correlated the channels k and k′ are. It is defined with respect to activations a[l] as follows:** + +⟶ スタイル行列 - 与えられた層lのスタイル行列G[l]はグラム行列で、各要素G[l]kk′がチャネルkとk′の相関関係を定量化します。活性化a[l]に関して次のように定義されます: + +
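スタイル行列 (グラム行列) の各要素 G[l]kk′=Σi,j a[l]ijk a[l]ijk′ は、素朴に書くと次のようになります (活性化をネストしたリストで表すのはこの例での仮定です):

```python
def gram_matrix(a):
    """a: nH × nW × nC の活性化。G[k][k2] = Σ_{i,j} a[i][j][k] * a[i][j][k2]"""
    n_h, n_w, n_c = len(a), len(a[0]), len(a[0][0])
    G = [[0.0] * n_c for _ in range(n_c)]
    for i in range(n_h):
        for j in range(n_w):
            for k in range(n_c):
                for k2 in range(n_c):
                    G[k][k2] += a[i][j][k] * a[i][j][k2]
    return G

# 1×1×2 の活性化での例: チャネル値 (1, 2) 同士の相関
print(gram_matrix([[[1.0, 2.0]]]))  # [[1.0, 2.0], [2.0, 4.0]]
```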
+ + +**87. Remark: the style matrix for the style image and the generated image are noted G[l] (S) and G[l] (G) respectively.** + +⟶ 注: スタイル画像及び生成された画像に対するスタイル行列はそれぞれG[l] (S)、G[l] (G)と表記されます。 + +
+ + +**88. Style cost function ― The style cost function Jstyle(S,G) is used to determine how the generated image G differs from the style S. It is defined as follows:** + +⟶ スタイルコスト関数 - スタイルコスト関数Jstyle(S,G)は生成された画像GとスタイルSとの違いを測定するため利用されます。以下のように定義されます: + +
+ + +**89. Overall cost function ― The overall cost function is defined as being a combination of the content and style cost functions, weighted by parameters α,β, as follows:** + +⟶ 全体のコスト関数 - 全体のコスト関数は以下のようにパラメータα,βによって重み付けされたコンテンツ及びスタイルコスト関数の組み合わせとして定義されます: + +
+ + +**90. Remark: a higher value of α will make the model care more about the content while a higher value of β will make it care more about the style.** + +⟶ 注: αの値を大きくするとモデルはコンテンツを重視し、βの値を大きくするとスタイルを重視します。 + +
+ + +**91. Architectures using computational tricks** + +⟶ 計算トリックを使うアーキテクチャ + +
+ + +**92. Generative Adversarial Network ― Generative adversarial networks, also known as GANs, are composed of a generative and a discriminative model, where the generative model aims at generating the most truthful output that will be fed into the discriminative which aims at differentiating the generated and true image.** + +⟶ 敵対的生成ネットワーク - 敵対的生成ネットワーク(GANsとも呼ばれる)は生成モデルと識別モデルで構成されます。生成モデルの目的は、生成された画像と本物の画像を区別することを目的とする識別モデルに与えられる、最も本物らしい出力を生成することです。 + +
+ + +**93. [Training set, Noise, Real-world image, Generator, Discriminator, Real Fake]** + +⟶ [学習セット、ノイズ、現実世界の画像、生成器、識別器、真偽] + +
+ + +**94. Remark: use cases using variants of GANs include text to image, music generation and synthesis.** + +⟶ 注: GANsの変種を使用するユースケースにはテキストからの画像生成, 音楽生成及び合成があります。 + +
+
+
+**95. ResNet ― The Residual Network architecture (also called ResNet) uses residual blocks with a high number of layers meant to decrease the training error. The residual block has the following characterizing equation:**
+
+⟶ ResNet - Residual Networkアーキテクチャ(ResNetとも呼ばれる)は学習誤差を減らすため多数の層がある残差ブロックを使用します。残差ブロックは次の式で特徴づけられます:
+
+<br>
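残差ブロックの式 a[l+2]=g(z[l+2]+a[l]) は、スキップ接続を足してから活性化関数を適用するという意味で、次のようにスケッチできます (層を任意の関数、g を ReLU とするのはこの例での仮定です):

```python
def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(a_l, layers):
    """a[l+2] = g(z[l+2] + a[l])。layers は z[l+2] を計算する関数 (仮定)。"""
    z = layers(a_l)
    return relu([z_i + a_i for z_i, a_i in zip(z, a_l)])

# 層がゼロ写像なら出力は relu(a[l]): 恒等写像を学習しやすいという残差学習の直感
print(residual_block([1.0, -2.0], lambda v: [0.0] * len(v)))  # [1.0, 0.0]
```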
+ + +**96. Inception Network ― This architecture uses inception modules and aims at giving a try at different convolutions in order to increase its performance through features diversification. In particular, it uses the 1×1 convolution trick to limit the computational burden.** + +⟶ インセプションネットワーク - このアーキテクチャはインセプションモジュールを利用し、特徴量の多様化を通じてパーフォーマンスを向上させるため、様々な畳み込みを試すことを目的としています。特に、計算負荷を限定するため1×1畳み込みトリックを使います。 + +
+ + +**97. The Deep Learning cheatsheets are now available in [target language].** + +⟶ ディープラーニングのチートシートが[日本語]で利用可能になりました。 + +
+ + +**98. Original authors** + +⟶ 原著者 + +
+ + +**99. Translated by X, Y and Z** + +⟶ X・Y・Z 訳 + +
+ + +**100. Reviewed by X, Y and Z** + +⟶ X・Y・Z 校正 + +
+ + +**101. View PDF version on GitHub** + +⟶ GitHubでPDF版を見る + +
+ + +**102. By X and Y** + +⟶ X・Y 著 + +
diff --git a/ja/cs-230-deep-learning-tips-and-tricks.md b/ja/cs-230-deep-learning-tips-and-tricks.md new file mode 100644 index 000000000..a7de15349 --- /dev/null +++ b/ja/cs-230-deep-learning-tips-and-tricks.md @@ -0,0 +1,457 @@ +**Deep Learning Tips and Tricks translation** + +
+ +**1. Deep Learning Tips and Tricks cheatsheet** + +⟶深層学習(ディープラーニング)のアドバイスやコツのチートシート + +
+ + +**2. CS 230 - Deep Learning** + +⟶CS 230 - 深層学習 + +
+ + +**3. Tips and tricks** + +⟶アドバイスやコツ + +
+ + +**4. [Data processing, Data augmentation, Batch normalization]** + +⟶データ処理、Data augmentation (データ拡張)、Batch normalization (バッチ正規化) + +
+ + +**5. [Training a neural network, Epoch, Mini-batch, Cross-entropy loss, Backpropagation, Gradient descent, Updating weights, Gradient checking]** + +⟶ニューラルネットワークの学習、エポック、ミニバッチ、交差エントロピー誤差、誤差逆伝播法、勾配降下法、重み更新、勾配チェック + +
+ + +**6. [Parameter tuning, Xavier initialization, Transfer learning, Learning rate, Adaptive learning rates]** + +⟶パラメータチューニング、Xavier初期化、転移学習、学習率、適応学習率 + +
+
+
+**7. [Regularization, Dropout, Weight regularization, Early stopping]**
+
+⟶正則化、Dropout (ドロップアウト)、重みの正則化、Early stopping (学習の早々な終了)
+
+<br>
+ + +**8. [Good practices, Overfitting small batch, Gradient checking]** + +⟶おすすめの技法、小さいバッチの過学習、勾配チェック + +
+ + +**9. View PDF version on GitHub** + +⟶GitHubでPDF版を見る + +
+ + +**10. Data processing** + +⟶データ処理 + +
+
+
+**11. Data augmentation ― Deep learning models usually need a lot of data to be properly trained. It is often useful to get more data from the existing ones using data augmentation techniques. The main ones are summed up in the table below. More precisely, given the following input image, here are the techniques that we can apply:**
+
+⟶Data augmentation (データ拡張) - 大抵の場合、深層学習のモデルを適切に訓練するには大量のデータが必要です。Data augmentation という技術を用いて既存のデータからデータを増やすことがよく役立ちます。主な手法は以下の表にまとめられています。より正確には、以下の入力画像に対して下記の技術を適用できます:
+
+<br>
+ + +**12. [Original, Flip, Rotation, Random crop]** + +⟶元の画像、反転、回転、ランダムな切り抜き + +
+ + +**13. [Image without any modification, Flipped with respect to an axis for which the meaning of the image is preserved, Rotation with a slight angle, Simulates incorrect horizon calibration, Random focus on one part of the image, Several random crops can be done in a row]** + +⟶何も変更されていない画像、画像の意味が変わらない軸における反転、わずかな角度の回転、不正確な水平線の校正(calibration)をシミュレートする、画像の一部へのランダムなフォーカス、連続して数回のランダムな切り抜きが可能 + +
+ + +**14. [Color shift, Noise addition, Information loss, Contrast change]** + +⟶カラーシフト、ノイズの付加、情報損失、コントラスト(鮮やかさ)の修正 + +
+ + +**15. [Nuances of RGB is slightly changed, Captures noise that can occur with light exposure, Addition of noise, More tolerance to quality variation of inputs, Parts of image ignored, Mimics potential loss of parts of image, Luminosity changes, Controls difference in exposition due to time of day]** + +⟶RGBのわずかな修正、照らされ方によるノイズを捉える、ノイズの付加、入力画像の品質のばらつきへの耐性の強化、画像の一部を無視、画像の一部が欠ける可能性を再現する、明るさの変化、時刻による露出の違いをコントロールする + +
+ + +**16. Remark: data is usually augmented on the fly during training.** + +⟶備考:データ拡張は基本的には学習時に臨機応変に行われる。 + +
+
+
+**17. Batch normalization ― It is a step of hyperparameter γ,β that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of that we want to correct to the batch, it is done as follows:**
+
+⟶Batch normalization (バッチ正規化) - ハイパーパラメータ γ、β によってバッチ {xi} を正規化するステップです。修正を加えたいバッチの平均と分散を μB,σ2B と表記すると、以下のように行われます:
+
+<br>
+ + +**18. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.** + +⟶より高い学習率を利用可能にし初期化への強い依存を減らすことを目的として、基本的には全結合層・畳み込み層のあとで非線形層の前に行います。 + +
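この正規化ステップ (xi ← γ·(xi−μB)/√(σ²B+ε)+β) は、1 次元のバッチに対して次のように書けます (ε は数値安定のための小さな定数という一般的な仮定です):

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    mu = sum(xs) / len(xs)                          # μB: バッチ平均
    var = sum((x - mu) ** 2 for x in xs) / len(xs)  # σ²B: バッチ分散
    return [gamma * (x - mu) / math.sqrt(var + eps) + beta for x in xs]

out = batch_norm([1.0, 2.0, 3.0])
print(sum(out) / len(out))  # 正規化後の平均は (ほぼ) 0
```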
+ + +**19. Training a neural network** + +⟶ニューラルネットワークの学習 + +
+ + +**20. Definitions** + +⟶定義 + +
+ + +**21. Epoch ― In the context of training a model, epoch is a term used to refer to one iteration where the model sees the whole training set to update its weights.** + +⟶エポック - モデル学習においてエポックとは学習の繰り返しの中の1回を指す用語で、1エポックの間にモデルは全学習データからその重みを更新します。 + +
+ + +**22. Mini-batch gradient descent ― During the training phase, updating weights is usually not based on the whole training set at once due to computation complexities or one data point due to noise issues. Instead, the update step is done on mini-batches, where the number of data points in a batch is a hyperparameter that we can tune.** + +⟶ミニバッチ勾配降下法 - 学習段階では、計算が複雑になりすぎるため通常は全データを一度に使って重みを更新することはありません。またノイズが問題になるため1つのデータポイントだけを使って重みを更新することもありません。代わりに、更新はミニバッチごとに行われます。各バッチに含まれるデータポイントの数は調整可能なハイパーパラメータです。 + +
+ + +**23. Loss function ― In order to quantify how a given model performs, the loss function L is usually used to evaluate to what extent the actual outputs y are correctly predicted by the model outputs z.** + +⟶損失関数 - 得られたモデルの性能を数値化するために、モデルの出力zが実際の出力yをどの程度正確に予測できているかを評価する損失関数Lが通常使われます。 + +
+ + +**24. Cross-entropy loss ― In the context of binary classification in neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** + +⟶交差エントロピー誤差 - ニューラルネットワークにおける二項分類では、交差エントロピー誤差L(z,y)が一般的に使用されており、以下のように定義されています。 + +
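二項分類の交差エントロピー誤差 L(z,y)=−[y log(z)+(1−y) log(1−z)] は次の通りです (原文にはない補足のスケッチです):

```python
import math

def cross_entropy(z, y):
    """z: モデルの出力 (0 < z < 1)、y: 実際のラベル (0 または 1)。"""
    return -(y * math.log(z) + (1 - y) * math.log(1 - z))

# 正解ラベル 1 に対して z が 1 に近いほど損失は小さい
print(cross_entropy(0.9, 1) < cross_entropy(0.5, 1))  # True
```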
+ + +**25. Finding optimal weights** + +⟶最適な重みの探索 + +
+ + +**26. Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to each weight w is computed using the chain rule.** + +⟶誤差逆伝播法 - 実際の出力と期待される出力の差に基づいてニューラルネットワークの重みを更新する手法です。各重みwに関する微分は連鎖律を用いて計算されます。 + +
+ + +**27. Using this method, each weight is updated with the rule:** + +⟶この方法を使用することで、それぞれの重みはそのルールにしたがって更新されます。 + +
+ + +**28. Updating weights ― In a neural network, weights are updated as follows:** + +⟶重みの更新 - ニューラルネットワークでは、以下の方法にしたがって重みが更新されます。 + +
+ + +**29. [Step 1: Take a batch of training data and perform forward propagation to compute the loss, Step 2: Backpropagate the loss to get the gradient of the loss with respect to each weight, Step 3: Use the gradients to update the weights of the network.]** + +⟶ステップ1:訓練データのバッチを用いて順伝播で損失を計算します。ステップ2:損失を逆伝播させて各重みに関する損失の勾配を求めます。ステップ3:求めた勾配を用いてネットワークの重みを更新します。 + +
+ + +**30. [Forward propagation, Backpropagation, Weights update]** + +⟶順伝播、逆伝播、重みの更新 + +
+ + +**31. Parameter tuning** + +⟶パラメータチューニング + +
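上記の重み更新の核になるのは w ← w − α·∂L/∂w という更新則です。1 変数の損失 L(w)=(w−3)² で雰囲気を確かめられます (この損失関数は説明用の仮定です):

```python
def gradient_descent(grad, w0, alpha=0.1, steps=100):
    """grad: 勾配 ∂L/∂w を返す関数。w ← w - α * grad(w) を繰り返す。"""
    w = w0
    for _ in range(steps):
        w = w - alpha * grad(w)
    return w

# L(w) = (w - 3)^2 の勾配は 2(w - 3)。最小点 w = 3 に収束する
w_star = gradient_descent(lambda w: 2 * (w - 3.0), w0=0.0)
print(round(w_star, 6))
```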
+ + +**32. Weights initialization** + +⟶重みの初期化 + +
+ + +**33. Xavier initialization ― Instead of initializing the weights in a purely random manner, Xavier initialization enables to have initial weights that take into account characteristics that are unique to the architecture.** + +⟶Xavier初期化 - 完全にランダムな方法で重みを初期化するのではなく、そのアーキテクチャのユニークな特徴を考慮に入れて重みを初期化する方法です。 + +
+ + +**34. Transfer learning ― Training a deep learning model requires a lot of data and more importantly a lot of time. It is often useful to take advantage of pre-trained weights on huge datasets that took days/weeks to train, and leverage it towards our use case. Depending on how much data we have at hand, here are the different ways to leverage this:** + +⟶転移学習 - 深層学習のモデルを学習させるには大量のデータと何よりも時間が必要です。膨大なデータセットから数日・数週間をかけて構築した学習済みモデルを利用し、自身のユースケースに活かすことは有益であることが多いです。手元にあるデータ量次第ではありますが、これを利用する以下の方法があります。 + +
+ + +**35. [Training size, Illustration, Explanation]** + +⟶学習サイズ、図、解説 + +
+ + +**36. [Small, Medium, Large]** + +⟶小、中、大 + +
+ + +**37. [Freezes all layers, trains weights on softmax, Freezes most layers, trains weights on last layers and softmax, Trains weights on layers and softmax by initializing weights on pre-trained ones]** + +⟶全層を凍結し、softmaxの重みを学習させる、大半の層を凍結し、最終層とsoftmaxの重みを学習させる、学習済みの重みで初期化して各層とsoftmaxの重みを学習させる + +
+ + +**38. Optimizing convergence** + +⟶収束の最適化 + +
+ + +**39. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. It can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate. +** + +⟶学習率 - 多くの場合αや時々ηと表記される学習率とは、重みの更新速度を表しています。学習率は固定することもできる上に、適応的に変更することもできます。現在もっとも使用される手法は、学習率を適切に調整するAdamと呼ばれる手法です。 + +
+ + +**40. Adaptive learning rates ― Letting the learning rate vary when training a model can reduce the training time and improve the numerical optimal solution. While Adam optimizer is the most commonly used technique, others can also be useful. They are summed up in the table below:** + +⟶適応学習率法 - モデルを学習させる際に学習率を変動させると、学習時間の短縮や精度の向上につながります。Adamがもっとも一般的に使用されている手法ですが、他の手法も役立つことがあります。それらの手法を下記の表にまとめました。 + +
+ + +**41. [Method, Explanation, Update of w, Update of b]** + +⟶手法、解説、wの更新、bの更新 + +
+ + +**42. [Momentum, Dampens oscillations, Improvement to SGD, 2 parameters to tune]** + +⟶Momentum(運動量)、振動を抑制する、SGDの改良、チューニングするパラメータは2つ + +
+ + +**43. [RMSprop, Root Mean Square propagation, Speeds up learning algorithm by controlling oscillations]** + +⟶RMSprop, 二乗平均平方根のプロパゲーション、振動をコントロールすることで学習アルゴリズムを高速化する + +
+ + +**44. [Adam, Adaptive Moment estimation, Most popular method, 4 parameters to tune]** + +⟶Adam, Adaptive Moment estimation, もっとも人気のある手法、チューニングするパラメータは4つ + +
+ + +**45. Remark: other methods include Adadelta, Adagrad and SGD.** + +⟶備考:他にAdadelta, Adagrad, SGD などの手法があります。 + +
+ + +**46. Regularization** + +⟶正則化 + +
+ + +**47. Dropout ― Dropout is a technique used in neural networks to prevent overfitting the training data by dropping out neurons with probability p>0. It forces the model to avoid relying too much on particular sets of features.** + +⟶ドロップアウト - ドロップアウトとは、ニューラルネットワークで過学習を避けるためにp>0の確率でノードをドロップアウト(無効化)する手法です。モデルが特定の特徴量に依存しすぎることを避けるよう強制します。 + +
+ + +**48. Remark: most deep learning frameworks parametrize dropout through the 'keep' parameter 1−p.** + +⟶備考:ほとんどの深層学習のフレームワークでは、ドロップアウトを'keep'というパラメータ(1-p)でパラメータ化します。 + +
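確率 p でニューロンを無効化し、残りを 1/(1−p) (= 1/keep) 倍して期待値を保つ inverted dropout のスケッチです (乱数シードの固定は再現性のためのこの例での仮定です):

```python
import random

def dropout(activations, p=0.5, seed=0):
    """学習時の inverted dropout。'keep' パラメータは 1 - p。"""
    rng = random.Random(seed)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

# p = 0 なら何も落とさず、値もそのまま
print(dropout([1.0, 2.0, 3.0], p=0.0))  # [1.0, 2.0, 3.0]
```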
+ + +**49. Weight regularization ― In order to make sure that the weights are not too large and that the model is not overfitting the training set, regularization techniques are usually performed on the model weights. The main ones are summed up in the table below:** + +⟶重みの正則化 - 重みが大きくなりすぎず、モデルが過学習しないようにするため、モデルの重みに対して正則化を行います。主な正則化手法は以下の表にまとめられています。 + +
+ + +**50. [LASSO, Ridge, Elastic Net]** + +⟶LASSO, Ridge, Elastic Net + +
+ +**50 bis. Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** + +⟶bis. 係数を0へ小さくする、変数選択に適している、係数を小さくする、変数選択と小さい係数のトレードオフ + +
+
+**50 bis. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]**
+
+⟶ [係数を0へ小さくする、変数選択に適している、係数を小さくする、変数選択と小さい係数のトレードオフ]
+
+<br>
+
+**51. Early stopping ― This regularization technique stops the training process as soon as the validation loss reaches a plateau or starts to increase.**
+
+⟶Early stopping - バリデーションの損失が変化しなくなるか、あるいは増加し始めたときに学習を早々に止める正則化手法です。
+
+<br>
+ + +**52. [Error, Validation, Training, early stopping, Epochs]** + +⟶損失、評価、学習、early stopping、エポック + +
+ + +**53. Good practices** + +⟶おすすめの技法 + +
+ + +**54. Overfitting small batch ― When debugging a model, it is often useful to make quick tests to see if there is any major issue with the architecture of the model itself. In particular, in order to make sure that the model can be properly trained, a mini-batch is passed inside the network to see if it can overfit on it. If it cannot, it means that the model is either too complex or not complex enough to even overfit on a small batch, let alone a normal-sized training set.** + +⟶小さいバッチの過学習 - モデルをデバッグするとき、モデル自体の構造に大きな問題がないか確認するため簡易的なテストが役に立つことが多いです。特に、モデルを正しく学習できることを確認するため、ミニバッチをネットワークに渡してそれを過学習できるかを見ます。もしできなければ、モデルは複雑すぎるか単純すぎるかのいずれかであることを意味し、普通サイズの学習データセットはもちろん、小さいバッチですら過学習できないのです。 + +
+ + +**55. Gradient checking ― Gradient checking is a method used during the implementation of the backward pass of a neural network. It compares the value of the analytical gradient to the numerical gradient at given points and plays the role of a sanity-check for correctness.** + +⟶Gradient checking (勾配チェック) - Gradient checking とは、ニューラルネットワークの逆伝播を実装する際に用いられる手法です。特定の点で解析的勾配と数値的勾配とを比較する手法で、逆伝播の実装が正しいことを確認できます。 + +
+ + +**56. [Type, Numerical gradient, Analytical gradient]** + +⟶種類、数値的勾配、解析的勾配 + +
+ + +**57. [Formula, Comments]** + +⟶公式、コメント + +
+ + +**58. [Expensive; loss has to be computed two times per dimension, Used to verify correctness of analytical implementation, Trade-off in choosing h not too small (numerical instability) nor too large (poor gradient approximation)]** + +⟶計算コストが高い;損失を次元ごとに2回計算する必要がある、解析的実装が正しいかのチェックに用いられる、hを選ぶ時に小さすぎると数値不安定になり、大きすぎると勾配近似が不正確になるというトレードオフがある + +
+ + +**59. ['Exact' result, Direct computation, Used in the final implementation]** + +⟶「正しい」結果、直接的な計算、最終的な実装で使われる + +
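数値的勾配 (f(x+h)−f(x−h))/(2h) と解析的勾配を比較する gradient checking の最小例です (f と h の値は説明用の仮定です):

```python
def numerical_gradient(f, x, h=1e-5):
    """中心差分による数値的勾配。"""
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = x^2 の解析的勾配は 2x。x = 3 で両者がほぼ一致することを確認する
f = lambda x: x ** 2
analytical = 2 * 3.0
numerical = numerical_gradient(f, 3.0)
print(abs(analytical - numerical) < 1e-6)  # True: 解析的実装は正しそう
```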
+
+**60. The Deep Learning cheatsheets are now available in [target language].**
+
+⟶深層学習のチートシートは[対象言語]で利用可能になりました。
+
+<br>
+
+**61. Original authors**
+
+⟶原著者
+
+<br>
+ +**62.Translated by X, Y and Z** + +⟶X・Y・Z 訳 + +
+ +**63.Reviewed by X, Y and Z** + +⟶X・Y・Z 校正 + +
+ +**64.View PDF version on GitHub** + +⟶GitHubでPDF版を見る + +
+ +**65.By X and Y** + +⟶X・Y 著 + +
diff --git a/ja/cs-230-recurrent-neural-networks.md b/ja/cs-230-recurrent-neural-networks.md new file mode 100644 index 000000000..e366a86de --- /dev/null +++ b/ja/cs-230-recurrent-neural-networks.md @@ -0,0 +1,678 @@ +**Recurrent Neural Networks translation** + +
+ +**1. Recurrent Neural Networks cheatsheet** + +⟶リカレントニューラルネットワーク チートシート + +
+
+
+**2. CS 230 - Deep Learning**
+
+⟶CS 230 - 深層学習
+
+<br>
+ + +**3. [Overview, Architecture structure, Applications of RNNs, Loss function, Backpropagation]** + +⟶[概要、アーキテクチャの構造、RNNの応用アプリケーション、損失関数、逆伝播] + +
+ + +**4. [Handling long term dependencies, Common activation functions, Vanishing/exploding gradient, Gradient clipping, GRU/LSTM, Types of gates, Bidirectional RNN, Deep RNN]** + +⟶[長期依存性関係の処理、活性化関数、勾配喪失と発散、勾配クリッピング、GRU/LTSM、ゲートの種類、双方向性RNN、ディープ(深層学習)RNN] + +
+
+
+**4. [Handling long term dependencies, Common activation functions, Vanishing/exploding gradient, Gradient clipping, GRU/LSTM, Types of gates, Bidirectional RNN, Deep RNN]**
+
+⟶[長期依存関係の処理、一般的な活性化関数、勾配消失・勾配爆発、勾配クリッピング、GRU/LSTM、ゲートの種類、双方向RNN、ディープRNN]
+
+<br>
+
+
+**5. [Learning word representation, Notations, Embedding matrix, Word2vec, Skip-gram, Negative sampling, GloVe]**
+
+⟶[単語表現の学習、表記、埋め込み行列、Word2vec、スキップグラム、ネガティブサンプリング、GloVe]
+
+<br>
+ + +**7. [Language model, n-gram, Perplexity]** + +⟶[言語モデル、n-gramモデル、パープレキシティ] + +
+ + +**8. [Machine translation, Beam search, Length normalization, Error analysis, Bleu score]** + +⟶[機械翻訳、ビームサーチ、単語長の正規化、エラー分析、BLEUスコア(機械翻訳比較スコア)] + +
+ + +**9. [Attention, Attention model, Attention weights]** + +⟶[アテンション、アテンションモデル、アテンションウェイト] + +
+ + +**10. Overview** + +⟶概要 + +
+ + +**11. Architecture of a traditional RNN ― Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. They are typically as follows:** + +⟶一般的なRNNのアーキテクチャ - RNNとして知られるリカレントニューラルネットワークは、隠れ層の状態を利用して、前の出力を次の入力として取り扱うことを可能にするニューラルネットワークの一種です。一般的なモデルは下記のようになります: + +
+ + +**12. For each timestep t, the activation a and the output y are expressed as follows:** + +⟶それぞれの時点 t において活性化関数の状態 a と出力 y は下記のように表現されます: + +
+ + +**13. and** + +⟶そして + +
+ + +**14. where Wax,Waa,Wya,ba,by are coefficients that are shared temporally and g1,g2 activation functions.** + +⟶ここで、Wax,Waa,Wya,ba,by は全ての時点で共有される係数であり、g1,g2 は活性化関数です。 + +
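1 時点分の計算 a⟨t⟩=g1(Waa a⟨t−1⟩+Wax x⟨t⟩+ba), y⟨t⟩=g2(Wya a⟨t⟩+by) は次のようにスケッチできます (g1 を tanh、g2 を恒等写像とするのはこの例での仮定です):

```python
import math

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def rnn_step(a_prev, x_t, Waa, Wax, Wya, ba, by):
    z = [s1 + s2 + b for s1, s2, b
         in zip(matvec(Waa, a_prev), matvec(Wax, x_t), ba)]
    a_t = [math.tanh(z_i) for z_i in z]                  # g1 = tanh (仮定)
    y_t = [s + b for s, b in zip(matvec(Wya, a_t), by)]  # g2 = 恒等写像 (仮定)
    return a_t, y_t

# 1 次元の隠れ状態・入力・出力での最小例
a1, y1 = rnn_step([0.0], [1.0], Waa=[[0.5]], Wax=[[1.0]], Wya=[[1.0]], ba=[0.0], by=[0.0])
print(a1, y1)
```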
+ + +**15. The pros and cons of a typical RNN architecture are summed up in the table below:** + +⟶一般的なRNNのアーキテクチャ利用の長所・短所については下記の表にまとめられています。 + +
+ + +**16. [Advantages, Possibility of processing input of any length, Model size not increasing with size of input, Computation takes into account historical information, Weights are shared across time]** + +⟶[長所、任意の長さの入力の処理可能性、入力サイズに応じて大きくならないモデルサイズ、時系列情報を考慮した計算、全ての時点で共有される重み] + +
+ + +**17. [Drawbacks, Computation being slow, Difficulty of accessing information from a long time ago, Cannot consider any future input for the current state]** + +⟶[短所、遅い計算、長い時間軸での情報の利用の困難性、現在の状態から将来の入力が予測不可能] + +
+ + +**18. Applications of RNNs ― RNN models are mostly used in the fields of natural language processing and speech recognition. The different applications are summed up in the table below:** + +⟶RNNの応用 - RNNモデルは主に自然言語処理と音声認識の分野で使用されます。さまざまな応用例が以下の表にとめられています: + +
+
+
+**18. Applications of RNNs ― RNN models are mostly used in the fields of natural language processing and speech recognition. The different applications are summed up in the table below:**
+
+⟶RNNの応用 - RNNモデルは主に自然言語処理と音声認識の分野で使用されます。さまざまな応用例が以下の表にまとめられています:
+
+<br>
+ + +**20. [One-to-one, One-to-many, Many-to-one, Many-to-many]** + +⟶[一対一、一対多、多対一、多対多] + +
+ + +**21. [Traditional neural network, Music generation, Sentiment classification, Name entity recognition, Machine translation]** + +⟶[伝統的なニューラルネットワーク、音楽生成、感情分類、固有表現認識、機械翻訳] + +
+ + +**22. Loss function ― In the case of a recurrent neural network, the loss function L of all time steps is defined based on the loss at every time step as follows:** + +⟶損失関数 - リカレントニューラルネットワークの場合、時間軸全体での損失関数Lは、各時点での損失に基づき、次のように定義されます: + +
+ + +**23. Backpropagation through time ― Backpropagation is done at each point in time. At timestep T, the derivative of the loss L with respect to weight matrix W is expressed as follows:** + +⟶時間軸での誤差逆伝播法 - 誤差逆伝播(バックプロパゲーション)が各時点で行われます。時刻 T における、重み行列 W に関する損失 L の導関数は以下のように表されます: + +
+ + +**24. Handling long term dependencies** + +⟶長期依存関係の処理 + +
+ + +**25. Commonly used activation functions ― The most common activation functions used in RNN modules are described below:** + +⟶一般的に使用される活性化関数 - RNNモジュールで使用される最も一般的な活性化関数を以下に説明します: + +
+ + +**26. [Sigmoid, Tanh, RELU]** + +⟶[シグモイド、ハイパボリックタンジェント、RELU] + +
+ + +**27. Vanishing/exploding gradient ― The vanishing and exploding gradient phenomena are often encountered in the context of RNNs. The reason why they happen is that it is difficult to capture long term dependencies because of multiplicative gradient that can be exponentially decreasing/increasing with respect to the number of layers.** + +⟶勾配消失と勾配爆発について - 勾配消失と勾配爆発の現象は、RNNでよく見られます。これらの現象が起こる理由は、掛け算の勾配が層の数に対して指数関数的に減少/増加する可能性があるため、長期の依存関係を捉えるのが難しいからです。 + +
+ + +**28. Gradient clipping ― It is a technique used to cope with the exploding gradient problem sometimes encountered when performing backpropagation. By capping the maximum value for the gradient, this phenomenon is controlled in practice.** + +⟶勾配クリッピング - 誤差逆伝播法を実行するときに時折発生する勾配爆発問題に対処するために使用される手法です。勾配の上限値を定義することで、実際にこの現象が抑制されます。 + +
+ + +**29. clipped** + +⟶clipped + +
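ノルムによる勾配クリッピングの最小例です (L2 ノルムの使用と上限値はこの例での仮定です):

```python
import math

def clip_gradient(grad, max_norm=1.0):
    """勾配の L2 ノルムが max_norm を超えたら、方向を保ったままスケールダウンする。"""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > max_norm:
        return [g * max_norm / norm for g in grad]
    return grad

print(clip_gradient([3.0, 4.0], max_norm=1.0))  # ノルム 5 → [0.6, 0.8]
```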
+ + +**30. Types of gates ― In order to remedy the vanishing gradient problem, specific gates are used in some types of RNNs and usually have a well-defined purpose. They are usually noted Γ and are equal to:** + +⟶ゲートの種類 - 勾配消失問題を解決するために、特定のゲートがいくつかのRNNで使用され、通常明確に定義された目的を持っています。それらは通常Γと記され、以下のように定義されます: + +
+ + +**31. where W,U,b are coefficients specific to the gate and σ is the sigmoid function. The main ones are summed up in the table below:** + +⟶ここで、W、U、bはゲート固有の係数、σはシグモイド関数です。主なものは以下の表にまとめられています: + +
+ + +**32. [Type of gate, Role, Used in]** + +⟶[ゲートの種類、役割、下記で使用] + +
+ + +**33. [Update gate, Relevance gate, Forget gate, Output gate]** + +⟶[更新ゲート、関連ゲート、忘却ゲート、出力ゲート] + +
+ + +**34. [How much past should matter now?, Drop previous information?, Erase a cell or not?, How much to reveal of a cell?]** + +⟶[過去情報はどのくらい重要ですか?、前の情報を削除しますか?、セルを消去しますか?しませんか?、セルをどのくらい見せますか?] + +
+ + +**35. [LSTM, GRU]** + +⟶[LSTM、GRU] + +
+ + +**36. GRU/LSTM ― Gated Recurrent Unit (GRU) and Long Short-Term Memory units (LSTM) deal with the vanishing gradient problem encountered by traditional RNNs, with LSTM being a generalization of GRU. Below is a table summing up the characterizing equations of each architecture:** + +⟶GRU/LSTM - ゲート付きリカレントユニット(GRU)およびロングショートタームメモリユニット(LSTM)は、従来のRNNが直面した勾配消失問題を解決しようとします。LSTMはGRUを一般化したものです。各アーキテクチャを特徴づける式を以下の表にまとめます: + +
+ + +**37. [Characterization, Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Dependencies]** + +⟶[特徴づけ、ゲート付きリカレントユニット(GRU)、ロングショートタームメモリ(LSTM)、依存関係] + +
+ + +**38. Remark: the sign ⋆ denotes the element-wise multiplication between two vectors.** + +⟶備考:記号 ⋆ は2つのベクトル間の要素ごとの乗算を表します。 + +
+ + +**39. Variants of RNNs ― The table below sums up the other commonly used RNN architectures:** + +⟶RNNの変種 - 一般的に使用されている他のRNNアーキテクチャを以下の表にまとめます: + +
+ + +**40. [Bidirectional (BRNN), Deep (DRNN)]** + +⟶[双方向(BRNN)、ディープ(DRNN)] + +
+ + +**41. Learning word representation** + +⟶単語表現の学習 + +
+ + +**42. In this section, we note V the vocabulary and |V| its size.** + +⟶この節では、Vは語彙、そして|V|は語彙のサイズを表します。 + +
+ + +**43. Motivation and notations** + +⟶動機と表記 + +
+ + +**44. Representation techniques ― The two main ways of representing words are summed up in the table below:** + +⟶表現のテクニック - 単語を表現する2つの主な方法は、以下の表にまとめられています。 + +
+ + +**45. [1-hot representation, Word embedding]** + +⟶[1-hot表現、単語埋め込み(単語分散表現)] + +
+ + +**46. [teddy bear, book, soft]** + +⟶[テディベア、本、柔らかい] + +
+ + +**47. [Noted ow, Naive approach, no similarity information, Noted ew, Takes into account words similarity]** + +⟶[owの表記、素朴なアプローチ、類似性のない情報、ewの表記、単語の類似性の考慮] + +
+
+
+**47. [Noted ow, Naive approach, no similarity information, Noted ew, Takes into account words similarity]**
+
+⟶[owと表記、素朴なアプローチ、類似度情報なし、ewと表記、単語の類似度を考慮]
+
+<br>
+ + +**49. Remark: learning the embedding matrix can be done using target/context likelihood models.** + +⟶注:埋め込み行列は、ターゲット/コンテキスト尤度モデルを使用して学習できます。 + +
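As a quick numerical illustration of item 48 (vocabulary and embedding sizes are made up): multiplying the embedding matrix E by a 1-hot vector ow just selects one column, which is why practical implementations use a lookup instead of an actual matrix product.

```python
import numpy as np

V, n = 5, 3                       # vocabulary size |V| and embedding dimension
rng = np.random.default_rng(0)
E = rng.standard_normal((n, V))   # embedding matrix

w = 2                             # index of some word in the vocabulary
o_w = np.zeros(V); o_w[w] = 1.0   # 1-hot representation o_w
e_w = E @ o_w                     # embedding e_w = E o_w

# The product is just a column lookup:
print(np.allclose(e_w, E[:, w]))  # True
```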
+ + +**50. Word embeddings** + +⟶単語の埋め込み + +
+ + +**51. Word2vec ― Word2vec is a framework aimed at learning word embeddings by estimating the likelihood that a given word is surrounded by other words. Popular models include skip-gram, negative sampling and CBOW.** + +⟶Word2vec - Word2vecは、ある単語が他の単語の周辺にある可能性を推定することで、単語の埋め込みの重みを学習することを目的としたフレームワークです。人気のあるモデルは、スキップグラム、ネガティブサンプリング、およびCBOWです。 + +
+ + +**52. [A cute teddy bear is reading, teddy bear, soft, Persian poetry, art]** + +⟶[かわいいテディベアが読んでいる、テディベア、柔らかい、ペルシャ詩、芸術] + +
+ + +**53. [Train network on proxy task, Extract high-level representation, Compute word embeddings]** + +⟶[代理タスクでのネットワークの訓練、高水準表現の抽出、単語埋め込み重みの計算] + +
+ + +**54. Skip-gram ― The skip-gram word2vec model is a supervised learning task that learns word embeddings by assessing the likelihood of any given target word t happening with a context word c. By noting θt a parameter associated with t, the probability P(t|c) is given by:** + +⟶スキップグラム - スキップグラムword2vecモデルは、あるターゲット単語tがコンテキスト単語cと一緒に出現する確率を評価することで単語の埋め込みを学習する教師付き学習タスクです。tに関するパラメータをθtと表記すると、その確率P(t|c) は以下の式で与えられます: + +
+ + +**55. Remark: summing over the whole vocabulary in the denominator of the softmax part makes this model computationally expensive. CBOW is another word2vec model using the surrounding words to predict a given word.** + +⟶注:softmax部分の分母の語彙全体を合計するため、このモデルの計算コストは高くなります。 CBOWは、ある単語を予測するため周辺単語を使用する別のタイプのword2vecモデルです。 + +
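A sketch of the skip-gram probability P(t|c) as a softmax over scores θt·ec; the parameters below are random stand-ins for learned values, and the sizes are toy ones:

```python
import numpy as np

def skipgram_probs(theta, e_c):
    """P(t|c) for every target t: softmax of the scores θt·e_c."""
    scores = theta @ e_c            # one score per vocabulary word
    scores -= scores.max()          # shift for numerical stability
    exp_s = np.exp(scores)
    return exp_s / exp_s.sum()

rng = np.random.default_rng(0)
V, n = 6, 4
theta = rng.standard_normal((V, n))  # one parameter vector θt per target word
p = skipgram_probs(theta, rng.standard_normal(n))
print(p.sum())  # ≈ 1.0 — the sum over the whole vocabulary is what makes this costly
```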
+ + +**56. Negative sampling ― It is a set of binary classifiers using logistic regressions that aim at assessing how a given context and a given target words are likely to appear simultaneously, with the models being trained on sets of k negative examples and 1 positive example. Given a context word c and a target word t, the prediction is expressed by:** + +⟶ネガティブサンプリング - ロジスティック回帰を使用したバイナリ分類器のセットで、特定の文脈とあるターゲット単語が同時に出現する確率を評価することを目的としています。モデルはk個のネガティブな例と1つのポジティブな例のセットで訓練されます。コンテキスト単語cとターゲット単語tが与えられると、予測は次のように表現されます。 + +
+ + +**57. Remark: this method is less computationally expensive than the skip-gram model.** + +⟶注:この方法の計算コストは、スキップグラムモデルよりも少ないです。 + +
+ + +**57bis. GloVe ― The GloVe model, short for global vectors for word representation, is a word embedding technique that uses a co-occurence matrix X where each Xi,j denotes the number of times that a target i occurred with a context j. Its cost function J is as follows:** + +⟶GloVe - GloVeモデルは、単語表現のためのグローバルベクトルの略で、共起行列Xを使用する単語の埋め込み手法です。ここで、各Xi,jは、ターゲットiがコンテキストjで発生した回数を表します。そのコスト関数Jは以下の通りです: + +
+ + +**58. where f is a weighting function such that Xi,j=0⟹f(Xi,j)=0. +Given the symmetry that e and θ play in this model, the final word embedding e(final)w is given by:** + +⟶ここで、fはXi,j =0⟹f(Xi,j)= 0となるような重み関数です。このモデルでeとθが果たす対称性を考えると、最後の単語の埋め込みe(final)wは以下のようになります: + +
+ + +**59. Remark: the individual components of the learned word embeddings are not necessarily interpretable.** + +⟶注:学習された単語の埋め込みの個々の要素は、必ずしも解釈可能ではありません。 + +
+ + +**60. Comparing words** + +⟶単語の比較 + +
+ + +**61. Cosine similarity ― The cosine similarity between words w1 and w2 is expressed as follows:** + +⟶コサイン類似度 - 単語w1とw2のコサイン類似度は次のように表されます + +
+ + +**62. Remark: θ is the angle between words w1 and w2.** + +⟶注:θは単語w1とw2の間の角度です。 + +
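Item 61's formula translates directly into code; a small sketch:

```python
import numpy as np

def cosine_similarity(w1, w2):
    """cos(θ) = w1·w2 / (‖w1‖ ‖w2‖)."""
    return float(w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2)))

print(cosine_similarity(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 0.0 (orthogonal)
print(cosine_similarity(np.array([1.0, 2.0]), np.array([2.0, 4.0])))  # ≈ 1.0 (same direction)
```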
+ + +**63. t-SNE ― t-SNE (t-distributed Stochastic Neighbor Embedding) is a technique aimed at reducing high-dimensional embeddings into a lower dimensional space. In practice, it is commonly used to visualize word vectors in the 2D space.** + +⟶ t-SNE − t-SNE(t−分布型確率的近傍埋め込み)は、高次元埋め込みから低次元埋め込み空間への次元削減を目的とした手法です。実際には、2次元空間で単語ベクトルを視覚化するために使用されます。 + +
+ + +**64. [literature, art, book, culture, poem, reading, knowledge, entertaining, loveable, childhood, kind, teddy bear, soft, hug, cute, adorable]** + +⟶[文学、芸術、本、文化、詩、読書、知識、面白い、愛らしい、幼年期、親切、テディベア、柔らかい、抱擁、かわいい、愛らしい] + +
+ + +**65. Language model** + +⟶言語モデル + +
+ + +**66. Overview ― A language model aims at estimating the probability of a sentence P(y).** + +⟶概要 - 言語モデルは文の確率P(y)を推定することを目的としています。 + +
+ + +**67. n-gram model ― This model is a naive approach aiming at quantifying the probability that an expression appears in a corpus by counting its number of appearance in the training data.** + +⟶n-gramモデル - このモデルは、トレーニングデータでの出現数を数えることによって、ある表現がコーパスに出現する確率を定量化することを目的とした単純なアプローチです。 + +
+ + +**68. Perplexity ― Language models are commonly assessed using the perplexity metric, also known as PP, which can be interpreted as the inverse probability of the dataset normalized by the number of words T. The perplexity is such that the lower, the better and is defined as follows:** + +⟶パープレキシティ - 言語モデルは一般的に、PPとも呼ばれるパープレキシティメトリックを使用して評価されます。これは、単語数Tにより正規化されたデータセットの逆確率と解釈できます。パープレキシティは低いほど良く、次のように定義されます: +(訳注:パープレキシティの数値はより低いものがより選択しやすい単語として評価されます。10であれば10個の中から1つ、10000であれば10000個の中から1つ選択されます。) + +
+ + +**69. Remark: PP is commonly used in t-SNE.** + +⟶注:PPはt-SNEで一般的に使用されています。 + +
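Since PP is the inverse probability of the dataset normalized by the word count T, it can be computed from per-token probabilities as the exponential of the negative mean log-probability; a small sketch:

```python
import math

def perplexity(token_probs):
    """PP = P(dataset)^(-1/T), i.e. exp of the negative mean log-probability."""
    T = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / T)

# A model that always spreads its mass uniformly over 4 candidates has PP = 4:
print(perplexity([0.25, 0.25, 0.25]))  # ≈ 4.0
```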
+ + +**70. Machine translation** + +⟶機械翻訳 + +
+ + +**71. Overview ― A machine translation model is similar to a language model except it has an encoder network placed before. For this reason, it is sometimes referred as a conditional language model. The goal is to find a sentence y such that:** + +⟶概要 - 機械翻訳モデルは、エンコーダーネットワークのロジックが最初に付加されている以外は、言語モデルと似ています。このため、条件付き言語モデルと呼ばれることもあります。目的は次のような文yを見つけることです: + +
+ + +**72. Beam search ― It is a heuristic search algorithm used in machine translation and speech recognition to find the likeliest sentence y given an input x.** + +⟶ビーム検索 - 入力xが与えられたとき最も可能性の高い文yを見つけるために、機械翻訳と音声認識で使用されるヒューリスティック探索アルゴリズムです。 + +
+
+
+**73. [Step 1: Find top B likely words y<1>, Step 2: Compute conditional probabilities y<k>|x,y<1>,...,y<k−1>, Step 3: Keep top B combinations x,y<1>,...,y<k>, End process at a stop word]**
+
+⟶[ステップ1:上位B個の高い確率を持つ単語y<1>を見つけ、ステップ2:条件付き確率y<k>|x,y<1>,...,y<k−1>を計算し、ステップ3:上位B個の組み合わせx,y<1>,...,y<k>を保持し、あるストップワードでプロセスを終了します]
+
+<br>
+ + +**74. Remark: if the beam width is set to 1, then this is equivalent to a naive greedy search.** + +⟶注意:ビーム幅が1に設定されている場合、これは単純な貪欲法と同等です。 + +
+ + +**75. Beam width ― The beam width B is a parameter for beam search. Large values of B yield to better result but with slower performance and increased memory. Small values of B lead to worse results but is less computationally intensive. A standard value for B is around 10.** + +⟶ビーム幅 - ビーム幅Bはビーム検索のパラメータです。 Bの値を大きくするとより良い結果が得られますが、探索パフォーマンスは低下し、メモリ使用量が増加します。 Bの値が小さいと結果が悪くなりますが、計算量は少なくなります。 Bの標準値は10前後です。 + +
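A toy beam search over a hypothetical bigram next-token model (the vocabulary, probabilities and stop word `</s>` below are invented for illustration); setting B=1 reduces it to greedy search, as the remark states:

```python
import math

# Hypothetical next-token model: P(next | last token) over a tiny vocabulary.
probs = {
    "<s>": {"a": 0.6, "b": 0.4},
    "a":   {"b": 0.5, "</s>": 0.5},
    "b":   {"a": 0.1, "</s>": 0.9},
}

def beam_search(B, max_len=5):
    beams = [(["<s>"], 0.0)]                       # (sequence, log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, lp in beams:
            if seq[-1] == "</s>":                  # end process at the stop word
                candidates.append((seq, lp))
                continue
            for tok, p in probs[seq[-1]].items():  # compute conditional probabilities
                candidates.append((seq + [tok], lp + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:B]  # keep top B
    return beams[0][0]

print(beam_search(B=3))
print(beam_search(B=1))  # greedy search
```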
+ + +**76. Length normalization ― In order to improve numerical stability, beam search is usually applied on the following normalized objective, often called the normalized log-likelihood objective, defined as:** + +⟶文章の長さの正規化 - 数値の安定性を向上させるために、ビーム検索は通常、正規化(対数尤度正規化)された目的関数に対して適用され、次のように定義されます: + +
+ + +**77. Remark: the parameter α can be seen as a softener, and its value is usually between 0.5 and 1.** + +⟶注:パラメータαは緩衝パラメータと見なされ、その値は通常、0.5から1の間です。 + +
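The normalized objective divides the summed log-likelihood by Tαy; a minimal sketch (α=0.7 is one common choice inside the 0.5-1 range mentioned in the remark):

```python
import math

def normalized_log_likelihood(token_probs, alpha=0.7):
    """(1/T^α) Σ log p — the normalized log-likelihood objective."""
    T = len(token_probs)
    return sum(math.log(p) for p in token_probs) / (T ** alpha)

# With α < 1, length is only partially penalized, so longer hypotheses are not
# discarded as aggressively as under the raw sum of log-probabilities:
print(normalized_log_likelihood([0.5, 0.5]))  # two tokens
print(normalized_log_likelihood([0.5] * 6))   # six tokens
```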
+ + +**78. Error analysis ― When obtaining a predicted translation ˆy that is bad, one can wonder why we did not get a good translation y∗ by performing the following error analysis:** + +⟶エラー分析 - 予測されたˆyの翻訳が良くない場合、以下のようなエラー分析を実行することで、なぜy∗のような良い翻訳を得られなかったのか考えることが可能です: + +
+ + +**79. [Case, Root cause, Remedies]** + +⟶[症例、根本原因、改善策] + +
+ + +**80. [Beam search faulty, RNN faulty, Increase beam width, Try different architecture, Regularize, Get more data]** + +⟶[ビーム検索の誤り、RNNの誤り、ビーム幅の拡大、さまざまなアーキテクチャを試す、正則化、データをさらに取得] + +
+ + +**81. Bleu score ― The bilingual evaluation understudy (bleu) score quantifies how good a machine translation is by computing a similarity score based on n-gram precision. It is defined as follows:** + +⟶Bleuスコア - Bleu(Bilingual evaluation understudy)スコアは、n-gramの精度に基づき類似性スコアを計算することで、機械翻訳がどれほど優れているかを定量化します。以下のように定義されています: + +
+ + +**82. where pn is the bleu score on n-gram only defined as follows:** + +⟶ここで、pnはn-gramでのbleuスコアで下記のようにだけ定義されています: + +
+ + +**83. Remark: a brevity penalty may be applied to short predicted translations to prevent an artificially inflated bleu score.** + +⟶注:人為的に水増しされたブルースコアを防ぐために、短い翻訳評価には簡潔さへのペナルティが適用される場合があります。 + +
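The n-gram precision pn underlying the bleu score can be sketched with clipped counts (a simplified illustration, without the brevity penalty of the remark):

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """p_n: clipped n-gram matches between a candidate and a reference."""
    ngrams = lambda s: Counter(tuple(s[i:i + n]) for i in range(len(s) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    matched = sum(min(c, ref[g]) for g, c in cand.items())  # clipped matches
    return matched / max(sum(cand.values()), 1)

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(round(ngram_precision(cand, ref, 1), 3))  # 0.833 (5 of 6 unigrams match)
print(round(ngram_precision(cand, ref, 2), 3))  # 0.6   (3 of 5 bigrams match)
```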
+ + +**84. Attention** + +⟶アテンション + +
+
+
+**85. Attention model ― This model allows an RNN to pay attention to specific parts of the input that is considered as being important, which improves the performance of the resulting model in practice. By noting α<t,t′> the amount of attention that the output y<t> should pay to the activation a<t′> and c<t> the context at time t, we have:**
+
+⟶アテンションモデル - このモデルを使用するとRNNは重要であると考えられる入力の特定部分に注目することができ、得られるモデルの性能が実際に向上します。時刻tにおいて、出力y<t>が活性化a<t′>に払うべき注意量をα<t,t′>、コンテキストをc<t>と表記すると次のようになります:
+
+<br>
+ + +**86. with** + +⟶および + +
+ + +**87. Remark: the attention scores are commonly used in image captioning and machine translation.** + +⟶注:アテンションスコアは、一般的に画像のキャプション作成および機械翻訳で使用されています。 + +
+ + +**88. A cute teddy bear is reading Persian literature.** + +⟶かわいいテディベアがペルシャ文学を読んでいます。 + +
+
+
+**89. Attention weight ― The amount of attention that the output y<t> should pay to the activation a<t′> is given by α<t,t′> computed as follows:**
+
+⟶アテンションの重み - 出力y<t>が活性化a<t′>に払うべき注意量α<t,t′>は次のように計算されます。
+
+<br>
+ + +**90. Remark: computation complexity is quadratic with respect to Tx.** + +⟶注:この計算の複雑さはTxに関して2次です。 + +
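The attention weights α are a softmax over the scores e<t,t′>, and the context is the α-weighted sum of the activations; a small NumPy sketch with made-up sizes:

```python
import numpy as np

def attention(scores, activations):
    """α from a softmax over the scores, then context c = Σ α a."""
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                # attention weights sum to 1
    return alpha, alpha @ activations  # context vector

rng = np.random.default_rng(0)
Tx, n_a = 5, 3
alpha, c = attention(rng.standard_normal(Tx), rng.standard_normal((Tx, n_a)))
print(alpha.sum())  # ≈ 1.0
```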
+ + +**91. The Deep Learning cheatsheets are now available in [target language].** + +⟶ディープラーニングのチートシートが[日本語]で利用可能になりました。 + +
+ +**92. Original authors** + +⟶原著者 + +
+ +**93. Translated by X, Y and Z** + +⟶X・Y・Z 訳 + +
+ +**94. Reviewed by X, Y and Z** + +⟶X・Y・Z 校正 + +
+ +**95. View PDF version on GitHub** + +⟶GitHubでPDF版を見る + +
+ +**96. By X and Y** + +⟶X・Y 著 + +
diff --git a/ko/cs-229-linear-algebra.md b/ko/cs-229-linear-algebra.md new file mode 100644 index 000000000..2342a1619 --- /dev/null +++ b/ko/cs-229-linear-algebra.md @@ -0,0 +1,340 @@ +**1. Linear Algebra and Calculus refresher** + +⟶ 선형대수와 미적분학 복습 + +
+ +**2. General notations** + +⟶ 일반적인 표기법 + +
+ +**3. Definitions** + +⟶ 정의 + +
+ +**4. Vector ― We note x∈Rn a vector with n entries, where xi∈R is the ith entry:** + +⟶ 벡터 - x∈Rn는 n개의 요소를 가진 벡터이고, xi∈R는 i번째 요소이다. + +
+ +**5. Matrix ― We note A∈Rm×n a matrix with m rows and n columns, where Ai,j∈R is the entry located in the ith row and jth column:** + +⟶ 행렬 - A∈Rm×n는 m개의 행과 n개의 열을 가진 행렬이고, Ai,j∈R는 i번째 행, j번째 열에 있는 원소이다. + +
+ +**6. Remark: the vector x defined above can be viewed as a n×1 matrix and is more particularly called a column-vector.** + +⟶ 비고 : 위에서 정의된 벡터 x는 n×1행렬로 볼 수 있으며, 열벡터라고도 불린다. + +
+ +**7. Main matrices** + +⟶ 주요 행렬 + +
+ +**8. Identity matrix ― The identity matrix I∈Rn×n is a square matrix with ones in its diagonal and zero everywhere else:** + +⟶ 단위행렬 - 단위행렬 I∈Rn×n는 대각성분이 모두 1이고 대각성분이 아닌 성분은 모두 0인 정사각행렬이다. + +
+ +**9. Remark: for all matrices A∈Rn×n, we have A×I=I×A=A.** + +⟶ 비고 : 모든 행렬 A∈Rn×n에 대하여, A×I=I×A=A를 만족한다. + +
+ +**10. Diagonal matrix ― A diagonal matrix D∈Rn×n is a square matrix with nonzero values in its diagonal and zero everywhere else:** + +⟶ 대각행렬 - 대각행렬 D∈Rn×n는 대각성분은 모두 0이 아니고, 대각성분이 아닌 성분은 모두 0인 정사각행렬이다. + +
+ +**11. Remark: we also note D as diag(d1,...,dn).** + +⟶ 비고 : D를 diag(d1,...,dn)라고도 표시한다. + +
+ +**12. Matrix operations** + +⟶ 행렬 연산 + +
+ +**13. Multiplication** + +⟶ 곱셈 + +
+ +**14. Vector-vector ― There are two types of vector-vector products:** + +⟶ 벡터-벡터 – 벡터 간 연산에는 두 가지 종류가 있다. + +
+ +**15. inner product: for x,y∈Rn, we have:** + +⟶ 내적 : x,y∈Rn에 대하여, + +
+ +**16. outer product: for x∈Rm,y∈Rn, we have:** + +⟶ 외적 : x∈Rm,y∈Rn에 대하여, + +
+ +**17. Matrix-vector ― The product of matrix A∈Rm×n and vector x∈Rn is a vector of size Rn, such that:** + +⟶ 행렬-벡터 - 행렬 A∈Rm×n와 벡터 x∈Rn의 곱은 다음을 만족하는 Rn크기의 벡터이다. + +
+ +**18. where aTr,i are the vector rows and ac,j are the vector columns of A, and xi are the entries of x.** + +⟶ aTr,i는 A의 벡터행, ac,j는 A의 벡터열, xi는 x의 성분이다. + +
+ +**19. Matrix-matrix ― The product of matrices A∈Rm×n and B∈Rn×p is a matrix of size Rn×p, such that:** + +⟶ 행렬-행렬 - 행렬 A∈Rm×n와 행렬 B∈Rn×p의 곱은 다음을 만족하는 Rn×p크기의 행렬이다. + +
+ +**20. where aTr,i,bTr,i are the vector rows and ac,j,bc,j are the vector columns of A and B respectively** + +⟶ aTr,i,bTr,i는 A,B의 벡터행, ac,j,bc,j는 A,B의 벡터열이다. + +
+ +**21. Other operations** + +⟶ 그 외 연산 + +
+ +**22. Transpose ― The transpose of a matrix A∈Rm×n, noted AT, is such that its entries are flipped:** + +⟶ 전치 - 행렬 A∈Rm×n의 전치 AT는 모든 성분을 뒤집은 것이다. + +
+
+**23. Remark: for matrices A,B, we have (AB)T=BTAT**
+
+⟶ 비고 : 행렬 A,B에 대하여, (AB)T=BTAT가 성립한다.
+
+<br>
+ +**24. Inverse ― The inverse of an invertible square matrix A is noted A−1 and is the only matrix such that:** + +⟶ 역행렬 - 가역행렬 A의 역행렬은 A-1로 표기하며, 유일하다. + +
+
+**25. Remark: not all square matrices are invertible. Also, for matrices A,B, we have (AB)−1=B−1A−1**
+
+⟶ 비고 : 모든 정사각행렬이 역행렬을 갖는 것은 아니다. 또한, 행렬 A,B에 대하여 (AB)−1=B−1A−1가 성립한다.
+
+<br>
+ +**26. Trace ― The trace of a square matrix A, noted tr(A), is the sum of its diagonal entries:** + +⟶ 대각합 – 정사각행렬 A의 대각합 tr(A)는 대각성분의 합이다. + +
+
+**27. Remark: for matrices A,B, we have tr(AT)=tr(A) and tr(AB)=tr(BA)**
+
+⟶ 비고 : 행렬 A,B에 대하여, tr(AT)=tr(A)와 tr(AB)=tr(BA)가 성립한다.
+
+<br>
+
+**28. Determinant ― The determinant of a square matrix A∈Rn×n, noted |A| or det(A) is expressed recursively in terms of A∖i,∖j, which is the matrix A without its ith row and jth column, as follows:**
+
+⟶ 행렬식 - 정사각행렬 A∈Rn×n의 행렬식 |A| 또는 det(A)는, A에서 i번째 행과 j번째 열을 제거한 행렬인 A∖i,∖j를 이용하여 다음과 같이 재귀적으로 표현된다:
+
+<br>
+
+**29. Remark: A is invertible if and only if |A|≠0. Also, |AB|=|A||B| and |AT|=|A|.**
+
+⟶ 비고 : A가 가역일 필요충분조건은 |A|≠0이다. 또한 |AB|=|A||B|와 |AT|=|A|가 성립한다.
+
+<br>
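The determinant identities of items 28-29 are easy to check numerically; a quick NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                       # True
# A is invertible exactly when det(A) ≠ 0, so the inverse exists here:
print(np.isclose(A @ np.linalg.inv(A), np.eye(3)).all())                      # True
```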
+ +**30. Matrix properties** + +⟶ 행렬의 성질 + +
+ +**31. Definitions** + +⟶ 정의 + +
+ +**32. Symmetric decomposition ― A given matrix A can be expressed in terms of its symmetric and antisymmetric parts as follows:** + +⟶ 대칭 분해 - 주어진 행렬 A는 다음과 같이 대칭과 비대칭 부분으로 표현될 수 있다. + +
+ +**33. [Symmetric, Antisymmetric]** + +⟶ [대칭, 비대칭] + +
+ +**34. Norm ― A norm is a function N:V⟶[0,+∞] where V is a vector space, and such that for all x,y∈V, we have:** + +⟶ 노름 – V는 벡터공간일 때, 노름은 모든 x,y∈V에 대해 다음을 만족하는 함수 N:V⟶[0,+∞]이다. + +
+
+**35. N(ax)=|a|N(x) for a scalar**
+
+⟶ 스칼라 a에 대해서 N(ax)=|a|N(x)를 만족한다.
+
+<br>
+ +**36. if N(x)=0, then x=0** + +⟶ N(x)=0이면 x=0이다. + +
+
+**37. For x∈V, the most commonly used norms are summed up in the table below:**
+
+⟶ x∈V에 대해, 가장 일반적으로 사용되는 노름이 아래 표에 요약되어 있다.
+
+<br>
+
+**38. [Norm, Notation, Definition, Use case]**
+
+⟶ [노름, 표기법, 정의, 유스케이스]
+
+<br>
+ +**39. Linearly dependence ― A set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others.** + +⟶ 일차 종속 - 집합 내의 벡터 중 하나가 다른 벡터들의 선형결합으로 정의될 수 있으면, 그 벡터 집합은 일차 종속이라고 한다. + +
+ +**40. Remark: if no vector can be written this way, then the vectors are said to be linearly independent** + +⟶ 비고 : 어느 벡터도 이런 방식으로 표현될 수 없다면, 그 벡터들은 일차 독립이라고 한다. + +
+ +**41. Matrix rank ― The rank of a given matrix A is noted rank(A) and is the dimension of the vector space generated by its columns. This is equivalent to the maximum number of linearly independent columns of A.** + +⟶ 행렬 랭크 - 주어진 행렬 A의 랭크는 열에 의해 생성된 벡터공간의 차원이고, rank(A)라고 쓴다. 이는 A의 선형독립인 열의 최대 수와 동일하다. + +
+ +**42. Positive semi-definite matrix ― A matrix A∈Rn×n is positive semi-definite (PSD) and is noted A⪰0 if we have:** + +⟶ 양의 준정부호 행렬 – 행렬 A∈Rn×n는 다음을 만족하면 양의 준정부호(PSD)라고 하고 A⪰0라고 쓴다. + +
+ +**43. Remark: similarly, a matrix A is said to be positive definite, and is noted A≻0, if it is a PSD matrix which satisfies for all non-zero vector x, xTAx>0.** + +⟶ 비고 : 마찬가지로 PSD 행렬이 모든 0이 아닌 벡터 x에 대하여 xTAx>0를 만족하면 행렬 A를 양의 정부호라고 말하고 A≻0라고 쓴다. + +
+ +**44. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** + +⟶ 고유값, 고유벡터 - 주어진 행렬 A∈Rn×n에 대하여, 다음을 만족하는 벡터 z∈Rn∖{0}가 존재하면, z를 고유벡터라고 부르고, λ를 A의 고유값이라고 부른다. + +
+ +**45. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** + +⟶ 스펙트럼 정리 – A∈Rn×n라고 하자. A가 대칭이면, A는 실수 직교행렬 U∈Rn×n에 의해 대각화 가능하다. Λ=diag(λ1,...,λn)인 것에 주목하면, 다음을 만족한다. + +
+ +**46. diagonal** + +⟶ 대각 + +
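The spectral theorem of item 45 can be verified with `numpy.linalg.eigh` (an illustrative check on a random symmetric matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2                  # symmetric matrix
lam, U = np.linalg.eigh(A)         # eigenvalues Λ and orthogonal U

print(np.allclose(A, U @ np.diag(lam) @ U.T))  # True: A = U Λ U^T
print(np.allclose(U.T @ U, np.eye(3)))         # True: U is orthogonal
```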
+ +**47. Singular-value decomposition ― For a given matrix A of dimensions m×n, the singular-value decomposition (SVD) is a factorization technique that guarantees the existence of U m×m unitary, Σ m×n diagonal and V n×n unitary matrices, such that:** + +⟶ 특이값 분해 – 주어진 m×n차원 행렬 A에 대하여, 특이값 분해(SVD)는 다음과 같이 U m×m 유니터리와 Σ m×n 대각 및 V n×n 유니터리 행렬의 존재를 보증하는 인수분해 기술이다. + +
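Item 47's factorization can be checked with `numpy.linalg.svd`; rebuilding A from the three factors is a useful sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(A)        # U: 4×4 unitary, s: diagonal of Σ, Vt: 3×3 unitary

Sigma = np.zeros((4, 3))           # rebuild the rectangular Σ from its diagonal
Sigma[:3, :3] = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))  # True: A = U Σ V^T
```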
+ +**48. Matrix calculus** + +⟶ 행렬 미적분 + +
+ +**49. Gradient ― Let f:Rm×n→R be a function and A∈Rm×n be a matrix. The gradient of f with respect to A is a m×n matrix, noted ∇Af(A), such that:** + +⟶ 그라디언트 – f:Rm×n→R는 함수이고 A∈Rm×n는 행렬이라 하자. A에 대한 f의 그라디언트 ∇Af(A)는 다음을 만족하는 m×n 행렬이다. + +
+
+**50. Remark: the gradient of f is only defined when f is a function that returns a scalar.**
+
+⟶ 비고 : f의 그라디언트는 f가 스칼라를 반환하는 함수일 때만 정의된다.
+
+<br>
+ +**51. Hessian ― Let f:Rn→R be a function and x∈Rn be a vector. The hessian of f with respect to x is a n×n symmetric matrix, noted ∇2xf(x), such that:** + +⟶ 헤시안 – f:Rn→R는 함수이고 x∈Rn는 벡터라고 하자. x에 대한 f의 헤시안 ∇2xf(x)는 다음을 만족하는 n×n 대칭행렬이다. + +
+ +**52. Remark: the hessian of f is only defined when f is a function that returns a scalar** + +⟶ 비고 : f의 헤시안은 f가 스칼라를 반환하는 함수일 때만 정의된다. + +
+ +**53. Gradient operations ― For matrices A,B,C, the following gradient properties are worth having in mind:** + +⟶ 그라디언트 연산 – 행렬 A,B,C에 대하여, 다음 그라디언트 성질을 염두해두는 것이 좋다. + +
+ +**54. [General notations, Definitions, Main matrices]** + +⟶ [일반적인 표기법, 정의, 주요 행렬] + +
+ +**55. [Matrix operations, Multiplication, Other operations]** + +⟶ [행렬 연산, 곱셈, 다른 연산] + +
+ +**56. [Matrix properties, Norm, Eigenvalue/Eigenvector, Singular-value decomposition]** + +⟶ [행렬 성질, 노름, 고유값/고유벡터, 특이값 분해] + +
+ +**57. [Matrix calculus, Gradient, Hessian, Operations]** + +⟶ [행렬 미적분, 그라디언트, 헤시안, 연산] + diff --git a/ko/cs-229-machine-learning-tips-and-tricks.md b/ko/cs-229-machine-learning-tips-and-tricks.md new file mode 100644 index 000000000..d6732e145 --- /dev/null +++ b/ko/cs-229-machine-learning-tips-and-tricks.md @@ -0,0 +1,285 @@ +**1. Machine Learning tips and tricks cheatsheet** + +⟶머신러닝 팁과 트릭 치트시트 + +
+ +**2. Classification metrics** + +⟶분류 측정 항목 + +
+ +**3. In a context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** + +⟶이진 분류 상황에서 모델의 성능을 평가하기 위해 눈 여겨 봐야하는 주요 측정 항목이 여기에 있습니다. + +
+ +**4. Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:** + +⟶혼동 행렬 ― 혼동 행렬은 모델의 성능을 평가할 때, 보다 큰 그림을 보기위해 사용됩니다. 이는 다음과 같이 정의됩니다. + +
+ +**5. [Predicted class, Actual class]** + +⟶[예측된 클래스, 실제 클래스] + +
+ +**6. Main metrics ― The following metrics are commonly used to assess the performance of classification models:** + +⟶주요 측정 항목들 ― 다음 측정 항목들은 주로 분류 모델의 성능을 평가할 때 사용됩니다. + +
+ +**7. [Metric, Formula, Interpretation]** + +⟶[측정 항목, 공식, 해석] + +
+ +**8. Overall performance of model** + +⟶전반적인 모델의 성능 + +
+ +**9. How accurate the positive predictions are** + +⟶예측된 양성이 정확한 정도 + +
+ +**10. Coverage of actual positive sample** + +⟶실제 양성의 예측 정도 + +
+ +**11. Coverage of actual negative sample** + +⟶실제 음성의 예측 정도 + +
+ +**12. Hybrid metric useful for unbalanced classes** + +⟶불균형 클래스에 유용한 하이브리드 측정 항목 + +
+ +**13. ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are are summed up in the table below:** + +⟶ROC(Receiver Operating Curve) ― ROC 곡선은 임계값의 변화에 따른 TPR 대 FPR의 플롯입니다. 이 측정 항목은 아래 표에 요약되어 있습니다: + +
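The formulas from the metrics table, written out for concreteness (the confusion-matrix counts below are invented):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)  # how accurate the positive predictions are
    recall = tp / (tp + fn)     # coverage of the actual positive samples
    f1 = 2 * precision * recall / (precision + recall)  # hybrid metric
    return accuracy, precision, recall, f1

print([round(m, 3) for m in classification_metrics(tp=8, fp=2, fn=2, tn=88)])
# [0.96, 0.8, 0.8, 0.8]
```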
+
+**13. ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are summed up in the table below:**
+
+⟶ROC(Receiver Operating Curve) ― ROC 곡선은 임계값의 변화에 따른 TPR 대 FPR의 플롯입니다. 이 측정 항목은 아래 표에 요약되어 있습니다:
+
+<br>
+ +**15. AUC ― The area under the receiving operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:** + +⟶AUC(Area Under the receiving operating Curve) ― AUC 또는 AUROC라고도 하는 이 측정 항목은 다음 그림과 같이 ROC 곡선 아래의 영역입니다: + +
+ +**16. [Actual, Predicted]** + +⟶[실제값, 예측된 값] + +
+ +**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** + +⟶기본 측정 항목 ― 회귀 모델 f가 주어졌을때, 다음의 측정 항목들은 모델의 성능을 평가할 때 주로 사용됩니다: + +
+ +**18. [Total sum of squares, Explained sum of squares, Residual sum of squares]** + +⟶[총 제곱합, 설명된 제곱합, 잔차 제곱합] + +
+ +**19. Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** + +⟶결정 계수 ― 종종 R2 또는 r2로 표시되는 결정 계수는 관측된 결과가 모델에 의해 얼마나 잘 재현되는지를 측정하는 측도로서 다음과 같이 정의됩니다: + +
+ +**20. Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** + +⟶주요 측정 항목들 ― 다음 측정 항목들은 주로 변수의 수를 고려하여 회귀 모델의 성능을 평가할 때 사용됩니다: + +
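R2=1−SSres/SStot follows directly from the sums of squares above; a small sketch with two edge cases:

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination: R² = 1 − SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)       # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
    return 1 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
print(r_squared(y, y))                       # 1.0 (perfect fit)
print(r_squared(y, np.full(4, np.mean(y))))  # 0.0 (always predicting the mean)
```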
+ +**21. where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.** + +⟶여기서 L은 가능도이고 ^σ2는 각각의 반응과 관련된 분산의 추정값입니다. + +
+ +**22. Model selection** + +⟶모델 선택 + +
+ +**23. Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** + +⟶어휘 ― 모델을 선택할 때 우리는 다음과 같이 가지고 있는 데이터를 세 부분으로 구분합니다: + +
+ +**24. [Training set, Validation set, Testing set]** + +⟶[학습 세트, 검증 세트, 테스트 세트] + +
+ +**25. [Model is trained, Model is assessed, Model gives predictions]** + +⟶[모델 훈련, 모델 평가, 모델 예측] + +
+ +**26. [Usually 80% of the dataset, Usually 20% of the dataset]** + +⟶[주로 데이터 세트의 80%, 주로 데이터 세트의 20%] + +
+ +**27. [Also called hold-out or development set, Unseen data]** + +⟶[홀드아웃 또는 개발 세트라고도하는, 보지 않은 데이터] + +
+ +**28. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** + +⟶모델이 선택되면 전체 데이터 세트에 대해 학습을 하고 보지 않은 데이터에서 테스트합니다. 이는 아래 그림에 나타나있습니다. + +
+ +**29. Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** + +⟶교차-검증 ― CV라고도하는 교차-검증은 초기의 학습 세트에 지나치게 의존하지 않는 모델을 선택하는데 사용되는 방법입니다. 다양한 유형이 아래 표에 요약되어 있습니다: + +
+ +**30. [Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** + +⟶[k-1 폴드에 대한 학습과 나머지 1폴드에 대한 평가, n-p개 관측치에 대한 학습과 나머지 p개 관측치에 대한 평가] + +
+ +**31. [Generally k=5 or 10, Case p=1 is called leave-one-out]** + +⟶[일반적으로 k=5 또는 10, p=1인 케이스는 leave-one-out] + +
+ +**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.** + +⟶가장 일반적으로 사용되는 방법은 k-폴드 교차-검증이라고하며 이는 학습 데이터를 k개의 폴드로 분할하고, 그 중 k-1개의 폴드로 모델을 학습하는 동시에 나머지 1개의 폴드로 모델을 검증합니다. 이 작업을 k번 수행합니다. 오류는 k 폴드에 대해 평균화되고 교차-검증 오류라고 부릅니다. + +
+ +**33. Regularization ― The regularization procedure aims at avoiding the model to overfit the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:** + +⟶정규화 ― 정규화 절차는 데이터에 대한 모델의 과적합을 피하고 분산이 커지는 문제를 처리하는 것을 목표로 합니다. 다음의 표는 일반적으로 사용되는 정규화 기법의 여러 유형을 요약한 것입니다: + +
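The k-fold procedure described above can be sketched as follows (index bookkeeping only; training an actual model on each split is left out):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split n shuffled sample indices into k folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

folds = kfold_indices(n=10, k=5)
for i, fold in enumerate(folds):
    train = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # train on `train` (k−1 folds), evaluate on `fold`, then average the k errors
    assert len(train) + len(fold) == 10
print([len(f) for f in folds])  # [2, 2, 2, 2, 2]
```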
+ +**34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** + +⟶[계수를 0으로 축소, 변수 선택에 좋음, 계수를 작게 함, 변수 선택과 작은 계수 간의 트래이드오프] + +
+ +**35. Diagnostics** + +⟶진단 + +
+ +**36. Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.** + +⟶편향 ― 모델의 편향은 기대되는 예측과 주어진 데이터 포인트에 대해 예측하려고하는 올바른 모델 간의 차이입니다. + +
+ +**37. Variance ― The variance of a model is the variability of the model prediction for given data points.** + +⟶분산 ― 모델의 분산은 주어진 데이터 포인트에 대한 모델 예측의 가변성입니다. + +
+ +**38. Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** + +⟶편향/분산 트래이드오프 ― 모델이 간단할수록 편향이 높아지고 모델이 복잡할수록 분산이 커집니다. + +
+ +**39. [Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]** + +⟶[증상, 회귀 일러스트레이션, 분류 일러스트레이션, 딥러닝 일러스트레이션, 가능한 처리방법] + +
+ +**40. [High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]** + +⟶[높은 학습 오류, 테스트 오류에 가까운 학습 오류, 높은 편향, 테스트 에러 보다 약간 낮은 학습 오류, 매우 낮은 학습 오류, 테스트 오류보다 훨씬 낮은 학습 오류, 높은 분산] + +
+ +**41. [Complexify model, Add more features, Train longer, Perform regularization, Get more data]** + +⟶[모델 복잡화, 특징 추가, 학습 증대, 정규화 수행, 추가 데이터 수집] + +
+ +**42. Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** + +⟶오류 분석 ― 오류 분석은 현재 모델과 완벽한 모델 간의 성능 차이의 근본 원인을 분석합니다. + +
+ +**43. Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** + +⟶애블러티브 분석 ― 애블러티브 분석은 현재 모델과 베이스라인 모델 간의 성능 차이의 근본 원인을 분석합니다. + +
+ +**44. Regression metrics** + +⟶회귀 측정 항목 + +
+ +**45. [Classification metrics, confusion matrix, accuracy, precision, recall, F1 score, ROC]** + +⟶[분류 측정 항목, 혼동 행렬, 정확도, 정밀도, 리콜, F1 스코어, ROC] + +
+ +**46. [Regression metrics, R squared, Mallow's CP, AIC, BIC]** + +⟶[회귀 측정 항목, R 스퀘어, 맬로우의 CP, AIC, BIC] + +
+ +**47. [Model selection, cross-validation, regularization]** + +⟶[모델 선택, 교차-검증, 정규화] + +
+ +**48. [Diagnostics, Bias/variance tradeoff, error/ablative analysis]** + +⟶[진단, 편향/분산 트래이드오프, 오류/애블러티브 분석] diff --git a/ko/cs-229-probability.md b/ko/cs-229-probability.md new file mode 100644 index 000000000..53ec90c53 --- /dev/null +++ b/ko/cs-229-probability.md @@ -0,0 +1,381 @@ + +**1. Probabilities and Statistics refresher** + +⟶확률과 통계 + +
+ +**2. Introduction to Probability and Combinatorics** + +⟶확률과 조합론 소개 + +
+ +**3. Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by S.** + +⟶표본 공간 ― 시행의 가능한 모든 결과 집합은 시행의 표본 공간으로 알려져 있으며 S로 표기합니다. + +
+ +**4. Event ― Any subset E of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.** + +⟶사건 ― 표본 공간의 모든 부분 집합 E를 사건이라고 합니다. 즉, 사건은 시행 가능한 결과로 구성된 집합입니다. 시행 결과가 E에 포함된다면, E가 발생했다고 이야기합니다. + +
+ +**5. Axioms of probability ― For each event E, we denote P(E) as the probability of event E occuring.** + +⟶확률의 공리 ― 각 사건 E에 대하여, 우리는 사건 E가 발생할 확률을 P(E)로 나타냅니다. + +
+ +**6. Axiom 1 ― Every probability is between 0 and 1 included, i.e:** + +⟶공리 1 ― 모든 확률은 0과 1사이에 포함됩니다, 즉: + +
+ +**7. Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** + +⟶공리 2 ― 전체 표본 공간에서 적어도 하나의 근원 사건이 발생할 확률은 1입니다. 즉: + +
+ +**8. Axiom 3 ― For any sequence of mutually exclusive events E1,...,En, we have:** + +⟶공리 3 ― 서로 배반인 어떤 연속적인 사건 E1,...,En 에 대하여, 우리는 다음을 가집니다: + +
+ +**9. Permutation ― A permutation is an arrangement of r objects from a pool of n objects, in a given order. The number of such arrangements is given by P(n,r), defined as:** + +⟶순열(Permutation) ― 순열은 n개의 객체들로부터 r개의 객체들의 순서를 고려한 배열입니다. 그러한 배열의 수는 P (n, r)에 의해 주어지며, 다음과 같이 정의됩니다: + +
+ +**10. Combination ― A combination is an arrangement of r objects from a pool of n objects, where the order does not matter. The number of such arrangements is given by C(n,r), defined as:** + +⟶조합(Combination) ― 조합은 n개의 객체들로부터 r개의 객체들의 순서를 고려하지 않은 배열입니다. 그러한 배열의 수는 다음과 같이 정의되는 C(n, r)에 의해 주어집니다: + +
+ +**11. Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)** + +⟶비고 :우리는 for 0⩽r⩽n에 대해, P(n,r)⩾C(n,r)를 가집니다. + +
+
+**11. Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)**
+
+⟶비고 : 0⩽r⩽n에 대해, P(n,r)⩾C(n,r)를 가집니다.
+
+<br>
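Items 9-11 in code; `math.factorial` is enough for a direct sketch of both counts:

```python
import math

def P(n, r):
    """Number of ordered arrangements of r objects out of n."""
    return math.factorial(n) // math.factorial(n - r)

def C(n, r):
    """Number of unordered arrangements of r objects out of n."""
    return P(n, r) // math.factorial(r)

print(P(5, 2), C(5, 2))  # 20 10
# The remark P(n,r) ⩾ C(n,r) holds for every 0 ⩽ r ⩽ n:
print(all(P(n, r) >= C(n, r) for n in range(6) for r in range(n + 1)))  # True
```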
+ +**13. Bayes' rule ― For events A and B such that P(B)>0, we have:** + +⟶베이즈 규칙 ― P(B)>0인 사건 A, B에 대해, 우리는 다음을 가집니다: + +
+ +**14. Remark: we have P(A∩B)=P(A)P(B|A)=P(A|B)P(B)** + +⟶비고 :우리는 P(A∩B)=P(A)P(B|A)=P(A|B)P(B)를 가집니다. + +
+ +**15. Partition ― Let {Ai,i∈[[1,n]]} be such that for all i, Ai≠∅. We say that {Ai} is a partition if we have:** + +⟶파티션(Partition)― {Ai, i∈ [[1, n]]}은 모든 i에 대해 Ai ≠ ∅이라고 해봅시다. 우리는 {Ai}가 다음과 같은 경우 파티션이라고 말합니다. + +
+
+**16. Remark: for any event B in the sample space, we have P(B)=n∑i=1P(B|Ai)P(Ai).**
+
+⟶비고 : 표본 공간에서 어떤 사건 B에 대해서 우리는 P(B)=n∑i=1P(B|Ai)P(Ai)를 가집니다.
+
+<br>
+ +**17. Extended form of Bayes' rule ― Let {Ai,i∈[[1,n]]} be a partition of the sample space. We have:** + +⟶베이즈 규칙의 확장된 형태 ― {Ai,i∈[[1,n]]}를 표본 공간의 파티션이라고 합시다. 우리는 다음을 가집니다.: + +
+ +**18. Independence ― Two events A and B are independent if and only if we have:** + +⟶독립성 ― 다음의 경우에만 두 사건 A, B가 독립적입니다: + +
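A worked example of the extended Bayes' rule over a two-set partition (the test characteristics below are invented for illustration):

```python
def bayes(prior, likelihood, k):
    """P(Ak|B) = P(B|Ak)P(Ak) / Σi P(B|Ai)P(Ai) over a partition {Ai}."""
    evidence = sum(p * l for p, l in zip(prior, likelihood))
    return likelihood[k] * prior[k] / evidence

# Hypothetical test for a condition affecting 1% of a population,
# with 95% sensitivity and a 10% false-positive rate:
prior = [0.01, 0.99]       # partition: {condition, no condition}
likelihood = [0.95, 0.10]  # P(positive test | Ai)
print(round(bayes(prior, likelihood, k=0), 4))  # 0.0876
```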
+ +**19. Random Variables** + +⟶확률 변수 + +
+ +**20. Definitions** + +⟶정의 + +
+ +**21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.** + +⟶확률 변수 ― 주로 X라고 표기된 확률 변수는 표본 공간의 모든 요소를 ​​실선에 대응시키는 함수입니다. + +
+
+**21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.**
+
+⟶확률 변수 ― 주로 X라고 표기된 확률 변수는 표본 공간의 모든 요소를 실수선에 대응시키는 함수입니다.
+
+<br>
+
+**23. Remark: we have P(a<X⩽b)=F(b)−F(a)**
+
+⟶비고 : 우리는 P(a<X⩽b)=F(b)−F(a)를 가집니다.
+
+<br>
+
+**24. Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.**
+
+⟶확률 밀도 함수 (PDF) ― 확률 밀도 함수 f는 인접한 두 확률 변수의 사이에 X가 포함될 확률입니다.
+
+<br>
+ +**25. Relationships involving the PDF and CDF ― Here are the important properties to know in the discrete (D) and the continuous (C) cases.** + +⟶PDF와 CDF의 관계 ― 이산 (D)과 연속 (C) 예시에서 알아야 할 중요한 특성이 있습니다. + +
+ +**26. [Case, CDF F, PDF f, Properties of PDF]** + +⟶[예시, CDF F, PDF f, PDF의 특성] + +
+ +**27. Expectation and Moments of the Distribution ― Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[Xk] and characteristic function ψ(ω) for the discrete and continuous cases:** + +⟶분포의 기대값과 적률 ― 이산 혹은 연속일 때, 기대값 E[X], 일반화된 기대값 E[g(X)], k번째 적률 E[Xk] 및 특성 함수 ψ(ω) : + +
+ +**28. Variance ― The variance of a random variable, often noted Var(X) or σ2, is a measure of the spread of its distribution function. It is determined as follows:** + +⟶분산 (Variance) ― 주로 Var(X) 또는 σ2이라고 표기된 확률 변수의 분산은 분포 함수의 산포(Spread)를 측정한 값입니다. 이는 다음과 같이 결정됩니다: + +
+
+**29. Standard deviation ― The standard deviation of a random variable, often noted σ, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:**
+
+⟶표준 편차(Standard Deviation) ― 표준 편차는 실제 확률 변수의 단위를 사용할 수 있는 분포 함수의 산포(Spread)를 측정하는 측도입니다. 이는 다음과 같이 결정됩니다:
+
+<br>
+ +**30. Transformation of random variables ― Let the variables X and Y be linked by some function. By noting fX and fY the distribution function of X and Y respectively, we have:** + +⟶확률 변수의 변환 ― 변수 X와 Y를 어떤 함수로 연결되도록 해봅시다. fX와 fY에 각각 X와 Y의 분포 함수를 표기하면 다음과 같습니다: + +
+
+**31. Leibniz integral rule ― Let g be a function of x and potentially c, and a,b boundaries that may depend on c. We have:**
+
+⟶라이프니츠 적분 규칙 ― g를 x와 (잠재적으로) c의 함수라고 하고, a,b를 c에 종속될 수 있는 경계라고 합시다. 우리는 다음을 가집니다:
+
+<br>
+ +**32. Probability Distributions** + +⟶확률 분포 + +
+ +**33. Chebyshev's inequality ― Let X be a random variable with expected value μ. For k,σ>0, we have the following inequality:** + +⟶체비쇼프 부등식 ― X를 기대값 μ의 확률 변수라고 해봅시다. k에 대하여, σ>0이면 다음과 같은 부등식을 가집니다: + +
+ +**34. Main distributions ― Here are the main distributions to have in mind:** + +⟶주요 분포들― 기억해야 할 주요 분포들이 여기 있습니다: + +
+ +**35. [Type, Distribution]** + +⟶[타입(Type), 분포] + +
+ +**36. Jointly Distributed Random Variables** + +⟶결합 분포 확률 변수 + +
+ +**37. Marginal density and cumulative distribution ― From the joint density probability function fXY , we have** + +⟶주변 밀도와 누적 분포 ― 결합 밀도 확률 함수 fXY로부터 우리는 다음을 가집니다 + +
+ +**38. [Case, Marginal density, Cumulative function]** + +⟶[예시, 주변 밀도, 누적 함수] + +
+ +**39. Conditional density ― The conditional density of X with respect to Y, often noted fX|Y, is defined as follows:** + +⟶조건부 밀도 ― 주로 fX|Y로 표기되는 Y에 대한 X의 조건부 밀도는 다음과 같이 정의됩니다: + +
+ +**40. Independence ― Two random variables X and Y are said to be independent if we have:** + +⟶독립성 ― 두 확률 변수 X와 Y는 다음과 같은 경우에 독립적이라고 합니다: + +
+ +**41. Covariance ― We define the covariance of two random variables X and Y, that we note σ2XY or more commonly Cov(X,Y), as follows:** + +⟶공분산 ― 다음과 같이 두 확률 변수 X와 Y의 공분산을 σ2XY 혹은 더 일반적으로는 Cov(X,Y)로 정의합니다: + +
+ +**42. Correlation ― By noting σX,σY the standard deviations of X and Y, we define the correlation between the random variables X and Y, noted ρXY, as follows:** + +⟶상관관계 ― σX, σY로 X와 Y의 표준 편차를 표기함으로써 ρXY로 표기된 임의의 변수 X와 Y 사이의 상관관계를 다음과 같이 정의합니다: + +
+
+**43. Remark 1: we note that for any random variables X,Y, we have ρXY∈[−1,1].**
+
+⟶비고 1 : 임의의 확률 변수 X, Y에 대해 ρXY∈[−1,1]임에 유의합니다.
+
+<br>
+ +**44. Remark 2: If X and Y are independent, then ρXY=0.** + +⟶비고 2 : X와 Y가 독립이라면 ρXY=0입니다. + +
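A small sketch (hypothetical paired data, plain Python) showing Cov(X,Y)=E[XY]−E[X]E[Y] and ρXY=Cov(X,Y)/(σXσY), including the ρXY∈[−1,1] bound from Remark 1:

```python
# Hypothetical paired sample; Cov(X,Y) = E[XY] - E[X]E[Y]
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # perfectly linear in x, so rho should be 1

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum(x * y for x, y in zip(xs, ys)) / n - mx * my
sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
rho = cov / (sx * sy)       # always lies in [-1, 1]
print(cov, rho)
```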
+ +**45. Parameter estimation** + +⟶모수 추정 + +
+ +**46. Definitions** + +⟶정의 + +
+ +**47. Random sample ― A random sample is a collection of n random variables X1,...,Xn that are independent and identically distributed with X.** + +⟶확률 표본 ― 확률 표본은 X와 독립적으로 동일하게 분포하는 n개의 확률 변수 X1, ..., Xn의 모음입니다. + +
+ +**48. Estimator ― An estimator is a function of the data that is used to infer the value of an unknown parameter in a statistical model.** + +⟶추정량 ― 추정량은 통계 모델에서 알 수 없는 모수의 값을 추론하는 데 사용되는 데이터의 함수입니다. + +
+ +**49. Bias ― The bias of an estimator ^θ is defined as being the difference between the expected value of the distribution of ^θ and the true value, i.e.:** + +⟶편향 ― 추정량 ^θ의 편향은 ^θ 분포의 기대값과 실제값 사이의 차이로 정의됩니다. 즉,: + +
+
+**50. Remark: an estimator is said to be unbiased when we have E[^θ]=θ.**
+
+⟶비고 : 추정량은 E[^θ]=θ일 때, 비 편향적이라고 말합니다.
+
+<br>
+ +**51. Estimating the mean** + +⟶평균 추정 + +
+
+**52. Sample mean ― The sample mean of a random sample is used to estimate the true mean μ of a distribution, is often noted ¯¯¯¯¯X and is defined as follows:**
+
+⟶표본 평균 ― 랜덤 표본의 표본 평균은 분포의 실제 평균 μ를 추정하는 데 사용되며 종종 ¯¯¯¯¯X로 표기되고 다음과 같이 정의됩니다:
+
+<br>
+
+**53. Remark: the sample mean is unbiased, i.e E[¯¯¯¯¯X]=μ.**
+
+⟶비고 : 표본 평균은 비 편향적입니다, 즉 E[¯¯¯¯¯X]=μ.
+
+<br>
+
+**54. Central Limit Theorem ― Let us have a random sample X1,...,Xn following a given distribution with mean μ and variance σ2, then we have:**
+
+⟶중심 극한 정리 ― 평균 μ와 분산 σ2를 갖는 주어진 분포를 따르는 랜덤 표본 X1, ..., Xn을 가정해 봅시다. 그러면 우리는 다음을 가집니다:
+
+<br>
+ +**55. Estimating the variance** + +⟶분산 추정 + +
+
+**56. Sample variance ― The sample variance of a random sample is used to estimate the true variance σ2 of a distribution, is often noted s2 or ^σ2 and is defined as follows:**
+
+⟶표본 분산 ― 랜덤 표본의 표본 분산은 분포의 실제 분산 σ2를 추정하는 데 사용되며 종종 s2 또는 ^σ2로 표기되며 다음과 같이 정의됩니다:
+
+<br>
+ +**57. Remark: the sample variance is unbiased, i.e E[s2]=σ2.** + +⟶비고 : 표본 분산은 비 편향적입니다, 즉 E[s2]=σ2. + +
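The unbiasedness remark can be illustrated numerically: averaging the sample variance s² (computed with the n−1 denominator) over many samples should approach the true variance. A plain-Python sketch, using a hypothetical Uniform(0,1) source whose true variance is 1/12:

```python
import random
import statistics

random.seed(1)
# Average s^2 over many small hypothetical samples of Uniform(0, 1)
n, trials = 5, 20_000
s2_sum = 0.0
for _ in range(trials):
    sample = [random.random() for _ in range(n)]
    s2_sum += statistics.variance(sample)   # sample variance, n-1 denominator
print(s2_sum / trials)  # ≈ 1/12 ≈ 0.0833, the true variance
```

Using `statistics.pvariance` (the n denominator) instead would systematically underestimate the true variance, which is exactly the bias the n−1 correction removes.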
+
+**58. Chi-Squared relation with sample variance ― Let s2 be the sample variance of a random sample. We have:**
+
+⟶표본 분산과 카이 제곱의 관계 ― s2를 랜덤 표본의 표본 분산이라고 합시다. 우리는 다음을 가집니다:
+
+<br>
+ +**59. [Introduction, Sample space, Event, Permutation]** + +⟶[소개, 표본 공간, 사건, 순열] + +
+ +**60. [Conditional probability, Bayes' rule, Independence]** + +⟶[조건부 확률, 베이즈 규칙, 독립] + +
+ +**61. [Random variables, Definitions, Expectation, Variance]** + +⟶[확률 변수, 정의, 기대값, 분산] + +
+ +**62. [Probability distributions, Chebyshev's inequality, Main distributions]** + +⟶[확률 분포, 체비쇼프 부등식, 주요 분포] + +
+ +**63. [Jointly distributed random variables, Density, Covariance, Correlation]** + +⟶[결합 분포의 확률 변수, 밀도, 공분산, 상관관계] + +
+ +**64. [Parameter estimation, Mean, Variance]** + +⟶[모수 추정, 평균, 분산] diff --git a/ko/cs-229-unsupervised-learning.md b/ko/cs-229-unsupervised-learning.md new file mode 100644 index 000000000..e961a88cc --- /dev/null +++ b/ko/cs-229-unsupervised-learning.md @@ -0,0 +1,340 @@ +**1. Unsupervised Learning cheatsheet** + +⟶ 비지도 학습 cheatsheet + +
+ +**2. Introduction to Unsupervised Learning** + +⟶ 비지도 학습 소개 + +
+ +**3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** + +⟶ 동기부여 - 비지도학습의 목표는 {x(1),...,x(m)}와 같이 라벨링이 되어있지 않은 데이터 내의 숨겨진 패턴을 찾는것이다. + +
+
+**4. Jensen's inequality ― Let f be a convex function and X a random variable. We have the following inequality:**
+
+⟶ 옌센 부등식 - f를 볼록함수, X를 확률변수라고 하자. 그러면 아래와 같은 부등식이 성립한다.
+
+<br>
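A minimal numeric check of Jensen's inequality, E[f(X)]≥f(E[X]) for convex f, using a hypothetical discrete distribution and f(x)=x²:

```python
# Hypothetical discrete random variable X and the convex function f(x) = x^2
values = [-1.0, 0.0, 2.0, 5.0]
probs = [0.1, 0.4, 0.3, 0.2]

def f(x):
    return x * x   # convex

e_x = sum(p * x for x, p in zip(values, probs))      # E[X]
e_fx = sum(p * f(x) for x, p in zip(values, probs))  # E[f(X)]
print(e_fx, ">=", f(e_x))  # Jensen: E[f(X)] >= f(E[X])
```

This inequality is the key tool behind the EM lower bound constructed later in the sheet.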
+ +**5. Clustering** + +⟶ 군집화 + +
+ +**6. Expectation-Maximization** + +⟶ 기댓값 최대화 + +
+ +**7. Latent variables ― Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Here are the most common settings where there are latent variables:** + +⟶ 잠재변수 - 잠재변수들은 숨겨져있거나 관측되지 않는 변수들을 말하며, 이러한 변수들은 추정문제의 어려움을 가져온다. 그리고 잠재변수는 종종 z로 표기되어진다. 일반적인 잠재변수로 구성되어져있는 형태들을 살펴보자 + +
+ +**8. [Setting, Latent variable z, Comments]** + +⟶ 표기형태, 잠재변수 z, 주석 + +
+ +**9. [Mixture of k Gaussians, Factor analysis]** + +⟶ 가우시안 혼합모델, 요인분석 + +
+ +**10. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:** + +⟶ 알고리즘 - 기댓값 최대화 (EM) 알고리즘은 모수 θ를 추정하는 효율적인 방법을 제공해준다. 모수 θ의 추정은 아래와 같이 우도의 아래 경계지점을 구성하는(E-step)과 그 우도의 아래 경계지점을 최적화하는(M-step)들의 반복적인 최대우도측정을 통해 추정된다. + +
+
+**11. E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:**
+
+⟶ E-step : 각각의 데이터 포인트 x(i)가 특정 클러스터 z(i)로부터 발생했을 사후확률 Qi(z(i))를 평가한다. 아래의 식 참조
+
+<br>
+ +**12. M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:** + +⟶ M-step : 데이터 포인트 x(i)에 대한 클러스트의 특정 가중치로 사후확률 Qi(z(i))을 사용, 각 클러스트 모델을 개별적으로 재평가한다. 아래의 식 참조 + +
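A compact 1-D sketch of the E-step and M-step above for a mixture of two Gaussians (all data points and initial parameters below are hypothetical; this is an illustration, not the cheatsheet's reference implementation):

```python
import math

# Hypothetical 1-D data forming two clusters near -2 and +2
data = [-2.1, -1.9, -2.0, 1.9, 2.0, 2.2]
phi = [0.5, 0.5]    # mixture weights
mu = [-1.0, 1.0]    # component means
var = [1.0, 1.0]    # component variances

def gauss(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

for _ in range(15):
    # E-step: posterior Q_i(z=j) that point x_i came from component j
    Q = []
    for x in data:
        w = [phi[j] * gauss(x, mu[j], var[j]) for j in range(2)]
        s = sum(w)
        Q.append([wj / s for wj in w])
    # M-step: re-estimate parameters with the posteriors as weights,
    # optimizing the lower bound on the likelihood
    for j in range(2):
        nj = sum(q[j] for q in Q)
        phi[j] = nj / len(data)
        mu[j] = sum(q[j] * x for q, x in zip(Q, data)) / nj
        var[j] = sum(q[j] * (x - mu[j]) ** 2 for q, x in zip(Q, data)) / nj

print(sorted(round(m, 2) for m in mu))  # the means settle near the two clusters
```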
+ +**13. [Gaussians initialization, Expectation step, Maximization step, Convergence]** + +⟶ Gaussians 초기값, 기대 단계, 최대화 단계, 수렴 + +
+ +**14. k-means clustering** + +⟶ k-평균 군집화 + +
+
+**15. We note c(i) the cluster of data point i and μj the center of cluster j.**
+
+⟶ c(i)는 데이터 포인트 i가 속한 군집을, μj는 군집 j의 중심을 나타낸다.
+
+<br>
+ +**16. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** + +⟶ 알고리즘 - 군집 중앙에 μ1,μ2,...,μk∈Rn 와 같이 무작위로 초기값을 잡은 후, k-평균 알고리즘이 수렴될때 까지 아래와 같은 단계를 반복한다. + +
+ +**17. [Means initialization, Cluster assignment, Means update, Convergence]** + +⟶ 평균 초기값, 군집분할, 평균 재조정, 수렴 + +
+ +**18. Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:** + +⟶ 왜곡 함수 - 알고리즘이 수렴하는지를 확인하기 위해서는 아래와 같은 왜곡함수를 정의해야 한다. + +
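The k-means loop and its distortion function can be sketched in a few lines of plain Python (the 1-D points and initial centroids below are hypothetical):

```python
# Hypothetical 1-D points forming two clear clusters, k = 2
points = [0.0, 0.2, 0.4, 9.8, 10.0, 10.2]
centroids = [0.1, 5.0]          # rough initial centroids

assign = None
while True:
    # Cluster assignment: each point goes to its closest centroid
    new_assign = [min(range(2), key=lambda j: (x - centroids[j]) ** 2)
                  for x in points]
    if new_assign == assign:    # convergence: assignments stopped changing
        break
    assign = new_assign
    # Means update: each centroid moves to the mean of its members
    for j in range(2):
        members = [x for x, a in zip(points, assign) if a == j]
        if members:
            centroids[j] = sum(members) / len(members)

# Distortion J = sum_i (x_i - mu_{c(i)})^2 at convergence
distortion = sum((x - centroids[a]) ** 2 for x, a in zip(points, assign))
print(sorted(centroids), distortion)
```

The distortion never increases across iterations, which is why watching it is a practical convergence check.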
+ +**19. Hierarchical clustering** + +⟶ 계층적 군집분석 + +
+ +**20. Algorithm ― It is a clustering algorithm with an agglomerative hierarchical approach that build nested clusters in a successive manner.** + +⟶ 알고리즘 - 연속적 방식으로 중첩된 클러스트를 구축하는 결합형 계층적 접근방식을 사용하는 군집 알고리즘이다. + +
+ +**21. Types ― There are different sorts of hierarchical clustering algorithms that aims at optimizing different objective functions, which is summed up in the table below:** + +⟶ 종류 - 다양한 목적함수의 최적화를 목표로하는 다양한 종류의 계층적 군집분석 알고리즘들이 있으며, 아래 표와 같이 요약되어있다. + +
+ +**22. [Ward linkage, Average linkage, Complete linkage]** + +⟶ Ward 연결법, 평균 연결법, 완전 연결법 + +
+
+**23. [Minimize within cluster distance, Minimize average distance between cluster pairs, Minimize maximum distance of between cluster pairs]**
+
+⟶ 군집 내 거리의 최소화, 한쌍의 군집간 평균거리의 최소화, 한쌍의 군집간 최대거리의 최소화
+
+<br>
+ +**24. Clustering assessment metrics** + +⟶ 군집화 평가 metrics + +
+ +**25. In an unsupervised learning setting, it is often hard to assess the performance of a model since we don't have the ground truth labels as was the case in the supervised learning setting.** + +⟶ 비지도학습 환경에서는, 지도학습 환경과는 다르게 실측자료에 라벨링이 없기 때문에 종종 모델에 대한 성능평가가 어렵다. + +
+ +**26. Silhouette coefficient ― By noting a and b the mean distance between a sample and all other points in the same class, and between a sample and all other points in the next nearest cluster, the silhouette coefficient s for a single sample is defined as follows:** + +⟶ 실루엣 계수 - a와 b를 같은 클래스의 다른 모든점과 샘플 사이의 평균거리와 다음 가장 가까운 군집의 다른 모든 점과 샘플사이의 평균거리로 표기하면 단일 샘플에 대한 실루엣 계수 s는 다음과 같이 정의할 수 있다. + +
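For a single sample, the silhouette coefficient s=(b−a)/max(a,b) can be computed directly (hypothetical 1-D clusters, plain Python):

```python
# Silhouette for one hypothetical sample (1-D, two clusters)
same_cluster = [0.0, 0.2, 0.4]   # the sample of interest is same_cluster[0]
other_cluster = [10.0, 10.2]     # next nearest cluster

x = same_cluster[0]
a = sum(abs(x - y) for y in same_cluster[1:]) / (len(same_cluster) - 1)
b = sum(abs(x - y) for y in other_cluster) / len(other_cluster)
s = (b - a) / max(a, b)          # in [-1, 1]; near 1 means well clustered
print(s)
```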
+ +**27. Calinski-Harabaz index ― By noting k the number of clusters, Bk and Wk the between and within-clustering dispersion matrices respectively defined as** + +⟶ Calinski-Harabaz 색인 - k개 군집에 Bk와 Wk를 표기하면, 다음과 같이 각각 정의 된 군집간 분산행렬이다. + +
+ +**28. the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:** + +⟶ Calinski-Harabaz 색인 s(k)는 군집모델이 군집화를 얼마나 잘 정의하는지를 나타낸다. 가령 높은 점수일수록 군집이 더욱 밀도있으며 잘 분리되는 형태이다. 아래와 같은 정의를 따른다. + +
+ +**29. Dimension reduction** + +⟶ 차원 축소 + +
+ +**30. Principal component analysis** + +⟶ 주성분 분석 + +
+
+**31. It is a dimension reduction technique that finds the variance maximizing directions onto which to project the data.**
+
+⟶ 데이터를 투영했을 때 분산이 최대가 되는 방향을 찾는 차원축소 기술이다.
+
+<br>
+
+**32. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:**
+
+⟶ 고유값, 고유벡터 - 행렬 A∈Rn×n가 주어질 때, 고유벡터라 불리는 벡터 z∈Rn∖{0}가 존재하여 다음을 만족하면 λ를 A의 고유값이라 한다.
+
+<br>
+
+**33. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:**
+
+⟶ 스펙트럼 정리 - A∈Rn×n 이라고 하자. 만약 A가 대칭이라면, A는 실수 직교 행렬 U∈Rn×n에 의해 대각화할 수 있다. Λ=diag(λ1,...,λn)로 표기하면 다음을 얻는다:
+
+<br>
+ +**34. diagonal** + +⟶ 대각선 + +
+ +**35. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** + +⟶ 참조: 가장 큰 고유값과 연관된 고유 벡터를 행렬 A의 주요 고유벡터라고 부른다 + +
+
+**36. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k
+dimensions by maximizing the variance of the data as follows:**
+
+⟶ 알고리즘 - 주성분 분석(PCA) 절차는 데이터의 분산을 최대화하면서 데이터를 k차원에 투영하는 차원 축소 기술로, 다음과 같은 단계를 따른다.
+
+<br>
+ +**37. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** + +⟶ 1단계: 평균을 0으로 표준편차가 1이되도록 데이터를 표준화한다. + +
+
+**38. Step 2: Compute Σ=1mm∑i=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.**
+
+⟶ 2단계: 실수 고유값을 갖는 대칭 행렬 Σ=1mm∑i=1x(i)x(i)T∈Rn×n를 계산합니다.
+
+<br>
+
+**39. Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.**
+
+⟶ 3단계: Σ의 k개의 직교 주 고유벡터 u1,...,uk∈Rn를 계산한다. 다시 말하면, 가장 큰 k개의 고유값에 대응하는 직교 고유벡터이다.
+
+<br>
+ +**40. Step 4: Project the data on spanR(u1,...,uk).** + +⟶ 4단계: R(u1,...,uk) 범위에 데이터를 투영하자. + +
+ +**41. This procedure maximizes the variance among all k-dimensional spaces.** + +⟶ 해당 절차는 모든 k-차원의 공간들 사이에 분산을 최대화 하는것이다. + +
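For 2-D data the four PCA steps can be sketched without a linear-algebra library, since the eigenvalues of a symmetric 2×2 covariance matrix follow from the quadratic formula (the data below are hypothetical, and Step 1's normalization is skipped for brevity):

```python
import math

# Hypothetical 2-D data, roughly spread along the diagonal y = x
data = [(-2.0, -2.1), (-1.0, -0.9), (0.0, 0.1), (1.0, 0.9), (2.0, 2.0)]
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n

# Entries of the 2x2 covariance matrix [[sxx, sxy], [sxy, syy]]
sxx = sum((x - mx) ** 2 for x, _ in data) / n
syy = sum((y - my) ** 2 for _, y in data) / n
sxy = sum((x - mx) * (y - my) for x, y in data) / n

# Eigenvalues of a symmetric 2x2 matrix via the quadratic formula
half_trace = (sxx + syy) / 2
radius = math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
lam1 = half_trace + radius       # largest eigenvalue: variance captured
u = (sxy, lam1 - sxx)            # principal eigenvector (needs sxy != 0)
norm = math.hypot(*u)
u = (u[0] / norm, u[1] / norm)
print(lam1, u)  # the first principal axis is close to (1, 1)/sqrt(2)
```

Projecting each point onto u gives the 1-dimensional representation with the largest possible variance, which is the statement in entry 41.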
+ +**42. [Data in feature space, Find principal components, Data in principal components space]** + +⟶ 변수공간의 데이터, 주요성분들 찾기, 주요성분공간의 데이터 + +
+ +**43. Independent component analysis** + +⟶ 독립성분분석 + +
+ +**44. It is a technique meant to find the underlying generating sources.** + +⟶ 근원적인 생성원을 찾기위한 기술을 의미한다. + +
+ +**45. Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:** + +⟶ 가정 - 다음과 같이 우리는 데이터 x가 n차원의 소스벡터 s=(s1,...,sn)에서부터 생성되었음을 가정한다. 이때 si는 독립적인 확률변수에서 나왔으며, 혼합 및 비특이 행렬 A를 통해 생성된다고 가정한다. + +
+ +**46. The goal is to find the unmixing matrix W=A−1.** + +⟶ 비혼합 행렬 W=A−1를 찾는 것을 목표로 한다. + +
+ +**47. Bell and Sejnowski ICA algorithm ― This algorithm finds the unmixing matrix W by following the steps below:** + +⟶ Bell과 Sejnowski 독립성분분석(ICA) 알고리즘 - 다음의 단계들을 따르는 비혼합 행렬 W를 찾는 알고리즘이다. + +
+ +**48. Write the probability of x=As=W−1s as:** + +⟶ x=As=W−1s의 확률을 다음과 같이 기술한다. + +
+ +**49. Write the log likelihood given our training data {x(i),i∈[[1,m]]} and by noting g the sigmoid function as:** + +⟶ 주어진 학습데이터 {x(i),i∈[[1,m]]}에 로그우도를 기술하고 시그모이드 함수 g를 다음과 같이 표기한다. + +
+ +**50. Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:** + +⟶ 그러므로, 확률적 경사상승 학습 규칙은 각 학습예제 x(i)에 대해서 다음과 같이 W를 업데이트하는 것과 같다. + +
+ +**51. The Machine Learning cheatsheets are now available in Korean.** + +⟶ 머신러닝 cheatsheets는 현재 한국어로 제공된다. + +
+ +**52. Original authors** + +⟶ 원저자 + +
+ +**53. Translated by X, Y and Z** + +⟶ X,Y,Z에 의해 번역되다. + +
+ +**54. Reviewed by X, Y and Z** + +⟶ X,Y,Z에 의해 검토되다. + +
+
+**55. [Introduction, Motivation, Jensen's inequality]**
+
+⟶ 소개, 동기부여, 옌센 부등식
+
+<br>
+ +**56. [Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]** + +⟶ 군집화, 기댓값-최대화, k-means, 계층적 군집화, 측정지표 + +
+ +**57. [Dimension reduction, PCA, ICA]** + +⟶ 차원축소, 주성분분석(PCA), 독립성분분석(ICA) diff --git a/pt/cheatsheet-machine-learning-tips-and-tricks.md b/pt/cheatsheet-machine-learning-tips-and-tricks.md deleted file mode 100644 index 9712297b8..000000000 --- a/pt/cheatsheet-machine-learning-tips-and-tricks.md +++ /dev/null @@ -1,285 +0,0 @@ -**1. Machine Learning tips and tricks cheatsheet** - -⟶ - -
- -**2. Classification metrics** - -⟶ - -
- -**3. In a context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** - -⟶ - -
- -**4. Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:** - -⟶ - -
- -**5. [Predicted class, Actual class]** - -⟶ - -
- -**6. Main metrics ― The following metrics are commonly used to assess the performance of classification models:** - -⟶ - -
- -**7. [Metric, Formula, Interpretation]** - -⟶ - -
- -**8. Overall performance of model** - -⟶ - -
- -**9. How accurate the positive predictions are** - -⟶ - -
- -**10. Coverage of actual positive sample** - -⟶ - -
- -**11. Coverage of actual negative sample** - -⟶ - -
- -**12. Hybrid metric useful for unbalanced classes** - -⟶ - -
- -**13. ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are are summed up in the table below:** - -⟶ - -
- -**14. [Metric, Formula, Equivalent]** - -⟶ - -
- -**15. AUC ― The area under the receiving operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:** - -⟶ - -
- -**16. [Actual, Predicted]** - -⟶ - -
- -**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** - -⟶ - -
- -**18. [Total sum of squares, Explained sum of squares, Residual sum of squares]** - -⟶ - -
- -**19. Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** - -⟶ - -
- -**20. Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** - -⟶ - -
- -**21. where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.** - -⟶ - -
- -**22. Model selection** - -⟶ - -
- -**23. Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** - -⟶ - -
- -**24. [Training set, Validation set, Testing set]** - -⟶ - -
- -**25. [Model is trained, Model is assessed, Model gives predictions]** - -⟶ - -
- -**26. [Usually 80% of the dataset, Usually 20% of the dataset]** - -⟶ - -
- -**27. [Also called hold-out or development set, Unseen data]** - -⟶ - -
- -**28. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** - -⟶ - -
- -**29. Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** - -⟶ - -
- -**30. [Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** - -⟶ - -
- -**31. [Generally k=5 or 10, Case p=1 is called leave-one-out]** - -⟶ - -
- -**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.** - -⟶ - -
- -**33. Regularization ― The regularization procedure aims at avoiding the model to overfit the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:** - -⟶ - -
- -**34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** - -⟶ - -
- -**35. Diagnostics** - -⟶ - -
- -**36. Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.** - -⟶ - -
- -**37. Variance ― The variance of a model is the variability of the model prediction for given data points.** - -⟶ - -
- -**38. Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** - -⟶ - -
- -**39. [Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]** - -⟶ - -
- -**40. [High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]** - -⟶ - -
- -**41. [Complexify model, Add more features, Train longer, Perform regularization, Get more data]** - -⟶ - -
- -**42. Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** - -⟶ - -
- -**43. Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** - -⟶ - -
- -**44. Regression metrics** - -⟶ - -
- -**45. [Classification metrics, confusion matrix, accuracy, precision, recall, F1 score, ROC]** - -⟶ - -
- -**46. [Regression metrics, R squared, Mallow's CP, AIC, BIC]** - -⟶ - -
- -**47. [Model selection, cross-validation, regularization]** - -⟶ - -
- -**48. [Diagnostics, Bias/variance tradeoff, error/ablative analysis]** - -⟶ diff --git a/pt/cheatsheet-deep-learning.md b/pt/cs-229-deep-learning.md similarity index 100% rename from pt/cheatsheet-deep-learning.md rename to pt/cs-229-deep-learning.md diff --git a/pt/refresher-linear-algebra.md b/pt/cs-229-linear-algebra.md similarity index 100% rename from pt/refresher-linear-algebra.md rename to pt/cs-229-linear-algebra.md diff --git a/pt/cs-229-machine-learning-tips-and-tricks.md b/pt/cs-229-machine-learning-tips-and-tricks.md new file mode 100644 index 000000000..4bad4360f --- /dev/null +++ b/pt/cs-229-machine-learning-tips-and-tricks.md @@ -0,0 +1,284 @@ +**1. Machine Learning tips and tricks cheatsheet** + +⟶ Dicas e Truques de Aprendizado de Máquina + +
+ +**2. Classification metrics** + +⟶ Métricas de classificação + +
+ +**3. In a context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** + +⟶ Em um contexto de classificação binária, estas são as principais métricas que são importantes acompanhar para avaliar a desempenho do modelo. + +
+ +**4. Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:** + +⟶ Matriz de confusão ― A matriz de confusão (confusion matrix) é usada para termos um cenário mais completo quando estamos avaliando o desempenho de um modelo. Ela é definida conforme a seguir: + +
+ +**5. [Predicted class, Actual class]** + +⟶ [Classe prevista, Classe real] + +
+ +**6. Main metrics ― The following metrics are commonly used to assess the performance of classification models:** + +⟶ Principais métricas - As seguintes métricas são comumente usadas para avaliar o desempenho de modelos de classificação: + +
+ +**7. [Metric, Formula, Interpretation]** + +⟶ [Métrica, Fórmula, Interpretação] + +
+ +**8. Overall performance of model** + +⟶ Desempenho geral do modelo + +
+ +**9. How accurate the positive predictions are** + +⟶ Quão precisas são as predições positivas + +
+ +**10. Coverage of actual positive sample** + +⟶ Cobertura da amostra positiva real + +
+ +**11. Coverage of actual negative sample** + +⟶ Cobertura da amostra negativa real + +
+ +**12. Hybrid metric useful for unbalanced classes** + +⟶ Métrica híbrida útil para classes desequilibradas + +
+ +**13. ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are are summed up in the table below:** + +⟶ ROC - A curva de operação do receptor, também chamada ROC (Receiver Operating Characteristic), é o gráfico de TPR versus FPR variando o limiar. Essas métricas estão resumidas na tabela abaixo: + +
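The confusion-matrix counts and the metrics in the table above (accuracy, precision, recall, specificity, F1) can be sketched in plain Python (the labels below are hypothetical):

```python
# Hypothetical ground-truth labels and classifier predictions
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # true positives
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))  # true negatives
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # false positives
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # false negatives

accuracy = (tp + tn) / (tp + tn + fp + fn)   # overall performance
precision = tp / (tp + fp)                   # how accurate positive predictions are
recall = tp / (tp + fn)                      # coverage of actual positives (TPR)
specificity = tn / (tn + fp)                 # coverage of actual negatives (TNR)
f1 = 2 * precision * recall / (precision + recall)   # hybrid metric
print(accuracy, precision, recall, specificity, round(f1, 3))
```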
+ +**14. [Metric, Formula, Equivalent]** + +⟶ [Métrica, Fórmula, Equivalente] + +
+ +**15. AUC ― The area under the receiving operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:** + +⟶ AUC - A área sob a curva de operação de recebimento, também chamado AUC ou AUROC, é a área abaixo da ROC como mostrada na figura a seguir: + +
+
+**15. AUC ― The area under the receiving operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:**
+
+⟶ AUC - A área sob a curva de operação do receptor, também chamada AUC ou AUROC, é a área abaixo da ROC, como mostrado na figura a seguir:
+
+<br>
+ +**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** + +⟶ Métricas básicas - Dado um modelo de regresão f, as seguintes métricas são geralmente utilizadas para avaliar o desempenho do modelo: + +
+
+**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:**
+
+⟶ Métricas básicas - Dado um modelo de regressão f, as seguintes métricas são geralmente utilizadas para avaliar o desempenho do modelo:
+
+<br>
+ +**19. Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** + +⟶ Coeficiente de determinação - O coeficiente de determinação, frequentemente escrito como R2 ou r2, fornece uma medida de quão bem os resultados observados são replicados pelo modelo e é definido como se segue: + +
+ +**20. Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** + +⟶ Principais métricas - As seguintes métricas são comumente utilizadas para avaliar o desempenho de modelos de regressão, levando em conta o número de variáveis n que eles consideram: + +
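A minimal sketch of R²=1−SSres/SStot on hypothetical regression outputs:

```python
# Hypothetical observed outcomes and model predictions
y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.1, 7.2, 8.9]

mean_y = sum(y_true) / len(y_true)
ss_tot = sum((y - mean_y) ** 2 for y in y_true)             # total sum of squares
ss_res = sum((y - f) ** 2 for y, f in zip(y_true, y_pred))  # residual sum of squares
r2 = 1 - ss_res / ss_tot
print(r2)  # close to 1 for a good fit
```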
+
+**21. where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.**
+
+⟶ onde L é a verossimilhança e ˆσ2 é uma estimativa da variância associada com cada resposta.
+
+<br>
+ +**22. Model selection** + +⟶ Seleção de Modelo + +
+ +**23. Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** + +⟶ Vocabulário ― Ao selecionar um modelo, nós consideramos 3 diferentes partes dos dados que possuímos conforme a seguir: + +
+ +**24. [Training set, Validation set, Testing set]** + +⟶ [Conjunto de treino, Conjunto de validação, Conjunto de Teste] + +
+ +**25. [Model is trained, Model is assessed, Model gives predictions]** + +⟶ [Modelo é treinado, Modelo é avaliado, Modelo fornece previsões] + +
+ +**26. [Usually 80% of the dataset, Usually 20% of the dataset]** + +⟶ [Geralmente 80% do conjunto de dados, Geralmente 20% do conjunto de dados] + +
+ +**27. [Also called hold-out or development set, Unseen data]** + +⟶ [Também chamado de hold-out ou conjunto de desenvolvimento, Dados não vistos] + +
+ +**28. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** + +⟶ Uma vez que o modelo é escolhido, ele é treinado no conjunto inteiro de dados e testado no conjunto de dados de testes não vistos. São representados na figura abaixo: + +
+ +**29. Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** + +⟶ Validação cruzada - Validação cruzada, também chamada de CV (Cross-Validation), é um método utilizado para selecionar um modelo que não depende muito do conjunto de treinamento inicial. Os diferente tipos estão resumidos na tabela abaixo: + +
+ +**30. [Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** + +⟶ [Treino em k-1 partes e teste sobre o restante, Treino em n-p observações e teste sobre p restantes] + +
+ +**31. [Generally k=5 or 10, Case p=1 is called leave-one-out]** + +⟶ [Geralmente k=5 ou 10, caso p=1 é chamado leave-one-out (deixe-um-fora)] + +
+
+**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.**
+
+⟶ O método mais frequentemente usado é chamado k-fold cross-validation e divide os dados de treinamento em k partes para validar o modelo em uma parte enquanto treina o modelo nas outras k−1 partes, tudo isso k vezes. O erro é então calculado como a média sobre as k partes e é chamado erro de validação cruzada (cross-validation error).
+
+<br>
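The k-fold split described above can be sketched with plain index arithmetic (n=20 and k=5 are hypothetical choices, and the round-robin split is one simple scheme rather than any particular library's):

```python
# Hypothetical k-fold index split: each fold is held out once for validation
n, k = 20, 5
indices = list(range(n))
folds = [indices[i::k] for i in range(k)]   # simple round-robin split

for val_fold in folds:                      # each fold validates once
    train = [j for j in indices if j not in val_fold]
    assert len(train) == n - n // k         # the other k-1 folds train
print([len(f) for f in folds])  # [4, 4, 4, 4, 4]
```

In practice the k errors measured on the held-out folds are averaged to give the cross-validation error.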
+ +**33. Regularization ― The regularization procedure aims at avoiding the model to overfit the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:** + +⟶ Regularização ― O procedimento de regularização (regularization) visa evitar que o modelo sobreajuste os dados e portanto lide com os problemas de alta variância. A tabela a seguir resume os diferentes tipos de técnicas de regularização comumente utilizadas: +
+ +**34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** + +⟶ [Diminui coeficientes para 0, Bom para seleção de variáveis, Faz o coeficiente menor, Balanço entre seleção de variáveis e coeficientes pequenos] + +
+ +**35. Diagnostics** + +⟶ Diagnóstico + +
+ +**36. Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.** + +⟶ Viés - O viés (bias) de um modelo é a diferença entre a predição esperada e o modelo correto que nós tentamos prever para determinados pontos de dados. + +
+ +**37. Variance ― The variance of a model is the variability of the model prediction for given data points.** + +⟶ Variância - A variância (variance) de um modelo é a variabilidade da previsão do modelo para determinados pontos de dados. + +
+ +**38. Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** + +⟶ Balanço viés/variância ― Quanto mais simples o modelo, maior o viés e, quanto mais complexo o modelo, maior a variância. + +
+
+**39. [Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]**
+
+⟶ [Sintomas, Exemplo de regressão, Exemplo de classificação, Exemplo de Deep Learning, possíveis remédios]
+
+<br>
+ +**40. [High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]** + +⟶ [Erro de treinamento elevado, Erro de treinamento próximo ao erro de teste, Viés elevado, Erro de treinamento ligeiramente menor que erro de teste, Erro de treinamento muito baixo, Erro de treinamento muito menor que erro de teste. Alta Variância] + +
+ +**41. [Complexify model, Add more features, Train longer, Perform regularization, Get more data]** + +⟶ [Modelo de complexificação, Adicionar mais recursos, Treinar mais, Executar a regularização, Obter mais dados] + +
+ +**42. Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** + +⟶ Análise de erro - Análise de erro (error analysis) é a análise da causa raiz da diferença no desempenho entre o modelo atual e o modelo perfeito. + +
+ +**43. Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** + +⟶ Análise ablativa - Análise ablativa (ablative analysis) é a análise da causa raiz da diferença no desempenho entre o modelo atual e o modelo base. + +
+ +**44. Regression metrics** + +⟶ Métricas de regressão + +
+ +**45. [Classification metrics, confusion matrix, accuracy, precision, recall, specificity, F1 score, ROC]** + +⟶ [Métricas de classificação, Matriz de confusão, acurácia, precisão, revocação/sensibilidade, especifidade, F1 score, ROC] + +
+ +**46. [Regression metrics, R squared, Mallow's CP, AIC, BIC]** + +⟶ [Métricas de Regressão, R quadrado, Mallow's CP, AIC, BIC] + +
+ +**47. [Model selection, cross-validation, regularization]** + +⟶ [Seleção de modelo, validação cruzada, regularização] + +
+ +**48. [Diagnostics, Bias/variance tradeoff, error/ablative analysis]** + +⟶ [Diagnóstico, Balanço viés/variância, Análise de erro/ablativa] diff --git a/pt/refresher-probability.md b/pt/cs-229-probability.md similarity index 100% rename from pt/refresher-probability.md rename to pt/cs-229-probability.md diff --git a/pt/cheatsheet-supervised-learning.md b/pt/cs-229-supervised-learning.md similarity index 100% rename from pt/cheatsheet-supervised-learning.md rename to pt/cs-229-supervised-learning.md diff --git a/pt/cheatsheet-unsupervised-learning.md b/pt/cs-229-unsupervised-learning.md similarity index 100% rename from pt/cheatsheet-unsupervised-learning.md rename to pt/cs-229-unsupervised-learning.md diff --git a/pt/cs-230-convolutional-neural-networks.md b/pt/cs-230-convolutional-neural-networks.md new file mode 100644 index 000000000..4934d7c2f --- /dev/null +++ b/pt/cs-230-convolutional-neural-networks.md @@ -0,0 +1,718 @@ +**Convolutional Neural Networks translation** + +
+ +**1. Convolutional Neural Networks cheatsheet** + +⟶ Dicas de Redes Neurais Convolucionais + +
+ + +**2. CS 230 - Deep Learning** + +⟶ CS 230 - Aprendizagem profunda + +
+ + +**3. [Overview, Architecture structure]** + +⟶ [Visão geral, Estrutura arquitetural] + +
+ + +**4. [Types of layer, Convolution, Pooling, Fully connected]** + +⟶ [Tipos de camadas, Convolução, Pooling, Totalmente conectada] + +
+ + +**5. [Filter hyperparameters, Dimensions, Stride, Padding]** + +⟶ [Hiperparâmetros de filtro, Dimensões, Passo, Preenchimento] + +
+ + +**6. [Tuning hyperparameters, Parameter compatibility, Model complexity, Receptive field]** + +⟶[Ajustando hiperparâmetros, Compatibilidade de parâmetros, Complexidade de modelo, Campo receptivo] + +
+ + +**7. [Activation functions, Rectified Linear Unit, Softmax]** + +⟶ [Funções de Ativação, Unidade Linear Retificada, Softmax] + +
+
+
+**8. [Object detection, Types of models, Detection, Intersection over Union, Non-max suppression, YOLO, R-CNN]**
+
+⟶ [Detecção de objetos, Tipos de modelos, Detecção, Intersecção por União, Supressão não-máxima, YOLO, R-CNN]
+
+<br>
+ + +**9. [Face verification/recognition, One shot learning, Siamese network, Triplet loss]** + +⟶ [Verificação / reconhecimento facial, Aprendizado de disparo único, Rede siamesa, Perda tripla] + +
+ + +**10. [Neural style transfer, Activation, Style matrix, Style/content cost function]** + +⟶ [Transferência de estilo neural, Ativação, Matriz de estilo, Função de custo de estilo/conteúdo] + +
+ + +**11. [Computational trick architectures, Generative Adversarial Net, ResNet, Inception Network]** + +⟶ [Arquiteturas de truques computacionais, Rede Adversarial Generativa, ResNet, Rede de Iniciação] + +
+ + +**12. Overview** + +⟶ Visão geral + +
+ + +**13. Architecture of a traditional CNN ― Convolutional neural networks, also known as CNNs, are a specific type of neural networks that are generally composed of the following layers:** + +⟶ Arquitetura de uma RNC tradicional (CNN) - Redes neurais convolucionais, também conhecidas como CNN (em inglês), são tipos específicos de redes neurais que geralmente são compostas pelas seguintes camadas: + +
+
+
+**14. The convolution layer and the pooling layer can be fine-tuned with respect to hyperparameters that are described in the next sections.**
+
+⟶ A camada convolucional e a camada de pooling podem ter um ajuste fino considerando os hiperparâmetros que estão descritos nas próximas seções.
+
+<br>
+ + +**15. Types of layer** + +⟶ Tipos de camadas + +
+ + +**16. Convolution layer (CONV) ― The convolution layer (CONV) uses filters that perform convolution operations as it is scanning the input I with respect to its dimensions. Its hyperparameters include the filter size F and stride S. The resulting output O is called feature map or activation map.** + +⟶ Camada convolucional (CONV) - A camada convolucional (CONV) usa filtros que realizam operações de convolução conforme eles escaneiam a entrada I com relação a suas dimensões. Seus hiperparâmetros incluem o tamanho do filtro F e o passo S. O resultado O é chamado de mapa de recursos (feature map) ou mapa de ativação. + +
+ + +**17. Remark: the convolution step can be generalized to the 1D and 3D cases as well.** + +⟶ Observação: o passo de convolução também pode ser generalizado para os casos 1D e 3D. + +
+
+
+**18. Pooling (POOL) ― The pooling layer (POOL) is a downsampling operation, typically applied after a convolution layer, which does some spatial invariance. In particular, max and average pooling are special kinds of pooling where the maximum and average value is taken, respectively.**
+
+⟶ Pooling (POOL) - A camada de pooling (POOL) é uma operação de subamostragem (downsampling), tipicamente aplicada depois de uma camada convolucional, que introduz alguma invariância espacial. Em particular, os poolings máximo e médio são casos especiais de pooling onde os valores máximo e médio são obtidos, respectivamente.
+
+<br>
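As a concrete illustration of the two pooling types above, here is a minimal pure-Python sketch; the 4×4 input, 2×2 window and stride of 2 are arbitrary example choices, not values fixed by the cheatsheet:

```python
def pool2d(x, size=2, stride=2, mode="max"):
    """Apply max or average pooling to a 2D list of numbers."""
    h, w = len(x), len(x[0])
    out = []
    for i in range(0, h - size + 1, stride):
        row = []
        for j in range(0, w - size + 1, stride):
            window = [x[i + di][j + dj] for di in range(size) for dj in range(size)]
            # max pooling keeps the largest value, average pooling the mean
            row.append(max(window) if mode == "max" else sum(window) / len(window))
        out.append(row)
    return out

x = [[1, 3, 2, 0],
     [4, 6, 1, 1],
     [0, 2, 5, 7],
     [1, 1, 3, 2]]
print(pool2d(x, mode="max"))  # → [[6, 2], [2, 7]]
print(pool2d(x, mode="avg"))  # → [[3.5, 1.0], [1.0, 4.25]]
```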
+ + +**19. [Type, Purpose, Illustration, Comments]** + +⟶ [Tipo, Propósito, Ilustração, Comentários] + +
+ + +**20. [Max pooling, Average pooling, Each pooling operation selects the maximum value of the current view, Each pooling operation averages the values of the current view]** + +⟶ [Pooling máximo, Pooling médio, Cada operação de pooling seleciona o valor máximo da exibição atual, Cada operação de pooling calcula a média dos valores da exibição atual] + +
+ + +**21. [Preserves detected features, Most commonly used, Downsamples feature map, Used in LeNet]** + +⟶ [Preserva os recursos detectados, Mais comumente usados, Mapa de recursos de amostragem (downsample), Usado no LeNet] + + +
+
+
+**22. Fully Connected (FC) ― The fully connected layer (FC) operates on a flattened input where each input is connected to all neurons. If present, FC layers are usually found towards the end of CNN architectures and can be used to optimize objectives such as class scores.**
+
+⟶ Totalmente Conectado (FC) - A camada totalmente conectada (FC) opera em uma entrada achatada, onde cada entrada é conectada a todos os neurônios. Se estiverem presentes, as camadas FC geralmente são encontradas no final das arquiteturas de CNN e podem ser usadas para otimizar objetivos, como pontuações de classes.
+
+<br>
+ + +**23. Filter hyperparameters** + +⟶ Hiperparâmetros de filtros + +
+ + +**24. The convolution layer contains filters for which it is important to know the meaning behind its hyperparameters.** + +⟶ A camada de convolução contém filtros para os quais é importante conhecer o significado por trás de seus hiperparâmetros. + +
+ + +**25. Dimensions of a filter ― A filter of size F×F applied to an input containing C channels is a F×F×C volume that performs convolutions on an input of size I×I×C and produces an output feature map (also called activation map) of size O×O×1.** + +⟶ Dimensões de um filtro - Um filtro de tamanho F×F aplicado a uma entrada contendo C canais é um volume de tamanho F×F×C que executa convoluções em uma entrada de tamanho I×I×C e produz um mapa de recursos (também chamado de mapa de ativação) da saída de tamanho O×O×1. + +
+ + +**26. Filter** + +⟶ Filtros + +
+ + +**27. Remark: the application of K filters of size F×F results in an output feature map of size O×O×K.** + +⟶ Observação: a aplicação de K filtros de tamanho F×F resulta em um mapa de recursos de saída de tamanho O×O×K. + +
+ + +**28. Stride ― For a convolutional or a pooling operation, the stride S denotes the number of pixels by which the window moves after each operation.** + +⟶ Passo - Para uma operação convolucional ou de pooling, o passo S denota o número de pixels que a janela se move após cada operação. + +
+
+
+**29. Zero-padding ― Zero-padding denotes the process of adding P zeroes to each side of the boundaries of the input. This value can either be manually specified or automatically set through one of the three modes detailed below:**
+
+⟶ Zero preenchimento (Zero-padding) - Zero preenchimento denota o processo de adicionar P zeros em cada lado das fronteiras da entrada. Esse valor pode ser especificado manualmente ou ajustado automaticamente através de um dos três modos detalhados abaixo:
+
+<br>
+ + +**30. [Mode, Value, Illustration, Purpose, Valid, Same, Full]** + +⟶ [Modo, Valor, Ilustração, Propósito, Válido, Idêntico, Completo] + +
+
+
+**31. [No padding, Drops last convolution if dimensions do not match, Padding such that feature map size has size ⌈IS⌉, Output size is mathematically convenient, Also called 'half' padding, Maximum padding such that end convolutions are applied on the limits of the input, Filter 'sees' the input end-to-end]**
+
+⟶ [Sem preenchimento, Descarta a última convolução se as dimensões não corresponderem, Preenchimento de tal forma que o tamanho do mapa de recursos tenha tamanho ⌈IS⌉, Tamanho da saída é matematicamente conveniente, Também chamado de 'meio' preenchimento, Preenchimento máximo de tal forma que as convoluções finais são aplicadas nos limites da entrada, Filtro 'vê' a entrada de ponta a ponta]
+
+<br>
+ + +**32. Tuning hyperparameters** + +⟶ Ajuste de hiperparâmetros + +
+ + +**33. Parameter compatibility in convolution layer ― By noting I the length of the input volume size, F the length of the filter, P the amount of zero padding, S the stride, then the output size O of the feature map along that dimension is given by:** + +⟶ Compatibilidade de parâmetro na camada convolucional - Considerando I o comprimento do tamanho do volume da entrada, F o tamanho do filtro, P a quantidade de preenchimento de zero (zero-padding) e S o tamanho do passo, então o tamanho de saída O do mapa de recursos ao longo dessa dimensão é dado por: + + +
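The parameter-compatibility formula above, O = (I − F + Pstart + Pend)/S + 1, can be turned into a small helper for sanity-checking layer shapes (the integer division assumes the hyperparameters are compatible):

```python
def conv_output_size(I, F, S=1, P_start=0, P_end=0):
    """Feature map size along one dimension: O = (I - F + P_start + P_end) / S + 1."""
    return (I - F + P_start + P_end) // S + 1

# A 32-pixel input with a 5x5 filter, stride 1 and padding of 2 keeps its size:
print(conv_output_size(32, 5, S=1, P_start=2, P_end=2))  # → 32
print(conv_output_size(7, 3, S=2))                       # → 3
```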
+ + +**34. [Input, Filter, Output]** + +⟶ [Entrada, Filtro, Saída] + +
+
+
+**35. Remark: often times, Pstart=Pend≜P, in which case we can replace Pstart+Pend by 2P in the formula above.**
+
+⟶ Observação: muitas vezes, Pstart=Pend≜P, em cujo caso podemos substituir Pstart+Pend por 2P na fórmula acima.
+
+<br>
+ + +**36. Understanding the complexity of the model ― In order to assess the complexity of a model, it is often useful to determine the number of parameters that its architecture will have. In a given layer of a convolutional neural network, it is done as follows:** + +⟶ Entendendo a complexidade do modelo - Para avaliar a complexidade de um modelo, é geralmente útil determinar o número de parâmetros que a arquitetura deverá ter. Em uma determinada camada de uma rede neural convolucional, ela é dada da seguinte forma: + +
+ + +**37. [Illustration, Input size, Output size, Number of parameters, Remarks]** + +⟶ [Ilustração, Tamanho da entrada, Tamanho da saída, Número de parâmetros, Observações] + +
+
+
+**38. [One bias parameter per filter, In most cases, S<F, A common choice for K is 2C]**
+
+⟶ [Um parâmetro de viés (bias parameter) por filtro, Na maior parte dos casos, S<F, Uma escolha comum para K é 2C]
+
+<br>
+
+
+**39. [Pooling operation done channel-wise, In most cases, S=F]**
+
+⟶ [Operação de pooling feita pelo canal, Na maior parte dos casos, S=F]
+
+<br>
+ + +**40. [Input is flattened, One bias parameter per neuron, The number of FC neurons is free of structural constraints]** + +⟶ [Entrada é achatada, Um parâmetro de viés (bias parameter) por neurônio, O número de neurônios FC está livre de restrições estruturais] + +
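The table above counts parameters per layer; the underlying formulas are standard ones (not spelled out in the text): a CONV layer with K filters of size F×F×C has F·F·C·K weights plus one bias per filter, a POOL layer has no learnable parameters, and an FC layer has one weight per (input, neuron) pair plus one bias per neuron. A quick sketch:

```python
def conv_params(F, C, K):
    """CONV layer: K filters of size FxFxC, plus one bias per filter."""
    return F * F * C * K + K

def pool_params():
    """POOL layer: pooling has no learnable parameters."""
    return 0

def fc_params(n_in, n_out):
    """FC layer: one weight per (input, neuron) pair, plus one bias per neuron."""
    return n_in * n_out + n_out

print(conv_params(3, 3, 16))  # → 448   (3*3*3*16 + 16)
print(fc_params(120, 84))     # → 10164 (120*84 + 84)
```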
+ + +**41. Receptive field ― The receptive field at layer k is the area denoted Rk×Rk of the input that each pixel of the k-th activation map can 'see'. By calling Fj the filter size of layer j and Si the stride value of layer i and with the convention S0=1, the receptive field at layer k can be computed with the formula:** + +⟶ Campo receptivo - O campo receptivo na camada k é a área denotada por Rk×Rk da entrada que cada pixel do k-ésimo mapa de ativação pode 'ver'. Ao chamar Fj o tamanho do filtro da camada j e Si o valor do passo da camada i e com a convenção S0=1, o campo receptivo na camada k pode ser calculado com a fórmula: + +
+ + +**42. In the example below, we have F1=F2=3 and S1=S2=1, which gives R2=1+2⋅1+2⋅1=5.** + +⟶ No exemplo abaixo, temos que F1=F2=3 e S1=S2=1, o que resulta em R2=1+2⋅1+2⋅1=5. + +
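The receptive-field formula can be implemented directly; the example values F1=F2=3 and S1=S2=1 from the text recover R2=5:

```python
def receptive_field(filter_sizes, strides):
    """R_k = 1 + sum_j (F_j - 1) * prod_{i<j} S_i, with the convention S_0 = 1."""
    R, jump = 1, 1  # 'jump' accumulates the product of the previous strides
    for F, S in zip(filter_sizes, strides):
        R += (F - 1) * jump
        jump *= S
    return R

# Example from the text: F1=F2=3 and S1=S2=1 give R2 = 1 + 2*1 + 2*1 = 5
print(receptive_field([3, 3], [1, 1]))  # → 5
```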
+ + +**43. Commonly used activation functions** + +⟶ Funções de ativação comumente usadas + +
+ + +**44. Rectified Linear Unit ― The rectified linear unit layer (ReLU) is an activation function g that is used on all elements of the volume. It aims at introducing non-linearities to the network. Its variants are summarized in the table below:** + +⟶ Unidade Linear Retificada (Rectified Linear Unit) - A camada unitária linear retificada (ReLU) é uma função de ativação g que é usada em todos os elementos do volume. Tem como objetivo introduzir não linearidades na rede. Suas variantes estão resumidas na tabela abaixo: + +
+ + +**45. [ReLU, Leaky ReLU, ELU, with]** + +⟶ [ReLU, Leaky ReLU, ELU, com] + +
+ + +**46. [Non-linearity complexities biologically interpretable, Addresses dying ReLU issue for negative values, Differentiable everywhere]** + +⟶ [Complexidades de não-linearidade biologicamente interpretáveis, Endereça o problema da ReLU para valores negativos, Diferenciável em todos os lugares] + +
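The three variants in the table can be sketched as plain functions; the 0.01 slope for Leaky ReLU and α=1 for ELU are common default choices, not values fixed by the cheatsheet:

```python
import math

def relu(z):
    return max(0.0, z)

def leaky_relu(z, eps=0.01):
    # keeps a small slope for negative inputs, addressing the dying ReLU issue
    return z if z > 0 else eps * z

def elu(z, alpha=1.0):
    # smooth and differentiable everywhere, saturating to -alpha for very negative z
    return z if z > 0 else alpha * (math.exp(z) - 1.0)

print(relu(-2.0), leaky_relu(-2.0), round(elu(-2.0), 4))  # → 0.0 -0.02 -0.8647
```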
+ + +**47. Softmax ― The softmax step can be seen as a generalized logistic function that takes as input a vector of scores x∈Rn and outputs a vector of output probability p∈Rn through a softmax function at the end of the architecture. It is defined as follows:** + +⟶ Softmax - O passo de softmax pode ser visto como uma função logística generalizada que pega como entrada um vetor de pontuações x∈Rn e retorna um vetor de probabilidades p∈Rn através de uma função softmax no final da arquitetura. É definida como: + +
+ + +**48. where** + +⟶ onde + +
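A numerically stable way to compute the softmax defined above is to subtract the maximum score before exponentiating, which leaves the result mathematically unchanged:

```python
import math

def softmax(x):
    """p_i = exp(x_i) / sum_j exp(x_j), computed in a numerically stable way."""
    m = max(x)  # subtracting the max does not change the result but avoids overflow
    exps = [math.exp(v - m) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

p = softmax([2.0, 1.0, 0.1])
print(p, sum(p))  # outputs are positive and sum to 1
```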
+ + +**49. Object detection** + +⟶ Detecção de objeto + +
+
+
+**50. Types of models ― There are 3 main types of object recognition algorithms, for which the nature of what is predicted is different. They are described in the table below:**
+
+⟶ Tipos de modelos - Existem 3 tipos principais de algoritmos de reconhecimento de objetos, para os quais a natureza do que é previsto é diferente. Eles estão descritos na tabela abaixo:
+
+<br>
+ + +**51. [Image classification, Classification w. localization, Detection]** + +⟶ [Classificação de imagem, Classificação com localização, Detecção] + +
+ + +**52. [Teddy bear, Book]** + +⟶ [Urso de pelúcia, Livro] + +
+ + +**53. [Classifies a picture, Predicts probability of object, Detects an object in a picture, Predicts probability of object and where it is located, Detects up to several objects in a picture, Predicts probabilities of objects and where they are located]** + +⟶ [Classifica uma imagem, Prevê a probabilidade de um objeto, Detecta um objeto em uma imagem, Prevê a probabilidade de objeto e onde ele está localizado, Detecta vários objetos em uma imagem, Prevê probabilidades de objetos e onde eles estão localizados] + +
+ + +**54. [Traditional CNN, Simplified YOLO, R-CNN, YOLO, R-CNN]** + +⟶ [CNN tradicional, YOLO simplificado, R-CNN, YOLO, R-CNN] + +
+ + +**55. Detection ― In the context of object detection, different methods are used depending on whether we just want to locate the object or detect a more complex shape in the image. The two main ones are summed up in the table below:** + +⟶ Detecção - No contexto da detecção de objetos, diferentes métodos são usados dependendo se apenas queremos localizar o objeto ou detectar uma forma mais complexa na imagem. Os dois principais são resumidos na tabela abaixo: + +
+
+
+**56. [Bounding box detection, Landmark detection]**
+
+⟶ [Detecção de caixa delimitadora, Detecção de pontos de referência (landmarks)]
+
+<br>
+ + +**57. [Detects the part of the image where the object is located, Detects a shape or characteristics of an object (e.g. eyes), More granular]** + +⟶ [Detecta parte da imagem onde o objeto está localizado, Detecta a forma ou característica de um objeto (e.g. olhos), Mais granular] + +
+ + +**58. [Box of center (bx,by), height bh and width bw, Reference points (l1x,l1y), ..., (lnx,lny)]** + +⟶ [Caixa central (bx,by), altura bh e largura bw, Pontos de referência (l1x,l1y), ..., (lnx,lny)] + +
+
+
+**59. Intersection over Union ― Intersection over Union, also known as IoU, is a function that quantifies how correctly positioned a predicted bounding box Bp is over the actual bounding box Ba. It is defined as:**
+
+⟶ Interseção sobre União (Intersection over Union) - Interseção sobre União, também conhecida como IoU, é uma função que quantifica quão corretamente posicionada uma caixa de delimitação predita Bp está sobre a caixa de delimitação real Ba. É definida por:
+
+<br>
+
+
+**60. Remark: we always have IoU∈[0,1]. By convention, a predicted bounding box Bp is considered as being reasonably good if IoU(Bp,Ba)⩾0.5.**
+
+⟶ Observação: sempre temos IoU∈[0,1]. Por convenção, uma caixa de delimitação predita Bp é considerada razoavelmente boa se IoU(Bp,Ba)⩾0.5.
+
+<br>
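IoU can be computed directly from box coordinates; the sketch below assumes boxes given as (x1, y1, x2, y2) corners rather than the center/width/height parametrization used elsewhere in the text:

```python
def iou(box_p, box_a):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    x1 = max(box_p[0], box_a[0])
    y1 = max(box_p[1], box_a[1])
    x2 = min(box_p[2], box_a[2])
    y2 = min(box_p[3], box_a[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)       # intersection area (0 if disjoint)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    return inter / (area_p + area_a - inter)         # intersection over union

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429, below the 0.5 threshold
```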
+ + +**61. Anchor boxes ― Anchor boxing is a technique used to predict overlapping bounding boxes. In practice, the network is allowed to predict more than one box simultaneously, where each box prediction is constrained to have a given set of geometrical properties. For instance, the first prediction can potentially be a rectangular box of a given form, while the second will be another rectangular box of a different geometrical form.** + +⟶ Caixas de ancoragem (Anchor boxes) - Caixas de ancoragem é uma técnica usada para predizer caixas de delimitação que se sobrepõem. Na prática, a rede tem permissão para predizer mais de uma caixa simultaneamente, onde cada caixa prevista é restrita a ter um dado conjunto de propriedades geométricas. Por exemplo, a primeira predição pode ser potencialmente uma caixa retangular de uma determinada forma, enquanto a segunda pode ser outra caixa retangular de uma forma geométrica diferente. + +
+
+
+**62. Non-max suppression ― The non-max suppression technique aims at removing duplicate overlapping bounding boxes of a same object by selecting the most representative ones. After having removed all boxes having a probability prediction lower than 0.6, the following steps are repeated while there are boxes remaining:**
+
+⟶ Supressão não máxima (Non-max suppression) - A técnica de supressão não máxima visa remover caixas de delimitação de um mesmo objeto que estão duplicadas e se sobrepõem, selecionando as mais representativas. Depois de ter removido todas as caixas que contêm uma predição de probabilidade menor que 0.6, os seguintes passos são repetidos enquanto existem caixas remanescentes:
+
+<br>
+ + +**63. [For a given class, Step 1: Pick the box with the largest prediction probability., Step 2: Discard any box having an IoU⩾0.5 with the previous box.]** + +⟶ [Para uma dada classe, Passo 1: Pegue a caixa com a maior predição de probabilidade., Passo 2: Descarte todas as caixas que tem IoU⩾0.5 com a caixa anterior.] + +
+ + +**64. [Box predictions, Box selection of maximum probability, Overlap removal of same class, Final bounding boxes]** + +⟶ [Predição de caixa, Seleção de caixa com máxima probabilidade, Remoção de sobreposições da mesma classe, Caixas de delimitação final] + +
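The two steps above can be sketched as follows; boxes are assumed to be (probability, corner-box) pairs for a single class, and the 0.6 and 0.5 thresholds follow the text:

```python
def non_max_suppression(boxes, iou_threshold=0.5, min_prob=0.6):
    """boxes: list of (probability, (x1, y1, x2, y2)) pairs for a single class."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)

    remaining = sorted((b for b in boxes if b[0] >= min_prob), reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)  # Step 1: pick the largest prediction probability
        kept.append(best)
        # Step 2: discard any remaining box overlapping it with IoU >= threshold
        remaining = [b for b in remaining if iou(best[1], b[1]) < iou_threshold]
    return kept

boxes = [(0.9, (0, 0, 10, 10)), (0.8, (1, 1, 10, 10)),
         (0.7, (20, 20, 30, 30)), (0.5, (0, 0, 9, 9))]
print(non_max_suppression(boxes))  # keeps the 0.9 and 0.7 boxes only
```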
+ + +**65. YOLO ― You Only Look Once (YOLO) is an object detection algorithm that performs the following steps:** + +⟶ YOLO - Você Apenas Vê Uma Vez (You Only Look Once - YOLO) é um algoritmo de detecção de objeto que realiza os seguintes passos: + +
+
+
+**66. [Step 1: Divide the input image into a G×G grid., Step 2: For each grid cell, run a CNN that predicts y of the following form:, repeated k times]**
+
+⟶ [Passo 1: Divida a imagem de entrada em uma grade G×G., Passo 2: Para cada célula da grade, rode uma CNN que prevê y da seguinte forma:, repetido k vezes]
+
+<br>
+ + +**67. where pc is the probability of detecting an object, bx,by,bh,bw are the properties of the detected bouding box, c1,...,cp is a one-hot representation of which of the p classes were detected, and k is the number of anchor boxes.** + +⟶ onde pc é a probabilidade de detecção do objeto, bx,by,bh,bw são as propriedades das caixas delimitadoras detectadas, c1,...,cp é uma representação única (one-hot representation) de quais das classes p foram detectadas, e k é o número de caixas de ancoragem. + +
+
+
+**68. Step 3: Run the non-max suppression algorithm to remove any potential duplicate overlapping bounding boxes.**
+
+⟶ Passo 3: Rode o algoritmo de supressão não máxima para remover qualquer caixa delimitadora duplicada e que se sobrepõe.
+
+<br>
+ + +**69. [Original image, Division in GxG grid, Bounding box prediction, Non-max suppression]** + +⟶ [Imagem original, Divisão em uma grade GxG, Caixa delimitadora prevista, Supressão não máxima] + +
+ + +**70. Remark: when pc=0, then the network does not detect any object. In that case, the corresponding predictions bx,...,cp have to be ignored.** + +⟶ Observação: Quando pc=0, então a rede não detecta nenhum objeto. Nesse caso, as predições correspondentes bx,...,cp devem ser ignoradas. + +
+ + +**71. R-CNN ― Region with Convolutional Neural Networks (R-CNN) is an object detection algorithm that first segments the image to find potential relevant bounding boxes and then run the detection algorithm to find most probable objects in those bounding boxes.** + +⟶ R-CNN - Região com Redes Neurais Convolucionais (R-CNN) é um algoritmo de detecção de objetos que primeiro segmenta a imagem para encontrar potenciais caixas de delimitação relevantes e então roda o algoritmo de detecção para encontrar os objetos mais prováveis dentro das caixas de delimitação. + +
+ + +**72. [Original image, Segmentation, Bounding box prediction, Non-max suppression]** + +⟶ [Imagem original, Segmentação, Predição da caixa delimitadora, Supressão não-máxima] + +
+ + +**73. Remark: although the original algorithm is computationally expensive and slow, newer architectures enabled the algorithm to run faster, such as Fast R-CNN and Faster R-CNN.** + +⟶ Observação: embora o algoritmo original seja computacionalmente caro e lento, arquiteturas mais recentes, como o Fast R-CNN e o Faster R-CNN, permitiram que o algoritmo fosse executado mais rapidamente. + +
+
+
+**74. Face verification and recognition**
+
+⟶ Verificação e reconhecimento facial
+
+<br>
+ + +**75. Types of models ― Two main types of model are summed up in table below:** + +⟶ Tipos de modelos - Os dois principais tipos de modelos são resumidos na tabela abaixo: + +
+ + +**76. [Face verification, Face recognition, Query, Reference, Database]** + +⟶ [Verificação facial, Reconhecimento facial, Consulta, Referência, Banco de dados] + +
+ + +**77. [Is this the correct person?, One-to-one lookup, Is this one of the K persons in the database?, One-to-many lookup]** + +⟶ [Esta é a pessoa correta?, Pesquisa um-para-um, Esta é uma das K pessoas no banco de dados?, Pesquisa um-para-muitos] + +
+ + +**78. One Shot Learning ― One Shot Learning is a face verification algorithm that uses a limited training set to learn a similarity function that quantifies how different two given images are. The similarity function applied to two images is often noted d(image 1,image 2).** + +⟶ Aprendizado de Disparo Único (One Shot Learning) - One Shot Learning é um algoritmo de verificação facial que utiliza um conjunto de treinamento limitado para aprender uma função de similaridade que quantifica o quão diferentes são as duas imagens. A função de similaridade aplicada a duas imagens é frequentemente denotada como d(imagem 1, imagem 2). + +
+ + +**79. Siamese Network ― Siamese Networks aim at learning how to encode images to then quantify how different two images are. For a given input image x(i), the encoded output is often noted as f(x(i)).** + +⟶ Rede Siamesa (Siamese Network) - Siamese Networks buscam aprender como codificar imagens para depois quantificar quão diferentes são as duas imagens. Para uma imagem de entrada x(i), o resultado codificado é normalmente denotado como f(x(i)). + +
+
+
+**80. Triplet loss ― The triplet loss ℓ is a loss function computed on the embedding representation of a triplet of images A (anchor), P (positive) and N (negative). The anchor and the positive example belong to a same class, while the negative example to another one. By calling α∈R+ the margin parameter, this loss is defined as follows:**
+
+⟶ Perda tripla (Triplet loss) - A perda tripla ℓ é uma função de perda (loss function) computada na representação de incorporação (embedding) de um trio de imagens A (âncora), P (positiva) e N (negativa). Os exemplos âncora e positivo pertencem à mesma classe, enquanto o exemplo negativo pertence a outra. Chamando de α∈R+ o parâmetro de margem, essa função de perda é definida da seguinte forma:
+
+<br>
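A minimal sketch of the triplet loss, assuming the common choice of squared Euclidean distance d between encodings (the distance function is not fixed by the text):

```python
def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """l(A,P,N) = max(d(A,P) - d(A,N) + alpha, 0) with squared L2 distance d."""
    d = lambda u, v: sum((ui - vi) ** 2 for ui, vi in zip(u, v))
    return max(d(f_a, f_p) - d(f_a, f_n) + alpha, 0.0)

# Anchor encoding close to the positive and far from the negative: zero loss
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [1.0, 1.0]))  # → 0.0
```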
+ + +**81. Neural style transfer** + +⟶ Transferência de estilo neural + +
+ + +**82. Motivation ― The goal of neural style transfer is to generate an image G based on a given content C and a given style S.** + +⟶ Motivação - O objetivo da transferência de estilo neural é gerar uma imagem G baseada num dado conteúdo C com um estilo S. + +
+
+
+**83. [Content C, Style S, Generated image G]**
+
+⟶ [Conteúdo C, Estilo S, Imagem gerada G]
+
+<br>
+ + +**84. Activation ― In a given layer l, the activation is noted a[l] and is of dimensions nH×nw×nc** + +⟶ Ativação - Em uma dada camada l, a ativação é denotada como a[l] e suas dimensões são nH×nw×nc + +
+ + +**85. Content cost function ― The content cost function Jcontent(C,G) is used to determine how the generated image G differs from the original content image C. It is defined as follows:** + +⟶ Função de custo de conteúdo (Content cost function) - A função de custo de conteúdo Jcontent(C,G) é usada para determinar como a imagem gerada G difere da imagem de conteúdo original C. Ela é definida da seguinte forma: + +
+ + +**86. Style matrix ― The style matrix G[l] of a given layer l is a Gram matrix where each of its elements G[l]kk′ quantifies how correlated the channels k and k′ are. It is defined with respect to activations a[l] as follows:** + +⟶ Matriz de estilo - A matriz de estilo G[l] de uma determinada camada l é a matriz de Gram em que cada um dos seus elementos G[l]kk′ quantificam quão correlacionados são os canais k e k′. Ela é definida com respeito às ativações a[l] da seguinte forma: + +
+ + +**87. Remark: the style matrix for the style image and the generated image are noted G[l] (S) and G[l] (G) respectively.** + +⟶ Observação: a matriz de estilo para a imagem estilizada e para a imagem gerada são denotadas como G[l] (S) e G[l] (G), respectivamente. + +
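The style matrix is a Gram matrix over channels; here is a direct (unvectorized) sketch for an activation volume stored as nested lists:

```python
def gram_matrix(a):
    """Style matrix G[k][k2] = sum over spatial positions (i, j) of
    a[i][j][k] * a[i][j][k2], for an activation volume of shape n_H x n_W x n_C."""
    n_c = len(a[0][0])
    G = [[0.0] * n_c for _ in range(n_c)]
    for row in a:
        for pixel in row:  # pixel is the length-n_C vector of channel activations
            for k in range(n_c):
                for k2 in range(n_c):
                    G[k][k2] += pixel[k] * pixel[k2]
    return G

a = [[[1.0, 2.0], [3.0, 0.0]]]  # a 1x2 spatial grid with 2 channels
print(gram_matrix(a))           # → [[10.0, 2.0], [2.0, 4.0]]
```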
+ + +**88. Style cost function ― The style cost function Jstyle(S,G) is used to determine how the generated image G differs from the style S. It is defined as follows:** + +⟶ Função de custo de estilo (Style cost function) - A função de custo de estilo Jstyle(S,G) é usada para determinar como a imagem gerada G difere do estilo S. Ela é definida da seguinte forma: + +
+
+
+**89. Overall cost function ― The overall cost function is defined as being a combination of the content and style cost functions, weighted by parameters α,β, as follows:**
+
+⟶ Função de custo geral (Overall cost function) - A função de custo geral é definida como sendo a combinação das funções de custo do conteúdo e do estilo, ponderada pelos parâmetros α,β, como mostrado abaixo:
+
+<br>
+ + +**90. Remark: a higher value of α will make the model care more about the content while a higher value of β will make it care more about the style.** + +⟶ Observação: um valor de α maior irá fazer com que o modelo se preocupe mais com o conteúdo enquanto um maior valor de β irá fazer com que ele se preocupe mais com o estilo. + +
+ + +**91. Architectures using computational tricks** + +⟶ Arquiteturas usando truques computacionais + +
+
+
+**92. Generative Adversarial Network ― Generative adversarial networks, also known as GANs, are composed of a generative and a discriminative model, where the generative model aims at generating the most truthful output that will be fed into the discriminative which aims at differentiating the generated and true image.**
+
+⟶ Rede Adversarial Generativa (Generative Adversarial Network) - As Generative Adversarial Networks, também conhecidas como GANs, são compostas de um modelo generativo e um modelo discriminativo, onde o modelo generativo visa gerar a saída mais verossímil, que será alimentada no modelo discriminativo, que visa diferenciar a imagem gerada da imagem verdadeira.
+
+<br>
+
+
+**93. [Training, Noise, Real-world image, Generator, Discriminator, Real Fake]**
+
+⟶ [Treinamento, Ruído, Imagem real, Gerador, Discriminador, Real Falsa]
+
+<br>
+ + +**94. Remark: use cases using variants of GANs include text to image, music generation and synthesis.** + +⟶ Observação: casos de uso usando variações de GANs incluem texto para imagem, geração de música e síntese. + +
+ + +**95. ResNet ― The Residual Network architecture (also called ResNet) uses residual blocks with a high number of layers meant to decrease the training error. The residual block has the following characterizing equation:** + +⟶ ResNet - A arquitetura de Rede Residual (também chamada de ResNet) usa blocos residuais com um alto número de camadas para diminuir o erro de treinamento. O bloco residual possui a seguinte equação caracterizadora: + +
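The characterizing equation a[l+2] = g(z[l+2] + a[l]) can be sketched with ReLU as g; `layer_fn`, standing in for the two stacked layers that produce z[l+2] from a[l], is a hypothetical placeholder for illustration:

```python
def residual_block(a_l, layer_fn):
    """a[l+2] = g(z[l+2] + a[l]) with g = ReLU; layer_fn is a stand-in for the
    stacked layers that produce z[l+2] from a[l] (illustrative assumption)."""
    z = layer_fn(a_l)
    return [max(0.0, zi + ai) for zi, ai in zip(z, a_l)]

# Even if the stacked layers output zeros, the skip connection lets the input through:
print(residual_block([1.0, -2.0, 3.0], lambda v: [0.0] * len(v)))  # → [1.0, 0.0, 3.0]
```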
+ + +**96. Inception Network ― This architecture uses inception modules and aims at giving a try at different convolutions in order to increase its performance through features diversification. In particular, it uses the 1×1 convolution trick to limit the computational burden.** + +⟶ Rede de Iniciação - Esta arquitetura utiliza módulos de iniciação e visa experimentar diferentes convoluções, a fim de aumentar seu desempenho através da diversificação de recursos. Em particular, ele usa o truque de convolução 1×1 para limitar a carga computacional. + +
+ + +**97. The Deep Learning cheatsheets are now available in [target language].** + +⟶ Os resumos de Aprendizagem Profunda estão disponíveis em português. + +
+ + +**98. Original authors** + +⟶ Autores Originais + +
+ + +**99. Translated by X, Y and Z** + +⟶ Traduzido por Leticia Portella + +
+ + +**100. Reviewed by X, Y and Z** + +⟶ Revisado por Gabriel Fonseca + +
+ + +**101. View PDF version on GitHub** + +⟶ Ver versão em PDF no GitHub. + +
+ + +**102. By X and Y** + +⟶ Por X e Y + +
diff --git a/ru/cheatsheet-deep-learning.md b/ru/cheatsheet-deep-learning.md deleted file mode 100644 index a5aa3756c..000000000 --- a/ru/cheatsheet-deep-learning.md +++ /dev/null @@ -1,321 +0,0 @@ -**1. Deep Learning cheatsheet** - -⟶ - -
- -**2. Neural Networks** - -⟶ - -
- -**3. Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.** - -⟶ - -
- -**4. Architecture ― The vocabulary around neural networks architectures is described in the figure below:** - -⟶ - -
- -**5. [Input layer, hidden layer, output layer]** - -⟶ - -
- -**6. By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:** - -⟶ - -
- -**7. where we note w, b, z the weight, bias and output respectively.** - -⟶ - -
- -**8. Activation function ― Activation functions are used at the end of a hidden unit to introduce non-linear complexities to the model. Here are the most common ones:** - -⟶ - -
- -**9. [Sigmoid, Tanh, ReLU, Leaky ReLU]** - -⟶ - -
- -**10. Cross-entropy loss ― In the context of neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** - -⟶ - -
- -**11. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. This can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.** - -⟶ - -
- -**12. Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to weight w is computed using chain rule and is of the following form:** - -⟶ - -
- -**13. As a result, the weight is updated as follows:** - -⟶ - -
- -**14. Updating weights ― In a neural network, weights are updated as follows:** - -⟶ - -
- -**15. Step 1: Take a batch of training data.** - -⟶ - -
- -**16. Step 2: Perform forward propagation to obtain the corresponding loss.** - -⟶ - -
- -**17. Step 3: Backpropagate the loss to get the gradients.** - -⟶ - -
- -**18. Step 4: Use the gradients to update the weights of the network.** - -⟶ - -
- -**19. Dropout ― Dropout is a technique meant at preventing overfitting the training data by dropping out units in a neural network. In practice, neurons are either dropped with probability p or kept with probability 1−p** - -⟶ - -
- -**20. Convolutional Neural Networks** - -⟶ - -
- -**21. Convolutional layer requirement ― By noting W the input volume size, F the size of the convolutional layer neurons, P the amount of zero padding, then the number of neurons N that fit in a given volume is such that:** - -⟶ - -
- -**22. Batch normalization ― It is a step of hyperparameter γ,β that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of that we want to correct to the batch, it is done as follows:** - -⟶ - -
- -**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.** - -⟶ - -
- -**24. Recurrent Neural Networks** - -⟶ - -
- -**25. Types of gates ― Here are the different types of gates that we encounter in a typical recurrent neural network:** - -⟶ - -
- -**26. [Input gate, forget gate, gate, output gate]** - -⟶ - -
- -**27. [Write to cell or not?, Erase a cell or not?, How much to write to cell?, How much to reveal cell?]** - -⟶ - -
- -**28. LSTM ― A long short-term memory (LSTM) network is a type of RNN model that avoids the vanishing gradient problem by adding 'forget' gates.** - -⟶ - -
- -**29. Reinforcement Learning and Control** - -⟶ - -
- -**30. The goal of reinforcement learning is for an agent to learn how to evolve in an environment.** - -⟶ - -
- -**31. Definitions** - -⟶ - -
- -**32. Markov decision processes ― A Markov decision process (MDP) is a 5-tuple (S,A,{Psa},γ,R) where:** - -⟶ - -
- -**33. S is the set of states** - -⟶ - -
- -**34. A is the set of actions** - -⟶ - -
- -**35. {Psa} are the state transition probabilities for s∈S and a∈A** - -⟶ - -
- -**36. γ∈[0,1[ is the discount factor** - -⟶ - -
- -**37. R:S×A⟶R or R:S⟶R is the reward function that the algorithm wants to maximize** - -⟶ - -
- -**38. Policy ― A policy π is a function π:S⟶A that maps states to actions.** - -⟶ - -
- -**39. Remark: we say that we execute a given policy π if given a state s we take the action a=π(s).** - -⟶ - -
- -**40. Value function ― For a given policy π and a given state s, we define the value function Vπ as follows:** - -⟶ - -
- -**41. Bellman equation ― The optimal Bellman equation characterizes the value function Vπ∗ of the optimal policy π∗:** - -⟶ - -
- -**42. Remark: we note that the optimal policy π∗ for a given state s is such that:** - -⟶ - -
- -**43. Value iteration algorithm ― The value iteration algorithm is in two steps:** - -⟶ - -
- -**44. 1) We initialize the value:** - -⟶ - -
- -**45. 2) We iterate the value based on the values before:** - -⟶ - -
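The two steps of value iteration can be sketched on a toy MDP; the transition probabilities P, rewards R, and discount γ below are made-up illustrative values.

```python
import numpy as np

n_states, n_actions = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[s][a] = distribution over next states
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])    # R[s][a]
gamma = 0.9

V = np.zeros(n_states)                    # 1) initialize the value
for _ in range(200):                      # 2) iterate based on the values before
    V = np.max(R + gamma * (P @ V), axis=1)
```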
- -**46. Maximum likelihood estimate ― The maximum likelihood estimates for the state transition probabilities are as follows:** - -⟶ - -
- -**47. times took action a in state s and got to s′** - -⟶ - -
- -**48. times took action a in state s** - -⟶ - -
- -**49. Q-learning ― Q-learning is a model-free estimation of Q, which is done as follows:** - -⟶ - -
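A tabular Q-learning sketch using the standard model-free update Q(s,a) ← Q(s,a) + α[r + γ maxa′ Q(s′,a′) − Q(s,a)]; the toy chain environment, α, and γ are illustrative assumptions.

```python
import numpy as np

n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

def step(s, a):
    """Toy deterministic dynamics: action 1 moves right, action 0 stays."""
    s_next = min(s + a, n_states - 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

rng = np.random.default_rng(0)
for _ in range(1000):
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_actions))
    s_next, r = step(s, a)
    # model-free Q-learning update from the observed transition (s, a, r, s')
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```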
- -**50. View PDF version on GitHub** - -⟶ - -
- -**51. [Neural Networks, Architecture, Activation function, Backpropagation, Dropout]** - -⟶ - -
- -**52. [Convolutional Neural Networks, Convolutional layer, Batch normalization]** - -⟶ - -
- -**53. [Recurrent Neural Networks, Gates, LSTM]** - -⟶ - -
- -**54. [Reinforcement learning, Markov decision processes, Value/policy iteration, Approximate dynamic programming, Policy search]** - -⟶ diff --git a/ru/cheatsheet-machine-learning-tips-and-tricks.md b/ru/cheatsheet-machine-learning-tips-and-tricks.md deleted file mode 100644 index 9712297b8..000000000 --- a/ru/cheatsheet-machine-learning-tips-and-tricks.md +++ /dev/null @@ -1,285 +0,0 @@ -**1. Machine Learning tips and tricks cheatsheet** - -⟶ - -
- -**2. Classification metrics** - -⟶ - -
- -**3. In a context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** - -⟶ - -
- -**4. Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:** - -⟶ - -
- -**5. [Predicted class, Actual class]** - -⟶ - -
- -**6. Main metrics ― The following metrics are commonly used to assess the performance of classification models:** - -⟶ - -
- -**7. [Metric, Formula, Interpretation]** - -⟶ - -
- -**8. Overall performance of model** - -⟶ - -
- -**9. How accurate the positive predictions are** - -⟶ - -
- -**10. Coverage of actual positive sample** - -⟶ - -
- -**11. Coverage of actual negative sample** - -⟶ - -
- -**12. Hybrid metric useful for unbalanced classes** - -⟶ - -
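The main metrics above can be sketched from raw confusion-matrix counts; the tiny label vectors are purely illustrative.

```python
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

accuracy = (tp + tn) / len(y_true)                   # overall performance of model
precision = tp / (tp + fp)                           # how accurate positive predictions are
recall = tp / (tp + fn)                              # coverage of actual positive sample (TPR)
specificity = tn / (tn + fp)                         # coverage of actual negative sample (TNR)
f1 = 2 * precision * recall / (precision + recall)   # hybrid metric for unbalanced classes
```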
- -**13. ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are summed up in the table below:** - -⟶ - -
- -**14. [Metric, Formula, Equivalent]** - -⟶ - -
- -**15. AUC ― The area under the receiver operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:** - -⟶ - -
- -**16. [Actual, Predicted]** - -⟶ - -
- -**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** - -⟶ - -
- -**18. [Total sum of squares, Explained sum of squares, Residual sum of squares]** - -⟶ - -
- -**19. Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** - -⟶ - -
- -**20. Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** - -⟶ - -
- -**21. where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.** - -⟶ - -
- -**22. Model selection** - -⟶ - -
- -**23. Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** - -⟶ - -
- -**24. [Training set, Validation set, Testing set]** - -⟶ - -
- -**25. [Model is trained, Model is assessed, Model gives predictions]** - -⟶ - -
- -**26. [Usually 80% of the dataset, Usually 20% of the dataset]** - -⟶ - -
- -**27. [Also called hold-out or development set, Unseen data]** - -⟶ - -
- -**28. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** - -⟶ - -
- -**29. Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** - -⟶ - -
- -**30. [Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** - -⟶ - -
- -**31. [Generally k=5 or 10, Case p=1 is called leave-one-out]** - -⟶ - -
- -**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.** - -⟶ - -
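The k-fold procedure described above can be sketched with plain index splitting; here k = 4, and the "model" is a one-parameter least-squares slope, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=20)
y = 2 * X + rng.normal(scale=0.1, size=20)   # toy regression data

k = 4
folds = np.array_split(rng.permutation(len(X)), k)
errors = []
for i in range(k):
    val_idx = folds[i]                                        # validate on fold i
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # train on the k-1 other folds (least-squares slope through the origin)
    slope = np.sum(X[train_idx] * y[train_idx]) / np.sum(X[train_idx] ** 2)
    errors.append(np.mean((slope * X[val_idx] - y[val_idx]) ** 2))

cv_error = np.mean(errors)   # error averaged over the k folds
```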
- -**33. Regularization ― The regularization procedure aims at preventing the model from overfitting the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:** - -⟶ - -
- -**34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** - -⟶ - -
- -**35. Diagnostics** - -⟶ - -
- -**36. Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.** - -⟶ - -
- -**37. Variance ― The variance of a model is the variability of the model prediction for given data points.** - -⟶ - -
- -**38. Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** - -⟶ - -
- -**39. [Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]** - -⟶ - -
- -**40. [High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]** - -⟶ - -
- -**41. [Complexify model, Add more features, Train longer, Perform regularization, Get more data]** - -⟶ - -
- -**42. Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** - -⟶ - -
- -**43. Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** - -⟶ - -
- -**44. Regression metrics** - -⟶ - -
- -**45. [Classification metrics, confusion matrix, accuracy, precision, recall, F1 score, ROC]** - -⟶ - -
- -**46. [Regression metrics, R squared, Mallow's CP, AIC, BIC]** - -⟶ - -
- -**47. [Model selection, cross-validation, regularization]** - -⟶ - -
- -**48. [Diagnostics, Bias/variance tradeoff, error/ablative analysis]** - -⟶ diff --git a/ru/cheatsheet-supervised-learning.md b/ru/cheatsheet-supervised-learning.md deleted file mode 100644 index a6b19ea1c..000000000 --- a/ru/cheatsheet-supervised-learning.md +++ /dev/null @@ -1,567 +0,0 @@ -**1. Supervised Learning cheatsheet** - -⟶ - -
- -**2. Introduction to Supervised Learning** - -⟶ - -
- -**3. Given a set of data points {x(1),...,x(m)} associated to a set of outcomes {y(1),...,y(m)}, we want to build a classifier that learns how to predict y from x.** - -⟶ - -
- -**4. Type of prediction ― The different types of predictive models are summed up in the table below:** - -⟶ - -
- -**5. [Regression, Classifier, Outcome, Examples]** - -⟶ - -
- -**6. [Continuous, Class, Linear regression, Logistic regression, SVM, Naive Bayes]** - -⟶ - -
- -**7. Type of model ― The different models are summed up in the table below:** - -⟶ - -
- -**8. [Discriminative model, Generative model, Goal, What's learned, Illustration, Examples]** - -⟶ - -
- -**9. [Directly estimate P(y|x), Estimate P(x|y) to then deduce P(y|x), Decision boundary, Probability distributions of the data, Regressions, SVMs, GDA, Naive Bayes]** - -⟶ - -
- -**10. Notations and general concepts** - -⟶ - -
- -**11. Hypothesis ― The hypothesis is noted hθ and is the model that we choose. For a given input data x(i) the model prediction output is hθ(x(i)).** - -⟶ - -
- -**12. Loss function ― A loss function is a function L:(z,y)∈R×Y⟼L(z,y)∈R that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:** - -⟶ - -
- -**13. [Least squared error, Logistic loss, Hinge loss, Cross-entropy]** - -⟶ - -
- -**14. [Linear regression, Logistic regression, SVM, Neural Network]** - -⟶ - -
- -**15. Cost function ― The cost function J is commonly used to assess the performance of a model, and is defined with the loss function L as follows:** - -⟶ - -
- -**16. Gradient descent ― By noting α∈R the learning rate, the update rule for gradient descent is expressed with the learning rate and the cost function J as follows:** - -⟶ - -
- -**17. Remark: Stochastic gradient descent (SGD) is updating the parameter based on each training example, and batch gradient descent is on a batch of training examples.** - -⟶ - -
- -**18. Likelihood ― The likelihood of a model L(θ) given parameters θ is used to find the optimal parameters θ through maximizing the likelihood. In practice, we use the log-likelihood ℓ(θ)=log(L(θ)) which is easier to optimize. We have:** - -⟶ - -
- -**19. Newton's algorithm ― The Newton's algorithm is a numerical method that finds θ such that ℓ′(θ)=0. Its update rule is as follows:** - -⟶ - -
- -**20. Remark: the multidimensional generalization, also known as the Newton-Raphson method, has the following update rule:** - -⟶ - -
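The one-dimensional Newton update θ ← θ − ℓ′(θ)/ℓ″(θ) can be sketched as follows; the concave quadratic ℓ used for the demo is an illustrative assumption.

```python
def newton(dl, d2l, theta=0.0, n_iter=20):
    """Find theta with dl(theta) = 0 via the Newton update rule."""
    for _ in range(n_iter):
        theta = theta - dl(theta) / d2l(theta)
    return theta

# maximize l(theta) = -(theta - 3)^2, so l'(theta) = -2(theta - 3), l'' = -2
root = newton(lambda t: -2 * (t - 3), lambda t: -2.0)
```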
- -**21. Linear models** - -⟶ - -
- -**22. Linear regression** - -⟶ - -
- -**23. We assume here that y|x;θ∼N(μ,σ2)** - -⟶ - -
- -**24. Normal equations ― By noting X the design matrix, the value of θ that minimizes the cost function is a closed-form solution such that:** - -⟶ - -
- -**25. LMS algorithm ― By noting α the learning rate, the update rule of the Least Mean Squares (LMS) algorithm for a training set of m data points, which is also known as the Widrow-Hoff learning rule, is as follows:** - -⟶ - -
- -**26. Remark: the update rule is a particular case of the gradient ascent.** - -⟶ - -
- -**27. LWR ― Locally Weighted Regression, also known as LWR, is a variant of linear regression that weights each training example in its cost function by w(i)(x), which is defined with parameter τ∈R as:** - -⟶ - -
- -**28. Classification and logistic regression** - -⟶ - -
- -**29. Sigmoid function ― The sigmoid function g, also known as the logistic function, is defined as follows:** - -⟶ - -
- -**30. Logistic regression ― We assume here that y|x;θ∼Bernoulli(ϕ). We have the following form:** - -⟶ - -
- -**31. Remark: there is no closed form solution for the case of logistic regressions.** - -⟶ - -
- -**32. Softmax regression ― A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. By convention, we set θK=0, which makes the Bernoulli parameter ϕi of each class i equal to:** - -⟶ - -
- -**33. Generalized Linear Models** - -⟶ - -
- -**34. Exponential family ― A class of distributions is said to be in the exponential family if it can be written in terms of a natural parameter, also called the canonical parameter or link function, η, a sufficient statistic T(y) and a log-partition function a(η) as follows:** - -⟶ - -
- -**35. Remark: we will often have T(y)=y. Also, exp(−a(η)) can be seen as a normalization parameter that will make sure that the probabilities sum to one.** - -⟶ - -
- -**36. Here are the most common exponential distributions summed up in the following table:** - -⟶ - -
- -**37. [Distribution, Bernoulli, Gaussian, Poisson, Geometric]** - -⟶ - -
- -**38. Assumptions of GLMs ― Generalized Linear Models (GLM) aim at predicting a random variable y as a function of x∈Rn+1 and rely on the following 3 assumptions:** - -⟶ - -
- -**39. Remark: ordinary least squares and logistic regression are special cases of generalized linear models.** - -⟶ - -
- -**40. Support Vector Machines** - -⟶ - -
- -**41. The goal of support vector machines is to find the line that maximizes the minimum distance to the line.** - -⟶ - -
- -**42. Optimal margin classifier ― The optimal margin classifier h is such that:** - -⟶ - -
- -**43. where (w,b)∈Rn×R is the solution of the following optimization problem:** - -⟶ - -
- -**44. such that** - -⟶ - -
- -**45. support vectors** - -⟶ - -
- -**46. Remark: the line is defined as wTx−b=0.** - -⟶ - -
- -**47. Hinge loss ― The hinge loss is used in the setting of SVMs and is defined as follows:** - -⟶ - -
- -**48. Kernel ― Given a feature mapping ϕ, we define the kernel K to be defined as:** - -⟶ - -
- -**49. In practice, the kernel K defined by K(x,z)=exp(−||x−z||22σ2) is called the Gaussian kernel and is commonly used.** - -⟶ - -
- -**50. [Non-linear separability, Use of a kernel mapping, Decision boundary in the original space]** - -⟶ - -
- -**51. Remark: we say that we use the "kernel trick" to compute the cost function using the kernel because we actually don't need to know the explicit mapping ϕ, which is often very complicated. Instead, only the values K(x,z) are needed.** - -⟶ - -
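The Gaussian kernel mentioned above, K(x,z) = exp(−‖x−z‖²/(2σ²)), can be sketched directly; σ = 1 is an illustrative choice.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    diff = np.asarray(x, dtype=float) - np.asarray(z, dtype=float)
    return np.exp(-np.sum(diff ** 2) / (2 * sigma ** 2))
```

As the remark notes, only values K(x,z) like these are ever needed; the feature map ϕ itself (infinite-dimensional for this kernel) is never computed.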
- -**52. Lagrangian ― We define the Lagrangian L(w,b) as follows:** - -⟶ - -
- -**53. Remark: the coefficients βi are called the Lagrange multipliers.** - -⟶ - -
- -**54. Generative Learning** - -⟶ - -
- -**55. A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.** - -⟶ - -
- -**56. Gaussian Discriminant Analysis** - -⟶ - -
- -**57. Setting ― The Gaussian Discriminant Analysis assumes that y and x|y=0 and x|y=1 are such that:** - -⟶ - -
- -**58. Estimation ― The following table sums up the estimates that we find when maximizing the likelihood:** - -⟶ - -
- -**59. Naive Bayes** - -⟶ - -
- -**60. Assumption ― The Naive Bayes model supposes that the features of each data point are all independent:** - -⟶ - -
- -**61. Solutions ― Maximizing the log-likelihood gives the following solutions, with k∈{0,1},l∈[[1,L]]** - -⟶ - -
- -**62. Remark: Naive Bayes is widely used for text classification and spam detection.** - -⟶ - -
- -**63. Tree-based and ensemble methods** - -⟶ - -
- -**64. These methods can be used for both regression and classification problems.** - -⟶ - -
- -**65. CART ― Classification and Regression Trees (CART), commonly known as decision trees, can be represented as binary trees. They have the advantage of being very interpretable.** - -⟶ - -
- -**66. Random forest ― It is a tree-based technique that uses a high number of decision trees built out of randomly selected sets of features. Contrary to the simple decision tree, it is far less interpretable, but its generally good performance makes it a popular algorithm.** - -⟶ - -
- -**67. Remark: random forests are a type of ensemble methods.** - -⟶ - -
- -**68. Boosting ― The idea of boosting methods is to combine several weak learners to form a stronger one. The main ones are summed up in the table below:** - -⟶ - -
- -**69. [Adaptive boosting, Gradient boosting]** - -⟶ - -
- -**70. High weights are put on errors to improve at the next boosting step** - -⟶ - -
- -**71. Weak learners trained on remaining errors** - -⟶ - -
- -**72. Other non-parametric approaches** - -⟶ - -
- -**73. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** - -⟶ - -
- -**74. Remark: The higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.** - -⟶ - -
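A minimal k-NN classifier sketch using Euclidean distance and majority vote; the tiny dataset and k = 3 are illustrative assumptions.

```python
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Predict the label of x from the majority vote of its k nearest neighbors."""
    dists = [sum((a - b) ** 2 for a, b in zip(p, x)) for p in X_train]
    nearest = sorted(range(len(dists)), key=dists.__getitem__)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

X_train = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y_train = ["a", "a", "a", "b", "b", "b"]
```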
- -**75. Learning Theory** - -⟶ - -
- -**76. Union bound ― Let A1,...,Ak be k events. We have:** - -⟶ - -
- -**77. Hoeffding inequality ― Let Z1,..,Zm be m iid variables drawn from a Bernoulli distribution of parameter ϕ. Let ˆϕ be their sample mean and γ>0 fixed. We have:** - -⟶ - -
- -**78. Remark: this inequality is also known as the Chernoff bound.** - -⟶ - -
- -**79. Training error ― For a given classifier h, we define the training error ˆϵ(h), also known as the empirical risk or empirical error, to be as follows:** - -⟶ - -
- -**80. Probably Approximately Correct (PAC) ― PAC is a framework under which numerous results on learning theory were proved, and has the following set of assumptions:** - -⟶ - -
- -**81. the training and testing sets follow the same distribution** - -⟶ - -
- -**82. the training examples are drawn independently** - -⟶ - -
- -**83. Shattering ― Given a set S={x(1),...,x(d)}, and a set of classifiers H, we say that H shatters S if for any set of labels {y(1),...,y(d)}, we have:** - -⟶ - -
- -**84. Upper bound theorem ― Let H be a finite hypothesis class such that |H|=k and let δ and the sample size m be fixed. Then, with probability of at least 1−δ, we have:** - -⟶ - -
- -**85. VC dimension ― The Vapnik-Chervonenkis (VC) dimension of a given infinite hypothesis class H, noted VC(H) is the size of the largest set that is shattered by H.** - -⟶ - -
- -**86. Remark: the VC dimension of H={set of linear classifiers in 2 dimensions} is 3.** - -⟶ - -
- -**87. Theorem (Vapnik) ― Let H be given, with VC(H)=d and m the number of training examples. With probability at least 1−δ, we have:** - -⟶ - -
- -**88. [Introduction, Type of prediction, Type of model]** - -⟶ - -
- -**89. [Notations and general concepts, loss function, gradient descent, likelihood]** - -⟶ - -
- -**90. [Linear models, linear regression, logistic regression, generalized linear models]** - -⟶ - -
- -**91. [Support vector machines, Optimal margin classifier, Hinge loss, Kernel]** - -⟶ - -
- -**92. [Generative learning, Gaussian Discriminant Analysis, Naive Bayes]** - -⟶ - -
- -**93. [Trees and ensemble methods, CART, Random forest, Boosting]** - -⟶ - -
- -**94. [Other methods, k-NN]** - -⟶ - -
- -**95. [Learning theory, Hoeffding inequality, PAC, VC dimension]** - -⟶ diff --git a/ru/cheatsheet-unsupervised-learning.md b/ru/cheatsheet-unsupervised-learning.md deleted file mode 100644 index e18b3f50f..000000000 --- a/ru/cheatsheet-unsupervised-learning.md +++ /dev/null @@ -1,340 +0,0 @@ -**1. Unsupervised Learning cheatsheet** - -⟶ - -
- -**2. Introduction to Unsupervised Learning** - -⟶ - -
- -**3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** - -⟶ - -
- -**4. Jensen's inequality ― Let f be a convex function and X a random variable. We have the following inequality:** - -⟶ - -
- -**5. Clustering** - -⟶ - -
- -**6. Expectation-Maximization** - -⟶ - -
- -**7. Latent variables ― Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Here are the most common settings where there are latent variables:** - -⟶ - -
- -**8. [Setting, Latent variable z, Comments]** - -⟶ - -
- -**9. [Mixture of k Gaussians, Factor analysis]** - -⟶ - -
- -**10. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:** - -⟶ - -
- -**11. E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:** - -⟶ - -
- -**12. M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:** - -⟶ - -
- -**13. [Gaussians initialization, Expectation step, Maximization step, Convergence]** - -⟶ - -
- -**14. k-means clustering** - -⟶ - -
- -**15. We note c(i) the cluster of data point i and μj the center of cluster j.** - -⟶ - -
- -**16. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** - -⟶ - -
- -**17. [Means initialization, Cluster assignment, Means update, Convergence]** - -⟶ - -
- -**18. Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:** - -⟶ - -
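The k-means loop (assignment step, means update) and the distortion function can be sketched as follows. The toy data and k = 2 are illustrative; for a reproducible demo the two centroids are seeded in different regions rather than fully at random.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, size=(20, 2)),    # toy cluster around (0, 0)
               rng.normal(5, 0.3, size=(20, 2))])   # toy cluster around (5, 5)
mu = X[[0, 20]].copy()                              # centroid initialization

for _ in range(20):
    # cluster assignment: each point joins its nearest centroid
    c = np.argmin(((X[:, None] - mu[None]) ** 2).sum(-1), axis=1)
    # means update: each centroid moves to the mean of its points
    mu = np.array([X[c == j].mean(axis=0) for j in range(2)])

distortion = ((X - mu[c]) ** 2).sum()               # distortion function J(c, mu)
```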
- -**19. Hierarchical clustering** - -⟶ - -
- -**20. Algorithm ― It is a clustering algorithm with an agglomerative hierarchical approach that builds nested clusters in a successive manner.** - -⟶ - -
- -**21. Types ― There are different sorts of hierarchical clustering algorithms that aim at optimizing different objective functions, which are summed up in the table below:** - -⟶ - -
- -**22. [Ward linkage, Average linkage, Complete linkage]** - -⟶ - -
- -**23. [Minimize within cluster distance, Minimize average distance between cluster pairs, Minimize maximum distance between cluster pairs]** - -⟶ - -
- -**24. Clustering assessment metrics** - -⟶ - -
- -**25. In an unsupervised learning setting, it is often hard to assess the performance of a model since we don't have the ground truth labels as was the case in the supervised learning setting.** - -⟶ - -
- -**26. Silhouette coefficient ― By noting a and b the mean distance between a sample and all other points in the same class, and between a sample and all other points in the next nearest cluster, the silhouette coefficient s for a single sample is defined as follows:** - -⟶ - -
- -**27. Calinski-Harabaz index ― By noting k the number of clusters, Bk and Wk the between and within-clustering dispersion matrices respectively defined as** - -⟶ - -
- -**28. the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:** - -⟶ - -
- -**29. Dimension reduction** - -⟶ - -
- -**30. Principal component analysis** - -⟶ - -
- -**31. It is a dimension reduction technique that finds the variance-maximizing directions onto which to project the data.** - -⟶ - -
- -**32. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** - -⟶ - -
- -**33. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** - -⟶ - -
- -**34. diagonal** - -⟶ - -
- -**35. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** - -⟶ - -
- -**36. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k -dimensions by maximizing the variance of the data as follows:** - -⟶ - -
- -**37. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** - -⟶ - -
- -**38. Step 2: Compute Σ=1mm∑i=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.** - -⟶ - -
- -**39. Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.** - -⟶ - -
- -**40. Step 4: Project the data on spanR(u1,...,uk).** - -⟶ - -
- -**41. This procedure maximizes the variance among all k-dimensional spaces.** - -⟶ - -
- -**42. [Data in feature space, Find principal components, Data in principal components space]** - -⟶ - -
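The four PCA steps above can be sketched end to end (here with k = 1); the correlated toy data is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(size=100)
X = np.column_stack([t, 2 * t + rng.normal(scale=0.1, size=100)])  # correlated toy data

X = (X - X.mean(axis=0)) / X.std(axis=0)         # Step 1: normalize to mean 0, std 1
Sigma = X.T @ X / len(X)                         # Step 2: symmetric covariance matrix
eigvals, eigvecs = np.linalg.eigh(Sigma)         # real eigenpairs (Sigma is symmetric)
U = eigvecs[:, np.argsort(eigvals)[::-1][:1]]    # Step 3: k principal eigenvectors
Z = X @ U                                        # Step 4: project on span(u1, ..., uk)
```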
- -**43. Independent component analysis** - -⟶ - -
- -**44. It is a technique meant to find the underlying generating sources.** - -⟶ - -
- -**45. Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:** - -⟶ - -
- -**46. The goal is to find the unmixing matrix W=A−1.** - -⟶ - -
- -**47. Bell and Sejnowski ICA algorithm ― This algorithm finds the unmixing matrix W by following the steps below:** - -⟶ - -
- -**48. Write the probability of x=As=W−1s as:** - -⟶ - -
- -**49. Write the log likelihood given our training data {x(i),i∈[[1,m]]} and by noting g the sigmoid function as:** - -⟶ - -
- -**50. Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:** - -⟶ - -
- -**51. The Machine Learning cheatsheets are now available in Russian.** - -⟶ - -
- -**52. Original authors** - -⟶ - -
- -**53. Translated by X, Y and Z** - -⟶ - -
- -**54. Reviewed by X, Y and Z** - -⟶ - -
- -**55. [Introduction, Motivation, Jensen's inequality]** - -⟶ - -
- -**56. [Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]** - -⟶ - -
- -**57. [Dimension reduction, PCA, ICA]** - -⟶ diff --git a/ru/refresher-linear-algebra.md b/ru/refresher-linear-algebra.md deleted file mode 100644 index a6b440d1e..000000000 --- a/ru/refresher-linear-algebra.md +++ /dev/null @@ -1,339 +0,0 @@ -**1. Linear Algebra and Calculus refresher** - -⟶ - -
- -**2. General notations** - -⟶ - -
- -**3. Definitions** - -⟶ - -
- -**4. Vector ― We note x∈Rn a vector with n entries, where xi∈R is the ith entry:** - -⟶ - -
- -**5. Matrix ― We note A∈Rm×n a matrix with m rows and n columns, where Ai,j∈R is the entry located in the ith row and jth column:** - -⟶ - -
- -**6. Remark: the vector x defined above can be viewed as a n×1 matrix and is more particularly called a column-vector.** - -⟶ - -
- -**7. Main matrices** - -⟶ - -
- -**8. Identity matrix ― The identity matrix I∈Rn×n is a square matrix with ones in its diagonal and zero everywhere else:** - -⟶ - -
- -**9. Remark: for all matrices A∈Rn×n, we have A×I=I×A=A.** - -⟶ - -
- -**10. Diagonal matrix ― A diagonal matrix D∈Rn×n is a square matrix with nonzero values in its diagonal and zero everywhere else:** - -⟶ - -
- -**11. Remark: we also note D as diag(d1,...,dn).** - -⟶ - -
- -**12. Matrix operations** - -⟶ - -
- -**13. Multiplication** - -⟶ - -
- -**14. Vector-vector ― There are two types of vector-vector products:** - -⟶ - -
- -**15. inner product: for x,y∈Rn, we have:** - -⟶ - -
- -**16. outer product: for x∈Rm,y∈Rn, we have:** - -⟶ - -
- -**17. Matrix-vector ― The product of matrix A∈Rm×n and vector x∈Rn is a vector of size Rm, such that:** - -⟶ - -
- -**18. where aTr,i are the vector rows and ac,j are the vector columns of A, and xi are the entries of x.** - -⟶ - -
- -**19. Matrix-matrix ― The product of matrices A∈Rm×n and B∈Rn×p is a matrix of size Rm×p, such that:** - -⟶ - -
- -**20. where aTr,i,bTr,i are the vector rows and ac,j,bc,j are the vector columns of A and B respectively** - -⟶ - -
- -**21. Other operations** - -⟶ - -
- -**22. Transpose ― The transpose of a matrix A∈Rm×n, noted AT, is such that its entries are flipped:** - -⟶ - -
- -**23. Remark: for matrices A,B, we have (AB)T=BTAT** - -⟶ - -
- -**24. Inverse ― The inverse of an invertible square matrix A is noted A−1 and is the only matrix such that:** - -⟶ - -
- -**25. Remark: not all square matrices are invertible. Also, for matrices A,B, we have (AB)−1=B−1A−1** - -⟶ - -
- -**26. Trace ― The trace of a square matrix A, noted tr(A), is the sum of its diagonal entries:** - -⟶ - -
- -**27. Remark: for matrices A,B, we have tr(AT)=tr(A) and tr(AB)=tr(BA)** - -⟶ - -
- -**28. Determinant ― The determinant of a square matrix A∈Rn×n, noted |A| or det(A) is expressed recursively in terms of A∖i,∖j, which is the matrix A without its ith row and jth column, as follows:** - -⟶ - -
- -**29. Remark: A is invertible if and only if |A|≠0. Also, |AB|=|A||B| and |AT|=|A|.** - -⟶ - -
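The transpose, trace, and determinant identities stated above can be checked numerically on random matrices (a sanity-check sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))

ok_transpose = np.allclose((A @ B).T, B.T @ A.T)          # (AB)^T = B^T A^T
ok_trace = np.allclose(np.trace(A @ B), np.trace(B @ A))  # tr(AB) = tr(BA)
ok_det = np.allclose(np.linalg.det(A @ B),                # |AB| = |A||B|
                     np.linalg.det(A) * np.linalg.det(B))
```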
- -**30. Matrix properties** - -⟶ - -
- -**31. Definitions** - -⟶ - -
- -**32. Symmetric decomposition ― A given matrix A can be expressed in terms of its symmetric and antisymmetric parts as follows:** - -⟶ - -
- -**33. [Symmetric, Antisymmetric]** - -⟶ - -
- -**34. Norm ― A norm is a function N:V⟶[0,+∞[ where V is a vector space, and such that for all x,y∈V, we have:** - -⟶ - -
- -**35. N(ax)=|a|N(x) for a scalar** - -⟶ - -
- -**36. if N(x)=0, then x=0** - -⟶ - -
- -**37. For x∈V, the most commonly used norms are summed up in the table below:** - -⟶ - -
- -**38. [Norm, Notation, Definition, Use case]** - -⟶ - -
- -**39. Linear dependence ― A set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others.** - -⟶ - -
- -**40. Remark: if no vector can be written this way, then the vectors are said to be linearly independent** - -⟶ - -
- -**41. Matrix rank ― The rank of a given matrix A is noted rank(A) and is the dimension of the vector space generated by its columns. This is equivalent to the maximum number of linearly independent columns of A.** - -⟶ - -
- -**42. Positive semi-definite matrix ― A matrix A∈Rn×n is positive semi-definite (PSD) and is noted A⪰0 if we have:** - -⟶ - -
- -**43. Remark: similarly, a matrix A is said to be positive definite, and is noted A≻0, if it is a PSD matrix which satisfies for all non-zero vector x, xTAx>0.** - -⟶ - -
- -**44. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** - -⟶ - -
- -**45. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** - -⟶ - -
- -**46. diagonal** - -⟶ - -
- -**47. Singular-value decomposition ― For a given matrix A of dimensions m×n, the singular-value decomposition (SVD) is a factorization technique that guarantees the existence of U m×m unitary, Σ m×n diagonal and V n×n unitary matrices, such that:** - -⟶ - -
- -**48. Matrix calculus** - -⟶ - -
- -**49. Gradient ― Let f:Rm×n→R be a function and A∈Rm×n be a matrix. The gradient of f with respect to A is a m×n matrix, noted ∇Af(A), such that:** - -⟶ - -
- -**50. Remark: the gradient of f is only defined when f is a function that returns a scalar.** - -⟶ - -
- -**51. Hessian ― Let f:Rn→R be a function and x∈Rn be a vector. The hessian of f with respect to x is a n×n symmetric matrix, noted ∇2xf(x), such that:** - -⟶ - -
- -**52. Remark: the hessian of f is only defined when f is a function that returns a scalar** - -⟶ - -
- -**53. Gradient operations ― For matrices A,B,C, the following gradient properties are worth having in mind:** - -⟶ - -
- -**54. [General notations, Definitions, Main matrices]** - -⟶ - -
- -**55. [Matrix operations, Multiplication, Other operations]** - -⟶ - -
- -**56. [Matrix properties, Norm, Eigenvalue/Eigenvector, Singular-value decomposition]** - -⟶ - -
- -**57. [Matrix calculus, Gradient, Hessian, Operations]** - -⟶ diff --git a/ru/refresher-probability.md b/ru/refresher-probability.md deleted file mode 100644 index 5c9b34656..000000000 --- a/ru/refresher-probability.md +++ /dev/null @@ -1,381 +0,0 @@ -**1. Probabilities and Statistics refresher** - -⟶ - -
- -**2. Introduction to Probability and Combinatorics** - -⟶ - -
- -**3. Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by S.** - -⟶ - -
- -**4. Event ― Any subset E of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.** - -⟶ - -
-
-**5. Axioms of probability ― For each event E, we denote P(E) as the probability of event E occurring.**
-
-⟶
-
-
-
-**6. Axiom 1 ― Every probability is between 0 and 1 inclusive, i.e:**
-
-⟶
-
-
- -**7. Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** - -⟶ - -
- -**8. Axiom 3 ― For any sequence of mutually exclusive events E1,...,En, we have:** - -⟶ - -
- -**9. Permutation ― A permutation is an arrangement of r objects from a pool of n objects, in a given order. The number of such arrangements is given by P(n,r), defined as:** - -⟶ - -
- -**10. Combination ― A combination is an arrangement of r objects from a pool of n objects, where the order does not matter. The number of such arrangements is given by C(n,r), defined as:** - -⟶ - -
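As a quick numerical check of the two counting formulas above, here is a minimal Python sketch (the function names are ours):

```python
import math

# P(n, r) = n! / (n - r)!  (ordered arrangements)
def permutations(n, r):
    return math.factorial(n) // math.factorial(n - r)

# C(n, r) = n! / (r! (n - r)!)  (order does not matter)
def combinations(n, r):
    return permutations(n, r) // math.factorial(r)

print(permutations(5, 2))  # 20
print(combinations(5, 2))  # 10
```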
- -**11. Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)** - -⟶ - -
- -**12. Conditional Probability** - -⟶ - -
- -**13. Bayes' rule ― For events A and B such that P(B)>0, we have:** - -⟶ - -
- -**14. Remark: we have P(A∩B)=P(A)P(B|A)=P(A|B)P(B)** - -⟶ - -
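Bayes' rule can be checked numerically; the sketch below uses illustrative numbers (1% prevalence, 99% sensitivity, 5% false-positive rate) that are not from the text:

```python
# Illustrative numbers, not from the cheatsheet:
p_a = 0.01            # P(A): prior
p_b_given_a = 0.99    # P(B|A)
p_b_given_not_a = 0.05

# P(B) via the law of total probability
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
# Bayes' rule: P(A|B) = P(B|A) P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))  # 0.1667
```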
- -**15. Partition ― Let {Ai,i∈[[1,n]]} be such that for all i, Ai≠∅. We say that {Ai} is a partition if we have:** - -⟶ - -
- -**16. Remark: for any event B in the sample space, we have P(B)=n∑i=1P(B|Ai)P(Ai).** - -⟶ - -
- -**17. Extended form of Bayes' rule ― Let {Ai,i∈[[1,n]]} be a partition of the sample space. We have:** - -⟶ - -
- -**18. Independence ― Two events A and B are independent if and only if we have:** - -⟶ - -
- -**19. Random Variables** - -⟶ - -
- -**20. Definitions** - -⟶ - -
- -**21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.** - -⟶ - -
- -**22. Cumulative distribution function (CDF) ― The cumulative distribution function F, which is monotonically non-decreasing and is such that limx→−∞F(x)=0 and limx→+∞F(x)=1, is defined as:** - -⟶ - -
-
-**23. Remark: we have P(a<X⩽b)=F(b)−F(a).**
-
-⟶
-
-
-
-**24. Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.**
-
-⟶
-
-
- -**25. Relationships involving the PDF and CDF ― Here are the important properties to know in the discrete (D) and the continuous (C) cases.** - -⟶ - -
- -**26. [Case, CDF F, PDF f, Properties of PDF]** - -⟶ - -
- -**27. Expectation and Moments of the Distribution ― Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[Xk] and characteristic function ψ(ω) for the discrete and continuous cases:** - -⟶ - -
- -**28. Variance ― The variance of a random variable, often noted Var(X) or σ2, is a measure of the spread of its distribution function. It is determined as follows:** - -⟶ - -
- -**29. Standard deviation ― The standard deviation of a random variable, often noted σ, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:** - -⟶ - -
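The two definitions above can be checked with a short sketch using the standard identity Var(X)=E[X2]−E[X]2 on a fair die (a known identity, not quoted from the text):

```python
def expectation(values, probs):
    return sum(v * p for v, p in zip(values, probs))

def variance(values, probs):
    mu = expectation(values, probs)
    # Var(X) = E[X^2] - E[X]^2
    return expectation([v * v for v in values], probs) - mu ** 2

# Fair six-sided die: E[X] = 3.5, Var(X) = 35/12
vals, probs = [1, 2, 3, 4, 5, 6], [1 / 6] * 6
print(round(expectation(vals, probs), 4))        # 3.5
print(round(variance(vals, probs), 4))           # 2.9167
print(round(variance(vals, probs) ** 0.5, 4))    # sigma ≈ 1.7078
```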
- -**30. Transformation of random variables ― Let the variables X and Y be linked by some function. By noting fX and fY the distribution function of X and Y respectively, we have:** - -⟶ - -
- -**31. Leibniz integral rule ― Let g be a function of x and potentially c, and a,b boundaries that may depend on c. We have:** - -⟶ - -
- -**32. Probability Distributions** - -⟶ - -
- -**33. Chebyshev's inequality ― Let X be a random variable with expected value μ. For k,σ>0, we have the following inequality:** - -⟶ - -
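Chebyshev's inequality can be illustrated empirically; the sketch below assumes a standard Gaussian purely for concreteness (any distribution with finite variance would do):

```python
import random

# Empirical check of P(|X - mu| >= k*sigma) <= 1/k^2
random.seed(0)
mu, sigma, k = 0.0, 1.0, 2.0
n = 100_000
samples = [random.gauss(mu, sigma) for _ in range(n)]

p_emp = sum(abs(x - mu) >= k * sigma for x in samples) / n
bound = 1 / k ** 2
print(p_emp <= bound)  # True: the (loose) bound holds
```

For a Gaussian the true probability is about 0.046, well below the Chebyshev bound of 0.25, which illustrates how conservative the bound is.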
- -**34. Main distributions ― Here are the main distributions to have in mind:** - -⟶ - -
- -**35. [Type, Distribution]** - -⟶ - -
- -**36. Jointly Distributed Random Variables** - -⟶ - -
- -**37. Marginal density and cumulative distribution ― From the joint density probability function fXY , we have** - -⟶ - -
- -**38. [Case, Marginal density, Cumulative function]** - -⟶ - -
- -**39. Conditional density ― The conditional density of X with respect to Y, often noted fX|Y, is defined as follows:** - -⟶ - -
- -**40. Independence ― Two random variables X and Y are said to be independent if we have:** - -⟶ - -
- -**41. Covariance ― We define the covariance of two random variables X and Y, that we note σ2XY or more commonly Cov(X,Y), as follows:** - -⟶ - -
- -**42. Correlation ― By noting σX,σY the standard deviations of X and Y, we define the correlation between the random variables X and Y, noted ρXY, as follows:** - -⟶ - -
- -**43. Remark 1: we note that for any random variables X,Y, we have ρXY∈[−1,1].** - -⟶ - -
- -**44. Remark 2: If X and Y are independent, then ρXY=0.** - -⟶ - -
- -**45. Parameter estimation** - -⟶ - -
- -**46. Definitions** - -⟶ - -
- -**47. Random sample ― A random sample is a collection of n random variables X1,...,Xn that are independent and identically distributed with X.** - -⟶ - -
- -**48. Estimator ― An estimator is a function of the data that is used to infer the value of an unknown parameter in a statistical model.** - -⟶ - -
-
-**49. Bias ― The bias of an estimator ^θ is defined as the difference between the expected value of the distribution of ^θ and the true value, i.e.:**
-
-⟶
-
-
- -**50. Remark: an estimator is said to be unbiased when we have E[^θ]=θ.** - -⟶ - -
- -**51. Estimating the mean** - -⟶ - -
- -**52. Sample mean ― The sample mean of a random sample is used to estimate the true mean μ of a distribution, is often noted ¯¯¯¯¯X and is defined as follows:** - -⟶ - -
-
-**53. Remark: the sample mean is unbiased, i.e. E[¯¯¯¯¯X]=μ.**
-
-⟶
-
-
- -**54. Central Limit Theorem ― Let us have a random sample X1,...,Xn following a given distribution with mean μ and variance σ2, then we have:** - -⟶ - -
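A small simulation sketch of the CLT with Uniform(0,1) samples (the distribution and sample sizes are illustrative): the sample means concentrate around μ=1/2 with spread σ/√n, where σ2=1/12.

```python
import random
import statistics

random.seed(1)
n, trials = 50, 2000
# 2000 independent sample means of n Uniform(0,1) draws each
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(trials)]

theoretical_spread = (1 / 12) ** 0.5 / n ** 0.5
print(round(statistics.fmean(means), 2))   # close to 0.5
print(round(theoretical_spread, 3))        # ≈ 0.041
```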
- -**55. Estimating the variance** - -⟶ - -
-
-**57. Remark: the sample variance is unbiased, i.e. E[s2]=σ2.**
-
-⟶
-
-
- -**57. Remark: the sample variance is unbiased, i.e E[s2]=σ2.** - -⟶ - -
- -**58. Chi-Squared relation with sample variance ― Let s2 be the sample variance of a random sample. We have:** - -⟶ - -
- -**59. [Introduction, Sample space, Event, Permutation]** - -⟶ - -
- -**60. [Conditional probability, Bayes' rule, Independence]** - -⟶ - -
- -**61. [Random variables, Definitions, Expectation, Variance]** - -⟶ - -
- -**62. [Probability distributions, Chebyshev's inequality, Main distributions]** - -⟶ - -
- -**63. [Jointly distributed random variables, Density, Covariance, Correlation]** - -⟶ - -
- -**64. [Parameter estimation, Mean, Variance]** - -⟶ diff --git a/template/cheatsheet-deep-learning.md b/template/cheatsheet-deep-learning.md deleted file mode 100644 index a5aa3756c..000000000 --- a/template/cheatsheet-deep-learning.md +++ /dev/null @@ -1,321 +0,0 @@ -**1. Deep Learning cheatsheet** - -⟶ - -
- -**2. Neural Networks** - -⟶ - -
- -**3. Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.** - -⟶ - -
- -**4. Architecture ― The vocabulary around neural networks architectures is described in the figure below:** - -⟶ - -
- -**5. [Input layer, hidden layer, output layer]** - -⟶ - -
- -**6. By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:** - -⟶ - -
- -**7. where we note w, b, z the weight, bias and output respectively.** - -⟶ - -
- -**8. Activation function ― Activation functions are used at the end of a hidden unit to introduce non-linear complexities to the model. Here are the most common ones:** - -⟶ - -
- -**9. [Sigmoid, Tanh, ReLU, Leaky ReLU]** - -⟶ - -
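A minimal sketch of these four activation functions (scalar versions; the Leaky ReLU slope α is a user-chosen constant):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def tanh(x):
    return math.tanh(x)

def relu(x):
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # alpha is the (user-chosen) slope for negative inputs
    return x if x > 0 else alpha * x

print(sigmoid(0.0))                    # 0.5
print(relu(-3.0))                      # 0.0
print(round(leaky_relu(-3.0), 4))      # -0.03
```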
- -**10. Cross-entropy loss ― In the context of neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** - -⟶ - -
- -**11. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. This can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.** - -⟶ - -
- -**12. Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to weight w is computed using chain rule and is of the following form:** - -⟶ - -
- -**13. As a result, the weight is updated as follows:** - -⟶ - -
- -**14. Updating weights ― In a neural network, weights are updated as follows:** - -⟶ - -
- -**15. Step 1: Take a batch of training data.** - -⟶ - -
- -**16. Step 2: Perform forward propagation to obtain the corresponding loss.** - -⟶ - -
- -**17. Step 3: Backpropagate the loss to get the gradients.** - -⟶ - -
- -**18. Step 4: Use the gradients to update the weights of the network.** - -⟶ - -
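The four steps above can be sketched on a one-weight linear model trained with squared loss (the dataset and learning rate are made up for illustration):

```python
# Toy sketch of the four steps for the model y_hat = w * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated by y = 2x
w, alpha = 0.0, 0.05  # initial weight, learning rate

for _ in range(200):
    batch = data  # Step 1: take a batch of training data (full batch here)
    # Step 2: forward propagation to obtain the corresponding loss
    loss = sum((w * x - y) ** 2 for x, y in batch) / len(batch)
    # Step 3: backpropagate the loss to get the gradient dL/dw
    grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    # Step 4: use the gradient to update the weight
    w -= alpha * grad

print(round(w, 3))  # 2.0
```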
-
-**19. Dropout ― Dropout is a technique meant to prevent overfitting the training data by dropping out units in a neural network. In practice, neurons are either dropped with probability p or kept with probability 1−p.**
-
-⟶
-
-
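A minimal sketch of dropout; the rescaling of kept units by 1/(1−p), known as inverted dropout, is a common implementation choice that goes beyond the text above:

```python
import random

random.seed(0)

def dropout(activations, p):
    # Drop each unit with probability p; rescale kept units by 1/(1-p)
    # so the expected activation is unchanged (inverted dropout).
    return [0.0 if random.random() < p else a / (1 - p) for a in activations]

out = dropout([1.0, 2.0, 3.0, 4.0], p=0.5)
print(out)  # each unit is either dropped (0.0) or kept and doubled
```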
- -**20. Convolutional Neural Networks** - -⟶ - -
-
-**21. Convolutional layer requirement ― By noting W the input volume size, F the size of the convolutional layer neurons and P the amount of zero padding, the number of neurons N that fit in a given volume is such that:**
-
-⟶
-
-
-
-**22. Batch normalization ― It is a step, with hyperparameters γ,β, that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of the batch that we want to correct, it is done as follows:**
-
-⟶
-
-
- -**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.** - -⟶ - -
- -**24. Recurrent Neural Networks** - -⟶ - -
- -**25. Types of gates ― Here are the different types of gates that we encounter in a typical recurrent neural network:** - -⟶ - -
- -**26. [Input gate, forget gate, gate, output gate]** - -⟶ - -
- -**27. [Write to cell or not?, Erase a cell or not?, How much to write to cell?, How much to reveal cell?]** - -⟶ - -
- -**28. LSTM ― A long short-term memory (LSTM) network is a type of RNN model that avoids the vanishing gradient problem by adding 'forget' gates.** - -⟶ - -
- -**29. Reinforcement Learning and Control** - -⟶ - -
- -**30. The goal of reinforcement learning is for an agent to learn how to evolve in an environment.** - -⟶ - -
- -**31. Definitions** - -⟶ - -
- -**32. Markov decision processes ― A Markov decision process (MDP) is a 5-tuple (S,A,{Psa},γ,R) where:** - -⟶ - -
- -**33. S is the set of states** - -⟶ - -
- -**34. A is the set of actions** - -⟶ - -
- -**35. {Psa} are the state transition probabilities for s∈S and a∈A** - -⟶ - -
- -**36. γ∈[0,1[ is the discount factor** - -⟶ - -
- -**37. R:S×A⟶R or R:S⟶R is the reward function that the algorithm wants to maximize** - -⟶ - -
- -**38. Policy ― A policy π is a function π:S⟶A that maps states to actions.** - -⟶ - -
- -**39. Remark: we say that we execute a given policy π if given a state s we take the action a=π(s).** - -⟶ - -
- -**40. Value function ― For a given policy π and a given state s, we define the value function Vπ as follows:** - -⟶ - -
-
-**41. Bellman equation ― The optimal Bellman equations characterize the value function Vπ∗ of the optimal policy π∗:**
-
-⟶
-
-
- -**42. Remark: we note that the optimal policy π∗ for a given state s is such that:** - -⟶ - -
- -**43. Value iteration algorithm ― The value iteration algorithm is in two steps:** - -⟶ - -
- -**44. 1) We initialize the value:** - -⟶ - -
- -**45. 2) We iterate the value based on the values before:** - -⟶ - -
- -**46. Maximum likelihood estimate ― The maximum likelihood estimates for the state transition probabilities are as follows:** - -⟶ - -
- -**47. times took action a in state s and got to s′** - -⟶ - -
- -**48. times took action a in state s** - -⟶ - -
- -**49. Q-learning ― Q-learning is a model-free estimation of Q, which is done as follows:** - -⟶ - -
- -**50. View PDF version on GitHub** - -⟶ - -
- -**51. [Neural Networks, Architecture, Activation function, Backpropagation, Dropout]** - -⟶ - -
- -**52. [Convolutional Neural Networks, Convolutional layer, Batch normalization]** - -⟶ - -
- -**53. [Recurrent Neural Networks, Gates, LSTM]** - -⟶ - -
- -**54. [Reinforcement learning, Markov decision processes, Value/policy iteration, Approximate dynamic programming, Policy search]** - -⟶ diff --git a/template/cheatsheet-machine-learning-tips-and-tricks.md b/template/cheatsheet-machine-learning-tips-and-tricks.md deleted file mode 100644 index 9712297b8..000000000 --- a/template/cheatsheet-machine-learning-tips-and-tricks.md +++ /dev/null @@ -1,285 +0,0 @@ -**1. Machine Learning tips and tricks cheatsheet** - -⟶ - -
- -**2. Classification metrics** - -⟶ - -
- -**3. In a context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** - -⟶ - -
- -**4. Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:** - -⟶ - -
- -**5. [Predicted class, Actual class]** - -⟶ - -
- -**6. Main metrics ― The following metrics are commonly used to assess the performance of classification models:** - -⟶ - -
- -**7. [Metric, Formula, Interpretation]** - -⟶ - -
- -**8. Overall performance of model** - -⟶ - -
- -**9. How accurate the positive predictions are** - -⟶ - -
- -**10. Coverage of actual positive sample** - -⟶ - -
- -**11. Coverage of actual negative sample** - -⟶ - -
- -**12. Hybrid metric useful for unbalanced classes** - -⟶ - -
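These metrics can be computed directly from confusion-matrix counts; the numbers below are illustrative:

```python
def classification_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall performance
    precision = tp / (tp + fp)                   # accuracy of positive predictions
    recall = tp / (tp + fn)                      # coverage of actual positives
    specificity = tn / (tn + fp)                 # coverage of actual negatives
    f1 = 2 * precision * recall / (precision + recall)  # hybrid metric
    return accuracy, precision, recall, specificity, f1

acc, prec, rec, spec, f1 = classification_metrics(tp=40, fp=10, fn=20, tn=30)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# 0.7 0.8 0.667 0.727
```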
-
-**13. ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are summed up in the table below:**
-
-⟶
-
-
- -**14. [Metric, Formula, Equivalent]** - -⟶ - -
-
-**15. AUC ― The area under the receiver operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:**
-
-⟶
-
-
- -**16. [Actual, Predicted]** - -⟶ - -
- -**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** - -⟶ - -
- -**18. [Total sum of squares, Explained sum of squares, Residual sum of squares]** - -⟶ - -
- -**19. Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** - -⟶ - -
- -**20. Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** - -⟶ - -
- -**21. where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.** - -⟶ - -
- -**22. Model selection** - -⟶ - -
- -**23. Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** - -⟶ - -
- -**24. [Training set, Validation set, Testing set]** - -⟶ - -
- -**25. [Model is trained, Model is assessed, Model gives predictions]** - -⟶ - -
- -**26. [Usually 80% of the dataset, Usually 20% of the dataset]** - -⟶ - -
- -**27. [Also called hold-out or development set, Unseen data]** - -⟶ - -
- -**28. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** - -⟶ - -
- -**29. Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** - -⟶ - -
- -**30. [Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** - -⟶ - -
- -**31. [Generally k=5 or 10, Case p=1 is called leave-one-out]** - -⟶ - -
- -**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.** - -⟶ - -
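A minimal sketch of the k-fold split described above (indices only, no shuffling; all names are ours):

```python
def k_fold_indices(n, k):
    # Split indices 0..n-1 into k contiguous folds of near-equal size.
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(n=10, k=5)
print(folds)  # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]

# Each fold is held out once for validation while training uses the
# remaining k-1 folds; the k validation errors are then averaged.
for val_fold in folds:
    train_idx = [j for f in folds if f != val_fold for j in f]
```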
-
-**33. Regularization ― The regularization procedure aims at preventing the model from overfitting the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:**
-
-⟶
-
-
- -**34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** - -⟶ - -
- -**35. Diagnostics** - -⟶ - -
- -**36. Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.** - -⟶ - -
- -**37. Variance ― The variance of a model is the variability of the model prediction for given data points.** - -⟶ - -
- -**38. Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** - -⟶ - -
- -**39. [Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]** - -⟶ - -
- -**40. [High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]** - -⟶ - -
- -**41. [Complexify model, Add more features, Train longer, Perform regularization, Get more data]** - -⟶ - -
- -**42. Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** - -⟶ - -
- -**43. Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** - -⟶ - -
- -**44. Regression metrics** - -⟶ - -
- -**45. [Classification metrics, confusion matrix, accuracy, precision, recall, F1 score, ROC]** - -⟶ - -
- -**46. [Regression metrics, R squared, Mallow's CP, AIC, BIC]** - -⟶ - -
- -**47. [Model selection, cross-validation, regularization]** - -⟶ - -
- -**48. [Diagnostics, Bias/variance tradeoff, error/ablative analysis]** - -⟶ diff --git a/template/cheatsheet-supervised-learning.md b/template/cheatsheet-supervised-learning.md deleted file mode 100644 index a6b19ea1c..000000000 --- a/template/cheatsheet-supervised-learning.md +++ /dev/null @@ -1,567 +0,0 @@ -**1. Supervised Learning cheatsheet** - -⟶ - -
- -**2. Introduction to Supervised Learning** - -⟶ - -
- -**3. Given a set of data points {x(1),...,x(m)} associated to a set of outcomes {y(1),...,y(m)}, we want to build a classifier that learns how to predict y from x.** - -⟶ - -
- -**4. Type of prediction ― The different types of predictive models are summed up in the table below:** - -⟶ - -
- -**5. [Regression, Classifier, Outcome, Examples]** - -⟶ - -
- -**6. [Continuous, Class, Linear regression, Logistic regression, SVM, Naive Bayes]** - -⟶ - -
- -**7. Type of model ― The different models are summed up in the table below:** - -⟶ - -
- -**8. [Discriminative model, Generative model, Goal, What's learned, Illustration, Examples]** - -⟶ - -
- -**9. [Directly estimate P(y|x), Estimate P(x|y) to then deduce P(y|x), Decision boundary, Probability distributions of the data, Regressions, SVMs, GDA, Naive Bayes]** - -⟶ - -
- -**10. Notations and general concepts** - -⟶ - -
- -**11. Hypothesis ― The hypothesis is noted hθ and is the model that we choose. For a given input data x(i) the model prediction output is hθ(x(i)).** - -⟶ - -
- -**12. Loss function ― A loss function is a function L:(z,y)∈R×Y⟼L(z,y)∈R that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:** - -⟶ - -
- -**13. [Least squared error, Logistic loss, Hinge loss, Cross-entropy]** - -⟶ - -
- -**14. [Linear regression, Logistic regression, SVM, Neural Network]** - -⟶ - -
- -**15. Cost function ― The cost function J is commonly used to assess the performance of a model, and is defined with the loss function L as follows:** - -⟶ - -
- -**16. Gradient descent ― By noting α∈R the learning rate, the update rule for gradient descent is expressed with the learning rate and the cost function J as follows:** - -⟶ - -
- -**17. Remark: Stochastic gradient descent (SGD) is updating the parameter based on each training example, and batch gradient descent is on a batch of training examples.** - -⟶ - -
-
-**18. Likelihood ― The likelihood L(θ) of a model with parameters θ is used to find the optimal parameters θ by maximizing the likelihood. In practice, we use the log-likelihood ℓ(θ)=log(L(θ)), which is easier to optimize. We have:**
-
-⟶
-
-
- -**19. Newton's algorithm ― The Newton's algorithm is a numerical method that finds θ such that ℓ′(θ)=0. Its update rule is as follows:** - -⟶ - -
- -**20. Remark: the multidimensional generalization, also known as the Newton-Raphson method, has the following update rule:** - -⟶ - -
- -**21. Linear models** - -⟶ - -
- -**22. Linear regression** - -⟶ - -
- -**23. We assume here that y|x;θ∼N(μ,σ2)** - -⟶ - -
-
-**24. Normal equations ― By noting X the design matrix, the value of θ that minimizes the cost function is a closed-form solution such that:**
-
-⟶
-
-
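The normal equations θ=(XTX)−1XTy can be checked on toy data (assuming NumPy is available; the data follows y=1+2x by construction):

```python
import numpy as np

# First column of the design matrix is the intercept term.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])  # y = 1 + 2x, noise-free

theta = np.linalg.inv(X.T @ X) @ X.T @ y  # normal equations
print(theta)  # ≈ [1. 2.]
```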
- -**25. LMS algorithm ― By noting α the learning rate, the update rule of the Least Mean Squares (LMS) algorithm for a training set of m data points, which is also known as the Widrow-Hoff learning rule, is as follows:** - -⟶ - -
- -**26. Remark: the update rule is a particular case of the gradient ascent.** - -⟶ - -
- -**27. LWR ― Locally Weighted Regression, also known as LWR, is a variant of linear regression that weights each training example in its cost function by w(i)(x), which is defined with parameter τ∈R as:** - -⟶ - -
- -**28. Classification and logistic regression** - -⟶ - -
- -**29. Sigmoid function ― The sigmoid function g, also known as the logistic function, is defined as follows:** - -⟶ - -
- -**30. Logistic regression ― We assume here that y|x;θ∼Bernoulli(ϕ). We have the following form:** - -⟶ - -
- -**31. Remark: there is no closed form solution for the case of logistic regressions.** - -⟶ - -
- -**32. Softmax regression ― A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. By convention, we set θK=0, which makes the Bernoulli parameter ϕi of each class i equal to:** - -⟶ - -
- -**33. Generalized Linear Models** - -⟶ - -
- -**34. Exponential family ― A class of distributions is said to be in the exponential family if it can be written in terms of a natural parameter, also called the canonical parameter or link function, η, a sufficient statistic T(y) and a log-partition function a(η) as follows:** - -⟶ - -
- -**35. Remark: we will often have T(y)=y. Also, exp(−a(η)) can be seen as a normalization parameter that will make sure that the probabilities sum to one.** - -⟶ - -
- -**36. Here are the most common exponential distributions summed up in the following table:** - -⟶ - -
- -**37. [Distribution, Bernoulli, Gaussian, Poisson, Geometric]** - -⟶ - -
-
-**38. Assumptions of GLMs ― Generalized Linear Models (GLM) aim at predicting a random variable y as a function of x∈Rn+1 and rely on the following 3 assumptions:**
-
-⟶
-
-
- -**39. Remark: ordinary least squares and logistic regression are special cases of generalized linear models.** - -⟶ - -
- -**40. Support Vector Machines** - -⟶ - -
-
-**41. The goal of support vector machines is to find the line that maximizes the minimum distance to the line.**
-
-⟶
-
-
-
-**42. Optimal margin classifier ― The optimal margin classifier h is such that:**
-
-⟶
-
-
-
-**43. where (w,b)∈Rn×R is the solution of the following optimization problem:**
-
-⟶
-
-
- -**44. such that** - -⟶ - -
- -**45. support vectors** - -⟶ - -
- -**46. Remark: the line is defined as wTx−b=0.** - -⟶ - -
- -**47. Hinge loss ― The hinge loss is used in the setting of SVMs and is defined as follows:** - -⟶ - -
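A direct transcription of the hinge loss L(z,y)=max(0,1−yz), with labels y∈{−1,+1}:

```python
def hinge_loss(z, y):
    # z is the raw score, y is the true label in {-1, +1}
    return max(0.0, 1.0 - y * z)

print(hinge_loss(2.0, 1))    # 0.0 -> correct and outside the margin
print(hinge_loss(0.5, 1))    # 0.5 -> correct but inside the margin
print(hinge_loss(-1.0, 1))   # 2.0 -> misclassified
```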
- -**48. Kernel ― Given a feature mapping ϕ, we define the kernel K to be defined as:** - -⟶ - -
-
-**49. In practice, the kernel K defined by K(x,z)=exp(−||x−z||2/(2σ2)) is called the Gaussian kernel and is commonly used.**
-
-⟶
-
-
- -**50. [Non-linear separability, Use of a kernel mapping, Decision boundary in the original space]** - -⟶ - -
- -**51. Remark: we say that we use the "kernel trick" to compute the cost function using the kernel because we actually don't need to know the explicit mapping ϕ, which is often very complicated. Instead, only the values K(x,z) are needed.** - -⟶ - -
- -**52. Lagrangian ― We define the Lagrangian L(w,b) as follows:** - -⟶ - -
- -**53. Remark: the coefficients βi are called the Lagrange multipliers.** - -⟶ - -
- -**54. Generative Learning** - -⟶ - -
- -**55. A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.** - -⟶ - -
- -**56. Gaussian Discriminant Analysis** - -⟶ - -
- -**57. Setting ― The Gaussian Discriminant Analysis assumes that y and x|y=0 and x|y=1 are such that:** - -⟶ - -
- -**58. Estimation ― The following table sums up the estimates that we find when maximizing the likelihood:** - -⟶ - -
- -**59. Naive Bayes** - -⟶ - -
- -**60. Assumption ― The Naive Bayes model supposes that the features of each data point are all independent:** - -⟶ - -
- -**61. Solutions ― Maximizing the log-likelihood gives the following solutions, with k∈{0,1},l∈[[1,L]]** - -⟶ - -
- -**62. Remark: Naive Bayes is widely used for text classification and spam detection.** - -⟶ - -
- -**63. Tree-based and ensemble methods** - -⟶ - -
- -**64. These methods can be used for both regression and classification problems.** - -⟶ - -
-
-**65. CART ― Classification and Regression Trees (CART), commonly known as decision trees, can be represented as binary trees. They have the advantage of being very interpretable.**
-
-⟶
-
-
- -**66. Random forest ― It is a tree-based technique that uses a high number of decision trees built out of randomly selected sets of features. Contrary to the simple decision tree, it is highly uninterpretable but its generally good performance makes it a popular algorithm.** - -⟶ - -
- -**67. Remark: random forests are a type of ensemble methods.** - -⟶ - -
- -**68. Boosting ― The idea of boosting methods is to combine several weak learners to form a stronger one. The main ones are summed up in the table below:** - -⟶ - -
- -**69. [Adaptive boosting, Gradient boosting]** - -⟶ - -
- -**70. High weights are put on errors to improve at the next boosting step** - -⟶ - -
- -**71. Weak learners trained on remaining errors** - -⟶ - -
- -**72. Other non-parametric approaches** - -⟶ - -
- -**73. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** - -⟶ - -
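A minimal 1-D k-NN classification sketch by majority vote (the toy dataset is illustrative):

```python
from collections import Counter

def knn_predict(train, x, k):
    # Classify x by majority vote among its k nearest training points.
    by_dist = sorted(train, key=lambda p: abs(p[0] - x))  # 1-D distance
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

train = [(0.0, "a"), (0.5, "a"), (1.0, "a"), (4.0, "b"), (5.0, "b")]
print(knn_predict(train, 0.8, k=3))  # a
print(knn_predict(train, 4.5, k=3))  # b
```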
- -**74. Remark: The higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.** - -⟶ - -
- -**75. Learning Theory** - -⟶ - -
- -**76. Union bound ― Let A1,...,Ak be k events. We have:** - -⟶ - -
- -**77. Hoeffding inequality ― Let Z1,..,Zm be m iid variables drawn from a Bernoulli distribution of parameter ϕ. Let ˆϕ be their sample mean and γ>0 fixed. We have:** - -⟶ - -
- -**78. Remark: this inequality is also known as the Chernoff bound.** - -⟶ - -
- -**79. Training error ― For a given classifier h, we define the training error ˆϵ(h), also known as the empirical risk or empirical error, to be as follows:** - -⟶ - -
-
-**80. Probably Approximately Correct (PAC) ― PAC is a framework under which numerous results on learning theory were proved, and has the following set of assumptions:**
-
-⟶
-
-
-
-**81. the training and testing sets follow the same distribution**
-
-⟶
-
-
- -**82. the training examples are drawn independently** - -⟶ - -
- -**83. Shattering ― Given a set S={x(1),...,x(d)}, and a set of classifiers H, we say that H shatters S if for any set of labels {y(1),...,y(d)}, we have:** - -⟶ - -
- -**84. Upper bound theorem ― Let H be a finite hypothesis class such that |H|=k and let δ and the sample size m be fixed. Then, with probability of at least 1−δ, we have:** - -⟶ - -
- -**85. VC dimension ― The Vapnik-Chervonenkis (VC) dimension of a given infinite hypothesis class H, noted VC(H) is the size of the largest set that is shattered by H.** - -⟶ - -
- -**86. Remark: the VC dimension of H={set of linear classifiers in 2 dimensions} is 3.** - -⟶ - -
- -**87. Theorem (Vapnik) ― Let H be given, with VC(H)=d and m the number of training examples. With probability at least 1−δ, we have:** - -⟶ - -
- -**88. [Introduction, Type of prediction, Type of model]** - -⟶ - -
- -**89. [Notations and general concepts, loss function, gradient descent, likelihood]** - -⟶ - -
- -**90. [Linear models, linear regression, logistic regression, generalized linear models]** - -⟶ - -
- -**91. [Support vector machines, Optimal margin classifier, Hinge loss, Kernel]** - -⟶ - -
- -**92. [Generative learning, Gaussian Discriminant Analysis, Naive Bayes]** - -⟶ - -
- -**93. [Trees and ensemble methods, CART, Random forest, Boosting]** - -⟶ - -
- -**94. [Other methods, k-NN]** - -⟶ - -
- -**95. [Learning theory, Hoeffding inequality, PAC, VC dimension]** - -⟶ diff --git a/template/cheatsheet-unsupervised-learning.md b/template/cheatsheet-unsupervised-learning.md deleted file mode 100644 index 827d815a3..000000000 --- a/template/cheatsheet-unsupervised-learning.md +++ /dev/null @@ -1,340 +0,0 @@ -**1. Unsupervised Learning cheatsheet** - -⟶ - -
- -**2. Introduction to Unsupervised Learning** - -⟶ - -
- -**3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** - -⟶ - -
- -**4. Jensen's inequality ― Let f be a convex function and X a random variable. We have the following inequality:** - -⟶ - -
- -**5. Clustering** - -⟶ - -
- -**6. Expectation-Maximization** - -⟶ - -
- -**7. Latent variables ― Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Here are the most common settings where there are latent variables:** - -⟶ - -
- -**8. [Setting, Latent variable z, Comments]** - -⟶ - -
- -**9. [Mixture of k Gaussians, Factor analysis]** - -⟶ - -
-
-**10. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method for estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:**
-
-⟶
-
-
- -**11. E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:** - -⟶ - -
- -**12. M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:** - -⟶ - -
- -**13. [Gaussians initialization, Expectation step, Maximization step, Convergence]** - -⟶ - -
- -**14. k-means clustering** - -⟶ - -
- -**15. We note c(i) the cluster of data point i and μj the center of cluster j.** - -⟶ - -
- -**16. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** - -⟶ - -
- -**17. [Means initialization, Cluster assignment, Means update, Convergence]** - -⟶ - -
- -**18. Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:** - -⟶ - -
- -**19. Hierarchical clustering** - -⟶ - -
- -**20. Algorithm ― It is a clustering algorithm with an agglomerative hierarchical approach that build nested clusters in a successive manner.** - -⟶ - -
- -**21. Types ― There are different sorts of hierarchical clustering algorithms that aims at optimizing different objective functions, which is summed up in the table below:** - -⟶ - -
- -**22. [Ward linkage, Average linkage, Complete linkage]** - -⟶ - -
- -**23. [Minimize within cluster distance, Minimize average distance between cluster pairs, Minimize maximum distance of between cluster pairs]** - -⟶ - -
- -**24. Clustering assessment metrics** - -⟶ - -
- -**25. In an unsupervised learning setting, it is often hard to assess the performance of a model since we don't have the ground truth labels as was the case in the supervised learning setting.** - -⟶ - -
- -**26. Silhouette coefficient ― By noting a and b the mean distance between a sample and all other points in the same class, and between a sample and all other points in the next nearest cluster, the silhouette coefficient s for a single sample is defined as follows:** - -⟶ - -
- -**27. Calinski-Harabaz index ― By noting k the number of clusters, Bk and Wk the between and within-clustering dispersion matrices respectively defined as** - -⟶ - -
- -**28. the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:** - -⟶ - -
- -**29. Dimension reduction** - -⟶ - -
- -**30. Principal component analysis** - -⟶ - -
- -**31. It is a dimension reduction technique that finds the variance maximizing directions onto which to project the data.** - -⟶ - -
- -**32. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** - -⟶ - -
- -**33. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** - -⟶ - -
- -**34. diagonal** - -⟶ - -
- -**35. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** - -⟶ - -
- -**36. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k -dimensions by maximizing the variance of the data as follows:** - -⟶ - -
- -**37. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** - -⟶ - -
- -**38. Step 2: Compute Σ=1mm∑i=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.** - -⟶ - -
- -**39. Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.** - -⟶ - -
- -**40. Step 4: Project the data on spanR(u1,...,uk).** - -⟶ - -
- -**41. This procedure maximizes the variance among all k-dimensional spaces.** - -⟶ - -
- -**42. [Data in feature space, Find principal components, Data in principal components space]** - -⟶ - -
- -**43. Independent component analysis** - -⟶ - -
- -**44. It is a technique meant to find the underlying generating sources.** - -⟶ - -
- -**45. Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:** - -⟶ - -
- -**46. The goal is to find the unmixing matrix W=A−1.** - -⟶ - -
- -**47. Bell and Sejnowski ICA algorithm ― This algorithm finds the unmixing matrix W by following the steps below:** - -⟶ - -
- -**48. Write the probability of x=As=W−1s as:** - -⟶ - -
- -**49. Write the log likelihood given our training data {x(i),i∈[[1,m]]} and by noting g the sigmoid function as:** - -⟶ - -
- -**50. Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:** - -⟶ - -
- -**51. The Machine Learning cheatsheets are now available in Japanese.** - -⟶ - -
- -**52. Original authors** - -⟶ - -
- -**53. Translated by X, Y and Z** - -⟶ - -
- -**54. Reviewed by X, Y and Z** - -⟶ - -
- -**55. [Introduction, Motivation, Jensen's inequality]** - -⟶ - -
- -**56. [Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]** - -⟶ - -
- -**57. [Dimension reduction, PCA, ICA]** - -⟶ diff --git a/template/cs-221-logic-models.md b/template/cs-221-logic-models.md new file mode 100644 index 000000000..8be03acc4 --- /dev/null +++ b/template/cs-221-logic-models.md @@ -0,0 +1,462 @@ +**Logic-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-logic-models) + +
+ +**1. Logic-based models with propositional and first-order logic** + +⟶ + +
+ + +**2. Basics** + +⟶ + +
+ + +**3. Syntax of propositional logic ― By noting f,g formulas, and ¬,∧,∨,→,↔ connectives, we can write the following logical expressions:** + +⟶ + +
+ + +**4. [Name, Symbol, Meaning, Illustration]** + +⟶ + +
+ + +**5. [Affirmation, Negation, Conjunction, Disjunction, Implication, Biconditional]** + +⟶ + +
+ + +**6. [not f, f and g, f or g, if f then g, f, that is to say g]** + +⟶ + +
+ + +**7. Remark: formulas can be built up recursively out of these connectives.** + +⟶ + +
+ + +**8. Model ― A model w denotes an assignment of binary weights to propositional symbols.** + +⟶ + +
+ + +**9. Example: the set of truth values w={A:0,B:1,C:0} is one possible model to the propositional symbols A, B and C.** + +⟶ + +
+ + +**10. Interpretation function ― The interpretation function I(f,w) outputs whether model w satisfies formula f:** + +⟶ + +
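As an illustration of block 10, here is a minimal Python sketch of an interpretation function (not part of the cheatsheet; the tuple encoding and the name `interp` are illustrative choices). Formulas are nested tuples and a model w is a dict of 0/1 truth values:

```python
def interp(f, w):
    """Return 1 if model w satisfies formula f, else 0."""
    if isinstance(f, str):          # propositional symbol: look up its value
        return w[f]
    op = f[0]
    if op == "not":
        return 1 - interp(f[1], w)
    if op == "and":
        return interp(f[1], w) & interp(f[2], w)
    if op == "or":
        return interp(f[1], w) | interp(f[2], w)
    if op == "implies":             # f -> g is equivalent to (not f) or g
        return max(1 - interp(f[1], w), interp(f[2], w))
    raise ValueError("unknown connective: " + str(op))
```

For the model w={A:0,B:1,C:0} of the previous example, `interp(("or", "A", "B"), w)` evaluates to 1.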
+ + +**11. Set of models ― M(f) denotes the set of models w that satisfy formula f. Mathematically speaking, we define it as follows:** + +⟶ + +
+ + +**12. Knowledge base** + +⟶ + +
+ + +**13. Definition ― The knowledge base KB is the conjunction of all formulas that have been considered so far. The set of models of the knowledge base is the intersection of the set of models that satisfy each formula. In other words:** + +⟶ + +
+ + +**14. Probabilistic interpretation ― The probability that query f is evaluated to 1 can be seen as the proportion of models w of the knowledge base KB that satisfy f, i.e.:** + +⟶ + +
+ + +**15. Satisfiability ― The knowledge base KB is said to be satisfiable if at least one model w satisfies all its constraints. In other words:** + +⟶ + +
+ + +**16. satisfiable** + +⟶ + +
+ + +**17. Remark: M(KB) denotes the set of models compatible with all the constraints of the knowledge base.** + +⟶ + +


**18. Relation between formulas and knowledge base ― We define the following properties between the knowledge base KB and a new formula f:**

⟶

<br>
+ + +**19. [Name, Mathematical formulation, Illustration, Notes]** + +⟶ + +
+ + +**20. [KB entails f, KB contradicts f, f contingent to KB]** + +⟶ + +
+ + +**21. [f does not bring any new information, Also written KB⊨f, No model satisfies the constraints after adding f, Equivalent to KB⊨¬f, f does not contradict KB, f adds a non-trivial amount of information to KB]** + +⟶ + +
+ + +**22. Model checking ― A model checking algorithm takes as input a knowledge base KB and outputs whether it is satisfiable or not.** + +⟶ + +
+ + +**23. Remark: popular model checking algorithms include DPLL and WalkSat.** + +⟶ + +
+ + +**24. Inference rule ― An inference rule of premises f1,...,fk and conclusion g is written:** + +⟶ + +
+ + +**25. Forward inference algorithm ― From a set of inference rules Rules, this algorithm goes through all possible f1,...,fk and adds g to the knowledge base KB if a matching rule exists. This process is repeated until no more additions can be made to KB.** + +⟶ + +
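The forward inference loop of block 25 can be sketched in a few lines of Python (an illustrative sketch, not the cheatsheet's code; rules are assumed to be (premises, conclusion) pairs over propositional symbols, and KB a set of symbols taken to be true):

```python
def forward_inference(kb, rules):
    """Repeatedly add conclusions of matching rules until KB stops growing."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # If f1,...,fk are all in KB and g is not, add g to KB
            if set(premises) <= kb and conclusion not in kb:
                kb.add(conclusion)
                changed = True
    return kb
```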
+ + +**26. Derivation ― We say that KB derives f (written KB⊢f) with rules Rules if f already is in KB or gets added during the forward inference algorithm using the set of rules Rules.** + +⟶ + +
+ + +**27. Properties of inference rules ― A set of inference rules Rules can have the following properties:** + +⟶ + +
+ + +**28. [Name, Mathematical formulation, Notes]** + +⟶ + +
+ + +**29. [Soundness, Completeness]** + +⟶ + +
+ + +**30. [Inferred formulas are entailed by KB, Can be checked one rule at a time, "Nothing but the truth", Formulas entailing KB are either already in the knowledge base or inferred from it, "The whole truth"]** + +⟶ + +
+ + +**31. Propositional logic** + +⟶ + +
+ + +**32. In this section, we will go through logic-based models that use logical formulas and inference rules. The idea here is to balance expressivity and computational efficiency.** + +⟶ + +
+ + +**33. Horn clause ― By noting p1,...,pk and q propositional symbols, a Horn clause has the form:** + +⟶ + +
+ + +**34. Remark: when q=false, it is called a "goal clause", otherwise we denote it as a "definite clause".** + +⟶ + +
+ + +**35. Modus ponens ― For propositional symbols f1,...,fk and p, the modus ponens rule is written:** + +⟶ + +


**36. Remark: it takes linear time to apply this rule, as each application generates a clause that contains a single propositional symbol.**

⟶

<br>
+ + +**37. Completeness ― Modus ponens is complete with respect to Horn clauses if we suppose that KB contains only Horn clauses and p is an entailed propositional symbol. Applying modus ponens will then derive p.** + +⟶ + +
+ + +**38. Conjunctive normal form ― A conjunctive normal form (CNF) formula is a conjunction of clauses, where each clause is a disjunction of atomic formulas.** + +⟶ + +
+ + +**39. Remark: in other words, CNFs are ∧ of ∨.** + +⟶ + +
+ + +**40. Equivalent representation ― Every formula in propositional logic can be written into an equivalent CNF formula. The table below presents general conversion properties:** + +⟶ + +
+ + +**41. [Rule name, Initial, Converted, Eliminate, Distribute, over]** + +⟶ + +
+ + +**42. Resolution rule ― For propositional symbols f1,...,fn, and g1,...,gm as well as p, the resolution rule is written:** + +⟶ + +
+ + +**43. Remark: it can take exponential time to apply this rule, as each application generates a clause that has a subset of the propositional symbols.** + +⟶ + +
+ + +**44. [Resolution-based inference ― The resolution-based inference algorithm follows the following steps:, Step 1: Convert all formulas into CNF, Step 2: Repeatedly apply resolution rule, Step 3: Return unsatisfiable if and only if False, is derived]** + +⟶ + +
+ + +**45. First-order logic** + +⟶ + +
+ + +**46. The idea here is to use variables to yield more compact knowledge representations.** + +⟶ + +
+ + +**47. [Model ― A model w in first-order logic maps:, constant symbols to objects, predicate symbols to tuple of objects]** + +⟶ + +
+ + +**48. Horn clause ― By noting x1,...,xn variables and a1,...,ak,b atomic formulas, the first-order logic version of a horn clause has the form:** + +⟶ + +
+ + +**49. Substitution ― A substitution θ maps variables to terms and Subst[θ,f] denotes the result of substitution θ on f.** + +⟶ + +
+ + +**50. Unification ― Unification takes two formulas f and g and returns the most general substitution θ that makes them equal:** + +⟶ + +
+ + +**51. such that** + +⟶ + +
+ + +**52. Note: Unify[f,g] returns Fail if no such θ exists.** + +⟶ + +
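Blocks 49-52 can be made concrete with a standard term-unification sketch (illustrative, not the cheatsheet's algorithm; it omits the occurs-check, variables are strings starting with "?", and compound terms are tuples):

```python
def unify(x, y, theta):
    """Most general substitution theta making x and y equal, or None (Fail)."""
    if theta is None:
        return None                       # a previous step already failed
    if x == y:
        return theta
    if isinstance(x, str) and x.startswith("?"):
        return unify_var(x, y, theta)
    if isinstance(y, str) and y.startswith("?"):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):          # unify argument by argument
            theta = unify(xi, yi, theta)
        return theta
    return None

def unify_var(v, t, theta):
    if v in theta:                        # variable already bound: follow it
        return unify(theta[v], t, theta)
    return {**theta, v: t}                # extend the substitution

def subst(theta, t):
    """Subst[theta, f]: apply substitution theta to term t."""
    if isinstance(t, tuple):
        return tuple(subst(theta, a) for a in t)
    while isinstance(t, str) and t in theta:
        t = theta[t]
    return t
```

For instance, unifying Knows(John, ?x) with Knows(John, Jane) yields the substitution {?x: Jane}.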
+ + +**53. Modus ponens ― By noting x1,...,xn variables, a1,...,ak and a′1,...,a′k atomic formulas and by calling θ=Unify(a′1∧...∧a′k,a1∧...∧ak) the first-order logic version of modus ponens can be written:** + +⟶ + +
+ + +**54. Completeness ― Modus ponens is complete for first-order logic with only Horn clauses.** + +⟶ + +
+ + +**55. Resolution rule ― By noting f1,...,fn, g1,...,gm, p, q formulas and by calling θ=Unify(p,q), the first-order logic version of the resolution rule can be written:** + +⟶ + +
+ + +**56. [Semi-decidability ― First-order logic, even restricted to only Horn clauses, is semi-decidable., if KB⊨f, forward inference on complete inference rules will prove f in finite time, if KB⊭f, no algorithm can show this in finite time]** + +⟶ + +
+ + +**57. [Basics, Notations, Model, Interpretation function, Set of models]** + +⟶ + +
+ + +**58. [Knowledge base, Definition, Probabilistic interpretation, Satisfiability, Relationship with formulas, Forward inference, Rule properties]** + +⟶ + +
+ + +**59. [Propositional logic, Clauses, Modus ponens, Conjunctive normal form, Representation equivalence, Resolution]** + +⟶ + +
+ + +**60. [First-order logic, Substitution, Unification, Resolution rule, Modus ponens, Resolution, Semi-decidability]** + +⟶ + +
+ + +**61. View PDF version on GitHub** + +⟶ + +
+ + +**62. Original authors** + +⟶ + +
+ + +**63. Translated by X, Y and Z** + +⟶ + +
+ + +**64. Reviewed by X, Y and Z** + +⟶ + +
+ + +**65. By X and Y** + +⟶ + +
+ + +**66. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶ diff --git a/template/cs-221-reflex-models.md b/template/cs-221-reflex-models.md new file mode 100644 index 000000000..f64a380b0 --- /dev/null +++ b/template/cs-221-reflex-models.md @@ -0,0 +1,539 @@ +**Reflex-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-reflex-models) + +
+ +**1. Reflex-based models with Machine Learning** + +⟶ + +
+ + +**2. Linear predictors** + +⟶ + +
+ + +**3. In this section, we will go through reflex-based models that can improve with experience, by going through samples that have input-output pairs.** + +⟶ + +
+ + +**4. Feature vector ― The feature vector of an input x is noted ϕ(x) and is such that:** + +⟶ + +
+ + +**5. Score ― The score s(x,w) of an example (ϕ(x),y)∈Rd×R associated to a linear model of weights w∈Rd is given by the inner product:** + +⟶ + +
+ + +**6. Classification** + +⟶ + +
+ + +**7. Linear classifier ― Given a weight vector w∈Rd and a feature vector ϕ(x)∈Rd, the binary linear classifier fw is given by:** + +⟶ + +
+ + +**8. if** + +⟶ + +
+ + +**9. Margin ― The margin m(x,y,w)∈R of an example (ϕ(x),y)∈Rd×{−1,+1} associated to a linear model of weights w∈Rd quantifies the confidence of the prediction: larger values are better. It is given by:** + +⟶ + +
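The score, binary linear classifier, and margin of blocks 5-9 translate directly into code. This is a minimal sketch (function names are illustrative, and classifying a score of exactly 0 as +1 is an arbitrary convention):

```python
def score(phi_x, w):
    """Inner product s(x, w) = w . phi(x)."""
    return sum(wi * xi for wi, xi in zip(w, phi_x))

def classify(phi_x, w):
    """Binary linear classifier f_w: sign of the score."""
    return 1 if score(phi_x, w) >= 0 else -1

def margin(phi_x, y, w):
    """Margin m(x, y, w) = s(x, w) * y: larger values mean more confidence."""
    return score(phi_x, w) * y
```

A negative margin indicates a misclassified example, as in the last assertion below.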
+ + +**10. Regression** + +⟶ + +
+ + +**11. Linear regression ― Given a weight vector w∈Rd and a feature vector ϕ(x)∈Rd, the output of a linear regression of weights w denoted as fw is given by:** + +⟶ + +
+ + +**12. Residual ― The residual res(x,y,w)∈R is defined as being the amount by which the prediction fw(x) overshoots the target y:** + +⟶ + +
+ + +**13. Loss minimization** + +⟶ + +
+ + +**14. Loss function ― A loss function Loss(x,y,w) quantifies how unhappy we are with the weights w of the model in the prediction task of output y from input x. It is a quantity we want to minimize during the training process.** + +⟶ + +


**15. Classification case ― The classification of a sample x of true label y∈{−1,+1} with a linear model of weights w can be done with the predictor fw(x)≜sign(s(x,w)). In this situation, a metric of interest quantifying the quality of the classification is given by the margin m(x,y,w), and can be used with the following loss functions:**

⟶

<br>
+ + +**16. [Name, Illustration, Zero-one loss, Hinge loss, Logistic loss]** + +⟶ + +


**17. Regression case ― The prediction of a sample x of true label y∈R with a linear model of weights w can be done with the predictor fw(x)≜s(x,w). In this situation, a metric of interest quantifying the quality of the regression is given by the margin res(x,y,w) and can be used with the following loss functions:**

⟶

<br>
+ + +**18. [Name, Squared loss, Absolute deviation loss, Illustration]** + +⟶ + +


**19. Loss minimization framework ― In order to train a model, we want to minimize the training loss, which is defined as follows:**

⟶

<br>
+ + +**20. Non-linear predictors** + +⟶ + +
+ + +**21. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** + +⟶ + +
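A minimal k-NN classifier following the description above might look like this (an illustrative sketch with Euclidean distance and majority vote; `knn_predict` and the data layout are assumptions, not the cheatsheet's code):

```python
from collections import Counter

def knn_predict(x, train, k=3):
    """Classify x by majority vote among its k nearest training points.
    train: list of (feature_vector, label) pairs."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    neighbors = sorted(train, key=lambda pair: dist(pair[0], x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```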
+ + +**22. Remark: the higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.** + +⟶ + +
+ + +**23. Neural networks ― Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks. The vocabulary around neural networks architectures is described in the figure below:** + +⟶ + +
+ + +**24. [Input layer, Hidden layer, Output layer]** + +⟶ + +
+ + +**25. By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:** + +⟶ + +
+ + +**26. where we note w, b, x, z the weight, bias, input and non-activated output of the neuron respectively.** + +⟶ + +
+ + +**27. For a more detailed overview of the concepts above, check out the Supervised Learning cheatsheets!** + +⟶ + +
+ + +**28. Stochastic gradient descent** + +⟶ + +
+ + +**29. Gradient descent ― By noting η∈R the learning rate (also called step size), the update rule for gradient descent is expressed with the learning rate and the loss function Loss(x,y,w) as follows:** + +⟶ + +
+ + +**30. Stochastic updates ― Stochastic gradient descent (SGD) updates the parameters of the model one training example (ϕ(x),y)∈Dtrain at a time. This method leads to sometimes noisy, but fast updates.** + +⟶ + +
+ + +**31. Batch updates ― Batch gradient descent (BGD) updates the parameters of the model one batch of examples (e.g. the entire training set) at a time. This method computes stable update directions, at a greater computational cost.** + +⟶ + +
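The stochastic update rule above can be sketched for a linear model with squared loss (a hedged illustration, not the cheatsheet's code; the example gradient assumes Loss(x, y, w) = (w . phi(x) - y)^2):

```python
def sgd(examples, grad_loss, w, eta=0.1, epochs=10):
    """Stochastic updates: w <- w - eta * grad(Loss), one example at a time."""
    for _ in range(epochs):
        for phi_x, y in examples:
            g = grad_loss(phi_x, y, w)
            w = [wi - eta * gi for wi, gi in zip(w, g)]
    return w

def grad_sq(phi_x, y, w):
    """Gradient of the squared loss (w . phi(x) - y)^2 with respect to w."""
    res = sum(wi * xi for wi, xi in zip(w, phi_x)) - y
    return [2 * res * xi for xi in phi_x]
```

On the toy dataset below (targets generated by y = 3x), SGD recovers the weight 3.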
+ + +**32. Fine-tuning models** + +⟶ + +
+ + +**33. Hypothesis class ― A hypothesis class F is the set of possible predictors with a fixed ϕ(x) and varying w:** + +⟶ + +
+ + +**34. Logistic function ― The logistic function σ, also called the sigmoid function, is defined as:** + +⟶ + +
+ + +**35. Remark: we have σ′(z)=σ(z)(1−σ(z)).** + +⟶ + +
+ + +**36. Backpropagation ― The forward pass is done through fi, which is the value for the subexpression rooted at i, while the backward pass is done through gi=∂out∂fi and represents how fi influences the output.** + +⟶ + +
+ + +**37. Approximation and estimation error ― The approximation error ϵapprox represents how far the entire hypothesis class F is from the target predictor g∗, while the estimation error ϵest quantifies how good the predictor ^f is with respect to the best predictor f∗ of the hypothesis class F.** + +⟶ + +


**38. Regularization ― The regularization procedure aims at preventing the model from overfitting the data, and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:**

⟶

<br>
+ + +**39. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** + +⟶ + +
+ + +**40. Hyperparameters ― Hyperparameters are the properties of the learning algorithm, and include features, regularization parameter λ, number of iterations T, step size η, etc.** + +⟶ + +
+ + +**41. Sets vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** + +⟶ + +
+ + +**42. [Training set, Validation set, Testing set]** + +⟶ + +
+ + +**43. [Model is trained, Usually 80% of the dataset, Model is assessed, Usually 20% of the dataset, Also called hold-out or development set, Model gives predictions, Unseen data]** + +⟶ + +
+ + +**44. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** + +⟶ + +
+ + +**45. [Dataset, Unseen data, train, validation, test]** + +⟶ + +
+ + +**46. For a more detailed overview of the concepts above, check out the Machine Learning tips and tricks cheatsheets!** + +⟶ + +
+ + +**47. Unsupervised Learning** + +⟶ + +


**48. The class of unsupervised learning methods aims at discovering the structure of the data, which may have rich latent structures.**

⟶

<br>
+ + +**49. k-means** + +⟶ + +
+ + +**50. Clustering ― Given a training set of input points Dtrain, the goal of a clustering algorithm is to assign each point ϕ(xi) to a cluster zi∈{1,...,k}** + +⟶ + +
+ + +**51. Objective function ― The loss function for one of the main clustering algorithms, k-means, is given by:** + +⟶ + +
+ + +**52. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** + +⟶ + +
+ + +**53. and** + +⟶ + +
+ + +**54. [Means initialization, Cluster assignment, Means update, Convergence]** + +⟶ + +
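The two alternating steps of k-means described above can be sketched as follows (an illustrative implementation on plain Python lists; the initial centroids are passed in rather than randomly drawn, so the example is deterministic):

```python
def kmeans(points, centroids, iters=100):
    """Alternate cluster assignment and means update until convergence."""
    for _ in range(iters):
        # Cluster assignment: each point goes to its closest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)),
                    key=lambda j: sum((pi - ci) ** 2
                                      for pi, ci in zip(p, centroids[j])))
            clusters[j].append(p)
        # Means update: each centroid becomes the mean of its cluster
        new = [[sum(c) / len(pts) for c in zip(*pts)] if pts else mu
               for pts, mu in zip(clusters, centroids)]
        if new == centroids:       # convergence: assignments stopped changing
            return new
        centroids = new
    return centroids
```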
+ + +**55. Principal Component Analysis** + +⟶ + +
+ + +**56. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** + +⟶ + +
+ + +**57. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** + +⟶ + +
+ + +**58. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** + +⟶ + +
+ + +**59. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k dimensions by maximizing the variance of the data as follows:** + +⟶ + +
+ + +**60. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** + +⟶ + +
+ + +**61. [where, and]** + +⟶ + +
+ + +**62. [Step 2: Compute Σ=1mm∑i=1ϕ(xi)ϕ(xi)T∈Rn×n, which is symmetric with real eigenvalues., Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues., Step 4: Project the data on spanR(u1,...,uk).]** + +⟶ + +
+ + +**63. This procedure maximizes the variance among all k-dimensional spaces.** + +⟶ + +
+ + +**64. [Data in feature space, Find principal components, Data in principal components space]** + +⟶ + +
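The four PCA steps above can be sketched with NumPy (a hedged illustration, not the cheatsheet's code; `np.linalg.eigh` returns eigenvalues in ascending order, so the last k columns are the principal eigenvectors):

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the k principal directions."""
    # Step 1: normalize the data to mean 0 and standard deviation 1
    Xn = (X - X.mean(axis=0)) / X.std(axis=0)
    # Step 2: symmetric matrix Sigma with real eigenvalues
    sigma = Xn.T @ Xn / len(Xn)
    # Step 3: orthogonal eigenvectors of the k largest eigenvalues
    eigvals, eigvecs = np.linalg.eigh(sigma)
    U = eigvecs[:, -k:][:, ::-1]
    # Step 4: project the data on span(u1, ..., uk)
    return Xn @ U
```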
+ + +**65. For a more detailed overview of the concepts above, check out the Unsupervised Learning cheatsheets!** + +⟶ + +
+ + +**66. [Linear predictors, Feature vector, Linear classifier/regression, Margin]** + +⟶ + +
+ + +**67. [Loss minimization, Loss function, Framework]** + +⟶ + +
+ + +**68. [Non-linear predictors, k-nearest neighbors, Neural networks]** + +⟶ + +
+ + +**69. [Stochastic gradient descent, Gradient, Stochastic updates, Batch updates]** + +⟶ + +
+ + +**70. [Fine-tuning models, Hypothesis class, Backpropagation, Regularization, Sets vocabulary]** + +⟶ + +
+ + +**71. [Unsupervised Learning, k-means, Principal components analysis]** + +⟶ + +
+ + +**72. View PDF version on GitHub** + +⟶ + +
+ + +**73. Original authors** + +⟶ + +
+ + +**74. Translated by X, Y and Z** + +⟶ + +
+ + +**75. Reviewed by X, Y and Z** + +⟶ + +
+ + +**76. By X and Y** + +⟶ + +
+ + +**77. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶ diff --git a/template/cs-221-states-models.md b/template/cs-221-states-models.md new file mode 100644 index 000000000..e21270f89 --- /dev/null +++ b/template/cs-221-states-models.md @@ -0,0 +1,980 @@ +**States-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-states-models) + +
+ +**1. States-based models with search optimization and MDP** + +⟶ + +
+ + +**2. Search optimization** + +⟶ + +
+ + +**3. In this section, we assume that by accomplishing action a from state s, we deterministically arrive in state Succ(s,a). The goal here is to determine a sequence of actions (a1,a2,a3,a4,...) that starts from an initial state and leads to an end state. In order to solve this kind of problem, our objective will be to find the minimum cost path by using states-based models.** + +⟶ + +
+ + +**4. Tree search** + +⟶ + +


**5. This category of states-based algorithms explores all possible states and actions. It is quite memory efficient and suitable for huge state spaces, but the runtime can become exponential in the worst case.**

⟶

<br>
+ + +**6. [Self-loop, More than a parent, Cycle, More than a root, Valid tree]** + +⟶ + +
+ + +**7. [Search problem ― A search problem is defined with:, a starting state sstart, possible actions Actions(s) from state s, action cost Cost(s,a) from state s with action a, successor Succ(s,a) of state s after action a, whether an end state was reached IsEnd(s)]** + +⟶ + +
+ + +**8. The objective is to find a path that minimizes the cost.** + +⟶ + +
+ + +**9. Backtracking search ― Backtracking search is a naive recursive algorithm that tries all possibilities to find the minimum cost path. Here, action costs can be either positive or negative.** + +⟶ + +
+ + +**10. Breadth-first search (BFS) ― Breadth-first search is a graph search algorithm that does a level-by-level traversal. We can implement it iteratively with the help of a queue that stores at each step future nodes to be visited. For this algorithm, we can assume action costs to be equal to a constant c⩾0.** + +⟶ + +
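The queue-based BFS described above can be sketched as follows (illustrative code, not the cheatsheet's; since all action costs equal a constant c >= 0, the first end state reached is on a minimum-cost path):

```python
from collections import deque

def bfs(s_start, succ, is_end):
    """Level-by-level traversal; returns a minimum-action path to an end state."""
    frontier = deque([s_start])
    parent = {s_start: None}          # also serves as the visited set
    while frontier:
        s = frontier.popleft()
        if is_end(s):
            path = []                 # reconstruct the path back to s_start
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for s2 in succ(s):
            if s2 not in parent:
                parent[s2] = s
                frontier.append(s2)
    return None
```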
+ + +**11. Depth-first search (DFS) ― Depth-first search is a search algorithm that traverses a graph by following each path as deep as it can. We can implement it recursively, or iteratively with the help of a stack that stores at each step future nodes to be visited. For this algorithm, action costs are assumed to be equal to 0.** + +⟶ + +
+ + +**12. Iterative deepening ― The iterative deepening trick is a modification of the depth-first search algorithm so that it stops after reaching a certain depth, which guarantees optimality when all action costs are equal. Here, we assume that action costs are equal to a constant c⩾0.** + +⟶ + +
+ + +**13. Tree search algorithms summary ― By noting b the number of actions per state, d the solution depth, and D the maximum depth, we have:** + +⟶ + +
+ + +**14. [Algorithm, Action costs, Space, Time]** + +⟶ + +
+ + +**15. [Backtracking search, any, Breadth-first search, Depth-first search, DFS-Iterative deepening]** + +⟶ + +
+ + +**16. Graph search** + +⟶ + +
+ + +**17. This category of states-based algorithms aims at constructing optimal paths, enabling exponential savings. In this section, we will focus on dynamic programming and uniform cost search.** + +⟶ + +
+ + +**18. Graph ― A graph is comprised of a set of vertices V (also called nodes) as well as a set of edges E (also called links).** + +⟶ + +


**19. Remark: a graph is said to be acyclic when there is no cycle.**

⟶

<br>
+ + +**20. State ― A state is a summary of all past actions sufficient to choose future actions optimally.** + +⟶ + +


**21. Dynamic programming ― Dynamic programming (DP) is a backtracking search algorithm with memoization (i.e. partial results are saved) whose goal is to find a minimum cost path from state s to an end state send. It can potentially have exponential savings compared to traditional graph search algorithms, and only works for acyclic graphs. For any given state s, the future cost is computed as follows:**

⟶

<br>
+ + +**22. [if, otherwise]** + +⟶ + +
+ + +**23. Remark: the figure above illustrates a bottom-to-top approach whereas the formula provides the intuition of a top-to-bottom problem resolution.** + +⟶ + +
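The FutureCost recursion above, with memoization, can be sketched directly (an illustrative version for acyclic problems with hashable states; the helper names are assumptions):

```python
from functools import lru_cache

def future_cost_solver(actions, cost, succ, is_end):
    """FutureCost(s) = 0 if IsEnd(s),
    else min over actions a of [Cost(s, a) + FutureCost(Succ(s, a))]."""
    @lru_cache(maxsize=None)          # memoization: each state solved once
    def future_cost(s):
        if is_end(s):
            return 0
        return min(cost(s, a) + future_cost(succ(s, a)) for a in actions(s))
    return future_cost
```

On a small chain problem (walk +1 for cost 1, jump +2 for cost 1.5, end state 3), the memoized recursion returns the minimum cost path value.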
+ + +**24. Types of states ― The table below presents the terminology when it comes to states in the context of uniform cost search:** + +⟶ + +
+ + +**25. [State, Explanation]** + +⟶ + +
+ + +**26. [Explored, Frontier, Unexplored]** + +⟶ + +
+ + +**27. [States for which the optimal path has already been found, States seen for which we are still figuring out how to get there with the cheapest cost, States not seen yet]** + +⟶ + +
+ + +**28. Uniform cost search ― Uniform cost search (UCS) is a search algorithm that aims at finding the shortest path from a state sstart to an end state send. It explores states s in increasing order of PastCost(s) and relies on the fact that all action costs are non-negative.** + +⟶ + +
+ + +**29. Remark 1: the UCS algorithm is logically equivalent to Dijkstra's algorithm.** + +⟶ + +
+ + +**30. Remark 2: the algorithm would not work for a problem with negative action costs, and adding a positive constant to make them non-negative would not solve the problem since this would end up being a different problem.** + +⟶ + +
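A priority-queue sketch of UCS (illustrative, not the cheatsheet's code; `succ_cost(s)` is assumed to yield (action_cost, successor) pairs with non-negative costs):

```python
import heapq

def ucs(s_start, succ_cost, is_end):
    """Explore states in increasing order of PastCost(s); return the
    minimum cost from s_start to an end state."""
    frontier = [(0, s_start)]          # priority queue keyed by past cost
    explored = set()
    while frontier:
        past_cost, s = heapq.heappop(frontier)
        if s in explored:
            continue                   # stale entry: s already popped cheaper
        if is_end(s):
            return past_cost
        explored.add(s)
        for c, s2 in succ_cost(s):
            if s2 not in explored:
                heapq.heappush(frontier, (past_cost + c, s2))
    return float("inf")
```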
+ + +**31. Correctness theorem ― When a state s is popped from the frontier F and moved to explored set E, its priority is equal to PastCost(s) which is the minimum cost path from sstart to s.** + +⟶ + +
+ + +**32. Graph search algorithms summary ― By noting N the number of total states, n of which are explored before the end state send, we have:** + +⟶ + +
+ + +**33. [Algorithm, Acyclicity, Costs, Time/space]** + +⟶ + +
+ + +**34. [Dynamic programming, Uniform cost search]** + +⟶ + +


**35. Remark: the complexity analysis assumes the number of possible actions per state to be constant.**

⟶

<br>
+ + +**36. Learning costs** + +⟶ + +


**37. Suppose we are not given the values of Cost(s,a); we want to estimate these quantities from a training set of minimum-cost-path sequences of actions (a1,a2,...,ak).**

⟶

<br>
+ + +**38. [Structured perceptron ― The structured perceptron is an algorithm aiming at iteratively learning the cost of each state-action pair. At each step, it:, decreases the estimated cost of each state-action of the true minimizing path y given by the training data, increases the estimated cost of each state-action of the current predicted path y' inferred from the learned weights.]** + +⟶ + +
+ + +**39. Remark: there are several versions of the algorithm, one of which simplifies the problem to only learning the cost of each action a, and the other parametrizes Cost(s,a) to a feature vector of learnable weights.** + +⟶ + +
+ + +**40. A* search** + +⟶ + +
+ + +**41. Heuristic function ― A heuristic is a function h over states s, where each h(s) aims at estimating FutureCost(s), the cost of the path from s to send.** + +⟶ + +
+ + +**42. Algorithm ― A∗ is a search algorithm that aims at finding the shortest path from a state s to an end state send. It explores states s in increasing order of PastCost(s)+h(s). It is equivalent to a uniform cost search with edge costs Cost′(s,a) given by:** + +⟶ + +
+ + +**43. Remark: this algorithm can be seen as a biased version of UCS exploring states estimated to be closer to the end state.** + +⟶ + +
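A* itself is a small change to UCS: order the frontier by PastCost(s) + h(s) instead of PastCost(s). A hedged sketch (illustrative code; h is assumed consistent so the first pop of an end state is optimal):

```python
import heapq

def a_star(s_start, succ_cost, is_end, h):
    """Explore states in increasing order of PastCost(s) + h(s)."""
    frontier = [(h(s_start), 0, s_start)]   # (priority, past cost, state)
    explored = set()
    while frontier:
        _, past_cost, s = heapq.heappop(frontier)
        if s in explored:
            continue
        if is_end(s):
            return past_cost
        explored.add(s)
        for c, s2 in succ_cost(s):
            if s2 not in explored:
                heapq.heappush(frontier,
                               (past_cost + c + h(s2), past_cost + c, s2))
    return float("inf")
```

With h = 0 this reduces exactly to uniform cost search, which is one way to see A* as a biased version of UCS.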
+ + +**44. [Consistency ― A heuristic h is said to be consistent if it satisfies the two following properties:, For all states s and actions a, The end state verifies the following:]** + +⟶ + +
+ + +**45. Correctness ― If h is consistent, then A∗ returns the minimum cost path.** + +⟶ + +
+ + +**46. Admissibility ― A heuristic h is said to be admissible if we have:** + +⟶ + +
+ + +**47. Theorem ― Let h(s) be a given heuristic. We have:** + +⟶ + +
+ + +**48. [consistent, admissible]** + +⟶ + +
+ + +**49. Efficiency ― A* explores all states s satisfying the following equation:** + +⟶ + +


**50. Remark: larger values of h(s) are better since, as this equation shows, they restrict the set of states s to be explored.**

⟶

<br>
+ + +**51. Relaxation** + +⟶ + +
+ + +**52. It is a framework for producing consistent heuristics. The idea is to find closed-form reduced costs by removing constraints and use them as heuristics.** + +⟶ + +
+ + +**53. Relaxed search problem ― The relaxation of search problem P with costs Cost is noted Prel with costs Costrel, and satisfies the identity:** + +⟶ + +
+ + +**54. Relaxed heuristic ― Given a relaxed search problem Prel, we define the relaxed heuristic h(s)=FutureCostrel(s) as the minimum cost path from s to an end state in the graph of costs Costrel(s,a).** + +⟶ + +
+ + +**55. Consistency of relaxed heuristics ― Let Prel be a given relaxed problem. By theorem, we have:** + +⟶ + +
+ + +**56. consistent** + +⟶ + +
+ + +**57. [Tradeoff when choosing heuristic ― We have to balance two aspects in choosing a heuristic:, Computational efficiency: h(s)=FutureCostrel(s) must be easy to compute. It has to produce a closed form, easier search and independent subproblems., Good enough approximation: the heuristic h(s) should be close to FutureCost(s) and we have thus to not remove too many constraints.]** + +⟶ + +
+ + +**58. Max heuristic ― Let h1(s), h2(s) be two heuristics. We have the following property:** + +⟶ + +
+ + +**59. Markov decision processes** + +⟶ + +
+ + +**60. In this section, we assume that performing action a from state s can lead to several states s′1,s′2,... in a probabilistic manner. In order to find our way between an initial state and an end state, our objective will be to find the maximum value policy by using Markov decision processes that help us cope with randomness and uncertainty.** + +⟶ + +
+ + +**61. Notations** + +⟶ + +
+ + +**62. [Definition ― The objective of a Markov decision process is to maximize rewards. It is defined with:, a starting state sstart, possible actions Actions(s) from state s, transition probabilities T(s,a,s′) from s to s′ with action a, rewards Reward(s,a,s′) from s to s′ with action a, whether an end state was reached IsEnd(s), a discount factor 0⩽γ⩽1]** + +⟶ + +
+ + +**63. Transition probabilities ― The transition probability T(s,a,s′) specifies the probability of going to state s′ after action a is taken in state s. Each s′↦T(s,a,s′) is a probability distribution, which means that:** + +⟶ + +
+ + +**64. states** + +⟶ + +
+ + +**65. Policy ― A policy π is a function that maps each state s to an action a, i.e.** + +⟶ + +
+ + +**66. Utility ― The utility of a path (s0,...,sk) is the discounted sum of the rewards on that path. In other words,** + +⟶ + +
+ + +**67. The figure above is an illustration of the case k=4.** + +⟶ + +
+ + +**68. Q-value ― The Q-value of a policy π at state s with action a, also noted Qπ(s,a), is the expected utility from state s after taking action a and then following policy π. It is defined as follows:** + +⟶ + +
+ + +**69. Value of a policy ― The value of a policy π from state s, also noted Vπ(s), is the expected utility by following policy π from state s over random paths. It is defined as follows:** + +⟶ + +
+ + +**70. Remark: Vπ(s) is equal to 0 if s is an end state.** + +⟶ + +
+ + +**71. Applications** + +⟶ + +
+ + +**72. [Policy evaluation ― Given a policy π, policy evaluation is an iterative algorithm that aims at estimating Vπ. It is done as follows:, Initialization: for all states s, we have:, Iteration: for t from 1 to TPE, we have, with]** + +⟶ + +
+ + +**73. Remark: by noting S the number of states, A the number of actions per state, S′ the number of successors and T the number of iterations, then the time complexity is of O(TPESS′).** + +⟶ + +
+ + +**74. Optimal Q-value ― The optimal Q-value Qopt(s,a) of state s with action a is defined to be the maximum Q-value attained by any policy starting. It is computed as follows:** + +⟶ + +
+ + +**75. Optimal value ― The optimal value Vopt(s) of state s is defined as being the maximum value attained by any policy. It is computed as follows:** + +⟶ + +
+ + +**76. actions** + +⟶ + +
+ + +**77. Optimal policy ― The optimal policy πopt is defined as being the policy that leads to the optimal values. It is defined by:** + +⟶ + +
+ + +**78. [Value iteration ― Value iteration is an algorithm that finds the optimal value Vopt as well as the optimal policy πopt. It is done as follows:, Initialization: for all states s, we have:, Iteration: for t from 1 to TVI, we have:, with]** + +⟶ + +
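The value iteration recurrence of statement 78 can be sketched as below; the three-state chain MDP, its rewards and the number of iterations are toy assumptions, not part of the original.

```python
def value_iteration(states, actions, transitions, reward, is_end, gamma, n_iters):
    """Value iteration: V(s) <- max_a sum_s' T(s,a,s') [Reward(s,a,s') + gamma V(s')],
    with V(s) = 0 at end states and at initialization."""
    V = {s: 0.0 for s in states}
    for _ in range(n_iters):
        V = {s: 0.0 if is_end(s) else max(
                 sum(p * (reward(s, a, s2) + gamma * V[s2])
                     for s2, p in transitions(s, a).items())
                 for a in actions(s))
             for s in states}
    return V

# Toy chain 0 -> 1 -> 2 (end state): deterministic moves, reward 1 per step, gamma = 1.
V = value_iteration(
    states=[0, 1, 2],
    actions=lambda s: ["go"],
    transitions=lambda s, a: {s + 1: 1.0},
    reward=lambda s, a, s2: 1.0,
    is_end=lambda s: s == 2,
    gamma=1.0,
    n_iters=10)
```

Since this toy MDP graph is acyclic, convergence is guaranteed, consistent with the remark of statement 79.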
+ + +**79. Remark: if we have either γ<1 or the MDP graph being acyclic, then the value iteration algorithm is guaranteed to converge to the correct answer.** + +⟶ + +
+ + +**80. When unknown transitions and rewards** + +⟶ + +
+ + +**81. Now, let's assume that the transition probabilities and the rewards are unknown.** + +⟶ + +
+ + +**82. Model-based Monte Carlo ― The model-based Monte Carlo method aims at estimating T(s,a,s′) and Reward(s,a,s′) using Monte Carlo simulation with:** + +⟶ + +
+ + +**83. [# times (s,a,s′) occurs, and]** + +⟶ + +
+ + +**84. These estimates will then be used to deduce Q-values, including Qπ and Qopt.** + +⟶ + +
+ + +**85. Remark: model-based Monte Carlo is said to be off-policy, because the estimation does not depend on the exact policy.** + +⟶ + +
+ + +**86. Model-free Monte Carlo ― The model-free Monte Carlo method aims at directly estimating Qπ, as follows:** + +⟶ + +
+ + +**87. Qπ(s,a)=average of ut where st−1=s,at=a** + +⟶ + +
+ + +**88. where ut denotes the utility starting at step t of a given episode.** + +⟶ + +
+ + +**89. Remark: model-free Monte Carlo is said to be on-policy, because the estimated value is dependent on the policy π used to generate the data.** + +⟶ + +
+ + +**90. Equivalent formulation ― By introducing the constant η=1/(1+#updates to (s,a)) and for each (s,a,u) of the training set, the update rule of model-free Monte Carlo has a convex combination formulation:** + +⟶ + +
+ + +**91. as well as a stochastic gradient formulation:** + +⟶ + +
+ + +**92. SARSA ― State-action-reward-state-action (SARSA) is a bootstrapping method estimating Qπ by using both raw data and estimates as part of the update rule. For each (s,a,r,s′,a′), we have:** + +⟶ + +
+ + +**93. Remark: the SARSA estimate is updated on the fly as opposed to the model-free Monte Carlo one where the estimate can only be updated at the end of the episode.** + +⟶ + +
+ + +**94. Q-learning ― Q-learning is an off-policy algorithm that produces an estimate for Qopt. On each (s,a,r,s′), we have:** + +⟶ + +
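The SARSA and Q-learning update rules above can be sketched side by side; the states, actions, learning rate η=0.5 and γ=1 below are toy assumptions made for the example.

```python
def sarsa_update(Q, s, a, r, s2, a2, gamma=1.0, eta=0.5):
    """SARSA (on-policy): bootstrap on the action a' actually taken in s'."""
    Q[(s, a)] = (1 - eta) * Q.get((s, a), 0.0) + eta * (r + gamma * Q.get((s2, a2), 0.0))

def q_learning_update(Q, s, a, r, s2, actions, gamma=1.0, eta=0.5):
    """Q-learning (off-policy): bootstrap on the greedy value max_a' Q(s', a')."""
    v_opt = max(Q.get((s2, a2), 0.0) for a2 in actions)
    Q[(s, a)] = (1 - eta) * Q.get((s, a), 0.0) + eta * (r + gamma * v_opt)

Q = {("s1", "right"): 2.0}
# Q-learning uses the best next action regardless of what is actually played:
q_learning_update(Q, "s0", "right", 1.0, "s1", actions=["left", "right"])
# SARSA uses the next action that the current policy actually took:
sarsa_update(Q, "s0", "left", 1.0, "s1", "left")
```

The difference between the two targets (actual next action versus greedy next action) is exactly the on-policy/off-policy distinction of statements 92 and 94.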
+ + +**95. Epsilon-greedy ― The epsilon-greedy policy is an algorithm that balances exploration with probability ϵ and exploitation with probability 1−ϵ. For a given state s, the policy πact is computed as follows:** + +⟶ + +
+ + +**96. [with probability, random from Actions(s)]** + +⟶ + +
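The epsilon-greedy policy of statement 95 admits a very short sketch; the Q-values and action names below are invented toy data.

```python
import random

def epsilon_greedy(Q, s, actions, epsilon, rng=random):
    """Explore with probability epsilon (uniformly random action), otherwise
    exploit (the action maximizing the current Q-value estimate at state s)."""
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

Q = {("s", "left"): 0.0, ("s", "right"): 1.0}
greedy = epsilon_greedy(Q, "s", ["left", "right"], epsilon=0.0)   # always exploits
explore = epsilon_greedy(Q, "s", ["left", "right"], epsilon=1.0,
                         rng=random.Random(0))                    # always explores
```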
+ + +**97. Game playing** + +⟶ + +
+ + +**98. In games (e.g. chess, backgammon, Go), other agents are present and need to be taken into account when constructing our policy.** + +⟶ + +
+ + +**99. Game tree ― A game tree is a tree that describes the possibilities of a game. In particular, each node is a decision point for a player and each root-to-leaf path is a possible outcome of the game.** + +⟶ + +
+ + +**100. [Two-player zero-sum game ― It is a game where each state is fully observed and such that players take turns. It is defined with:, a starting state sstart, possible actions Actions(s) from state s, successors Succ(s,a) from states s with actions a, whether an end state was reached IsEnd(s), the agent's utility Utility(s) at end state s, the player Player(s) who controls state s]** + +⟶ + +
+ + +**101. Remark: we will assume that the utility of the agent has the opposite sign of the one of the opponent.** + +⟶ + +
+ + +**102. [Types of policies ― There are two types of policies:, Deterministic policies, noted πp(s), which are actions that player p takes in state s., Stochastic policies, noted πp(s,a)∈[0,1], which are probabilities that player p takes action a in state s.]** + +⟶ + +
+ + +**103. Expectimax ― For a given state s, the expectimax value Vexptmax(s) is the maximum expected utility of any agent policy when playing with respect to a fixed and known opponent policy πopp. It is computed as follows:** + +⟶ + +
+ + +**104. Remark: expectimax is the analog of value iteration for MDPs.** + +⟶ + +
+ + +**105. Minimax ― The goal of minimax policies is to find an optimal policy against an adversary by assuming the worst case, i.e. that the opponent is doing everything to minimize the agent's utility. It is done as follows:** + +⟶ + +
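The minimax value of statement 105 can be sketched on a small game tree; the nested-dict tree encoding and the leaf utilities are invented for the example (leaves hold the agent's utility).

```python
def minimax(node, maximizing):
    """V_minimax: the agent (maximizing player) and the opponent (minimizing
    player) alternate turns down the tree; a numeric node is a leaf utility."""
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node.values()]
    return max(values) if maximizing else min(values)

# The agent picks a branch, then the opponent picks the leaf worst for the agent:
game = {"L": {"a": 3, "b": 5}, "R": {"a": 2, "b": 9}}
value = minimax(game, maximizing=True)  # max(min(3,5), min(2,9))
```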
+ + +**106. Remark: we can extract πmax and πmin from the minimax value Vminimax.** + +⟶ + +
+ + +**107. Minimax properties ― By noting V the value function, there are 3 properties around minimax to have in mind:** + +⟶ + +
+ + +**108. Property 1: if the agent were to change its policy to any πagent, then the agent would be no better off.** + +⟶ + +
+ + +**109. Property 2: if the opponent changes its policy from πmin to πopp, then it will be no better off.** + +⟶ + +
+ + +**110. Property 3: if the opponent is known to be not playing the adversarial policy, then the minimax policy might not be optimal for the agent.** + +⟶ + +
+ + +**111. In the end, we have the following relationship:** + +⟶ + +
+ + +**112. Speeding up minimax** + +⟶ + +
+ + +**113. Evaluation function ― An evaluation function is a domain-specific and approximate estimate of the value Vminimax(s). It is noted Eval(s).** + +⟶ + +
+ + +**114. Remark: FutureCost(s) is the analogous quantity for search problems.** + +⟶ + +
+ + +**115. Alpha-beta pruning ― Alpha-beta pruning is a domain-general exact method optimizing the minimax algorithm by avoiding the unnecessary exploration of parts of the game tree. To do so, each player keeps track of the best value they can hope for (stored in α for the maximizing player and in β for the minimizing player). At a given step, the condition β<α means that the optimal path is not going to be in the current branch as the earlier player had a better option at their disposal.** + +⟶ + +
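The pruning condition described in statement 115 can be sketched as follows, on the same toy nested-dict tree encoding as for plain minimax (invented for the example); the sketch also prunes on equality, since a tied branch cannot improve the result.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: alpha tracks the best value the
    maximizing player can hope for, beta the best for the minimizing player."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        v = float("-inf")
        for child in node.values():
            v = max(v, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, v)
            if beta <= alpha:
                break  # the earlier player already has a better option elsewhere
        return v
    v = float("inf")
    for child in node.values():
        v = min(v, alphabeta(child, True, alpha, beta))
        beta = min(beta, v)
        if beta <= alpha:
            break
    return v

game = {"L": {"a": 3, "b": 5}, "R": {"a": 2, "b": 9}}
value = alphabeta(game, maximizing=True)  # same value as plain minimax
```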
+ + +**116. TD learning ― Temporal difference (TD) learning is used when we don't know the transitions/rewards. The value is based on exploration policy. To be able to use it, we need to know rules of the game Succ(s,a). For each (s,a,r,s′), the update is done as follows:** + +⟶ + +
+ + +**117. Simultaneous games** + +⟶ + +
+ + +**118. Contrary to turn-based games, there is no ordering on the players' moves.** + +⟶ + +
+ + +**119. Single-move simultaneous game ― Let there be two players A and B, with given possible actions. We note V(a,b) to be A's utility if A chooses action a, B chooses action b. V is called the payoff matrix.** + +⟶ + +
+ + +**120. [Strategies ― There are two main types of strategies:, A pure strategy is a single action:, A mixed strategy is a probability distribution over actions:]** + +⟶ + +
+ + +**121. Game evaluation ― The value of the game V(πA,πB) when player A follows πA and player B follows πB is such that:** + +⟶ + +
+ + +**122. Minimax theorem ― By noting πA,πB ranging over mixed strategies, for every simultaneous two-player zero-sum game with a finite number of actions, we have:** + +⟶ + +
+ + +**123. Non-zero-sum games** + +⟶ + +
+ + +**124. Payoff matrix ― We define Vp(πA,πB) to be the utility for player p.** + +⟶ + +
+ + +**125. Nash equilibrium ― A Nash equilibrium is (π∗A,π∗B) such that no player has an incentive to change its strategy. We have:** + +⟶ + +
+ + +**126. and** + +⟶ + +
+ + +**127. Remark: in any finite-player game with finite number of actions, there exists at least one Nash equilibrium.** + +⟶ + +
+ + +**128. [Tree search, Backtracking search, Breadth-first search, Depth-first search, Iterative deepening]** + +⟶ + +
+ + +**129. [Graph search, Dynamic programming, Uniform cost search]** + +⟶ + +
+ + +**130. [Learning costs, Structured perceptron]** + +⟶ + +
+ + +**131. [A star search, Heuristic function, Algorithm, Consistency, correctness, Admissibility, efficiency]** + +⟶ + +
+ + +**132. [Relaxation, Relaxed search problem, Relaxed heuristic, Max heuristic]** + +⟶ + +
+ + +**133. [Markov decision processes, Overview, Policy evaluation, Value iteration, Transitions, rewards]** + +⟶ + +
+ + +**134. [Game playing, Expectimax, Minimax, Speeding up minimax, Simultaneous games, Non-zero-sum games]** + +⟶ + +
+ + +**135. View PDF version on GitHub** + +⟶ + +
+ + +**136. Original authors** + +⟶ + +
+ + +**137. Translated by X, Y and Z** + +⟶ + +
+ + +**138. Reviewed by X, Y and Z** + +⟶ + +
+ + +**139. By X and Y** + +⟶ + +
+ + +**140. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶ diff --git a/template/cs-221-variables-models.md b/template/cs-221-variables-models.md new file mode 100644 index 000000000..f55ef0270 --- /dev/null +++ b/template/cs-221-variables-models.md @@ -0,0 +1,617 @@ +**Variables-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-variables-models) + +
+ +**1. Variables-based models with CSP and Bayesian networks** + +⟶ + +
+ + +**2. Constraint satisfaction problems** + +⟶ + +
+ + +**3. In this section, our objective is to find maximum weight assignments of variable-based models. One advantage compared to states-based models is that these algorithms are more convenient to encode problem-specific constraints.** + +⟶ + +
+ + +**4. Factor graphs** + +⟶ + +
+ + +**5. Definition ― A factor graph, also referred to as a Markov random field, is a set of variables X=(X1,...,Xn) where Xi∈Domaini and m factors f1,...,fm with each fj(X)⩾0.** + +⟶ + +
+ + +**6. Domain** + +⟶ + +
+ + +**7. Scope and arity ― The scope of a factor fj is the set of variables it depends on. The size of this set is called the arity.** + +⟶ + +
+ + +**8. Remark: factors of arity 1 and 2 are called unary and binary respectively.** + +⟶ + +
+ + +**9. Assignment weight ― Each assignment x=(x1,...,xn) yields a weight Weight(x) defined as being the product of all factors fj applied to that assignment. Its expression is given by:** + +⟶ + +
+ + +**10. Constraint satisfaction problem ― A constraint satisfaction problem (CSP) is a factor graph where all factors are binary; we call them constraints:** + +⟶ + +
+ + +**11. Here, the constraint j with assignment x is said to be satisfied if and only if fj(x)=1.** + +⟶ + +
+ + +**12. Consistent assignment ― An assignment x of a CSP is said to be consistent if and only if Weight(x)=1, i.e. all constraints are satisfied.** + +⟶ + +
+ + +**13. Dynamic ordering** + +⟶ + +
+ + +**14. Dependent factors ― The set of dependent factors of variable Xi with partial assignment x is called D(x,Xi), and denotes the set of factors that link Xi to already assigned variables.** + +⟶ + +
+ + +**15. Backtracking search ― Backtracking search is an algorithm used to find maximum weight assignments of a factor graph. At each step, it chooses an unassigned variable and explores its values by recursion. Dynamic ordering (i.e. choice of variables and values) and lookahead (i.e. early elimination of inconsistent options) can be used to explore the graph more efficiently, although the worst-case runtime stays exponential: O(|Domain|n).** + +⟶ + +
+ + +**16. [Forward checking ― It is a one-step lookahead heuristic that preemptively removes inconsistent values from the domains of neighboring variables. It has the following characteristics:, After assigning a variable Xi, it eliminates inconsistent values from the domains of all its neighbors., If any of these domains becomes empty, we stop the local backtracking search., If we un-assign a variable Xi, we have to restore the domain of its neighbors.]** + +⟶ + +
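As an illustration of statements 15 and 16, here is a minimal sketch of backtracking search with forward checking. The 2-coloring instance on a path graph A, B, C, where every constraint forces neighboring variables to differ, is a toy example invented for this sketch.

```python
def backtrack(assignment, domains, neighbors):
    """Backtracking search with forward checking on a CSP whose constraints all
    require neighboring variables to take different values."""
    if len(assignment) == len(domains):
        return dict(assignment)
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        pruned, ok = [], True
        for n in neighbors[var]:  # forward checking on unassigned neighbors
            if n not in assignment and value in domains[n]:
                domains[n].remove(value)
                pruned.append(n)
                if not domains[n]:
                    ok = False  # a neighbor's domain became empty: backtrack
                    break
        if ok:
            result = backtrack(assignment, domains, neighbors)
            if result is not None:
                return result
        for n in pruned:  # restore neighbor domains on un-assignment
            domains[n].append(value)
        del assignment[var]
    return None

# Toy 2-coloring of the path graph A - B - C.
domains = {"A": ["red", "blue"], "B": ["red", "blue"], "C": ["red", "blue"]}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
solution = backtrack({}, domains, neighbors)
```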
+ + +**17. Most constrained variable ― It is a variable-level ordering heuristic that selects the next unassigned variable that has the fewest consistent values. This has the effect of making inconsistent assignments fail earlier in the search, which enables more efficient pruning.** + +⟶ + +
+ + +**18. Least constrained value ― It is a value-level ordering heuristic that assigns the next value that yields the highest number of consistent values of neighboring variables. Intuitively, this procedure chooses first the values that are most likely to work.** + +⟶ + +
+ + +**19. Remark: in practice, this heuristic is useful when all factors are constraints.** + +⟶ + +
+ + +**20. The example above is an illustration of the 3-color problem with backtracking search coupled with most constrained variable exploration and least constrained value heuristic, as well as forward checking at each step.** + +⟶ + +
+ + +**21. [Arc consistency ― We say that arc consistency of variable Xl with respect to Xk is enforced when for each xl∈Domainl:, unary factors of Xl are non-zero, there exists at least one xk∈Domaink such that any factor between Xl and Xk is non-zero.]** + +⟶ + +
+ + +**22. AC-3 ― The AC-3 algorithm is a multi-step lookahead heuristic that applies forward checking to all relevant variables. After a given assignment, it performs forward checking and then successively enforces arc consistency with respect to the neighbors of variables whose domains changed during the process.** + +⟶ + +
+ + +**23. Remark: AC-3 can be implemented both iteratively and recursively.** + +⟶ + +
+ + +**24. Approximate methods** + +⟶ + +
+ + +**25. Beam search ― Beam search is an approximate algorithm that extends partial assignments of n variables of branching factor b=|Domain| by exploring the K top paths at each step. The beam size K∈{1,...,bn} controls the tradeoff between efficiency and accuracy. This algorithm has a time complexity of O(n⋅Kblog(Kb)).** + +⟶ + +
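Statement 25 can be sketched as follows; the weighting function (preferring value 1 at every position) and the small K, b, n are made-up toy choices for readability.

```python
def beam_search(domains, factor, K):
    """Beam search: extend partial assignments one variable at a time, keeping
    only the K highest-weight candidates at each step."""
    beam = [((), 1.0)]  # (partial assignment, weight)
    for domain in domains:
        candidates = [(partial + (v,), w * factor(partial, v))
                      for partial, w in beam for v in domain]
        beam = sorted(candidates, key=lambda c: -c[1])[:K]
    return beam

# Toy factor: value 1 gets weight 2, value 0 gets weight 1, at every position.
factor = lambda partial, v: 2.0 if v == 1 else 1.0
best = beam_search([[0, 1]] * 3, factor, K=2)  # n = 3 variables, b = 2 values
```

With K=1 this degenerates to greedy search, and with K large enough it enumerates all b^n assignments, as noted in statement 27.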
+ + +**26. The example below illustrates a possible beam search of parameters K=2, b=3 and n=5.** + +⟶ + +
+ + +**27. Remark: K=1 corresponds to greedy search whereas K→+∞ is equivalent to BFS tree search.** + +⟶ + +
+ + +**28. Iterated conditional modes ― Iterated conditional modes (ICM) is an iterative approximate algorithm that modifies the assignment of a factor graph one variable at a time until convergence. At step i, we assign to Xi the value v that maximizes the product of all factors connected to that variable.** + +⟶ + +
+ + +**29. Remark: ICM may get stuck in local minima.** + +⟶ + +
+ + +**30. [Gibbs sampling ― Gibbs sampling is an iterative approximate method that modifies the assignment of a factor graph one variable at a time until convergence. At step i:, we assign to each element u∈Domaini a weight w(u) that is the product of all factors connected to that variable, we sample v from the probability distribution induced by w and assign it to Xi.]** + +⟶ + +
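The two steps of statement 30 can be sketched as follows; the two-variable factor graph with a single agreement-preferring factor, the weights 10 and 1, and the number of sweeps are toy assumptions invented for the example.

```python
import random

def gibbs_step(x, i, domains, factors, rng):
    """One Gibbs-sampling update of variable i: weight each candidate value u by
    the product of the factors connected to X_i, then sample X_i from w."""
    weights = []
    for u in domains[i]:
        y = dict(x)
        y[i] = u
        w = 1.0
        for scope, f in factors:
            if i in scope:
                w *= f(y)
        weights.append(w)
    x[i] = rng.choices(domains[i], weights=weights)[0]

# Two binary variables linked by one factor that strongly prefers agreement.
factors = [(("a", "b"), lambda y: 10.0 if y["a"] == y["b"] else 1.0)]
domains = {"a": [0, 1], "b": [0, 1]}
rng = random.Random(0)
x = {"a": 0, "b": 1}
agree = 0
for _ in range(100):  # sweeps; the chain quickly favors agreeing assignments
    for var in ("a", "b"):
        gibbs_step(x, var, domains, factors, rng)
    agree += (x["a"] == x["b"])
```

Unlike ICM, which would deterministically keep the locally best value, the sampling step lets the chain escape local optima in most cases, as the remark of statement 31 points out.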
+ + +**31. Remark: Gibbs sampling can be seen as the probabilistic counterpart of ICM. It has the advantage to be able to escape local minima in most cases.** + +⟶ + +
+ + +**32. Factor graph transformations** + +⟶ + +
+ + +**33. Independence ― Let A,B be a partitioning of the variables X. We say that A and B are independent if there are no edges between A and B and we write:** + +⟶ + +
+ + +**34. Remark: independence is the key property that allows us to solve subproblems in parallel.** + +⟶ + +
+ + +**35. Conditional independence ― We say that A and B are conditionally independent given C if conditioning on C produces a graph in which A and B are independent. In this case, it is written:** + +⟶ + +
+ + +**36. [Conditioning ― Conditioning is a transformation aiming at making variables independent that breaks up a factor graph into smaller pieces that can be solved in parallel and can use backtracking. In order to condition on a variable Xi=v, we do as follows:, Consider all factors f1,...,fk that depend on Xi, Remove Xi and f1,...,fk, Add gj(x) for j∈{1,...,k} defined as:]** + +⟶ + +
+ + +**37. Markov blanket ― Let A⊆X be a subset of variables. We define MarkovBlanket(A) to be the neighbors of A that are not in A.** + +⟶ + +
+ + +**38. Proposition ― Let C=MarkovBlanket(A) and B=X∖(A∪C). Then we have:** + +⟶ + +
+ + +**39. [Elimination ― Elimination is a factor graph transformation that removes Xi from the graph and solves a small subproblem conditioned on its Markov blanket as follows:, Consider all factors fi,1,...,fi,k that depend on Xi, Remove Xi and fi,1,...,fi,k, Add fnew,i(x) defined as:]** + +⟶ + +
+ + +**40. Treewidth ― The treewidth of a factor graph is the maximum arity of any factor created by variable elimination with the best variable ordering. In other words,** + +⟶ + +
+ + +**41. The example below illustrates the case of a factor graph of treewidth 3.** + +⟶ + +
+ + +**42. Remark: finding the best variable ordering is an NP-hard problem.** + +⟶ + +
+ + +**43. Bayesian networks** + +⟶ + +
+ + +**44. In this section, our goal will be to compute conditional probabilities. What is the probability of a query given evidence?** + +⟶ + +
+ + +**45. Introduction** + +⟶ + +
+ + +**46. Explaining away ― Suppose causes C1 and C2 influence an effect E. Conditioning on the effect E and on one of the causes (say C1) changes the probability of the other cause (say C2). In this case, we say that C1 has explained away C2.** + +⟶ + +
+ + +**47. Directed acyclic graph ― A directed acyclic graph (DAG) is a finite directed graph with no directed cycles.** + +⟶ + +
+ + +**48. Bayesian network ― A Bayesian network is a directed acyclic graph (DAG) that specifies a joint distribution over random variables X=(X1,...,Xn) as a product of local conditional distributions, one for each node:** + +⟶ + +
+ + +**49. Remark: Bayesian networks are factor graphs imbued with the language of probability.** + +⟶ + +
+ + +**50. Locally normalized ― For each xParents(i), all factors are local conditional distributions. Hence they have to satisfy:** + +⟶ + +
+ + +**51. As a result, sub-Bayesian networks and conditional distributions are consistent.** + +⟶ + +
+ + +**52. Remark: local conditional distributions are the true conditional distributions.** + +⟶ + +
+ + +**53. Marginalization ― The marginalization of a leaf node yields a Bayesian network without that node.** + +⟶ + +
+ + +**54. Probabilistic programs** + +⟶ + +
+ + +**55. Concept ― A probabilistic program randomizes variables assignment. That way, we can write down complex Bayesian networks that generate assignments without us having to explicitly specify associated probabilities.** + +⟶ + +
+ + +**56. Remark: examples of probabilistic programs include Hidden Markov model (HMM), factorial HMM, naive Bayes, latent Dirichlet allocation, diseases and symptoms and stochastic block models.** + +⟶ + +
+ + +**57. Summary ― The table below summarizes the common probabilistic programs as well as their applications:** + +⟶ + +
+ + +**58. [Program, Algorithm, Illustration, Example]** + +⟶ + +
+ + +**59. [Markov Model, Hidden Markov Model (HMM), Factorial HMM, Naive Bayes, Latent Dirichlet Allocation (LDA)]** + +⟶ + +
+ + +**60. [Generate, distribution]** + +⟶ + +
+ + +**61. [Language modeling, Object tracking, Multiple object tracking, Document classification, Topic modeling]** + +⟶ + +
+ + +**62. Inference** + +⟶ + +
+ + +**63. [General probabilistic inference strategy ― The strategy to compute the probability P(Q|E=e) of query Q given evidence E=e is as follows:, Step 1: Remove variables that are not ancestors of the query Q or the evidence E by marginalization, Step 2: Convert Bayesian network to factor graph, Step 3: Condition on the evidence E=e, Step 4: Remove nodes disconnected from the query Q by marginalization, Step 5: Run a probabilistic inference algorithm (manual, variable elimination, Gibbs sampling, particle filtering)]** + +⟶ + +
+ + +**64. Forward-backward algorithm ― This algorithm computes the exact value of P(H=hk|E=e) (smoothing query) for any k∈{1,...,L} in the case of an HMM of size L. To do so, we proceed in 3 steps:** + +⟶ + +
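The forward and backward recursions referenced in statements 64 to 66 (with the convention that the backward values at the last step equal 1) can be sketched as follows; the two-state HMM, its sticky transitions and its 0.9/0.1 emissions are a toy model invented for the example.

```python
def forward_backward(pi, trans, emit, evidence):
    """Exact smoothing P(H_k = h | E = e) in an HMM via a forward pass (F) and a
    backward pass (B); posteriors[t][h] is proportional to F[t][h] * B[t][h]."""
    n, L = len(pi), len(evidence)
    F = [[0.0] * n for _ in range(L)]
    B = [[0.0] * n for _ in range(L)]
    for h in range(n):                       # forward initialization
        F[0][h] = pi[h] * emit[h][evidence[0]]
    for t in range(1, L):                    # forward recursion
        for h in range(n):
            F[t][h] = sum(F[t-1][g] * trans[g][h] for g in range(n)) * emit[h][evidence[t]]
    for h in range(n):                       # backward initialization
        B[L-1][h] = 1.0
    for t in range(L - 2, -1, -1):           # backward recursion
        for h in range(n):
            B[t][h] = sum(trans[h][g] * emit[g][evidence[t+1]] * B[t+1][g] for g in range(n))
    posteriors = []
    for t in range(L):
        z = sum(F[t][h] * B[t][h] for h in range(n))
        posteriors.append([F[t][h] * B[t][h] / z for h in range(n)])
    return posteriors

# Toy HMM: two hidden states, sticky transitions, observation copies the state
# with probability 0.9, so observing 0 makes hidden state 0 much more likely.
pi = [0.5, 0.5]
trans = [[0.9, 0.1], [0.1, 0.9]]
emit = [[0.9, 0.1], [0.1, 0.9]]
post = forward_backward(pi, trans, emit, [0, 0, 1])
```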
+ + +**65. Step 1: for ..., compute ...** + +⟶ + +
+ + +**66. with the convention F0=BL+1=1. From this procedure and these notations, we get that** + +⟶ + +
+ + +**67. Remark: this algorithm interprets each assignment to be a path where each edge hi−1→hi is of weight p(hi|hi−1)p(ei|hi).** + +⟶ + +
+ + +**68. [Gibbs sampling ― This algorithm is an iterative approximate method that uses a small set of assignments (particles) to represent a large probability distribution. From a random assignment x, Gibbs sampling performs the following steps for i∈{1,...,n} until convergence:, For all u∈Domaini, compute the weight w(u) of assignment x where Xi=u, Sample v from the probability distribution induced by w: v∼P(Xi=v|X−i=x−i), Set Xi=v]** + +⟶ + +
+ + +**69. Remark: X−i denotes X∖{Xi} and x−i represents the corresponding assignment.** + +⟶ + +
+ + +**70. [Particle filtering ― This algorithm approximates the posterior density of state variables given the evidence of observation variables by keeping track of K particles at a time. Starting from a set of particles C of size K, we run the following 3 steps iteratively:, Step 1: proposal - For each old particle xt−1∈C, sample x from the transition probability distribution p(x|xt−1) and add x to a set C′., Step 2: weighting - Weigh each x of the set C′ by w(x)=p(et|x), where et is the evidence observed at time t., Step 3: resampling - Sample K elements from the set C′ using the probability distribution induced by w and store them in C: these are the current particles xt.]** + +⟶ + +
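The proposal/weighting/resampling loop of statement 70 can be sketched as one step on a toy chain; the transition and observation models below (random walk on integers, observation strongly favoring state 1) are invented for the example.

```python
import random

def particle_filter_step(particles, transition_sample, obs_weight, rng):
    """One step of a particle filter with K = len(particles) particles."""
    proposals = [transition_sample(x, rng) for x in particles]        # step 1: proposal
    weights = [obs_weight(x) for x in proposals]                      # step 2: weighting
    return rng.choices(proposals, weights=weights, k=len(particles))  # step 3: resampling

# Toy chain on integers: each particle moves +1 or stays; the evidence at this
# time step strongly favors state 1.
transition_sample = lambda x, rng: x + rng.choice([0, 1])
obs_weight = lambda x: 5.0 if x == 1 else 0.1
rng = random.Random(0)
particles = particle_filter_step([0] * 50, transition_sample, obs_weight, rng)
```

After resampling, the particle set is concentrated on the states most compatible with the evidence, which is how the small set of particles tracks the posterior.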
+ + +**71. Remark: a more expensive version of this algorithm also keeps track of past particles in the proposal step.** + +⟶ + +
+ + +**72. Maximum likelihood ― If we don't know the local conditional distributions, we can learn them using maximum likelihood.** + +⟶ + +
+ + +**73. Laplace smoothing ― For each distribution d and partial assignment (xParents(i),xi), add λ to countd(xParents(i),xi), then normalize to get probability estimates.** + +⟶ + +
+ + +**74. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:** + +⟶ + +
+ + +**75. [E-step: Evaluate the posterior probability q(h) that each data point e came from a particular cluster h as follows:, M-step: Use the posterior probabilities q(h) as cluster specific weights on data points e to determine θ through maximum likelihood.]** + +⟶ + +
+ + +**76. [Factor graphs, Arity, Assignment weight, Constraint satisfaction problem, Consistent assignment]** + +⟶ + +
+ + +**77. [Dynamic ordering, Dependent factors, Backtracking search, Forward checking, Most constrained variable, Least constrained value]** + +⟶ + +
+ + +**78. [Approximate methods, Beam search, Iterated conditional modes, Gibbs sampling]** + +⟶ + +
+ + +**79. [Factor graph transformations, Conditioning, Elimination]** + +⟶ + +
+ + +**80. [Bayesian networks, Definition, Locally normalized, Marginalization]** + +⟶ + +
+ + +**81. [Probabilistic program, Concept, Summary]** + +⟶ + +
+ + +**82. [Inference, Forward-backward algorithm, Gibbs sampling, Laplace smoothing]** + +⟶ + +
+ + +**83. View PDF version on GitHub** + +⟶ + +
+ + +**84. Original authors** + +⟶ + +
+ + +**85. Translated by X, Y and Z** + +⟶ + +
+ + +**86. Reviewed by X, Y and Z** + +⟶ + +
+ + +**87. By X and Y** + +⟶ + +
+ + +**88. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶ diff --git a/ar/cheatsheet-deep-learning.md b/template/cs-229-deep-learning.md similarity index 98% rename from ar/cheatsheet-deep-learning.md rename to template/cs-229-deep-learning.md index a5aa3756c..a7770a048 100644 --- a/ar/cheatsheet-deep-learning.md +++ b/template/cs-229-deep-learning.md @@ -1,3 +1,7 @@ +**Deep learning translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-229/cheatsheet-deep-learning) + +
+ **1. Deep Learning cheatsheet** ⟶ diff --git a/de/refresher-linear-algebra.md b/template/cs-229-linear-algebra.md similarity index 97% rename from de/refresher-linear-algebra.md rename to template/cs-229-linear-algebra.md index a6b440d1e..dced85397 100644 --- a/de/refresher-linear-algebra.md +++ b/template/cs-229-linear-algebra.md @@ -1,3 +1,7 @@ +**Linear Algebra and Calculus translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-229/refresher-algebra-calculus) + +
+ **1. Linear Algebra and Calculus refresher** ⟶ diff --git a/hi/cheatsheet-machine-learning-tips-and-tricks.md b/template/cs-229-machine-learning-tips-and-tricks.md similarity index 97% rename from hi/cheatsheet-machine-learning-tips-and-tricks.md rename to template/cs-229-machine-learning-tips-and-tricks.md index 9712297b8..edba03259 100644 --- a/hi/cheatsheet-machine-learning-tips-and-tricks.md +++ b/template/cs-229-machine-learning-tips-and-tricks.md @@ -1,3 +1,7 @@ +**Machine Learning tips and tricks translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-229/cheatsheet-machine-learning-tips-and-tricks) + +
+ **1. Machine Learning tips and tricks cheatsheet** ⟶ diff --git a/de/refresher-probability.md b/template/cs-229-probability.md similarity index 98% rename from de/refresher-probability.md rename to template/cs-229-probability.md index 5c9b34656..b8be13004 100644 --- a/de/refresher-probability.md +++ b/template/cs-229-probability.md @@ -1,3 +1,7 @@ +**Probabilities and Statistics translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-229/refresher-probabilities-statistics) + +
+ **1. Probabilities and Statistics refresher** ⟶ diff --git a/de/cheatsheet-supervised-learning.md b/template/cs-229-supervised-learning.md similarity index 98% rename from de/cheatsheet-supervised-learning.md rename to template/cs-229-supervised-learning.md index a6b19ea1c..d82685e6e 100644 --- a/de/cheatsheet-supervised-learning.md +++ b/template/cs-229-supervised-learning.md @@ -1,3 +1,7 @@ +**Supervised Learning translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-229/cheatsheet-supervised-learning) + +
+ **1. Supervised Learning cheatsheet** ⟶ diff --git a/he/cheatsheet-unsupervised-learning.md b/template/cs-229-unsupervised-learning.md similarity index 96% rename from he/cheatsheet-unsupervised-learning.md rename to template/cs-229-unsupervised-learning.md index 40724eb28..18fafef8c 100644 --- a/he/cheatsheet-unsupervised-learning.md +++ b/template/cs-229-unsupervised-learning.md @@ -1,3 +1,7 @@ +**Unsupervised Learning translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-229/cheatsheet-unsupervised-learning) + +
+ **1. Unsupervised Learning cheatsheet** ⟶ @@ -299,7 +303,7 @@ dimensions by maximizing the variance of the data as follows:**
-**51. The Machine Learning cheatsheets are now available in Hebrew.** +**51. The Machine Learning cheatsheets are now available in [target language].** ⟶ diff --git a/template/cs-230-convolutional-neural-networks.md b/template/cs-230-convolutional-neural-networks.md new file mode 100644 index 000000000..94006a675 --- /dev/null +++ b/template/cs-230-convolutional-neural-networks.md @@ -0,0 +1,716 @@ +**Convolutional Neural Networks translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks) + +
+ +**1. Convolutional Neural Networks cheatsheet** + +⟶ + +
+ + +**2. CS 230 - Deep Learning** + +⟶ + +
+ + +**3. [Overview, Architecture structure]** + +⟶ + +
+ + +**4. [Types of layer, Convolution, Pooling, Fully connected]** + +⟶ + +
+ + +**5. [Filter hyperparameters, Dimensions, Stride, Padding]** + +⟶ + +
+ + +**6. [Tuning hyperparameters, Parameter compatibility, Model complexity, Receptive field]** + +⟶ + +
+ + +**7. [Activation functions, Rectified Linear Unit, Softmax]** + +⟶ + +
+ + +**8. [Object detection, Types of models, Detection, Intersection over Union, Non-max suppression, YOLO, R-CNN]** + +⟶ + +
+ + +**9. [Face verification/recognition, One shot learning, Siamese network, Triplet loss]** + +⟶ + +
+ + +**10. [Neural style transfer, Activation, Style matrix, Style/content cost function]** + +⟶ + +
+ + +**11. [Computational trick architectures, Generative Adversarial Net, ResNet, Inception Network]** + +⟶ + +
+ + +**12. Overview** + +⟶ + +
+ + +**13. Architecture of a traditional CNN ― Convolutional neural networks, also known as CNNs, are a specific type of neural networks that are generally composed of the following layers:** + +⟶ + +
+ + +**14. The convolution layer and the pooling layer can be fine-tuned with respect to hyperparameters that are described in the next sections.** + +⟶ + +
+ + +**15. Types of layer** + +⟶ + +
+ + +**16. Convolution layer (CONV) ― The convolution layer (CONV) uses filters that perform convolution operations as it is scanning the input I with respect to its dimensions. Its hyperparameters include the filter size F and stride S. The resulting output O is called feature map or activation map.** + +⟶ + +
+ + +**17. Remark: the convolution step can be generalized to the 1D and 3D cases as well.** + +⟶ + +
+
+
+**18. Pooling (POOL) ― The pooling layer (POOL) is a downsampling operation, typically applied after a convolution layer, which brings some spatial invariance. In particular, max and average pooling are special kinds of pooling where the maximum and average value is taken, respectively.**
+
+⟶
+
+<br>
+ + +**19. [Type, Purpose, Illustration, Comments]** + +⟶ + +
+ + +**20. [Max pooling, Average pooling, Each pooling operation selects the maximum value of the current view, Each pooling operation averages the values of the current view]** + +⟶ + +
+ + +**21. [Preserves detected features, Most commonly used, Downsamples feature map, Used in LeNet]** + +⟶ + +
+ + +**22. Fully Connected (FC) ― The fully connected layer (FC) operates on a flattened input where each input is connected to all neurons. If present, FC layers are usually found towards the end of CNN architectures and can be used to optimize objectives such as class scores.** + +⟶ + +
+ + +**23. Filter hyperparameters** + +⟶ + +
+ + +**24. The convolution layer contains filters for which it is important to know the meaning behind its hyperparameters.** + +⟶ + +
+ + +**25. Dimensions of a filter ― A filter of size F×F applied to an input containing C channels is a F×F×C volume that performs convolutions on an input of size I×I×C and produces an output feature map (also called activation map) of size O×O×1.** + +⟶ + +
+ + +**26. Filter** + +⟶ + +
+ + +**27. Remark: the application of K filters of size F×F results in an output feature map of size O×O×K.** + +⟶ + +
+ + +**28. Stride ― For a convolutional or a pooling operation, the stride S denotes the number of pixels by which the window moves after each operation.** + +⟶ + +
+ + +**29. Zero-padding ― Zero-padding denotes the process of adding P zeroes to each side of the boundaries of the input. This value can either be manually specified or automatically set through one of the three modes detailed below:** + +⟶ + +
+ + +**30. [Mode, Value, Illustration, Purpose, Valid, Same, Full]** + +⟶ + +
+
+
+**31. [No padding, Drops last convolution if dimensions do not match, Padding such that feature map size has size ⌈I/S⌉, Output size is mathematically convenient, Also called 'half' padding, Maximum padding such that end convolutions are applied on the limits of the input, Filter 'sees' the input end-to-end]**
+
+⟶
+
+<br>
+ + +**32. Tuning hyperparameters** + +⟶ + +
+ + +**33. Parameter compatibility in convolution layer ― By noting I the length of the input volume size, F the length of the filter, P the amount of zero padding, S the stride, then the output size O of the feature map along that dimension is given by:** + +⟶ + +
+ + +**34. [Input, Filter, Output]** + +⟶ + +
+
+
+**35. Remark: oftentimes, Pstart=Pend≜P, in which case we can replace Pstart+Pend by 2P in the formula above.**
+
+⟶
+
+<br>
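As an illustration, the output-size formula above can be sketched in plain Python (the function name and the symmetric-padding assumption Pstart=Pend=P are ours, not part of the cheatsheet):

```python
def conv_output_size(I, F, S=1, P=0):
    """Output size O along one dimension, assuming symmetric padding Pstart = Pend = P."""
    # O = (I - F + 2P) / S + 1
    return (I - F + 2 * P) // S + 1

# 'same' padding with F=3, S=1 keeps the spatial size
print(conv_output_size(32, F=3, S=1, P=1))    # 32
print(conv_output_size(227, F=11, S=4, P=0))  # 55
```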
+ + +**36. Understanding the complexity of the model ― In order to assess the complexity of a model, it is often useful to determine the number of parameters that its architecture will have. In a given layer of a convolutional neural network, it is done as follows:** + +⟶ + +
+ + +**37. [Illustration, Input size, Output size, Number of parameters, Remarks]** + +⟶ + +
+
+
+**38. [One bias parameter per filter, In most cases, S<F]**
+
+⟶
+
+<br>
+
+
+**39. [Pooling operation done channel-wise, In most cases, S=F]**
+
+⟶
+
+<br>
+ + +**40. [Input is flattened, One bias parameter per neuron, The number of FC neurons is free of structural constraints]** + +⟶ + +
+ + +**41. Receptive field ― The receptive field at layer k is the area denoted Rk×Rk of the input that each pixel of the k-th activation map can 'see'. By calling Fj the filter size of layer j and Si the stride value of layer i and with the convention S0=1, the receptive field at layer k can be computed with the formula:** + +⟶ + +
+ + +**42. In the example below, we have F1=F2=3 and S1=S2=1, which gives R2=1+2⋅1+2⋅1=5.** + +⟶ + +
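The receptive-field formula can be sketched as follows (a minimal illustration in plain Python; the helper name is ours, and it follows the convention S0=1 stated above):

```python
from math import prod

def receptive_field(filter_sizes, strides):
    """R_k = 1 + sum_j (F_j - 1) * prod_{i<j} S_i, with the convention S_0 = 1."""
    R = 1
    for j, F in enumerate(filter_sizes):
        R += (F - 1) * prod(strides[:j])  # product of strides of the layers before layer j
    return R

print(receptive_field([3, 3], [1, 1]))  # 5, as in the example above
```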
+ + +**43. Commonly used activation functions** + +⟶ + +
+ + +**44. Rectified Linear Unit ― The rectified linear unit layer (ReLU) is an activation function g that is used on all elements of the volume. It aims at introducing non-linearities to the network. Its variants are summarized in the table below:** + +⟶ + +
+ + +**45. [ReLU, Leaky ReLU, ELU, with]** + +⟶ + +
+ + +**46. [Non-linearity complexities biologically interpretable, Addresses dying ReLU issue for negative values, Differentiable everywhere]** + +⟶ + +
+ + +**47. Softmax ― The softmax step can be seen as a generalized logistic function that takes as input a vector of scores x∈Rn and outputs a vector of output probability p∈Rn through a softmax function at the end of the architecture. It is defined as follows:** + +⟶ + +
+ + +**48. where** + +⟶ + +
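A minimal sketch of the softmax step in plain Python (the max-shift is a standard numerical-stability trick, not something the cheatsheet prescribes):

```python
from math import exp

def softmax(x):
    """Maps a vector of scores to a probability vector; shifting by max(x) is
    mathematically a no-op but avoids overflow in exp."""
    m = max(x)
    exps = [exp(v - m) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

p = softmax([1.0, 2.0, 3.0])
print(round(sum(p), 6))  # 1.0: outputs form a probability distribution
```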
+ + +**49. Object detection** + +⟶ + +
+ + +**50. Types of models ― There are 3 main types of object recognition algorithms, for which the nature of what is predicted is different. They are described in the table below:** + +⟶ + +
+ + +**51. [Image classification, Classification w. localization, Detection]** + +⟶ + +
+ + +**52. [Teddy bear, Book]** + +⟶ + +
+ + +**53. [Classifies a picture, Predicts probability of object, Detects an object in a picture, Predicts probability of object and where it is located, Detects up to several objects in a picture, Predicts probabilities of objects and where they are located]** + +⟶ + +
+ + +**54. [Traditional CNN, Simplified YOLO, R-CNN, YOLO, R-CNN]** + +⟶ + +
+ + +**55. Detection ― In the context of object detection, different methods are used depending on whether we just want to locate the object or detect a more complex shape in the image. The two main ones are summed up in the table below:** + +⟶ + +
+ + +**56. [Bounding box detection, Landmark detection]** + +⟶ + +
+ + +**57. [Detects the part of the image where the object is located, Detects a shape or characteristics of an object (e.g. eyes), More granular]** + +⟶ + +
+ + +**58. [Box of center (bx,by), height bh and width bw, Reference points (l1x,l1y), ..., (lnx,lny)]** + +⟶ + +
+ + +**59. Intersection over Union ― Intersection over Union, also known as IoU, is a function that quantifies how correctly positioned a predicted bounding box Bp is over the actual bounding box Ba. It is defined as:** + +⟶ + +
+ + +**60. Remark: we always have IoU∈[0,1]. By convention, a predicted bounding box Bp is considered as being reasonably good if IoU(Bp,Ba)⩾0.5.** + +⟶ + +
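The IoU definition can be sketched for axis-aligned boxes as follows (a minimal illustration; the `(x1, y1, x2, y2)` corner convention and function name are our assumptions):

```python
def iou(box_p, box_a):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2) corners."""
    x1 = max(box_p[0], box_a[0]); y1 = max(box_p[1], box_a[1])
    x2 = min(box_p[2], box_a[2]); y2 = min(box_p[3], box_a[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
             + (box_a[2] - box_a[0]) * (box_a[3] - box_a[1]) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7: intersection area 1, union area 7
```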
+ + +**61. Anchor boxes ― Anchor boxing is a technique used to predict overlapping bounding boxes. In practice, the network is allowed to predict more than one box simultaneously, where each box prediction is constrained to have a given set of geometrical properties. For instance, the first prediction can potentially be a rectangular box of a given form, while the second will be another rectangular box of a different geometrical form.** + +⟶ + +
+ + +**62. Non-max suppression ― The non-max suppression technique aims at removing duplicate overlapping bounding boxes of a same object by selecting the most representative ones. After having removed all boxes having a probability prediction lower than 0.6, the following steps are repeated while there are boxes remaining:** + +⟶ + +
+ + +**63. [For a given class, Step 1: Pick the box with the largest prediction probability., Step 2: Discard any box having an IoU⩾0.5 with the previous box.]** + +⟶ + +
+ + +**64. [Box predictions, Box selection of maximum probability, Overlap removal of same class, Final bounding boxes]** + +⟶ + +
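The two steps above can be sketched in plain Python (a minimal single-class illustration; the box representation and helper names are ours):

```python
def iou(b1, b2):
    """IoU of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (b1[2] - b1[0]) * (b1[3] - b1[1]) + (b2[2] - b2[0]) * (b2[3] - b2[1]) - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, prob_threshold=0.6, iou_threshold=0.5):
    """boxes: list of (probability, (x1, y1, x2, y2)) pairs for a single class."""
    candidates = sorted((b for b in boxes if b[0] >= prob_threshold), reverse=True)
    kept = []
    while candidates:
        best = candidates.pop(0)  # Step 1: pick the largest prediction probability
        kept.append(best)
        candidates = [c for c in candidates
                      if iou(c[1], best[1]) < iou_threshold]  # Step 2: discard overlaps
    return kept

boxes = [(0.9, (0, 0, 2, 2)), (0.8, (0, 0, 2.1, 2.1)), (0.7, (5, 5, 7, 7))]
print([p for p, _ in non_max_suppression(boxes)])  # [0.9, 0.7]
```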
+ + +**65. YOLO ― You Only Look Once (YOLO) is an object detection algorithm that performs the following steps:** + +⟶ + +
+ + +**66. [Step 1: Divide the input image into a G×G grid., Step 2: For each grid cell, run a CNN that predicts y of the following form:, repeated k times]** + +⟶ + +
+
+
+**67. where pc is the probability of detecting an object, bx,by,bh,bw are the properties of the detected bounding box, c1,...,cp is a one-hot representation of which of the p classes were detected, and k is the number of anchor boxes.**
+
+⟶
+
+<br>
+ + +**68. Step 3: Run the non-max suppression algorithm to remove any potential duplicate overlapping bounding boxes.** + +⟶ + +
+ + +**69. [Original image, Division in GxG grid, Bounding box prediction, Non-max suppression]** + +⟶ + +
+ + +**70. Remark: when pc=0, then the network does not detect any object. In that case, the corresponding predictions bx,...,cp have to be ignored.** + +⟶ + +
+
+
+**71. R-CNN ― Region with Convolutional Neural Networks (R-CNN) is an object detection algorithm that first segments the image to find potentially relevant bounding boxes and then runs the detection algorithm to find the most probable objects in those bounding boxes.**
+
+⟶
+
+<br>
+ + +**72. [Original image, Segmentation, Bounding box prediction, Non-max suppression]** + +⟶ + +
+ + +**73. Remark: although the original algorithm is computationally expensive and slow, newer architectures enabled the algorithm to run faster, such as Fast R-CNN and Faster R-CNN.** + +⟶ + +
+ + +**74. Face verification and recognition** + +⟶ + +
+
+
+**75. Types of models ― Two main types of model are summed up in the table below:**
+
+⟶
+
+<br>
+ + +**76. [Face verification, Face recognition, Query, Reference, Database]** + +⟶ + +
+ + +**77. [Is this the correct person?, One-to-one lookup, Is this one of the K persons in the database?, One-to-many lookup]** + +⟶ + +
+ + +**78. One Shot Learning ― One Shot Learning is a face verification algorithm that uses a limited training set to learn a similarity function that quantifies how different two given images are. The similarity function applied to two images is often noted d(image 1,image 2).** + +⟶ + +
+ + +**79. Siamese Network ― Siamese Networks aim at learning how to encode images to then quantify how different two images are. For a given input image x(i), the encoded output is often noted as f(x(i)).** + +⟶ + +
+
+
+**80. Triplet loss ― The triplet loss ℓ is a loss function computed on the embedding representation of a triplet of images A (anchor), P (positive) and N (negative). The anchor and the positive example belong to a same class, while the negative example belongs to another one. By calling α∈R+ the margin parameter, this loss is defined as follows:**
+
+⟶
+
+<br>
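A minimal sketch of this loss on toy embeddings, assuming d is the squared Euclidean distance between encodings (a common choice, not stated explicitly above):

```python
def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """l = max(d(A,P) - d(A,N) + alpha, 0) with d the squared Euclidean distance;
    f_a, f_p, f_n are the encodings f(A), f(P), f(N)."""
    d = lambda u, v: sum((ui - vi) ** 2 for ui, vi in zip(u, v))
    return max(d(f_a, f_p) - d(f_a, f_n) + alpha, 0.0)

# anchor close to the positive and far from the negative: the loss reaches zero
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [3.0, 0.0]))  # 0.0
```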
+ + +**81. Neural style transfer** + +⟶ + +
+ + +**82. Motivation ― The goal of neural style transfer is to generate an image G based on a given content C and a given style S.** + +⟶ + +
+ + +**83. [Content C, Style S, Generated image G]** + +⟶ + +
+ + +**84. Activation ― In a given layer l, the activation is noted a[l] and is of dimensions nH×nw×nc** + +⟶ + +
+ + +**85. Content cost function ― The content cost function Jcontent(C,G) is used to determine how the generated image G differs from the original content image C. It is defined as follows:** + +⟶ + +
+ + +**86. Style matrix ― The style matrix G[l] of a given layer l is a Gram matrix where each of its elements G[l]kk′ quantifies how correlated the channels k and k′ are. It is defined with respect to activations a[l] as follows:** + +⟶ + +
+ + +**87. Remark: the style matrix for the style image and the generated image are noted G[l] (S) and G[l] (G) respectively.** + +⟶ + +
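The style-matrix definition can be sketched on a tiny activation volume (a minimal illustration using nested lists in place of a tensor library; shapes and names are ours):

```python
def gram_matrix(a):
    """Style (Gram) matrix of an activation volume a of shape n_H x n_W x n_C:
    G[k][k2] = sum over spatial positions (i, j) of a[i][j][k] * a[i][j][k2]."""
    n_H, n_W, n_C = len(a), len(a[0]), len(a[0][0])
    G = [[0.0] * n_C for _ in range(n_C)]
    for i in range(n_H):
        for j in range(n_W):
            for k in range(n_C):
                for k2 in range(n_C):
                    G[k][k2] += a[i][j][k] * a[i][j][k2]
    return G

a = [[[1.0, 2.0]], [[0.0, 1.0]]]  # n_H=2, n_W=1, n_C=2
print(gram_matrix(a))  # [[1.0, 2.0], [2.0, 5.0]] — symmetric, as expected
```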
+ + +**88. Style cost function ― The style cost function Jstyle(S,G) is used to determine how the generated image G differs from the style S. It is defined as follows:** + +⟶ + +
+ + +**89. Overall cost function ― The overall cost function is defined as being a combination of the content and style cost functions, weighted by parameters α,β, as follows:** + +⟶ + +
+ + +**90. Remark: a higher value of α will make the model care more about the content while a higher value of β will make it care more about the style.** + +⟶ + +
+ + +**91. Architectures using computational tricks** + +⟶ + +
+ + +**92. Generative Adversarial Network ― Generative adversarial networks, also known as GANs, are composed of a generative and a discriminative model, where the generative model aims at generating the most truthful output that will be fed into the discriminative which aims at differentiating the generated and true image.** + +⟶ + +
+ + +**93. [Training, Noise, Real-world image, Generator, Discriminator, Real Fake]** + +⟶ + +
+ + +**94. Remark: use cases using variants of GANs include text to image, music generation and synthesis.** + +⟶ + +
+ + +**95. ResNet ― The Residual Network architecture (also called ResNet) uses residual blocks with a high number of layers meant to decrease the training error. The residual block has the following characterizing equation:** + +⟶ + +
+ + +**96. Inception Network ― This architecture uses inception modules and aims at giving a try at different convolutions in order to increase its performance through features diversification. In particular, it uses the 1×1 convolution trick to limit the computational burden.** + +⟶ + +
+ + +**97. The Deep Learning cheatsheets are now available in [target language].** + +⟶ + +
+ + +**98. Original authors** + +⟶ + +
+ + +**99. Translated by X, Y and Z** + +⟶ + +
+ + +**100. Reviewed by X, Y and Z** + +⟶ + +
+ + +**101. View PDF version on GitHub** + +⟶ + +
+ + +**102. By X and Y** + +⟶ + +
diff --git a/template/cs-230-deep-learning-tips-and-tricks.md b/template/cs-230-deep-learning-tips-and-tricks.md new file mode 100644 index 000000000..75127ac5d --- /dev/null +++ b/template/cs-230-deep-learning-tips-and-tricks.md @@ -0,0 +1,457 @@ +**Deep Learning Tips and Tricks translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-deep-learning-tips-and-tricks) + +
+ +**1. Deep Learning Tips and Tricks cheatsheet** + +⟶ + +
+ + +**2. CS 230 - Deep Learning** + +⟶ + +
+ + +**3. Tips and tricks** + +⟶ + +
+ + +**4. [Data processing, Data augmentation, Batch normalization]** + +⟶ + +
+ + +**5. [Training a neural network, Epoch, Mini-batch, Cross-entropy loss, Backpropagation, Gradient descent, Updating weights, Gradient checking]** + +⟶ + +
+ + +**6. [Parameter tuning, Xavier initialization, Transfer learning, Learning rate, Adaptive learning rates]** + +⟶ + +
+ + +**7. [Regularization, Dropout, Weight regularization, Early stopping]** + +⟶ + +
+ + +**8. [Good practices, Overfitting small batch, Gradient checking]** + +⟶ + +
+ + +**9. View PDF version on GitHub** + +⟶ + +
+ + +**10. Data processing** + +⟶ + +
+ + +**11. Data augmentation ― Deep learning models usually need a lot of data to be properly trained. It is often useful to get more data from the existing ones using data augmentation techniques. The main ones are summed up in the table below. More precisely, given the following input image, here are the techniques that we can apply:** + +⟶ + +
+ + +**12. [Original, Flip, Rotation, Random crop]** + +⟶ + +
+ + +**13. [Image without any modification, Flipped with respect to an axis for which the meaning of the image is preserved, Rotation with a slight angle, Simulates incorrect horizon calibration, Random focus on one part of the image, Several random crops can be done in a row]** + +⟶ + +
+ + +**14. [Color shift, Noise addition, Information loss, Contrast change]** + +⟶ + +
+ + +**15. [Nuances of RGB is slightly changed, Captures noise that can occur with light exposure, Addition of noise, More tolerance to quality variation of inputs, Parts of image ignored, Mimics potential loss of parts of image, Luminosity changes, Controls difference in exposition due to time of day]** + +⟶ + +
+ + +**16. Remark: data is usually augmented on the fly during training.** + +⟶ + +
+
+
+**17. Batch normalization ― It is a step of hyperparameters γ,β that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of the batch that we want to correct, it is done as follows:**
+
+⟶
+
+<br>
+ + +**18. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.** + +⟶ + +
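A minimal sketch of this normalization on a batch of scalars (the ε term is the usual small constant added for numerical stability; function name is ours):

```python
from math import sqrt

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-8):
    """Normalizes a batch to zero mean / unit variance, then applies the
    learnable scale gamma and shift beta."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mu) / sqrt(var + eps) + beta for x in xs]

print(batch_norm([1.0, 2.0, 3.0]))  # zero-mean output, roughly [-1.22, 0.0, 1.22]
```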
+ + +**19. Training a neural network** + +⟶ + +
+ + +**20. Definitions** + +⟶ + +
+ + +**21. Epoch ― In the context of training a model, epoch is a term used to refer to one iteration where the model sees the whole training set to update its weights.** + +⟶ + +
+
+
+**22. Mini-batch gradient descent ― During the training phase, updating weights is usually based neither on the whole training set at once, because of computational complexity, nor on a single data point, because of noise issues. Instead, the update step is done on mini-batches, where the number of data points in a batch is a hyperparameter that we can tune.**
+
+⟶
+
+<br>
+ + +**23. Loss function ― In order to quantify how a given model performs, the loss function L is usually used to evaluate to what extent the actual outputs y are correctly predicted by the model outputs z.** + +⟶ + +
+ + +**24. Cross-entropy loss ― In the context of binary classification in neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** + +⟶ + +
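The cross-entropy loss can be sketched directly from its definition (a minimal illustration; z is the model output in (0,1) and y the true label in {0,1}):

```python
from math import log

def cross_entropy_loss(z, y):
    """Binary cross-entropy: L(z, y) = -(y*log(z) + (1-y)*log(1-z))."""
    return -(y * log(z) + (1 - y) * log(1 - z))

print(round(cross_entropy_loss(0.9, 1), 4))  # 0.1054: confident and correct -> small loss
print(round(cross_entropy_loss(0.9, 0), 4))  # 2.3026: confident and wrong -> large loss
```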
+ + +**25. Finding optimal weights** + +⟶ + +
+ + +**26. Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to each weight w is computed using the chain rule.** + +⟶ + +
+ + +**27. Using this method, each weight is updated with the rule:** + +⟶ + +
+ + +**28. Updating weights ― In a neural network, weights are updated as follows:** + +⟶ + +
+ + +**29. [Step 1: Take a batch of training data and perform forward propagation to compute the loss, Step 2: Backpropagate the loss to get the gradient of the loss with respect to each weight, Step 3: Use the gradients to update the weights of the network.]** + +⟶ + +
+ + +**30. [Forward propagation, Backpropagation, Weights update]** + +⟶ + +
+ + +**31. Parameter tuning** + +⟶ + +
+ + +**32. Weights initialization** + +⟶ + +
+
+
+**33. Xavier initialization ― Instead of initializing the weights in a purely random manner, Xavier initialization produces initial weights that take into account characteristics that are unique to the architecture.**
+
+⟶
+
+<br>
+ + +**34. Transfer learning ― Training a deep learning model requires a lot of data and more importantly a lot of time. It is often useful to take advantage of pre-trained weights on huge datasets that took days/weeks to train, and leverage it towards our use case. Depending on how much data we have at hand, here are the different ways to leverage this:** + +⟶ + +
+ + +**35. [Training size, Illustration, Explanation]** + +⟶ + +
+ + +**36. [Small, Medium, Large]** + +⟶ + +
+ + +**37. [Freezes all layers, trains weights on softmax, Freezes most layers, trains weights on last layers and softmax, Trains weights on layers and softmax by initializing weights on pre-trained ones]** + +⟶ + +
+ + +**38. Optimizing convergence** + +⟶ + +
+
+
+**39. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. It can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.**
+
+⟶
+
+<br>
+ + +**40. Adaptive learning rates ― Letting the learning rate vary when training a model can reduce the training time and improve the numerical optimal solution. While Adam optimizer is the most commonly used technique, others can also be useful. They are summed up in the table below:** + +⟶ + +
+ + +**41. [Method, Explanation, Update of w, Update of b]** + +⟶ + +
+ + +**42. [Momentum, Dampens oscillations, Improvement to SGD, 2 parameters to tune]** + +⟶ + +
+ + +**43. [RMSprop, Root Mean Square propagation, Speeds up learning algorithm by controlling oscillations]** + +⟶ + +
+ + +**44. [Adam, Adaptive Moment estimation, Most popular method, 4 parameters to tune]** + +⟶ + +
+ + +**45. Remark: other methods include Adadelta, Adagrad and SGD.** + +⟶ + +
+ + +**46. Regularization** + +⟶ + +
+ + +**47. Dropout ― Dropout is a technique used in neural networks to prevent overfitting the training data by dropping out neurons with probability p>0. It forces the model to avoid relying too much on particular sets of features.** + +⟶ + +
+ + +**48. Remark: most deep learning frameworks parametrize dropout through the 'keep' parameter 1−p.** + +⟶ + +
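A minimal sketch of inverted dropout at training time, using the 'keep' parameterization 1−p mentioned in the remark (the scaling by 1/(1−p) keeps expected activations unchanged at test time; names are ours):

```python
import random

def dropout(activations, p=0.3, training=True):
    """Inverted dropout: each neuron is dropped with probability p; kept values
    are scaled by 1/(1-p), so no rescaling is needed at test time."""
    if not training:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if random.random() < keep else 0.0 for a in activations]

random.seed(0)
print(dropout([1.0, 1.0, 1.0, 1.0], p=0.5))  # surviving units are scaled to 2.0
```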
+ + +**49. Weight regularization ― In order to make sure that the weights are not too large and that the model is not overfitting the training set, regularization techniques are usually performed on the model weights. The main ones are summed up in the table below:** + +⟶ + +
+ + +**50. [LASSO, Ridge, Elastic Net]** + +⟶ + +
+
+**50 bis. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]**
+
+⟶
+
+<br>
+ +**51. Early stopping ― This regularization technique stops the training process as soon as the validation loss reaches a plateau or starts to increase.** + +⟶ + +
+ + +**52. [Error, Validation, Training, early stopping, Epochs]** + +⟶ + +
+ + +**53. Good practices** + +⟶ + +
+ + +**54. Overfitting small batch ― When debugging a model, it is often useful to make quick tests to see if there is any major issue with the architecture of the model itself. In particular, in order to make sure that the model can be properly trained, a mini-batch is passed inside the network to see if it can overfit on it. If it cannot, it means that the model is either too complex or not complex enough to even overfit on a small batch, let alone a normal-sized training set.** + +⟶ + +
+ + +**55. Gradient checking ― Gradient checking is a method used during the implementation of the backward pass of a neural network. It compares the value of the analytical gradient to the numerical gradient at given points and plays the role of a sanity-check for correctness.** + +⟶ + +
+ + +**56. [Type, Numerical gradient, Analytical gradient]** + +⟶ + +
+ + +**57. [Formula, Comments]** + +⟶ + +
+ + +**58. [Expensive; loss has to be computed two times per dimension, Used to verify correctness of analytical implementation, Trade-off in choosing h not too small (numerical instability) nor too large (poor gradient approximation)]** + +⟶ + +
+ + +**59. ['Exact' result, Direct computation, Used in the final implementation]** + +⟶ + +
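The numerical-gradient column above uses the centered difference; a minimal sketch of the sanity check on a scalar function (the example function is ours):

```python
def numerical_gradient(f, x, h=1e-5):
    """Centered difference (f(x+h) - f(x-h)) / (2h), used to sanity-check an
    analytical gradient implementation at a given point."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 3          # analytical gradient: 3x^2
num = numerical_gradient(f, 2.0)
ana = 3 * 2.0 ** 2
print(abs(num - ana) < 1e-6)  # True: the two gradients agree
```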
+
+
+**60. The Deep Learning cheatsheets are now available in [target language].**
+
+⟶
+
+<br>
+
+**61. Original authors**
+
+⟶
+
+<br>
+ +**62.Translated by X, Y and Z** + +⟶ + +
+ +**63.Reviewed by X, Y and Z** + +⟶ + +
+ +**64.View PDF version on GitHub** + +⟶ + +
+ +**65.By X and Y** + +⟶ + +
diff --git a/template/cs-230-recurrent-neural-networks.md b/template/cs-230-recurrent-neural-networks.md new file mode 100644 index 000000000..bd3c638bc --- /dev/null +++ b/template/cs-230-recurrent-neural-networks.md @@ -0,0 +1,677 @@ +**Recurrent Neural Networks translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-recurrent-neural-networks) + +
+ +**1. Recurrent Neural Networks cheatsheet** + +⟶ + +
+ + +**2. CS 230 - Deep Learning** + +⟶ + +
+ + +**3. [Overview, Architecture structure, Applications of RNNs, Loss function, Backpropagation]** + +⟶ + +
+ + +**4. [Handling long term dependencies, Common activation functions, Vanishing/exploding gradient, Gradient clipping, GRU/LSTM, Types of gates, Bidirectional RNN, Deep RNN]** + +⟶ + +
+ + +**5. [Learning word representation, Notations, Embedding matrix, Word2vec, Skip-gram, Negative sampling, GloVe]** + +⟶ + +
+ + +**6. [Comparing words, Cosine similarity, t-SNE]** + +⟶ + +
+ + +**7. [Language model, n-gram, Perplexity]** + +⟶ + +
+ + +**8. [Machine translation, Beam search, Length normalization, Error analysis, Bleu score]** + +⟶ + +
+ + +**9. [Attention, Attention model, Attention weights]** + +⟶ + +
+ + +**10. Overview** + +⟶ + +
+ + +**11. Architecture of a traditional RNN ― Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. They are typically as follows:** + +⟶ + +
+ + +**12. For each timestep t, the activation a and the output y are expressed as follows:** + +⟶ + +
+ + +**13. and** + +⟶ + +
+
+
+**14. where Wax,Waa,Wya,ba,by are coefficients that are shared temporally and g1,g2 are activation functions.**
+
+⟶
+
+<br>
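The two update equations above can be sketched for one timestep (a minimal illustration with nested lists as matrices; taking g1=tanh and g2=sigmoid is our choice of activations):

```python
from math import tanh, exp

def rnn_step(a_prev, x_t, Waa, Wax, Wya, ba, by):
    """One timestep: a_t = tanh(Waa·a_prev + Wax·x_t + ba), y_t = sigmoid(Wya·a_t + by)."""
    matvec = lambda M, v: [sum(m * vi for m, vi in zip(row, v)) for row in M]
    sigmoid = lambda z: 1.0 / (1.0 + exp(-z))
    a_t = [tanh(u + v + b) for u, v, b in zip(matvec(Waa, a_prev), matvec(Wax, x_t), ba)]
    y_t = [sigmoid(u + b) for u, b in zip(matvec(Wya, a_t), by)]
    return a_t, y_t

a, y = rnn_step([0.0, 0.0], [1.0], Waa=[[0.5, 0.0], [0.0, 0.5]],
                Wax=[[1.0], [-1.0]], Wya=[[1.0, 1.0]], ba=[0.0, 0.0], by=[0.0])
print(y)  # [0.5]: the two hidden units cancel before the sigmoid
```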
+ + +**15. The pros and cons of a typical RNN architecture are summed up in the table below:** + +⟶ + +
+ + +**16. [Advantages, Possibility of processing input of any length, Model size not increasing with size of input, Computation takes into account historical information, Weights are shared across time]** + +⟶ + +
+ + +**17. [Drawbacks, Computation being slow, Difficulty of accessing information from a long time ago, Cannot consider any future input for the current state]** + +⟶ + +
+ + +**18. Applications of RNNs ― RNN models are mostly used in the fields of natural language processing and speech recognition. The different applications are summed up in the table below:** + +⟶ + +
+ + +**19. [Type of RNN, Illustration, Example]** + +⟶ + +
+ + +**20. [One-to-one, One-to-many, Many-to-one, Many-to-many]** + +⟶ + +
+
+
+**21. [Traditional neural network, Music generation, Sentiment classification, Named entity recognition, Machine translation]**
+
+⟶
+
+<br>
+ + +**22. Loss function ― In the case of a recurrent neural network, the loss function L of all time steps is defined based on the loss at every time step as follows:** + +⟶ + +
+ + +**23. Backpropagation through time ― Backpropagation is done at each point in time. At timestep T, the derivative of the loss L with respect to weight matrix W is expressed as follows:** + +⟶ + +
+ + +**24. Handling long term dependencies** + +⟶ + +
+ + +**25. Commonly used activation functions ― The most common activation functions used in RNN modules are described below:** + +⟶ + +
+ + +**26. [Sigmoid, Tanh, RELU]** + +⟶ + +
+ + +**27. Vanishing/exploding gradient ― The vanishing and exploding gradient phenomena are often encountered in the context of RNNs. The reason why they happen is that it is difficult to capture long term dependencies because of multiplicative gradient that can be exponentially decreasing/increasing with respect to the number of layers.** + +⟶ + +
+ + +**28. Gradient clipping ― It is a technique used to cope with the exploding gradient problem sometimes encountered when performing backpropagation. By capping the maximum value for the gradient, this phenomenon is controlled in practice.** + +⟶ + +
+ + +**29. clipped** + +⟶ + +
+ + +**30. Types of gates ― In order to remedy the vanishing gradient problem, specific gates are used in some types of RNNs and usually have a well-defined purpose. They are usually noted Γ and are equal to:** + +⟶ + +
+ + +**31. where W,U,b are coefficients specific to the gate and σ is the sigmoid function. The main ones are summed up in the table below:** + +⟶ + +
+ + +**32. [Type of gate, Role, Used in]** + +⟶ + +
+ + +**33. [Update gate, Relevance gate, Forget gate, Output gate]** + +⟶ + +
+ + +**34. [How much past should matter now?, Drop previous information?, Erase a cell or not?, How much to reveal of a cell?]** + +⟶ + +
+ + +**35. [LSTM, GRU]** + +⟶ + +
+ + +**36. GRU/LSTM ― Gated Recurrent Unit (GRU) and Long Short-Term Memory units (LSTM) deal with the vanishing gradient problem encountered by traditional RNNs, with LSTM being a generalization of GRU. Below is a table summing up the characterizing equations of each architecture:** + +⟶ + +
+ + +**37. [Characterization, Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Dependencies]** + +⟶ + +
+ + +**38. Remark: the sign ⋆ denotes the element-wise multiplication between two vectors.** + +⟶ + +
+ + +**39. Variants of RNNs ― The table below sums up the other commonly used RNN architectures:** + +⟶ + +
+ + +**40. [Bidirectional (BRNN), Deep (DRNN)]** + +⟶ + +
+ + +**41. Learning word representation** + +⟶ + +
+ + +**42. In this section, we note V the vocabulary and |V| its size.** + +⟶ + +
+ + +**43. Motivation and notations** + +⟶ + +
+ + +**44. Representation techniques ― The two main ways of representing words are summed up in the table below:** + +⟶ + +
+ + +**45. [1-hot representation, Word embedding]** + +⟶ + +
+ + +**46. [teddy bear, book, soft]** + +⟶ + +
+ + +**47. [Noted ow, Naive approach, no similarity information, Noted ew, Takes into account words similarity]** + +⟶ + +
+ + +**48. Embedding matrix ― For a given word w, the embedding matrix E is a matrix that maps its 1-hot representation ow to its embedding ew as follows:** + +⟶ + +
+ + +**49. Remark: learning the embedding matrix can be done using target/context likelihood models.** + +⟶ + +
+ + +**50. Word embeddings** + +⟶ + +
+ + +**51. Word2vec ― Word2vec is a framework aimed at learning word embeddings by estimating the likelihood that a given word is surrounded by other words. Popular models include skip-gram, negative sampling and CBOW.** + +⟶ + +
+ + +**52. [A cute teddy bear is reading, teddy bear, soft, Persian poetry, art]** + +⟶ + +
+ + +**53. [Train network on proxy task, Extract high-level representation, Compute word embeddings]** + +⟶ + +
+ + +**54. Skip-gram ― The skip-gram word2vec model is a supervised learning task that learns word embeddings by assessing the likelihood of any given target word t happening with a context word c. By noting θt a parameter associated with t, the probability P(t|c) is given by:** + +⟶ + +
+ + +**55. Remark: summing over the whole vocabulary in the denominator of the softmax part makes this model computationally expensive. CBOW is another word2vec model using the surrounding words to predict a given word.** + +⟶ + +
+
+
+**56. Negative sampling ― It is a set of binary classifiers using logistic regressions that aim at assessing how likely a given context word and a given target word are to appear simultaneously, with the models being trained on sets of k negative examples and 1 positive example. Given a context word c and a target word t, the prediction is expressed by:**
+
+⟶
+
+<br>
+ + +**57. Remark: this method is less computationally expensive than the skip-gram model.** + +⟶ + +
+
+
+**57bis. GloVe ― The GloVe model, short for global vectors for word representation, is a word embedding technique that uses a co-occurrence matrix X where each Xi,j denotes the number of times that a target i occurred with a context j. Its cost function J is as follows:**
+
+⟶
+
+<br>
+ + +**58. where f is a weighting function such that Xi,j=0⟹f(Xi,j)=0. +Given the symmetry that e and θ play in this model, the final word embedding e(final)w is given by:** + +⟶ + +
+ + +**59. Remark: the individual components of the learned word embeddings are not necessarily interpretable.** + +⟶ + +
+ + +**60. Comparing words** + +⟶ + +
+ + +**61. Cosine similarity ― The cosine similarity between words w1 and w2 is expressed as follows:** + +⟶ + +
+ + +**62. Remark: θ is the angle between words w1 and w2.** + +⟶ + +
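The cosine similarity can be sketched directly from its definition (a minimal illustration on toy word vectors; the function name is ours):

```python
from math import sqrt

def cosine_similarity(w1, w2):
    """similarity = (w1 · w2) / (||w1|| ||w2||), i.e. cos(θ) between the two vectors."""
    dot = sum(a * b for a, b in zip(w1, w2))
    norm = sqrt(sum(a * a for a in w1)) * sqrt(sum(b * b for b in w2))
    return dot / norm

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0: identical directions
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0: orthogonal embeddings
```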
+ + +**63. t-SNE ― t-SNE (t-distributed Stochastic Neighbor Embedding) is a technique aimed at reducing high-dimensional embeddings into a lower dimensional space. In practice, it is commonly used to visualize word vectors in the 2D space.** + +⟶ + +
+ + +**64. [literature, art, book, culture, poem, reading, knowledge, entertaining, loveable, childhood, kind, teddy bear, soft, hug, cute, adorable]** + +⟶ + +
+ + +**65. Language model** + +⟶ + +
+ + +**66. Overview ― A language model aims at estimating the probability of a sentence P(y).** + +⟶ + +
+
+
+**67. n-gram model ― This model is a naive approach aiming at quantifying the probability that an expression appears in a corpus by counting its number of appearances in the training data.**
+
+⟶
+
+<br>
+
+
+**68. Perplexity ― Language models are commonly assessed using the perplexity metric, also known as PP, which can be interpreted as the inverse probability of the dataset normalized by the number of words T. The lower the perplexity, the better; it is defined as follows:**
+
+⟶
+
+<br>
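A minimal Python sketch of the perplexity formula, on an assumed toy sequence of token probabilities:

```python
import math

def perplexity(token_probs):
    # PP = (prod_t 1 / P(w_t))^(1/T) = exp(-(1/T) * sum_t log P(w_t))
    T = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / T)

# a model assigning uniform probability 1/4 to each of 4 tokens has PP = 4
pp = perplexity([0.25, 0.25, 0.25, 0.25])
```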
+ + +**69. Remark: PP is commonly used in t-SNE.** + +⟶ + +
+ + +**70. Machine translation** + +⟶ + +
+
+
+**71. Overview ― A machine translation model is similar to a language model except it has an encoder network placed before it. For this reason, it is sometimes referred to as a conditional language model. The goal is to find a sentence y such that:**
+
+⟶
+
+<br>
+ + +**72. Beam search ― It is a heuristic search algorithm used in machine translation and speech recognition to find the likeliest sentence y given an input x.** + +⟶ + +
+
+
+**73. [Step 1: Find top B likely words y<1>, Step 2: Compute conditional probabilities y<k>|x,y<1>,...,y<k−1>, Step 3: Keep top B combinations x,y<1>,...,y<k>, End process at a stop word]**
+
+⟶
+
+<br>
+ + +**74. Remark: if the beam width is set to 1, then this is equivalent to a naive greedy search.** + +⟶ + +
+
+
+**75. Beam width ― The beam width B is a parameter for beam search. Large values of B yield better results but with slower performance and increased memory usage. Small values of B lead to worse results but are less computationally intensive. A standard value for B is around 10.**
+
+⟶
+
+<br>
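The search procedure above can be sketched in Python; the step distributions below are assumed toy values, chosen so that greedy decoding (B=1) misses the likeliest sentence:

```python
import math

def beam_search(next_probs, B, length):
    # Keep the B highest log-probability prefixes at every step.
    beams = [((), 0.0)]
    for _ in range(length):
        candidates = []
        for prefix, lp in beams:
            for tok, p in next_probs(prefix).items():
                candidates.append((prefix + (tok,), lp + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:B]
    return beams

def toy_model(prefix):
    # assumed toy conditional distributions over a tiny vocabulary
    if not prefix:
        return {"a": 0.6, "b": 0.4}
    if prefix[-1] == "a":
        return {"c": 0.5, "d": 0.5}
    return {"c": 0.9, "d": 0.1}

best, _ = beam_search(toy_model, B=2, length=2)[0]    # ("b", "c"), P = 0.36
greedy, _ = beam_search(toy_model, B=1, length=2)[0]  # starts with "a", P = 0.30
```

With B=1 the search degenerates to the naive greedy search mentioned in the remark, which here commits to "a" at step 1 and misses the likelier sentence.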
+ + +**76. Length normalization ― In order to improve numerical stability, beam search is usually applied on the following normalized objective, often called the normalized log-likelihood objective, defined as:** + +⟶ + +
+ + +**77. Remark: the parameter α can be seen as a softener, and its value is usually between 0.5 and 1.** + +⟶ + +
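A minimal sketch of the normalized objective, on assumed toy token probabilities (α=1 reduces it to the per-token average log-likelihood, α=0 to the unnormalized sum):

```python
import math

def normalized_objective(token_probs, alpha=0.7):
    # (1 / T^alpha) * sum_t log P(y<t> | x, y<1>, ..., y<t-1>)
    T = len(token_probs)
    return sum(math.log(p) for p in token_probs) / (T ** alpha)

short = normalized_objective([0.9] * 2, alpha=1.0)
longer = normalized_objective([0.9] * 6, alpha=1.0)  # same per-token score
```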
+ + +**78. Error analysis ― When obtaining a predicted translation ˆy that is bad, one can wonder why we did not get a good translation y∗ by performing the following error analysis:** + +⟶ + +
+ + +**79. [Case, Root cause, Remedies]** + +⟶ + +
+ + +**80. [Beam search faulty, RNN faulty, Increase beam width, Try different architecture, Regularize, Get more data]** + +⟶ + +
+ + +**81. Bleu score ― The bilingual evaluation understudy (bleu) score quantifies how good a machine translation is by computing a similarity score based on n-gram precision. It is defined as follows:** + +⟶ + +
+ + +**82. where pn is the bleu score on n-gram only defined as follows:** + +⟶ + +
+ + +**83. Remark: a brevity penalty may be applied to short predicted translations to prevent an artificially inflated bleu score.** + +⟶ + +
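A simplified Python sketch of the bleu score (assumed toy sentences, N=2 and a single reference, brevity penalty included):

```python
import math
from collections import Counter

def ngram_precision(candidate, reference, n):
    # Clipped n-gram precision p_n.
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    return clipped / max(sum(cand.values()), 1)

def bleu(candidate, reference, N=2):
    # Geometric mean of p_1..p_N times a brevity penalty for short candidates.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    ps = [ngram_precision(candidate, reference, n) for n in range(1, N + 1)]
    if min(ps) == 0.0:
        return 0.0
    return bp * math.exp(sum(math.log(p) for p in ps) / N)

score = bleu("the cat sat".split(), "the cat sat".split())  # perfect match: 1.0
```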
+ + +**84. Attention** + +⟶ + +
+
+
+**85. Attention model ― This model allows an RNN to pay attention to specific parts of the input that are considered important, which improves the performance of the resulting model in practice. By noting α<t,t′> the amount of attention that the output y<t> should pay to the activation a<t′> and c<t> the context at time t, we have:**
+
+⟶
+
+<br>
+ + +**86. with** + +⟶ + +
+ + +**87. Remark: the attention scores are commonly used in image captioning and machine translation.** + +⟶ + +
+ + +**88. A cute teddy bear is reading Persian literature.** + +⟶ + +
+
+
+**89. Attention weight ― The amount of attention that the output y<t> should pay to the activation a<t′> is given by α<t,t′> computed as follows:**
+
+⟶
+
+<br>
+ + +**90. Remark: computation complexity is quadratic with respect to Tx.** + +⟶ + +
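A minimal sketch of the attention computation on assumed toy scores and activations: the weights are a softmax over the Tx positions, and the context is their weighted sum:

```python
import numpy as np

def attention_weights(scores):
    # alpha<t,t'> = softmax of the attention scores e<t,t'> over the Tx positions
    e = np.exp(scores - np.max(scores))  # shift for numerical stability
    return e / e.sum()

alpha = attention_weights(np.array([2.0, 1.0, 0.1]))  # assumed toy scores
activations = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
context = alpha @ activations  # c<t> = sum_t' alpha<t,t'> a<t'>
```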
+ + +**91. The Deep Learning cheatsheets are now available in [target language].** + +⟶ + +
+ +**92. Original authors** + +⟶ + +
+ +**93. Translated by X, Y and Z** + +⟶ + +
+ +**94. Reviewed by X, Y and Z** + +⟶ + +
+ +**95. View PDF version on GitHub** + +⟶ + +
+ +**96. By X and Y** + +⟶ + +
diff --git a/template/refresher-linear-algebra.md b/template/refresher-linear-algebra.md deleted file mode 100644 index a6b440d1e..000000000 --- a/template/refresher-linear-algebra.md +++ /dev/null @@ -1,339 +0,0 @@ -**1. Linear Algebra and Calculus refresher** - -⟶ - -
- -**2. General notations** - -⟶ - -
- -**3. Definitions** - -⟶ - -
- -**4. Vector ― We note x∈Rn a vector with n entries, where xi∈R is the ith entry:** - -⟶ - -
- -**5. Matrix ― We note A∈Rm×n a matrix with m rows and n columns, where Ai,j∈R is the entry located in the ith row and jth column:** - -⟶ - -
-
-**6. Remark: the vector x defined above can be viewed as an n×1 matrix and is more particularly called a column-vector.**
-
-⟶
-
-<br>
- -**7. Main matrices** - -⟶ - -
- -**8. Identity matrix ― The identity matrix I∈Rn×n is a square matrix with ones in its diagonal and zero everywhere else:** - -⟶ - -
- -**9. Remark: for all matrices A∈Rn×n, we have A×I=I×A=A.** - -⟶ - -
- -**10. Diagonal matrix ― A diagonal matrix D∈Rn×n is a square matrix with nonzero values in its diagonal and zero everywhere else:** - -⟶ - -
- -**11. Remark: we also note D as diag(d1,...,dn).** - -⟶ - -
- -**12. Matrix operations** - -⟶ - -
- -**13. Multiplication** - -⟶ - -
- -**14. Vector-vector ― There are two types of vector-vector products:** - -⟶ - -
- -**15. inner product: for x,y∈Rn, we have:** - -⟶ - -
- -**16. outer product: for x∈Rm,y∈Rn, we have:** - -⟶ - -
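Both products can be checked with NumPy on assumed toy vectors:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

inner = float(x @ y)    # x^T y, a scalar
outer = np.outer(x, y)  # x y^T, a 3x3 matrix with entry (i, j) = x_i * y_j
```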
-
-**17. Matrix-vector ― The product of matrix A∈Rm×n and vector x∈Rn is a vector of size Rm, such that:**
-
-⟶
-
-<br>
- -**18. where aTr,i are the vector rows and ac,j are the vector columns of A, and xi are the entries of x.** - -⟶ - -
-
-**19. Matrix-matrix ― The product of matrices A∈Rm×n and B∈Rn×p is a matrix of size Rm×p, such that:**
-
-⟶
-
-<br>
- -**20. where aTr,i,bTr,i are the vector rows and ac,j,bc,j are the vector columns of A and B respectively** - -⟶ - -
- -**21. Other operations** - -⟶ - -
- -**22. Transpose ― The transpose of a matrix A∈Rm×n, noted AT, is such that its entries are flipped:** - -⟶ - -
- -**23. Remark: for matrices A,B, we have (AB)T=BTAT** - -⟶ - -
- -**24. Inverse ― The inverse of an invertible square matrix A is noted A−1 and is the only matrix such that:** - -⟶ - -
- -**25. Remark: not all square matrices are invertible. Also, for matrices A,B, we have (AB)−1=B−1A−1** - -⟶ - -
- -**26. Trace ― The trace of a square matrix A, noted tr(A), is the sum of its diagonal entries:** - -⟶ - -
- -**27. Remark: for matrices A,B, we have tr(AT)=tr(A) and tr(AB)=tr(BA)** - -⟶ - -
- -**28. Determinant ― The determinant of a square matrix A∈Rn×n, noted |A| or det(A) is expressed recursively in terms of A∖i,∖j, which is the matrix A without its ith row and jth column, as follows:** - -⟶ - -
- -**29. Remark: A is invertible if and only if |A|≠0. Also, |AB|=|A||B| and |AT|=|A|.** - -⟶ - -
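The trace and determinant identities above can be verified numerically on assumed toy matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])

tr_AB, tr_BA = np.trace(A @ B), np.trace(B @ A)    # tr(AB) = tr(BA)
det_AB = np.linalg.det(A @ B)                      # |AB|
det_A_det_B = np.linalg.det(A) * np.linalg.det(B)  # |A||B|
```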
- -**30. Matrix properties** - -⟶ - -
- -**31. Definitions** - -⟶ - -
- -**32. Symmetric decomposition ― A given matrix A can be expressed in terms of its symmetric and antisymmetric parts as follows:** - -⟶ - -
- -**33. [Symmetric, Antisymmetric]** - -⟶ - -
- -**34. Norm ― A norm is a function N:V⟶[0,+∞[ where V is a vector space, and such that for all x,y∈V, we have:** - -⟶ - -
- -**35. N(ax)=|a|N(x) for a scalar** - -⟶ - -
- -**36. if N(x)=0, then x=0** - -⟶ - -
- -**37. For x∈V, the most commonly used norms are summed up in the table below:** - -⟶ - -
- -**38. [Norm, Notation, Definition, Use case]** - -⟶ - -
-
-**39. Linear dependence ― A set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others.**
-
-⟶
-
-<br>
- -**40. Remark: if no vector can be written this way, then the vectors are said to be linearly independent** - -⟶ - -
- -**41. Matrix rank ― The rank of a given matrix A is noted rank(A) and is the dimension of the vector space generated by its columns. This is equivalent to the maximum number of linearly independent columns of A.** - -⟶ - -
- -**42. Positive semi-definite matrix ― A matrix A∈Rn×n is positive semi-definite (PSD) and is noted A⪰0 if we have:** - -⟶ - -
- -**43. Remark: similarly, a matrix A is said to be positive definite, and is noted A≻0, if it is a PSD matrix which satisfies for all non-zero vector x, xTAx>0.** - -⟶ - -
- -**44. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** - -⟶ - -
- -**45. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** - -⟶ - -
- -**46. diagonal** - -⟶ - -
- -**47. Singular-value decomposition ― For a given matrix A of dimensions m×n, the singular-value decomposition (SVD) is a factorization technique that guarantees the existence of U m×m unitary, Σ m×n diagonal and V n×n unitary matrices, such that:** - -⟶ - -
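Both factorizations can be checked with NumPy on an assumed toy symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # symmetric, so the spectral theorem applies

lam, U = np.linalg.eigh(A)              # A = U diag(lam) U^T, U orthogonal
A_spec = U @ np.diag(lam) @ U.T

U2, S, Vt = np.linalg.svd(A)            # A = U2 diag(S) Vt
A_svd = U2 @ np.diag(S) @ Vt
```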
- -**48. Matrix calculus** - -⟶ - -
- -**49. Gradient ― Let f:Rm×n→R be a function and A∈Rm×n be a matrix. The gradient of f with respect to A is a m×n matrix, noted ∇Af(A), such that:** - -⟶ - -
- -**50. Remark: the gradient of f is only defined when f is a function that returns a scalar.** - -⟶ - -
- -**51. Hessian ― Let f:Rn→R be a function and x∈Rn be a vector. The hessian of f with respect to x is a n×n symmetric matrix, noted ∇2xf(x), such that:** - -⟶ - -
- -**52. Remark: the hessian of f is only defined when f is a function that returns a scalar** - -⟶ - -
- -**53. Gradient operations ― For matrices A,B,C, the following gradient properties are worth having in mind:** - -⟶ - -
- -**54. [General notations, Definitions, Main matrices]** - -⟶ - -
- -**55. [Matrix operations, Multiplication, Other operations]** - -⟶ - -
- -**56. [Matrix properties, Norm, Eigenvalue/Eigenvector, Singular-value decomposition]** - -⟶ - -
- -**57. [Matrix calculus, Gradient, Hessian, Operations]** - -⟶ diff --git a/template/refresher-probability.md b/template/refresher-probability.md deleted file mode 100644 index 5c9b34656..000000000 --- a/template/refresher-probability.md +++ /dev/null @@ -1,381 +0,0 @@ -**1. Probabilities and Statistics refresher** - -⟶ - -
- -**2. Introduction to Probability and Combinatorics** - -⟶ - -
- -**3. Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by S.** - -⟶ - -
- -**4. Event ― Any subset E of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.** - -⟶ - -
-
-**5. Axioms of probability ― For each event E, we denote P(E) as the probability of event E occurring.**
-
-⟶
-
-<br>
- -**6. Axiom 1 ― Every probability is between 0 and 1 included, i.e:** - -⟶ - -
- -**7. Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** - -⟶ - -
- -**8. Axiom 3 ― For any sequence of mutually exclusive events E1,...,En, we have:** - -⟶ - -
- -**9. Permutation ― A permutation is an arrangement of r objects from a pool of n objects, in a given order. The number of such arrangements is given by P(n,r), defined as:** - -⟶ - -
- -**10. Combination ― A combination is an arrangement of r objects from a pool of n objects, where the order does not matter. The number of such arrangements is given by C(n,r), defined as:** - -⟶ - -
- -**11. Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)** - -⟶ - -
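A quick numerical check of P(n,r) and C(n,r) using the standard library:

```python
from math import comb, perm

P = perm(5, 2)  # ordered arrangements: 5!/(5-2)! = 20
C = comb(5, 2)  # unordered selections: 5!/(2! 3!) = 10
```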
- -**12. Conditional Probability** - -⟶ - -
- -**13. Bayes' rule ― For events A and B such that P(B)>0, we have:** - -⟶ - -
- -**14. Remark: we have P(A∩B)=P(A)P(B|A)=P(A|B)P(B)** - -⟶ - -
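A worked numerical example of Bayes' rule, with hypothetical numbers chosen only for illustration:

```python
# Hypothetical screening-test numbers, chosen only for illustration.
p_a = 0.01        # P(A): prior probability of the condition
p_b_a = 0.95      # P(B|A): probability of a positive test given the condition
p_b_not_a = 0.05  # P(B|not A): false positive rate

p_b = p_b_a * p_a + p_b_not_a * (1 - p_a)  # P(B), via the partition {A, not A}
p_a_b = p_b_a * p_a / p_b                  # Bayes' rule: P(A|B) = P(B|A)P(A)/P(B)
```

Even with a sensitive test, the posterior P(A|B) stays far below P(B|A) because the prior is small.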
- -**15. Partition ― Let {Ai,i∈[[1,n]]} be such that for all i, Ai≠∅. We say that {Ai} is a partition if we have:** - -⟶ - -
- -**16. Remark: for any event B in the sample space, we have P(B)=n∑i=1P(B|Ai)P(Ai).** - -⟶ - -
- -**17. Extended form of Bayes' rule ― Let {Ai,i∈[[1,n]]} be a partition of the sample space. We have:** - -⟶ - -
- -**18. Independence ― Two events A and B are independent if and only if we have:** - -⟶ - -
- -**19. Random Variables** - -⟶ - -
- -**20. Definitions** - -⟶ - -
- -**21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.** - -⟶ - -
- -**22. Cumulative distribution function (CDF) ― The cumulative distribution function F, which is monotonically non-decreasing and is such that limx→−∞F(x)=0 and limx→+∞F(x)=1, is defined as:** - -⟶ - -
-
-**23. Remark: we have P(a<X⩽b)=F(b)−F(a)**
-
-⟶
-
-<br>
-
-**24. Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.**
-
-⟶
-
-<br>
- -**25. Relationships involving the PDF and CDF ― Here are the important properties to know in the discrete (D) and the continuous (C) cases.** - -⟶ - -
- -**26. [Case, CDF F, PDF f, Properties of PDF]** - -⟶ - -
- -**27. Expectation and Moments of the Distribution ― Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[Xk] and characteristic function ψ(ω) for the discrete and continuous cases:** - -⟶ - -
- -**28. Variance ― The variance of a random variable, often noted Var(X) or σ2, is a measure of the spread of its distribution function. It is determined as follows:** - -⟶ - -
- -**29. Standard deviation ― The standard deviation of a random variable, often noted σ, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:** - -⟶ - -
- -**30. Transformation of random variables ― Let the variables X and Y be linked by some function. By noting fX and fY the distribution function of X and Y respectively, we have:** - -⟶ - -
- -**31. Leibniz integral rule ― Let g be a function of x and potentially c, and a,b boundaries that may depend on c. We have:** - -⟶ - -
- -**32. Probability Distributions** - -⟶ - -
- -**33. Chebyshev's inequality ― Let X be a random variable with expected value μ. For k,σ>0, we have the following inequality:** - -⟶ - -
- -**34. Main distributions ― Here are the main distributions to have in mind:** - -⟶ - -
- -**35. [Type, Distribution]** - -⟶ - -
- -**36. Jointly Distributed Random Variables** - -⟶ - -
-
-**37. Marginal density and cumulative distribution ― From the joint probability density function fXY , we have**
-
-⟶
-
-<br>
- -**38. [Case, Marginal density, Cumulative function]** - -⟶ - -
- -**39. Conditional density ― The conditional density of X with respect to Y, often noted fX|Y, is defined as follows:** - -⟶ - -
- -**40. Independence ― Two random variables X and Y are said to be independent if we have:** - -⟶ - -
- -**41. Covariance ― We define the covariance of two random variables X and Y, that we note σ2XY or more commonly Cov(X,Y), as follows:** - -⟶ - -
- -**42. Correlation ― By noting σX,σY the standard deviations of X and Y, we define the correlation between the random variables X and Y, noted ρXY, as follows:** - -⟶ - -
- -**43. Remark 1: we note that for any random variables X,Y, we have ρXY∈[−1,1].** - -⟶ - -
- -**44. Remark 2: If X and Y are independent, then ρXY=0.** - -⟶ - -
- -**45. Parameter estimation** - -⟶ - -
- -**46. Definitions** - -⟶ - -
- -**47. Random sample ― A random sample is a collection of n random variables X1,...,Xn that are independent and identically distributed with X.** - -⟶ - -
- -**48. Estimator ― An estimator is a function of the data that is used to infer the value of an unknown parameter in a statistical model.** - -⟶ - -
- -**49. Bias ― The bias of an estimator ^θ is defined as being the difference between the expected value of the distribution of ^θ and the true value, i.e.:** - -⟶ - -
- -**50. Remark: an estimator is said to be unbiased when we have E[^θ]=θ.** - -⟶ - -
- -**51. Estimating the mean** - -⟶ - -
-
-**52. Sample mean ― The sample mean of a random sample is used to estimate the true mean μ of a distribution, is often noted ¯X and is defined as follows:**
-
-⟶
-
-<br>
-
-**53. Remark: the sample mean is unbiased, i.e E[¯X]=μ.**
-
-⟶
-
-<br>
- -**54. Central Limit Theorem ― Let us have a random sample X1,...,Xn following a given distribution with mean μ and variance σ2, then we have:** - -⟶ - -
- -**55. Estimating the variance** - -⟶ - -
- -**56. Sample variance ― The sample variance of a random sample is used to estimate the true variance σ2 of a distribution, is often noted s2 or ^σ2 and is defined as follows:** - -⟶ - -
- -**57. Remark: the sample variance is unbiased, i.e E[s2]=σ2.** - -⟶ - -
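A minimal numerical check on an assumed toy sample, showing that the unbiased estimate divides by n−1 (NumPy's ddof=1):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # assumed toy sample
s2 = ((x - x.mean()) ** 2).sum() / (len(x) - 1)  # divide by n-1, not n, for unbiasedness
```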
- -**58. Chi-Squared relation with sample variance ― Let s2 be the sample variance of a random sample. We have:** - -⟶ - -
- -**59. [Introduction, Sample space, Event, Permutation]** - -⟶ - -
- -**60. [Conditional probability, Bayes' rule, Independence]** - -⟶ - -
- -**61. [Random variables, Definitions, Expectation, Variance]** - -⟶ - -
- -**62. [Probability distributions, Chebyshev's inequality, Main distributions]** - -⟶ - -
- -**63. [Jointly distributed random variables, Density, Covariance, Correlation]** - -⟶ - -
- -**64. [Parameter estimation, Mean, Variance]** - -⟶ diff --git a/tr/cheatsheet-machine-learning-tips-and-tricks.md b/tr/cheatsheet-machine-learning-tips-and-tricks.md deleted file mode 100644 index 9712297b8..000000000 --- a/tr/cheatsheet-machine-learning-tips-and-tricks.md +++ /dev/null @@ -1,285 +0,0 @@ -**1. Machine Learning tips and tricks cheatsheet** - -⟶ - -
- -**2. Classification metrics** - -⟶ - -
- -**3. In a context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** - -⟶ - -
- -**4. Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:** - -⟶ - -
- -**5. [Predicted class, Actual class]** - -⟶ - -
- -**6. Main metrics ― The following metrics are commonly used to assess the performance of classification models:** - -⟶ - -
- -**7. [Metric, Formula, Interpretation]** - -⟶ - -
- -**8. Overall performance of model** - -⟶ - -
- -**9. How accurate the positive predictions are** - -⟶ - -
- -**10. Coverage of actual positive sample** - -⟶ - -
- -**11. Coverage of actual negative sample** - -⟶ - -
- -**12. Hybrid metric useful for unbalanced classes** - -⟶ - -
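The four metrics can be sketched in Python from assumed toy confusion-matrix counts:

```python
def classification_metrics(tp, fp, fn, tn):
    # Accuracy, precision, recall and F1 score from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# assumed toy confusion-matrix counts
acc, prec, rec, f1 = classification_metrics(tp=8, fp=2, fn=2, tn=88)
```

Note how accuracy (0.96) looks much better than precision/recall (0.8) here: the classes are unbalanced, which is why the hybrid F1 metric is preferred in that setting.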
-
-**13. ROC ― The receiver operating characteristic curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are summed up in the table below:**
-
-⟶
-
-<br>
- -**14. [Metric, Formula, Equivalent]** - -⟶ - -
-
-**15. AUC ― The area under the receiver operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:**
-
-⟶
-
-<br>
- -**16. [Actual, Predicted]** - -⟶ - -
- -**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** - -⟶ - -
- -**18. [Total sum of squares, Explained sum of squares, Residual sum of squares]** - -⟶ - -
- -**19. Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** - -⟶ - -
- -**20. Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** - -⟶ - -
- -**21. where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.** - -⟶ - -
- -**22. Model selection** - -⟶ - -
- -**23. Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** - -⟶ - -
- -**24. [Training set, Validation set, Testing set]** - -⟶ - -
- -**25. [Model is trained, Model is assessed, Model gives predictions]** - -⟶ - -
- -**26. [Usually 80% of the dataset, Usually 20% of the dataset]** - -⟶ - -
- -**27. [Also called hold-out or development set, Unseen data]** - -⟶ - -
- -**28. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** - -⟶ - -
- -**29. Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** - -⟶ - -
- -**30. [Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** - -⟶ - -
- -**31. [Generally k=5 or 10, Case p=1 is called leave-one-out]** - -⟶ - -
- -**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.** - -⟶ - -
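A minimal sketch of how the k folds can be formed (index bookkeeping only, model fitting omitted):

```python
def k_fold_indices(n, k):
    # Split indices 0..n-1 into k folds; each fold serves once as validation set.
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for i, val in enumerate(folds):
        train = sorted(j for f in folds[:i] + folds[i + 1:] for j in f)
        splits.append((train, sorted(val)))
    return splits

splits = k_fold_indices(n=10, k=5)
```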
-
-**33. Regularization ― The regularization procedure aims at preventing the model from overfitting the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:**
-
-⟶
-
-<br>
- -**34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** - -⟶ - -
- -**35. Diagnostics** - -⟶ - -
- -**36. Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.** - -⟶ - -
- -**37. Variance ― The variance of a model is the variability of the model prediction for given data points.** - -⟶ - -
- -**38. Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** - -⟶ - -
- -**39. [Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]** - -⟶ - -
- -**40. [High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]** - -⟶ - -
- -**41. [Complexify model, Add more features, Train longer, Perform regularization, Get more data]** - -⟶ - -
- -**42. Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** - -⟶ - -
- -**43. Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** - -⟶ - -
- -**44. Regression metrics** - -⟶ - -
- -**45. [Classification metrics, confusion matrix, accuracy, precision, recall, F1 score, ROC]** - -⟶ - -
- -**46. [Regression metrics, R squared, Mallow's CP, AIC, BIC]** - -⟶ - -
- -**47. [Model selection, cross-validation, regularization]** - -⟶ - -
- -**48. [Diagnostics, Bias/variance tradeoff, error/ablative analysis]** - -⟶ diff --git a/tr/cheatsheet-supervised-learning.md b/tr/cheatsheet-supervised-learning.md deleted file mode 100644 index a6b19ea1c..000000000 --- a/tr/cheatsheet-supervised-learning.md +++ /dev/null @@ -1,567 +0,0 @@ -**1. Supervised Learning cheatsheet** - -⟶ - -
- -**2. Introduction to Supervised Learning** - -⟶ - -
- -**3. Given a set of data points {x(1),...,x(m)} associated to a set of outcomes {y(1),...,y(m)}, we want to build a classifier that learns how to predict y from x.** - -⟶ - -
- -**4. Type of prediction ― The different types of predictive models are summed up in the table below:** - -⟶ - -
- -**5. [Regression, Classifier, Outcome, Examples]** - -⟶ - -
- -**6. [Continuous, Class, Linear regression, Logistic regression, SVM, Naive Bayes]** - -⟶ - -
- -**7. Type of model ― The different models are summed up in the table below:** - -⟶ - -
- -**8. [Discriminative model, Generative model, Goal, What's learned, Illustration, Examples]** - -⟶ - -
- -**9. [Directly estimate P(y|x), Estimate P(x|y) to then deduce P(y|x), Decision boundary, Probability distributions of the data, Regressions, SVMs, GDA, Naive Bayes]** - -⟶ - -
- -**10. Notations and general concepts** - -⟶ - -
- -**11. Hypothesis ― The hypothesis is noted hθ and is the model that we choose. For a given input data x(i) the model prediction output is hθ(x(i)).** - -⟶ - -
- -**12. Loss function ― A loss function is a function L:(z,y)∈R×Y⟼L(z,y)∈R that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:** - -⟶ - -
- -**13. [Least squared error, Logistic loss, Hinge loss, Cross-entropy]** - -⟶ - -
- -**14. [Linear regression, Logistic regression, SVM, Neural Network]** - -⟶ - -
- -**15. Cost function ― The cost function J is commonly used to assess the performance of a model, and is defined with the loss function L as follows:** - -⟶ - -
- -**16. Gradient descent ― By noting α∈R the learning rate, the update rule for gradient descent is expressed with the learning rate and the cost function J as follows:** - -⟶ - -
- -**17. Remark: Stochastic gradient descent (SGD) is updating the parameter based on each training example, and batch gradient descent is on a batch of training examples.** - -⟶ - -
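A minimal sketch of batch gradient descent on a least-squares cost, with an assumed toy dataset generated by θ=(1,2):

```python
import numpy as np

# Synthetic data generated by theta* = (1, 2); values assumed for illustration.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # design matrix with intercept column
y = np.array([1.0, 3.0, 5.0])

theta = np.zeros(2)
alpha = 0.1  # learning rate
for _ in range(2000):
    grad = X.T @ (X @ theta - y) / len(y)  # gradient of the least-squares cost J
    theta -= alpha * grad                  # batch update rule
```

Stochastic gradient descent would instead compute `grad` from a single example per update.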
- -**18. Likelihood ― The likelihood of a model L(θ) given parameters θ is used to find the optimal parameters θ through maximizing the likelihood. In practice, we use the log-likelihood ℓ(θ)=log(L(θ)) which is easier to optimize. We have:** - -⟶ - -
- -**19. Newton's algorithm ― The Newton's algorithm is a numerical method that finds θ such that ℓ′(θ)=0. Its update rule is as follows:** - -⟶ - -
- -**20. Remark: the multidimensional generalization, also known as the Newton-Raphson method, has the following update rule:** - -⟶ - -
- -**21. Linear models** - -⟶ - -
- -**22. Linear regression** - -⟶ - -
- -**23. We assume here that y|x;θ∼N(μ,σ2)** - -⟶ - -
-
-**24. Normal equations ― By noting X the design matrix, the value of θ that minimizes the cost function is a closed-form solution such that:**
-
-⟶
-
-<br>
- -**25. LMS algorithm ― By noting α the learning rate, the update rule of the Least Mean Squares (LMS) algorithm for a training set of m data points, which is also known as the Widrow-Hoff learning rule, is as follows:** - -⟶ - -
- -**26. Remark: the update rule is a particular case of the gradient ascent.** - -⟶ - -
- -**27. LWR ― Locally Weighted Regression, also known as LWR, is a variant of linear regression that weights each training example in its cost function by w(i)(x), which is defined with parameter τ∈R as:** - -⟶ - -
- -**28. Classification and logistic regression** - -⟶ - -
- -**29. Sigmoid function ― The sigmoid function g, also known as the logistic function, is defined as follows:** - -⟶ - -
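A minimal sketch of the sigmoid, which maps R into (0,1):

```python
import math

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))
```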
- -**30. Logistic regression ― We assume here that y|x;θ∼Bernoulli(ϕ). We have the following form:** - -⟶ - -
- -**31. Remark: there is no closed form solution for the case of logistic regressions.** - -⟶ - -
- -**32. Softmax regression ― A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. By convention, we set θK=0, which makes the Bernoulli parameter ϕi of each class i equal to:** - -⟶ - -
- -**33. Generalized Linear Models** - -⟶ - -
- -**34. Exponential family ― A class of distributions is said to be in the exponential family if it can be written in terms of a natural parameter, also called the canonical parameter or link function, η, a sufficient statistic T(y) and a log-partition function a(η) as follows:** - -⟶ - -
- -**35. Remark: we will often have T(y)=y. Also, exp(−a(η)) can be seen as a normalization parameter that will make sure that the probabilities sum to one.** - -⟶ - -
- -**36. Here are the most common exponential distributions summed up in the following table:** - -⟶ - -
- -**37. [Distribution, Bernoulli, Gaussian, Poisson, Geometric]** - -⟶ - -
-
-**38. Assumptions of GLMs ― Generalized Linear Models (GLM) aim at predicting a random variable y as a function of x∈Rn+1 and rely on the following 3 assumptions:**
-
-⟶
-
-<br>
- -**39. Remark: ordinary least squares and logistic regression are special cases of generalized linear models.** - -⟶ - -
- -**40. Support Vector Machines** - -⟶ - -
- -**41: The goal of support vector machines is to find the line that maximizes the minimum distance to the line.** - -⟶ - -
- -**42: Optimal margin classifier ― The optimal margin classifier h is such that:** - -⟶ - -
- -**43: where (w,b)∈Rn×R is the solution of the following optimization problem:** - -⟶ - -
- -**44. such that** - -⟶ - -
- -**45. support vectors** - -⟶ - -
- -**46. Remark: the line is defined as wTx−b=0.** - -⟶ - -
- -**47. Hinge loss ― The hinge loss is used in the setting of SVMs and is defined as follows:** - -⟶ - -
- -**48. Kernel ― Given a feature mapping ϕ, we define the kernel K to be defined as:** - -⟶ - -
-
-**49. In practice, the kernel K defined by K(x,z)=exp(−||x−z||2/(2σ2)) is called the Gaussian kernel and is commonly used.**
-
-⟶
-
-<br>
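A minimal sketch of the Gaussian kernel on assumed toy points:

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    # K(x, z) = exp(-||x - z||^2 / (2 sigma^2))
    return float(np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2)))

k_same = gaussian_kernel(np.zeros(3), np.zeros(3))      # identical points: K = 1
k_far = gaussian_kernel(np.zeros(3), np.full(3, 10.0))  # distant points: K ~ 0
```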
- -**50. [Non-linear separability, Use of a kernel mapping, Decision boundary in the original space]** - -⟶ - -
- -**51. Remark: we say that we use the "kernel trick" to compute the cost function using the kernel because we actually don't need to know the explicit mapping ϕ, which is often very complicated. Instead, only the values K(x,z) are needed.** - -⟶ - -
- -**52. Lagrangian ― We define the Lagrangian L(w,b) as follows:** - -⟶ - -
- -**53. Remark: the coefficients βi are called the Lagrange multipliers.** - -⟶ - -
- -**54. Generative Learning** - -⟶ - -
- -**55. A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.** - -⟶ - -
- -**56. Gaussian Discriminant Analysis** - -⟶ - -
- -**57. Setting ― The Gaussian Discriminant Analysis assumes that y and x|y=0 and x|y=1 are such that:** - -⟶ - -
- -**58. Estimation ― The following table sums up the estimates that we find when maximizing the likelihood:** - -⟶ - -
- -**59. Naive Bayes** - -⟶ - -
- -**60. Assumption ― The Naive Bayes model supposes that the features of each data point are all independent:** - -⟶ - -
- -**61. Solutions ― Maximizing the log-likelihood gives the following solutions, with k∈{0,1},l∈[[1,L]]** - -⟶ - -
- -**62. Remark: Naive Bayes is widely used for text classification and spam detection.** - -⟶ - -
- -**63. Tree-based and ensemble methods** - -⟶ - -
- -**64. These methods can be used for both regression and classification problems.** - -⟶ - -
- -**65. CART ― Classification and Regression Trees (CART), commonly known as decision trees, can be represented as binary trees. They have the advantage to be very interpretable.** - -⟶ - -
- -**66. Random forest ― It is a tree-based technique that uses a high number of decision trees built out of randomly selected sets of features. Contrary to the simple decision tree, it is highly uninterpretable but its generally good performance makes it a popular algorithm.** - -⟶ - -
- -**67. Remark: random forests are a type of ensemble methods.** - -⟶ - -
- -**68. Boosting ― The idea of boosting methods is to combine several weak learners to form a stronger one. The main ones are summed up in the table below:** - -⟶ - -
- -**69. [Adaptive boosting, Gradient boosting]** - -⟶ - -
- -**70. High weights are put on errors to improve at the next boosting step** - -⟶ - -
- -**71. Weak learners trained on remaining errors** - -⟶ - -
- -**72. Other non-parametric approaches** - -⟶ - -
- -**73. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** - -⟶ - -
- -**74. Remark: The higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.** - -⟶ - -
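A minimal 1-D sketch of k-NN classification on an assumed toy training set:

```python
from collections import Counter

def knn_predict(train, x, k):
    # Majority label among the k training points closest to x (1-D toy version).
    neighbors = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# assumed toy 1-D training set of (feature, label) pairs
train = [(0.0, "a"), (0.5, "a"), (3.0, "b"), (3.5, "b"), (4.0, "b")]
pred = knn_predict(train, x=0.2, k=3)  # neighbors "a", "a", "b" -> "a"
```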
- -**75. Learning Theory** - -⟶ - -
- -**76. Union bound ― Let A1,...,Ak be k events. We have:** - -⟶ - -
- -**77. Hoeffding inequality ― Let Z1,..,Zm be m iid variables drawn from a Bernoulli distribution of parameter ϕ. Let ˆϕ be their sample mean and γ>0 fixed. We have:** - -⟶ - -
- -**78. Remark: this inequality is also known as the Chernoff bound.** - -⟶ - -
- -**79. Training error ― For a given classifier h, we define the training error ˆϵ(h), also known as the empirical risk or empirical error, to be as follows:** - -⟶ - -
- -**80. Probably Approximately Correct (PAC) ― PAC is a framework under which numerous results on learning theory were proved, and has the following set of assumptions: ** - -⟶ - -
- -**81: the training and testing sets follow the same distribution ** - -⟶ - -
- -**82. the training examples are drawn independently** - -⟶ - -
- -**83. Shattering ― Given a set S={x(1),...,x(d)}, and a set of classifiers H, we say that H shatters S if for any set of labels {y(1),...,y(d)}, we have:** - -⟶ - -
- -**84. Upper bound theorem ― Let H be a finite hypothesis class such that |H|=k and let δ and the sample size m be fixed. Then, with probability of at least 1−δ, we have:** - -⟶ - -
- -**85. VC dimension ― The Vapnik-Chervonenkis (VC) dimension of a given infinite hypothesis class H, noted VC(H) is the size of the largest set that is shattered by H.** - -⟶ - -
- -**86. Remark: the VC dimension of H={set of linear classifiers in 2 dimensions} is 3.** - -⟶ - -
- -**87. Theorem (Vapnik) ― Let H be given, with VC(H)=d and m the number of training examples. With probability at least 1−δ, we have:** - -⟶ - -
- -**88. [Introduction, Type of prediction, Type of model]** - -⟶ - -
- -**89. [Notations and general concepts, loss function, gradient descent, likelihood]** - -⟶ - -
- -**90. [Linear models, linear regression, logistic regression, generalized linear models]** - -⟶ - -
- -**91. [Support vector machines, Optimal margin classifier, Hinge loss, Kernel]** - -⟶ - -
- -**92. [Generative learning, Gaussian Discriminant Analysis, Naive Bayes]** - -⟶ - -
- -**93. [Trees and ensemble methods, CART, Random forest, Boosting]** - -⟶ - -
- -**94. [Other methods, k-NN]** - -⟶ - -
- -**95. [Learning theory, Hoeffding inequality, PAC, VC dimension]** - -⟶ diff --git a/tr/cheatsheet-unsupervised-learning.md b/tr/cheatsheet-unsupervised-learning.md deleted file mode 100644 index 5eae29ed8..000000000 --- a/tr/cheatsheet-unsupervised-learning.md +++ /dev/null @@ -1,340 +0,0 @@ -**1. Unsupervised Learning cheatsheet** - -⟶ - -
- -**2. Introduction to Unsupervised Learning** - -⟶ - -
- -**3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** - -⟶ - -
- -**4. Jensen's inequality ― Let f be a convex function and X a random variable. We have the following inequality:** - -⟶ - -
- -**5. Clustering** - -⟶ - -
- -**6. Expectation-Maximization** - -⟶ - -
- -**7. Latent variables ― Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Here are the most common settings where there are latent variables:** - -⟶ - -
- -**8. [Setting, Latent variable z, Comments]** - -⟶ - -
- -**9. [Mixture of k Gaussians, Factor analysis]** - -⟶ - -
- -**10. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:** - -⟶ - -
- -**11. E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:** - -⟶ - -
- -**12. M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:** - -⟶ - -
- -**13. [Gaussians initialization, Expectation step, Maximization step, Convergence]** - -⟶ - -
- -**14. k-means clustering** - -⟶ - -
- -**15. We note c(i) the cluster of data point i and μj the center of cluster j.** - -⟶ - -
- -**16. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** - -⟶ - -
- -**17. [Means initialization, Cluster assignment, Means update, Convergence]** - -⟶ - -
- -**18. Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:** - -⟶ - -
- -**19. Hierarchical clustering** - -⟶ - -
- -**20. Algorithm ― It is a clustering algorithm with an agglomerative hierarchical approach that build nested clusters in a successive manner.** - -⟶ - -
- -**21. Types ― There are different sorts of hierarchical clustering algorithms that aims at optimizing different objective functions, which is summed up in the table below:** - -⟶ - -
- -**22. [Ward linkage, Average linkage, Complete linkage]** - -⟶ - -
- -**23. [Minimize within cluster distance, Minimize average distance between cluster pairs, Minimize maximum distance of between cluster pairs]** - -⟶ - -
- -**24. Clustering assessment metrics** - -⟶ - -
- -**25. In an unsupervised learning setting, it is often hard to assess the performance of a model since we don't have the ground truth labels as was the case in the supervised learning setting.** - -⟶ - -
- -**26. Silhouette coefficient ― By noting a and b the mean distance between a sample and all other points in the same class, and between a sample and all other points in the next nearest cluster, the silhouette coefficient s for a single sample is defined as follows:** - -⟶ - -
- -**27. Calinski-Harabaz index ― By noting k the number of clusters, Bk and Wk the between and within-clustering dispersion matrices respectively defined as** - -⟶ - -
- -**28. the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:** - -⟶ - -
- -**29. Dimension reduction** - -⟶ - -
- -**30. Principal component analysis** - -⟶ - -
- -**31. It is a dimension reduction technique that finds the variance maximizing directions onto which to project the data.** - -⟶ - -
- -**32. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** - -⟶ - -
- -**33. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** - -⟶ - -
- -**34. diagonal** - -⟶ - -
- -**35. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** - -⟶ - -
- -**36. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k -dimensions by maximizing the variance of the data as follows:** - -⟶ - -
- -**37. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** - -⟶ - -
- -**38. Step 2: Compute Σ=1mm∑i=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.** - -⟶ - -
- -**39. Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.** - -⟶ - -
- -**40. Step 4: Project the data on spanR(u1,...,uk).** - -⟶ - -
- -**41. This procedure maximizes the variance among all k-dimensional spaces.** - -⟶ - -
- -**42. [Data in feature space, Find principal components, Data in principal components space]** - -⟶ - -
- -**43. Independent component analysis** - -⟶ - -
- -**44. It is a technique meant to find the underlying generating sources.** - -⟶ - -
- -**45. Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:** - -⟶ - -
- -**46. The goal is to find the unmixing matrix W=A−1.** - -⟶ - -
- -**47. Bell and Sejnowski ICA algorithm ― This algorithm finds the unmixing matrix W by following the steps below:** - -⟶ - -
- -**48. Write the probability of x=As=W−1s as:** - -⟶ - -
- -**49. Write the log likelihood given our training data {x(i),i∈[[1,m]]} and by noting g the sigmoid function as:** - -⟶ - -
- -**50. Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:** - -⟶ - -
- -**51. The Machine Learning cheatsheets are now available in Turkish.** - -⟶ - -
- -**52. Original authors** - -⟶ - -
- -**53. Translated by X, Y and Z** - -⟶ - -
- -**54. Reviewed by X, Y and Z** - -⟶ - -
- -**55. [Introduction, Motivation, Jensen's inequality]** - -⟶ - -
- -**56. [Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]** - -⟶ - -
- -**57. [Dimension reduction, PCA, ICA]** - -⟶ diff --git a/tr/cs-221-logic-models.md b/tr/cs-221-logic-models.md new file mode 100644 index 000000000..23476dd86 --- /dev/null +++ b/tr/cs-221-logic-models.md @@ -0,0 +1,462 @@ +**Logic-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-logic-models) + +
+ +**1. Logic-based models with propositional and first-order logic** + +⟶ Önermeli ve birinci dereceden mantık (Lojik) temelli modeller + +
+ + +**2. Basics** + +⟶ Temeller + +
+ + +**3. Syntax of propositional logic ― By noting f,g formulas, and ¬,∧,∨,→,↔ connectives, we can write the following logical expressions:** + +⟶ Önerme mantığının sözdizimi ― f, g formülleri ve ¬,∧,∨,→,↔ bağlayıcılarını belirterek, aşağıdaki mantıksal ifadeleri yazabiliriz: + +
+ + +**4. [Name, Symbol, Meaning, Illustration]** + +⟶ [Ad, Sembol, Anlamı, Gösterim] + +


**5. [Affirmation, Negation, Conjunction, Disjunction, Implication, Biconditional]**

⟶ [Doğrulama, Değilleme, Kesişim, Birleşim, Gerektirme, Çift koşullu]

<br>


**6. [not f, f and g, f or g, if f then g, f, that is to say g]**

⟶ [f değil, f ve g, f veya g, eğer f ise g, f, yani g]

<br>


**7. Remark: formulas can be built up recursively out of these connectives.**

⟶ Not: Formüller bu bağlaçlardan özyinelemeli (recursive) olarak oluşturulabilir.

<br>


**8. Model ― A model w denotes an assignment of binary weights to propositional symbols.**

⟶ Model ― Bir w modeli, önerme sembollerine ikili (0/1) değerlerin atanmasını ifade eder.

<br>
+ + +**9. Example: the set of truth values w={A:0,B:1,C:0} is one possible model to the propositional symbols A, B and C.** + +⟶ Örnek: w = {A: 0, B: 1, C: 0} doğruluk değerleri kümesi, A, B ve C önermeli semboller için olası bir modeldir. + +
+ + +**10. Interpretation function ― The interpretation function I(f,w) outputs whether model w satisfies formula f:** + +⟶ Yorumlama fonksiyonu ― Yorumlama fonksiyonu I(f,w), w modelinin f formülüne uygun olup olmadığını gösterir: + +
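
The interpretation function above can be sketched as a tiny recursive evaluator. This is a minimal illustration, assuming formulas are encoded as nested tuples (an encoding chosen for this example, not part of the cheatsheet):

```python
# Minimal sketch of the interpretation function I(f, w): a model w maps
# propositional symbols to 0/1, and formulas are nested tuples such as
# ("implies", "A", ("not", "B")).
def interpret(f, w):
    if isinstance(f, str):                 # propositional symbol
        return w[f]
    op = f[0]
    if op == "not":
        return 1 - interpret(f[1], w)
    a, b = interpret(f[1], w), interpret(f[2], w)
    return {"and": a & b,
            "or": a | b,
            "implies": (1 - a) | b,
            "iff": int(a == b)}[op]

w = {"A": 0, "B": 1, "C": 0}               # the model from the example above
print(interpret(("implies", "A", "B"), w)) # 1: A -> B holds in this model
```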
+ + +**11. Set of models ― M(f) denotes the set of models w that satisfy formula f. Mathematically speaking, we define it as follows:** + +⟶ Modellerin seti ― M(f), f formülünü sağlayan model setini belirtir. Matematiksel konuşursak, şöyle tanımlarız: + +
+ + +**12. Knowledge base** + +⟶ Bilgi temelli + +
+ + +**13. Definition ― The knowledge base KB is the conjunction of all formulas that have been considered so far. The set of models of the knowledge base is the intersection of the set of models that satisfy each formula. In other words:** + +⟶ Tanım ― Bilgi temeli (KB-Knowledgde Base), şu ana kadar düşünülen tüm formüllerin birleşimidir. Bilgi temelinin model kümesi, her formülü karşılayan model dizisinin kesişimidir. Diğer bir deyişle: + +
+ + +**14. Probabilistic interpretation ― The probability that query f is evaluated to 1 can be seen as the proportion of models w of the knowledge base KB that satisfy f, i.e.:** + +⟶ Olasılıksal yorumlama ― f sorgusunun 1 olarak değerlendirilmesi olasılığı, f'yi sağlayan bilgi temeli KB'nin w modellerinin oranı olarak görülebilir, yani: + +
+ + +**15. Satisfiability ― The knowledge base KB is said to be satisfiable if at least one model w satisfies all its constraints. In other words:** + +⟶ Gerçeklenebilirlik ― En az bir modelin tüm kısıtlamaları yerine getirmesi durumunda KB'nin bilgi temelinin gerçeklenebilir olduğu söylenir. Diğer bir deyişle: + +
+ + +**16. satisfiable** + +⟶ Karşılanabilirlik + +
+ + +**17. Remark: M(KB) denotes the set of models compatible with all the constraints of the knowledge base.** + +⟶ Not: M(KB), bilgi temelinin tüm kısıtları ile uyumlu model kümesini belirtir. + +
+ + +**18. Relation between formulas and knowledge base - We define the following properties between the knowledge base KB and a new formula f:** + +⟶ Formüller ve bilgi temeli arasındaki ilişki - Bilgi temeli KB ile yeni bir formül f arasında aşağıdaki özellikleri tanımlarız: + +
+ + +**19. [Name, Mathematical formulation, Illustration, Notes]** + +⟶ [Adı, Matematiksel formülü, Gösterim, Notlar] + +


**20. [KB entails f, KB contradicts f, f contingent to KB]**

⟶ [KB f'yi gerektirir, KB f ile çelişir, f KB'ye göre olumsaldır]

<br>


**21. [f does not bring any new information, Also written KB⊨f, No model satisfies the constraints after adding f, Equivalent to KB⊨¬f, f does not contradict KB, f adds a non-trivial amount of information to KB]**

⟶ [f yeni bir bilgi getirmez, KB⊨f olarak da yazılır, f eklendikten sonra hiçbir model kısıtlamaları sağlamaz, KB⊨¬f'ye eşdeğerdir, f KB ile çelişmez, f KB'ye önemsiz olmayan miktarda bilgi ekler]

<br>
+ + +**22. Model checking ― A model checking algorithm takes as input a knowledge base KB and outputs whether it is satisfiable or not.** + +⟶ Model denetimi - Bir model denetimi algoritması, KB'nin bilgi temelini girdi olarak alır ve bunun gerçeklenebilir/karşılanabilir olup olmadığını çıkarır. + +
+ + +**23. Remark: popular model checking algorithms include DPLL and WalkSat.** + +⟶ Not: popüler model kontrol algoritmaları DPLL ve WalkSat'ı içerir. + +
+ + +**24. Inference rule ― An inference rule of premises f1,...,fk and conclusion g is written:** + +⟶ Çıkarım kuralı - f1, ..., fk ve sonuç g yapısının çıkarım kuralı şöyle yazılmıştır: + +
+ + +**25. Forward inference algorithm ― From a set of inference rules Rules, this algorithm goes through all possible f1,...,fk and adds g to the knowledge base KB if a matching rule exists. This process is repeated until no more additions can be made to KB.** + +⟶ İleri çıkarım algoritması - Çıkarım kurallarından Kurallar, bu algoritma mümkün olan tüm f1, ..., fk'den geçer ve eşleşen bir kural varsa, KB bilgi tabanına g ekler. Bu işlem KB'ye daha fazla ekleme yapılamayana kadar tekrar edilir. + +
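
The forward inference loop described above can be sketched as follows; the rule encoding (a set of premises paired with a conclusion) is an assumption made for this example:

```python
# Hedged sketch of forward inference: repeatedly fire rules whose
# premises are all in the knowledge base, until nothing new is added.
def forward_inference(kb, rules):
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= kb and conclusion not in kb:
                kb.add(conclusion)       # a matching rule adds g to KB
                changed = True
    return kb

rules = [(frozenset({"rain"}), "wet"),
         (frozenset({"wet"}), "slippery")]
print(forward_inference({"rain"}, rules))  # derives 'wet', then 'slippery'
```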
+ + +**26. Derivation ― We say that KB derives f (written KB⊢f) with rules Rules if f already is in KB or gets added during the forward inference algorithm using the set of rules Rules.** + +⟶ Türetme - f'nin KB içerisindeyse veya kurallar kurallarını kullanarak ileri çıkarım algoritması sırasında eklenmişse, KB'nin kurallar ile f (KB⊢f yazılır) türettiğini söylüyoruz. + +
+ + +**27. Properties of inference rules ― A set of inference rules Rules can have the following properties:** + +⟶ Çıkarım kurallarının özellikleri - Çıkarım kurallarının kümesi Kurallar aşağıdaki özelliklere sahip olabilir: + +
+ + +**28. [Name, Mathematical formulation, Notes]** + +⟶ [Adı, Matematiksel formülü, Notlar] + +
+ + +**29. [Soundness, Completeness]** + +⟶ [Sağlamlık, Tamlık] + +


**30. [Inferred formulas are entailed by KB, Can be checked one rule at a time, "Nothing but the truth", Formulas entailing KB are either already in the knowledge base or inferred from it, "The whole truth"]**

⟶ [Çıkarılan formüller KB tarafından gerektirilir, Her seferinde bir kural kontrol edilebilir, "Gerçekten başka bir şey değil", KB'yi gerektiren formüller ya zaten bilgi tabanındadır ya da ondan çıkarılır, "Tüm gerçek"]

<br>
+ + +**31. Propositional logic** + +⟶ Önerme mantığı + +
+ + +**32. In this section, we will go through logic-based models that use logical formulas and inference rules. The idea here is to balance expressivity and computational efficiency.** + +⟶ Bu bölümde, mantıksal formülleri ve çıkarım kurallarını kullanan mantık tabanlı modelleri inceleyeceğiz. Buradaki fikir ifade ve hesaplamanın verimliliğini dengelemektir. + +
+ + +**33. Horn clause ― By noting p1,...,pk and q propositional symbols, a Horn clause has the form:** + +⟶ Horn cümlesi ― p1, ..., pk ve q önerme sembollerini not ederek, bir Horn cümlesi şu şekildedir (Matematiksel mantık ve mantık programlamada, kural gibi özel bir biçime sahip mantıksal formüllere Horn cümlesi denir.): + +
+ + +**34. Remark: when q=false, it is called a "goal clause", otherwise we denote it as a "definite clause".** + +⟶ Not: q = false olduğunda, "hedeflenen bir cümle" olarak adlandırılır, aksi takdirde "kesin bir cümle" olarak adlandırırız + +
+ + +**35. Modus ponens ― For propositional symbols f1,...,fk and p, the modus ponens rule is written:** + +⟶ Modus ponens - f1, ..., fk ve p önermeli semboller için modus ponens kuralı yazılır (Modus ponens: Önerme mantığında, modus ponens bir çıkarım kuralıdır. "P, Q anlamına gelir ve P'nin doğru olduğu iddia edilir, bu yüzden Q doğru olmalı" şeklinde özetlenebilir. Modus ponens, başka bir geçerli argüman biçimi olan modus tollens ile yakından ilgilidir.): + +
+ + +**36. Remark: it takes linear time to apply this rule, as each application generate a clause that contains a single propositional symbol.** + +⟶ Not: Her uygulama tek bir önermeli sembol içeren bir cümle oluşturduğundan, bu kuralın uygulanması doğrusal bir zaman alır. + +


**37. Completeness ― Modus ponens is complete with respect to Horn clauses if we suppose that KB contains only Horn clauses and p is an entailed propositional symbol. Applying modus ponens will then derive p.**

⟶ Tamlık ― KB'nin sadece Horn cümleleri içerdiğini ve p'nin gerektirilen (entailed) bir önerme sembolü olduğunu varsayarsak, modus ponens Horn cümlelerine göre tamdır. Bu durumda modus ponens uygulamak p'yi türetir.

<br>


**38. Conjunctive normal form ― A conjunctive normal form (CNF) formula is a conjunction of clauses, where each clause is a disjunction of atomic formulas.**

⟶ Konjonktif (birleşik) normal form ― Bir konjonktif normal form (CNF) formülü, her biri atomik formüllerin bir ayrık birleşimi (disjunction) olan cümlelerin birleşimidir (conjunction).

<br>


**39. Remark: in other words, CNFs are ∧ of ∨.**

⟶ Not: başka bir deyişle, CNF'ler ∨'lerin ∧'idir.

<br>
+ + +**40. Equivalent representation ― Every formula in propositional logic can be written into an equivalent CNF formula. The table below presents general conversion properties:** + +⟶ Eşdeğer temsil - Önerme mantığındaki her formül eşdeğer bir CNF formülüne yazılabilir. Aşağıdaki tabloda genel dönüşüm özellikleri gösterilmektedir: + +
+ + +**41. [Rule name, Initial, Converted, Eliminate, Distribute, over]** + +⟶ [Kural adı, Başlangıç, Dönüştürülmüş, Eleme, Dağıtma, üzerine] + +
+ + +**42. Resolution rule ― For propositional symbols f1,...,fn, and g1,...,gm as well as p, the resolution rule is written:** + +⟶ Çözünürlük kuralı - f1, ..., fn ve g1, ..., gm önerme sembolleri için, p, çözümleme kuralı yazılır: + +
+ + +**43. Remark: it can take exponential time to apply this rule, as each application generates a clause that has a subset of the propositional symbols.** + +⟶ Not: Her uygulama, teklif sembollerinin alt kümesine sahip bir cümle oluşturduğundan, bu kuralı uygulamak için üssel olarak zaman alabilir. + +


**44. [Resolution-based inference ― The resolution-based inference algorithm follows the following steps:, Step 1: Convert all formulas into CNF, Step 2: Repeatedly apply resolution rule, Step 3: Return unsatisfiable if and only if False, is derived]**

⟶ [Çözünürlük tabanlı çıkarım ― Çözünürlük tabanlı çıkarım algoritması aşağıdaki adımları izler:, Adım 1: Tüm formülleri CNF'ye dönüştürün, Adım 2: Çözünürlük kuralını tekrar tekrar uygulayın, Adım 3: Ancak ve ancak False türetilirse karşılanamaz (unsatisfiable) sonucunu döndürün]

<br>
+ + +**45. First-order logic** + +⟶ Birinci dereceden mantık + +
+ + +**46. The idea here is to use variables to yield more compact knowledge representations.** + +⟶ Buradaki fikir, daha kompakt bilgi sunumları sağlamak için değişkenleri kullanmaktır. + +


**47. [Model ― A model w in first-order logic maps:, constant symbols to objects, predicate symbols to tuple of objects]**

⟶ [Model ― Birinci dereceden mantıkta bir w modeli şunları eşler:, sabit sembolleri nesnelere, yüklem sembollerini nesne demetlerine]

<br>


**48. Horn clause ― By noting x1,...,xn variables and a1,...,ak,b atomic formulas, the first-order logic version of a horn clause has the form:**

⟶ Horn cümlesi ― x1,...,xn değişkenlerini ve a1,...,ak,b atomik formüllerini not ederek, bir Horn cümlesinin birinci dereceden mantık versiyonu şu şekildedir:

<br>


**49. Substitution ― A substitution θ maps variables to terms and Subst[θ,f] denotes the result of substitution θ on f.**

⟶ Yer değiştirme ― Bir yer değiştirme θ, değişkenleri terimlerle eşler ve Subst[θ,f], θ yer değiştirmesinin f üzerine uygulanmasının sonucunu belirtir.

<br>
+ + +**50. Unification ― Unification takes two formulas f and g and returns the most general substitution θ that makes them equal:** + +⟶ Birleştirme ― Birleştirme f ve g'nin iki formülünü alır ve onları eşit yapan en genel ikameyi θ verir: + +
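
Unification can be illustrated with a small sketch. Terms are nested tuples and variables are strings starting with `?` (both conventions are assumptions of this example); the occurs check is omitted, so this is the idea rather than the full algorithm:

```python
# Illustrative unification sketch: returns the most general substitution
# theta making f and g equal, or None on failure.
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, theta):
    while is_var(t) and t in theta:    # follow variable bindings
        t = theta[t]
    return t

def unify(f, g, theta=None):
    theta = dict(theta or {})
    f, g = walk(f, theta), walk(g, theta)
    if f == g:
        return theta
    if is_var(f):
        return {**theta, f: g}
    if is_var(g):
        return {**theta, g: f}
    if isinstance(f, tuple) and isinstance(g, tuple) and len(f) == len(g):
        for a, b in zip(f, g):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None                        # Fail: no substitution works

print(unify(("Knows", "?x", ("Mother", "?x")),
            ("Knows", "John", ("Mother", "?y"))))
```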
+ + +**51. such that** + +⟶ öyle ki + +
+ + +**52. Note: Unify[f,g] returns Fail if no such θ exists.** + +⟶ Not: Unify[f,g], eğer böyle bir θ yoksa Fail döndürür. + +
+ + +**53. Modus ponens ― By noting x1,...,xn variables, a1,...,ak and a′1,...,a′k atomic formulas and by calling θ=Unify(a′1∧...∧a′k,a1∧...∧ak) the first-order logic version of modus ponens can be written:** + +⟶ Modus ponens ― x1, ..., xn değişkenleri, a1, ..., ak ve a′1, ..., a′k atomik formüllerine dikkat ederek ve θ=Unify(a′1∧...∧a′k,a1∧...∧ak) modus ponenlerin birinci dereceden mantık versiyonu yazılabilir: + +
+ + +**54. Completeness ― Modus ponens is complete for first-order logic with only Horn clauses.** + +⟶ Tamlık - Modus ponens sadece Horn cümleleriyle birinci dereceden mantık için tamamlanmıştır. + +
+ + +**55. Resolution rule ― By noting f1,...,fn, g1,...,gm, p, q formulas and by calling θ=Unify(p,q), the first-order logic version of the resolution rule can be written:** + +⟶ Çözünürlük kuralı ― f1,...,fn,g1,...,gm, p, q formüllerini not ederek ve θ=Unify(p,q) ifadesini kullanarak, çözümleme kuralının birinci dereceden mantık sürümü yazılabilir. : + +


**56. [Semi-decidability ― First-order logic, even restricted to only Horn clauses, is semi-decidable., if KB⊨f, forward inference on complete inference rules will prove f in finite time, if KB⊭f, no algorithm can show this in finite time]**

⟶ [Yarı-karar verilebilirlik ― Birinci dereceden mantık, sadece Horn cümleleriyle sınırlı olsa bile, yarı karar verilebilirdir., KB⊨f ise, tam çıkarım kuralları üzerinde ileri çıkarım f'yi sonlu zamanda kanıtlar, KB⊭f ise, hiçbir algoritma bunu sonlu zamanda gösteremez]

<br>
+ + +**57. [Basics, Notations, Model, Interpretation function, Set of models]** + +⟶ [Temeller, Notasyon, Model, Yorumlama fonksiyonu, Modellerin kümesi] + +
+ + +**58. [Knowledge base, Definition, Probabilistic interpretation, Satisfiability, Relationship with formulas, Forward inference, Rule properties]** + +⟶ [Bilgi temeli, Tanım, Olasılıksal yorumlama, Gerçeklenebilirlik, Formüllerle İlişki, İleri çıkarım, Kural özellikleri] + +
+ + +**59. [Propositional logic, Clauses, Modus ponens, Conjunctive normal form, Representation equivalence, Resolution]** + +⟶ [Önerme mantığı, Cümleler, Modus ponens, Eşlenik (Conjunctive) normal form, Temsil eşdeğeri, Çözüm] + +
+ + +**60. [First-order logic, Substitution, Unification, Resolution rule, Modus ponens, Resolution, Semi-decidability]** + +⟶ [Birinci derece mantık, Değiştirme, Birleştirme, Çözünürlük kuralı, Modus ponens, Çözünürlük, Yarı-karar verilebilirlik] + +
+ + +**61. View PDF version on GitHub** + +⟶ GitHub'da PDF sürümünü görüntüleyin + +
+ + +**62. Original authors** + +⟶ Orijinal yazarlar + +
+ + +**63. Translated by X, Y and Z** + +⟶ X, Y ve Z tarafından çevrilmiştir + +
+ + +**64. Reviewed by X, Y and Z** + +⟶ X, Y ve Z tarafından gözden geçirilmiştir + +
+ + +**65. By X and Y** + +⟶ X ve Y ile + +
+ + +**66. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶ Yapay Zeka el kitabı şimdi [Türkçe] mevcuttur. diff --git a/tr/cs-221-reflex-models.md b/tr/cs-221-reflex-models.md new file mode 100644 index 000000000..e1aea4a79 --- /dev/null +++ b/tr/cs-221-reflex-models.md @@ -0,0 +1,538 @@ +**Reflex-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-reflex-models) + +
+ +**1. Reflex-based models with Machine Learning** + +⟶ Makine Öğrenmesi ile Refleks-temelli modeller + +
+ + +**2. Linear predictors** + +⟶ Doğrusal öngörücüler + +
+ + +**3. In this section, we will go through reflex-based models that can improve with experience, by going through samples that have input-output pairs.** + +⟶ Bu bölümde, girdi-çıktı çiftleri olan örneklerden geçerek, deneyim ile gelişebilecek refleks-temelli modelleri göreceğiz. + +
+ + +**4. Feature vector ― The feature vector of an input x is noted ϕ(x) and is such that:** + +⟶ Öznitelik vektörü ― x girişinin öznitelik vektörü ϕ (x) olarak not edilir ve şöyledir: + +


**5. Score ― The score s(x,w) of an example (ϕ(x),y)∈Rd×R associated to a linear model of weights w∈Rd is given by the inner product:**

⟶ Puan ― w∈Rd doğrusal ağırlık modeliyle ilişkili bir (ϕ(x),y)∈Rd×R örneğinin puanı s(x,w), iç çarpım ile verilir:

<br>
+ + +**6. Classification** + +⟶ Sınıflandırma + +
+ + +**7. Linear classifier ― Given a weight vector w∈Rd and a feature vector ϕ(x)∈Rd, the binary linear classifier fw is given by:** + +⟶ Doğrusal sınıflandırıcı - Bir ağırlık vektörü w∈Rd ve bir öznitelik vektörü ϕ(x)∈Rd verildiğinde, ikili doğrusal sınıflandırıcı fw şöyle verilir: + +


**8. if**

⟶ Eğer

<br>


**9. Margin ― The margin m(x,y,w)∈R of an example (ϕ(x),y)∈Rd×{−1,+1} associated to a linear model of weights w∈Rd quantifies the confidence of the prediction: larger values are better. It is given by:**

⟶ Marj ― w∈Rd doğrusal ağırlık modeliyle ilişkili bir (ϕ(x),y)∈Rd×{−1,+1} örneğinin marjı m(x,y,w)∈R, tahminin güvenini ölçer: daha büyük değerler daha iyidir. Şöyle ifade edilir:

<br>
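
The linear classifier and margin definitions above can be sketched directly; the weight vector, feature values, and labels below are made up for illustration:

```python
import numpy as np

# Sketch of fw(x) = sign(s(x, w)) and the margin m(x, y, w) = s(x, w) * y,
# with the feature map phi assumed to be given.
def score(w, phi_x):
    return float(np.dot(w, phi_x))

def predict(w, phi_x):
    return 1 if score(w, phi_x) >= 0 else -1

def margin(w, phi_x, y):
    return score(w, phi_x) * y

w = np.array([2.0, -1.0])
phi_x = np.array([1.0, 3.0])
print(predict(w, phi_x))     # score is 2 - 3 = -1, so class -1
print(margin(w, phi_x, -1))  # 1.0: the correct label gives a positive margin
```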
+ + +**10. Regression** + +⟶ Bağlanım (Regression) + +


**11. Linear regression ― Given a weight vector w∈Rd and a feature vector ϕ(x)∈Rd, the output of a linear regression of weights w denoted as fw is given by:**

⟶ Doğrusal bağlanım (Linear regression) ― Bir ağırlık vektörü w∈Rd ve bir öznitelik vektörü ϕ(x)∈Rd verildiğinde, fw olarak belirtilen doğrusal bağlanımın çıktısı şöyle verilir:

<br>
+ + +**12. Residual ― The residual res(x,y,w)∈R is defined as being the amount by which the prediction fw(x) overshoots the target y:** + +⟶ Artık (Residual) - Artık res(x,y,w)∈R, fw(x) tahmininin y hedefini aştığı miktar olarak tanımlanır: + +
+ + +**13. Loss minimization** + +⟶ Kayıp/Yitim minimizasyonu + +
+ + +**14. Loss function ― A loss function Loss(x,y,w) quantifies how unhappy we are with the weights w of the model in the prediction task of output y from input x. It is a quantity we want to minimize during the training process.** + +⟶ Kayıp fonksiyonu - Kayıp fonksiyonu Loss(x,y,w), x girişinden y çıktısının öngörme görevindeki model ağırlıkları ile ne kadar mutsuz olduğumuzu belirler. Bu değer eğitim sürecinde en aza indirmek istediğimiz bir miktar. + +
+ + +**15. Classification case - The classification of a sample x of true label y∈{−1,+1} with a linear model of weights w can be done with the predictor fw(x)≜sign(s(x,w)). In this situation, a metric of interest quantifying the quality of the classification is given by the margin m(x,y,w), and can be used with the following loss functions:** + +⟶ Sınıflandırma durumu - Doğru etiket y∈{−1,+1} değerinin x örneğinin doğrusal ağırlık w modeliyle sınıflandırılması fw(x)≜sign(s(x,w)) belirleyicisi ile yapılabilir. Bu durumda, sınıflandırma kalitesini ölçen bir fayda ölçütü m(x,y,w) marjı ile verilir ve aşağıdaki kayıp fonksiyonlarıyla birlikte kullanılabilir: + +
+ + +**16. [Name, Illustration, Zero-one loss, Hinge loss, Logistic loss]** + +⟶ [Ad, Örnekleme, Sıfır-bir kayıp, Menteşe kaybı, Lojistik kaybı] + +
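
The three classification losses listed above can be written as functions of the margin m; the tie-breaking convention for a zero margin is an assumption of this sketch:

```python
import math

# Sketch of the losses above as functions of the margin m = m(x, y, w).
def zero_one_loss(m):
    return 1.0 if m <= 0 else 0.0      # misclassified (convention: m = 0 counts)

def hinge_loss(m):
    return max(1.0 - m, 0.0)           # penalizes margins below 1

def logistic_loss(m):
    return math.log(1.0 + math.exp(-m))

print(hinge_loss(2.0))      # 0.0: confidently correct, no penalty
print(hinge_loss(0.5))      # 0.5: correct but inside the margin
print(zero_one_loss(-1.0))  # 1.0: misclassified
```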
+ + +**17. Regression case - The prediction of a sample x of true label y∈R with a linear model of weights w can be done with the predictor fw(x)≜s(x,w). In this situation, a metric of interest quantifying the quality of the regression is given by the margin res(x,y,w) and can be used with the following loss functions:** + +⟶ Regresyon durumu - Doğru etiket y∈R değerinin x örneğinin bir doğrusal ağırlık modeli w ile öngörülmesi fw(x)≜s(x,w) öngörüsü ile yapılabilir. Bu durumda, regresyonun kalitesini ölçen bir fayda ölçütü res(x,y,w) marjı ile verilir ve aşağıdaki kayıp fonksiyonlarıyla birlikte kullanılabilir: + +
+ + +**18. [Name, Squared loss, Absolute deviation loss, Illustration]** + +⟶ [Ad, Kareler kaybı, Mutlak sapma kaybı, Görselleştirme] + +
+ + +**19. Loss minimization framework ― In order to train a model, we want to minimize the training loss is defined as follows:** + +⟶ Kayıp minimize etme çerçevesi (framework) - Bir modeli eğitmek için, eğitim kaybını en aza indirmek istiyoruz; + +
+ + +**20. Non-linear predictors** + +⟶ Doğrusal olmayan öngörücüler + +
+ + +**21. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** + +⟶ k-en yakın komşu - Yaygın olarak k-NN olarak bilinen k-en yakın komşu algoritması, bir veri noktasının tepkisinin eğitim kümesinden k komşularının yapısı tarafından belirlendiği parametrik olmayan bir yaklaşımdır. Hem sınıflandırma hem de regresyon ayarlarında kullanılabilir. + +
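
A minimal k-NN classification sketch follows; Euclidean distance and the toy data are assumptions of this example (the cheatsheet does not fix a metric):

```python
import numpy as np
from collections import Counter

# k-NN: predict the majority label among the k closest training points.
def knn_predict(X_train, y_train, x, k):
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to each point
    nearest = np.argsort(dists)[:k]              # indices of k neighbors
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]            # majority vote

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y = np.array([-1, -1, 1, 1])
print(knn_predict(X, y, np.array([0.2, 0.1]), k=3))  # -1
```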
+ + +**22. Remark: the higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.** + +⟶ Not: k parametresi ne kadar yüksekse, önyargı (bias) o kadar yüksek ve k parametresi ne kadar düşükse, varyans o kadar yüksek olur. + +
+ + +**23. Neural networks ― Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks. The vocabulary around neural networks architectures is described in the figure below:** + +⟶ Yapay sinir ağları - Yapay sinir ağları katmanlarla oluşturulmuş bir model sınıfıdır. Yaygın olarak kullanılan sinir ağları, evrişimli ve tekrarlayan sinir ağlarını içerir. Yapay sinir ağları mimarisi etrafındaki kelime bilgisi aşağıdaki şekilde tanımlanmıştır: + +
+ + +**24. [Input layer, Hidden layer, Output layer]** + +⟶ [Giriş katmanı, Gizli katman, Çıkış katmanı] + +
+ + +**25. By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:** + +⟶ i, ağın i. katmanı ve j, katmanın j. gizli birimi olacak şekilde aşağıdaki gibi ifade edilir: + +
+ + +**26. where we note w, b, x, z the weight, bias, input and non-activated output of the neuron respectively.** + +⟶ w, b, x, z değerlerinin sırasıyla nöronun ağırlık, önyargı (bias), girdi ve aktive edilmemiş çıkışını olarak ifade eder. + +
+ + +**27. For a more detailed overview of the concepts above, check out the Supervised Learning cheatsheets!** + +⟶ Yukarıdaki kavramlara daha ayrıntılı bir bakış için, Gözetimli Öğrenme el kitabına göz atın! + +
+ + +**28. Stochastic gradient descent** + +⟶ Stokastik gradyan inişi (Bayır inişi) + +
+ + +**29. Gradient descent ― By noting η∈R the learning rate (also called step size), the update rule for gradient descent is expressed with the learning rate and the loss function Loss(x,y,w) as follows:** + +⟶ Gradyan inişi (Bayır inişi) - η∈R öğrenme oranını (aynı zamanda adım boyutu olarak da bilinir) dikkate alınarak, gradyan inişine ilişkin güncelleme kuralı, öğrenme oranı ve Loss(x,y,w) kayıp fonksiyonu ile aşağıdaki şekilde ifade edilir: + +
+
+
+**30. Stochastic updates ― Stochastic gradient descent (SGD) updates the parameters of the model one training example (ϕ(x),y)∈Dtrain at a time. This method leads to sometimes noisy, but fast updates.**
+
+⟶ Stokastik güncellemeler - Stokastik gradyan inişi (SGİ / SGD), model parametrelerini bir seferde bir eğitim örneği (ϕ(x),y)∈Değitim kullanarak günceller. Bu yöntem bazen gürültülü, ancak hızlı güncellemelere yol açar.
+
+<br>
+
+
+**31. Batch updates ― Batch gradient descent (BGD) updates the parameters of the model one batch of examples (e.g. the entire training set) at a time. This method computes stable update directions, at a greater computational cost.**
+
+⟶ Yığın/küme güncellemeler - Yığın gradyan inişi (YGİ / BGD), model parametrelerini bir seferde bir yığın örnek (örneğin tüm eğitim kümesi) kullanarak günceller. Bu yöntem, daha yüksek bir hesaplama maliyeti karşılığında kararlı güncelleme yönleri hesaplar.
+
+<br>
+
+
+**32. Fine-tuning models**
+
+⟶ Modellerde ince ayar (fine-tuning)
+
+<br>
+ + +**33. Hypothesis class ― A hypothesis class F is the set of possible predictors with a fixed ϕ(x) and varying w:** + +⟶ Hipotez sınıfı - Bir hipotez sınıfı F, sabit bir ϕ (x) ve değişken w ile olası öngörücü kümesidir: + +
+ + +**34. Logistic function ― The logistic function σ, also called the sigmoid function, is defined as:** + +⟶ Lojistik fonksiyon - Ayrıca sigmoid fonksiyon olarak da adlandırılan lojistik fonksiyon σ, şöyle tanımlanır: + +
+ + +**35. Remark: we have σ′(z)=σ(z)(1−σ(z)).** + +⟶ Not: σ′(z)=σ(z)(1−σ(z)) şeklinde ifade edilir. + +
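σ′(z)=σ(z)(1−σ(z)) özdeşliği, sayısal türevle karşılaştırılarak küçük bir taslakla doğrulanabilir (z ve h değerleri varsayımsaldır):

```python
import math

def sigmoid(z):
    # Lojistik (sigmoid) fonksiyon: sigma(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

# sigma'(z) = sigma(z) * (1 - sigma(z)) özdeşliğini merkezi farkla karşılaştır
z, h = 0.7, 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
analytic = sigmoid(z) * (1 - sigmoid(z))
```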
+ + +**36. Backpropagation ― The forward pass is done through fi, which is the value for the subexpression rooted at i, while the backward pass is done through gi=∂out∂fi and represents how fi influences the output.** + +⟶ Geri yayılım - İleriye geçiş, i'de yer alan alt ifadenin değeri olan fi ile yapılırken, geriye doğru geçiş gi=∂out∂fi aracılığıyla yapılır ve fi'nin çıkışı nasıl etkilediğini gösterir. + +
+
+
+**37. Approximation and estimation error ― The approximation error ϵapprox represents how far the entire hypothesis class F is from the target predictor g∗, while the estimation error ϵest quantifies how good the predictor ^f is with respect to the best predictor f∗ of the hypothesis class F.**
+
+⟶ Yaklaşım ve kestirim hatası - Yaklaşım hatası ϵapprox, tüm F hipotez sınıfının hedef öngörücü g∗'dan ne kadar uzak olduğunu gösterirken, kestirim hatası ϵest, ^f öngörücüsünün F hipotez sınıfının en iyi öngörücüsü f∗'ya göre ne kadar iyi olduğunu ölçer.
+<br>
+
+
+**38. Regularization ― The regularization procedure aims at avoiding the model to overfit the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:**
+
+⟶ Düzenlileştirme (Regularization) - Düzenlileştirme prosedürü, modelin veriyi aşırı öğrenmesini (overfitting) önlemeyi amaçlar ve böylece yüksek varyans sorunlarıyla ilgilenir. Aşağıdaki tablo, yaygın olarak kullanılan düzenlileştirme tekniklerinin farklı türlerini özetlemektedir:
+
+<br>
+ + +**39. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** + +⟶ [Katsayıları 0'a düşürür, Değişken seçimi için iyi, Katsayıları daha küçük yapar, Değişken seçimi ile küçük katsayılar arasında ödünleşim] + +
+
+
+**40. Hyperparameters ― Hyperparameters are the properties of the learning algorithm, and include features, regularization parameter λ, number of iterations T, step size η, etc.**
+
+⟶ Hiperparametreler - Hiperparametreler, öğrenme algoritmasının özellikleridir; öznitelikleri, düzenlileştirme parametresi λ'yı, yineleme sayısı T'yi, adım büyüklüğü η'yı vb. içerir.
+
+<br>
+ + +**41. Sets vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** + +⟶ Kümeler - Bir model seçerken, veriyi aşağıdaki gibi 3 farklı parçaya ayırırız: + +
+ + +**42. [Training set, Validation set, Testing set]** + +⟶ [Eğitim kümesi, Doğrulama kümesi, Test kümesi] + +
+ + +**43. [Model is trained, Usually 80% of the dataset, Model is assessed, Usually 20% of the dataset, Also called hold-out or development set, Model gives predictions, Unseen data]** + +⟶ [Model eğitilir, Veri kümesinin genellikle %80'i, Model değerlendirilir, Veri kümesinin genellikle %20'si, Ayrıca tutma veya geliştirme kümesi olarak da adlandırılır, Model tahminlerini verir, Görünmeyen veriler] + +
+ + +**44. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** + +⟶ Model seçildikten sonra, tüm veri kümesi üzerinde eğitilir ve görünmeyen test kümesinde test edilir. Bunlar aşağıdaki şekilde gösterilmektedir: + +
+ + +**45. [Dataset, Unseen data, train, validation, test]** + +⟶ [Veri kümesi, Görünmeyen veriler, eğitim, doğrulama, test] + +
+
+
+**46. For a more detailed overview of the concepts above, check out the Machine Learning tips and tricks cheatsheets!**
+
+⟶ Yukarıdaki kavramlara daha ayrıntılı bir bakış için, Makine Öğrenmesi ipuçları ve püf noktaları el kitabına göz atın!
+
+<br>
+ + +**47. Unsupervised Learning** + +⟶ Gözetimsiz Öğrenme + +
+ + +**48. The class of unsupervised learning methods aims at discovering the structure of the data, which may have of rich latent structures.** + +⟶ Gözetimsiz öğrenme yöntemlerinin sınıfı, zengin gizli yapılara sahip olabilecek verilerin yapısını keşfetmeyi amaçlamaktadır. + +
+ + +**49. k-means** + +⟶ k-ortalama + +
+ + +**50. Clustering ― Given a training set of input points Dtrain, the goal of a clustering algorithm is to assign each point ϕ(xi) to a cluster zi∈{1,...,k}** + +⟶ Kümeleme - Dtrain giriş noktalarından oluşan bir eğitim kümesi göz önüne alındığında, kümeleme algoritmasının amacı, her bir ϕ(xi) noktasını zi∈{1,...,k} kümesine atamaktır. + +
+ + +**51. Objective function ― The loss function for one of the main clustering algorithms, k-means, is given by:** + +⟶ Amaç fonksiyonu - Ana kümeleme algoritmalarından biri olan k-ortalama için kayıp fonksiyonu şöyle ifade edilir: + +
+ + +**52. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** + +⟶ Algoritma - Küme merkezlerini μ1,μ2,...,μk∈Rn kümesini rasgele başlattıktan sonra, k-ortalama algoritması yakınsayana kadar aşağıdaki adımı tekrarlar: + +
+ + +**53. and** + +⟶ ve + +
+
+
+**54. [Means initialization, Cluster assignment, Means update, Convergence]**
+
+⟶ [Ortalamaların ilklendirilmesi, Küme ataması, Ortalamaların güncellenmesi, Yakınsama]
+
+<br>
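Yukarıdaki adımlar (ortalamaların ilklendirilmesi, küme ataması, ortalamaların güncellenmesi), tek boyutlu varsayımsal noktalar üzerinde asgari bir taslak olarak şöyle gösterilebilir:

```python
import random

def kmeans(points, k, iters=100):
    # Küme merkezlerini rastgele seçilen noktalarla ilklendir
    mu = random.sample(points, k)
    for _ in range(iters):
        # Küme ataması: her noktayı en yakın merkeze ata
        z = [min(range(k), key=lambda j: (p - mu[j]) ** 2) for p in points]
        # Ortalamaların güncellenmesi: her merkez, atanan noktaların ortalaması olur
        for j in range(k):
            members = [p for p, zj in zip(points, z) if zj == j]
            if members:
                mu[j] = sum(members) / len(members)
    return sorted(mu), z

random.seed(0)
points = [0.0, 0.1, 0.2, 9.0, 9.1, 9.2]
mu, z = kmeans(points, k=2)
# mu, iki grubun ortalamaları olan 0.1 ve 9.1 değerlerine yakınsar
```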
+ + +**55. Principal Component Analysis** + +⟶ Temel Bileşenler Analizi + +
+
+
+**56. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:**
+
+⟶ Özdeğer, özvektör - Bir A∈Rn×n matrisi verildiğinde, özvektör olarak adlandırılan ve aşağıdaki eşitliği sağlayan bir z∈Rn∖{0} vektörü varsa, λ'ya A'nın bir özdeğeri denir:
+
+<br>
+ + +**57. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** + +⟶ Spektral teoremi - A∈Rn×n olsun. A simetrik ise, o zaman A gerçek ortogonal matris U∈Rn×n olacak şekilde köşegenleştirilebilir. Λ=diag(λ1,...,λn) formülü dikkate alınarak aşağıdaki gibi ifade edilir: + +
+ + +**58. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** + +⟶ Not: En büyük özdeğerle ilişkilendirilen özvektör, A matrisinin temel özvektörüdür. + +
+
+
+**59. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k dimensions by maximizing the variance of the data as follows:**
+
+⟶ Algoritma - Temel Bileşenler Analizi (PCA) prosedürü, verinin varyansını en üst düzeye çıkaracak şekilde veriyi k boyuta izdüşüren bir boyut indirgeme tekniğidir ve aşağıdaki adımları izler:
+
+<br>
+ + +**60. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** + +⟶ Adım 1: Verileri ortalama 0 ve 1 standart sapma olacak şekilde normalize edin. + +
+ + +**61. [where, and]** + +⟶ [koşul, ve] + +
+
+
+**62. [Step 2: Compute Σ=1mm∑i=1ϕ(xi)ϕ(xi)T∈Rn×n, which is symmetric with real eigenvalues., Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues., Step 4: Project the data on spanR(u1,...,uk).]**
+
+⟶ [Adım 2: Gerçel özdeğerlere sahip simetrik bir matris olan Σ=1mm∑i=1ϕ(xi)ϕ(xi)T∈Rn×n'yi hesaplayın., Adım 3: Σ'nın k ortogonal temel özvektörünü, u1,...,uk∈Rn, yani en büyük k özdeğerin ortogonal özvektörlerini hesaplayın., Adım 4: Veriyi spanR(u1,...,uk) üzerine izdüşürün.]
+
+<br>
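Yukarıdaki adımlar, 2 boyutlu varsayımsal bir veri kümesi üzerinde, temel özvektörü kuvvet yinelemesiyle (power iteration) bulan asgari bir taslak olarak gösterilebilir (basitlik için Adım 1'de standart sapmaya bölme atlanmıştır; veri ve isimler varsayımsaldır):

```python
def pca_first_component(data, iters=200):
    # Adım 1: veriyi ortalaması 0 olacak şekilde merkezle
    m = len(data)
    means = [sum(col) / m for col in zip(*data)]
    X = [[x - mu for x, mu in zip(row, means)] for row in data]
    # Adım 2: kovaryans matrisi Sigma = (1/m) * X^T X (2x2, simetrik)
    S = [[sum(X[i][a] * X[i][b] for i in range(m)) / m for b in range(2)] for a in range(2)]
    # Adım 3: en büyük özdeğere karşılık gelen özvektörü kuvvet yinelemesiyle bul
    u = [1.0, 1.0]
    for _ in range(iters):
        v = [S[0][0] * u[0] + S[0][1] * u[1], S[1][0] * u[0] + S[1][1] * u[1]]
        norm = (v[0] ** 2 + v[1] ** 2) ** 0.5
        u = [v[0] / norm, v[1] / norm]
    return u

data = [[1.0, 1.0], [2.0, 2.1], [3.0, 2.9], [4.0, 4.0]]
u = pca_first_component(data)
# Veri yaklaşık y = x doğrusu üzerinde: temel özvektör (1,1)/sqrt(2) yönüne yakındır
```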
+
+
+**63. This procedure maximizes the variance among all k-dimensional spaces.**
+
+⟶ Bu prosedür, varyansı tüm k boyutlu uzaylar arasında en üst düzeye çıkarır.
+
+<br>
+ + +**64. [Data in feature space, Find principal components, Data in principal components space]** + +⟶ [Öznitelik uzayındaki veriler, Asıl bileşenleri bulma, Asıl bileşenler uzayındaki veriler] + +
+ + +**65. For a more detailed overview of the concepts above, check out the Unsupervised Learning cheatsheets!** + +⟶ Yukarıdaki kavramlara daha ayrıntılı bir genel bakış için, Gözetimsiz Öğrenme el kitaplarına göz atın! + +
+ + +**66. [Linear predictors, Feature vector, Linear classifier/regression, Margin]** + +⟶ [Doğrusal öngörücüler, Öznitelik vektörü, Doğrusal sınıflandırıcı/regresyon, Marj] + +
+ + +**67. [Loss minimization, Loss function, Framework]** + +⟶ [Kayıp minimizasyonu, Kayıp fonksiyonu, Çerçeve (Framework)] + +
+ + +**68. [Non-linear predictors, k-nearest neighbors, Neural networks]** + +⟶ [Doğrusal olmayan öngörücüler, k-en yakın komşular, Yapay sinir ağları] + +
+ + +**69. [Stochastic gradient descent, Gradient, Stochastic updates, Batch updates]** + +⟶ [Stokastik Dereceli Azalma/Bayır İnişi, Gradyan, Stokastik güncellemeler, Yığın/Küme (Batch) güncellemeler] + +
+
+
+**70. [Fine-tuning models, Hypothesis class, Backpropagation, Regularization, Sets vocabulary]**
+
+⟶ [Modellerde ince ayar, Hipotez sınıfı, Geri yayılım, Düzenlileştirme (Regularization), Kümeler]
+
+<br>
+ + +**71. [Unsupervised Learning, k-means, Principal components analysis]** + +⟶ [Gözetimsiz Öğrenme, k-ortalama, Temel bileşenler analizi] + +
+ + +**72. View PDF version on GitHub** + +⟶ GitHub'da PDF sürümünü görüntüleyin + +
+ + +**73. Original authors** + +⟶ Orijinal yazarlar + +
+ + +**74. Translated by X, Y and Z** + +⟶ X, Y ve Z tarafından çevrilmiştir + +
+ + +**75. Reviewed by X, Y and Z** + +⟶ X, Y ve Z tarafından gözden geçirilmiştir + +
+
+
+**76. By X and Y**
+
+⟶ X ve Y tarafından
+
+<br>
+ + +**77. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶ Yapay Zeka el kitabı şimdi [hedef dilde] mevcuttur. diff --git a/tr/cs-221-states-models.md b/tr/cs-221-states-models.md new file mode 100644 index 000000000..bceddce2b --- /dev/null +++ b/tr/cs-221-states-models.md @@ -0,0 +1,980 @@ +**States-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-states-models) + +
+ +**1. States-based models with search optimization and MDP** + +⟶ Arama optimizasyonu ve Markov karar sürecine (MDP) sahip durum-temelli modeller + +
+ + +**2. Search optimization** + +⟶ Arama optimizasyonu + +
+ + +**3. In this section, we assume that by accomplishing action a from state s, we deterministically arrive in state Succ(s,a). The goal here is to determine a sequence of actions (a1,a2,a3,a4,...) that starts from an initial state and leads to an end state. In order to solve this kind of problem, our objective will be to find the minimum cost path by using states-based models.** + +⟶ Bu bölümde, s durumunda a eylemini gerçekleştirdiğimizde, Succ(s,a) durumuna varacağımızı varsayıyoruz. Burada amaç, başlangıç durumundan başlayıp bitiş durumuna götüren bir eylem dizisi (a1,a2,a3,a4,...) belirlenmesidir. Bu tür bir problemi çözmek için, amacımız durum-temelli modelleri kullanarak asgari (minimum) maliyet yolunu bulmak olacaktır. + +
+ + +**4. Tree search** + +⟶ Ağaç arama + +
+ + +**5. This category of states-based algorithms explores all possible states and actions. It is quite memory efficient, and is suitable for huge state spaces but the runtime can become exponential in the worst cases.** + +⟶ Bu durum-temelli algoritmalar, olası bütün durum ve eylemleri araştırırlar. Oldukça bellek verimli ve büyük durum uzayları için uygundurlar ancak çalışma zamanı en kötü durumlarda üstel olabilir. + +
+ + +**6. [Self-loop, More than a parent, Cycle, More than a root, Valid tree]** + +⟶ [Kendinden-Döngü(Self-loop), Bir ebeveynden (parent) daha fazlası, Çevrim, Bir kökten daha fazlası, Geçerli ağaç] + +
+ + +**7. [Search problem ― A search problem is defined with:, a starting state sstart, possible actions Actions(s) from state s, action cost Cost(s,a) from state s with action a, successor Succ(s,a) of state s after action a, whether an end state was reached IsEnd(s)]** + +⟶ [Arama problemi ― Bir arama problemi aşağıdaki şekilde tanımlanmaktadır:, bir başlangıç durumu sstart, s durumunda gerçekleşebilecek olası eylemler Actions(s), s durumunda gerçekleşen a eyleminin eylem maliyeti Cost(s,a), a eyleminden sonraki varılacak durum Succ(s,a), son duruma ulaşılıp ulaşılamadığı IsEnd(s)] + +
+ + +**8. The objective is to find a path that minimizes the cost.** + +⟶ Amaç, maliyeti en aza indiren bir yol bulmaktır. + +
+ + +**9. Backtracking search ― Backtracking search is a naive recursive algorithm that tries all possibilities to find the minimum cost path. Here, action costs can be either positive or negative.** + +⟶ Geri izleme araması ― Geri izleme araması, asgari (minimum) maliyet yolunu bulmak için tüm olasılıkları deneyen saf (naive) bir özyinelemeli algoritmadır. Burada, eylem maliyetleri pozitif ya da negatif olabilir. + +
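Geri izleme araması, varsayımsal küçük bir problem üzerinde (0'dan 3'e, "+1" adımı maliyet 2, "+2" adımı maliyet 3) özyinelemeli bir taslak olarak şöyle yazılabilir:

```python
def backtracking_search(succ, start, is_end):
    # Geri izleme: tüm eylem dizilerini dener, asgari maliyetli yolu döndürür
    best = {"cost": float("inf"), "path": None}

    def recurse(s, path, cost):
        if is_end(s):
            if cost < best["cost"]:
                best["cost"], best["path"] = cost, list(path)
            return
        for a, s2, c in succ(s):
            path.append(a)
            recurse(s2, path, cost + c)
            path.pop()

    recurse(start, [], 0.0)
    return best["cost"], best["path"]

# Varsayımsal örnek: 0'dan 3'e, "+1" (maliyet 2) veya "+2" (maliyet 3) adımlarıyla
def succ(s):
    return [(a, s + d, c) for a, d, c in [("+1", 1, 2.0), ("+2", 2, 3.0)] if s + d <= 3]

cost, path = backtracking_search(succ, 0, lambda s: s == 3)
```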
+ + +**10. Breadth-first search (BFS) ― Breadth-first search is a graph search algorithm that does a level-by-level traversal. We can implement it iteratively with the help of a queue that stores at each step future nodes to be visited. For this algorithm, we can assume action costs to be equal to a constant c⩾0.** + +⟶ Genişlik öncelikli arama (Breadth-first search-BFS) ― Genişlik öncelikli arama, seviye seviye arama yapan bir çizge arama algoritmasıdır. Gelecekte her adımda ziyaret edilecek düğümleri tutan bir kuyruk yardımıyla yinelemeli olarak gerçekleyebiliriz. Bu algoritma için, eylem maliyetlerinin belirli bir sabite c⩾0 eşit olduğunu kabul edebiliriz. + +
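Genişlik öncelikli arama, bir kuyruk (queue) yardımıyla varsayımsal küçük bir çizge üzerinde şöyle taslak haline getirilebilir (her kenar maliyeti sabit c varsayılmıştır):

```python
from collections import deque

def bfs_min_cost(graph, start, goal, c=1):
    # Genişlik öncelikli arama: seviye seviye dolaşır, kenar maliyeti sabit c
    frontier = deque([(start, 0)])
    visited = {start}
    while frontier:
        s, cost = frontier.popleft()
        if s == goal:
            return cost
        for nxt in graph.get(s, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, cost + c))
    return None  # hedefe ulaşılamadı

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
```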
+ + +**11. Depth-first search (DFS) ― Depth-first search is a search algorithm that traverses a graph by following each path as deep as it can. We can implement it recursively, or iteratively with the help of a stack that stores at each step future nodes to be visited. For this algorithm, action costs are assumed to be equal to 0.** + +⟶ Derinlik öncelikli arama (Depth-first search-DFS) ― Derinlik öncelikli arama, her bir yolu olabildiğince derin bir şekilde takip ederek çizgeyi dolaşan bir arama algoritmasıdır. Bu algoritmayı, ziyaret edilecek gelecek düğümleri her adımda bir yığın yardımıyla saklayarak, yinelemeli (recursively) ya da tekrarlı (iteratively) olarak uygulayabiliriz. Bu algoritma için eylem maliyetlerinin 0 olduğu varsayılmaktadır. + +
+
+
+**12. Iterative deepening ― The iterative deepening trick is a modification of the depth-first search algorithm so that it stops after reaching a certain depth, which guarantees optimality when all action costs are equal. Here, we assume that action costs are equal to a constant c⩾0.**
+
+⟶ Tekrarlı derinleşme ― Tekrarlı derinleşme hilesi, derinlik öncelikli arama algoritmasının belirli bir derinliğe ulaştıktan sonra duracak şekilde değiştirilmiş halidir; bu, tüm eylem maliyetleri eşit olduğunda en iyiliği (optimality) garanti eder. Burada, eylem maliyetlerinin c⩾0 gibi sabit bir değere eşit olduğunu varsayıyoruz.
+
+<br>
+
+
+**13. Tree search algorithms summary ― By noting b the number of actions per state, d the solution depth, and D the maximum depth, we have:**
+
+⟶ Ağaç arama algoritmaları özeti ― b durum başına eylem sayısını, d çözüm derinliğini ve D azami (maksimum) derinliği ifade ederse, o zaman:
+
+<br>
+ + +**14. [Algorithm, Action costs, Space, Time]** + +⟶ [Algoritma, Eylem maliyetleri, Arama uzayı, Zaman] + +
+ + +**15. [Backtracking search, any, Breadth-first search, Depth-first search, DFS-Iterative deepening]** + +⟶ [Geri izleme araması, herhangi bir şey, Genişlik öncelikli arama, Derinlik öncelikli arama, DFS - Tekrarlı derinleşme] + +
+ + +**16. Graph search** + +⟶ Çizge arama + +
+ + +**17. This category of states-based algorithms aims at constructing optimal paths, enabling exponential savings. In this section, we will focus on dynamic programming and uniform cost search.** + +⟶ Bu durum-temelli algoritmalar kategorisi, üssel tasarruf sağlayan en iyi (optimal) yolları oluşturmayı amaçlar. Bu bölümde, dinamik programlama ve tek tip maliyet araştırması üzerinde duracağız. + +
+ + +**18. Graph ― A graph is comprised of a set of vertices V (also called nodes) as well as a set of edges E (also called links).** + +⟶ Çizge ― Bir çizge, V köşeler (düğüm olarak da adlandırılır) kümesi ile E kenarlar (bağlantı olarak da adlandırılır) kümesinden oluşur. + +
+ + +**19. Remark: a graph is said to be acylic when there is no cycle.** + +⟶ Not: çevrim olmadığında, bir çizgenin asiklik (çevrimsiz) olduğu söylenir. + +
+ + +**20. State ― A state is a summary of all past actions sufficient to choose future actions optimally.** + +⟶ Durum ― Bir durum gelecekteki eylemleri en iyi (optimal) şekilde seçmek için, yeterli tüm geçmiş eylemlerin özetidir. + +
+ + +**21. Dynamic programming ― Dynamic programming (DP) is a backtracking search algorithm with memoization (i.e. partial results are saved) whose goal is to find a minimum cost path from state s to an end state send. It can potentially have exponential savings compared to traditional graph search algorithms, and has the property to only work for acyclic graphs. For any given state s, the future cost is computed as follows:** + +⟶ Dinamik programlama ― Dinamik programlama (DP), amacı s durumundan bitiş durumu olan send'e kadar asgari(minimum) maliyet yolunu bulmak olan hatırlamalı (memoization) (başka bir deyişle kısmi sonuçlar kaydedilir) bir geri izleme (backtracking) arama algoritmasıdır. Geleneksel çizge arama algoritmalarına kıyasla üstel olarak tasarruf sağlayabilir ve yalnızca asiklik (çevrimsiz) çizgeler ile çalışma özelliğine sahiptir. Herhangi bir durum için gelecekteki maliyet aşağıdaki gibi hesaplanır: + +
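Yukarıdaki özyineleme, hatırlama (memoization) ile varsayımsal asiklik bir problem üzerinde şöyle gösterilebilir (0'dan n'ye, +1 adımı maliyet 3, +2 adımı maliyet 4; sayılar varsayımsaldır):

```python
from functools import lru_cache

# Asiklik bir arama problemi taslağı: durumlar 0..n, eylemler +1 veya +2 adım
n = 10
costs = {1: 3.0, 2: 4.0}  # adım uzunluğuna göre varsayımsal eylem maliyetleri

@lru_cache(maxsize=None)
def future_cost(s):
    # FutureCost(s) = 0 (eğer s bitiş durumu ise),
    # aksi halde min_a [ Cost(s,a) + FutureCost(Succ(s,a)) ]
    if s == n:
        return 0.0
    return min(costs[a] + future_cost(s + a) for a in (1, 2) if s + a <= n)
```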
+ + +**22. [if, otherwise]** + +⟶ [eğer, aksi taktirde] + +
+ + +**23. Remark: the figure above illustrates a bottom-to-top approach whereas the formula provides the intuition of a top-to-bottom problem resolution.** + +⟶ Not: Yukarıdaki şekil, aşağıdan yukarıya bir yaklaşımı sergilerken, formül ise yukarıdan aşağıya bir önsezi ile problem çözümü sağlar. + +
+ + +**24. Types of states ― The table below presents the terminology when it comes to states in the context of uniform cost search:** + +⟶ Durum türleri ― Tek tip maliyet araştırması bağlamındaki durumlara ilişkin terminoloji aşağıdaki tabloda sunulmaktadır: + +
+ + +**25. [State, Explanation]** + +⟶ [Durum, Açıklama] + +
+ + +**26. [Explored, Frontier, Unexplored]** + +⟶ [Keşfedilmiş, Sırada (Frontier), Keşfedilmemiş] + +
+ + +**27. [States for which the optimal path has already been found, States seen for which we are still figuring out how to get there with the cheapest cost, States not seen yet]** + +⟶ [En iyi (optimal) yolun daha önce bulunduğu durumlar, Görülen ancak hala en ucuza nasıl gidileceği hesaplanmaya çalışılan durumlar, Daha önce görülmeyen durumlar] + +
+ + +**28. Uniform cost search ― Uniform cost search (UCS) is a search algorithm that aims at finding the shortest path from a state sstart to an end state send. It explores states s in increasing order of PastCost(s) and relies on the fact that all action costs are non-negative.** + +⟶ Tek tip maliyet araması ― Tek tip maliyet araması (Uniform cost search - UCS) bir başlangıç durumu olan Sstart, ile bir bitiş durumu olan Send arasındaki en kısa yolu bulmayı amaçlayan bir arama algoritmasıdır. Bu algoritma s durumlarını artan geçmiş maliyetleri olan PastCost(s)'a göre araştırır ve eylem maliyetlerinin negatif olmayacağı kuralına dayanır. + +
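Tek tip maliyet araması, bir öncelik kuyruğu (heapq) ile varsayımsal bir problem üzerinde şöyle taslak haline getirilebilir (eylem maliyetleri negatif olmamalıdır; problem ve isimler varsayımsaldır):

```python
import heapq

def uniform_cost_search(succ, start, is_end):
    # succ(s): (eylem, sonraki durum, maliyet) üçlülerini döndürür
    frontier = [(0.0, start)]
    explored = set()
    while frontier:
        past_cost, s = heapq.heappop(frontier)
        if s in explored:
            continue
        explored.add(s)
        if is_end(s):
            return past_cost  # PastCost(s): sstart'tan s'e asgari maliyet
        for _, s2, cost in succ(s):
            if s2 not in explored:
                heapq.heappush(frontier, (past_cost + cost, s2))
    return None

# Varsayımsal küçük problem: 1'den 7'ye, "+1" (maliyet 1) ve "x2" (maliyet 2) eylemleriyle
def succ(s):
    return [("+1", s + 1, 1.0), ("x2", 2 * s, 2.0)]

best = uniform_cost_search(succ, 1, lambda s: s == 7)
```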
+ + +**29. Remark 1: the UCS algorithm is logically equivalent to Dijkstra's algorithm.** + +⟶ Not 1: UCS algoritması mantıksal olarak Dijkstra algoritması ile aynıdır. + +
+ + +**30. Remark 2: the algorithm would not work for a problem with negative action costs, and adding a positive constant to make them non-negative would not solve the problem since this would end up being a different problem.** + +⟶ Not 2: Algoritma, negatif eylem maliyetleriyle ilgili bir problem için çalışmaz ve negatif olmayan bir hale getirmek için pozitif bir sabit eklemek problemi çözmez, çünkü problem farklı bir problem haline gelmiş olur. + +
+ + +**31. Correctness theorem ― When a state s is popped from the frontier F and moved to explored set E, its priority is equal to PastCost(s) which is the minimum cost path from sstart to s.** + +⟶ Doğruluk teoremi ― S durumu sıradaki (frontier) F'den çıkarılır ve daha önceden keşfedilmiş olan E kümesine taşınırsa, önceliği başlangıç durumu olan Sstart'dan, s durumuna kadar asgari (minimum) maliyet yolu olan PastCost(s)'e eşittir. + +
+ + +**32. Graph search algorithms summary ― By noting N the number of total states, n of which are explored before the end state send, we have:** + +⟶ Çizge arama algoritmaları özeti ― N toplam durumların sayısı, n-bitiş durumu(Send)'ndan önce keşfedilen durum sayısı ise: + +
+ + +**33. [Algorithm, Acyclicity, Costs, Time/space]** + +⟶ [Algoritma, Asiklik (Çevrimsizlik), Maliyetler, Zaman/arama uzayı] + +
+ + +**34. [Dynamic programming, Uniform cost search]** + +⟶ [Dinamik programlama, Tek tip maliyet araması] + +
+ + +**35. Remark: the complexity countdown supposes the number of possible actions per state to be constant.** + +⟶ Not: Karmaşıklık geri sayımı, her durum için olası eylemlerin sayısını sabit olarak kabul eder. + +
+ + +**36. Learning costs** + +⟶ Öğrenme maliyetleri + +
+ + +**37. Suppose we are not given the values of Cost(s,a), we want to estimate these quantities from a training set of minimizing-cost-path sequence of actions (a1,a2,...,ak).** + +⟶ Diyelim ki, Cost(s,a) değerleri verilmedi ve biz bu değerleri maliyet yolu eylem dizisini,(a1,a2,...,ak), en aza indiren bir eğitim kümesinden tahmin etmek istiyoruz. + +
+ + +**38. [Structured perceptron ― The structured perceptron is an algorithm aiming at iteratively learning the cost of each state-action pair. At each step, it:, decreases the estimated cost of each state-action of the true minimizing path y given by the training data, increases the estimated cost of each state-action of the current predicted path y' inferred from the learned weights.]** + +⟶ [Yapılandırılmış algılayıcı ― Yapılandırılmış algılayıcı, her bir durum-eylem çiftinin maliyetini tekrarlı (iteratively) olarak öğrenmeyi amaçlayan bir algoritmadır. Her bir adımda, algılayıcı:, eğitim verilerinden elde edilen gerçek asgari (minimum) y yolunun her bir durum-eylem çiftinin tahmini (estimated) maliyetini azaltır, öğrenilen ağırlıklardan elde edilen şimdiki tahmini(predicted) y' yolununun durum-eylem çiftlerinin tahmini maliyetini artırır.] + +
+ + +**39. Remark: there are several versions of the algorithm, one of which simplifies the problem to only learning the cost of each action a, and the other parametrizes Cost(s,a) to a feature vector of learnable weights.** + +⟶ Not: Algoritmanın birkaç sürümü vardır, bunlardan biri problemi sadece her bir a eyleminin maliyetini öğrenmeye indirger, bir diğeri ise öğrenilebilir ağırlık öznitelik vektörünü, Cost(s,a)'nın parametresi haline getirir. + +
+ + +**40. A* search** + +⟶ A* arama + +
+ + +**41. Heuristic function ― A heuristic is a function h over states s, where each h(s) aims at estimating FutureCost(s), the cost of the path from s to send.** + +⟶ Sezgisel işlev(Heuristic function) ― Sezgisel, s durumu üzerinde işlem yapan bir h fonksiyonudur, burada her bir h(s), s ile send arasındaki yol maliyeti olan FutureCost(s)'yi tahmin etmeyi amaçlar. + +
+ + +**42. Algorithm ― A∗ is a search algorithm that aims at finding the shortest path from a state s to an end state send. It explores states s in increasing order of PastCost(s)+h(s). It is equivalent to a uniform cost search with edge costs Cost′(s,a) given by:** + +⟶ Algoritma ― A∗, s durumu ile send bitiş durumu arasındaki en kısa yolu bulmayı amaçlayan bir arama algoritmasıdır. Bahse konu algoritma PastCost(s)+h(s)'yi artan sıra ile araştırır. Aşağıda verilenler ışığında kenar maliyetlerini de içeren tek tip maliyet aramasına eşittir: + +
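A∗, durumları PastCost(s)+h(s) önceliğiyle keşfeden bir arama olarak, varsayımsal bir örnek üzerinde (sayı doğrusunda 0'dan 9'a, +1/−1 adımları ve h(s)=|9−s| tutarlı sezgiseliyle) şöyle taslak haline getirilebilir:

```python
import heapq

def a_star(succ, start, goal, h):
    # A*: durumlar PastCost(s) + h(s) artan sırayla keşfedilir
    frontier = [(h(start), 0.0, start)]
    explored = set()
    while frontier:
        _, past_cost, s = heapq.heappop(frontier)
        if s in explored:
            continue
        explored.add(s)
        if s == goal:
            return past_cost
        for s2, cost in succ(s):
            if s2 not in explored:
                g2 = past_cost + cost
                heapq.heappush(frontier, (g2 + h(s2), g2, s2))
    return None

# Varsayımsal örnek: 0'dan 9'a, +1/-1 adımları (maliyet 1); h(s) = |9 - s| tutarlıdır
best = a_star(lambda s: [(s + 1, 1.0), (s - 1, 1.0)], 0, 9, lambda s: abs(9 - s))
```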
+ + +**43. Remark: this algorithm can be seen as a biased version of UCS exploring states estimated to be closer to the end state.** + +⟶ Not: Bu algoritma, son duruma yakın olduğu tahmin edilen durumları araştıran tek tip maliyet aramasının taraflı bir sürümü olarak görülebilir. + +
+ + +**44. [Consistency ― A heuristic h is said to be consistent if it satisfies the two following properties:, For all states s and actions a, The end state verifies the following:]** + +⟶ [Tutarlılık ― Bir sezgisel h, aşağıdaki iki özelliği sağlaması durumunda tutarlıdır denilebilir:, Bütün s durumları ve a eylemleri için, bitiş durumu aşağıdakileri doğrular:] + +
+ + +**45. Correctness ― If h is consistent, then A∗ returns the minimum cost path.** + +⟶ Doğruluk ― Eğer h tutarlı ise o zaman A∗ algoritması asgari (minimum) maliyet yolunu döndürür. + +
+ + +**46. Admissibility ― A heuristic h is said to be admissible if we have:** + +⟶ Kabul edilebilirlik ― Bir sezgisel h kabul edilebilirdir eğer: + +
+ + +**47. Theorem ― Let h(s) be a given heuristic. We have:** + +⟶ Teorem ― h(s) sezgisel olsun ve: + +
+ + +**48. [consistent, admissible]** + +⟶ [tutarlı, kabul edilebilir] + +
+ + +**49. Efficiency ― A* explores all states s satisfying the following equation:** + +⟶ Verimlilik ― A* algoritması aşağıdaki eşitliği sağlayan bütün s durumlarını araştırır: + +
+ + +**50. Remark: larger values of h(s) is better as this equation shows it will restrict the set of states s going to be explored.** + +⟶ Not: h(s)'nin yüksek değerleri, bu eşitliğin araştırılacak olan s durum kümesini kısıtlayacak olması nedeniyle daha iyidir. + +
+ + +**51. Relaxation** + +⟶ Rahatlama + +
+ + +**52. It is a framework for producing consistent heuristics. The idea is to find closed-form reduced costs by removing constraints and use them as heuristics.** + +⟶ Bu tutarlı sezgisel için bir altyapıdır (framework). Buradaki fikir, kısıtlamaları kaldırarak kapalı şekilli (closed-form) düşük maliyetler bulmak ve bunları sezgisel olarak kullanmaktır. + +
+ + +**53. Relaxed search problem ― The relaxation of search problem P with costs Cost is noted Prel with costs Costrel, and satisfies the identity:** + +⟶ Rahat arama problemi (Relaxed search problem) ― Cost maliyetli bir arama probleminin rahatlaması, Costrel maliyetli Prel ile ifade edilir ve kimliği karşılar (satisfies the identity) : + +
+ + +**54. Relaxed heuristic ― Given a relaxed search problem Prel, we define the relaxed heuristic h(s)=FutureCostrel(s) as the minimum cost path from s to an end state in the graph of costs Costrel(s,a).** + +⟶ Rahat sezgisel (Relaxed heuristic) ― Bir Prel rahat arama problemi verildiğinde, h(s)=FutureCostrel(s) rahat sezgisel eşitliğini Costrel(s,a) maliyet çizgesindeki s durumu ile bir bitiş durumu arasındaki asgari(minimum) maliyet yolu olarak tanımlarız. + +
+ + +**55. Consistency of relaxed heuristics ― Let Prel be a given relaxed problem. By theorem, we have:** + +⟶ Rahat sezgisel tutarlılığı ― Prel bir rahat problem olarak verilmiş olsun. Teoreme göre: + +
+ + +**56. consistent** + +⟶ tutarlı + +
+ + +**57. [Tradeoff when choosing heuristic ― We have to balance two aspects in choosing a heuristic:, Computational efficiency: h(s)=FutureCostrel(s) must be easy to compute. It has to produce a closed form, easier search and independent subproblems., Good enough approximation: the heuristic h(s) should be close to FutureCost(s) and we have thus to not remove too many constraints.]** + +⟶ [Sezgisel seçiminde ödünleşim (tradeoff) ― Sezgisel seçiminde iki yönü dengelemeliyiz:, Hesaplamalı verimlilik: h(s)=FutureCostrel(s) eşitliği kolay hesaplanabilir olmalıdır. Kapalı bir şekil, daha kolay arama ve bağımsız alt problemler üretmesi gerekir., Yeterince iyi yaklaşım: sezgisel h(s), FutureCost(s) işlevine yakın olmalı ve bu nedenle çok fazla kısıtlamayı ortadan kaldırmamalıyız.] + +
+ + +**58. Max heuristic ― Let h1(s), h2(s) be two heuristics. We have the following property:** + +⟶ En yüksek sezgisel ― h1(s) ve h2(s) aşağıdaki özelliklere sahip iki adet sezgisel olsun: + +
+ + +**59. Markov decision processes** + +⟶ Markov karar süreçleri + +
+ + +**60. In this section, we assume that performing action a from state s can lead to several states s′1,s′2,... in a probabilistic manner. In order to find our way between an initial state and an end state, our objective will be to find the maximum value policy by using Markov decision processes that help us cope with randomness and uncertainty.** + +⟶ Bu bölümde, s durumunda a eyleminin gerçekleştirilmesinin olasılıksal olarak birden fazla durum,(s′1,s′2,...), ile sonuçlanacağını kabul ediyoruz. Başlangıç durumu ile bitiş durumu arasındaki yolu bulmak için amacımız, rastgelelilik ve belirsizlik ile başa çıkabilmek için yardımcı olan Markov karar süreçlerini kullanarak en yüksek değer politikasını bulmak olacaktır. + +
+ + +**61. Notations** + +⟶ Gösterimler + +
+ + +**62. [Definition ― The objective of a Markov decision process is to maximize rewards. It is defined with:, a starting state sstart, possible actions Actions(s) from state s, transition probabilities T(s,a,s′) from s to s′ with action a, rewards Reward(s,a,s′) from s to s′ with action a, whether an end state was reached IsEnd(s), a discount factor 0⩽γ⩽1]** + +⟶ [Tanım ― Markov karar sürecinin amacı ödülleri en yüksek seviyeye çıkarmaktır. Markov karar süreci aşağıdaki bileşenlerden oluşmaktadır:, başlangıç durumu sstart, s durumunda gerçekleştirilebilecek olası eylemler Actions(s), s durumunda a eyleminin gerçekleştirilmesi ile s′ durumuna geçiş olasılıkları T(s,a,s′), s durumunda a eyleminin gerçekleştirilmesi ile elde edilen ödüller Reward(s,a,s′), bitiş durumuna ulaşılıp ulaşılamadığı IsEnd(s), indirim faktörü 0⩽γ⩽1] + +
+ + +**63. Transition probabilities ― The transition probability T(s,a,s′) specifies the probability of going to state s′ after action a is taken in state s. Each s′↦T(s,a,s′) is a probability distribution, which means that:** + +⟶ Geçiş olasılıkları ― Geçiş olasılığı T(s,a,s′) s durumundayken gerçekleştirilen a eylemi neticesinde s′ durumuna gitme olasılığını belirtir. Her bir s′↦T(s,a,s′) aşağıda belirtildiği gibi bir olasılık dağılımıdır: + +
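Her s′↦T(s,a,s′) eşlemesinin bir olasılık dağılımı olduğu (değerler negatif değil ve toplam 1), varsayımsal bir geçiş tablosu üzerinde küçük bir taslakla denetlenebilir:

```python
# Varsayımsal geçiş olasılıkları: (durum, eylem) -> {sonraki durum: olasılık}
T = {
    ("s0", "git"): {"s1": 0.8, "s0": 0.2},
    ("s1", "git"): {"son": 1.0},
}

def is_valid_transition(T):
    # Her (s, a) için s' -> T(s,a,s') bir olasılık dağılımı olmalı:
    # değerler negatif değil ve toplamları 1
    return all(
        abs(sum(dist.values()) - 1.0) < 1e-9 and all(p >= 0 for p in dist.values())
        for dist in T.values()
    )
```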
+ + +**64. states** + +⟶ durumlar + +
+ + +**65. Policy ― A policy π is a function that maps each state s to an action a, i.e.** + +⟶ Politika ― Bir π politikası her s durumunu bir a eylemi ile ilişkilendiren bir işlevdir. + +
+ + +**66. Utility ― The utility of a path (s0,...,sk) is the discounted sum of the rewards on that path. In other words,** + +⟶ Fayda ― Bir (s0,...,sk) yolunun faydası, o yol üzerindeki ödüllerin indirimli toplamıdır. Diğer bir deyişle, + +
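İndirimli toplam, küçük bir taslakla şöyle hesaplanabilir (ödül değerleri varsayımsaldır):

```python
def utility(rewards, gamma):
    # Bir yolun faydası: u = r1 + gamma*r2 + gamma^2*r3 + ...
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

u = utility([4.0, 3.0, 2.0], gamma=0.5)  # 4 + 0.5*3 + 0.25*2 = 6.0
```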
+ + +**67. The figure above is an illustration of the case k=4.** + +⟶ Yukarıdaki şekil k=4 durumunun bir gösterimidir. + +
+ + +**68. Q-value ― The Q-value of a policy π at state s with action a, also noted Qπ(s,a), is the expected utility from state s after taking action a and then following policy π. It is defined as follows:** + +⟶ Q-değeri ― S durumunda gerçekleştirilen bir a eylemi için π politikasının Q-değeri, Qπ(s,a) olarak da gösterilir, a eylemini gerçekleştirip ve sonrasında π politikasını takiben s durumundan beklenen faydadır. Q-değeri aşağıdaki şekilde tanımlanmaktadır: + +
+ + +**69. Value of a policy ― The value of a policy π from state s, also noted Vπ(s), is the expected utility by following policy π from state s over random paths. It is defined as follows:** + +⟶ Bir politikanın değeri ― S durumundaki π politikasının değeri,Vπ(s) olarak da gösterilir, rastgele yollar üzerinde s durumundaki π politikasını izleyerek elde edilen beklenen faydadır. S durumundaki π politikasının değeri aşağıdaki gibi tanımlanır: + +
+ + +**70. Remark: Vπ(s) is equal to 0 if s is an end state.** + +⟶ Not: Eğer s bitiş durumu ise Vπ(s) sıfıra eşittir. + +
+ + +**71. Applications** + +⟶ Uygulamalar + +


**72. [Policy evaluation ― Given a policy π, policy evaluation is an iterative algorithm that aims at estimating Vπ. It is done as follows:, Initialization: for all states s, we have:, Iteration: for t from 1 to TPE, we have, with]**

⟶ [Politika değerlendirme ― Bir π politikası verildiğinde, politika değerlendirme, Vπ'yi tahmin etmeyi amaçlayan tekrarlı (iterative) bir algoritmadır. Politika değerlendirme aşağıdaki gibi yapılmaktadır:, İlklendirme: bütün s durumları için:, Tekrar: 1'den TPE'ye kadar her t için, ile]

<br>
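The iteration above can be sketched as a short Python loop; the tiny MDP (state names, transition table, reward) is an illustrative assumption.

```python
# Iterated policy evaluation for a fixed policy pi.
# T[s] lists, under pi(s), the outcomes (next_state, prob, reward).
def policy_evaluation(states, T, gamma=1.0, t_pe=100):
    V = {s: 0.0 for s in states}
    for _ in range(t_pe):
        V = {
            s: sum(p * (r + gamma * V[s2]) for s2, p, r in T.get(s, []))
            for s in states
        }
    return V

# 'a' deterministically steps to the end state with reward 2 under pi.
V = policy_evaluation(['a', 'end'], {'a': [('end', 1.0, 2.0)]})
```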
+ + +**73. Remark: by noting S the number of states, A the number of actions per state, S′ the number of successors and T the number of iterations, then the time complexity is of O(TPESS′).** + +⟶ Not: S durum sayısını, A her bir durum için eylem sayısını, S′ ardılların (successors) sayısını ve T yineleme sayısını gösterdiğinde, zaman karmaşıklığı O(TPESS′) olur. + +
+ + +**74. Optimal Q-value ― The optimal Q-value Qopt(s,a) of state s with action a is defined to be the maximum Q-value attained by any policy starting. It is computed as follows:** + +⟶ En iyi Q-değeri ― S durumunda a eylemi gerçekleştirildiğinde bu durumun en iyi Q-değeri,Qopt(s,a), herhangi bir politika başlangıcında elde edilen en yüksek Q-değeri olarak tanımlanmaktadır. En iyi Q-değeri aşağıdaki gibi hesaplanmaktadır: + +
+ + +**75. Optimal value ― The optimal value Vopt(s) of state s is defined as being the maximum value attained by any policy. It is computed as follows:** + +⟶ En iyi değer ― S durumunun en iyi değeri olan Vopt(s), herhangi bir politika ile elde edilen en yüksek değer olarak tanımlanmaktadır. En iyi değer aşağıdaki gibi hesaplanmaktadır: + +
+ + +**76. actions** + +⟶ eylemler + +
+ + +**77. Optimal policy ― The optimal policy πopt is defined as being the policy that leads to the optimal values. It is defined by:** + +⟶ En iyi politika ― En iyi politika olan πopt, en iyi değerlere götüren politika olarak tanımlanmaktadır. En iyi politika aşağıdaki gibi tanımlanmaktadır: + +


**78. [Value iteration ― Value iteration is an algorithm that finds the optimal value Vopt as well as the optimal policy πopt. It is done as follows:, Initialization: for all states s, we have:, Iteration: for t from 1 to TVI, we have:, with]**

⟶ [Değer yineleme (value iteration) ― Değer yineleme, en iyi değer Vopt'u ve en iyi politika πopt'u bulan bir algoritmadır. Değer yineleme aşağıdaki gibi yapılmaktadır:, İlklendirme: bütün s durumları için:, Tekrar: 1'den TVI'ya kadar her bir t için:, ile]

<br>
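A minimal Python sketch of value iteration follows; the transition table format and the two-state example are illustrative assumptions.

```python
# Value iteration on a small MDP.
# T[s][a] is a list of (next_state, prob, reward); gamma is the discount factor.
def value_iteration(states, T, gamma=0.9, iterations=50):
    V = {s: 0.0 for s in states}
    for _ in range(iterations):
        new_V = {}
        for s in states:
            actions = T.get(s, {})
            if not actions:          # end state: value stays 0
                new_V[s] = 0.0
                continue
            new_V[s] = max(
                sum(p * (r + gamma * V[s2]) for s2, p, r in outcomes)
                for outcomes in actions.values()
            )
        V = new_V
    return V

# From 'a', the action 'go' reaches the end state with reward 1.
T = {'a': {'go': [('end', 1.0, 1.0)]}}
V = value_iteration(['a', 'end'], T)
```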
+ + +**79. Remark: if we have either γ<1 or the MDP graph being acyclic, then the value iteration algorithm is guaranteed to converge to the correct answer.** + +⟶ Not: Eğer γ<1 ya da Markov karar süreci (Markov Decision Process - MDP) asiklik (çevrimsiz) olursa, o zaman değer tekrarı algoritmasının doğru cevaba yakınsayacağı garanti edilir. + +
+ + +**80. When unknown transitions and rewards** + +⟶ Bilinmeyen geçişler ve ödüller + +
+ + +**81. Now, let's assume that the transition probabilities and the rewards are unknown.** + +⟶ Şimdi, geçiş olasılıklarının ve ödüllerin bilinmediğini varsayalım. + +
+ + +**82. Model-based Monte Carlo ― The model-based Monte Carlo method aims at estimating T(s,a,s′) and Reward(s,a,s′) using Monte Carlo simulation with: ** + +⟶ Model-temelli Monte Carlo ― Model-temelli Monte Carlo yöntemi, T(s,a,s′) ve Reward(s,a,s′) işlevlerini Monte Carlo benzetimi kullanarak aşağıdaki formüllere uygun bir şekilde tahmin etmeyi amaçlar: + +
+ + +**83. [# times (s,a,s′) occurs, and]** + +⟶ [# kere (s,a,s′) gerçekleşme sayısı, ve] + +
+ + +**84. These estimations will be then used to deduce Q-values, including Qπ and Qopt.** + +⟶ Bu tahminler daha sonra Qπ ve Qopt'yi içeren Q-değerleri çıkarımı için kullanılacaktır. + +


**85. Remark: model-based Monte Carlo is said to be off-policy, because the estimation does not depend on the exact policy.**

⟶ Not: model-temelli Monte Carlo'nun politika dışı (off-policy) olduğu söylenir, çünkü tahmin kesin politikaya bağlı değildir.

<br>
+ + +**86. Model-free Monte Carlo ― The model-free Monte Carlo method aims at directly estimating Qπ, as follows:** + +⟶ Model içermeyen Monte Carlo ― Model içermeyen Monte Carlo yöntemi aşağıdaki şekilde doğrudan Qπ'yi tahmin etmeyi amaçlar: + +
+ + +**87. Qπ(s,a)=average of ut where st−1=s,at=a** + +⟶ Qπ(s,a)= ortalama ut , st−1=s ve at=a olduğunda + +
+ + +**88. where ut denotes the utility starting at step t of a given episode.** + +⟶ ut belirli bir bölümün t anında başlayan faydayı ifade etmektedir. + +


**89. Remark: model-free Monte Carlo is said to be on-policy, because the estimated value is dependent on the policy π used to generate the data.**

⟶ Not: model içermeyen Monte Carlo'nun politikaya dahil (on-policy) olduğu söylenir, çünkü tahmini değer, veriyi üretmek için kullanılan π politikasına bağlıdır.

<br>


**90. Equivalent formulation - By introducing the constant η=1/(1+#updates to (s,a)) and for each (s,a,u) of the training set, the update rule of model-free Monte Carlo has a convex combination formulation:**

⟶ Eşdeğer formülasyon - η=1/(1+(s,a) ikilisine yapılan güncelleme sayısı) sabitini tanımlayarak ve eğitim kümesinin her bir (s,a,u) üçlüsü için, model içermeyen Monte Carlo'nun güncelleme kuralı dışbükey bir kombinasyon formülasyonuna sahiptir:

<br>


**91. as well as a stochastic gradient formulation:**

⟶ aynı zamanda olasılıksal bayır (stochastic gradient) formülasyonuna da sahiptir:

<br>
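The convex-combination update can be sketched as follows; the state/action names and rewards are illustrative assumptions, and the step size η=1/(1+#previous updates) makes the estimate a running average of the observed utilities.

```python
# Model-free Monte Carlo convex-combination update.
def mc_update(Q, counts, s, a, u):
    n = counts.get((s, a), 0)            # updates done so far to (s,a)
    eta = 1.0 / (1 + n)
    Q[(s, a)] = (1 - eta) * Q.get((s, a), 0.0) + eta * u
    counts[(s, a)] = n + 1

Q, counts = {}, {}
mc_update(Q, counts, 's', 'a', 4.0)   # eta = 1   -> Q = 4.0
mc_update(Q, counts, 's', 'a', 8.0)   # eta = 1/2 -> Q = 6.0, the running average
```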
+ + +**92. SARSA ― State-action-reward-state-action (SARSA) is a boostrapping method estimating Qπ by using both raw data and estimates as part of the update rule. For each (s,a,r,s′,a′), we have:** + +⟶ SARSA ― Durum-eylem-ödül-durum-eylem (State-Action-Reward-State-Action - SARSA), hem ham verileri hem de güncelleme kuralının bir parçası olarak tahminleri kullanarak Qπ'yi tahmin eden bir destekleme yöntemidir. Her bir (s,a,r,s′,a′) için: + +
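A minimal sketch of the SARSA update; the state/action names, learning rate, and reward below are illustrative assumptions.

```python
# SARSA update (on-policy): bootstrap with the Q-value of the *actual*
# next action a', not the max over actions.
def sarsa_update(Q, s, a, r, s_next, a_next, eta=0.5, gamma=1.0):
    target = r + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = (1 - eta) * Q.get((s, a), 0.0) + eta * target

Q = {}
sarsa_update(Q, 'in', 'stay', 4, 'in', 'stay')
# Q[('in','stay')] = 0.5*0 + 0.5*(4 + 0) = 2.0
```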


**93. Remark: the SARSA estimate is updated on the fly as opposed to the model-free Monte Carlo one where the estimate can only be updated at the end of the episode.**

⟶ Not: SARSA tahmini, tahminin yalnızca bölüm sonunda güncellenebildiği model içermeyen Monte Carlo yönteminin aksine anında güncellenir.

<br>
+ + +**94. Q-learning ― Q-learning is an off-policy algorithm that produces an estimate for Qopt. On each (s,a,r,s′,a′), we have:** + +⟶ Q-öğrenme ― Q-öğrenme, Qopt için tahmin üreten politikaya dahil olmayan bir algoritmadır. Her bir (s,a,r,s′,a′) için: + +
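The Q-learning update differs from SARSA only in its target; a minimal sketch follows, with illustrative state/action names and parameter values.

```python
from collections import defaultdict

# Q-learning update (off-policy): the target bootstraps with the max over
# next actions, regardless of which action the behavior policy takes next.
def q_learning_update(Q, s, a, r, s_next, actions_next, eta=0.5, gamma=1.0):
    best_next = max(Q[(s_next, a2)] for a2 in actions_next) if actions_next else 0.0
    target = r + gamma * best_next
    Q[(s, a)] = (1 - eta) * Q[(s, a)] + eta * target

Q = defaultdict(float)
q_learning_update(Q, 'in', 'stay', 4, 'in', ['stay', 'quit'])
# Q[('in','stay')] = 0.5*0 + 0.5*4 = 2.0
```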
+ + +**95. Epsilon-greedy ― The epsilon-greedy policy is an algorithm that balances exploration with probability ϵ and exploitation with probability 1−ϵ. For a given state s, the policy πact is computed as follows:** + +⟶ Epsilon-açgözlü ― Epsilon-açgözlü politika, ϵ olasılıkla araştırmayı ve 1−ϵ olasılıkla sömürüyü dengeleyen bir algoritmadır. Her bir s durumu için, πact politikası aşağıdaki şekilde hesaplanır: + +
+ + +**96. [with probability, random from Actions(s)]** + +⟶ [olasılıkla, Actions(s) eylem kümesi içinden rastgele] + +
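The epsilon-greedy rule above can be sketched as a small helper; the Q-table contents are illustrative assumptions.

```python
import random

# Epsilon-greedy: explore with probability eps, otherwise exploit argmax Q.
def epsilon_greedy(Q, s, actions, eps=0.1, rng=random):
    if rng.random() < eps:
        return rng.choice(actions)                            # exploration
    return max(actions, key=lambda a: Q.get((s, a), 0.0))     # exploitation

Q = {('s0', 'left'): 1.0, ('s0', 'right'): 2.0}
a = epsilon_greedy(Q, 's0', ['left', 'right'], eps=0.0)
# With eps=0 the greedy action 'right' is always chosen
```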
+ + +**97. Game playing** + +⟶ Oyun oynama + +
+ + +**98. In games (e.g. chess, backgammon, Go), other agents are present and need to be taken into account when constructing our policy.** + +⟶ Oyunlarda (örneğin satranç, tavla, Go), başka oyuncular vardır ve politikamızı oluştururken göz önünde bulundurulması gerekir. + +
+ + +**99. Game tree ― A game tree is a tree that describes the possibilities of a game. In particular, each node is a decision point for a player and each root-to-leaf path is a possible outcome of the game.** + +⟶ Oyun ağacı ― Oyun ağacı, bir oyunun olasılıklarını tarif eden bir ağaçtır. Özellikle, her bir düğüm, oyuncu için bir karar noktasıdır ve her bir kökten (root) yaprağa (leaf) giden yol oyunun olası bir sonucudur. + +
+ + +**100. [Two-player zero-sum game ― It is a game where each state is fully observed and such that players take turns. It is defined with:, a starting state sstart, possible actions Actions(s) from state s, successors Succ(s,a) from states s with actions a, whether an end state was reached IsEnd(s), the agent's utility Utility(s) at end state s, the player Player(s) who controls state s]** + +⟶ [İki oyunculu sıfır toplamlı oyun ― Her durumun tamamen gözlendiği ve oyuncuların sırayla oynadığı bir oyundur. Aşağıdaki gibi tanımlanır:, bir başlangıç durumu sstart, s durumunda gerçekleştirilebilecek olası eylemler Actions(s), s durumunda a eylemi gerçekleştirildiğindeki ardıllar Succ(s,a), bir bitiş durumuna ulaşılıp ulaşılmadığı IsEnd(s), s bitiş durumunda etmenin elde ettiği fayda Utility(s), s durumunu kontrol eden oyuncu Player(s)] + +
+ + +**101. Remark: we will assume that the utility of the agent has the opposite sign of the one of the opponent.** + +⟶ Not: Oyuncu faydasının işaretinin, rakibinin faydasının tersi olacağını varsayacağız. + +
+ + +**102. [Types of policies ― There are two types of policies:, Deterministic policies, noted πp(s), which are actions that player p takes in state s., Stochastic policies, noted πp(s,a)∈[0,1], which are probabilities that player p takes action a in state s.]** + +⟶ [Politika türleri ― İki tane politika türü vardır:, πp(s) olarak gösterilen belirlenimci politikalar , p oyuncusunun s durumunda gerçekleştirdiği eylemler., πp(s,a)∈[0,1] olarak gösterilen olasılıksal politikalar, p oyuncusunun s durumunda a eylemini gerçekleştirme olasılıkları.] + +
+ + +**103. Expectimax ― For a given state s, the expectimax value Vexptmax(s) is the maximum expected utility of any agent policy when playing with respect to a fixed and known opponent policy πopp. It is computed as follows:** + +⟶ En yüksek beklenen değer(Expectimax) ― Belirli bir s durumu için, en yüksek beklenen değer olan Vexptmax(s), sabit ve bilinen bir rakip politikası olan πopp'a göre oynarken, bir oyuncu politikasının en yüksek beklenen faydasıdır. En yüksek beklenen değer(Expectimax) aşağıdaki gibi hesaplanmaktadır: + +
+ + +**104. Remark: expectimax is the analog of value iteration for MDPs.** + +⟶ Not: En yüksek beklenen değer(Expectimax), MDP'ler için değer yinelemenin analog halidir. + +
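To make the recursion concrete, here is an expectimax sketch on a tiny game tree; assuming, for illustration only, a uniform opponent policy at minimizer nodes.

```python
# Expectimax value: the agent maximizes; opponent nodes take the expectation
# under a fixed, known opponent policy (uniform here, as an assumption).
def expectimax(node, agent_turn=True):
    if not isinstance(node, list):       # leaf: agent's utility
        return node
    values = [expectimax(child, not agent_turn) for child in node]
    if agent_turn:
        return max(values)
    return sum(values) / len(values)     # expectation under a uniform opponent

tree = [[2, 4], [0, 10]]
v = expectimax(tree)
# Branch expectations are 3 and 5; the agent picks 5
```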
+ + +**105. Minimax ― The goal of minimax policies is to find an optimal policy against an adversary by assuming the worst case, i.e. that the opponent is doing everything to minimize the agent's utility. It is done as follows:** + +⟶ En küçük-en büyük (minimax) ― En küçük-enbüyük (minimax) politikaların amacı en kötü durumu kabul ederek, diğer bir deyişle; rakip, oyuncunun faydasını en aza indirmek için her şeyi yaparken, rakibe karşı en iyi politikayı bulmaktır. En küçük-en büyük(minimax) aşağıdaki şekilde yapılır: + +
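The minimax recursion can be sketched on a tiny game tree; the tree below is an illustrative assumption, with leaves holding the agent's utility.

```python
# Minimax value of a game tree; leaves are numbers, internal nodes are lists.
def minimax(node, maximizing=True):
    if not isinstance(node, list):       # leaf: agent's utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 example: the agent moves first, the opponent minimizes next.
tree = [[3, 5], [1, 9]]
v = minimax(tree)
# Opponent picks the min of each branch (3 and 1); the agent picks the max: 3
```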
+ + +**106. Remark: we can extract πmax and πmin from the minimax value Vminimax.** + +⟶ Not: πmax ve πmin değerleri, en küçük-en büyük olan Vminimax'dan elde edilebilir. + +
+ + +**107. Minimax properties ― By noting V the value function, there are 3 properties around minimax to have in mind:** + +⟶ En küçük-en büyük (minimax) özellikleri ― V değer fonksiyonunu ifade ederse, En küçük-en büyük (minimax) ile ilgili aklımızda bulundurmamız gereken 3 özellik vardır: + +
+ + +**108. Property 1: if the agent were to change its policy to any πagent, then the agent would be no better off.** + +⟶ Özellik 1: Oyuncu politikasını herhangi bir πagent ile değiştirecek olsaydı, o zaman oyuncu daha iyi olmazdı. + +
+ + +**109. Property 2: if the opponent changes its policy from πmin to πopp, then he will be no better off.** + +⟶ Özellik 2: Eğer rakip oyuncu politikasını πmin'den πopp'a değiştirecek olsaydı, o zaman rakip oyuncu daha iyi olamazdı. + +


**110. Property 3: if the opponent is known to be not playing the adversarial policy, then the minimax policy might not be optimal for the agent.**

⟶ Özellik 3: Eğer rakip oyuncunun muhalif (adversarial) politikayı oynamadığı biliniyorsa, o zaman en küçük-en büyük (minimax) politika oyuncu için en iyi (optimal) olmayabilir.

<br>
+ + +**111. In the end, we have the following relationship:** + +⟶ Sonunda, aşağıda belirtildiği gibi bir ilişkiye sahip oluruz: + +
+ + +**112. Speeding up minimax** + +⟶ En küçük-en büyük (minimax) hızlandırma + +
+ + +**113. Evaluation function ― An evaluation function is a domain-specific and approximate estimate of the value Vminimax(s). It is noted Eval(s).** + +⟶ Değerlendirme işlevi ― Değerlendirme işlevi, alana özgü (domain-specific) ve Vminimax(s) değerinin yaklaşık bir tahminidir. Eval(s) olarak ifade edilmektedir. + +
+ + +**114. Remark: FutureCost(s) is an analogy for search problems.** + +⟶ Not: FutureCost(s) arama problemleri için bir benzetmedir(analogy). + +


**115. Alpha-beta pruning ― Alpha-beta pruning is a domain-general exact method optimizing the minimax algorithm by avoiding the unnecessary exploration of parts of the game tree. To do so, each player keeps track of the best value they can hope for (stored in α for the maximizing player and in β for the minimizing player). At a given step, the condition β<α means that the optimal path is not going to be in the current branch as the earlier player had a better option at their disposal.**

⟶ Alfa-beta budama ― Alfa-beta budama, oyun ağacının bazı bölümlerinin gereksiz yere keşfedilmesini önleyerek en küçük-en büyük (minimax) algoritmasını en iyileyen, alandan bağımsız ve kesin (exact) bir yöntemdir. Bunu yapmak için, her oyuncu ümit edebileceği en iyi değeri takip eder (maksimize eden oyuncu için α'da ve minimize eden oyuncu için β'de saklanır). Belirli bir adımda, β<α koşulu, önceki oyuncunun emrinde daha iyi bir seçeneğe sahip olması nedeniyle en iyi (optimal) yolun mevcut dalda olamayacağı anlamına gelir.

<br>
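A sketch of the pruned recursion follows; on the illustrative tree below, pruning skips part of the second branch but returns the same minimax value.

```python
# Alpha-beta pruning: same value as plain minimax, but branches where
# beta <= alpha are cut off.
def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:
                break        # prune: the earlier player had a better option
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value

game = [[3, 5], [1, 9]]      # pruning skips leaf 9 but still returns 3
```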
+ + +**116. TD learning ― Temporal difference (TD) learning is used when we don't know the transitions/rewards. The value is based on exploration policy. To be able to use it, we need to know rules of the game Succ(s,a). For each (s,a,r,s′), the update is done as follows:** + +⟶ TD öğrenme ― Geçici fark (Temporal difference - TD) öğrenmesi, geçiş/ödülleri bilmediğimiz zaman kullanılır. Değer, keşif politikasına dayanır. Bunu kullanabilmek için, oyununun kurallarını,Succ (s, a), bilmemiz gerekir. Her bir (s,a,r,s′) için, güncelleme aşağıdaki şekilde yapılır: + +
+ + +**117. Simultaneous games** + +⟶ Eşzamanlı oyunlar + +
+ + +**118. This is the contrary of turn-based games, where there is no ordering on the player's moves.** + +⟶ Bu, oyuncunun hamlelerinin sıralı olmadığı sıra temelli oyunların tam tersidir. + +
+ + +**119. Single-move simultaneous game ― Let there be two players A and B, with given possible actions. We note V(a,b) to be A's utility if A chooses action a, B chooses action b. V is called the payoff matrix.** + +⟶ Tek-hamleli eşzamanlı oyun ― Olası hareketlere sahip A ve B iki oyuncu olsun. V(a,b), A'nın a eylemini ve B'nin de b eylemini seçtiği A'nın faydasını ifade eder. V, getiri dizeyi olarak adlandırılır. + +
+ + +**120. [Strategies ― There are two main types of strategies:, A pure strategy is a single action:, A mixed strategy is a probability distribution over actions:]** + +⟶ [Stratejiler ― İki tane ana strateji türü vardır:, Saf strateji, tek bir eylemdir:, Karışık strateji, eylemler üzerindeki bir olasılık dağılımıdır:] + +
+ + +**121. Game evaluation ― The value of the game V(πA,πB) when player A follows πA and player B follows πB is such that:** + +⟶ Oyun değerlendirme ― oyuncu A πA'yı ve oyuncu B de πB'yi izlediğinde, Oyun değeri V(πA,πB): + +
+ + +**122. Minimax theorem ― By noting πA,πB ranging over mixed strategies, for every simultaneous two-player zero-sum game with a finite number of actions, we have:** + +⟶ En küçük-en büyük (minimax) teoremi ― ΠA, πB’nin karma stratejilere göre değiştiğini belirterek, sonlu sayıda eylem ile eşzamanlı her iki oyunculu sıfır toplamlı oyun için: + +
+ + +**123. Non-zero-sum games** + +⟶ Sıfır toplamı olmayan oyunlar + +
+ + +**124. Payoff matrix ― We define Vp(πA,πB) to be the utility for player p.** + +⟶ Getiri matrisi ― Vp(πA,πB)'yi oyuncu p'nin faydası olarak tanımlıyoruz. + +


**125. Nash equilibrium ― A Nash equilibrium is (π∗A,π∗B) such that no player has an incentive to change its strategy. We have:**

⟶ Nash dengesi ― Nash dengesi, hiçbir oyuncunun stratejisini değiştirmeye teşvikinin olmadığı bir (π∗A,π∗B) ikilisidir:

<br>
+ + +**126. and** + +⟶ ve + +


**127. Remark: in any finite-player game with finite number of actions, there exists at least one Nash equilibrium.**

⟶ Not: sonlu sayıda eylem olan herhangi bir sonlu oyunculu oyunda, en azından bir tane Nash dengesi mevcuttur.

<br>
+ + +**128. [Tree search, Backtracking search, Breadth-first search, Depth-first search, Iterative deepening]** + +⟶ [Ağaç arama, Geri izleme araması, Genişlik öncelikli arama, Derinlik öncelikli arama, Tekrarlı (Iterative) derinleşme] + +
+ + +**129. [Graph search, Dynamic programming, Uniform cost search]** + +⟶ [Çizge arama, Dinamik programlama, Tek tip maliyet araması] + +
+ + +**130. [Learning costs, Structured perceptron]** + +⟶ [Öğrenme maliyetleri, Yapısal algılayıcı] + +
+ + +**131. [A star search, Heuristic function, Algorithm, Consistency, correctness, Admissibility, efficiency]** + +⟶ [A yıldız arama, Sezgisel işlev, Algoritma, Tutarlılık, doğruluk, kabul edilebilirlik, verimlilik] + +
+ + +**132. [Relaxation, Relaxed search problem, Relaxed heuristic, Max heuristic]** + +⟶ [Rahatlama, Rahat arama problemi, Rahat sezgisel, En yüksek sezgisel] + +
+ + +**133. [Markov decision processes, Overview, Policy evaluation, Value iteration, Transitions, rewards]** + +⟶ [Markov karar süreçleri, Genel bakış, Politika değerlendirme, Değer yineleme, Geçişler, ödüller] + +
+ + +**134. [Game playing, Expectimax, Minimax, Speeding up minimax, Simultaneous games, Non-zero-sum games]** + +⟶ [Oyun oynama, En yüksek beklenti, En küçük-en büyük, En küçük-en büyük hızlandırma, Eşzamanlı oyunlar, Sıfır toplamı olmayan oyunlar] + +
+ + +**135. View PDF version on GitHub** + +⟶ GitHub'da PDF sürümünü görüntüleyin + +
+ + +**136. Original authors** + +⟶ Asıl yazarlar + +
+ + +**137. Translated by X, Y and Z** + +⟶ X, Y ve Z tarafından tercüme edilmiştir. + +


**138. Reviewed by X, Y and Z**

⟶ X, Y ve Z tarafından gözden geçirilmiştir.

<br>
+ + +**139. By X and Y** + +⟶ X ve Y ile + +
+ + +**140. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶ Yapay Zeka el kitapları artık [hedef dilde] mevcuttur. diff --git a/tr/cs-221-variables-models.md b/tr/cs-221-variables-models.md new file mode 100644 index 000000000..aac242e96 --- /dev/null +++ b/tr/cs-221-variables-models.md @@ -0,0 +1,617 @@ +**Variables-based models translation** [[webpage]](https://stanford.edu/~shervine/teaching/cs-221/cheatsheet-variables-models) + +

**1. Variables-based models with CSP and Bayesian networks**

⟶ 1. CSP ve Bayesçi ağlar ile değişken-temelli modeller

<br>
+ + +**2. Constraint satisfaction problems** + +⟶ 2. Kısıt memnuniyet problemleri + +
+ + +**3. In this section, our objective is to find maximum weight assignments of variable-based models. One advantage compared to states-based models is that these algorithms are more convenient to encode problem-specific constraints.** + +⟶ 3. Bu bölümde hedefimiz değişken-temelli modellerin maksimum ağırlık seçimlerini bulmaktır. Durum temelli modellerle kıyaslandığında, bu algoritmaların probleme özgü kısıtları kodlamak için daha uygun olmaları bir avantajdır. + +
+ + +**4. Factor graphs** + +⟶ 4. Faktör grafikleri + +


**5. Definition ― A factor graph, also referred to as a Markov random field, is a set of variables X=(X1,...,Xn) where Xi∈Domaini and m factors f1,...,fm with each fj(X)⩾0.**

⟶ 5. Tanım - Markov rasgele alanı olarak da adlandırılan faktör grafiği, Xi∈Domaini olmak üzere X=(X1,...,Xn) değişkenler kümesi ile her biri fj(X)⩾0 olan f1,...,fm m faktöründen oluşur.

<br>
+ + +**6. Domain** + +⟶ 6. Etki Alanı (Domain) + +
+ + +**7. Scope and arity ― The scope of a factor fj is the set of variables it depends on. The size of this set is called the arity.** + +⟶ 7. Kapsam ve ilişki derecesi - Fj faktörünün kapsamı, dayandığı değişken kümesidir. Bu kümenin boyutuna ilişki derecesi (arity) denir. + +
+ + +**8. Remark: factors of arity 1 and 2 are called unary and binary respectively.** + +⟶ 8. Not: Faktörlerin ilişki derecesi 1 ve 2 olanlarına sırasıyla tek ve ikili denir. + +


**9. Assignment weight ― Each assignment x=(x1,...,xn) yields a weight Weight(x) defined as being the product of all factors fj applied to that assignment. Its expression is given by:**

⟶ 9. Atama ağırlığı - Her x=(x1,...,xn) ataması, o atamaya uygulanan tüm fj faktörlerinin çarpımı olarak tanımlanan bir Weight(x) ağırlığı verir. Şöyle ifade edilir:

<br>
+ + +**10. Constraint satisfaction problem ― A constraint satisfaction problem (CSP) is a factor graph where all factors are binary; we call them to be constraints:** + +⟶ 10. Kısıt memnuniyet problemi - Kısıtlama memnuniyet problemi (constraint satisfaction problem-CSP), tüm faktörlerin ikili olduğu bir faktör grafiğidir; bunları kısıt olarak adlandırıyoruz: + +


**11. Here, the constraint j with assignment x is said to be satisfied if and only if fj(x)=1.**

⟶ 11. Burada, x atamalı j kısıtının ancak ve ancak fj(x)=1 olduğunda sağlandığı (satisfied) söylenir.

<br>


**12. Consistent assignment ― An assignment x of a CSP is said to be consistent if and only if Weight(x)=1, i.e. all constraints are satisfied.**

⟶ 12. Tutarlı atama - Bir CSP'nin bir x atamasının, ancak ve ancak Weight(x)=1 olduğunda, yani tüm kısıtlar sağlandığında, tutarlı olduğu söylenir.

<br>
+ + +**13. Dynamic ordering** + +⟶ 13. Dinamik düzenleşim (Dynamic ordering) + +


**14. Dependent factors ― The set of dependent factors of variable Xi with partial assignment x is called D(x,Xi), and denotes the set of factors that link Xi to already assigned variables.**

⟶ 14. Bağımlı faktörler - Kısmi atama x ile Xi değişkeninin bağımlı faktörlerinin kümesi D(x,Xi) olarak adlandırılır ve Xi'yi önceden atanmış değişkenlere bağlayan faktörler kümesini belirtir.

<br>


**15. Backtracking search ― Backtracking search is an algorithm used to find maximum weight assignments of a factor graph. At each step, it chooses an unassigned variable and explores its values by recursion. Dynamic ordering (i.e. choice of variables and values) and lookahead (i.e. early elimination of inconsistent options) can be used to explore the graph more efficiently, although the worst-case runtime stays exponential: O(|Domain|n).**

⟶ 15. Geri izleme araması - Geri izleme araması, bir faktör grafiğinin maksimum ağırlık atamalarını bulmak için kullanılan bir algoritmadır. Her adımda, atanmamış bir değişken seçer ve değerlerini özyineleme ile araştırır. Dinamik düzenleşim (yani değişkenlerin ve değerlerin seçimi) ve ileriye bakış (lookahead, yani tutarsız seçeneklerin erken elenmesi), en kötü durum çalışma süresi üstel, O(|Domain|n), kalsa da grafiği daha verimli aramak için kullanılabilir.

<br>
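A minimal backtracking-search sketch for a CSP with binary constraints follows; the static variable ordering, the constraint encoding, and the 2-coloring example are illustrative assumptions (no dynamic ordering or lookahead here).

```python
# Backtracking search over a CSP.
# constraints maps (var, other) -> predicate(value, other_value) -> bool.
def backtrack(variables, domains, constraints, assignment=None):
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                 # every constraint was satisfied
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        consistent = all(
            f(value, assignment[other])
            for (v1, other), f in constraints.items()
            if v1 == var and other in assignment
        )
        if consistent:
            result = backtrack(variables, domains, constraints,
                               {**assignment, var: value})
            if result is not None:
                return result
    return None                           # no consistent assignment

# 2-coloring of a single edge X1-X2: neighbors must differ.
sol = backtrack(['X1', 'X2'], {'X1': ['R', 'B'], 'X2': ['R', 'B']},
                {('X2', 'X1'): lambda a, b: a != b})
```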


**16. [Forward checking ― It is a one-step lookahead heuristic that preemptively removes inconsistent values from the domains of neighboring variables. It has the following characteristics:, After assigning a variable Xi, it eliminates inconsistent values from the domains of all its neighbors., If any of these domains becomes empty, we stop the local backtracking search., If we un-assign a variable Xi, we have to restore the domain of its neighbors.]**

⟶ 16. [İleri kontrol - Tutarsız değerleri komşu değişkenlerin etki alanlarından öncelikli bir şekilde ortadan kaldıran tek adımlık ileriye bakış (lookahead) sezgiselidir. Aşağıdaki özelliklere sahiptir:, Bir Xi değişkenini atadıktan sonra, tüm komşularının etki alanlarından tutarsız değerleri eler., Bu etki alanlarından herhangi biri boş olursa, yerel geri izleme araması durdurulur., Bir Xi değişkeninin ataması geri alınırsa, komşularının etki alanları eski haline getirilmek zorundadır.]

<br>
+ + +**17. Most constrained variable ― It is a variable-level ordering heuristic that selects the next unassigned variable that has the fewest consistent values. This has the effect of making inconsistent assignments to fail earlier in the search, which enables more efficient pruning.** + +⟶ 17. En kısıtlı değişken - En az tutarlı değere sahip bir sonraki atanmamış değişkeni seçen, değişken seviyeli sezgisel düzenleşimdir. Bu, daha verimli budama olanağı sağlayan aramada daha önce başarısız olmak için tutarsız atamalar yapma etkisine sahiptir. + +


**18. Least constrained value ― It is a value-level ordering heuristic that assigns the next value that yields the highest number of consistent values of neighboring variables. Intuitively, this procedure chooses first the values that are most likely to work.**

⟶ 18. En az kısıtlayan değer - Komşu değişkenlerin en yüksek sayıda tutarlı değerini sağlayan bir sonraki değeri atayan, değer seviyeli sezgisel düzenleşimdir. Sezgisel olarak, bu prosedür önce çalışması en muhtemel olan değerleri seçer.

<br>
+ + +**19. Remark: in practice, this heuristic is useful when all factors are constraints.** + +⟶ 19. Not: Uygulamada, bu sezgisel yaklaşım tüm faktörler kısıtlı olduğunda kullanışlıdır. + +
+ + +**20. The example above is an illustration of the 3-color problem with backtracking search coupled with most constrained variable exploration and least constrained value heuristic, as well as forward checking at each step.** + +⟶ 20. Yukarıdaki örnek, en kısıtlı değişken keşfi ve sezgisel en düşük kısıtlı değerin yanı sıra, her adımda ileri kontrol ile birleştirilmiş geri izleme arama ile 3 renk probleminin bir gösterimidir. + +


**21. [Arc consistency ― We say that arc consistency of variable Xl with respect to Xk is enforced when for each xl∈Domainl:, unary factors of Xl are non-zero, there exists at least one xk∈Domaink such that any factor between Xl and Xk is non-zero.]**

⟶ 21. [Ark tutarlılığı (Arc consistency) - Xl değişkeninin Xk'ye göre ark tutarlılığının, her bir xl∈Domainl için şu koşullar sağlandığında uygulandığı söylenir:, Xl'in tekli (unary) faktörleri sıfırdan farklıdır, Xl ve Xk arasındaki herhangi bir faktörün sıfırdan farklı olduğu en az bir xk∈Domaink vardır.]

<br>
+ + +**22. AC-3 ― The AC-3 algorithm is a multi-step lookahead heuristic that applies forward checking to all relevant variables. After a given assignment, it performs forward checking and then successively enforces arc consistency with respect to the neighbors of variables for which the domain change during the process.** + +⟶ 22. AC-3 - AC-3 algoritması, tüm ilgili değişkenlere ileri kontrol uygulayan çok adımlı sezgisel bir bakış açısıdır. Belirli bir görevden sonra ileriye doğru kontrol yapar ve ardından işlem sırasında etki alanının değiştiği değişkenlerin komşularına göre ark tutarlılığını ardı ardına uygular. + +
+ + +**23. Remark: AC-3 can be implemented both iteratively and recursively.** + +⟶ 23. Not: AC-3, tekrarlı ve özyinelemeli olarak uygulanabilir. + +


**24. Approximate methods**

⟶ 24. Yaklaşık yöntemler (Approximate methods)

<br>


**25. Beam search ― Beam search is an approximate algorithm that extends partial assignments of n variables of branching factor b=|Domain| by exploring the K top paths at each step. The beam size K∈{1,...,bn} controls the tradeoff between efficiency and accuracy. This algorithm has a time complexity of O(n⋅Kblog(Kb)).**

⟶ 25. Işın araması (Beam search) - Işın araması, her adımda en iyi K yolu keşfederek, dallanma faktörü b=|Domain| olan n değişkenin kısmi atamalarını genişleten yaklaşık bir algoritmadır. Işın boyutu K∈{1,...,bn}, verimlilik ile doğruluk arasındaki ödünleşimi kontrol eder. Bu algoritmanın zaman karmaşıklığı O(n⋅Kblog(Kb))'dir.

<br>
+ + +**26. The example below illustrates a possible beam search of parameters K=2, b=3 and n=5.** + +⟶ 26. Aşağıdaki örnek, K = 2, b = 3 ve n = 5 parametreleri ile muhtemel ışın aramasını (beam search) göstermektedir. + +
+ + +**27. Remark: K=1 corresponds to greedy search whereas K→+∞ is equivalent to BFS tree search.** + +⟶ 27. Not: K = 1 açgözlü aramaya (greedy search) karşılık gelirken K → + ∞, BFS ağaç aramasına eşdeğerdir. + +
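A minimal beam-search sketch follows; the per-value weight table and the product-of-factors score are illustrative assumptions.

```python
import heapq

# Beam search: keep the K best partial assignments at each step.
# score(partial) returns the weight of a partial assignment (higher is better).
def beam_search(variables, domains, score, K=2):
    beam = [{}]
    for var in variables:
        candidates = [{**p, var: v} for p in beam for v in domains[var]]
        beam = heapq.nlargest(K, candidates, key=score)
    return beam[0] if beam else None

# Toy weight: product of per-value unary factors.
weights = {'x1': {'a': 1, 'b': 3}, 'x2': {'a': 2, 'b': 1}}
def score(p):
    s = 1
    for var, v in p.items():
        s *= weights[var][v]
    return s

best = beam_search(['x1', 'x2'], {'x1': ['a', 'b'], 'x2': ['a', 'b']}, score)
# best assignment has weight 3*2 = 6
```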


**28. Iterated conditional modes ― Iterated conditional modes (ICM) is an iterative approximate algorithm that modifies the assignment of a factor graph one variable at a time until convergence. At step i, we assign to Xi the value v that maximizes the product of all factors connected to that variable.**

⟶ 28. Tekrarlanmış koşullu modlar - Tekrarlanmış koşullu modlar (Iterated conditional modes - ICM), yakınsamaya kadar bir faktör grafiğinin atamasını bir seferde bir değişken olacak şekilde değiştiren tekrarlı yaklaşık bir algoritmadır. İ adımında, Xi'ye, bu değişkene bağlı tüm faktörlerin çarpımını maksimize eden v değeri atanır.

<br>
+ + +**29. Remark: ICM may get stuck in local minima.** + +⟶ 29. Not: ICM yerel minimumda takılıp kalabilir. + +
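The coordinate-wise maximization can be sketched as below; the factor forcing agreement between the two variables is an illustrative assumption.

```python
# Iterated conditional modes (ICM): repeatedly set each variable to the value
# maximizing the product of the factors, until a full sweep changes nothing.
def icm(assignment, domains, factors, max_sweeps=100):
    for _ in range(max_sweeps):
        changed = False
        for var, domain in domains.items():
            def local_weight(u):
                trial = {**assignment, var: u}
                w = 1.0
                for f in factors:
                    w *= f(trial)
                return w
            best = max(domain, key=local_weight)
            if best != assignment[var]:
                assignment[var] = best
                changed = True
        if not changed:
            break
    return assignment

# One factor rewarding x1 == x2: ICM aligns x2 with x1.
agree = lambda x: 2.0 if x['x1'] == x['x2'] else 1.0
result = icm({'x1': 1, 'x2': 0}, {'x1': [1], 'x2': [0, 1]}, [agree])
```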


**30. [Gibbs sampling ― Gibbs sampling is an iterative approximate method that modifies the assignment of a factor graph one variable at a time until convergence. At step i:, we assign to each element u∈Domaini a weight w(u) that is the product of all factors connected to that variable, we sample v from the probability distribution induced by w and assign it to Xi.]**

⟶ 30. [Gibbs örneklemesi - Gibbs örneklemesi, yakınsamaya kadar bir faktör grafiğinin atamasını bir seferde bir değişken olacak şekilde değiştiren tekrarlı yaklaşık bir yöntemdir. İ adımında:, her bir u∈Domaini öğesine, bu değişkene bağlı tüm faktörlerin çarpımı olan bir w(u) ağırlığı atanır, v, w'nin indüklediği olasılık dağılımından örneklenir ve Xi'ye atanır.]

<br>
+ + +**31. Remark: Gibbs sampling can be seen as the probabilistic counterpart of ICM. It has the advantage to be able to escape local minima in most cases.** + +⟶ 31. Not: Gibbs örneklemesi, ICM'nin olasılıksal karşılığı olarak görülebilir. Çoğu durumda yerel minimumlardan kaçabilme avantajına sahiptir. + +
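A single Gibbs step can be sketched as follows; the factor that forces x1=1 is an illustrative assumption chosen so the sampled outcome is deterministic.

```python
import random

# One Gibbs sampling step: resample one variable from the distribution
# induced by the product of the factors connected to it.
def gibbs_step(assignment, var, domain, factors, rng=random):
    weights = []
    for u in domain:
        trial = {**assignment, var: u}
        w = 1.0
        for f in factors:
            w *= f(trial)
        weights.append(w)
    r = rng.random() * sum(weights)
    acc = 0.0
    for u, w in zip(domain, weights):
        acc += w
        if r < acc:
            return {**assignment, var: u}
    return {**assignment, var: domain[-1]}

# A single factor forcing x1 = 1: the step always moves x1 to 1.
force_one = lambda x: 1.0 if x['x1'] == 1 else 0.0
new_assignment = gibbs_step({'x1': 0}, 'x1', [0, 1], [force_one])
```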
+ + +**32. Factor graph transformations** + +⟶ 32. Faktör grafiği dönüşümleri + +
+ + +**33. Independence ― Let A,B be a partitioning of the variables X. We say that A and B are independent if there are no edges between A and B and we write:** + +⟶ 33. Bağımsızlık - A, B, X değişkenlerinin bir bölümü olsun. A ve B arasında kenar yoksa A ve B'nin bağımsız olduğu söylenir ve şöyle ifade edilir: + +
+ + +**34. Remark: independence is the key property that allows us to solve subproblems in parallel.** + +⟶ 34. Not: bağımsızlık, alt sorunları paralel olarak çözmemize olanak sağlayan bir kilit özelliktir. + +
+ + +**35. Conditional independence ― We say that A and B are conditionally independent given C if conditioning on C produces a graph in which A and B are independent. In this case, it is written:** + +⟶ 35. Koşullu bağımsızlık - Eğer C'nin şartlandırılması, A ve B'nin bağımsız olduğu bir grafik üretiyorsa A ve B verilen C koşulundan bağımsızdır. Bu durumda şöyle yazılır: + +
+ + +**36. [Conditioning ― Conditioning is a transformation aiming at making variables independent that breaks up a factor graph into smaller pieces that can be solved in parallel and can use backtracking. In order to condition on a variable Xi=v, we do as follows:, Consider all factors f1,...,fk that depend on Xi, Remove Xi and f1,...,fk, Add gj(x) for j∈{1,...,k} defined as:]** + +⟶ 36. [Koşullandırma - Koşullandırma, bir faktör grafiğini paralel olarak çözülebilen ve geriye doğru izlemeyi kullanabilen daha küçük parçalara bölen değişkenleri bağımsız kılmayı amaçlayan bir dönüşümdür. Xi = v değişkeninde koşullandırmak için aşağıdakileri yaparız: Xi'ye bağlı tüm f1, ..., fk faktörlerini göz önünde bulundurun, Xi ve f1, ..., fk öğelerini kaldırın, j∈ {1, ..., k} için gj (x) ekleyin:] + +
+ + +**37. Markov blanket ― Let A⊆X be a subset of variables. We define MarkovBlanket(A) to be the neighbors of A that are not in A.** + +⟶ 37. Markov blanket - A⊆X değişkenlerin bir alt kümesi olsun. MarkovBlanket'i (A), A'da olmayan A'nın komşuları olarak tanımlıyoruz. + +
+
+
+**38. Proposition ― Let C=MarkovBlanket(A) and B=X∖(A∪C). Then we have:**
+
+⟶ 38. Önerme - C=MarkovBlanket(A) ve B=X∖(A∪C) olsun. Bu durumda:
+
+<br>
+
+
+**39. [Elimination ― Elimination is a factor graph transformation that removes Xi from the graph and solves a small subproblem conditioned on its Markov blanket as follows:, Consider all factors fi,1,...,fi,k that depend on Xi, Remove Xi
+and fi,1,...,fi,k, Add fnew,i(x) defined as:]**
+
+⟶ 39. [Eliminasyon - Eliminasyon, Xi'yi grafikten kaldıran ve Markov blanket'i üzerinde koşullandırılmış küçük bir alt sorunu şu şekilde çözen bir faktör grafiği dönüşümüdür:, Xi'ye bağlı tüm fi,1,...,fi,k faktörlerini göz önünde bulundurun, Xi ve fi,1,...,fi,k'yı kaldırın, şöyle tanımlanan fnew,i(x)'i ekleyin:]
+
+<br>
+ + +**40. Treewidth ― The treewidth of a factor graph is the maximum arity of any factor created by variable elimination with the best variable ordering. In other words,** + +⟶ 40. Ağaç genişliği (Treewidth) - Bir faktör grafiğinin ağaç genişliği, değişken elemeli en iyi değişken sıralamasıyla oluşturulan herhangi bir faktörün maksimum ilişki derecesidir. Diğer bir deyişle, + +
+ + +**41. The example below illustrates the case of a factor graph of treewidth 3.** + +⟶ 41. Aşağıdaki örnek, ağaç genişliği 3 olan faktör grafiğini gösterir. + +
+ + +**42. Remark: finding the best variable ordering is a NP-hard problem.** + +⟶ 42. Not: en iyi değişken sıralamasını bulmak NP-zor (NP-hard) bir problemdir. + +
+ + +**43. Bayesian networks** + +⟶ 43. Bayesçi ağlar + +
+
+
+**44. In this section, our goal will be to compute conditional probabilities. What is the probability of a query given evidence?**
+
+⟶ 44. Bu bölümdeki amacımız koşullu olasılıkları hesaplamak olacaktır. Kanıt verildiğinde bir sorgunun olasılığı nedir?
+
+<br>
+ + +**45. Introduction** + +⟶ 45. Giriş + +
+
+
+**46. Explaining away ― Suppose causes C1 and C2 influence an effect E. Conditioning on the effect E and on one of the causes (say C1) changes the probability of the other cause (say C2). In this case, we say that C1 has explained away C2.**
+
+⟶ 46. Açıklama (Explaining away) - C1 ve C2 sebeplerinin bir E etkisini etkilediğini varsayalım. E etkisi ve sebeplerden biri (örneğin C1) üzerinde koşullandırmak, diğer sebebin (örneğin C2) olasılığını değiştirir. Bu durumda, C1'in C2'yi açıkladığı söylenir.
+
+<br>
+
+
+**47. Directed acyclic graph ― A directed acyclic graph (DAG) is a finite directed graph with no directed cycles.**
+
+⟶ 47. Yönlü çevrimsiz çizge - Yönlü çevrimsiz bir çizge (Directed acyclic graph-DAG), yönlendirilmiş çevrimleri olmayan sonlu bir yönlü çizgedir.
+
+<br>
+
+
+**48. Bayesian network ― A Bayesian network is a directed acyclic graph (DAG) that specifies a joint distribution over random variables X=(X1,...,Xn) as a product of local conditional distributions, one for each node:**
+
+⟶ 48. Bayesçi ağ - Her düğüm için bir tane olmak üzere, yerel koşullu dağılımların bir çarpımı olarak, X=(X1,...,Xn) rasgele değişkenleri üzerindeki bir ortak dağılımı belirten yönlü bir çevrimsiz çizgedir:
+
+<br>
+ + +**49. Remark: Bayesian networks are factor graphs imbued with the language of probability.** + +⟶ 49. Not: Bayesçi ağlar olasılık diliyle bütünleşik faktör grafikleridir. + +
+
+
+**50. Locally normalized ― For each xParents(i), all factors are local conditional distributions. Hence they have to satisfy:**
+
+⟶ 50. Yerel olarak normalleştirilmiş - Her xParents(i) için tüm faktörler yerel koşullu dağılımlardır. Bu nedenle şu koşulu sağlamak zorundadırlar:
+
+<br>
+
+
+**51. As a result, sub-Bayesian networks and conditional distributions are consistent.**
+
+⟶ 51. Sonuç olarak, alt-Bayesçi ağlar ve koşullu dağılımlar tutarlıdır.
+
+<br>
+ + +**52. Remark: local conditional distributions are the true conditional distributions.** + +⟶ 52. Not: Yerel koşullu dağılımlar gerçek koşullu dağılımlardır. + +
+ + +**53. Marginalization ― The marginalization of a leaf node yields a Bayesian network without that node.** + +⟶ 53. Marjinalleşme - Bir yaprak düğümünün marjinalleşmesi, o düğüm olmaksızın bir Bayesçi ağı sağlar. + +
+ + +**54. Probabilistic programs** + +⟶ 54. Olasılık programları + +
+ + +**55. Concept ― A probabilistic program randomizes variables assignment. That way, we can write down complex Bayesian networks that generate assignments without us having to explicitly specify associated probabilities.** + +⟶ 55. Konsept - Olasılıklı bir program değişkenlerin atanmasını randomize eder. Bu şekilde, ilişkili olasılıkları açıkça belirtmek zorunda kalmadan atamalar üreten karmaşık Bayesçi ağlar yazılabilir. + +
+
+
+**56. Remark: examples of probabilistic programs include Hidden Markov model (HMM), factorial HMM, naive Bayes, latent Dirichlet allocation, diseases and symptoms and stochastic block models.**
+
+⟶ 56. Not: Olasılık programlarına örnekler arasında Gizli Markov modeli (Hidden Markov model-HMM), faktöriyel HMM, naif Bayes (naive Bayes), gizli Dirichlet tahsisi (latent Dirichlet allocation), hastalıklar ve semptomlar ile stokastik blok modelleri bulunmaktadır.
+
+<br>
+ + +**57. Summary ― The table below summarizes the common probabilistic programs as well as their applications:** + +⟶ 57. Özet - Aşağıdaki tablo, ortak olasılıklı programları ve bunların uygulamalarını özetlemektedir: + +
+ + +**58. [Program, Algorithm, Illustration, Example]** + +⟶ 58. [Program, Algoritma, Gösterim, Örnek] + +
+ + +**59. [Markov Model, Hidden Markov Model (HMM), Factorial HMM, Naive Bayes, Latent Dirichlet Allocation (LDA)]** + +⟶ 59. [Markov Modeli, Gizli Markov Modeli (HMM), Faktöriyel HMM, Naif Bayes, Gizli Dirichlet Tahsisi (Latent Dirichlet Allocation-LDA)] + +
+ + +**60. [Generate, distribution]** + +⟶ 60. [Üretim, Dağılım] + +
+ + +**61. [Language modeling, Object tracking, Multiple object tracking, Document classification, Topic modeling]** + +⟶ 61. [Dil modelleme, Nesne izleme, Çoklu nesne izleme, Belge sınıflandırma, Konu modelleme] + +
+ + +**62. Inference** + +⟶ 62. Çıkarım + +
+
+
+**63. [General probabilistic inference strategy ― The strategy to compute the probability P(Q|E=e) of query Q given evidence E=e is as follows:, Step 1: Remove variables that are not ancestors of the query Q or the evidence E by marginalization, Step 2: Convert Bayesian network to factor graph, Step 3: Condition on the evidence E=e, Step 4: Remove nodes disconnected from the query Q by marginalization, Step 5: Run a probabilistic inference algorithm (manual, variable elimination, Gibbs sampling, particle filtering)]**
+
+⟶ 63. [Genel olasılıksal çıkarım stratejisi - E=e kanıtı verilen Q sorgusunun P(Q|E=e) olasılığını hesaplama stratejisi aşağıdaki gibidir:, Adım 1: Q sorgusunun veya E kanıtının atası olmayan değişkenleri marjinalleştirme yoluyla kaldırın, Adım 2: Bayesçi ağı faktör grafiğine dönüştürün, Adım 3: E=e kanıtı üzerinde koşullandırın, Adım 4: Q sorgusuyla bağlantısı kesilen düğümleri marjinalleştirme yoluyla kaldırın, Adım 5: Olasılıksal bir çıkarım algoritması çalıştırın (elle, değişken eleme, Gibbs örneklemesi, parçacık filtreleme)]
+
+<br>
+ + +**64. Forward-backward algorithm ― This algorithm computes the exact value of P(H=hk|E=e) (smoothing query) for any k∈{1,...,L} in the case of an HMM of size L. To do so, we proceed in 3 steps:** + +⟶ 64. İleri-geri algoritma - Bu algoritma, L boyutunda bir HMM durumunda herhangi bir k∈ {1, ..., L} için P (H = hk | E = e) (düzeltme sorgusu) değerini hesaplar. Bunu yapmak için 3 adımda ilerlenir: + +
+ + +**65. Step 1: for ..., compute ...** + +⟶ 65. Adım 1: ... için (for), hesapla ... + +
+ + +**66. with the convention F0=BL+1=1. From this procedure and these notations, we get that** + +⟶ 66. F0 = BL + 1 = 1 kuralı ile. Bu prosedürden ve bu notasyonlardan anlıyoruz ki + +
+
+
+**67. Remark: this algorithm interprets each assignment to be a path where each edge hi−1→hi is of weight p(hi|hi−1)p(ei|hi).**
+
+⟶ 67. Not: bu algoritma her bir atamayı, her hi−1→hi kenarının ağırlığı p(hi|hi−1)p(ei|hi) olan bir yol olarak yorumlar.
+
+<br>
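To make the forward-backward recursion above concrete, here is a minimal Python sketch of smoothing on a 2-state HMM; the transition table, emission table, and initial distribution are illustrative assumptions, not values from the cheatsheet.

```python
# A minimal smoothing sketch for a 2-state HMM (assumed toy parameters).
T = [[0.7, 0.3], [0.4, 0.6]]    # p(h_t = j | h_{t-1} = i)
E = [[0.9, 0.1], [0.2, 0.8]]    # p(e_t = o | h_t = i)
pi = [0.5, 0.5]                 # initial distribution over hidden states

def forward_backward(obs):
    L, n = len(obs), 2
    # Forward pass: F[t][i] is proportional to p(e_1..e_t, h_t = i)
    F = [[pi[i] * E[i][obs[0]] for i in range(n)]]
    for t in range(1, L):
        F.append([sum(F[t - 1][j] * T[j][i] for j in range(n)) * E[i][obs[t]]
                  for i in range(n)])
    # Backward pass: B[t][i] = p(e_{t+1}..e_L | h_t = i), with B[L-1] = 1
    B = [[1.0] * n for _ in range(L)]
    for t in range(L - 2, -1, -1):
        B[t] = [sum(T[i][j] * E[j][obs[t + 1]] * B[t + 1][j] for j in range(n))
                for i in range(n)]
    # Smoothing: P(H_k = i | E = e) is proportional to F[k][i] * B[k][i]
    post = []
    for t in range(L):
        s = [F[t][i] * B[t][i] for i in range(n)]
        z = sum(s)
        post.append([v / z for v in s])
    return post

posterior = forward_backward([0, 1, 0])  # smoothing query for each position k
```

Each row of `posterior` is a normalized distribution over the hidden state at that time step.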
+
+
+**68. [Gibbs sampling ― This algorithm is an iterative approximate method that uses a small set of assignments (particles) to represent a large probability distribution. From a random assignment x, Gibbs sampling performs the following steps for i∈{1,...,n} until convergence:, For all u∈Domaini, compute the weight w(u) of assignment x where Xi=u, Sample v from the probability distribution induced by w: v∼P(Xi=v|X−i=x−i), Set Xi=v]**
+
+⟶ 68. [Gibbs örneklemesi - Bu algoritma, büyük bir olasılık dağılımını temsil etmek için küçük bir atama (parçacık) kümesi kullanan tekrarlı bir yaklaşık yöntemdir. Rastgele bir x atamasından başlayarak, Gibbs örneklemesi yakınsamaya kadar i∈{1,...,n} için aşağıdaki adımları uygular:, Tüm u∈Domaini için, Xi=u olacak şekilde x atamasının w(u) ağırlığını hesaplayın, w ile indüklenen olasılık dağılımından v'yi örnekleyin: v∼P(Xi=v|X−i=x−i), Xi=v olarak ayarlayın]
+
+<br>
+ + +**69. Remark: X−i denotes X∖{Xi} and x−i represents the corresponding assignment.** + +⟶ 69. Not: X − i, X ∖ {Xi} ve x − i, karşılık gelen atamayı temsil eder. + +
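As a hedged illustration of the Gibbs steps above, the sketch below runs the sampler on a toy chain of 3 binary variables with a single assumed pairwise factor that favours agreement between neighbours (the factor values and iteration count are made up for illustration).

```python
# Gibbs sampling sketch on an assumed toy factor graph (chain of 3 binary vars).
import random

random.seed(0)
n = 3

def factor(a, b):
    # Assumed pairwise potential: neighbours that agree get a higher weight.
    return 2.0 if a == b else 1.0

def weight(x):
    # Weight of a full assignment x = product of all factors.
    w = 1.0
    for i in range(n - 1):
        w *= factor(x[i], x[i + 1])
    return w

x = [random.randint(0, 1) for _ in range(n)]  # random initial assignment
for _ in range(100):                          # iterate toward convergence
    for i in range(n):
        ws = []
        for u in (0, 1):                      # weight w(u) of x with X_i = u
            x[i] = u
            ws.append(weight(x))
        z = sum(ws)
        # Sample v from the distribution induced by w and set X_i = v
        x[i] = 0 if random.random() < ws[0] / z else 1
```

After the loop, `x` is one (approximate) sample from the distribution defined by the factors.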
+
+
+**70. [Particle filtering ― This algorithm approximates the posterior density of state variables given the evidence of observation variables by keeping track of K particles at a time. Starting from a set of particles C of size K, we run the following 3 steps iteratively:, Step 1: proposal - For each old particle xt−1∈C, sample x from the transition probability distribution p(x|xt−1) and add x to a set C′., Step 2: weighting - Weigh each x of the set C′ by w(x)=p(et|x), where et is the evidence observed at time t., Step 3: resampling - Sample K elements from the set C′ using the probability distribution induced by w and store them in C: these are the current particles xt.]**
+
+⟶ 70. [Parçacık filtreleme - Bu algoritma, aynı anda K parçacığı takip ederek, gözlem değişkenlerinin kanıtı verildiğinde durum değişkenlerinin sonsal yoğunluğuna yaklaşır. K boyutunda bir C parçacık kümesinden başlayarak, aşağıdaki 3 adım tekrarlı olarak çalıştırılır:, Adım 1: teklif - Her eski parçacık xt−1∈C için, geçiş olasılığı dağılımı p(x|xt−1)'den bir x örnekleyin ve x'i C′ kümesine ekleyin., Adım 2: ağırlıklandırma - C′ kümesindeki her x'i w(x)=p(et|x) ile ağırlıklandırın; burada et, t zamanında gözlemlenen kanıttır., Adım 3: yeniden örnekleme - w ile indüklenen olasılık dağılımını kullanarak C′ kümesinden K eleman örnekleyin ve bunları C'de saklayın: bunlar şu anki xt parçacıklarıdır.]
+
+<br>
+
+
+**71. Remark: a more expensive version of this algorithm also keeps track of past particles in the proposal step.**
+
+⟶ 71. Not: Bu algoritmanın daha pahalı bir versiyonu, teklif adımında geçmiş parçacıkların da kaydını tutar.
+
+<br>
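A minimal sketch of the proposal/weighting/resampling loop above, under assumed Gaussian transition and evidence models; the 1-D state, noise scale, observations, and K are all made up for illustration.

```python
# Particle filtering sketch with an assumed 1-D Gaussian state-space model.
import math
import random

random.seed(0)
K = 100  # number of particles tracked at a time

def transition(x):
    # Step 1 (proposal): sample from p(x_t | x_{t-1}), assumed random walk.
    return x + random.gauss(0.0, 1.0)

def evidence_weight(x, e):
    # Step 2 (weighting): w(x) = p(e_t | x), assumed Gaussian likelihood.
    return math.exp(-0.5 * (e - x) ** 2)

particles = [0.0] * K
for e in [0.5, 1.0, 1.5]:  # assumed evidence observed over time
    proposed = [transition(x) for x in particles]                 # step 1
    weights = [evidence_weight(x, e) for x in proposed]           # step 2
    particles = random.choices(proposed, weights=weights, k=K)    # step 3
```

After each observation, `particles` approximates the posterior over the current state.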
+ + +**72. Maximum likelihood ― If we don't know the local conditional distributions, we can learn them using maximum likelihood.** + +⟶ 72. Maksimum olabilirlik - Yerel koşullu dağılımları bilmiyorsak, maksimum olasılık kullanarak bunları öğrenebiliriz. + +
+ + +**73. Laplace smoothing ― For each distribution d and partial assignment (xParents(i),xi), add λ to countd(xParents(i),xi), then normalize to get probability estimates.** + +⟶ 73. Laplace yumuşatma - Her d dağılımı ve (xParents (i), xi) kısmi ataması için, countd(xParents (i), xi)'a λ ekleyin, ardından olasılık tahminlerini almak için normalleştirin. + +
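The add-λ recipe above can be sketched as follows; the counts, the variable domain, and λ=1 are illustrative assumptions.

```python
# Laplace smoothing sketch: add λ to each count, then normalize.
from collections import Counter

def laplace_estimate(counts, domain, lam=1.0):
    """Return smoothed probability estimates over `domain`."""
    total = sum(counts.get(x, 0) + lam for x in domain)
    return {x: (counts.get(x, 0) + lam) / total for x in domain}

# Assumed counts count_d(xParents(i), xi) for one parent configuration.
counts = Counter({"rain": 3, "sun": 1})
probs = laplace_estimate(counts, domain=["rain", "sun", "snow"], lam=1.0)
# "snow" was never observed but still receives non-zero probability mass.
```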
+
+
+**74. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:**
+
+⟶ 74. Algoritma - Beklenti-Maksimizasyon (EM) algoritması, olabilirlik üzerinde tekrar tekrar bir alt sınır oluşturarak (E-adımı) ve bu alt sınırı optimize ederek (M-adımı), θ parametresini maksimum olabilirlik tahmini yoluyla tahmin etmek için aşağıdaki gibi etkili bir yöntem sunar:
+
+<br>
+
+
+**75. [E-step: Evaluate the posterior probability q(h) that each data point e came from a particular cluster h as follows:, M-step: Use the posterior probabilities q(h) as cluster specific weights on data points e to determine θ through maximum likelihood.]**
+
+⟶ 75. [E-adımı: Her bir e veri noktasının belirli bir h kümesinden gelme sonsal olasılığı q(h)'yi şu şekilde değerlendirin:, M-adımı: θ'yı maksimum olabilirlik yoluyla belirlemek için, sonsal olasılıklar q(h)'yi e veri noktaları üzerinde kümeye özgü ağırlıklar olarak kullanın.]
+
+<br>
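A compact sketch of the E-step/M-step loop above for a 1-D mixture of two Gaussians with fixed unit variance; the data points and initial parameters are illustrative assumptions.

```python
# EM sketch for a 1-D two-Gaussian mixture (unit variance, assumed data).
import math

data = [0.1, 0.3, -0.2, 4.0, 4.2, 3.8]
mu = [0.0, 1.0]    # initial cluster means (part of θ)
phi = [0.5, 0.5]   # initial mixture weights (part of θ)

def gauss(x, m):
    return math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)

for _ in range(20):
    # E-step: posterior q(h) that each data point e came from cluster h.
    q = []
    for x in data:
        w = [phi[h] * gauss(x, mu[h]) for h in range(2)]
        z = sum(w)
        q.append([v / z for v in w])
    # M-step: use q(h) as cluster-specific weights to update θ.
    for h in range(2):
        s = sum(q[i][h] for i in range(len(data)))
        mu[h] = sum(q[i][h] * data[i] for i in range(len(data))) / s
        phi[h] = s / len(data)
```

With these assumed data, the means separate toward the two visible clusters (near 0 and near 4).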
+ + +**76. [Factor graphs, Arity, Assignment weight, Constraint satisfaction problem, Consistent assignment]** + +⟶ 76. [Faktör grafikleri, İlişki Derecesi, Atama ağırlığı, Kısıt memnuniyet sorunu, Tutarlı atama] + +
+ + +**77. [Dynamic ordering, Dependent factors, Backtracking search, Forward checking, Most constrained variable, Least constrained value]** + +⟶ 77. [Dinamik düzenleşim, Bağımlı faktörler, Geri izleme araması, İleriye dönük kontrol, En kısıtlı değişken, En düşük kısıtlanmış değer] + +
+
+
+**78. [Approximate methods, Beam search, Iterated conditional modes, Gibbs sampling]**
+
+⟶ 78. [Yaklaşık yöntemler, Işın araması, Tekrarlı koşullu modlar, Gibbs örneklemesi]
+
+<br>
+ + +**79. [Factor graph transformations, Conditioning, Elimination]** + +⟶ 79. [Faktör grafiği dönüşümleri, Koşullandırma, Eleme] + +
+ + +**80. [Bayesian networks, Definition, Locally normalized, Marginalization]** + +⟶ 80. [Bayesçi ağlar, Tanım, Yerel normalleştirme, Marjinalleşme] + +
+ + +**81. [Probabilistic program, Concept, Summary]** + +⟶ 81. [Olasılık programı, Kavram, Özet] + +
+ + +**82. [Inference, Forward-backward algorithm, Gibbs sampling, Laplace smoothing]** + +⟶ 82. [Çıkarım, İleri-geri algoritması, Gibbs örneklemesi, Laplace yumuşatması] + +
+ + +**83. View PDF version on GitHub** + +⟶ 83. GitHub'da PDF versiyonun görüntüleyin + +
+ + +**84. Original authors** + +⟶ 84. Orijinal yazarlar + +
+ + +**85. Translated by X, Y and Z** + +⟶ 85. X, Y ve Z tarafından çevrilmiştir. + +
+ + +**86. Reviewed by X, Y and Z** + +⟶ 86. X,Y,Z tarafından kontrol edilmiştir. + +
+ + +**87. By X and Y** + +⟶ 87. X ve Y ile + +
+ + +**88. The Artificial Intelligence cheatsheets are now available in [target language].** + +⟶88. Yapay Zeka el kitapları artık [hedef dilde] mevcuttur. diff --git a/tr/cheatsheet-deep-learning.md b/tr/cs-229-deep-learning.md similarity index 92% rename from tr/cheatsheet-deep-learning.md rename to tr/cs-229-deep-learning.md index da5226222..7c8b3e29e 100644 --- a/tr/cheatsheet-deep-learning.md +++ b/tr/cs-229-deep-learning.md @@ -24,7 +24,7 @@ **5. [Input layer, hidden layer, output layer]** -⟶ [Giriş katmanı, gizli katman, ürün katmanı] +⟶ [Giriş katmanı, gizli katman, çıkış katmanı]
@@ -60,7 +60,7 @@ **11. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. This can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.** -⟶ Öğrenme derecesi ― Öğrenme derecesi, sıklıkla α veya bazen η olarak belirtilir, ağırlıkların hangi tempoda güncellendiğini gösterir. Bu derece sabit olabilir veya uyarlamalı olarak değişebilir. Mevcut en gözde yöntem Adam olarak adlandırılan ve öğrenme oranını uyarlayan bir yöntemdir. +⟶ Öğrenme oranı ― Öğrenme oranı, sıklıkla α veya bazen η olarak belirtilir, ağırlıkların hangi tempoda güncellendiğini gösterir. Bu derece sabit olabilir veya uyarlamalı olarak değişebilir. Mevcut en gözde yöntem Adam olarak adlandırılan ve öğrenme oranını uyarlayan bir yöntemdir.
@@ -150,7 +150,7 @@ **26. [Input gate, forget gate, gate, output gate]** -⟶ [Girdi kapısı, unutma kapısı, kapı, ürün kapısı] +⟶ [Girdi kapısı, unutma kapısı, kapı, çıktı kapısı]
@@ -294,28 +294,28 @@ **50. View PDF version on GitHub** -⟶ +⟶ GitHub'da PDF sürümünü görüntüle
**51. [Neural Networks, Architecture, Activation function, Backpropagation, Dropout]** -⟶ +⟶ [Yapay Sinir Ağları, Mimari, Aktivasyon fonksiyonu, Geri yayılım, Seyreltme]
**52. [Convolutional Neural Networks, Convolutional layer, Batch normalization]**

-⟶
+⟶ [Evrişimsel Sinir Ağları, Evrişim katmanı, Toplu normalizasyon]

<br>
**53. [Recurrent Neural Networks, Gates, LSTM]** -⟶ +⟶ [Yinelenen Sinir Ağları, Kapılar, LSTM]
**54. [Reinforcement learning, Markov decision processes, Value/policy iteration, Approximate dynamic programming, Policy search]** -⟶ +⟶ [Pekiştirmeli öğrenme, Markov karar süreçleri, Değer/politika iterasyonu, Yaklaşık dinamik programlama, Politika araştırması] diff --git a/tr/refresher-linear-algebra.md b/tr/cs-229-linear-algebra.md similarity index 100% rename from tr/refresher-linear-algebra.md rename to tr/cs-229-linear-algebra.md diff --git a/tr/cs-229-machine-learning-tips-and-tricks.md b/tr/cs-229-machine-learning-tips-and-tricks.md new file mode 100644 index 000000000..b12670229 --- /dev/null +++ b/tr/cs-229-machine-learning-tips-and-tricks.md @@ -0,0 +1,290 @@ +**1. Machine Learning tips and tricks cheatsheet** + +⟶ Makine Öğrenmesi ipuçları ve püf noktaları el kitabı + +
+ +**2. Classification metrics** + +⟶ Sınıflandırma metrikleri + +
+ +**3. In a context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** + +⟶ İkili bir sınıflandırma durumunda, modelin performansını değerlendirmek için gerekli olan ana metrikler aşağıda verilmiştir. + +
+ +**4. Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:** + +⟶ Karışıklık matrisi - Karışıklık matrisi, bir modelin performansını değerlendirirken daha eksiksiz bir sonuca sahip olmak için kullanılır. Aşağıdaki şekilde tanımlanmıştır: + +
+ +**5. [Predicted class, Actual class]** + +⟶ [Tahmini sınıf, Gerçek sınıf] + +
+ +**6. Main metrics ― The following metrics are commonly used to assess the performance of classification models:** + +⟶ Ana metrikler - Sınıflandırma modellerinin performansını değerlendirmek için aşağıda verilen metrikler yaygın olarak kullanılmaktadır: + +
+ +**7. [Metric, Formula, Interpretation]** + +⟶ [Metrik, Formül, Açıklama] + +
+ +**8. Overall performance of model** + +⟶ Modelin genel performansı + +
+
+**9. How accurate the positive predictions are**
+
+⟶ Pozitif tahminlerin ne kadar doğru olduğu
+
+<br>
+ +**10. Coverage of actual positive sample** + +⟶ Gerçek pozitif örneklerin oranı + +
+ +**11. Coverage of actual negative sample** + +⟶ Gerçek negatif örneklerin oranı + +
+ +**12. Hybrid metric useful for unbalanced classes** + +⟶ Dengesiz sınıflar için yararlı hibrit metrik + +
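The metrics above follow directly from the confusion-matrix counts; the TP/FP/FN/TN values below are an assumed example, not data from the cheatsheet.

```python
# Classification-metric sketch from an assumed confusion matrix.
TP, FP, FN, TN = 40, 10, 5, 45

accuracy = (TP + TN) / (TP + FP + FN + TN)      # overall performance
precision = TP / (TP + FP)                      # accuracy of positive predictions
recall = TP / (TP + FN)                         # coverage of actual positives
specificity = TN / (TN + FP)                    # coverage of actual negatives
f1 = 2 * precision * recall / (precision + recall)  # hybrid metric (unbalanced classes)
```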
+ +**13. ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are are summed up in the table below:** + +⟶ İşlem Karakteristik Eğrisi (ROC) ― İşlem Karakteristik Eğrisi (receiver operating curve), eşik değeri değiştirilerek Doğru Pozitif Oranı-Yanlış Pozitif Oranı grafiğidir. Bu metrikler aşağıdaki tabloda özetlenmiştir: + +
+ +**14. [Metric, Formula, Equivalent]** + +⟶ [Metrik, Formül, Eşdeğer] + +
+ +**15. AUC ― The area under the receiving operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:** + +⟶ Eğri Altında Kalan Alan (AUC) ― Aynı zamanda AUC veya AUROC olarak belirtilen işlem karakteristik eğrisi altındaki alan, aşağıdaki şekilde gösterildiği gibi İşlem Karakteristik Eğrisi (ROC)'nin altındaki alandır: + +
+ +**16. [Actual, Predicted]** + +⟶ [Gerçek, Tahmin Edilen] + +
+ +**17. Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** + +⟶ Temel metrikler - Bir f regresyon modeli verildiğinde aşağıdaki metrikler genellikle modelin performansını değerlendirmek için kullanılır: + +
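One way to compute AUC without plotting the curve uses its rank interpretation: AUC equals the probability that a randomly chosen positive is scored above a randomly chosen negative. The scores and labels below are assumed examples.

```python
# AUC sketch via the rank (probabilistic) interpretation of the ROC area.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count positive-over-negative "wins"; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]  # assumed classifier scores
labels = [1, 1, 0, 1, 0]            # assumed true classes
```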
+
+**18. [Total sum of squares, Explained sum of squares, Residual sum of squares]**
+
+⟶ [Toplam kareler toplamı, Açıklanan kareler toplamı, Artık kareler toplamı]
+
+<br>
+ +**19. Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** + +⟶ Belirleme katsayısı - Genellikle R2 veya r2 olarak belirtilen belirleme katsayısı, gözlemlenen sonuçların model tarafından ne kadar iyi kopyalandığının bir ölçütüdür ve aşağıdaki gibi tanımlanır: + +
+ +**20. Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** + +⟶ Ana metrikler - Aşağıdaki metrikler, göz önüne aldıkları değişken sayısını dikkate alarak regresyon modellerinin performansını değerlendirmek için yaygın olarak kullanılır: + +
+ +**21. where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.** + +⟶ burada L olabilirlik ve ˆσ2, her bir yanıtla ilişkili varyansın bir tahminidir. + +
+ +**22. Model selection** + +⟶ Model seçimi + +
+ +**23. Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** + +⟶ Kelime Bilgisi - Bir model seçerken, aşağıdaki gibi sahip olduğumuz verileri 3 farklı parçaya ayırırız: + +
+ +**24. [Training set, Validation set, Testing set]** + +⟶ [Eğitim seti, Doğrulama seti, Test seti] + +
+ +**25. [Model is trained, Model is assessed, Model gives predictions]** + +⟶ [Model eğitildi, Model değerlendirildi, Model tahminleri gerçekleştiriyor] + +
+ +**26. [Usually 80% of the dataset, Usually 20% of the dataset]** + +⟶ [Genelde veri kümesinin %80'i, Genelde veri kümesinin %20'si] + +
+ +**27. [Also called hold-out or development set, Unseen data]** + +⟶ [Ayrıca doğrulama için bir kısmını bekletme veya geliştirme seti olarak da bilinir, Görülmemiş veri] + +
+ +**28. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** + +⟶ Model bir kere seçildikten sonra, tüm veri seti üzerinde eğitilir ve görünmeyen test setinde test edilir. Bunlar aşağıdaki şekilde gösterilmiştir: + +
+ +**29. Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** + +⟶ Çapraz doğrulama ― Çapraz doğrulama, başlangıçtaki eğitim setine çok fazla güvenmeyen bir modeli seçmek için kullanılan bir yöntemdir. Farklı tipleri aşağıdaki tabloda özetlenmiştir: + +
+ +**30. [Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** + +⟶ [k − 1 katı üzerinde eğitim ve geriye kalanlar üzerinde değerlendirme, n − p gözlemleri üzerine eğitim ve kalan p üzerinde değerlendirme] + +
+ +**31. [Generally k=5 or 10, Case p=1 is called leave-one-out]** + +⟶ [Genel olarak k=5 veya 10, Durum p=1'e bir tanesini dışarıda bırak denir] + +
+ +**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.** + +⟶ En yaygın olarak kullanılan yöntem k-kat çapraz doğrulama olarak adlandırılır ve k-1 diğer katlarda olmak üzere, bu k sürelerinin hepsinde model eğitimi yapılırken, modeli bir kat üzerinde doğrulamak için eğitim verilerini k katlarına ayırır. Hata için daha sonra k-katlar üzerinden ortalama alınır ve çapraz doğrulama hatası olarak adlandırılır. + +
+
+**32. The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.**
+
+⟶ En yaygın kullanılan yöntem k-kat çapraz doğrulama olarak adlandırılır; eğitim verisini k kata böler, modeli kalan k−1 kat üzerinde eğitirken bir kat üzerinde doğrular ve bunu k kez tekrarlar. Hata daha sonra k kat üzerinden ortalanır ve çapraz doğrulama hatası olarak adlandırılır.
+
+<br>
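The k-fold procedure can be sketched with a trivial "predict the training mean" model; the data, the fold-splitting scheme, and k below are illustrative assumptions.

```python
# k-fold cross-validation sketch with a trivial mean-predictor model.
def k_fold_cv(data, k=5):
    folds = [data[i::k] for i in range(k)]  # split data into k folds
    errors = []
    for i in range(k):
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        test = folds[i]
        mean = sum(train) / len(train)      # "train" on the k-1 other folds
        # Validate (squared error) on the held-out fold.
        errors.append(sum((x - mean) ** 2 for x in test) / len(test))
    return sum(errors) / k                  # cross-validation error

cv_error = k_fold_cv(list(range(10)), k=5)
```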
+
+**34. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]**
+
+⟶ [Katsayıları 0'a küçültür, Değişken seçimi için iyi, Katsayıları daha küçük yapar, Değişken seçimi ile küçük katsayılar arasında ödünleşim]
+
+
+<br>
+ +**35. Diagnostics** + +⟶ Tanı + +
+ +**36. Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.** + +⟶ Önyargı - Bir modelin önyargısı, beklenen tahmin ve verilen veri noktaları için tahmin etmeye çalıştığımız doğru model arasındaki farktır. + +
+ +**37. Variance ― The variance of a model is the variability of the model prediction for given data points.** + +⟶ Varyans - Bir modelin varyansı, belirli veri noktaları için model tahmininin değişkenliğidir. + +
+ +**38. Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** + +⟶ Önyargı/varyans çelişkisi - Daha basit model, daha yüksek önyargı, ve daha karmaşık model, daha yüksek varyans. + + +
+ +**39. [Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]** + +⟶ [Belirtiler, Regresyon illüstrasyonu, sınıflandırma illüstrasyonu, derin öğrenme illüstrasyonu, olası çareler] + +
+
+**40. [High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]**
+
+⟶ [Yüksek eğitim hatası, Test hatasına yakın eğitim hatası, Yüksek önyargı, Test hatasından biraz daha düşük eğitim hatası, Çok düşük eğitim hatası, Test hatasından çok daha düşük eğitim hatası, Yüksek varyans]
+
+
+<br>
+
+**41. [Complexify model, Add more features, Train longer, Perform regularization, Get more data]**
+
+⟶ [Modeli karmaşıklaştırın, Daha fazla özellik ekleyin, Daha uzun süre eğitin, Düzenlileştirme uygulayın, Daha fazla veri edinin]
+
+
+<br>
+ +**42. Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** + +⟶ Hata analizi - Hata analizinde mevcut ve mükemmel modeller arasındaki performans farkının temel nedeni analiz edilir. + +
+ +**43. Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** + +⟶ Ablatif analiz - Ablatif analizde mevcut ve başlangıç modelleri arasındaki performans farkının temel nedeni analiz edilir. + +
+ +**44. Regression metrics** + +⟶ Regresyon metrikleri + +
+ +**45. [Classification metrics, confusion matrix, accuracy, precision, recall, F1 score, ROC]** + +⟶ [Sınıflandırma metrikleri, karışıklık matrisi, doğruluk, kesinlik, geri çağırma, F1 skoru, ROC] + +
+ +**46. [Regression metrics, R squared, Mallow's CP, AIC, BIC]** + +⟶ [Regresyon metrikleri, R karesi, Mallow'un CP'si, AIC, BIC] + +
+ +**47. [Model selection, cross-validation, regularization]** + +⟶ [Model seçimi, çapraz doğrulama, düzenlileştirme] + +
+ +**48. [Diagnostics, Bias/variance tradeoff, error/ablative analysis]** + +⟶ [Tanı, Önyargı/varyans çelişkisi, hata/ablatif analiz] diff --git a/tr/cs-229-probability.md b/tr/cs-229-probability.md new file mode 100644 index 000000000..5e30fe358 --- /dev/null +++ b/tr/cs-229-probability.md @@ -0,0 +1,381 @@ +**1. Probabilities and Statistics refresher** + +⟶ Olasılık ve İstatistik hatırlatma + +
+ +**2. Introduction to Probability and Combinatorics** + +⟶ Olasılık ve Kombinasyonlara Giriş + +
+ +**3. Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by S.** + +⟶ Örnek alanı - Bir deneyin olası tüm sonuçlarının kümesidir, deneyin örnek alanı olarak bilinir ve S ile gösterilir. + +
+ +**4. Event ― Any subset E of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.** + +⟶ Olay - Örnek alanın herhangi bir E alt kümesi, olay olarak bilinir. Yani bir olay, deneyin olası sonuçlarından oluşan bir kümedir. Deneyin sonucu E'de varsa, E'nin gerçekleştiğini söyleriz. + +
+ +**5. Axioms of probability: For each event E, we denote P(E) as the probability of event E occuring.** + +⟶ Olasılık aksiyomları: Her E olayı için, E olayının meydana gelme olasılığı P (E) olarak ifade edilir. + +
+ +**6. Axiom 1 ― Every probability is between 0 and 1 included, i.e:** + +⟶ Aksiyom 1 - Her olasılık 0 ve 1 de dahil olmak üzere 0 ve 1 arasındadır, yani: + +
+ +**7. Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** + +⟶ Aksiyom 2 - Tüm örnek uzayındaki temel olaylardan en az birinin ortaya çıkma olasılığı 1'dir, yani: + +
+
+**8. Axiom 3 ― For any sequence of mutually exclusive events E1,...,En, we have:**
+
+⟶ Aksiyom 3 - Ayrık (karşılıklı dışlayan) olayların herhangi bir E1,...,En dizisi için elimizde şu vardır:
+
+<br>
+ +**9. Permutation ― A permutation is an arrangement of r objects from a pool of n objects, in a given order. The number of such arrangements is given by P(n,r), defined as:** + +⟶ Permütasyon - Permütasyon, n nesneler havuzundan r nesnelerinin belirli bir sıra ile düzenlenmesidir. Bu tür düzenlemelerin sayısı P (n, r) tarafından aşağıdaki gibi tanımlanır: + +
+ +**10. Combination ― A combination is an arrangement of r objects from a pool of n objects, where the order does not matter. The number of such arrangements is given by C(n,r), defined as:** + +⟶ Kombinasyon - Bir kombinasyon, sıranın önemli olmadığı n nesneler havuzundan r nesnelerinin bir düzenlemesidir. Bu tür düzenlemelerin sayısı C (n, r) tarafından aşağıdaki gibi tanımlanır: + +
+ +**11. Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)** + +⟶ Not: 0⩽r⩽n için P (n, r) ⩾C (n, r) değerine sahibiz. + +
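These counts are available directly in Python's standard library (3.8+); the n and r values below are assumed examples.

```python
# Counting sketch for P(n,r) and C(n,r) with assumed n, r.
import math

n, r = 5, 2
P = math.perm(n, r)  # ordered arrangements: n! / (n-r)!
C = math.comb(n, r)  # unordered selections: n! / (r!(n-r)!)
# For 0 <= r <= n we indeed have P(n,r) >= C(n,r).
```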
+ +**12. Conditional Probability** + +⟶ Koşullu Olasılık + +
+ +**13. Bayes' rule ― For events A and B such that P(B)>0, we have:** + +⟶ Bayes kuralı - A ve B olayları için P (B)> 0 olacak şekilde: + +
+ +**14. Remark: we have P(A∩B)=P(A)P(B|A)=P(A|B)P(B)** + +⟶ Not: P(A∩B)=P(A)P(B|A)=P(A|B)P(B) + +
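A worked numeric instance of Bayes' rule, with assumed numbers for a diagnostic-test scenario (1% prevalence, 99% sensitivity, 5% false-positive rate); none of these values come from the cheatsheet.

```python
# Bayes' rule sketch: P(A|B) = P(B|A) P(A) / P(B), with assumed numbers.
p_d = 0.01                # P(A): prior probability of the condition
p_pos_given_d = 0.99      # P(B|A): sensitivity
p_pos_given_not_d = 0.05  # false-positive rate

# Total probability of a positive result, P(B).
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)
# Posterior P(A|B) via Bayes' rule.
p_d_given_pos = p_pos_given_d * p_d / p_pos
```

Despite the accurate test, the posterior stays modest because the prior is small.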
+ +**15. Partition ― Let {Ai,i∈[[1,n]]} be such that for all i, Ai≠∅. We say that {Ai} is a partition if we have:** + +⟶ Parça - Tüm i değerleri için Ai≠∅ olmak üzere {Ai,i∈[[1,n]]} olsun. {Ai} bir parça olduğunu söyleriz eğer : + +
+ +**16. Remark: for any event B in the sample space, we have P(B)=n∑i=1P(B|Ai)P(Ai).** + +⟶ Not: Örneklem uzaydaki herhangi bir B olayı için P(B)=n∑i=1P(B|Ai)P(Ai)'ye sahibiz. + +
+ +**17. Extended form of Bayes' rule ― Let {Ai,i∈[[1,n]]} be a partition of the sample space. We have:** + +⟶ Genişletilmiş Bayes kuralı formu - {Ai,i∈[[1,n]]} örneklem uzayının bir bölümü olsun. Elde edilen: + +
+ +**18. Independence ― Two events A and B are independent if and only if we have:** + +⟶ Bağımsızlık - İki olay A ve B birbirinden bağımsızdır ancak ve ancak eğer: + +
+ +**19. Random Variables** + +⟶ Rastgele Değişkenler + +
+ +**20. Definitions** + +⟶ Tanımlamalar + +
+ +**21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.** + +⟶ Rastgele değişken - Genellikle X olarak ifade edilen rastgele bir değişken, bir örneklem uzayındaki her öğeyi gerçek bir çizgiye eşleyen bir fonksiyondur. + +
+ +**22. Cumulative distribution function (CDF) ― The cumulative distribution function F, which is monotonically non-decreasing and is such that limx→−∞F(x)=0 and limx→+∞F(x)=1, is defined as:** + +⟶ Kümülatif dağılım fonksiyonu (KDF/ Cumulative distribution function-CDF) - Monotonik olarak azalmayan ve limx→−∞F(x)=0 ve limx→+∞F(x)=1 olacak şekilde kümülatif dağılım fonksiyonu F şu şekilde tanımlanır: + +
+
+**23. Remark: we have P(a<X⩽b)=F(b)−F(a)**
+
+⟶ Not: P(a<X⩽b)=F(b)−F(a) olur.
+
+<br>
+
+**24. Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.**
+
+⟶ Olasılık yoğunluk fonksiyonu (OYF/Probability density function-PDF) - Olasılık yoğunluk fonksiyonu f, X'in rastgele değişkenin birbirine bitişik iki gerçekleşmesi arasındaki değerleri alma olasılığıdır.
+
+<br>
+
+**25. Relationships involving the PDF and CDF ― Here are the important properties to know in the discrete (D) and the continuous (C) cases.**
+
+⟶ OYF ve KDF'yi içeren ilişkiler - Ayrık (D) ve sürekli (C) durumlarda bilinmesi gereken önemli özellikler şunlardır.
+
+<br>
+ +**26. [Case, CDF F, PDF f, Properties of PDF]** + +⟶ [Olay, KDF F, OYF f, OYF Özellikleri] + +
+
+**27. Expectation and Moments of the Distribution ― Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[Xk] and characteristic function ψ(ω) for the discrete and continuous cases:**
+
+⟶ Beklenti ve Dağılım Momentleri - Burada, ayrık ve sürekli durumlar için beklenen değer E[X], genelleştirilmiş beklenen değer E[g(X)], k. moment E[Xk] ve karakteristik fonksiyon ψ(ω) ifadeleri verilmiştir:
+
+<br>
+ +**28. Variance ― The variance of a random variable, often noted Var(X) or σ2, is a measure of the spread of its distribution function. It is determined as follows:** + +⟶ Varyans - Genellikle Var(X) veya σ2 olarak ifade edilen rastgele değişkenin varyansı, dağılım fonksiyonunun yayılmasının bir ölçüsüdür. Aşağıdaki şekilde belirlenir: + +
+ +**29. Standard deviation ― The standard deviation of a random variable, often noted σ, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:** + +⟶ Standart sapma - Genellikle σ olarak ifade edilen rastgele bir değişkenin standart sapması, gerçek rastgele değişkenin birimleriyle uyumlu olan dağılım fonksiyonunun yayılmasının bir ölçüsüdür. Aşağıdaki şekilde belirlenir: + +
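To make the two definitions above concrete (an illustrative sketch with made-up data, not part of the cheatsheet), Python's standard `statistics` module computes the population variance and standard deviation directly, and σ is indeed the square root of Var(X):

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # made-up sample

mu = statistics.mean(data)        # E[X]
var = statistics.pvariance(data)  # Var(X) = E[(X - mu)^2], population form
sigma = statistics.pstdev(data)   # sigma = sqrt(Var(X)), same units as X

print(mu, var, sigma)
```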
+
+**30. Transformation of random variables ― Let the variables X and Y be linked by some function. By noting fX and fY the distribution function of X and Y respectively, we have:**
+
+⟶ Rastgele değişkenlerin dönüşümü - X ve Y değişkenleri bir fonksiyonla birbirine bağlı olsun. fX ve fY sırasıyla X ve Y'nin dağılım fonksiyonları olmak üzere, elimizde:
+
+<br>
+
+**31. Leibniz integral rule ― Let g be a function of x and potentially c, and a,b boundaries that may depend on c. We have:**
+
+⟶ Leibniz integral kuralı - g, x'in ve potansiyel olarak c'nin bir fonksiyonu; a ve b ise c'ye bağlı olabilecek sınırlar olsun. Elde edilen:
+
+<br>
+ +**32. Probability Distributions** + +⟶ Olasılık Dağılımları + +
+
+**33. Chebyshev's inequality ― Let X be a random variable with expected value μ. For k,σ>0, we have the following inequality:**
+
+⟶ Chebyshev eşitsizliği - X, beklenen değeri μ olan rastgele bir değişken olsun. k,σ>0 için aşağıdaki eşitsizlik elde edilir:
+
+<br>
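The bound can be checked empirically (a simulation sketch, not part of the cheatsheet; the Gaussian choice is arbitrary): for k=2 the inequality guarantees P(|X−μ|⩾kσ)⩽1/k²=0.25, and the observed frequency stays well below it:

```python
import random

random.seed(0)
mu, sigma, k = 0.0, 1.0, 2.0
n = 100_000

samples = [random.gauss(mu, sigma) for _ in range(n)]

# Empirical P(|X - mu| >= k*sigma) versus the Chebyshev bound 1/k^2
empirical = sum(abs(x - mu) >= k * sigma for x in samples) / n
bound = 1 / k ** 2
print(empirical, bound)
```

For a Gaussian the true tail probability is about 0.046, so Chebyshev's 0.25 is loose but universal: it needs only μ and σ, not the distribution itself.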
+
+**34. Main distributions ― Here are the main distributions to have in mind:**
+
+⟶ Ana dağılımlar - İşte akılda tutulması gereken ana dağılımlar:
+
+<br>
+ +**35. [Type, Distribution]** + +⟶ [Tür, Dağılım] + +
+ +**36. Jointly Distributed Random Variables** + +⟶ Ortak Dağılımlı Rastgele Değişkenler + +
+
+**37. Marginal density and cumulative distribution ― From the joint density probability function fXY , we have**
+
+⟶ Marjinal yoğunluk ve kümülatif dağılım - fXY ortak olasılık yoğunluk fonksiyonundan elde ederiz:
+
+<br>
+ +**38. [Case, Marginal density, Cumulative function]** + +⟶ [Olay, Marjinal yoğunluk, Kümülatif fonksiyon] + +
+
+**39. Conditional density ― The conditional density of X with respect to Y, often noted fX|Y, is defined as follows:**
+
+⟶ Koşullu yoğunluk - X'in Y'ye göre koşullu yoğunluğu, genellikle fX|Y olarak gösterilir ve şöyle tanımlanır:
+
+<br>
+
+**40. Independence ― Two random variables X and Y are said to be independent if we have:**
+
+⟶ Bağımsızlık - Aşağıdaki sağlanıyorsa iki rastgele değişken X ve Y'nin bağımsız olduğu söylenir:
+
+<br>
+
+**41. Covariance ― We define the covariance of two random variables X and Y, that we note σ2XY or more commonly Cov(X,Y), as follows:**
+
+⟶ Kovaryans - σ2XY veya daha yaygın olarak Cov(X,Y) olarak gösterdiğimiz iki rastgele değişken X ve Y'nin kovaryansını aşağıdaki gibi tanımlarız:
+
+<br>
+
+**42. Correlation ― By noting σX,σY the standard deviations of X and Y, we define the correlation between the random variables X and Y, noted ρXY, as follows:**
+
+⟶ Korelasyon - σX ve σY, X ve Y'nin standart sapmaları olmak üzere, ρXY olarak gösterilen rastgele değişkenler X ve Y arasındaki korelasyonu şu şekilde tanımlarız:
+
+<br>
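Both definitions can be evaluated in a few lines (an illustrative sketch with made-up data, not part of the cheatsheet); the resulting ρXY necessarily lands in [−1,1]:

```python
import statistics

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 6.0, 8.2, 10.0]  # made up, nearly linear in x

# Cov(X,Y) = E[(X - muX)(Y - muY)], population form
mu_x, mu_y = statistics.mean(x), statistics.mean(y)
cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / len(x)

# rho_XY = Cov(X,Y) / (sigma_X * sigma_Y)
rho = cov / (statistics.pstdev(x) * statistics.pstdev(y))
print(cov, rho)
```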
+
+**43. Remark 1: we note that for any random variables X,Y, we have ρXY∈[−1,1].**
+
+⟶ Not 1: Herhangi bir rastgele değişken X, Y için ρXY∈[−1,1] olduğuna dikkat edin.
+
+<br>
+ +**44. Remark 2: If X and Y are independent, then ρXY=0.** + +⟶ Not 2: Eğer X ve Y bağımsızsa, ρXY = 0 olur. + +
+ +**45. Parameter estimation** + +⟶ Parametre tahmini (kestirimi) + +
+ +**46. Definitions** + +⟶ Tanımlamalar + +
+
+**47. Random sample ― A random sample is a collection of n random variables X1,...,Xn that are independent and identically distributed with X.**
+
+⟶ Rastgele örneklem - Rastgele bir örneklem, bağımsız ve X ile aynı dağılıma sahip n adet rastgele değişkenden X1,...,Xn oluşan bir topluluktur.
+
+<br>
+ +**48. Estimator ― An estimator is a function of the data that is used to infer the value of an unknown parameter in a statistical model.** + +⟶ Tahminci (Kestirimci) - Tahmin edici, istatistiksel bir modelde bilinmeyen bir parametrenin değerini ortaya çıkarmak için kullanılan verilerin bir fonksiyonudur. + +
+
+**49. Bias ― The bias of an estimator ^θ is defined as being the difference between the expected value of the distribution of ^θ and the true value, i.e.:**
+
+⟶ Yanlılık - Bir tahmin edicinin ^θ yanlılığı, ^θ dağılımının beklenen değeri ile gerçek değer arasındaki fark olarak tanımlanır, yani:
+
+<br>
+
+**50. Remark: an estimator is said to be unbiased when we have E[^θ]=θ.**
+
+⟶ Not: E[^θ]=θ olduğunda bir tahmin edicinin yansız olduğu söylenir.
+
+<br>
+ +**51. Estimating the mean** + +⟶ Ortalamayı tahmin etme + +
+
+**52. Sample mean ― The sample mean of a random sample is used to estimate the true mean μ of a distribution, is often noted ¯¯¯¯¯X and is defined as follows:**
+
+⟶ Örneklem ortalaması - Rastgele bir örneklemin örneklem ortalaması, bir dağılımın gerçek ortalaması μ'yü tahmin etmek için kullanılır; genellikle ¯¯¯¯¯X olarak gösterilir ve şöyle tanımlanır:
+
+<br>
+
+**53. Remark: the sample mean is unbiased, i.e E[¯¯¯¯¯X]=μ.**
+
+⟶ Not: Örneklem ortalaması yansızdır, yani E[¯¯¯¯¯X]=μ.
+
+<br>
+
+**54. Central Limit Theorem ― Let us have a random sample X1,...,Xn following a given distribution with mean μ and variance σ2, then we have:**
+
+⟶ Merkezi Limit Teoremi - Ortalaması μ ve varyansı σ2 olan bir dağılıma uyan rastgele bir örneklem X1,...,Xn olsun; o zaman elimizde:
+
+<br>
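The theorem can be observed in a short simulation (a sketch, not part of the cheatsheet; Uniform(0,1) with μ=1/2 and σ²=1/12 is an arbitrary choice): the distribution of ¯X concentrates at μ and its spread shrinks like σ/√n:

```python
import random
import statistics

random.seed(1)
n, trials = 50, 2000  # sample size and number of repeated samples

# Underlying distribution: Uniform(0, 1), so mu = 0.5 and sigma^2 = 1/12
sample_means = [statistics.mean(random.random() for _ in range(n))
                for _ in range(trials)]

m = statistics.mean(sample_means)    # should be close to mu
s = statistics.pstdev(sample_means)  # should be close to sigma / sqrt(n)
print(m, s, (1 / 12 / n) ** 0.5)
```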
+ +**55. Estimating the variance** + +⟶ Varyansı tahmin etmek + +
+
+**56. Sample variance ― The sample variance of a random sample is used to estimate the true variance σ2 of a distribution, is often noted s2 or ^σ2 and is defined as follows:**
+
+⟶ Örneklem varyansı - Rastgele bir örneklemin örneklem varyansı, bir dağılımın gerçek varyansı σ2'yi tahmin etmek için kullanılır; genellikle s2 veya ^σ2 olarak gösterilir ve aşağıdaki gibi tanımlanır:
+
+<br>
+
+**57. Remark: the sample variance is unbiased, i.e E[s2]=σ2.**
+
+⟶ Not: Örneklem varyansı yansızdır, yani E[s2]=σ2.
+
+<br>
+ +**58. Chi-Squared relation with sample variance ― Let s2 be the sample variance of a random sample. We have:** + +⟶ Örnek varyansı ile ki-kare ilişkisi - s2, rastgele bir örneğin örnek varyansı olsun. Elde edilir: + +
+ +**59. [Introduction, Sample space, Event, Permutation]** + +⟶ [Giriş, Örnek uzay, Olay, Permütasyon] + +
+ +**60. [Conditional probability, Bayes' rule, Independence]** + +⟶ [Koşullu olasılık, Bayes kuralı, Bağımsızlık] + +
+ +**61. [Random variables, Definitions, Expectation, Variance]** + +⟶ [Rastgele değişkenler, Tanımlamalar, Beklenti, Varyans] + +
+ +**62. [Probability distributions, Chebyshev's inequality, Main distributions]** + +⟶ [Olasılık dağılımları, Chebyshev eşitsizliği, Ana dağılımlar] + +
+ +**63. [Jointly distributed random variables, Density, Covariance, Correlation]** + +⟶ [Ortak dağınık rastgele değişkenler, Yoğunluk, Kovaryans, Korelasyon] + +
+
+**64. [Parameter estimation, Mean, Variance]**
+
+⟶ [Parametre tahmini, Ortalama, Varyans]
diff --git a/tr/cs-229-supervised-learning.md b/tr/cs-229-supervised-learning.md
new file mode 100644
index 000000000..90d816803
--- /dev/null
+++ b/tr/cs-229-supervised-learning.md
@@ -0,0 +1,567 @@
+**1. Supervised Learning cheatsheet**
+
+⟶ Gözetimli Öğrenme El Kitabı
+
+<br>
+ +**2. Introduction to Supervised Learning** + +⟶ Gözetimli Öğrenmeye Giriş + +
+
+**3. Given a set of data points {x(1),...,x(m)} associated to a set of outcomes {y(1),...,y(m)}, we want to build a classifier that learns how to predict y from x.**
+
+⟶ {y(1),...,y(m)} çıktı kümesi ile ilişkili olan {x(1),...,x(m)} veri noktaları kümesi göz önüne alındığında, x'ten y'yi nasıl tahmin edeceğini öğrenen bir sınıflandırıcı kurmak istiyoruz.
+
+<br>
+ +**4. Type of prediction ― The different types of predictive models are summed up in the table below:** + +⟶ Tahmin türü ― Farklı tahmin modelleri aşağıdaki tabloda özetlenmiştir: + +
+ +**5. [Regression, Classifier, Outcome, Examples]** + +⟶ [Regresyon, Sınıflandırıcı, Çıktı , Örnekler] + +
+ +**6. [Continuous, Class, Linear regression, Logistic regression, SVM, Naive Bayes]** + +⟶ [Sürekli, Sınıf, Lineer regresyon (bağlanım), Lojistik regresyon (bağlanım), Destek Vektör Makineleri (DVM), Naive Bayes] + +
+ +**7. Type of model ― The different models are summed up in the table below:** + +⟶ Model türleri ― Farklı modeller aşağıdaki tabloda özetlenmiştir: + +
+ +**8. [Discriminative model, Generative model, Goal, What's learned, Illustration, Examples]** + +⟶ [Ayırt edici model, Üretici model, Amaç, Öğrenilenler, Örnekleme, Örnekler] + +
+
+**9. [Directly estimate P(y|x), Estimate P(x|y) to then deduce P(y|x), Decision boundary, Probability distributions of the data, Regressions, SVMs, GDA, Naive Bayes]**
+
+⟶ [Doğrudan P(y|x)'i tahmin etme, P(y|x)'i çıkarmak için önce P(x|y)'i tahmin etme, Karar sınırı, Verilerin olasılık dağılımları, Regresyonlar, Destek Vektör Makineleri, Gauss Diskriminant Analizi, Naive Bayes]
+
+<br>
+
+**10. Notations and general concepts**
+
+⟶ Gösterimler ve genel kavramlar
+
+<br>
+ +**11. Hypothesis ― The hypothesis is noted hθ and is the model that we choose. For a given input data x(i) the model prediction output is hθ(x(i)).** + +⟶ Hipotez ― Hipotez hθ olarak belirtilmiştir ve bu bizim seçtiğimiz modeldir. Verilen x(i) verisi için modelin tahminlediği çıktı hθ(x(i))'dir. + +
+
+**12. Loss function ― A loss function is a function L:(z,y)∈R×Y⟼L(z,y)∈R that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:**
+
+⟶ Kayıp fonksiyonu ― L:(z,y)∈R×Y⟼L(z,y)∈R şeklinde tanımlanan bir kayıp fonksiyonu, gerçek veri değeri y'ye karşılık gelen tahmin edilmiş z değerini girdi olarak alır ve bu ikisinin ne kadar farklı olduğunu çıktı olarak verir. Yaygın kayıp fonksiyonları aşağıdaki tabloda özetlenmiştir:
+
+<br>
+ +**13. [Least squared error, Logistic loss, Hinge loss, Cross-entropy]** + +⟶ [En küçük kareler hatası, Lojistik yitimi (kaybı), Menteşe yitimi (kaybı), Çapraz entropi] + +
+ +**14. [Linear regression, Logistic regression, SVM, Neural Network]** + +⟶ [Lineer regresyon (bağlanım), Lojistik regresyon (bağlanım), Destek Vektör Makineleri, Sinir Ağı] + +
+ +**15. Cost function ― The cost function J is commonly used to assess the performance of a model, and is defined with the loss function L as follows:** + +⟶ Maliyet fonksiyonu ― J maliyet fonksiyonu genellikle bir modelin performansını değerlendirmek için kullanılır ve L kayıp fonksiyonu aşağıdaki gibi tanımlanır: + +
+
+**16. Gradient descent ― By noting α∈R the learning rate, the update rule for gradient descent is expressed with the learning rate and the cost function J as follows:**
+
+⟶ Bayır inişi ― α∈R öğrenme oranı olmak üzere, bayır inişinin güncelleme kuralı, öğrenme oranı ve J maliyet fonksiyonu ile aşağıdaki gibi ifade edilir:
+
+<br>
+ + +**17. Remark: Stochastic gradient descent (SGD) is updating the parameter based on each training example, and batch gradient descent is on a batch of training examples.** + +⟶ Not: Stokastik bayır inişi her eğitim örneğine bağlı olarak parametreyi günceller, ve yığın bayır inişi bir dizi eğitim örneği üzerindedir. + +
+
+**18. Likelihood ― The likelihood of a model L(θ) given parameters θ is used to find the optimal parameters θ through maximizing the likelihood. In practice, we use the log-likelihood ℓ(θ)=log(L(θ)) which is easier to optimize. We have:**
+
+⟶ Olabilirlik - θ parametreleri verilen bir L(θ) modelinin olabilirliği, olabilirliği maksimize ederek en uygun θ parametrelerini bulmak için kullanılır. Uygulamada, optimize edilmesi daha kolay olan log-olabilirlik ℓ(θ)=log(L(θ)) kullanılır. Elimizde:
+
+<br>
+ +**19. Newton's algorithm ― The Newton's algorithm is a numerical method that finds θ such that ℓ′(θ)=0. Its update rule is as follows:** + +⟶ Newton'un algoritması - ℓ′(θ)=0 olacak şekilde bir θ bulan nümerik bir yöntemdir. Güncelleme kuralı aşağıdaki gibidir: + +
+ +**20. Remark: the multidimensional generalization, also known as the Newton-Raphson method, has the following update rule:** + +⟶ Not: Newton-Raphson yöntemi olarak da bilinen çok boyutlu genelleme aşağıdaki güncelleme kuralına sahiptir: + +
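The update θ ← θ − ℓ′(θ)/ℓ′′(θ) is easy to sketch on a toy concave function (illustrative only, not part of the cheatsheet): for ℓ(θ)=−(θ−3)² the maximizer θ=3 is reached immediately, since a quadratic's derivative is linear:

```python
# Toy concave log-likelihood l(theta) = -(theta - 3)^2, maximized at theta = 3.
def l_prime(theta):
    return -2.0 * (theta - 3.0)   # l'(theta)

def l_double_prime(theta):
    return -2.0                   # l''(theta), constant for a quadratic

theta = 0.0
for _ in range(5):
    # Newton's update: theta <- theta - l'(theta) / l''(theta)
    theta -= l_prime(theta) / l_double_prime(theta)

print(theta)
```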
+ +**21. Linear models** + +⟶ Lineer modeller + +
+ +**22. Linear regression** + +⟶ Lineer regresyon + +
+
+**23. We assume here that y|x;θ∼N(μ,σ2)**
+
+⟶ Burada y|x;θ∼N(μ,σ2) olduğunu varsayıyoruz
+
+<br>
+
+**24. Normal equations ― By noting X the matrix design, the value of θ that minimizes the cost function is a closed-form solution such that:**
+
+⟶ Normal denklemler - X tasarım matrisi olmak üzere, maliyet fonksiyonunu en aza indiren θ değeri kapalı formlu bir çözümdür:
+
+<br>
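For the simplest design matrix X=[1 x] the closed-form solution can be written out by hand (a minimal sketch with made-up points, not part of the cheatsheet); solving (XᵀX)θ=Xᵀy for two parameters gives the familiar slope/intercept formulas:

```python
# Fit y = theta0 + theta1 * x on made-up, exactly linear data.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # y = 1 + 2x

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# Closed-form solution of (X^T X) theta = X^T y for theta = (theta0, theta1)
theta1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
theta0 = (sy - theta1 * sx) / n
print(theta0, theta1)
```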
+
+**25. LMS algorithm ― By noting α the learning rate, the update rule of the Least Mean Squares (LMS) algorithm for a training set of m data points, which is also known as the Widrow-Hoff learning rule, is as follows:**
+
+⟶ En Küçük Ortalama Kareler algoritması (Least Mean Squares-LMS) - α öğrenme oranı olmak üzere, m veri noktasından oluşan bir eğitim kümesi için Widrow-Hoff öğrenme kuralı olarak da bilinen En Küçük Ortalama Kareler algoritmasının güncelleme kuralı aşağıdaki gibidir:
+
+<br>
+ +**26. Remark: the update rule is a particular case of the gradient ascent.** + +⟶ Not: güncelleme kuralı, bayır yükselişinin özel bir halidir. + +
+
+**27. LWR ― Locally Weighted Regression, also known as LWR, is a variant of linear regression that weights each training example in its cost function by w(i)(x), which is defined with parameter τ∈R as:**
+
+⟶ Yerel Ağırlıklı Regresyon (Locally Weighted Regression-LWR) - LWR olarak da bilinen Yerel Ağırlıklı Regresyon, maliyet fonksiyonunda her eğitim örneğini w(i)(x) ile ağırlıklandıran doğrusal regresyonun bir çeşididir; w(i)(x), τ∈R parametresi ile şöyle tanımlanır:
+
+<br>
+ +**28. Classification and logistic regression** + +⟶ Sınıflandırma ve lojistik regresyon + +
+ +**29. Sigmoid function ― The sigmoid function g, also known as the logistic function, is defined as follows:** + +⟶ Sigmoid fonksiyonu - Lojistik fonksiyonu olarak da bilinen sigmoid fonksiyonu g, aşağıdaki gibi tanımlanır: + +
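The logistic function is one line of code (a sketch, not part of the cheatsheet); it maps R into (0,1) and satisfies g(z)+g(−z)=1:

```python
import math

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z)), the logistic function
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0.0), sigmoid(2.0), sigmoid(-2.0))
```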
+ +**30. Logistic regression ― We assume here that y|x;θ∼Bernoulli(ϕ). We have the following form:** + +⟶ Lojistik regresyon - y|x;θ∼Bernoulli(ϕ) olduğunu varsayıyoruz. Aşağıdaki forma sahibiz: + +
+ +**31. Remark: there is no closed form solution for the case of logistic regressions.** + +⟶ Not: Lojistik regresyon durumunda kapalı form çözümü yoktur. + +
+
+**32. Softmax regression ― A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. By convention, we set θK=0, which makes the Bernoulli parameter ϕi of each class i equal to:**
+
+⟶ Softmax regresyonu - Çok sınıflı lojistik regresyon olarak da adlandırılan softmax regresyonu, 2'den fazla çıktı sınıfı olduğunda lojistik regresyonu genelleştirmek için kullanılır. Genel kabul olarak θK=0 alınır; bu, her i sınıfının Bernoulli parametresi ϕi'yi şuna eşit yapar:
+
+<br>
+ +**33. Generalized Linear Models** + +⟶ Genelleştirilmiş Lineer Modeller + +
+ +**34. Exponential family ― A class of distributions is said to be in the exponential family if it can be written in terms of a natural parameter, also called the canonical parameter or link function, η, a sufficient statistic T(y) and a log-partition function a(η) as follows:** + +⟶ Üstel aile - Eğer kanonik parametre veya bağlantı fonksiyonu olarak adlandırılan doğal bir parametre η, yeterli bir istatistik T (y) ve aşağıdaki gibi bir log-partition fonksiyonu a (η) şeklinde yazılabilirse, dağılım sınıfının üstel ailede olduğu söylenir: + +
+
+**35. Remark: we will often have T(y)=y. Also, exp(−a(η)) can be seen as a normalization parameter that will make sure that the probabilities sum to one.**
+
+⟶ Not: Sıklıkla T(y)=y olur. Ayrıca exp(−a(η)), olasılıkların toplamının bir olmasını sağlayan bir normalleştirme parametresi olarak görülebilir.
+
+<br>
+ +**36. Here are the most common exponential distributions summed up in the following table:** + +⟶ Aşağıdaki tabloda özetlenen en yaygın üstel dağılımlar: + +
+ +**37. [Distribution, Bernoulli, Gaussian, Poisson, Geometric]** + +⟶ [Dağılım, Bernoulli, Gauss, Poisson, Geometrik] + +
+
+**38. Assumptions of GLMs ― Generalized Linear Models (GLM) aim at predicting a random variable y as a function fo x∈Rn+1 and rely on the following 3 assumptions:**
+
+⟶ Genelleştirilmiş Lineer Modellerin (Generalized Linear Models-GLM) Varsayımları - Genelleştirilmiş Lineer Modeller, rastgele bir y değişkenini x∈Rn+1'in bir fonksiyonu olarak tahmin etmeyi amaçlar ve aşağıdaki 3 varsayıma dayanır:
+
+<br>
+ +**39. Remark: ordinary least squares and logistic regression are special cases of generalized linear models.** + +⟶ Not: sıradan en küçük kareler ve lojistik regresyon, genelleştirilmiş doğrusal modellerin özel durumlarıdır. + +
+ +**40. Support Vector Machines** + +⟶ Destek Vektör Makineleri + +
+
+**41: The goal of support vector machines is to find the line that maximizes the minimum distance to the line.**
+
+⟶ Destek vektör makinelerinin amacı, doğruya olan minimum mesafeyi maksimuma çıkaran doğruyu bulmaktır.
+
+<br>
+ +**42: Optimal margin classifier ― The optimal margin classifier h is such that:** + +⟶ Optimal marj sınıflandırıcısı - h optimal marj sınıflandırıcısı şöyledir: + +
+ +**43: where (w,b)∈Rn×R is the solution of the following optimization problem:** + +⟶ burada (w,b)∈Rn×R, aşağıdaki optimizasyon probleminin çözümüdür: + +
+ +**44. such that** + +⟶ öyle ki + +
+ +**45. support vectors** + +⟶ destek vektörleri + +
+ +**46. Remark: the line is defined as wTx−b=0.** + +⟶ Not: doğru wTx−b=0 şeklinde tanımlanır. + +
+ +**47. Hinge loss ― The hinge loss is used in the setting of SVMs and is defined as follows:** + +⟶ Menteşe yitimi (kaybı) - Menteşe yitimi Destek Vektör Makinelerinin ayarlarında kullanılır ve aşağıdaki gibi tanımlanır: + +
+
+**48. Kernel ― Given a feature mapping ϕ, we define the kernel K to be defined as:**
+
+⟶ Çekirdek - Bir özellik eşlemesi ϕ verildiğinde, K çekirdeğini şöyle tanımlarız:
+
+<br>
+
+**49. In practice, the kernel K defined by K(x,z)=exp(−||x−z||22σ2) is called the Gaussian kernel and is commonly used.**
+
+⟶ Uygulamada, K(x,z)=exp(−||x−z||22σ2) ile tanımlanan K çekirdeği Gauss çekirdeği olarak adlandırılır ve yaygın olarak kullanılır.
+
+<br>
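A direct implementation (an illustrative sketch, not part of the cheatsheet) shows the two defining properties: K(x,x)=1, and K decays toward 0 as ||x−z|| grows:

```python
import math

def gaussian_kernel(x, z, sigma=1.0):
    # K(x, z) = exp(-||x - z||^2 / (2 * sigma^2))
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

print(gaussian_kernel((0.0, 0.0), (0.0, 0.0)))  # identical points -> 1
print(gaussian_kernel((0.0, 0.0), (3.0, 4.0)))  # distant points -> near 0
```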
+ +**50. [Non-linear separability, Use of a kernel mapping, Decision boundary in the original space]** + +⟶ [Lineer olmayan ayrılabilirlik, Çekirdek Haritalamının Kullanımı, Orjinal uzayda karar sınırı] + +
+ +**51. Remark: we say that we use the "kernel trick" to compute the cost function using the kernel because we actually don't need to know the explicit mapping ϕ, which is often very complicated. Instead, only the values K(x,z) are needed.** + +⟶ Not: Çekirdeği kullanarak maliyet fonksiyonunu hesaplamak için "çekirdek numarası" nı kullandığımızı söylüyoruz çünkü genellikle çok karmaşık olan ϕ açık haritalamasını bilmeye gerek yok. Bunun yerine, yalnızca K(x,z) değerlerine ihtiyacımız vardır. + +
+
+**52. Lagrangian ― We define the Lagrangian L(w,b) as follows:**
+
+⟶ Lagranj - Lagranj L(w,b)'yi aşağıdaki gibi tanımlarız:
+
+<br>
+ +**53. Remark: the coefficients βi are called the Lagrange multipliers.** + +⟶ Not: βi katsayılarına Lagranj çarpanları denir. + +
+ +**54. Generative Learning** + +⟶ Üretici Öğrenme + +
+
+**55. A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.**
+
+⟶ Üretici bir model, önce P(x|y)'yi tahmin ederek verilerin nasıl üretildiğini öğrenmeye çalışır; ardından bunu, Bayes kuralı ile P(y|x)'i tahmin etmek için kullanabiliriz.
+
+<br>
+ +**56. Gaussian Discriminant Analysis** + +⟶ Gauss Diskriminant (Ayırtaç) Analizi + +
+ +**57. Setting ― The Gaussian Discriminant Analysis assumes that y and x|y=0 and x|y=1 are such that:** + +⟶ Yöntem - Gauss Diskriminant Analizi y ve x|y=0 ve x|y=1 'in şu şekilde olduğunu varsayar: + +
+ +**58. Estimation ― The following table sums up the estimates that we find when maximizing the likelihood:** + +⟶ Tahmin - Aşağıdaki tablo, olasılığı en üst düzeye çıkarırken bulduğumuz tahminleri özetlemektedir: + +
+ +**59. Naive Bayes** + +⟶ Naive Bayes + +
+ +**60. Assumption ― The Naive Bayes model supposes that the features of each data point are all independent:** + +⟶ Varsayım - Naive Bayes modeli, her veri noktasının özelliklerinin tamamen bağımsız olduğunu varsayar: + +
+
+**61. Solutions ― Maximizing the log-likelihood gives the following solutions, with k∈{0,1},l∈[[1,L]]**
+
+⟶ Çözümler - Log-olabilirliğin maksimize edilmesi, k∈{0,1},l∈[[1,L]] olmak üzere aşağıdaki çözümleri verir
+
+<br>
+ +**62. Remark: Naive Bayes is widely used for text classification and spam detection.** + +⟶ Not: Naive Bayes, metin sınıflandırması ve spam tespitinde yaygın olarak kullanılır. + +
+ +**63. Tree-based and ensemble methods** + +⟶ Ağaç temelli ve topluluk yöntemleri + +
+ +**64. These methods can be used for both regression and classification problems.** + +⟶ Bu yöntemler hem regresyon hem de sınıflandırma problemleri için kullanılabilir. + +
+
+**65. CART ― Classification and Regression Trees (CART), commonly known as decision trees, can be represented as binary trees. They have the advantage to be very interpretable.**
+
+⟶ CART - Genellikle karar ağaçları olarak bilinen Sınıflandırma ve Regresyon Ağaçları (Classification and Regression Trees-CART), ikili ağaçlar olarak temsil edilebilir. Kolayca yorumlanabilir olma avantajına sahiptirler.
+
+<br>
+
+**66. Random forest ― It is a tree-based technique that uses a high number of decision trees built out of randomly selected sets of features. Contrary to the simple decision tree, it is highly uninterpretable but its generally good performance makes it a popular algorithm.**
+
+⟶ Rastgele orman - Rastgele seçilen özellik kümelerinden oluşturulan çok sayıda karar ağacı kullanan ağaç tabanlı bir tekniktir. Basit karar ağacının tersine oldukça yorumlanamaz bir yapıdadır, ancak genel olarak iyi performansı onu popüler bir algoritma yapar.
+
+<br>
+ +**67. Remark: random forests are a type of ensemble methods.** + +⟶ Not: Rastgele ormanlar topluluk yöntemlerindendir. + +
+ +**68. Boosting ― The idea of boosting methods is to combine several weak learners to form a stronger one. The main ones are summed up in the table below:** + +⟶ Artırım - Artırım yöntemlerinin temel fikri bazı zayıf öğrenicileri biraraya getirerek güçlü bir öğrenici oluşturmaktır. Temel yöntemler aşağıdaki tabloda özetlenmiştir: + +
+ +**69. [Adaptive boosting, Gradient boosting]** + +⟶ [Adaptif artırma, Gradyan artırma] + +
+
+**70. High weights are put on errors to improve at the next boosting step**
+
+⟶ Bir sonraki artırma adımında iyileştirme sağlamak için hatalara yüksek ağırlıklar verilir
+
+<br>
+ +**71. Weak learners trained on remaining errors** + +⟶ Zayıf öğreniciler kalan hatalar üzerinde eğitildi + +
+ +**72. Other non-parametric approaches** + +⟶ Diğer parametrik olmayan yaklaşımlar + +
+ +**73. k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.** + +⟶ k-en yakın komşular - genellikle k-NN olarak adlandırılan k- en yakın komşular algoritması, bir veri noktasının tepkisi eğitim kümesindeki kendi k komşularının doğası ile belirlenen parametrik olmayan bir yaklaşımdır. Hem sınıflandırma hem de regresyon yöntemleri için kullanılabilir. + +
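A majority-vote version fits in a few lines (a sketch with made-up training points, not part of the cheatsheet); `math.dist` (Python ≥ 3.8) supplies the Euclidean distance:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (feature_tuple, label); vote among the k nearest points.
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.05, 0.1)))  # query near the "a" cluster
```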
+
+**74. Remark: The higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.**
+
+⟶ Not: k parametresi ne kadar yüksekse yanlılık o kadar yüksek, k parametresi ne kadar düşükse varyans o kadar yüksek olur.
+
+<br>
+ +**75. Learning Theory** + +⟶ Öğrenme Teorisi + +
+ +**76. Union bound ― Let A1,...,Ak be k events. We have:** + +⟶ Birleşim sınırı - A1,...,Ak k olayları olsun. Sahip olduklarımız: + +
+
+**77. Hoeffding inequality ― Let Z1,..,Zm be m iid variables drawn from a Bernoulli distribution of parameter ϕ. Let ˆϕ be their sample mean and γ>0 fixed. We have:**
+
+⟶ Hoeffding eşitsizliği - Z1,..,Zm, parametresi ϕ olan bir Bernoulli dağılımından çekilen m adet bağımsız ve özdeş dağılımlı (iid) değişken olsun. ˆϕ bunların örneklem ortalaması ve γ>0 sabit olsun. Elimizde:
+
+<br>
+ +**78. Remark: this inequality is also known as the Chernoff bound.** + +⟶ Not: Bu eşitsizlik, Chernoff sınırı olarak da bilinir. + +
+ +**79. Training error ― For a given classifier h, we define the training error ˆϵ(h), also known as the empirical risk or empirical error, to be as follows:** + +⟶ Eğitim hatası - Belirli bir h sınıflandırıcısı için, ampirik risk veya ampirik hata olarak da bilinen eğitim hatasını ˆϵ (h) şöyle tanımlarız: + +
+
+**80. Probably Approximately Correct (PAC) ― PAC is a framework under which numerous results on learning theory were proved, and has the following set of assumptions: **
+
+⟶ Olası Yaklaşık Doğru (Probably Approximately Correct (PAC)) ― PAC, öğrenme teorisi üzerine sayısız sonucun kanıtlandığı ve aşağıdaki varsayımlara sahip olan bir çerçevedir:
+
+<br>
+ + +**81: the training and testing sets follow the same distribution ** + +⟶ eğitim ve test kümeleri aynı dağılımı takip ediyor + +
+
+**82. the training examples are drawn independently**
+
+⟶ eğitim örnekleri bağımsız olarak çekilir
+
+<br>
+
+**83. Shattering ― Given a set S={x(1),...,x(d)}, and a set of classifiers H, we say that H shatters S if for any set of labels {y(1),...,y(d)}, we have:**
+
+⟶ Parçalama ― Bir S={x(1),...,x(d)} kümesi ve bir H sınıflandırıcılar kümesi verildiğinde, herhangi bir {y(1),...,y(d)} etiket kümesi için aşağıdaki sağlanıyorsa H'nin S'i parçaladığını söyleriz:
+
+<br>
+ +**84. Upper bound theorem ― Let H be a finite hypothesis class such that |H|=k and let δ and the sample size m be fixed. Then, with probability of at least 1−δ, we have:** + +⟶ Üst sınır teoremi ― |H|=k , δ ve örneklem sayısı m'nin sabit olduğu sonlu bir hipotez sınıfı H olsun. Ardından, en az 1−δ olasılığı ile elimizde: + +
+ +**85. VC dimension ― The Vapnik-Chervonenkis (VC) dimension of a given infinite hypothesis class H, noted VC(H) is the size of the largest set that is shattered by H.** + +⟶ VC boyutu ― VC(H) olarak ifade edilen belirli bir sonsuz H hipotez sınıfının Vapnik-Chervonenkis (VC) boyutu, H tarafından parçalanan en büyük kümenin boyutudur. + +
+ +**86. Remark: the VC dimension of H={set of linear classifiers in 2 dimensions} is 3.** + +⟶ Not: H = {2 boyutta doğrusal sınıflandırıcılar kümesi}'nin VC boyutu 3'tür. + +
+ +**87. Theorem (Vapnik) ― Let H be given, with VC(H)=d and m the number of training examples. With probability at least 1−δ, we have:** + +⟶ Teorem (Vapnik) - H, VC(H)=d ve eğitim örneği sayısı m verilmiş olsun. En az 1−δ olasılığı ile, sahip olduklarımız: + +
+ +**88. [Introduction, Type of prediction, Type of model]** + +⟶ [Giriş, Tahmin türü, Model türü] + +
+ +**89. [Notations and general concepts, loss function, gradient descent, likelihood]** + +⟶ [Notasyonlar ve genel kavramlar,kayıp fonksiyonu, bayır inişi, olabilirlik] + +
+ +**90. [Linear models, linear regression, logistic regression, generalized linear models]** + +⟶ [Lineer modeller, Lineer regresyon, lojistik regresyon, genelleştirilmiş lineer modeller] + +
+ +**91. [Support vector machines, Optimal margin classifier, Hinge loss, Kernel]** + +⟶ [Destek vektör makineleri, optimal marj sınıflandırıcı, Menteşe yitimi, Çekirdek] + +
+ +**92. [Generative learning, Gaussian Discriminant Analysis, Naive Bayes]** + +⟶ [Üretici öğrenme, Gauss Diskriminant Analizi, Naive Bayes] + +
+
+**93. [Trees and ensemble methods, CART, Random forest, Boosting]**
+
+⟶ [Ağaçlar ve topluluk yöntemleri, CART, Rastgele orman, Artırma]
+
+<br>
+ +**94. [Other methods, k-NN]** + +⟶ [Diğer yöntemler, k-NN] + +
+ +**95. [Learning theory, Hoeffding inequality, PAC, VC dimension]** + +⟶ [Öğrenme teorisi, Hoeffding eşitsizliği, PAC, VC boyutu] diff --git a/tr/cs-229-unsupervised-learning.md b/tr/cs-229-unsupervised-learning.md new file mode 100644 index 000000000..c6392c414 --- /dev/null +++ b/tr/cs-229-unsupervised-learning.md @@ -0,0 +1,340 @@ +**1. Unsupervised Learning cheatsheet** + +⟶ Gözetimsiz Öğrenme El Kitabı + +
+ +**2. Introduction to Unsupervised Learning** + +⟶ Gözetimsiz Öğrenmeye Giriş + +
+ +**3. Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** + +⟶ Motivasyon ― Gözetimsiz öğrenmenin amacı etiketlenmemiş verilerdeki gizli örüntüleri bulmaktır {x (1), ..., x (m)}. + +
+
+**4. Jensen's inequality ― Let f be a convex function and X a random variable. We have the following inequality:**
+
+⟶ Jensen eşitsizliği - f bir konveks fonksiyon ve X bir rastgele değişken olsun. Aşağıdaki eşitsizlik elde edilir:
+
+<br>
+ +**5. Clustering** + +⟶ Kümeleme + +
+ +**6. Expectation-Maximization** + +⟶ Beklenti-Ençoklama (Maksimizasyon) + +
+
+**7. Latent variables ― Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Here are the most common settings where there are latent variables:**
+
+⟶ Gizli değişkenler - Gizli değişkenler, tahmin problemlerini zorlaştıran ve çoğunlukla z ile gösterilen gizli/gözlemlenmemiş değişkenlerdir. Gizli değişkenlerin bulunduğu en yaygın durumlar şunlardır:
+
+<br>
+
+**8. [Setting, Latent variable z, Comments]**
+
+⟶ [Yöntem, Gizli değişken z, Açıklamalar]
+
+<br>
+
+**9. [Mixture of k Gaussians, Factor analysis]**
+
+⟶ [k Gauss karışımı, Faktör analizi]
+
+<br>
+ +**10. Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:** + +⟶ Algoritma - Beklenti-Ençoklama (Maksimizasyon) (BE) algoritması, θ parametresinin maksimum olabilirlik kestirimiyle tahmin edilmesinde, olasılığa ard arda alt sınırlar oluşturan (E-adımı) ve bu alt sınırın (M-adımı) aşağıdaki gibi optimize edildiği etkin bir yöntem sunar: + +
+ +**11. E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:** + +⟶ E-adımı: Her bir veri noktasının x(i)'in belirli bir kümeden z(i) geldiğinin sonsal olasılık değerinin Qi(z(i)) hesaplanması aşağıdaki gibidir: + +
+ +**12. M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:** + +⟶ M-adımı: Her bir küme modelini ayrı ayrı yeniden tahmin etmek için x(i) veri noktalarındaki kümeye özgü ağırlıklar olarak Qi(z(i)) sonsal olasılıklarının kullanımı aşağıdaki gibidir: + +
+ +**13. [Gaussians initialization, Expectation step, Maximization step, Convergence]** + +⟶ [Gauss ilklendirme, Beklenti adımı, Maksimizasyon adımı, Yakınsaklık] + +
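The alternation of E-step and M-step can be sketched on a toy 1D mixture of two Gaussians; the data, the initialisation and the fixed iteration count are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two well-separated 1D Gaussian clusters (true means 0 and 5).
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])

# Illustrative initialisation of the mixing weights phi, means mu, stds sigma.
phi, mu, sigma = np.array([0.5, 0.5]), np.array([0.0, 1.0]), np.array([1.0, 1.0])

for _ in range(50):
    # E-step: posterior Q_i(z_i) that each point x(i) came from each cluster.
    dens = phi * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) \
           / (sigma * np.sqrt(2 * np.pi))
    q = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate each cluster using q as per-point weights.
    n_k = q.sum(axis=0)
    phi = n_k / len(x)
    mu = (q * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((q * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
```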
+ +**14. k-means clustering** + +⟶ k-ortalamalar (k-means) kümeleme + +
+ +**15. We note c(i) the cluster of data point i and μj the center of cluster j.** + +⟶ c(i), i veri noktasının bulunduğu kümeyi, μj ise j kümesinin merkezini gösterir. + +<br>
+ +**16. Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** + +⟶ Algoritma - Küme ortalamaları μ1, μ2, ..., μk∈Rn rasgele olarak başlatıldıktan sonra, k-ortalamalar algoritması yakınsayana kadar aşağıdaki adımı tekrar eder: + +
+ +**17. [Means initialization, Cluster assignment, Means update, Convergence]** + +⟶ [Başlangıç ortalaması, Küme Tanımlama, Ortalama Güncelleme, Yakınsama] + +
+ +**18. Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:** + +⟶ Bozulma fonksiyonu - Algoritmanın yakınsadığını görmek için aşağıdaki gibi tanımlanan bozulma fonksiyonuna bakarız: + +
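The assignment/update loop and the distortion function can be sketched as follows; the toy 2-D data and the guard against empty clusters are implementation details of this example, not part of the cheatsheet's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two 2-D blobs around (0, 0) and (4, 4).
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(4, 0.5, (100, 2))])
k = 2

# Randomly initialise the centroids mu_1, ..., mu_k from the data points.
mu = X[rng.choice(len(X), size=k, replace=False)]
for _ in range(20):
    # Cluster assignment: c(i) = index of the closest centroid.
    c = np.argmin(((X[:, None, :] - mu) ** 2).sum(-1), axis=1)
    # Means update: move each centroid to the mean of its assigned points
    # (keeping the old centroid if a cluster happens to be empty).
    mu = np.array([X[c == j].mean(axis=0) if np.any(c == j) else mu[j]
                   for j in range(k)])

# Distortion J(c, mu) = sum_i ||x(i) - mu_c(i)||^2; it decreases each iteration.
J = ((X - mu[c]) ** 2).sum()
```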
+ +**19. Hierarchical clustering** + +⟶ Hiyerarşik kümeleme + +
+ +**20. Algorithm ― It is a clustering algorithm with an agglomerative hierarchical approach that build nested clusters in a successive manner.** + +⟶ Algoritma - İç içe geçmiş kümeleri ardışık bir biçimde oluşturan, birleştirici hiyerarşik bir yaklaşıma sahip bir kümeleme algoritmasıdır. + +<br>
+ +**21. Types ― There are different sorts of hierarchical clustering algorithms that aims at optimizing different objective functions, which is summed up in the table below:** + +⟶ Türler - Aşağıdaki tabloda özetlenen farklı amaç fonksiyonlarını optimize etmeyi amaçlayan farklı hiyerarşik kümeleme algoritmaları vardır: + +
+ +**22. [Ward linkage, Average linkage, Complete linkage]** + +⟶ [Ward bağlantı, Ortalama bağlantı, Tam bağlantı] + +
+ +**23. [Minimize within cluster distance, Minimize average distance between cluster pairs, Minimize maximum distance of between cluster pairs]** + +⟶ [Küme içi uzaklığı en aza indirin, Küme çiftleri arasındaki ortalama uzaklığı en aza indirin, Küme çiftleri arasındaki maksimum uzaklığı en aza indirin] + +<br>
+ +**24. Clustering assessment metrics** + +⟶ Kümeleme değerlendirme metrikleri + +
+ +**25. In an unsupervised learning setting, it is often hard to assess the performance of a model since we don't have the ground truth labels as was the case in the supervised learning setting.** + +⟶ Gözetimsiz bir öğrenme ortamında, bir modelin performansını değerlendirmek çoğu zaman zordur, çünkü gözetimli öğrenme ortamında olduğu gibi, gerçek referans etiketlere sahip değiliz. + +
+ +**26. Silhouette coefficient ― By noting a and b the mean distance between a sample and all other points in the same class, and between a sample and all other points in the next nearest cluster, the silhouette coefficient s for a single sample is defined as follows:** + +⟶ Siluet katsayısı - a, bir örnek ile aynı sınıftaki diğer tüm noktalar arasındaki ortalama mesafe ve b, bir örnek ile bir sonraki en yakın kümedeki diğer tüm noktalar arasındaki ortalama mesafe olmak üzere, tek bir örnek için siluet katsayısı s aşağıdaki gibi tanımlanır: + +<br>
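A sketch of s=(b−a)/max(a,b) for a single sample; the sample and the two toy clusters are made up:

```python
import numpy as np

def silhouette_sample(x, same, other):
    """Silhouette coefficient s for one sample x.
    a = mean distance to `same` (points of its own cluster),
    b = mean distance to `other` (points of the next nearest cluster)."""
    a = np.linalg.norm(same - x, axis=1).mean()
    b = np.linalg.norm(other - x, axis=1).mean()
    return (b - a) / max(a, b)

x = np.array([0.0, 0.0])
same = np.array([[0.1, 0.0], [0.0, 0.1]])    # tight own cluster
other = np.array([[5.0, 5.0], [6.0, 5.0]])   # far neighbouring cluster
s = silhouette_sample(x, same, other)        # close to +1 for this sample
```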
+ +**27. Calinski-Harabaz index ― By noting k the number of clusters, Bk and Wk the between and within-clustering dispersion matrices respectively defined as** + +⟶ Calinski-Harabaz indeksi - k kümelerin sayısını belirtmek üzere Bk ve Wk sırasıyla, kümeler arası ve küme içi dağılım matrisleri olarak aşağıdaki gibi tanımlanır + +
+ +**28. the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:** + +⟶ Calinski-Harabaz indeksi s(k), kümelenme modelinin kümeleri ne kadar iyi tanımladığını gösterir, böylece skor ne kadar yüksek olursa, kümeler daha yoğun ve iyi ayrılır. Aşağıdaki şekilde tanımlanmıştır: + +
+ +**29. Dimension reduction** + +⟶ Boyut küçültme + +
+ +**30. Principal component analysis** + +⟶ Temel bileşenler analizi + +
+ +**31. It is a dimension reduction technique that finds the variance maximizing directions onto which to project the data.** + +⟶ Verinin üzerine yansıtılacağı, varyansı maksimize eden yönleri bulan bir boyut küçültme tekniğidir. + +<br>
+ +**32. Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** + +⟶ Özdeğer, özvektör - Bir matris A∈Rn×n verildiğinde λ'nın, özvektör olarak adlandırılan bir vektör z∈Rn∖{0} varsa, A'nın bir özdeğeri olduğu söylenir: + +
+ +**33. Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** + +⟶ Spektral teorem - A∈Rn×n olsun. Eğer A simetrik ise, A gerçek bir ortogonal matris U∈Rn×n ile diyagonalleştirilebilir. Λ=diag(λ1,...,λn) yazarsak şunu elde ederiz: + +<br>
+ +**34. diagonal** + +⟶ diyagonal + +
+ +**35. Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.** + +⟶ Not: En büyük özdeğere sahip özvektör, matris A'nın temel özvektörü olarak adlandırılır. + +
+ +**36. Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k +dimensions by maximizing the variance of the data as follows:** + +⟶ Algoritma - Temel Bileşen Analizi (TBA) yöntemi, verinin varyansını aşağıdaki gibi en üst düzeye çıkararak veriyi k boyuta yansıtan bir boyut küçültme tekniğidir: + +<br>
+ +**37. Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.** + +⟶ Adım 1: Verileri ortalama 0 ve standart sapma 1 olacak şekilde normalleştirin. + +
+ +**38. Step 2: Compute Σ=1mm∑i=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.** + +⟶ Adım 2: Gerçek özdeğerler ile simetrik olan Σ=1mm∑i=1x(i)x(i)T∈Rn×n hesaplayın. + +
+ +**39. Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.** + +⟶ Adım 3: Σ'nın k ortogonal ana özvektörleri u1,...,uk∈Rn'yi, yani k en büyük özdeğere karşılık gelen ortogonal özvektörleri hesaplayın. + +<br>
+ +**40. Step 4: Project the data on spanR(u1,...,uk).** + +⟶ Adım 4: Veriyi spanR(u1,...,uk) üzerine yansıtın. + +<br>
+ +**41. This procedure maximizes the variance among all k-dimensional spaces.** + +⟶ Bu yöntem tüm k-boyutlu uzaylar arasındaki varyansı en üst düzeye çıkarır. + +
+ +**42. [Data in feature space, Find principal components, Data in principal components space]** + +⟶ [Öznitelik uzayındaki veri, Temel bileşenleri bul, Temel bileşenler uzayındaki veri] + +
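The four steps above can be sketched with NumPy; the synthetic data and the choice k=2 are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: column 1 is strongly correlated with column 0.
base = rng.normal(size=(500, 1))
X = np.hstack([base,
               0.9 * base + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 1))])
k = 2

# Step 1: normalise to mean 0 and standard deviation 1.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
# Step 2: Sigma = (1/m) sum_i x(i) x(i)^T, symmetric with real eigenvalues.
Sigma = Z.T @ Z / len(Z)
# Step 3: the k orthogonal eigenvectors of the k largest eigenvalues.
vals, vecs = np.linalg.eigh(Sigma)          # eigh returns ascending eigenvalues
U = vecs[:, np.argsort(vals)[::-1][:k]]
# Step 4: project the data on span(u1, ..., uk).
X_proj = Z @ U
```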
+ +**43. Independent component analysis** + +⟶ Bağımsız bileşen analizi + +
+ +**44. It is a technique meant to find the underlying generating sources.** + +⟶ Temel oluşturan kaynakları bulmak için kullanılan bir tekniktir. + +
+ +**45. Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:** + +⟶ Varsayımlar - Verimiz x'in, si'lerin bağımsız rastgele değişkenler olduğu n boyutlu kaynak vektörü s=(s1,...,sn) tarafından, karıştırıcı ve tekil olmayan bir A matrisi aracılığıyla aşağıdaki gibi üretildiğini varsayıyoruz: + +<br>
+ +**46. The goal is to find the unmixing matrix W=A−1.** + +⟶ Amaç, ayrıştırma (unmixing) matrisi W=A−1'i bulmaktır. + +<br>
+ +**47. Bell and Sejnowski ICA algorithm ― This algorithm finds the unmixing matrix W by following the steps below:** + +⟶ Bell ve Sejnowski ICA algoritması - Bu algoritma, ayrıştırma matrisi W'yi aşağıdaki adımları izleyerek bulur: + +<br>
+ +**48. Write the probability of x=As=W−1s as:** + +⟶ X=As=W−1s olasılığını aşağıdaki gibi yazınız: + +
+ +**49. Write the log likelihood given our training data {x(i),i∈[[1,m]]} and by noting g the sigmoid function as:** + +⟶ Eğitim verisi {x(i),i∈[[1, m]]} ve g sigmoid fonksiyonunu not ederek log olasılığını yazınız: + +
+ +**50. Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:** + +⟶ Bu nedenle, rassal (stokastik) eğim yükselmesi öğrenme kuralı, her bir eğitim örneği x(i) için W'yi aşağıdaki gibi güncellememizi söyler: + +<br>
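A sketch of the update rule W ← W + α((1−2g(Wx))xᵀ + (Wᵀ)⁻¹) on toy mixed sources; the learning rate, the Laplace source distribution and the mixing matrix are made up, and no convergence check is done:

```python
import numpy as np

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))   # g, the sigmoid function

rng = np.random.default_rng(0)
# Two independent non-Gaussian sources s, mixed by a non-singular A: x = A s.
s = rng.laplace(size=(2, 2000))
A = np.array([[1.0, 0.5], [0.5, 1.0]])
X = A @ s

# Stochastic gradient ascent, one training example x(i) at a time.
W, alpha = np.eye(2), 0.005
for x in X.T:
    x = x[:, None]
    W += alpha * ((1 - 2 * sigmoid(W @ x)) @ x.T + np.linalg.inv(W.T))

recovered = W @ X   # approximately the sources, up to scale and permutation
```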
+ +**51. The Machine Learning cheatsheets are now available in Turkish.** + +⟶ Makine Öğrenmesi El Kitabı artık Türkçe dilinde mevcuttur. + +
+ +**52. Original authors** + +⟶ Orjinal yazarlar + +
+ +**53. Translated by X, Y and Z** + +⟶ X, Y ve Z tarafından çevrilmiştir. + +<br>
+ +**54. Reviewed by X, Y and Z** + +⟶ X, Y ve Z tarafından incelenmiştir + +<br>
+ +**55. [Introduction, Motivation, Jensen's inequality]** + +⟶ [Giriş, Motivasyon, Jensen'in eşitsizliği] + +
+ +**56. [Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]** + +⟶ [Kümeleme, Beklenti-Ençoklama (Maksimizasyon), k-ortalamalar, Hiyerarşik kümeleme, Metrikler] + +
+ +**57. [Dimension reduction, PCA, ICA]** + +⟶ [Boyut küçültme, TBA(PCA), BBA(ICA)] diff --git a/tr/cs-230-convolutional-neural-networks.md b/tr/cs-230-convolutional-neural-networks.md new file mode 100644 index 000000000..e1fd03e51 --- /dev/null +++ b/tr/cs-230-convolutional-neural-networks.md @@ -0,0 +1,712 @@ +**1. Convolutional Neural Networks cheatsheet** + +⟶ Evrişimli Sinir Ağları el kitabı + +
+ + +**2. CS 230 - Deep Learning** + +⟶ CS 230 - Derin Öğrenme + +
+ + +**3. [Overview, Architecture structure]** + +⟶ [Genel bakış, Mimari yapı] + +
+ + +**4. [Types of layer, Convolution, Pooling, Fully connected]** + +⟶ [Katman tipleri, Evrişim, Ortaklama, Tam bağlantı] + +
+ + +**5. [Filter hyperparameters, Dimensions, Stride, Padding]** + +⟶ [Filtre hiperparametreleri, Boyut, Adım aralığı/Adım kaydırma, Ekleme/Doldurma] + +
+ + +**6. [Tuning hyperparameters, Parameter compatibility, Model complexity, Receptive field]** + +⟶ [Hiperparametrelerin ayarlanması, Parametre uyumluluğu, Model karmaşıklığı, Alıcı alan (Receptive field)] + +<br>
+ + +**7. [Activation functions, Rectified Linear Unit, Softmax]** + +⟶ [Aktivasyon fonksiyonları, Düzeltilmiş Doğrusal Birim, Softmax] + +
+ + +**8. [Object detection, Types of models, Detection, Intersection over Union, Non-max suppression, YOLO, R-CNN]** + +⟶ [Nesne algılama, Model tipleri, Algılama, Kesiştirilmiş Bölgeler, Maksimum olmayan bastırma, YOLO, R-CNN] + +
+ + +**9. [Face verification/recognition, One shot learning, Siamese network, Triplet loss]** + +⟶ [Yüz doğrulama/tanıma, Tek atış öğrenme, Siamese ağ, Üçlü yitim/kayıp] + +
+ + +**10. [Neural style transfer, Activation, Style matrix, Style/content cost function]** + +⟶ [Sinirsel stil aktarımı, Aktivasyon, Stil matrisi, Stil/içerik maliyet fonksiyonu] + +
+ + +**11. [Computational trick architectures, Generative Adversarial Net, ResNet, Inception Network]** + +⟶ [İşlemsel püf nokta mimarileri, Çekişmeli Üretici Ağ, ResNet, Inception Ağı] + +
+ + +**12. Overview** + +⟶ Genel bakış + +
+ + +**13. Architecture of a traditional CNN ― Convolutional neural networks, also known as CNNs, are a specific type of neural networks that are generally composed of the following layers:** + +⟶ Geleneksel bir CNN (Evrişimli Sinir Ağı) mimarisi - CNN'ler olarak da bilinen evrişimli sinir ağları, genellikle aşağıdaki katmanlardan oluşan belirli bir tür sinir ağıdır: + +
+ + +**14. The convolution layer and the pooling layer can be fine-tuned with respect to hyperparameters that are described in the next sections.** + +⟶ Evrişim katmanı ve ortaklama katmanı, sonraki bölümlerde açıklanan hiperparametreler ile ince ayar (fine-tuned) yapılabilir. + +
+ + +**15. Types of layer** + +⟶ Katman tipleri + +
+ + +**16. Convolution layer (CONV) ― The convolution layer (CONV) uses filters that perform convolution operations as it is scanning the input I with respect to its dimensions. Its hyperparameters include the filter size F and stride S. The resulting output O is called feature map or activation map.** + +⟶ Evrişim katmanı (CONV) ― Evrişim katmanı (CONV) evrişim işlemlerini gerçekleştiren filtreleri, I girişini boyutlarına göre tararken kullanır. Hiperparametreleri F filtre boyutunu ve S adımını içerir. Elde edilen çıktı O, öznitelik haritası veya aktivasyon haritası olarak adlandırılır. + +
+ + +**17. Remark: the convolution step can be generalized to the 1D and 3D cases as well.** + +⟶ Not: evrişim adımı, 1B ve 3B durumlarda da genelleştirilebilir (B: boyut). + +
+ + +**18. Pooling (POOL) ― The pooling layer (POOL) is a downsampling operation, typically applied after a convolution layer, which does some spatial invariance. In particular, max and average pooling are special kinds of pooling where the maximum and average value is taken, respectively.** + +⟶ Ortaklama (POOL) - Ortaklama katmanı (POOL), tipik olarak bir evrişim katmanından sonra uygulanan ve bir miktar uzamsal değişmezlik sağlayan bir alt örnekleme işlemidir. Özellikle, maksimum ve ortalama ortaklama, sırasıyla maksimum ve ortalama değerin alındığı özel ortaklama türleridir. + +<br>
+ + +**19. [Type, Purpose, Illustration, Comments]** + +⟶ [Tip, Amaç, Görsel Açıklama, Açıklama] + +
+ + +**20. [Max pooling, Average pooling, Each pooling operation selects the maximum value of the current view, Each pooling operation averages the values of the current view]** + +⟶ [Maksimum ortaklama, Ortalama ortaklama, Her ortaklama işlemi, geçerli matrisin maksimum değerini seçer, Her ortaklama işlemi, geçerli matrisin değerlerinin ortalaması alır.] + +
+ + +**21. [Preserves detected features, Most commonly used, Downsamples feature map, Used in LeNet]** + +⟶ [Algılanan öznitelikleri korur, En yaygın kullanılan, Öznitelik haritasını alt örnekler, LeNet'te kullanılmıştır] + +<br>
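The two pooling types can be sketched for a single 2-D feature map (no padding; loop-based purely for clarity):

```python
import numpy as np

def pool2d(x, f=2, s=2, mode="max"):
    """F x F pooling with stride S over a 2-D feature map (sketch, no padding)."""
    out = np.empty(((x.shape[0] - f) // s + 1, (x.shape[1] - f) // s + 1))
    op = np.max if mode == "max" else np.mean
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each pooling operation looks at one F x F window of the input.
            out[i, j] = op(x[i * s:i * s + f, j * s:j * s + f])
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
pool2d(x, mode="max")   # each 2x2 window keeps its maximum value
pool2d(x, mode="avg")   # each 2x2 window is averaged
```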
+ + +**22. Fully Connected (FC) ― The fully connected layer (FC) operates on a flattened input where each input is connected to all neurons. If present, FC layers are usually found towards the end of CNN architectures and can be used to optimize objectives such as class scores.** + +⟶ Tam Bağlantı (FC) ― Tam bağlantılı katman (FC), her girişin tüm nöronlara bağlı olduğu düzleştirilmiş bir giriş üzerinde çalışır. Eğer varsa, FC katmanları genellikle CNN mimarilerinin sonuna doğru bulunur ve sınıf skorları gibi hedefleri optimize etmek için kullanılabilir. + +<br>
+ + +**23. Filter hyperparameters** + +⟶ Filtre hiperparametreleri + +<br>
+ + +**24. The convolution layer contains filters for which it is important to know the meaning behind its hyperparameters.** + +⟶ Evrişim katmanı, hiperparametrelerinin ardındaki anlamı bilmenin önemli olduğu filtreler içerir. + +
+ + +**25. Dimensions of a filter ― A filter of size F×F applied to an input containing C channels is a F×F×C volume that performs convolutions on an input of size I×I×C and produces an output feature map (also called activation map) of size O×O×1.** + +⟶ Bir filtrenin boyutları - C kanal içeren bir girişe uygulanan F×F boyutunda bir filtre, I×I×C boyutundaki bir giriş üzerinde evrişim gerçekleştiren ve O×O×1 boyutunda bir çıkış öznitelik haritası (aktivasyon haritası olarak da adlandırılır) üreten F×F×C boyutunda bir hacimdir. + +<br>
+ + +**26. Filter** + +⟶ Filtre + +
+ + +**27. Remark: the application of K filters of size F×F results in an output feature map of size O×O×K.** + +⟶ Not: F×F boyutunda K filtrelerinin uygulanması, O×O×K boyutunda bir çıktı öznitelik haritasının oluşmasını sağlar. + +
+ + +**28. Stride ― For a convolutional or a pooling operation, the stride S denotes the number of pixels by which the window moves after each operation.** + +⟶ Adım aralığı ― Evrişimli veya bir ortaklama işlemi için, S adımı (adım aralığı), her işlemden sonra pencerenin hareket ettiği piksel sayısını belirtir. + +
+ + +**29. Zero-padding ― Zero-padding denotes the process of adding P zeroes to each side of the boundaries of the input. This value can either be manually specified or automatically set through one of the three modes detailed below:** + +⟶ Sıfır ekleme/doldurma ― Sıfır ekleme/doldurma, girişin sınırlarının her bir tarafına P sıfır ekleme işlemini belirtir. Bu değer manuel olarak belirlenebilir veya aşağıda detaylandırılan üç moddan biri ile otomatik olarak ayarlanabilir: + +
+ + +**30. [Mode, Value, Illustration, Purpose, Valid, Same, Full]** + +⟶ [Mod, Değer, Görsel Açıklama, Amaç, Geçerli, Aynı, Tüm] + +
+ + +**31. [No padding, Drops last convolution if dimensions do not match, Padding such that feature map size has size ⌈IS⌉, Output size is mathematically convenient, Also called 'half' padding, Maximum padding such that end convolutions are applied on the limits of the input, Filter 'sees' the input end-to-end]** + +⟶ [Ekleme/doldurma yok, Boyutlar uyuşmuyorsa son evrişimi düşürür, Öznitelik harita büyüklüğüne sahip ekleme/doldurma ⌈IS⌉, Çıktı boyutu matematiksel olarak uygundur, 'Yarım' ekleme olarak da bilinir, Son konvolüsyonların giriş sınırlarına uygulandığı maksimum ekleme, Filtre girişi uçtan uca "görür"] + +
+ + +**32. Tuning hyperparameters** + +⟶ Hiperparametreleri ayarlama + +
+ + +**33. Parameter compatibility in convolution layer ― By noting I the length of the input volume size, F the length of the filter, P the amount of zero padding, S the stride, then the output size O of the feature map along that dimension is given by:** + +⟶ Evrişim katmanında parametre uyumluluğu - I giriş hacminin uzunluğu, F filtrenin uzunluğu, P sıfır ekleme miktarı ve S adım aralığı olmak üzere, bu boyut boyunca öznitelik haritasının çıkış büyüklüğü O şu şekilde verilir: + +<br>
+ + +**34. [Input, Filter, Output]** + +⟶ [Giriş, Filtre, Çıktı] + +
+ + +**35. Remark: often times, Pstart=Pend≜P, in which case we can replace Pstart+Pend by 2P in the formula above.** + +⟶ Not: çoğunlukla, Pstart=Pend≜P, bu durumda Pstart+Pend'i yukarıdaki formülde 2P ile değiştirebiliriz. + +
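The formula above, O=(I−F+Pstart+Pend)/S+1, can be written directly; integer division assumes the parameters are compatible along the dimension:

```python
def conv_output_size(i, f, p_start, p_end, s):
    """O = (I - F + Pstart + Pend) / S + 1 along one spatial dimension."""
    return (i - f + p_start + p_end) // s + 1

# 'Valid' padding (P=0): a 3-wide filter with stride 1 on a 32-wide input.
conv_output_size(32, 3, 0, 0, 1)   # 30
# 'Same' padding: P chosen so that the output keeps the input size.
conv_output_size(32, 3, 1, 1, 1)   # 32
```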
+ + +**36. Understanding the complexity of the model ― In order to assess the complexity of a model, it is often useful to determine the number of parameters that its architecture will have. In a given layer of a convolutional neural network, it is done as follows:** + +⟶ Modelin karmaşıklığını anlama - Bir modelin karmaşıklığını değerlendirmek için mimarisinin sahip olacağı parametrelerin sayısını belirlemek genellikle yararlıdır. Bir evrişimsli sinir ağının belirli bir katmanında, aşağıdaki şekilde yapılır: + +
+ + +**37. [Illustration, Input size, Output size, Number of parameters, Remarks]** + +⟶ [Görsel Açıklama, Giriş boyutu, Çıkış boyutu, Parametre sayısı, Not] + +
+ + +**38. [One bias parameter per filter, In most cases, S<F]** + +⟶ [Filtre başına bir bias parametresi, Çoğu durumda S<F] + +<br> + + +**39. [Pooling operation done channel-wise, In most cases, S=F]** + +⟶ [Ortaklama işlemi kanal bazında yapılır, Çoğu durumda S=F] + +<br>
+ + +**40. [Input is flattened, One bias parameter per neuron, The number of FC neurons is free of structural constraints]** + +⟶ [Giriş bağlantılanmış, Nöron başına bir bias parametresi, tam bağlantı (FC) nöronlarının sayısı yapısal kısıtlamalardan arındırılmış] + +
+ + +**41. Receptive field ― The receptive field at layer k is the area denoted Rk×Rk of the input that each pixel of the k-th activation map can 'see'. By calling Fj the filter size of layer j and Si the stride value of layer i and with the convention S0=1, the receptive field at layer k can be computed with the formula:** + +⟶ Alıcı alan (Receptive field) ― k katmanındaki alıcı alan, k-ninci aktivasyon haritasının her bir pikselinin 'görebildiği' girişin Rk×Rk olarak belirtilen alanıdır. Fj, j katmanının filtre boyutu; Si, i katmanının adım aralığı olmak üzere ve S0=1 kuralıyla, k katmanındaki alıcı alan şu formülle hesaplanabilir: + +<br>
+ + +**42. In the example below, we have F1=F2=3 and S1=S2=1, which gives R2=1+2⋅1+2⋅1=5.** + +⟶ Aşağıdaki örnekte, F1=F2=3 ve S1=S2=1 için R2=1+2⋅1+2⋅1=5 sonucu elde edilir. + +
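The formula Rk=1+∑kj=1(Fj−1)∏j−1i=0Si can be sketched as a running product of strides; the last line reproduces the example above:

```python
def receptive_field(filters, strides):
    """R_k = 1 + sum_j (F_j - 1) * prod_{i<j} S_i, with the convention S_0 = 1."""
    r, jump = 1, 1          # jump = product of strides seen so far (S_0 = 1)
    for f, s in zip(filters, strides):
        r += (f - 1) * jump
        jump *= s
    return r

# The cheatsheet's example: F1=F2=3, S1=S2=1 gives R2 = 1 + 2*1 + 2*1 = 5.
receptive_field([3, 3], [1, 1])   # 5
```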
+ + +**43. Commonly used activation functions** + +⟶ Yaygın olarak kullanılan aktivasyon fonksiyonları + +
+ + +**44. Rectified Linear Unit ― The rectified linear unit layer (ReLU) is an activation function g that is used on all elements of the volume. It aims at introducing non-linearities to the network. Its variants are summarized in the table below:** + +⟶ Düzeltilmiş Doğrusal Birim ― Düzeltilmiş doğrusal birim katmanı (ReLU), hacmin tüm elemanlarına uygulanan bir g aktivasyon fonksiyonudur. Amacı, ağa doğrusal olmayanlık kazandırmaktır. Çeşitleri aşağıdaki tabloda özetlenmiştir: + +<br>
+ + +**45. [ReLU, Leaky ReLU, ELU, with]** + +⟶[ReLU, Sızıntı ReLU, ELU, ile] + +
+ + +**46. [Non-linearity complexities biologically interpretable, Addresses dying ReLU issue for negative values, Differentiable everywhere]** + +⟶ [Doğrusal olmama karmaşıklığı biyolojik olarak yorumlanabilir, Negatif değerler için ölen ReLU sorununu giderir, Her yerde türevlenebilir] + +
+ + +**47. Softmax ― The softmax step can be seen as a generalized logistic function that takes as input a vector of scores x∈Rn and outputs a vector of output probability p∈Rn through a softmax function at the end of the architecture. It is defined as follows:** + +⟶ Softmax ― Softmax adımı, x∈Rn skorlarının bir vektörünü girdi olarak alan ve mimarinin sonunda softmax fonksiyonundan p∈Rn çıkış olasılık vektörünü oluşturan genelleştirilmiş bir lojistik fonksiyon olarak görülebilir. Aşağıdaki gibi tanımlanır: + +
+ + +**48. where** + +⟶ burada + +<br>
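A minimal sketch of the softmax step pi=exi/∑jexj; subtracting max(x) is a standard numerical-stability trick, not part of the formula itself:

```python
import numpy as np

def softmax(x):
    """p_i = exp(x_i) / sum_j exp(x_j), shifted by max(x) for stability."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
# p is a valid probability vector: non-negative entries summing to 1.
```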
+ + +**49. Object detection** + +⟶ Nesne algılama + +
+ + +**50. Types of models ― There are 3 main types of object recognition algorithms, for which the nature of what is predicted is different. They are described in the table below:** + +⟶ Model tipleri ― Burada, nesne tanıma algoritmasının doğası gereği 3 farklı kestirim türü vardır. Aşağıdaki tabloda açıklanmıştır: + +
+ + +**51. [Image classification, Classification w. localization, Detection]** + +⟶ [Görüntü sınıflandırma, Sınıflandırma ve lokalizasyon (konumlama), Algılama] + +
+ + +**52. [Teddy bear, Book]** + +⟶ [Oyuncak ayı, Kitap] + +
+ + +**53. [Classifies a picture, Predicts probability of object, Detects an object in a picture, Predicts probability of object and where it is located, Detects up to several objects in a picture, Predicts probabilities of objects and where they are located]** + +⟶ [Bir görüntüyü sınıflandırır, Nesnenin olasılığını tahmin eder, Görüntüdeki bir nesneyi algılar/tanır, Nesnenin olasılığını ve bulunduğu yeri tahmin eder, Bir görüntüdeki birden fazla nesneyi algılar, Nesnelerin olasılıklarını ve nerede olduklarını tahmin eder] + +
+ + +**54. [Traditional CNN, Simplified YOLO, R-CNN, YOLO, R-CNN]** + +⟶ [Geleneksel CNN, Basitleştirilmiş YOLO (You-Only-Look-Once), R-CNN (R: Region - Bölge), YOLO, R-CNN] + +
+ + +**55. Detection ― In the context of object detection, different methods are used depending on whether we just want to locate the object or detect a more complex shape in the image. The two main ones are summed up in the table below:** + +⟶ Algılama ― Nesne algılama bağlamında, nesneyi yalnızca konumlandırmak mı yoksa görüntüdeki daha karmaşık bir şekli mi tespit etmek istediğimize bağlı olarak farklı yöntemler kullanılır. Başlıca iki yöntem aşağıdaki tabloda özetlenmiştir: + +<br>
+ + +**56. [Bounding box detection, Landmark detection]** + +⟶ [Sınırlayıcı kutu ile tespit, Karakteristik nokta algılama] + +
+ + +**57. [Detects the part of the image where the object is located, Detects a shape or characteristics of an object (e.g. eyes), More granular]** + +⟶ [Görüntüde nesnenin bulunduğu yeri algılar, Bir nesnenin şeklini veya özelliklerini algılar (örneğin gözler), Daha ayrıntılı] + +
+ + +**58. [Box of center (bx,by), height bh and width bw, Reference points (l1x,l1y), ..., (lnx,lny)]** + +⟶ [Kutu merkezi (bx,by), yükseklik bh ve genişlik bw, Referans noktalar (l1x,l1y), ..., (lnx,lny)] + +
+ + +**59. Intersection over Union ― Intersection over Union, also known as IoU, is a function that quantifies how correctly positioned a predicted bounding box Bp is over the actual bounding box Ba. It is defined as:** + +⟶ Kesiştirilmiş Bölgeler ― IoU (Intersection over Union) olarak da bilinen Kesiştirilmiş Bölgeler, tahmin edilen sınırlayıcı kutu Bp'nin gerçek sınırlayıcı kutu Ba üzerinde ne kadar doğru konumlandırıldığını ölçen bir fonksiyondur. Şu şekilde tanımlanır: + +<br>
+ + +**60. Remark: we always have IoU∈[0,1]. By convention, a predicted bounding box Bp is considered as being reasonably good if IoU(Bp,Ba)⩾0.5.** + +⟶ Not: her zaman IoU∈[0,1] olur. Kural olarak, tahmin edilen bir sınırlayıcı kutu Bp, IoU(Bp,Ba)⩾0.5 olması durumunda makul derecede iyi olarak kabul edilir. + +<br>
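A sketch of IoU for axis-aligned boxes given as (x1, y1, x2, y2) corners; the corner convention is an assumption of this example, since the cheatsheet's boxes use center/height/width:

```python
def iou(bp, ba):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    x1, y1 = max(bp[0], ba[0]), max(bp[1], ba[1])
    x2, y2 = min(bp[2], ba[2]), min(bp[3], ba[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)       # intersection area
    area_p = (bp[2] - bp[0]) * (bp[3] - bp[1])
    area_a = (ba[2] - ba[0]) * (ba[3] - ba[1])
    return inter / (area_p + area_a - inter)        # intersection / union

iou((0, 0, 2, 2), (1, 1, 3, 3))   # intersection 1, union 7, so 1/7
```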
+ + +**61. Anchor boxes ― Anchor boxing is a technique used to predict overlapping bounding boxes. In practice, the network is allowed to predict more than one box simultaneously, where each box prediction is constrained to have a given set of geometrical properties. For instance, the first prediction can potentially be a rectangular box of a given form, while the second will be another rectangular box of a different geometrical form.** + +⟶ Öneri (Anchor) kutular, örtüşen sınırlayıcı kutuları öngörmek için kullanılan bir tekniktir. Uygulamada, ağın aynı anda birden fazla kutuyu tahmin etmesine izin verilir, burada her kutu tahmini belirli bir geometrik öznitelik setine sahip olmakla sınırlıdır. Örneğin, ilk tahmin potansiyel olarak verilen bir formun dikdörtgen bir kutusudur, ikincisi ise farklı bir geometrik formun başka bir dikdörtgen kutusudur. + +
+ + +**62. Non-max suppression ― The non-max suppression technique aims at removing duplicate overlapping bounding boxes of a same object by selecting the most representative ones. After having removed all boxes having a probability prediction lower than 0.6, the following steps are repeated while there are boxes remaining:** + +⟶ Maksimum olmayan bastırma ― Maksimum olmayan bastırma tekniği, aynı nesneye ait yinelenen örtüşen sınırlayıcı kutuları, en temsili olanları seçerek kaldırmayı amaçlar. Olasılık tahmini 0.6'dan düşük olan tüm kutular çıkarıldıktan sonra, geriye kutu kaldıkça aşağıdaki adımlar tekrarlanır: + +<br>
+ + +**63. [For a given class, Step 1: Pick the box with the largest prediction probability., Step 2: Discard any box having an IoU⩾0.5 with the previous box.]** + +⟶ [Verilen bir sınıf için, Adım 1: En büyük tahmin olasılığı olan kutuyu seçin., Adım 2: Önceki kutuyla IoU⩾0.5 olan herhangi bir kutuyu çıkarın.] + +
+ + +**64. [Box predictions, Box selection of maximum probability, Overlap removal of same class, Final bounding boxes]** + +⟶ [Kutu tahmini/kestirimi, Maksimum olasılığa göre kutu seçimi, Aynı sınıf için örtüşme kaldırma, Son sınırlama kutuları] + +
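The two steps above can be sketched as follows; the thresholds match the text, while the corner-coordinate box format is an assumption of this example:

```python
def non_max_suppression(boxes, iou_threshold=0.5, prob_threshold=0.6):
    """boxes: list of (prob, (x1, y1, x2, y2)) predictions for one class."""
    def iou(b1, b2):
        x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
        x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
        a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
        return inter / (a1 + a2 - inter)

    # Remove boxes with a probability prediction lower than 0.6, then repeat:
    remaining = sorted((b for b in boxes if b[0] >= prob_threshold), reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)      # Step 1: box with largest probability
        kept.append(best)
        # Step 2: discard any box having IoU >= 0.5 with the previous box.
        remaining = [b for b in remaining if iou(best[1], b[1]) < iou_threshold]
    return kept

preds = [(0.9, (0, 0, 10, 10)), (0.8, (1, 1, 10, 10)), (0.7, (20, 20, 30, 30))]
non_max_suppression(preds)   # the 0.8 box overlaps the 0.9 box and is dropped
```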
+ + +**65. YOLO ― You Only Look Once (YOLO) is an object detection algorithm that performs the following steps:** + +⟶ YOLO ― You Only Look Once (YOLO), aşağıdaki adımları uygulayan bir nesne algılama algoritmasıdır: + +
+ + +**66. [Step 1: Divide the input image into a G×G grid., Step 2: For each grid cell, run a CNN that predicts y of the following form:, repeated k times]** + +⟶ [Adım 1: Giriş görüntüsünü G×G kare parçalara (hücrelere) bölün., Adım 2: Her bir hücre için, aşağıdaki formdan y'yi öngören bir CNN çalıştırın: k kez tekrarlayın] + +
+ + +**67. where pc is the probability of detecting an object, bx,by,bh,bw are the properties of the detected bouding box, c1,...,cp is a one-hot representation of which of the p classes were detected, and k is the number of anchor boxes.** + +⟶ Burada pc bir nesneyi algılama olasılığı; bx,by,bh,bw tespit edilen sınırlayıcı kutunun özellikleri; c1,...,cp, p sınıfından hangisinin tespit edildiğinin one-hot gösterimi ve k öneri (anchor) kutularının sayısıdır. + +<br>
+ + +**68. Step 3: Run the non-max suppression algorithm to remove any potential duplicate overlapping bounding boxes.** + +⟶ Adım3: Potansiyel yineli çakışan sınırlayıcı kutuları kaldırmak için maksimum olmayan bastırma algoritmasını çalıştır. + +
+ + +**69. [Original image, Division in GxG grid, Bounding box prediction, Non-max suppression]** + +⟶ [Orijinal görüntü, GxG kare parçalara (hücrelere) bölünmesi, Sınırlayıcı kutu kestirimi, Maksimum olmayan bastırma] + +
+ + +**70. Remark: when pc=0, then the network does not detect any object. In that case, the corresponding predictions bx,...,cp have to be ignored.** + +⟶ Not: pc=0 olduğunda, ağ herhangi bir nesne algılamamaktadır. Bu durumda, ilgili bx, ..., cp tahminleri dikkate alınmamalıdır. + +
+ + +**71. R-CNN ― Region with Convolutional Neural Networks (R-CNN) is an object detection algorithm that first segments the image to find potential relevant bounding boxes and then run the detection algorithm to find most probable objects in those bounding boxes.** + +⟶ R-CNN - Evrişimli Sinir Ağları ile Bölge Bulma (R-CNN), potansiyel olarak sınırlayıcı kutuları bulmak için görüntüyü bölütleyen (segmente eden) ve daha sonra sınırlayıcı kutularda en olası nesneleri bulmak için algılama algoritmasını çalıştıran bir nesne algılama algoritmasıdır. + +
+ + +**72. [Original image, Segmentation, Bounding box prediction, Non-max suppression]** + +⟶ [Orijinal görüntü, Bölütleme (Segmentasyon), Sınırlayıcı kutu kestirimi, Maksimum olmayan bastırma] + +<br>
+ + +**73. Remark: although the original algorithm is computationally expensive and slow, newer architectures enabled the algorithm to run faster, such as Fast R-CNN and Faster R-CNN.** + +⟶ Not: Orijinal algoritma hesaplamalı olarak maliyetli ve yavaş olmasına rağmen, yeni mimariler algoritmanın Hızlı R-CNN ve Daha Hızlı R-CNN gibi daha hızlı çalışmasını sağlamıştır. + +
+ + +**74. Face verification and recognition** + +⟶ Yüz doğrulama ve tanıma + +
+ + +**75. Types of models ― Two main types of model are summed up in table below:** + +⟶ Model tipleri ― İki temel model aşağıdaki tabloda özetlenmiştir: + +
+ + +**76. [Face verification, Face recognition, Query, Reference, Database]** + +⟶ [Yüz doğrulama, Yüz tanıma, Sorgu, Kaynak, Veri tabanı] + +
+ + +**77. [Is this the correct person?, One-to-one lookup, Is this one of the K persons in the database?, One-to-many lookup]** + +⟶ [Bu doğru kişi mi?, Bire bir arama, Veritabanındaki K kişilerden biri mi?, Bire-çok arama] + +
+ + +**78. One Shot Learning ― One Shot Learning is a face verification algorithm that uses a limited training set to learn a similarity function that quantifies how different two given images are. The similarity function applied to two images is often noted d(image 1,image 2).** + +⟶ Tek Atış (One-Shot) Öğrenme ― Tek Atış Öğrenme, verilen iki görüntünün ne kadar farklı olduğunu ölçen bir benzerlik fonksiyonunu öğrenmek için sınırlı bir eğitim seti kullanan bir yüz doğrulama algoritmasıdır. İki görüntüye uygulanan benzerlik fonksiyonu genellikle d(görüntü 1, görüntü 2) olarak gösterilir. + +<br>
+ + +**79. Siamese Network ― Siamese Networks aim at learning how to encode images to then quantify how different two images are. For a given input image x(i), the encoded output is often noted as f(x(i)).** + +⟶ Siyam (Siamese) Ağı - Siyam Ağı, iki görüntünün ne kadar farklı olduğunu ölçmek için görüntülerin nasıl kodlanacağını öğrenmeyi amaçlar. Belirli bir giriş görüntüsü x(i) için kodlanmış çıkış genellikle f(x(i)) olarak alınır. + +
+ + +**80. Triplet loss ― The triplet loss ℓ is a loss function computed on the embedding representation of a triplet of images A (anchor), P (positive) and N (negative). The anchor and the positive example belong to a same class, while the negative example to another one. By calling α∈R+ the margin parameter, this loss is defined as follows:** + +⟶ Üçlü kayıp - Üçlü kayıp ℓ, A (öneri), P (pozitif) ve N (negatif) görüntülerinin üçlüsünün gömülü gösterimde hesaplanan bir kayıp fonksiyonudur. Öneri ve pozitif örnek aynı sınıfa aitken, negatif örnek bir diğerine aittir. α∈R+ marjın parametresini çağırarak, bu kayıp aşağıdaki gibi tanımlanır: + +
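A sketch of ℓ(A,P,N)=max(d(A,P)−d(A,N)+α,0) on made-up embeddings, using squared Euclidean distance for d (a common choice, but an assumption here):

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """l(A,P,N) = max(d(A,P) - d(A,N) + alpha, 0) on embedded representations."""
    d_ap = np.sum((f_a - f_p) ** 2)   # anchor-positive distance
    d_an = np.sum((f_a - f_n) ** 2)   # anchor-negative distance
    return max(d_ap - d_an + alpha, 0.0)

f_a = np.array([0.0, 1.0])       # anchor embedding
f_p = np.array([0.1, 0.9])       # positive: same identity, close by
f_n = np.array([1.0, 0.0])       # negative: other identity, far away
triplet_loss(f_a, f_p, f_n)      # 0.0: the margin is already satisfied
```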
+ + +**81. Neural style transfer** + +⟶ Sinirsel stil transferi (aktarımı) + +
+ + +**82. Motivation ― The goal of neural style transfer is to generate an image G based on a given content C and a given style S.** + +⟶ Motivasyon ― Sinirsel stil transferinin amacı, verilen bir C içeriğine ve verilen bir S stiline dayanan bir G görüntüsü oluşturmaktır. + +
+ + +**83. [Content C, Style S, Generated image G]** + +⟶ [İçerik C, Stil S, Oluşturulan görüntü G] + +
+
+
+**84. Activation ― In a given layer l, the activation is noted a[l] and is of dimensions nH×nw×nc**
+
+⟶ Aktivasyon ― Belirli bir l katmanında, aktivasyon a[l] olarak gösterilir ve nH×nw×nc boyutlarındadır
+
+
+
+
+**85. Content cost function ― The content cost function Jcontent(C,G) is used to determine how the generated image G differs from the original content image C. It is defined as follows:**
+
+⟶ İçerik maliyeti fonksiyonu ― İçerik maliyeti fonksiyonu Jcontent(C,G), oluşturulan G görüntüsünün orijinal içerik görüntüsü C'den ne kadar farklı olduğunu belirlemek için kullanılır. Aşağıdaki gibi tanımlanır:
+
+
+
+
+**86. Style matrix ― The style matrix G[l] of a given layer l is a Gram matrix where each of its elements G[l]kk′ quantifies how correlated the channels k and k′ are. It is defined with respect to activations a[l] as follows:**
+
+⟶ Stil matrisi - Belirli bir l katmanının stil matrisi G[l], her bir G[l]kk′ elemanının k ve k′ kanallarının ne kadar ilişkili olduğunu ölçtüğü bir Gram matrisidir. a[l] aktivasyonlarına göre aşağıdaki gibi tanımlanır:
+
+
+ + +**87. Remark: the style matrix for the style image and the generated image are noted G[l] (S) and G[l] (G) respectively.** + +⟶ Not: Stil görüntüsü ve oluşturulan görüntü için stil matrisi, sırasıyla G[l] (S) ve G[l] (G) olarak belirtilmiştir. + +
+ + +**88. Style cost function ― The style cost function Jstyle(S,G) is used to determine how the generated image G differs from the style S. It is defined as follows:** + +⟶ Stil maliyeti fonksiyonu - Stil maliyeti fonksiyonu Jstyle(S,G), oluşturulan G görüntüsünün S stilinden ne kadar farklı olduğunu belirlemek için kullanılır. Aşağıdaki gibi tanımlanır: + +
+ + +**89. Overall cost function ― The overall cost function is defined as being a combination of the content and style cost functions, weighted by parameters α,β, as follows:** + +⟶ Genel maliyet fonksiyonu - Genel maliyet fonksiyonu, α, β parametreleriyle ağırlıklandırılan içerik ve stil maliyet fonksiyonlarının bir kombinasyonu olarak tanımlanır: + +
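Stil matrisi (Gram matrisi) ve genel maliyet fonksiyonu J(G)=αJcontent(C,G)+βJstyle(S,G), NumPy ile küçük bir taslak olarak şöyle ifade edilebilir (aktivasyon boyutları ve α, β değerleri varsayımsaldır):

```python
import numpy as np

def gram_matrix(a):
    """Stil matrisi G[l]: kanallar (k, k') arasındaki korelasyonları ölçer."""
    n_h, n_w, n_c = a.shape
    a_flat = a.reshape(n_h * n_w, n_c)
    return a_flat.T @ a_flat              # (nC x nC) boyutlu Gram matrisi

def overall_cost(j_content, j_style, alpha=10.0, beta=40.0):
    """Genel maliyet: J(G) = alpha*Jcontent(C,G) + beta*Jstyle(S,G)."""
    return alpha * j_content + beta * j_style

a = np.ones((2, 2, 3))    # varsayımsal küçük aktivasyon a[l]
g = gram_matrix(a)        # tüm kanallar özdeş olduğundan tüm girdiler 4
```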
+
+
+**90. Remark: a higher value of α will make the model care more about the content while a higher value of β will make it care more about the style.**
+
+⟶ Not: Daha yüksek bir α değeri modelin içeriğe daha fazla önem vermesini sağlarken, daha yüksek bir β değeri stile daha fazla önem vermesini sağlar.
+
+
+ + +**91. Architectures using computational tricks** + +⟶ Hesaplama ipuçları kullanan mimariler + +
+
+
+**92. Generative Adversarial Network ― Generative adversarial networks, also known as GANs, are composed of a generative and a discriminative model, where the generative model aims at generating the most truthful output that will be fed into the discriminative which aims at differentiating the generated and true image.**
+
+⟶ Çekişmeli Üretici Ağlar - GAN olarak da bilinen çekişmeli üretici ağlar, bir üretici ve bir ayırt edici modelden oluşur; üretici model, üretilen ve gerçek görüntüleri ayırt etmeyi amaçlayan ayırt edici modele beslenecek, gerçeğe en yakın çıktıyı üretmeyi amaçlar.
+
+
+ + +**93. [Training, Noise, Real-world image, Generator, Discriminator, Real Fake]** + +⟶ [Eğitim, Gürültü, Gerçek dünya görüntüsü, Üretici, Ayırıcı, Gerçek Sahte] + +
+
+
+**94. Remark: use cases using variants of GANs include text to image, music generation and synthesis.**
+
+⟶ Not: GAN varyantlarının kullanım alanları arasında yazıdan görüntü üretme, müzik üretimi ve sentez bulunur.
+
+
+ + +**95. ResNet ― The Residual Network architecture (also called ResNet) uses residual blocks with a high number of layers meant to decrease the training error. The residual block has the following characterizing equation:** + +⟶ ResNet ― Artık Ağ mimarisi (ResNet olarak da bilinir), eğitim hatasını azaltmak için çok sayıda katman içeren artık bloklar kullanır. Artık blok aşağıdaki karakterizasyon denklemine sahiptir: + +
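Artık bloğun karakteristik denklemi a[l+2]=g(z[l+2]+a[l]), NumPy ile küçük bir taslak olarak şöyle gösterilebilir (boyutlar ve ağırlık değerleri varsayımsaldır; kimlik yolunun bilgiyi nasıl taşıdığına dikkat edin):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def residual_block(a_l, w1, w2):
    """a[l+2] = g(z[l+2] + a[l]): atlama bağlantısı girişi çıkışa ekler."""
    z1 = w1 @ a_l
    a1 = relu(z1)
    z2 = w2 @ a1
    return relu(z2 + a_l)   # artık (residual) bağlantı

a_l = np.array([1.0, -1.0])
w = np.zeros((2, 2))
# Ağırlıklar sıfır olsa bile kimlik yolu bilgiyi ileri taşır:
cikti = residual_block(a_l, w, w)
```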
+
+
+**96. Inception Network ― This architecture uses inception modules and aims at giving a try at different convolutions in order to increase its performance through features diversification. In particular, it uses the 1×1 convolution trick to limit the computational burden.**
+
+⟶ Inception Ağı ― Bu mimari inception modüllerini kullanır ve özellik çeşitlendirmesi yoluyla performansını artırmak için farklı evrişimleri denemeyi amaçlar. Özellikle, hesaplama yükünü sınırlamak için 1×1 evrişim hilesini kullanır.
+
+
+
+
+**97. The Deep Learning cheatsheets are now available in [target language].**
+
+⟶ Derin Öğrenme el kitapları artık [hedef dilde] mevcuttur.
+
+
+ + +**98. Original authors** + +⟶ Orijinal yazarlar + +
+
+
+**99. Translated by X, Y and Z**
+
+⟶ X, Y ve Z tarafından çevrildi
+
+
+ + +**100. Reviewed by X, Y and Z** + +⟶ X, Y ve Z tarafından kontrol edildi + +
+ + +**101. View PDF version on GitHub** + +⟶ GitHub'da PDF sürümünü görüntüleyin + +
+
+
+**102. By X and Y**
+
+⟶ X ve Y tarafından
+
+
diff --git a/tr/cs-230-deep-learning-tips-and-tricks.md b/tr/cs-230-deep-learning-tips-and-tricks.md new file mode 100644 index 000000000..8bc96d387 --- /dev/null +++ b/tr/cs-230-deep-learning-tips-and-tricks.md @@ -0,0 +1,450 @@ +**1. Deep Learning Tips and Tricks cheatsheet** + +⟶ Derin öğrenme püf noktaları ve ipuçları el kitabı + +
+ + +**2. CS 230 - Deep Learning** + +⟶ CS 230 - Derin Öğrenme + +
+ + +**3. Tips and tricks** + +⟶ Püf noktaları ve ipuçları + +
+ + +**4. [Data processing, Data augmentation, Batch normalization]** + +⟶ [Veri işleme, Veri artırma, Küme normalizasyonu] + +
+
+**5. [Training a neural network, Epoch, Mini-batch, Cross-entropy loss, Backpropagation, Gradient descent, Updating weights, Gradient checking]**
+
+⟶ [Bir sinir ağının eğitilmesi, Dönem (Epok), Mini-küme, Çapraz-entropi yitimi (kaybı), Geriye yayılım, Gradyan (Bayır) iniş, Ağırlıkların güncellenmesi, Gradyan (Bayır) kontrolü]
+
+
+ + +**6. [Parameter tuning, Xavier initialization, Transfer learning, Learning rate, Adaptive learning rates]** + +⟶ [Parametrelerin ayarlanması, Xavier başlatma, Transfer öğrenme, Öğrenme oranı, Uyarlamalı öğrenme oranları] + +
+ + +**7. [Regularization, Dropout, Weight regularization, Early stopping]** + +⟶ [Düzenlileştirme, Seyreltme, Ağırlıkların düzeltilmesi, Erken durdurma] + +
+
+
+**8. [Good practices, Overfitting small batch, Gradient checking]**
+
+⟶ [İyi uygulamalar, Küçük kümenin ezberlenmesi (aşırı öğrenme), Gradyan kontrolü]
+
+
+ + +**9. View PDF version on GitHub** + +⟶ GitHub'da PDF sürümünü görüntüleyin + +
+ + +**10. Data processing** + +⟶ Veri işleme + +
+ + +**11. Data augmentation ― Deep learning models usually need a lot of data to be properly trained. It is often useful to get more data from the existing ones using data augmentation techniques. The main ones are summed up in the table below. More precisely, given the following input image, here are the techniques that we can apply:** + +⟶ Veri artırma ― Derin öğrenme modelleri genellikle uygun şekilde eğitilmek için çok fazla veriye ihtiyaç duyar. Veri artırma tekniklerini kullanarak mevcut verilerden daha fazla veri üretmek genellikle yararlıdır. Temel işlemler aşağıdaki tabloda özetlenmiştir. Daha doğrusu, aşağıdaki girdi görüntüsüne bakıldığında, uygulayabileceğimiz teknikler şunlardır: + +
+ + +**12. [Original, Flip, Rotation, Random crop]** + +⟶ [Orijinal, Çevirme, Rotasyon (Yönlendirme), Rastgele kırpma/kesme] + +
+
+
+**13. [Image without any modification, Flipped with respect to an axis for which the meaning of the image is preserved, Rotation with a slight angle, Simulates incorrect horizon calibration, Random focus on one part of the image, Several random crops can be done in a row]**
+
+⟶ [Herhangi bir değişiklik yapılmamış görüntü, Görüntünün anlamının korunduğu bir eksene göre çevrilmiş görüntü, Hafif açılı döndürme, Yanlış yatay kalibrasyonu simüle eder, Görüntünün bir bölümüne rastgele odaklanma, Arka arkaya birkaç rastgele kırpma yapılabilir]
+
+
+ + +**14. [Color shift, Noise addition, Information loss, Contrast change]** + +⟶ [Renk değişimi, Gürültü ekleme, Bilgi kaybı, Kontrast değişimi] + +
+
+
+**15. [Nuances of RGB is slightly changed, Captures noise that can occur with light exposure, Addition of noise, More tolerance to quality variation of inputs, Parts of image ignored, Mimics potential loss of parts of image, Luminosity changes, Controls difference in exposition due to time of day]**
+
+⟶ [RGB nüansları hafifçe değiştirilir, Işığa maruz kalma sırasında oluşabilecek gürültüyü yakalar, Gürültü ekleme, Girdilerin kalite değişimine daha fazla tolerans, Görüntünün bazı kısımları yok sayılır, Görüntü parçalarının olası kaybını taklit eder, Parlaklık değişimleri, Günün saatine bağlı pozlama farkını kontrol eder]
+
+
+
+
+**16. Remark: data is usually augmented on the fly during training.**
+
+⟶ Not: Veriler genellikle eğitim sırasında anında (on the fly) artırılır.
+
+
+
+
+**17. Batch normalization ― It is a step of hyperparameter γ,β that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of that we want to correct to the batch, it is done as follows:**
+
+⟶ Küme normalleştirme - γ,β hiperparametreleriyle {xi} kümesini normalleştiren bir adımdır. Kümede düzeltmek istediğimiz ortalama ve varyansı μB,σ2B ile gösterirsek, işlem şu şekilde yapılır:
+
+
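Küme normalleştirme adımı xi←γ·(xi−μB)/√(σ²B+ε)+β, NumPy ile küçük bir taslak olarak şöyle gösterilebilir (küme boyutu ve γ, β değerleri varsayımsaldır):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """xi <- gamma*(xi - muB)/sqrt(sigma2B + eps) + beta."""
    mu = x.mean(axis=0)            # küme ortalaması muB
    var = x.var(axis=0)            # küme varyansı sigma2B
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = batch_norm(x, gamma=1.0, beta=0.0)   # sütunlar ortalama 0, varyans ~1 olur
```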
+
+
+**18. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.**
+
+⟶ Genellikle tam bağlı/evrişimli bir katmandan sonra ve doğrusal olmayan bir katmandan önce yapılır; daha yüksek öğrenme oranlarına izin vermeyi ve başlangıç değerlerine güçlü bağımlılığı azaltmayı amaçlar.
+
+
+ + +**19. Training a neural network** + +⟶ Bir sinir ağının eğitilmesi + +
+ + +**20. Definitions** + +⟶ Tanımlamalar + +
+
+
+**21. Epoch ― In the context of training a model, epoch is a term used to refer to one iteration where the model sees the whole training set to update its weights.**
+
+⟶ Dönem (Epok/Epoch) ― Bir modelin eğitimi bağlamında dönem, modelin ağırlıklarını güncellemek için tüm eğitim setini gördüğü bir yinelemeyi ifade etmek için kullanılan bir terimdir.
+
+
+
+
+**22. Mini-batch gradient descent ― During the training phase, updating weights is usually not based on the whole training set at once due to computation complexities or one data point due to noise issues. Instead, the update step is done on mini-batches, where the number of data points in a batch is a hyperparameter that we can tune.**
+
+⟶ Mini-küme gradyan (bayır) iniş ― Eğitim aşamasında ağırlıkların güncellenmesi, genellikle ne hesaplama karmaşıklığı nedeniyle tüm eğitim setine bir kerede, ne de gürültü sorunları nedeniyle tek bir veri noktasına dayanır. Bunun yerine güncelleme adımı, bir kümedeki veri noktası sayısının ayarlayabileceğimiz bir hiperparametre olduğu mini-kümeler üzerinde yapılır.
+
+
+ + +**23. Loss function ― In order to quantify how a given model performs, the loss function L is usually used to evaluate to what extent the actual outputs y are correctly predicted by the model outputs z.** + +⟶ Yitim fonksiyonu ― Belirli bir modelin nasıl bir performans gösterdiğini ölçmek için, L yitim (kayıp) fonksiyonu genellikle y gerçek çıktıların, z model çıktıları tarafından ne kadar doğru tahmin edildiğini değerlendirmek için kullanılır. + +
+ + +**24. Cross-entropy loss ― In the context of binary classification in neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** + +⟶ Çapraz-entropi kaybı ― Yapay sinir ağlarında ikili sınıflandırma bağlamında, çapraz entropi kaybı L (z, y) yaygın olarak kullanılır ve şöyle tanımlanır: + +
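İkili çapraz-entropi kaybı L(z,y)=−[y·log(z)+(1−y)·log(1−z)], küçük bir taslak olarak şöyle hesaplanabilir (z ve y değerleri örnek amaçlıdır):

```python
import numpy as np

def cross_entropy(z, y):
    """L(z,y) = -[ y*log(z) + (1-y)*log(1-z) ]."""
    return -(y * np.log(z) + (1 - y) * np.log(1 - z))

dusuk_kayip = cross_entropy(0.9, 1)    # doğru ve kendinden emin tahmin -> küçük kayıp
yuksek_kayip = cross_entropy(0.1, 1)   # yanlış tahmin -> büyük kayıp
```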
+ + +**25. Finding optimal weights** + +⟶ Optimum ağırlıkların bulunması + +
+ + +**26. Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to each weight w is computed using the chain rule.** + +⟶ Geriye yayılım ― Geri yayılım, asıl çıktıyı ve istenen çıktıyı dikkate alarak sinir ağındaki ağırlıkları güncellemek için kullanılan bir yöntemdir. Her bir ağırlığa göre türev, zincir kuralı kullanılarak hesaplanır. + +
+ + +**27. Using this method, each weight is updated with the rule:** + +⟶ Bu yöntemi kullanarak, her ağırlık kurala göre güncellenir: + +
+ + +**28. Updating weights ― In a neural network, weights are updated as follows:** + +⟶ Ağırlıkların güncellenmesi ― Bir sinir ağında, ağırlıklar aşağıdaki gibi güncellenir: + +
+
+
+**29. [Step 1: Take a batch of training data and perform forward propagation to compute the loss, Step 2: Backpropagate the loss to get the gradient of the loss with respect to each weight, Step 3: Use the gradients to update the weights of the network.]**
+
+⟶ [Adım 1: Bir küme eğitim verisi alın ve kaybı hesaplamak için ileri yayılım yapın, Adım 2: Her ağırlığa göre kaybın gradyanını elde etmek için kaybı geriye yayın, Adım 3: Ağın ağırlıklarını güncellemek için gradyanları kullanın.]
+
+
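Yukarıdaki üç adım (ileri yayılım, geriye yayılım, ağırlık güncelleme), tek ağırlıklı doğrusal bir model üzerinde NumPy ile küçük bir taslak olarak şöyle gösterilebilir (veri, öğrenme oranı ve dönem sayısı tamamen varsayımsaldır):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y_true = 3.0 * x + 1.0                     # öğrenilecek hedef: w=3, b=1

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(200):
    # Adım 1: ileri yayılım ve kayıp (MSE)
    z = w * x + b
    kayip = np.mean((z - y_true) ** 2)
    # Adım 2: geriye yayılım (zincir kuralı ile gradyanlar)
    dw = np.mean(2 * (z - y_true) * x)
    db = np.mean(2 * (z - y_true))
    # Adım 3: ağırlıkların güncellenmesi w <- w - lr*dw
    w -= lr * dw
    b -= lr * db
```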
+ + +**30. [Forward propagation, Backpropagation, Weights update]** + +⟶ [İleri yayılım, Geriye yayılım, Ağırlıkların güncellenmesi] + +
+ + +**31. Parameter tuning** + +⟶ Parametre ayarlama + +
+ + +**32. Weights initialization** + +⟶ Ağırlıkların başlangıçlandırılması + +
+ + +**33. Xavier initialization ― Instead of initializing the weights in a purely random manner, Xavier initialization enables to have initial weights that take into account characteristics that are unique to the architecture.** + +⟶ Xavier başlangıcı (ilklendirme) ― Ağırlıkları tamamen rastgele bir şekilde başlatmak yerine, Xavier başlangıcı, mimariye özgü özellikleri dikkate alan ilk ağırlıkların alınmasını sağlar. + +
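Xavier başlangıcının yaygın bir biçimi, ağırlıkları Var(w)=1/n_in olacak şekilde ölçekleyerek örnekler; NumPy ile küçük bir taslak (katman boyutları varsayımsaldır, 2/(n_in+n_out) gibi varyantlar da kullanılır):

```python
import numpy as np

def xavier_init(n_in, n_out, seed=0):
    """Ağırlıkları Var(w) = 1/n_in olacak şekilde normal dağılımdan başlatır."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(1.0 / n_in), size=(n_out, n_in))

W = xavier_init(512, 256)   # 512 girişli, 256 çıkışlı varsayımsal katman
```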
+
+
+**34. Transfer learning ― Training a deep learning model requires a lot of data and more importantly a lot of time. It is often useful to take advantage of pre-trained weights on huge datasets that took days/weeks to train, and leverage it towards our use case. Depending on how much data we have at hand, here are the different ways to leverage this:**
+
+⟶ Transfer öğrenme ― Bir derin öğrenme modelini eğitmek çok fazla veri ve daha da önemlisi çok zaman gerektirir. Eğitilmesi günler/haftalar süren dev veri setleri üzerinde önceden eğitilmiş ağırlıklardan yararlanmak ve bunları kendi kullanım durumumuza uyarlamak genellikle yararlıdır. Elimizde ne kadar veri olduğuna bağlı olarak, bundan yararlanmanın farklı yolları şunlardır:
+
+
+ + +**35. [Training size, Illustration, Explanation]** + +⟶ [Eğitim boyutu, Görselleştirme, Açıklama] + +
+ + +**36. [Small, Medium, Large]** + +⟶ [Küçük, Orta, Büyük] + +
+
+
+**37. [Freezes all layers, trains weights on softmax, Freezes most layers, trains weights on last layers and softmax, Trains weights on layers and softmax by initializing weights on pre-trained ones]**
+
+⟶ [Tüm katmanları dondurur, softmax ağırlıklarını eğitir, Çoğu katmanı dondurur, son katmanların ve softmax'ın ağırlıklarını eğitir, Ağırlıkları önceden eğitilmiş olanlarla başlatarak katmanların ve softmax'ın ağırlıklarını eğitir]
+
+
+ + +**38. Optimizing convergence** + +⟶ Yakınsamayı optimize etmek + +
+ + +**39. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. It can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.** + +⟶ Öğrenme oranı (adımı) ― Genellikle α veya bazen η olarak belirtilen öğrenme oranı, ağırlıkların hangi hızda güncellendiğini belirler. Sabitlenebilir veya uyarlanabilir şekilde değiştirilebilir. Mevcut en popüler yöntemin adı Adam'dır ve öğrenme hızını ayarlayan bir yöntemdir. + +
+ +**40. Adaptive learning rates ― Letting the learning rate vary when training a model can reduce the training time and improve the numerical optimal solution. While Adam optimizer is the most commonly used technique, others can also be useful. They are summed up in the table below:** + +⟶ Uyarlanabilir öğrenme oranları ― Bir modelin eğitilmesi sırasında öğrenme oranının değişmesine izin vermek eğitim süresini kısaltabilir ve sayısal optimum çözümü iyileştirebilir. Adam optimizasyonu yöntemi en çok kullanılan teknik olmasına rağmen, diğer yöntemler de faydalı olabilir. Bunlar aşağıdaki tabloda özetlenmiştir: + +
+ + +**41. [Method, Explanation, Update of w, Update of b]** + +⟶ [Yöntem, Açıklama, w'ların güncellenmesi, b'nin güncellenmesi] + +
+ + +**42. [Momentum, Dampens oscillations, Improvement to SGD, 2 parameters to tune]** + +⟶ [Momentum, Osilasyonların azaltılması/yumuşatılması, SGD (Stokastik Gradyan/Bayır İniş) iyileştirmesi, Ayarlanacak 2 parametre] + +
+ + +**43. [RMSprop, Root Mean Square propagation, Speeds up learning algorithm by controlling oscillations]** + +⟶ [RMSprop, Ortalama Karekök yayılımı, Osilasyonları kontrol ederek öğrenme algoritmasını hızlandırır] + +
+ + +**44. [Adam, Adaptive Moment estimation, Most popular method, 4 parameters to tune]** + +⟶ [Adam, Uyarlamalı Moment tahmini/kestirimi, En popüler yöntem, Ayarlanacak 4 parametre] + +
+
+
+**45. Remark: other methods include Adadelta, Adagrad and SGD.**
+
+⟶ Not: Diğer yöntemler arasında Adadelta, Adagrad ve SGD bulunur.
+
+
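Tablodaki güncelleme kuralları, NumPy ile küçük bir taslak olarak şöyle ifade edilebilir (öğrenme oranı ve β katsayıları yaygın varsayılan değerlerdir; fonksiyon adları bu örnek için varsayımsaldır):

```python
import numpy as np

def momentum_update(w, dw, v, lr=0.01, beta=0.9):
    """Momentum: gradyanın hareketli ortalaması osilasyonları söndürür."""
    v = beta * v + (1 - beta) * dw
    return w - lr * v, v

def rmsprop_update(w, dw, s, lr=0.01, beta=0.9, eps=1e-8):
    """RMSprop: gradyan karelerinin ortalamasıyla adımı ölçekler."""
    s = beta * s + (1 - beta) * dw ** 2
    return w - lr * dw / (np.sqrt(s) + eps), s

def adam_update(w, dw, v, s, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: momentum + RMSprop, yanlılık düzeltmesiyle birlikte."""
    v = b1 * v + (1 - b1) * dw
    s = b2 * s + (1 - b2) * dw ** 2
    v_hat = v / (1 - b1 ** t)   # yanlılık düzeltmesi
    s_hat = s / (1 - b2 ** t)
    return w - lr * v_hat / (np.sqrt(s_hat) + eps), v, s
```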
+ + +**46. Regularization** + +⟶ Düzenlileştirme + +
+
+
+**47. Dropout ― Dropout is a technique used in neural networks to prevent overfitting the training data by dropping out neurons with probability p>0. It forces the model to avoid relying too much on particular sets of features.**
+
+⟶ Seyreltme ― Seyreltme, sinir ağlarında nöronları p>0 olasılıkla düşürerek eğitim verisinin aşırı öğrenilmesini (ezberlenmesini) önlemek için kullanılan bir tekniktir. Modeli, belirli özellik kümelerine çok fazla güvenmekten kaçınmaya zorlar.
+
+
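Seyreltme adımı, NumPy ile küçük bir "inverted dropout" taslağı olarak şöyle gösterilebilir (aktivasyonu 'keep'=1−p ile ölçekleyerek beklenen değer korunur; boyutlar ve p değeri varsayımsaldır):

```python
import numpy as np

def dropout(a, p=0.5, seed=0):
    """Nöronları p olasılıkla düşürür, kalanları 1-p ile ölçekler."""
    keep = 1.0 - p
    rng = np.random.default_rng(seed)
    mask = rng.random(a.shape) < keep
    return a * mask / keep   # beklenen aktivasyon değeri korunur

a = np.ones(10000)
out = dropout(a, p=0.5)      # değerlerin yaklaşık yarısı 0, kalanı 2 olur
```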
+
+
+**48. Remark: most deep learning frameworks parametrize dropout through the 'keep' parameter 1−p.**
+
+⟶ Not: Çoğu derin öğrenme kütüphanesi, seyreltmeyi 'keep' ('tutma') parametresi 1−p aracılığıyla parametrize eder.
+
+
+
+
+**49. Weight regularization ― In order to make sure that the weights are not too large and that the model is not overfitting the training set, regularization techniques are usually performed on the model weights. The main ones are summed up in the table below:**
+
+⟶ Ağırlık düzenlileştirme ― Ağırlıkların çok büyük olmadığından ve modelin eğitim setini aşırı öğrenmediğinden (ezberlemediğinden) emin olmak için genellikle model ağırlıklarına düzenlileştirme teknikleri uygulanır. Temel olanlar aşağıdaki tabloda özetlenmiştir:
+
+
+ + +**50. [LASSO, Ridge, Elastic Net]** + +⟶ [LASSO, Ridge, Elastic Net] + +
+
+**50 bis. [Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]**
+
+⟶ [Katsayıları 0'a düşürür, Değişken seçimi için iyi, Katsayıları daha küçük yapar, Değişken seçimi ile küçük katsayılar arasında ödünleşim sağlar]
+
+
+
+**51. Early stopping ― This regularization technique stops the training process as soon as the validation loss reaches a plateau or starts to increase.**
+
+⟶ Erken durdurma ― Bu düzenlileştirme tekniği, doğrulama kaybı bir platoya ulaştığında veya artmaya başladığında eğitim sürecini durdurur.
+
+
+
+
+**52. [Error, Validation, Training, early stopping, Epochs]**
+
+⟶ [Hata, Geçerleme/Doğrulama, Eğitim, erken durdurma, Dönemler (Epochs)]
+
+
+ + +**53. Good practices** + +⟶ İyi uygulamalar + +
+
+
+**54. Overfitting small batch ― When debugging a model, it is often useful to make quick tests to see if there is any major issue with the architecture of the model itself. In particular, in order to make sure that the model can be properly trained, a mini-batch is passed inside the network to see if it can overfit on it. If it cannot, it means that the model is either too complex or not complex enough to even overfit on a small batch, let alone a normal-sized training set.**
+
+⟶ Küçük kümenin ezberlenmesi ― Bir modelde hata ayıklarken, modelin mimarisinde büyük bir sorun olup olmadığını görmek için hızlı testler yapmak genellikle yararlıdır. Özellikle, modelin düzgün şekilde eğitilebildiğinden emin olmak için ağa bir mini-küme verilir ve modelin onu ezberleyip ezberleyemediğine bakılır. Ezberleyemiyorsa bu, modelin normal boyutlu bir eğitim setini bir yana bırakın, küçük bir kümeyi bile ezberleyemeyecek kadar ya çok karmaşık ya da yeterince karmaşık olmadığı anlamına gelir.
+
+
+ + +**55. Gradient checking ― Gradient checking is a method used during the implementation of the backward pass of a neural network. It compares the value of the analytical gradient to the numerical gradient at given points and plays the role of a sanity-check for correctness.** + +⟶ Gradyanların kontrolü ― Gradyan kontrolü, bir sinir ağının geriye doğru geçişinin uygulanması sırasında kullanılan bir yöntemdir. Analitik gradyanların değerini verilen noktalardaki sayısal gradyanlarla karşılaştırır ve doğruluk için bir kontrol rolü oynar. + +
+ + +**56. [Type, Numerical gradient, Analytical gradient]** + +⟶ [Tip, Sayısal gradyan, Analitik gradyan] + +
+ + +**57. [Formula, Comments]** + +⟶ [Formül, Açıklamalar] + +
+
+
+**58. [Expensive; loss has to be computed two times per dimension, Used to verify correctness of analytical implementation, Trade-off in choosing h not too small (numerical instability) nor too large (poor gradient approximation)]**
+
+⟶ [Maliyetli; kayıp her boyut için iki kez hesaplanmalıdır, Analitik uygulamanın doğruluğunu doğrulamak için kullanılır, h seçiminde ne çok küçük (sayısal kararsızlık) ne de çok büyük (zayıf gradyan yaklaşımı) olacak şekilde ödünleşim yapılır]
+
+
+ + +**59. ['Exact' result, Direct computation, Used in the final implementation]** + +⟶ ['Kesin' sonuç, Doğrudan hesaplama, Son uygulamada kullanılır] + +
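Sayısal ve analitik gradyanın karşılaştırılması, merkezi fark formülüyle küçük bir taslak olarak şöyle yapılabilir (örnek fonksiyon f(x)=x² tamamen varsayımsaldır):

```python
def numerical_gradient(f, x, h=1e-5):
    """Merkezi fark: (f(x+h) - f(x-h)) / (2h); boyut başına iki hesap gerektirir."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2            # varsayımsal örnek fonksiyon
analitik = lambda x: 2 * x      # elle türetilmiş ('kesin') gradyan
x0 = 3.0
fark = abs(numerical_gradient(f, x0) - analitik(x0))  # küçükse uygulama doğru kabul edilir
```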
+
+
+**60. The Deep Learning cheatsheets are now available in [target language].**
+
+⟶ Derin Öğrenme el kitabı şimdi [hedef dilde] mevcuttur.
+
+

+
+
+**61. Original authors**
+
+⟶ Orijinal yazarlar
+
+
+
+**62. Translated by X, Y and Z**
+
+⟶ X, Y ve Z tarafından çevrildi
+
+
+
+**63. Reviewed by X, Y and Z**
+
+⟶ X, Y ve Z tarafından gözden geçirildi
+
+
+
+**64. View PDF version on GitHub**
+
+⟶ GitHub'da PDF sürümünü görüntüleyin
+
+
+
+**65. By X and Y**
+
+⟶ X ve Y tarafından
+
+
diff --git a/tr/cs-230-recurrent-neural-networks.md b/tr/cs-230-recurrent-neural-networks.md new file mode 100644 index 000000000..17536b665 --- /dev/null +++ b/tr/cs-230-recurrent-neural-networks.md @@ -0,0 +1,674 @@ +**1. Recurrent Neural Networks cheatsheet** + +⟶ Tekrarlayan Yapay Sinir Ağları (Recurrent Neural Networks-RNN) El Kitabı + +
+ + +**2. CS 230 - Deep Learning** + +⟶ CS 230 - Derin Öğrenme + +
+ + +**3. [Overview, Architecture structure, Applications of RNNs, Loss function, Backpropagation]** + +⟶ [Genel bakış, Mimari yapı, RNN'lerin uygulamaları, Kayıp fonksiyonu, Geriye Yayılım] + +
+ + +**4. [Handling long term dependencies, Common activation functions, Vanishing/exploding gradient, Gradient clipping, GRU/LSTM, Types of gates, Bidirectional RNN, Deep RNN]** + +⟶ [Uzun vadeli bağımlılıkların ele alınması, Ortak aktivasyon fonksiyonları, Gradyanın kaybolması / patlaması, Gradyan kırpma, GRU / LSTM, Kapı tipleri, Çift Yönlü RNN, Derin RNN] + +
+ + +**5. [Learning word representation, Notations, Embedding matrix, Word2vec, Skip-gram, Negative sampling, GloVe]** + +⟶ [Kelime gösterimini öğrenme, Notasyonlar, Gömme matrisi, Word2vec, Skip-gram, Negatif örnekleme, GloVe] + +
+
+
+**6. [Comparing words, Cosine similarity, t-SNE]**
+
+⟶ [Kelimeleri karşılaştırma, Kosinüs benzerliği, t-SNE]
+
+
+
+
+**7. [Language model, n-gram, Perplexity]**
+
+⟶ [Dil modeli, n-gram, Karışıklık (Perplexity)]
+
+
+ + +**8. [Machine translation, Beam search, Length normalization, Error analysis, Bleu score]** + +⟶ [Makine çevirisi, Işın araması, Uzunluk normalizasyonu, Hata analizi, Bleu skoru] + +
+ + +**9. [Attention, Attention model, Attention weights]** + +⟶ [Dikkat, Dikkat modeli, Dikkat ağırlıkları] + +
+ + +**10. Overview** + +⟶ Genel Bakış + +
+ + +**11. Architecture of a traditional RNN ― Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. They are typically as follows:** + +⟶ Geleneksel bir RNN mimarisi - RNN'ler olarak da bilinen tekrarlayan sinir ağları, gizli durumlara sahipken önceki çıktıların girdi olarak kullanılmasına izin veren bir sinir ağları sınıfıdır. Tipik olarak aşağıdaki gibidirler: + +
+ + +**12. For each timestep t, the activation a and the output y are expressed as follows:** + +⟶ Her bir t zamanında, a aktivasyonu ve y çıktısı aşağıdaki gibi ifade edilir: + +
+ + +**13. and** + +⟶ ve + +
+ + +**14. where Wax,Waa,Wya,ba,by are coefficients that are shared temporally and g1,g2 activation functions.** + +⟶ burada Wax,Waa,Wya,ba,by geçici olarak paylaşılan katsayılardır ve g1,g2 aktivasyon fonksiyonlarıdır. + +
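Bir RNN hücresinin tek zaman adımı, a⟨t⟩=g1(Waa·a⟨t−1⟩+Wax·x⟨t⟩+ba) ve y⟨t⟩=g2(Wya·a⟨t⟩+by) denklemleriyle NumPy ile küçük bir taslak olarak şöyle yazılabilir (g1=tanh, g2=softmax seçimi ve boyutlar varsayımsaldır; aynı ağırlıkların her adımda paylaşıldığına dikkat edin):

```python
import numpy as np

def rnn_step(x_t, a_prev, Wax, Waa, Wya, ba, by):
    """a<t> = tanh(Waa a<t-1> + Wax x<t> + ba), y<t> = softmax(Wya a<t> + by)."""
    a_t = np.tanh(Waa @ a_prev + Wax @ x_t + ba)   # g1 = tanh
    z = Wya @ a_t + by
    y_t = np.exp(z) / np.sum(np.exp(z))            # g2 = softmax
    return a_t, y_t

# Varsayımsal küçük boyutlar: girdi 3, gizli durum 4, çıktı 2
rng = np.random.default_rng(0)
Wax = rng.normal(size=(4, 3))
Waa = rng.normal(size=(4, 4))
Wya = rng.normal(size=(2, 4))
ba, by = np.zeros(4), np.zeros(2)

a = np.zeros(4)
for t in range(5):                     # ağırlıklar zaman boyunca paylaşılır
    a, y = rnn_step(rng.normal(size=3), a, Wax, Waa, Wya, ba, by)
```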
+ + +**15. The pros and cons of a typical RNN architecture are summed up in the table below:** + +⟶ Tipik bir RNN mimarisinin artıları ve eksileri aşağıdaki tabloda özetlenmiştir: + +
+ + +**16. [Advantages, Possibility of processing input of any length, Model size not increasing with size of input, Computation takes into account historical information, Weights are shared across time]** + +⟶ [Avantajlar, Herhangi bir uzunluktaki girdilerin işlenmesi imkanı, Girdi büyüklüğüyle artmayan model boyutu, Geçmiş bilgileri dikkate alarak hesaplama, Zaman içinde paylaşılan ağırlıklar] + +
+ + +**17. [Drawbacks, Computation being slow, Difficulty of accessing information from a long time ago, Cannot consider any future input for the current state]** + +⟶ [Dezavantajları, Yavaş hesaplama, Uzun zaman önceki bilgiye erişme zorluğu, Mevcut durum için gelecekteki herhangi bir girdinin düşünülememesi] + +
+ + +**18. Applications of RNNs ― RNN models are mostly used in the fields of natural language processing and speech recognition. The different applications are summed up in the table below:** + +⟶ RNN'lerin Uygulamaları ― RNN modelleri çoğunlukla doğal dil işleme ve konuşma tanıma alanlarında kullanılır. Farklı uygulamalar aşağıdaki tabloda özetlenmiştir: + +
+ + +**19. [Type of RNN, Illustration, Example]** + +⟶ [RNN Türü, Örnekleme, Örnek] + +
+ + +**20. [One-to-one, One-to-many, Many-to-one, Many-to-many]** + +⟶ [Bire bir, Bire çok, Çoka bir, Çoka çok] + +
+
+
+**21. [Traditional neural network, Music generation, Sentiment classification, Name entity recognition, Machine translation]**
+
+⟶ [Geleneksel sinir ağı, Müzik üretimi, Duygu sınıflandırma, Adlandırılmış varlık tanıma, Makine çevirisi]
+
+
+
+
+**22. Loss function ― In the case of a recurrent neural network, the loss function L of all time steps is defined based on the loss at every time step as follows:**
+
+⟶ Kayıp fonksiyonu ― Tekrarlayan bir sinir ağı durumunda, tüm zaman adımlarının kayıp fonksiyonu L, her zaman adımındaki kayıp temel alınarak aşağıdaki gibi tanımlanır:
+
+
+ + +**23. Backpropagation through time ― Backpropagation is done at each point in time. At timestep T, the derivative of the loss L with respect to weight matrix W is expressed as follows:** + +⟶ Zamanla geri yayılım ― Geriye yayılım zamanın her noktasında yapılır. T zaman diliminde, ağırlık matrisi W'ye göre L kaybının türevi aşağıdaki gibi ifade edilir: + +
+ + +**24. Handling long term dependencies** + +⟶ Uzun vadeli bağımlılıkların ele alınması + +
+ + +**25. Commonly used activation functions ― The most common activation functions used in RNN modules are described below:** + +⟶ Yaygın olarak kullanılan aktivasyon fonksiyonları ― RNN modüllerinde kullanılan en yaygın aktivasyon fonksiyonları aşağıda açıklanmıştır: + +
+ + +**26. [Sigmoid, Tanh, RELU]** + +⟶ [Sigmoid, Tanh, RELU] + +
+ + +**27. Vanishing/exploding gradient ― The vanishing and exploding gradient phenomena are often encountered in the context of RNNs. The reason why they happen is that it is difficult to capture long term dependencies because of multiplicative gradient that can be exponentially decreasing/increasing with respect to the number of layers.** + +⟶ Kaybolan / patlayan gradyan ― Kaybolan ve patlayan gradyan fenomenlerine RNN'ler bağlamında sıklıkla rastlanır. Bunların olmasının nedeni, katman sayısına göre katlanarak azalan / artan olabilen çarpımsal gradyan nedeniyle uzun vadeli bağımlılıkları yakalamanın zor olmasıdır. + +
+ + +**28. Gradient clipping ― It is a technique used to cope with the exploding gradient problem sometimes encountered when performing backpropagation. By capping the maximum value for the gradient, this phenomenon is controlled in practice.** + +⟶ Gradyan kırpma ― Geri yayılım işlemi sırasında bazen karşılaşılan patlayan gradyan sorunuyla başa çıkmak için kullanılan bir tekniktir. Gradyan için maksimum değeri sınırlayarak, bu durum pratikte kontrol edilir. + +
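Gradyan kırpma, gradyan normu bir eşiği aştığında gradyanı ölçekleyerek küçük bir taslakla şöyle uygulanabilir (eşik değeri varsayımsaldır):

```python
import numpy as np

def clip_gradient(grad, max_norm=5.0):
    """Gradyan normu esigi asarsa, normu max_norm olacak sekilde olcekler."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = clip_gradient(np.array([30.0, 40.0]))   # normu 50 olan gradyan 5'e ölçeklenir
```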
+ + +**29. clipped** + +⟶ kırpılmış + +
+
+
+**30. Types of gates ― In order to remedy the vanishing gradient problem, specific gates are used in some types of RNNs and usually have a well-defined purpose. They are usually noted Γ and are equal to:**
+
+⟶ Kapı çeşitleri ― Kaybolan gradyan problemini çözmek için bazı RNN türlerinde belirli kapılar kullanılır ve bunlar genellikle iyi tanımlanmış bir amaca sahiptir. Genellikle Γ ile gösterilirler ve şuna eşittirler:
+
+
+ + +**31. where W,U,b are coefficients specific to the gate and σ is the sigmoid function. The main ones are summed up in the table below:** + +⟶ burada W, U, b kapıya özgü katsayılardır ve σ ise sigmoid fonksiyondur. Temel olanlar aşağıdaki tabloda özetlenmiştir: + +
+ + +**32. [Type of gate, Role, Used in]** + +⟶ [Kapının tipi, Rol, Kullanılan] + +
+ + +**33. [Update gate, Relevance gate, Forget gate, Output gate]** + +⟶ [Güncelleme kapısı, Uygunluk kapısı, Unutma kapısı, Çıkış kapısı] + +
+
+
+**34. [How much past should matter now?, Drop previous information?, Erase a cell or not?, How much to reveal of a cell?]**
+
+⟶ [Geçmiş şimdi ne kadar önemli olmalı?, Önceki bilgi atılsın mı?, Hücre silinsin mi silinmesin mi?, Hücrenin ne kadarı açığa çıkarılsın?]
+
+
+ + +**35. [LSTM, GRU]** + +⟶ [LSTM, GRU] + +
+ + +**36. GRU/LSTM ― Gated Recurrent Unit (GRU) and Long Short-Term Memory units (LSTM) deal with the vanishing gradient problem encountered by traditional RNNs, with LSTM being a generalization of GRU. Below is a table summing up the characterizing equations of each architecture:** + +⟶ GRU/LSTM ― Geçitli Tekrarlayan Birim (Gated Recurrent Unit-GRU) ve Uzun Kısa Süreli Bellek Birimleri (Long Short-Term Memory-LSTM), geleneksel RNN'lerin karşılaştığı kaybolan gradyan problemini ele alır, LSTM ise GRU'nun genelleştirilmiş halidir. Her bir mimarinin karakterizasyon denklemlerini özetleyen tablo aşağıdadır: + +
+ + +**37. [Characterization, Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Dependencies]** + +⟶ [Karakterizasyon, Geçitli Tekrarlayan Birim (GRU), Uzun Kısa Süreli Bellek (LSTM), Bağımlılıklar] + +
+
+
+**38. Remark: the sign ⋆ denotes the element-wise multiplication between two vectors.**
+
+⟶ Not: ⋆ işareti iki vektör arasındaki eleman bazında (element-wise) çarpımı belirtir.
+
+
+ + +**39. Variants of RNNs ― The table below sums up the other commonly used RNN architectures:** + +⟶ RNN varyantları ― Aşağıdaki tablo, diğer yaygın kullanılan RNN mimarilerini özetlemektedir: + +
+ + +**40. [Bidirectional (BRNN), Deep (DRNN)]** + +⟶ [Çift Yönlü (Bidirectional-BRNN), Derin (Deep-DRNN)] + +
+ + +**41. Learning word representation** + +⟶ Kelime temsilini öğrenme + +
+
+
+**42. In this section, we note V the vocabulary and |V| its size.**
+
+⟶ Bu bölümde kelime dağarcığını (sözlüğü) V ile, onun boyutunu ise |V| ile gösteriyoruz.
+
+
+ + +**43. Motivation and notations** + +⟶ Motivasyon ve notasyon + +
+ + +**44. Representation techniques ― The two main ways of representing words are summed up in the table below:** + +⟶ Temsil etme teknikleri ― Kelimeleri temsil etmenin iki temel yolu aşağıdaki tabloda özetlenmiştir: + +
+ + +**45. [1-hot representation, Word embedding]** + +⟶ [1-hot gösterim, Kelime gömme] + +
+ + +**46. [teddy bear, book, soft]** + +⟶ [oyuncak ayı, kitap, yumuşak] + +
+
+
+**47. [Noted ow, Naive approach, no similarity information, Noted ew, Takes into account words similarity]**
+
+⟶ [ow ile gösterilir, Naif yaklaşım, benzerlik bilgisi yok, ew ile gösterilir, Kelime benzerliğini dikkate alır]
+
+<br>
+
+
+**48. Embedding matrix ― For a given word w, the embedding matrix E is a matrix that maps its 1-hot representation ow to its embedding ew as follows:**
+
+⟶ Gömme matrisi ― Belirli bir w kelimesi için E gömme matrisi, w'nin 1-hot temsili ow'yu gömmesi ew ile aşağıdaki gibi eşleştiren bir matristir:
+
+<br>
+
+
+**49. Remark: learning the embedding matrix can be done using target/context likelihood models.**
+
+⟶ Not: Gömme matrisinin öğrenilmesi, hedef/bağlam olabilirlik modelleri kullanılarak yapılabilir.
+
+<br>
+ + +**50. Word embeddings** + +⟶ Kelime gömmeleri + +
+ + +**51. Word2vec ― Word2vec is a framework aimed at learning word embeddings by estimating the likelihood that a given word is surrounded by other words. Popular models include skip-gram, negative sampling and CBOW.** + +⟶ Word2vec ― Word2vec, belirli bir kelimenin diğer kelimelerle çevrili olma olasılığını tahmin ederek kelime gömmelerini öğrenmeyi amaçlayan bir çerçevedir. Popüler modeller arasında skip-gram, negatif örnekleme ve CBOW bulunur. + +
+ + +**52. [A cute teddy bear is reading, teddy bear, soft, Persian poetry, art]** + +⟶ [Sevimli ayıcık okuyor, ayıcık, yumuşak, Farsça şiir, sanat] + +
+
+
+**53. [Train network on proxy task, Extract high-level representation, Compute word embeddings]**
+
+⟶ [Ağı vekil (proxy) görev üzerinde eğitme, Üst düzey gösterimi çıkarma, Kelime gömmelerini hesaplama]
+
+<br>
+
+
+**54. Skip-gram ― The skip-gram word2vec model is a supervised learning task that learns word embeddings by assessing the likelihood of any given target word t happening with a context word c. By noting θt a parameter associated with t, the probability P(t|c) is given by:**
+
+⟶ Skip-gram ― Skip-gram word2vec modeli, verilen herhangi bir t hedef kelimesinin c gibi bir bağlam kelimesi ile gerçekleşme olasılığını değerlendirerek kelime gömmelerini öğrenen denetimli bir öğrenme görevidir. θt, t ile ilişkili bir parametre olmak üzere, P(t|c) olasılığı şöyle verilir:
+
+<br>
+ + +**55. Remark: summing over the whole vocabulary in the denominator of the softmax part makes this model computationally expensive. CBOW is another word2vec model using the surrounding words to predict a given word.** + +⟶ Not: Softmax bölümünün paydasındaki tüm kelime dağarcığını toplamak, bu modeli hesaplama açısından maliyetli kılar. CBOW, verilen bir kelimeyi tahmin etmek için çevreleyen kelimeleri kullanan başka bir word2vec modelidir. + +
+
+
+**56. Negative sampling ― It is a set of binary classifiers using logistic regressions that aim at assessing how a given context and a given target words are likely to appear simultaneously, with the models being trained on sets of k negative examples and 1 positive example. Given a context word c and a target word t, the prediction is expressed by:**
+
+⟶ Negatif örnekleme ― Verilen bir bağlam ile verilen bir hedef kelimenin eşzamanlı olarak ortaya çıkma olasılığını değerlendirmeyi amaçlayan, lojistik regresyon kullanan ikili sınıflandırıcılar kümesidir; modeller k negatif örnek ve 1 pozitif örnekten oluşan kümeler üzerinde eğitilir. Bağlam kelimesi c ve hedef kelimesi t verildiğinde, tahmin şöyle ifade edilir:
+
+<br>
+
+
+**57. Remark: this method is less computationally expensive than the skip-gram model.**
+
+⟶ Not: Bu yöntem, hesaplama açısından skip-gram modelinden daha az maliyetlidir.
+
+<br>
+
+
+**57bis. GloVe ― The GloVe model, short for global vectors for word representation, is a word embedding technique that uses a co-occurrence matrix X where each Xi,j denotes the number of times that a target i occurred with a context j. Its cost function J is as follows:**
+
+⟶ GloVe ― Kelime temsili için global vektörler ifadesinin kısaltması olan GloVe modeli, her bir Xi,j öğesinin bir i hedefinin bir j bağlamıyla kaç kez birlikte geçtiğini belirttiği bir eş-oluşum (co-occurrence) matrisi X kullanan bir kelime gömme tekniğidir. Maliyet fonksiyonu J aşağıdaki gibidir:
+
+<br>
+
+
+**58. where f is a weighting function such that Xi,j=0⟹f(Xi,j)=0.
Given the symmetry that e and θ play in this model, the final word embedding e(final)w is given by:**
+
+⟶ f, Xi,j=0⟹f(Xi,j)=0 olacak şekilde bir ağırlıklandırma fonksiyonudur.
Bu modelde e ve θ'nın oynadığı simetri göz önüne alındığında, nihai kelime gömmesi e(final)w şöyle ifade edilir:
+
+<br>
+
+
+**59. Remark: the individual components of the learned word embeddings are not necessarily interpretable.**
+
+⟶ Not: Öğrenilen kelime gömmelerinin tekil bileşenleri mutlaka yorumlanabilir olmak zorunda değildir.
+
+<br>
+ + +**60. Comparing words** + +⟶ Kelimelerin karşılaştırılması + +
+ + +**61. Cosine similarity ― The cosine similarity between words w1 and w2 is expressed as follows:** + +⟶ Kosinüs benzerliği ― w1 ve w2 kelimeleri arasındaki kosinüs benzerliği şu şekilde ifade edilir: + +
+ + +**62. Remark: θ is the angle between words w1 and w2.** + +⟶ Not: θ, w1 ve w2 kelimeleri arasındaki açıdır. + +
+
+
+**63. t-SNE ― t-SNE (t-distributed Stochastic Neighbor Embedding) is a technique aimed at reducing high-dimensional embeddings into a lower dimensional space. In practice, it is commonly used to visualize word vectors in the 2D space.**
+
+⟶ t-SNE ― t-SNE (t-dağılımlı Stokastik Komşu Gömme), yüksek boyutlu gömmeleri daha düşük boyutlu bir uzaya indirmeyi amaçlayan bir tekniktir. Uygulamada, kelime vektörlerini 2B uzayda görselleştirmek için yaygın olarak kullanılır.
+
+<br>
+
+
+**64. [literature, art, book, culture, poem, reading, knowledge, entertaining, loveable, childhood, kind, teddy bear, soft, hug, cute, adorable]**
+
+⟶ [edebiyat, sanat, kitap, kültür, şiir, okuma, bilgi, eğlendirici, sevilesi, çocukluk, kibar, oyuncak ayı, yumuşak, sarılma, sevimli, tapılası]
+
+<br>
+ + +**65. Language model** + +⟶ Dil modeli + +
+ + +**66. Overview ― A language model aims at estimating the probability of a sentence P(y).** + +⟶ Genel bakış - Bir dil modeli P(y) cümlesinin olasılığını tahmin etmeyi amaçlar. + +
+ + +**67. n-gram model ― This model is a naive approach aiming at quantifying the probability that an expression appears in a corpus by counting its number of appearance in the training data.** + +⟶ n-gram modeli ― Bu model, eğitim verilerindeki görünüm sayısını sayarak bir ifadenin bir korpusta ortaya çıkma olasılığını ölçmeyi amaçlayan naif bir yaklaşımdır. + +
+
+
+**68. Perplexity ― Language models are commonly assessed using the perplexity metric, also known as PP, which can be interpreted as the inverse probability of the dataset normalized by the number of words T. The perplexity is such that the lower, the better and is defined as follows:**
+
+⟶ Karışıklık (Perplexity) ― Dil modelleri yaygın olarak, PP olarak da bilinen karışıklık metriği kullanılarak değerlendirilir; bu metrik, veri setinin T kelime sayısıyla normalize edilmiş ters olasılığı olarak yorumlanabilir. Karışıklık değeri ne kadar düşükse o kadar iyidir ve şöyle tanımlanır:
+
+<br>
+ + +**69. Remark: PP is commonly used in t-SNE.** + +⟶ Not: PP, t-SNE'de yaygın olarak kullanılır. + +
+ + +**70. Machine translation** + +⟶ Makine çevirisi + +
+
+
+**71. Overview ― A machine translation model is similar to a language model except it has an encoder network placed before. For this reason, it is sometimes referred as a conditional language model. The goal is to find a sentence y such that:**
+
+⟶ Genel bakış ― Bir makine çeviri modeli, öncesine bir kodlayıcı ağ yerleştirilmiş olması dışında bir dil modeline benzer. Bu nedenle bazen koşullu dil modeli olarak da adlandırılır. Amaç, şu koşulu sağlayan bir y cümlesi bulmaktır:
+
+<br>
+
+
+**72. Beam search ― It is a heuristic search algorithm used in machine translation and speech recognition to find the likeliest sentence y given an input x.**
+
+⟶ Işın arama ― Makine çevirisi ve konuşma tanımada, verilen bir x girişi için en olası y cümlesini bulmak amacıyla kullanılan sezgisel bir arama algoritmasıdır.
+
+<br>
+
+
+**73. [Step 1: Find top B likely words y<1>, Step 2: Compute conditional probabilities y|x,y<1>,...,y, Step 3: Keep top B combinations x,y<1>,...,y, End process at a stop word]**
+
+⟶ [Adım 1: En olası B kelimeyi bulun y<1>, Adım 2: Koşullu olasılıkları hesaplayın y|x,y<1>,...,y, Adım 3: En olası B kombinasyonu koruyun x,y<1>,...,y, Bir durdurma kelimesinde işlemi sonlandırın]
+
+<br>
+ + +**74. Remark: if the beam width is set to 1, then this is equivalent to a naive greedy search.** + +⟶ Not: Eğer ışın genişliği 1 olarak ayarlanmışsa, bu naif (naive) bir açgözlü (greedy) aramaya eşdeğerdir. + +
+
+
+**75. Beam width ― The beam width B is a parameter for beam search. Large values of B yield better results but with slower performance and increased memory. Small values of B lead to worse results but are less computationally intensive. A standard value for B is around 10.**
+
+⟶ Işın genişliği ― Işın genişliği B, ışın araması için bir parametredir. Büyük B değerleri daha iyi sonuçlar verir, ancak performansı yavaşlatır ve bellek kullanımını artırır. Küçük B değerleri daha kötü sonuçlara yol açar, ancak hesaplama açısından daha az yoğundur. B için standart bir değer 10 civarındadır.
+
+<br>
+ + +**76. Length normalization ― In order to improve numerical stability, beam search is usually applied on the following normalized objective, often called the normalized log-likelihood objective, defined as:** + +⟶ Uzunluk normalizasyonu ― Sayısal stabiliteyi arttırmak için, ışın arama genellikle, aşağıdaki gibi tanımlanan normalize edilmiş log-olabilirlik amacı olarak adlandırılan normalize edilmiş hedefe uygulanır: + +
+ + +**77. Remark: the parameter α can be seen as a softener, and its value is usually between 0.5 and 1.** + +⟶ Not: α parametresi yumuşatıcı olarak görülebilir ve değeri genellikle 0,5 ile 1 arasındadır. + +
+
+
+**78. Error analysis ― When obtaining a predicted translation ˆy that is bad, one can wonder why we did not get a good translation y∗ by performing the following error analysis:**
+
+⟶ Hata analizi ― Kötü bir ˆy çeviri tahmini elde edildiğinde, aşağıdaki hata analizi yapılarak neden iyi bir y∗ çevirisi elde edilemediği sorgulanabilir:
+
+<br>
+ + +**79. [Case, Root cause, Remedies]** + +⟶ [Durum, Ana neden, Çözümler] + +
+
+
+**80. [Beam search faulty, RNN faulty, Increase beam width, Try different architecture, Regularize, Get more data]**
+
+⟶ [Işın arama hatalı, RNN hatalı, Işın genişliğini artırma, Farklı mimari deneme, Düzenlileştirme, Daha fazla veri edinme]
+
+<br>
+ + +**81. Bleu score ― The bilingual evaluation understudy (bleu) score quantifies how good a machine translation is by computing a similarity score based on n-gram precision. It is defined as follows:** + +⟶ Bleu puanı ― İki dilli değerlendirme alt ölçeği (bleu) puanı, makine çevirisinin ne kadar iyi olduğunu, n-gram hassasiyetine dayalı bir benzerlik puanı hesaplayarak belirler. Aşağıdaki gibi tanımlanır: + +
+
+
+**82. where pn is the bleu score on n-grams only, defined as follows:**
+
+⟶ burada pn, yalnızca n-gramlar üzerinde hesaplanan bleu skorudur ve aşağıdaki gibi tanımlanır:
+
+<br>
+
+
+**83. Remark: a brevity penalty may be applied to short predicted translations to prevent an artificially inflated bleu score.**
+
+⟶ Not: Yapay olarak şişirilmiş bir bleu skorunu önlemek için, kısa tahmin edilen çevirilere bir kısalık cezası (brevity penalty) uygulanabilir.
+
+<br>
+ + +**84. Attention** + +⟶ Dikkat + +
+
+
+**85. Attention model ― This model allows an RNN to pay attention to specific parts of the input that is considered as being important, which improves the performance of the resulting model in practice. By noting α the amount of attention that the output y should pay to the activation a and c the context at time t, we have:**
+
+⟶ Dikkat modeli ― Bu model, bir RNN'nin girişin önemli olduğu düşünülen belirli kısımlarına dikkat etmesine olanak sağlar; bu da ortaya çıkan modelin pratikteki performansını artırır. y çıktısının a aktivasyonuna göstermesi gereken dikkat miktarını α ile ve t zamanındaki bağlamı c ile gösterirsek:
+
+<br>
+ + +**86. with** + +⟶ ile + +
+ + +**87. Remark: the attention scores are commonly used in image captioning and machine translation.** + +⟶ Not: Dikkat skorları, görüntü altyazılama ve makine çevirisinde yaygın olarak kullanılır. + +
+ + +**88. A cute teddy bear is reading Persian literature.** + +⟶ Sevimli bir oyuncak ayı Fars edebiyatı okuyor. + +
+
+
+**89. Attention weight ― The amount of attention that the output y should pay to the activation a is given by α computed as follows:**
+
+⟶ Dikkat ağırlığı ― y çıktısının a aktivasyonuna göstermesi gereken dikkat miktarı, aşağıdaki gibi hesaplanan α ile verilir:
+
+<br>
+ + +**90. Remark: computation complexity is quadratic with respect to Tx.** + +⟶ Not: hesaplama karmaşıklığı Tx'e göre ikinci derecedendir. + +
+ + +**91. The Deep Learning cheatsheets are now available in [target language].** + +⟶ Derin Öğrenme el kitapları şimdi [hedef dilde] mevcuttur. + +
+
+**92. Original authors**
+
+⟶ Orijinal yazarlar
+
+<br>
+ +**93. Translated by X, Y and Z** + +⟶ X, Y ve Z tarafından çevrilmiştir. + +
+ +**94. Reviewed by X, Y and Z** + +⟶ X, Y ve Z tarafından gözden geçirilmiştir. + +
+ +**95. View PDF version on GitHub** + +⟶ GitHub'da PDF versiyonunu görüntüleyin. + +
+ +**96. By X and Y** + +⟶ X ve Y tarafından + +
diff --git a/tr/refresher-probability.md b/tr/refresher-probability.md deleted file mode 100644 index 5c9b34656..000000000 --- a/tr/refresher-probability.md +++ /dev/null @@ -1,381 +0,0 @@ -**1. Probabilities and Statistics refresher** - -⟶ - -
- -**2. Introduction to Probability and Combinatorics** - -⟶ - -
- -**3. Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by S.** - -⟶ - -
- -**4. Event ― Any subset E of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.** - -⟶ - -
- -**5. Axioms of probability For each event E, we denote P(E) as the probability of event E occuring.** - -⟶ - -
- -**6. Axiom 1 ― Every probability is between 0 and 1 included, i.e:** - -⟶ - -
- -**7. Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** - -⟶ - -
- -**8. Axiom 3 ― For any sequence of mutually exclusive events E1,...,En, we have:** - -⟶ - -
- -**9. Permutation ― A permutation is an arrangement of r objects from a pool of n objects, in a given order. The number of such arrangements is given by P(n,r), defined as:** - -⟶ - -
- -**10. Combination ― A combination is an arrangement of r objects from a pool of n objects, where the order does not matter. The number of such arrangements is given by C(n,r), defined as:** - -⟶ - -
- -**11. Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)** - -⟶ - -
- -**12. Conditional Probability** - -⟶ - -
- -**13. Bayes' rule ― For events A and B such that P(B)>0, we have:** - -⟶ - -
- -**14. Remark: we have P(A∩B)=P(A)P(B|A)=P(A|B)P(B)** - -⟶ - -
- -**15. Partition ― Let {Ai,i∈[[1,n]]} be such that for all i, Ai≠∅. We say that {Ai} is a partition if we have:** - -⟶ - -
- -**16. Remark: for any event B in the sample space, we have P(B)=n∑i=1P(B|Ai)P(Ai).** - -⟶ - -
- -**17. Extended form of Bayes' rule ― Let {Ai,i∈[[1,n]]} be a partition of the sample space. We have:** - -⟶ - -
- -**18. Independence ― Two events A and B are independent if and only if we have:** - -⟶ - -
- -**19. Random Variables** - -⟶ - -
- -**20. Definitions** - -⟶ - -
- -**21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.** - -⟶ - -
- -**22. Cumulative distribution function (CDF) ― The cumulative distribution function F, which is monotonically non-decreasing and is such that limx→−∞F(x)=0 and limx→+∞F(x)=1, is defined as:** - -⟶ - -
- -**23. Remark: we have P(a - -**24. Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.** - -⟶ - -
- -**25. Relationships involving the PDF and CDF ― Here are the important properties to know in the discrete (D) and the continuous (C) cases.** - -⟶ - -
- -**26. [Case, CDF F, PDF f, Properties of PDF]** - -⟶ - -
- -**27. Expectation and Moments of the Distribution ― Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[Xk] and characteristic function ψ(ω) for the discrete and continuous cases:** - -⟶ - -
- -**28. Variance ― The variance of a random variable, often noted Var(X) or σ2, is a measure of the spread of its distribution function. It is determined as follows:** - -⟶ - -
- -**29. Standard deviation ― The standard deviation of a random variable, often noted σ, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:** - -⟶ - -
- -**30. Transformation of random variables ― Let the variables X and Y be linked by some function. By noting fX and fY the distribution function of X and Y respectively, we have:** - -⟶ - -
- -**31. Leibniz integral rule ― Let g be a function of x and potentially c, and a,b boundaries that may depend on c. We have:** - -⟶ - -
- -**32. Probability Distributions** - -⟶ - -
- -**33. Chebyshev's inequality ― Let X be a random variable with expected value μ. For k,σ>0, we have the following inequality:** - -⟶ - -
- -**34. Main distributions ― Here are the main distributions to have in mind:** - -⟶ - -
- -**35. [Type, Distribution]** - -⟶ - -
- -**36. Jointly Distributed Random Variables** - -⟶ - -
- -**37. Marginal density and cumulative distribution ― From the joint density probability function fXY , we have** - -⟶ - -
- -**38. [Case, Marginal density, Cumulative function]** - -⟶ - -
- -**39. Conditional density ― The conditional density of X with respect to Y, often noted fX|Y, is defined as follows:** - -⟶ - -
- -**40. Independence ― Two random variables X and Y are said to be independent if we have:** - -⟶ - -
- -**41. Covariance ― We define the covariance of two random variables X and Y, that we note σ2XY or more commonly Cov(X,Y), as follows:** - -⟶ - -
- -**42. Correlation ― By noting σX,σY the standard deviations of X and Y, we define the correlation between the random variables X and Y, noted ρXY, as follows:** - -⟶ - -
- -**43. Remark 1: we note that for any random variables X,Y, we have ρXY∈[−1,1].** - -⟶ - -
- -**44. Remark 2: If X and Y are independent, then ρXY=0.** - -⟶ - -
- -**45. Parameter estimation** - -⟶ - -
- -**46. Definitions** - -⟶ - -
- -**47. Random sample ― A random sample is a collection of n random variables X1,...,Xn that are independent and identically distributed with X.** - -⟶ - -
- -**48. Estimator ― An estimator is a function of the data that is used to infer the value of an unknown parameter in a statistical model.** - -⟶ - -
- -**49. Bias ― The bias of an estimator ^θ is defined as being the difference between the expected value of the distribution of ^θ and the true value, i.e.:** - -⟶ - -
- -**50. Remark: an estimator is said to be unbiased when we have E[^θ]=θ.** - -⟶ - -
- -**51. Estimating the mean** - -⟶ - -
- -**52. Sample mean ― The sample mean of a random sample is used to estimate the true mean μ of a distribution, is often noted ¯¯¯¯¯X and is defined as follows:** - -⟶ - -
- -**53. Remark: the sample mean is unbiased, i.e E[¯¯¯¯¯X]=μ.** - -⟶ - -
- -**54. Central Limit Theorem ― Let us have a random sample X1,...,Xn following a given distribution with mean μ and variance σ2, then we have:** - -⟶ - -
- -**55. Estimating the variance** - -⟶ - -
- -**56. Sample variance ― The sample variance of a random sample is used to estimate the true variance σ2 of a distribution, is often noted s2 or ^σ2 and is defined as follows:** - -⟶ - -
- -**57. Remark: the sample variance is unbiased, i.e E[s2]=σ2.** - -⟶ - -
- -**58. Chi-Squared relation with sample variance ― Let s2 be the sample variance of a random sample. We have:** - -⟶ - -
- -**59. [Introduction, Sample space, Event, Permutation]** - -⟶ - -
- -**60. [Conditional probability, Bayes' rule, Independence]** - -⟶ - -
- -**61. [Random variables, Definitions, Expectation, Variance]** - -⟶ - -
- -**62. [Probability distributions, Chebyshev's inequality, Main distributions]** - -⟶ - -
- -**63. [Jointly distributed random variables, Density, Covariance, Correlation]** - -⟶ - -
- -**64. [Parameter estimation, Mean, Variance]** - -⟶ diff --git a/uk/cs-229-probability.md b/uk/cs-229-probability.md new file mode 100644 index 000000000..a09ab965d --- /dev/null +++ b/uk/cs-229-probability.md @@ -0,0 +1,381 @@ +**1. Probabilities and Statistics refresher** + +⟶ Швидке повторення з теорії ймовірностей та комбінаторики. + +
+ +**2. Introduction to Probability and Combinatorics** + +⟶ Вступ до теорії ймовірностей та комбінаторики. + +
+
+**3. Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by S.**
+
+⟶ Простір елементарних подій ― Множина всіх можливих результатів експерименту називається простором елементарних подій і позначається літерою S.
+
+<br>
+
+**4. Event ― Any subset E of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.**
+
+⟶ Випадкова подія ― Будь-яка підмножина E простору елементарних подій називається подією. Тобто подія - це множина, що складається з можливих результатів експерименту. Якщо результат експерименту міститься в E, то кажемо, що E відбулася.
+
+<br>
+ +**5. Axioms of probability For each event E, we denote P(E) as the probability of event E occuring.** + +⟶ Аксіоми теорії ймовірностей. Для кожної події Е, P(E) є ймовірністю події Е. + +
+
+**6. Axiom 1 ― Every probability is between 0 and 1 included, i.e:**
+
+⟶ Аксіома 1 ― Кожна ймовірність знаходиться між 0 та 1 включно, тобто:
+
+<br>
+ +**7. Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** + +⟶ Аксіома 2 - Ймовірність що як мінімум одна подія з простору елементарних подій відбудеться дорівнює 1. + +
+ +**8. Axiom 3 ― For any sequence of mutually exclusive events E1,...,En, we have:** + +⟶ Аксіома 3 - Для будь-якої послідовності взаємновиключних подій E1,...,En, ми маємо: + +
+
+**9. Permutation ― A permutation is an arrangement of r objects from a pool of n objects, in a given order. The number of such arrangements is given by P(n,r), defined as:**
+
+⟶ Перестановка ― Перестановка - це вибір r об'єктів з набору n об'єктів у заданому порядку. Кількість таких виборів задається через P(n,r), що визначається:
+
+<br>
+
+**10. Combination ― A combination is an arrangement of r objects from a pool of n objects, where the order does not matter. The number of such arrangements is given by C(n,r), defined as:**
+
+⟶ Комбінація ― Комбінація - це вибір r об'єктів з набору n об'єктів, де порядок не має значення. Кількість таких виборів задається через C(n,r), що визначається:
+
+<br>
+
+**11. Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)**
+
+⟶ Примітка: зауважимо, що для 0⩽r⩽n маємо P(n,r)⩾C(n,r)
+
+<br>
+ +**12. Conditional Probability** + +⟶ Умовна ймовірність + +
+
+**13. Bayes' rule ― For events A and B such that P(B)>0, we have:**
+
+⟶ Теорема Баєса ― Для подій A та B таких, що P(B)>0, маємо:
+
+<br>
+ +**14. Remark: we have P(A∩B)=P(A)P(B|A)=P(A|B)P(B)** + +⟶ Примітка: P(A∩B)=P(A)P(B|A)=P(A|B)P(B) + +
+
+**15. Partition ― Let {Ai,i∈[[1,n]]} be such that for all i, Ai≠∅. We say that {Ai} is a partition if we have:**
+
+⟶ Поділ множини ― Нехай {Ai,i∈[[1,n]]} буде таким, що для всіх i, Ai≠∅. Ми називаємо {Ai} поділом множини, якщо маємо:
+
+<br>
+ +**16. Remark: for any event B in the sample space, we have P(B)=n∑i=1P(B|Ai)P(Ai).** + +⟶ Примітка: для будь-якої події В в просторі елементарних подій, маємо P(B)=n∑i=1P(B|Ai)P(Ai). + +
+ +**17. Extended form of Bayes' rule ― Let {Ai,i∈[[1,n]]} be a partition of the sample space. We have:** + +⟶ Розгорнута форма теореми Баєса - Нехай {Ai,i∈[[1,n]]} буде поділом множини простору елементарних подій. Маємо: + +
+
+**18. Independence ― Two events A and B are independent if and only if we have:**
+
+⟶ Незалежність ― Дві події A та B є незалежними тоді і тільки тоді, коли маємо:
+
+<br>
+ +**19. Random Variables** + +⟶ Випадкові змінні + +
+ +**20. Definitions** + +⟶ Означення + +
+
+**21. Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.**
+
+⟶ Випадкова змінна ― Випадкова змінна, що часто позначається X, є функцією, яка відображає кожний елемент простору елементарних подій на дійсну пряму.
+
+<br>
+
+**22. Cumulative distribution function (CDF) ― The cumulative distribution function F, which is monotonically non-decreasing and is such that limx→−∞F(x)=0 and limx→+∞F(x)=1, is defined as:**
+
+⟶ Функція розподілу ймовірностей (CDF) ― Функція розподілу ймовірностей F, яка є монотонно неспадною і такою, що limx→−∞F(x)=0 та limx→+∞F(x)=1, визначається як:
+
+<br>
+
+**23. Remark: we have P(a<X⩽b)=F(b)−F(a)**
+
+⟶ Примітка: маємо P(a<X⩽b)=F(b)−F(a)
+
+<br>
+
+**24. Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.**
+
+⟶ Функція густини імовірності (PDF) ― Функція густини імовірності f є ймовірністю того, що X набуває значень між двома сусідніми реалізаціями випадкової величини.
+
+<br>
+
+**25. Relationships involving the PDF and CDF ― Here are the important properties to know in the discrete (D) and the continuous (C) cases.**
+
+⟶ Залежність між PDF та CDF ― Ось важливі властивості, які варто знати у дискретному (D) та неперервному (C) випадках:
+
+<br>
+
+**26. [Case, CDF F, PDF f, Properties of PDF]**
+
+⟶ [Випадок, CDF F, PDF f, Властивості PDF]
+
+<br>
+
+**27. Expectation and Moments of the Distribution ― Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[Xk] and characteristic function ψ(ω) for the discrete and continuous cases:**
+
+⟶ Математичне сподівання і моменти розподілу ― Ось вирази очікуваного значення E[X], узагальненого очікуваного значення E[g(X)], k-го моменту E[Xk] та характеристичної функції ψ(ω) для дискретного та неперервного випадків:
+
+<br>
+
+**28. Variance ― The variance of a random variable, often noted Var(X) or σ2, is a measure of the spread of its distribution function. It is determined as follows:**
+
+⟶ Дисперсія ― Дисперсія випадкової змінної, що часто позначається Var(X) або σ2, є мірою розсіяння її функції розподілу. Вона визначається так:
+
+<br>
+
+**29. Standard deviation ― The standard deviation of a random variable, often noted σ, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:**
+
+⟶ Стандартне відхилення ― Стандартне відхилення випадкової величини, що часто позначається σ, є мірою розсіяння її функції розподілу, сумісною з одиницями вимірювання самої випадкової величини. Воно визначається так:
+
+<br>
+
+**30. Transformation of random variables ― Let the variables X and Y be linked by some function. By noting fX and fY the distribution function of X and Y respectively, we have:**
+
+⟶ Перетворення випадкових величин ― Нехай змінні X та Y пов'язані деякою функцією. Позначивши fX та fY функції розподілу X та Y відповідно, маємо:
+
+<br>
+
+**31. Leibniz integral rule ― Let g be a function of x and potentially c, and a,b boundaries that may depend on c. We have:**
+
+⟶ Інтегральне правило Лейбніца ― Нехай g буде функцією x і, можливо, c, а a,b - межами, що можуть залежати від c. Маємо:
+
+<br>
+ +**32. Probability Distributions** + +⟶ Розподіл ймовірностей + +
+
+**33. Chebyshev's inequality ― Let X be a random variable with expected value μ. For k,σ>0, we have the following inequality:**
+
+⟶ Нерівність Чебишова ― Нехай X буде випадковою змінною з очікуваною величиною μ. Для k,σ>0 маємо наступну нерівність:
+
+<br>
+
+**34. Main distributions ― Here are the main distributions to have in mind:**
+
+⟶ Головні розподіли ― Ось кілька найважливіших розподілів, які варто знати:
+
+<br>
+ +**35. [Type, Distribution]** + +⟶ [Тип, Розподіл] + +
+ +**36. Jointly Distributed Random Variables** + +⟶ Спільно розподілені випадкові величини + +
+
+**37. Marginal density and cumulative distribution ― From the joint density probability function fXY , we have**
+
+⟶ Відособлена густина та розподіл ймовірностей ― Із функції спільної густини ймовірностей fXY маємо:
+
+<br>
+ +**38. [Case, Marginal density, Cumulative function]** + +⟶ [Випадок, Відособлена густина, Розподіл ймовірностей] + +
+
+**39. Conditional density ― The conditional density of X with respect to Y, often noted fX|Y, is defined as follows:**
+
+⟶ Умовна густина ― Умовна густина X відносно Y, що часто позначається fX|Y, визначається так:
+
+<br>
+
+**40. Independence ― Two random variables X and Y are said to be independent if we have:**
+
+⟶ Незалежність ― Дві випадкові змінні X та Y називаються незалежними, якщо маємо:
+
+<br>
+
+**41. Covariance ― We define the covariance of two random variables X and Y, that we note σ2XY or more commonly Cov(X,Y), as follows:**
+
+⟶ Коваріація ― Коваріацію двох випадкових змінних X та Y, яку позначаємо σ2XY або частіше Cov(X,Y), визначаємо так:
+
+<br>
+
+**42. Correlation ― By noting σX,σY the standard deviations of X and Y, we define the correlation between the random variables X and Y, noted ρXY, as follows:**
+
+⟶ Кореляція ― Позначивши σX,σY стандартні відхилення X та Y, визначаємо кореляцію між випадковими змінними X та Y, позначену ρXY, так:
+
+<br>
+
+**43. Remark 1: we note that for any random variables X,Y, we have ρXY∈[−1,1].**
+
+⟶ Примітка 1: зазначимо, що для будь-яких випадкових змінних X,Y маємо ρXY∈[−1,1].
+
+<br>
+ +**44. Remark 2: If X and Y are independent, then ρXY=0.** + +⟶ Примітка 2 : Якщо X та Y є незалежними, тоді ρXY=0. + +
+ +**45. Parameter estimation** + +⟶ Оцінювання параметрів + +
+ +**46. Definitions** + +⟶ Визначення + +
+
+**47. Random sample ― A random sample is a collection of n random variables X1,...,Xn that are independent and identically distributed with X.**
+
+⟶ Випадкова вибірка ― Випадкова вибірка - це набір n випадкових змінних X1,...,Xn, які є незалежними та однаково розподіленими з X.
+
+<br>
+
+**48. Estimator ― An estimator is a function of the data that is used to infer the value of an unknown parameter in a statistical model.**
+
+⟶ Статистична оцінка ― Статистична оцінка - це функція даних, що використовується для визначення значення невідомого параметра статистичної моделі.
+
+<br>
+
+**49. Bias ― The bias of an estimator ^θ is defined as being the difference between the expected value of the distribution of ^θ and the true value, i.e.:**
+
+⟶ Систематична похибка ― Систематична похибка оцінки ^θ визначається як різниця між очікуваним значенням розподілу ^θ та справжнім значенням, тобто:
+
+<br>
+
+**50. Remark: an estimator is said to be unbiased when we have E[^θ]=θ.**
+
+⟶ Примітка: кажуть, що оцінка не має систематичної похибки, коли E[^θ]=θ.
+
+<br>
+ +**51. Estimating the mean** + +⟶ Оцінка середнього значення + +
+
+**52. Sample mean ― The sample mean of a random sample is used to estimate the true mean μ of a distribution, is often noted ¯¯¯¯¯X and is defined as follows:**
+
+⟶ Середнє значення вибірки ― Середнє значення випадкової вибірки використовується для оцінювання справжнього середнього μ розподілу; воно часто позначається ¯¯¯¯¯X і визначається так:
+
+<br>
+
+**53. Remark: the sample mean is unbiased, i.e E[¯¯¯¯¯X]=μ.**
+
+⟶ Примітка: середнє значення вибірки не має систематичної похибки, тобто E[¯¯¯¯¯X]=μ.
+
+<br>
+
+**54. Central Limit Theorem ― Let us have a random sample X1,...,Xn following a given distribution with mean μ and variance σ2, then we have:**
+
⟶ Центральна гранична теорема ― Нехай маємо випадкову вибірку X1,...,Xn, що слідує заданому розподілу з середнім значенням μ та дисперсією σ2; тоді маємо:
+
<br>
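The Central Limit Theorem above can be illustrated with a small simulation; the Uniform(0,1) choice, the sample size and the trial count below are arbitrary:

```python
import random
import statistics

random.seed(0)

# Hypothetical experiment: X ~ Uniform(0, 1), so mu = 0.5 and sigma^2 = 1/12.
n = 100          # sample size
trials = 2000    # number of independent sample means

means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(trials)]

# The CLT says the sample mean is approximately N(mu, sigma^2 / n).
assert abs(statistics.fmean(means) - 0.5) < 0.01
assert abs(statistics.pstdev(means) - (1 / 12) ** 0.5 / n ** 0.5) < 0.005
```

With n=100 draws per mean, the spread of the sample means comes out close to σ/√n, as the theorem predicts.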
+ +**55. Estimating the variance** + +⟶ Розрахунок дисперсії + +
+
+**56. Sample variance ― The sample variance of a random sample is used to estimate the true variance σ2 of a distribution, is often noted s2 or ^σ2 and is defined as follows:**
+
⟶ Дисперсія вибірки ― Дисперсія випадкової вибірки використовується для оцінки справжньої дисперсії σ2 розподілу, часто позначається s2 або ^σ2 і визначається так:
+
<br>
+ +**57. Remark: the sample variance is unbiased, i.e E[s2]=σ2.** + +⟶ Примітка: дисперсія вибірки не має похибки, тобто E[s2]=σ2. + +
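The (n−1) divisor behind the unbiasedness remark can be made concrete; the data below are made up, and the standard library's `statistics.variance` / `statistics.pvariance` mirror the s2 and population formulas:

```python
import statistics

# Hypothetical sample, for illustration only.
x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(x)
m = sum(x) / n

# s^2 divides by (n - 1), which makes the estimator unbiased: E[s^2] = sigma^2.
s2 = sum((xi - m) ** 2 for xi in x) / (n - 1)

# statistics.variance uses the same (n - 1) convention,
# while statistics.pvariance divides by n (biased for a sample).
assert abs(s2 - statistics.variance(x)) < 1e-12
assert statistics.pvariance(x) < s2
```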
+ +**58. Chi-Squared relation with sample variance ― Let s2 be the sample variance of a random sample. We have:** + +⟶ Розподіл хі-квадрат та дисперсія вибірки ― Нехай s2 буде дисперсією випадкової вибірка. Маємо: + +
+
+**59. [Introduction, Sample space, Event, Permutation]**
+
⟶ [Вступ, Простір елементарних подій, Подія, Підстановка]
+
<br>
+
+**60. [Conditional probability, Bayes' rule, Independence]**
+
⟶ [Умовна ймовірність, Теорема Баєса, Незалежність]
+
<br>
+ +**61. [Random variables, Definitions, Expectation, Variance]** + +⟶ [Випадкові змінні, Означення, Очікування, Дисперсія] + +
+ +**62. [Probability distributions, Chebyshev's inequality, Main distributions]** + +⟶ [Розподіли ймовірності, Нерівність Чебишова, Головні розподіли] + +
+ +**63. [Jointly distributed random variables, Density, Covariance, Correlation]** + +⟶ [Спільно розподілені випадкові величини, Щільність, Коваріація, Кореляція] + +
+ +**64. [Parameter estimation, Mean, Variance]** + +⟶ [Оцінювання параметрів, Середнє значення, Дисперсія] diff --git a/zh-tw/cheatsheet-deep-learning.md b/zh-tw/cs-229-deep-learning.md similarity index 100% rename from zh-tw/cheatsheet-deep-learning.md rename to zh-tw/cs-229-deep-learning.md diff --git a/zh/refresher-linear-algebra.md b/zh-tw/cs-229-linear-algebra.md similarity index 58% rename from zh/refresher-linear-algebra.md rename to zh-tw/cs-229-linear-algebra.md index 6cef234fe..36d4cef5d 100644 --- a/zh/refresher-linear-algebra.md +++ b/zh-tw/cs-229-linear-algebra.md @@ -1,339 +1,338 @@ 1. **Linear Algebra and Calculus refresher** ⟶ - +線性代數與微積分回顧
2. **General notations** ⟶ - +通用符號
3. **Definitions** ⟶ - +定義
4. **Vector ― We note x∈Rn a vector with n entries, where xi∈R is the ith entry:** ⟶ - +向量 - 我們定義 x∈Rn 是一個向量,包含 n 維元素,xi∈R 是第 i 維元素:
5. **Matrix ― We note A∈Rm×n a matrix with m rows and n columns, where Ai,j∈R is the entry located in the ith row and jth column:** ⟶ - +矩陣 - 我們定義 A∈Rm×n 是一個 m 列 n 行的矩陣,Ai,j∈R 代表位在第 i 列第 j 行的元素:
6. **Remark: the vector x defined above can be viewed as a n×1 matrix and is more particularly called a column-vector.** ⟶ - +注意:上述定義的向量 x 可以視為 nx1 的矩陣,或是更常被稱為行向量
7. **Main matrices** ⟶ - +主要的矩陣
8. **Identity matrix ― The identity matrix I∈Rn×n is a square matrix with ones in its diagonal and zero everywhere else:** ⟶ - +單位矩陣 - 單位矩陣 I∈Rn×n 是一個方陣,其主對角線皆為 1,其餘皆為 0
9. **Remark: for all matrices A∈Rn×n, we have A×I=I×A=A.** ⟶ - +注意:對於所有矩陣 A∈Rn×n,我們有 A×I=I×A=A
10. **Diagonal matrix ― A diagonal matrix D∈Rn×n is a square matrix with nonzero values in its diagonal and zero everywhere else:** ⟶ - +對角矩陣 - 對角矩陣 D∈Rn×n 是一個方陣,其主對角線為非 0,其餘皆為 0
11. **Remark: we also note D as diag(d1,...,dn).** ⟶ - +注意:我們令 D 為 diag(d1,...,dn)
12. **Matrix operations** ⟶ - +矩陣運算
13. **Multiplication** ⟶ - +乘法
14. **Vector-vector ― There are two types of vector-vector products:** ⟶ - +向量-向量 - 有兩種類型的向量-向量相乘:
15. **inner product: for x,y∈Rn, we have:** ⟶ - +內積:對於 x,y∈Rn,我們可以得到:
16. **outer product: for x∈Rm,y∈Rn, we have:** ⟶ - +外積:對於 x∈Rm,y∈Rn,我們可以得到:
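The inner and outer products above can be written out in plain Python, with arbitrary 3-vectors chosen only for illustration:

```python
# Inner and outer products of small vectors.
x = [1, 2, 3]
y = [4, 5, 6]

# Inner product x^T y: a scalar.
inner = sum(xi * yi for xi, yi in zip(x, y))

# Outer product x y^T: an m x n matrix with entries x_i * y_j.
outer = [[xi * yj for yj in y] for xi in x]

assert inner == 32                 # 1*4 + 2*5 + 3*6
assert outer[0] == [4, 5, 6]       # first row is 1 * y
assert outer[2][1] == 15           # x_3 * y_2 = 3 * 5
```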
17. **Matrix-vector ― The product of matrix A∈Rm×n and vector x∈Rn is a vector of size Rn, such that:** ⟶ - +矩陣-向量 - 矩陣 A∈Rm×n 和向量 x∈Rn 的乘積是一個大小為 Rm 的向量,使得:
18. **where aTr,i are the vector rows and ac,j are the vector columns of A, and xi are the entries of x.** ⟶ - +其中 aTr,i 是 A 的列向量、ac,j 是 A 的行向量、xi 是 x 的元素
19. **Matrix-matrix ― The product of matrices A∈Rm×n and B∈Rn×p is a matrix of size Rn×p, such that:** ⟶ - +矩陣-矩陣:矩陣 A∈Rm×n 和 B∈Rn×p 的乘積為一個大小 Rm×p 的矩陣,使得:
20. **where aTr,i,bTr,i are the vector rows and ac,j,bc,j are the vector columns of A and B respectively** ⟶ - +其中,aTr,i,bTr,i 和 ac,j,bc,j 分別是 A 和 B 的列向量與行向量
21. **Other operations** ⟶ - +其他操作
22. **Transpose ― The transpose of a matrix A∈Rm×n, noted AT, is such that its entries are flipped:** ⟶ - +轉置 - 一個矩陣的轉置矩陣 A∈Rm×n,記作 AT,指的是其中元素的翻轉:
23. **Remark: for matrices A,B, we have (AB)T=BTAT** ⟶ - +注意:對於矩陣 A、B,我們有 (AB)T=BTAT
24. **Inverse ― The inverse of an invertible square matrix A is noted A−1 and is the only matrix such that:**

⟶

-
+逆矩陣 - 一個可逆方陣 A 的逆矩陣記作 A−1,它是唯一滿足下式的矩陣:

<br>
25. **Remark: not all square matrices are invertible. Also, for matrices A,B, we have (AB)−1=B−1A−1** ⟶ - +注意:並非所有的方陣都是可逆的。同樣的,對於矩陣 A、B 來說,我們有 (AB)−1=B−1A−1
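The identity (AB)−1=B−1A−1 from the remark above can be verified on a tiny example; the 2×2 matrices below are arbitrary invertible choices, and `inv2` is the closed-form 2×2 inverse:

```python
# Verifying (AB)^-1 = B^-1 A^-1 on 2x2 matrices.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # Closed-form inverse of a 2x2 matrix [[a, b], [c, d]].
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[1.0, 2.0], [0.0, 1.0]]

lhs = inv2(matmul(A, B))
rhs = matmul(inv2(B), inv2(A))

assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```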
26. **Trace ― The trace of a square matrix A, noted tr(A), is the sum of its diagonal entries:**

⟶

-
+跡 - 一個方陣 A 的跡,記作 tr(A),指的是主對角線元素之和:

<br>
27. **Remark: for matrices A,B, we have tr(AT)=tr(A) and tr(AB)=tr(BA)** ⟶ - +注意:對於矩陣 A、B 來說,我們有 tr(AT)=tr(A) 及 tr(AB)=tr(BA)
28. **Determinant ― The determinant of a square matrix A∈Rn×n, noted |A| or det(A) is expressed recursively in terms of A∖i,∖j, which is the matrix A without its ith row and jth column, as follows:** ⟶ - +行列式 - 一個方陣 A∈Rn×n 的行列式,記作|A| 或 det(A),可以透過 A∖i,∖j 來遞迴表示,它是一個沒有第 i 列和第 j 行的矩陣 A:
29. **Remark: A is invertible if and only if |A|≠0. Also, |AB|=|A||B| and |AT|=|A|.** ⟶ - +注意:A 是一個可逆矩陣,若且唯若 |A|≠0。同樣的,|AB|=|A||B| 且 |AT|=|A|
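The property |AB|=|A||B| from the remark above, checked on arbitrary 2×2 examples, where det([[a,b],[c,d]]) = ad−bc:

```python
# det of a 2x2 matrix [[a, b], [c, d]] is ad - bc.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 3]]   # det = 5, so A is invertible
B = [[1, 2], [3, 4]]   # det = -2

assert det2(A) == 5
assert det2(B) == -2
assert det2(matmul(A, B)) == det2(A) * det2(B)   # both sides equal -10
```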
30. **Matrix properties** ⟶ - +矩陣的性質
31. **Definitions** ⟶ - +定義
32. **Symmetric decomposition ― A given matrix A can be expressed in terms of its symmetric and antisymmetric parts as follows:** ⟶ - +對稱分解 - 給定一個矩陣 A,它可以透過其對稱和反對稱的部分表示如下:
33. **[Symmetric, Antisymmetric]** ⟶ - +[對稱, 反對稱]
34. **Norm ― A norm is a function N:V⟶[0,+∞[ where V is a vector space, and such that for all x,y∈V, we have:** ⟶ - +範數 - 範數指的是一個函式 N:V⟶[0,+∞[,其中 V 是一個向量空間,且對於所有 x,y∈V,我們有:
35. **N(ax)=|a|N(x) for a scalar** ⟶ - +對一個純量來說,我們有 N(ax)=|a|N(x)
36. **if N(x)=0, then x=0** ⟶ - +若 N(x)=0 時,則 x=0
37. **For x∈V, the most commonly used norms are summed up in the table below:** ⟶ - +對於 x∈V,最常用的範數總結如下表:
38. **[Norm, Notation, Definition, Use case]** ⟶ - +[範數, 表示法, 定義, 使用情境]
39. **Linearly dependence ― A set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others.**

⟶

-
+線性相關 - 當集合中的一個向量可以被定義為集合中其他向量的線性組合時,則稱此集合的向量為線性相關

<br>
40. **Remark: if no vector can be written this way, then the vectors are said to be linearly independent** ⟶ - +注意:如果沒有向量可以如上表示時,則稱此集合的向量彼此為線性獨立
41. **Matrix rank ― The rank of a given matrix A is noted rank(A) and is the dimension of the vector space generated by its columns. This is equivalent to the maximum number of linearly independent columns of A.**

⟶

-
+矩陣的秩 - 一個矩陣 A 的秩記作 rank(A),指的是由其行向量所生成的向量空間之維度,等價於 A 中線性獨立行向量的最大數量

<br>
42. **Positive semi-definite matrix ― A matrix A∈Rn×n is positive semi-definite (PSD) and is noted A⪰0 if we have:** ⟶ - +半正定矩陣 - 當以下成立時,一個矩陣 A∈Rn×n 是半正定矩陣 (PSD),且記作A⪰0:
43. **Remark: similarly, a matrix A is said to be positive definite, and is noted A≻0, if it is a PSD matrix which satisfies for all non-zero vector x, xTAx>0.** ⟶ - +注意:同樣的,一個矩陣 A 是一個半正定矩陣 (PSD),且滿足所有非零向量 x,xTAx>0 時,稱之為正定矩陣,記作 A≻0
44. **Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** ⟶ - +特徵值、特徵向量 - 給定一個矩陣 A∈Rn×n,當存在一個向量 z∈Rn∖{0} 時,此向量被稱為特徵向量,λ 稱之為 A 的特徵值,且滿足:
45. **Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:** ⟶ - +譜分解 - 令 A∈Rn×n,如果 A 是對稱的,則 A 可以被一個實數正交矩陣 U∈Rn×n 給對角化。令 Λ=diag(λ1,...,λn),我們得到:
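A hand-checkable instance of the spectral theorem above: the symmetric matrix [[2,1],[1,2]] has eigenvalues 3 and 1 with orthonormal eigenvectors (1,1)/√2 and (1,−1)/√2, so UΛU^T reconstructs it:

```python
import math

s = 1 / math.sqrt(2)
U = [[s, s], [s, -s]]            # orthogonal: columns are the eigenvectors
Lam = [[3.0, 0.0], [0.0, 1.0]]   # diagonal matrix of eigenvalues

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

# A = U Lambda U^T recovers the original symmetric matrix.
A = matmul(matmul(U, Lam), transpose(U))

assert all(abs(A[i][j] - [[2, 1], [1, 2]][i][j]) < 1e-12
           for i in range(2) for j in range(2))
```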
46. **diagonal** ⟶ - +對角線
47. **Singular-value decomposition ― For a given matrix A of dimensions m×n, the singular-value decomposition (SVD) is a factorization technique that guarantees the existence of U m×m unitary, Σ m×n diagonal and V n×n unitary matrices, such that:** ⟶ - +奇異值分解 - 對於給定維度為 mxn 的矩陣 A,其奇異值分解指的是一種因子分解技巧,保證存在 mxm 的單式矩陣 U、對角線矩陣 Σ m×n 和 nxn 的單式矩陣 V,滿足:
48. **Matrix calculus** ⟶ - +矩陣導數
49. **Gradient ― Let f:Rm×n→R be a function and A∈Rm×n be a matrix. The gradient of f with respect to A is a m×n matrix, noted ∇Af(A), such that:** ⟶ - +梯度 - 令 f:Rm×n→R 是一個函式,且 A∈Rm×n 是一個矩陣。f 相對於 A 的梯度是一個 mxn 的矩陣,記作 ∇Af(A),滿足:
50. **Remark: the gradient of f is only defined when f is a function that returns a scalar.** ⟶ - +注意:f 的梯度僅在 f 為一個函數且該函數回傳一個純量時有效
51. **Hessian ― Let f:Rn→R be a function and x∈Rn be a vector. The hessian of f with respect to x is a n×n symmetric matrix, noted ∇2xf(x), such that:** ⟶ - +海森 - 令 f:Rn→R 是一個函式,且 x∈Rn 是一個向量,則一個 f 的海森對於向量 x 是一個 nxn 的對稱矩陣,記作 ∇2xf(x),滿足:
52. **Remark: the hessian of f is only defined when f is a function that returns a scalar** ⟶ - +注意:f 的海森僅在 f 為一個函數且該函數回傳一個純量時有效
53. **Gradient operations ― For matrices A,B,C, the following gradient properties are worth having in mind:**

⟶

-
+梯度運算 - 對於矩陣 A、B、C,下列的梯度性質值得牢牢記住:

<br>

54. **[General notations, Definitions, Main matrices]**

⟶

-
+[通用符號, 定義, 主要矩陣]

<br>
55. **[Matrix operations, Multiplication, Other operations]** ⟶ - +[矩陣運算, 矩陣乘法, 其他運算]
56. **[Matrix properties, Norm, Eigenvalue/Eigenvector, Singular-value decomposition]** ⟶ - +[矩陣性質, 範數, 特徵值/特徵向量, 奇異值分解]
57. **[Matrix calculus, Gradient, Hessian, Operations]** ⟶ +[矩陣導數, 梯度, 海森, 運算] \ No newline at end of file diff --git a/zh/cheatsheet-machine-learning-tips-and-tricks.md b/zh-tw/cs-229-machine-learning-tips-and-tricks.md similarity index 59% rename from zh/cheatsheet-machine-learning-tips-and-tricks.md rename to zh-tw/cs-229-machine-learning-tips-and-tricks.md index 61fab788c..b7a5db1c0 100644 --- a/zh/cheatsheet-machine-learning-tips-and-tricks.md +++ b/zh-tw/cs-229-machine-learning-tips-and-tricks.md @@ -1,285 +1,257 @@ 1. **Machine Learning tips and tricks cheatsheet** ⟶ - +機器學習秘訣和技巧參考手冊
2. **Classification metrics** ⟶ - +分類器的評估指標
3. **In a context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.** ⟶ - +在二元分類的問題上,底下是主要用來衡量模型表現的指標
4. **Confusion matrix ― The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:**

⟶

-
+混淆矩陣 - 混淆矩陣用來在評估模型表現時提供更完整的樣貌,其定義如下:

<br>
5. **[Predicted class, Actual class]** ⟶ - +[預測類別, 真實類別]
6. **Main metrics ― The following metrics are commonly used to assess the performance of classification models:** ⟶ - +主要的衡量指標 - 底下的指標經常用在評估分類模型的表現
7. **[Metric, Formula, Interpretation]** ⟶ - +[指標, 公式, 解釋]
8. **Overall performance of model** ⟶ - +模型的整體表現
9. **How accurate the positive predictions are**

⟶

-
+正向預測的精準程度

<br>
10. **Coverage of actual positive sample** ⟶ - +實際正的樣本的覆蓋率有多少
11. **Coverage of actual negative sample** ⟶ - +實際負的樣本的覆蓋率
12. **Hybrid metric useful for unbalanced classes** ⟶ - +對於非平衡類別相當有用的混合指標
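The metrics listed above follow directly from the four confusion-matrix counts; the TP/FP/FN/TN values below are made up purely for illustration:

```python
# Hypothetical confusion-matrix counts.
TP, FP, FN, TN = 40, 10, 20, 30

accuracy  = (TP + TN) / (TP + FP + FN + TN)
precision = TP / (TP + FP)
recall    = TP / (TP + FN)           # also called sensitivity / TPR
f1        = 2 * precision * recall / (precision + recall)

assert accuracy == 0.7
assert precision == 0.8
assert abs(recall - 2 / 3) < 1e-12
assert abs(f1 - 8 / 11) < 1e-12      # F1 balances precision and recall
```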
13. **ROC ― The receiver operating curve, also noted ROC, is the plot of TPR versus FPR by varying the threshold. These metrics are are summed up in the table below:** ⟶ - +ROC - 接收者操作特徵曲線 (ROC Curve),又被稱為 ROC,是透過改變閥值來表示 TPR 和 FPR 之間關係的圖形。這些指標總結如下:
14. **[Metric, Formula, Equivalent]** ⟶ - +[衡量指標, 公式, 等同於]
15. **AUC ― The area under the receiving operating curve, also noted AUC or AUROC, is the area below the ROC as shown in the following figure:**

⟶

-
+AUC - 接收者操作特徵曲線 (ROC) 底下的面積,也稱為 AUC 或 AUROC,如下圖所示:

<br>
16. **[Actual, Predicted]** ⟶ - +[實際值, 預測值]
17. **Basic metrics ― Given a regression model f, the following metrics are commonly used to assess the performance of the model:** ⟶ - +基本的指標 - 給定一個迴歸模型 f,底下是經常用來評估此模型的指標:
18. **[Total sum of squares, Explained sum of squares, Residual sum of squares]** ⟶ - +[總平方和, 被解釋平方和, 殘差平方和]
19. **Coefficient of determination ― The coefficient of determination, often noted R2 or r2, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:** ⟶ - +決定係數 - 決定係數又被稱為 R2 or r2,它提供了模型是否具備復現觀測結果的能力。定義如下:
20. **Main metrics ― The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:** ⟶ - +主要的衡量指標 - 藉由考量變數 n 的數量,我們經常用使用底下的指標來衡量迴歸模型的表現:
21. **where L is the likelihood and ˆσ2 is an estimate of the variance associated with each response.**

⟶

-
+當中,L 代表概似函數,ˆσ2 則是與每個預測結果相關的變異數估計

<br>
22. **Model selection** ⟶ - +模型選擇
23. **Vocabulary ― When selecting a model, we distinguish 3 different parts of the data that we have as follows:** ⟶ - +詞彙 - 當進行模型選擇時,我們會針對資料進行以下區分:
24. **[Training set, Validation set, Testing set]** ⟶ - +[訓練資料集, 驗證資料集, 測試資料集]
25. **[Model is trained, Model is assessed, Model gives predictions]** ⟶ - +[用來訓練模型, 用來評估模型, 模型用來預測用的資料集]
26. **[Usually 80% of the dataset, Usually 20% of the dataset]** ⟶ - +[通常是 80% 的資料集, 通常是 20% 的資料集]
27. **[Also called hold-out or development set, Unseen data]** ⟶ - +[又被稱為 hold-out 資料集或開發資料集, 模型沒看過的資料集]
28. **Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set. These are represented in the figure below:** ⟶ - +當模型被選擇後,就會使用整個資料集來做訓練,並且在沒看過的資料集上做測試。你可以參考以下的圖表:
29. **Cross-validation ― Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:** ⟶ - +交叉驗證 - 交叉驗證,又稱之為 CV,它是一種不特別依賴初始訓練集來挑選模型的方法。幾種不同的方法如下:
-30. [**Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** +30. **[Training on k−1 folds and assessment on the remaining one, Training on n−p observations and assessment on the p remaining ones]** ⟶ - +[把資料分成 k 份,利用 k-1 份資料來訓練,剩下的一份用來評估模型效能, 在 n-p 份資料上進行訓練,剩下的 p 份資料用來評估模型效能]
31. **[Generally k=5 or 10, Case p=1 is called leave-one-out]** ⟶ - +[一般來說 k=5 或 10, 當 p=1 時,又稱為 leave-one-out]
32. **The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k−1 other folds, all of this k times. The error is then averaged over the k folds and is named cross-validation error.**

⟶

-
+最常用到的方法叫做 k-fold 交叉驗證。它將訓練資料切成 k 份,在 k-1 份資料上進行訓練,而剩下的一份用來評估模型的效能,這樣的流程會重複 k 次。最後計算出來的模型損失是 k 次結果的平均,又稱為交叉驗證損失值。

<br>
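The k-fold splitting described above can be sketched without any ML library; the 20 example indices and k=5 below are arbitrary:

```python
# Minimal k-fold cross-validation index splitting (k = 5):
# each fold serves exactly once as the validation set.
k = 5
indices = list(range(20))          # 20 hypothetical training examples
folds = [indices[i::k] for i in range(k)]

for i in range(k):
    val = folds[i]
    train = [idx for j in range(k) if j != i for idx in folds[j]]
    assert len(val) == 4 and len(train) == 16
    assert sorted(val + train) == indices   # every example used exactly once
```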
33. **Regularization ― The regularization procedure aims at avoiding the model to overfit the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:**

⟶

-
+正規化 - 正規化的目的是為了避免模型對於訓練資料過擬合,進而導致高方差。底下的表格整理了常見的正規化技巧:

<br>
34. **[Shrinks coefficients to 0, Good for variable selection, Makes coefficients smaller, Tradeoff between variable selection and small coefficients]** ⟶ - +[將係數縮減為 0, 有利變數的選擇, 將係數變得更小, 在變數的選擇和小係數之間作權衡]
35. **Diagnostics** ⟶ - +診斷
36. **Bias ― The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.**

⟶

-
+偏差 - 模型的偏差指的是模型預測的期望值,與我們針對給定資料點試圖預測的正確模型之間的差異

<br>
37. **Variance ― The variance of a model is the variability of the model prediction for given data points.** ⟶ - +變異 - 變異指的是模型在預測資料時的變異程度
38. **Bias/variance tradeoff ― The simpler the model, the higher the bias, and the more complex the model, the higher the variance.** ⟶ - +偏差/變異的權衡 - 越簡單的模型,偏差就越大。而越複雜的模型,變異就越大
39. **[Symptoms, Regression illustration, classification illustration, deep learning illustration, possible remedies]** ⟶ - +[現象, 迴歸圖示, 分類圖示, 深度學習圖示, 可能的解法]
40. **[High training error, Training error close to test error, High bias, Training error slightly lower than test error, Very low training error, Training error much lower than test error, High variance]** ⟶ - +[訓練錯誤較高, 訓練錯誤和測試錯誤接近, 高偏差, 訓練誤差會稍微比測試誤差低, 訓練誤差很低, 訓練誤差比測試誤差低很多, 高變異]
41. **[Complexify model, Add more features, Train longer, Perform regularization, Get more data]**

⟶

-
+[使用較複雜的模型, 增加更多特徵, 訓練更久, 採用正規化的方法, 取得更多資料]

<br>
42. **Error analysis ― Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.** ⟶ - +誤差分析 - 誤差分析指的是分析目前使用的模型和最佳模型之間差距的根本原因
43. **Ablative analysis ― Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.** ⟶ - -
- -44. **Regression metrics** - -⟶ - +銷蝕分析 (Ablative analysis) - 銷蝕分析指的是分析目前模型和基準模型之間差異的根本原因
- -45. **[Classification metrics, confusion matrix, accuracy, precision, recall, F1 score, ROC]** - -⟶ - -
- -46. **[Regression metrics, R squared, Mallow's CP, AIC, BIC]** - -⟶ - -
- -47. **[Model selection, cross-validation, regularization]** - -⟶ - -
- -48. **[Diagnostics, Bias/variance tradeoff, error/ablative analysis]** - -⟶ diff --git a/zh/refresher-probability.md b/zh-tw/cs-229-probability.md similarity index 56% rename from zh/refresher-probability.md rename to zh-tw/cs-229-probability.md index 52e0056e0..0db481cf5 100644 --- a/zh/refresher-probability.md +++ b/zh-tw/cs-229-probability.md @@ -1,381 +1,382 @@ 1. **Probabilities and Statistics refresher** ⟶ - +機率和統計回顧
2. **Introduction to Probability and Combinatorics** ⟶ - +幾率與組合數學介紹
3. **Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by S.** ⟶ - +樣本空間 - 一個實驗的所有可能結果的集合稱之為這個實驗的樣本空間,記做 S
4. **Event ― Any subset E of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.**

⟶

-
+事件 - 樣本空間的任何子集合 E 被稱之為一個事件。也就是說,一個事件是由實驗的可能結果所組成的集合。如果該實驗的結果包含在 E 中,我們稱 E 發生了

<br>
5. **Axioms of probability For each event E, we denote P(E) as the probability of event E occuring.** ⟶ - +機率公理。對於每個事件 E,我們用 P(E) 表示事件 E 發生的機率
6. **Axiom 1 ― Every probability is between 0 and 1 included, i.e:** ⟶ - +公理 1 - 每一個機率值介於 0 到 1 之間,包含兩端點。即:
7. **Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e:** ⟶ - +公理 2 - 至少一個基本事件出現在整個樣本空間中的機率是 1。即:
8. **Axiom 3 ― For any sequence of mutually exclusive events E1,...,En, we have:** ⟶ - +公理 3 - 對於任何互斥的事件 E1,...,En,我們定義如下:
9. **Permutation ― A permutation is an arrangement of r objects from a pool of n objects, in a given order. The number of such arrangements is given by P(n,r), defined as:** ⟶ - +排列 - 排列指的是從 n 個相異的物件中,取出 r 個物件按照固定順序重新安排,這樣安排的數量用 P(n,r) 來表示,定義為:
10. **Combination ― A combination is an arrangement of r objects from a pool of n objects, where the order does not matter. The number of such arrangements is given by C(n,r), defined as:**

⟶

-
+組合 - 組合指的是從 n 個物件中取出 r 個物件,但不考慮其順序。這樣的組合數量用 C(n,r) 來表示,定義為:

<br>
11. **Remark: we note that for 0⩽r⩽n, we have P(n,r)⩾C(n,r)** ⟶ - +注意:對於 0⩽r⩽n,我們會有 P(n,r)⩾C(n,r)
12. **Conditional Probability** ⟶ - +條件機率
13. **Bayes' rule ― For events A and B such that P(B)>0, we have:** ⟶ - +貝氏定理 - 對於事件 A 和 B 滿足 P(B)>0 時,我們定義如下:
14. **Remark: we have P(A∩B)=P(A)P(B|A)=P(A|B)P(B)** ⟶ - +注意:P(A∩B)=P(A)P(B|A)=P(A|B)P(B)
15. **Partition ― Let {Ai,i∈[[1,n]]} be such that for all i, Ai≠∅. We say that {Ai} is a partition if we have:** ⟶ - +分割 - 令 {Ai,i∈[[1,n]]} 對所有的 i,Ai≠∅,我們說 {Ai} 是一個分割,當底下成立時:
16. **Remark: for any event B in the sample space, we have P(B)=n∑i=1P(B|Ai)P(Ai).** ⟶ - +注意:對於任何在樣本空間的事件 B 來說,P(B)=n∑i=1P(B|Ai)P(Ai)
17. **Extended form of Bayes' rule ― Let {Ai,i∈[[1,n]]} be a partition of the sample space. We have:** ⟶ - +貝氏定理的擴展 - 令 {Ai,i∈[[1,n]]} 為樣本空間的一個分割,我們定義:
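The extended Bayes' rule above, on a hypothetical two-event partition {A1, A2} (the prior and likelihood numbers are invented):

```python
# P(A1|B) = P(B|A1)P(A1) / (P(B|A1)P(A1) + P(B|A2)P(A2))
P_A = [0.3, 0.7]          # prior over the partition, sums to 1
P_B_given_A = [0.9, 0.2]  # likelihood of B under each Ai

P_B = sum(p * q for p, q in zip(P_B_given_A, P_A))   # law of total probability
P_A1_given_B = P_B_given_A[0] * P_A[0] / P_B         # Bayes' rule

assert abs(P_B - 0.41) < 1e-12            # 0.9*0.3 + 0.2*0.7
assert abs(P_A1_given_B - 27 / 41) < 1e-12
```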
18. **Independence ― Two events A and B are independent if and only if we have:** ⟶ - +獨立 - 當以下條件滿足時,兩個事件 A 和 B 為獨立事件:
19. **Random Variables** ⟶ - +隨機變數
20. **Definitions** ⟶ - +定義
21. **Random variable ― A random variable, often noted X, is a function that maps every element in a sample space to a real line.** ⟶ - +隨機變數 - 一個隨機變數 X,它是一個將樣本空間中的每個元素映射到實數域的函數
22. **Cumulative distribution function (CDF) ― The cumulative distribution function F, which is monotonically non-decreasing and is such that limx→−∞F(x)=0 and limx→+∞F(x)=1, is defined as:** ⟶ - +累積分佈函數 (CDF) - 累積分佈函數 F 是單調遞增的函數,其 limx→−∞F(x)=0 且 limx→+∞F(x)=1,定義如下:
23. **Remark: we have P(a<X⩽b)=F(b)−F(a)**

⟶

-
+注意:我們有 P(a<X⩽b)=F(b)−F(a)

<br>

24. **Probability density function (PDF) ― The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.**

⟶

-
+機率密度函數 - 機率密度函數 f 是隨機變數 X 在兩個相鄰的實數值附近取值的機率

<br>
25. **Relationships involving the PDF and CDF ― Here are the important properties to know in the discrete (D) and the continuous (C) cases.** ⟶ - +機率密度函數和累積分佈函數的關係 - 底下是一些關於離散 (D) 和連續 (C) 的情況下的重要屬性
26. **[Case, CDF F, PDF f, Properties of PDF]** ⟶ - +[情況, 累積分佈函數 F, 機率密度函數 f, 機率密度函數的屬性]
27. **Expectation and Moments of the Distribution ― Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[Xk] and characteristic function ψ(ω) for the discrete and continuous cases:** ⟶ - +分佈的期望值和動差 - 底下是期望值 E[X]、一般期望值 E[g(X)]、第 k 個動差和特徵函數 ψ(ω) 在離散和連續的情況下的表示式:
28. **Variance ― The variance of a random variable, often noted Var(X) or σ2, is a measure of the spread of its distribution function. It is determined as follows:** ⟶ - +變異數 - 隨機變數的變異數通常表示為 Var(X) 或 σ2,用來衡量一個分佈離散程度的指標。其表示如下:
29. **Standard deviation ― The standard deviation of a random variable, often noted σ, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:** ⟶ - +標準差 - 一個隨機變數的標準差通常表示為 σ,用來衡量一個分佈離散程度的指標,其單位和實際的隨機變數相容,表示如下:
30. **Transformation of random variables ― Let the variables X and Y be linked by some function. By noting fX and fY the distribution function of X and Y respectively, we have:** ⟶ - +隨機變數的轉換 - 令變數 X 和 Y 由某個函式連結在一起。我們定義 fX 和 fY 是 X 和 Y 的分佈函式,可以得到:
31. **Leibniz integral rule ― Let g be a function of x and potentially c, and a,b boundaries that may depend on c. We have:**

⟶

-
+萊布尼茲積分法則 - 令 g 為 x 和 c 的函數,a 和 b 是可能依賴於 c 的邊界,我們得到:

<br>
32. **Probability Distributions** ⟶ - +機率分佈
33. **Chebyshev's inequality ― Let X be a random variable with expected value μ. For k,σ>0, we have the following inequality:** ⟶ - +柴比雪夫不等式 - 令 X 是一隨機變數,期望值為 μ。對於 k, σ>0,我們有以下不等式:
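Chebyshev's inequality above can be checked empirically; the exponential distribution and sample size below are arbitrary choices:

```python
import random
import statistics

random.seed(1)

# Empirical check of P(|X - mu| >= k*sigma) <= 1/k^2 on a simulated
# exponential distribution (chosen only for illustration).
samples = [random.expovariate(1.0) for _ in range(10000)]
mu = statistics.fmean(samples)
sigma = statistics.pstdev(samples)

for k in (2, 3, 4):
    freq = sum(abs(x - mu) >= k * sigma for x in samples) / len(samples)
    assert freq <= 1 / k**2   # the bound holds (it is usually far from tight)
```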
34. **Main distributions ― Here are the main distributions to have in mind:**

⟶

-
+主要的分佈 - 底下是我們需要熟悉的幾個主要的分佈:

<br>
35. **[Type, Distribution]** ⟶ - +[種類, 分佈]
36. **Jointly Distributed Random Variables** ⟶ - +聯合分佈隨機變數
37. **Marginal density and cumulative distribution ― From the joint density probability function fXY , we have** ⟶ - +邊緣密度和累積分佈 - 從聯合密度機率函數 fXY 中我們可以得到:
38. **[Case, Marginal density, Cumulative function]** ⟶ - +[種類, 邊緣密度函數, 累積函數]
39. **Conditional density ― The conditional density of X with respect to Y, often noted fX|Y, is defined as follows:** ⟶ - +條件密度 - X 對於 Y 的條件密度,通常用 fX|Y 表示如下:
40. **Independence ― Two random variables X and Y are said to be independent if we have:** ⟶ - +獨立 - 當滿足以下條件時,我們稱隨機變數 X 和 Y 互相獨立:
41. **Covariance ― We define the covariance of two random variables X and Y, that we note σ2XY or more commonly Cov(X,Y), as follows:** ⟶ - +共變異數 - 我們定義隨機變數 X 和 Y 的共變異數為 σ2XY 或 Cov(X,Y) 如下:
42. **Correlation ― By noting σX,σY the standard deviations of X and Y, we define the correlation between the random variables X and Y, noted ρXY, as follows:** ⟶ - +相關性 - 我們定義 σX、σY 為 X 和 Y 的標準差,而 X 和 Y 的相關係數 ρXY 定義如下:
43. **Remark 1: we note that for any random variables X,Y, we have ρXY∈[−1,1].** ⟶ - +注意一:對於任何隨機變數 X 和 Y 來說,ρXY∈[−1,1] 成立
44. **Remark 2: If X and Y are independent, then ρXY=0.** ⟶ - +注意二:當 X 和 Y 獨立時,ρXY=0
45. **Parameter estimation** ⟶ - +參數估計
46. **Definitions** ⟶ - +定義
47. **Random sample ― A random sample is a collection of n random variables X1,...,Xn that are independent and identically distributed with X.** ⟶ - +隨機抽樣 - 隨機抽樣指的是 n 個隨機變數 X1,...,Xn 和 X 獨立且同分佈的集合
48. **Estimator ― An estimator is a function of the data that is used to infer the value of an unknown parameter in a statistical model.** ⟶ - +估計量 - 估計量是一個資料的函數,用來推斷在統計模型中未知參數的值
49. **Bias ― The bias of an estimator ^θ is defined as being the difference between the expected value of the distribution of ^θ and the true value, i.e.:** ⟶ - +偏差 - 一個估計量的偏差 ^θ 定義為 ^θ 分佈期望值和真實值之間的差距:
50. **Remark: an estimator is said to be unbiased when we have E[^θ]=θ.** ⟶ - +注意:當 E[^θ]=θ 時,我們稱為不偏估計量
51. **Estimating the mean** ⟶ - +預估平均數
52. **Sample mean ― The sample mean of a random sample is used to estimate the true mean μ of a distribution, is often noted ¯X and is defined as follows:** ⟶ - +樣本平均 - 一個隨機樣本的樣本平均是用來預估一個分佈的真實平均 μ,通常我們用 ¯X 來表示,定義如下:
53. **Remark: the sample mean is unbiased, i.e E[¯X]=μ.** ⟶ - +注意:當 E[¯X]=μ 時,則為不偏樣本平均
54. **Central Limit Theorem ― Let us have a random sample X1,...,Xn following a given distribution with mean μ and variance σ2, then we have:** ⟶ - +中央極限定理 - 當我們有一個隨機樣本 X1,...,Xn 滿足一個給定的分佈,其平均數為 μ,變異數為 σ2,我們有:
55. **Estimating the variance** ⟶ - +估計變異數
56. **Sample variance ― The sample variance of a random sample is used to estimate the true variance σ2 of a distribution, is often noted s2 or ^σ2 and is defined as follows:** ⟶ - +樣本變異數 - 一個隨機樣本的樣本變異數是用來估計一個分佈的真實變異數 σ2,通常使用 s2 或 ^σ2 來表示,定義如下:
57. **Remark: the sample variance is unbiased, i.e E[s2]=σ2.** ⟶ - +注意:當 E[s2]=σ2 時,稱之為不偏樣本變異數
58. **Chi-Squared relation with sample variance ― Let s2 be the sample variance of a random sample. We have:** ⟶ - +與樣本變異數的卡方關聯 - 令 s2 是一個隨機樣本的樣本變異數,我們可以得到:
-59. **[Introduction, Sample space, Event, Permutation]** +**59. [Introduction, Sample space, Event, Permutation]** ⟶ - +[介紹, 樣本空間, 事件, 排列]
-60. **[Conditional probability, Bayes' rule, Independence]** +**60. [Conditional probability, Bayes' rule, Independence]** ⟶ - +[條件機率, 貝氏定理, 獨立性]
-61. **[Random variables, Definitions, Expectation, Variance]** +**61. [Random variables, Definitions, Expectation, Variance]** ⟶ - +[隨機變數, 定義, 期望值, 變異數]
-62. **[Probability distributions, Chebyshev's inequality, Main distributions]** +**62. [Probability distributions, Chebyshev's inequality, Main distributions]** ⟶ - +[機率分佈, 柴比雪夫不等式, 主要分佈]
-63. **[Jointly distributed random variables, Density, Covariance, Correlation]** +**63. [Jointly distributed random variables, Density, Covariance, Correlation]** ⟶ - +[聯合分佈隨機變數, 密度, 共變異數, 相關]
-64. **[Parameter estimation, Mean, Variance]** +**64. [Parameter estimation, Mean, Variance]** ⟶ +[參數估計, 平均數, 變異數] \ No newline at end of file diff --git a/zh-tw/cs-229-supervised-learning.md b/zh-tw/cs-229-supervised-learning.md new file mode 100644 index 000000000..0b329e8db --- /dev/null +++ b/zh-tw/cs-229-supervised-learning.md @@ -0,0 +1,352 @@ +1. **Supervised Learning cheatsheet** + +⟶ 監督式學習參考手冊 + +2. **Introduction to Supervised Learning** + +⟶ 監督式學習介紹 + +3. **Given a set of data points {x(1),...,x(m)} associated to a set of outcomes {y(1),...,y(m)}, we want to build a classifier that learns how to predict y from x.** + +⟶ 給定一組資料點 {x(1),...,x(m)},以及對應的一組輸出 {y(1),...,y(m)},我們希望建立一個分類器,用來學習如何從 x 來預測 y + +4. **Type of prediction ― The different types of predictive models are summed up in the table below:** + +⟶ 預測的種類 - 根據預測的種類不同,我們將預測模型分為底下幾種: + +5. **[Regression, Classifier, Outcome, Examples]** + +⟶ [迴歸, 分類器, 結果, 範例] + +6. **[Continuous, Class, Linear regression, Logistic regression, SVM, Naive Bayes]** + +⟶ [連續, 類別, 線性迴歸, 邏輯迴歸, 支援向量機 (SVM) , 單純貝式分類器] + +7. **Type of model ― The different models are summed up in the table below:** + +⟶ 模型種類 - 不同種類的模型歸納如下表: + +8. **[Discriminative model, Generative model, Goal, What's learned, Illustration, Examples]** + +⟶ [判別模型, 生成模型, 目標, 學到什麼, 示意圖, 範例] + +9. **[Directly estimate P(y|x), Estimate P(x|y) to then deduce P(y|x), Decision boundary, Probability distributions of the data, Regressions, SVMs, GDA, Naive Bayes]** + +⟶ [直接估計 P(y|x), 先估計 P(x|y),然後推論出 P(y|x), 決策分界線, 資料的機率分佈, 迴歸, 支援向量機 (SVM), 高斯判別分析 (GDA), 單純貝氏 (Naive Bayes)] + +10. **Notations and general concepts** + +⟶ 符號及一般概念 + +11. **Hypothesis ― The hypothesis is noted hθ and is the model that we choose. For a given input data x(i) the model prediction output is hθ(x(i)).** + +⟶ 假設 - 我們使用 hθ 來代表所選擇的模型,對於給定的輸入資料 x(i),模型預測的輸出是 hθ(x(i)) + +12. 
**Loss function ― A loss function is a function L:(z,y)∈R×Y⟼L(z,y)∈R that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:** + +⟶ 損失函數 - 損失函數是一個函數 L:(z,y)∈R×Y⟼L(z,y)∈R, +目的在於計算預測值 z 和實際值 y 之間的差距。底下是一些常見的損失函數: + +13. **[Least squared error, Logistic loss, Hinge loss, Cross-entropy]** + +⟶ [最小平方法, Logistic 損失函數, Hinge 損失函數, 交叉熵] + +14. **[Linear regression, Logistic regression, SVM, Neural Network]** + +⟶ [線性迴歸, 邏輯迴歸, 支援向量機 (SVM), 神經網路] + +15. **Cost function ― The cost function J is commonly used to assess the performance of a model, and is defined with the loss function L as follows:** + +⟶ 代價函數 - 代價函數 J 通常用來評估一個模型的表現,它可以透過損失函數 L 來定義: + +16. **Gradient descent ― By noting α∈R the learning rate, the update rule for gradient descent is expressed with the learning rate and the cost function J as follows:** + +⟶ 梯度下降 - 使用 α∈R 表示學習速率,我們透過學習速率和代價函數來使用梯度下降的方法找出網路參數更新的方法可以表示為: + +17. **Remark: Stochastic gradient descent (SGD) is updating the parameter based on each training example, and batch gradient descent is on a batch of training examples.** + +⟶ 注意:隨機梯度下降法 (SGD) 使用每一個訓練資料來更新參數。而批次梯度下降法則是透過一個批次的訓練資料來更新參數。 + +18. **Likelihood ― The likelihood of a model L(θ) given parameters θ is used to find the optimal parameters θ through maximizing the likelihood. In practice, we use the log-likelihood ℓ(θ)=log(L(θ)) which is easier to optimize. We have:** + +⟶ 概似估計 - 在給定參數 θ 的條件下,一個模型 L(θ) 的概似估計的目的是透過最大概似估計法來找到最佳的參數。實務上,我們會使用對數概似估計函數 (log-likelihood) ℓ(θ)=log(L(θ)),會比較容易最佳化。如下: + +19. **Newton's algorithm ― The Newton's algorithm is a numerical method that finds θ such that ℓ′(θ)=0. Its update rule is as follows:** + +⟶ 牛頓演算法 - 牛頓演算法是一個數值方法,目的在於找到一個 θ,讓 ℓ′(θ)=0。其更新的規則為: + +20. 
**Remark: the multidimensional generalization, also known as the Newton-Raphson method, has the following update rule:** + +⟶ 注意:多維度正規化的方法,或又被稱之為牛頓-拉弗森 (Newton-Raphson) 演算法,是透過以下的規則更新: + +21. **Linear models** + +⟶ 線性模型 + +22. **Linear regression** + +⟶ 線性迴歸 + +23. **We assume here that y|x;θ∼N(μ,σ2)** + +⟶ 我們假設 y|x;θ∼N(μ,σ2) + +24. **Normal equations ― By noting X the matrix design, the value of θ that minimizes the cost function is a closed-form solution such that:** + +⟶ 正規方程法 - 我們使用 X 代表矩陣,讓代價函數最小的 θ 值有一個封閉解,如下: + +25. **LMS algorithm ― By noting α the learning rate, the update rule of the Least Mean Squares (LMS) algorithm for a training set of m data points, which is also known as the Widrow-Hoff learning rule, is as follows:** + +⟶ 最小均方演算法 (LMS) - 我們使用 α 表示學習速率,針對 m 個訓練資料,透過最小均方演算法的更新規則,或是叫做 Widrow-Hoff 學習法如下: + +26. **Remark: the update rule is a particular case of the gradient ascent.** + +⟶ 注意:這個更新的規則是梯度上升的一種特例 + +27. **LWR ― Locally Weighted Regression, also known as LWR, is a variant of linear regression that weights each training example in its cost function by w(i)(x), which is defined with parameter τ∈R as:** + +⟶ 局部加權迴歸 ,又稱為 LWR,是線性洄歸的變形,通過w(i)(x) 對其成本函數中的每個訓練樣本進行加權,其中參數 τ∈R 定義為: + +28. **Classification and logistic regression** + +⟶ 分類與邏輯迴歸 + +29. **Sigmoid function ― The sigmoid function g, also known as the logistic function, is defined as follows:** + +⟶ Sigmoid 函數 - Sigmoid 函數 g,也可以稱為邏輯函數定義如下: + +30. **Logistic regression ― We assume here that y|x;θ∼Bernoulli(ϕ). We have the following form:** + +⟶ 邏輯迴歸 - 我們假設 y|x;θ∼Bernoulli(ϕ),請參考以下: + +31. **Remark: there is no closed form solution for the case of logistic regressions.** + +⟶ 注意:對於這種情況的邏輯迴歸,並沒有一個封閉解 + +32. **Softmax regression ― A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. 
By convention, we set θK=0, which makes the Bernoulli parameter ϕi of each class i equal to:** + +⟶ Softmax 迴歸 - Softmax 迴歸又稱做多分類邏輯迴歸,目的是用在超過兩個以上的分類時的迴歸使用。按照慣例,我們設定 θK=0,讓每一個類別的 Bernoulli 參數 ϕi 等同於: + +33. **Generalized Linear Models** + +⟶ 廣義線性模型 + +34. **Exponential family ― A class of distributions is said to be in the exponential family if it can be written in terms of a natural parameter, also called the canonical parameter or link function, η, a sufficient statistic T(y) and a log-partition function a(η) as follows:** + +⟶ 指數族分佈 - 一個分佈如果可以透過自然參數 (或稱之為正準參數或連結函數) η、充分統計量 T(y) 和對數區分函數 (log-partition function) a(η) 來表示時,我們就稱這個分佈是屬於指數族分佈。該分佈可以表示如下: + +35. **Remark: we will often have T(y)=y. Also, exp(−a(η)) can be seen as a normalization parameter that will make sure that the probabilities sum to one.** + +⟶ 注意:我們經常讓 T(y)=y,同時,exp(−a(η)) 可以看成是一個正規化的參數,目的在於讓機率總和為一。 + +36. **Here are the most common exponential distributions summed up in the following table:** + +⟶ 底下是最常見的指數分佈: + +37. **[Distribution, Bernoulli, Gaussian, Poisson, Geometric]** + +⟶ [分佈, 白努利 (Bernoulli), 高斯 (Gaussian), 卜瓦松 (Poisson), 幾何 (Geometric)] + +38. **Assumptions of GLMs ― Generalized Linear Models (GLM) aim at predicting a random variable y as a function fo x∈Rn+1 and rely on the following 3 assumptions:** + +⟶ 廣義線性模型的假設 - 廣義線性模型 (GLM) 的目的在於,給定 x∈Rn+1,要預測隨機變數 y,同時它依賴底下三個假設: + +39. **Remark: ordinary least squares and logistic regression are special cases of generalized linear models.** + +⟶ 注意:最小平方法和邏輯迴歸是廣義線性模型的一種特例 + +40. **Support Vector Machines** + +⟶ 支援向量機 + +41. **The goal of support vector machines is to find the line that maximizes the minimum distance to the line.** + +⟶ 支援向量機的目的在於找到一條決策邊界和資料樣本之間最大化最小距離的線 + +42. **Optimal margin classifier ― The optimal margin classifier h is such that:** + +⟶ 最佳的邊界分類器 - 最佳的邊界分類器可以表示為: + +43. **where (w,b)∈Rn×R is the solution of the following optimization problem:** + +⟶ 其中,(w,b)∈Rn×R 是底下最佳化問題的答案: + +44. **such that** + +⟶ 使得 + +45. 
**support vectors** + +⟶ 支援向量 + +46. **Remark: the line is defined as wTx−b=0.** + +⟶ 注意:該條直線定義為 wTx−b=0 + +47. **Hinge loss ― The hinge loss is used in the setting of SVMs and is defined as follows:** + +⟶ Hinge 損失函數 - Hinge 損失函數用在支援向量機上,定義如下: + +48. **Kernel ― Given a feature mapping ϕ, we define the kernel K to be defined as:** + +⟶ 核(函數) - 給定特徵轉換 ϕ,我們定義核(函數) K 為: + +49. **In practice, the kernel K defined by K(x,z)=exp(−||x−z||22σ2) is called the Gaussian kernel and is commonly used.** + +⟶ 實務上,K(x,z)=exp(−||x−z||22σ2) 定義的核(函數) K,一般稱作高斯核(函數)。這種核(函數)經常被使用 + +50. **[Non-linear separability, Use of a kernel mapping, Decision boundary in the original space]** + +⟶ [非線性可分, 使用核(函數)進行映射, 原始空間中的決策邊界] + +51. **Remark: we say that we use the "kernel trick" to compute the cost function using the kernel because we actually don't need to know the explicit mapping ϕ, which is often very complicated. Instead, only the values K(x,z) are needed.** + +⟶ 注意:我們使用 "核(函數)技巧" 來計算代價函數時,不需要真正的知道映射函數 ϕ,這個函數非常複雜。相反的,我們只需要知道 K(x,z) 的值即可。 + +52. **Lagrangian ― We define the Lagrangian L(w,b) as follows:** + +⟶ Lagrangian - 我們將 Lagrangian L(w,b) 定義如下: + +53. **Remark: the coefficients βi are called the Lagrange multipliers.** + +⟶ 注意:係數 βi 稱為 Lagrange 乘數 + +54. **Generative Learning** + +⟶ 生成學習 + +55. **A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.** + +⟶ 生成模型嘗試透過預估 P(x|y) 來學習資料如何生成,而我們可以透過貝氏定理來預估 P(y|x) + +56. **Gaussian Discriminant Analysis** + +⟶ 高斯判別分析 + +57. **Setting ― The Gaussian Discriminant Analysis assumes that y and x|y=0 and x|y=1 are such that:** + +⟶ 設定 - 高斯判別分析針對 y、x|y=0 和 x|y=1 進行以下假設: + +58. **Estimation ― The following table sums up the estimates that we find when maximizing the likelihood:** + +⟶ 估計 - 底下的表格總結了我們在最大概似估計時的估計值: + +59. **Naive Bayes** + +⟶ 單純貝氏 + +60. 
**Assumption ― The Naive Bayes model supposes that the features of each data point are all independent:**

⟶ 假設 - 單純貝氏模型會假設每個資料點的特徵都是獨立的。

61. **Solutions ― Maximizing the log-likelihood gives the following solutions, with k∈{0,1},l∈[[1,L]]**

⟶ 解決方法 - 最大化對數概似估計來給出以下解答,k∈{0,1},l∈[[1,L]]

62. **Remark: Naive Bayes is widely used for text classification and spam detection.**

⟶ 注意:單純貝氏廣泛應用在文字分類和垃圾信件偵測上

63. **Tree-based and ensemble methods**

⟶ 基於樹狀結構的學習和整體學習

64. **These methods can be used for both regression and classification problems.**

⟶ 這些方法可以應用在迴歸或分類問題上

65. **CART ― Classification and Regression Trees (CART), commonly known as decision trees, can be represented as binary trees. They have the advantage to be very interpretable.**

⟶ CART - 分類與迴歸樹 (CART),通常稱之為決策樹,可以被表示為二元樹。它的優點是具有可解釋性。

66. **Random forest ― It is a tree-based technique that uses a high number of decision trees built out of randomly selected sets of features. Contrary to the simple decision tree, it is highly uninterpretable but its generally good performance makes it a popular algorithm.**

⟶ 隨機森林 - 這是一個基於樹狀結構的方法,它使用大量經由隨機挑選的特徵所建構的決策樹。與單純的決策樹不同,它通常具有高度不可解釋性,但它的效能通常很好,所以是一個相當流行的演算法。

67. **Remark: random forests are a type of ensemble methods.**

⟶ 注意:隨機森林是一種整體學習方法

68. **Boosting ― The idea of boosting methods is to combine several weak learners to form a stronger one. The main ones are summed up in the table below:**

⟶ 增強學習 (Boosting) - 增強學習方法的概念是結合數個弱學習模型來變成強學習模型。主要的分類如下:

69. **[Adaptive boosting, Gradient boosting]**

⟶ [自適應增強, 梯度增強]

70. **High weights are put on errors to improve at the next boosting step**

⟶ 在下一輪的提升步驟中,錯誤的部分會被賦予較高的權重

71. **Weak learners trained on remaining errors**

⟶ 弱學習器針對剩餘的誤差進行訓練

72. **Other non-parametric approaches**

⟶ 其他非參數方法

73. 
**k-nearest neighbors ― The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.**

⟶ k-最近鄰 - k-最近鄰演算法,又稱之為 k-NN,是一種非參數方法,資料點的輸出是由訓練集中與其最接近的 k 個鄰居的性質所決定。它可以用在分類和迴歸問題上。

74. **Remark: The higher the parameter k, the higher the bias, and the lower the parameter k, the higher the variance.**

⟶ 注意:參數 k 的值越大,偏差越大。k 的值越小,變異越大。

75. **Learning Theory**

⟶ 學習理論

76. **Union bound ― Let A1,...,Ak be k events. We have:**

⟶ 聯集上界 - 令 A1,...,Ak 為 k 個事件,我們有:

77. **Hoeffding inequality ― Let Z1,..,Zm be m iid variables drawn from a Bernoulli distribution of parameter ϕ. Let ˆϕ be their sample mean and γ>0 fixed. We have:**

⟶ 霍夫丁不等式 - 令 Z1,..,Zm 為 m 個從參數 ϕ 的白努利分佈中抽出的獨立同分佈 (iid) 的變數。令 ˆϕ 為其樣本平均、固定 γ>0,我們可以得到:

78. **Remark: this inequality is also known as the Chernoff bound.**

⟶ 注意:這個不等式也被稱之為 Chernoff 界線

79. **Training error ― For a given classifier h, we define the training error ˆϵ(h), also known as the empirical risk or empirical error, to be as follows:**

⟶ 訓練誤差 - 對於一個分類器 h,我們定義訓練誤差為 ˆϵ(h),也可以稱為經驗風險或經驗誤差。定義如下:

80. **Probably Approximately Correct (PAC) ― PAC is a framework under which numerous results on learning theory were proved, and has the following set of assumptions:**

⟶ 可能近似正確 (PAC) - PAC 是一個框架,許多學習理論的結果都是在此框架下被證明的。它包含以下假設:

81. **the training and testing sets follow the same distribution**

⟶ 訓練和測試資料集具有相同的分佈

82. **the training examples are drawn independently**

⟶ 訓練樣本之間彼此獨立

83. **Shattering ― Given a set S={x(1),...,x(d)}, and a set of classifiers H, we say that H shatters S if for any set of labels {y(1),...,y(d)}, we have:**

⟶ 打散 (Shattering) - 給定一個集合 S={x(1),...,x(d)} 以及一組分類器的集合 H,若對於任何一組標籤 {y(1),...,y(d)} 都滿足下式,我們就說 H 打散 S:

84. 
**Upper bound theorem ― Let H be a finite hypothesis class such that |H|=k and let δ and the sample size m be fixed. Then, with probability of at least 1−δ, we have:**

⟶ 上限定理 - 令 H 是一個有限假設類別,使 |H|=k 且令 δ 和樣本大小 m 固定,接著,在機率至少為 1−δ 的情況下,我們得到:

85. **VC dimension ― The Vapnik-Chervonenkis (VC) dimension of a given infinite hypothesis class H, noted VC(H) is the size of the largest set that is shattered by H.**

⟶ VC 維度 - 一個無限假設類別 H 的 Vapnik-Chervonenkis (VC) 維度 VC(H),指的是能被 H 打散的最大集合的大小

86. **Remark: the VC dimension of H={set of linear classifiers in 2 dimensions} is 3.**

⟶ 注意:H={2 維的線性分類器} 的 VC 維度為 3

87. **Theorem (Vapnik) ― Let H be given, with VC(H)=d and m the number of training examples. With probability at least 1−δ, we have:**

⟶ 定理 (Vapnik) - 令 H 已給定,VC(H)=d 且 m 是訓練資料集的數量,在機率至少為 1−δ 的情況下,我們得到:

88. **Known as Adaboost**

⟶ 被稱為 Adaboost

diff --git a/zh/cheatsheet-unsupervised-learning.md b/zh-tw/cs-229-unsupervised-learning.md
similarity index 59%
rename from zh/cheatsheet-unsupervised-learning.md
rename to zh-tw/cs-229-unsupervised-learning.md
index 93708b826..0f6d5ee34 100644
--- a/zh/cheatsheet-unsupervised-learning.md
+++ b/zh-tw/cs-229-unsupervised-learning.md
@@ -1,339 +1,298 @@
1. **Unsupervised Learning cheatsheet**

⟶
-
+非監督式學習參考手冊

<br>
2. **Introduction to Unsupervised Learning** ⟶ - +非監督式學習介紹
3. **Motivation ― The goal of unsupervised learning is to find hidden patterns in unlabeled data {x(1),...,x(m)}.** ⟶ - +動機 - 非監督式學習的目的是要找出未標籤資料 {x(1),...,x(m)} 之間的隱藏模式
4. **Jensen's inequality ― Let f be a convex function and X a random variable. We have the following inequality:** ⟶ - +Jensen's 不等式 - 令 f 為一個凸函數、X 為一個隨機變數,我們可以得到底下這個不等式:
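As a quick numeric illustration of Jensen's inequality f(E[X])≤E[f(X)], here is a small pure-Python sketch; the convex function f(x)=x² and the uniform sample are assumptions for the demo, not part of the cheatsheet:

```python
import random

random.seed(0)
# X ~ Uniform(-1, 3), approximated by a sample (an assumed toy distribution)
xs = [random.uniform(-1.0, 3.0) for _ in range(10000)]

def f(x):
    return x * x  # a convex function

lhs = f(sum(xs) / len(xs))             # f(E[X])
rhs = sum(f(x) for x in xs) / len(xs)  # E[f(X)]
print(lhs <= rhs)
```

The gap rhs − lhs is exactly the sample variance of X here, which is why it is strictly positive for any non-degenerate sample.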
5. **Clustering** ⟶ - +分群
6. **Expectation-Maximization**

⟶
-
+最大期望演算法 (EM)

<br>
7. **Latent variables ― Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Here are the most common settings where there are latent variables:** ⟶ - +潛在變數 (Latent variables) - 潛在變數指的是隱藏/沒有觀察到的變數,這會讓問題的估計變得困難,我們通常使用 z 來代表它。底下是潛在變數的常見設定:
8. **[Setting, Latent variable z, Comments]** ⟶ - +[設定, 潛在變數 z, 評論]
9. **[Mixture of k Gaussians, Factor analysis]** ⟶ - +[k 元高斯模型, 因素分析]
10. **Algorithm ― The Expectation-Maximization (EM) algorithm gives an efficient method at estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower-bound on the likelihood (E-step) and optimizing that lower bound (M-step) as follows:**

⟶
-
+演算法 - 最大期望演算法 (EM Algorithm) 透過重複建構概似函數的下界 (E-step) 並最佳化該下界 (M-step),以最大概似估計法高效地估計參數 θ,步驟如下:

<br>
11. **E-step: Evaluate the posterior probability Qi(z(i)) that each data point x(i) came from a particular cluster z(i) as follows:** ⟶ - +E-step: 評估後驗機率 Qi(z(i)),其中每個資料點 x(i) 來自於一個特定的群集 z(i),如下:
12. **M-step: Use the posterior probabilities Qi(z(i)) as cluster specific weights on data points x(i) to separately re-estimate each cluster model as follows:** ⟶ - +M-step: 使用後驗機率 Qi(z(i)) 作為資料點 x(i) 在群集中特定的權重,用來分別重新估計每個群集,如下:
13. **[Gaussians initialization, Expectation step, Maximization step, Convergence]** ⟶ - +[高斯分佈初始化, E-Step, M-Step, 收斂]
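The E-step/M-step loop above can be sketched for a mixture of two 1-D Gaussians; the synthetic data, the seed, and the initial guesses are assumptions for this toy demo:

```python
import math
import random

def pdf(x, mu, var):
    # Gaussian density N(mu, var) evaluated at x
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

random.seed(0)
# synthetic 1-D data from two well-separated clusters (an assumed example)
data = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(8, 1) for _ in range(200)]

phi, mu, var = [0.5, 0.5], [1.0, 7.0], [1.0, 1.0]  # initial guesses
for _ in range(30):
    # E-step: posterior Qi(z(i)) that each point x(i) came from cluster j
    q = []
    for x in data:
        w = [phi[j] * pdf(x, mu[j], var[j]) for j in range(2)]
        s = sum(w)
        q.append([wj / s for wj in w])
    # M-step: re-estimate each cluster using the posteriors as weights
    for j in range(2):
        nj = sum(qi[j] for qi in q)
        phi[j] = nj / len(data)
        mu[j] = sum(qi[j] * x for qi, x in zip(q, data)) / nj
        var[j] = sum(qi[j] * (x - mu[j]) ** 2 for qi, x in zip(q, data)) / nj

print([round(m, 2) for m in sorted(mu)])
```

With this data the estimated means converge near the true cluster centers 0 and 8.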
14. **k-means clustering** ⟶ - +k-means 分群法
15. **We note c(i) the cluster of data point i and μj the center of cluster j.**

⟶
-
+我們使用 c(i) 表示資料點 i 所屬的群集,而 μj 則是群集 j 的中心

<br>
16. **Algorithm ― After randomly initializing the cluster centroids μ1,μ2,...,μk∈Rn, the k-means algorithm repeats the following step until convergence:** ⟶ - +演算法 - 在隨機初始化群集中心點 μ1,μ2,...,μk∈Rn 後,k-means 演算法重複以下步驟直到收斂:
17. **[Means initialization, Cluster assignment, Means update, Convergence]** ⟶ - +[中心點初始化, 指定群集, 更新中心點, 收斂]
18. **Distortion function ― In order to see if the algorithm converges, we look at the distortion function defined as follows:** ⟶ - +畸變函數 - 為了確認演算法是否收斂,我們定義以下的畸變函數:
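A minimal pure-Python sketch of the k-means loop (means initialization, cluster assignment, means update) together with the distortion function J; the two 2-D blobs and the seed are assumed toy inputs:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # means initialization: k random points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # cluster assignment: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2 + (p[1] - centers[j][1]) ** 2)
            clusters[j].append(p)
        # means update: move each centroid to the mean of its cluster
        for j, c in enumerate(clusters):
            if c:
                centers[j] = (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
    return centers, clusters

def distortion(centers, clusters):
    # J(c, mu): total squared distance of every point to its assigned centroid
    return sum((p[0] - centers[j][0]) ** 2 + (p[1] - centers[j][1]) ** 2
               for j, c in enumerate(clusters) for p in c)

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]  # two obvious toy blobs
centers, clusters = kmeans(pts, 2)
print(round(distortion(centers, clusters), 3))
```

The distortion is non-increasing across iterations, which is how convergence is monitored in practice.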
19. **Hierarchical clustering** ⟶ - +階層式分群法
20. **Algorithm ― It is a clustering algorithm with an agglomerative hierarchical approach that build nested clusters in a successive manner.**

⟶
-
+演算法 - 這是一種採用凝聚式 (agglomerative) 階層做法的分群演算法,以逐步合併的方式建立巢狀的群集。

<br>
21. **Types ― There are different sorts of hierarchical clustering algorithms that aims at optimizing different objective functions, which is summed up in the table below:** ⟶ - +類型 - 底下是幾種不同類型的階層式分群法,差別在於要最佳化的目標函式的不同,請參考底下:
22. **[Ward linkage, Average linkage, Complete linkage]** ⟶ - +[Ward 鏈結距離, 平均鏈結距離, 完整鏈結距離]
23. **[Minimize within cluster distance, Minimize average distance between cluster pairs, Minimize maximum distance of between cluster pairs]** ⟶ - +[最小化群內距離, 最小化各群彼此的平均距離, 最小化各群彼此的最大距離]
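The average- and complete-linkage distances from the table can be computed directly from pairwise distances (Ward linkage is omitted here for brevity); the two point sets are an assumed toy example:

```python
from itertools import product
from math import dist

# two toy clusters in 2-D (assumed example points)
A = [(0.0, 0.0), (0.0, 1.0)]
B = [(3.0, 0.0), (5.0, 0.0)]
pair_d = [dist(a, b) for a, b in product(A, B)]

average_linkage = sum(pair_d) / len(pair_d)  # mean distance between cluster pairs
complete_linkage = max(pair_d)               # maximum distance between cluster pairs
print(round(average_linkage, 3), round(complete_linkage, 3))
```

By construction the complete linkage is always at least the average linkage for the same pair of clusters.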
24. **Clustering assessment metrics** ⟶ - +分群衡量指標
25. **In an unsupervised learning setting, it is often hard to assess the performance of a model since we don't have the ground truth labels as was the case in the supervised learning setting.**

⟶
-
+在非監督式學習中,通常很難評估一個模型的好壞,因為我們不像在監督式學習中那樣擁有標準答案 (ground truth) 的標籤

<br>
26. **Silhouette coefficient ― By noting a and b the mean distance between a sample and all other points in the same class, and between a sample and all other points in the next nearest cluster, the silhouette coefficient s for a single sample is defined as follows:** ⟶ - +輪廓係數 (Silhouette coefficient) - 我們指定 a 為一個樣本點和相同群集中其他資料點的平均距離、b 為一個樣本點和下一個最接近群集其他資料點的平均距離,輪廓係數 s 對於此一樣本點的定義為:
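The single-sample silhouette formula s=(b−a)/max(a,b) is short enough to sketch directly; the values of a and b below are assumed toy numbers:

```python
def silhouette(a, b):
    # s = (b - a) / max(a, b) for a single sample
    return (b - a) / max(a, b)

# a: mean distance to the other points of the sample's own cluster
# b: mean distance to the points of the next nearest cluster (assumed toy values)
s_good = silhouette(a=0.5, b=5.0)  # well separated: close to 1
s_bad = silhouette(a=2.0, b=2.0)   # right on the cluster boundary: 0
print(s_good, s_bad)
```

The coefficient always lies in [−1, 1], with values near 1 indicating tight, well-separated clusters.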
27. **Calinski-Harabaz index ― By noting k the number of clusters, Bk and Wk the between and within-clustering dispersion matrices respectively defined as** ⟶ - +Calinski-Harabaz 指標 - 定義 k 是群集的數量,Bk 和 Wk 分別是群內和群集之間的離差矩陣 (dispersion matrices):
28. **the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:** ⟶ - +Calinski-Harabaz 指標 s(k) 指出分群模型的好壞,此指標的值越高,代表分群模型的表現越好。定義如下:
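For 1-D data the dispersion matrices Bk and Wk reduce to scalars, so the score s(k)=[Tr(Bk)/(k−1)]/[Tr(Wk)/(N−k)] can be sketched in a few lines; the two toy clusterings are assumed examples:

```python
def calinski_harabasz(clusters):
    # clusters: list of lists of 1-D points; returns the score s(k)
    N = sum(len(c) for c in clusters)
    k = len(clusters)
    overall = sum(x for c in clusters for x in c) / N
    means = [sum(c) / len(c) for c in clusters]
    tr_b = sum(len(c) * (m - overall) ** 2 for c, m in zip(clusters, means))  # Tr(Bk)
    tr_w = sum((x - m) ** 2 for c, m in zip(clusters, means) for x in c)      # Tr(Wk)
    return (tr_b / (k - 1)) / (tr_w / (N - k))

tight = [[0.0, 0.1], [10.0, 10.1]]  # dense, well-separated clusters -> high score
loose = [[0.0, 4.0], [6.0, 10.0]]   # spread-out clusters -> lower score
print(calinski_harabasz(tight), calinski_harabasz(loose))
```

As the cheatsheet states, denser and better-separated clusters yield a higher score.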
29. **Dimension reduction** ⟶ - +維度縮減
30. **Principal component analysis** ⟶ - +主成份分析
31. **It is a dimension reduction technique that finds the variance maximizing directions onto which to project the data.**

⟶
-
+這是一種維度縮減技巧,目的在於找出能讓投影後資料的變異數最大化的方向

<br>
32. **Eigenvalue, eigenvector ― Given a matrix A∈Rn×n, λ is said to be an eigenvalue of A if there exists a vector z∈Rn∖{0}, called eigenvector, such that we have:** ⟶ - +特徵值、特徵向量 - 給定一個矩陣 A∈Rn×n,我們說 λ 是 A 的特徵值,當存在一個特徵向量 z∈Rn∖{0},使得:
33. **Spectral theorem ― Let A∈Rn×n. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U∈Rn×n. By noting Λ=diag(λ1,...,λn), we have:**

⟶
-
+譜定理 - 令 A∈Rn×n,如果 A 是對稱的,則 A 可以透過實正交矩陣 U∈Rn×n 對角化。令 Λ=diag(λ1,...,λn),我們得到:

<br>
34. **diagonal** ⟶ - +對角線
35. **Remark: the eigenvector associated with the largest eigenvalue is called principal eigenvector of matrix A.**

⟶
-
+注意:與最大特徵值相關聯的特徵向量稱為矩陣 A 的主特徵向量

<br>
36. **Algorithm ― The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k dimensions by maximizing the variance of the data as follows:** ⟶ - +演算法 - 主成份分析 (PCA) 是一種維度縮減的技巧,它會透過尋找資料最大變異的方式,將資料投影在 k 維空間上:
37. **Step 1: Normalize the data to have a mean of 0 and standard deviation of 1.**

⟶
-
+第一步:將資料標準化,使其平均值為 0、標準差為 1

<br>
38. **Step 2: Compute Σ=1mm∑i=1x(i)x(i)T∈Rn×n, which is symmetric with real eigenvalues.**

⟶
-
+第二步:計算 Σ=1mm∑i=1x(i)x(i)T∈Rn×n,此矩陣是對稱的,且具有實數特徵值

<br>
39. **Step 3: Compute u1,...,uk∈Rn the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.**

⟶
-
+第三步:計算 Σ 的 k 個正交主特徵向量 u1,...,uk∈Rn,也就是 k 個最大特徵值所對應的正交特徵向量

<br>
40. **Step 4: Project the data on spanR(u1,...,uk).**

⟶
-
+第四步:將資料投影到 spanR(u1,...,uk)

<br>
41. **This procedure maximizes the variance among all k-dimensional spaces.** ⟶ - +這個步驟會最大化所有 k 維空間的變異數
42. **[Data in feature space, Find principal components, Data in principal components space]** ⟶ - +[資料在特徵空間, 尋找主成分, 資料在主成分空間]
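The four PCA steps above can be sketched in pure Python for 2-D data, using power iteration to obtain the principal eigenvector of Σ; the toy data set is an assumption, and for brevity Step 1 only centers the data rather than also scaling it to unit variance:

```python
import math

# toy 2-D data with most of its variance along the x=y direction (assumed example)
X = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.9), (5.0, 5.1)]
n = len(X)

# Step 1 (partial): center the data at mean 0
mx = sum(p[0] for p in X) / n
my = sum(p[1] for p in X) / n
Xc = [(p[0] - mx, p[1] - my) for p in X]

# Step 2: empirical covariance Sigma = (1/m) sum x x^T, stored entrywise (2x2)
sxx = sum(x * x for x, _ in Xc) / n
syy = sum(y * y for _, y in Xc) / n
sxy = sum(x * y for x, y in Xc) / n

# Step 3: principal eigenvector of Sigma via power iteration
u = (1.0, 0.0)
for _ in range(100):
    v = (sxx * u[0] + sxy * u[1], sxy * u[0] + syy * u[1])
    norm = math.hypot(v[0], v[1])
    u = (v[0] / norm, v[1] / norm)

# Step 4: project the centered data onto span(u)
proj = [x * u[0] + y * u[1] for x, y in Xc]
print([round(c, 3) for c in u])
```

For this data the recovered direction is close to (1,1)/√2, the diagonal along which the points lie.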
43. **Independent component analysis** ⟶ - +獨立成分分析
44. **It is a technique meant to find the underlying generating sources.** ⟶ - +這是用來尋找潛在生成來源的技巧
45. **Assumptions ― We assume that our data x has been generated by the n-dimensional source vector s=(s1,...,sn), where si are independent random variables, via a mixing and non-singular matrix A as follows:** ⟶ - +假設 - 我們假設資料 x 是從 n 維的來源向量 s=(s1,...,sn) 產生,si 為獨立變數,透過一個混合與非奇異矩陣 A 產生如下:
46. **The goal is to find the unmixing matrix W=A−1.** ⟶ - +目的在於找到一個 unmixing 矩陣 W=A−1
47. **Bell and Sejnowski ICA algorithm ― This algorithm finds the unmixing matrix W by following the steps below:** ⟶ - +Bell 和 Sejnowski 獨立成份分析演算法 - 此演算法透過以下步驟來找到 unmixing 矩陣:
48. **Write the probability of x=As=W−1s as:**

⟶
-
+將 x=As=W−1s 的機率寫成如下:

<br>
49. **Write the log likelihood given our training data {x(i),i∈[[1,m]]} and by noting g the sigmoid function as:**

⟶
-
+給定訓練資料 {x(i),i∈[[1,m]]},並令 g 為 sigmoid 函數,其對數概似函數可寫成如下:

<br>
50. **Therefore, the stochastic gradient ascent learning rule is such that for each training example x(i), we update W as follows:** ⟶ - -
- -51. **The Machine Learning cheatsheets are now available in Mandarin.** - -⟶ - -
- -52. **Original authors** - -⟶ - -
- -53. **Translated by X, Y and Z** - -⟶ - -
- -54. **Reviewed by X, Y and Z** - -⟶ - -
- -55. **[Introduction, Motivation, Jensen's inequality]** - -⟶ - -
- -56. **[Clustering, Expectation-Maximization, k-means, Hierarchical clustering, Metrics]** - -⟶ - -
57. **[Dimension reduction, PCA, ICA]**

⟶
+因此,隨機梯度上升 (stochastic gradient ascent) 學習規則對每個訓練樣本 x(i) 來說,我們透過以下方法來更新 W:

diff --git a/zh/cheatsheet-deep-learning.md b/zh/cheatsheet-deep-learning.md
deleted file mode 100644
index a7604ccc6..000000000
--- a/zh/cheatsheet-deep-learning.md
+++ /dev/null
@@ -1,321 +0,0 @@
-1. **Deep Learning cheatsheet**

⟶

-
<br>
- -2. **Neural Networks** - -⟶ - -
- -3. **Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.** - -⟶ - -
- -4. **Architecture ― The vocabulary around neural networks architectures is described in the figure below:** - -⟶ - -
- -5. **[Input layer, hidden layer, output layer]** - -⟶ - -
- -6. **By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:** - -⟶ - -
- -7. **where we note w, b, z the weight, bias and output respectively.** - -⟶ - -
- -8. **Activation function ― Activation functions are used at the end of a hidden unit to introduce non-linear complexities to the model. Here are the most common ones:** - -⟶ - -
- -9. **[Sigmoid, Tanh, ReLU, Leaky ReLU]** - -⟶ - -
- -10. **Cross-entropy loss ― In the context of neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:** - -⟶ - -
- -11. **Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. This can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.** - -⟶ - -
- -12. **Backpropagation ― Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to weight w is computed using chain rule and is of the following form:** - -⟶ - -
- -13. **As a result, the weight is updated as follows:** - -⟶ - -
- -14. **Updating weights ― In a neural network, weights are updated as follows:** - -⟶ - -
- -15. **Step 1: Take a batch of training data.** - -⟶ - -
- -16. **Step 2: Perform forward propagation to obtain the corresponding loss.** - -⟶ - -
- -17. **Step 3: Backpropagate the loss to get the gradients.** - -⟶ - -
- -18. **Step 4: Use the gradients to update the weights of the network.** - -⟶ - -
- -19. **Dropout ― Dropout is a technique meant at preventing overfitting the training data by dropping out units in a neural network. In practice, neurons are either dropped with probability p or kept with probability 1−p** - -⟶ - -
- -20. **Convolutional Neural Networks** - -⟶ - -
- -21. **Convolutional layer requirement ― By noting W the input volume size, F the size of the convolutional layer neurons, P the amount of zero padding, then the number of neurons N that fit in a given volume is such that:** - -⟶ - -
- -22. **Batch normalization ― It is a step of hyperparameter γ,β that normalizes the batch {xi}. By noting μB,σ2B the mean and variance of that we want to correct to the batch, it is done as follows:** - -⟶ - -
- -23. **It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.** - -⟶ - -
- -24. **Recurrent Neural Networks** - -⟶ - -
- -25. **Types of gates ― Here are the different types of gates that we encounter in a typical recurrent neural network:** - -⟶ - -
- -26. **[Input gate, forget gate, gate, output gate]** - -⟶ - -
- -27. **[Write to cell or not?, Erase a cell or not?, How much to write to cell?, How much to reveal cell?]** - -⟶ - -
- -28. **LSTM ― A long short-term memory (LSTM) network is a type of RNN model that avoids the vanishing gradient problem by adding 'forget' gates.** - -⟶ - -
- -29. **Reinforcement Learning and Control** - -⟶ - -
- -30. **The goal of reinforcement learning is for an agent to learn how to evolve in an environment.** - -⟶ - -
- -31. **Definitions** - -⟶ - -
- -32. **Markov decision processes ― A Markov decision process (MDP) is a 5-tuple (S,A,{Psa},γ,R) where:** - -⟶ - -
- -33. **S is the set of states** - -⟶ - -
- -34. **A is the set of actions** - -⟶ - -
- -35. **{Psa} are the state transition probabilities for s∈S and a∈A** - -⟶ - -
- -36. **γ∈[0,1[ is the discount factor** - -⟶ - -
- -37. **R:S×A⟶R or R:S⟶R is the reward function that the algorithm wants to maximize** - -⟶ - -
- -38. **Policy ― A policy π is a function π:S⟶A that maps states to actions.** - -⟶ - -
- -39. **Remark: we say that we execute a given policy π if given a state s we take the action a=π(s).** - -⟶ - -
- -40. **Value function ― For a given policy π and a given state s, we define the value function Vπ as follows:** - -⟶ - -
- -41. **Bellman equation ― The optimal Bellman equations characterizes the value function Vπ∗ of the optimal policy π∗:** - -⟶ - -
- -42. **Remark: we note that the optimal policy π∗ for a given state s is such that:** - -⟶ - -
- -43. **Value iteration algorithm ― The value iteration algorithm is in two steps:** - -⟶ - -
- -44. **1) We initialize the value:** - -⟶ - -
- -45. **2) We iterate the value based on the values before:** - -⟶ - -
- -46. **Maximum likelihood estimate ― The maximum likelihood estimates for the state transition probabilities are as follows:** - -⟶ - -
- -47. **times took action a in state s and got to s′** - -⟶ - -
- -48. **times took action a in state s** - -⟶ - -
- -49. **Q-learning ― Q-learning is a model-free estimation of Q, which is done as follows:** - -⟶ - -
- -50. **View PDF version on GitHub** - -⟶ - -
- -51. **[Neural Networks, Architecture, Activation function, Backpropagation, Dropout]** - -⟶ - -
- -52. **[Convolutional Neural Networks, Convolutional layer, Batch normalization]** - -⟶ - -
- -53. **[Recurrent Neural Networks, Gates, LSTM]** - -⟶ - -
- -54. **[Reinforcement learning, Markov decision processes, Value/policy iteration, Approximate dynamic programming, Policy search]** - -⟶ diff --git a/zh/cheatsheet-supervised-learning.md b/zh/cs-229-supervised-learning.md similarity index 100% rename from zh/cheatsheet-supervised-learning.md rename to zh/cs-229-supervised-learning.md