<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" /><meta name="generator" content="Docutils 0.19: https://docutils.sourceforge.io/" />
<title>User guide — kymatio 0.3.0 documentation</title>
<link rel="stylesheet" type="text/css" href="_static/pygments.css" />
<link rel="stylesheet" type="text/css" href="_static/alabaster.css" />
<link rel="stylesheet" type="text/css" href="_static/sg_gallery.css" />
<link rel="stylesheet" type="text/css" href="_static/sg_gallery-binder.css" />
<link rel="stylesheet" type="text/css" href="_static/sg_gallery-dataframe.css" />
<link rel="stylesheet" type="text/css" href="_static/sg_gallery-rendered-html.css" />
<script data-url_root="./" id="documentation_options" src="_static/documentation_options.js"></script>
<script src="_static/jquery.js"></script>
<script src="_static/underscore.js"></script>
<script src="_static/_sphinx_javascript_frameworks_compat.js"></script>
<script src="_static/doctools.js"></script>
<script async="async" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<link rel="shortcut icon" href="_static/kymatio.ico"/>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
<link rel="next" title="Information for developers" href="developerguide.html" />
<link rel="prev" title="Kymatio: Wavelet scattering in Python - v0.3.0 “Erdre”" href="index.html" />
<link rel="stylesheet" href="_static/custom.css" type="text/css" />
<link rel="apple-touch-icon" href="_static/kymatio.jpg" />
<meta name="viewport" content="width=device-width, initial-scale=0.9, maximum-scale=0.9" />
</head><body>
<div class="document">
<div class="documentwrapper">
<div class="bodywrapper">
<div class="body" role="main">
<section id="user-guide">
<span id="id1"></span><h1>User guide<a class="headerlink" href="#user-guide" title="Permalink to this heading">¶</a></h1>
<section id="introduction-to-scattering-transforms">
<h2>Introduction to scattering transforms<a class="headerlink" href="#introduction-to-scattering-transforms" title="Permalink to this heading">¶</a></h2>
<p>A scattering transform is a non-linear signal representation that builds
invariance to geometric transformations while preserving a high degree of
discriminability. These transforms can be made invariant to translations,
rotations (for 2D or 3D signals), frequency shifting (for 1D signals), or
changes of scale. These transformations are often irrelevant to many
classification and regression tasks, so representing signals using their
scattering transform reduces unnecessary variability while capturing structure
needed for a given task. This reduced variability simplifies the building of
models, especially given small training sets.</p>
<p>The scattering transform is defined as a complex-valued convolutional neural
network whose filters are fixed to be wavelets and whose non-linearity is a
complex modulus. Each layer is a wavelet transform, which separates the scales
of the incoming signal. The wavelet transform is contractive, and so is the
complex modulus, so the whole network is contractive. The result is a reduction
of variance and a stability to additive noise. The separation of scales by
wavelets also enables stability to deformation of the original signal. These
properties make the scattering transform well-suited for representing structured
signals such as natural images, textures, audio recordings, biomedical signals,
or molecular density functions.</p>
<p>Let us consider a set of wavelets <span class="math notranslate nohighlight">\(\{\psi_\lambda\}_\lambda\)</span>, such that
there exists some <span class="math notranslate nohighlight">\(\epsilon\)</span> satisfying:</p>
<div class="math notranslate nohighlight">
\[1-\epsilon \leq \sum_\lambda |\hat \psi_\lambda(\omega)|^2 \leq 1\]</div>
<p>Given a signal <span class="math notranslate nohighlight">\(x\)</span>, we define its scattering coefficient of order
<span class="math notranslate nohighlight">\(k\)</span> corresponding to the sequence of frequencies
<span class="math notranslate nohighlight">\((\lambda_1,...,\lambda_k)\)</span> to be</p>
<div class="math notranslate nohighlight">
\[Sx[\lambda_1,...,\lambda_k] = |\psi_{\lambda_k} \star ...| \psi_{\lambda_1} \star x|...|\]</div>
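<p>For instance, the first- and second-order coefficients are obtained by taking the modulus of one or two successive wavelet convolutions:</p>
<div class="math notranslate nohighlight">
\[Sx[\lambda_1] = |\psi_{\lambda_1} \star x|, \qquad Sx[\lambda_1, \lambda_2] = |\psi_{\lambda_2} \star |\psi_{\lambda_1} \star x||\]</div>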
<p>For a general treatment of the scattering transform, see
<span id="id2">[<a class="reference internal" href="#id27" title="Stéphane Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics, 65(10):1331–1398, 2012.">Mal12</a>]</span>. More specific descriptions of the scattering transform
are found in <span id="id3">[<a class="reference internal" href="#id30" title="J. Andén and S. Mallat. Deep scattering spectrum. IEEE Trans. Signal Process., 62:4114–4128, 2014.">AndenM14</a>]</span> for 1D, <span id="id4">[<a class="reference internal" href="#id29" title="J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1872-1886, 2013.">BM13</a>]</span> for 2D,
and <span id="id5">[<a class="reference internal" href="#id31" title="Michael Eickenberg, Georgios Exarchakis, Matthew Hirn, and Stéphane Mallat. Solid harmonic wavelet scattering: predicting quantum molecular energy from invariant descriptors of 3d electronic densities. In Advances in Neural Information Processing Systems, 6540–6549. 2017.">EEHM17</a>]</span> for 3D.</p>
</section>
<section id="practical-implementation">
<h2>Practical implementation<a class="headerlink" href="#practical-implementation" title="Permalink to this heading">¶</a></h2>
<p>Previous implementations, such as ScatNet <span id="id6">[<a class="reference internal" href="#id28" title="J Andén, L Sifre, S Mallat, M Kapoko, V Lostanlen, and E Oyallon. Scatnet. Computer Software. Available: http://www.di.ens.fr/data/software/scatnet, 2014.">AndenSM+14</a>]</span>, of the
scattering transform relied on computing the scattering coefficients layer by
layer. In Kymatio, we instead traverse the scattering transform tree in a
depth-first fashion. This limits memory usage and makes the implementation
better suited for execution on a GPU. The difference between the two approaches
is illustrated in the figure below.</p>
<figure class="align-center" id="id32">
<a class="reference internal image-reference" href="_images/algorithm.png"><img alt="Comparison of ScatNet and Kymatio implementations." src="_images/algorithm.png" style="width: 600px;" /></a>
<figcaption>
<p><span class="caption-text">The scattering tree traversal strategies of (a) the ScatNet toolbox, and (b)
Kymatio. While ScatNet traverses the tree in a breadth-first fashion (layer
by layer), Kymatio performs a depth-first traversal.</span><a class="headerlink" href="#id32" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p>More details about our implementation can be found in <a class="reference internal" href="developerguide.html#dev-guide"><span class="std std-ref">Information for developers</span></a>.</p>
<section id="d">
<h3>1-D<a class="headerlink" href="#d" title="Permalink to this heading">¶</a></h3>
<p>The 1D scattering coefficients computed by Kymatio are similar to those of
ScatNet <span id="id7">[<a class="reference internal" href="#id28" title="J Andén, L Sifre, S Mallat, M Kapoko, V Lostanlen, and E Oyallon. Scatnet. Computer Software. Available: http://www.di.ens.fr/data/software/scatnet, 2014.">AndenSM+14</a>]</span>, but do not coincide exactly. This is due to a
slightly different choice of filters, subsampling rules, and coefficient
selection criteria. The resulting coefficients, however, have a comparable
performance for classification and regression tasks.</p>
</section>
<section id="id8">
<h3>2-D<a class="headerlink" href="#id8" title="Permalink to this heading">¶</a></h3>
<p>The 2D implementation in this package provides scattering coefficients that
exactly match those of ScatNet <span id="id9">[<a class="reference internal" href="#id28" title="J Andén, L Sifre, S Mallat, M Kapoko, V Lostanlen, and E Oyallon. Scatnet. Computer Software. Available: http://www.di.ens.fr/data/software/scatnet, 2014.">AndenSM+14</a>]</span>.</p>
</section>
<section id="id10">
<h3>3-D<a class="headerlink" href="#id10" title="Permalink to this heading">¶</a></h3>
<p>The 3D scattering transform is currently limited to solid harmonic wavelets,
which are solid harmonics (spherical harmonics multiplied by a radial polynomial)
multiplied by Gaussians of different widths.
They perform scale separation and feature extraction relevant to, for example, molecular structure,
while remaining perfectly covariant to transformations of the Euclidean group.</p>
<p>The current implementation is very similar to the one used in <span id="id11">[<a class="reference internal" href="#id31" title="Michael Eickenberg, Georgios Exarchakis, Matthew Hirn, and Stéphane Mallat. Solid harmonic wavelet scattering: predicting quantum molecular energy from invariant descriptors of 3d electronic densities. In Advances in Neural Information Processing Systems, 6540–6549. 2017.">EEHM17</a>]</span>,
and while it doesn’t correspond exactly, it makes use of better theory on sampling
and leads to similar performance on QM7.</p>
</section>
</section>
<section id="output-size">
<h2>Output size<a class="headerlink" href="#output-size" title="Permalink to this heading">¶</a></h2>
<section id="id12">
<h3>1-D<a class="headerlink" href="#id12" title="Permalink to this heading">¶</a></h3>
<p>If the input <span class="math notranslate nohighlight">\(x\)</span> is a Tensor of size <span class="math notranslate nohighlight">\((B, T)\)</span>, the output of the
1D scattering transform is of size <span class="math notranslate nohighlight">\((B, P, T/2^J)\)</span>, where <span class="math notranslate nohighlight">\(P\)</span> is
the number of scattering coefficients and <span class="math notranslate nohighlight">\(2^J\)</span> is the maximum scale of the
transform. The value of <span class="math notranslate nohighlight">\(P\)</span> depends on the maximum order of the scattering
transform and the parameters <span class="math notranslate nohighlight">\(Q\)</span> and <span class="math notranslate nohighlight">\(J\)</span>. It is roughly proportional
to <span class="math notranslate nohighlight">\(1 + J Q + J (J-1) Q / 2\)</span>.</p>
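<p>As a concrete illustration, the output shape can be inspected directly. The following is a minimal sketch using the NumPy frontend described below; the exact value of <span class="math notranslate nohighlight">\(P\)</span> is determined by <span class="math notranslate nohighlight">\(J\)</span> and <span class="math notranslate nohighlight">\(Q\)</span>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import numpy as np
from kymatio.numpy import Scattering1D

T = 2 ** 13                          # signal length
x = np.random.randn(4, T)            # batch of B = 4 signals
S = Scattering1D(J=6, shape=T, Q=8)
Sx = S(x)
print(Sx.shape)                      # (4, P, T / 2**6) = (4, P, 128)
</pre></div>
</div>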
</section>
<section id="id13">
<h3>2-D<a class="headerlink" href="#id13" title="Permalink to this heading">¶</a></h3>
<p>Let us assume that <span class="math notranslate nohighlight">\(x\)</span> is a tensor of size <span class="math notranslate nohighlight">\((B,C,N_1,N_2)\)</span>. Then the
output <span class="math notranslate nohighlight">\(Sx\)</span> of a scattering transform with scale <span class="math notranslate nohighlight">\(J\)</span>, <span class="math notranslate nohighlight">\(L\)</span> angles, and maximum order <span class="math notranslate nohighlight">\(m = 2\)</span> will have
size:</p>
<div class="math notranslate nohighlight">
\[(B,C,1+LJ+\frac{L^2J(J-1)}{2},\frac{N_1}{2^J},\frac{N_2}{2^J})\]</div>
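<p>For example, with <span class="math notranslate nohighlight">\(J = 2\)</span> and the default <span class="math notranslate nohighlight">\(L = 8\)</span>, the channel dimension is <span class="math notranslate nohighlight">\(1 + 16 + 64 = 81\)</span>. A minimal sketch using the NumPy frontend (described below) verifies this:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import numpy as np
from kymatio.numpy import Scattering2D

x = np.random.randn(1, 3, 32, 32)      # B = 1, C = 3 (e.g. RGB), 32 x 32 images
S = Scattering2D(J=2, shape=(32, 32))  # default L = 8 angles
Sx = S(x)
print(Sx.shape)                        # (1, 3, 81, 8, 8)
</pre></div>
</div>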
</section>
<section id="id14">
<h3>3-D<a class="headerlink" href="#id14" title="Permalink to this heading">¶</a></h3>
<p>For an input array of shape <span class="math notranslate nohighlight">\((B, C, N_1, N_2, N_3)\)</span>, a solid harmonic scattering with <span class="math notranslate nohighlight">\(J\)</span>
scales and <span class="math notranslate nohighlight">\(L\)</span> angular frequencies, which applies <span class="math notranslate nohighlight">\(P\)</span> different types of <span class="math notranslate nohighlight">\(\mathcal L_p\)</span>
spatial averaging, and maximum order <span class="math notranslate nohighlight">\(m = 2\)</span> will result in an output of shape</p>
<div class="math notranslate nohighlight">
\[(B, C, 1+J+\frac{J(J + 1)}{2}, 1+L, P)\,.\]</div>
<p>The current configuration of Solid Harmonic Scattering reflects the one in <span id="id15">[<a class="reference internal" href="#id31" title="Michael Eickenberg, Georgios Exarchakis, Matthew Hirn, and Stéphane Mallat. Solid harmonic wavelet scattering: predicting quantum molecular energy from invariant descriptors of 3d electronic densities. In Advances in Neural Information Processing Systems, 6540–6549. 2017.">EEHM17</a>]</span>
in that second-order coefficients are obtained only for the same angular frequency
(as opposed to the Cartesian product of all angular frequency pairs), at higher scales.</p>
</section>
</section>
<section id="frontends">
<h2>Frontends<a class="headerlink" href="#frontends" title="Permalink to this heading">¶</a></h2>
<p>The Kymatio API is divided between different frontends, which perform the same operations, but integrate in different frameworks. This integration allows the user to take advantage of different features available in certain frameworks, such as autodifferentiation and GPU processing in PyTorch and TensorFlow/Keras, while having code that runs almost identically in NumPy or scikit-learn. The available frontends are:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">kymatio.numpy</span></code> for NumPy,</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">kymatio.sklearn</span></code> for scikit-learn (as <code class="xref py py-class docutils literal notranslate"><span class="pre">Transformer</span></code> and <code class="xref py py-class docutils literal notranslate"><span class="pre">Estimator</span></code> objects),</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">kymatio.torch</span></code> for PyTorch,</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">kymatio.tensorflow</span></code> for TensorFlow, and</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">kymatio.keras</span></code> for Keras.</p></li>
</ul>
<p>To instantiate a <code class="xref py py-class docutils literal notranslate"><span class="pre">Scattering2D</span></code> object for the <code class="docutils literal notranslate"><span class="pre">numpy</span></code> frontend, run:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">kymatio.numpy</span> <span class="kn">import</span> <span class="n">Scattering2D</span>
<span class="n">S</span> <span class="o">=</span> <span class="n">Scattering2D</span><span class="p">(</span><span class="n">J</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">))</span>
</pre></div>
</div>
<p>Alternatively, the object may be instantiated in a dynamic way using the <code class="xref py py-class docutils literal notranslate"><span class="pre">kymatio.Scattering2D</span></code> object by providing a <code class="docutils literal notranslate"><span class="pre">frontend</span></code> argument. This object then transforms itself to the desired frontend. Using this approach, the above example becomes:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">kymatio</span> <span class="kn">import</span> <span class="n">Scattering2D</span>
<span class="n">S</span> <span class="o">=</span> <span class="n">Scattering2D</span><span class="p">(</span><span class="n">J</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">),</span> <span class="n">frontend</span><span class="o">=</span><span class="s1">'numpy'</span><span class="p">)</span>
</pre></div>
</div>
<p>In Kymatio 0.2, the default frontend is <code class="docutils literal notranslate"><span class="pre">torch</span></code> for backwards compatibility reasons, but this will change to <code class="docutils literal notranslate"><span class="pre">numpy</span></code> in the next version.</p>
<section id="numpy">
<h3>NumPy<a class="headerlink" href="#numpy" title="Permalink to this heading">¶</a></h3>
<p>The NumPy frontend takes <code class="xref py py-class docutils literal notranslate"><span class="pre">ndarray</span></code>s as input and outputs <code class="xref py py-class docutils literal notranslate"><span class="pre">ndarray</span></code>s. All computation is done on the CPU, which means that it will be slow for large inputs. To call this frontend, run:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">kymatio.numpy</span> <span class="kn">import</span> <span class="n">Scattering2D</span>
<span class="n">scattering</span> <span class="o">=</span> <span class="n">Scattering2D</span><span class="p">(</span><span class="n">J</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">))</span>
</pre></div>
</div>
<p>This will only use standard NumPy routines to calculate the scattering transform.</p>
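<p>The resulting object can then be applied directly to an <code class="docutils literal notranslate"><span class="pre">ndarray</span></code>. A brief sketch (the channel count of 81 assumes <code class="docutils literal notranslate"><span class="pre">J=2</span></code> with the default <code class="docutils literal notranslate"><span class="pre">L=8</span></code>):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import numpy as np

x = np.random.randn(32, 32)
Sx = scattering(x)
print(Sx.shape)   # (81, 8, 8)
</pre></div>
</div>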
</section>
<section id="scikit-learn">
<h3>Scikit-learn<a class="headerlink" href="#scikit-learn" title="Permalink to this heading">¶</a></h3>
<p>For scikit-learn, we have the <code class="docutils literal notranslate"><span class="pre">sklearn</span></code> frontend, which is both a <code class="xref py py-class docutils literal notranslate"><span class="pre">Transformer</span></code> and an <code class="xref py py-class docutils literal notranslate"><span class="pre">Estimator</span></code>, making it easy to integrate the object into a scikit-learn <code class="xref py py-class docutils literal notranslate"><span class="pre">Pipeline</span></code>. For example, you can write the following:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">sklearn.pipeline</span> <span class="kn">import</span> <span class="n">Pipeline</span>
<span class="kn">from</span> <span class="nn">sklearn.linear_model</span> <span class="kn">import</span> <span class="n">LogisticRegression</span>
<span class="kn">from</span> <span class="nn">kymatio.sklearn</span> <span class="kn">import</span> <span class="n">Scattering2D</span>
<span class="n">S</span> <span class="o">=</span> <span class="n">Scattering2D</span><span class="p">(</span><span class="n">J</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">8</span><span class="p">,</span> <span class="mi">8</span><span class="p">))</span>
<span class="n">classifier</span> <span class="o">=</span> <span class="n">LogisticRegression</span><span class="p">()</span>
<span class="n">pipeline</span> <span class="o">=</span> <span class="n">Pipeline</span><span class="p">([(</span><span class="s1">'scatter'</span><span class="p">,</span> <span class="n">S</span><span class="p">),</span> <span class="p">(</span><span class="s1">'clf'</span><span class="p">,</span> <span class="n">classifier</span><span class="p">)])</span>
</pre></div>
</div>
<p>which creates a <code class="xref py py-class docutils literal notranslate"><span class="pre">Pipeline</span></code> consisting of a 2D scattering transform and a logistic regression estimator.</p>
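<p>As a sketch of how such a pipeline might be used with the scikit-learn digits dataset, whose rows are flattened <span class="math notranslate nohighlight">\(8 \times 8\)</span> images matching the <code class="docutils literal notranslate"><span class="pre">shape</span></code> given above:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)   # rows are flattened 8 x 8 images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pipeline.fit(X_train, y_train)
print(pipeline.score(X_test, y_test))
</pre></div>
</div>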
</section>
<section id="pytorch">
<h3>PyTorch<a class="headerlink" href="#pytorch" title="Permalink to this heading">¶</a></h3>
<p>If PyTorch is installed, we may also use the <code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend, which is implemented as a <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.nn.Module</span></code>. As a result, it can be integrated with other PyTorch <code class="xref py py-class docutils literal notranslate"><span class="pre">Module</span></code>s to create a computational model. It also supports the <code class="xref py py-meth docutils literal notranslate"><span class="pre">cuda()</span></code>, <code class="xref py py-meth docutils literal notranslate"><span class="pre">cpu()</span></code>, and <code class="xref py py-meth docutils literal notranslate"><span class="pre">to()</span></code> methods, allowing the user to easily move the object from CPU to GPU and back. When initialized, a scattering transform object is stored on the CPU:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">kymatio.torch</span> <span class="kn">import</span> <span class="n">Scattering2D</span>
<span class="n">scattering</span> <span class="o">=</span> <span class="n">Scattering2D</span><span class="p">(</span><span class="n">J</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">))</span>
</pre></div>
</div>
<p>We use this to compute scattering transforms of signals in CPU memory:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">)</span>
<span class="n">Sx</span> <span class="o">=</span> <span class="n">scattering</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
</pre></div>
</div>
<p>If a CUDA-enabled GPU is available, we may transfer the scattering transform
object to GPU memory by calling <code class="xref py py-meth docutils literal notranslate"><span class="pre">cuda()</span></code>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">scattering</span><span class="o">.</span><span class="n">cuda</span><span class="p">()</span>
</pre></div>
</div>
<p>Transferring the signal to GPU memory as well, we can then compute its
scattering coefficients:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">x_gpu</span> <span class="o">=</span> <span class="n">x</span><span class="o">.</span><span class="n">cuda</span><span class="p">()</span>
<span class="n">Sx_gpu</span> <span class="o">=</span> <span class="n">scattering</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
</pre></div>
</div>
<p>Transferring the output back to CPU memory, we may then compare the outputs:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">Sx_gpu</span> <span class="o">=</span> <span class="n">Sx_gpu</span><span class="o">.</span><span class="n">cpu</span><span class="p">()</span>
<span class="nb">print</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">norm</span><span class="p">(</span><span class="n">Sx_gpu</span><span class="o">-</span><span class="n">Sx</span><span class="p">))</span>
</pre></div>
</div>
<p>These coefficients should agree up to machine precision. We may transfer the
scattering transform object back to the CPU by calling <code class="xref py py-meth docutils literal notranslate"><span class="pre">cpu()</span></code>, like this:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">scattering</span><span class="o">.</span><span class="n">cpu</span><span class="p">()</span>
</pre></div>
</div>
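<p>Since the object also supports the <code class="docutils literal notranslate"><span class="pre">to()</span></code> method, a device-agnostic sketch of the above, falling back to the CPU when no GPU is available, could look as follows:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import torch
from kymatio.torch import Scattering2D

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
scattering = Scattering2D(J=2, shape=(32, 32)).to(device)
x = torch.randn(1, 1, 32, 32, device=device)
Sx = scattering(x)   # computed on the GPU if available, otherwise on the CPU
</pre></div>
</div>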
</section>
<section id="tensorflow">
<span id="backend-story"></span><h3>TensorFlow<a class="headerlink" href="#tensorflow" title="Permalink to this heading">¶</a></h3>
<p>If TensorFlow is installed, you may use the <code class="docutils literal notranslate"><span class="pre">tensorflow</span></code> frontend, which is implemented as a <code class="xref py py-class docutils literal notranslate"><span class="pre">tf.Module</span></code>. To call this frontend, run:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">kymatio.tensorflow</span> <span class="kn">import</span> <span class="n">Scattering2D</span>
<span class="n">scattering</span> <span class="o">=</span> <span class="n">Scattering2D</span><span class="p">(</span><span class="n">J</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">))</span>
</pre></div>
</div>
<p>This is a TensorFlow module that one can use directly in eager mode. Like other modules (and like the <code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend), you may transfer it onto and off the GPU using the <code class="xref py py-meth docutils literal notranslate"><span class="pre">cuda()</span></code> and <code class="xref py py-meth docutils literal notranslate"><span class="pre">cpu()</span></code> methods.</p>
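<p>For example, the module may be applied to a batch of images in eager mode (a minimal sketch; the 81 channels again assume <code class="docutils literal notranslate"><span class="pre">J=2</span></code> with the default <code class="docutils literal notranslate"><span class="pre">L=8</span></code>):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import tensorflow as tf

x = tf.random.normal((1, 32, 32))
Sx = scattering(x)
print(Sx.shape)   # (1, 81, 8, 8)
</pre></div>
</div>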
</section>
<section id="keras">
<h3>Keras<a class="headerlink" href="#keras" title="Permalink to this heading">¶</a></h3>
<p>For compatibility with the Keras framework, we also include a <code class="docutils literal notranslate"><span class="pre">keras</span></code> frontend, which wraps the TensorFlow class in a Keras <code class="xref py py-class docutils literal notranslate"><span class="pre">Layer</span></code>, allowing us to include it in a <code class="xref py py-class docutils literal notranslate"><span class="pre">Model</span></code> with relative ease. Note that since Keras infers the input shape of a <code class="xref py py-class docutils literal notranslate"><span class="pre">Layer</span></code>, we do not specify the shape when creating the scattering object. The result may look something like:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">tensorflow.keras.models</span> <span class="kn">import</span> <span class="n">Model</span>
<span class="kn">from</span> <span class="nn">tensorflow.keras.layers</span> <span class="kn">import</span> <span class="n">Input</span><span class="p">,</span> <span class="n">Flatten</span><span class="p">,</span> <span class="n">Dense</span>
<span class="kn">from</span> <span class="nn">kymatio.keras</span> <span class="kn">import</span> <span class="n">Scattering2D</span>
<span class="n">in_layer</span> <span class="o">=</span> <span class="n">Input</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">28</span><span class="p">,</span> <span class="mi">28</span><span class="p">))</span>
<span class="n">sc</span> <span class="o">=</span> <span class="n">Scattering2D</span><span class="p">(</span><span class="n">J</span><span class="o">=</span><span class="mi">3</span><span class="p">)(</span><span class="n">in_layer</span><span class="p">)</span>
<span class="n">sc_flat</span> <span class="o">=</span> <span class="n">Flatten</span><span class="p">()(</span><span class="n">sc</span><span class="p">)</span>
<span class="n">out_layer</span> <span class="o">=</span> <span class="n">Dense</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s1">'softmax'</span><span class="p">)(</span><span class="n">sc_flat</span><span class="p">)</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">Model</span><span class="p">(</span><span class="n">in_layer</span><span class="p">,</span> <span class="n">out_layer</span><span class="p">)</span>
</pre></div>
</div>
<p>where we feed the scattering coefficients into a dense layer with ten outputs for handwritten digit classification on MNIST.</p>
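<p>The resulting model is compiled and trained like any other Keras model. A brief sketch using the MNIST data shipped with Keras (the number of epochs and batch size here are arbitrary choices):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to [0, 1]

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=64)
print(model.evaluate(x_test, y_test))
</pre></div>
</div>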
</section>
</section>
<section id="backend">
<h2>Backend<a class="headerlink" href="#backend" title="Permalink to this heading">¶</a></h2>
<p>The backends encapsulate the most computationally intensive part of the
scattering transform calculation. As a result, improved performance can
often be achieved by replacing the default backend with a more optimized
alternative.</p>
<p>For instance, the default backend of the <code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend is the <code class="docutils literal notranslate"><span class="pre">torch</span></code> backend,
implemented exclusively in PyTorch. This is available for 1D, 2D, and 3D. It is also
compatible with the PyTorch automatic differentiation framework, and runs on
both CPU and GPU. For additional performance on the GPU, we recommend
using the <code class="docutils literal notranslate"><span class="pre">torch_skcuda</span></code> backend.</p>
<p>Currently, four backends exist for <code class="docutils literal notranslate"><span class="pre">torch</span></code>:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">torch</span></code>: A PyTorch-only implementation which is differentiable with respect
to its inputs. However, it relies on general-purpose CUDA kernels for GPU
computation which reduces performance.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">torch17</span></code>: Same as above, except it is compatible with the version <=1.7.1 of
PyTorch.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">torch_skcuda</span></code>: An implementation using custom CUDA kernels (through <code class="docutils literal notranslate"><span class="pre">cupy</span></code>) and
<code class="docutils literal notranslate"><span class="pre">scikit-cuda</span></code>. This implementation only runs on the GPU (that is, you must
call <code class="xref py py-meth docutils literal notranslate"><span class="pre">cuda()</span></code> prior to applying it). Since it uses kernels optimized for
the various steps of the scattering transform, it achieves better performance
compared to the default <code class="docutils literal notranslate"><span class="pre">torch</span></code> backend (see benchmarks below). This
improvement is currently small in 1D and 3D, but work is underway to further
optimize this backend.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">torch17_skcuda</span></code>: Same as above, except it is compatible with the version <=1.7.1
of PyTorch.</p></li>
</ul>
<p>The backend can be specified via the <code class="docutils literal notranslate"><span class="pre">backend</span></code> argument:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch</span>
<span class="kn">from</span> <span class="nn">kymatio.torch</span> <span class="kn">import</span> <span class="n">Scattering2D</span>
<span class="n">scattering</span> <span class="o">=</span> <span class="n">Scattering2D</span><span class="p">(</span><span class="n">J</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">),</span> <span class="n">backend</span><span class="o">=</span><span class="s1">'torch_skcuda'</span><span class="p">)</span>
</pre></div>
</div>
<p>Each of the other frontends currently only has a single backend, which is the
default. Work is currently underway, however, to extend some of these frontends
with more powerful backends.</p>
</section>
<section id="benchmarks">
<h2>Benchmarks<a class="headerlink" href="#benchmarks" title="Permalink to this heading">¶</a></h2>
<section id="id16">
<h3>1D<a class="headerlink" href="#id16" title="Permalink to this heading">¶</a></h3>
<p>We compared our implementation with that of the ScatNet MATLAB package
<span id="id17">[<a class="reference internal" href="#id28" title="J Andén, L Sifre, S Mallat, M Kapoko, V Lostanlen, and E Oyallon. Scatnet. Computer Software. Available: http://www.di.ens.fr/data/software/scatnet, 2014.">AndenSM+14</a>]</span> with similar settings. The following table shows the
average computation time for a batch of size <span class="math notranslate nohighlight">\(64 \times 65536\)</span>. This
corresponds to <span class="math notranslate nohighlight">\(64\)</span> signals containing <span class="math notranslate nohighlight">\(65536\)</span> samples each, or a total of about
<span class="math notranslate nohighlight">\(95\)</span> seconds of audio sampled at <span class="math notranslate nohighlight">\(44.1~\mathrm{kHz}\)</span>.</p>
<table class="docutils align-default">
<thead>
<tr class="row-odd"><th class="head"><p>Name</p></th>
<th class="head"><p>Average time per batch (s)</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>ScatNet <span id="id18">[<a class="reference internal" href="#id28" title="J Andén, L Sifre, S Mallat, M Kapoko, V Lostanlen, and E Oyallon. Scatnet. Computer Software. Available: http://www.di.ens.fr/data/software/scatnet, 2014.">AndenSM+14</a>]</span></p></td>
<td><p>1.65</p></td>
</tr>
<tr class="row-odd"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend-backend, CPU)</p></td>
<td><p>2.74</p></td>
</tr>
<tr class="row-even"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend-backend, Quadro M4000 GPU)</p></td>
<td><p>0.81</p></td>
</tr>
<tr class="row-odd"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend-backend, V100 GPU) 0.15</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend, <code class="docutils literal notranslate"><span class="pre">skcuda</span></code> backend, Quadro M4000 GPU)</p></td>
<td><p>0.66</p></td>
</tr>
<tr class="row-odd"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend, <code class="docutils literal notranslate"><span class="pre">skcuda</span></code> backend, V100 GPU)</p></td>
<td><p>0.11</p></td>
</tr>
</tbody>
</table>
<p>The CPU tests were performed on a 24-core machine. Further optimization of both
the <code class="docutils literal notranslate"><span class="pre">torch</span></code> and <code class="docutils literal notranslate"><span class="pre">skcuda</span></code> backends is currently underway, so we expect these
numbers to improve in the near future.</p>
</section>
<section id="id19">
<h3>2D<a class="headerlink" href="#id19" title="Permalink to this heading">¶</a></h3>
<p>We compared our implementation with that of the ScatNetLight MATLAB package
<span id="id20">[<a class="reference internal" href="#id25" title="Edouard Oyallon and Stephane Mallat. Deep roto-translation scattering for object classification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). June 2015.">OM15</a>]</span> and a previous PyTorch implementation, <em>PyScatWave</em>
<span id="id21">[<a class="reference internal" href="#id26" title="E. Oyallon, S. Zagoruyko, G. Huang, N. Komodakis, S. Lacoste-Julien, M. B. Blaschko, and E. Belilovsky. Scattering networks for hybrid representation learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, ():1-1, 2018. doi:10.1109/TPAMI.2018.2855738.">OZH+18</a>]</span>. The following table shows the average computation time for a
batch of size <span class="math notranslate nohighlight">\(128 \times 3 \times 256 \times 256\)</span>. This corresponds to
<span class="math notranslate nohighlight">\(128\)</span> three-channel (e.g., RGB) images of size <span class="math notranslate nohighlight">\(256 \times 256\)</span>.</p>
<table class="docutils align-default">
<thead>
<tr class="row-odd"><th class="head"><p>Name</p></th>
<th class="head"><p>Average time per batch (s)</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>MATLAB <span id="id22">[<a class="reference internal" href="#id25" title="Edouard Oyallon and Stephane Mallat. Deep roto-translation scattering for object classification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). June 2015.">OM15</a>]</span></p></td>
<td><p>>200</p></td>
</tr>
<tr class="row-odd"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend-backend, CPU)</p></td>
<td><p>110</p></td>
</tr>
<tr class="row-even"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend-backend, 1080Ti GPU)</p></td>
<td><p>4.4</p></td>
</tr>
<tr class="row-odd"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend-backend, V100 GPU)</p></td>
<td><p>2.9</p></td>
</tr>
<tr class="row-even"><td><p>PyScatWave (1080Ti GPU)</p></td>
<td><p>0.5</p></td>
</tr>
<tr class="row-odd"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend, <code class="docutils literal notranslate"><span class="pre">skcuda</span></code> backend, 1080Ti GPU)</p></td>
<td><p>0.5</p></td>
</tr>
</tbody>
</table>
<p>The CPU tests were performed on a 48-core machine.</p>
</section>
<section id="id23">
<h3>3D<a class="headerlink" href="#id23" title="Permalink to this heading">¶</a></h3>
<p>We compared our implementation for different backends with a batch of size <span class="math notranslate nohighlight">\(8 \times 128 \times 128 \times 128\)</span>.
This means that eight different volumes of size <span class="math notranslate nohighlight">\(128 \times 128 \times 128\)</span> were processed at the same time. The resulting timings are:</p>
<table class="docutils align-default">
<thead>
<tr class="row-odd"><th class="head"><p>Name</p></th>
<th class="head"><p>Average time per batch (s)</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend-backend, CPU)</p></td>
<td><p>45</p></td>
</tr>
<tr class="row-odd"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend-backend, Quadro M4000 GPU)</p></td>
<td><p>7.5</p></td>
</tr>
<tr class="row-even"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend-backend, V100 GPU)</p></td>
<td><p>0.88</p></td>
</tr>
<tr class="row-odd"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend, <code class="docutils literal notranslate"><span class="pre">skcuda</span></code> backend, Quadro M4000 GPU)</p></td>
<td><p>6.4</p></td>
</tr>
<tr class="row-even"><td><p>Kymatio (<code class="docutils literal notranslate"><span class="pre">torch</span></code> frontend, <code class="docutils literal notranslate"><span class="pre">skcuda</span></code> backend, V100 GPU)</p></td>
<td><p>0.74</p></td>
</tr>
</tbody>
</table>
<p>The CPU tests were performed on a 24-core machine. Further optimization of both
the <code class="docutils literal notranslate"><span class="pre">torch</span></code> and <code class="docutils literal notranslate"><span class="pre">skcuda</span></code> backends is currently underway, so we expect these
numbers to improve in the near future.</p>
</section>
</section>
<section id="how-to-cite">
<h2>How to cite<a class="headerlink" href="#how-to-cite" title="Permalink to this heading">¶</a></h2>
<p>If you use this package, please cite the following paper:</p>
<p>Andreux M., Angles T., Exarchakis G., Leonarduzzi R., Rochette G., Thiry L., Zarka J., Mallat S., Andén J., Belilovsky E., Bruna J., Lostanlen V., Hirn M. J., Oyallon E., Zhang S., Cella C., Eickenberg M. (2019). Kymatio: Scattering Transforms in Python. arXiv preprint arXiv:1812.11214. <a class="reference external" href="https://arxiv.org/abs/1812.11214">(paper)</a></p>
<p class="rubric">References</p>
<div class="docutils container" id="id24">
<div class="citation" id="id30" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="#id3">AndenM14</a><span class="fn-bracket">]</span></span>
<p>J. Andén and S. Mallat. Deep scattering spectrum. <em>IEEE Trans. Signal Process.</em>, 62:4114–4128, 2014.</p>
</div>
<div class="citation" id="id28" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>AndenSM+14<span class="fn-bracket">]</span></span>
<span class="backrefs">(<a role="doc-backlink" href="#id6">1</a>,<a role="doc-backlink" href="#id7">2</a>,<a role="doc-backlink" href="#id9">3</a>,<a role="doc-backlink" href="#id17">4</a>,<a role="doc-backlink" href="#id18">5</a>)</span>
<p>J Andén, L Sifre, S Mallat, M Kapoko, V Lostanlen, and E Oyallon. Scatnet. <em>Computer Software. Available: <a class="reference external" href="http://www.di.ens.fr/data/software/scatnet">http://www.di.ens.fr/data/software/scatnet</a></em>, 2014.</p>
</div>
<div class="citation" id="id29" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="#id4">BM13</a><span class="fn-bracket">]</span></span>
<p>J. Bruna and S. Mallat. Invariant scattering convolution networks. <em>IEEE Trans. Pattern Anal. Mach. Intell.</em>, 35(8):1872–1886, 2013.</p>
</div>
<div class="citation" id="id31" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>EEHM17<span class="fn-bracket">]</span></span>
<span class="backrefs">(<a role="doc-backlink" href="#id5">1</a>,<a role="doc-backlink" href="#id11">2</a>,<a role="doc-backlink" href="#id15">3</a>)</span>
<p>Michael Eickenberg, Georgios Exarchakis, Matthew Hirn, and Stéphane Mallat. Solid harmonic wavelet scattering: predicting quantum molecular energy from invariant descriptors of 3d electronic densities. In <em>Advances in Neural Information Processing Systems</em>, 6540–6549. 2017.</p>
</div>
<div class="citation" id="id27" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="#id2">Mal12</a><span class="fn-bracket">]</span></span>
<p>Stéphane Mallat. Group invariant scattering. <em>Communications on Pure and Applied Mathematics</em>, 65(10):1331–1398, 2012.</p>
</div>
<div class="citation" id="id26" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="#id21">OZH+18</a><span class="fn-bracket">]</span></span>
<p>E. Oyallon, S. Zagoruyko, G. Huang, N. Komodakis, S. Lacoste-Julien, M. B. Blaschko, and E. Belilovsky. Scattering networks for hybrid representation learning. <em>IEEE Transactions on Pattern Analysis and Machine Intelligence</em>, ():1–1, 2018. <a class="reference external" href="https://doi.org/10.1109/TPAMI.2018.2855738">doi:10.1109/TPAMI.2018.2855738</a>.</p>
</div>
<div class="citation" id="id25" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>OM15<span class="fn-bracket">]</span></span>
<span class="backrefs">(<a role="doc-backlink" href="#id20">1</a>,<a role="doc-backlink" href="#id22">2</a>)</span>
<p>Edouard Oyallon and Stephane Mallat. Deep roto-translation scattering for object classification. In <em>The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</em>. June 2015.</p>
</div>
</div>
</div>
</section>
</section>
</div>
</div>
</div>
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
<p class="logo">
<a href="index.html">
<img class="logo" src="_static/kymatio.jpg" alt="Logo"/>
</a>
</p>
<p class="blurb">Wavelet Scattering in Python<br> <a href="https://twitter.com/KymatioWavelets"><img width="40px" src="https://avatars3.githubusercontent.com/u/50278?s=200&v=4"></a></p>
<p>
<iframe src="https://ghbtns.com/github-btn.html?user=kymatio&repo=kymatio&type=star&count=true&size=large&v=2"
allowtransparency="true" frameborder="0" scrolling="0" width="200px" height="35px"></iframe>
</p>
<h3>Navigation</h3>
<ul class="current">
<li class="toctree-l1 current"><a class="current reference internal" href="#">User guide</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#introduction-to-scattering-transforms">Introduction to scattering transforms</a></li>
<li class="toctree-l2"><a class="reference internal" href="#practical-implementation">Practical implementation</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#d">1-D</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id8">2-D</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id10">3-D</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#output-size">Output size</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id12">1-D</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id13">2-D</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id14">3-D</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#frontends">Frontends</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#numpy">NumPy</a></li>
<li class="toctree-l3"><a class="reference internal" href="#scikit-learn">Scikit-learn</a></li>
<li class="toctree-l3"><a class="reference internal" href="#pytorch">PyTorch</a></li>
<li class="toctree-l3"><a class="reference internal" href="#tensorflow">TensorFlow</a></li>
<li class="toctree-l3"><a class="reference internal" href="#keras">Keras</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#backend">Backend</a></li>
<li class="toctree-l2"><a class="reference internal" href="#benchmarks">Benchmarks</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id16">1D</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id19">2D</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id23">3D</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#how-to-cite">How to cite</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="developerguide.html">Information for developers</a></li>
<li class="toctree-l1"><a class="reference internal" href="codereference.html">Documentation</a></li>
<li class="toctree-l1"><a class="reference internal" href="gallery_1d/index.html">1D examples</a></li>
<li class="toctree-l1"><a class="reference internal" href="gallery_2d/index.html">2D examples</a></li>
<li class="toctree-l1"><a class="reference internal" href="gallery_3d/index.html">3D examples</a></li>
<li class="toctree-l1"><a class="reference internal" href="whats_new.html">What’s New</a></li>
</ul>
<div class="relations">
<h3>Related Topics</h3>
<ul>
<li><a href="index.html">Documentation overview</a><ul>
<li>Previous: <a href="index.html" title="previous chapter">Kymatio: Wavelet scattering in Python - v0.3.0 “Erdre”</a></li>
<li>Next: <a href="developerguide.html" title="next chapter">Information for developers</a></li>
</ul></li>
</ul>
</div>
<div id="searchbox" style="display: none" role="search">
<h3 id="searchlabel">Quick search</h3>
<div class="searchformwrapper">
<form class="search" action="search.html" method="get">
<input type="text" name="q" aria-labelledby="searchlabel" autocomplete="off" autocorrect="off" autocapitalize="off" spellcheck="false"/>
<input type="submit" value="Go" />
</form>
</div>
</div>
<script>document.getElementById('searchbox').style.display = "block"</script>
</div>
</div>
<div class="clearer"></div>
</div>
<div class="footer">
©2018–2021, The Kymatio Developers.
|
Powered by <a href="http://sphinx-doc.org/">Sphinx 5.1.1</a>
& <a href="https://github.com/bitprophet/alabaster">Alabaster 0.7.12</a>
|
<a href="_sources/userguide.rst.txt"
rel="nofollow">Page source</a>
</div>
<a href="https://github.com/kymatio/kymatio" class="github">
<img style="position: absolute; top: 0; right: 0; border: 0;" src="https://s3.amazonaws.com/github/ribbons/forkme_right_darkblue_121621.png" alt="Fork me on GitHub" class="github"/>
</a>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-130785726-1']);
_gaq.push(['_setDomainName', 'none']);
_gaq.push(['_setAllowLinker', true]);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>