diff --git a/.gitignore b/.gitignore
index a482c0928..464f8e86a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -19,3 +19,4 @@ build/mdd.zip
 /chapter_gluon-basics/mydict
 *egg-info*
 dist*
+_build/
\ No newline at end of file
diff --git a/chapter_preliminaries/index.md b/chapter_preliminaries/index.md
new file mode 100644
index 000000000..f9ba2351f
--- /dev/null
+++ b/chapter_preliminaries/index.md
@@ -0,0 +1,60 @@
+# 预备知识
+:label:`chap_preliminaries`
+
+
+To get started with deep learning,
+we will need to develop a few basic skills.
+All machine learning is concerned
+with extracting information from data.
+So we will begin by learning the practical skills
+for storing, manipulating, and preprocessing data.
+
+Moreover, machine learning typically requires
+working with large datasets, which we can think of as tables,
+where the rows correspond to examples
+and the columns correspond to attributes.
+Linear algebra gives us a powerful set of techniques
+for working with tabular data.
+We will not go too far into the weeds but rather focus on the basics
+of matrix operations and their implementation.
+
+Additionally, deep learning is all about optimization.
+We have a model with some parameters and
+we want to find those that fit our data *the best*.
+Determining which way to move each parameter at each step of an algorithm
+requires a little bit of calculus, which will be briefly introduced.
+Fortunately, the `autograd` package automatically computes derivatives for us,
+and we will cover it next.
+
+Next, machine learning is concerned with making predictions:
+what is the likely value of some unknown attribute,
+given the information that we observe?
+To reason rigorously under uncertainty
+we will need to invoke the language of probability.
+
+Finally, the official documentation provides
+plenty of descriptions and examples that are beyond the scope of this book.
+To conclude the chapter, we will show you how to look up documentation for
+the needed information.
+
+This book has kept the mathematical content to the minimum necessary
+to get a proper understanding of deep learning.
+However, that does not mean that
+this book is free of mathematics.
+Thus, this chapter provides a rapid introduction to
+basic and frequently-used mathematics to allow anyone to understand
+at least *most* of the mathematical content of the book.
+If you wish to understand *all* of the mathematical content,
+further reviewing the [online appendix on mathematics](https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/index.html) should be sufficient.
+
+```toc
+:maxdepth: 2
+
+ndarray
+pandas
+linear-algebra
+calculus
+autograd
+probability
+lookup-api
+```
diff --git a/chapter_preliminaries/index_origin.md b/chapter_preliminaries/index_origin.md
new file mode 100644
index 000000000..206537913
--- /dev/null
+++ b/chapter_preliminaries/index_origin.md
@@ -0,0 +1,65 @@
+---
+source: https://github.com/d2l-ai/d2l-en/blob/master/chapter_preliminaries/index.md
+commit: 9bf95b1
+---
+
+# Preliminaries
+:label:`chap_preliminaries`
+
+To get started with deep learning,
+we will need to develop a few basic skills.
+All machine learning is concerned
+with extracting information from data.
+So we will begin by learning the practical skills
+for storing, manipulating, and preprocessing data.
+
+Moreover, machine learning typically requires
+working with large datasets, which we can think of as tables,
+where the rows correspond to examples
+and the columns correspond to attributes.
+Linear algebra gives us a powerful set of techniques
+for working with tabular data.
+We will not go too far into the weeds but rather focus on the basics
+of matrix operations and their implementation.
+
+Additionally, deep learning is all about optimization.
+We have a model with some parameters and
+we want to find those that fit our data *the best*.
+Determining which way to move each parameter at each step of an algorithm
+requires a little bit of calculus, which will be briefly introduced.
+Fortunately, the `autograd` package automatically computes differentiation for us,
+and we will cover it next.
+
+Next, machine learning is concerned with making predictions:
+what is the likely value of some unknown attribute,
+given the information that we observe?
+To reason rigorously under uncertainty
+we will need to invoke the language of probability.
+
+In the end, the official documentation provides
+plenty of descriptions and examples that are beyond this book.
+To conclude the chapter, we will show you how to look up documentation for
+the needed information.
+
+This book has kept the mathematical content to the minimum necessary
+to get a proper understanding of deep learning.
+However, it does not mean that
+this book is mathematics free.
+Thus, this chapter provides a rapid introduction to
+basic and frequently-used mathematics to allow anyone to understand
+at least *most* of the mathematical content of the book.
+If you wish to understand *all* of the mathematical content,
+further reviewing the [online appendix on mathematics](https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/index.html) should be sufficient.
+
+```toc
+:maxdepth: 2
+
+ndarray
+pandas
+linear-algebra
+calculus
+autograd
+probability
+lookup-api
+```
+
diff --git a/chapter_preliminaries/ndarray_origin.md b/chapter_preliminaries/ndarray_origin.md
index 78177d869..96df1c0d7 100644
--- a/chapter_preliminaries/ndarray_origin.md
+++ b/chapter_preliminaries/ndarray_origin.md
@@ -1,6 +1,6 @@
 ---
 source: https://github.com/d2l-ai/d2l-en/blob/master/chapter_preliminaries/ndarray.md
-commit: 7240657
+commit: 5182024
 ---
 
 # Data Manipulation
@@ -11,22 +11,26 @@
 Generally, there are two important things we need to do with data: (i) acquire them; and (ii) process them once they are inside the computer. There is no point in acquiring data without some way to store it, so let us get our hands dirty first by playing with synthetic data. To start, we introduce the
-$n$-dimensional array, which is also called the *tensor*.
+$n$-dimensional array. In NumPy and MXNet, such an array is called `ndarray`,
+while it is called Tensor in PyTorch and TensorFlow. Throughout this book, we use the
+`ndarray` naming convention: `ndarray` is a class, and we call any instance "an
+`ndarray`".
+
+:begin_tab:`mxnet`
 If you have worked with NumPy, the most widely-used scientific computing package in Python, then you will find this section familiar.
-No matter which framework you use,
-its *tensor class* (`ndarray` in MXNet,
-`Tensor` in both PyTorch and TensorFlow) is similar to NumPy's `ndarray` with
-a few killer features.
-First, GPU is well-supported to accelerate the computation
+MXNet's `ndarray` is an extension to NumPy's `ndarray` with a few killer features.
+First, MXNet's `ndarray` supports asynchronous computation
+on CPU, GPU, and distributed cloud architectures,
 whereas NumPy only supports CPU computation.
-Second, the tensor class
-supports automatic differentiation.
-These properties make the tensor class suitable for deep learning.
-Throughout the book, when we say tensors,
-we are referring to instances of the tensor class unless otherwise stated.
+Second, MXNet's `ndarray` supports automatic differentiation.
+These properties make MXNet's `ndarray` suitable for deep learning.
+Throughout the book, when we say `ndarray`,
+we are referring to MXNet's `ndarray` unless otherwise stated.
+:end_tab:
+
 ## Getting Started
@@ -46,20 +50,15 @@ To start, we import the `np` (`numpy`) and
 Here, the `np` module includes functions supported by NumPy,
 while the `npx` module contains a set of extensions
 developed to empower deep learning within a NumPy-like environment.
-When using tensors, we almost always invoke the `set_np` function:
-this is for compatibility of tensor processing by other components of MXNet.
+When using `ndarray`, we almost always invoke the `set_np` function:
+this is for compatibility of `ndarray` processing by other components of MXNet.
 :end_tab:
 
 :begin_tab:`pytorch`
-To start, we import `torch`. Note that though it's called PyTorch, we should
+To start, we import `torch`. Note that even though it's called PyTorch, we should
 import `torch` instead of `pytorch`.
 :end_tab:
 
-:begin_tab:`tensorflow`
-To start, we import `tesnorflow`. As the name is a little long, we often import
-it with a short alias `tf`.
-:end_tab:
-
 ```{.python .input}
 from mxnet import np, npx
 npx.set_np()
 ```
 
 ```{.python .input}
 #@tab pytorch
 import torch
 ```
 
-```{.python .input}
-#@tab tensorflow
-import tensorflow as tf
-```
-
-A tensor represents a (possibly multi-dimensional) array of numerical values.
-With one axis, a tensor corresponds (in math) to a *vector*.
-With two axes, a tensor corresponds to a *matrix*.
-Tensors with more than two axes do not have special
-mathematical names.
+An `ndarray` represents a (possibly multi-dimensional) array of numerical values.
+With one axis, an `ndarray` corresponds (in math) to a *vector*.
+With two axes, an `ndarray` corresponds to a *matrix*.
+Arrays with more than two axes do not have special
+mathematical names---we simply call them *tensors*.
 
 To start, we can use `arange` to create a row vector `x`
 containing the first 12 integers starting with 0,
 though they are created as floats by default.
-Each of the values in a tensor is called an *element* of the tensor.
-For instance, there are 12 elements in the tensor `x`.
-Unless otherwise specified, a new tensor
+Each of the values in an `ndarray` is called an *element* of the `ndarray`.
+For instance, there are 12 elements in the `ndarray` `x`.
+Unless otherwise specified, a new `ndarray`
 will be stored in main memory and designated for CPU-based computation.
 
 ```{.python .input}
 x = np.arange(12)
 x
 ```
 
 ```{.python .input}
 #@tab pytorch
 x = torch.arange(12)
 x
 ```
 
+We can access an `ndarray`'s *shape* (the length along each axis)
+by inspecting its `shape` property.
+
 ```{.python .input}
-#@tab tensorflow
-x = tf.constant(range(12))
-x
+x.shape
 ```
 
-We can access a tensor's *shape* (the length along each axis)
-by inspecting its `shape` property.
-
 ```{.python .input}
-#@tab all
+#@tab pytorch
 x.shape
 ```
 
-If we just want to know the total number of elements in a tensor,
+If we just want to know the total number of elements in an `ndarray`,
 i.e., the product of all of the shape elements,
-we can inspect its size.
+we can inspect its `size` property.
 Because we are dealing with a vector here,
-the single element of its `shape` is identical to its size.
+the single element of its `shape` is identical to its `size`.
```{.python .input} x.size @@ -126,34 +118,28 @@ x.size ```{.python .input} #@tab pytorch -x.numel() -``` - -```{.python .input} -#@tab tensorflow -tf.size(x) +x.size() ``` -To change the shape of a tensor without altering +To change the shape of an `ndarray` without altering either the number of elements or their values, we can invoke the `reshape` function. -For example, we can transform our tensor, `x`, +For example, we can transform our `ndarray`, `x`, from a row vector with shape (12,) to a matrix with shape (3, 4). -This new tensor contains the exact same values, +This new `ndarray` contains the exact same values, but views them as a matrix organized as 3 rows and 4 columns. To reiterate, although the shape has changed, the elements in `x` have not. -Note that the size is unaltered by reshaping. +Note that the `size` is unaltered by reshaping. ```{.python .input} -#@tab mxnet, pytorch x = x.reshape(3, 4) x ``` ```{.python .input} -#@tab tensorflow -x = tf.reshape(x, (3, 4)) +#@tab pytorch +x = x.reshape((3, 4)) x ``` @@ -163,16 +149,30 @@ then after we know the width, the height is given implicitly. Why should we have to perform the division ourselves? In the example above, to get a matrix with 3 rows, we specified both that it should have 3 rows and 4 columns. -Fortunately, tensors can automatically work out one dimension given the rest. +Fortunately, `ndarray` can automatically work out one dimension given the rest. We invoke this capability by placing `-1` for the dimension -that we would like tensors to automatically infer. +that we would like `ndarray` to automatically infer. In our case, instead of calling `x.reshape(3, 4)`, we could have equivalently called `x.reshape(-1, 4)` or `x.reshape(3, -1)`. +The `empty` method grabs a chunk of memory and hands us back a matrix +without bothering to change the value of any of its entries. +This is remarkably efficient but we must be careful because +the entries might take arbitrary values, including very big ones! + +```{.python .input} +np.empty((3, 4)) +``` + +```{.python .input} +#@tab pytorch +torch.empty(2, 3) +``` + Typically, we will want our matrices initialized either with zeros, ones, some other constants, or numbers randomly sampled from a specific distribution. -We can create a tensor representing a tensor with all elements +We can create an `ndarray` representing a tensor with all elements set to 0 and a shape of (2, 3, 4) as follows: ```{.python .input} @@ -184,11 +184,6 @@ np.zeros((2, 3, 4)) torch.zeros(2, 3, 4) ``` -```{.python .input} -#@tab tensorflow -tf.zeros((2, 3, 4)) -``` - Similarly, we can create tensors with each element set to 1 as follows: ```{.python .input} @@ -200,18 +195,13 @@ np.ones((2, 3, 4)) torch.ones((2, 3, 4)) ``` -```{.python .input} -#@tab tensorflow -tf.ones((2, 3, 4)) -``` - Often, we want to randomly sample the values -for each element in a tensor +for each element in an `ndarray` from some probability distribution. For example, when we construct arrays to serve as parameters in a neural network, we will typically initialize their values randomly. -The following snippet creates a tensor with shape (3, 4). +The following snippet creates an `ndarray` with shape (3, 4). Each of its elements is randomly sampled from a standard Gaussian (normal) distribution with a mean of 0 and a standard deviation of 1. 
@@ -225,12 +215,7 @@ np.random.normal(0, 1, size=(3, 4)) torch.randn(3, 4) ``` -```{.python .input} -#@tab tensorflow -tf.random.normal(shape=[3, 4]) -``` - -We can also specify the exact values for each element in the desired tensor +We can also specify the exact values for each element in the desired `ndarray` by supplying a Python list (or list of lists) containing the numerical values. Here, the outermost list corresponds to axis 0, and the inner list to axis 1. @@ -243,11 +228,6 @@ np.array([[2, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]]) torch.tensor([[2, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]]) ``` -```{.python .input} -#@tab tensorflow -tf.constant([[2, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]]) -``` - ## Operations This book is not about software engineering. @@ -303,13 +283,6 @@ y = torch.tensor([2, 2, 2, 2]) x + y, x - y, x * y, x / y, x ** y # The ** operator is exponentiation ``` -```{.python .input} -#@tab tensorflow -x = tf.constant([1.0, 2, 4, 8]) -y = tf.constant([2.0, 2, 2, 2]) -x + y, x - y, x * y, x / y, x ** y # The ** operator is exponentiation -``` - Many more operations can be applied elementwise, including unary operators like exponentiation. @@ -322,28 +295,23 @@ np.exp(x) torch.exp(x) ``` -```{.python .input} -#@tab tensorflow -tf.exp(x) -``` - In addition to elementwise computations, we can also perform linear algebra operations, including vector dot products and matrix multiplication. We will explain the crucial bits of linear algebra (with no assumed prior knowledge) in :numref:`sec_linear-algebra`. -We can also *concatenate* multiple tensors together, -stacking them end-to-end to form a larger tensor. -We just need to provide a list of tensors +We can also *concatenate* multiple `ndarray`s together, +stacking them end-to-end to form a larger `ndarray`. +We just need to provide a list of `ndarray`s and tell the system along which axis to concatenate. The example below shows what happens when we concatenate two matrices along rows (axis 0, the first element of the shape) vs. columns (axis 1, the second element of the shape). -We can see that the first output tensor's axis-0 length ($6$) -is the sum of the two input tensors' axis-0 lengths ($3 + 3$); -while the second output tensor's axis-1 length ($8$) -is the sum of the two input tensors' axis-1 lengths ($4 + 4$). +We can see that the first output `ndarray`'s axis-0 length ($6$) +is the sum of the two input `ndarray`s' axis-0 lengths ($3 + 3$); +while the second output `ndarray`'s axis-1 length ($8$) +is the sum of the two input `ndarray`s' axis-1 lengths ($4 + 4$). ```{.python .input} x = np.arange(12).reshape(3, 4) @@ -358,49 +326,47 @@ y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]]) torch.cat((x, y), dim=0), torch.cat((x, y), dim=1) ``` -```{.python .input} -#@tab tensorflow -x = tf.constant(range(12), dtype=tf.float32, shape=(3, 4)) -y = tf.constant([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]]) -tf.concat([x, y], axis=0), tf.concat([x, y], axis=1) -``` - -Sometimes, we want to construct a binary tensor via *logical statements*. +Sometimes, we want to construct a binary `ndarray` via *logical statements*. Take `x == y` as an example. For each position, if `x` and `y` are equal at that position, -the corresponding entry in the new tensor takes a value of 1, +the corresponding entry in the new `ndarray` takes a value of 1, meaning that the logical statement `x == y` is true at that position; otherwise that position takes 0. 
```{.python .input} -#@tab all x == y ``` -Summing all the elements in the tensor yields a tensor with only one element. +```{.python .input} +#@tab pytorch +x == y +``` + +Summing all the elements in the `ndarray` yields an `ndarray` with only one element. ```{.python .input} -#@tab mxnet, pytorch x.sum() ``` ```{.python .input} -#@tab tensorflow -tf.reduce_sum(x) +#@tab pytorch +x.sum() ``` +For stylistic convenience, we can write `x.sum()` as `np.sum(x)`. + ## Broadcasting Mechanism :label:`subsec_broadcasting` In the above section, we saw how to perform elementwise operations -on two tensors of the same shape. Under certain conditions, +on two `ndarray`s of the same shape. Under certain conditions, even when shapes differ, we can still perform elementwise operations by invoking the *broadcasting mechanism*. This mechanism works in the following way: First, expand one or both arrays by copying elements appropriately so that after this transformation, -the two tensors have the same shape. +the two `ndarray`s have the same shape. Second, carry out the elementwise operations on the resulting arrays. @@ -420,13 +386,6 @@ b = torch.arange(2).reshape((1, 2)) a, b ``` -```{.python .input} -#@tab tensorflow -a = tf.constant(range(3), shape=(3, 1)) -b = tf.constant(range(2), shape=(1, 2)) -a, b -``` - Since `a` and `b` are $3\times1$ and $1\times2$ matrices respectively, their shapes do not match up if we want to add them. We *broadcast* the entries of both matrices into a larger $3\times2$ matrix as follows: @@ -435,13 +394,17 @@ and for matrix `b` it replicates the rows before adding up both elementwise. ```{.python .input} -#@tab all +a + b +``` + +```{.python .input} +#@tab pytorch a + b ``` ## Indexing and Slicing -Just as in any other Python array, elements in a tensor can be accessed by index. +Just as in any other Python array, elements in an `ndarray` can be accessed by index. As in any Python array, the first element has index 0 and ranges are specified to include the first but *before* the last element. As in standard Python lists, we can access elements @@ -452,21 +415,24 @@ Thus, `[-1]` selects the last element and `[1:3]` selects the second and the third elements as follows: ```{.python .input} -#@tab all +x[-1], x[1:3] +``` + +```{.python .input} +#@tab pytorch x[-1], x[1:3] ``` Beyond reading, we can also write elements of a matrix by specifying indices. ```{.python .input} -#@tab mxnet, pytorch x[1, 2] = 9 x ``` ```{.python .input} -#@tab tensorflow -x = tf.convert_to_tensor(tf.Variable(x)[1, 2].assign(9)) +#@tab pytorch +x[1, 2] = 9 x ``` @@ -479,16 +445,13 @@ this obviously also works for vectors and for tensors of more than 2 dimensions. ```{.python .input} -#@tab mxnet, pytorch x[0:2, :] = 12 x ``` ```{.python .input} -#@tab tensorflow -x_var = tf.Variable(x) -x_var[1:2,:].assign(tf.ones(x_var[1:2,:].shape, dtype = tf.float32)*12) -x = tf.convert_to_tensor(x_var) +#@tab pytorch +x[0:2, :] = 12 x ``` @@ -497,7 +460,7 @@ x Running operations can cause new memory to be allocated to host results. For example, if we write `y = x + y`, -we will dereference the tensor that `y` used to point to +we will dereference the `ndarray` that `y` used to point to and instead point `y` at the newly allocated memory. In the following example, we demonstrate this with Python's `id()` function, which gives us the exact address of the referenced object in memory. @@ -507,7 +470,13 @@ allocating new memory for the result and then makes `y` point to this new location in memory. 
```{.python .input} -#@tab all +before = id(y) +y = y + x +id(y) == before +``` + +```{.python .input} +#@tab pytorch before = id(y) y = y + x id(y) == before @@ -548,39 +517,30 @@ z[:] = x + y print('id(z):', id(z)) ``` -```{.python .input} -#@tab tensorflow -z = tf.Variable(tf.zeros_like(y)) -print('id(z):', id(z)) -z[:].assign(x + y) -print('id(z):', id(z)) -``` - If the value of `x` is not reused in subsequent computations, we can also use `x[:] = x + y` or `x += y` to reduce the memory overhead of the operation. ```{.python .input} -#@tab mxnet, pytorch before = id(x) x += y id(x) == before ``` ```{.python .input} -#@tab tensorflow +#@tab pytorch before = id(x) -tf.Variable(x).assign(x + y) +x += y id(x) == before ``` ## Conversion to Other Python Objects -Converting to a NumPy tensor, or vice versa, is easy. +Converting to a NumPy `ndarray`, or vice versa, is easy. The converted result does not share memory. This minor inconvenience is actually quite important: when you perform operations on the CPU or on GPUs, -you do not want to halt computation, waiting to see +you do not want MXNet to halt computation, waiting to see whether the NumPy package of Python might want to be doing something else with the same chunk of memory. @@ -597,14 +557,7 @@ b = torch.tensor(a) type(a), type(b) ``` -```{.python .input} -#@tab tensorflow -a = x.numpy() -b = tf.constant(a) -type(a), type(b) -``` - -To convert a size-1 tensor to a Python scalar, +To convert a size-one `ndarray` to a Python scalar, we can invoke the `item` function or Python's built-in functions. ```{.python .input} @@ -618,71 +571,16 @@ a = torch.tensor([3.5]) a, a.item(), float(a), int(a) ``` -```{.python .input} -#@tab tensorflow -a = tf.constant([3.5]).numpy() -a, a.item(), float(a), int(a) -``` - -## The `d2l` Package - -Throughout the online version of this book, -we will provide implementations of multiple frameworks. -However, different frameworks may be different in their API names or usage. -To better reuse the same code block across multiple frameworks, -we unify a few commonly-used functions in the `d2l` package. -The comment `#@save` is a special mark where the following function, -class, or statements are saved in the `d2l` package. -For instance, later we can directly invoke -`d2l.numpy(a)` to convert a tensor `a`, -which can be defined in any supported framework, -into a NumPy tensor. - -```{.python .input} -#@save -numpy = lambda a: a.asnumpy() -size = lambda a: a.size -reshape = lambda a, *args: a.reshape(*args) -ones = np.ones -zeros = np.zeros -``` - -```{.python .input} -#@tab pytorch -#@save -numpy = lambda a: a.detach().numpy() -size = lambda a: a.numel() -reshape = lambda a, *args: a.reshape(*args) -ones = torch.ones -zeros = torch.zeros -``` - -```{.python .input} -#@tab tensorflow -#@save -numpy = lambda a: a.numpy() -size = lambda a: tf.size(a).numpy() -reshape = tf.reshape -ones = tf.ones -zeros = tf.zeros -``` - -In the rest of the book, -we often define more complicated functions or classes. -For those that can be used later, -we will also save them in the `d2l` package -so later they can be directly invoked without being redefined. - - ## Summary -* The main interface to store and manipulate data for deep learning is the tensor ($n$-dimensional array). It provides a variety of functionalities including basic mathematics operations, broadcasting, indexing, slicing, memory saving, and conversion to other Python objects. 
+* The main interface to store and manipulate data for deep learning is the $n$-dimensional array. It provides a variety of functionalities including basic mathematics operations, broadcasting, indexing, slicing, memory saving, and conversion to other Python objects. ## Exercises -1. Run the code in this section. Change the conditional statement `x == y` in this section to `x < y` or `x > y`, and then see what kind of tensor you can get. -1. Replace the two tensors that operate by element in the broadcasting mechanism with other shapes, e.g., 3-dimensional tensors. Is the result the same as expected? +1. Run the code in this section. Change the conditional statement `x == y` in this section to `x < y` or `x > y`, and then see what kind of `ndarray` you can get. +1. Replace the two `ndarray`s that operate by element in the broadcasting mechanism with other shapes, e.g., three dimensional tensors. Is the result the same as expected? + :begin_tab:`mxnet` [Discussions](https://discuss.d2l.ai/t/26) @@ -691,7 +589,3 @@ so later they can be directly invoked without being redefined. :begin_tab:`pytorch` [Discussions](https://discuss.d2l.ai/t/27) :end_tab: - -:begin_tab:`tensorflow` -[Discussions](https://discuss.d2l.ai/t/187) -:end_tab: diff --git a/config.ini b/config.ini index 65d3e73ce..b1ea600ce 100644 --- a/config.ini +++ b/config.ini @@ -17,7 +17,7 @@ release = 1.2.0 # A list of wildcards to indicate the markdown files that need to be evaluated as # Jupyter notebooks. -notebooks = *.md */*.md +notebooks = *.md */index.md chapter_preliminaries/ndarray.md # A list of files that will be copied to the build folder. resources = img/ d2lzh/ d2l.bib setup.py diff --git a/index.md b/index.md index 0788caa42..b3cd4606a 100644 --- a/index.md +++ b/index.md @@ -1,6 +1,10 @@ 《动手学深度学习》 ======================== +```eval_rst +.. raw:: html + :file: frontpage.html +``` ```toc diff --git a/index_origin.md b/index_origin.md new file mode 100644 index 000000000..ae52852cb --- /dev/null +++ b/index_origin.md @@ -0,0 +1,56 @@ +--- +source: https://github.com/d2l-ai/d2l-en/blob/master/index.md +commit: 9bf95b1 +--- + +Dive into Deep Learning +======================== + +```eval_rst +.. raw:: html + :file: frontpage.html +``` + + +```toc +:maxdepth: 1 + +chapter_preface/index +chapter_installation/index +chapter_notation/index +``` + + +```toc +:maxdepth: 2 +:numbered: + +chapter_introduction/index +chapter_preliminaries/index +chapter_linear-networks/index +chapter_multilayer-perceptrons/index +chapter_deep-learning-computation/index +chapter_convolutional-neural-networks/index +chapter_convolutional-modern/index +chapter_recurrent-neural-networks/index +chapter_recurrent-modern/index +chapter_attention-mechanisms/index +chapter_optimization/index +chapter_computational-performance/index +chapter_computer-vision/index +chapter_natural-language-processing-pretraining/index +chapter_natural-language-processing-applications/index +chapter_recommender-systems/index +chapter_generative-adversarial-networks/index +chapter_appendix-mathematics-for-deep-learning/index +chapter_appendix-tools-for-deep-learning/index + +``` + + +```toc +:maxdepth: 1 + +chapter_references/zreferences +``` +