diff --git a/src/category.tex b/src/category.tex
new file mode 100644
index 00000000..c12cef06
--- /dev/null
+++ b/src/category.tex
@@ -0,0 +1,12 @@
+\newcommand{\cat}{%
+\mathbf%
+}
+\newcommand{\idarrow}[1][]{%
+\mathbf{id}_{#1}%
+}
+\newcommand{\Set}{\cat{Set}}
+\newcommand{\Rel}{\cat{Rel}}
+\newcommand{\Cat}{\cat{Cat}}
+\newcommand{\id}{\mathbf{id}}
+
+\newcommand{\stimes}{{\times}}
\ No newline at end of file
diff --git a/src/content/1.1/Category - The Essence of Composition.tex b/src/content/1.1/Category - The Essence of Composition.tex
index 5e2d77f4..ea5bb251 100644
--- a/src/content/1.1/Category - The Essence of Composition.tex
+++ b/src/content/1.1/Category - The Essence of Composition.tex
@@ -5,47 +5,49 @@
for variety, I will occasionally draw objects as piggies and arrows as
fireworks.) But the essence of a category is \newterm{composition}. Or, if you
prefer, the essence of composition is a category. Arrows compose, so
-if you have an arrow from object A to object B, and another arrow from
-object B to object C, then there must be an arrow --- their composition
---- that goes from A to C.
+if you have an arrow from object $A$ to object $B$, and another arrow from
+object $B$ to object $C$, then there must be an arrow --- their composition
+--- that goes from $A$ to $C$.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/img_1330.jpg}
-\caption{In a category, if there is an arrow going from A to B and an arrow going from B to C then there must also be a direct arrow from A to C that is their composition. This diagram is not a full category because it’s missing identity morphisms (see later).}
+\caption{In a category, if there is an arrow going from $A$ to $B$ and an arrow going from $B$ to $C$
+then there must also be a direct arrow from $A$ to $C$ that is their composition. This diagram is not a full
+category because it’s missing identity morphisms (see later).}
\end{figure}
\section{Arrows as Functions}\label{arrows-as-functions}
Is this already too much abstract nonsense? Do not despair. Let's talk
concretes. Think of arrows, which are also called \newterm{morphisms}, as
-functions. You have a function \code{f} that takes an argument of type A and
-returns a B. You have another function \code{g} that takes a B and returns a C.
-You can compose them by passing the result of \code{f} to \code{g}. You have just
-defined a new function that takes an A and returns a C.
+functions. You have a function \code{f} that takes an argument of type $A$ and
+returns a $B$. You have another function \code{g} that takes a $B$ and returns a $C$.
+You can compose them by passing the result of \code{f} to \code{g}. You have just
+defined a new function that takes an $A$ and returns a $C$.
In math, such composition is denoted by a small circle between
functions: \ensuremath{g \circ f}. Notice the right to left order of composition. For some
people this is confusing. You may be familiar with the pipe notation in
Unix, as in:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
lsof | grep Chrome
\end{Verbatim}
or the chevron \code{>>} in F\#, which both
go from left to right. But in mathematics and in Haskell functions
-compose right to left. It helps if you read \code{g◦f} as ``g \emph{after} f.''
+compose right to left. It helps if you read \(g \circ f\) as ``$g$ \emph{after} $f$.''
Let's make this even more explicit by writing some C code. We have one
function \code{f} that takes an argument of type \code{A} and
returns a value of type \code{B}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
B f(A a);
\end{Verbatim}
and another:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
C g(B b);
\end{Verbatim}
Their composition is:
@@ -64,17 +66,17 @@ \section{Arrows as Functions}\label{arrows-as-functions}
there isn't one. So let's try some Haskell for a change. Here's the
declaration of a function from A to B:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: A -> B
\end{Verbatim}
Similarly:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
g :: B -> C
\end{Verbatim}
Their composition is:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
g . f
\end{Verbatim}
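As a quick sketch (not from the text), the composition above can be made concrete by picking sample types to stand in for \code{A}, \code{B}, and \code{C} — here \code{String}, \code{Int}, and \code{Bool} are arbitrary choices:

```haskell
-- Hypothetical concrete instances of f and g, with String, Int, Bool
-- standing in for the abstract types A, B, C.
f :: String -> Int
f = length

g :: Int -> Bool
g = even

-- h is "g after f": apply f first, then g.
h :: String -> Bool
h = g . f

main :: IO ()
main = print (h "abcd")  -- even (length "abcd") = even 4 = True
```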
Once you see how simple things are in Haskell, the inability to express
@@ -82,7 +84,7 @@ \section{Arrows as Functions}\label{arrows-as-functions}
fact, Haskell will let you use Unicode characters so you can write
composition as:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
g ◦ f
\end{Verbatim}
You can even use Unicode double colons and arrows:
@@ -102,17 +104,14 @@ \section{Properties of Composition}\label{properties-of-composition}
\begin{enumerate}
\item
-Composition is associative. If you have three morphisms, f, g, and h,
+Composition is associative. If you have three morphisms, $f$, $g$, and $h$,
that can be composed (that is, their objects match end-to-end), you
don't need parentheses to compose them. In math notation this is
expressed as:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-h◦(g◦f) = (h◦g)◦f = h◦g◦f
-\end{Verbatim}
+\[h \circ (g \circ f) = (h \circ g) \circ f = h \circ g \circ f\]
In (pseudo) Haskell:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: A -> B
g :: B -> C
h :: C -> D
@@ -124,18 +123,15 @@ \section{Properties of Composition}\label{properties-of-composition}
be not as obvious in other categories.
\item
-For every object A there is an arrow which is a unit of composition.
+For every object $A$ there is an arrow which is a unit of composition.
This arrow loops from the object to itself. Being a unit of composition
-means that, when composed with any arrow that either starts at A or ends
-at A, respectively, it gives back the same arrow. The unit arrow for
-object A is called \code{id\textsubscript{A}} (\newterm{identity} on A). In math
-notation, if \code{f} goes from A to B then
-
-\code{f◦id\textsubscript{A} = f}
-
+means that, when composed with any arrow that either starts at $A$ or ends
+at $A$, respectively, it gives back the same arrow. The unit arrow for
+object $A$ is called $\idarrow[A]$ (\newterm{identity} on $A$). In math
+notation, if $f$ goes from $A$ to $B$ then
+\[f \circ \idarrow[A] = f\]
and
-
-\code{id\textsubscript{B}◦f = f}
+\[\idarrow[B] \circ f = f\]
\end{enumerate}
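Both laws can be spot-checked on sample functions. This is only a sketch with arbitrarily chosen integer functions, not a proof:

```haskell
-- Three composable morphisms on Int, chosen arbitrarily for illustration.
f, g, h :: Int -> Int
f = (+ 1)
g = (* 2)
h = subtract 3

main :: IO ()
main = do
  -- associativity: h . (g . f) agrees with (h . g) . f
  print ((h . (g . f)) 10 == ((h . g) . f) 10)
  -- identity: id is a unit of composition on both sides
  print ((f . id) 10 == f 10 && (id . f) 10 == f 10)
```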
When dealing with functions, the identity arrow is implemented as the
identity function that just returns back its argument. The
@@ -152,7 +148,7 @@ \section{Properties of Composition}\label{properties-of-composition}
In Haskell, the identity function is part of the standard library
(called Prelude). Here's its declaration and definition:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
id :: a -> a
id x = x
\end{Verbatim}
@@ -180,7 +176,7 @@ \section{Properties of Composition}\label{properties-of-composition}
The identity conditions can be written (again, in pseudo-Haskell) as:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f . id == f
id . f == f
\end{Verbatim}
@@ -229,7 +225,7 @@ \section{Composition is the Essence of
time. One of the most cited papers in psychology,
\href{http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two}{The
Magical Number Seven, Plus or Minus Two}, postulated that we can only
-keep \ensuremath{7 \pm 2} ``chunks'' of information in our minds. The details of our
+keep $7 \pm 2$ ``chunks'' of information in our minds. The details of our
understanding of the human short-term memory might be changing, but we
know for sure that it's limited. The bottom line is that we are unable
to deal with the soup of objects or the spaghetti of code. We need
diff --git a/src/content/1.10/Natural Transformations.tex b/src/content/1.10/Natural Transformations.tex
index d58bd9a3..486f1cd6 100644
--- a/src/content/1.10/Natural Transformations.tex
+++ b/src/content/1.10/Natural Transformations.tex
@@ -21,11 +21,11 @@
transformations help us compare these realizations. They are mappings of
functors --- special mappings that preserve their functorial nature.
-Consider two functors \code{F} and \code{G} between categories
-\emph{C} and \emph{D}. If you focus on just one object \code{a} in
-\emph{C}, it is mapped to two objects: \code{F a} and \code{G a}.
-A mapping of functors should therefore map \code{F a} to
-\code{G a}.
+Consider two functors $F$ and $G$ between categories
+$\cat{C}$ and $\cat{D}$. If you focus on just one object $a$ in
+$\cat{C}$, it is mapped to two objects: $F a$ and $G a$.
+A mapping of functors should therefore map $F a$ to
+$G a$.
\begin{figure}[H]
\centering
@@ -33,73 +33,69 @@
\end{figure}
\noindent
-Notice that \code{F a} and \code{G a} are objects in the same
-category \emph{D}. Mappings between objects in the same category should
+Notice that $F a$ and $G a$ are objects in the same
+category $\cat{D}$. Mappings between objects in the same category should
not go against the grain of the category. We don't want to make
artificial connections between objects. So it's \emph{natural} to use
existing connections, namely morphisms. A natural transformation is a
-selection of morphisms: for every object \code{a}, it picks one
-morphism from \code{F a} to \code{G a}. If we call the natural
-transformation \code{α}, this morphism is called the \newterm{component}
-of \code{α} at \code{a}, or \code{α\textsubscript{a}}.
+selection of morphisms: for every object $a$, it picks one
+morphism from $F a$ to $G a$. If we call the natural
+transformation $\alpha$, this morphism is called the \newterm{component}
+of $\alpha$ at $a$, or $\alpha_a$.
-\begin{Verbatim}[commandchars=\\\{\}]
-α\textsubscript{a} :: F a -> G a
-\end{Verbatim}
-Keep in mind that \code{a} is an object in \emph{C} while \code{α\textsubscript{a}}
-is a morphism in \emph{D}.
+\[\alpha_a \Colon F a \to G a\]
+Keep in mind that $a$ is an object in $\cat{C}$ while $\alpha_a$
+is a morphism in $\cat{D}$.
-If, for some \code{a}, there is no morphism between \code{F a} and
-\code{G a} in \emph{D}, there can be no natural transformation
-between \code{F} and \code{G}.
+If, for some $a$, there is no morphism between $F a$ and
+$G a$ in $\cat{D}$, there can be no natural transformation
+between $F$ and $G$.
Of course that's only half of the story, because functors not only map
objects, they map morphisms as well. So what does a natural
transformation do with those mappings? It turns out that the mapping of
-morphisms is fixed --- under any natural transformation between F and G,
-\code{F f} must be transformed into \code{G f}. What's more, the
+morphisms is fixed --- under any natural transformation between $F$ and $G$,
+$F f$ must be transformed into $G f$. What's more, the
mapping of morphisms by the two functors drastically restricts the
choices we have in defining a natural transformation that's compatible
-with it. Consider a morphism \code{f} between two objects \code{a}
-and \code{b} in \emph{C}. It's mapped to two morphisms, \code{F f}
-and \code{G f} in \emph{D}:
+with it. Consider a morphism $f$ between two objects $a$
+and $b$ in $\cat{C}$. It's mapped to two morphisms, $F f$
+and $G f$ in $\cat{D}$:
+
+\begin{gather*}
+F f \Colon F a \to F b \\
+G f \Colon G a \to G b
+\end{gather*}
+The natural transformation $\alpha$ provides two additional morphisms
+that complete the diagram in $\cat{D}$:
-\begin{Verbatim}[commandchars=\\\{\}]
-F f :: F a -> F b
-G f :: G a -> G b
-\end{Verbatim}
+\begin{gather*}
+\alpha_a \Colon F a \to G a \\
+\alpha_b \Colon F b \to G b
+\end{gather*}
\begin{wrapfigure}[5]{R}{0pt}
\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
-\fbox{\includegraphics[width=40mm]{images/3_naturality.jpg}}}%
+\includegraphics[width=40mm]{images/3_naturality.jpg}}%
\end{wrapfigure}
\noindent
-The natural transformation \code{α} provides two additional morphisms
-that complete the diagram in \emph{D}:
-\begin{Verbatim}[commandchars=\\\{\}]
-α\textsubscript{a} :: F a -> G a
-α\textsubscript{b} :: F b -> G b
-\end{Verbatim}
-Now we have two ways of getting from \code{F a} to \code{G b}. To
+Now we have two ways of getting from $F a$ to $G b$. To
make sure that they are equal, we must impose the \newterm{naturality
-condition} that holds for any \code{f}:
+condition} that holds for any $f$:
-\begin{Verbatim}[commandchars=\\\{\}]
-G f ◦ α\textsubscript{a} = α\textsubscript{b} ◦ F f
-\end{Verbatim}
+\[G f \circ \alpha_a = \alpha_b \circ F f\]
The naturality condition is a pretty stringent requirement. For
-instance, if the morphism \code{F f} is invertible, naturality
-determines \code{α\textsubscript{b}} in terms of \code{α\textsubscript{a}}. It \newterm{transports}
-\code{α\textsubscript{a}} along \code{f}:
+instance, if the morphism $F f$ is invertible, naturality
+determines $\alpha_b$ in terms of $\alpha_a$. It \emph{transports}
+$\alpha_a$ along $f$:
-\begin{Verbatim}[commandchars=\\\{\}]
-α\textsubscript{b} = (G f) ◦ α\textsubscript{a} ◦ (F f)\textsuperscript{-1}
-\end{Verbatim}
+\[\alpha_b = (G f) \circ \alpha_a \circ (F f)^{-1}\]
\begin{wrapfigure}[7]{R}{0pt}
\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
-\fbox{\includegraphics[width=40mm]{images/4_transport.jpg}}}%
+\includegraphics[width=40mm]{images/4_transport.jpg}}%
\end{wrapfigure}
\noindent
@@ -115,7 +111,7 @@
Looking at a natural transformation component-wise, one may say that it
maps objects to morphisms. Because of the naturality condition, one may
also say that it maps morphisms to commuting squares --- there is one
-commuting naturality square in \emph{D} for every morphism in \emph{C}.
+commuting naturality square in $\cat{D}$ for every morphism in $\cat{C}$.
\begin{figure}[H]
\centering
@@ -145,9 +141,9 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
To construct a natural transformation we start with an object, here a
type, \code{a}. One functor, \code{F}, maps it to the type
-\code{F a}. Another functor, \code{G}, maps it to \code{G a}.
+$F a$. Another functor, \code{G}, maps it to $G a$.
The component of a natural transformation \code{alpha} at \code{a}
-is a function from \code{F a} to \code{G a}. In pseudo-Haskell:
+is a function from $F a$ to $G a$. In pseudo-Haskell:
\begin{Verbatim}[commandchars=\\\{\}]
alpha\textsubscript{a} :: F a -> G a
@@ -155,14 +151,14 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
A natural transformation is a polymorphic function that is defined for
all types \code{a}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
alpha :: forall a . F a -> G a
\end{Verbatim}
The \code{forall a} is optional in Haskell (and in fact requires
turning on the language extension \code{ExplicitForAll}). Normally,
you would write it like this:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
alpha :: F a -> G a
\end{Verbatim}
Keep in mind that it's really a family of functions parameterized by
@@ -193,16 +189,14 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
Haskell's parametric polymorphism has an unexpected consequence: any
polymorphic function of the type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
alpha :: F a -> G a
\end{Verbatim}
where \code{F} and \code{G} are functors, automatically satisfies
-the naturality condition. Here it is in categorical notation (\code{f}
-is a function \code{f::a->b}):
+the naturality condition. Here it is in categorical notation ($f$
+is a function $f \Colon a \to b$):
-\begin{Verbatim}[commandchars=\\\{\}]
-G f ◦ α\textsubscript{a} = α\textsubscript{b} ◦ F f
-\end{Verbatim}
+\[G f \circ \alpha_a = \alpha_b \circ F f\]
In Haskell, the action of a functor \code{G} on a morphism \code{f}
is implemented using \code{fmap}. I'll first write it in
pseudo-Haskell, with explicit type annotations:
@@ -213,7 +207,7 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
Because of type inference, these annotations are not necessary, and the
following equation holds:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap f . alpha = alpha . fmap f
\end{Verbatim}
This is still not real Haskell --- function equality is not expressible
@@ -250,7 +244,7 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
first is between the list functor, and the \code{Maybe} functor. It
returns the head of the list, but only if the list is non-empty:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
safeHead :: [a] -> Maybe a
safeHead [] = Nothing
safeHead (x:xs) = Just x
@@ -261,21 +255,21 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
functors. But just to convince ourselves, let's verify the naturality
condition.
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap f . safeHead = safeHead . fmap f
\end{Verbatim}
We have two cases to consider; an empty list:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap f (safeHead []) = fmap f Nothing = Nothing
\end{Verbatim}
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
safeHead (fmap f []) = safeHead [] = Nothing
\end{Verbatim}
and a non-empty list:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap f (safeHead (x:xs)) = fmap f (Just x) = Just (f x)
\end{Verbatim}
\begin{minted}[breaklines]{text}
@@ -283,13 +277,13 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
\end{minted}
I used the implementation of \code{fmap} for lists:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap f [] = []
fmap f (x:xs) = f x : fmap f xs
\end{Verbatim}
and for \code{Maybe}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap f Nothing = Nothing
fmap f (Just x) = Just (f x)
\end{Verbatim}
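The two cases above can also be checked mechanically. This sketch evaluates both sides of the naturality condition for \code{safeHead} on a sample function and sample lists:

```haskell
-- safeHead as defined in the text.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

main :: IO ()
main = do
  -- non-empty list: both sides yield Just (f x)
  print (fmap (+1) (safeHead [1,2,3 :: Int]) == safeHead (fmap (+1) [1,2,3]))
  -- empty list: both sides yield Nothing
  print (fmap (+1) (safeHead ([] :: [Int])) == safeHead (fmap (+1) []))
```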
@@ -301,7 +295,7 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
For instance, \code{length} can be thought of as a natural
transformation from the list functor to the \code{Const Int} functor:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
length :: [a] -> Const Int a
length [] = Const 0
length (x:xs) = Const (1 + unConst (length xs))
@@ -309,13 +303,13 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
Here, \code{unConst} is used to peel off the \code{Const}
constructor:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
unConst :: Const c a -> c
unConst (Const x) = x
\end{Verbatim}
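The \code{Const} construction above can be run as-is. In this sketch the transformation is renamed \code{len} (a hypothetical name) only to avoid clashing with Prelude's \code{length}:

```haskell
-- A minimal Const functor, as in the text.
newtype Const c a = Const c

unConst :: Const c a -> c
unConst (Const x) = x

-- The text's length-as-natural-transformation, renamed to len
-- so it doesn't shadow Prelude.length.
len :: [a] -> Const Int a
len []     = Const 0
len (_:xs) = Const (1 + unConst (len xs))

main :: IO ()
main = print (unConst (len "hello"))  -- 5
```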
Of course, in practice \code{length} is defined as:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
length :: [a] -> Int
\end{Verbatim}
which effectively hides the fact that it's a natural transformation.
@@ -324,7 +318,7 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
\code{Const} functor is a little harder, since it would require the
creation of a value from nothing. The best we can do is:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
scam :: Const Int a -> Maybe a
scam (Const x) = Nothing
\end{Verbatim}
@@ -332,7 +326,7 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
important role in the Yoneda lemma, is the \code{Reader} functor. I
will rewrite its definition as a \code{newtype}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
newtype Reader e a = Reader (e -> a)
\end{Verbatim}
It is parameterized by two types, but is (covariantly) functorial only
@@ -343,20 +337,20 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
fmap f (Reader g) = Reader (\x -> f (g x))
\end{Verbatim}
For every type \code{e}, you can define a family of natural
-transformations from \code{Reader\ e} to any other functor \code{f}.
+transformations from \code{Reader e} to any other functor \code{f}.
We'll see later that the members of this family are always in one to one
correspondence with the elements of \code{f e} (the
\hyperref[the-yoneda-lemma]{Yoneda lemma}).
For instance, consider the somewhat trivial unit type \code{()} with
-one element \code{()}. The functor \code{Reader\ ()} takes any type
+one element \code{()}. The functor \code{Reader ()} takes any type
\code{a} and maps it into a function type \code{()->a}.
These are just all the functions that pick a single element from the set
\code{a}. There are as many of these as there are elements in
\code{a}. Now let's consider natural transformations from this functor
to the \code{Maybe} functor:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
alpha :: Reader () a -> Maybe a
\end{Verbatim}
There are only two of these, \code{dumb} and \code{obvious}:
@@ -366,7 +360,7 @@ \section{Polymorphic Functions}\label{polymorphic-functions}
\end{Verbatim}
and
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
obvious (Reader g) = Just (g ())
\end{Verbatim}
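Both transformations can be exercised directly. A self-contained sketch, with \code{Reader} as defined in the text:

```haskell
newtype Reader e a = Reader (e -> a)

-- The two natural transformations from Reader () to Maybe.
dumb :: Reader () a -> Maybe a
dumb (Reader _) = Nothing

obvious :: Reader () a -> Maybe a
obvious (Reader g) = Just (g ())

main :: IO ()
main = do
  print (obvious (Reader (\_ -> (42 :: Int))))      -- Just 42
  print (dumb (Reader (\_ -> 42)) :: Maybe Int)     -- Nothing
```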
(The only thing you can do with \code{g} is to apply it to the unit
@@ -399,12 +393,12 @@ \section{Beyond Naturality}\label{beyond-naturality}
You might remember the example of a contravariant functor we've looked
at before:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
newtype Op r a = Op (a -> r)
\end{Verbatim}
This functor is contravariant in \code{a}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
instance Contravariant (Op r) where
contramap f (Op g) = Op (g . f)
\end{Verbatim}
@@ -418,20 +412,20 @@ \section{Beyond Naturality}\label{beyond-naturality}
transformation in \textbf{Hask}. However, because they are both
contravariant, they satisfy the ``opposite'' naturality condition:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
contramap f . predToStr = predToStr . contramap f
\end{Verbatim}
Notice that the function \code{f} must go in the opposite direction
than what you'd use with \code{fmap}, because of the signature of
\code{contramap}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
contramap :: (b -> a) -> (Op Bool a -> Op Bool b)
\end{Verbatim}
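The opposite naturality condition can be spot-checked. This sketch assumes the \code{predToStr} implementation from the earlier contravariant-functor example (mapping a predicate to \code{"T"}/\code{"F"}); the helper \code{runOp} is introduced here only to extract the wrapped function:

```haskell
newtype Op r a = Op (a -> r)

-- A local Contravariant class, to keep the sketch self-contained.
class Contravariant f where
  contramap :: (b -> a) -> f a -> f b

instance Contravariant (Op r) where
  contramap f (Op g) = Op (g . f)

-- predToStr as assumed from the text's earlier example.
predToStr :: Op Bool a -> Op String a
predToStr (Op p) = Op (\x -> if p x then "T" else "F")

-- Helper (not from the text) to apply the wrapped function.
runOp :: Op r a -> a -> r
runOp (Op g) = g

main :: IO ()
main = do
  -- opposite naturality: contramap f . predToStr vs predToStr . contramap f
  let lhs = (contramap length . predToStr) (Op even)
      rhs = (predToStr . contramap length) (Op even)
  print (runOp lhs "abcd" == runOp rhs "abcd")
```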
Are there any type constructors that are not functors, whether covariant
or contravariant? Here's one example:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
a -> a
\end{Verbatim}
This is not a functor because the same type \code{a} is used both in
@@ -439,7 +433,7 @@ \section{Beyond Naturality}\label{beyond-naturality}
can't implement \code{fmap} or \code{contramap} for this type.
Therefore a function of the signature:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
(a -> a) -> f a
\end{Verbatim}
where \code{f} is an arbitrary functor, cannot be a natural
@@ -452,55 +446,40 @@ \section{Functor Category}\label{functor-category}
Now that we have mappings between functors --- natural transformations
--- it's only natural to ask the question whether functors form a
category. And indeed they do! There is one category of functors for each
-pair of categories, C and D. Objects in this category are functors from
-C to D, and morphisms are natural transformations between those
+pair of categories, $\cat{C}$ and $\cat{D}$. Objects in this category are functors from
+$\cat{C}$ to $\cat{D}$, and morphisms are natural transformations between those
functors.
We have to define composition of two natural transformations, but that's
quite easy. The components of natural transformations are morphisms, and
we know how to compose morphisms.
-Indeed, let's take a natural transformation α from functor F to G. Its
-component at object \code{a} is some morphism:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-α\textsubscript{a} :: F a -> G a
-\end{Verbatim}
-We'd like to compose α with β, which is a natural transformation from
-functor G to H. The component of β at \code{a} is a morphism:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-β\textsubscript{a} :: G a -> H a
-\end{Verbatim}
+Indeed, let's take a natural transformation $\alpha$ from functor $F$ to $G$. Its
+component at object $a$ is some morphism:
+\[\alpha_a \Colon F a \to G a\]
+We'd like to compose $\alpha$ with $\beta$, which is a natural transformation from
+functor $G$ to $H$. The component of $\beta$ at $a$ is a morphism:
+\[\beta_a \Colon G a \to H a\]
These morphisms are composable and their composition is another
morphism:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-β\textsubscript{a} ◦ α\textsubscript{a} :: F a -> H a
-\end{Verbatim}
+\[\beta_a \circ \alpha_a \Colon F a \to H a\]
We will use this morphism as the component of the natural transformation
-β ⋅ α --- the composition of two natural transformations β after α:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-(β \ensuremath{⋅} α)a = β\textsubscript{a} ◦ α\textsubscript{a}
-\end{Verbatim}
+$\beta \cdot \alpha$ --- the composition of two natural transformations $\beta$ after $\alpha$:
+\[(\beta \cdot \alpha)_a = \beta_a \circ \alpha_a\]
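In Haskell, where components are polymorphic functions, vertical composition is just function composition of the components. A sketch with hypothetical components: \code{alpha} is \code{safeHead}-style (list to \code{Maybe}) and \code{beta} is \code{maybeToList}-style (\code{Maybe} to list), so \code{beta . alpha} is a transformation from the list functor to itself:

```haskell
-- Component of alpha: list functor ~> Maybe functor.
alpha :: [a] -> Maybe a
alpha []    = Nothing
alpha (x:_) = Just x

-- Component of beta: Maybe functor ~> list functor.
beta :: Maybe a -> [a]
beta Nothing  = []
beta (Just x) = [x]

main :: IO ()
main = print ((beta . alpha) [1,2,3 :: Int])  -- [1]
```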
\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=60mm]{images/5_vertical.jpg}}
+\includegraphics[width=40mm]{images/5_vertical.jpg}
\end{figure}
\noindent
One (long) look at a diagram convinces us that the result of this
composition is indeed a natural transformation from F to H:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-H f ◦ (β \ensuremath{⋅} α)\textsubscript{a} = (β \ensuremath{⋅} α)\textsubscript{b} ◦ F f
-\end{Verbatim}
+\[H f \circ (\beta \cdot \alpha)_a = (\beta \cdot \alpha)_b \circ F f\]
\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=60mm]{images/6_verticalnaturality.jpg}}
+\includegraphics[width=40mm]{images/6_verticalnaturality.jpg}
\end{figure}
\noindent
@@ -509,11 +488,8 @@ \section{Functor Category}\label{functor-category}
their composition.
Finally, for each functor F there is an identity natural transformation
-1\textsubscript{F} whose components are the identity morphisms:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-id\textsubscript{F a} :: F a -> F a
-\end{Verbatim}
+$1_F$ whose components are the identity morphisms:
+\[\id_{F a} \Colon F a \to F a\]
So, indeed, functors form a category.
A word about notation. Following Saunders Mac Lane I use the dot for the
@@ -524,14 +500,14 @@ \section{Functor Category}\label{functor-category}
composition is important in defining the functor category. I'll explain
horizontal composition shortly.
-\begin{figure}
+\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=60mm]{images/6a_vertical.jpg}}
+\includegraphics[width=40mm]{images/6a_vertical.jpg}
\end{figure}
-The functor category between categories C and D is written as
-\code{Fun(C, D)}, or \code{{[}C, D{]}}, or sometimes as
-\code{DC}. This last notation suggests that a functor category itself
+The functor category between categories $\cat{C}$ and $\cat{D}$ is written as
+$\cat{Fun(C, D)}$, or $\cat{[C, D]}$, or sometimes as
+$\cat{D^C}$. This last notation suggests that a functor category itself
might be considered a function object (an exponential) in some other
category. Is this indeed the case?
@@ -539,50 +515,48 @@ \section{Functor Category}\label{functor-category}
building so far. We started with a category, which is a collection of
objects and morphisms. Categories themselves (or, strictly speaking
\emph{small} categories, whose objects form sets) are themselves objects
-in a higher-level category \textbf{Cat}. Morphisms in that category are
-functors. A Hom-set in \textbf{Cat} is a set of functors. For instance
-Cat(C, D) is a set of functors between two categories C and D.
+in a higher-level category $\Cat$. Morphisms in that category are
+functors. A Hom-set in $\Cat$ is a set of functors. For instance
+$\Cat(\cat{C}, \cat{D})$ is a set of functors between two categories $\cat{C}$ and $\cat{D}$.
-\begin{figure}
+\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=60mm]{images/7_cathomset.jpg}}
+\includegraphics[width=40mm]{images/7_cathomset.jpg}
\end{figure}
-A functor category {[}C, D{]} is also a set of functors between two
+A functor category $\cat{[C, D]}$ is also a set of functors between two
categories (plus natural transformations as morphisms). Its objects are
-the same as the members of Cat(C, D). Moreover, a functor category,
-being a category, must itself be an object of \textbf{Cat} (it so
+the same as the members of $\Cat(\cat{C}, \cat{D})$. Moreover, a functor category,
+being a category, must itself be an object of $\Cat$ (it so
happens that the functor category between two small categories is itself
small). We have a relationship between a Hom-set in a category and an
object in the same category. The situation is exactly like the
exponential object that we've seen in the last section. Let's see how we
-can construct the latter in \textbf{Cat}.
+can construct the latter in $\Cat$.
As you may remember, in order to construct an exponential, we need to
-first define a product. In \textbf{Cat}, this turns out to be relatively
+first define a product. In $\Cat$, this turns out to be relatively
easy, because small categories are \emph{sets} of objects, and we know
how to define cartesian products of sets. So an object in a product
-category C × D is just a pair of objects, \code{(c, d)}, one from C
-and one from D. Similarly, a morphism between two such pairs,
-\code{(c, d)} and \code{(c', d')}, is a pair of
-morphisms, \code{(f, g)}, where
-\code{f :: c -> c'} and
-\code{g :: d -> d'}. These pairs of morphisms
+category $\cat{C\times D}$ is just a pair of objects, $(c, d)$, one from $\cat{C}$
+and one from $\cat{D}$. Similarly, a morphism between two such pairs,
+$(c, d)$ and $(c', d')$, is a pair of morphisms, $(f, g)$, where
+$f \Colon c \to c'$ and $g \Colon d \to d'$. These pairs of morphisms
compose component-wise, and there is always an identity pair that is
just a pair of identity morphisms. To make the long story short,
-\textbf{Cat} is a full-blown cartesian closed category in which there is
-an exponential object D\textsuperscript{C} for any pair of categories.
-And by ``object'' in \textbf{Cat} I mean a category, so
-D\textsuperscript{C} is a category, which we can identify with the
-functor category between C and D.
+$\Cat$ is a full-blown cartesian closed category in which there is
+an exponential object $\cat{D^C}$ for any pair of categories.
+And by ``object'' in $\Cat$ I mean a category, so
+$\cat{D^C}$ is a category, which we can identify with the
+functor category between $\cat{C}$ and $\cat{D}$.
\section{2-Categories}\label{categories}
-With that out of the way, let's have a closer look at \textbf{Cat}. By
-definition, any Hom-set in \textbf{Cat} is a set of functors. But, as we
+With that out of the way, let's have a closer look at $\Cat$. By
+definition, any Hom-set in $\Cat$ is a set of functors. But, as we
have seen, functors between two objects have a richer structure than
just a set. They form a category, with natural transformations acting as
-morphisms. Since functors are considered morphisms in \textbf{Cat},
+morphisms. Since functors are considered morphisms in $\Cat$,
natural transformations are morphisms between morphisms.
This richer structure is an example of a 2-category, a generalization of
@@ -590,7 +564,7 @@ \section{2-Categories}\label{categories}
1-morphisms in this context), there are also 2-morphisms, which are
morphisms between morphisms.
-In the case of \textbf{Cat} seen as a 2-category we have:
+In the case of $\Cat$ seen as a 2-category we have:
\begin{itemize}
\tightlist
@@ -602,17 +576,17 @@ \section{2-Categories}\label{categories}
2-morphisms: Natural transformations between functors.
\end{itemize}
-\begin{wrapfigure}[11]{R}{0pt}
-\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
-\fbox{\includegraphics[width=2.25000in]{images/8_cat-2-cat.jpg}}}%
-\end{wrapfigure}
+\begin{figure}[H]
+\centering
+\includegraphics[width=40mm]{images/8_cat-2-cat.jpg}
+\end{figure}
\noindent
-Instead of a Hom-set between two categories C and D, we have a
-Hom-category --- the functor category D\textsuperscript{C}. We have
-regular functor composition: a functor F from D\textsuperscript{C}
-composes with a functor G from E\textsuperscript{D} to give G ◦ F from
-E\textsuperscript{C}. But we also have composition inside each
+Instead of a Hom-set between two categories $\cat{C}$ and $\cat{D}$, we have a
+Hom-category --- the functor category $\cat{D^C}$. We have
+regular functor composition: a functor $F$ from $\cat{D^C}$
+composes with a functor $G$ from $\cat{E^D}$ to give $G \circ F$ from
+$\cat{E^C}$. But we also have composition inside each
Hom-category --- vertical composition of natural transformations, or
2-morphisms, between functors.
@@ -620,97 +594,79 @@ \section{2-Categories}\label{categories}
With two kinds of composition in a 2-category, the question arises: How
do they interact with each other?
-Let's pick two functors, or 1-morphisms, in \textbf{Cat}:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-F :: C -> D
-G :: D -> E
-\end{Verbatim}
+Let's pick two functors, or 1-morphisms, in $\Cat$:
+\begin{gather*}
+F \Colon \cat{C} \to \cat{D} \\
+G \Colon \cat{D} \to \cat{E}
+\end{gather*}
and their composition:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-G ◦ F :: C -> E
-\end{Verbatim}
-Suppose we have two natural transformations, α and β, that act,
-respectively, on functors F and G:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-α :: F -> F'
-β :: G -> G'
-\end{Verbatim}
+\[G \circ F \Colon \cat{C} \to \cat{E}\]
+Suppose we have two natural transformations, $\alpha$ and $\beta$, that act,
+respectively, on functors $F$ and $G$:
+\begin{gather*}
+\alpha \Colon F \to F' \\
+\beta \Colon G \to G'
+\end{gather*}
\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=60mm]{images/10_horizontal.jpg}}
+\includegraphics[width=40mm]{images/10_horizontal.jpg}
\end{figure}
\noindent
Notice that we cannot apply vertical composition to this pair, because
-the target of α is different from the source of β. In fact they are
-members of two different functor categories: D \textsuperscript{C} and E
-\textsuperscript{D}. We can, however, apply composition to the functors
-F' and G', because the target of F' is the source of G' --- it's the
-category D. What's the relation between the functors G'◦ F' and G ◦ F?
+the target of $\alpha$ is different from the source of $\beta$. In fact they are
+members of two different functor categories: $\cat{D^C}$ and $\cat{E^D}$.
+We can, however, apply composition to the functors
+$F'$ and $G'$, because the target of $F'$ is the source of $G'$ --- it's the
+category $\cat{D}$. What's the relation between the functors $G' \circ F'$ and $G \circ F$?
-Having α and β at our disposal, can we define a natural transformation
-from G ◦ F to G'◦ F'? Let me sketch the construction.
+Having $\alpha$ and $\beta$ at our disposal, can we define a natural transformation
+from $G \circ F$ to $G' \circ F'$? Let me sketch the construction.
\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=90mm]{images/9_horizontal.jpg}}
+\includegraphics[width=55mm]{images/9_horizontal.jpg}
\end{figure}
\noindent
-As usual, we start with an object \code{a} in C. Its image splits into
-two objects in D: \code{F a} and \code{F'a}. There is also a
-morphism, a component of α, connecting these two objects:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-α\textsubscript{a} :: F a -> F'a
-\end{Verbatim}
-When going from D to E, these two objects split further into four
-objects:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-G (F a), G'(F a), G (F'a), G'(F'a)
-\end{Verbatim}
+As usual, we start with an object $a$ in $\cat{C}$. Its image splits into
+two objects in $\cat{D}$: $F a$ and $F'a$. There is also a
+morphism, a component of $\alpha$, connecting these two objects:
+\[\alpha_a \Colon F a \to F'a\]
+When going from $\cat{D}$ to $\cat{E}$, these two objects split further into four
+objects: $G (F a)$, $G'(F a)$, $G (F'a)$, $G'(F'a)$.
We also have four morphisms forming a square. Two of these morphisms are
-the components of the natural transformation β:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-β\textsubscript{F a} :: G (F a) -> G'(F a)
-β\textsubscript{F'a} :: G (F'a) -> G'(F'a)
-\end{Verbatim}
-The other two are the images of α\textsubscript{a} under the two
+the components of the natural transformation $\beta$:
+\begin{gather*}
+\beta_{F a} \Colon G (F a) \to G'(F a) \\
+\beta_{F'a} \Colon G (F'a) \to G'(F'a)
+\end{gather*}
+The other two are the images of $\alpha_a$ under the two
functors (functors map morphisms):
-
-\begin{Verbatim}[commandchars=\\\{\}]
-G α\textsubscript{a} :: G (F a) -> G (F'a)
-G'α\textsubscript{a} :: G'(F a) -> G'(F'a)
-\end{Verbatim}
+\begin{gather*}
+G \alpha_a \Colon G (F a) \to G (F'a) \\
+G'\alpha_a \Colon G'(F a) \to G'(F'a)
+\end{gather*}
That's a lot of morphisms. Our goal is to find a morphism that goes from
-\code{G (F a)} to \code{G'(F'a)}, a candidate for the
-component of a natural transformation connecting the two functors G ◦ F
-and G'◦ F'. In fact there's not one but two paths we can take from
-\code{G (F a)} to \code{G'(F'a)}:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-G'α\textsubscript{a} ◦ β\textsubscript{F a}
-β\textsubscript{F'a} ◦ G α\textsubscript{a}
-\end{Verbatim}
+$G (F a)$ to $G'(F'a)$, a candidate for the
+component of a natural transformation connecting the two functors $G \circ F$
+and $G' \circ F'$. In fact there's not one but two paths we can take from
+$G (F a)$ to $G'(F'a)$:
+\begin{gather*}
+G'\alpha_a \circ \beta_{F a} \\
+\beta_{F'a} \circ G \alpha_a
+\end{gather*}
Luckily for us, they are equal, because the square we have formed turns
-out to be the naturality square for β.
+out to be the naturality square for $\beta$.
-We have just defined a component of a natural transformation from G ◦ F
-to G'◦ F'. The proof of naturality for this transformation is pretty
+We have just defined a component of a natural transformation from $G \circ F$
+to $G' \circ F'$. The proof of naturality for this transformation is pretty
straightforward, provided you have enough patience.
We call this natural transformation the \newterm{horizontal composition} of
-α and β:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-β ◦ α :: G ◦ F -> G' ◦ F'
-\end{Verbatim}
+$\alpha$ and $\beta$:
+\[\beta \circ \alpha \Colon G \circ F \to G' \circ F'\]
Again, following Mac Lane I use the small circle for horizontal
composition, although you may also encounter star in its place.
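The construction above can be mirrored in Haskell, where a natural transformation is a polymorphic function. This is an illustrative aside, not the book's own code: the names \code{Nat}, \code{Compose}, and \code{horiz} are ad hoc (a \code{Compose} like this one lives in \code{Data.Functor.Compose}).

```haskell
{-# LANGUAGE RankNTypes #-}

-- A natural transformation between functors f and f'
type Nat f f' = forall a. f a -> f' a

-- Composition of functors: g after f
newtype Compose g f a = Compose { getCompose :: g (f a) }

-- Horizontal composition of beta :: Nat g g' with alpha :: Nat f f'.
-- Its component at a is beta_{F'a} . (G alpha_a); by the naturality
-- of beta this equals the other path, (G' alpha_a) . beta_{F a}.
horiz :: Functor g => Nat g g' -> Nat f f' -> Nat (Compose g f) (Compose g' f')
horiz beta alpha (Compose gfa) = Compose (beta (fmap alpha gfa))
```

For instance, taking \code{maybeToList} for $\alpha$ and \code{listToMaybe} for $\beta$, the transformation \code{horiz listToMaybe maybeToList} sends \code{Compose {[}Just 1, Nothing{]}} to \code{Compose (Just {[}1{]})}.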
@@ -719,7 +675,7 @@ \section{2-Categories}\label{categories}
transformations, and it's part of the functor category. But what about
the horizontal composition? What category does that live in?
-The way to figure this out is to look at \textbf{Cat} sideways. Look at
+The way to figure this out is to look at $\Cat$ sideways. Look at
natural transformations not as arrows between functors but as arrows
between categories. A natural transformation sits between two
categories, the ones that are connected by the functors it transforms.
@@ -727,56 +683,46 @@ \section{2-Categories}\label{categories}
\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=60mm]{images/sideways.jpg}}
+\includegraphics[width=60mm]{images/sideways.jpg}
\end{figure}
\noindent
-Let's focus on two objects of \textbf{Cat} --- categories C and D. There
+Let's focus on two objects of $\Cat$ --- categories $\cat{C}$ and $\cat{D}$. There
is a set of natural transformations that go between functors that
-connect C to D. These natural transformations are our new arrows from C
-to D. By the same token, there are natural transformations going between
-functors that connect D to E, which we can treat as new arrows going
-from D to E. Horizontal composition is the composition of these arrows.
+connect $\cat{C}$ to $\cat{D}$. These natural transformations are our new arrows from $\cat{C}$
+to $\cat{D}$. By the same token, there are natural transformations going between
+functors that connect $\cat{D}$ to $\cat{E}$, which we can treat as new arrows going
+from $\cat{D}$ to $\cat{E}$. Horizontal composition is the composition of these arrows.
-We also have an identity arrow going from C to C. It's the identity
-natural transformation that maps the identity functor on C to itself.
+We also have an identity arrow going from $\cat{C}$ to $\cat{C}$. It's the identity
+natural transformation that maps the identity functor on $\cat{C}$ to itself.
Notice that the identity for horizontal composition is also the identity
for vertical composition, but not vice versa.
Finally, the two compositions satisfy the interchange law:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-(β' \ensuremath{\cdot} α') ◦ (β \ensuremath{\cdot} α) = (β' ◦ β) \ensuremath{\cdot} (α' ◦ α)
-\end{Verbatim}
-
+\[(\beta' \cdot \alpha') \circ (\beta \cdot \alpha) = (\beta' \circ \beta) \cdot (\alpha' \circ \alpha)\]
I will quote Saunders Mac Lane here: The reader may enjoy writing down
the evident diagrams needed to prove this fact.
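For readers who prefer a machine-checked diagram, the interchange law can be tested on concrete functors in Haskell. The snippet is self-contained (it repeats the ad-hoc \code{Nat} and \code{Compose} definitions so it stands alone); vertical composition is just component-wise function composition.

```haskell
{-# LANGUAGE RankNTypes #-}

type Nat f f' = forall a. f a -> f' a
newtype Compose g f a = Compose { getCompose :: g (f a) }

-- Vertical composition: components compose as ordinary functions
vert :: Nat f' f'' -> Nat f f' -> Nat f f''
vert beta alpha x = beta (alpha x)

-- Horizontal composition (one of the two equal formulas)
horiz :: Functor g => Nat g g' -> Nat f f' -> Nat (Compose g f) (Compose g' f')
horiz beta alpha (Compose gfa) = Compose (beta (fmap alpha gfa))
```

With, say, $\alpha = $ \code{maybeToList}, $\alpha' = $ \code{reverse}, $\beta = $ \code{listToMaybe}, $\beta' = $ \code{maybeToList}, both sides of $(\beta' \cdot \alpha') \circ (\beta \cdot \alpha) = (\beta' \circ \beta) \cdot (\alpha' \circ \alpha)$ compute to the same value on any input.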
There is one more piece of notation that might come in handy in the
-future. In this new sideways interpretation of \textbf{Cat} there are
+future. In this new sideways interpretation of $\Cat$ there are
two ways of getting from object to object: using a functor or using a
natural transformation. We can, however, re-interpret the functor arrow
as a special kind of natural transformation: the identity natural
transformation acting on this functor. So you'll often see this
notation:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-F ◦ α
-\end{Verbatim}
-where F is a functor from D to E, and α is a natural transformation
-between two functors going from C to D. Since you can't compose a
+\[F \circ \alpha\]
+where $F$ is a functor from $\cat{D}$ to $\cat{E}$, and $\alpha$ is a natural transformation
+between two functors going from $\cat{C}$ to $\cat{D}$. Since you can't compose a
functor with a natural transformation, this is interpreted as a
horizontal composition of the identity natural transformation
-1\textsubscript{F} after α.
+$1_F$ after $\alpha$.
Similarly:
+\[\alpha \circ F\]
+is a horizontal composition of $\alpha$ after $1_F$.
-\begin{Verbatim}[commandchars=\\\{\}]
-α ◦ F
-\end{Verbatim}
-is a horizontal composition of α after 1\textsubscript{F}.
-
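These two compositions with an identity natural transformation are often called \emph{whiskering}. Here is a hedged Haskell sketch (the function names are this sketch's own, and \code{Nat} and \code{Compose} are the same ad-hoc encodings as before):

```haskell
{-# LANGUAGE RankNTypes #-}

type Nat f f' = forall a. f a -> f' a
newtype Compose g f a = Compose { getCompose :: g (f a) }

-- F . alpha: horizontal composition of 1_F after alpha.
-- Its component at a is F alpha_a, i.e. an fmap.
whiskerLeft :: Functor h => Nat f f' -> Nat (Compose h f) (Compose h f')
whiskerLeft alpha (Compose x) = Compose (fmap alpha x)

-- alpha . F: horizontal composition of alpha after 1_F.
-- Its component at a is alpha_{F a}, i.e. alpha itself.
whiskerRight :: Nat g g' -> Nat (Compose g h) (Compose g' h)
whiskerRight alpha (Compose x) = Compose (alpha x)
```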
-\section{Conclusion}\label{conclusion}
+\section{Conclusion}\label{conclusion}
This concludes the first part of the book. We've learned the basic
vocabulary of category theory. You may think of objects and categories
diff --git a/src/content/1.2/Types and Functions.tex b/src/content/1.2/Types and Functions.tex
index f2abdc9b..1854cca8 100644
--- a/src/content/1.2/Types and Functions.tex
+++ b/src/content/1.2/Types and Functions.tex
@@ -9,10 +9,10 @@ \section{Who Needs Types?}\label{who-needs-types}
happily hitting random keys, producing programs, compiling, and running
them.
-\begin{figure}
-\centering
-\fbox{\includegraphics[width=60mm]{images/img_1329.jpg}}
-\end{figure}
+\begin{wrapfigure}[10]{R}{0pt}
+\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
+\includegraphics[width=40mm]{images/img_1329.jpg}}
+\end{wrapfigure}
With machine language, any combination of bytes produced by monkeys
would be accepted and run. But with higher level languages, we do
@@ -108,7 +108,7 @@ \section{What Are Types?}\label{what-are-types}
When we declare \code{x} to be an \code{Integer}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
x :: Integer
\end{Verbatim}
we are saying that it's an element of the set of integers.
@@ -120,11 +120,11 @@ \section{What Are Types?}\label{what-are-types}
sets tricky. There are problems with polymorphic functions that involve
circular definitions, and with the fact that you can't have a set of all
sets; but as I promised, I won't be a stickler for math. The great thing
-is that there is a category of sets, which is called \textbf{Set}, and
-we'll just work with it. In \textbf{Set}, objects are sets and morphisms
+is that there is a category of sets, which is called $\Set$, and
+we'll just work with it. In $\Set$, objects are sets and morphisms
(arrows) are functions.
-\textbf{Set} is a very special category, because we can actually peek
+$\Set$ is a very special category, because we can actually peek
inside its objects and get a lot of intuitions from doing that. For
instance, we know that an empty set has no elements. We know that there
are special one-element sets. We know that functions map elements of one
@@ -150,7 +150,7 @@ \section{What Are Types?}\label{what-are-types}
Unicode \ensuremath{\bot}. This ``value'' corresponds to a non-terminating computation.
So a function declared as:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: Bool -> Bool
\end{Verbatim}
may return \code{True}, \code{False}, or \code{\_|\_};
@@ -161,7 +161,7 @@ \section{What Are Types?}\label{what-are-types}
functions to return the bottom explicitly. The latter is usually done
using the expression \code{undefined}, as in:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: Bool -> Bool
f x = undefined
\end{Verbatim}
@@ -169,7 +169,7 @@ \section{What Are Types?}\label{what-are-types}
bottom, which is a member of any type, including \code{Bool}. You can
even write:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: Bool -> Bool
f = undefined
\end{Verbatim}
@@ -180,12 +180,12 @@ \section{What Are Types?}\label{what-are-types}
functions, which return valid results for every possible argument.
Because of the bottom, you'll see the category of Haskell types and
-functions referred to as \textbf{Hask} rather than \textbf{Set}. From
+functions referred to as \textbf{Hask} rather than $\Set$. From
the theoretical point of view, this is the source of never-ending
complications, so at this point I will use my butcher's knife and
terminate this line of reasoning. From the pragmatic point of view, it's
okay to ignore non-terminating functions and bottoms, and treat
-\textbf{Hask} as bona fide \textbf{Set}.\footnote{Nils Anders Danielsson,
+\textbf{Hask} as bona fide $\Set$.\footnote{Nils Anders Danielsson,
John Hughes, Patrik Jansson, Jeremy Gibbons, \href{http://www.cs.ox.ac.uk/jeremy.gibbons/publications/fast+loose.pdf}{
Fast and Loose Reasoning is Morally Correct}. This paper provides justification for ignoring bottoms in most contexts.}
@@ -239,7 +239,7 @@ \section{Why Do We Need a Mathematical
Consider the definition of a factorial function in Haskell, which is a
language quite amenable to denotational semantics:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fact n = product [1..n]
\end{Verbatim}
The expression \code{{[}1..n{]}} is a list of integers from \code{1} to \code{n}.
@@ -325,7 +325,7 @@ \section{Examples of Types}\label{examples-of-types}
it's a function that's polymorphic in the return type. Haskellers have a
name for it:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
absurd :: Void -> a
\end{Verbatim}
(Remember, \code{a} is a type variable that can stand for any type.)
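A sketch of how \code{absurd} can be defined: since an uninhabited type has no constructors, a case expression over it needs no branches. The standard library provides an equivalent definition in \code{Data.Void}; the one below is written out only for illustration.

```haskell
{-# LANGUAGE EmptyCase #-}

-- A type with no data constructors, hence no values (ignoring bottom)
data Void

-- Nothing to handle: the case expression has no branches
absurd :: Void -> a
absurd v = case v of {}
```

You can never call \code{absurd} with an actual value, but it lets you discharge impossible branches, as in \code{either absurd id :: Either Void a -> a}.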
@@ -357,7 +357,7 @@ \section{Examples of Types}\label{examples-of-types}
is used for the type, the constructor, and the only value corresponding
to a singleton set. So here's this function in Haskell:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f44 :: () -> Integer
f44 () = 44
\end{Verbatim}
@@ -367,7 +367,7 @@ \section{Examples of Types}\label{examples-of-types}
namely \code{()}, and producing the number 44. You call this function
by providing the unit value \code{()}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f44 ()
\end{Verbatim}
Notice that every function of unit is equivalent to picking a single
@@ -375,8 +375,8 @@ \section{Examples of Types}\label{examples-of-types}
fact you could think of \code{f44} as a different representation for
the number 44. This is an example of how we can replace explicit mention
of elements of a set by talking about functions (arrows) instead.
-Functions from unit to any type A are in one-to-one correspondence with
-the elements of that set A.
+Functions from unit to any type $A$ are in one-to-one correspondence with
+the elements of that set $A$.
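This one-to-one correspondence can be exhibited directly in Haskell. The names \code{toElem} and \code{toFun} are this sketch's own, and \code{f44} from the text is repeated so the snippet stands alone:

```haskell
-- Every function from unit picks out a single element...
toElem :: (() -> a) -> a
toElem f = f ()

-- ...and every element determines such a function
toFun :: a -> (() -> a)
toFun x = \() -> x

-- The example from the text: f44 "is" the number 44
f44 :: () -> Integer
f44 () = 44
```

The two directions are mutually inverse: \code{toElem (toFun x)} is \code{x}, and \code{toFun (toElem f)} behaves like \code{f}.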
What about functions with the \code{void} return type, or, in Haskell,
with the unit return type? In C++ such functions are used for side
@@ -389,7 +389,7 @@ \section{Examples of Types}\label{examples-of-types}
there is exactly one such function. Here's this function for
\code{Integer}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fInt :: Integer -> ()
fInt x = ()
\end{Verbatim}
@@ -398,7 +398,7 @@ \section{Examples of Types}\label{examples-of-types}
for an argument that is discarded. This way you don't have to invent a
name for it. So the above can be rewritten as:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fInt :: Integer -> ()
fInt _ = ()
\end{Verbatim}
@@ -412,7 +412,7 @@ \section{Examples of Types}\label{examples-of-types}
concrete type. What should we call a polymorphic function from any type
to unit type? Of course we'll call it \code{unit}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
unit :: a -> ()
unit _ = ()
\end{Verbatim}
@@ -427,7 +427,7 @@ \section{Examples of Types}\label{examples-of-types}
is that in C++ \code{bool} is a built-in type, whereas in Haskell it
can be defined as follows:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Bool = True | False
\end{Verbatim}
(The way to read this definition is that \code{Bool} is either
diff --git a/src/content/1.3/Categories Great and Small.tex b/src/content/1.3/Categories Great and Small.tex
index f374fbb9..6f214239 100644
--- a/src/content/1.3/Categories Great and Small.tex
+++ b/src/content/1.3/Categories Great and Small.tex
@@ -40,28 +40,27 @@ \section{Orders}\label{orders}
is a relation between objects: the relation of being less than or equal.
Let's check if it indeed is a category. Do we have identity morphisms?
Every object is less than or equal to itself: check! Do we have
-composition? If \code{a <= b} and \code{b <= c} then \code{a
-<= c}: check! Is composition associative? Check! A set with a
+composition? If $a \leq b$ and $b \leq c$ then $a \leq c$: check! Is composition associative? Check! A set with a
relation like this is called a \newterm{preorder}, so a preorder is indeed
a category.
You can also have a stronger relation, that satisfies an additional
-condition that, if \code{a <= b} and \code{b <= a} then \code{a} must be
-the same as \code{b}. That's called a \newterm{partial order}.
+condition that, if $a \leq b$ and $b \leq a$ then $a$ must be
+the same as $b$. That's called a \newterm{partial order}.
Finally, you can impose the condition that any two objects are in a
relation with each other, one way or another; and that gives you a
\newterm{linear order} or \newterm{total order}.
Let's characterize these ordered sets as categories. A preorder is a
-category where there is at most one morphism going from any object a to
-any object b. Another name for such a category is ``thin.'' A preorder
+category where there is at most one morphism going from any object $a$ to
+any object $b$. Another name for such a category is ``thin.'' A preorder
is a thin category.
-A set of morphisms from object a to object b in a category C is called a
-\newterm{hom-set} and is written as \code{C(a, b)} (or, sometimes,
-\code{Hom\textsubscript{C}(a, b)}). So every hom-set in a preorder is either
-empty or a singleton. That includes the hom-set \code{C(a, a)}, the set of
+A set of morphisms from object $a$ to object $b$ in a category $\cat{C}$ is called a
+\newterm{hom-set} and is written as $\cat{C}(a, b)$ (or, sometimes,
+$\cat{Hom}_{\cat{C}}(a, b)$). So every hom-set in a preorder is either
+empty or a singleton. That includes the hom-set $\cat{C}(a, a)$, the set of
morphisms from a to a, which must be a singleton, containing only the
identity, in any preorder. You may, however, have cycles in a preorder.
Cycles are forbidden in a partial order.
@@ -86,24 +85,15 @@ \section{Monoid as Set}\label{monoid-as-set}
For instance, natural numbers with zero form a monoid under addition.
Associativity means that:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-(a + b) + c = a + (b + c)
-\end{Verbatim}
+\[(a + b) + c = a + (b + c)\]
(In other words, we can skip parentheses when adding numbers.)
The neutral element is zero, because:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-0 + a = a
-\end{Verbatim}
+\[0 + a = a\]
and
-
-\begin{Verbatim}[commandchars=\\\{\}]
-a + 0 = a
-\end{Verbatim}
-The second equation is redundant, because addition is commutative \code{(a + b
-= b + a)}, but commutativity is not part of the definition of a monoid.
+\[a + 0 = a\]
+The second equation is redundant, because addition is commutative $(a + b
+= b + a)$, but commutativity is not part of the definition of a monoid.
For instance, string concatenation is not commutative and yet it forms a
monoid. The neutral element for string concatenation, by the way, is an
empty string, which can be attached to either side of a string without
@@ -113,7 +103,7 @@ \section{Monoid as Set}\label{monoid-as-set}
there is a neutral element called \code{mempty} and a binary operation
called \code{mappend}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
class Monoid m where
mempty :: m
mappend :: m -> m -> m
@@ -143,7 +133,7 @@ \section{Monoid as Set}\label{monoid-as-set}
\code{mappend} (this is, in fact, done for you in the standard
Prelude):
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
instance Monoid String where
mempty = ""
mappend = (++)
@@ -155,12 +145,12 @@ \section{Monoid as Set}\label{monoid-as-set}
two-argument function by surrounding it with parentheses. Given two
strings, you can concatenate them by inserting \code{++} between them:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
"Hello " ++ "world!"
\end{Verbatim}
or by passing them as two arguments to the parenthesized \code{(++)}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
(++) "Hello " "world!"
\end{Verbatim}
Notice that arguments to a function are not separated by commas or
@@ -170,23 +160,23 @@ \section{Monoid as Set}\label{monoid-as-set}
It's worth emphasizing that Haskell lets you express equality of
functions, as in:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
mappend = (++)
\end{Verbatim}
Conceptually, this is different than expressing the equality of values
produced by functions, as in:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
mappend s1 s2 = (++) s1 s2
\end{Verbatim}
The former translates into equality of morphisms in the category
-\textbf{Hask} (or \textbf{Set}, if we ignore bottoms, which is the name
+\textbf{Hask} (or $\Set$, if we ignore bottoms, which is the name
for never-ending calculations). Such equations are not only more
succinct, but can often be generalized to other categories. The latter
is called \newterm{extensional} equality, and states the fact that for any
two input strings, the outputs of \code{mappend} and \code{(++)} are
the same. Since the values of arguments are sometimes called
-\newterm{points} (as in: the value of f at point x), this is called
+\newterm{points} (as in: the value of $f$ at point $x$), this is called
point-wise equality. Function equality without specifying the arguments
is described as \newterm{point-free}. (Incidentally, point-free equations
often involve composition of functions, which is symbolized by a point,
@@ -243,8 +233,8 @@ \section{Monoid as Category}\label{monoid-as-category}
For instance, there is the operation of adding 5 to every natural
number. It maps 0 to 5, 1 to 6, 2 to 7, and so on. That's a function
defined on the set of natural numbers. That's good: we have a function
-and a set. In general, for any number n there is a function of adding n
---- the ``adder'' of n.
+and a set. In general, for any number $n$ there is a function of adding $n$
+--- the ``adder'' of $n$.
How do adders compose? The composition of the function that adds 5 with
the function that adds 7 is a function that adds 12. So the composition
@@ -267,21 +257,20 @@ \section{Monoid as Category}\label{monoid-as-category}
tells us that \code{mappend} maps an element of a monoid set to a
function acting on that set.
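The adders and their composition are easy to write down; this little sketch (the name \code{adder} is ad hoc) makes the claim about composition concrete:

```haskell
-- The "adder" of n: a function acting on the set of numbers
adder :: Integer -> (Integer -> Integer)
adder n = (+ n)

-- Composition of adders corresponds to addition of the numbers:
-- adder 5 . adder 7 behaves like adder 12, and adder 0 is the identity
```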
+\begin{wrapfigure}[11]{R}{0pt}
+\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
+\includegraphics[width=40mm]{images/monoid.jpg}}
+\end{wrapfigure}
+
Now I want you to forget that you are dealing with the set of natural
numbers and just think of it as a single object, a blob with a bunch of
morphisms --- the adders. A monoid is a single object category. In fact
-the name monoid comes from Greek \newterm{mono}, which means single. Every
+the name monoid comes from Greek \emph{mono}, which means single. Every
monoid can be described as a single object category with a set of
morphisms that follow appropriate rules of composition.
-
-\begin{figure}
-\centering
-\fbox{\includegraphics[width=2.45833in]{images/monoid.jpg}}
-\end{figure}
-
String concatenation is an interesting case, because we have a choice of
-defining right appenders and left appenders (or \newterm{prependers}, if
+defining right appenders and left appenders (or \emph{prependers}, if
you will). The composition tables of the two models are a mirror reverse
of each other. You can easily convince yourself that appending ``bar''
after ``foo'' corresponds to prepending ``foo'' after prepending
@@ -291,13 +280,13 @@ \section{Monoid as Category}\label{monoid-as-category}
one-object category --- defines a unique set-with-binary-operator
monoid. It turns out that we can always extract a set from a
single-object category. This set is the set of morphisms --- the adders
-in our example. In other words, we have the hom-set \code{M(m, m)} of the
-single object \code{m} in the category \code{M}. We can easily define a binary
+in our example. In other words, we have the hom-set $\cat{M}(m, m)$ of the
+single object $m$ in the category $\cat{M}$. We can easily define a binary
operator in this set: The monoidal product of two set-elements is the
element corresponding to the composition of the corresponding morphisms.
-If you give me two elements of \code{M(m, m)} corresponding to \code{f} and
-\code{g}, their product will correspond to the composition
-\code{g◦f}. The composition always exists, because the source and the
+If you give me two elements of $\cat{M}(m, m)$ corresponding to $f$ and
+$g$, their product will correspond to the composition
+$f \circ g$. The composition always exists, because the source and the
target for these morphisms are the same object. And it's associative by
the rules of category. The identity morphism is the neutral element of
this product. So we can always recover a set monoid from a category
@@ -305,7 +294,7 @@ \section{Monoid as Category}\label{monoid-as-category}
\begin{figure}
\centering
-\fbox{\includegraphics[width=60mm]{images/monoidhomset.jpg}}
+\includegraphics[width=60mm]{images/monoidhomset.jpg}
\caption{Monoid hom-set seen as morphisms and as points in a set.}
\end{figure}
@@ -319,7 +308,7 @@ \section{Monoid as Category}\label{monoid-as-category}
fact that elements of a hom-set can be seen both as morphisms, which
follow the rules of composition, and as points in a set. Here,
composition of morphisms in M translates into monoidal product in the
-set \code{M(m, m)}.
+set $\cat{M}(m, m)$.
\section{Challenges}\label{challenges}
@@ -347,17 +336,17 @@ \section{Challenges}\label{challenges}
\begin{enumerate}
\tightlist
\item
- A set of sets with the inclusion relation: A is included in B if
- every element of A is also an element of B.
+ A set of sets with the inclusion relation: $A$ is included in $B$ if
+ every element of $A$ is also an element of $B$.
\item
- C++ types with the following subtyping relation: T1 is a subtype of
- T2 if a pointer to T1 can be passed to a function that expects a
- pointer to T2 without triggering a compilation error.
+ C++ types with the following subtyping relation: \code{T1} is a subtype of
+ \code{T2} if a pointer to \code{T1} can be passed to a function that expects a
+ pointer to \code{T2} without triggering a compilation error.
\end{enumerate}
\item
Considering that Bool is a set of two values True and False, show that
it forms two (set-theoretical) monoids with respect to, respectively,
- operator \code{\&\&} (AND) and \code{\textbar{}\textbar{}} (OR).
+ operator \code{\&\&} (AND) and \code{||} (OR).
\item
Represent the Bool monoid with the AND operator as a category: List
the morphisms and their rules of composition.
diff --git a/src/content/1.4/Kleisli Categories.tex b/src/content/1.4/Kleisli Categories.tex
index a2ea72e2..258599ca 100644
--- a/src/content/1.4/Kleisli Categories.tex
+++ b/src/content/1.4/Kleisli Categories.tex
@@ -36,12 +36,12 @@
you'd have to memoize all possible histories that can lead to a given
call. There would be a separate memo entry for:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
negate(true, "It was the best of times. ");
\end{Verbatim}
and
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
negate(true, "It was the worst of times. ");
\end{Verbatim}
and so on.
@@ -106,7 +106,7 @@
\begin{wrapfigure}[11]{R}{0pt}
\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
-\fbox{\includegraphics[width=1.83333in]{images/piggyback.jpg}}}%
+\includegraphics[width=1.83333in]{images/piggyback.jpg}}%
\end{wrapfigure}
We want to modify the functions \code{toUpper} and \code{toWords} so
@@ -306,7 +306,7 @@ \section{Writer in Haskell}\label{writer-in-haskell}
more help from the compiler. Let's start by defining the \code{Writer}
type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
type Writer a = (a, String)
\end{Verbatim}
Here I'm just defining a type alias, an equivalent of a \code{typedef}
@@ -318,7 +318,7 @@ \section{Writer in Haskell}\label{writer-in-haskell}
Our morphisms are functions from an arbitrary type to some
\code{Writer} type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
a -> Writer b
\end{Verbatim}
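Such embellished functions compose by feeding the first result to the second function and concatenating the two logs. Here is a minimal sketch of that composition, consistent with the \code{Writer} alias above (the chapter declares its own version of this operator; a generalized \code{>=>} also exists in \code{Control.Monad}):

```haskell
type Writer a = (a, String)

-- Compose two embellished functions, threading the log along
(>=>) :: (a -> Writer b) -> (b -> Writer c) -> (a -> Writer c)
m1 >=> m2 = \x ->
  let (y, s1) = m1 x
      (z, s2) = m2 y
  in (z, s1 ++ s2)
```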
We'll declare the composition as a funny infix operator, sometimes
@@ -366,19 +366,19 @@ \section{Writer in Haskell}\label{writer-in-haskell}
reasons that will become clear much later, I will call it
\code{return}.
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
return :: a -> Writer a
return x = (x, "")
\end{Verbatim}
For completeness, let's have the Haskell versions of the embellished
functions \code{upCase} and \code{toWords}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
upCase :: String -> Writer String
upCase s = (map toUpper s, "upCase ")
\end{Verbatim}
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
toWords :: String -> Writer [String]
toWords s = (words s, "toWords ")
\end{Verbatim}
@@ -402,8 +402,8 @@ \section{Kleisli Categories}\label{kleisli-categories}
based on a monad. We are not ready to discuss monads yet, but I wanted
to give you a taste of what they can do. For our limited purposes, a
Kleisli category has, as objects, the types of the underlying
-programming language. Morphisms from type A to type B are functions that
-go from A to a type derived from B using the particular embellishment.
+programming language. Morphisms from type $A$ to type $B$ are functions that
+go from $A$ to a type derived from $B$ using the particular embellishment.
Each Kleisli category defines its own way of composing such morphisms,
as well as the identity morphisms with respect to that composition.
(Later we'll see that the imprecise term ``embellishment'' corresponds
diff --git a/src/content/1.5/Products and Coproducts.tex b/src/content/1.5/Products and Coproducts.tex
index 898ebc79..c5d91318 100644
--- a/src/content/1.5/Products and Coproducts.tex
+++ b/src/content/1.5/Products and Coproducts.tex
@@ -31,8 +31,8 @@ \section{Initial Object}\label{initial-object}
possible that there is an overall net flow of arrows from one end of the
category to another. This is true in ordered categories, for instance in
partial orders. We could generalize that notion of object precedence by
-saying that object \emph{a} is ``more initial'' than object \emph{b} if
-there is an arrow (a morphism) going from \emph{a} to \emph{b}. We would
+saying that object $a$ is ``more initial'' than object $b$ if
+there is an arrow (a morphism) going from $a$ to $b$. We would
then define \emph{the} initial object as one that has arrows going to
all other objects. Obviously there is no guarantee that such an object
exists, and that's okay. A bigger problem is that there may be too many
@@ -55,7 +55,7 @@ \section{Initial Object}\label{initial-object}
\noindent
However, even that doesn't guarantee the uniqueness of the initial
object (if one exists). But it guarantees the next best thing:
-uniqueness \newterm{up to isomorphism}. Isomorphisms are very important in
+uniqueness \emph{up to isomorphism}. Isomorphisms are very important in
category theory, so I'll talk about them shortly. For now, let's just
agree that uniqueness up to isomorphism justifies the use of ``the'' in
the definition of the initial object.
@@ -71,7 +71,7 @@ \section{Initial Object}\label{initial-object}
polymorphic function from \code{Void} to any other type is called
\code{absurd}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
absurd :: Void -> a
\end{Verbatim}
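If you want to play with this in GHC, \code{Void} and \code{absurd} are available from the \code{Data.Void} module in \code{base}. Here's a small sketch (the \code{neverLeft} helper is my own invention, not something from the book or the library) of how \code{absurd} discharges a case that can never occur:

```haskell
import Data.Void (Void, absurd)

-- A value of type 'Either Void a' can only ever be a 'Right',
-- because there is no value of type 'Void'. 'absurd' lets us
-- discharge the impossible 'Left' branch without inventing a value.
neverLeft :: Either Void a -> a
neverLeft (Right x) = x
neverLeft (Left v)  = absurd v

main :: IO ()
main = print (neverLeft (Right 42 :: Either Void Int))  -- prints 42
```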
It's this family of morphisms that makes \code{Void} the initial
@@ -80,9 +80,9 @@ \section{Initial Object}\label{initial-object}
\section{Terminal Object}\label{terminal-object}
Let's continue with the single-object pattern, but let's change the way
-we rank the objects. We'll say that object \emph{a} is ``more terminal''
-than object \emph{b} if there is a morphism going from \emph{b} to
-\emph{a} (notice the reversal of direction). We'll be looking for an
+we rank the objects. We'll say that object $a$ is ``more terminal''
+than object $b$ if there is a morphism going from $b$ to
+$a$ (notice the reversal of direction). We'll be looking for an
object that's more terminal than any other object in the category.
Again, we will insist on uniqueness:
@@ -107,7 +107,7 @@ \section{Terminal Object}\label{terminal-object}
\code{()}. We've also established that there is one and only one pure
function from any type to the unit type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
unit :: a -> ()
unit _ = ()
\end{Verbatim}
@@ -118,14 +118,14 @@ \section{Terminal Object}\label{terminal-object}
that have incoming morphisms from every set. For instance, there is a
Boolean-valued function (a predicate) defined for every type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
yes :: a -> Bool
yes _ = True
\end{Verbatim}
But \code{Bool} is not a terminal object. There is at least one more
\code{Bool}-valued function from every type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
no :: a -> Bool
no _ = False
\end{Verbatim}
@@ -137,15 +137,14 @@ \section{Duality}\label{duality}
You can't help but notice the symmetry between the way we defined the
initial object and the terminal object. The only difference between the
two was the direction of morphisms. It turns out that for any category $\cat{C}$
-we can define the \newterm{opposite category} C\textsuperscript{op} just by
+we can define the \newterm{opposite category} $\cat{C}^{op}$ just by
reversing all the arrows. The opposite category automatically satisfies
all the requirements of a category, as long as we simultaneously
redefine composition. If original morphisms
-\code{f::a->b} and \code{g::b->c} composed
-to \code{h::a->c} with \code{h=g◦f}, then the reversed
-morphisms \code{f\textsuperscript{op}::b->a} and
-\code{g\textsuperscript{op}::c->b} will compose to
-\code{h\textsuperscript{op}::c->a} with \code{h\textsuperscript{op}=f\textsuperscript{op}◦g\textsuperscript{op}}. And reversing
+$f\Colon{}a\to b$ and $g\Colon{}b\to c$ composed
+to $h\Colon{}a\to c$ with $h=g \circ f$, then the reversed
+morphisms $f^{op}\Colon{}b\to a$ and $g^{op}\Colon{}c\to b$ will compose to
+$h^{op}\Colon{}c\to a$ with $h^{op}=f^{op} \circ g^{op}$. And reversing
the identity arrows is a (pun alert!) no-op.
Duality is a very important property of categories because it doubles
@@ -178,30 +177,30 @@ \section{Isomorphisms}\label{isomorphisms}
same shape. It means that every part of one object corresponds to some
part of another object in a one-to-one mapping. As far as our
instruments can tell, the two objects are a perfect copy of each other.
-Mathematically it means that there is a mapping from object \emph{a} to
-object \emph{b}, and there is a mapping from object \emph{b} back to
-object \emph{a}, and they are the inverse of each other. In category
+Mathematically it means that there is a mapping from object $a$ to
+object $b$, and there is a mapping from object $b$ back to
+object $a$, and they are the inverse of each other. In category
theory we replace mappings with morphisms. An isomorphism is an
invertible morphism; or a pair of morphisms, one being the inverse of
the other.
We understand the inverse in terms of composition and identity: Morphism
-\emph{g} is the inverse of morphism \emph{f} if their composition is the
+$g$ is the inverse of morphism $f$ if their composition is the
identity morphism. These are actually two equations because there are
two ways of composing two morphisms:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f . g = id
g . f = id
\end{Verbatim}
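To make the two equations concrete, here's a tiny example of an isomorphism in Haskell. The type \code{Switch} is made up for illustration; it's isomorphic to \code{Bool}, and both compositions reduce to the identity by case analysis:

```haskell
-- 'Switch' is a hypothetical two-element type, isomorphic to Bool.
data Switch = On | Off deriving (Eq, Show)

toSwitch :: Bool -> Switch
toSwitch True  = On
toSwitch False = Off

fromSwitch :: Switch -> Bool
fromSwitch On  = True
fromSwitch Off = False

-- fromSwitch . toSwitch = id  (on Bool)
-- toSwitch . fromSwitch = id  (on Switch)
main :: IO ()
main = print (all (\b -> fromSwitch (toSwitch b) == b) [True, False])
```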
When I said that the initial (terminal) object was unique up to
isomorphism, I meant that any two initial (terminal) objects are
isomorphic. That's actually easy to see. Let's suppose that we have two
-initial objects i\textsubscript{1} and i\textsubscript{2}. Since
-i\textsubscript{1} is initial, there is a unique morphism \emph{f} from
-i\textsubscript{1} to i\textsubscript{2}. By the same token, since
-i\textsubscript{2} is initial, there is a unique morphism \emph{g} from
-i\textsubscript{2} to i\textsubscript{1}. What's the composition of
+initial objects $i_{1}$ and $i_{2}$. Since
+$i_{1}$ is initial, there is a unique morphism $f$ from
+$i_{1}$ to $i_{2}$. By the same token, since
+$i_{2}$ is initial, there is a unique morphism $g$ from
+$i_{2}$ to $i_{1}$. What's the composition of
these two morphisms?
\begin{figure}[H]
@@ -211,21 +210,21 @@ \section{Isomorphisms}\label{isomorphisms}
\end{figure}
\noindent
-The composition \emph{g◦f} must be a morphism from i\textsubscript{1} to
-i\textsubscript{1}. But i\textsubscript{1} is initial so there can only
-be one morphism going from i\textsubscript{1} to i\textsubscript{1}.
+The composition $g \circ f$ must be a morphism from $i_{1}$ to
+$i_{1}$. But $i_{1}$ is initial so there can only
+be one morphism going from $i_{1}$ to $i_{1}$.
Since we are in a category, we know that there is an identity morphism
-from i\textsubscript{1} to i\textsubscript{1}, and since there is room
-for only one, that must be it. Therefore \emph{g◦f} is equal to
-identity. Similarly, \emph{f◦g} must be equal to identity, because there
-can be only one morphism from i\textsubscript{2} back to
-i\textsubscript{2}. This proves that \emph{f} and \emph{g} must be the
+from $i_{1}$ to $i_{1}$, and since there is room
+for only one, that must be it. Therefore $g \circ f$ is equal to
+identity. Similarly, $f \circ g$ must be equal to identity, because there
+can be only one morphism from $i_{2}$ back to
+$i_{2}$. This proves that $f$ and $g$ must be the
inverse of each other. Therefore any two initial objects are isomorphic.
Notice that in this proof we used the uniqueness of the morphism from
the initial object to itself. Without that we couldn't prove the ``up to
-isomorphism'' part. But why do we need the uniqueness of \emph{f} and
-\emph{g}? Because not only is the initial object unique up to
+isomorphism'' part. But why do we need the uniqueness of $f$ and
+$g$? Because not only is the initial object unique up to
isomorphism, it is unique up to \emph{unique} isomorphism. In principle,
there could be more than one isomorphism between two objects, but that's
not the case here. This ``uniqueness up to unique isomorphism'' is the
@@ -243,12 +242,12 @@ \section{Products}\label{products}
are called \code{fst} and \code{snd} and they pick, respectively,
the first and the second component of a pair:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fst :: (a, b) -> a
fst (x, y) = x
\end{Verbatim}
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
snd :: (a, b) -> b
snd (x, y) = y
\end{Verbatim}
@@ -259,7 +258,7 @@ \section{Products}\label{products}
These definitions can be simplified even further with the use of
wildcards:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fst (x, _) = x
snd (_, y) = y
\end{Verbatim}
@@ -273,12 +272,12 @@ \section{Products}\label{products}
\end{Verbatim}
Equipped with this seemingly very limited knowledge, let's try to define
a pattern of objects and morphisms in the category of sets that will
-lead us to the construction of a product of two sets, \emph{a} and
-\emph{b}. This pattern consists of an object \emph{c} and two morphisms
-\emph{p} and \emph{q} connecting it to \emph{a} and \emph{b},
+lead us to the construction of a product of two sets, $a$ and
+$b$. This pattern consists of an object $c$ and two morphisms
+$p$ and $q$ connecting it to $a$ and $b$,
respectively:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
p :: c -> a
q :: c -> b
\end{Verbatim}
@@ -289,7 +288,7 @@ \section{Products}\label{products}
\end{figure}
\noindent
-All \emph{c}s that fit this pattern will be considered candidates for
+All $c$s that fit this pattern will be considered candidates for
the product. There may be lots of them.
\begin{figure}[H]
@@ -306,7 +305,7 @@ \section{Products}\label{products}
the product of \code{Int} and \code{Bool}? Yes, it can --- and here
are its projections:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
p :: Int -> Int
p x = x
@@ -319,7 +318,7 @@ \section{Products}\label{products}
elements, or a triple. Here are two morphisms that make it a legitimate
candidate (we are using pattern matching on triples):
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
p :: (Int, Int, Bool) -> Int
p (x, _, _) = x
@@ -640,7 +639,7 @@ \section{Asymmetry}\label{asymmetry}
from the terminal end.
This is not an intrinsic property of sets, it's a property of functions,
-which we use as morphisms in \textbf{Set}. Functions are, in general,
+which we use as morphisms in $\Set$. Functions are, in general,
asymmetric. Let me explain.
A function must be defined for every element of its domain set (in
@@ -690,7 +689,7 @@ \section{Challenges}\label{challenges}
\begin{Verbatim}
int i(int n) { return n; }
-int j(bool b) { return b? 0: 1; }
+int j(bool b) { return b ? 0 : 1; }
\end{Verbatim}
Hint: Define a function
@@ -713,7 +712,7 @@ \section{Challenges}\label{challenges}
return n + 2;
}
-int j(bool b) { return b? 0: 1; }
+int j(bool b) { return b ? 0 : 1; }
\end{Verbatim}
\item
Come up with an inferior candidate for a coproduct of \code{int} and
diff --git a/src/content/1.6/Simple Algebraic Data Types.tex b/src/content/1.6/Simple Algebraic Data Types.tex
index bf7c89d0..49f78e7b 100644
--- a/src/content/1.6/Simple Algebraic Data Types.tex
+++ b/src/content/1.6/Simple Algebraic Data Types.tex
@@ -31,7 +31,7 @@ \section{Product Types}\label{product-types}
--- the isomorphism being given by the \code{swap} function (which is
its own inverse):
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)
\end{Verbatim}
@@ -45,25 +45,25 @@ \section{Product Types}\label{product-types}
in a product, \code{a}, \code{b}, and \code{c}, in this order, you
can do it in two ways:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
((a, b), c)
\end{Verbatim}
or
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
(a, (b, c))
\end{Verbatim}
These types are different --- you can't pass one to a function that
expects the other --- but their elements are in one-to-one
correspondence. There is a function that maps one to another:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
alpha :: ((a, b), c) -> (a, (b, c))
alpha ((x, y), z) = (x, (y, z))
\end{Verbatim}
and this function is invertible:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
alpha_inv :: (a, (b, c)) -> ((a, b), c)
alpha_inv (x, (y, z)) = ((x, y), z)
\end{Verbatim}
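A quick way to convince yourself that these two functions really are mutually inverse is to run the round trip on a sample value (a spot check, of course, not a proof --- the proof is by pattern matching on the shape of the tuple):

```haskell
alpha :: ((a, b), c) -> (a, (b, c))
alpha ((x, y), z) = (x, (y, z))

alpha_inv :: (a, (b, c)) -> ((a, b), c)
alpha_inv (x, (y, z)) = ((x, y), z)

-- Round trip: alpha_inv . alpha = id, checked on one sample value
main :: IO ()
main = print (alpha_inv (alpha ((1 :: Int, 'a'), True)) == ((1, 'a'), True))
```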
@@ -73,10 +73,7 @@ \section{Product Types}\label{product-types}
You can interpret the creation of a product type as a binary operation
on types. From that perspective, the above isomorphism looks very much
like the associativity law we've seen in monoids:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-(a * b) * c = a * (b * c)
-\end{Verbatim}
+\[(a * b) * c = a * (b * c)\]
Except that, in the monoid case, the two ways of composing products were
equal, whereas here they are only equal ``up to isomorphism.''
@@ -86,21 +83,21 @@ \section{Product Types}\label{product-types}
Indeed, the pairing of a value of some type \code{a} with a unit
doesn't add any information. The type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
(a, ())
\end{Verbatim}
is isomorphic to \code{a}. Here's the isomorphism:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
rho :: (a, ()) -> a
rho (x, ()) = x
\end{Verbatim}
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
rho_inv :: a -> (a, ())
rho_inv x = (x, ())
\end{Verbatim}
-These observations can be formalized by saying that \textbf{Set} (the
+These observations can be formalized by saying that $\Set$ (the
category of sets) is a \newterm{monoidal category}. It's a category that's
also a monoid, in the sense that you can multiply objects (here, take
their cartesian product). I'll talk more about monoidal categories, and
@@ -111,7 +108,7 @@ \section{Product Types}\label{product-types}
uses named constructors with multiple arguments. A pair, for instance,
can be defined alternatively as:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Pair a b = P a b
\end{Verbatim}
Here, \code{Pair a b} is the name of the type parameterized by two
@@ -122,7 +119,7 @@ \section{Product Types}\label{product-types}
instance, let's define a value \code{stmt} as a pair of
\code{String} and \code{Bool}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
stmt :: Pair String Bool
stmt = P "This statement is" False
\end{Verbatim}
@@ -137,7 +134,7 @@ \section{Product Types}\label{product-types}
Since the name spaces for type and data constructors are separate in
Haskell, you will often see the same name used for both, as in:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Pair a b = Pair a b
\end{Verbatim}
And if you squint hard enough, you may even view the built-in pair type
@@ -146,7 +143,7 @@ \section{Product Types}\label{product-types}
\code{(,)} just like any other named constructor and create pairs
using prefix notation:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
stmt = (,) "This statement is" False
\end{Verbatim}
Similarly, you can use \code{(,,)} to create triples, and so on.
@@ -154,7 +151,7 @@ \section{Product Types}\label{product-types}
Instead of using generic pairs or tuples, you can also define specific
named product types, as in:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Stmt = Stmt String Bool
\end{Verbatim}
which is just a product of \code{String} and \code{Bool}, but it's
@@ -178,7 +175,7 @@ \section{Records}\label{records}
function that checks if the symbol of the element is the prefix of its
name (as in \textbf{He} being the prefix of \textbf{Helium}):
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
startsWithSymbol :: (String, String, Int) -> Bool
startsWithSymbol (name, symbol, _) = isPrefixOf symbol name
\end{Verbatim}
@@ -200,7 +197,7 @@ \section{Records}\label{records}
, atomicNumber = a }
\end{Verbatim}
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
elemToTuple :: Element -> (String, String, Int)
elemToTuple e = (name e, symbol e, atomicNumber e)
\end{Verbatim}
@@ -209,13 +206,13 @@ \section{Records}\label{records}
\code{atomicNumber} field from \code{e}. We use
\code{atomicNumber} as a function of the type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
atomicNumber :: Element -> Int
\end{Verbatim}
With the record syntax for \code{Element}, our function
\code{startsWithSymbol} becomes more readable:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
startsWithSymbol :: Element -> Bool
startsWithSymbol e = isPrefixOf (symbol e) (name e)
\end{Verbatim}
@@ -223,7 +220,7 @@ \section{Records}\label{records}
\code{isPrefixOf} into an infix operator by surrounding it with
backquotes, and make it read almost like a sentence:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
startsWithSymbol e = symbol e `isPrefixOf` name e
\end{Verbatim}
The parentheses could be omitted in this case, because an infix operator
@@ -235,7 +232,7 @@ \section{Sum Types}\label{sum-types}
the coproduct gives rise to sum types. The canonical implementation of a
sum type in Haskell is:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Either a b = Left a | Right b
\end{Verbatim}
And like pairs, \code{Either}s are commutative (up to isomorphism),
@@ -247,7 +244,7 @@ \section{Sum Types}\label{sum-types}
\end{Verbatim}
and so on.
-It turns out that \textbf{Set} is also a (symmetric) monoidal category
+It turns out that $\Set$ is also a (symmetric) monoidal category
with respect to coproduct. The role of the binary operation is played by
the disjoint sum, and the role of the unit element is played by the
initial object. In terms of types, we have \code{Either} as the
@@ -256,15 +253,14 @@ \section{Sum Types}\label{sum-types}
\code{Void} as zero. Indeed, adding \code{Void} to a sum type
doesn't change its content. For instance:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
Either a Void
\end{Verbatim}
is isomorphic to \code{a}. That's because there is no way to construct
a \code{Right} version of this type --- there isn't a value of type
\code{Void}. The only inhabitants of \code{Either a Void} are
constructed using the \code{Left} constructor, and they simply
-encapsulate a value of type \code{a}. So, symbolically,
-\code{a + 0 = a}.
+encapsulate a value of type \code{a}. So, symbolically, $a + 0 = a$.
Sum types are pretty common in Haskell, but their C++ equivalents,
unions or variants, are much less common. There are several reasons for
@@ -274,7 +270,7 @@ \section{Sum Types}\label{sum-types}
implemented using \code{enum} in C++. The equivalent of the Haskell
sum type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Color = Red | Green | Blue
\end{Verbatim}
is the C++:
@@ -284,7 +280,7 @@ \section{Sum Types}\label{sum-types}
\end{Verbatim}
An even simpler sum type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Bool = True | False
\end{Verbatim}
is the primitive \code{bool} in C++.
@@ -295,34 +291,34 @@ \section{Sum Types}\label{sum-types}
kind of optionality, if deliberate, is expressed in Haskell using the
\code{Maybe} type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Maybe a = Nothing | Just a
\end{Verbatim}
The \code{Maybe} type is a sum of two types. You can see this if you
separate the two constructors into individual types. The first one would
look like this:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data NothingType = Nothing
\end{Verbatim}
It's an enumeration with one value called \code{Nothing}. In other
words, it's a singleton, which is equivalent to the unit type
\code{()}. The second part:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data JustType a = Just a
\end{Verbatim}
is just an encapsulation of the type \code{a}. We could have encoded
\code{Maybe} as:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
type Maybe a = Either () a
\end{Verbatim}
More complex sum types are often faked in C++ using pointers. A pointer
can be either null, or point to a value of specific type. For instance,
a Haskell list type, which can be defined as a (recursive) sum type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data List a = Nil | Cons a (List a)
\end{Verbatim}
can be translated to C++ using the null pointer trick to implement the
@@ -368,7 +364,7 @@ \section{Sum Types}\label{sum-types}
\code{Cons}-constructed list. For instance, here's the definition of a
simple function on \code{List}s:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
maybeTail :: List a -> Maybe (List a)
maybeTail Nil = Nothing
maybeTail (Cons _ t) = Just t
@@ -419,37 +415,34 @@ \section{Algebra of Types}\label{algebra-of-types}
with an integer, there is no value of type \code{Void}. Therefore, for
any type \code{a}, the type \code{(a, Void)} is uninhabited --- has
no values --- and is therefore equivalent to \code{Void}. In other
-words, \code{a*0 = 0}.
+words, $a \times 0 = 0$.
Another thing that links addition and multiplication is the distributive
property:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-a * (b + c) = a * b + a * c
-\end{Verbatim}
+\[a \times (b + c) = a \times b + a \times c\]
Does it also hold for product and sum types? Yes, it does --- up to
isomorphisms, as usual. The left hand side corresponds to the type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
(a, Either b c)
\end{Verbatim}
and the right hand side corresponds to the type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
Either (a, b) (a, c)
\end{Verbatim}
Here's the function that converts them one way:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
prodToSum :: (a, Either b c) -> Either (a, b) (a, c)
-prodToSum (x, e) =
+prodToSum (x, e) =
case e of
Left y -> Left (x, y)
Right z -> Right (x, z)
\end{Verbatim}
and here's one that goes the other way:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
sumToProd :: Either (a, b) (a, c) -> (a, Either b c)
sumToProd e =
case e of
@@ -461,7 +454,7 @@ \section{Algebra of Types}\label{algebra-of-types}
evaluated when the pattern matches. For instance, if you call
\code{prodToSum} with the value:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
prod1 :: (Int, Either String Float)
prod1 = (2, Left "Hi!")
\end{Verbatim}
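You can check the distributivity isomorphism on this very value. The clauses of \code{sumToProd} below are the evident inverses of \code{prodToSum}'s; the round trip gives back \code{prod1} unchanged:

```haskell
prodToSum :: (a, Either b c) -> Either (a, b) (a, c)
prodToSum (x, e) =
  case e of
    Left y  -> Left (x, y)
    Right z -> Right (x, z)

sumToProd :: Either (a, b) (a, c) -> (a, Either b c)
sumToProd e =
  case e of
    Left  (x, y) -> (x, Left y)
    Right (x, z) -> (x, Right z)

prod1 :: (Int, Either String Float)
prod1 = (2, Left "Hi!")

-- Round trip on the sample value from the text
main :: IO ()
main = print (sumToProd (prodToSum prod1) == prod1)
```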
@@ -490,16 +483,12 @@ \section{Algebra of Types}\label{algebra-of-types}
Numbers & Types\tabularnewline
\midrule
\endhead
-0 & \code{Void}\tabularnewline
-1 & \code{()}\tabularnewline
-a + b &
-\code{Either a b = Left a | Right b}\tabularnewline
-a * b & \code{(a, b)} or
-\code{Pair a b = Pair a b}\tabularnewline
-2 = 1 + 1 &
-\code{data Bool = True | False}\tabularnewline
-1 + a &
-\code{data Maybe = Nothing | Just a}\tabularnewline
+$0$ & \code{Void}\tabularnewline
+$1$ & \code{()}\tabularnewline
+$a + b$ & \code{Either a b = Left a | Right b}\tabularnewline
+$a \times b$ & \code{(a, b)} or \code{Pair a b = Pair a b}\tabularnewline
+$2 = 1 + 1$ & \code{data Bool = True | False}\tabularnewline
+$1 + a$ & \code{data Maybe a = Nothing | Just a}\tabularnewline
\bottomrule
\end{longtable}
@@ -508,13 +497,13 @@ \section{Algebra of Types}\label{algebra-of-types}
to an equation. The type we are defining appears on both sides of the
equation:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data List a = Nil | Cons a (List a)
\end{Verbatim}
If we do our usual substitutions, and also replace \code{List a} with
\code{x}, we get the equation:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
x = 1 + a * x
\end{Verbatim}
We can't solve it using traditional algebraic methods because we can't
@@ -557,11 +546,10 @@ \section{Algebra of Types}\label{algebra-of-types}
Logic & Types\tabularnewline
\midrule
\endhead
-false & \code{Void}\tabularnewline
-true & \code{()}\tabularnewline
-a \textbar{}\textbar{} b &
-\code{Either a b = Left a | Right b}\tabularnewline
-a \&\& b & \code{(a, b)}\tabularnewline
+$\mathit{false}$ & \code{Void}\tabularnewline
+$\mathit{true}$ & \code{()}\tabularnewline
+$a || b$ & \code{Either a b = Left a | Right b}\tabularnewline
+$a \&\& b$ & \code{(a, b)}\tabularnewline
\bottomrule
\end{longtable}
@@ -580,14 +568,14 @@ \section{Challenges}\label{challenges}
\item
Here's a sum type defined in Haskell:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Shape = Circle Float
| Rect Float Float
\end{Verbatim}
When we want to define a function like \code{area} that acts on a
\code{Shape}, we do it by pattern matching on the two constructors:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
area :: Shape -> Float
area (Circle r) = pi * r * r
area (Rect d h) = d * h
@@ -600,7 +588,7 @@ \section{Challenges}\label{challenges}
\code{circ} that calculates the circumference of a \code{Shape}.
We can do it without touching the definition of \code{Shape}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
circ :: Shape -> Float
circ (Circle r) = 2.0 * pi * r
circ (Rect d h) = 2.0 * (d + h)
@@ -613,7 +601,7 @@ \section{Challenges}\label{challenges}
have to touch in Haskell vs. C++ or Java? (Even if you're not a
Haskell programmer, the modifications should be pretty obvious.)
\item
- Show that \code{a + a = 2 * a} holds for types (up to
- isomorphism). Remember that \code{2} corresponds to \code{Bool},
+ Show that $a + a = 2 \times a$ holds for types (up to
+ isomorphism). Remember that $2$ corresponds to \code{Bool},
according to our translation table.
\end{enumerate}
\ No newline at end of file
diff --git a/src/content/1.7/Functors.tex b/src/content/1.7/Functors.tex
index 57a376bc..b4bd3c06 100644
--- a/src/content/1.7/Functors.tex
+++ b/src/content/1.7/Functors.tex
@@ -1,70 +1,56 @@
-\lettrine[lhang=0.17]{A}{t the risk} of sounding like a broken record, I will say this about
+\lettrine[lhang=0.17]{A}{t the risk of sounding} like a broken record, I will say this about
functors: A functor is a very simple but powerful idea. Category theory
is just full of those simple but powerful ideas. A functor is a mapping
-between categories. Given two categories, C and D, a functor F maps
-objects in C to objects in D --- it's a function on objects. If \emph{a}
-is an object in C, we'll write its image in D as \emph{F a} (no
+between categories. Given two categories, $\cat{C}$ and $\cat{D}$, a functor $F$ maps
+objects in $\cat{C}$ to objects in $\cat{D}$ --- it's a function on objects. If $a$
+is an object in $\cat{C}$, we'll write its image in $\cat{D}$ as $F a$ (no
parentheses). But a category is not just objects --- it's objects and
morphisms that connect them. A functor also maps morphisms --- it's a
function on morphisms. But it doesn't map morphisms willy-nilly --- it
-preserves connections. So if a morphism \emph{f} in C connects object
-\emph{a} to object \emph{b},
+preserves connections. So if a morphism $f$ in $\cat{C}$ connects object
+$a$ to object $b$,
+\[f \Colon a \to b\]
+the image of $f$ in $\cat{D}$, $F f$, will connect the image of
+$a$ to the image of $b$:
+\[F f \Colon F a \to F b\]
+
+\begin{wrapfigure}[8]{R}{0pt}
+\raisebox{0pt}[\dimexpr\height]{
+\includegraphics[width=42.5mm]{images/functor.jpg}}
+\end{wrapfigure}
-\begin{Verbatim}[commandchars=\\\{\}]
-f :: a -> b
-\end{Verbatim}
-the image of \emph{f} in D, \emph{F f}, will connect the image of
-\emph{a} to the image of \emph{b}:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-F f :: F a -> F b
-\end{Verbatim}
+\noindent
(This is a mixture of mathematical and Haskell notation that hopefully
makes sense by now. I won't use parentheses when applying functors to
objects or morphisms.)
-\begin{figure}[H]
-\centering
-\fbox{\includegraphics[width=3.12500in]{images/functor.jpg}}
-\end{figure}
-
\noindent
As you can see, a
functor preserves the structure of a category: what's connected in one
category will be connected in the other category. But there's something
more to the structure of a category: there's also the composition of
-morphisms. If \emph{h} is a composition of \emph{f} and \emph{g}:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-h = g . f
-\end{Verbatim}
-we want its image under F to be a composition of the images of \emph{f}
-and \emph{g}:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-F h = F g . F f
-\end{Verbatim}
+morphisms. If $h$ is a composition of $f$ and $g$:
+\[h = g~.~f\]
+we want its image under $F$ to be a composition of the images of $f$
+and $g$:
+\[F h = F g~.~F f\]
\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=2.87500in]{images/functorcompos.jpg}}
+\includegraphics[width=50mm]{images/functorcompos.jpg}
\end{figure}
\noindent
-Finally, we
-want all identity morphisms in C to be mapped to identity morphisms in
-D:
+Finally, we want all identity morphisms in $\cat{C}$ to be mapped to identity morphisms in
+$\cat{D}$:
+\[F \idarrow[a] = \idarrow[F a]\]
-\begin{Verbatim}[commandchars=\\\{\}]
-F id\textsubscript{a} = id\textsubscript{F a}
-\end{Verbatim}
-Here, \emph{id\textsubscript{a}} is the identity at the object \emph{a},
-and \emph{id\textsubscript{F a}} the identity at \emph{F a}.
-
-\begin{figure}
-\centering
-\fbox{\includegraphics[width=50mm]{images/functorid.jpg}}
-\end{figure}
+\begin{wrapfigure}[12]{R}{0pt}
+\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
+\includegraphics[width=40mm]{images/functorid.jpg}}
+\end{wrapfigure}
+Here, $\idarrow[a]$ is the identity at the object $a$,
+and $\idarrow[F a]$ the identity at $F a$.
\noindent
Note that these
@@ -85,10 +71,10 @@
other category simply selects an object in that category. This is fully
analogous to the property of morphisms from singleton sets selecting
elements in target sets. The maximally collapsing functor is called the
-constant functor \ensuremath{\Delta}\textsubscript{c}. It maps every object in the source
-category to one selected object \emph{c} in the target category. It also
+constant functor $\Delta_c$. It maps every object in the source
+category to one selected object $c$ in the target category. It also
maps every morphism in the source category to the identity morphism
-\emph{id\textsubscript{c}}. It acts like a black hole, compacting
+$\idarrow[c]$. It acts like a black hole, compacting
everything into one singularity. We'll see more of this functor when we
discuss limits and colimits.
@@ -107,7 +93,7 @@ \subsection{The Maybe Functor}\label{the-maybe-functor}
The definition of \code{Maybe} is a mapping from type \code{a} to
type \code{Maybe a}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Maybe a = Nothing | Just a
\end{Verbatim}
Here's an important subtlety: \code{Maybe} itself is not a type, it's
@@ -120,18 +106,18 @@ \subsection{The Maybe Functor}\label{the-maybe-functor}
but also a mapping of morphisms (here, functions). For any function from
\code{a} to \code{b}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: a -> b
\end{Verbatim}
we would like to produce a function from \code{Maybe a} to
-\code{Maybe\ b}. To define such a function, we'll have two cases to
+\code{Maybe b}. To define such a function, we'll have two cases to
consider, corresponding to the two constructors of \code{Maybe}. The
\code{Nothing} case is simple: we'll just return \code{Nothing}
back. And if the argument is \code{Just}, we'll apply the function
\code{f} to its contents. So the image of \code{f} under
\code{Maybe} is the function:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f' :: Maybe a -> Maybe b
f' Nothing = Nothing
f' (Just x) = Just (f x)
@@ -142,7 +128,7 @@ \subsection{The Maybe Functor}\label{the-maybe-functor}
\code{fmap}. In the case of \code{Maybe}, it has the following
signature:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap :: (a -> b) -> (Maybe a -> Maybe b)
\end{Verbatim}
@@ -153,20 +139,20 @@ \subsection{The Maybe Functor}\label{the-maybe-functor}
\noindent
We often say
-that \code{fmap} \newterm{lifts} a function. The lifted function acts on
+that \code{fmap} \emph{lifts} a function. The lifted function acts on
\code{Maybe} values. As usual, because of currying, this signature may
be interpreted in two ways: as a function of one argument --- which
itself is a function \code{(a->b)} --- returning a
function \code{(Maybe a -> Maybe b)}; or as a
function of two arguments returning \code{Maybe b}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap :: (a -> b) -> Maybe a -> Maybe b
\end{Verbatim}
Based on our previous discussion, this is how we implement \code{fmap}
for \code{Maybe}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap _ Nothing = Nothing
fmap f (Just x) = Just (f x)
\end{Verbatim}
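To see the lifting at work, here's a minimal runnable sketch. (I'm shadowing the Prelude's \code{fmap} here, since the real one comes from the \code{Functor} typeclass we'll meet shortly.)

```haskell
import Prelude hiding (fmap)

-- fmap for Maybe, exactly as derived in the text
fmap :: (a -> b) -> Maybe a -> Maybe b
fmap _ Nothing  = Nothing
fmap f (Just x) = Just (f x)

main :: IO ()
main = do
  print (fmap (+1) (Just 2))           -- Just 3
  print (fmap (+1) (Nothing :: Maybe Int))  -- Nothing
```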
@@ -187,7 +173,7 @@ \subsection{Equational Reasoning}\label{equational-reasoning}
expression into a function. Let's take the identity function as an
example:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
id x = x
\end{Verbatim}
If you see, for instance, \code{id y} in some expression, you can
@@ -202,7 +188,7 @@ \subsection{Equational Reasoning}\label{equational-reasoning}
around. Let's see how this works in practice, starting with the
preservation of identity:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap id = id
\end{Verbatim}
There are two cases to consider: \code{Nothing} and \code{Just}.
@@ -234,7 +220,7 @@ \subsection{Equational Reasoning}\label{equational-reasoning}
\end{Verbatim}
Now, let's show that \code{fmap} preserves composition:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap (g . f) = fmap g . fmap f
\end{Verbatim}
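Before working through the cases, both laws can be spot-checked on sample values (a sketch with my own helper names; a finite check is evidence, not a proof):

```haskell
-- Spot-check of the functor laws for Maybe on a few sample values.
lawId :: Maybe Int -> Bool
lawId x = fmap id x == id x

lawComp :: Maybe Int -> Bool
lawComp x = fmap ((+ 1) . (* 2)) x == (fmap (+ 1) . fmap (* 2)) x

lawsHold :: Bool
lawsHold = all lawId samples && all lawComp samples
  where samples = [Nothing, Just 0, Just 3]
```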
First the \code{Nothing} case:
@@ -349,7 +335,7 @@ \subsection{Optional}\label{optional}
\code{f} takes an \code{int} to a \code{bool}. How will the
compiler figure out the type of \code{g}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
auto g = fmap(f);
\end{Verbatim}
especially if, in the future, there are multiple functors overloading
@@ -362,7 +348,7 @@ \subsection{Typeclasses}\label{typeclasses}
a common interface. For instance, the class of objects that support
equality is defined as follows:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
class Eq a where
(==) :: a -> a -> Bool
\end{Verbatim}
@@ -374,12 +360,12 @@ \subsection{Typeclasses}\label{typeclasses}
\code{(==)}. For example, given the definition of a 2D \code{Point}
(a product type of two \code{Float}s):
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Point = Pt Float Float
\end{Verbatim}
you can define the equality of points:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
instance Eq Point where
(Pt x y) == (Pt x' y') = x == x' && y == y'
\end{Verbatim}
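With this instance in scope, point comparisons work as expected; a small self-contained sketch (the type and instance are restated from above, the example values are my own):

```haskell
data Point = Pt Float Float

instance Eq Point where
  (Pt x y) == (Pt x' y') = x == x' && y == y'

samePoint, differentPoint :: Bool
samePoint = Pt 1.0 2.0 == Pt 1.0 2.0       -- True
differentPoint = Pt 1.0 2.0 == Pt 3.0 2.0  -- False
```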
@@ -399,7 +385,7 @@ \subsection{Typeclasses}\label{typeclasses}
Fortunately a Haskell typeclass works with type constructors as well as
with types. So here's the definition of the \code{Functor} class:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
class Functor f where
fmap :: (a -> b) -> f a -> f b
\end{Verbatim}
@@ -408,11 +394,11 @@ \subsection{Typeclasses}\label{typeclasses}
\code{f} is a type variable, similar to type variables \code{a} and
\code{b}. The compiler, however, is able to deduce that it represents
a type constructor rather than a type by looking at its usage: acting on
-other types, as in \code{f\ a} and \code{f\ b}. Accordingly, when
+other types, as in \code{f a} and \code{f b}. Accordingly, when
declaring an instance of \code{Functor}, you have to give it a type
constructor, as is the case with \code{Maybe}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
instance Functor Maybe where
fmap _ Nothing = Nothing
fmap f (Just x) = Just (f x)
@@ -464,7 +450,7 @@ \subsection{The List Functor}\label{the-list-functor}
the type of the elements they store, so let's look at a very simple
container, the list:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data List a = Nil | Cons a (List a)
\end{Verbatim}
We have the type constructor \code{List}, which is a mapping from any
@@ -473,7 +459,7 @@ \subsection{The List Functor}\label{the-list-functor}
function \code{a->b} define a function\\
\code{List a -> List b}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap :: (a -> b) -> (List a -> List b)
\end{Verbatim}
A function acting on \code{List a} must consider two cases
@@ -491,7 +477,7 @@ \subsection{The List Functor}\label{the-list-functor}
definition, because we are defining lifted \code{f} in terms of lifted
\code{f}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap f (Cons x t) = Cons (f x) (fmap f t)
\end{Verbatim}
Notice that, on the right hand side, \code{fmap f} is applied to a
@@ -504,7 +490,7 @@ \subsection{The List Functor}\label{the-list-functor}
\code{(fmap f t)} using the \code{Cons} constructor. Putting it
all together, here's the instance declaration for the list functor:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
instance Functor List where
fmap _ Nil = Nil
fmap f (Cons x t) = Cons (f x) (fmap f t)
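\end{Verbatim}
Restating the type and instance in one self-contained snippet (with \code{Eq} and \code{Show} derived here only for testing; the example value is my own):

```haskell
data List a = Nil | Cons a (List a) deriving (Eq, Show)

instance Functor List where
  fmap _ Nil = Nil
  fmap f (Cons x t) = Cons (f x) (fmap f t)

tens :: List Int
tens = fmap (* 10) (Cons 1 (Cons 2 Nil))  -- Cons 10 (Cons 20 Nil)
```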
@@ -557,14 +543,14 @@ \subsection{The Reader Functor}\label{the-reader-functor}
\code{a->b}, but it can equally well be used in prefix
form, when parenthesized:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
(->) a b
\end{Verbatim}
Just like with regular functions, type functions of more than one
argument can be partially applied. So when we provide just one type
argument to the arrow, it still expects another one. That's why:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
(->) a
\end{Verbatim}
is a type constructor. It needs one more type \code{b} to produce a
@@ -583,7 +569,7 @@ \subsection{The Reader Functor}\label{the-reader-functor}
\code{b}. Here's the type signature of \code{fmap} applied to this
case:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap :: (a -> b) -> (r -> a) -> (r -> b)
\end{Verbatim}
We have to solve the following puzzle: given a function
@@ -593,7 +579,7 @@ \subsection{The Reader Functor}\label{the-reader-functor}
functions, and the result is exactly what we need. So here's the
implementation of our \code{fmap}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
instance Functor ((->) r) where
fmap f g = f . g
\end{Verbatim}
@@ -601,13 +587,13 @@ \subsection{The Reader Functor}\label{the-reader-functor}
reduced further by noticing that composition can be rewritten in prefix
form:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap f g = (.) f g
\end{Verbatim}
and the arguments can be omitted to yield a direct equality of two
functions:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap = (.)
\end{Verbatim}
This combination of the type constructor \code{(->) r}
@@ -626,8 +612,9 @@ \section{Functors as Containers}\label{functors-as-containers}
be implemented as a function. Consider, for instance, an infinite list
of natural numbers, which can be compactly defined as:
-\begin{Verbatim}[commandchars=\\\{\}]
-nats :: [Integer] nats = [1..]
+\begin{Verbatim}
+nats :: [Integer]
+nats = [1..]
\end{Verbatim}
In the first line, a pair of square brackets is Haskell's built-in
type constructor for lists. In the second line, square brackets are used
@@ -666,7 +653,7 @@ \section{Functors as Containers}\label{functors-as-containers}
object, here's a type constructor that completely ignores its argument
\code{a}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Const c a = Const c
\end{Verbatim}
The \code{Const} type constructor takes two types, \code{c} and
@@ -676,14 +663,14 @@ \section{Functors as Containers}\label{functors-as-containers}
no dependence on \code{a}. The type of \code{fmap} for this type
constructor is:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap :: (a -> b) -> Const c a -> Const c b
\end{Verbatim}
Because the functor ignores its type argument, the implementation of
\code{fmap} is free to ignore its function argument --- the function
has nothing to act upon:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
instance Functor (Const c) where
fmap _ (Const v) = Const v
\end{Verbatim}
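A quick check that the payload really survives any \code{fmap} (a self-contained sketch; \code{Eq} and \code{Show} are derived here only for testing):

```haskell
data Const c a = Const c deriving (Eq, Show)

instance Functor (Const c) where
  fmap _ (Const v) = Const v

kept :: Const String Int
kept = fmap (+ 1) (Const "payload")  -- Const "payload"
```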
@@ -710,7 +697,7 @@ \section{Functors as Containers}\label{functors-as-containers}
\end{Verbatim}
Despite its weirdness, the \code{Const} functor plays an important
role in many constructions. In category theory, it's a special case of
-the \ensuremath{\Delta}\textsubscript{c} functor I mentioned earlier --- the endo-functor
+the $\Delta_c$ functor I mentioned earlier --- the endo-functor
case of a black hole. We'll be seeing more of it in the future.
\section{Functor Composition}\label{functor-composition}
@@ -726,7 +713,7 @@ \section{Functor Composition}\label{functor-composition}
\code{maybeTail}? I'll rewrite it using Haskell's built-in
implementation of lists:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
maybeTail :: [a] -> Maybe [a]
maybeTail [] = Nothing
maybeTail (x:xs) = Just xs
@@ -746,7 +733,7 @@ \section{Functor Composition}\label{functor-composition}
let's see how we can square the elements of a \code{Maybe} list of
integers:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
square x = x * x
mis :: Maybe [Int]
@@ -760,29 +747,29 @@ \section{Functor Composition}\label{functor-composition}
implementation. It may not be immediately obvious that the above code
can be rewritten as:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
mis2 = (fmap . fmap) square mis
\end{Verbatim}
But remember that \code{fmap} may be considered a function of just one
argument:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap :: (a -> b) -> (f a -> f b)
\end{Verbatim}
In our case, the second \code{fmap} in \code{(fmap . fmap)} takes
as its argument:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
square :: Int -> Int
\end{Verbatim}
and returns a function of the type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
[Int] -> [Int]
\end{Verbatim}
The first \code{fmap} then takes that function and returns a function:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
Maybe [Int] -> Maybe [Int]
\end{Verbatim}
Finally, that function is applied to \code{mis}. So the composition of
@@ -798,7 +785,7 @@ \section{Functor Composition}\label{functor-composition}
category of \emph{all} categories would have to include itself, and we
would get into the same kinds of paradoxes that made the set of all sets
impossible. There is, however, a category of all \emph{small} categories
-called \textbf{Cat} (which is big, so it can't be a member of itself). A
+called $\Cat$ (which is big, so it can't be a member of itself). A
small category is one in which objects form a set, as opposed to
something larger than a set. Mind you, in category theory, even an
infinite uncountable set is considered ``small.'' I thought I'd mention
@@ -809,12 +796,12 @@ \section{Functor Composition}\label{functor-composition}
\section{Challenges}\label{challenges}
\begin{enumerate}
- \tightlist
+\tightlist
\item
Can we turn the \code{Maybe} type constructor into a functor by
defining:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap _ _ = Nothing
\end{Verbatim}
@@ -827,5 +814,5 @@ \section{Challenges}\label{challenges}
\item
Prove the functor laws for the list functor. Assume that the laws are
true for the tail part of the list you're applying it to (in other
- words, use \newterm{induction}).
+ words, use \emph{induction}).
\end{enumerate}
\ No newline at end of file
diff --git a/src/content/1.8/Functoriality.tex b/src/content/1.8/Functoriality.tex
index b7e1c077..0f3b16a1 100644
--- a/src/content/1.8/Functoriality.tex
+++ b/src/content/1.8/Functoriality.tex
@@ -1,4 +1,4 @@
-\lettrine[lhang=0.17]{N}{ow that you} know what a functor is, and have seen a few examples, let's
+\lettrine[lhang=0.17]{N}{ow that you know} what a functor is, and have seen a few examples, let's
see how we can build larger functors from smaller ones. In particular
it's interesting to see which type constructors (which correspond to
mappings between objects in a category) can be extended to functors
@@ -6,36 +6,33 @@
\section{Bifunctors}\label{bifunctors}
-Since functors are morphisms in \newterm{Cat} (the category of categories),
+Since functors are morphisms in $\Cat$ (the category of categories),
a lot of intuitions about morphisms --- and functions in particular ---
apply to functors as well. For instance, just like you can have a
function of two arguments, you can have a functor of two arguments, or a
\newterm{bifunctor}. On objects, a bifunctor maps every pair of objects,
-one from category C, and one from category D, to an object in category
-E. Notice that this is just saying that it's a mapping from a
-\newterm{cartesian product} of categories C×D to E.
+one from category $\cat{C}$, and one from category $\cat{D}$, to an object in category
+$\cat{E}$. Notice that this is just saying that it's a mapping from a
+\newterm{cartesian product} of categories $\cat{C} \stimes \cat{D}$ to $\cat{E}$. % https://tex.stackexchange.com/questions/851/removing-spaces-between-words-in-math-mode
-\begin{figure}
-\centering
-\fbox{\includegraphics[width=70mm]{images/bifunctor.jpg}}
-\end{figure}
+\begin{wrapfigure}[11]{R}{0pt}
+\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
+\includegraphics[width=50mm]{images/bifunctor.jpg}}
+\end{wrapfigure}
\noindent
That's pretty straightforward. But functoriality means that a bifunctor
has to map morphisms as well. This time, though, it must map a pair of
-morphisms, one from C and one from D, to a morphism in E.
+morphisms, one from $\cat{C}$ and one from $\cat{D}$, to a morphism in $\cat{E}$.
Again, a pair of morphisms is just a single morphism in the product
-category C×D. We define a morphism in a cartesian product of categories
+category $\cat{C} \stimes \cat{D}$. We define a morphism in a cartesian product of categories
as a pair of morphisms which goes from one pair of objects to another
pair of objects. These pairs of morphisms can be composed in the obvious
way:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-(f, g) ◦ (f', g') = (f ◦ f', g ◦ g')
-\end{Verbatim}
+\[(f, g) \circ (f', g') = (f \circ f', g \circ g')\]
The composition is associative and it has an identity --- a pair of
-identity morphisms \emph{(id, id)}. So a cartesian product of categories
+identity morphisms $(\id, \id)$. So a cartesian product of categories
is indeed a category.
But an easier way to think about bifunctors is that they are functors in
@@ -54,7 +51,7 @@ \section{Bifunctors}\label{bifunctors}
\code{Bifunctor} typeclass taken directly from the library
\code{Data.Bifunctor}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
class Bifunctor f where
bimap :: (a -> c) -> (b -> d) -> f a b -> f c d
bimap g h = first g . second h
@@ -75,7 +72,7 @@ \section{Bifunctors}\label{bifunctors}
\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=60mm]{images/bimap.jpg}}
+\fbox{\includegraphics[width=40mm]{images/bimap.jpg}}
\caption{bimap}
\end{figure}
@@ -94,7 +91,7 @@ \section{Bifunctors}\label{bifunctors}
\hypertarget{attachment_4072}{}
\includegraphics[width=1.56250in]{images/second.jpg}
-second\strut
+~second\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
@@ -120,7 +117,7 @@ \section{Product and Coproduct
in general, and in Haskell in particular. Here's the \code{Bifunctor}
instance for a pair constructor --- the simplest product type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
instance Bifunctor (,) where
bimap f g (x, y) = (f x, g y)
\end{Verbatim}
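Applied to concrete values, using \code{bimap} from \code{Data.Bifunctor} in \code{base} (the example values are my own):

```haskell
import Data.Bifunctor (bimap)

-- bimap applies one function to each component of the pair.
mapped :: (Int, String)
mapped = bimap (+ 1) reverse (1, "abc")  -- (2, "cba")
```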
@@ -129,13 +126,13 @@ \section{Product and Coproduct
component of a pair. The code pretty much writes itself, given the
types:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
bimap :: (a -> c) -> (b -> d) -> (a, b) -> (c, d)
\end{Verbatim}
The action of the bifunctor here is to make pairs of types, for
instance:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
(,) a b = (a, b)
\end{Verbatim}
By duality, a coproduct, if it's defined for every pair of objects in a
@@ -143,7 +140,7 @@ \section{Product and Coproduct
\code{Either} type constructor being an instance of
\code{Bifunctor}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
instance Bifunctor Either where
bimap f _ (Left x) = Left (f x)
bimap _ g (Right y) = Right (g y)
@@ -152,7 +149,7 @@ \section{Product and Coproduct
Now, remember when we talked about monoidal categories? A monoidal
category defines a binary operator acting on objects, together with a
-unit object. I mentioned that \code{Set} is a monoidal category with
+unit object. I mentioned that $\Set$ is a monoidal category with
respect to cartesian product, with the singleton set as a unit. And it's
also a monoidal category with respect to disjoint union, with the empty
set as a unit. What I haven't mentioned is that one of the requirements
@@ -169,11 +166,11 @@ \section{Functorial Algebraic Data
We've seen several examples of parameterized data types that turned out
to be functors --- we were able to define \code{fmap} for them.
Complex data types are constructed from simpler data types. In
-particular, algebraic data types (ADTs) are created using sums and
+particular, algebraic data types (\acronym{ADT}s) are created using sums and
products. We have just seen that sums and products are functorial. We
also know that functors compose. So if we can show that the basic
-building blocks of ADTs are functorial, we'll know that parameterized
-ADTs are functorial too.
+building blocks of \acronym{ADT}s are functorial, we'll know that parameterized
+\acronym{ADT}s are functorial too.
So what are the building blocks of parameterized algebraic data types?
First, there are the items that have no dependency on the type parameter
@@ -189,11 +186,11 @@ \section{Functorial Algebraic Data
identity morphism in \emph{Cat}, but didn't give its definition in
Haskell. Here it is:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Identity a = Identity a
\end{Verbatim}
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
instance Functor Identity where
fmap f (Identity x) = Identity (f x)
\end{Verbatim}
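A one-line check of the identity functor (a self-contained sketch; \code{Eq} and \code{Show} are derived here only for testing):

```haskell
data Identity a = Identity a deriving (Eq, Show)

instance Functor Identity where
  fmap f (Identity x) = Identity (f x)

answer :: Identity Int
answer = fmap (+ 1) (Identity 41)  -- Identity 42
```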
@@ -206,7 +203,7 @@ \section{Functorial Algebraic Data
With this new knowledge, let's have a fresh look at the \code{Maybe}
type constructor:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Maybe a = Nothing | Just a
\end{Verbatim}
It's a sum of two types, and we now know that the sum is functorial. The
@@ -216,7 +213,7 @@ \section{Functorial Algebraic Data
The second part is just a different name for the identity functor. We
could have defined \code{Maybe}, up to isomorphism, as:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
type Maybe a = Either (Const () a) (Identity a)
\end{Verbatim}
So \code{Maybe} is the composition of the bifunctor \code{Either}
@@ -239,7 +236,7 @@ \section{Functorial Algebraic Data
apply \code{fu} to \code{a} and \code{gu} to \code{b}, and then
apply \code{bf} to the resulting two types:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
newtype BiComp bf fu gu a b = BiComp (bf (fu a) (gu b))
\end{Verbatim}
That's the composition on objects, or types. Notice how in Haskell we
@@ -273,7 +270,7 @@ \section{Functorial Algebraic Data
The \code{x} in the definition of \code{bimap} has the type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
bf (fu a) (gu b)
\end{Verbatim}
which is quite a mouthful. The outer \code{bimap} breaks through the
@@ -281,14 +278,14 @@ \section{Functorial Algebraic Data
\code{fu} and \code{gu}, respectively. If the types of \code{f1}
and \code{f2} are:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f1 :: a -> a'
f2 :: b -> b'
\end{Verbatim}
then the final result is of the type
\code{bf (fu a') (gu b')}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
bimap :: (fu a -> fu a') -> (gu b -> gu b')
-> bf (fu a) (gu b) -> bf (fu a') (gu b')
\end{Verbatim}
@@ -305,12 +302,12 @@ \section{Functorial Algebraic Data
it is. You need to enable a particular Haskell extension by including
this line at the top of your source file:
-\begin{minted}{haskell}
+\begin{Verbatim}
{-# LANGUAGE DeriveFunctor #-}
-\end{minted}
+\end{Verbatim}
and then add \code{deriving Functor} to your data structure:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Maybe a = Nothing | Just a deriving Functor
\end{Verbatim}
and the corresponding \code{fmap} will be implemented for you.
@@ -419,7 +416,7 @@ \section{The Writer Functor}\label{the-writer-functor}
represented as ``embellished'' functions returning the \code{Writer}
data structure.
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
type Writer a = (a, String)
\end{Verbatim}
I said that the embellishment was somehow related to endofunctors. And,
@@ -441,7 +438,7 @@ \section{The Writer Functor}\label{the-writer-functor}
\end{Verbatim}
and the identity morphism by a function called \code{return}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
return :: a -> Writer a
return x = (x, "")
\end{Verbatim}
It turns out that, if you look at the types of these two functions long
@@ -493,17 +490,17 @@ \section{Covariant and Contravariant
functor. It was based on the partially applied function-arrow type
constructor:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
(->) r
\end{Verbatim}
We can rewrite it as a type synonym:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
type Reader r a = r -> a
\end{Verbatim}
for which the \code{Functor} instance, as we've seen before, reads:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
instance Functor (Reader r) where
fmap f g = f . g
\end{Verbatim}
@@ -516,7 +513,7 @@ \section{Covariant and Contravariant
a type synonym --- it's just like the \code{Reader} but with the
arguments flipped:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
type Op r a = a -> r
\end{Verbatim}
This time we fix the return type, \code{r}, and vary the argument
@@ -524,7 +521,7 @@ \section{Covariant and Contravariant
to implement \code{fmap}, which would have the following type
signature:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
fmap :: (a -> b) -> (a -> r) -> (b -> r)
\end{Verbatim}
With just two functions taking \code{a} and returning, respectively,
@@ -534,33 +531,29 @@ \section{Covariant and Contravariant
returned \code{a} instead. We can't invert an arbitrary function, but
we can go to the opposite category.
-A short recap: For every category \emph{C} there is a dual category
-\emph{C\textsuperscript{op}}. It's a category with the same objects as
-\emph{C}, but with all the arrows reversed.
-
-Consider a functor that goes between \emph{C\textsuperscript{op}} and
-some other category \emph{D}:\\
-\emph{F :: C\textsuperscript{op} \to\ D}\\
-Such a functor maps a morphism \emph{f\textsuperscript{op} :: a \to\ b} in
-\emph{C\textsuperscript{op}} to the morphism \emph{F
-f\textsuperscript{op} :: F a \to\ F b} in \emph{D}. But the morphism
-\emph{f\textsuperscript{op}} secretly corresponds to some morphism
-\emph{f :: b \to\ a} in the original category \emph{C}. Notice the
+A short recap: For every category $\cat{C}$ there is a dual category
+$\cat{C}^{op}$. It's a category with the same objects as
+$\cat{C}$, but with all the arrows reversed.
+
+Consider a functor that goes between $\cat{C}^{op}$ and
+some other category $\cat{D}$:
+\[F \Colon \cat{C}^{op} \to \cat{D}\]
+Such a functor maps a morphism $f^{op} \Colon a \to b$ in
+$\cat{C}^{op}$ to the morphism $F f^{op} \Colon F a \to F b$ in $\cat{D}$. But the morphism
+$f^{op}$ secretly corresponds to some morphism
+$f \Colon b \to a$ in the original category $\cat{C}$. Notice the
inversion.
-Now, \emph{F} is a regular functor, but there is another mapping we can
-define based on \emph{F}, which is not a functor --- let's call it
-\emph{G}. It's a mapping from \emph{C} to \emph{D}. It maps objects the
-same way \emph{F} does, but when it comes to mapping morphisms, it
-reverses them. It takes a morphism \emph{f :: b \to\ a} in \emph{C}, maps
-it first to the opposite morphism \emph{f\textsuperscript{op} :: a \to\ b}
-and then uses the functor F on it, to get \emph{F f\textsuperscript{op}
-:: F a \to\ F b}.
-
-Considering that \emph{F a} is the same as \emph{G a} and \emph{F b} is
-the same as \emph{G b}, the whole trip can be described as:\\
-\emph{G f :: (b → a) → (G a → G b)}
+Now, $F$ is a regular functor, but there is another mapping we can
+define based on $F$, which is not a functor --- let's call it
+$G$. It's a mapping from $\cat{C}$ to $\cat{D}$. It maps objects the
+same way $F$ does, but when it comes to mapping morphisms, it
+reverses them. It takes a morphism $f \Colon b \to a$ in $\cat{C}$, maps
+it first to the opposite morphism $f^{op} \Colon a \to b$
+and then uses the functor $F$ on it, to get $F f^{op} \Colon F a \to F b$.
+
+Considering that $F a$ is the same as $G a$ and $F b$ is
+the same as $G b$, the whole trip can be described as: $G f \Colon (b \to a) \to (G a \to G b)$.
It's a ``functor with a twist.'' A mapping of categories that inverts
the direction of morphisms in this manner is called a
\emph{contravariant functor}. Notice that a contravariant functor is
@@ -597,13 +590,13 @@ \section{Covariant and Contravariant
with the arguments flipped. There is a special function for flipping
arguments, called \code{flip}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
flip :: (a -> b -> c) -> (b -> a -> c)
flip f y x = f x y
\end{Verbatim}
With it, we get:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
contramap = flip (.)
\end{Verbatim}
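Putting \code{Op} and \code{contramap} to work (a sketch using the type synonym above; \code{contramapOp} is my own name, chosen to avoid clashing with library classes):

```haskell
type Op r a = a -> r

-- contramap for Op is pre-composition, i.e. flip (.)
contramapOp :: (b -> a) -> Op r a -> Op r b
contramapOp = flip (.)

-- Turn a String-consumer into an Int-consumer by pre-composing show.
digits :: Int
digits = contramapOp show (length :: String -> Int) (12345 :: Int)  -- 5
```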
@@ -611,12 +604,11 @@ \section{Profunctors}\label{profunctors}
We've seen that the function-arrow operator is contravariant in its
first argument and covariant in the second. Is there a name for such a
-beast? It turns out that, if the target category is \textbf{Set}, such a
+beast? It turns out that, if the target category is $\Set$, such a
beast is called a \newterm{profunctor}. Because a contravariant functor is
equivalent to a covariant functor from the opposite category, a
-profunctor is defined as:\\
-\emph{C\textsuperscript{op} × D \to\ Set}
-
+profunctor is defined as:
+\[\cat{C}^{op} \times \cat{D} \to \Set\]
Since, to first approximation, Haskell types are sets, we apply the name
\code{Profunctor} to a type constructor \code{p} of two arguments,
which is contra-functorial in the first argument and functorial in the
@@ -665,7 +657,7 @@ \section{Challenges}\label{challenges}
\item
Show that the data type:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Pair a b = Pair a b
\end{Verbatim}
@@ -689,7 +681,7 @@ \section{Challenges}\label{challenges}
it's a precursor to a \code{List}. It replaces recursion with a type
parameter \code{b}.
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data PreList a b = Nil | Cons a b
\end{Verbatim}
@@ -702,15 +694,15 @@ \section{Challenges}\label{challenges}
Show that the following data types define bifunctors in \code{a} and
\code{b}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data K2 c a b = K2 c
\end{Verbatim}
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Fst a b = Fst a
\end{Verbatim}
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Snd a b = Snd b
\end{Verbatim}
diff --git a/src/content/1.9/Function Types.tex b/src/content/1.9/Function Types.tex
index 2d050258..f68f1cdb 100644
--- a/src/content/1.9/Function Types.tex
+++ b/src/content/1.9/Function Types.tex
@@ -9,14 +9,14 @@
Take \code{Integer}, for instance: It's just a set of integers.
\code{Bool} is a two element set. But a function type
-\code{a->b} is more than that: it's a set of morphisms
-between objects \code{a} and \code{b}. A set of morphisms between
+$a\to b$ is more than that: it's a set of morphisms
+between objects $a$ and $b$. A set of morphisms between
two objects in any category is called a hom-set. It just so happens that
-in the category \textbf{Set} every hom-set is itself an object in the
-same category ---because it is, after all, a \newterm{set}.
+in the category $\Set$ every hom-set is itself an object in the
+same category --- because it is, after all, a \emph{set}.
The same is not true of other categories where hom-sets are external to
-a category. They are even called \newterm{external} hom-sets.
+a category. They are even called \emph{external} hom-sets.
\pagebreak
\begin{wrapfigure}[11]{R}{0pt}
@@ -26,7 +26,7 @@
\end{wrapfigure}
\noindent
-It's the self-referential nature of the category \textbf{Set} that makes
+It's the self-referential nature of the category $\Set$ that makes
function types special. But there is a way, at least in some categories,
to construct objects that represent hom-sets. Such objects are called
\newterm{internal} hom-sets.
@@ -35,7 +35,7 @@ \section{Universal Construction}\label{universal-construction}
Let's forget for a moment that function types are sets and try to
construct a function type, or more generally, an internal hom-set, from
-scratch. As usual, we'll take our cues from the \textbf{Set} category,
+scratch. As usual, we'll take our cues from the $\Set$ category,
but carefully avoid using any properties of sets, so that the
construction will automatically work for other categories.
@@ -51,38 +51,37 @@ \section{Universal Construction}\label{universal-construction}
The obvious pattern that connects these three types is called
\newterm{function application} or \newterm{evaluation}. Given a candidate for
-a function type, let's call it \code{z} (notice that, if we are not in
-the category \textbf{Set}, this is just an object like any other
-object), and the argument type \code{a} (an object), the application
-maps this pair to the result type \code{b} (an object). We have three
+a function type, let's call it $z$ (notice that, if we are not in
+the category $\Set$, this is just an object like any other
+object), and the argument type $a$ (an object), the application
+maps this pair to the result type $b$ (an object). We have three
objects, two of them fixed (the ones representing the argument type and
the result type).
We also have the application, which is a mapping. How do we incorporate
this mapping into our pattern? If we were allowed to look inside
-objects, we could pair a function \code{f} (an element of \code{z})
-with an argument \code{x} (an element of \code{a}) and map it to
-\code{f\ x} (the application of \code{f} to \code{x}, which is an
-element of \code{b}).
+objects, we could pair a function $f$ (an element of $z$)
+with an argument $x$ (an element of $a$) and map it to
+$f x$ (the application of $f$ to $x$, which is an
+element of $b$).
\begin{figure}
\centering
\fbox{\includegraphics[width=60mm]{images/functionset.jpg}}
-\caption{In Set we can pick a function f from a set of functions z and we can
-pick an argument x from the set (type) a. We get an element f x in the
-set (type) b.}
+\caption{In Set we can pick a function $f$ from a set of functions $z$ and we can
+pick an argument $x$ from the set (type) $a$. We get an element $f x$ in the
+set (type) $b$.}
\end{figure}
-But instead of dealing with individual pairs \code{(f,\ x)}, we can as
-well talk about the whole \newterm{product} of the function type \code{z}
-and the argument type \code{a}. The product \code{z×a} is an object,
-and we can pick, as our application morphism, an arrow \code{g} from
-that object to \code{b}. In \textbf{Set}, \code{g} would be the
-function that maps every pair \code{(f,\ x)} to \code{f\ x}.
+But instead of dealing with individual pairs $(f, x)$, we can as
+well talk about the whole \emph{product} of the function type $z$
+and the argument type $a$. The product $z\times a$ is an object,
+and we can pick, as our application morphism, an arrow $g$ from
+that object to $b$. In $\Set$, $g$ would be the
+function that maps every pair $(f, x)$ to $f x$.
-So that's the pattern: a product of two objects \code{z} and
-\code{a} connected to another object \code{b} by a morphism
-\code{g}.
+So that's the pattern: a product of two objects $z$ and
+$a$ connected to another object $b$ by a morphism $g$.
\begin{figure}
\centering
@@ -102,20 +101,20 @@ \section{Universal Construction}\label{universal-construction}
Let's review the universal construction. We start with a pattern of
objects and morphisms. That's our imprecise query, and it usually yields
-lots and lots of hits. In particular, in \textbf{Set}, pretty much
+lots and lots of hits. In particular, in $\Set$, pretty much
everything is connected to everything. We can take any object
-\code{z}, form its product with \code{a}, and there's going to be a
-function from it to \code{b} (except when \code{b} is an empty set).
+$z$, form its product with $a$, and there's going to be a
+function from it to $b$ (except when $b$ is an empty set).
That's when we apply our secret weapon: ranking. This is usually done by
requiring that there be a unique mapping between candidate objects --- a
mapping that somehow factorizes our construction. In our case, we'll
-decree that \code{z} together with the morphism \code{g} from
-\code{z×a} to \code{b} is \emph{better} than some other
-\code{z'} with its own application \code{g'}, if and
-only if there is a unique mapping \code{h} from \code{z'} to
-\code{z} such that the application of \code{g'} factors
-through the application of \code{g}. (Hint: Read this sentence while
+decree that $z$ together with the morphism $g$ from
+$z\times a$ to $b$ is \emph{better} than some other
+$z'$ with its own application $g'$, if and
+only if there is a unique mapping $h$ from $z'$ to
+$z$ such that the application of $g'$ factors
+through the application of $g$. (Hint: Read this sentence while
looking at the picture.)
\begin{figure}
@@ -126,10 +125,10 @@ \section{Universal Construction}\label{universal-construction}
Now here's the tricky part, and the main reason I postponed this
particular universal construction till now. Given the morphism\\
-\code{h :: z'-> z}, we want to close the diagram
-that has both \code{z'} and \code{z} crossed with \code{a}.
-What we really need, given the mapping \code{h} from \code{z'}
-to \code{z}, is a mapping from \code{z'×a} to \code{z×a}.
+$h \Colon z'\to z$, we want to close the diagram
+that has both $z'$ and $z$ crossed with $a$.
+What we really need, given the mapping $h$ from $z'$
+to $z$, is a mapping from $z'\times a$ to $z\times a$.
And now, after discussing the \hyperref[functoriality]{functoriality
of the product}, we know how to do it. Because the product itself is a
functor (more precisely an endo-bi-functor), it's possible to lift pairs
@@ -137,34 +136,31 @@ \section{Universal Construction}\label{universal-construction}
but also products of morphisms.
Since we are not touching the second component of the product
-\code{z'×a}, we will lift the pair of morphisms
-\code{(h, id)}, where \code{id} is an identity on \code{a}.
+$z'\times a$, we will lift the pair of morphisms
+$(h, \id)$, where $\id$ is an identity on $a$.
-So, here's how we can factor one application, \code{g}, out of another
-application \code{g'}:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-g' = g ◦ (h × id)
-\end{Verbatim}
+So, here's how we can factor one application, $g$, out of another
+application $g'$:
+\[g' = g \circ (h \times \id)\]
The key here is the action of the product on morphisms.
The third part of the universal construction is selecting the object
-that is universally the best. Let's call this object \code{a\ensuremath{\Rightarrow}b} (think
+that is universally the best. Let's call this object $a{\Rightarrow} b$ (think
of this as a symbolic name for one object, not to be confused with a
Haskell typeclass constraint --- I'll discuss different ways of naming
it later). This object comes with its own application --- a morphism
-from \code{(a\ensuremath{\Rightarrow}b)×a} to \code{b} --- which we will call
-\code{eval}. The object \code{a\ensuremath{\Rightarrow}b} is the best if any other
+from $(a{\Rightarrow}b)\stimes a$ to $b$ --- which we will call
+$eval$. The object $a{\Rightarrow}b$ is the best if any other
candidate for a function object can be uniquely mapped to it in such a
-way that its application morphism \code{g} factorizes through
-\code{eval}. This object is better than any other object according to
+way that its application morphism $g$ factorizes through
+$eval$. This object is better than any other object according to
our ranking.
\begin{figure}[H]
\centering
\fbox{\includegraphics{images/universalfunctionobject.jpg}}
\caption{The definition of the universal function object. This is the same
-diagram as above, but now the object \code{a\ensuremath{\Rightarrow}b} is \newterm{universal}.}
+diagram as above, but now the object $a{\Rightarrow}b$ is \emph{universal}.}
\end{figure}
\noindent
@@ -173,98 +169,73 @@ \section{Universal Construction}\label{universal-construction}
\begin{longtable}[]{@{}l@{}}
\toprule
\begin{minipage}[t]{0.97\columnwidth}\raggedright\strut
-A \textbf{function object} from \code{a} to \code{b} is an object
-\code{a\ensuremath{\Rightarrow}b} together with the morphism
-
-\begin{minted}[escapeinside=||,mathescape=true]{text}
-eval :: ((a|\ensuremath{\Rightarrow}|b) × a) -> b
-\end{minted}
-such that for any other object \code{z} with a morphism
-
-\begin{Verbatim}[commandchars=\\\{\}]
-g :: z × a -> b
-\end{Verbatim}
+A \emph{function object} from $a$ to $b$ is an object
+$a{\Rightarrow}b$ together with the morphism
+\[eval \Colon ((a \Rightarrow b) \times a) \to b\]
+such that for any other object $z$ with a morphism
+\[g \Colon z \times a \to b\]
there is a unique morphism
-
-\begin{minted}[escapeinside=||,mathescape=true]{text}
-h :: z -> (a|\ensuremath{\Rightarrow}|b)
-\end{minted}
-that factors \code{g} through \code{eval}:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-g = eval ◦ (h × id)
-\end{Verbatim}
-\strut
+\[h \Colon z \to (a \Rightarrow b)\]
+that factors $g$ through $eval$:
+\[g = eval \circ (h \times \id)\]
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\noindent
-Of course, there is no guarantee that such an object \code{a\ensuremath{\Rightarrow}b} exists
-for any pair of objects \code{a} and \code{b} in a given category.
-But it always does in \textbf{Set}. Moreover, in \textbf{Set}, this
-object is isomorphic to the hom-set \emph{Set(a, b)}.
+Of course, there is no guarantee that such an object $a{\Rightarrow} b$ exists
+for any pair of objects $a$ and $b$ in a given category.
+But it always does in $\Set$. Moreover, in $\Set$, this
+object is isomorphic to the hom-set $\Set(a, b)$.
This is why, in Haskell, we interpret the function type
-\code{a->b} as the categorical function object
-\code{a\ensuremath{\Rightarrow}b}.
+\code{a -> b} as the categorical function object $a{\Rightarrow} b$.
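Since Haskell's function type is this function object, the defining equation $g = eval \circ (h \times \id)$ can be checked directly. Here is a minimal sketch, assuming an arbitrary sample morphism \code{g} (its body is chosen only for illustration); \code{bimap} from \code{Data.Bifunctor} plays the role of the lifted pair $(h \times \id)$:

```haskell
import Data.Bifunctor (bimap)

-- eval is the application morphism: apply a function to an argument.
eval :: (a -> b, a) -> b
eval (f, x) = f x

-- A sample morphism g :: z × a -> b (here z = a = b = Int);
-- the concrete body is an arbitrary choice for illustration.
g :: (Int, Int) -> Int
g (z, x) = z * 10 + x

-- The unique factorizing morphism h :: z -> (a ⇒ b).
h :: Int -> (Int -> Int)
h = curry g

-- g factors through eval:  g = eval ∘ (h × id),
-- with  bimap h id  playing the role of (h × id).
g' :: (Int, Int) -> Int
g' = eval . bimap h id
```

For every pair, \code{g'} agrees with \code{g}, which is exactly the factorization equation.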
\section{Currying}\label{currying}
Let's have a second look at all the candidates for the function object.
-This time, however, let's think of the morphism \code{g} as a function
-of two variables, \code{z} and \code{a}.
-
-\begin{Verbatim}[commandchars=\\\{\}]
-g :: z × a -> b
-\end{Verbatim}
+This time, however, let's think of the morphism $g$ as a function
+of two variables, $z$ and $a$.
+\[g \Colon z \times a \to b\]
Being a morphism from a product comes as close as it gets to being a
-function of two variables. In particular, in \textbf{Set}, \code{g} is
-a function from pairs of values, one from the set \code{z} and one
-from the set \code{a}.
+function of two variables. In particular, in $\Set$, $g$ is
+a function from pairs of values, one from the set $z$ and one
+from the set $a$.
On the other hand, the universal property tells us that for each such
-\code{g} there is a unique morphism \code{h} that maps \code{z} to
-a function object \code{a\ensuremath{\Rightarrow}b}.
-
-\begin{Verbatim}[commandchars=\\\{\}]
-h :: z -> (a⇒b)
-\end{Verbatim}
-In \textbf{Set}, this just means that \code{h} is a function that
-takes one variable of type \code{z} and returns a function from
-\code{a} to \code{b}. That makes \code{h} a higher order function.
+$g$ there is a unique morphism $h$ that maps $z$ to
+a function object $a{\Rightarrow} b$.
+\[h \Colon z \to (a \Rightarrow b)\]
+In $\Set$, this just means that $h$ is a function that
+takes one variable of type $z$ and returns a function from
+$a$ to $b$. That makes $h$ a higher-order function.
Therefore the universal construction establishes a one-to-one
correspondence between functions of two variables and functions of one
variable returning functions. This correspondence is called
-\newterm{currying}, and \code{h} is called the curried version of
-\code{g}.
+\newterm{currying}, and $h$ is called the curried version of $g$.
-This correspondence is one-to-one, because given any \code{g} there is
-a unique \code{h}, and given any \code{h} you can always recreate
-the two-argument function \code{g} using the formula:
-
-\begin{Verbatim}[commandchars=\\\{\}]
-g = eval ◦ (h × id)
-\end{Verbatim}
-The function \code{g} can be called the \newterm{uncurried} version of
-\code{h}.
+This correspondence is one-to-one, because given any $g$ there is
+a unique $h$, and given any $h$ you can always recreate
+the two-argument function $g$ using the formula:
+\[g = eval \circ (h \times \id)\]
+The function $g$ can be called the \emph{uncurried} version of $h$.
Currying is essentially built into the syntax of Haskell. A function
returning a function:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
a -> (b -> c)
\end{Verbatim}
is often thought of as a function of two variables. That's how we read
the un-parenthesized signature:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
a -> b -> c
\end{Verbatim}
This interpretation is apparent in the way we define multi-argument
functions. For instance:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
catstr :: String -> String -> String
catstr s s' = s ++ s'
\end{Verbatim}
@@ -277,27 +248,27 @@ \section{Currying}\label{currying}
These two definitions are equivalent, and either can be partially
applied to just one argument, producing a one-argument function, as in:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
greet :: String -> String
greet = catstr "Hello "
\end{Verbatim}
Strictly speaking, a function of two variables is one that takes a pair
(a product type):
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
(a, b) -> c
\end{Verbatim}
It's trivial to convert between the two representations, and the two
(higher-order) functions that do it are called, unsurprisingly,
\code{curry} and \code{uncurry}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
curry :: ((a, b)->c) -> (a->b->c)
curry f a b = f (a, b)
\end{Verbatim}
and
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
uncurry :: (a->b->c) -> ((a, b)->c)
uncurry f (a, b) = f a b
\end{Verbatim}
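A quick round-trip check shows the two representations are interchangeable; this sketch restates \code{catstr} so it's self-contained:

```haskell
-- Restating the earlier example so the snippet is self-contained.
catstr :: String -> String -> String
catstr s s' = s ++ s'

-- uncurry trades two arguments for a single pair...
catstr' :: (String, String) -> String
catstr' = uncurry catstr

-- ...and curry trades the pair back for two arguments.
catstr'' :: String -> String -> String
catstr'' = curry catstr'
```

All three compute the same concatenation; only the shape of the argument changes.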
@@ -348,8 +319,8 @@ \section{Currying}\label{currying}
\section{Exponentials}\label{exponentials}
In mathematical literature, the function object, or the internal
-hom-object between two objects \code{a} and \code{b}, is often
-called the \newterm{exponential} and denoted by \code{ba}. Notice that
+hom-object between two objects $a$ and $b$, is often
+called the \newterm{exponential} and denoted by $b^{a}$. Notice that
the argument type is in the exponent. This notation might seem strange
at first, but it makes perfect sense if you think of the relationship
between functions and products. We've already seen that we have to use
@@ -368,12 +339,12 @@ \section{Exponentials}\label{exponentials}
specified by a pair of values: one corresponding to \code{False}, and
one corresponding to \code{True}. The set of all possible functions
from \code{Bool} to, say, \code{Int} is the set of all pairs of
-\code{Int}s. This is the same as the product \code{Int × Int} or,
-being a little creative with notation, \code{Int2}.
+\code{Int}s. This is the same as the product $Int \times Int$ or,
+being a little creative with notation, $Int^{2}$.
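This correspondence can be written down as two Haskell functions (the names \code{memoize} and \code{recall} are ad-hoc choices for this sketch, not library functions):

```haskell
-- A function from Bool is fully determined by its two values,
-- so it can be traded for a pair: a^Bool ≅ a × a.
memoize :: (Bool -> a) -> (a, a)
memoize f = (f False, f True)

-- The inverse direction: look the answer up in the pair.
recall :: (a, a) -> (Bool -> a)
recall (x, _) False = x
recall (_, y) True  = y
```

The two functions are mutually inverse, which is what the isomorphism $Int^{2} \cong (Bool \to Int)$ claims.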
For another example, let's look at the C++ type \code{char}, which
contains 256 values (Haskell \code{Char} is larger, because Haskell
-uses Unicode). There are several functions in the \code{} part of the
+uses Unicode). There are several functions in the \code{<cctype>} part of the
C++ Standard Library that are usually implemented using lookups.
Functions like \code{isupper} or \code{isspace} are implemented
using tables, which are equivalent to tuples of 256 Boolean values. A
@@ -381,14 +352,14 @@ \section{Exponentials}\label{exponentials}
Booleans: \code{bool × bool × bool × ... × bool}. We know from
arithmetics that an iterated product defines a power. If you
``multiply'' \code{bool} by itself 256 (or \code{char}) times, you
-get \code{bool} to the power of \code{char}, or \code{boolchar}.
+get \code{bool} to the power of \code{char}, or \code{bool\textsuperscript{char}}.
How many values are there in the type defined as 256-tuples of
-\code{bool}? Exactly 2\textsuperscript{256}. This is also the number
+\code{bool}? Exactly $2^{256}$. This is also the number
of different functions from \code{char} to \code{bool}, each
function corresponding to a unique 256-tuple. You can similarly
calculate that the number of functions from \code{bool} to
-\code{char} is 256\textsuperscript{2}, and so on. The exponential
+\code{char} is $256^{2}$, and so on. The exponential
notation for function types makes perfect sense in these cases.
We probably wouldn't want to fully memoize a function from \code{int}
@@ -408,7 +379,7 @@ \section{Cartesian Closed
Although I will continue using the category of sets as a model for types
and functions, it's worth mentioning that there is a larger family of
categories that can be used for that purpose. These categories are
-called \newterm{cartesian closed}, and \textbf{Set} is just one example of
+called \newterm{cartesian closed}, and $\Set$ is just one example of
such a category.
A cartesian closed category must contain:
@@ -436,13 +407,12 @@ \section{Cartesian Closed
The terminal object and the product have their duals: the initial object
and the coproduct. A cartesian closed category that also supports those
two, and in which product can be distributed over coproduct
-
-\begin{Verbatim}[commandchars=\\\{\}]
-a × (b + c) = a × b + a × c
-(b + c) × a = b × a + c × a
-\end{Verbatim}
+\begin{gather*}
+a \times (b + c) = a \times b + a \times c \\
+(b + c) \times a = b \times a + c \times a
+\end{gather*}
is called a \newterm{bicartesian closed} category. We'll see in the next
-section that bicartesian closed categories, of which \textbf{Set} is a
+section that bicartesian closed categories, of which $\Set$ is a
prime example, have some interesting properties.
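In Haskell the first distributivity law is witnessed by an actual pair of mutually inverse functions, showing it is an isomorphism of types and not a mere implication (the names \code{dist} and \code{undist} are chosen for this sketch):

```haskell
-- a × (b + c) → a × b + a × c
dist :: (a, Either b c) -> Either (a, b) (a, c)
dist (x, Left b)  = Left (x, b)
dist (x, Right c) = Right (x, c)

-- The inverse direction recombines the two summands.
undist :: Either (a, b) (a, c) -> (a, Either b c)
undist (Left (x, b))  = (x, Left b)
undist (Right (x, c)) = (x, Right c)
```

The second law, $(b + c) \times a = b \times a + c \times a$, follows the same pattern with the components swapped.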
\section{Exponentials and Algebraic Data
@@ -459,24 +429,22 @@ \section{Exponentials and Algebraic Data
\subsection{Zeroth Power}\label{zeroth-power}
-\begin{Verbatim}[commandchars=\\\{\}]
-a\textsuperscript{0} = 1
-\end{Verbatim}
+\[a^{0} = 1\]
In the categorical interpretation, we replace 0 with the initial object,
1 with the final object, and equality with isomorphism. The exponential
is the internal hom-object. This particular exponential represents the
set of morphisms going from the initial object to an arbitrary object
-\code{a}. By the definition of the initial object, there is exactly
-one such morphism, so the hom-set \emph{C(0, a)} is a singleton set. A
-singleton set is the terminal object in \textbf{Set}, so this identity
-trivially works in \textbf{Set}. What we are saying is that it works in
+$a$. By the definition of the initial object, there is exactly
+one such morphism, so the hom-set $\cat{C}(0, a)$ is a singleton set. A
+singleton set is the terminal object in $\Set$, so this identity
+trivially works in $\Set$. What we are saying is that it works in
any bicartesian closed category.
In Haskell, we replace 0 with \code{Void}; 1 with the unit type
\code{()}; and the exponential with function type. The claim is that
the set of functions from \code{Void} to any type \code{a} is
equivalent to the unit type --- which is a singleton. In other words,
-there is only one function \code{Void->a}. We've seen
+there is only one function \code{Void -> a}. We've seen
this function before: it's called \code{absurd}.
This is a little bit tricky, for two reasons. One is that in Haskell we
@@ -489,12 +457,10 @@ \subsection{Zeroth Power}\label{zeroth-power}
\subsection{Powers of One}\label{powers-of-one}
-\begin{Verbatim}[commandchars=\\\{\}]
-1\textsuperscript{a} = 1
-\end{Verbatim}
-This identity, when interpreted in \textbf{Set}, restates the definition
+\[1^{a} = 1\]
+This identity, when interpreted in $\Set$, restates the definition
of the terminal object: There is a unique morphism from any object to
-the terminal object. In general, the internal hom-object from \code{a}
+the terminal object. In general, the internal hom-object from $a$
to the terminal object is isomorphic to the terminal object itself.
In Haskell, there is only one function from any type \code{a} to unit.
@@ -504,21 +470,17 @@ \subsection{Powers of One}\label{powers-of-one}
\subsection{First Power}\label{first-power}
-\begin{Verbatim}[commandchars=\\\{\}]
-a\textsuperscript{1} = a
-\end{Verbatim}
+\[a^{1} = a\]
This is a restatement of the observation that morphisms from the
terminal object can be used to pick ``elements'' of the object
\code{a}. The set of such morphisms is isomorphic to the object
-itself. In \textbf{Set}, and in Haskell, the isomorphism is between
+itself. In $\Set$, and in Haskell, the isomorphism is between
elements of the set \code{a} and functions that pick those elements,
-\code{()->a}.
+\code{() -> a}.
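The isomorphism is easy to exhibit: picking an element and evaluating at \code{()} are mutually inverse (the names \code{pick} and \code{unpick} are mine, for illustration):

```haskell
-- A morphism from the terminal object () picks an element of a...
pick :: a -> (() -> a)
pick x = \() -> x

-- ...and evaluating at () recovers the element.
unpick :: (() -> a) -> a
unpick f = f ()
```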
\subsection{Exponentials of Sums}\label{exponentials-of-sums}
-\begin{Verbatim}[commandchars=\\\{\}]
-a\textsuperscript{b+c} = a\textsuperscript{b} × a\textsuperscript{c}
-\end{Verbatim}
+\[a^{b+c} = a^{b} \times a^{c}\]
Categorically, this says that the exponential from a coproduct of two
objects is isomorphic to a product of two exponentials. In Haskell, this
algebraic identity has a very practical interpretation. It tells us
@@ -530,13 +492,13 @@ \subsection{Exponentials of Sums}\label{exponentials-of-sums}
instance, take a function from the sum type
\code{(Either Int Double)}:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: Either Int Double -> String
\end{Verbatim}
It may be defined as a pair of functions from, respectively,
\code{Int} and \code{Double}:
-\begin{minted}[breaklines, breakbefore=Positive]{text}
+\begin{minted}[breaklines]{text}
f (Left n) = if n < 0 then "Negative int" else "Positive int"
f (Right x) = if x < 0.0 then "Negative double" else "Positive double"
\end{minted}
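Conversely, any such pair of functions can be reassembled into a single function from the sum type using \code{either} from the Prelude. A self-contained sketch, with the component functions restated:

```haskell
-- One handler per summand of the sum type...
fInt :: Int -> String
fInt n = if n < 0 then "Negative int" else "Positive int"

fDouble :: Double -> String
fDouble x = if x < 0.0 then "Negative double" else "Positive double"

-- ...combined into one function from the sum, witnessing
-- the isomorphism a^(b+c) ≅ a^b × a^c.
f :: Either Int Double -> String
f = either fInt fDouble
```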
@@ -545,9 +507,7 @@ \subsection{Exponentials of Sums}\label{exponentials-of-sums}
\subsection{Exponentials of
Exponentials}\label{exponentials-of-exponentials}
-\begin{Verbatim}[commandchars=\\\{\}]
-(a\textsuperscript{b})\textsuperscript{c} = a\textsuperscript{b×c}
-\end{Verbatim}
+\[(a^{b})^{c} = a^{b \times c}\]
This is just a way of expressing currying purely in terms of exponential
objects. A function returning a function is equivalent to a function
from a product (a two-argument function).
@@ -555,9 +515,7 @@ \subsection{Exponentials of
\subsection{Exponentials over
Products}\label{exponentials-over-products}
-\begin{Verbatim}[commandchars=\\\{\}]
-(a × b)\textsuperscript{c} = a\textsuperscript{c} × b\textsuperscript{c}
-\end{Verbatim}
+\[(a \times b)^{c} = a^{c} \times b^{c}\]
In Haskell: A function returning a pair is equivalent to a pair of
functions, each producing one element of the pair.
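The two directions of this identity are again a pair of mutually inverse Haskell functions (the names \code{fanOut} and \code{unFanOut} are ad-hoc choices for this sketch):

```haskell
-- (a × b)^c → a^c × b^c: split a pair-valued function
-- into its two component functions.
fanOut :: (c -> (a, b)) -> (c -> a, c -> b)
fanOut f = (fst . f, snd . f)

-- The other direction: run both functions and pair the results.
unFanOut :: (c -> a, c -> b) -> (c -> (a, b))
unFanOut (g, h) = \x -> (g x, h x)
```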
@@ -587,30 +545,26 @@ \section{Curry-Howard Isomorphism}\label{curry-howard-isomorphism}
Let's take the function \code{eval} we have introduced in the
definition of the function object. Its signature is:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
eval :: ((a -> b), a) -> b
\end{Verbatim}
It takes a pair consisting of a function and its argument and produces a
result of the appropriate type. It's the Haskell implementation of the
morphism:
-\begin{Verbatim}[commandchars=\\\{\}]
-eval :: (a\ensuremath{\Rightarrow}b) × a -> b
-\end{Verbatim}
-which defines the function type \code{a\ensuremath{\Rightarrow}b} (or the exponential object
-\code{ba}). Let's translate this signature to a logical predicate
+\[eval \Colon (a \Rightarrow b) \times a \to b\]
+which defines the function type $a \Rightarrow b$ (or the exponential object
+$b^{a}$). Let's translate this signature to a logical predicate
using the Curry-Howard isomorphism:
-\begin{Verbatim}[commandchars=\\\{\}]
-((a \ensuremath{\Rightarrow} b) \ensuremath{\wedge} a) \ensuremath{\Rightarrow} b
-\end{Verbatim}
-Here's how you can read this statement: If it's true that \code{b}
-follows from \code{a}, and \code{a} is true, then \code{b} must be
+\[((a \Rightarrow b) \wedge a) \Rightarrow b\]
+Here's how you can read this statement: If it's true that $b$
+follows from $a$, and $a$ is true, then $b$ must be
true. This makes perfect intuitive sense and has been known since
antiquity as \newterm{modus ponens}. We can prove this theorem by
implementing the function:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
eval :: ((a -> b), a) -> b
eval (f, x) = f x
\end{Verbatim}
@@ -623,18 +577,16 @@ \section{Curry-Howard Isomorphism}\label{curry-howard-isomorphism}
Therefore \newterm{modus ponens} is true in our logic.
How about a predicate that is blatantly false? For instance: if
-\code{a} or \code{b} is true then \code{a} must be true.
+$a$ or $b$ is true then $a$ must be true.
-\begin{Verbatim}[commandchars=\\\{\}]
-a \ensuremath{\vee} b \ensuremath{\Rightarrow} a
-\end{Verbatim}
-This is obviously wrong because you can chose an \code{a} that is
-false and a \code{b} that is true, and that's a counter-example.
+\[a \vee b \Rightarrow a\]
+This is obviously wrong because you can choose an $a$ that is
+false and a $b$ that is true, and that's a counter-example.
Mapping this predicate into a function signature using the Curry-Howard
isomorphism, we get:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
Either a b -> a
\end{Verbatim}
Try as you may, you can't implement this function --- you can't produce
@@ -643,23 +595,21 @@ \section{Curry-Howard Isomorphism}\label{curry-howard-isomorphism}
Finally, we come to the meaning of the \code{absurd} function:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
absurd :: Void -> a
\end{Verbatim}
Considering that \code{Void} translates into false, we get:
-\begin{Verbatim}[commandchars=\\\{\}]
-false \ensuremath{\Rightarrow} a
-\end{Verbatim}
+\[false \Rightarrow a\]
Anything follows from falsehood (\emph{ex falso quodlibet}). Here's one
possible proof (implementation) of this statement (function) in Haskell:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
absurd (Void a) = absurd a
\end{Verbatim}
where \code{Void} is defined as:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
newtype Void = Void Void
\end{Verbatim}
As always, the type \code{Void} is tricky. This definition makes it
diff --git a/src/content/2.1/Declarative Programming.tex b/src/content/2.1/Declarative Programming.tex
index 9adc443c..7b9b1b4c 100644
--- a/src/content/2.1/Declarative Programming.tex
+++ b/src/content/2.1/Declarative Programming.tex
@@ -7,22 +7,22 @@
declarative and the other imperative.
You can see this even at the most basic level. Composition itself may be
-defined declaratively; as in, \code{h} is a composite of \code{g}
-after \code{f}:
+defined declaratively; as in, $h$ is a composite of $g$
+after $f$:
\begin{Verbatim}[commandchars=\\\{\}]
h = g . f
\end{Verbatim}
-or imperatively; as in, call \code{f} first, remember the result of
-that call, then call \code{g} with the result:
+or imperatively; as in, call $f$ first, remember the result of
+that call, then call $g$ with the result:
\begin{Verbatim}[commandchars=\\\{\}]
h x = let y = f x
in g y
\end{Verbatim}
The imperative version of a program is usually described as a sequence
-of actions ordered in time. In particular, the call to \code{g} cannot
-happen before the execution of \code{f} completes. At least, that's
+of actions ordered in time. In particular, the call to $g$ cannot
+happen before the execution of $f$ completes. At least, that's
the conceptual picture --- in a lazy language, with \emph{call-by-need}
argument passing, the actual execution may proceed differently.
@@ -107,7 +107,7 @@
\end{equation}
\noindent
-where \ensuremath{v_1} is the speed of light in the air and \ensuremath{v_2} is
+where $v_1$ is the speed of light in the air and $v_2$ is
the speed of light in the water.
\begin{figure}[H]
@@ -171,9 +171,9 @@
and applies a series of transformations to it. Conceptually, the list of
all our future actions is there, available as the input data to our
program. From a program's perspective there's no difference between the
-list of digits of \ensuremath{\pi}, a list of pseudo-random numbers, or a list of mouse
+list of digits of $\pi$, a list of pseudo-random numbers, or a list of mouse
positions coming through computer hardware. In each case, if you want to
-get the nth item, you have to first go through the first \ensuremath{n-1} items. When
+get the $n$th item, you have to first go through the first $n-1$ items. When
applied to temporal events, we call this property \emph{causality}.
So what does it have to do with category theory? I will argue that
diff --git a/src/content/2.2/Limits and Colimits.tex b/src/content/2.2/Limits and Colimits.tex
index e59a130a..197397b8 100644
--- a/src/content/2.2/Limits and Colimits.tex
+++ b/src/content/2.2/Limits and Colimits.tex
@@ -12,15 +12,15 @@
\noindent
The construction of a product starts with the selection of two objects
-\code{a} and \code{b}, whose product we want to construct. But what
+$a$ and $b$, whose product we want to construct. But what
does it mean to \emph{select objects}? Can we rephrase this action in
more categorical terms? Two objects form a pattern --- a very simple
pattern. We can abstract this pattern into a category --- a very simple
category, but a category nevertheless. It's a category that we'll call
-\mathtext{2}. It contains just two objects, 1 and 2, and no morphisms
+$\cat{2}$. It contains just two objects, $1$ and $2$, and no morphisms
other than the two obligatory identities. Now we can rephrase the
-selection of two objects in \mathtext{C} as the act of defining a functor \mathtext{D}
-from the category \mathtext{2} to \mathtext{C}. A functor maps objects to
+selection of two objects in $\cat{C}$ as the act of defining a functor $D$
+from the category $\cat{2}$ to $\cat{C}$. A functor maps objects to
objects, so its image is just two objects (or it could be one, if the
functor collapses objects, which is fine too). It also maps morphisms
--- in this case it simply maps identity morphisms to identity
@@ -36,18 +36,18 @@
notions, eschewing the imprecise descriptions like ``selecting
objects'', taken straight from the hunter-gatherer lexicon of our
ancestors. And, incidentally, it is also easily generalized, because
-nothing can stop us from using categories more complex than \mathtext{2}
+nothing can stop us from using categories more complex than $\cat{2}$
to define our patterns.
But let's continue. The next step in the definition of a product is the
-selection of the candidate object \mathtext{c}. Here again, we could
+selection of the candidate object $c$. Here again, we could
rephrase the selection in terms of a functor from a singleton category.
And indeed, if we were using Kan extensions, that would be the right
thing to do. But since we are not ready for Kan extensions yet, there is
-another trick we can use: a constant functor \mathtext{\Delta} from the same category
-\mathtext{2} to \mathtext{C}. The selection of \mathtext{c} in \mathtext{C} can be
-done with \mathtext{\Delta_c}. Remember, \mathtext{\Delta_c} maps all
-objects into \mathtext{c} and all morphisms into \mathtext{id_c}.
+another trick we can use: a constant functor $\Delta$ from the same category
+$\cat{2}$ to $\cat{C}$. The selection of $c$ in $\cat{C}$ can be
+done with $\Delta_c$. Remember, $\Delta_c$ maps all
+objects into $c$ and all morphisms into $\idarrow[c]$.
\begin{figure}[H]
\centering
@@ -55,22 +55,22 @@
\end{figure}
\noindent
-Now we have two functors, \mathtext{\Delta_c} and \mathtext{D} going between
-\mathtext{2} and \mathtext{C} so it's only natural to ask about natural
+Now we have two functors, $\Delta_c$ and $D$ going between
+$\cat{2}$ and $\cat{C}$ so it's only natural to ask about natural
transformations between them. Since there are only two objects in
-\mathtext{2}, a natural transformation will have two components. Object 1
-in \mathtext{2} is mapped to \mathtext{c} by \mathtext{\Delta_c} and to
-\mathtext{a} by \mathtext{D}. So the component of a natural transformation between
-\mathtext{\Delta_c} and \mathtext{D} at \mathtext{1} is a morphism from \mathtext{c} to
-\mathtext{a}. We can call it \mathtext{p}. Similarly, the second component
-is a morphism \mathtext{q} from \mathtext{c} to \mathtext{b} --- the image of
-the object 2 in \mathtext{2} under \mathtext{D}. But these are exactly like the two
+$\cat{2}$, a natural transformation will have two components. Object $1$
+in $\cat{2}$ is mapped to $c$ by $\Delta_c$ and to
+$a$ by $D$. So the component of a natural transformation between
+$\Delta_c$ and $D$ at $1$ is a morphism from $c$ to
+$a$. We can call it $p$. Similarly, the second component
+is a morphism $q$ from $c$ to $b$ --- the image of
+the object $2$ in $\cat{2}$ under $D$. But these are exactly like the two
projections we used in our original definition of the product. So
instead of talking about selecting objects and projections, we can just
talk about picking functors and natural transformations. It so happens
that in this simple case the naturality condition for our transformation
is trivially satisfied, because there are no morphisms (other than the
-identities) in \mathtext{2}.
+identities) in $\cat{2}$.
\begin{figure}[H]
\centering
@@ -79,22 +79,22 @@
\noindent
A generalization of this construction to categories other than
-\mathtext{2} --- ones that, for instance, contain non-trivial morphisms
+$\cat{2}$ --- ones that, for instance, contain non-trivial morphisms
--- will impose naturality conditions on the transformation between
-\mathtext{\Delta_c} and \mathtext{D}. We call such transformation a \emph{cone},
-because the image of \mathtext{\Delta} is the apex of a cone/pyramid whose sides are
-formed by the components of the natural transformation. The image of \mathtext{D}
+$\Delta_c$ and $D$. We call such a transformation a \emph{cone},
+because the image of $\Delta$ is the apex of a cone/pyramid whose sides are
+formed by the components of the natural transformation. The image of $D$
forms the base of the cone.
-In general, to build a cone, we start with a category \mathtext{I} that
+In general, to build a cone, we start with a category $\cat{I}$ that
defines the pattern. It's a small, often finite category. We pick a
-functor \mathtext{D} from \mathtext{I} to \mathtext{C} and call it (or its image) a
-\emph{diagram}. We pick some \mathtext{c} in \mathtext{C} as the apex of our
-cone. We use it to define the constant functor \mathtext{\Delta_c} from
-\mathtext{I} to \mathtext{C}. A natural transformation from \mathtext{\Delta_c}
-to \mathtext{D} is then our cone. For a finite \mathtext{I} it's just a bunch of
-morphisms connecting \mathtext{c} to the diagram: the image of \mathtext{I}
-under \mathtext{D}.
+functor $D$ from $\cat{I}$ to $\cat{C}$ and call it (or its image) a
+\emph{diagram}. We pick some $c$ in $\cat{C}$ as the apex of our
+cone. We use it to define the constant functor $\Delta_c$ from
+$\cat{I}$ to $\cat{C}$. A natural transformation from $\Delta_c$
+to $D$ is then our cone. For a finite $\cat{I}$ it's just a bunch of
+morphisms connecting $c$ to the diagram: the image of $\cat{I}$
+under $D$.
\begin{figure}[H]
\centering
@@ -103,11 +103,11 @@
\noindent
Naturality requires that all triangles (the walls of the pyramid) in
-this diagram commute. Indeed, take any morphism \mathtext{f} in \mathtext{I}.
-The functor \mathtext{D} maps it to a morphism \mathtext{D f} in \mathtext{C}, a
+this diagram commute. Indeed, take any morphism $f$ in $\cat{I}$.
+The functor $D$ maps it to a morphism $D f$ in $\cat{C}$, a
morphism that forms the base of some triangle. The constant functor
-\mathtext{\Delta_c} maps \mathtext{f} to the identity morphism on
-\mathtext{c}. \mathtext{\Delta} squishes the two ends of the morphism into one object, and
+$\Delta_c$ maps $f$ to the identity morphism on
+$c$. $\Delta$ squishes the two ends of the morphism into one object, and
the naturality square becomes a commuting triangle. The two arms of this
triangle are the components of the natural transformation.
@@ -122,10 +122,10 @@
product.
There are many ways to go about it. For instance, we may define a
-\emph{category of cones} based on a given functor \mathtext{D}. Objects in that
-category are cones. Not every object \mathtext{c} in \mathtext{C} can be an
+\emph{category of cones} based on a given functor $D$. Objects in that
+category are cones. Not every object $c$ in $\cat{C}$ can be an
apex of a cone, though, because there may be no natural transformation
-between \mathtext{\Delta_c} and \mathtext{D}.
+between $\Delta_c$ and $D$.
To make it a category, we also have to define morphisms between cones.
These would be fully determined by morphisms between their apexes. But
@@ -134,12 +134,10 @@
objects (the apexes) must be common factors for the projections. For
instance:
-\begin{equation*}
-\begin{split}
-p' = p \cdot m \\
-q' = q \cdot m
-\end{split}
-\end{equation*}
+\begin{Verbatim}
+p' = p . m
+q' = q . m
+\end{Verbatim}
\begin{figure}[H]
\centering
@@ -153,8 +151,8 @@
\centering
\fbox{\includegraphics[width=2.48958in]{images/conecommutativity.jpg}}
\caption{The commuting triangle connecting two cones, with the factorizing
-morphism \code{h} (here, the lower cone is the universal one, with
-\code{Lim D} as its apex)}
+morphism $h$ (here, the lower cone is the universal one, with
+$Lim D$ as its apex)}
\end{figure}
\noindent
@@ -168,8 +166,8 @@
there is a unique morphism from any other object to that object. In our
case it means that there is a unique factorizing morphism from the apex
of any other cone to the apex of the universal cone. We call this
-universal cone the \emph{limit} of the diagram \mathtext{D}, \code{Lim D} (in
-the literature, you'll often see a left arrow pointing towards \mathtext{I}
+universal cone the \emph{limit} of the diagram $D$, $Lim D$ (in
+the literature, you'll often see a left arrow pointing towards $\cat{I}$
under the \mathtext{Lim} sign). Often, as a shorthand, we call the apex of
this cone the limit (or the limit object).
@@ -191,58 +189,62 @@ \section{Limit as a Natural Isomorphism}\label{limit-as-a-natural-isomorphism}
fact, a category) of cones. If the limit exists (and --- let's make it
clear --- there's no guarantee of that), one of those cones is the
universal cone. For every other cone we have a unique factorizing
-morphism that maps its apex, let's call it \mathtext{c}, to the apex of
-the universal cone, which we named \code{Lim D}. (In fact, I can skip
+morphism that maps its apex, let's call it $c$, to the apex of
+the universal cone, which we named $Lim D$. (In fact, I can skip
the word ``other'', because the identity morphism maps the universal
cone to itself and it trivially factorizes through itself.) Let me
repeat the important part: given any cone, there is a unique morphism of
a special kind. We have a mapping of cones to special morphisms, and
it's a one-to-one mapping.
-This special morphism is a member of the hom-set \code{C(c, LimD)}.
+This special morphism is a member of the hom-set $\cat{C}(c, Lim D)$.
The other members of this hom-set are less fortunate, in the sense that
they don't factorize the mapping of cones. What we want is to be able to
-pick, for each \code{c}, one morphism from the set
-\code{C(c, LimD)} --- a morphism that satisfies the particular
+pick, for each $c$, one morphism from the set
+$\cat{C}(c, Lim D)$ --- a morphism that satisfies the particular
commutativity condition. Does that sound like defining a natural
transformation? It most certainly does!
But what are the functors that are related by this transformation?
-One functor is the mapping of \code{c} to the set
-\code{C(c, Lim D)}. It's a functor from \mathtext{C} to \textbf{Set} ---
+One functor is the mapping of $c$ to the set
+$\cat{C}(c, Lim D)$. It's a functor from $\cat{C}$ to $\Set$ ---
it maps objects to sets. In fact it's a contravariant functor. Here's
-how we define its action on morphisms: Let's take a morphism \code{f}
-from \code{c'} to \code{c}:
+how we define its action on morphisms: Let's take a morphism $f$
+from $c'$ to $c$:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: c' -> c
\end{Verbatim}
-Our functor maps \code{c'} to the set
-\code{C(c', Lim D)}. To define the action of this functor on
-\code{f} (in other words, to lift \code{f}), we have to define the
-corresponding mapping between \code{C(c, Lim D)} and
-\code{C(c', Lim D)}. So let's pick one element \code{u} of
-\code{C(c, Lim D)} and see if we can map it to some element of
-\code{C(c', Lim D)}. An element of a hom-set is a morphism, so
+
+Our functor maps $c'$ to the set
+$\cat{C}(c', Lim D)$. To define the action of this functor on
+$f$ (in other words, to lift $f$), we have to define the
+corresponding mapping between $\cat{C}(c, Lim D)$ and
+$\cat{C}(c', Lim D)$. So let's pick one element $u$ of
+$\cat{C}(c, Lim D)$ and see if we can map it to some element of
+$\cat{C}(c', Lim D)$. An element of a hom-set is a morphism, so
we have:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
u :: c -> Lim D
\end{Verbatim}
-We can precompose \code{u} with \code{f} to get:
-\begin{Verbatim}[commandchars=\\\{\}]
+We can precompose $u$ with $f$ to get:
+
+\begin{Verbatim}
u . f :: c' -> Lim D
\end{Verbatim}
-And that's an element of \code{C(c', Lim D)}--- so indeed, we
+
+And that's an element of $\cat{C}(c', Lim D)$ --- so indeed, we
have found a mapping of morphisms:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
contramap :: (c' -> c) -> (c -> Lim D) -> (c' -> Lim D)
contramap f u = u . f
\end{Verbatim}
-Notice the inversion in the order of \code{c} and \code{c'}
+
+Notice the inversion in the order of $c$ and $c'$
characteristic of a \emph{contravariant} functor.
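In Haskell, this contravariant action can be sketched concretely. The snippet below is illustrative rather than from the text: the fixed type \code{r} stands in for the limit object \code{Lim D}, and the hand-rolled \code{Contravariant} class mirrors the one in base's \code{Data.Functor.Contravariant}.

```haskell
-- Illustrative sketch (not from the text): the presheaf c ↦ (c -> r),
-- with the fixed type r standing in for the limit object Lim D.
newtype Presheaf r c = Presheaf { runPresheaf :: c -> r }

-- Hand-rolled class; base's Data.Functor.Contravariant plays this role.
class Contravariant f where
  contramap :: (c' -> c) -> f c -> f c'

-- Lifting f :: c' -> c precomposes with it, reversing the arrow.
instance Contravariant (Presheaf r) where
  contramap f (Presheaf u) = Presheaf (u . f)
```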
\begin{figure}[H]
@@ -252,48 +254,54 @@ \section{Limit as a Natural Isomorphism}\label{limit-as-a-natural-isomorphism}
\noindent
To define a natural transformation, we need another functor that's also
-a mapping from \mathtext{C} to \mathtext{Set}. But this time we'll consider a
+a mapping from $\cat{C}$ to $\Set$. But this time we'll consider a
set of cones. Cones are just natural transformations, so we are looking
-at the set of natural transformations \code{Nat(\ensuremath{\Delta_c} D)}. The mapping
+at the set of natural transformations $Nat(\Delta_c, D)$. The mapping
from \code{c} to this particular set of natural transformations is a
(contravariant) functor. How can we show that? Again, let's define its
action on a morphism:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: c' -> c
\end{Verbatim}
-The lifting of \code{f} should be a mapping of natural transformations
-between two functors that go from \mathtext{I} to \mathtext{C}:
+
+The lifting of $f$ should be a mapping of natural transformations
+between two functors that go from $\cat{I}$ to $\cat{C}$:
\begin{Verbatim}[commandchars=\\\{\}]
Nat(\ensuremath{\Delta_c}, D) -> Nat(\ensuremath{\Delta_{c'}}, D)
\end{Verbatim}
+
How do we map natural transformations? Every natural transformation is a
selection of morphisms --- its components --- one morphism per element
-of \mathtext{I}. A component of some \mathtext{\alpha} (a member of \code{ Nat(\ensuremath{\Delta_c}, D)}) at
-\code{a} (an object in \mathtext{I}) is a morphism:
+of $\cat{I}$. A component of some $\alpha$ (a member of $Nat(\Delta_c, D)$) at
+$a$ (an object in $\cat{I}$) is a morphism:
\begin{Verbatim}[commandchars=\\\{\}]
\ensuremath{\alpha_a}:: \ensuremath{\Delta_c a} -> D a
\end{Verbatim}
-or, using the definition of the constant functor \mathtext{\Delta},
+
+or, using the definition of the constant functor $\Delta$,
\begin{Verbatim}[commandchars=\\\{\}]
\ensuremath{\alpha_a} :: c -> D a
\end{Verbatim}
-Given \code{f} and \mathtext{\alpha}, we have to construct a \mathtext{\beta}, a member of
-\code{Nat(\ensuremath{\Delta_{c'}}, D)}. Its component at \code{a} should be a
+
+Given $f$ and $\alpha$, we have to construct a $\beta$, a member of
+$Nat(\Delta_{c'}, D)$. Its component at $a$ should be a
morphism:
\begin{Verbatim}[commandchars=\\\{\}]
\ensuremath{\beta_a} :: c' -> D a
\end{Verbatim}
+
We can easily get the latter from the former by precomposing it with
-\code{f}:
+$f$:
\begin{Verbatim}[commandchars=\\\{\}]
-|\ensuremath{\beta_a}| = |\ensuremath{\alpha_a}| . f
+\ensuremath{\beta_a} = \ensuremath{\alpha_a} . f
\end{Verbatim}
+
It's relatively easy to show that those components indeed add up to a
natural transformation.
@@ -310,26 +318,28 @@ \section{Limit as a Natural Isomorphism}\label{limit-as-a-natural-isomorphism}
\begin{Verbatim}[commandchars=\\\{\}]
c -> Nat(\ensuremath{\Delta_c}, D)
\end{Verbatim}
+
What I have just done is to show you that we have two (contravariant)
-functors from \mathtext{C} to \textbf{Set}. I haven't made any assumptions
+functors from $\cat{C}$ to $\Set$. I haven't made any assumptions
--- these functors always exist.
Incidentally, the first of these functors plays an important role in
category theory, and we'll see it again when we talk about Yoneda's
lemma. There is a name for contravariant functors from any category
-\mathtext{C} to \textbf{Set}: they are called ``presheaves''. This one is
+$\cat{C}$ to $\Set$: they are called ``presheaves''. This one is
called a \newterm{representable presheaf}. The second functor is also a
presheaf.
Now that we have two functors, we can talk about natural transformations
between them. So without further ado, here's the conclusion: A functor
-\code{D} from \mathtext{I} to \mathtext{C} has a limit \code{Lim D} if and
+$D$ from $\cat{I}$ to $\cat{C}$ has a limit $Lim D$ if and
only if there is a natural isomorphism between the two functors I have
just defined:
\begin{Verbatim}[commandchars=\\\{\}]
C(c, Lim D) \ensuremath{\backsimeq} Nat(\ensuremath{\Delta_c}, D)
\end{Verbatim}
+
Let me remind you what a natural isomorphism is. It's a natural
transformation whose every component is an isomorphism, that is to say
an invertible morphism.
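For the simplest interesting diagram --- the pair of objects generated by $\cat{2}$ --- this natural isomorphism specializes to the familiar bijection between functions into a product and pairs of projections. A Haskell sketch (the names \code{toCone} and \code{fromCone} are mine):

```haskell
-- Sketch: a cone over the two-object diagram (a, b) with apex c is a
-- pair of projections; the limit is the product (a, b).
toCone :: (c -> (a, b)) -> (c -> a, c -> b)
toCone h = (fst . h, snd . h)

fromCone :: (c -> a, c -> b) -> c -> (a, b)
fromCone (p, q) = \c -> (p c, q c)
```

The two functions are mutually inverse, which is exactly the component of the natural isomorphism at \code{c}.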
@@ -337,13 +347,13 @@ \section{Limit as a Natural Isomorphism}\label{limit-as-a-natural-isomorphism}
I'm not going to go through the proof of this statement. The procedure
is pretty straightforward if not tedious. When dealing with natural
transformations, you usually focus on components, which are morphisms.
-In this case, since the target of both functors is \textbf{Set}, the
+In this case, since the target of both functors is $\Set$, the
components of the natural isomorphism will be functions. These are
higher order functions, because they go from the hom-set to the set of
natural transformations. Again, you can analyze a function by
considering what it does to its argument: here the argument will be a
-morphism --- a member of \code{C(c, Lim D)} --- and the result will
-be a natural transformation --- a member of \code{Nat(\ensuremath{\Delta_c}, D)}, or
+morphism --- a member of $\cat{C}(c, Lim D)$ --- and the result will
+be a natural transformation --- a member of $Nat(\Delta_c, D)$, or
what we have called a cone. This natural transformation, in turn, has
its own components, which are morphisms. So it's morphisms all the way
down, and if you can keep track of them, you can prove the statement.
@@ -353,14 +363,14 @@ \section{Limit as a Natural Isomorphism}\label{limit-as-a-natural-isomorphism}
cones.
As a preview of coming attractions, let me mention that the set
-\code{Nat(\ensuremath{\Delta_c}, D)} can be thought of as a hom-set in the functor
+$Nat(\Delta_c, D)$ can be thought of as a hom-set in the functor
category; so our natural isomorphism relates two hom-sets, which points
at an even more general relationship called an adjunction.
\section{Examples of Limits}\label{examples-of-limits}
We've seen that the categorical product is a limit of a diagram
-generated by a simple category we called \mathtext{2}.
+generated by a simple category we called $\cat{2}$.
There is an even simpler example of a limit: the terminal object. The
first impulse would be to think of a singleton category as leading to a
@@ -374,17 +384,18 @@ \section{Examples of Limits}\label{examples-of-limits}
The next interesting limit is called the \emph{equalizer}. It's a limit
generated by a two-element category with two parallel morphisms going
between them (and, as always, the identity morphisms). This category
-selects a diagram in \mathtext{C} consisting of two objects, \code{a} and
-\code{b}, and two morphisms:
+selects a diagram in $\cat{C}$ consisting of two objects, $a$ and
+$b$, and two morphisms:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: a -> b
g :: a -> b
\end{Verbatim}
-To build a cone over this diagram, we have to add the apex, \code{c}
+
+To build a cone over this diagram, we have to add the apex, $c$,
and two projections:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
p :: c -> a
q :: c -> b
\end{Verbatim}
@@ -397,61 +408,68 @@ \section{Examples of Limits}\label{examples-of-limits}
\noindent
We have two triangles that must commute:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
q = f . p
q = g . p
\end{Verbatim}
-This tells us that \code{q} is uniquely determined by one of these
-equations, say, \code{q = f . p}, and we can omit it from the
+
+This tells us that $q$ is uniquely determined by one of these
+equations, say, $q = f \circ p$, and we can omit it from the
picture. So we are left with just one condition:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f . p = g . p
\end{Verbatim}
+
The way to think about it is that, if we restrict our attention to
-\textbf{Set}, the image of the function \code{p} selects a subset of
-\code{a}. When restricted to this subset, the functions \code{f} and
-\code{g} are equal.
+$\Set$, the image of the function $p$ selects a subset of
+$a$. When restricted to this subset, the functions $f$ and
+$g$ are equal.
-For instance, take \code{a} to be the two-dimensional plane
-parameterized by coordinates \code{x} and \code{y}. Take \code{b}
+For instance, take $a$ to be the two-dimensional plane
+parameterized by coordinates $x$ and $y$. Take $b$
to be the real line, and take:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f (x, y) = 2 * y + x
g (x, y) = y - x
\end{Verbatim}
+
The equalizer for these two functions is the set of real numbers (the
-apex, \code{c}) and the function:
+apex, $c$) and the function:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
p t = (t, (-2) * t)
\end{Verbatim}
-Notice that \code{(p t)} defines a straight line in the
+
+Notice that $(p\,t)$ defines a straight line in the
two-dimensional plane. Along this line, the two functions are equal.
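This particular choice of $f$, $g$, and $p$ is easy to sanity-check (an illustrative check, not part of the text):

```haskell
-- The two parallel functions from the plane to the real line.
f, g :: (Double, Double) -> Double
f (x, y) = 2 * y + x
g (x, y) = y - x

-- The equalizing cone: p embeds the line y = -2x, along which f = g.
p :: Double -> (Double, Double)
p t = (t, (-2) * t)
```

Indeed, both $f$ and $g$ evaluate to $-3t$ at the point $(t, -2t)$, for every $t$.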
-Of course, there are other sets \code{c'} and functions
-\code{p'} that may lead to the equality:
+Of course, there are other sets $c'$ and functions
+$p'$ that may lead to the equality:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f . p' = g . p'
\end{Verbatim}
-but they all uniquely factor out through \code{p}. For instance, we
-can take the singleton set \code{()} as \code{c'} and the
+
+but they all uniquely factor out through $p$. For instance, we
+can take the singleton set $()$ as $c'$ and the
function:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
p'() = (0, 0)
\end{Verbatim}
-It's a good cone, because \code{f (0, 0) = g (0, 0)}. But it's
-not universal, because of the unique factorization through \code{h}:
-\begin{Verbatim}[commandchars=\\\{\}]
+It's a good cone, because $f (0, 0) = g (0, 0)$. But it's
+not universal, because of the unique factorization through $h$:
+
+\begin{Verbatim}
p' = p . h
\end{Verbatim}
+
with
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
h () = 0
\end{Verbatim}
@@ -462,27 +480,28 @@ \section{Examples of Limits}\label{examples-of-limits}
\noindent
An equalizer can thus be used to solve equations of the type
-\code{f x = g x}. But it's much more general, because it's defined
+$f\,x = g\,x$. But it's much more general, because it's defined
in terms of objects and morphisms rather than algebraically.
An even more general idea of solving an equation is embodied in another
limit --- the pullback. Here, we still have two morphisms that we want
to equate, but this time their domains are different. We start with a
three-object category of the shape:
-\code{1->2<-3}. The diagram corresponding to
-this category consists of three objects, \code{a}, \code{b}, and
-\code{c}, and two morphisms:
+$1\rightarrow2\leftarrow3$. The diagram corresponding to
+this category consists of three objects, $a$, $b$, and
+$c$, and two morphisms:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: a -> b
g :: c -> b
\end{Verbatim}
+
This diagram is often called a \emph{cospan}.
-A cone built on top of this diagram consists of the apex, \code{d},
+A cone built on top of this diagram consists of the apex, $d$,
and three morphisms:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
p :: d -> a
q :: d -> c
r :: d -> b
@@ -494,11 +513,11 @@ \section{Examples of Limits}\label{examples-of-limits}
\end{figure}
\noindent
-Commutativity conditions tell us that \code{r} is completely
+Commutativity conditions tell us that $r$ is completely
determined by the other morphisms, and can be omitted from the picture.
So we are only left with the following condition:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
g . q = f . p
\end{Verbatim}
A pullback is a universal cone of this shape.
@@ -510,23 +529,24 @@ \section{Examples of Limits}\label{examples-of-limits}
\noindent
Again, if you narrow your focus down to sets, you can think of the
-object \code{d} as consisting of pairs of elements from \code{a} and
-\code{c} for which \code{f} acting on the first component is equal
-to \code{g} acting on the second component. If this is still too
-general, consider the special case in which \code{g} is a constant
-function, say \code{g \_ = 1.23} (assuming that \code{b} is a set
+object $d$ as consisting of pairs of elements from $a$ and
+$c$ for which $f$ acting on the first component is equal
+to $g$ acting on the second component. If this is still too
+general, consider the special case in which $g$ is a constant
+function, say $g\,\_ = 1.23$ (assuming that $b$ is a set
of real numbers). Then you are really solving the equation:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f x = 1.23
\end{Verbatim}
-In this case, the choice of \code{c} is irrelevant (as long as it's
+
+In this case, the choice of $c$ is irrelevant (as long as it's
not an empty set), so we can take it to be a singleton set. The set
-\code{a} could, for instance, be the set of three-dimensional vectors,
-and \code{f} the vector length. Then the pullback is the set of pairs
-\code{(v, ())}, where \code{v} is a vector of length 1.23 (a
-solution to the equation \code{sqrt ({x\textsuperscript{2}+y\textsuperscript{2}+z\textsuperscript{2}}) = 1.23}), and
-\code{()} is the dummy element of the singleton set.
+$a$ could, for instance, be the set of three-dimensional vectors,
+and $f$ the vector length. Then the pullback is the set of pairs
+$(v, ())$, where $v$ is a vector of length 1.23 (a
+solution to the equation $\sqrt{x^{2}+y^{2}+z^{2}} = 1.23$), and
+$()$ is the dummy element of the singleton set.
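For finite sets, this element-wise description can be computed directly. A sketch with lists standing in for sets (the name \code{pullback} is mine):

```haskell
-- Sketch: the pullback of f :: a -> b and g :: c -> b in Set, for
-- finite sets represented as lists: the pairs on which f and g agree.
pullback :: Eq b => (a -> b) -> (c -> b) -> [a] -> [c] -> [(a, c)]
pullback f g as cs = [ (x, y) | x <- as, y <- cs, f x == g y ]
```

For instance, \code{pullback (`mod` 3) id [0..5] [0, 1, 2]} pairs each number with its remainder modulo 3.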
But pullbacks have more general applications, also in programming. For
instance, consider C++ classes as a category in which morphisms are
@@ -563,13 +583,13 @@ \section{Examples of Limits}\label{examples-of-limits}
is often a need to \emph{unify} types of two expressions. For instance,
suppose that the compiler wants to infer the type of a function:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
twice f x = f (f x)
\end{Verbatim}
It will assign preliminary types to all variables and sub-expressions.
In particular, it will assign:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
f :: t0
x :: t1
f x :: t2
@@ -577,13 +597,13 @@ \section{Examples of Limits}\label{examples-of-limits}
\end{Verbatim}
from which it will deduce that:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
twice :: t0 -> t1 -> t3
\end{Verbatim}
It will also come up with a set of constraints resulting from the rules
of function application:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
t0 = t1 -> t2 -- because f is applied to x
t0 = t2 -> t3 -- because f is applied to (f x)
\end{Verbatim}
@@ -591,7 +611,7 @@ \section{Examples of Limits}\label{examples-of-limits}
variables) that, when substituted for the unknown types in both
expressions, produce the same type. One such substitution is:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
t1 = t2 = t3 = Int
twice :: (Int -> Int) -> Int -> Int
\end{Verbatim}
@@ -600,7 +620,7 @@ \section{Examples of Limits}\label{examples-of-limits}
because they are beyond the scope of this book, but you can convince
yourself that the result should be:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
twice :: (t -> t) -> t -> t
\end{Verbatim}
with \code{t} a free type variable.
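The inferred polymorphic type can be exercised at a couple of concrete instantiations (an illustrative check):

```haskell
-- The function whose principal type the compiler infers by unification.
twice :: (t -> t) -> t -> t
twice f x = f (f x)
```

Instantiated at \code{t = Int} it applies the increment twice; at \code{t = String}, applying \code{reverse} twice gives back the original list.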
@@ -620,10 +640,10 @@ \section{Colimits}\label{colimits}
\end{figure}
\noindent
-Cocone with a factorizing morphism \code{h} connecting two apexes.
+Cocone with a factorizing morphism $h$ connecting two apexes.
A typical example of a colimit is a coproduct, which corresponds to the
-diagram generated by \textbf{2}, the category we've used in the
+diagram generated by $\cat{2}$, the category we've used in the
definition of the product.
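In Haskell the coproduct is \code{Either}, with injections \code{Left} and \code{Right}, and the unique factorizing morphism out of it is built by the standard function \code{either} --- a sketch (the helper names are mine):

```haskell
-- The two injections of the coproduct.
injA :: a -> Either a b
injA = Left

injB :: b -> Either a b
injB = Right

-- Given a cocone (f, g) with apex c, the factorizing morphism h
-- satisfies h . injA = f and h . injB = g; 'either' constructs it.
factorize :: (a -> c) -> (b -> c) -> Either a b -> c
factorize = either
```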
\begin{figure}[H]
@@ -640,20 +660,20 @@ \section{Colimits}\label{colimits}
The dual of the pullback is called the \emph{pushout}. It's based on a
diagram called a span, generated by the category
-\code{1<-2->3}.
+$1\leftarrow2\rightarrow3$.
\section{Continuity}\label{continuity}
I said previously that functors come close to the idea of continuous
mappings of categories, in the sense that they never break existing
connections (morphisms). The actual definition of a \emph{continuous
-functor} \code{F} from a category \mathtext{C} to \mathtext{C'} includes the
-requirement that the functor preserve limits. Every diagram \code{D}
-in \emph{C} can be mapped to a diagram \mathtext{F \circ D} in \mathtext{C'} by
-simply composing two functors. The continuity condition for \mathtext{F}
-states that, if the diagram \mathtext{D} has a limit \code{Lim D}, then
-the diagram \mathtext{F \circ D} also has a limit, and it is equal to
-\code{F (Lim D)}.
+functor} $F$ from a category $\cat{C}$ to $\cat{C'}$ includes the
+requirement that the functor preserve limits. Every diagram $D$
+in $\cat{C}$ can be mapped to a diagram $F \circ D$ in $\cat{C'}$ by
+simply composing two functors. The continuity condition for $F$
+states that, if the diagram $D$ has a limit $Lim D$, then
+the diagram $F \circ D$ also has a limit, and it is equal to
+$F (Lim D)$.
\begin{figure}[H]
\centering
@@ -667,20 +687,20 @@ \section{Continuity}\label{continuity}
preserve composition). The same is true for the factorizing morphisms:
the image of a factorizing morphism is also a factorizing morphism. So
every functor is \emph{almost} continuous. What may go wrong is the
-uniqueness condition. The factorizing morphism in \mathtext{C'} might not be
-unique. There may also be other ``better cones'' in \mathtext{C'} that were
-not available in \mathtext{C}.
+uniqueness condition. The factorizing morphism in $\cat{C'}$ might not be
+unique. There may also be other ``better cones'' in $\cat{C'}$ that were
+not available in $\cat{C}$.
A hom-functor is an example of a continuous functor. Recall that the
-hom-functor, \code{C(a, b)}, is contravariant in the first variable
+hom-functor, $\cat{C}(a, b)$, is contravariant in the first variable
and covariant in the second. In other words, it's a functor:
\begin{Verbatim}[commandchars=\\\{\}]
C\textsuperscript{op} \times C -> Set
\end{Verbatim}
When its second argument is fixed, the hom-set functor (which becomes
-the representable presheaf) maps colimits in \mathtext{C} to limits in
-\textbf{Set}; and when its first argument is fixed, it maps limits to
+the representable presheaf) maps colimits in $\cat{C}$ to limits in
+$\Set$; and when its first argument is fixed, it maps limits to
limits.
In Haskell, a hom-functor is the mapping of any two types to a function
@@ -688,16 +708,17 @@ \section{Continuity}\label{continuity}
parameter, let's say to \code{String}, we get the contravariant
functor:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
newtype ToString a = ToString (a -> String)
instance Contravariant ToString where
contramap f (ToString g) = ToString (g . f)
\end{Verbatim}
+
Continuity means that when \code{ToString} is applied to a colimit,
for instance a coproduct \code{Either b c}, it will produce a limit;
in this case a product of two function types:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
ToString (Either b c) ~ (b -> String, c -> String)
\end{Verbatim}
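The two directions of this isomorphism can be witnessed explicitly. A sketch (the names \code{split} and \code{combine} are mine; the newtype is restated to keep the snippet self-contained):

```haskell
-- The newtype from the text, restated here.
newtype ToString a = ToString (a -> String)

-- One direction: a handler for Either b c yields a pair of handlers.
split :: ToString (Either b c) -> (b -> String, c -> String)
split (ToString h) = (h . Left, h . Right)

-- The other direction: 'either' glues the pair back together.
combine :: (b -> String, c -> String) -> ToString (Either b c)
combine (f, g) = ToString (either f g)
```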
Indeed, any function of \code{Either b c} is implemented as a case
diff --git a/src/content/2.3/Free Monoids.tex b/src/content/2.3/Free Monoids.tex
index 30be3030..0abeaafa 100644
--- a/src/content/2.3/Free Monoids.tex
+++ b/src/content/2.3/Free Monoids.tex
@@ -155,7 +155,7 @@ \section{Free Monoid Universal Construction}\label{free-monoid-universal-constru
\textbf{Mon} to sets, but we can also map morphisms of \textbf{Mon}
(homomorphisms) to functions. Again, this seems sort of trivial, but it
will become useful soon. This mapping of objects and morphisms from
-\textbf{Mon} to \textbf{Set} is in fact a functor. Since this functor
+\textbf{Mon} to $\Set$ is in fact a functor. Since this functor
``forgets'' the monoidal structure --- once we are inside a plain set,
we no longer distinguish the unit element or care about multiplication
--- it's called a \newterm{forgetful functor}. Forgetful functors come up
@@ -183,17 +183,17 @@ \section{Free Monoid Universal Construction}\label{free-monoid-universal-constru
those blobs. Here's how it works:
We start with a set of generators, \code{x}. That's a set in
-\textbf{Set}.
+$\Set$.
The pattern we are going to match consists of a monoid \code{m} --- an
-object of \textbf{Mon} --- and a function \code{p} in \textbf{Set}:
+object of \textbf{Mon} --- and a function \code{p} in $\Set$:
\begin{Verbatim}[commandchars=\\\{\}]
p :: x -> U m
\end{Verbatim}
where \code{U} is our forgetful functor from \textbf{Mon} to
-\textbf{Set}. This is a weird heterogeneous pattern --- half in
-\textbf{Mon} and half in \textbf{Set}.
+$\Set$. This is a weird heterogeneous pattern --- half in
+\textbf{Mon} and half in $\Set$.
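As a preview of where this construction is heading: in Haskell, the free monoid on \code{x} is the list type \code{[x]}, and any such \code{p} extends uniquely to a monoid homomorphism out of it. A sketch (the name \code{extend} is mine; it is essentially \code{foldMap} specialized to lists):

```haskell
import Data.Monoid (Sum (..))

-- Sketch: the unique homomorphic extension of p :: x -> m to the free
-- monoid [x]; it sends a word to the product of the images of its letters.
extend :: Monoid m => (x -> m) -> ([x] -> m)
extend p = mconcat . map p
```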
The idea is that the function \code{p} will identify the set of
generators inside the X-ray image of \code{m}. It doesn't matter that
diff --git a/src/content/2.4/Representable Functors.tex b/src/content/2.4/Representable Functors.tex
index 081c61a5..c63fc533 100644
--- a/src/content/2.4/Representable Functors.tex
+++ b/src/content/2.4/Representable Functors.tex
@@ -3,7 +3,7 @@
mathematics --- at least it used to be. Category theory tries to step
away from set theory, to some extent. For instance, it's a known fact
that the set of all sets doesn't exist, but the category of all sets,
-\textbf{Set}, does. So that's good. On the other hand, we assume that
+$\Set$, does. So that's good. On the other hand, we assume that
morphisms between any two objects in a category form a set. We even
called it a hom-set. To be fair, there is a branch of category theory
where morphisms don't form sets. Instead they are objects in another
@@ -46,14 +46,14 @@
\section{The Hom Functor}\label{the-hom-functor}
Every category comes equipped with a canonical family of mappings to
-\textbf{Set}. Those mappings are in fact functors, so they preserve the
+$\Set$. Those mappings are in fact functors, so they preserve the
structure of the category. Let's build one such mapping.
Let's fix one object \code{a} in \emph{C} and pick another object
\code{x} also in \emph{C}. The hom-set \code{C(a, x)} is a set, an
-object in \textbf{Set}. When we vary \code{x}, keeping \code{a}
-fixed, \code{C(a, x)} will also vary in \textbf{Set}. Thus we have a
-mapping from \code{x} to \textbf{Set}.
+object in $\Set$. When we vary \code{x}, keeping \code{a}
+fixed, \code{C(a, x)} will also vary in $\Set$. Thus we have a
+mapping from \code{x} to $\Set$.
\begin{figure}[H]
\centering
@@ -163,16 +163,16 @@ \section{The Hom Functor}\label{the-hom-functor}
\section{Representable Functors}\label{representable-functors}
We've seen that, for every choice of an object \code{a} in \emph{C},
-we get a functor from \emph{C} to \textbf{Set}. This kind of
-structure-preserving mapping to \textbf{Set} is often called a
+we get a functor from \emph{C} to $\Set$. This kind of
+structure-preserving mapping to $\Set$ is often called a
\newterm{representation}. We are representing objects and morphisms of
-\emph{C} as sets and functions in \textbf{Set}.
+\emph{C} as sets and functions in $\Set$.
The functor \code{C(a, -)} itself is sometimes called representable.
More generally, any functor \code{F} that is naturally isomorphic to
the hom-functor, for some choice of \code{a}, is called
\newterm{representable}. Such a functor must necessarily be
-\textbf{Set}-valued, since \code{C(a, -)} is.
+$\Set$-valued, since \code{C(a, -)} is.
I said before that we often think of isomorphic sets as identical. More
generally, we think of isomorphic \newterm{objects} in a category as
@@ -203,7 +203,7 @@ \section{Representable Functors}\label{representable-functors}
natural transformation.
Let's look at the component of α at some object \code{x}. It's a
-function in \textbf{Set}:
+function in $\Set$:
\begin{Verbatim}[commandchars=\\\{\}]
α\textsubscript{x} :: C(a, x) -> F x
@@ -248,7 +248,7 @@ \section{Representable Functors}\label{representable-functors}
α ◦ β = id = β ◦ α
\end{Verbatim}
We will see later that a natural transformation from \code{C(a, -)}
-to any \textbf{Set}-valued functor always exists (Yoneda's lemma) but it
+to any $\Set$-valued functor always exists (Yoneda's lemma) but it
is not necessarily invertible.
Let me give you an example in Haskell with the list functor and
@@ -328,7 +328,7 @@ \section{Representable Functors}\label{representable-functors}
Representability for contravariant functors is similarly defined, except
that we keep the second argument of \code{C(-, a)} fixed. Or,
equivalently, we may consider functors from \emph{C}\textsuperscript{op}
-to \textbf{Set}, because \code{C\textsuperscript{op}(a, -)} is the same as
+to $\Set$, because \code{C\textsuperscript{op}(a, -)} is the same as
\code{C(-, a)}.
There is an interesting twist to representability. Remember that
@@ -370,7 +370,7 @@ \section{Challenges}\label{challenges}
\tightlist
\item
Show that the hom-functors map identity morphisms in \emph{C} to
- corresponding identity functions in \textbf{Set}.
+ corresponding identity functions in $\Set$.
\item
Show that \code{Maybe} is not representable.
\item
diff --git a/src/content/2.5/The Yoneda Lemma.tex b/src/content/2.5/The Yoneda Lemma.tex
index fd4f4a57..9f9234d5 100644
--- a/src/content/2.5/The Yoneda Lemma.tex
+++ b/src/content/2.5/The Yoneda Lemma.tex
@@ -13,10 +13,10 @@
group of some set).
The setting for the Yoneda lemma is an arbitrary category \emph{C}
-together with a functor \code{F} from \emph{C} to \textbf{Set}. We've
-seen in the previous section that some \textbf{Set}-valued functors are
+together with a functor \code{F} from \emph{C} to $\Set$. We've
+seen in the previous section that some $\Set$-valued functors are
representable, that is isomorphic to a hom-functor. The Yoneda lemma
-tells us that all \textbf{Set}-valued functors can be obtained from
+tells us that all $\Set$-valued functors can be obtained from
hom-functors through natural transformations, and it explicitly
enumerates all such transformations.
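For a fixed functor, this statement can be rendered in Haskell with a rank-2 type. A sketch for \code{F} the list functor and \code{a = Int} (illustrative names, not the book's):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Sketch of Nat(C(a, -), F) ≅ F a for F = [] and a = Int: a natural
-- transformation is a polymorphic function, and it is fully determined
-- by its value at id :: Int -> Int.
toNat :: [Int] -> (forall x. (Int -> x) -> [x])
toNat fa = \f -> map f fa

fromNat :: (forall x. (Int -> x) -> [x]) -> [Int]
fromNat alpha = alpha id
```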
@@ -27,7 +27,7 @@
connected to it through a morphism. The more arrows between objects in
the source and the target categories there are, the more constraints you
have for transporting the components of natural transformations.
-\textbf{Set} happens to be a very arrow-rich category.
+$\Set$ happens to be a very arrow-rich category.
The Yoneda lemma tells us that a natural transformation between a
hom-functor and any other functor \code{F} is completely determined by
@@ -41,11 +41,11 @@
also seen that it maps any morphism \code{f} from \code{x} to
\code{y} to \code{C(a, f)}.
-The second functor is an arbitrary \textbf{Set}-valued functor
+The second functor is an arbitrary $\Set$-valued functor
\code{F}.
Let's call the natural transformation between these two functors
-\code{α}. Because we are operating in \textbf{Set}, the components of
+\code{α}. Because we are operating in $\Set$, the components of
the natural transformation, like \code{αx} or \code{αy}, are just
regular functions between sets:
@@ -127,7 +127,7 @@
Nat(C(a, -), F) \ensuremath{\cong} F a
\end{Verbatim}
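In Haskell terms, this isomorphism can be sketched as a pair of inverse functions (a minimal sketch, assuming the `RankNTypes` extension; a natural transformation from the hom-functor `((->) a)` to a functor `f` is a rank-2 polymorphic function):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Yoneda lemma, Haskell sketch: Nat(C(a, -), F) ≅ F a.
-- toYoneda builds the natural transformation from a value of f a;
-- fromYoneda recovers that value by evaluating the transformation at id.
toYoneda :: Functor f => f a -> (forall x. (a -> x) -> f x)
toYoneda fa g = fmap g fa

fromYoneda :: (forall x. (a -> x) -> f x) -> f a
fromYoneda alpha = alpha id
```

Evaluating at `id` is exactly the "nucleation site" described below: the single value from which the whole transformation crystallizes.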
Or, if we use the notation \code{{[}C, Set{]}} for the functor
-category between \emph{C} and \textbf{Set}, the set of natural
+category between \emph{C} and $\Set$, the set of natural
transformations is just a hom-set in that category, and we can write:
\begin{Verbatim}[commandchars=\\\{\}]
@@ -140,7 +140,7 @@
thing is that the whole natural transformation crystallizes from just
one nucleation site: the value we assign to it at \code{id\textsubscript{a}}. It
spreads from that point following the naturality condition. It floods
-the image of \emph{C} in \textbf{Set}. So let's first consider what the
+the image of \emph{C} in $\Set$. So let's first consider what the
image of \emph{C} is under \code{C(a, -)}.
Let's start with the image of \code{a} itself. Under the hom-functor
@@ -231,17 +231,17 @@
transformation: it can only be \code{absurd}.
One way of understanding the Yoneda lemma is to realize that natural
-transformations between \textbf{Set}-valued functors are just families
+transformations between $\Set$-valued functors are just families
of functions, and functions are in general lossy. A function may
collapse information and it may cover only parts of its codomain. The
only functions that are not lossy are the ones that are invertible ---
the isomorphisms. It follows then that the best structure-preserving
-\textbf{Set}-valued functors are the representable ones. They are either
+$\Set$-valued functors are the representable ones. They are either
the hom-functors or the functors that are naturally isomorphic to
hom-functors. Any other functor \code{F} is obtained from a
hom-functor through a lossy transformation. Such a transformation may
not only lose information, but it may also cover only a small part of
-the image of the functor \code{F} in \textbf{Set}.
+the image of the functor \code{F} in $\Set$.
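As a concrete illustration (a sketch, not from the text), here is a lossy natural transformation from the hom-functor `((->) Bool)` to `Maybe`: it evaluates its argument at a single point, so it both collapses information (the value at `False` is forgotten) and misses part of the codomain (it never produces `Nothing`):

```haskell
-- A lossy natural transformation alpha :: C(Bool, -) -> Maybe.
-- By the Yoneda lemma it corresponds to the element (Just True)
-- of (Maybe Bool): the value assigned to the transformation at id.
atTrue :: (Bool -> x) -> Maybe x
atTrue h = Just (h True)
```

Naturality holds because `atTrue (g . h) = Just (g (h True)) = fmap g (atTrue h)`.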
\section{Yoneda in Haskell}\label{yoneda-in-haskell}
@@ -344,7 +344,7 @@ \section{Co-Yoneda}\label{co-yoneda}
Equivalently, we can derive the co-Yoneda lemma by fixing the target
object of our hom-functors instead of the source. We get the
-contravariant hom-functor from \emph{C} to \textbf{Set}:
+contravariant hom-functor from \emph{C} to $\Set$:
\code{C(-, a)}. The contravariant version of the Yoneda lemma
establishes a one-to-one correspondence between natural transformations
from this functor to any other contravariant functor \code{F} and the
diff --git a/src/content/2.6/Yoneda Embedding.tex b/src/content/2.6/Yoneda Embedding.tex
index 44542eab..32f7a28b 100644
--- a/src/content/2.6/Yoneda Embedding.tex
+++ b/src/content/2.6/Yoneda Embedding.tex
@@ -1,11 +1,11 @@
\lettrine[lhang=0.17]{W}{e've seen previously} that, when we fix an object \code{a} in the
category \emph{C}, the mapping \code{C(a, -)} is a (covariant)
-functor from \emph{C} to \textbf{Set}.
+functor from \emph{C} to $\Set$.
\begin{Verbatim}[commandchars=\\\{\}]
x -> C(a, x)
\end{Verbatim}
-(The codomain is \textbf{Set} because the hom-set C(a, x) is a
+(The codomain is $\Set$ because the hom-set C(a, x) is a
\emph{set}.) We call this mapping a hom-functor --- we have previously
defined its action on morphisms as well.
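In Haskell the covariant hom-functor `C(a, -)` is the reader functor `((->) a)`, and its action on a morphism is post-composition (a minimal sketch; the `HomFrom` wrapper is a hypothetical name used only for illustration):

```haskell
-- The hom-functor C(a, -) in Haskell: an object x is mapped to the
-- function type (a -> x), and a morphism f :: x -> y is lifted to
-- post-composition (f .).
newtype HomFrom a x = HomFrom { runHomFrom :: a -> x }

instance Functor (HomFrom a) where
  fmap f (HomFrom h) = HomFrom (f . h)
```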
@@ -20,7 +20,7 @@
categories in
\hyperref[natural-transformations]{Natural
Transformations}). Let's use the notation \code{{[}C, Set{]}} for the
-functor category from \emph{C} to \textbf{Set}. You may also recall that
+functor category from \emph{C} to $\Set$. You may also recall that
hom-functors are the prototypical
\hyperref[chap-representable-functors]{representable
functors}.
@@ -109,14 +109,14 @@ \section{The Embedding}\label{the-embedding}
This is a very useful result because mathematicians know a lot about the
category of functors, especially functors whose codomain is
-\textbf{Set}. We can get a lot of insight about an arbitrary category
+$\Set$. We can get a lot of insight about an arbitrary category
\emph{C} by embedding it in the functor category.
Of course there is a dual version of the Yoneda embedding, sometimes
called the co-Yoneda embedding. Observe that we could have started by
fixing the target object (rather than the source object) of each
hom-set, \code{C(-, a)}. That would give us a contravariant
-hom-functor. Contravariant functors from \emph{C} to \textbf{Set} are
+hom-functor. Contravariant functors from \emph{C} to $\Set$ are
our familiar presheaves (see, for instance,
\hyperref[limits-and-colimits]{Limits
and Colimits}). The co-Yoneda embedding defines the embedding of a
@@ -259,7 +259,7 @@ \section{Preorder Example}\label{preorder-example}
\section{Naturality}\label{naturality}
The Yoneda lemma establishes the isomorphism between the set of natural
-transformations and an object in \textbf{Set}. Natural transformations
+transformations and an object in $\Set$. Natural transformations
are morphisms in the functor category \code{{[}C, Set{]}}. The set of
natural transformations between any two functors is a hom-set in that
category. The Yoneda lemma is the isomorphism:
@@ -276,7 +276,7 @@ \section{Naturality}\label{naturality}
Let's think for a moment what this means. A natural isomorphism is an
invertible \newterm{natural transformation} between two functors. And
indeed, the right hand side of our isomorphism is a functor. It's a
-functor from \code{{[}C, Set{]} × C} to \textbf{Set}. Its action on
+functor from \code{{[}C, Set{]} × C} to $\Set$. Its action on
a pair \code{(F, a)} is a set --- the result of evaluating the
functor \code{F} at the object \code{a}. This is called the
evaluation functor.
diff --git a/src/content/3.10/Ends and Coends.tex b/src/content/3.10/Ends and Coends.tex
index a4132d11..5ceb17c6 100644
--- a/src/content/3.10/Ends and Coends.tex
+++ b/src/content/3.10/Ends and Coends.tex
@@ -131,7 +131,7 @@ \section{Ends}\label{ends}
profunctor. Instead of a cone, we have a wedge. The base of a wedge is
formed by diagonal elements of a profunctor \code{p}. The apex of the
wedge is an object (here, a set, since we are considering
-\textbf{Set}-valued profunctors), and the sides are a family of
+$\Set$-valued profunctors), and the sides are a family of
functions mapping the apex to the sets in the base. You may think of
this family as one polymorphic function --- a function that's
polymorphic in its return type:
@@ -241,8 +241,8 @@ \section{Ends}\label{ends}
diamond when we realize that \code{\ensuremath{\Delta}\textsubscript{c}} lifts all morphisms to one
identity function.
-Ends can also be defined for target categories other than \textbf{Set},
-but here we'll only consider \textbf{Set}-valued profunctors and their
+Ends can also be defined for target categories other than $\Set$,
+but here we'll only consider $\Set$-valued profunctors and their
ends.
\section{Ends as Equalizers}\label{ends-as-equalizers}
diff --git a/src/content/3.11/Kan Extensions.tex b/src/content/3.11/Kan Extensions.tex
index 3a9e7b46..18028034 100644
--- a/src/content/3.11/Kan Extensions.tex
+++ b/src/content/3.11/Kan Extensions.tex
@@ -346,12 +346,12 @@ \section{Kan Extensions as Ends}\label{kan-extensions-as-ends}
The real power of Kan extensions comes from the fact that they can be
calculated using ends (and coends). For simplicity, we'll restrict our
attention to the case where the target category \emph{C} is
-\textbf{Set}, but the formulas can be extended to any category.
+$\Set$, but the formulas can be extended to any category.
Let's revisit the idea that a Kan extension can be used to extend the
action of a functor outside of its original domain. Suppose that
\code{K} embeds \emph{I} inside \emph{A}. Functor \code{D} maps
-\emph{I} to \textbf{Set}. We could just say that for any object
+\emph{I} to $\Set$. We could just say that for any object
\code{a} in the image of \code{K}, that is \code{a = K i}, the
extended functor maps \code{a} to \code{D i}. The problem is, what
to do with those objects in \emph{A} that are outside of the image of
diff --git a/src/content/3.12/Enriched Categories.tex b/src/content/3.12/Enriched Categories.tex
index 09dc18c4..6506a644 100644
--- a/src/content/3.12/Enriched Categories.tex
+++ b/src/content/3.12/Enriched Categories.tex
@@ -10,7 +10,7 @@
form a set. If they don't form a set, we have to rethink a few
definitions. In particular, what does it mean to compose morphisms if we
can't even pick them from a set? The solution is to bootstrap ourselves
-by replacing hom-sets, which are objects in \textbf{Set}, with
+by replacing hom-sets, which are objects in $\Set$, with
\newterm{objects} from some other category \emph{V}. The difference is
that, in general, objects don't have elements, so we are no longer
allowed to talk about individual morphisms. We have to define all
@@ -38,7 +38,7 @@ \section{Why Monoidal Category?}\label{why-monoidal-category}
When constructing an enriched category we have to keep in mind that we
should be able to recover the usual definitions when we replace the
-monoidal category with \textbf{Set} and hom-objects with hom-sets. The
+monoidal category with $\Set$ and hom-objects with hom-sets. The
best way to accomplish this is to start with the usual definitions and
keep reformulating them in a point-free manner --- that is, without
naming elements of sets.
@@ -153,7 +153,7 @@ \section{Monoidal Category}\label{monoidal-category}
\end{Verbatim}
A monoidal category in which both are defined is called biclosed. An
example of a category that is not biclosed is the category of
-endofunctors in \textbf{Set}, with functor composition serving as tensor
+endofunctors in $\Set$, with functor composition serving as tensor
product. That's the category we used to define monads.
\section{Enriched Category}\label{enriched-category}
@@ -419,11 +419,11 @@ \section{Self Enrichment}\label{self-enrichment}
We know that this hom-set contains the left identity \code{λ\textsubscript{a}}. We can
define \code{j\textsubscript{a}} as its image under the adjunction.
-A practical example of self-enrichment is the category \textbf{Set} that
+A practical example of self-enrichment is the category $\Set$ that
serves as the prototype for types in programming languages. We've seen
before that it's a closed monoidal category with respect to cartesian
-product. In \textbf{Set}, the hom-set between any two sets is itself a
-set, so it's an object in \textbf{Set}. We know that it's isomorphic to
+product. In $\Set$, the hom-set between any two sets is itself a
+set, so it's an object in $\Set$. We know that it's isomorphic to
the exponential set, so the external and the internal homs are
equivalent. Now we also know that, through self-enrichment, we can use
the exponential set as the hom-object and express composition in terms
@@ -431,18 +431,18 @@ \section{Self Enrichment}\label{self-enrichment}
\section{Relation to 2-Categories}\label{relation-to-2-categories}
-I talked about 2-categories in the context of \textbf{Cat}, the category
+I talked about 2-categories in the context of $\Cat$, the category
of (small) categories. The morphisms between categories are functors,
but there is an additional structure: natural transformations between
functors. In a 2-category, the objects are often called zero-cells;
morphisms, 1-cells; and morphisms between morphisms, 2-cells. In
-\textbf{Cat} the 0-cells are categories, 1-cells are functors, and
+$\Cat$ the 0-cells are categories, 1-cells are functors, and
2-cells are natural transformations.
But notice that functors between two categories form a category too; so,
-in \textbf{Cat}, we really have a \newterm{hom-category} rather than a
-hom-set. It turns out that, just like \textbf{Set} can be treated as a
-category enriched over \textbf{Set}, \textbf{Cat} can be treated as a
-category enriched over \textbf{Cat}. Even more generally, just like
-every category can be treated as enriched over \textbf{Set}, every
-2-category can be considered enriched over \textbf{Cat}.
\ No newline at end of file
+in $\Cat$, we really have a \newterm{hom-category} rather than a
+hom-set. It turns out that, just like $\Set$ can be treated as a
+category enriched over $\Set$, $\Cat$ can be treated as a
+category enriched over $\Cat$. Even more generally, just like
+every category can be treated as enriched over $\Set$, every
+2-category can be considered enriched over $\Cat$.
\ No newline at end of file
diff --git a/src/content/3.13/Topoi.tex b/src/content/3.13/Topoi.tex
index 22a79dc6..8222619e 100644
--- a/src/content/3.13/Topoi.tex
+++ b/src/content/3.13/Topoi.tex
@@ -25,7 +25,7 @@
things like continuity. And the conventional approach to topology is,
you guessed it, through set theory. In particular, the notion of a
subset is central to topology. Not surprisingly, category theorists
-generalized this idea to categories other than \textbf{Set}. The type of
+generalized this idea to categories other than $\Set$. The type of
category that has just the right properties to serve as a replacement
for set theory is called a \newterm{topos} (plural: topoi), and it
provides, among other things, a generalized notion of a subset.
@@ -128,7 +128,7 @@ \section{Subobject Classifier}\label{subobject-classifier}
define what a subobject is, but also define the special object
\code{Ω} without talking about elements. The idea is that we want the
morphism \code{true} to represent a ``generic'' subobject. In
-\textbf{Set}, it picks a single-element subset from a two-element set
+$\Set$, it picks a single-element subset from a two-element set
\code{Ω}. This is as generic as it gets. It's clearly a proper subset,
because \code{Ω} has one more element that's \emph{not} in that
subset.
@@ -137,7 +137,7 @@ \section{Subobject Classifier}\label{subobject-classifier}
from the terminal object to the \emph{classifying object} \code{Ω}.
But we have to define the classifying object. We need a universal
property that links this object to the characteristic function. It turns
-out that, in \textbf{Set}, the pullback of \code{true} along the
+out that, in $\Set$, the pullback of \code{true} along the
characteristic function \code{χ} defines both the subset \code{a}
and the injective function that embeds it in \code{b}. Here's the
pullback diagram:
@@ -162,7 +162,7 @@ \section{Subobject Classifier}\label{subobject-classifier}
\code{f} is injective.
This pullback diagram can be used to define the classifying object in
-categories other than \textbf{Set}. Such a category must have a terminal
+categories other than $\Set$. Such a category must have a terminal
object, which will let us define the monomorphism \code{true}. It must
also have pullbacks --- the actual requirement is that it must have all
finite limits (a pullback is an example of a finite limit). Under those
@@ -220,14 +220,14 @@ \section{Topos}\label{topos}
Has a subobject classifier \code{Ω}.
\end{enumerate}
-This set of properties makes a topos a shoe-in for \textbf{Set} in most
+This set of properties makes a topos a shoo-in for $\Set$ in most
applications. It also has additional properties that follow from its
definition. For instance, a topos has all finite colimits, including the
initial object.
It would be tempting to define the subobject classifier as a coproduct
(sum) of two copies of the terminal object --- that's what it is in
-\textbf{Set}--- but we want to be more general than that. Topoi in which
+$\Set$ --- but we want to be more general than that. Topoi in which
this is true are called Boolean.
\section{Topoi and Logic}\label{topoi-and-logic}
diff --git a/src/content/3.14/Lawvere Theories.tex b/src/content/3.14/Lawvere Theories.tex
index bfbe2cf7..9125ef7b 100644
--- a/src/content/3.14/Lawvere Theories.tex
+++ b/src/content/3.14/Lawvere Theories.tex
@@ -31,7 +31,7 @@ \section{Universal Algebra}\label{universal-algebra}
universally quantified.
This definition of a universal algebra can be extended to categories
-other than \textbf{Set}, if we replace operations (functions) with
+other than $\Set$, if we replace operations (functions) with
morphisms. Instead of a set, we select an object \code{a} (called a
generic object). A unary operation is just an endomorphism of
\code{a}. But what about other arities (\newterm{arity} is the number of
@@ -216,9 +216,9 @@ \section{Models of Lawvere Theories}\label{models-of-lawvere-theories}
essence of being a monoid. It must be valid for all monoids. A
particular monoid becomes a model of such a theory. A model is defined
as a functor from the Lawvere theory \textbf{L} to the category of sets
-\textbf{Set}. (There are generalizations of Lawvere theories that use
+$\Set$. (There are generalizations of Lawvere theories that use
other categories for models but here I'll just concentrate on
-\textbf{Set}.) Since the structure of \textbf{L} depends heavily on
+$\Set$.) Since the structure of \textbf{L} depends heavily on
products, we require that such a functor preserve finite products. A
model of \textbf{L}, also called the algebra over the Lawvere theory
\textbf{L}, is therefore defined by a functor:
@@ -232,7 +232,7 @@ \section{Models of Lawvere Theories}\label{models-of-lawvere-theories}
products would eliminate most interesting theories.
The preservation of products by models means that the image of
-\code{M} in \textbf{Set} is a sequence of sets generated by powers of
+\code{M} in $\Set$ is a sequence of sets generated by powers of
the set \code{M 1} --- the image of the object \code{1} from
\textbf{L}. Let's call this set \code{a}. (This set is sometimes
called a \newterm{sort}, and such algebra is called single-sorted. There
@@ -243,7 +243,7 @@ \section{Models of Lawvere Theories}\label{models-of-lawvere-theories}
a × a -> a
\end{Verbatim}
As with any functor, it's possible that multiple morphisms in \textbf{L}
-are collapsed to the same function in \textbf{Set}.
+are collapsed to the same function in $\Set$.
Incidentally, the fact that all laws are universally quantified
equalities means that every Lawvere theory has a trivial model: a
@@ -286,12 +286,12 @@ \section{Models of Lawvere Theories}\label{models-of-lawvere-theories}
\textbf{F}\textsuperscript{op}. Such a model is completely determined by
its value at \code{1}, \code{M 1}. Since \code{M 1} can be any
set, there are as many of these models as there are sets in
-\textbf{Set}. Moreover, every morphism in \code{Mod(F\textsuperscript{op}, Set)} (a
+$\Set$. Moreover, every morphism in \code{Mod(F\textsuperscript{op}, Set)} (a
natural transformation between functors \code{M} and \code{N}) is
uniquely determined by its component at \code{M 1}. Conversely, every
function \code{M 1 -> N 1} induces a natural
transformation between the two models \code{M} and \code{N}.
-Therefore \code{Mod(F\textsuperscript{op}, Set)} is equivalent to \textbf{Set}.
+Therefore \code{Mod(F\textsuperscript{op}, Set)} is equivalent to $\Set$.
\section{The Theory of Monoids}\label{the-theory-of-monoids}
@@ -373,7 +373,7 @@ \section{Lawvere Theories and Monads}\label{lawvere-theories-and-monads}
Mod(L, Set) -> Mod(F\textsuperscript{op}, Set)
\end{Verbatim}
But, as we discussed, the category of models of
-\textbf{F}\textsubscript{op} is equivalent to \textbf{Set}, so we get
+\textbf{F}\textsuperscript{op} is equivalent to $\Set$, so we get
the forgetful functor:
\begin{Verbatim}[commandchars=\\\{\}]
@@ -410,7 +410,7 @@ \section{Lawvere Theories and Monads}\label{lawvere-theories-and-monads}
\end{Verbatim}
Together, the forgetful and the free functor define a
\hyperref[monads-categorically]{monad}
-\code{T = U◦F} on \textbf{Set}. Thus every Lawvere theory generates
+\code{T = U◦F} on $\Set$. Thus every Lawvere theory generates
a monad.
It turns out that the category of
@@ -424,7 +424,7 @@ \section{Lawvere Theories and Monads}\label{lawvere-theories-and-monads}
The connection between monads and Lawvere theories doesn't go both ways,
though. Only finitary monads lead to Lawvere theories. A finitary monad
-is based on a finitary functor. A finitary functor on \textbf{Set} is
+is based on a finitary functor. A finitary functor on $\Set$ is
fully determined by its action on finite sets. Its action on an
arbitrary set \code{a} can be evaluated using the following coend:
@@ -446,7 +446,7 @@ \section{Lawvere Theories and Monads}\label{lawvere-theories-and-monads}
\begin{Verbatim}[commandchars=\\\{\}]
T\textsubscript{L} a = ∫ \textsuperscript{n} a\textsuperscript{n} × L(n, 1)
\end{Verbatim}
-Conversely, given any finitary monad \code{T} on \textbf{Set}, we can
+Conversely, given any finitary monad \code{T} on $\Set$, we can
construct a Lawvere theory. We start by constructing a Kleisli category
for \code{T}. As you may remember, a morphism in a Kleisli category
from \code{a} to \code{b} is given by a morphism in the underlying
@@ -526,7 +526,7 @@ \section{Monads as Coends}\label{monads-as-coends}
by the functor \code{I\textsubscript{L}}).
Let's check the functoriality of \code{L(m, 1)} as a functor from
-\textbf{F} to \textbf{Set}. We want to lift a function
+\textbf{F} to $\Set$. We want to lift a function
\code{f :: m -> n}, so our goal is to implement a
function from \code{L(m, 1)} to \code{L(n, 1)}. Corresponding to
the function \code{f} there is a morphism in \textbf{L} from
@@ -695,8 +695,8 @@ \section{Challenges}\label{challenges}
binary operations can be generated using the corresponding Kleisli
arrows.
\item
- \textbf{FinSet} is a subcategory of \textbf{Set} and there is a
- functor that embeds it in \textbf{Set}. Any functor on \textbf{Set}
+ \textbf{FinSet} is a subcategory of $\Set$ and there is a
+ functor that embeds it in $\Set$. Any functor on $\Set$
can be restricted to \textbf{FinSet}. Show that a finitary functor is
the left Kan extension of its own restriction.
\end{enumerate}
diff --git a/src/content/3.15/Monads, Monoids, and Categories.tex b/src/content/3.15/Monads, Monoids, and Categories.tex
index 65805aa1..a1bdc479 100644
--- a/src/content/3.15/Monads, Monoids, and Categories.tex
+++ b/src/content/3.15/Monads, Monoids, and Categories.tex
@@ -17,10 +17,10 @@ \section{Bicategories}\label{bicategories}
are used to defining sets in terms of elements. An empty set has no
elements. A singleton set has one element. A cartesian product of two
sets is a set of pairs, and so on. But when talking about the category
-\textbf{Set} I asked you to forget about the contents of sets and
+$\Set$ I asked you to forget about the contents of sets and
instead concentrate on morphisms (arrows) between them. You were
allowed, from time to time, to peek under the covers to see what a
-particular universal construction in \textbf{Set} described in terms of
+particular universal construction in $\Set$ described in terms of
elements. The terminal object turned out to be a set with one element,
and so on. But these were just sanity checks.
@@ -31,7 +31,7 @@ \section{Bicategories}\label{bicategories}
we forfeit the information about its action on the internals of a
category (its objects and morphisms), just like we forfeit the
information about the action of a function on elements of a set when we
-treat it as an arrow in \textbf{Set}. But functors between any two
+treat it as an arrow in $\Set$. But functors between any two
categories also form a category. This time you are asked to consider
something that was an arrow in one category to be an object in another.
In a functor category functors are objects and natural transformations
@@ -51,7 +51,7 @@ \section{Bicategories}\label{bicategories}
\end{figure}
\noindent
-The category of categories \textbf{Cat} is an immediate example. We have
+The category of categories $\Cat$ is an immediate example. We have
categories as 0-cells, functors as 1-cells, and natural transformations
as 2-cells. The laws of a 2-category tell us that 1-cells between any
two 0-cells form a category (in other words, \code{C(a, b)} is a
@@ -73,7 +73,7 @@ \section{Bicategories}\label{bicategories}
the corresponding category laws.
Let's see what this means in our canonical example of a 2-category
-\textbf{Cat}. The hom-category \code{Cat(a, a)} is the category of
+$\Cat$. The hom-category \code{Cat(a, a)} is the category of
endofunctors on \code{a}. Endofunctor composition plays the role of a
tensor product in it. The identity functor is the unit with respect to
this product. We've seen before that endofunctors form a monoidal
@@ -202,7 +202,7 @@ \section{Monads}\label{monads}
By now you should be pretty familiar with the definition of a monad as a
monoid in the category of endofunctors. Let's revisit this definition
with the new understanding that the category of endofunctors is just one
-small hom-category of endo-1-cells in the bicategory \textbf{Cat}. We
+small hom-category of endo-1-cells in the bicategory $\Cat$. We
know it's a monoidal category: the tensor product comes from the
composition of endofunctors. A monoid is defined as an object in a
monoidal category --- here it will be an endofunctor \code{T} ---
@@ -251,7 +251,7 @@ \section{Monads}\label{monads}
\noindent
That's a much more general definition of a monad using only 0-cells,
1-cells, and 2-cells. It reduces to the usual monad when applied to the
-bicategory \textbf{Cat}. But let's see what happens in other
+bicategory $\Cat$. But let's see what happens in other
bicategories.
Let's construct a monad in \textbf{Span}. We pick a 0-cell, which is a
diff --git a/src/content/3.2/Adjunctions.tex b/src/content/3.2/Adjunctions.tex
index cbcd424c..0be28638 100644
--- a/src/content/3.2/Adjunctions.tex
+++ b/src/content/3.2/Adjunctions.tex
@@ -67,9 +67,9 @@ \section{Adjunction and Unit/Counit
can indeed use the equality of elements of a set to equality-compare
objects.
-But, remember, \textbf{Cat} is really a 2-category. Hom-sets in a
+But, remember, $\Cat$ is really a 2-category. Hom-sets in a
2-category have additional structure --- there are 2-morphisms acting
-between 1-morphisms. In \textbf{Cat}, 1-morphisms are functors, and
+between 1-morphisms. In $\Cat$, 1-morphisms are functors, and
2-morphisms are natural transformations. So it's more natural (can't
avoid this pun!) to consider natural isomorphisms as substitutes for
equality when talking about functors.
@@ -311,7 +311,7 @@ \section{Adjunctions and Hom-Sets}\label{adjunctions-and-hom-sets}
Naturality means that the source \code{d} can be varied smoothly
across \emph{D}; and the target \code{c}, across \emph{C}. More
precisely, we have a natural transformation \code{φ} between the
-following two (covariant) functors from \emph{C} to \textbf{Set}. Here's
+following two (covariant) functors from \emph{C} to $\Set$. Here's
the action of these functors on objects:
\begin{Verbatim}[commandchars=\\\{\}]
@@ -406,12 +406,12 @@ \section{Adjunctions and Hom-Sets}\label{adjunctions-and-hom-sets}
functor}. The reason for this is that, to the first approximation, we
can treat the category of Haskell types as the category of sets.
-When the right category \emph{D} is \textbf{Set}, the right adjoint
-\code{R} is a functor from \emph{C} to \textbf{Set}. Such a functor is
+When the right category \emph{D} is $\Set$, the right adjoint
+\code{R} is a functor from \emph{C} to $\Set$. Such a functor is
representable if we can find an object \code{rep} in \emph{C} such
that the hom-functor \code{C(rep, \_)} is naturally isomorphic to
\code{R}. It turns out that, if \code{R} is the right adjoint of
-some functor \code{L} from \textbf{Set} to \emph{C}, such an object
+some functor \code{L} from $\Set$ to \emph{C}, such an object
always exists --- it's the image of the singleton set \code{()} under
\code{L}:
diff --git a/src/content/3.3/Free-Forgetful Adjunctions.tex b/src/content/3.3/Free-Forgetful Adjunctions.tex
index c230c28a..e40427a4 100644
--- a/src/content/3.3/Free-Forgetful Adjunctions.tex
+++ b/src/content/3.3/Free-Forgetful Adjunctions.tex
@@ -6,7 +6,7 @@
sets, have no internal structure --- they have no elements. Still, those
objects often carry the memory of sets, in the sense that there is a
mapping --- a functor --- from a given category \emph{C} to
-\textbf{Set}. A set corresponding to some object in \emph{C} is called
+$\Set$. A set corresponding to some object in \emph{C} is called
its \newterm{underlying set}.
Monoids are such objects that have underlying sets --- sets of elements.
@@ -140,7 +140,7 @@ \section{Some Intuitions}\label{some-intuitions}
together multiple elements of a set. Also, their image may cover only
part of their codomain.
-An ``average'' hom-set in \textbf{Set} will contain a whole spectrum of
+An ``average'' hom-set in $\Set$ will contain a whole spectrum of
functions starting with the ones that are least lossy (e.g., injections
or, possibly, isomorphisms) and ending with constant functions that
collapse the whole domain to a single element (if there is one).
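The two ends of that spectrum can be sketched in Haskell (illustrative names, not from the text):

```haskell
-- Two extremes of the "lossiness" spectrum in Set:
-- an injection loses nothing, a constant function loses everything.
inject :: Int -> Either Int Bool
inject = Left          -- injective: distinct inputs stay distinct

collapse :: Int -> ()
collapse _ = ()        -- constant: the whole domain maps to one element
```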
@@ -148,7 +148,7 @@ \section{Some Intuitions}\label{some-intuitions}
I tend to think of morphisms in an arbitrary category as being lossy
too. It's just a mental model, but it's a useful one, especially when
thinking of adjunctions --- in particular those in which one of the
-categories is \textbf{Set}.
+categories is $\Set$.
Formally, we can only speak of morphisms that are invertible
(isomorphisms) or non-invertible. It's that latter kind that may be
@@ -166,7 +166,7 @@ \section{Some Intuitions}\label{some-intuitions}
When we apply a forgetful functor \code{U} to an object \code{c} in
\emph{C}, we think of it as revealing the ``internal structure'' of
-\code{c}. In fact, if \emph{D} is \textbf{Set} we think of \code{U}
+\code{c}. In fact, if \emph{D} is $\Set$ we think of \code{U}
as \emph{defining} the internal structure of \code{c} --- its
underlying set. (In an arbitrary category, we can't talk about the
internals of an object other than through its connections to other
diff --git a/src/content/3.5/Monads and Effects.tex b/src/content/3.5/Monads and Effects.tex
index 22340eae..8f2679c0 100644
--- a/src/content/3.5/Monads and Effects.tex
+++ b/src/content/3.5/Monads and Effects.tex
@@ -97,7 +97,7 @@ \subsection{Partiality}\label{partiality}
may be potentially non-terminating, all types in Haskell are assumed to
be lifted. This is why we often talk about the category \textbf{Hask} of
Haskell (lifted) types and functions rather than the simpler
-\textbf{Set}. It is not clear, though, that \textbf{Hask} is a real
+$\Set$. It is not clear, though, that \textbf{Hask} is a real
category (see this
\href{http://math.andrej.com/2016/08/06/hask-is-not-a-category/}{Andrej
Bauer post}).
diff --git a/src/content/3.6/Monads Categorically.tex b/src/content/3.6/Monads Categorically.tex
index afce539c..d37600f8 100644
--- a/src/content/3.6/Monads Categorically.tex
+++ b/src/content/3.6/Monads Categorically.tex
@@ -248,7 +248,7 @@ \section{Monoidal Categories}\label{monoidal-categories}
\begin{Verbatim}[commandchars=\\\{\}]
eta :: () -> m
\end{Verbatim}
-The singleton is the terminal object in \textbf{Set}, so it's natural to
+The singleton is the terminal object in $\Set$, so it's natural to
generalize this definition to any category that has a terminal object
\code{t}:
@@ -451,7 +451,7 @@ \section{Monads as Monoids}\label{monads-as-monoids}
any two functors can be composed --- the target category of one has to
be the source category of the other. That's just the usual rule of
composition of morphisms --- and, as we know, functors are indeed
-morphisms in the category \textbf{Cat}. But just like endomorphisms
+morphisms in the category $\Cat$. But just like endomorphisms
(morphisms that loop back to the same object) are always composable, so
are endofunctors. For any given category \emph{C}, endofunctors from
\emph{C} to \emph{C} form the functor category \code{{[}C, C{]}}. Its
diff --git a/src/content/3.7/Comonads.tex b/src/content/3.7/Comonads.tex
index 9df0a239..99fadc61 100644
--- a/src/content/3.7/Comonads.tex
+++ b/src/content/3.7/Comonads.tex
@@ -380,7 +380,7 @@ \section{Comonad Categorically}\label{comonad-categorically}
split x = (x, x)
\end{Verbatim}
which shows that in Haskell (and, in general, in the category
-\textbf{Set}) every object is a trivial comonoid.
+$\Set$) every object is a trivial comonoid.
Fortunately there are other more interesting monoidal categories in
which to define comonoids. One of them is the category of endofunctors.
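The trivial comonoid structure mentioned above can be written out in full. A minimal sketch (the names `destroy` and `leftCounit` are mine, for illustration):

```haskell
-- Every Haskell type is a trivial comonoid: the counit discards,
-- the comultiplication duplicates.
destroy :: a -> ()
destroy _ = ()

split :: a -> (a, a)
split x = (x, x)

-- The comonoid laws hold trivially: duplicating a value and then
-- discarding either copy gives back the original value.
leftCounit :: a -> a
leftCounit = snd . (\(x, y) -> (destroy x, y)) . split

main :: IO ()
main = print (leftCounit (42 :: Int))  -- prints 42
```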
@@ -505,7 +505,7 @@ \section{The Store Comonad}\label{the-store-comonad}
set :: a -> s -> a
get :: a -> s
\end{Verbatim}
If \code{a} is a product type, \code{set} could be implemented as
If \code{a} is a product type, \code{set} could be implemented as
setting the field of type \code{s} inside of \code{a} while
returning the modified version of \code{a}. Similarly, \code{get}
could be implemented to read the value of the \code{s} field from
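The \code{set}/\code{get} pair for a product type can be sketched concretely (record and field names are mine, for illustration):

```haskell
-- A concrete instance of the set/get pair, with a = Person, s = Int.
data Person = Person { name :: String, age :: Int }
  deriving Show

-- set :: a -> s -> a: replace the field of type s inside a,
-- returning the modified value.
setAge :: Person -> Int -> Person
setAge p n = p { age = n }

-- get :: a -> s: read the field of type s out of a.
getAge :: Person -> Int
getAge = age

main :: IO ()
main = print (getAge (setAge (Person "Ada" 36) 37))  -- prints 37
```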
diff --git a/src/content/3.8/F-Algebras.tex b/src/content/3.8/F-Algebras.tex
index 9182a2d6..dc5ff810 100644
--- a/src/content/3.8/F-Algebras.tex
+++ b/src/content/3.8/F-Algebras.tex
@@ -9,7 +9,7 @@
μ :: m × m -> m
η :: 1 -> m
\end{Verbatim}
-Here, 1 is the terminal object in \textbf{Set} --- the singleton set.
+Here, 1 is the terminal object in $\Set$ --- the singleton set.
The first function defines multiplication (it takes a pair of elements
and returns their product), the second selects the unit element from
\code{m}. Not every choice of two functions with these signatures
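The μ/η description of a monoid translates directly to Haskell, with the terminal object 1 played by the unit type \code{()}. A minimal sketch using integers under addition (any associative μ with a matching unit η would do):

```haskell
-- A set-theoretic monoid given as a pair of functions (mu, eta):
-- mu is multiplication, eta selects the unit element.
mu :: (Int, Int) -> Int
mu (a, b) = a + b

-- The terminal object 1 is the singleton set, i.e. () in Haskell.
eta :: () -> Int
eta () = 0

main :: IO ()
main = do
  print (mu (mu (1, 2), 3) == mu (1, mu (2, 3)))  -- associativity: True
  print (mu (eta (), 5))                          -- left unit law: prints 5
```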
@@ -34,7 +34,7 @@
\begin{Verbatim}[commandchars=\\\{\}]
m \textsuperscript{m×m + 1}
\end{Verbatim}
-The plus sign stands for the coproduct in \textbf{Set}. We have just
+The plus sign stands for the coproduct in $\Set$. We have just
replaced a pair of functions with a single function --- an element of
the set:
@@ -204,7 +204,7 @@ \section{Recursion}\label{recursion}
up with our \code{Expr}. Notice that \code{Expr} does not depend on
\code{a}. The starting point of our journey doesn't matter, we always
end up in the same place. This is not always true for an arbitrary
-endofunctor in an arbitrary category, but in the category \textbf{Set}
+endofunctor in an arbitrary category, but in the category $\Set$
things are nice.
Of course, this is a hand-waving argument, and I'll make it more
diff --git a/src/content/3.9/Algebras for Monads.tex b/src/content/3.9/Algebras for Monads.tex
index 044696cd..edb10142 100644
--- a/src/content/3.9/Algebras for Monads.tex
+++ b/src/content/3.9/Algebras for Monads.tex
@@ -418,7 +418,7 @@ \section{Lenses}\label{lenses}
coalgs a = Store (set a) (get a)
\end{Verbatim}
Here, \code{a} is a value of type \code{a}. Notice that partially
applied \code{set} is a function \code{s->a}.
We also know that \code{Store s} is a comonad:
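The \code{Store s} comonad the text refers to is the standard one: a function \code{s -> a} paired with a current "position" of type \code{s}. A self-contained sketch, written without the \code{Comonad} type class:

```haskell
-- The Store comonad: an accessor function together with a focus.
data Store s a = Store (s -> a) s

-- extract runs the accessor at the current focus.
extract :: Store s a -> a
extract (Store f s) = f s

-- duplicate rebuilds a whole Store at every possible focus.
duplicate :: Store s a -> Store s (Store s a)
duplicate (Store f s) = Store (Store f) s

main :: IO ()
main = print (extract (Store (+ 1) (41 :: Int)))  -- prints 42
```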
diff --git a/src/coverpage.svg b/src/coverpage.svg
index d5b81bdb..bc330e0b 100644
--- a/src/coverpage.svg
+++ b/src/coverpage.svg
@@ -43,14 +43,14 @@
inkscape:pageopacity="0"
inkscape:pageshadow="2"
inkscape:zoom="0.98994951"
- inkscape:cx="440.0701"
+ inkscape:cx="200.15887"
inkscape:cy="434.61539"
inkscape:document-units="px"
inkscape:current-layer="layer3"
showgrid="false"
inkscape:window-width="1920"
inkscape:window-height="1114"
- inkscape:window-x="0"
+ inkscape:window-x="1440"
inkscape:window-y="0"
inkscape:window-maximized="1"
height="1020px"
@@ -164,14 +164,14 @@
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:40px;line-height:139.99999762%;font-family:Alegreya;-inkscape-font-specification:'Alegreya, Normal';text-align:start;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1"
id="tspan7240">Bartosz Milewski
+ sodipodi:absref="content/1.3/images/monoid.jpg"
+ xlink:href="content/1.3/images/monoid.jpg"
+ y="406.50101"
+ x="173.54599"
+ id="pigs"
+ height="352.50055"
+ width="292.90802"
+ style="display:inline;enable-background:new" />