Finding correct (lens) laws

Posted on 2018-12-12 by Oleg Grenrus (tags: lens, agda)

#Introduction

In this post I’ll try to convince you that the optic laws (as stated in the lens package documentation) are the right ones. For example, recall the Lens laws (the law names come from an alternative naming of the operations: view = get and set = put¹):

  • GetPut: You get back what you put in:

    view l (set l v s) ≡ v
  • PutGet: Putting back what you got doesn’t change anything:

    set l (view l s) s ≡ s
  • PutPut: Setting twice is the same as setting once:

    set l v' (set l v s) ≡ set l v' s

Why are there these three laws (and why three)? How do we know these three laws are "enough", and that they don’t require too much?

The solution is to consider a representation which is obviously a Lens (or Prism or ...), but for which also the "laws" are obvious. Then, when we write conversion functions between this and the more concrete representation, we’ll find out which laws are needed.

The Iso, an isomorphism, is the simplest example. The concrete representation is

data Iso s a = Iso { to :: s -> a, from :: a -> s }

But an obvious definition in the math world is (citing Awodey 2010): In any category \mathcal{C} , an arrow f : S \to A is called an isomorphism if there is an arrow g : A \to S in \mathcal{C} such that g \circ f \equiv 1_S and f \circ g \equiv 1_A . We write S \cong A if there exists an isomorphism between S and A .

Here, the mathematical condition tells precisely which laws are required, namely, for the (RecordWildCards-destructured) Iso fields:

to . from ≡ id
from . to ≡ id
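
As a quick sanity check (my own example, not from the post’s sources), here is a concrete Iso written with the record above; both laws hold by simple computation:

-- swapping the components of a pair is an isomorphism
swapIso :: Iso (a, b) (b, a)
swapIso = Iso
    { to   = \(a, b) -> (b, a)
    , from = \(b, a) -> (a, b)
    }

-- to . from ≡ id:   to (from (b, a)) = to (a, b) = (b, a)
-- from . to ≡ id:   from (to (a, b)) = from (b, a) = (a, b)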

The isomorphism case is trivial. In this post I’ll go through more interesting optics:

  • Lens and Prism. Their laws should not be surprising to people familiar with the lens library. However, we discover a new law for Prism.

  • Affine (traversal). I discussed affine traversals in a post about them and gave laws in Glassery. Those laws were written based purely on intuition, which we’ll now verify.

  • Achromatic lens. There are my quickly written notes from March 2018. It turns out I missed a few bits.

  • Grate. Russell O’Connor gives laws for a van Laarhoven Grate (a two-functor variant) in Grate: A new kind of Optic. We formulate them for "normal" Grates: grate :: ((s -> a) -> b) -> t. In the process we also learn more about Grate.

At the end we discuss Setters a little, and conclude the post with general thoughts.

Isomorphism lenses by Twan van Laarhoven does the same thing for lenses; we do it for more classes of optics.

#Lens

A Lens is a purely functional reference. The concrete Lens representation is

data Lens s a = Lens { view :: s -> a, set :: s -> a -> s }

Note: here and later we work with so-called monomorphic optics, i.e. optics which don’t allow type changes. Hopefully I will lift this restriction in a future blog post. Also, we swapped the arguments of set (in the lens library set :: ASetter s t a b -> b -> s -> t). This version is easier to do math with (we will partially apply it).

We claim that a Lens should be subject to the following three laws:

  • view (set s a) ≡ a

  • set s (view s) ≡ s

  • set (set s a) a' ≡ set s a'
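
For concreteness, here is a small example (mine, not from the post’s sources) using the record above; checking the three laws by hand is a good warm-up:

-- a lens focusing on the first component of a pair
fstLens :: Lens (a, b) a
fstLens = Lens
    { view = fst
    , set  = \(_, b) a -> (a, b)
    }
-- the "leftovers" are the second component: (a, b) ≅ b × a,
-- foreshadowing the existential formulation below.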

Let’s show that the laws are both sound (not asking too much) and complete (asking enough). For that we’ll need another representation of lenses, an existential formulation:

\quad \mathsf{Lens}\;S\;A = \exists C, S \cong C \times A

This formulation isn’t novel (e.g. Russell O’Connor has a footnote about affine traversals in A representation theorem for second-order functionals), but I haven’t seen it used to highlight the laws. In Lenses Are Exactly the Coalgebras for the Store Comonad the laws of a coalgebra for a comonad are used. As my category theory knowledge is not very deep, I find the "coalgebra for a comonad" formulation intimidating. I don’t know if and how "Lenses are the coalgebras for the costate comonad" generalises to other optics. The existential formulation, on the other hand, works for everything.

#Soundness

We can define the view and set operations using the existential formulation:

\begin{aligned} \quad &\mathsf{view} : S \to A \\ &\mathsf{view}\;s = \mathsf{snd}\; (f\;s) \\ &\mathsf{set} : S \to A \to S \\ &\mathsf{set}\;s\;a = g\; (\mathsf{pair}\;(\mathsf{fst}\;(f\;s))\;a) \end{aligned}

where f : S \to C \times A is the forward direction, and g : C \times A \to S is the backward direction of the isomorphism.
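
The existential formulation can be rendered in Haskell too; a minimal sketch (my own encoding, not an API of the lens library) makes the correspondence with the definitions above concrete:

{-# LANGUAGE ExistentialQuantification #-}

-- some hidden "leftovers" type c, together with both directions
-- of the isomorphism between s and (c, a)
data ExLens s a = forall c. ExLens (s -> (c, a)) ((c, a) -> s)

viewEx :: ExLens s a -> s -> a
viewEx (ExLens f _) s = snd (f s)

setEx :: ExLens s a -> s -> a -> s
setEx (ExLens f g) s a = g (fst (f s), a)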

We can prove that the Lens laws are sound by showing that these definitions satisfy them. For example the first, GetPut law:

\begin{aligned} \quad &\mathsf{view}\; (\mathsf{set}\; s\; a) &&\equivvia{ \text{definitions} } \\ &\mathsf{snd}\;(f\; (g\; (\mathsf{pair}\;(\mathsf{fst}\;(f\;s))\;a))) &&\equivvia{ f \circ g \equiv 1 } \\ &\mathsf{snd}\;(\mathsf{pair}\;(\mathsf{fst}\;(f\;s))\;a) &&\equivvia{ \mathsf{snd}\;(\mathsf{pair}\;x\;y) = y } \\ &a &&\;\blacksquare \end{aligned}

The remaining PutGet and PutPut laws can be proved sound using similar direct calculations. In fact, I mechanized the proofs in this post; the soundness properties are proven mostly by a very simple tactic.

#Completeness

The other direction, completeness, is more interesting. Given \mathsf{view} : S \to A and \mathsf{set} : S \to A \to S , such that the three laws above hold, construct a \mathsf{Lens}\;S\;A .

The key insight is to pick the right "leftovers" type C . I found the process of mechanizing the proofs helpful: you will immediately rule out e.g. a bare S . The A \to S (partially applied \mathsf{set} ) is better, but still too "big". We need to use a smaller, refined type:

\quad C = \{ c : A \to S \mid \exists s : S, c \equiv \mathsf{set}\; s \}

All three options make sense if you consider how lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b can be defined using e.g. the Profunctor encoding. Glassery uses:

lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b
lens getter setter pab = dimap
     -- c = s
     (\s -> (getter s, s))
     (\(b, s) -> setter s b)
     (first' pab)

but you can also write

lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b
lens getter setter pab = dimap
     -- c = b -> t; or refined
     -- c = { f : b -> t | exists s. f = setter s }
     (\s -> (getter s, setter s))
     (\(b, f) -> f b)
     (first' pab)

Anyway, after a C is found, we can define an isomorphism. The arrows are defined as in the latter Profunctor example:

\begin{aligned} \quad &f : S \to C \times A \\ &f\; s = \mathsf{pair}\; (\mathsf{set}\;s \mid \mathsf{exists}\;s\;\mathsf{refl})\; (\mathsf{view}\; s) \\ &g : C \times A \to S \\ &g\;(\mathsf{pair}\; (c \mid -)\; a) = c\; a \end{aligned}
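
In Haskell the same construction looks as follows (a sketch reusing the ExLens encoding from above; the refinement on C is dropped, since vanilla Haskell cannot express it):

-- c is taken to be a -> s, remembering the partially applied set
toExLens :: (s -> a) -> (s -> a -> s) -> ExLens s a
toExLens view set = ExLens
    (\s      -> (set s, view s))
    (\(c, a) -> c a)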

The proof obligation for the C refinement is not really an obligation at all: the equality is definitionally true ( \mathsf{refl} ). However, next we need to prove the isomorphism properties. The backwards direction is easy; it’s essentially the PutGet law:

\begin{aligned} \quad &g\; (f\; s) &&\equivvia{\text{definition of $f$}} \\ &g\; (\mathsf{pair}\; (\mathsf{set}\;s \mid \mathsf{exists}\;s\;\mathsf{refl})\; (\mathsf{view}\; s)) &&\equivvia{ \text{definition of $g$} } \\ &\mathsf{set}\;s\;(\mathsf{view}\;s) &&\equivvia{ \text{\textbf{PutGet}-law} } \\ &s &&\;\blacksquare \end{aligned}

The forward direction is more involved. Here we will use proof irrelevance for the refinement proofs. Those proofs aren’t computationally relevant, so this is justified. I think it’s possible to encode these proofs in Cubical Agda. I also use functional extensionality, which is a theorem in Cubical Agda (it’s used once in the following part of the proof, try to spot it!). We will need to use the PutPut and GetPut laws, one for each half of the product (pair).

\begin{aligned} \quad &f\; (g\; (\mathsf{pair}\;(c \mid \mathsf{exists}\; s\; \mathit{proof})\;a)) &&\equivvia{ \text{definition of $g$} } \\ &f\; (c\; a) &&\equivvia{ \text{rewrite using $\mathit{proof}$} } \\ &f\; (\mathsf{set}\;s\; a) &&\equivvia{ \text{definition of $f$} } \\ &\mathsf{pair}\; (\mathsf{set}\; (\mathsf{set}\;s\; a) \mid -)\;(\mathsf{view}\;(\mathsf{set}\;s\;a)) \\ \end{aligned}

We continue by proving the equality of pair halves separately:

\begin{aligned} \quad &\mathsf{set}\; (\mathsf{set}\;s\; a) &&\equivvia{ \text{\textbf{PutPut}-law} } \\ &\mathsf{set}\;s &&\equivvia{ \text{rewrite using $\mathit{proof}$} } \\ &c &&\;\blacksquare_1 \end{aligned}

and the other half:

\begin{aligned} \quad &\mathsf{view}\;(\mathsf{set}\;s\;a) &&\equivvia{ \text{\textbf{GetPut}-law} } \\ &a &&\;\blacksquare_2 \end{aligned}

This completes the completeness proof. We had to use all three laws. I’m sure that, thinking intuitively, the PutPut law is easy to forget, but these calculations show that it is needed.

#Calculation of the laws

In the previous subsections, we knew the right Lens law candidates a priori. We only needed to verify that they are in fact the right ones. But what if we didn’t know the laws beforehand? Could we somehow calculate them?

In the completeness proof, all branches ended with the use of some law; however, we knew which C to pick, so we ended up with proof goals which are exactly the laws we can use. Therefore, trying to write a completeness proof is not a direct way to calculate the laws: either we need to know the right existential type to use (in my opinion not entirely obvious), or which laws there possibly are.

Unfortunately I don’t have a definitive answer to this problem. A sketch of the idea is however to start with

\quad S \cong C \otimes A

isomorphism, and try to massage it into something without C . Notice that we work in a category where all arrows are isomorphisms (a groupoid); therefore we use the \otimes -tensor, not the \times -product.

If we assume that we have a symmetric closed monoidal groupoid, then we can use the property

\quad \mathrm{Hom}(C\otimes A, S) \cong \mathrm{Hom}(C,[ S, A] )

to find out that C and [S,A] = A \leftrightarrow S are isomorphic. (And this is the hand-wavy part: I don’t understand groupoids well enough to say whether the above reasoning makes any sense at all.) We need to "remember" how to go back in the S \to A direction; not all A \to S arrows are invertible (e.g. const someS). That’s one justification for the c = \mathsf{set}\; s refinement of C = A \to S .

We could stop here, as we in fact calculated which existential C we have to use (or at least could try). We can also continue: at this point we have

\quad S \cong (A \leftrightarrow S) \otimes A

a C -less representation of Lens. If we forget the isomorphisms, the remainders are the S \to A \to S and S \to A arrows (when we split the product); these are the \mathsf{set} and \mathsf{view} arrows! So we went from the existential to the concrete representation of Lens without using Yoneda. (What you needa know about Yoneda is an interesting read anyway!)

But if we stick with this representation, then we can write down the laws too: if the left-to-right direction is s \mapsto (\mathsf{set}\;s, \mathsf{view}\;s) and the right-to-left is (\mathsf{set}\;s, a) \mapsto \mathsf{set}\;s\;a (here I hand-wave about what the first half of the pair should be!) then we get

\begin{aligned} s &= \mathsf{set}\;s\;(\mathsf{view}\;s) && \textbf{PutGet} \\ \mathsf{set}\;s &= \mathsf{set}\;(\mathsf{set}\;s\;a) && \textbf{PutPut} \\ a &= \mathsf{view}\;(\mathsf{set}\;s\;a) && \textbf{GetPut} \end{aligned}

Note how the PutPut law is in a contracted form, the form which we would have preferred in the above Lens laws completeness proof (spoiler: that’s where I used functional extensionality, if you haven’t spotted it).

#Prism

In this section we do similar soundness and completeness proofs for prisms. A Prism is a generalized constructor. The concrete Prism representation is (preview and review are often called match and build respectively; I stick with the lens-library vocabulary):

data Prism s a = Prism { preview :: s -> Maybe a, review :: a -> s }

We claim that a Prism should be subject to the following two laws:

  • MatchBuild: First, if I review a value and then preview I will get it back:

    preview (review a) ≡ Just a

  • BuildMatch: Second, if you can extract a value a using preview from a value s, then the value s is completely described by the prism and a:

    preview s ≡ Just a ⇒ review a ≡ s
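
A familiar example (mine, written with the record above) is the prism for the Right constructor; both laws hold by case analysis on the Either:

-- preview succeeds only on Right, review injects with Right
rightPrism :: Prism (Either e a) a
rightPrism = Prism
    { preview = either (const Nothing) Just
    , review  = Right
    }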

The existential formulation of a Prism is similar to Lens’, but instead of product we have a coproduct (or sum):

\quad \mathsf{Prism}\;S\;A = \exists D, S \cong D + A

#Soundness

The first step is to define the preview and review operations for the existential representation:

\begin{aligned} \quad &\mathsf{preview} : S \to \mathsf{Maybe}\;A \\ &\mathsf{preview}\;s = \mathsf{either}\;(d \mapsto \mathsf{nothing})\;(a \mapsto \mathsf{just}\;a) \; (f\;s) \\ &\mathsf{review} : A \to S \\ &\mathsf{review}\;a = g\; (\mathsf{right}\;a) \end{aligned}

where f : S \to D + A is the forward direction, and g : D + A \to S is the backward direction of the isomorphism.
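
As with Lens, the existential formulation has a direct Haskell sketch (again my own encoding, not library API):

{-# LANGUAGE ExistentialQuantification #-}

-- some hidden type d, with both directions of the isomorphism
-- between s and Either d a
data ExPrism s a = forall d. ExPrism (s -> Either d a) (Either d a -> s)

previewEx :: ExPrism s a -> s -> Maybe a
previewEx (ExPrism f _) = either (const Nothing) Just . f

reviewEx :: ExPrism s a -> a -> s
reviewEx (ExPrism _ g) = g . Right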

As with Lens, none of the laws is hard to show sound. The first one is a direct computation:

\begin{aligned} \quad &\mathsf{preview}\; (\mathsf{review}\; a) &&\equivvia{\text{definitions}} \\ &\mathsf{either}\;(d \mapsto \mathsf{nothing})\;(a \mapsto \mathsf{just}\;a)\; (f\; (g\;(\mathsf{right}\; a))) &&\equivvia{ f\circ g \equiv 1 } \\ &\mathsf{either}\;(d \mapsto \mathsf{nothing})\;(a \mapsto \mathsf{just}\;a)\; (\mathsf{right}\; a) &&\equivvia{\text{$\mathsf{either}$ computation}} \\ &\mathsf{just}\;a &&\;\blacksquare \\ \end{aligned}

and for the second one we need to prepare the precondition. The injectivity of constructors lets us rule out the case f\;s = \mathsf{left}\;d , because \mathsf{nothing} \not\equiv \mathsf{just}\;x .

\begin{aligned} \quad &\mathsf{preview}\; s\equiv\mathsf{just}\;a &&\equivvia{\text{definitions}} \\ &\mathsf{either}\;(d \mapsto \mathsf{nothing})\;(a \mapsto \mathsf{just}\;a) (f \;s) \equiv \mathsf{just}\;a &&\equivvia{\text{injectivity of constructors}} \\ &f\;s \equiv \mathsf{right}\;a &&\;\blacksquare \end{aligned}

using which we can prove the law itself

\begin{aligned} \quad & \mathsf{review}\; a &&\equivvia{\text{definitions}} \\ & g\; (\mathsf{right}\; a) &&\equivvia{\text{rewrite using the precondition}} \\ & g\; (f\; s) &&\equivvia{g \circ f \equiv 1} \\ & s &&\;\blacksquare \end{aligned}

#Completeness

We need to construct a \mathsf{Prism}\;S\;A from the given \mathsf{review} : A \to S and \mathsf{preview} : S \to \mathsf{Maybe}\;A . Here also, as in the Lens case, we need to pick the right type for the existential. And again we need a refined type:

\quad D = \{ d : S \mid \mathsf{preview}\;d \equiv \mathsf{nothing} \}

And yet again, that type can be found either from the Profunctor definition of prism (note the either id setter):

prism :: (b -> t) -> (s -> Either t a) -> Prism s t a b
prism setter getter = dimap getter (either id setter) . right'

or by informal calculation

\quad S \cong D + A \Rightarrow D \cong S - A

Then we can define the arrows of the isomorphism (I’m quite sure that a smart enough proof search would find exactly these definitions):

\begin{aligned} \quad &f : S \to D + A \\ &f\; s = \mathsf{maybe\textnormal{-}case}\;(\mathsf{preview}\;s)\; (\mathit{proof} \mapsto \mathsf{left}\;(s \mid \mathit{proof}))\; (a\; \mathit{proof} \mapsto \mathsf{right}\; a)\\ &g : D + A \to S \\ &g\;(\mathsf{left}\; (s \mid - )) = s \\ &g\;(\mathsf{right}\; a) = \mathsf{review}\;a \\ \end{aligned}
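
The computational content of this construction is easy to write in Haskell (a sketch using the ExPrism encoding above; the refinement on D is dropped, so only the computational part is captured):

-- d is "an s whose preview is Nothing" (the refinement is not expressed here)
fromPreviewReview :: (s -> Maybe a) -> (a -> s) -> ExPrism s a
fromPreviewReview preview review = ExPrism f g
  where
    f s = maybe (Left s) Right (preview s)
    g   = either id review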

We use a non-standard \mathsf{maybe\textnormal{-}case}², which provides the match proofs too. It can be directly coded e.g. in Agda as:

open import Data.Maybe
open import Relation.Binary.PropositionalEquality

maybe-case : ∀ {a b} {A : Set a} {B : Set b}
           → (m : Maybe A)
           → (m ≡ nothing → B)
           → ((a : A) → m ≡ just a → B)
           → B
maybe-case (just a) f g = g a refl
maybe-case nothing  f g = f refl

Next we can show that f and g are inverses. The g \circ f \equiv 1 case:

\begin{aligned} \quad & g\; (f\; s) &&\equivvia{\text{definition of $f$}} \\ & g\; (\mathsf{maybe\textnormal{-}case}\;(\mathsf{preview}\;s)\; (\mathit{proof} \mapsto \mathsf{left}\;(s \mid \mathit{proof}))\; (a\; \mathit{proof} \mapsto \mathsf{right}\; a)) \\ \end{aligned}

which we can split into cases. First, \mathsf{preview}\;s \equiv \mathsf{nothing} :

\begin{aligned} \quad & g\; (\mathsf{left}\; (s \mid \mathit{proof})) &&\equivvia{\text{definition of $g$}} \\ & s &&\;\blacksquare_1 \end{aligned}

and the second case \mathsf{preview}\;s \equiv \mathsf{just}\;a

\begin{aligned} \quad & g\; (\mathsf{right}\;a) &&\equivvia{\text{definition of $g$}} \\ & \mathsf{review}\;a &&\equivvia{\text{\textbf{BuildMatch}-law}} \\ & s &&\;\blacksquare \end{aligned}

The other direction of the isomorphism, f \circ g \equiv 1 has two cases: first \mathsf{right}\; a :

\begin{aligned} \quad & f\; (g\; (\mathsf{right}\;a)) &&\equivvia{\text{definition of $g$}} \\ & f\; (\mathsf{review}\;a) &&\equivvia{\text{definition of $f$}} \\ & \mathsf{maybe\textnormal{-}case}\;(\mathsf{preview}\;(\mathsf{review}\;a))\; (\mathit{proof} \mapsto \mathsf{left}\;(s \mid \mathit{proof}))\; (a\; \mathit{proof} \mapsto \mathsf{right}\; a)) &&\equivvia{\text{\textbf{MatchBuild}-law}} \\ & \mathsf{maybe\textnormal{-}case}\;(\mathsf{just}\;a)\; (\mathit{proof} \mapsto \mathsf{left}\;(s \mid \mathit{proof}))\; (a\; \mathit{proof} \mapsto \mathsf{right}\; a)) &&\equivvia{\text{calculation of $\mathsf{maybe\textnormal{-}case}$}} \\ & \mathsf{right}\;a &&\;\blacksquare_1 \end{aligned}

and \mathsf{left}\;(s \mid \mathit{proof})

\begin{aligned} \quad & f\; (g\; (\mathsf{left}\;(s \mid \mathit{proof}))) &&\equivvia{\text{definition of $g$}} \\ & f\; s &&\equivvia{\text{definition of $f$}} \\ & \mathsf{maybe\textnormal{-}case}\;(\mathsf{preview}\;s)\; (\mathit{proof} \mapsto \mathsf{left}\;(s \mid \mathit{proof}))\; (a\; \mathit{proof} \mapsto \mathsf{right}\; a) &&\equivvia{\text{using the $\mathit{proof} : \mathsf{preview}\;s \equiv \mathsf{nothing} $}} \\ & \mathsf{maybe\textnormal{-}case}\;\mathsf{nothing}\; (\mathit{proof} \mapsto \mathsf{left}\;(s \mid \mathit{proof}))\; (a\; \mathit{proof} \mapsto \mathsf{right}\; a) &&\equivvia{\text{calculation of $\mathsf{maybe\textnormal{-}case}$}} \\ & \mathsf{left}\;(s \mid \mathit{proof}') && \equivvia{\text{proof-irrelevance}} \\ & \mathsf{left}\;(s \mid \mathit{proof}) &&\;\blacksquare \end{aligned}

The Prism law proofs are only slightly more complex than the Lens ones, mainly due to the need to do case analysis.

One interesting point is that we might want to have a third, MatchMatch law. There is a polymorphic matching :: APrism s t a b -> s -> Either t a combinator. Using its (non-existent in lens) monomorphic version matching' :: APrism' s a -> s -> Either s a we can write:

matching' p s ≡ Left s' ⇒ s ≡ s'

The law is silly, as it completely defines the left part of the Either; therefore we can have Maybe in preview without any loss.

For example take a bogus _Right Prism variant:

_Right' :: Prism' (Either Int a) a
_Right' = prism Right $ \t ->  case t of 
    Right x -> Right x
    Left i  -> Left (Left (i + 1))

which will pass the QuickCheck tests:

matchBuild_prop :: Int -> Property
matchBuild_prop x = preview _Right' (review _Right' x) === Just x

buildMatch_prop :: T Int -> Property
buildMatch_prop s = case preview _Right' s of
    Just a  -> label "just" $ review _Right' a === s
    Nothing -> label "nothing" True

but behaves weirdly:

λ> matching _Right' (Right 'a')
Right 'a'
λ> matching _Right' (Left 42)
Left (Left 43)

or

λ> over _Right' id (Left 1)
Left 2

However, the MatchMatch property is clearly violated:

matchMatch_prop :: T Int -> Property
matchMatch_prop s = case matching _Right' s of
    Right _ -> label "right" True
    Left  t -> label "left" $ t === s

as QuickCheck points out:

λ> quickCheck matchMatch_prop
*** Failed! Falsifiable (after 1 test):  
Left 0
Left 1 /= Left 0

Our verification process didn’t need this law, as we used preview and review as the core Prism primitives. There are two ways to construct a Prism in the lens library:

prism  :: (b -> t) -> (s -> Either t a) -> Prism s t a b
prism' :: (b -> s) -> (s -> Maybe    a) -> Prism s s a b

we use the latter, prism', way. As you can see, it can construct only a semi-monomorphic Prism. I think its type should be restricted to return Prism s s a a. And we do need to check the MatchMatch law for Prisms constructed with prism. If we had used (even the monomorphic version of) matching in the completeness proof, we would have seen the need for the law, if we actually used the left component of the result.

The MatchMatch law as stated doesn’t generalize to the polymorphic variant, as we require s ~ t to be able to compare them.

My original idea of the law doesn’t work, nor does another way to state it:

-- too weak
matching p s ≡ Left t ⇒ matching p t ≡ Left t'
matching p s `leftBind` matching p' ≡ matching p s

A counterexample is when all negative matches are the same:

_Right'' :: Prism' (Either Int a) a
_Right'' = prism Right $ \t ->  case t of 
    Right x -> Right x
    Left i  -> Left (Left 0)

However one polymorphic variant of MatchMatch law:

matching p s ≡ Left t ⇒ matching p t ≡ Left s

seems to catch all the invalid examples I can come up with. It’s not directly enough for Coq (or maybe it is). The relaxation of strict equality to matching p t ≡ Left s is sensible, as intuitively there should be a way to convert t back to s; t (as well as the positive match results) fully determines the original s.

#Affine

I discussed affine traversal previously in a post about them and gave laws in Glassery.

  • You get back what you put in:

    preview l (set l v s) ≡ v <$ preview l s

  • If you can extract a value, and put it back, that doesn’t change anything.

    preview l s ≡ Just a ⇒ set l s a ≡ s

  • If you get nothing when extracting a value, then whatever you put in, the operation is no-op.

    preview l s ≡ Nothing ⇒ set l s a ≡ s

  • Setting twice is the same as setting once:

    set l v' (set l v s) ≡ set l v' s

It turns out that the third law is not required in the monomorphic setting, similarly to how the MatchMatch law isn’t needed in the Prism case.

In this section I won’t show the full proofs; you can check (step through) the Coq sources at https://github.com/phadej/lens-laws. Let’s however highlight the main points.

The concrete representation is the "worst" of Lens and Prism (where Iso would be the best of both):

data Affine s a = Affine { preview :: s -> Maybe a, set :: s -> a -> s }

The existential representation explains the naming of this optic:

\quad \mathsf{Affine}\;S\;A = \exists C, D, S \cong D + C \times A

Defining the operations using the existential representation, and proving soundness of the laws, is a combination of the above Lens and Prism proofs.
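
For reference, a Haskell sketch of the existential Affine and its operations (my own encoding, combining the ExLens and ExPrism shapes from above):

{-# LANGUAGE ExistentialQuantification #-}

-- hidden d (no focus) and c (leftovers), with s ≅ Either d (c, a)
data ExAffine s a = forall c d. ExAffine (s -> Either d (c, a)) (Either d (c, a) -> s)

previewAf :: ExAffine s a -> s -> Maybe a
previewAf (ExAffine f _) = either (const Nothing) (Just . snd) . f

setAf :: ExAffine s a -> s -> a -> s
setAf (ExAffine f g) s a = case f s of
    Left  d      -> g (Left d)          -- no focus: reconstruct the original
    Right (c, _) -> g (Right (c, a))    -- replace the focus, keep the leftovers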

The completeness proof is also similar to the Lens and Prism proofs. The interesting part is the choice of the existential types: given \mathsf{preview} : S \to \mathsf{Maybe}\;A and \mathsf{set} : S \to A \to S we use:

\begin{aligned} \quad D &= \{ s : S \mid \mathsf{preview}\;s \equiv \mathsf{nothing} \} \\ C &= \{ c : S \to A \mid \exists s : S, \mathsf{set}\;s \equiv c \land \exists (a : A), \mathsf{preview}\;s \equiv \mathsf{just}\;a \} \end{aligned}

These types are (again) very similar to the ones used in Prism and Lens completeness proofs.

At this point it’s a good time to discuss some properties of the optics we have covered so far:

  • any Iso can be turned into Lens, Prism or Affine

  • any Lens can be turned into Affine

  • any Prism can be turned into Affine

  • each class of optics is closed under composition

The first three properties should be easy to see. For example, take a \mathsf{Lens}\;S\;A = \exists C, S\cong C \times A ; we can turn it into an \mathsf{Affine}\;S\;A = \exists C, D, S\cong D + C \times A by taking D = 0 (Void in Haskell).
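
In the Haskell sketches above this conversion is a one-liner (D is Void, and the impossible case is discharged with absurd):

import Data.Void (Void, absurd)

-- any lens is an affine traversal whose "no focus" case is impossible
lensToAffine :: ExLens s a -> ExAffine s a
lensToAffine (ExLens f g) = ExAffine
    (Right . f)          -- there is always a focus
    (either absurd g)    -- the Left (Void) case cannot happen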

The last property is not hard to verify either. \mathsf{Iso} is clearly closed under composition (transitivity); \mathsf{Lens} and \mathsf{Prism} need associativity of \times and + , i.e.

\quad C \times (C' \times A) \cong (C \times C') \times A \qquad D + (D' + A) \cong (D + D') + A

The Affine case is slightly trickier:

\quad D + C \times (D' + C' \times A) \cong ((D + C \times D') + (C \times C') \times A)

Imagine (or try to write, or look into the Coq proofs for) the plumbing needed in the existential (or concrete) lens representations! It’s very neat that the van Laarhoven and Profunctor encodings use uniform representations, so all optics can share the same composition combinator, and that the encoding is based on type classes, allowing automatic "upcasting" of optics. The downside is that the encoding internals leak into error messages (though https://github.com/mrkgnao/silica demonstrates how that can be improved!)

#Grate

Russell O’Connor and James Deikun discovered an optic called Grate³ in Grate: A new kind of Optic. Using a Grate we can zipWithOf structures. Essentially, Grate s a witnesses that s is some kind of Distributive or Representable container of as (always the same amount of them), and nothing else.

It sounds like Traversal, but this optic is very different. The concrete representation is obscure:

newtype Grate s a = Grate { grate :: ((s -> a) -> a) -> s }

However its existential representation is very simple (I guess the Closed subclass of Profunctor helped the discovery to happen):

\quad \mathsf{Grate}\;S\;A = \exists I, S \cong I \to A

Our task is to discover the laws. Russell doesn’t formulate them for the concrete representation, and checking the Distributive documentation doesn’t help either. We can write grate using the existential encoding:

\begin{aligned} \quad & \mathsf{grate} : ((S \to A) \to A) \to S \\ & \mathsf{grate}\; h = g\; (i \mapsto h\; (s \mapsto f\, s\, i)) \end{aligned} where f : S \to I \to A is the forward direction and g : (I \to A) \to S is the backward direction of the isomorphism. Having this around will help us rule out too-strong laws, the ones we cannot fulfill (i.e. that would violate soundness).
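
The existential formulation, and the derived grate, translate directly into a Haskell sketch (my own encoding):

{-# LANGUAGE ExistentialQuantification #-}

-- some hidden index type i, with s ≅ (i -> a)
data ExGrate s a = forall i. ExGrate (s -> i -> a) ((i -> a) -> s)

grateEx :: ExGrate s a -> ((s -> a) -> a) -> s
grateEx (ExGrate f g) h = g (\i -> h (\s -> f s i))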

#Towards the laws: via completeness

The journey starts with a short detour. It turns out that stating the laws for the tabulate and index construction is a lot easier. Also, in this case we stay in Haskell98, as we don’t need type families (the Representable type class has an associated type Rep f); the index type will be existentially hidden away, i.e.:

fromRepresentable :: (s -> i -> a) -> ((i -> a) -> s) -> Grate s a
fromRepresentable index tabulate = _exercise

is not hard to implement, and the required properties of index and tabulate are almost obvious: they have to be inverses.

If you stare at the concrete Grate representation and the types of index and tabulate (and the implementation of fromRepresentable), you might see that s -> a acts as i, i.e. the grate type is similar to tabulate’s "give me a function from indices to elements, and I’ll return you a whole container". This gives us something to try for a completeness proof:

\quad I_{\mathrm{wrong}} \stackrel{?}{=} S \to A

We can define the (possible) arrows of the isomorphism using \mathsf{grate} : ((S \to A) \to A) \to S ; the implementation is (suspiciously) simple:

\begin{aligned} \quad &f : S \to I \to A \\ &f\; s\; i= i\; s \\ &g : (I \to A) \to S \\ &g\; = \mathsf{grate} \end{aligned}

We then try to write a proof to see what properties \mathsf{grate} must have to make f and g actually inverses.

\begin{aligned} \quad & g\; (f\; s) &&\equivvia{\text{definitions}} \\ & \mathsf{grate}\;((s'\,i \mapsto i\; s')\;s) &&\equivvia{\text{calculation}} \\ & \mathsf{grate}\;(i \mapsto i\; s) \end{aligned}

This proof obligation seems reasonable to require as a law; it’s a special case of the tabulate (\i -> index a i) ≡ a Representable law:

TabulateIndex: Tabulating by indexing doesn’t change anything.

grate g (\i -> i s) ≡ s

So far everything looks good. Let’s try the other direction:

\begin{aligned} \quad & f\; (g\; h) &&\equivvia{\text{definitions}} \\ & (s\,i \mapsto i \,s)\;(\mathsf{grate}\;h) &&\equivvia{\text{calculation}} \\ & (i \mapsto i\; (\mathsf{grate}\;h)) &&\equivvia{\text{functional extensionality}} \\ & i\;(\mathsf{grate}\;h) &&\stackrel{?}{\equiv} h\; i \end{aligned}

Can we require that as a law?

IndexTabulate: Indexing into tabulated value gives a value at the index?

i (grate g f) ≡ f i

The answer is yes and no. There is a but: if we require the law as such, it’s too strong, since i and f are arbitrary!

import Data.Functor.Identity
import Test.QuickCheck
import Test.QuickCheck.Instances

newtype Grate s a = Grate { grate :: ((s -> a) -> a) -> s }

identityGrate :: Grate (Identity a) a
identityGrate = Grate (\f -> Identity (f runIdentity))

law2example :: (Fun (Identity Int) (Int)) -> Property
law2example (Fn i) = i (grate identityGrate f) === f i where
    f :: (Identity Int -> Int) -> Int
    f = const 42

I claim that identityGrate is a lawful Grate; however, the law2example Property doesn’t hold: QuickCheck finds a counterexample immediately. f indeed can be arbitrary, but the indices i aren’t! Only some values of the S \to A type are used, in identityGrate a single one: runIdentity.

We have to make small amendments to the previous definitions. Each grate will have an associated predicate \mathsf{P} : (S \to A) \to \mathsf{Prop} , and the correct existential type I is

\quad I = \{ i : S \to A \mid P\; i \}

The definition of f needs to be changed slightly to forget the refinement:

\begin{aligned} \quad &f : S \to I \to A \\ &f\; s\; (i \mid -)= i\; s \\ \end{aligned}

The g needs to be adjusted too, or actually the \mathsf{grate} itself; its more precise type is (remember that in "good" languages, like LiquidHaskell, refinement types are erased, i.e. they don’t have any runtime cost!)

\quad \mathsf{grate} : (\{ i : S \to A \mid P\; i \} \to A) \to S

The user of the library "should" provide a predicate P and verify that \mathsf{grate} passes only functions satisfying that predicate. The final versions of the two laws are:

  • TabulateIndex: Tabulating by indexing doesn’t change anything.

    grate g (\i -> i s) ≡ s
  • IndexTabulate: Indexing into tabulated value gives a value at the index?

    i (grate g f) ≡ f i  -- for all i such that P i holds

For example, for the Identity example above, such a predicate can simply be P\;i = i \equiv \mathtt{runIdentity} (can we encode this in LiquidHaskell? I remember seeing some predicate support).

-- this holds (almost trivially)
law2proper :: Int -> Property
law2proper x = runIdentity (grate identityGrate f) === f runIdentity where
    f :: (Identity Int -> Int) -> Int
    f = const x

In general, for finite containers, enumerating all the possible index values is the simplest way to define the predicate P . There are only some indices, though the implementation hides them.

#Soundness

A careful reader should notice that we still have to do the soundness proofs. We have the two parts of the isomorphism: f : S \to I \to A is the forward direction and g : (I \to A) \to S is the backward direction, for some existential index type I ; \mathsf{grate} is defined using them.

TabulateIndex holds almost trivially:

\begin{aligned} \quad &\mathsf{grate}\;(i \mapsto i\; s) &&\equivvia{\text{definition of $\mathsf{grate}$}} \\ & g\; (i \mapsto (i' \mapsto i'\; s) \; (s' \mapsto f\, s'\, i)) &&\equivvia{\text{computation}} \\ & g\; (i \mapsto f\;s\; i) &&\equivvia{g \circ f \equiv 1_S} \\ &s &&\;\blacksquare \end{aligned}

For IndexTabulate we have to define what the P for the existential formulation is. We check how \mathsf{grate} is implemented (the unfolding is in the above proof), and what the argument of \mathsf{grate} gets called with:

\quad P\;i = \exists i' : I, i \equiv (s \mapsto f\;s\;i')

Using this predicate, we can verify the law:

\begin{aligned} \quad & i\;(\mathsf{grate}\;h) &&\equivvia{\text{definition of $\mathsf{grate}$}} \\ &i\;(g\; (i' \mapsto h\; (s \mapsto f\, s\, i'))) &&\equivvia{\text{rewrite using $P\,i = i \equiv (s \mapsto f\;s\;j)$ for some $j$}}\\ &f\;(g\; (i' \mapsto h\; (s \mapsto f\, s\, i')))\;j &&\equivvia{f \circ g \equiv 1_{I\to A}} \\ &h\; (s \mapsto f\, s\, j) &&\equivvia{\text{rewrite using $P$ in the other direction}} \\ &h\; i &&\;\blacksquare \end{aligned}

This completes the soundness proof. We have now discovered the Grate laws. As an exercise you could think about

data V2 a = V2 a a

_V2 :: Grate (V2 a) a
_V2 = _grateExercise1

silly :: Grate () a
silly = _grateExercise2

#Achromatic

Achromatic lenses: "A lot of the bidirectional programming folks like a variant of lenses that have get, put, and create" (quote by Russell O’Connor). Originally, in my notes on achromatic lenses, the concrete representation has three operations. I rename them all to be nicely six characters long and to reflect the database-holy-grailness (update could have the type s -> (a -> a) -> s as well):

data Achroma s a = Achroma
    { select :: s -> a
    , update :: s -> a -> s
    , insert :: a -> s
    }

I listed four laws: three Lens laws and one from Prism or Iso. However, I failed to specify the last, fifth law.

  • SelectUpdate

    select l (update l a s) ≡ a
  • UpdateSelect

    update l (select l s) s ≡ s
  • UpdateUpdate

    update l a' (update l a s) ≡ update l a' s
  • SelectInsert

    select l (insert l a) ≡ a
  • UpdateInsert new

    update l a (insert l a') ≡ insert l a

The existential representation of the achromatic lens is almost like the Lens one:

\quad \mathsf{Achroma}\;S\;A = \exists C, S \cong (1 + C) \times A

It can be understood as: S might have a C , or none of it at all. It turned out that to create an \mathsf{Achroma} from the three lawful components, we also need an extra bit:

\quad \mathsf{pristine} = \forall (s : S), \mathsf{Decidable}\;(\mathsf{update}\;s\equiv\mathsf{insert})

in other words, we need to know, for each s : S , whether it has those extra bits C .
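
A Haskell sketch of the existential representation and its three operations (my own encoding, analogous to the earlier ones):

{-# LANGUAGE ExistentialQuantification #-}

-- s ≅ (1 + c) × a: always an a, and maybe some leftovers c
data ExAchroma s a = forall c. ExAchroma (s -> (Maybe c, a)) ((Maybe c, a) -> s)

selectEx :: ExAchroma s a -> s -> a
selectEx (ExAchroma f _) = snd . f

insertEx :: ExAchroma s a -> a -> s
insertEx (ExAchroma _ g) a = g (Nothing, a)

updateEx :: ExAchroma s a -> s -> a -> s
updateEx (ExAchroma f g) s a = g (fst (f s), a)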

In the completeness proof I used the following value for C :

\quad C = \{ c : A \to S \mid \exists s : S, c = \mathsf{update}\;s \land \mathsf{update}\; s \not\equiv \mathsf{insert} \}

In Haskell implementation we don’t care:

type Optic p s t a b = p a b -> p s t

class Profunctor p => PointedCartesian p where
    pointedSecond :: p a b -> p (Maybe c, a) (Maybe c, b)

achroma
    :: forall p s t a b. PointedCartesian p
    => (s -> a)       -- ^ select
    -> (b -> t)       -- ^ insert
    -> (s -> b -> t)  -- ^ update
    -> Optic p s t a b
achroma sa bt sabt
    = dimap f g
    . pointedSecond
  where
    -- note: always Just.
    f :: s -> (Maybe (b -> t), a)
    f s = (Just (sabt s), sa s)

    g :: (Maybe (b -> t), b) -> t
    g (Nothing,  b) = bt b
    g (Just bt', b) = bt' b

We could try to implement a combinator checking whether the value is "pristine", i.e. whether it could have been created with insert directly:

newtype Pristine a b = Pristine { unPristine :: a -> Bool }

instance Profunctor Pristine where
    dimap f _ (Pristine h) = Pristine (h . f)

instance PointedCartesian Pristine where
    pointedSecond (Pristine h) = Pristine $ \(m, a) -> isNothing m && h a

pristine :: Optic Pristine s s a a -> s -> Bool
pristine o s = unPristine (o (Pristine (const True))) s

But with our current version of achroma, pristine always returns False. If we add pristine to achroma:

-- | better? concrete representation
data Achroma' s a = Achroma'
    { select'   :: s -> a
    , update'   :: s -> a -> s
    , insert'   :: a -> s
    , pristine' :: s -> Bool
    }

-- | better? Profunctor encoding constructor
achroma'
    :: forall p s t a b. PointedCartesian p
    => (s -> a)       -- ^ select
    -> (b -> t)       -- ^ insert
    -> (s -> b -> t)  -- ^ update
    -> (s -> Bool)    -- ^ pristine
    -> Optic p s t a b
achroma' sa bt sabt pris
    = dimap f g
    . pointedSecond
  where
    f :: s -> (Maybe (b -> t), a)
    f s = (if pris s then Nothing else Just (sabt s), sa s)

    g :: (Maybe (b -> t), b) -> t
    g (Nothing,  b) = bt b
    g (Just bt', b) = bt' b

then the pristine combinator will work, whichever way the value was created or modified. The f and g above are essentially the same as those used in the completeness proof.
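
As a usage sketch (the record type and names here are hypothetical, made up for illustration): an achromatic lens into a record field, where "pristine" means the optional leftovers are absent:

import Data.Maybe (isNothing)

data User = User { note :: Maybe String, age :: Int } deriving (Eq, Show)

ageA :: PointedCartesian p => Optic p User User Int Int
ageA = achroma'
    age                      -- select
    (User Nothing)           -- insert: no leftovers
    (\u a -> u { age = a })  -- update: keep the leftovers
    (isNothing . note)       -- pristine: no leftovers present

-- pristine ageA (User Nothing 3)     evaluates to True
-- pristine ageA (User (Just "x") 3)  evaluates to False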

The addition of pristine adds two more laws (or one, depending on how you count), which are encoded in the type of \mathsf{pristine} :

  • Pristine

    pristine s ≡ True ⇒ update s a ≡ insert a

    or alternatively as a variant of Prism’s BuildMatch law⁴

     pristine s ≡ True ⇒ s ≡ insert (select s)
  • NotPristine

    pristine s ≡ False ⇒ s ≢ insert (select s)

The achromatic lens case illustrates again why it’s important to think about laws; when making my notes I didn’t think about the pristine case, for example.

#Setter

The last section is about Setters. They have only a single operation, over:

newtype Setter s a = Setter { over :: (a -> a) -> (s -> s) }

What is its existential representation? Look at all the different optics we have examined:

\begin{aligned} \quad \mathsf{Iso}\;S\;A =&& S &\cong A \\ \quad \mathsf{Lens}\;S\;A =&& \exists C, S &\cong C \times A \\ \quad \mathsf{Prism}\;S\;A =&& \exists D, S &\cong D + A \\ \quad \mathsf{Affine}\;S\;A =&& \exists C, D, S &\cong D + C \times A \\ \quad \mathsf{Grate}\;S\;A =&& \exists I, S &\cong I \to A \\ \quad \mathsf{Achroma}\;S\;A =&& \exists C, S &\cong (1 + C) \times A \end{aligned}

Hopefully the pattern stands out: S is isomorphic to F\,A for some endofunctor F :

\quad \mathsf{Setter}\;S\;A = \exists F : \mathsf{Endofunctor}, S \cong F\,A
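
A Haskell sketch of this (my own encoding; the quantification over an arbitrary Functor is the whole point):

{-# LANGUAGE ExistentialQuantification #-}

-- some hidden functor f with s ≅ f a; over is just fmap through the isomorphism
data ExSetter s a = forall f. Functor f => ExSetter (s -> f a) (f a -> s)

overEx :: ExSetter s a -> (a -> a) -> s -> s
overEx (ExSetter f g) h = g . fmap h . f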

Endofunctors can be composed, and there’s an identity functor. All optics seem to be given by some class of endofunctors that is closed under composition. For example, Traversal is given by Traversable functors (whatever these are in category-theoretical terms).

That’s the point which makes the Profunctor encoding quite elegant. Look at the (stripped-down) classes behind the implementation (there is no class for Affine; it’s made simply by combining Strong and Choice):

class Profunctor p where dimap    :: (s -> a) -> (b -> t) -> p a b -> p s t
class Strong p     where second'  :: p a b -> p (c, a) (c, b)
class Choice p     where right'   :: p a b -> p (Either d a) (Either d b)
class Closed p     where closed   :: p a b -> p (i -> a) (i -> b)
class PointedC p   where psecond' :: p a b -> p (Maybe c, a) (Maybe c, b)

-- Setter
class Mapping p    where map'      :: Functor f     => p a b -> p (f a) (f b)

-- Traversal
class Traversing p where traversed :: Traversable f => p a b -> p (f a) (f b)

It also explains why various Functor classes are enough to implement simple (i.e. not type-changing, as in this post) optics, as described in the Functor optics post.

Now it is possible to consistently invent (or discover?) new optics, for example LensGrate, a combination of Lens and Grate (I remember seeing it in some library):

\quad \mathsf{LensGrate}\;S\;A = \exists C,I, S \cong C \times (I \to A)

Does it still have some handy operations, or does it degenerate to a Setter? Or PrismGrate, which can still review things; try to verify that this class is closed under composition:

\quad \mathsf{PrismGrate}\;S\;A = \exists D,I, S \cong D + (I \to A)

I think AffineGrate is practically as useful as Setter, but you can still speak about it. AchromaticGrate seems to be a useful concept; we can implement a variant of view for it, as there’s a single index which stands out (and e.g. silly isn’t an AchromaticGrate):

\quad \mathsf{AchromaticGrate}\;S\;A = \exists I, S \cong (1 + I) \to A

Or we can have FiniteGrate, where the index is some finite set. Representing the index with \mathsf{Fin}\;n gives us an order, so we can traverseOf a FiniteGrate. However, Traversals aren’t FiniteGrates (exercise!)

\quad \mathsf{FiniteGrate}\;S\;A = \exists n : \mathbb{N}, S \cong \mathsf{Fin}\,n \to A

So many new, or at least not well known, optics that we almost forgot to talk about the Setter laws. They aren’t surprising once we know the existential encoding:

  • OverIdentity

    over id ≡ id
  • OverComposition

    over f . over g ≡ over (f . g)

Proofs of their soundness and completeness can be found in Coq formalisation, or you can try to write them yourself as an exercise.

#Conclusion

In this blog post we presented the results of formalizing some optic laws. Thinking about laws paid off; among other things we

  • discovered the MatchMatch law for Prisms,

  • discovered an UpdateInsert law, as well as a possible new core operation pristine for Achromas,

  • formulated laws for the grate operation of Grate.

Also, in the Setter section we discussed some other not-so-well-known optics.

A lot followed from a single insight: requiring an isomorphism (and not only an equivalence, i.e. bare back-and-forth arrows without requiring them to be inverses) in the existential formulation of optics.

Hopefully you enjoyed the read, and this blog post gave you some food for thought!


  1. but get and put clash with MonadState names↩︎

  2. Writing the definition using (dependent) Maybe elimination

    \quad \mathsf{maybe\textnormal{-}elim} : \prod_{A : \star} \prod_{P : \mathsf{Maybe}\;A \to \star} \left( P\;\mathsf{nothing} \to \prod_{x : A} P\,(\mathsf{just}\;x) \to \prod_{m : \mathsf{Maybe}\;A} P\,m \right)

    maybe-elim : ∀ {a b} {A : Set a}
               → (P : Maybe A → Set b)
               → P nothing
               → ((x : A) → P (just x))
               → (m : Maybe A)
               → P m
    maybe-elim P n j (just x) = j x
    maybe-elim P n j nothing  = n

    is a nice exercise. Something like that is what the remember tactic would do in Coq. Knowing what happens helps when writing definitions by hand.↩︎

  3. Russell writes "Until Edward Kmett comes up with a better name, let’s call this new optic a Grate", three years later we are probably stuck with that name.↩︎

  4. in my previous notes I thought that the law doesn’t hold: I failed to specify the precondition↩︎

