In this post I’ll try to convince you that optic laws (as in the `lens` package documentation) are the right ones. For example, recall the `Lens` laws (the law names come from an alternative naming of the operations: `view = get` and `set = put`)^{1}.

**GetPut**: You get back what you put in: `view l (set l v s) ≡ v`

**PutGet**: Putting back what you got doesn’t change anything: `set l (view l s) s ≡ s`

**PutPut**: Setting twice is the same as setting once: `set l v' (set l v s) ≡ set l v' s`

Why are there these three laws (and why three)? How do we know that these three laws are "enough", or that they don’t require too much?

The solution is to consider a representation which is obviously a `Lens` (or `Prism`, or ...), but for which the "laws" are also obvious. Then, when we write conversion functions between this and the more concrete representation, we’ll find out which laws are needed.

The `Iso`, an isomorphism, is the simplest example. The concrete representation is

`data Iso s a = Iso { to :: s -> a, from :: a -> s }`

But the obvious definition lives in the math world (citing Awodey 2010): In any category **C**, an arrow `f : A → B` is called an *isomorphism* if there is an arrow `g : B → A` in **C** such that `g ∘ f = 1_A` and `f ∘ g = 1_B`. We write `A ≅ B` if there exists an isomorphism between `A` and `B`.

Here, the mathematical conditions tell us precisely which laws are required, namely, for the fields of a (`RecordWildCards`-destructured) `Iso`:

```
to . from ≡ id
from . to ≡ id
```
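As a quick sanity check, here is a small runnable sketch (my own toy example, not from the post): an `Iso` between `(a, b)` and `(b, a)`, repeating the concrete representation from above, for which both laws hold by construction.

```haskell
-- Concrete Iso representation (repeated from above)
data Iso s a = Iso { to :: s -> a, from :: a -> s }

-- Swapping the components of a pair is an isomorphism:
-- both `to . from` and `from . to` are identities.
swapped :: Iso (a, b) (b, a)
swapped = Iso (\(a, b) -> (b, a)) (\(b, a) -> (a, b))
```

For example, `to swapped (from swapped ("x", 1))` evaluates back to `("x", 1)`.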

The isomorphism case is trivial. In this post I’ll go through more interesting optics:

- `Lens` and `Prism`. Their laws should not be surprising to people familiar with the `lens` library. However, we discover a new law for `Prism`.
- `Affine` (traversal). I discussed affine traversals in a post about them and gave laws in Glassery. Those laws were written based purely on intuition, which we’ll now verify.
- `Achromatic` lens. There are my quickly written notes from March 2018. It turns out I missed a few bits.
- `Grate`. Russell O’Connor gives laws for a van Laarhoven Grate (a two-functor variant) in Grate: A new kind of Optic. We formulate them for "normal" Grates: `grate :: ((s -> a) -> b) -> t`. In the process we also learn more about `Grate`.

At the end we discuss `Setter`s a little, and conclude the post with general thoughts.

Isomorphism lenses by Twan van Laarhoven does the same thing for lenses; we do it for more classes of optics.

A *Lens* is a purely functional reference. The concrete `Lens` representation is

`data Lens s a = Lens { view :: s -> a, set :: s -> a -> s }`

**Note:** Here and later we work with so-called monomorphic optics, i.e. optics which don’t allow type change. Hopefully I will lift this restriction in a future blog post. Also, we swapped the arguments of `set` (in the `lens` library, `set :: ASetter s t a b -> b -> s -> t`). This version is easier to do math with (we will partially apply it).

We claim that a `Lens` *should be* subject to the following three laws:

`view (set s a) ≡ a`

`set s (view s) ≡ s`

`set (set s a) a' ≡ set s a'`
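To make the laws concrete, here is a minimal runnable sketch (my own example, not from the post): the lens into the first component of a pair, for which all three laws can be checked on sample values.

```haskell
-- Concrete (monomorphic) Lens representation (repeated from above)
data Lens s a = Lens { view :: s -> a, set :: s -> a -> s }

-- A lens focusing on the first component of a pair
_1 :: Lens (a, b) a
_1 = Lens fst (\(_, b) a -> (a, b))

-- GetPut: view _1 (set _1 s a)    ≡ a
-- PutGet: set _1 s (view _1 s)    ≡ s
-- PutPut: set _1 (set _1 s a) a'  ≡ set _1 s a'
```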

Let’s show that the laws are both *sound* (not asking too much) and *complete* (asking enough). For that we’ll need another representation of lenses, an existential formulation:

`Lens s a := ∃c. s ≅ (c, a)`

This formulation isn’t novel (e.g. Russell O’Connor has a footnote about affine traversals in *A representation theorem for second-order functionals*), but I haven’t seen it used to highlight the laws. In Lenses Are Exactly the Coalgebras for the Store Comonad, the laws of a coalgebra for a comonad are used. As my category theory knowledge is not very deep, I find the "coalgebra for a comonad" formulation intimidating. I don’t know if and how "Lenses are the coalgebras for the costate comonad" generalises to other optics. The existential formulation, on the other hand, works *for everything*.

We can define the `view` and `set` operations using the existential formulation:

`view s ≡ snd (f s)`

`set s a ≡ f⁻¹ (fst (f s), a)`

where `f : s → (c, a)` is the forward direction, and `f⁻¹ : (c, a) → s` is the backward direction of the isomorphism.
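The existential formulation can be sketched directly in Haskell with `ExistentialQuantification` (my own illustration; the post’s mechanization lives in a proof assistant and carries refinement proofs that plain Haskell cannot express):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Lens s a = ∃c. s ≅ (c, a): a hidden "leftovers" type c,
-- plus the two directions of the isomorphism.
data ExLens s a = forall c. ExLens (s -> (c, a)) ((c, a) -> s)

-- view: run forward, project out the focus
exView :: ExLens s a -> s -> a
exView (ExLens f _) s = snd (f s)

-- set: run forward, keep the leftovers, run backward
exSet :: ExLens s a -> s -> a -> s
exSet (ExLens f g) s a = g (fst (f s), a)

-- First component of a pair, with c = the second component
exFst :: ExLens (a, b) a
exFst = ExLens (\(a, b) -> (b, a)) (\(b, a) -> (a, b))
```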

We can prove that the `Lens` laws are *sound* by showing that these definitions satisfy the laws. For example, the first **GetPut** law:

`view (set s a) ≡ snd (f (f⁻¹ (fst (f s), a))) ≡ snd (fst (f s), a) ≡ a`

The remaining **PutGet** and **PutPut** laws can be proved sound by similar direct calculation. In fact, I mechanized the proofs in this post, and the soundness properties are proven mostly by a very simple tactic.

The other direction, *completeness*, is more interesting. Given `view :: s -> a` and `set :: s -> a -> s` such that the three laws above hold, construct a type `c` together with an isomorphism `s ≅ (c, a)`.

The key insight is to pick the right "leftovers" type `c`. I found the process of mechanizing the proofs helpful: you will immediately rule out e.g. a bare `s`. The partially applied `set s :: a -> s` is better, but still too "big". We need to use a smaller, *refined* type:

`c = { f : a → s | ∃ s₀. f ≡ set s₀ }`

All three options make sense if you consider how `lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b` can be defined using e.g. the `Profunctor` encoding. Glassery uses:

```
lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b
lens getter setter pab = dimap
-- c = s
(\s -> (getter s, s))
(\(b, s) -> setter s b)
(first' pab)
```

but you can also write

```
lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b
lens getter setter pab = dimap
-- c = b -> t; or refined
-- c = { f : b -> t | exists s. f = setter s }
(\s -> (getter s, setter s))
(\(b, f) -> f b)
(first' pab)
```
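For completeness, the two `lens` definitions above can be made runnable with a minimal, self-contained sketch of the `Profunctor`/`Strong` machinery they assume (in reality these classes come from the profunctors package, and `Lens` here is the profunctor encoding Glassery uses):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Minimal Profunctor and Strong classes,
-- normally imported from the profunctors package.
class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

class Profunctor p => Strong p where
  first' :: p a b -> p (a, c) (b, c)

instance Profunctor (->) where
  dimap pre post h = post . h . pre

instance Strong (->) where
  first' h (a, c) = (h a, c)

-- Profunctor encoding of lenses
type Lens s t a b = forall p. Strong p => p a b -> p s t

-- The second definition from above, with c = b -> t
lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b
lens getter setter pab = dimap
  (\s -> (getter s, setter s))
  (\(b, f) -> f b)
  (first' pab)

-- Instantiating p = (->) gives the "over" operation for free
over :: Lens s t a b -> (a -> b) -> s -> t
over l = l
```

For example, `over (lens fst (\(_, c) b -> (b, c))) (+1) (1, "x")` evaluates to `(2, "x")`.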

Anyway, after a `c` is found, we can define the isomorphism. The arrows are defined essentially as in the latter `Profunctor` example (modulo the order of the pair components):

`f s ≡ (set s, view s)`

`f⁻¹ (g, a) ≡ g a`

The proof obligation for the refinement is not really an obligation at all; the equality is definitionally true (`set s ≡ set s`). However, next we need to prove the isomorphism properties. The backwards direction is easy; it’s essentially the **PutGet** law:

`f⁻¹ (f s) ≡ set s (view s) ≡ s`

The forward direction is more involved. Here we will use proof irrelevance for the refinement proofs; those proofs aren’t computationally relevant, so this is justified. I think it’s possible to encode these proofs in Cubical Agda. I also use functional extensionality, which is a theorem in Cubical Agda (it’s used once in the following part of the proof; try to spot it!). We will need both the **PutPut** and **GetPut** laws, one for each half of the product (pair).

We continue by proving the equality of the pair halves separately. For the first half we use **PutPut** (and functional extensionality):

`set (set s a) ≡ set s`

and for the other half, **GetPut**:

`view (set s a) ≡ a`

This completes the completeness proof. We had to use all three laws. I suspect that, thinking intuitively, the **PutPut** law is easy to forget, but these calculations show that it is important.

In the previous subsections, we knew the right candidate `Lens` laws a priori. We only needed to verify that they are in fact the right ones. But what if we didn’t know the laws beforehand? Could we somehow calculate them?

In the completeness proof, all branches ended with the use of some law; however, we *knew* which one to pick, so we always arrived at a proof goal which is exactly one of the laws we can use. Therefore, trying to write a completeness proof is not a direct way to calculate the laws: we need to know either the right existential type to use (in my opinion not entirely obvious), or which laws there possibly are.

Unfortunately I don’t have a definitive answer to this problem. A sketch of the idea is, however, to start with the `∃c. s ≅ c ⊗ a` isomorphism and try to massage it into something without the `∃`. Notice that we work in a category where all arrows are isomorphisms (a groupoid); therefore we use the `⊗`-tensor, not the product.

If we assume that we have a symmetric closed monoidal groupoid, then we can use the property

`(c ⊗ a → s) ≅ (c → (a ⊸ s))`

to find out that `c` and `a ⊸ s` are isomorphic. (And this is the hand-wavy part: I don’t understand groupoids well enough to say whether the above reasoning makes any sense at all.) We need to "remember" how to go back in the `a → s` direction; not all `a → s` functions are invertible (e.g. `const someS`). That’s one route to the refinement of `c`.

We could stop here, as we have in fact calculated which existential we have to use (or at least could try). We can also continue: at this point we have an `∃`-less representation of `Lens`, `s ≅ (a ⊸ s) ⊗ a`. If we *forget* the isomorphisms, the remainders are `s → a` and `s → (a ⊸ s)` arrows (when we split the product); these are the `view` and `set` arrows! So we went from the existential to the concrete representation of `Lens` without using Yoneda. (What You Needa Know about Yoneda is an interesting read anyway!)

But if we stick with this representation, then we can write down the laws too. If the left-to-right direction is `⟨set, view⟩` and the right-to-left is application of the first half of the pair to the second (here I hand-wave about what the first half of the pair should be!), then we get

`view (set s a) ≡ a`

`set s (view s) ≡ s`

`set (set s a) ≡ set s`

Note how the **PutPut** law is in contracted form, the form which we would have preferred in the above `Lens` laws completeness proof (spoiler: that’s where I used functional extensionality, if you haven’t spotted it).

In this section we do similar soundness and completeness proofs for prisms. A *Prism* is a generalized constructor. The concrete `Prism` representation is (`preview` and `review` are often called `match` and `build` respectively; I stick with the `lens`-library vocabulary):

`data Prism s a = Prism { preview :: s -> Maybe a, review :: a -> s }`

We claim that a `Prism` *should be* subject to the following two laws:

**MatchBuild**: First, if I `review` a value and then `preview`, I will get it back: `preview (review a) ≡ Just a`

**BuildMatch**: Second, if you can extract a value `a` using `preview` from a value `s`, then the value `s` is completely described by the prism and `a`: `preview s ≡ Just a ⇒ review a ≡ s`
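As a concrete sanity check (again my own toy example, not from the post): the prism into the `Just` constructor of `Maybe`, for which both laws are easy to verify.

```haskell
-- Concrete (monomorphic) Prism representation (repeated from above)
data Prism s a = Prism { preview :: s -> Maybe a, review :: a -> s }

-- A generalized constructor for Just
_Just :: Prism (Maybe a) a
_Just = Prism id Just

-- MatchBuild: preview _Just (review _Just a) ≡ Just a
-- BuildMatch: preview _Just s ≡ Just a  implies  review _Just a ≡ s
```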

The existential formulation of a `Prism` is similar to that of a `Lens`, but instead of a product we have a coproduct (or sum):

`Prism s a := ∃c. s ≅ Either c a`

The first step is to define the `preview` and `review` operations for the existential representation:

`preview s ≡ either (const Nothing) Just (f s)`

`review a ≡ f⁻¹ (Right a)`

where `f : s → Either c a` is the forward direction, and `f⁻¹ : Either c a → s` is the backward direction of the isomorphism.
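As with lenses, the existential formulation and these two operations can be sketched in Haskell (my own illustration, without the refinement proofs the mechanization carries):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Prism s a = ∃c. s ≅ Either c a
data ExPrism s a = forall c. ExPrism (s -> Either c a) (Either c a -> s)

-- preview: run forward, keep only the focus
exPreview :: ExPrism s a -> s -> Maybe a
exPreview (ExPrism f _) s = either (const Nothing) Just (f s)

-- review: inject the focus and run backward
exReview :: ExPrism s a -> a -> s
exReview (ExPrism _ g) a = g (Right a)

-- The Just prism, with leftovers c = ()
exJust :: ExPrism (Maybe a) a
exJust = ExPrism (maybe (Left ()) Right) (either (const Nothing) Just)
```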

As with `Lens`, none of the laws is hard to show sound. The first one is a direct computation:

`preview (review a) ≡ either (const Nothing) Just (f (f⁻¹ (Right a))) ≡ either (const Nothing) Just (Right a) ≡ Just a`

and for the second one we need to prepare the precondition. The injectivity of constructors lets us rule out the case `f s ≡ Left c`, because then `preview s` would be `Nothing`, not `Just a`. So from the precondition we learn that `f s ≡ Right a`, using which we can prove the law itself:

`review a ≡ f⁻¹ (Right a) ≡ f⁻¹ (f s) ≡ s`

For completeness, we need to construct a `c` and an isomorphism `s ≅ Either c a` from the given `preview` and `review`. Here also, as in the `Lens` case, we need to pick the right type for the existential. And again we need a refined type:

`c = { s₀ : s | preview s₀ ≡ Nothing }`

And yet again, that type can be found either from the `Profunctor` definition of `prism` (note the `either id setter`):

```
prism :: (b -> t) -> (s -> Either t a) -> Prism s t a b
prism setter getter = dimap getter (either id setter) . right'
```

or by an informal calculation.

Then we can define the arrows of the isomorphism (I’m quite sure that a smart enough proof search would find exactly these definitions):

`f s ≡ maybe-case (preview s) (λ prf → Left (s, prf)) (λ a prf → Right a)`

`f⁻¹ (Left (s₀, prf)) ≡ s₀`

`f⁻¹ (Right a) ≡ review a`

We use a non-standard eliminator `maybe-case`^{2}, which provides the matching proofs too. It can be directly coded, e.g. in Agda, as:

```
open import Data.Maybe
open import Relation.Binary.PropositionalEquality
maybe-case : ∀ {a b} {A : Set a} {B : Set b}
→ (m : Maybe A)
→ (m ≡ nothing → B)
→ ((a : A) → m ≡ just a → B)
→ B
maybe-case (just a) f g = g a refl
maybe-case nothing f g = f refl
```

Next we can show that `f` and `f⁻¹` are indeed inverses. The first case: