In this post I’ll try to convince you that optic laws (as in the lens package documentation) are the right ones. For example, recall the Lens laws (law names come from an alternative naming of the operations: view = get and set = put1).
GetPut: You get back what you put in:
view l (set l v s) ≡ v
PutGet: Putting back what you got doesn’t change anything:
set l (view l s) s ≡ s
PutPut: Setting twice is the same as setting once:
set l v' (set l v s) ≡ set l v' s
Why are there exactly these three laws (why three?)? How do we know these three laws are "enough", or that they don’t require too much?
The solution is to consider a representation which is obviously a Lens (or Prism or ...), but for which also the "laws" are obvious. Then when we write conversion functions between this and more concrete representation we’ll find out what laws are needed.
The Iso, an isomorphism, is the simplest example. The concrete representation is
data Iso s a = Iso { to :: s -> a, from :: a -> s }
But an obvious definition exists in the math world (citing Awodey 2010): In any category C, an arrow f : A → B is called an isomorphism, if there is an arrow g : B → A in C such that g ∘ f = 1_A and f ∘ g = 1_B. We write A ≅ B if there exists an isomorphism between A and B.
Here, the mathematical conditions tell precisely which laws are required, namely for the (RecordWildCards-destructured) Iso fields:
to . from ≡ id
from . to ≡ id
The isomorphism case is trivial. In the post I’ll go through more interesting optics:
Lens and Prism. Their laws should not be surprising to people familiar with the lens library. However, we discover a new law for Prism.
Affine (traversal). I discussed affine traversals in a post about them and gave laws in Glassery. These laws were written based purely on intuition, which we’ll now verify.
Achromatic lens. There are my quickly written notes from March 2018. It turns out I missed a few bits.
Grate. Russell O’Connor gives laws for a van Laarhoven Grate (a two-functor variant) in Grate: A new kind of Optic. We formulate them for "normal" Grates: grate :: ((s -> a) -> b) -> t. In the process we also learn more about Grate.
At the end we discuss Setters a little, and conclude the post with general thoughts.
The Isomorphism lenses post by Twan van Laarhoven does the same thing for lenses; we do it for more classes of optics.
A Lens is a purely functional reference. Concrete Lens representation is
data Lens s a = Lens { view :: s -> a, set :: s -> a -> s }
Note: Here and later we work with so-called monomorphic optics, i.e. optics which don’t allow type change. Hopefully I will lift this restriction in a future blog post. Also we swapped the arguments for set (in the lens library set :: ASetter s t a b -> b -> s -> t). This version is easier to do math with (we will partially apply it).
We claim that a Lens should be subject to the following three laws:
view (set s a) ≡ a
set s (view s) ≡ s
set (set s a) a' ≡ set s a'
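For intuition, here is a small sketch (the fstLens name is mine) of a concrete lens for the first component of a pair, using the Lens record above; it satisfies all three laws by simple equational reasoning:
-- A lens focusing on the first component of a pair.
fstLens :: Lens (a, b) a
fstLens = Lens
  { view = \(a, _) -> a
  , set  = \(_, b) a -> (a, b)
  }
-- GetPut: view (set (x, b) a)      = view (a, b)   = a
-- PutGet: set (x, b) (view (x, b)) = set (x, b) x  = (x, b)
-- PutPut: set (set (x, b) a) a'    = set (a, b) a' = (a', b) = set (x, b) a'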
Let’s show that the laws are both sound (not asking too much) and complete (asking enough). For that we’ll need another representation of lenses: an existential formulation:
∃c. s ≅ (c, a)
This formulation isn’t novel (e.g. Russell O’Connor has a footnote about affine traversals in A representation theorem for second-order functionals), but I haven’t seen it used to highlight the laws. In Lenses Are Exactly the Coalgebras for the Store Comonad the laws of a coalgebra for a comonad are used. As my category theory knowledge is not very deep, I find the "coalgebra for a comonad" intimidating. I don’t know if and how "Lenses are the coalgebras for the costate comonad" is generalisable to other optics. The existential formulation, on the other hand, works for everything.
We can define view and set operations using existential formulation:
where one uses the forward direction (s -> (c, a)) and the backward direction ((c, a) -> s) of the isomorphism.
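As a Haskell sketch (the GADT and the fwd/bwd names are mine, mirroring the forward and backward directions above), the existential formulation and the derived operations look roughly like this:
{-# LANGUAGE GADTs #-}
-- Sketch: a Lens s a is some hidden "left overs" type c
-- together with an isomorphism s ≅ (c, a).
data ExLens s a where
  ExLens :: (s -> (c, a))    -- forward direction
         -> ((c, a) -> s)    -- backward direction
         -> ExLens s a

exView :: ExLens s a -> s -> a
exView (ExLens fwd _) s = snd (fwd s)

exSet :: ExLens s a -> s -> a -> s
exSet (ExLens fwd bwd) s a = bwd (fst (fwd s), a)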
We can prove that the Lens laws are sound by showing that these definitions satisfy the laws. For example, the first GetPut law:
The remaining PutGet and PutPut laws can be proved sound using similar direct calculations. In fact, I mechanized the proofs in this post, and the soundness properties are proven mostly by a very simple tactic.
The other direction, completeness, is more interesting. Given view :: s -> a and set :: s -> a -> s, such that the three laws above hold, construct a c together with an isomorphism s ≅ (c, a).
The key insight is to pick the right "left overs" type c. I found the process of mechanizing the proofs helpful: you will immediately rule out e.g. bare s. The a -> s (partially applied set) is better, but still too "big". We need to use a smaller, refined type:
c = { f : a -> s | exists x. f = set x }
All three options make sense if you consider how lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b can be defined using e.g. the Profunctor encoding. Glassery uses:
lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b
lens getter setter pab = dimap
-- c = s
(\s -> (getter s, s))
(\(b, s) -> setter s b)
(first' pab)
but you can also write
lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b
lens getter setter pab = dimap
-- c = b -> t; or refined
-- c = { f : b -> t | exists s. f = setter s }
(\s -> (getter s, setter s))
(\(b, f) -> f b)
(first' pab)
Anyway, after a c is found, we can define an isomorphism. The arrows are defined as in the latter Profunctor example:
The proof obligation for the refinement is not really an obligation at all; the equality is definitionally true. However, next we need to prove the isomorphism properties. The backwards direction is easy; it’s essentially the PutGet law:
The forward direction is more involved. Here we will use proof irrelevance for the refinement proofs. Those proofs aren’t computationally relevant, so this is justified. I think it’s possible to encode these proofs in Cubical Agda. I also use functional extensionality, but that’s a theorem in Cubical Agda (it’s used once in the following part of the proof, try to spot it!). We will need to use both the PutPut and GetPut laws, one for each half of the product (pair).
We continue by proving the equality of pair halves separately:
and the other half:
This completes the completeness proof. We had to use all three laws. I’m sure that, thinking intuitively, the PutPut law is easy to forget, but these calculations show that it is important.
In the previous subsections, we knew the right Lens law candidates a priori. We only needed to verify that they are in fact the right ones. But what if we didn’t know the laws beforehand? Could we somehow calculate them?
In the completeness proof, all branches ended with the use of some law. However, we knew which existential to pick, so we ended up with proof goals which are exactly the laws we can use. Therefore, trying to write a completeness proof is not a direct way to calculate the laws: either we need to know the right existential type to use (in my opinion not entirely obvious), or which laws there possibly are.
Unfortunately I don’t have a definitive answer to this problem. A sketch of the idea is, however, to start with the
∃c. s ≅ c ⊗ a
isomorphism, and try to massage it into something without the existential c. Notice that we work in a category where all arrows are isomorphisms (a groupoid), therefore we use a ⊗-tensor, not a product.
If we assume that we have a Symmetric closed monoidal groupoid, then we can use a property:
to find out that the two are isomorphic. (And this is the hand-wavy part: I don’t understand groupoids well enough to say whether the above reasoning makes any sense at all.) We need to "remember" how to go back in the other direction; not all a -> s functions are invertible (e.g. const someS). That’s one route to the refinement of a -> s.
We could stop here, as we in fact calculated which existential we have to use (or at least could try it). We can also continue: at this point we have an existential-less representation of Lens. If we forget the isomorphisms, the remainders are s -> a and s -> (a -> s) arrows (when we split the product); these are view and set arrows! So we went from the existential to the concrete representation of Lens without using Yoneda. (What you needa know about Yoneda is an interesting read anyway!)
But if we stick with this representation, then we can write down the laws too. Naming the left-to-right and the right-to-left directions of the isomorphism (here I hand-wave about what the first half of the pair should be!), we get
Note how the PutPut law is in a contracted form, the form which we would have preferred in the above Lens laws completeness proof (spoiler: that’s where I used functional extensionality, if you haven’t spotted it).
In this section we do similar soundness and completeness proofs for prisms. A Prism is a generalized constructor. The concrete Prism representation is (preview and review are often called match and build respectively; I stick with the lens-library vocabulary):
data Prism s a = Prism { preview :: s -> Maybe a, review :: a -> s }
We claim that a Prism should be subject to the following two laws:
MatchBuild: First, if I review a value and then preview I will get it back:
preview (review a) ≡ Just a
BuildMatch: Second, if you can extract a value a using a preview from a value s, then the value s is completely described by a prism and a:
preview s ≡ Just a ⇒ review a ≡ s
The existential formulation of a Prism is similar to the Lens one, but instead of a product we have a coproduct (or sum):
∃c. s ≅ Either c a
The first step is to define the preview and review operations for the existential representation:
where one uses the forward direction (s -> Either c a) and the backward direction (Either c a -> s) of the isomorphism.
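A Haskell sketch of the existential Prism and the derived operations (the ExPrism name is mine):
{-# LANGUAGE GADTs #-}
-- Sketch: a Prism s a is some hidden type c with s ≅ Either c a.
data ExPrism s a where
  ExPrism :: (s -> Either c a)    -- forward direction
          -> (Either c a -> s)    -- backward direction
          -> ExPrism s a

exPreview :: ExPrism s a -> s -> Maybe a
exPreview (ExPrism fwd _) s = either (const Nothing) Just (fwd s)

exReview :: ExPrism s a -> a -> s
exReview (ExPrism _ bwd) a = bwd (Right a)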
As with Lens, neither of the laws is hard to show sound. The first one is a direct computation:
and for the second one we need to prepare the precondition. The injectivity of constructors lets us rule out the case when the forward direction returns a Left, because then preview would be Nothing, not Just a.
using which we can prove the law itself
We need to construct a c and an isomorphism s ≅ Either c a from the given preview and review. Here also, as in the Lens case, we need to pick the right type for the existential. And again we need a refined type:
c = { t : s | preview t ≡ Nothing }
And yet again, that type can be found either from the Profunctor definition of prism (note the either id setter):
prism :: (b -> t) -> (s -> Either t a) -> Prism s t a b
prism setter getter = dimap getter (either id setter) . right'
or by informal calculation
Then we can define the arrows of the isomorphism (I’m quite sure that a smart enough proof search would find exactly these definitions):
We use a non-standard maybe-case2, which provides the match proofs too; it can be directly coded e.g. in Agda as:
open import Data.Maybe
open import Relation.Binary.PropositionalEquality
maybe-case : ∀ {a b} {A : Set a} {B : Set b}
→ (m : Maybe A)
→ (m ≡ nothing → B)
→ ((a : A) → m ≡ just a → B)
→ B
maybe-case (just a) f g = g a refl
maybe-case nothing f g = f refl
Next we can show that the two arrows are inverses. The first case:
which we can split:
and the second case
The other direction of the isomorphism has two cases, the first:
and
The Prism law proofs are only slightly more complex than the Lens ones, mainly due to the need to do case analysis.
One interesting point is that we might want to have a third, MatchMatch law. There is a polymorphic matching :: APrism s t a b -> s -> Either t a combinator. Using its (non-existent in lens) monomorphic version matching' :: APrism' s a -> s -> Either s a we can write:
matching' p s ≡ Left s' ⇒ s ≡ s'
The law is silly, as it completely defines the Left part of the Either; therefore we can have Maybe in preview without any loss.
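For comparison, here is a sketch of a monomorphic matching' defined directly from the concrete Prism record above (not from the lens library’s APrism'); this version satisfies MatchMatch by construction, which is exactly why the Maybe-returning preview loses nothing:
-- Sketch: a monomorphic matching' in terms of the concrete preview.
-- The Left component is always the original s.
matching' :: Prism s a -> s -> Either s a
matching' p s = maybe (Left s) Right (preview p s)
The trouble appears for matching derived from a Prism built with the profunctor prism constructor, as the following example shows.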
For example take a bogus _Right Prism variant:
_Right' :: Prism' (Either Int a) a
_Right' = prism Right $ \t -> case t of
Right x -> Right x
Left i -> Left (Left (i + 1))
which will pass the QuickCheck tests:
matchBuild_prop :: Int -> Property
matchBuild_prop x = preview _Right' (review _Right' x) === Just x
buildMatch_prop :: T Int -> Property
buildMatch_prop s = case preview _Right' s of
Just a -> label "just" $ review _Right' a === s
Nothing -> label "nothing" True
but behaves weirdly:
λ> matching _Right' (Right 'a')
Right 'a'
λ> matching _Right' (Left 42)
Left (Left 43)
or
λ> over _Right' id (Left 1)
Left 2
However, the MatchMatch property is clearly violated:
matchMatch_prop :: T Int -> Property
matchMatch_prop s = case matching _Right' s of
Right _ -> label "right" True
Left t -> label "left" $ t === s
as QuickCheck points out:
λ> quickCheck matchMatch_prop
*** Failed! Falsifiable (after 1 test):
Left 0
Left 1 /= Left 0
Our verification process didn’t need this law, as we used preview and review as the core Prism primitives. There are two ways to construct a Prism in the lens library:
prism :: (b -> t) -> (s -> Either t a) -> Prism s t a b
prism' :: (b -> s) -> (s -> Maybe a) -> Prism s s a b
we used the latter, prism', way. As you see, it can construct only a semi-monomorphic Prism; I think its type should be restricted to return Prism s s a a. And we need to check the MatchMatch law for Prisms constructed with prism. If we used (even the monomorphic version of) matching in the completeness proof, we would see the need for the law, if we actually used the Left component of the result.
The MatchMatch law as stated doesn’t generalize to the polymorphic variant, as we require s ~ t, to be able to compare them.
My original idea of the law doesn’t work, nor does another way to state it:
-- too weak
matching p s ≡ Left t ⇒ matching p t ≡ Left t'
matching p s `leftBind` matching p' ≡ matching p sA counterexample is when all of negative matches are the same:
_Right'' :: Prism' (Either Int a) a
_Right'' = prism Right $ \t -> case t of
Right x -> Right x
Left i -> Left (Left 0)
However, one polymorphic variant of the MatchMatch law:
matching p s ≡ Left t ⇒ matching p t ≡ Left sseems to catch all invalid examples I can come with. It’s not directly enough for Coq (maybe it is). The relaxation of strict equality to matching p t ≡ Left s is sensible, as intuitively there should be a way to convert t back to s, t (as well as positive match results) fully determine the original s.
I discussed affine traversals previously in a post about them and gave laws in Glassery.
You get back what you put in:
preview l (set l v s) ≡ v <$ preview l s
If you can extract a value, and put it back, that doesn’t change anything.
preview l s ≡ Just a ⇒ set l s a ≡ s
If you get nothing when extracting a value, then whatever you put in, the operation is no-op.
preview l s ≡ Nothing ⇒ set l s a ≡ s
Setting twice is the same as setting once:
set l v' (set l v s) ≡ set l v' s
It turns out that the third law is not required in the monomorphic setting, similarly to how the MatchMatch law isn’t needed in the Prism case.
In this section I won’t show the full proofs; you can check (step through) the Coq sources at https://github.com/phadej/lens-laws. Let’s however highlight the main points.
The concrete representation is the "worst" of Lens and Prism (where Iso would be the best of both):
data Affine s a = Affine { preview :: s -> Maybe a, set :: s -> a -> s }
The existential representation explains the naming of this optic:
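A Haskell sketch of this existential representation and the derived operations (ExAffine and the helper names are mine); the shape Either c (d, a) is literally "the worst of" a Prism summand and a Lens product:
{-# LANGUAGE GADTs #-}
-- Sketch: an Affine s a is two hidden types c and d with s ≅ Either c (d, a).
data ExAffine s a where
  ExAffine :: (s -> Either c (d, a))    -- forward direction
           -> (Either c (d, a) -> s)    -- backward direction
           -> ExAffine s a

afPreview :: ExAffine s a -> s -> Maybe a
afPreview (ExAffine fwd _) s = either (const Nothing) (Just . snd) (fwd s)

afSet :: ExAffine s a -> s -> a -> s
afSet (ExAffine fwd bwd) s a = case fwd s of
  Left  c      -> bwd (Left c)          -- no focus: leave s as it was
  Right (d, _) -> bwd (Right (d, a))    -- focus present: replace it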
Defining the operations using the existential representation, and proving soundness of the laws, is a combination of the above Lens and Prism proofs.
The completeness proof is also similar to the Lens and Prism proofs. The interesting part is the choice of the existential types; given preview and set we use:
These types are (again) very similar to the ones used in the Prism and Lens completeness proofs.
At this point it’s a good time to discuss some properties of the optics we have covered so far:
any Iso can be turned into Lens, Prism or Affine
any Lens can be turned into Affine
any Prism can be turned into Affine
each class of optics is closed under composition
The first three properties should be easy to see. For example, take a Lens, ∃c. s ≅ (c, a); we can turn it into an Affine by taking the other summand to be the empty type (Void in Haskell).
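In concrete terms, here is a sketch of the Lens and Prism embeddings into Affine, using the record types from this post (the conversion names are mine; in a single module the overlapping field names would need DuplicateRecordFields):
-- Sketch: embedding Lens and Prism into Affine via the concrete records.
lensToAffine :: Lens s a -> Affine s a
lensToAffine (Lens v st) = Affine (Just . v) st

prismToAffine :: Prism s a -> Affine s a
prismToAffine (Prism pv rv) =
  Affine pv (\s a -> maybe s (const (rv a)) (pv s))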
The last property is not hard to verify either. Iso is clearly closed under composition (transitivity), and Lens and Prism need associativity of the product and the sum respectively, i.e.
The Affine case is slightly trickier:
Imagine (or try to write, or look into the Coq proofs for) the needed plumbing in the existential (or concrete) lens representations! It’s very neat that the van Laarhoven and Profunctor encodings use uniform representations, so all optics can share the same composition combinator, and that the encoding is based on type classes allowing automatic "upcasting" of optics. The downside is that the encoding internals leak into error messages (though https://github.com/mrkgnao/silica demonstrates how that can be improved!)
Russell O’Connor and James Deikun discovered an optic called Grate3 in Grate: A new kind of Optic. Using a Grate we can zipWithOf structures. Essentially, Grate s a witnesses that s is some kind of Distributive or Representable container of (always the same number of) as, and nothing else.
It sounds like Traversal, but this optic is very different. The concrete representation is obscure:
newtype Grate s a = Grate { grate :: ((s -> a) -> a) -> s }
However its existential representation is very simple (I guess the Closed subclass of Profunctor helped the discovery to happen):
Our task is to discover the laws. Russell doesn’t formulate them for the concrete representation, and checking the Distributive documentation doesn’t help either. We can write grate using the existential encoding:
where one uses the forward direction (s -> (i -> a)) and the backward direction ((i -> a) -> s) of the isomorphism. Having this around will help us to rule out too-strong laws, the ones we cannot fulfill (i.e. that violate soundness).
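A Haskell sketch of the existential Grate and of grate written through it (ExGrate and exGrate are my names):
{-# LANGUAGE GADTs #-}
-- Sketch: a Grate s a is some hidden index type i with s ≅ (i -> a).
data ExGrate s a where
  ExGrate :: (s -> (i -> a))    -- forward direction ("index")
          -> ((i -> a) -> s)    -- backward direction ("tabulate")
          -> ExGrate s a

exGrate :: ExGrate s a -> ((s -> a) -> a) -> s
exGrate (ExGrate fwd bwd) g = bwd (\i -> g (\s -> fwd s i))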
The journey starts with a short detour. It turns out that stating the laws for the tabulate and index construction is a lot easier. Also, in this case we stay in Haskell98, as we don’t need type families (the Representable type class has an associated type Rep f): the index type will be existentially hidden away, i.e.:
fromRepresentable :: (s -> i -> a) -> ((i -> a) -> s) -> Grate s a
fromRepresentable index tabulate = _exercise
is not hard to implement, and the required properties of index and tabulate are almost obvious: they have to be inverses.
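One possible way to fill the hole, as a sketch (a spoiler if you wanted to solve the exercise yourself):
-- Tabulate the container, asking g for the element at each index.
fromRepresentable :: (s -> i -> a) -> ((i -> a) -> s) -> Grate s a
fromRepresentable index tabulate =
  Grate (\g -> tabulate (\i -> g (\s -> index s i)))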
If you stare at the concrete Grate representation and the types of index and tabulate (and the implementation of fromRepresentable), you might see that s -> a acts as i, i.e. the type of grate is similar to tabulate’s "give me a function from indices to elements, and I’ll return you a whole container". This gives us something to try for a completeness proof:
We can define (possible) arrows of the isomorphism using grate; the implementation is (suspiciously) simple:
We have to try to write a proof to see what properties grate must have to make these arrows actually inverses.
This proof obligation seems reasonable to require as a law; it’s a special case of the tabulate (\i -> index a i) ≡ a Representable law:
TabulateIndex: Tabulating by indexing doesn’t change anything.
grate g (\i -> i s) ≡ s
So far everything looks good. Let’s try the other direction:
Can we require that as a law?
IndexTabulate: Indexing into tabulated value gives a value at the index?
i (grate f) ≡ f i
The answer is yes and no. There is a but: if we require the law as such, it’s too strong, because i and f are arbitrary!
import Data.Functor.Identity
import Test.QuickCheck
import Test.QuickCheck.Instances
newtype Grate s a = Grate { grate :: ((s -> a) -> a) -> s }
identityGrate :: Grate (Identity a) a
identityGrate = Grate (\f -> Identity (f runIdentity))
law2example :: (Fun (Identity Int) (Int)) -> Property
law2example (Fn i) = i (grate identityGrate f) === f i where
f :: (Identity Int -> Int) -> Int
f = const 42
I claim that identityGrate is a lawful Grate; however, the law2example Property doesn’t hold, and QuickCheck finds a counterexample immediately. f indeed can be arbitrary, but the indices i aren’t! Only some values of the index type s -> a are used; in identityGrate a single one: runIdentity.
We have to make small amendments to the previous. Each grate will have an associated predicate P, and the correct existential index type is a refinement of s -> a by P.
The definitions need to be slightly changed to forget the refinement:
The grate needs to be adjusted too, or actually its argument function; its more precise type is as follows (remember that in "good" languages, like LiquidHaskell, refinement types are erased, i.e. don’t have any runtime cost!):
The user of the library "should" provide a predicate and verify that their grate passes only functions satisfying that predicate. The final versions of the two laws are:
TabulateIndex: Tabulating by indexing doesn’t change anything.
grate g (\i -> i s) ≡ s
IndexTabulate: Indexing into tabulated value gives a value at the index?
i (grate f) ≡ f i -- forall i such that P i holds
For example, for the above Identity example such a predicate can simply be i ≡ runIdentity (can we encode this in LiquidHaskell? I remember seeing some predicate support):
-- this holds (almost trivially)
law2proper :: Int -> Property
law2proper x = runIdentity (grate identityGrate f) === f runIdentity where
f :: (Identity Int -> Int) -> Int
f = const x
In general, for finite containers, enumerating all the possible index values is the simplest way to define the predicate P. The indices are there, even though the implementation hides them.
A careful reader should notice that we still have to do the soundness proofs. We have the two parts of the isomorphism: the forward direction s -> (i -> a) and the backward direction (i -> a) -> s for some existential index type i; grate is defined using them.
TabulateIndex holds almost trivially:
For IndexTabulate we have to define what the predicate P is for the existential formulation. We check how grate is implemented (the unfolding is just above in the proof), and what its argument is called with:
Using this predicate, we can verify the law:
This completes the soundness proof. We have now discovered the Grate laws. As an exercise you could think about
data V2 a = V2 a a
_V2 :: Grate (V2 a) a
_V2 = _grateExercise1
silly :: Grate () a
silly = _grateExercise2
Achromatic lenses are "A lot of the bidirectional programming folks like a variant of lenses that have get, put, and create." (quote by Russell O’Connor). Originally, in my notes on achromatic lenses, the concrete representation has three operations; I rename them all to be nicely six characters long and to reflect the database-holy-grailness (update could have the type s -> (a -> a) -> s as well):
data Achroma s a = Achroma
{ select :: s -> a
, update :: s -> a -> s
, insert :: a -> s
}
I listed four laws: three Lens laws and one from Prism (or Iso). However, I neglected to specify the last, fifth law.
SelectUpdate
select l (update l a s) ≡ a
UpdateSelect
update l (select l s) s ≡ s
UpdateUpdate
update l a' (update l a s) ≡ update l a' s
SelectInsert
select l (insert l a) ≡ a
UpdateInsert (new)
update l (insert l a') a ≡ insert l a
The existential representation for the achromatic lens is almost like the Lens one:
It can be understood so that s might have the "left overs" c, or none at all. It turned out that to create the isomorphism from the three lawful components, we also needed an extra bit:
in other words, we need to know, for each s, whether it has those extra bits c.
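A Haskell sketch of this existential representation and the three derived operations (ExAchroma and the helper names are mine); the Nothing tag marks the values created directly with insert:
{-# LANGUAGE GADTs #-}
-- Sketch: an achromatic lens is a hidden c with s ≅ (Maybe c, a).
data ExAchroma s a where
  ExAchroma :: (s -> (Maybe c, a))
            -> ((Maybe c, a) -> s)
            -> ExAchroma s a

acSelect :: ExAchroma s a -> s -> a
acSelect (ExAchroma fwd _) = snd . fwd

acUpdate :: ExAchroma s a -> s -> a -> s
acUpdate (ExAchroma fwd bwd) s a = bwd (fst (fwd s), a)

acInsert :: ExAchroma s a -> a -> s
acInsert (ExAchroma _ bwd) a = bwd (Nothing, a)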
In the completeness proof I used the following value for c:
In the Haskell implementation we don’t care:
type Optic p s t a b = p a b -> p s t
class Profunctor p => PointedCartesian p where
pointedSecond :: p a b -> p (Maybe c, a) (Maybe c, b)
achroma
:: forall p s t a b. PointedCartesian p
=> (s -> a) -- ^ select
-> (b -> t) -- ^ insert
-> (s -> b -> t) -- ^ update
-> Optic p s t a b
achroma sa bt sabt
= dimap f g
. pointedSecond
where
-- note: always Just.
f :: s -> (Maybe (b -> t), a)
f s = (Just (sabt s), sa s)
g :: (Maybe (b -> t), b) -> t
g (Nothing, b) = bt b
g (Just bt', b) = bt' b
We could try to implement a combinator checking whether the value is "pristine", i.e. could have been created with insert directly:
newtype Pristine a b = Pristine { unPristine :: a -> Bool }
instance Profunctor Pristine where
dimap f _ (Pristine h) = Pristine (h . f)
instance PointedCartesian Pristine where
pointedSecond (Pristine h) = Pristine $ \(m, a) -> isNothing m && h a
pristine :: Optic Pristine s s a a -> s -> Bool
pristine o s = unPristine (o (Pristine (const True))) s
But with our current version of achroma, pristine always returns False. If we add pristine to achroma:
-- | better? concrete representation
data Achroma' s a = Achroma'
{ select' :: s -> a
, update' :: s -> a -> s
, insert' :: a -> s
, pristine' :: s -> Bool
}
-- | better? Profunctor encoding constructor
achroma'
:: forall p s t a b. PointedCartesian p
=> (s -> a) -- ^ select
-> (b -> t) -- ^ insert
-> (s -> b -> t) -- ^ update
-> (s -> Bool) -- ^ pristine
-> Optic p s t a b
achroma' sa bt sabt pris
= dimap f g
. pointedSecond
where
f :: s -> (Maybe (b -> t), a)
f s = (if pris s then Nothing else Just (sabt s), sa s)
g :: (Maybe (b -> t), b) -> t
g (Nothing, b) = bt b
g (Just bt', b) = bt' b
then the pristine combinator will work, whichever way the value is created or modified. The f and g above are essentially the same as used in the completeness proof.
The addition of pristine will add two more laws (or one, depending on how you count), which are encoded in the type of the existential:
Pristine
pristine s ≡ True ⇒ update s a ≡ insert a
or alternatively as a variant of Prism’s BuildMatch law4
pristine s ≡ True ⇒ s ≡ insert (select s)
NotPristine
pristine s ≡ False ⇒ s ≢ insert (select s)
The achromatic lens case illustrates again why it’s important to think about laws; when making my notes I didn’t think about the pristine case, for example.
The last section is about Setters. They have only a single operation, over:
newtype Setter s a = Setter { over :: (a -> a) -> (s -> s) }
What is its existential representation? Look at all the other optics we examined:
Iso:     s ≅ a
Lens:    ∃c. s ≅ (c, a)
Prism:   ∃c. s ≅ Either c a
Affine:  ∃c d. s ≅ Either c (d, a)
Grate:   ∃i. s ≅ (i -> a)
Achroma: ∃c. s ≅ (Maybe c, a)
Hopefully the pattern stands out, namely that s is isomorphic to f a for some endofunctor f:
Endofunctors can be composed, and there’s an identity functor. All optics seem to be given by some (closed under composition) class of endofunctors. For example, Traversal is given by Traversable functors (whatever these are in category-theoretical terms).
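As a sketch, the common pattern can be written down directly (the ExSetter name is mine): picking f = Identity gives Iso, f = (,) c gives Lens, f = Either c gives Prism, f = (->) i gives Grate, and an unconstrained Functor gives Setter:
{-# LANGUAGE GADTs #-}
-- Sketch: s is isomorphic to f a for some endofunctor f.
data ExSetter s a where
  ExSetter :: Functor f => (s -> f a) -> (f a -> s) -> ExSetter s a

exOver :: ExSetter s a -> (a -> a) -> s -> s
exOver (ExSetter fwd bwd) g = bwd . fmap g . fwd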
That’s the point which makes the Profunctor encoding quite elegant. Look at the (stripped-down) classes behind the implementation (there is no class for Affine, it’s obtained simply by a (Strong, Choice) combination):
class Profunctor p where dimap :: (a -> s) -> (t -> b) -> p a b -> p s t
class Strong p where second' :: p a b -> p (c, a) (c, b)
class Choice p where right' :: p a b -> p (Either d a) (Either d b)
class Closed p where closed :: p a b -> p (i -> a) (i -> b)
class PointedC p where psecond' :: p a b -> p (Maybe c, a) (Maybe c, b)
-- Setter
class Mapping p where map' :: Functor f => p a b -> p (f a) (f b)
-- Traversal
class Traversing p where traversed :: Traversable f => p a b -> p (f a) (f b)
It also explains why various Functor classes are enough to implement simple (i.e. not type-changing, as in this post) optics, described in the Functor optics post.
Now it is possible to consistently invent (or discover?) new optics, for example LensGrate, a combination of Lens and Grate (I remember seeing it in some library):
Does it have some handy operations still, or does it degenerate to Setter? Or PrismGrate, which can still review things: try to verify that this class is closed under composition:
I think AffineGrate is practically as useful as Setter, but you can still speak about it. AchromaticGrate seems to be a useful concept; we can implement a variant of view for it, as there’s a single index which stands out (and e.g. silly isn’t an AchromaticGrate):
Or we can have FiniteGrate, where the index is some finite set. Representing the index with a finite ordinal type gives us an order, so we can traverseOf a FiniteGrate. However, Traversals aren’t FiniteGrates (exercise!).
So many new, or at least not well-known, optics that we completely forgot to talk about the Setter laws. They aren’t surprising when we know the existential encoding:
OverIdentity
over id ≡ id
OverComposition
over f . over g ≡ over (f . g)
Proofs of their soundness and completeness can be found in the Coq formalisation, or you can try to write them yourself as an exercise.
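As a lighter-weight check, here is a QuickCheck sketch of both laws for a simple list Setter, using the Setter newtype from above (the mappedList name and the properties are mine):
import Test.QuickCheck

-- A Setter mapping over every element of a list.
mappedList :: Setter [a] a
mappedList = Setter map

overIdentity_prop :: [Int] -> Property
overIdentity_prop xs = over mappedList id xs === xs

overComposition_prop :: Fun Int Int -> Fun Int Int -> [Int] -> Property
overComposition_prop (Fn f) (Fn g) xs =
  over mappedList f (over mappedList g xs) === over mappedList (f . g) xs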
In this blog post we presented the results of formalizing some optic laws. Thinking about laws paid off; among other things we
discovered the MatchMatch law for Prisms,
discovered an UpdateInsert law, as well as a possible new core operation pristine for Achromas,
formulated laws for the grate operation of Grate.
Also, in the Setter section we discussed some other not-so-well-known optics.
A lot followed from the single insight of requiring an isomorphism (and not only an equivalence, i.e. bare back-and-forth arrows without requiring them to be inverses) in the existential formulation of optics.
Hopefully you enjoyed the read, and this blog post gave you some food for thought!
but get and put clash with MonadState names↩︎
Writing the definition using (dependent) Maybe elimination
maybe-elim : ∀ {a b} {A : Set a}
→ (P : Maybe A → Set b)
→ P nothing
→ ((x : A) → P (just x))
→ (m : Maybe A)
→ P m
maybe-elim P n j (just x) = j x
maybe-elim P n j nothing = n
is a nice exercise. Something like that is what the remember tactic would do in Coq. Knowing what happens helps to write the definitions by hand.↩︎
Russell writes "Until Edward Kmett comes up with a better name, let’s call this new optic a Grate"; three years later we are probably stuck with that name.↩︎
in my previous notes I thought that the law doesn’t hold: I neglected to specify the precondition↩︎