It’s probably too late/too much to do this in this paper, but here’s an
idea for how to make the proofs in GTT easier / a suggestion for what an
"Abadi-Plotkin logic" for dynamism might include.
First, suppose we have type variables in the program logic, where a
judgement
X type |- T(X) type
implies monotonicity of T, i.e. T(A) <= T(B) if A <= B.
(This is admissible in GTT, but not enforced for an "unknown" type
constructor.)
Second, suppose we have terms with free type variables, where
X type |- E : T(X)
implies that E satisfies a “fundamental theorem”
E(A) <= E(B) : T(A) <= T(B)
Next, suppose we have a value->value function type A => B (maybe this
only needs to exist in the program logic, not in the programming
language, as a way to talk about a term with a free value variable).
Having congruence and beta for this implies that every
complex-value-with-a-variable is monotone, because f <= g : A => B <= A’
=> B' and a <= a' : A <= A’ implies f(a) <= g(a') : B <= B’, and we have
beta for the applications as equidynamism.
Then I think the following “path induction” principles are valid for
proving dynamism by "replacing upcasts with the identity whenever that
type checks". These are basically just packaging up using monotonicity,
transitivity, and the universal property of the upcast, which says in
this notation
id_A <= <B <-< A> : A => A <= A => B
<B <-< A> <= id_B : A => B <= B => B
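As a quick operational sanity check on this ordering, here is a tiny
Haskell model (entirely my own illustration; Dyn, up, and down are
invented names, not GTT syntax) in which a precision A <= B is read as
an embedding-projection pair: the upcast embeds, the downcast projects
partially, and the retraction law below is the computational shadow of
the upcast sitting between id_A and id_B:

```haskell
-- Toy model: a "more dynamic" type Dyn playing the role of B,
-- with Int playing the role of A.
data Dyn = DInt Int | DBool Bool
  deriving (Eq, Show)

-- the upcast <Dyn <-< Int>: a total embedding
up :: Int -> Dyn
up = DInt

-- the downcast <Int <<- Dyn>: a partial projection, failing on bad tags
down :: Dyn -> Maybe Int
down (DInt n) = Just n
down _        = Nothing

-- retraction: downcasting an upcast value is the identity
retraction :: Int -> Bool
retraction n = down (up n) == Just n
```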
You have to be careful about (a) whether it's the source or target of
the function that's "free", and (b) whether the term with the cast is on
the left or right of the term dynamism judgement, which gives:
(1) right upcast induction
X type, f : X => B |- E(X,f) : T(X)
T(B) <= C [might not hold]
E(B, id_B) <= F : T(B) <= C
——————————————————
E(A, <B <-< A>) <= F : T(A) <= C
[ use monotonicity to get E(A, upcast) <= E(B,id_B) : T(A) <= T(B) and
then transitivity with the premise ]
(2) left upcast induction
Y type, f : A => Y |- E(Y,f) : T(Y)
C <= T(A) [might not hold]
F <= E(A, id_A) : C <= T(A)
——————————————————
F <= E(B, <B <-< A>) : C <= T(B)
Here are some examples using them:
(a) Suppose we have a List A type and a complex-value map (everything is
positive) : (A => B) x List A => List B
We want to show that map <B <-< A> is an upcast, i.e.
(a1) id_<List A> <= map <B <-< A> : L(A) => L(A) <= L(A) => L(B)
Note that Y type, f : A => Y |- map f : L(A) => L(Y)
and L(A) => L(A) <= L(A) => L(A).
Therefore, by left upcast induction, it suffices to show
id_<List A> <= map id_A, which is true by beta/eta.
(a2) map <B <-< A> <= id_<List B> : L(A) => L(B) <= L(B) => L(B)
Note that X type, f : X => B |- map f : L(X) => L(B),
and L(B) => L(B) <= L(B) => L(B).
Therefore, by right upcast induction, it suffices to show
map id_B <= id_<List B>, which is true by eta.
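Under the same toy reading of casts as plain functions (a sketch with
invented names, not the paper's semantics), the claim that mapping a
cast gives the list upcast can be sanity-checked pointwise: mapping the
upcast and then traversing the downcast recovers the original list:

```haskell
data Dyn = DInt Int | DBool Bool
  deriving (Eq, Show)

up :: Int -> Dyn                 -- plays the upcast <Dyn <-< Int>
up = DInt

down :: Dyn -> Maybe Int         -- the partial downcast
down (DInt n) = Just n
down _        = Nothing

upList :: [Int] -> [Dyn]         -- map <Dyn <-< Int>, candidate list upcast
upList = map up

downList :: [Dyn] -> Maybe [Int] -- its partner, failing if any tag is wrong
downList = traverse down

-- the pointwise shadow of (a1)/(a2): the list upcast is a retraction
listRetraction :: [Int] -> Bool
listRetraction xs = downList (upList xs) == Just xs
```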
(b) We want to show that u := \z.split z as (x,y) in (<A' <-< A> x,
<B' <-< B> y) is an upcast A x B <= A' x B'.
(b1) To show:
id_<A x B> <= u : A x B => A x B <= A x B => A’ x B’
First, note that
Y type, f : A => Y |- \z.split z as (x,y) in (f x, <B' <-< B> y) : A x B => Y x B'
and
A x B => A x B <= A x B => A x B'.
Therefore, by left upcast induction (with T(Y) = A x B => Y x B'), it
suffices to show
id_<A x B> <= \z.split z as (x,y) in (id_A x, <B' <-< B> y) : A x B => A x B'
and by beta
id_<A x B> <= \z.split z as (x,y) in (x, <B’ <-< B> y) : A x B => A x B’
Second,
Y type, f : B => Y |- \z.split z as (x,y) in (x, f y) : A x B => A x Y
and
A x B => A x B <= A x B => A x B
Therefore, by left upcast induction (with T(Y) = A x B => A x Y), it
suffices to show
id_<A x B> <= \z.split z as (x,y) in (x, id_B y)
which is true by beta/eta.
(b2) To show:
u <= id_<A' x B'> : A x B => A' x B' <= A' x B' => A’ x B’
Since
X type, f : X => A' |- \z.split z as (x,y) in (f x, <B’ <-< B> y) : X x
B => A’ x B’
and
A' x B => A’ x B’ <= A’ x B' => A’ x B’
it suffices to show
\z.split z as (x,y) in (id_A' x, <B’ <-< B> y) <= id_<A' x B’>
And since
X type, f : X => B' |- \z.split z as (x,y) in (id_A' x, f y) : A' x X => A' x B'
and
A' x B' => A' x B' <= A' x B' => A' x B'
it suffices to show
\z.split z as (x,y) in (id_A' x, id_B’ y) <= id_<A' x B’>
which again holds by beta-eta.
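The same toy function model also illustrates (b): the proof casts one
component of the pair at a time, and the two single-component passes
compose to u. A hypothetical Haskell sketch (Dyn and the cast names are
my assumptions):

```haskell
data Dyn = DInt Int | DBool Bool
  deriving (Eq, Show)

upA :: Int -> Dyn                 -- plays <A' <-< A>
upA = DInt

upB :: Bool -> Dyn                -- plays <B' <-< B>
upB = DBool

-- u = \z. split z as (x,y) in (<A' <-< A> x, <B' <-< B> y)
u :: (Int, Bool) -> (Dyn, Dyn)
u (x, y) = (upA x, upB y)

-- first pass: cast only the first component (as in the first induction)
step1 :: (Int, Bool) -> (Dyn, Bool)
step1 (x, y) = (upA x, y)

-- second pass: cast only the second component
step2 :: (Dyn, Bool) -> (Dyn, Dyn)
step2 (x, y) = (x, upB y)

-- the one-component-at-a-time decomposition agrees with u
decomposes :: (Int, Bool) -> Bool
decomposes p = (step2 . step1) p == u p
```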
----------------------------------------------------------------------
For (1) and (2), I was thinking that A => A' and B -o B' could exist
only at the program logic level, not as part of the programming
language. I.e. without changing the GTT programming language from
what's currently there, add a separate layer where we have a judgement
like
Psi | G | D |- E : T
where
Psi ::= Psi,X type | Psi,f : A1 => A2 | Psi,s : B1 -o B2
which act like meta-language quantifiers that we would currently phrase
as "for any complex value from A1 to A2" or "for any stack from B1 to B2".
And then a judgement
f : A1 => A2 | G | D |- E : T
means
(1) for any complex value G, x : A1 |- V : A2, we have G | D |- E[x.V/f]
: T, and
(2) for related complex values G <= G, x : A1 <= x' : A1 |- V <= V' :
A2 <= A2,
G <= G | D <= D |- E[x.V/f] <= E[x'.V'/f] : T <= T
Similarly, s : B1 -o B2 | G | D |- E : T means
(1) for any stack G | B1 |- S : B2,
G | D |- E[S/s] : T
(2) for related stacks G <= G | B1 <= B1 |- S <= S' : B2 <= B2,
G <= G | D <= D |- E[S/s] <= E[S'/s] : T <= T
I.e. in the program logic the first-order quantifiers range over both
the objects of the category and the maps.
----------------------------------------------------------------------
OK, I think that if we have type dynamism assumptions, then the
following variation works:
(1) right type dynamism induction
X type, X <= B |- E(X) : T(X)
T(B) <= C [might not hold]
E(B) <= F : T(B) <= C
——————————————————
A type, A <= B |- E(A) <= F : T(A) <= C
i.e. if you have a term that makes sense for any type below B, then if
it has an upper bound at B, it has an upper bound everywhere
(2) left dynamism induction
Y type, A <= Y |- E(Y) : T(Y)
C <= T(A) [might not hold]
F <= E(A) : C <= T(A)
——————————————————
B type, A <= B |- F <= E(B) : C <= T(B)
(3) axioms <A <-< A> = id_A : A => A and <B <<- B> = id_B : B -o B
(if type dynamism were not thin, E would depend on the derivation, and
the identity derivation would be substituted in the premise).
To see why this is right, assume A <= B. For the downcasts, the
ordering is still
id_A : A -o A <= <A <<- B> : B -o A <= <B <<- B> : B -o B
just like for upcasts
id_A : A => A <= <B <-< A> : A => B <= <B <-< B> : B => B
i.e. the identity on the source of the type precision is less than the
cast, which is less than the identity on the target; the only difference
is the type in the middle.
So, for example, if you have
X type, X <= B |- <B <-< X> : X => B
and a particular A for which A <= B then it makes sense that
<B <-< A> <= <B <-< B> = id_B : A => B <= B => B
but also that (if they’re computation types instead)
X type, X <= B |- <X <<- B> : B -o X so
<A <<- B> <= <B <<- B> = id_B : B -o A <= B -o B
So here’s how one of the Kleisli upcasts goes:
Suppose B1 <= B1’ and B2 <= B2’.
To show:
u := \x : U (B1 x B2). thunk {force(<U B1' <-< U B1> thunk(fst(force
x))), force(<U B2' <-< U B2> thunk(snd(force x))) }
is an upcast <U (B1' x B2') <-< U (B1 x B2)>
(a) B1 <= B1’, B2 <= B2’ |- id_U (B1 x B2) <= u : U (B1 x B2) => U (B1 x
B2) <= U (B1 x B2) => U (B1' x B2')
Note that
Y type, B1 <= Y |- \x : U (B1 x B2). thunk {force(<U Y <-< U B1>
thunk(fst(force x))), force(<U B2' <-< U B2> thunk(snd(force x))) } :
U (B1 x B2) => U (Y x B2')
so we can contract B1’ to B1, and it suffices to show
id <= \x : U (B1 x B2). thunk {force(<U B1 <-< U B1> thunk(fst(force
x))), ... }
: U (B1 x B2) => U (B1 x B2) <= U (B1 x B2) => U (B1 x B2’)
Then (secretly using the fact that U(id_B1) = id_U B1 : U B1 <= U B1),
this is equal to
\x : U (B1 x B2). thunk {force(thunk(fst(force x))), ... }
by reducing an identity cast.
Doing the same thing to contract B2’ to B2 and beta-reducing the
identity, it suffices to show
id <= \x : U (B1 x B2). thunk {force(thunk(fst(force x))),
force(thunk(snd(force x))) }
which is true by beta-reducing the force-thunk redex that removing the
cast exposed,
eta-reducing the (fst force x, snd force x) pair,
and eta-reducing the thunk(force x)
(b) a dual thing should work for
B1 <= B1’, B2 <= B2’ |- u <= id_U (B1' x B2’)
Because we’re showing an upper bound, use right dynamism induction to
contract B1 to B1’ and B2 to B2’, and then beta/eta does it.
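To see the reductions in (a) concretely, here is a toy Haskell model of
the Kleisli upcast (the U-as-thunk encoding and all names are my own
assumptions, not GTT syntax): contracting both component casts to the
identity makes the force-thunk betas and pair/thunk etas collapse u to
the identity, as in the proof:

```haskell
-- Toy CBPV fragment: U B as a unit-accepting thunk,
-- with thunk/force as beta/eta partners.
type U b = () -> b

thunk :: b -> U b
thunk b = \() -> b

force :: U b -> b
force t = t ()

-- a cast on U lifted from a cast on the underlying results
upU :: (b -> b') -> U b -> U b'
upU f t = thunk (f (force t))

-- the Kleisli-style upcast u from the text, with the component
-- casts passed in as plain functions (a modeling choice)
uCast :: (b1 -> b1') -> (b2 -> b2') -> U (b1, b2) -> U (b1', b2')
uCast f g x =
  thunk ( force (upU f (thunk (fst (force x))))
        , force (upU g (thunk (snd (force x)))) )

-- with both casts contracted to the identity, u acts as the identity
collapses :: Bool
collapses = force (uCast id id (thunk (1 :: Int, True))) == (1, True)
```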