diff --git a/paper-new/defs.tex b/paper-new/defs.tex
index 7f88a118d4f564054ef8263bf0880b0df7a6fc55..f0b8114e66fd645a340f39f7bbc2402ea40ed45d 100644
--- a/paper-new/defs.tex
+++ b/paper-new/defs.tex
@@ -24,7 +24,7 @@
 \newcommand{\nat}{\text{Nat}}
 \newcommand{\bool}{\text{Bool}}
 \newcommand{\ra}{\rightharpoonup}
-\newcommand{\Ret}{\mathsf{Ret}}
+\newcommand{\Ret}[1]{\mathsf{Ret\,}{#1}}
 \newcommand{\hole}[1]{\bullet \colon {#1}}
 
 
@@ -44,11 +44,15 @@
 
 \newcommand{\Lift}{\text{Lift}}
 
-\newcommand{\rel}{\circ\hspace{-4px}-\hspace{-4px}\bullet}
+%\newcommand{\rel}{\circ\hspace{-4px}-\hspace{-4px}\bullet}
+\newcommand{\rel}{\mathrel{\circ\mkern-6mu-\mkern-6mu\bullet}}
 \newcommand{\ltdyn}{\sqsubseteq}
 \newcommand{\gtdyn}{\sqsupseteq}
 \newcommand{\equidyn}{\mathrel{\gtdyn\ltdyn}}
 \newcommand{\gamlt}{\Gamma^\ltdyn}
+\newcommand{\deltalt}{\Delta^\ltdyn}
+\newcommand{\relcomp}{\odot}
+
 \newcommand{\hasty}[3]{{#1} \vdash {#2} \colon {#3}}
 \newcommand{\etmprec}[4]{{#1} \vdash {#2} \ltdyn_e {#3} \colon {#4}}
 \newcommand{\itmprec}[4]{{#1} \vdash {#2} \ltdyn_i {#3} \colon {#4}}
diff --git a/paper-new/graduality.tex b/paper-new/graduality.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b5e4f666f35548cfcc85b08c4e7d25faeb8eebed
--- /dev/null
+++ b/paper-new/graduality.tex
@@ -0,0 +1,18 @@
+\section{Unary Canonicity}
+Before discussing graduality, we seek to prove its ``unary'' analogue.
+Namely, instead of considering inequality between terms, we start by considering equality.
+
+
+\section{Graduality}\label{sec:graduality}
+The main theorem we would like to prove is the following:
+
+\begin{theorem}[Graduality]
+  If $\cdot \vdash M \ltdyn N : \nat$, then
+  \begin{enumerate}
+    \item If $N = \mho$, then $M = \mho$
+    \item If $N = `n$, then $M = \mho$ or $M = `n$
+    \item If $M = V$, then $N = V$
+  \end{enumerate}
+\end{theorem}
+
+
diff --git a/paper-new/intro.tex b/paper-new/intro.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c0ca56cfd46e46edec857af6eca2a103d27bce7e
--- /dev/null
+++ b/paper-new/intro.tex
@@ -0,0 +1,208 @@
+\section{Introduction}
+  
+% gradual typing, graduality
+\subsection{Gradual Typing and Graduality}
+One of the principal axes along which type systems of programming languages are classified
+is static versus dynamic type discipline.
+In static typing, code is type-checked at compile time, while in dynamic typing,
+the type checking is deferred to run-time. Both approaches have benefits: with static typing,
+the programmer is assured that if the program passes the type-checker, their program
+is free of type errors. Meanwhile, dynamic typing allows the programmer to rapidly prototype
+their application code without needing to commit to fixed type signatures for their functions.
+
+\emph{Gradually-typed languages} \cite{siek-taha06} allow for both disciplines
+to be used in the same codebase, and support interoperability between statically-typed
+and dynamically-typed code.
+This flexibility allows programmers to begin their projects in a dynamic style and
+enjoy the benefits of dynamic typing related to rapid prototyping and easy modification
+while the codebase ``solidifies''. Over time, as parts of the code become more mature
+and the programmer is more certain of what the types should be, the code can be
+\emph{gradually} migrated to a statically typed style without needing to start the project
+over in a completely different language.
+
+Gradually-typed languages should satisfy two intuitive properties.
+First, the interaction between the static and dynamic components of the codebase
+should be safe -- i.e., should preserve the guarantees made by the static types.
+In particular, while statically-typed code can error at runtime in a gradually-typed language,
+such an error can always be traced back to a dynamically-typed term that
+violated the typing contract imposed by statically typed code.
+% In other words, in the statically-typed portions of the codebase, type soundness must be preserved.
+Second, gradually-typed languages should support the smooth migration from dynamic typing
+to static typing, in that the programmer can initially leave off the
+typing annotations and provide them later without altering the meaning of the
+program.
+
+% Graduality property
+Formally speaking, gradually typed languages should satisfy the 
+\emph{dynamic gradual guarantee}, originally defined by Siek, Vitousek, Cimini,
+and Boyland \cite{siek_et_al:LIPIcs:2015:5031}.
+This property is also referred to as \emph{graduality}, by analogy with parametricity.
+Intuitively, graduality says that going from a dynamic to static style should not
+introduce changes in the meaning of the program.
+More specifically, making the types more precise by adding typing annotations
+% (replacing $\dyn$ with a more specific type
+%such as integers)
+will either result in the same behavior as the original, less precise program,
+or will result in a type error.
+
+
+\subsection{Current Approaches to Proving Graduality}
+Current approaches to proving graduality include the methods of Abstracting Gradual Typing
+\cite{garcia-clark-tanter2016} and the formal tools of the Gradualizer \cite{cimini-siek2016}.
+These allow the language developer to start with a statically typed language and derive a
+gradually typed language that satisfies the gradual guarantee. The downside to these approaches
+is that the semantics of the resulting languages are too lazy: the frameworks consider only
+the $\beta$ rules and not the $\eta$ equalities. Furthermore, while these frameworks do prove
+graduality, they do not show the correctness of the equational theory, which is equally important
+to sound gradual typing. For example, programmers often refactor their code without thinking about
+whether the refactoring has broken the semantics of the program. 
+It is the validity of the laws in the equational theory that guarantees that such refactorings are sound. 
+Similarly, correctness of compiler optimizations rests on the validity of the corresponding equations
+from the equational theory. It is therefore important that the languages that claim to be gradually typed
+have provably correct equational theories.
+
+%The downside is that
+%not all gradually typed languages can be derived from these frameworks, and moreover, in both
+%approaches the semantics is derived from the static type system as opposed to the alternative
+%in which the semantics determines the type checking. Without a clear semantic interpretation of type
+%dynamism, it becomes difficult to extend these techniques to new language features such as polymorphism.
+
+% Proving graduality via LR
+New and Ahmed \cite{new-ahmed2018}
+have developed a semantic approach to specifying type dynamism in terms of
+\emph{embedding-projection pairs}, which allows for a particularly elegant formulation of the
+gradual guarantee.
+Moreover, their axiomatic account of program equivalence allows for type-based reasoning
+about program equivalences.
+%
+In this approach, a logical relation is constructed and shown to be sound with respect to
+the notion of observational approximation that specifies when one program is more precise than another.
+The downside of this approach is that each new language requires a different logical relation
+to prove graduality. Furthermore, the logical relations tend to be quite complicated due
+to a technical requirement known as \emph{step-indexing}.
+As a result, developments using this approach tend to require vast effort, with the
+corresponding technical reports having 50+ pages of proofs.
+
+An alternative approach, which we investigate in this paper, is provided by
+\emph{synthetic guarded domain theory} (SGDT).
+The techniques of SGDT allow us to internalize the
+step-indexed reasoning normally required in logical relations proofs of graduality,
+ultimately allowing us to specify the logical relation in a manner that looks nearly
+identical to a typical, non-step-indexed logical relation.
+
+In this paper, we report on work we have done to mechanize proofs of graduality and
+correctness of equational theories using SGDT techniques in Agda.
+Our goal in this work is to mechanize these proofs in a reusable way,
+thereby providing a framework for more easily and conveniently proving
+that existing languages satisfy graduality and have sound equational
+theories. Moreover, we aim for designers of new languages to utilize
+the framework to facilitate the design of new provably-correct
+gradually-typed languages with nontrivial features.
+
+
+\subsection{Proving Graduality in SGDT}
+\textbf{TODO:} This section should probably be moved to after the relevant background has been introduced.
+
+% TODO introduce the idea of cast calculus and explicit casts?
+
+In this paper, we will utilize SGDT techniques to prove graduality for a particularly
+simple gradually-typed cast calculus, the gradually-typed lambda calculus.
+This is the usual simply-typed lambda calculus with a dynamic type $\dyn$ such that
+$A \ltdyn\, \dyn$ for all types $A$, as well as upcasts and downcasts between any types
+$A$ and $B$ such that $A \ltdyn B$. The complete definition will be provided in
+Section \ref{sec:GTLC}.
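+To orient the reader, here are a few representative instances of type
+precision and the typing of the induced casts; this is an illustrative
+sketch in the notation of Section \ref{sec:GTLC}, not the full set of rules:
+\[
+  \nat \ltdyn \dyn, \qquad
+  \nat \to \nat \,\ltdyn\, \dyn \to \dyn \,\ltdyn\, \dyn
+\]
+If $A \ltdyn B$, then from $\hasty{\Gamma}{M}{A}$ we obtain
+$\hasty{\Gamma}{\up A B M}{B}$, and from $\hasty{\Gamma}{N}{B}$ we obtain
+$\hasty{\Gamma}{\dn A B N}{A}$; upcasts always succeed, while downcasts may
+fail with the run-time error $\mho$.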
+The graduality theorem is shown below.
+
+
+% \begin{theorem}[Graduality]
+%   If $M \ltdyn N : \nat$, then either:
+%   \begin{enumerate}
+%     \item $M = \mho$
+%     \item $M = N = \zro$
+%     \item $M = N = \suc n$ for some $n$
+%     \item $M$ and $N$ diverge
+%   \end{enumerate}
+% \end{theorem}
+
+
+\begin{theorem}[Graduality]
+  If $\cdot \vdash M \ltdyn N : \nat$, then
+  \begin{enumerate}
+    \item $M \Downarrow$ iff $N \Downarrow$
+    \item If $M \Downarrow v_?$ and $N \Downarrow v'_?$ then either $v_? = \mho$, or $v_? = v'_?$.
+  \end{enumerate}
+\end{theorem}
+
+Details can be found in later sections, but we provide a brief explanation of the terminology and notation:
+
+\begin{itemize}
+  \item $M \ltdyn N : \nat$ means $M$ and $N$ are terms of type $\nat$ such that
+  $M$ is ``syntactically more precise'' than $N$, or equivalently, $N$ is 
+  ``more dynamic'' than $M$. Intuitively this means that $M$ and $N$ are the
+  same except that in some places where $M$ has explicit typing annotations,
+  $N$ has $\dyn$ instead.
+  \item $\cdot \Downarrow$ is a predicate on terms such that $M \Downarrow$ means
+  that $M$ terminates, either with a run-time error or with a value $n$ of type $\nat$.
+  \item $\mho$ is a syntactic representation of a run-time type error, which
+  happens, for example, when a programmer tries to call a function with a value whose type
+  is found to be incompatible with the argument type of the function.
+  \item $v_?$ denotes a result: a term that is either $\mho$ or the syntactic
+  representation of a value $n$ of type $\nat$.
+\end{itemize}
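+As a simple illustration (using the heterogeneous form $M \ltdyn N : A \ltdyn B$
+of term precision at related types, defined in Section \ref{sec:GTLC}),
+annotating the argument of an identity function makes it more precise:
+\[
+  \lambda x{:}\nat.\, x \;\ltdyn\; \lambda x{:}\dyn.\, x
+  \;:\; (\nat \to \nat) \,\ltdyn\, (\dyn \to \dyn)
+\]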
+
+% We also should be able to show that $\mho$, $\zro$, and $\suc\, N$ are not equal.
+
+Our first step toward proving graduality is to formulate a \emph{step-sensitive}, or \emph{intensional},
+gradual lambda calculus, which we call $\intlc$, in which the computation steps taken by a term are made explicit.
+The ``normal'' gradual lambda calculus for which we want to prove graduality will be called the
+\emph{step-insensitive}, or \emph{extensional}, gradual lambda calculus, denoted $\extlc$.
+We will define an erasure function $\erase{\cdot} : \intlc \to \extlc$ which takes a program in the intensional lambda calculus
+and ``forgets'' the syntactic information about the steps to produce a term in the extensional calculus.
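+For instance, if the intensional calculus records steps with an explicit
+$\mathsf{step}$ construct (a hypothetical marker here; the actual syntax of
+$\intlc$ is defined later), then erasure deletes these markers and is
+homomorphic in all other term formers, e.g.:
+\[
+  \erase{\mathsf{step}\, M_i} = \erase{M_i}, \qquad
+  \erase{M_i\, N_i} = \erase{M_i}\, \erase{N_i}
+\]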
+
+Every term $M_e$ in $\extlc$ will have a corresponding program $M_i$ in $\intlc$ such that
+$\erase{M_i} = M_e$. Moreover, we will show that if $M_e \ltdyn_e M_e'$ in the extensional theory,
+then there exist terms $M_i$ and $M_i'$ such that $\erase{M_i}=M_e$, $\erase{M_i'}=M_e'$, and
+$M_i \ltdyn_i M_i'$ in the intensional theory.
+
+We formulate and prove an analogous graduality theorem for the intensional lambda calculus.
+We define an interpretation of the intensional lambda calculus into a model in which we prove
+various results. Using the observation above, given $M_e \ltdyn M_e' : \nat$, we can find
+intensional programs $M_i$ and $M_i'$ that erase to them and are such that $M_i \ltdyn M_i'$.
+We will then apply the intensional graduality theorem to $M_i$ and $M_i'$, and translate the result
+back to $M_e$ and $M_e'$.
+
+\subsection{Contributions}
+Our main contribution is a reusable framework in Guarded Cubical Agda for developing machine-checked proofs
+of graduality of a cast calculus.
+To demonstrate the feasibility and utility of our approach, we have used the framework to prove
+graduality for the simply-typed gradual lambda calculus. Along the way, we have developed an ``intensional'' theory
+of graduality that is of independent interest.
+
+
+\subsection{Overview of Remainder of Paper}
+
+In Section \ref{sec:technical-background}, we provide technical background on gradually typed languages and
+on synthetic guarded domain theory.
+% 
+In Section \ref{sec:GTLC}, we introduce the gradually-typed cast calculus
+for which we will prove graduality. Important here are the notions of syntactic
+type precision and term precision. We introduce both the \emph{extensional} gradual lambda calculus
+($\extlc$) and the \emph{intensional} gradual lambda calculus ($\intlc$).
+%
+In Section \ref{sec:domain-theory}, we define several fundamental constructions
+internal to SGDT that will be needed when we give a denotational semantics to
+our intensional lambda calculus.
+This includes the notion of Predomains as well as the concept
+of EP-Pairs.
+%
+In Section \ref{sec:semantics}, we define the denotational semantics for the
+intensional gradually-typed lambda calculus using the domain theoretic constructions in the
+previous section.
+%
+In Section \ref{sec:graduality}, we outline in more detail the proof of graduality for the
+extensional gradual lambda calculus, which makes use of properties we prove about
+the intensional gradual lambda calculus.
+%
+In Section \ref{sec:discussion}, we discuss the benefits and drawbacks of our approach in comparison
+to the traditional step-indexing approach, as well as possibilities for future work.
+
diff --git a/paper-new/paper.pdf b/paper-new/paper.pdf
index 53a2f7bec2bac6a10908aece99a2e9266a903b06..8da106e491b64786eb6649373af63f726b36f28f 100644
Binary files a/paper-new/paper.pdf and b/paper-new/paper.pdf differ
diff --git a/paper-new/paper.tex b/paper-new/paper.tex
index 10e383e55ccf80eb95fea8287212d765406ed0f0..b00ad7440a994d426290bb938b1294e3b3821908 100644
--- a/paper-new/paper.tex
+++ b/paper-new/paper.tex
@@ -38,8 +38,8 @@
 
 
 \begin{abstract}
-    We develop a denotational semantics for a gradually typed language
-    with effects that is adequate and proves the graduality theorem.
+    We develop a denotational semantics for a simple gradually typed
+    language that is adequate and validates the graduality theorem.
     %
     The denotational semantics is constructed using \emph{synthetic
     guarded domain theory} working in a type theory with a later
@@ -68,957 +68,31 @@
   \end{abstract}
 
   \maketitle
-  
-  \section{Introduction}
-  
-  % gradual typing, graduality
-  \subsection{Gradual Typing and Graduality}
-  Gradual typing allows a language to have both statically-typed and dynamically-typed terms;
-  the statically-typed terms are type checked at compile time, while type checking for the 
-  dynamically-typed terms is deferred to runtime.
-
-  Gradually-typed languages should satisfy two intuitive properties.
-  First, the interaction
-  between the static and dynamic components of the codebase should be safe -- i.e., should
-  preserve the guarantees made by the static types.
-  In other words, in the static portions of the codebase, type soundness must be preserved.
-  Second, gradual langugaes should support the smooth migration from dynamic typing
-  to static typing, in that the programmer can initially leave off the
-  typing annotations and provide them later without altering the meaning of the
-  program.
-
-  % Graduality property
-  Formally speaking, gradually typed languages should satisfy the 
-  \emph{dynamic gradual guarantee}, originally defined by Siek, Vitousek, Cimini,
-  and Boyland \cite{siek_et_al:LIPIcs:2015:5031}.
-  This property is also referred to as \emph{graduality}, by analogy with parametricity.
-  Intuitively, graduality says that in going from a dynamic to static style should not
-  introduce changes in the meaning of the program.
-  More specifically, making the types more precise by adding annotations
-  % (replacing $\dyn$ with a more specific type
-  %such as integers)
-  will either result in the same behavior as the less precise program,
-  or result in a type error.
-
-  
-  \subsection{Current Approaches}
-  Current approaches to proving graduality include the methods of Abstracting Gradual Typing
-  \cite{garcia-clark-tanter2016} and the formal tools of the Gradualier \cite{cimini-siek2016}.
-  These allow the language developer to start with a statically typed langauge and derive a
-  gradually typed language that satisfies the gradual guarantee. The downside is that
-  not all gradually typed languages can be derived from these frameworks, and moreover, in both
-  approaches the semantics is derived from the static type system as opposed to the alternative
-  in which the semantics determines the type checking. Without a clear semantic interpretation of type
-  dynamism, it becomes difficult to extend these techniques to new language features such as polymorphism.
-
-  % Proving graduality via LR
-  New and Ahmed \cite{new-ahmed2018}
-  have developed a semantic approach to specifying type dynamism in terms of
-  \emph{embedding-projection pairs}, which allows for a particularly elegant formulation of the
-  gradual guarantee.
-  Moreover, their axiomatic account of program equivalence allows for type-based reasoning
-  about program equivalences.
-%
-  In this approach, a logical relation is constructed and shown to be sound with respect to
-  the notion of observational approximation that specifies when one program is more precise than another.
-  The downside of this approach is that each new language requires a different logical relation
-  to prove graduality. Furthermore, the logical relations tend to be quite complicated due
-  to a technical requirement known as \emph{step-indexing}.
-  As a result, developments using this approach tend to require vast effort, with the
-  corresponding technical reports having 50+ pages of proofs.
-
-  An alternative approach, which we investigate in this paper, is provided by
-  \emph{synthetic guarded domain theory}.
-  The tecnhiques of synthetic guarded domain theory allow us to internalize the
-  step-index reasoning normally required in logical relations proofs of graduality,
-  ultimately allowing us to specify the logical relation in a manner that looks nearly
-  identical to a typical, non-step-indexed logical relation.
-
-  In this paper, we report on work we have done to mechanize graduality proofs
-  using SGDT techniques in Agda.
-  Our hope in this work is that by mechanizing a graduality proof in a reusable way,
-  we will provide a framework for other language designers to use to more easily and
-  conveniently prove that their languages satsify graduality.
-
-
-  \subsection{Contributions}
-  Our main contribution is a framework in Guarded Cubical Agda for proving graduality of a cast calculus.
-  To demonstrate the feasability and utility of our approach, we have used the framework to prove
-  graduality for the simply-typed gradual lambda calculus.Along the way, we have developed an ``intensional" theory
-  of graduality that is of independent interest.
-
-
-  \subsection{Proving Graduality in SGDT}
-  \textbf{TODO:} This section should probably be moved to after the relevant background has been introduced.
-
-  
-  In this paper, we will utilize SGDT techniques to prove graduality for a particularly
-  simple gradually-typed cast calculus, the gradually-typed lambda calculus. This is just
-  the usual simply-typed lambda calculus with a dynamic type $\dyn$ such that $A \ltdyn\, \dyn$ for all types $A$,
-  as well as upcasts and downcasts between any types $A$ and $B$ such that $A \ltdyn B$.
-  The complete definition will be provided in Section \ref{sec:GTLC}.
-
-  Our main theorem is the following:
-
-  
-  % \begin{theorem}[Graduality]
-  %   If $M \ltdyn N : \nat$, then either:
-  %   \begin{enumerate}
-  %     \item $M = \mho$
-  %     \item $M = N = \zro$
-  %     \item $M = N = \suc n$ for some $n$
-  %     \item $M$ and $N$ diverge
-  %   \end{enumerate}
-  % \end{theorem}
-  
-
-  \begin{theorem}[Graduality]
-    If $\cdot \vdash M \ltdyn N : \nat$, then
-    \begin{enumerate}
-      \item If $N = \mho$, then $M = \mho$
-      \item If $N = `n$, then $M = \mho$ or $M = `n$
-      \item If $M = V$, then $N = V$
-    \end{enumerate}
-  \end{theorem}
-
-  We also should be able to show that $\mho$, $\zro$, and $\suc\, N$ are not equal.
-
-  Our first step toward proving graduality is to formulate an \emph{intensional} gradual lambda calculus,
-  which we call $\intlc$, in which the computation steps taken by a term are made explicit.
-  The ``normal'' graudal lambda calculus for which we want to prove graduality will be called the
-  \emph{extensional} gradual lambda calculus, denoted $\extlc$. We will define an erasure function
-  $\erase{\cdot} : \intlc \to \extlc$ which takes a program in the intensional lambda calculus
-  and ``forgets'' the syntactic information about the steps to produce a term in the extensional calculus.
-
-  Every term $M_e$ in $\extlc$ will have a corresponding program $M_i$ in $\intlc$ such that
-  $\erase{M_i} = M_e$. Moreover, we will show that if $M_e \ltdyn_e M_e'$ in the extensional theory,
-  then there exists terms $M_i$ and $M_i'$ such that $\erase{M_i}=M_e$, $\erase{M_i'}=M_e'$ and
-  $M_i \ltdyn_i M_i'$ in the intensional theory.
-
-  We formulate and prove an analogous graduality theorem for the intensional lambda calculus.
-  We define an interpretation of the intensional lambda calculus into a model in which we prove
-  various results. Using the observation above, given $M_e \ltdyn M_e' : \nat$, we can find
-  intensional programs $M_i$ and $M_i'$ that erase to them and are such that $M_i \ltdyn M_i'$.
-  We will then apply the intensional graduality theorem to $M_i$ and $M_i'$, and translate the result
-  back to $M_e$ and $M_e'$.
-
-
-  \subsection{Overview of Remainder of Paper}
-
-  In Section \ref{sec:technical-background}, we provide technical background on gradually typed languages and
-  on synthetic guarded domain theory.
- % 
-  In Section \ref{sec:GTLC}, we introduce the gradually-typed cast calculus
-  for which we will prove graduality. Important here are the notions of syntactic
-  type precision and term precision. We introduce both the \emph{extensional} gradual lambda calculus
-  ($\extlc$) and the \emph{intensional} gradual lambda calculus ($\intlc$).
-%
-  In Section \ref{sec:domain-theory}, we define several fundamental constructions
-  internal to SGDT that will be needed when we give a denotational semantics to
-  our intensional lambda calculus.
-  This includes the notion of Predomains as well as the concept
-  of EP-Pairs.
-%
-  In Section \ref{sec:semantics}, we define the denotational semantics for the
-  intensional gradually-typed lambda calculus using the domain theoretic constructions in the
-  previous section.
-%
-  In Section \ref{sec:graduality}, we outline in more detail the proof of graduality for the
-  extensional gradual lambda calculus, which will make use of prove properties we prove about
-   the intensional gradual lambda calculus.
-%
-  In Section \ref{sec:discussion}, we discuss the benefits and drawbacks to our approach in comparison
-  to the traditional step-indexing approach, as well as possibilities for future work.
-
-
-\section{Technical Background}\label{sec:technical-background}
-
-\subsection{Gradual Typing}
-
-
-
-% Cast calculi
-In a gradually-typed language, the mixing of static and dynamic code is seamless, in that
-the dynamically typed parts are checked at runtime. This type checking occurs at the elimination
-forms of the language (e.g., pattern matching, field reference, etc.).
-Gradual languages are generally elaborated to a \emph{cast calculus}, in which the dynamic
-type checking is made explicit through the insertion of \emph{type casts}.
-
-% Up and down casts
-In a cast calculus, there is a relation $\ltdyn$ on types such that $A \ltdyn B$ means that
-$A$ is a \emph{more precise} type than $B$.
-There a dynamic type $\dyn$ with the property that $A \ltdyn \dyn$ for all $A$.
-%
-If $A \ltdyn B$, a term $M$ of type $A$ may be \emph{up}casted to $B$, written $\up A B M$,
-and a term $N$ of type $B$ may be \emph{down}casted to $A$, written $\dn A B N$.
-Upcasts always succeed, while downcasts may fail at runtime.
-%
-% Syntactic term precision
-We also have a notion of \emph{syntatcic term precision}.
-If $A \ltdyn B$, and $M$ and $N$ are terms of type $A$ and $B$ respectively, we write
-$M \ltdyn N : A \ltdyn B$ to mean that $M$ is more precise than $N$, i.e., $M$ and $N$
-behave the same except that $M$ may error more.
-
-% Modeling the dynamic type as a recursive sum type?
-% Observational equivalence and approximation?
-
-% synthetic guarded domain theory, denotational semantics therein
-
-\subsection{Difficulties in Prior Semantics}
-
- % TODO move to another section?
-  % Difficulties in prior semantics
-  % - two separate logical relations for expressions
-  % - transitivity
-
-  In this work, we compare our approach to proving graduality to the approach
-  introduced by New and Ahmed \cite{new-ahmed2018} which constructs a step-indexed
-  logical relations model and shows that this model is sound with respect to their
-  notion of contextual error approximation.
-
-  Because the dynamic type is modeled as a non-well-founded
-  recursive type, their logical relation needs to be paramterized by natural numbers
-  to restore well-foundedness. This technique is known as a \emph{step-indexed logical relation}.
-  Reasoning about step-indexed logical relations
-  can be tedious and error-prone, and there are some very subtle aspects that must
-  be taken into account in the proofs. Figure \ref{TODO} shows an example of a step-indexed logical
-  relation for the gradually-typed lambda calculus.
-
-  In particular, the prior approach of New and Ahmed requires two separate logical
-  relations for terms, one in which the steps of the left-hand term are counted,
-  and another in which the steps of the right-hand term are counted.
-  Then two terms $M$ and $N$ are related in the ``combined'' logical relation if they are
-  related in both of the one-sided logical relations. Having two separate logical relations
-  complicates the statement of the lemmas used to prove graduality, becasue any statement that
-  involves a term stepping needs to take into account whether we are counting steps on the left
-  or the right. Some of the differences can be abstracted over, but difficulties arise for properties %/results
-  as fundamental and seemingly straightforward as transitivty.
-
-  Specifically, for transitivity, we would like to say that if $M$ is related to $N$ at
-  index $i$ and $N$ is related to $P$ at index $i$, then $M$ is related to $P$ at $i$.
-  But this does not actually hold: we requrie that one of the two pairs of terms
-  be related ``at infinity'', i.e., that they are related at $i$ for all $i \in \mathbb{N}$.
-  Which pair is required to satisfy this depends on which logical relation we are considering,
-  (i.e., is it counting steps on the left or on the right),
-  and so any argument that uses transitivity needs to consider two cases, one
-  where $M$ and $N$ must be shown to be related for all $i$, and another where $N$ and $P$ must
-  be related for all $i$. This may not even be possible to show in some scenarios!
-
-  % These complications introduced by step-indexing lead one to wonder whether there is a
-  % way of proving graduality without relying on tedious arguments involving natural numbers.
-  % An alternative approach, which we investigate in this paper, is provided by
-  % \emph{synthetic guarded domain theory}, as discussed below.
-  % Synthetic guarded domain theory allows the resulting logical relation to look almost
-  % identical to a typical, non-step-indexed logical relation.
-
-\subsection{Synthetic Guarded Domain Theory}
-One way to avoid the tedious reasoning associated with step-indexing is to work
-axiomatically inside of a logical system that can reason about non-well-founded recursive
-constructions while abstracting away the specific details of step-indexing required
-if we were working analytically.
-The system that proves useful for this purpose is called \emph{synthetic guarded
-domain theory}, or SGDT for short. We provide a brief overview here, but more
-details can be found in \cite{birkedal-mogelberg-schwinghammer-stovring2011}.
-
-SGDT offers a synthetic approach to domain theory that allows for guarded recursion
-to be expressed syntactically via a type constructor $\later : \type \to \type$ 
-(pronounced ``later''). The use of a modality to express guarded recursion
-was introduced by Nakano \cite{Nakano2000}.
-%
-Given a type $A$, the type $\later A$ represents an element of type $A$
-that is available one time step later. There is an operator $\nxt : A \to\, \later A$
-that ``delays'' an element available now to make it available later.
-We will use a tilde to denote a term of type $\later A$, e.g., $\tilde{M}$.
-
-% TODO later is an applicative functor, but not a monad
-
-There is a \emph{guarded fixpoint} operator
-
-\[
-  \fix : \forall T, (\later T \to T) \to T.
-\]
-
-That is, to construct a term of type $T$, it suffices to assume that we have access to
-such a term ``later'' and use that to help us build a term ``now''.
-This operator satisfies the axiom that $\fix f = f (\nxt (\fix f))$.
-In particular, this axiom applies to propositions $P : \texttt{Prop}$; proving
-a statement in this manner is known as $\lob$-induction.
-
-The operators $\later$, $\next$, and $\fix$ described above can be indexed by objects
-called \emph{clocks}. A clock serves as a reference relative to which steps are counted.
-For instance, given a clock $k$ and type $T$, the type $\later^k T$ represents a value of type
-$T$ one unit of time in the future according to clock $k$.
-If we only ever had one clock, then we would not need to bother defining this notion.
-However, the notion of \emph{clock quantification} is crucial for encoding coinductive types using guarded
-recursion, an idea first introduced by Atkey and McBride \cite{atkey-mcbride2013}.
-
-
-% Clocked Cubical Type Theory
-\subsubsection{Ticked Cubical Type Theory}
-% TODO motivation for Clocked Cubical Type Theory, e.g., delayed substitutions?
-
-In Ticked Cubical Type Theory \cite{TODO}, there is an additional sort
-called \emph{ticks}. Given a clock $k$, a tick $t : \tick k$ serves
-as evidence that one unit of time has passed according to the clock $k$.
-The type $\later A$ is represented as a function from ticks of a clock $k$ to $A$.
-The type $A$ is allowed to depend on $t$, in which case we write $\later^k_t A$
-to emphasize the dependence.
-
-% TODO next as a function that ignores its input tick argument?
-
-The rules for tick abstraction and application are similar to those of dependent
-$\Pi$ types. 
-In particular, if we have a term $M$ of type $\later^k A$, and we
-have available in the context a tick $t' : \tick k$, then we can apply the tick to $M$ to get
-a term $M[t'] : A[t'/t]$. We will also write tick application as $M_t$.
-Conversely, if in context $\Gamma, t : \tick k$ the term $M$ has type $A$,
-then in context $\Gamma$ the term $\lambda t.M$ has type $\later^k A$. % TODO dependent version?
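-
-As an example of these rules in action, tick application shows that $\later$ is an
-applicative functor (though not a monad): given $f : \later^k (A \to B)$ and
-$a : \later^k A$, we can define
-\[ f \circledast a \,=\, \lambda t.\, f[t]\, (a[t]) \,:\, \later^k B \]
-(the symbol $\circledast$ is our notation). Under the tick $t$, both $f[t] : A \to B$
-and $a[t] : A$ are available ``now'', so this is ordinary function application.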
-
-The statements in this paper have been formalized in a variant of Agda called
-Guarded Cubical Agda \cite{TODO}, an implementation of Clocked Cubical Type Theory.
-
-
-% TODO axioms (clock irrelevance, tick irrelevance)?
-
-
-\subsubsection{The Topos of Trees Model}
-
-The topos of trees model provides a useful intuition for reasoning in SGDT 
-\cite{birkedal-mogelberg-schwinghammer-stovring2011}.
-This section presupposes knowledge of category theory and can be safely skipped without
-disrupting continuity.
-
-The topos of trees $\calS$ is the presheaf category $\Set^{\omega^o}$. 
-%
-We assume a universe $\calU$ of types, with encodings for operations such
-as sum types (written as $\widehat{+}$). There is also an operator 
-$\laterhat \colon \later \calU \to \calU$ such that 
-$\El(\laterhat(\nxt A)) =\,\, \later \El(A)$, where $\El$ is the type corresponding
-to the code $A$.
-
-An object $X$ is a family $\{X_i\}$ of sets indexed by natural numbers,
-along with restriction maps $r^X_i \colon X_{i+1} \to X_i$ (see Figure \ref{fig:topos-of-trees-object}).
-These should be thought of as ``sets changing over time'', where $X_i$ is the view of the set if we have $i+1$
-time steps to reason about it.
-We can also think of an ongoing computation, with $X_i$ representing the potential results of the computation
-after it has run for $i+1$ steps.
-
-\begin{figure}
-  % https://q.uiver.app/?q=WzAsNSxbMiwwLCJYXzIiXSxbMCwwLCJYXzAiXSxbMSwwLCJYXzEiXSxbMywwLCJYXzMiXSxbNCwwLCJcXGRvdHMiXSxbMiwxLCJyXlhfMCIsMl0sWzAsMiwicl5YXzEiLDJdLFszLDAsInJeWF8yIiwyXSxbNCwzLCJyXlhfMyIsMl1d
-\[\begin{tikzcd}[ampersand replacement=\&]
-	{X_0} \& {X_1} \& {X_2} \& {X_3} \& \dots
-	\arrow["{r^X_0}"', from=1-2, to=1-1]
-	\arrow["{r^X_1}"', from=1-3, to=1-2]
-	\arrow["{r^X_2}"', from=1-4, to=1-3]
-	\arrow["{r^X_3}"', from=1-5, to=1-4]
-\end{tikzcd}\]
-  \caption{An example of an object in the topos of trees.}
-  \label{fig:topos-of-trees-object}
-\end{figure}
-
-A morphism from $\{X_i\}$ to $\{Y_i\}$ is a family of functions $f_i \colon X_i \to Y_i$
-that commute with the restriction maps in the obvious way, that is,
-$f_i \circ r^X_i = r^Y_i \circ f_{i+1}$ (see Figure \ref{fig:topos-of-trees-morphism}).
-
-\begin{figure}
-% https://q.uiver.app/?q=WzAsMTAsWzIsMCwiWF8yIl0sWzAsMCwiWF8wIl0sWzEsMCwiWF8xIl0sWzMsMCwiWF8zIl0sWzQsMCwiXFxkb3RzIl0sWzAsMSwiWV8wIl0sWzEsMSwiWV8xIl0sWzIsMSwiWV8yIl0sWzMsMSwiWV8zIl0sWzQsMSwiXFxkb3RzIl0sWzIsMSwicl5YXzAiLDJdLFswLDIsInJeWF8xIiwyXSxbMywwLCJyXlhfMiIsMl0sWzQsMywicl5YXzMiLDJdLFs2LDUsInJeWV8wIiwyXSxbNyw2LCJyXllfMSIsMl0sWzgsNywicl5ZXzIiLDJdLFs5LDgsInJeWV8zIiwyXSxbMSw1LCJmXzAiLDJdLFsyLDYsImZfMSIsMl0sWzAsNywiZl8yIiwyXSxbMyw4LCJmXzMiLDJdXQ==
-\[\begin{tikzcd}[ampersand replacement=\&]
-	{X_0} \& {X_1} \& {X_2} \& {X_3} \& \dots \\
-	{Y_0} \& {Y_1} \& {Y_2} \& {Y_3} \& \dots
-	\arrow["{r^X_0}"', from=1-2, to=1-1]
-	\arrow["{r^X_1}"', from=1-3, to=1-2]
-	\arrow["{r^X_2}"', from=1-4, to=1-3]
-	\arrow["{r^X_3}"', from=1-5, to=1-4]
-	\arrow["{r^Y_0}"', from=2-2, to=2-1]
-	\arrow["{r^Y_1}"', from=2-3, to=2-2]
-	\arrow["{r^Y_2}"', from=2-4, to=2-3]
-	\arrow["{r^Y_3}"', from=2-5, to=2-4]
-	\arrow["{f_0}"', from=1-1, to=2-1]
-	\arrow["{f_1}"', from=1-2, to=2-2]
-	\arrow["{f_2}"', from=1-3, to=2-3]
-	\arrow["{f_3}"', from=1-4, to=2-4]
-\end{tikzcd}\]
-  \caption{An example of a morphism in the topos of trees.}
-  \label{fig:topos-of-trees-morphism}
-\end{figure}
-
-
-The type operator $\later$ is defined on an object $X$ by
-$(\later X)_0 = 1$ and $(\later X)_{i+1} = X_i$.
-The restriction maps are given by $r^{\later X}_0 =\, !$, where $!$ is the
-unique map into $1$, and $r^{\later X}_{i+1} = r^X_i$.
-The morphism $\nxt^X \colon X \to \later X$ is defined pointwise by
-$\nxt^X_0 =\, !$, and $\nxt^X_{i+1} = r^X_i$. It is easily checked that
-this satisfies the commutativity conditions required of a morphism in $\calS$.
-%
-Given a morphism $f \colon \later X \to X$, i.e., a family of functions
-$f_i \colon (\later X)_i \to X_i$ that commute with the restrictions in the appropriate way,
-we define $\fix(f) \colon 1 \to X$ pointwise
-by $\fix(f)_i = f_{i} \circ \dots \circ f_0$.
-This can be visualized as a diagram in the category of sets as shown in
-Figure \ref{fig:topos-of-trees-fix}.
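-
-As a sanity check, the fixpoint axiom $\fix f = f(\nxt(\fix f))$ can be verified pointwise.
-At stage $0$, since $(\later X)_0 = 1$, we have
-$f_0 \circ \nxt^X_0 \circ \fix(f)_0 = f_0 = \fix(f)_0$.
-At stage $i+1$, using $\nxt^X_{i+1} = r^X_i$ and the fact that $\fix(f)$ commutes with
-the restriction maps,
-\[
-  f_{i+1} \circ \nxt^X_{i+1} \circ \fix(f)_{i+1}
-  \,=\, f_{i+1} \circ r^X_i \circ \fix(f)_{i+1}
-  \,=\, f_{i+1} \circ \fix(f)_i
-  \,=\, \fix(f)_{i+1}.
-\]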
-% Observe that the fixpoint is a \emph{global element} in the topos of trees.
-% Global elements allow us to view the entire computation on a global level.
-
-
-% https://q.uiver.app/?q=WzAsOCxbMSwwLCJYXzAiXSxbMiwwLCJYXzEiXSxbMywwLCJYXzIiXSxbMCwwLCIxIl0sWzAsMSwiWF8wIl0sWzEsMSwiWF8xIl0sWzIsMSwiWF8yIl0sWzMsMSwiWF8zIl0sWzMsNCwiZl8wIl0sWzAsNSwiZl8xIl0sWzEsNiwiZl8yIl0sWzIsNywiZl8zIl0sWzAsMywiISIsMl0sWzEsMCwicl5YXzAiLDJdLFsyLDEsInJeWF8xIiwyXSxbNSw0LCJyXlhfMCJdLFs2LDUsInJeWF8xIl0sWzcsNiwicl5YXzIiXV0=
-% \[\begin{tikzcd}[ampersand replacement=\&]
-% 	1 \& {X_0} \& {X_1} \& {X_2} \\
-% 	{X_0} \& {X_1} \& {X_2} \& {X_3}
-% 	\arrow["{f_0}", from=1-1, to=2-1]
-% 	\arrow["{f_1}", from=1-2, to=2-2]
-% 	\arrow["{f_2}", from=1-3, to=2-3]
-% 	\arrow["{f_3}", from=1-4, to=2-4]
-% 	\arrow["{!}"', from=1-2, to=1-1]
-% 	\arrow["{r^X_0}"', from=1-3, to=1-2]
-% 	\arrow["{r^X_1}"', from=1-4, to=1-3]
-% 	\arrow["{r^X_0}", from=2-2, to=2-1]
-% 	\arrow["{r^X_1}", from=2-3, to=2-2]
-% 	\arrow["{r^X_2}", from=2-4, to=2-3]
-% \end{tikzcd}\]
-
-% \begin{figure}
-%   % https://q.uiver.app/?q=WzAsMTksWzEsMiwiWF8wIl0sWzIsMywiWF8xIl0sWzMsMSwiMSJdLFswLDEsIjEiXSxbMCwyLCJYXzAiXSxbMSwzLCJYXzEiXSxbMSwxLCIxIl0sWzIsMSwiMSJdLFsyLDIsIlhfMCJdLFsyLDQsIlhfMiJdLFszLDIsIlhfMCJdLFszLDMsIlhfMSJdLFszLDQsIlhfMiJdLFszLDUsIlhfMyJdLFs0LDIsIlxcY2RvdHMiXSxbMCwwLCJcXGZpeChmKV8wIl0sWzEsMCwiXFxmaXgoZilfMSJdLFsyLDAsIlxcZml4KGYpXzIiXSxbMywwLCJcXGZpeChmKV8zIl0sWzMsNCwiZl8wIl0sWzAsNSwiZl8xIl0sWzYsMCwiZl8wIl0sWzcsOCwiZl8wIl0sWzgsMSwiZl8xIl0sWzEsOSwiZl8yIl0sWzIsMTAsImZfMCJdLFsxMCwxMSwiZl8xIl0sWzExLDEyLCJmXzIiXSxbMTIsMTMsImZfMyJdXQ==
-%   \[\begin{tikzcd}[ampersand replacement=\&]
-%     {\fix(f)_0} \& {\fix(f)_1} \& {\fix(f)_2} \& {\fix(f)_3} \\
-%     1 \& 1 \& 1 \& 1 \\
-%     {X_0} \& {X_0} \& {X_0} \& {X_0} \& \cdots \\
-%     \& {X_1} \& {X_1} \& {X_1} \\
-%     \&\& {X_2} \& {X_2} \\
-%     \&\&\& {X_3}
-%     \arrow["{f_0}", from=2-1, to=3-1]
-%     \arrow["{f_1}", from=3-2, to=4-2]
-%     \arrow["{f_0}", from=2-2, to=3-2]
-%     \arrow["{f_0}", from=2-3, to=3-3]
-%     \arrow["{f_1}", from=3-3, to=4-3]
-%     \arrow["{f_2}", from=4-3, to=5-3]
-%     \arrow["{f_0}", from=2-4, to=3-4]
-%     \arrow["{f_1}", from=3-4, to=4-4]
-%     \arrow["{f_2}", from=4-4, to=5-4]
-%     \arrow["{f_3}", from=5-4, to=6-4]
-%   \end{tikzcd}\]
-%   \caption{The first few approximations to the guarded fixpoint of $f$.}
-%   \label{fig:topos-of-trees-fix-approx}
-% \end{figure}
-
-
-\begin{figure}
-  % https://q.uiver.app/?q=WzAsNixbMywwLCIxIl0sWzAsMiwiWF8wIl0sWzIsMiwiWF8xIl0sWzQsMiwiWF8yIl0sWzYsMiwiWF8zIl0sWzgsMiwiXFxkb3RzIl0sWzIsMSwicl5YXzAiXSxbNCwzLCJyXlhfMiJdLFswLDEsIlxcZml4KGYpXzAiLDFdLFswLDIsIlxcZml4KGYpXzEiLDFdLFswLDMsIlxcZml4KGYpXzIiLDFdLFswLDQsIlxcZml4KGYpXzMiLDFdLFswLDUsIlxcY2RvdHMiLDMseyJzdHlsZSI6eyJib2R5Ijp7Im5hbWUiOiJub25lIn0sImhlYWQiOnsibmFtZSI6Im5vbmUifX19XSxbMywyLCJyXlhfMSJdLFs1LDQsInJeWF8zIl1d
-  \[\begin{tikzcd}[ampersand replacement=\&,column sep=2.25em]
-    \&\&\& 1 \\
-    \\
-    {X_0} \&\& {X_1} \&\& {X_2} \&\& {X_3} \&\& \dots
-    \arrow["{r^X_0}", from=3-3, to=3-1]
-    \arrow["{r^X_2}", from=3-7, to=3-5]
-    \arrow["{\fix(f)_0}"{description}, from=1-4, to=3-1]
-    \arrow["{\fix(f)_1}"{description}, from=1-4, to=3-3]
-    \arrow["{\fix(f)_2}"{description}, from=1-4, to=3-5]
-    \arrow["{\fix(f)_3}"{description}, from=1-4, to=3-7]
-    \arrow["\cdots"{marking}, draw=none, from=1-4, to=3-9]
-    \arrow["{r^X_1}", from=3-5, to=3-3]
-    \arrow["{r^X_3}", from=3-9, to=3-7]
-  \end{tikzcd}\]
-  \caption{The guarded fixpoint of $f$.}
-  \label{fig:topos-of-trees-fix}
-\end{figure}
-
-% TODO global elements?
-
-
-\section{GTLC}\label{sec:GTLC}
-
-Here we describe the syntax and typing for the gradually-typed lambda calculus.
-We also give the rules for syntactic type and term precision.
-We define two separate calculi: the normal gradually-typed lambda calculus, which we
-call the extensional lambda calculus ($\extlc$), as well as an \emph{intensional} lambda calculus
-($\intlc$) whose syntax makes explicit the steps taken by a program.
-
-\subsection{Syntax}
-
-
-\begin{align*}
-  &\text{Types } A, B := \nat, \, \dyn, \, (A \To B) \\
-  &\text{Terms } M, N := \err_A, \, \zro, \,  \suc\, M, \, (\lda{x}{M}), \, (M\, N), \, (\up{A}{B} M), \, (\dn{A}{B} M) \\
-  &\text{Contexts } \Gamma := \cdot, \, (\Gamma, x : A)
-\end{align*}
-
-The typing rules are as expected, with casts between $A$ and $B$ allowed only when $A \ltdyn B$.
-
-\begin{mathpar}
-  \inferrule*{ }{\hasty \Gamma {\err_A} A}
-
-  \inferrule*{ }{\hasty \Gamma \zro \nat}
-
-  \inferrule {\hasty \Gamma M \nat} {\hasty \Gamma {\suc\, M} \nat}
-
-  \inferrule* 
-    {\hasty {\Gamma, x : A} M B} 
-    {\hasty \Gamma {\lda x M} {A \To B}}
-
-  \inferrule *
-    {\hasty \Gamma M {A \To B} \and \hasty \Gamma N A}
-    {\hasty \Gamma {M \, N} B}
-
-  \inferrule*
-    {A \ltdyn B \and \hasty \Gamma M A}
-    {\hasty \Gamma {\up A B M} B }
-
-  \inferrule*
-    {A \ltdyn B \and \hasty \Gamma M B}
-    {\hasty \Gamma {\dn A B M} A}
-\end{mathpar}
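-
-For example, using the $Inj_\nat$ precision rule, the term $\suc\, \zro$ of type $\nat$
-can be cast into the dynamic type, and a dynamically-typed variable can be cast back down:
-\[
-  \hasty \cdot {\up \nat \dyn (\suc\, \zro)} \dyn
-  \qquad\qquad
-  \hasty {x : \dyn} {\dn \nat \dyn x} \nat
-\]
-The downcast is where run-time type errors arise: intuitively, it errors if $x$ does not
-hold a number.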
-
-The equational theory is also as expected, with $\beta$ and $\eta$ laws for function type.
-
-\subsection{Type Precision}
-
-The rules for type precision are as follows:
-
-\begin{mathpar}
-  \inferrule*[right = \dyn]
-    { }{\dyn \ltdyn \dyn}
-
-  \inferrule*[right = \nat]
-    { }{\nat \ltdyn \nat}
-
-  \inferrule*[right = $Inj_\nat$]
-    { }{\nat \ltdyn \dyn}
-
-  \inferrule*[right = $\To$]
-    {A_i \ltdyn B_i \and A_o \ltdyn B_o }
-    {(A_i \To A_o) \ltdyn (B_i \To B_o)}
-
-  \inferrule*[right=$Inj_{\To}$]
-    {(A_i \To A_o) \ltdyn (\dyn \To \dyn)}
-    {(A_i \To A_o) \ltdyn\, \dyn}
-
-  
-\end{mathpar}
 
-% Type precision derivations
-Note that as a consequence of this presentation of the type precision rules, we
-have that if $A \ltdyn B$, there is a unique precision derivation that witnesses this.
-As in previous work, we go a step further and make these derivations first-class objects,
-known as \emph{type precision derivations}.
-Specifically, for every $A \ltdyn B$, we have a derivation $d : A \ltdyn B$ that is constructed
-using the rules above. For instance, there is a derivation $\dyn : \dyn \ltdyn \dyn$, and a derivation
-$\nat : \nat \ltdyn \nat$, and if $d_i : A_i \ltdyn B_i$ and $d_o : A_o \ltdyn B_o$, then
-there is a derivation $d_i \To d_o : (A_i \To A_o) \ltdyn (B_i \To B_o)$. Likewise for
-the remaining two rules. The benefit to making these derivations explicit in the syntax is that we
-can perform induction over them.
-Note also that for any type $A$, we use $A$ to denote the reflexivity derivation that $A \ltdyn A$,
-i.e., $A : A \ltdyn A$.
-Finally, observe that for type precision derivations $d : A \ltdyn B$ and $d' : B \ltdyn C$, we
-can define their composition $d' \circ d : A \ltdyn C$. This will be used in the statement
-of transitivity of the term precision relation.
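-
-For example, we obtain a derivation witnessing $(\nat \To \nat) \ltdyn \dyn$ by composition.
-Writing $Inj_{\To}(c)$ for the derivation built by the $Inj_{\To}$ rule from a premise
-derivation $c$ (a notation we adopt here for illustration), we have
-\[
-  d = \nat \To Inj_\nat : (\nat \To \nat) \ltdyn (\nat \To \dyn),
-  \qquad
-  d' = Inj_{\To}(Inj_\nat \To \dyn) : (\nat \To \dyn) \ltdyn \dyn,
-\]
-and hence $d' \circ d : (\nat \To \nat) \ltdyn \dyn$.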
+\input{intro}
 
-\subsection{Term Precision}
+\input{technical-background}
 
-We allow for a \emph{heterogeneous} term precision judgment on terms $M$ of type $A$ and $N$ of type $B$,
-provided that $A \ltdyn B$ holds.
-
-% Type precision contexts
-In order to deal with open terms, we will need the notion of a type precision \emph{context}, which we denote
-$\gamlt$. This is similar to a normal context but instead of mapping variables to types,
-it maps variables $x$ to related types $A \ltdyn B$, where $x$ has type $A$ in the left-hand term
-and $B$ in the right-hand term. We may also write $x : d$ where $d : A \ltdyn B$ to indicate this.
-Another way of thinking of type precision contexts is as a zipped pair of contexts $\Gamma^l, \Gamma^r$
-with the same domain such that $\Gamma^l(x) \ltdyn \Gamma^r(x)$ for each $x$ in the domain.
-
-The rules for term precision come in two forms. We first have the \emph{congruence} rules,
-one for each term constructor. These assert that the term constructors respect term precision.
-The congruence rules are as follows:
-
-
-\begin{mathpar}
-  \inferrule*[right = $\err$]
-    { \hasty \Gamma M A }
-    {\etmprec {\gamlt} {\err_A} M A}
-
-  \inferrule*[right = Var]
-    { d : A \ltdyn B \and \gamlt(x) = (A, B) } { \etmprec {\gamlt} x x d  }
-
-  \inferrule*[right = Zro]
-    { } {\etmprec \gamlt \zro \zro \nat }
-
-  \inferrule*[right = Suc]
-    { \etmprec \gamlt M N \nat } {\etmprec \gamlt {\suc\, M} {\suc\, N} \nat}
-
-  \inferrule*[right = Lambda]
-    { d_i : A_i \ltdyn B_i \and 
-      d_o : A_o \ltdyn B_o \and 
-      \etmprec {\gamlt , x : d_i} M N {d_o} } 
-    { \etmprec \gamlt {\lda x M} {\lda x N} (d_i \To d_o) }
-
-  \inferrule*[right = App]
-    { d_i : A_i \ltdyn B_i \and
-      d_o : A_o \ltdyn B_o \\\\
-      \etmprec \gamlt {M} {M'} (d_i \To d_o) \and
-      \etmprec \gamlt {N} {N'} d_i
-    } 
-    { \etmprec \gamlt {M\, N} {M'\, N'} {d_o}}
-\end{mathpar}
-
-We then have additional equational axioms, including transitivity, $\beta$ and $\eta$ laws, and
-rules characterizing upcasts as least upper bounds, and downcasts as greatest lower bounds.
-
-We write $M \equidyn N$ to mean that both $M \ltdyn N$ and $N \ltdyn M$.
-
-% TODO
-\begin{mathpar}
-  \inferrule*[right = Transitivity]
-    {\etmprec \gamlt M N d \and
-     \etmprec \gamlt N P {d'} } 
-    {\etmprec \gamlt M P {d' \circ d} }
-
-  \inferrule*[right = $\beta$]
-    { \hasty {\Gamma, x : A_i} M {A_o} \and
-      \hasty {\Gamma} V {A_i} } 
-    { \etmequidyn \gamlt {(\lda x M)\, V} {M[V/x]} {A_o} }
-
-  \inferrule*[right = $\eta$]
-    { \hasty {\Gamma} {V} {A_i \To A_o} } 
-    { \etmequidyn \gamlt {\lda x (V\, x)} V {A_i \To A_o} }
-
-  \inferrule*[right = UpR]
-    { d : A \ltdyn B \and 
-      \hasty \Gamma M A } 
-    { \etmprec \gamlt M {\up A B M} d  }
-
-  \inferrule*[right = UpL]
-    { d : A \ltdyn B \and
-      \etmprec \gamlt M N d } 
-    { \etmprec \gamlt {\up A B M} N B }
-
-  \inferrule*[right = DnL]
-    { d : A \ltdyn B \and 
-      \hasty \Gamma M B } 
-    { \etmprec \gamlt {\dn A B M} M d }
-
-  \inferrule*[right = DnR]
-    { d : A \ltdyn B \and
-      \etmprec \gamlt M N d } 
-    { \etmprec \gamlt M {\dn A B N} A }
-\end{mathpar}
-
-% TODO explain the least upper bound/greatest lower bound rules
-The rules UpR, UpL, DnL, and DnR were introduced in \cite{TODO} as a means
-of cleanly axiomatizing the intended behavior of casts in a way that
-does not depend on the specific constructs of the language.
-Intuitively, rule UpR says that the upcast of $M$ is an upper bound for $M$
-in that $M$ may error more, and UpL says that the upcast is the \emph{least}
-such upper bound, in that it errors more than any other upper bound for $M$.
-Conversely, DnL says that the downcast of $M$ is a lower bound, and DnR says
-that it is the \emph{greatest} lower bound.
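-
-These rules suffice to derive familiar properties of casts. For instance, given $d : A \ltdyn B$
-and $\hasty \Gamma M A$, rule UpR gives $\etmprec \gamlt M {\up A B M} d$, and applying DnR
-to this yields $\etmprec \gamlt M {\dn A B {(\up A B M)}} A$, i.e., one half of the usual
-retract property of the round-trip through $B$.
-Similarly, upcasts are monotone: from $\etmprec \gamlt M N A$ and UpR for $N$, transitivity
-(using $d \circ A = d$) gives $\etmprec \gamlt M {\up A B N} d$, and then UpL concludes
-$\etmprec \gamlt {\up A B M} {\up A B N} B$.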
-% These rules provide a clean axiomatization of the behavior of casts that doesn't
-% depend on the specific constructs of the language.
-
-
-\subsection{The Intensional Lambda Calculus}
-
-\textbf{TODO: Subject to change!}
-
-Now that we have described the syntax along with the type and term precision judgments for
-$\extlc$, we do the same for $\intlc$. One key difference between the two calculi
-is that we define $\intlc$ using the constructs available to us in the language of synthetic
-guarded domain theory, e.g., we use the $\later$ operator. Whereas when we defined the syntax
-of the extensional lambda calculus we were working in the category of sets, when we define the
-syntax of the intensional lambda calculus we will be working in the topos of trees. 
-% in a presheaf category known as the \emph{topos of trees}.
-
-More specifically, in $\intlc$, we not only have normal terms, but also terms available
-``later'', which we denote by $\tilde{M}$.
-We have a term constructor $\theta_s$ which takes a later-term and turns it into
-a term available now.
-The typing and precision rules for $\theta_s$ involve the $\later$ operator, as shown below.
-Observe that $\theta_s$ is a syntactic analogue to the $\theta$ constructor of the lifting monad
-that we will define in the section on domain theoretic constructions (Section \ref{sec:domain-theory}),
-but it is important to note that $\theta_s (\tilde{M})$ is an actual term in $\intlc$, whereas the
-$\theta$ constructor is a purely semantic construction. These will be connected when we discuss the
-interpretation of $\intlc$ into the semantic model.
-
-% Most aspects of the two calculi are the same, except that in $\intlc$ we add a new term constructor
-% $\theta_s$ which represents a syntactic way of delaying a term by one computation step.
-
-To better understand this situation, note that $\lda{x}{(\cdot)}$ can be viewed as a function
-(at the level of the ambient type theory) from terms of type $B$ under context $\Gamma, x : A$
-to terms of type $A \To B$ under context $\Gamma$. Similarly, we can view $\theta_s(\cdot)$ as a
-function (in the ambient type theory, which is synthetic guarded domain theory) taking terms
-$\tilde{M}$ of type $A$ available \emph{later} to terms of type $A$ available \emph{now}.
-
-Notice that the notion of a ``term available later'' does not need to be part of the syntax
-of the intensional lambda calculus, because this can be expressed in the ambient theory.
-Similarly, we do not need a syntactic form to delay a term, because we can simply use $\nxt$.
-
-
-\begin{align*}
-  % &\text{Later Terms } \tilde{M}, \tilde{N}
-  &\text{Terms } M, N := \err_A, \, \dots, \, \theta_s (\tilde{M}) \\
-\end{align*}
-
-\begin{mathpar}
-  \inferrule*[]
-    { \later_t (\hasty \Gamma {M_t} A) }
-    { \hasty \Gamma {\theta_s M} {A} }
-\end{mathpar}
-
-\begin{mathpar}
-  \inferrule*[]
-    { \later_t (\itmprec \gamlt {M_t} {N_t} d) }
-    { \itmprec \gamlt {\theta_s M} {\theta_s N} d }
-\end{mathpar}
-
-Recall that $\later_t$ is a dependent form of $\later$ where the argument is allowed
-to mention $t$. In particular, here we apply the tick $t$ to the later-terms $M$ and $N$ to get
-``now''-terms $M_t$ and $N_t$.
-
-Formally speaking, the term precision relation must be defined as a guarded fixpoint,
-i.e., we assume that the function is defined later, and use it to construct a definition
-that is defined ``now''. This involves applying a tick to the later-function to shift it
-to a now-function. Indeed, this is what we do in the formal Agda development, but in this
-paper, we will elide these issues as they are not relevant to the main ideas.
-
-% TODO equational theory
-
-
-\section{Domain-Theoretic Constructions}\label{sec:domain-theory}
-
-In this section, we discuss the fundamental objects of the model into which we will embed
-the intensional lambda calculus and inequational theory. It is important to remember that
-the constructions in this section are entirely independent of the syntax described in the
-previous section; the notions defined here exist in their own right as purely mathematical
-constructs. In the next section, we will link the syntax and semantics via an interpretation
-function.
-
-\subsection{The Lift Monad}
-
-When thinking about how to model intensional gradually-typed programs, we must consider
-their possible behaviors. On the one hand, we have \emph{failure}: a program may fail
-at run-time because of a type error. In addition to this, a program may ``think'',
-i.e., take a step of computation. If a program thinks forever, then it never returns a value,
-so ``thinking'' gives us a way of intensionally modelling \emph{partiality}.
-
-With this in mind, we can describe a semantic object that models these behaviors: a monad
-for embedding computations that has cases for failure and ``thinking''.
-Previous work has studied such a construct with the latter effect only, called the lift
-monad \cite{mogelberg-paviotti2016}; here, we add the additional effect of failure.
-
-For a type $A$, we define the \emph{lift monad with failure} $\li A$, which we will just call
-the \emph{lift monad}, as the following datatype:
-
-\begin{align*}
-  \li A &:= \\
-  &\eta \colon A \to \li A \\
-  &\mho \colon \li A \\
-  &\theta \colon \later (\li A) \to \li A
-\end{align*}
-
-Formally, the lift monad $\li A$ is defined as the solution to the guarded recursive type equation
-
-\[ \li A \cong A + 1 + \later \li A. \]
-
-An element of $\li A$ should be viewed as a computation that can either (1) return a value (via $\eta$),
-(2) raise an error and stop (via $\mho$), or (3) think for a step (via $\theta$).
-%
-Notice there is a computation $\fix \theta$ of type $\li A$. This represents a computation
-that thinks forever and never returns a value.
-
-Since we claimed that $\li A$ is a monad, we need to define the monadic operations
-and show that they respect the monadic laws. The return is just $\eta$, and extend
-is defined by guarded recursion, by cases on the input.
-% It is instructive to give at least one example of a use of guarded recursion, so
-% we show below how to define extend:
-% TODO
-%
-%
-Verifying that the monadic laws hold requires \lob-induction and is straightforward.
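-
-To give a flavor of guarded recursion, here is a sketch of the extend operation for
-$f \colon A \to \li B$ (we write $\mathsf{ext}\, f$; the notation is ours and may differ
-from the formal development):
-\[
-  \mathsf{ext}\, f \,=\, \fix \Big( \lambda e.\, \lambda l.\,
-  \begin{cases}
-    f\, x & \text{if } l = \eta\, x \\
-    \mho & \text{if } l = \mho \\
-    \theta\, (\lambda t.\, e_t\, (\tilde{l}_t)) & \text{if } l = \theta\, \tilde{l}
-  \end{cases}
-  \Big)
-\]
-Here $e : \later (\li A \to \li B)$ is the recursive occurrence, available only ``later'';
-in the $\theta$ case we apply the tick $t$ to both $e$ and $\tilde{l}$ under an outer
-$\theta$, which is what makes the definition productive.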
-
-The lift monad has the following universal property. Let $f$ be a function from $A$ to $B$,
-where $B$ is a $\later$-algebra, i.e., there is $\theta_B \colon \later B \to B$.
-Further suppose that $B$ is also an ``error-algebra'', that is, an algebra of the
-constant functor $1 \colon \text{Type} \to \text{Type}$ mapping all types to Unit.
-This latter statement amounts to saying that there is a map $\text{Unit} \to B$, so $B$ has a
-distinguished ``error element'' $\mho_B \colon B$.
-
-Then there is a unique homomorphism of algebras $f' \colon \li A \to B$ such that
-$f' \circ \eta = f$. The function $f'(l)$ is defined via guarded fixpoint by cases on $l$. 
-In the $\mho$ case, we simply return $\mho_B$.
-In the $\theta(\tilde{l})$ case, we return
-
-\[\theta_B (\lambda t . (f'_t \, \tilde{l}_t)). \]
-
-Recalling that $f'$ is a guarded fixpoint, it is available ``later'', and by
-applying the tick we get a function that we can apply ``now''; for the argument,
-we apply the tick to $\tilde{l}$ to get a term of type $\li A$.
-
-
-\subsubsection{Model-Theoretic Description}
-We can describe the lift monad in the topos of trees model as follows.
-
-% TODO
-
-\subsection{Predomains}
-
-The next important construction is that of a \emph{predomain}. A predomain is intended to
-model the notion of error ordering that we want terms to have. Thus, we define a predomain $A$
-as a partially-ordered set, which consists of a type that we denote $\ty{A}$ and a reflexive,
-transitive, and antisymmetric relation $\le_A$ on $\ty{A}$.
-
-For each type we want to represent, we define a predomain for the corresponding semantic
-type. For instance, we define a predomain for natural numbers, a predomain for the
-dynamic type, a predomain for functions, and a predomain for the lift monad. We
-describe each of these below.
-
-We define monotone functions between predomains as expected. Given predomains
-$A$ and $B$, we write $f \colon A \monto B$ to indicate that $f$ is a monotone
-function from $A$ to $B$, i.e., for all $a_1 \le_A a_2$, we have $f(a_1) \le_B f(a_2)$.
-
-% TODO
-\begin{itemize}
-  \item There is a predomain Nat for natural numbers, where the ordering is equality.
-  
-  \item There is a predomain Dyn to represent the dynamic type. The underlying type
-  for this predomain is defined by guarded fixpoint to be such that
-  $\ty{\Dyn} \cong \mathbb{N}\, +\, \later (\ty{\Dyn} \monto \ty{\Dyn})$.
-  This definition is valid because the occurrences of Dyn are guarded by the $\later$.
-  The ordering is defined via guarded recursion by cases on the argument, using the
-  ordering on $\mathbb{N}$ and the ordering on monotone functions described below.
-
-  \item For a predomain $A$, there is a predomain $\li A$ that is the ``lift'' of $A$
-  using the lift monad. We use the same notation for $\li A$ when $A$ is a type
-  and $A$ is a predomain, since the context should make clear which one we are referring to.
-  The underlying type of $\li A$ is simply $\li \ty{A}$, i.e., the lift of the underlying
-  type of $A$.
-  The ordering on $\li A$ is the ``lock-step error-ordering'', which we describe in
-  Section \ref{subsec:lock-step}.
-
-  \item For predomains $A_i$ and $A_o$, we form the predomain of monotone functions
-  from $A_i$ to $A_o$, which we denote by $A_i \To A_o$.
-  The ordering is such that $f$ is below $g$ if for all $a$ in $\ty{A_i}$, we have
-  $f(a)$ is below $g(a)$ in the ordering for $A_o$. 
-\end{itemize}
-
-
-
-\subsection{Lock-step Error Ordering}\label{subsec:lock-step}
-
-As mentioned, the ordering on the lift of a predomain $A$
-is called the \emph{lock-step error-ordering}, the idea being that two computations
-$l$ and $l'$ are related if they are in lock-step with regard to their
-intensional behavior, up to $l$ erroring.
-Formally, we define this ordering as follows:
-
-\begin{itemize}
-  \item $\eta\, x \ltls \eta\, y$ if $x \le_A y$.
-  \item $\mho \ltls l$ for all $l$.
-  \item $\theta\, \tilde{r} \ltls \theta\, \tilde{r'}$ if
-          $\later_t (\tilde{r}_t \ltls \tilde{r'}_t)$.
-\end{itemize}
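-
-As an example of reasoning about this ordering, reflexivity ($l \ltls l$ for all
-$l \colon \li A$, assuming $\le_A$ is reflexive) follows by \lob-induction:
-assuming $\later(\forall l.\, l \ltls l)$, the $\eta$ and $\mho$ cases are immediate,
-and for $l = \theta\, \tilde{r}$ we must show $\later_t (\tilde{r}_t \ltls \tilde{r}_t)$,
-which follows by applying the ``later'' inductive hypothesis, under the tick $t$, to $\tilde{r}_t$.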
-
-We also define a heterogeneous version of this ordering between the lifts of two
-different predomains $A$ and $B$, parameterized by a relation $R$ between $A$ and $B$.
-
-\subsection{Weak Bisimilarity Relation}
-
-We also define another ordering on $\li A$, called ``weak bisimilarity'',
-written $l \bisim l'$.
-Intuitively, we say $l \bisim l'$ if they are equivalent ``up to delays''.
-We introduce the notation $x \sim_A y$ to mean $x \le_A y$ and $y \le_A x$.
-
-The weak bisimilarity relation is defined by guarded fixpoint as follows, where
-$\delta = \theta \circ \nxt$ is the ``delay'' map and $\delta^n$ denotes its $n$-fold iteration:
-
-\begin{align*}
-  &\mho \bisim \mho \\
-%
-  &\eta\, x \bisim \eta\, y \text{ if } 
-    x \sim_A y \\
-%		
-  &\theta\, \tilde{x} \bisim \theta\, \tilde{y} \text{ if } 
-    \later_t (\tilde{x}_t \bisim \tilde{y}_t) \\
-%	
-  &\theta\, \tilde{x} \bisim \mho \text{ if } 
-    \theta\, \tilde{x} = \delta^n(\mho) \text { for some $n$ } \\
-%	
-  &\theta\, \tilde{x} \bisim \eta\, y \text{ if }
-    (\theta\, \tilde{x} = \delta^n(\eta\, x))
-  \text { for some $n$ and $x : \ty{A}$ such that $x \sim_A y$ } \\
-%
-  &\mho \bisim \theta\, \tilde{y} \text { if } 
-    \theta\, \tilde{y} = \delta^n(\mho) \text { for some $n$ } \\
-%	
-  &\eta\, x \bisim \theta\, \tilde{y} \text { if }
-    (\theta\, \tilde{y} = \delta^n (\eta\, y))
-  \text { for some $n$ and $y : \ty{A}$ such that $x \sim_A y$ }
-\end{align*}
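-
-For example, writing $\delta = \theta \circ \nxt$ for the delay map, a single delay is
-invisible to weak bisimilarity:
-\[ \delta(\eta\, x) = \theta (\nxt (\eta\, x)) \bisim \eta\, x, \]
-using the clause relating $\theta\, \tilde{x}$ and $\eta\, y$ with $n = 1$ and $x \sim_A x$.
-By contrast, $\delta(\eta\, x)$ and $\eta\, x$ are unrelated by the lock-step error
-ordering in either direction, illustrating the different intensional granularity of
-the two relations.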
-
-
-\subsection{EP-Pairs}
-
-Here, we adapt the notion of embedding-projection pair as used to study
-gradual typing \cite{new-ahmed2018} to the setting of intensional denotational semantics.
-
-
-
-
-
-\section{Semantics}\label{sec:semantics}
-
-\subsection{Relational Semantics}
-
-\subsubsection{Types as Predomains}
-
-
-\subsubsection{Terms as Monotone Functions}
-
-
-\subsubsection{Type Precision as EP-Pairs}
-
-
-\subsubsection{Term Precision via the Lock-Step Error Ordering}
-% Homogeneous vs heterogeneous term precision
-
-\subsection{Logical Relations Semantics}
-
-
-\section{Graduality}\label{sec:graduality}
-The main theorem we would like to prove is the following:
-
-\begin{theorem}[Graduality]
-  If $\cdot \vdash M \ltdyn N : \nat$, then
-  \begin{enumerate}
-    \item If $N = \mho$, then $M = \mho$
-    \item If $N = `n$, then $M = \mho$ or $M = `n$
-    \item If $M = V$, then $N = V$
-  \end{enumerate}
-\end{theorem}
-
-
-\subsection{Outline}
-
-\subsection{Extensional to Intensional}
-
-\subsection{Intensional Results}
-
-\subsection{Adequacy}
-
-\subsection{Putting it Together}
+\input{syntax}
 
+\input{semantics}
 
+\input{graduality}
 
 \section{Discussion}\label{sec:discussion}
 
-\subsection{Synthetic Ordering}
-
-While the use of synthetic guarded domain theory allows us to very
-conveniently work with non-well-founded recursive constructions while
-abstracting away the precise details of step-indexing, we do work with
-the error ordering in a mostly analytic fashion in that gradual types
-are interpreted as sets equipped with an ordering relation, and all
-terms must be proven to be monotone.
-%
-It is possible that a combination of synthetic guarded domain theory
-with \emph{directed} type theory would allow for a synthetic
-treatment of the error ordering as well.
+% \subsection{Synthetic Ordering}
+
+% While the use of synthetic guarded domain theory allows us to very
+% conveniently work with non-well-founded recursive constructions while
+% abstracting away the precise details of step-indexing, we do work with
+% the error ordering in a mostly analytic fashion in that gradual types
+% are interpreted as sets equipped with an ordering relation, and all
+% terms must be proven to be monotone.
+% %
+% It is possible that a combination of synthetic guarded domain theory
+% with \emph{directed} type theory would allow for a synthetic
+% treatment of the error ordering as well.
 
 
 \bibliographystyle{ACM-Reference-Format}
diff --git a/paper-new/references.bib b/paper-new/references.bib
index 8ad633feadbb1ab32e63fd90fdf3bee93215cb8d..f12e978cfaf7db49daa3350de31f4e480261cbec 100644
--- a/paper-new/references.bib
+++ b/paper-new/references.bib
@@ -340,3 +340,22 @@ year = {2002},
 month = sep,
 pages = {48--59}
 }
+
+@article{new-licata18,
+  author    = {Max S. New and
+               Daniel R. Licata},
+  title     = {Call-by-name Gradual Type Theory},
+  journal   = {CoRR},
+  year      = {2018},
+  url       = {http://arxiv.org/abs/1802.00061},
+}
+
+@phdthesis{levy01:phd,
+author = "Levy, Paul Blain",
+title = "Call-by-Push-Value",
+type = "{Ph.~D.} Dissertation",
+school = "Queen Mary, University of London",
+department = "Department of Computer Science",
+address = "London, UK",
+month = mar,
+year = "2001"}
\ No newline at end of file
diff --git a/paper-new/semantics.tex b/paper-new/semantics.tex
new file mode 100644
index 0000000000000000000000000000000000000000..3fca4e3a766cb161415c9ae26efa89243fb4a5b2
--- /dev/null
+++ b/paper-new/semantics.tex
@@ -0,0 +1,196 @@
+\section{Domain-Theoretic Constructions}\label{sec:domain-theory}
+
+In this section, we discuss the fundamental objects of the model into which we will embed
+the intensional lambda calculus and inequational theory. It is important to remember that
+the constructions in this section are entirely independent of the syntax described in the
+previous section; the notions defined here exist in their own right as purely mathematical
+constructs. In the next section, we will link the syntax and semantics via an interpretation
+function.
+
+\subsection{The Lift Monad}
+
+When thinking about how to model intensional gradually-typed programs, we must consider
+their possible behaviors. On the one hand, we have \emph{failure}: a program may fail
+at run-time because of a type error. In addition to this, a program may ``think'',
+i.e., take a step of computation. If a program thinks forever, then it never returns a value,
+so ``thinking'' gives us a way of intensionally modelling \emph{partiality}.
+
+With this in mind, we can describe a semantic object that models these behaviors: a monad
+for embedding computations that has cases for failure and ``thinking''.
+Previous work has studied such a construct with the latter effect only, called the lift
+monad \cite{mogelberg-paviotti2016}; here, we augment it with the additional effect of failure.
+
+For a type $A$, we define the \emph{lift monad with failure} $\li A$, which we will just call
+the \emph{lift monad}, as the following datatype:
+
+\begin{align*}
+  \li A &:= \\
+  &\eta \colon A \to \li A \\
+  &\mho \colon \li A \\
+  &\theta \colon \later (\li A) \to \li A
+\end{align*}
+
+Unless otherwise mentioned, all constructs involving $\later$ or $\fix$
+are understood to be with respect to a fixed clock $k$. So for the above, we really have for each
+clock $k$ a type $\li^k A$ with respect to that clock.
+
+Formally, the lift monad $\li A$ is defined as the solution to the guarded recursive type equation
+
+\[ \li A \cong A + 1 + \later \li A. \]
+
+An element of $\li A$ should be viewed as a computation that can either (1) return a value (via $\eta$),
+(2) raise an error and stop (via $\mho$), or (3) think for a step (via $\theta$).
+%
+Notice there is a computation $\fix \theta$ of type $\li A$. This represents a computation
+that thinks forever and never returns a value.
+
+Since we claimed that $\li A$ is a monad, we need to define the monadic operations
+and show that they respect the monadic laws. The return is just $\eta$, and extend
+is defined by guarded recursion by cases on the input.
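+Concretely, writing $\text{ext}\, f$ for the extension of $f \colon A \to \li B$
+(a sketch; the notation $\text{ext}$ is ours), the definition by guarded fixpoint is:
+%
+\begin{align*}
+  \text{ext}\, f\, (\eta\, x) &= f\, x \\
+  \text{ext}\, f\, \mho &= \mho \\
+  \text{ext}\, f\, (\theta\, \tilde{l}) &= \theta (\lambda t .\, \text{ext}_t\, f\, \tilde{l}_t)
+\end{align*}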
+% It is instructive to give at least one example of a use of guarded recursion, so
+% we show below how to define extend:
+% TODO
+%
+%
+Verifying that the monadic laws hold requires \lob-induction and is straightforward.
+
+The lift monad has the following universal property. Let $f$ be a function from $A$ to $B$,
+where $B$ is a $\later$-algebra, i.e., there is $\theta_B \colon \later B \to B$.
+Further suppose that $B$ is also an ``error-algebra'', that is, an algebra of the
+constant functor $1 \colon \text{Type} \to \text{Type}$ mapping all types to Unit.
+This latter statement amounts to saying that there is a map $\text{Unit} \to B$, so $B$ has a
+distinguished ``error element'' $\mho_B \colon B$.
+
+Then there is a unique homomorphism of algebras $f' \colon \li A \to B$ such that
+$f' \circ \eta = f$. The function $f'(l)$ is defined via guarded fixpoint by cases on $l$. 
+In the $\mho$ case, we simply return $\mho_B$.
+In the $\theta(\tilde{l})$ case, we will return
+
+\[\theta_B (\lambda t . (f'_t \, \tilde{l}_t)). \]
+
+Recalling that $f'$ is a guarded fixpoint, it is available ``later'', and by
+applying the tick we get a function we can apply ``now''; for the argument,
+we apply the tick to $\tilde{l}$ to get a term of type $\li A$.
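+In summary, a sketch of the full definition is:
+%
+\[
+  f'(\eta\, x) = f\, x \qquad
+  f'(\mho) = \mho_B \qquad
+  f'(\theta\, \tilde{l}) = \theta_B (\lambda t .\, f'_t\, \tilde{l}_t)
+\]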
+
+
+%\subsubsection{Model-Theoretic Description}
+%We can describe the lift monad in the topos of trees model as follows.
+
+
+\subsection{Predomains}
+
+The next important construction is that of a \emph{predomain}. A predomain is intended to
+model the error ordering that we want terms to have. Thus, we define a predomain $A$
+as a partially-ordered set, consisting of a type that we denote $\ty{A}$ and a reflexive,
+transitive, and antisymmetric relation $\le_A$ on $\ty{A}$.
+
+For each type we want to represent, we define a predomain for the corresponding semantic
+type. For instance, we define a predomain for natural numbers, a predomain for the
+dynamic type, a predomain for functions, and a predomain for the lift monad. We
+describe each of these below.
+
+We define monotone functions between predomains as expected. Given predomains
+$A$ and $B$, we write $f \colon A \monto B$ to indicate that $f$ is a monotone
+function from $A$ to $B$, i.e., for all $a_1 \le_A a_2$, we have $f(a_1) \le_B f(a_2)$.
+
+\begin{itemize}
+  \item There is a predomain Nat for natural numbers, where the ordering is equality.
+  
+  \item There is a predomain Dyn to represent the dynamic type. The underlying type
+  for this predomain is defined by guarded fixpoint to be such that
+  $\ty{\Dyn} \cong \mathbb{N}\, +\, \later (\ty{\Dyn} \monto \ty{\Dyn})$.
+  This definition is valid because the occurrences of Dyn are guarded by the $\later$.
+  The ordering is defined via guarded recursion by cases on the argument, using the
+  ordering on $\mathbb{N}$ and the ordering on monotone functions described below.
+
+  \item For a predomain $A$, there is a predomain $\li A$ for the ``lift'' of $A$
+  using the lift monad. We use the same notation for $\li A$ whether $A$ is a type
+  or a predomain, since the context should make clear which one we are referring to.
+  The underlying type of $\li A$ is simply $\li \ty{A}$, i.e., the lift of the underlying
+  type of $A$.
+  The ordering on $\li A$ is the ``step-sensitive error ordering'' which we describe in
+  Section \ref{subsec:lock-step}.
+
+  \item For predomains $A_i$ and $A_o$, we form the predomain of monotone functions
+  from $A_i$ to $A_o$, which we denote by $A_i \To A_o$.
+  The ordering is such that $f$ is below $g$ if for all $a$ in $\ty{A_i}$, we have
+  $f(a)$ is below $g(a)$ in the ordering for $A_o$. 
+\end{itemize}
+
+
+
+\subsection{Step-Sensitive Error Ordering}\label{subsec:lock-step}
+
+As mentioned, the ordering on the lift of a predomain $A$ is called the
+\emph{step-sensitive error ordering} (also called the ``lock-step error ordering''),
+the idea being that two computations $l$ and $l'$ are related if they are in
+lock-step with regard to their intensional behavior, up to $l$ erroring.
+Formally, we define this ordering as follows:
+
+\begin{itemize}
+  \item $\eta\, x \ltls \eta\, y$ if $x \le_A y$.
+  \item $\mho \ltls l$ for all $l$.
+  \item $\theta\, \tilde{r} \ltls \theta\, \tilde{r'}$ if
+        $\later_t (\tilde{r}_t \ltls \tilde{r'}_t)$.
+\end{itemize}
+
+We also define a heterogeneous version of this ordering between the lifts of two
+different predomains $A$ and $B$, parameterized by a relation $R$ between $A$ and $B$.
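+Concretely (a sketch, mirroring the homogeneous definition; the notation $\ltls^R$ for the
+ordering induced by $R$ is ours):
+
+\begin{itemize}
+  \item $\eta\, x \ltls^R \eta\, y$ if $R(x, y)$.
+  \item $\mho \ltls^R l$ for all $l$.
+  \item $\theta\, \tilde{r} \ltls^R \theta\, \tilde{r'}$ if
+        $\later_t (\tilde{r}_t \ltls^R \tilde{r'}_t)$.
+\end{itemize}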
+
+\subsection{Step-Insensitive Relation}
+
+We define another relation on $\li A$, called the ``step-insensitive relation'' or
+``weak bisimilarity'', written $l \bisim l'$.
+Intuitively, we say $l \bisim l'$ if the two computations are equivalent ``up to delays''.
+We introduce the notation $x \sim_A y$ to mean $x \le_A y$ and $y \le_A x$, and we write
+$\delta \colon \li A \to \li A$ for the ``delay'' map $\delta = \theta \circ \nxt$.
+
+The weak bisimilarity relation is defined by guarded fixpoint as follows:
+
+\begin{align*}
+  &\mho \bisim \mho \\
+%
+  &\eta\, x \bisim \eta\, y \text{ if } 
+    x \sim_A y \\
+%		
+  &\theta\, \tilde{x} \bisim \theta\, \tilde{y} \text{ if } 
+    \later_t (\tilde{x}_t \bisim \tilde{y}_t) \\
+%	
+  &\theta\, \tilde{x} \bisim \mho \text{ if } 
+    \theta\, \tilde{x} = \delta^n(\mho) \text { for some $n$ } \\
+%	
+  &\theta\, \tilde{x} \bisim \eta\, y \text{ if }
+    (\theta\, \tilde{x} = \delta^n(\eta\, x))
+  \text { for some $n$ and $x : \ty{A}$ such that $x \sim_A y$ } \\
+%
+  &\mho \bisim \theta\, \tilde{y} \text { if } 
+    \theta\, \tilde{y} = \delta^n(\mho) \text { for some $n$ } \\
+%	
+  &\eta\, x \bisim \theta\, \tilde{y} \text { if }
+    (\theta\, \tilde{y} = \delta^n (\eta\, y))
+  \text { for some $n$ and $y : \ty{A}$ such that $x \sim_A y$ }
+\end{align*}
+
+\subsection{Error Domains}
+
+
+\subsection{Globalization}
+
+Recall that in the above definitions, any occurrences of $\later$ were with
+respect to a fixed clock $k$. Intuitively, this corresponds to a step-indexed set.
+It will be necessary to consider the ``globalization'' of these definitions,
+i.e., the ``global'' behavior of the type over all potential time steps.
+This is accomplished in the type theory by \emph{clock quantification} \cite{atkey-mcbride2013},
+whereby given a type $X$ parameterized by a clock $k$, we consider the type
+$\forall k. X[k]$. This corresponds to leaving the step-indexed world and passing to
+the usual semantics in the category of sets.
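+For instance (a sketch of the notation), the globalization of the lift monad is
+%
+\[ \li^{gl} A := \forall k.\, \li^k A, \]
+%
+and, as in \cite{mogelberg-paviotti2016}, this global lift monad behaves like a coinductive
+delay monad, here augmented with failure.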
+
+
+\section{Semantics}\label{sec:semantics}
+
+
+\subsection{Relational Semantics}
+
+\subsubsection{Term Precision via the Step-Sensitive Error Ordering}
+% Homogeneous vs heterogeneous term precision
+
+% \subsection{Logical Relations Semantics}
\ No newline at end of file
diff --git a/paper-new/syntax.tex b/paper-new/syntax.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d4d156ec31ff4254699010d06be420c89b40fdd3
--- /dev/null
+++ b/paper-new/syntax.tex
@@ -0,0 +1,494 @@
+\section{GTLC}\label{sec:GTLC}
+
+Here we describe the syntax and typing for the gradually-typed lambda calculus.
+We also give the rules for syntactic type and term precision.
+% We define four separate calculi: the normal gradually-typed lambda calculus, which we
+% call the extensional or \emph{step-insensitive} lambda calculus ($\extlc$),
+% as well as an \emph{intensional} lambda calculus
+% ($\intlc$) whose syntax makes explicit the steps taken by a program.
+
+Before diving into the details, let us give a brief overview of what we will define.
+We begin with a gradually-typed lambda calculus $(\extlc)$, which is similar to
+the usual call-by-value gradually-typed lambda calculus, but differs in that it
+is actually a fragment of call-by-push-value specialized such that there are no
+non-trivial computation types. We do this for convenience: either way
+we would need a distinction between values and effectful terms, and the framework
+of call-by-push-value gives us a convenient language in which to define what we need.
+
+We then show that composition of type precision derivations is admissible, as is
+heterogeneous transitivity for term precision, so it will suffice to consider a new
+language ($\extlcm$) in which we don't have composition of type precision derivations
+or heterogeneous transitivity of term precision.
+
+We then observe that all casts, except those between $\nat$ and $\dyn$
+and between $\dyn \ra \dyn$ and $\dyn$, are admissible.
+% (we can define the cast of a function type functorially using the casts for its domain and codomain).
+This means it will be sufficient to consider a new language ($\extlcmm$) in which
+instead of having arbitrary casts, we have injections from $\nat$ and
+$\dyn \ra \dyn$ into $\dyn$, and case inspections from $\dyn$ to $\nat$ and
+$\dyn$ to $\dyn \ra \dyn$.
+
+From here, we define a \emph{step-sensitive} (also called \emph{intensional}) GSTLC,
+so-named because it makes the intensional stepping behavior of programs explicit in the syntax.
+This is accomplished by adding a syntactic ``later'' type and a
+syntactic $\theta$ that maps terms of type later $A$ to terms of type $A$.
+
+\subsection{Syntax}
+
+The language is based on Call-By-Push-Value \cite{levy01:phd}, and as such it has two kinds of types:
+\emph{value types}, representing pure values, and \emph{computation types}, representing
+potentially effectful computations.
+In the language, all computation types have the form $\Ret A$ for some value type $A$.
+Given a value $V$ of type $A$, the term $\ret V$ views $V$ as a term of computation type $\Ret A$.
+Given a term $M$ of computation type $B$, the term $\bind{x}{M}{N}$ should be thought of as
+running $M$ to a value $V$ and then continuing as $N$, with $V$ in place of $x$.
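+For example, using the $\beta$ law for $\Ret{A}$ given in the equational theory below, we have
+%
+\[ \bind{x}{\ret\, \zro}{\ret\, (\suc\, x)} = \ret\, (\suc\, \zro). \]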
+
+
+We also have value contexts and computation contexts, where the latter can be viewed
+as a pair consisting of (1) a stoup $\Sigma$, which is either empty or a hole of type $B$,
+and (2) a (potentially empty) value context $\Gamma$.
+
+\begin{align*}
+  &\text{Value Types } A := \nat \alt \dyn \alt (A \ra A') \\
+  &\text{Computation Types } B := \Ret A \\
+  &\text{Value Contexts } \Gamma := \cdot \alt (\Gamma, x : A) \\
+  &\text{Computation Contexts } \Delta := \cdot \alt \hole B \alt \Delta , x : A \\
+  &\text{Values } V := x \alt \zro \alt \suc\, V \alt \lda{x}{M} \alt \up{A}{B} V \\ 
+  &\text{Terms } M, N := \err_B \alt \ret {V} \alt \bind{x}{M}{N}
+    \alt V_f\, V_x \alt \dn{A}{B} M 
+\end{align*}
+
+The value typing judgment is written $\hasty{\Gamma}{V}{A}$ and 
+the computation typing judgment is written $\hasty{\Delta}{M}{B}$.
+
+We define substitution for value contexts by the following rules:
+
+\begin{mathpar}
+  \inferrule*
+  { \gamma : \Gamma' \to \Gamma \and 
+    \hasty{\Gamma'}{V}{A}}
+  { (\gamma , V/x ) \colon \Gamma' \to \Gamma , x : A }
+
+  \inferrule*
+  {}
+  {\cdot \colon \cdot \to \cdot}
+\end{mathpar}
+
+We define substitution for computation contexts by the following rules:
+
+\begin{mathpar}
+    \inferrule*
+    { \delta : \Delta' \to \Delta \and 
+      \hasty{\Delta'|_V}{V}{A}}
+    { (\delta , V/x ) \colon \Delta' \to \Delta , x : A }
+
+    \inferrule*
+    {}
+    {\cdot \colon \cdot \to \cdot}
+
+    \inferrule*
+    {\hasty{\Delta'}{M}{B}}
+    {M \colon \Delta' \to \hole{B}}
+\end{mathpar}
+
+
+The typing rules are as expected, with a cast between $A$ and $B$ allowed only when $A \ltdyn B$.
+Notice that the upcast of a value is a value, since it always succeeds, while the downcast
+of a value is a computation, since it may fail.
+
+    \begin{mathpar}
+        % Err
+        \inferrule*{ }{\hasty {\cdot, \Gamma} {\err_B} B} 
+      
+        % Zero and suc
+        \inferrule*{ }{\hasty \Gamma \zro \nat}
+      
+        \inferrule*{\hasty \Gamma V \nat} {\hasty \Gamma {\suc\, V} \nat}
+      
+        % Lambda
+        \inferrule* 
+        {\hasty {\cdot, \Gamma, x : A} M {\Ret A'}} 
+        {\hasty \Gamma {\lda x M} {A \ra A'}}
+      
+        % App
+        \inferrule*
+        {\hasty \Gamma {V_f} {A \ra A'} \and \hasty \Gamma {V_x} A}
+        {\hasty {\cdot , \Gamma} {V_f \, V_x} {\Ret A'}}
+
+        % Ret
+        \inferrule*
+        {\hasty \Gamma V A}
+        {\hasty {\cdot , \Gamma} {\ret V} {\Ret A}}
+        % TODO should this involve a Delta?
+
+      
+        % Bind
+        \inferrule*
+        {\hasty \Delta M {\Ret A} \and \hasty{\cdot , \Delta|_V , x : A}{N}{B} } % Need x : A in context
+        {\hasty {\Delta} {\bind{x}{M}{N}} {B}}
+
+        % Upcast
+        \inferrule*
+        {A \ltdyn A' \and \hasty \Gamma V A}
+        {\hasty \Gamma {\up A {A'} V} {A'} }
+    
+        % Downcast
+        \inferrule*
+        {A \ltdyn A' \and \hasty {\Gamma} V {A'}}
+        {\hasty {\cdot, \Gamma} {\dn A {A'} V} {\Ret A}}
+        % TODO are the contexts correct?
+
+        % \inferrule*
+        % {A \ltdyn A' \and \hasty {\Delta|_V} V {A'}}
+        % {\hasty {\Delta} {\dn A {A'} V} {\Ret A}}
+
+    \end{mathpar}
+
+
+In the equational theory, we have $\beta$ and $\eta$ laws for the function type,
+as well as $\beta$ and $\eta$ laws for $\Ret A$.
+
+% TODO do we need to add a substitution rule here?
+\begin{mathpar}
+  % Function Beta and Eta
+  \inferrule*
+  {\hasty {\cdot, \Gamma, x : A} M {\Ret A'} \and \hasty \Gamma V A}
+  {(\lda x M)\, V = M[V/x]}
+
+  \inferrule*
+  {\hasty \Gamma V {A \ra A'}}
+  {\Gamma \vdash V = \lda x {V\, x}}
+
+  % Ret Beta and Eta
+  \inferrule*
+  {}
+  {\bind{x}{\ret\, V}{N} = N[V/x]}
+
+  \inferrule*
+  {\hasty {\hole{\Ret A} , \Gamma} {M} {B}}
+  {\hole{\Ret A}, \Gamma \vdash M = \bind{x}{\bullet}{M[\ret\, x]}}
+
+\end{mathpar}
+
+\subsection{Type Precision}
+
+The type precision rules specify what it means for a type $A$ to be more precise than $A'$.
+We have reflexivity rules for $\dyn$ and $\nat$, as well as rules that $\nat$ is more precise than $\dyn$
+and $\dyn \ra \dyn$ is more precise than $\dyn$.
+We also have a rule for transitivity, allowing composition of type precision derivations,
+as well as a rule for function types stating that given $A_i \ltdyn A'_i$ and $A_o \ltdyn A'_o$, we can prove
+$A_i \ra A_o \ltdyn A'_i \ra A'_o$.
+Finally, we can lift a relation on value types $A \ltdyn A'$ to a relation $\Ret A \ltdyn \Ret A'$ on
+computation types.
+
+\begin{mathpar}
+  \inferrule*[right = \dyn]
+    { }{\dyn \ltdyn\, \dyn}
+
+  \inferrule*[right = \nat]
+    { }{\nat \ltdyn \nat}
+
+  \inferrule*[right = $Inj_\nat$]
+    { }{\nat \ltdyn\, \dyn}
+
+  \inferrule*[right = $\ra$]
+    {A_i \ltdyn A'_i \and A_o \ltdyn A'_o }
+    {(A_i \ra A_o) \ltdyn (A'_i \ra A'_o)}
+
+  \inferrule*[right=$Inj_{\To}$]
+    { }
+    {(\dyn \ra \dyn) \ltdyn\, \dyn}
+
+  \inferrule*[right=ValComp]
+    {A \ltdyn A' \and A' \ltdyn A''}
+    {A \ltdyn A''}
+
+  \inferrule*[right=CompComp]
+    {B \ltdyn B' \and B' \ltdyn B''}
+    {B \ltdyn B''}
+
+  \inferrule*[right=$\Ret{}$]
+    {A \ltdyn A'}
+    {\Ret {A} \ltdyn \Ret {A'}}
+
+    % TODO are there other rules needed for computation types?
+
+  
+\end{mathpar}
+
+% Type precision derivations
+Note that as a consequence of this presentation of the type precision rules, we
+have that if $A \ltdyn A'$, there is a unique precision derivation that witnesses this.
+As in previous work, we go a step further and make these derivations first-class objects,
+known as \emph{type precision derivations}.
+Specifically, for every $A \ltdyn A'$, we have a derivation $c : A \ltdyn A'$ that is constructed
+using the rules above. For instance, there is a derivation $\dyn : \dyn \ltdyn \dyn$, and a derivation
+$\nat : \nat \ltdyn \nat$, and if $c_i : A_i \ltdyn A'_i$ and $c_o : A_o \ltdyn A'_o$, then
+there is a derivation $c_i \ra c_o : (A_i \ra A_o) \ltdyn (A'_i \ra A'_o)$. Likewise for
+the remaining rules. The benefit to making these derivations explicit in the syntax is that we
+can perform induction over them.
+Note also that for any type $A$, we use $A$ to denote the reflexivity derivation that $A \ltdyn A$,
+i.e., $A : A \ltdyn A$.
+Finally, observe that for type precision derivations $c : A \ltdyn A'$ and $c' : A' \ltdyn A''$, we
+can define (via the rule ValComp) their composition $c \relcomp c' : A \ltdyn A''$.
+The same holds for computation type precision derivations.
+This notion will be used below in the statement of transitivity of the term precision relation.
+
+\subsection{Term Precision}
+
+We allow for a \emph{heterogeneous} term precision judgment relating a value $V$ of type
+$A$ and a value $V'$ of type $A'$, provided that $A \ltdyn A'$ holds. Likewise, for computation
+types $B \ltdyn B'$, if $M$ has type $B$ and $M'$ has type $B'$, we can form the judgment
+that $M \ltdyn M'$.
+
+% Type precision contexts
+In order to deal with open terms, we will need the notion of a type precision \emph{context}, which we denote
+$\gamlt$. This is similar to a normal context but instead of mapping variables to types,
+it maps variables $x$ to related types $A \ltdyn B$, where $x$ has type $A$ in the left-hand term
+and $B$ in the right-hand term. We may also write $x : d$ where $d : A \ltdyn B$ to indicate this.
+Another way of thinking of type precision contexts is as a zipped pair of contexts $\Gamma^l, \Gamma^r$
+with the same domain such that $\Gamma^l(x) \ltdyn \Gamma^r(x)$ for each $x$ in the domain.
+Similarly, we have computation type precision contexts $\Delta^\ltdyn$. As with ``normal'' computation
+contexts $\Delta$, these consist of (1) a stoup $\Sigma$, which is either empty or
+has a hole $\hole{d}$ for some computation type precision derivation $d$, and (2) a value type precision context
+$\Gamma^\ltdyn$.
+%
+
+As with type precision derivations, we write $\Gamma$ to mean the context of reflexivity derivations
+$\Gamma(x) \ltdyn \Gamma(x)$. Likewise for computation type precision contexts.
+Furthermore, we write $\gamlt_1 \relcomp \gamlt_2$ to denote the ``composition'' of $\gamlt_1$ and $\gamlt_2$
+--- that is, the precision context whose value at $x$ is the type precision derivation
+$\gamlt_1(x) \relcomp \gamlt_2(x)$. This of course assumes that each of the type precision
+derivations is composable, i.e., that the right-hand side of $\gamlt_1(x)$ is the same as the left-hand side of $\gamlt_2(x)$.
+We define the same for computation type precision contexts $\deltalt_1$ and $\deltalt_2$,
+provided that both the computation type precision contexts have the same ``shape'', which is defined as
+(1) either the stoup is empty in both, or the stoup has a hole in both, say $\hole{d}$ and $\hole{d'}$
+where $d$ and $d'$ are composable, and (2) their value type precision contexts are composable as described above.
+% TODO reflexivity contexts and composition of contexts
+
+The rules for term precision come in two forms. We first have the \emph{congruence} rules,
+one for each term constructor. These assert that the term constructors respect term precision.
+The congruence rules are as follows:
+
+\begin{mathpar}
+
+  \inferrule*[right = Var]
+    { c : A \ltdyn B \and \gamlt(x) = (A, B) } { \etmprec {\gamlt} x x c  }
+
+  \inferrule*[right = Zro]
+    { } {\etmprec \gamlt \zro \zro \nat }
+
+  \inferrule*[right = Suc]
+    { \etmprec \gamlt V {V'} \nat } {\etmprec \gamlt {\suc\, V} {\suc\, V'} \nat}
+
+  \inferrule*[right = Lambda]
+    { c_i : A_i \ltdyn A'_i \and 
+      c_o : A_o \ltdyn A'_o \and 
+      \etmprec {\cdot , \gamlt , x : c_i} {M} {M'} {\Ret c_o} } 
+    { \etmprec \gamlt {\lda x M} {\lda x {M'}} {(c_i \ra c_o)} }
+
+  \inferrule*[right = App]
+    { c_i : A_i \ltdyn A'_i \and
+      c_o : A_o \ltdyn A'_o \\\\
+      \etmprec \gamlt {V_f} {V_f'} {(c_i \ra c_o)} \and
+      \etmprec \gamlt {V_x} {V_x'} {c_i}
+    } 
+    { \etmprec {\cdot , \gamlt} {V_f\, V_x} {V_f'\, V_x'} {\Ret {c_o}}}
+
+  \inferrule*[right = Ret]
+    {\etmprec {\gamlt} V {V'} c}
+    {\etmprec {\cdot , \gamlt} {\ret V} {\ret V'} {\Ret c}}
+
+  \inferrule*[right = Bind]
+    {\etmprec {\deltalt} {M} {M'} {\Ret c} \and 
+     \etmprec {\cdot , \deltalt|_V , x : c} {N} {N'} {d} }
+    {\etmprec {\deltalt} {\bind {x} {M} {N}} {\bind {x} {M'} {N'}} {d}}
+\end{mathpar}
+
+We then have additional equational axioms, including transitivity, $\beta$ and $\eta$ laws, and
+rules characterizing upcasts as least upper bounds, and downcasts as greatest lower bounds.
+
+We write $M \equidyn N$ to mean that both $M \ltdyn N$ and $N \ltdyn M$.
+
+% TODO adapt these for value/computation distinction
+% TODO substitution rule
+\begin{mathpar}
+  \inferrule*[right = $\err$]
+    { \hasty {\Delta} M B }
+    {\etmprec {\Delta} {\err_B} M B}
+
+  \inferrule*[right = Transitivity]
+    { d : B \ltdyn B' \and d' : B' \ltdyn B'' \\\\
+     \etmprec {\deltalt_1} {M} {M'} {d} \and
+     \etmprec {\deltalt_2} {M'} {M''} {d'} } 
+    {\etmprec {\deltalt_1 \relcomp \deltalt_2} {M} {M''} {d \relcomp d'} }
+
+  \inferrule*[right = $\beta$-fun]
+    { \hasty {\cdot, \Gamma, x : A_i} M {\Ret{A_o}} \and
+      \hasty {\Gamma} V {A_i} } 
+    { \etmequidyn {\cdot, \Gamma} {(\lda x M)\, V} {M[V/x]} {\Ret{A_o}} }
+
+  \inferrule*[right = $\eta$-fun]
+    { \hasty {\Gamma} {V} {A_i \ra A_o} } 
+    { \etmequidyn \Gamma {\lda x (V\, x)} V {A_i \ra A_o} }
+
+  \inferrule*[right = $\beta$-ret]
+    {}
+    {\bind{x}{\ret\, V}{N} = N[V/x]}
+
+  \inferrule*[right = $\eta$-ret]
+    {\hasty {\hole{\Ret A} , \Gamma} {M} {B}}
+    {\hole{\Ret A}, \Gamma \vdash M = \bind{x}{\bullet}{M[\ret\, x]}}
+
+  \inferrule*[right = UpR]
+    { d : A \ltdyn B \and 
+      \hasty \Gamma M A } 
+    { \etmprec \gamlt M {\up A B M} d  }
+
+  \inferrule*[right = UpL]
+    { d : A \ltdyn B \and
+      \etmprec \gamlt M N d } 
+    { \etmprec \gamlt {\up A B M} N B }
+
+  \inferrule*[right = DnL]
+    { d : A \ltdyn B \and 
+      \hasty \Gamma M B } 
+    { \etmprec \gamlt {\dn A B M} M d }
+
+  \inferrule*[right = DnR]
+    { d : A \ltdyn B \and
+      \etmprec \gamlt M N d } 
+    { \etmprec \gamlt M {\dn A B N} A }
+\end{mathpar}
+
+% TODO explain the least upper bound/greatest lower bound rules
+The rules UpR, UpL, DnL, and DnR were introduced in \cite{new-licata18} as a means
+of cleanly axiomatizing the intended behavior of casts in a way that
+doesn't depend on the specific constructs of the language.
+Intuitively, rule UpR says that the upcast of $M$ is an upper bound for $M$
+in that $M$ may error more, and UpL says that the upcast is the \emph{least}
+such upper bound, in that it errors more than any other upper bound for $M$.
+Conversely, DnL says that the downcast of $M$ is a lower bound, and DnR says
+that it is the \emph{greatest} lower bound.
+% These rules provide a clean axiomatization of the behavior of casts that doesn't
+% depend on the specific constructs of the language.
+
+
+\subsection{Removing Transitivity}
+
+The first observation we make is that transitivity of type precision, and heterogeneous
+transitivity of term precision, are admissible. That is, consider a related language which
+is the same as $\extlc$ except that we have removed the composition rule for type precision and
+the heterogeneous transitivity rule for term precision. Denote this language by $\extlcm$.
+We claim that in this new language, the rules we removed are derivable from the remaining rules.
+More specifically, consider the following situation in $\extlc$:
+
+
+\begin{figure}
+% https://q.uiver.app/?q=WzAsNixbMCwwLCJcXEdhbW1hIl0sWzAsMSwiXFxHYW1tYSciXSxbMCwyLCJcXEdhbW1hJyciXSxbMiwwLCJBIl0sWzIsMSwiQSciXSxbMiwyLCJBJyciXSxbMiw1LCJNJyciXSxbMSw0LCJNJyJdLFswLDMsIk0iXSxbMyw0LCJcXGx0ZHluX2MiLDMseyJzdHlsZSI6eyJib2R5Ijp7Im5hbWUiOiJub25lIn0sImhlYWQiOnsibmFtZSI6Im5vbmUifX19XSxbNCw1LCJcXGx0ZHluX3tjJ30iLDMseyJzdHlsZSI6eyJib2R5Ijp7Im5hbWUiOiJub25lIn0sImhlYWQiOnsibmFtZSI6Im5vbmUifX19XSxbMCwxLCJcXGx0ZHluX3tcXEdhbW1hX2N9IiwzLHsic3R5bGUiOnsiYm9keSI6eyJuYW1lIjoibm9uZSJ9LCJoZWFkIjp7Im5hbWUiOiJub25lIn19fV0sWzEsMiwiIiwxLHsic3R5bGUiOnsiYm9keSI6eyJuYW1lIjoibm9uZSJ9LCJoZWFkIjp7Im5hbWUiOiJub25lIn19fV0sWzEsMiwiXFxsdGR5bl97XFxHYW1tYV97Yyd9fSIsMyx7InN0eWxlIjp7ImJvZHkiOnsibmFtZSI6Im5vbmUifSwiaGVhZCI6eyJuYW1lIjoibm9uZSJ9fX1dXQ==
+\[\begin{tikzcd}[ampersand replacement=\&]
+	\Gamma \&\& A \\
+	{\Gamma'} \&\& {A'} \\
+	{\Gamma''} \&\& {A''}
+	\arrow["{M''}", from=3-1, to=3-3]
+	\arrow["{M'}", from=2-1, to=2-3]
+	\arrow["M", from=1-1, to=1-3]
+	\arrow["{\ltdyn_c}"{marking}, draw=none, from=1-3, to=2-3]
+	\arrow["{\ltdyn_{c'}}"{marking}, draw=none, from=2-3, to=3-3]
+	\arrow["{\ltdyn_{\Gamma_c}}"{marking}, draw=none, from=1-1, to=2-1]
+	\arrow[draw=none, from=2-1, to=3-1]
+	\arrow["{\ltdyn_{\Gamma_{c'}}}"{marking}, draw=none, from=2-1, to=3-1]
+\end{tikzcd}\]
+    \caption{Heterogeneous transitivity.}
+    \label{fig:transitivity}
+  \end{figure}
+
+
+  \textbf{TODO}
+  
+
+
+\subsection{Removing Casts}
+
+% We now observe that all casts, except those between $\nat$ and $\dyn$
+% and between $\dyn \ra \dyn$ and $\dyn$, are admissible, in the sense that
+% we can start from $\extlcm$, remove casts except the aforementioned ones,
+% and in the resulting language we will be able to derive the other casts.
+
+We now observe that all casts, except those between $\nat$ and $\dyn$
+and between $\dyn \ra \dyn$ and $\dyn$, are admissible.
+Consider a new language ($\extlcmm$) in which
+instead of having arbitrary casts, we have injections from $\nat$ and
+$\dyn \ra \dyn$ into $\dyn$, and case inspections from $\dyn$ to $\nat$ and
+$\dyn$ to $\dyn \ra \dyn$. We claim that in $\extlcmm$, all of the casts
+present in $\extlcm$ are derivable.
+It will suffice to verify that casts for function types are derivable.
+This holds because function casts are constructed inductively from the casts
+for their domain and codomain. The base case is one of the casts involving $\nat$
+or $\dyn \ra \dyn$, which are present in $\extlcmm$ as injections and case inspections.
+
+
+The resulting calculus now lacks transitivity of type precision, heterogeneous
+transitivity of term precision, and arbitrary casts.
+In this setting, rather than type precision, it makes more sense to speak of
+arbitrary monotone relations on types, which we denote by $A \rel A'$.
+We have relations on value types, as well as on computation types.
+
+\begin{align*}
+  &\text{Value Relations } R := \nat \alt \dyn \alt (R \ra R) \\
+  &\text{Computation Relations } S := \Lift R \\
+  &\text{Value Relation Contexts } \Gamma^{\rel} := \cdot \alt \Gamma^{\rel} , A^{\rel} (x_l : A_l , x_r : A_r)\\
+  &\text{Computation Relation Contexts } \Delta^{\rel} := \cdot \alt \hole{B^{\rel}} \alt 
+    \Delta^{\rel} , A^{\rel} (x_l : A_l , x_r : A_r)
+\end{align*}
+
+% TODO rules for relations
+
+
+\subsection{The Step-Sensitive Lambda Calculus}
+
+% \textbf{TODO: Subject to change!}
+
+From here, we define a \emph{step-sensitive} (also called \emph{intensional})
+GSTLC. As mentioned, this language makes the intensional stepping behavior of programs
+explicit in the syntax. We do this by adding a syntactic ``later'' type and a
+syntactic $\theta$ that maps terms of type later $A$ to terms of type $A$.
+
+In the step-sensitive syntax, we add a type constructor for later, as well as a
+syntactic $\theta$ term and a syntactic $\nxt$ term.
+We add rules for each of these, and also modify the rules for inj-arr and
+case-arr, since now the underlying function type is not $\dyn \ra \dyn$ but rather $\later (\dyn \ra \dyn)$.
+
+% TODO show changes
+
+We define an erasure function from step-sensitive syntax to step-insensitive syntax
+by induction on the step-sensitive types and terms.
+The basic idea is that the syntactic type $\later A$ erases to $A$,
+and $\nxt$ and $\theta$ erase to the identity.
+
+
+
+% \begin{align*}
+%   &\text{Terms } M, N := \err_A, \, \dots \, \theta_s (\tilde{M}) \\
+% \end{align*}
+
+% \begin{mathpar}
+%   \inferrule*[]
+%     { \later_t (\hasty \Gamma {M_t} A) }
+%     { \hasty \Gamma {\theta_s M} {A} }
+% \end{mathpar}
+
+% \begin{mathpar}
+%   \inferrule*[]
+%     { \later_t (\itmprec \gamlt {M_t} {N_t} d) }
+%     { \itmprec \gamlt {\theta_s M} {\theta_s N} d }
+% \end{mathpar}
+
+% Recall that $\later_t$ is a dependent form of $\later$ where the argument is allowed
+% to mention $t$. In particular, here we apply the tick $t$ to the later-terms $M$ and $N$ to get 
+% ``now"-terms $M_t$ and $N_t$.
+
+
+\subsection{Quotienting by Syntactic Bisimilarity}
+
diff --git a/paper-new/technical-background.tex b/paper-new/technical-background.tex
new file mode 100644
index 0000000000000000000000000000000000000000..8a9760406d5a0eb9038dcb91c991099123f93a95
--- /dev/null
+++ b/paper-new/technical-background.tex
@@ -0,0 +1,286 @@
+\section{Technical Background}\label{sec:technical-background}
+
+\subsection{Gradual Typing}
+
+% Cast calculi
+In a gradually-typed language, the mixing of static and dynamic code is seamless, in that
+the dynamically typed parts are checked at runtime. This type checking occurs at the elimination
+forms of the language (e.g., pattern matching, field reference, etc.).
+Gradual languages are generally elaborated to a \emph{cast calculus}, in which the dynamic
+type checking is made explicit through the insertion of \emph{type casts}.
+
+% Up and down casts
+In a cast calculus, there is a relation $\ltdyn$ on types such that $A \ltdyn B$ means that
+$A$ is a \emph{more precise} type than $B$.
+There is a dynamic type $\dyn$ with the property that $A \ltdyn\, \dyn$ for all $A$.
+%
+If $A \ltdyn B$, a term $M$ of type $A$ may be \emph{up}casted to $B$, written $\up A B M$,
+and a term $N$ of type $B$ may be \emph{down}casted to $A$, written $\dn A B N$.
+Upcasts always succeed, while downcasts may fail at runtime.
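+For example (a sketch, using the notation of the calculus we define later): upcasting a
+natural number into $\dyn$ and then downcasting back at $\nat$ recovers the value, while
+downcasting it at the function type fails:
+%
+\[ \dn{\nat}{\dyn}{(\up{\nat}{\dyn}{V})} = \ret\, V
+   \qquad
+   \dn{(\dyn \ra \dyn)}{\dyn}{(\up{\nat}{\dyn}{V})} = \err. \]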
+%
+% Syntactic term precision
+We also have a notion of \emph{syntactic term precision}.
+If $A \ltdyn B$, and $M$ and $N$ are terms of type $A$ and $B$ respectively, we write
+$M \ltdyn N : A \ltdyn B$ to mean that $M$ is more precise than $N$, i.e., $M$ and $N$
+behave the same except that $M$ may error more.
+
+% Modeling the dynamic type as a recursive sum type?
+% Observational equivalence and approximation?
+
+% synthetic guarded domain theory, denotational semantics therein
+
+\subsection{Difficulties in Prior Semantics}
+  % Difficulties in prior semantics
+
+  In this work, we compare our approach to proving graduality to the approach
+  introduced by New and Ahmed \cite{new-ahmed2018} which constructs a step-indexed
+  logical relations model and shows that this model is sound with respect to their
+  notion of contextual error approximation.
+
+  Because the dynamic type is modeled as a non-well-founded
+  recursive type, their logical relation needs to be parameterized by natural numbers
+  to restore well-foundedness. This technique is known as a \emph{step-indexed logical relation}.
+  Reasoning about step-indexed logical relations
+  can be tedious and error-prone, and there are some very subtle aspects that must
+  be taken into account in the proofs. Figure \ref{TODO} shows an example of a step-indexed logical
+  relation for the gradually-typed lambda calculus.
+
+  In particular, the prior approach of New and Ahmed requires two separate logical
+  relations for terms, one in which the steps of the left-hand term are counted,
+  and another in which the steps of the right-hand term are counted.
+  Then two terms $M$ and $N$ are related in the ``combined'' logical relation if they are
  related in both of the one-sided logical relations. Having two separate logical relations
  complicates the statement of the lemmas used to prove graduality, because any statement that
  involves a term stepping needs to take into account whether we are counting steps on the left
  or on the right. Some of the differences can be abstracted over, but difficulties arise for properties %/results
  as fundamental and seemingly straightforward as transitivity.
+
+  Specifically, for transitivity, we would like to say that if $M$ is related to $N$ at
+  index $i$ and $N$ is related to $P$ at index $i$, then $M$ is related to $P$ at $i$.
  But this does not actually hold: we require that one of the two pairs of terms
  be related ``at infinity'', i.e., that they are related at $i$ for all $i \in \mathbb{N}$.
  Which pair must satisfy this depends on which logical relation we are considering
  (i.e., whether it counts steps on the left or on the right),
  and so any argument that uses transitivity needs to consider two cases, one
+  where $M$ and $N$ must be shown to be related for all $i$, and another where $N$ and $P$ must
+  be related for all $i$. % This may not even be possible to show in some scenarios!
+
+  % These complications introduced by step-indexing lead one to wonder whether there is a
+  % way of proving graduality without relying on tedious arguments involving natural numbers.
+  % An alternative approach, which we investigate in this paper, is provided by
+  % \emph{synthetic guarded domain theory}, as discussed below.
+  % Synthetic guarded domain theory allows the resulting logical relation to look almost
+  % identical to a typical, non-step-indexed logical relation.
+
+\subsection{Synthetic Guarded Domain Theory}
One way to avoid the tedious reasoning associated with step-indexing is to work
axiomatically inside a logical system that can reason about non-well-founded recursive
constructions while abstracting away the specific details of the step-indexing that would be
required if we were working analytically.
One system that has proven useful for this purpose is \emph{synthetic guarded
domain theory}, or SGDT for short. We provide a brief overview here, but more
+details can be found in \cite{birkedal-mogelberg-schwinghammer-stovring2011}.
+
+SGDT offers a synthetic approach to domain theory that allows for guarded recursion
+to be expressed syntactically via a type constructor $\later : \type \to \type$ 
+(pronounced ``later''). The use of a modality to express guarded recursion
+was introduced by Nakano \cite{Nakano2000}.
+%
+Given a type $A$, the type $\later A$ represents an element of type $A$
+that is available one time step later. There is an operator $\nxt : A \to\, \later A$
+that ``delays'' an element available now to make it available later.
+We will use a tilde to denote a term of type $\later A$, e.g., $\tilde{M}$.
+
+% TODO later is an applicative functor, but not a monad
+
+There is a \emph{guarded fixpoint} operator
+
+\[
+  \fix : \forall T, (\later T \to T) \to T.
+\]
+
+That is, to construct a term of type $T$, it suffices to assume that we have access to
+such a term ``later'' and use that to help us build a term ``now''.
+This operator satisfies the axiom that $\fix f = f (\nxt (\fix f))$.
+In particular, this axiom applies to propositions $P : \texttt{Prop}$; proving
+a statement in this manner is known as $\lob$-induction.
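As a standard example of guarded recursion (our notation $\mathsf{Str}$ is illustrative), guarded streams of natural numbers satisfy the type equation $\mathsf{Str} = \nat \times \later \mathsf{Str}$, and the constant stream of zeros is definable as a guarded fixpoint:
\[
  \mathsf{zeros} = \fix(\lambda \tilde{s}. (0, \tilde{s}))
\]
The recursive occurrence $\tilde{s}$ is guarded by $\later$, so the fixpoint is well-defined.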
+
The operators $\later$, $\nxt$, and $\fix$ described above can be indexed by objects
+called \emph{clocks}. A clock serves as a reference relative to which steps are counted.
+For instance, given a clock $k$ and type $T$, the type $\later^k T$ represents a value of type
+$T$ one unit of time in the future according to clock $k$.
+If we only ever had one clock, then we would not need to bother defining this notion.
+However, the notion of \emph{clock quantification} is crucial for encoding coinductive types using guarded
+recursion, an idea first introduced by Atkey and McBride \cite{atkey-mcbride2013}.
+
+
+% Clocked Cubical Type Theory
+\subsubsection{Ticked Cubical Type Theory}
+% TODO motivation for Clocked Cubical Type Theory, e.g., delayed substitutions?
+
+In Ticked Cubical Type Theory \cite{TODO}, there is an additional sort
+called \emph{ticks}. Given a clock $k$, a tick $t : \tick k$ serves
+as evidence that one unit of time has passed according to the clock $k$.
+The type $\later A$ is represented as a function from ticks of a clock $k$ to $A$.
+The type $A$ is allowed to depend on $t$, in which case we write $\later^k_t A$
+to emphasize the dependence.
+
+% TODO next as a function that ignores its input tick argument?
+
+The rules for tick abstraction and application are similar to those of dependent
+$\Pi$ types. 
In particular, if we have a term $M$ of type $\later^k A$ and a tick $t' : \tick k$ is
available in the context, then we can apply $M$ to the tick to obtain
a term $M[t'] : A[t'/t]$. We will also write tick application as $M_t$.
Conversely, if in context $\Gamma, t : \tick k$ the term $M$ has type $A$,
then in context $\Gamma$ the term $\lambda t.M$ has type $\later A$. % TODO dependent version?
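Schematically, and omitting the freshness side conditions on ticks (precise formulations appear in the cited work), the rules may be sketched as follows:
\[
  \frac{\Gamma, t : \tick k \vdash M : A}
       {\Gamma \vdash \lambda t. M : \later^k_t A}
  \qquad
  \frac{\Gamma, t' : \tick k, \Gamma' \vdash M : \later^k_t A}
       {\Gamma, t' : \tick k, \Gamma' \vdash M[t'] : A[t'/t]}
\]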
+
+The statements in this paper have been formalized in a variant of Agda called
+Guarded Cubical Agda \cite{TODO}, an implementation of Clocked Cubical Type Theory.
+
+
+% TODO axioms (clock irrelevance, tick irrelevance)?
+
+
+\subsubsection{The Topos of Trees Model}
+
+The topos of trees model provides a useful intuition for reasoning in SGDT 
+\cite{birkedal-mogelberg-schwinghammer-stovring2011}.
This section presupposes knowledge of category theory and may be safely skipped without
loss of continuity.
+
+The topos of trees $\calS$ is the presheaf category $\Set^{\omega^o}$. 
+%
+We assume a universe $\calU$ of types, with encodings for operations such
+as sum types (written as $\widehat{+}$). There is also an operator 
+$\laterhat \colon \later \calU \to \calU$ such that 
+$\El(\laterhat(\nxt A)) =\,\, \later \El(A)$, where $\El$ is the type corresponding
+to the code $A$.
+
+An object $X$ is a family $\{X_i\}$ of sets indexed by natural numbers,
+along with restriction maps $r^X_i \colon X_{i+1} \to X_i$ (see Figure \ref{fig:topos-of-trees-object}).
These should be thought of as ``sets changing over time'', where $X_i$ is the view of the set if we have $i+1$
time steps to reason about it.
We can also think of $X$ as an ongoing computation, with $X_i$ representing the potential results
after it has run for $i+1$ steps.
+
+\begin{figure}
+  % https://q.uiver.app/?q=WzAsNSxbMiwwLCJYXzIiXSxbMCwwLCJYXzAiXSxbMSwwLCJYXzEiXSxbMywwLCJYXzMiXSxbNCwwLCJcXGRvdHMiXSxbMiwxLCJyXlhfMCIsMl0sWzAsMiwicl5YXzEiLDJdLFszLDAsInJeWF8yIiwyXSxbNCwzLCJyXlhfMyIsMl1d
+\[\begin{tikzcd}[ampersand replacement=\&]
+	{X_0} \& {X_1} \& {X_2} \& {X_3} \& \dots
+	\arrow["{r^X_0}"', from=1-2, to=1-1]
+	\arrow["{r^X_1}"', from=1-3, to=1-2]
+	\arrow["{r^X_2}"', from=1-4, to=1-3]
+	\arrow["{r^X_3}"', from=1-5, to=1-4]
+\end{tikzcd}\]
+  \caption{An example of an object in the topos of trees.}
+  \label{fig:topos-of-trees-object}
+\end{figure}
+
+A morphism from $\{X_i\}$ to $\{Y_i\}$ is a family of functions $f_i \colon X_i \to Y_i$
+that commute with the restriction maps in the obvious way, that is,
+$f_i \circ r^X_i = r^Y_i \circ f_{i+1}$ (see Figure \ref{fig:topos-of-trees-morphism}).
+
+\begin{figure}
+% https://q.uiver.app/?q=WzAsMTAsWzIsMCwiWF8yIl0sWzAsMCwiWF8wIl0sWzEsMCwiWF8xIl0sWzMsMCwiWF8zIl0sWzQsMCwiXFxkb3RzIl0sWzAsMSwiWV8wIl0sWzEsMSwiWV8xIl0sWzIsMSwiWV8yIl0sWzMsMSwiWV8zIl0sWzQsMSwiXFxkb3RzIl0sWzIsMSwicl5YXzAiLDJdLFswLDIsInJeWF8xIiwyXSxbMywwLCJyXlhfMiIsMl0sWzQsMywicl5YXzMiLDJdLFs2LDUsInJeWV8wIiwyXSxbNyw2LCJyXllfMSIsMl0sWzgsNywicl5ZXzIiLDJdLFs5LDgsInJeWV8zIiwyXSxbMSw1LCJmXzAiLDJdLFsyLDYsImZfMSIsMl0sWzAsNywiZl8yIiwyXSxbMyw4LCJmXzMiLDJdXQ==
+\[\begin{tikzcd}[ampersand replacement=\&]
+	{X_0} \& {X_1} \& {X_2} \& {X_3} \& \dots \\
+	{Y_0} \& {Y_1} \& {Y_2} \& {Y_3} \& \dots
+	\arrow["{r^X_0}"', from=1-2, to=1-1]
+	\arrow["{r^X_1}"', from=1-3, to=1-2]
+	\arrow["{r^X_2}"', from=1-4, to=1-3]
+	\arrow["{r^X_3}"', from=1-5, to=1-4]
+	\arrow["{r^Y_0}"', from=2-2, to=2-1]
+	\arrow["{r^Y_1}"', from=2-3, to=2-2]
+	\arrow["{r^Y_2}"', from=2-4, to=2-3]
+	\arrow["{r^Y_3}"', from=2-5, to=2-4]
+	\arrow["{f_0}"', from=1-1, to=2-1]
+	\arrow["{f_1}"', from=1-2, to=2-2]
+	\arrow["{f_2}"', from=1-3, to=2-3]
+	\arrow["{f_3}"', from=1-4, to=2-4]
+\end{tikzcd}\]
+  \caption{An example of a morphism in the topos of trees.}
+  \label{fig:topos-of-trees-morphism}
+\end{figure}
+
+
+The type operator $\later$ is defined on an object $X$ by
+$(\later X)_0 = 1$ and $(\later X)_{i+1} = X_i$.
+The restriction maps are given by $r^\later_0 =\, !$, where $!$ is the
+unique map into $1$, and $r^\later_{i+1} = r^X_i$.
+The morphism $\nxt^X \colon X \to \later X$ is defined pointwise by
+$\nxt^X_0 =\, !$, and $\nxt^X_{i+1} = r^X_i$. It is easily checked that
+this satisfies the commutativity conditions required of a morphism in $\calS$.
+%
+Given a morphism $f \colon \later X \to X$, i.e., a family of functions
+$f_i \colon (\later X)_i \to X_i$ that commute with the restrictions in the appropriate way,
+we define $\fix(f) \colon 1 \to X$ pointwise
+by $\fix(f)_i = f_{i} \circ \dots \circ f_0$.
+This can be visualized as a diagram in the category of sets as shown in
+Figure \ref{fig:topos-of-trees-fix}.
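To make the pointwise definition concrete, the following sketch (our own illustration, not part of the formal development) computes the finite approximations $\fix(f)_i$ for a morphism $f \colon \later X \to X$ represented as a family of Python functions:

```python
# A set-level sketch of the pointwise guarded fixpoint in the topos of
# trees (illustration only; the names here are our own).
# A morphism f : ▷X -> X is a family of functions f_i : (▷X)_i -> X_i,
# where (▷X)_0 = 1 (modeled as None) and (▷X)_{i+1} = X_i.

def fix_at(f, i):
    """fix(f)_i = f_i ∘ ... ∘ f_0, applied to the unique element of 1."""
    x = f(0)(None)              # f_0 : 1 -> X_0
    for j in range(1, i + 1):
        x = f(j)(x)             # f_j : (▷X)_j = X_{j-1} -> X_j
    return x

# Example: each X_i is the natural numbers, f_0(*) = 0, and
# f_j(x) = x + 1 for j > 0, so the i-th approximation is i.
f = lambda j: (lambda _: 0) if j == 0 else (lambda x: x + 1)

print([fix_at(f, i) for i in range(5)])  # [0, 1, 2, 3, 4]
```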
+% Observe that the fixpoint is a \emph{global element} in the topos of trees.
+% Global elements allow us to view the entire computation on a global level.
+
+
+% https://q.uiver.app/?q=WzAsOCxbMSwwLCJYXzAiXSxbMiwwLCJYXzEiXSxbMywwLCJYXzIiXSxbMCwwLCIxIl0sWzAsMSwiWF8wIl0sWzEsMSwiWF8xIl0sWzIsMSwiWF8yIl0sWzMsMSwiWF8zIl0sWzMsNCwiZl8wIl0sWzAsNSwiZl8xIl0sWzEsNiwiZl8yIl0sWzIsNywiZl8zIl0sWzAsMywiISIsMl0sWzEsMCwicl5YXzAiLDJdLFsyLDEsInJeWF8xIiwyXSxbNSw0LCJyXlhfMCJdLFs2LDUsInJeWF8xIl0sWzcsNiwicl5YXzIiXV0=
+% \[\begin{tikzcd}[ampersand replacement=\&]
+% 	1 \& {X_0} \& {X_1} \& {X_2} \\
+% 	{X_0} \& {X_1} \& {X_2} \& {X_3}
+% 	\arrow["{f_0}", from=1-1, to=2-1]
+% 	\arrow["{f_1}", from=1-2, to=2-2]
+% 	\arrow["{f_2}", from=1-3, to=2-3]
+% 	\arrow["{f_3}", from=1-4, to=2-4]
+% 	\arrow["{!}"', from=1-2, to=1-1]
+% 	\arrow["{r^X_0}"', from=1-3, to=1-2]
+% 	\arrow["{r^X_1}"', from=1-4, to=1-3]
+% 	\arrow["{r^X_0}", from=2-2, to=2-1]
+% 	\arrow["{r^X_1}", from=2-3, to=2-2]
+% 	\arrow["{r^X_2}", from=2-4, to=2-3]
+% \end{tikzcd}\]
+
+% \begin{figure}
+%   % https://q.uiver.app/?q=WzAsMTksWzEsMiwiWF8wIl0sWzIsMywiWF8xIl0sWzMsMSwiMSJdLFswLDEsIjEiXSxbMCwyLCJYXzAiXSxbMSwzLCJYXzEiXSxbMSwxLCIxIl0sWzIsMSwiMSJdLFsyLDIsIlhfMCJdLFsyLDQsIlhfMiJdLFszLDIsIlhfMCJdLFszLDMsIlhfMSJdLFszLDQsIlhfMiJdLFszLDUsIlhfMyJdLFs0LDIsIlxcY2RvdHMiXSxbMCwwLCJcXGZpeChmKV8wIl0sWzEsMCwiXFxmaXgoZilfMSJdLFsyLDAsIlxcZml4KGYpXzIiXSxbMywwLCJcXGZpeChmKV8zIl0sWzMsNCwiZl8wIl0sWzAsNSwiZl8xIl0sWzYsMCwiZl8wIl0sWzcsOCwiZl8wIl0sWzgsMSwiZl8xIl0sWzEsOSwiZl8yIl0sWzIsMTAsImZfMCJdLFsxMCwxMSwiZl8xIl0sWzExLDEyLCJmXzIiXSxbMTIsMTMsImZfMyJdXQ==
+%   \[\begin{tikzcd}[ampersand replacement=\&]
+%     {\fix(f)_0} \& {\fix(f)_1} \& {\fix(f)_2} \& {\fix(f)_3} \\
+%     1 \& 1 \& 1 \& 1 \\
+%     {X_0} \& {X_0} \& {X_0} \& {X_0} \& \cdots \\
+%     \& {X_1} \& {X_1} \& {X_1} \\
+%     \&\& {X_2} \& {X_2} \\
+%     \&\&\& {X_3}
+%     \arrow["{f_0}", from=2-1, to=3-1]
+%     \arrow["{f_1}", from=3-2, to=4-2]
+%     \arrow["{f_0}", from=2-2, to=3-2]
+%     \arrow["{f_0}", from=2-3, to=3-3]
+%     \arrow["{f_1}", from=3-3, to=4-3]
+%     \arrow["{f_2}", from=4-3, to=5-3]
+%     \arrow["{f_0}", from=2-4, to=3-4]
+%     \arrow["{f_1}", from=3-4, to=4-4]
+%     \arrow["{f_2}", from=4-4, to=5-4]
+%     \arrow["{f_3}", from=5-4, to=6-4]
+%   \end{tikzcd}\]
+%   \caption{The first few approximations to the guarded fixpoint of $f$.}
+%   \label{fig:topos-of-trees-fix-approx}
+% \end{figure}
+
+
+\begin{figure}
+  % https://q.uiver.app/?q=WzAsNixbMywwLCIxIl0sWzAsMiwiWF8wIl0sWzIsMiwiWF8xIl0sWzQsMiwiWF8yIl0sWzYsMiwiWF8zIl0sWzgsMiwiXFxkb3RzIl0sWzIsMSwicl5YXzAiXSxbNCwzLCJyXlhfMiJdLFswLDEsIlxcZml4KGYpXzAiLDFdLFswLDIsIlxcZml4KGYpXzEiLDFdLFswLDMsIlxcZml4KGYpXzIiLDFdLFswLDQsIlxcZml4KGYpXzMiLDFdLFswLDUsIlxcY2RvdHMiLDMseyJzdHlsZSI6eyJib2R5Ijp7Im5hbWUiOiJub25lIn0sImhlYWQiOnsibmFtZSI6Im5vbmUifX19XSxbMywyLCJyXlhfMSJdLFs1LDQsInJeWF8zIl1d
+  \[\begin{tikzcd}[ampersand replacement=\&,column sep=2.25em]
+    \&\&\& 1 \\
+    \\
+    {X_0} \&\& {X_1} \&\& {X_2} \&\& {X_3} \&\& \dots
+    \arrow["{r^X_0}", from=3-3, to=3-1]
+    \arrow["{r^X_2}", from=3-7, to=3-5]
+    \arrow["{\fix(f)_0}"{description}, from=1-4, to=3-1]
+    \arrow["{\fix(f)_1}"{description}, from=1-4, to=3-3]
+    \arrow["{\fix(f)_2}"{description}, from=1-4, to=3-5]
+    \arrow["{\fix(f)_3}"{description}, from=1-4, to=3-7]
+    \arrow["\cdots"{marking}, draw=none, from=1-4, to=3-9]
+    \arrow["{r^X_1}", from=3-5, to=3-3]
+    \arrow["{r^X_3}", from=3-9, to=3-7]
+  \end{tikzcd}\]
+  \caption{The guarded fixpoint of $f$.}
+  \label{fig:topos-of-trees-fix}
+\end{figure}
+
+% TODO global elements?