
(1)

Local reasoning for

robust observational equivalence

Koko Muroya

(RIMS, Kyoto University)

joint work with

Dan R. Ghica & Todd Waugh Ambridge (University of Birmingham)

(2)

Overview

1. Motivation: robustness of observational equivalence

2. Hypernet semantics

3. Locality & step-wise reasoning

4. Example: cbv linear β-law

(3)

Overview

1. Motivation: robustness of observational equivalence

2. Hypernet semantics

3. Locality & step-wise reasoning

4. Example: cbv linear β-law

(4)

Observational equivalence on program fragments

“Do two program fragments behave the same?”

“Is it safe to replace a program fragment with another?”

let x = 100 in let y = 50 in y + y

(5)

Observational equivalence on program fragments

“Do two program fragments behave the same?”

“Is it safe to replace a program fragment with another?”

let x = 100 in let y = 50 in y + y

(6)

Observational equivalence on program fragments

“Do two program fragments behave the same?”

“Is it safe to replace a program fragment with another?”

let x = 100 in let y = 50 in y + y

let y = 50 in y + y

(7)

Observational equivalence on program fragments

“Do two program fragments behave the same?”

“Is it safe to replace a program fragment with another?”

let x = 100 in let y = 50 in y + y

[figure: the fragment let y = 50 in y + y compared against 50 + 50]

(8)

Observational equivalence on program fragments

“Do two program fragments behave the same?”

“Is it safe to replace a program fragment with another?”

[figure: the programs let x = 100 in let y = 50 in y + y and let x = 100 in 50 + 50, with the fragments let y = 50 in y + y and 50 + 50 highlighted for comparison]

(9)

Observational equivalence on program fragments

“Do two program fragments behave the same?”

“Is it safe to replace a program fragment with another?”

[figure: the programs let x = 100 in let y = 50 in y + y and let x = 100 in 50 + 50, with the fragments let y = 50 in y + y and 50 + 50 highlighted for comparison; each comparison is marked with a “?”]

(10)

Observational equivalence on program fragments

“Do two program fragments behave the same?”

“Is it safe to replace a program fragment with another?”

If YES (“Two program fragments are observationally equal.”):

• justification of compiler optimisation

• program verification

[figure: the same comparisons as on the previous slide, each marked with a “?”]

(11)

Observational equivalence on program fragments

“Do two program fragments behave the same?”

(12)

Observational equivalence on program fragments

“Do two program fragments behave the same?”

“What program fragments behave the same?”

the beta-law: (λx.M) N ≃ M[x := N]

a parametricity law: let a = ref 1 in λx.(a := 2; !a) ≃ λx.2

(13)

Robustness of observational equivalence

“Do two program fragments behave the same?”

“When do program fragments behave the same?”

Does the beta-law always hold?

the beta-law: (λx.M) N ≃ M[x := N]

(14)

Robustness of observational equivalence

“Do two program fragments behave the same?”

“When do program fragments behave the same?”

Does the beta-law always hold?

No, it’s violated if program contexts use OCaml’s Gc module:

the beta-law: (λx.M) N ≃ M[x := N]

(λx.0) 100 ≄ 0 — distinguishable by contexts using Gc, for memory management

(15)

Robustness of observational equivalence

“Do two program fragments behave the same?”

“When do program fragments behave the same?”

Does the beta-law always hold?

No, it’s violated if program contexts use OCaml’s Gc module:

the beta-law: (λx.M) N ≃ M[x := N]

(λx.0) 100 ≄ 0 — distinguishable by contexts using Gc, for memory management
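The violation above comes from contexts that can observe memory behaviour. As a rough, hedged illustration in actual OCaml (outside the formal setting of the talk): a context can query the Gc module around a fragment. Whether these two particular fragments are distinguished in practice depends on the compiler and runtime; the sketch only shows that Gc makes allocation observable to contexts.

```ocaml
(* Hedged illustration: a context that observes memory behaviour through
   OCaml's Gc module. It does not reproduce the talk's formal counterexample;
   it only shows that Gc exposes allocation to program contexts. *)
let allocated_words (thunk : unit -> int) : float * int =
  let before = Gc.minor_words () in
  let result = thunk () in
  let after = Gc.minor_words () in
  (after -. before, result)

let () =
  (* fragment 1: (fun x -> 0) 100;  fragment 2: 0 *)
  let w1, r1 = allocated_words (fun () -> (fun _x -> 0) 100) in
  let w2, r2 = allocated_words (fun () -> 0) in
  Printf.printf "results %d %d; words allocated %.0f vs %.0f\n" r1 r2 w1 w2
```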

(16)

Robustness of observational equivalence

“Do two program fragments behave the same?”

“What fragments, in which contexts, behave the same?”

(17)

Robustness of observational equivalence

“Do two program fragments behave the same?”

“What fragments, in which contexts, behave the same?”

… in the presence of (arbitrary) language features:

pure vs. effectful (e.g. 50 + 50 vs. ref 1)
encoded vs. native (e.g. State vs. ref)
extrinsics (e.g. Gc.stat, foreign language calls)

(18)

Robustness of observational equivalence

“Do two program fragments behave the same?”

“What fragments, in which contexts, behave the same?”

… in the presence of (arbitrary) language features

Our (big) goal:

analysing robustness/fragility of observational equivalence, using a general framework

(19)

Robustness of observational equivalence

“Do two program fragments behave the same?”

“What fragments, in which contexts, behave the same?”

… in the presence of (arbitrary) language features

Our result:

analysing robustness/fragility of observational equivalence, using a graphical framework

• hypernet semantics: a graphical abstract machine

• local & step-wise reasoning to prove observational equivalence, with the concept of robustness

(20)

Overview

1. Motivation: robustness of observational equivalence

2. Hypernet semantics

3. Locality & step-wise reasoning

4. Example: cbv linear β-law

(21)

Hypernet semantics

● program execution by a graphical abstract machine

● programs as

certain hierarchical hypergraphs (“hypernets”)

● execution as

step-by-step, strategic update of hypernets
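To make “certain hierarchical hypergraphs” slightly more concrete, here is a minimal OCaml sketch of one possible representation. The constructors and fields are illustrative assumptions for this summary, not the paper's definitions.

```ocaml
(* Illustrative sketch of hypernets: hypergraphs whose hyperedges may be
   labelled with operations, constants, atoms, or further hypernets (boxes). *)
type label =
  | Op of string        (* operations such as "+", "*", "@", "λ", "if" *)
  | Const of int        (* numerals *)
  | Atom of string      (* reference names / locations *)
  | Box of hypernet     (* hierarchical hyperedge: deferred computation *)

and hyperedge = {
  lbl     : label;
  sources : int list;   (* node ids the edge consumes *)
  targets : int list;   (* node ids the edge produces *)
}

and hypernet = {
  nodes   : int list;   (* vertex ids *)
  edges   : hyperedge list;
  inputs  : int list;   (* interface of the graph *)
  outputs : int list;
}
```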

(22)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

(1 + 2) * 3

[figure: the hypernet of (1 + 2) * 3, with hyperedges 1, 2, 3, +, *]

(23)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

(1 + 2) * 3

[figure: the same hypernet, with its nodes highlighted]

(24)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

(1 + 2) * 3

[figure: the same hypernet, with its hyperedges highlighted]

(25)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

(x + y) * z and (i + j) * k

[figure: both programs are represented by the same hypernet, with hyperedges + and *]

(26)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

(x + y) * z and (i + j) * k

[figure: the same hypernet, with its inputs labelled x, y, z as in the first program]

(27)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

(x + y) * z and (i + j) * k

[figure: the same hypernet, with its inputs labelled i, j, k as in the second program]

(28)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

x + x

[figure: its hypernet, a single + hyperedge]

(29)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

x + x

[figure: its hypernet, a single + hyperedge whose two inputs share one node — sharing]
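In the illustrative representation sketched earlier, sharing is just fan-in: the hypernet for x + x can use the same source node twice. (Again, names and layout are assumptions of the sketch, not the paper's encoding.)

```ocaml
(* x + x: a single + hyperedge whose two sources are the same node,
   so the shared input x is represented only once. *)
let x_plus_x : hypernet =
  { nodes   = [0; 1];
    edges   = [ { lbl = Op "+"; sources = [0; 0]; targets = [1] } ];
    inputs  = [0];                 (* the shared free variable x *)
    outputs = [1] }
```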

(30)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

if x > 0 then 3 else 4 + 5

[figure: the hypernet of the conditional, with hyperedges >, if, +, 0, 3, 4, 5]

(31)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

if x > 0 then 3 else 4 + 5

[figure: the same hypernet, with each branch of the conditional boxed]

hierarchical hyperedge (a hyperedge labelled with a hypergraph)

…representing deferred computation

(32)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

(λx. x + x) 3

[figure: its hypernet, with hyperedges @, λ, +, 3; the function body is inside the λ hyperedge]

(33)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

new a = 1 in a := 2; !a

[figure: its hypernet, with hyperedges ;, :=, !, 2, 1]

(34)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

new a = 1 in a := 2; !a

[figure: the same hypernet; the reference a appears as an atom]

(35)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

program   |   hypernet (hierarchical hypergraph)

new a = 1 in a := 2; !a

[figure: the same hypernet, highlighting the atom occurrences, i.e. the uses of a]

(36)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

(37)

Programs, graphically as hypernets

Idea: abstracting away variable names, and more…

• making blocks of deferred computation explicit

• accommodating atoms (reference names/locations)
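Using the same illustrative types as before, the conditional from the earlier slide can be written out with both branches kept as boxes, making the blocks of deferred computation explicit; references would enter through the Atom label. Node ids and edge orientation are arbitrary choices of the sketch.

```ocaml
(* Sketch of the hypernet for "if x > 0 then 3 else 4 + 5": each branch is a
   Box (deferred computation), opened only once the condition is known. *)
let then_branch : hypernet =
  { nodes = [0];
    edges = [ { lbl = Const 3; sources = []; targets = [0] } ];
    inputs = []; outputs = [0] }

let else_branch : hypernet =
  { nodes = [0; 1; 2];
    edges = [ { lbl = Const 4; sources = [];     targets = [0] };
              { lbl = Const 5; sources = [];     targets = [1] };
              { lbl = Op "+";  sources = [0; 1]; targets = [2] } ];
    inputs = []; outputs = [2] }

let conditional : hypernet =
  { nodes = [0; 1; 2; 3; 4; 5];
    edges = [ { lbl = Const 0;         sources = [];        targets = [1] };
              { lbl = Op ">";          sources = [0; 1];    targets = [2] };
              { lbl = Box then_branch; sources = [];        targets = [3] };
              { lbl = Box else_branch; sources = [];        targets = [4] };
              { lbl = Op "if";         sources = [2; 3; 4]; targets = [5] } ];
    inputs  = [0];                 (* the free variable x *)
    outputs = [5] }
```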

(38)

Program execution, graphically

Idea: updating hypernets step-by-step

(39)

Program execution, graphically

Idea: updating hypernets step-by-step

[figure: step-by-step rewriting of the hypernet of (1 + 2) + (3 + 4): the sub-graph 1 + 2 is rewritten to 3, then 3 + 4 to 7, then 3 + 7 to 10]
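Read back as terms, the rewriting in this figure (animated in the slides that follow) computes:

\[
(1+2)+(3+4) \;\to\; 3+(3+4) \;\to\; 3+7 \;\to\; 10.
\]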

(40)

Program execution, graphically

Idea: updating hypernets step-by-step

[figure: the hypernet of let x = 3 in x + x — a + hyperedge sharing the node 3 — is rewritten to that of 3 + 3, then to 6]

(41)

Program execution, graphically

Idea: updating hypernets step-by-step

[figure: the hypernet of (λx. x + x) 3 is rewritten to that of let x = 3 in x + x, then 3 + 3, then 6]

(42)

Program execution, graphically

Idea: updating hypernets step-by-step

   … and strategically, using focus with three modes:

• depth-first redex search

• backtracking

• triggering update of hypernet


(43)

Program execution, graphically

Idea: updating hypernets step-by-step    … and strategically, using focus

[figure sequence, slides (43)–(63): execution of the hypernet of (1 + 2) + (3 + 4); the focus alternates between depth-first redex search, backtracking, and triggering updates of the hypernet, rewriting 1 + 2 to 3, then 3 + 4 to 7, then 3 + 7 to 10]

(64)

Program execution, graphically

Idea: updating hypernets step-by-step    … and strategically, using focus

[figure sequence, slides (64)–(80): execution of the hypernet of (λx. x + x) 3 in the same way; the focus searches, backtracks, and triggers updates, rewriting the β-redex first and then the addition 3 + 3, ending with the result 6]

(81)

Hypernet semantics

● program execution by a graphical abstract machine

● programs as

certain hierarchical hypergraphs (“hypernets”)

● execution as

step-by-step, strategic update of hypernets

(82)

Hypernet semantics

● program execution by a graphical abstract machine

● programs as

certain hierarchical hypergraphs (“hypernets”)

● execution as

step-by-step, strategic update of hypernets

● state = hypernet with focus

● transition = move of focus, or update of hypernet

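To make this machine shape concrete, here is a heavily simplified OCaml sketch of a focused machine for the arithmetic fragment only: the state is a structure with a focus, and every transition either moves the focus or performs a local update. A tree with a zipper stands in for hypernets, and the rule set is an illustrative assumption, not the paper's machine.

```ocaml
(* Toy focused machine: state = (sub)term in focus + one-hole context + mode;
   transitions either move the focus (search/backtrack) or rewrite locally. *)
type exp  = Num of int | Add of exp * exp
type ctx  = Top | AddL of ctx * exp | AddR of int * ctx
type mode = Search | Backtrack of int          (* value found so far *)
type state = { focus : exp; ctx : ctx; mode : mode }

let step (s : state) : state option =
  match s.mode, s.focus, s.ctx with
  (* depth-first redex search *)
  | Search, Num n, _            -> Some { s with mode = Backtrack n }
  | Search, Add (l, r), c       -> Some { focus = l; ctx = AddL (c, r); mode = Search }
  (* backtracking; the AddR case also triggers the local update l + v *)
  | Backtrack v, _, AddL (c, r) -> Some { focus = r; ctx = AddR (v, c); mode = Search }
  | Backtrack v, _, AddR (l, c) -> Some { focus = Num (l + v); ctx = c; mode = Search }
  | Backtrack _, _, Top         -> None        (* final state *)

let rec run (s : state) : int =
  match step s with
  | Some s' -> run s'
  | None    -> (match s.mode with Backtrack v -> v | Search -> assert false)

let () =
  (* the running example (1 + 2) + (3 + 4) from the earlier slides *)
  let e = Add (Add (Num 1, Num 2), Add (Num 3, Num 4)) in
  Printf.printf "%d\n" (run { focus = e; ctx = Top; mode = Search })
```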

(83)

Overview

1. Motivation: robustness of observational equivalence

2. Hypernet semantics

3. Locality & step-wise reasoning

4. Example: cbv linear β-law

(84)

Proof of observational equivalence, using locality

“Do two program fragments behave the same?”

(85)

Proof of observational equivalence, using locality

“Do two program fragments behave the same?”

“Do two sub-graphs behave the same in hypernet semantics?”

(86)

Proof of observational equivalence, using locality

“Do two program fragments behave the same?”

“Do two sub-graphs behave the same in hypernet semantics?”

Sub-graphs can represent parts of a program that are not necessarily well-formed,

e.g. parts relevant to a certain reference:

… new a = 1 in … (λx. a := 2; !a) … (λx. a := 2; !a) …

(87)

Proof of observational equivalence, using locality

“Do two program fragments behave the same?”

“Do two sub-graphs behave the same in hypernet semantics?”

Sub-graphs can represent parts of a program that are not necessarily well-formed,

e.g. parts relevant to a certain reference:

… new a = 1 in … (λx. a := 2; !a) … (λx. a := 2; !a) …

(88)

Proof of observational equivalence, using locality

“Do two program fragments behave the same?”

“Do two sub-graphs behave the same in hypernet semantics?”

Sub-graphs can represent parts of a program that are not necessarily well-formed,

e.g. parts relevant to a certain reference:

… new a = 1 in … (λx. a := 2; !a) … (λx. a := 2; !a) …

Idea of locality:

analysing behaviour of program fragments, by tracing sub-graphs during execution

(89)

Proof of observational equivalence, using locality

Claim: “Behaviour of a sub-graph G can be matched by behaviour of a sub-graph H.”

(90)

Proof of observational equivalence, using locality

Claim: “Behaviour of a sub-graph G can be matched by behaviour of a sub-graph H.”

For any context C: if execution of C[G] reaches a final state P, then execution of C[H] reaches a final state Q.
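Written as a formula — a paraphrase, taking termination of the machine (written ⇓) as the observation:

\[
G \preceq H \;\iff\; \forall C.\ C[G]\Downarrow \Rightarrow C[H]\Downarrow,
\qquad
G \simeq H \;\iff\; G \preceq H \,\wedge\, H \preceq G.
\]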

(91)

Proof of observational equivalence, using locality

Claim: “Behaviour of a sub-graph G can be matched by behaviour of a sub-graph H.”

Proof idea (simplified):

1. take contextual closure R of (G,H)

2. prove that the contextual closure R is a *-simulation

(92)

Proof of observational equivalence, using locality

Claim: “Behaviour of a sub-graph G can be matched by behaviour of a sub-graph H.”

Proof idea (simplified):

1. take contextual closure R of (G,H)

[diagram: R relates C[G,…,G] and C[H,…,H], for any context C with focus]
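Written as a set — a paraphrase of this slide; the paper's definition additionally tracks the focus and allows several holes:

\[
\mathcal{R} \;=\; \bigl\{\, \bigl(C[G,\dots,G],\ C[H,\dots,H]\bigr) \;\mid\; C \text{ a context with focus} \,\bigr\}.
\]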

(93)

Proof of observational equivalence, using locality

Claim: “Behaviour of a sub-graph G can be matched by behaviour of a sub-graph H.”

Proof idea (simplified):

1. take contextual closure R of (G,H)

R is closed under contexts, by definition

[diagram: R relates C[G,…,G] and C[H,…,H], for any context C with focus]

(94)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

[diagram: C[G], with focus, makes one transition to a state P]

(95)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

[diagram: C[G] makes one transition to P, P makes n further transitions to P′, and C[H] makes m transitions to Q, with P′ and Q again related]

(96)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

[diagram: after the transitions, both sides are again of the form C′[G,…,G] and C′[H,…,H] for some context C′, hence related by R]

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

[diagram: as on the previous slide]

Idea of locality: tracing sub-graphs during each transition, by analysing what happens around the focus during the transition

(98)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

[diagram: as on the previous slide]

Idea of locality: tracing sub-graphs during each transition, by analysing whether the transition moves the focus or triggers an update

(99)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

Case (1): the focus moves inside the context C

[diagram: C[G] and C[H], with the focus in the context]

(100)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

Case (1): the focus moves inside the context C

[diagram: the move in C[G] stays inside C, yielding C[G] with the focus moved]

(101)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

Case (1): the focus moves inside the context C

[diagram: the same move is available in C[H], and the resulting pair is again related by R]

(102)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

Case (2): the focus moves, entering G

[diagram: C[G] with the focus about to enter G]

(103)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

Case (2): the focus moves, entering G

[diagram: the entering move in C[G] has to be matched by transitions of C[H]]

(104)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

Case (2): the focus moves, entering G

[diagram: matching the entering move]

A sound condition for this case, “safety of (G, H)”, is identified.

(105)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

Case (3): update of the hypernet

[diagram: C[G] with the focus triggers a rewrite, reaching a state P]

(106)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

Case (3): update of the hypernet

[diagram: the triggered rewrite in C[G] and how it interacts with G]

(107)

Proof of observational equivalence, using locality

Proof idea (simplified):

2. prove that the contextual closure R is a *-simulation

Case (3): update of the hypernet

[diagram: the rewrite in C[G] has to be matched by transitions of C[H]]

A sound condition for this case, “robustness of (G, H) relative to all possible rewrites”, is identified.

(108)

Proof of observational equivalence, using locality

Claim: “Behaviour of a sub-graph G can be matched by behaviour of a sub-graph H.”

Proof idea (simplified):

1. take contextual closure R of (G,H)

2. prove that the contextual closure R is a *-simulation by case analysis

(109)

Proof of observational equivalence, using locality

Claim: “Behaviour of a sub-graph G can be matched by behaviour of a sub-graph H.”

Proof idea (simplified):

1. take contextual closure R of (G,H)

2. prove that the contextual closure R is a *-simulation by case analysis

Partial Characterisation Theorem

Robust and safe templates induce observational equivalences.
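Schematically, and eliding the precise side conditions (determinism, a “reasonable” language, and the exact definitions of safety and robustness), the theorem has the shape:

\[
\mathcal{T}\ \text{safe and robust} \;\Longrightarrow\; \forall (G,H)\in\mathcal{T}.\ \forall C.\ \bigl(C[G]\Downarrow \iff C[H]\Downarrow\bigr).
\]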

(110)

Overview

1. Motivation: robustness of observational equivalence

2. Hypernet semantics

3. Locality & step-wise reasoning

4. Example: cbv linear β-law

(111)

Example: cbv linear β-law

Proof methodology:

1. prepare a template {(G,H)}

2. prove that the template {(G,H)} is robust and safe

3. apply the Partial Characterisation Theorem

Partial Characterisation Theorem

Robust and safe templates induce observational equivalences.

(for deterministic & “reasonable” languages)

(112)

Example: cbv linear β-law

Proof methodology:

1. prepare the cbv linear β-template:

  where H represents a value

2. prove that the cbv linear β-template is robust and safe

Partial Characterisation Theorem

[figure: the template is the pair of hypernets — a λ-box containing G applied via @ to H, and G with H plugged in directly]

(113)

Example: cbv linear β-law

Proof methodology:

1. prepare the cbv linear β-template:

  where H represents a value

2. prove that the cbv linear β-template is robust and safe

Partial Characterisation Theorem

[figure: the template is the pair of hypernets — a λ-box containing G applied via @ to H, and G with H plugged in directly]
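For reference, the targeted law at the term level, hedging on the exact side conditions used in the paper:

\[
(\lambda x.\,M)\ V \;\simeq\; M[x := V]
\qquad (V \text{ a value},\ x \text{ used linearly in } M).
\]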

(114)

Safety of cbv linear β-template

Aim: when the focus enters Gj, the single transition of C[G] is matched by transitions of C[H], ending in states again related by R.

[diagram: the *-simulation condition, specialised to this case]

(115)

Safety of cbv linear β-template

Key scenario: when the focus enters Gj — because Hj represents a value, the transitions the focus makes inside Gj are matched on the C[H] side, and the resulting configurations are again of the form C′[…], related by R.

[figure sequence, slides (115)–(122): the focus enters Gj and proceeds step by step (1, 2, …, n′+4 transitions on the left, matched by 0 transitions on the right); the final states are related by R]

(123)

Example: cbv linear β-law

Proof methodology:

1. prepare the cbv linear β-template:

  where H represents a value

2. prove that the cbv linear β-template is robust and safe

Partial Characterisation Theorem

[figure: the template is the pair of hypernets — a λ-box containing G applied via @ to H, and G with H plugged in directly]

(124)

Example: cbv linear β-law

Proof methodology:

1. prepare the cbv linear β-template:

  where H represents a value

2. prove that the cbv linear β-template is robust and safe

Partial Characterisation Theorem

[figure: the template is the pair of hypernets — a λ-box containing G applied via @ to H, and G with H plugged in directly]

(125)

Robustness of cbv linear β-template

Aim: for any possible rewrite triggered by the focus, the single transition of C[G] is matched by transitions of C[H], ending in states again related by R.

[diagram: the *-simulation condition, specialised to rewrites]

(126)

Robustness of cbv linear β-template

Example (1) arithmetic rewrite

[diagram: an arithmetic rewrite is triggered in C[G]]

(127)

Robustness of cbv linear β-template

Example (1) arithmetic rewrite

[diagram: an arithmetic rewrite is triggered in C[G]]

Q: How can the redex overlap with the template?

(128)

Robustness of cbv linear β-template

Example (1) arithmetic rewrite

[diagram: an arithmetic rewrite is triggered in C[G]]

Q: How can the redex overlap with the template?

A: No overlap is possible!

{Hi}i represent values.

(129)

Robustness of cbv linear β-template

Example (1) arithmetic rewrite

[diagram: the rewrite happens entirely inside the context, yielding a new context C′]

Q: How can the redex overlap with the template?

A: No overlap is possible!

{Hi}i represent values.

(130)

Robustness of cbv linear β-template

Example (1) arithmetic rewrite

[diagram: both sides perform the same arithmetic rewrite inside their contexts, ending in states of the form C′[…] related by R]

robustness relative to arithmetic rewrites

(131)

Robustness of cbv linear β-template

Example (2) cbv linear β-reduction

[diagram: a cbv linear β-rewrite is triggered in C[G]]

(132)

Robustness of cbv linear β-template

Example (2) cbv linear β-reduction

[diagram: a cbv linear β-rewrite is triggered in C[G]]

Q: How can the redex overlap with the template?

(133)

Robustness of cbv linear β-template

Example (2) cbv linear β-reduction

[diagram: a cbv linear β-rewrite is triggered in C[G]]

Q: How can the redex overlap with the template?

A: Overlaps can only be inside boxes of the redex.

{Hi}i represent values.

• The redex is always outside a box.

(134)

Robustness of cbv linear β-template

Example (2) cbv linear β-reduction

[diagram: the β-rewrite in C[G]; any overlap with the template lies inside a box of the redex, yielding a new context C′]

Q: How can the redex overlap with the template?

A: Overlaps can only be inside boxes of the redex.

{Hi}i represent values.

• The redex is always outside a box.

(135)

Robustness of cbv linear β-template

Example (2) cbv linear β-reduction

[diagram: both sides perform the matching rewrite, ending in states of the form C′[…] related by R]

robustness relative to cbv linear β-reduction

(136)

Robustness of cbv linear β-template

Example (3) measurement of space usage

[diagram: a space-measurement (`stat`) rewrite is triggered in C[G]; it yields k, the size of the whole graph]

(137)

Robustness of cbv linear β-template

Example (3) measurement of space usage

[diagram: a space-measurement (`stat`) rewrite is triggered in C[G]; it yields k, the size of the whole graph]

(138)

Robustness of cbv linear β-template

Example (3) measurement of space usage

[diagram: on the C[G] side the measurement yields k, the size of the whole graph, and a new context C′]

(139)

Robustness of cbv linear β-template

Example (3) measurement of space usage

[diagram: on the C[H] side the measurement yields the size of the other graph, and a context C″]

(140)

Robustness of cbv linear β-template

Example (3) measurement of space usage

[diagram: the two measurements compared]

k is the size of the whole graph

no robustness relative to `stat`: due to the λ and @ edges, k ≥ h, and hence possibly C′ ≠ C″

(141)

Example: cbv linear β-law

Proof methodology:

1. prepare the cbv linear β-template:

  where H represents a value

2. prove that the cbv linear β-template is robust and safe … relative to arithmetic and cbv linear β-reduction

[figure: the template is the pair of hypernets — a λ-box containing G applied via @ to H, and G with H plugged in directly]

(142)

Example: cbv linear β-law

Proof methodology:

2. prove that the cbv linear β-template is robust and safe … relative to arithmetic and cbv linear β-reduction

3. apply the Partial Characterisation Theorem

Partial Characterisation Theorem

Robust and safe templates induce observational equivalences.

(for deterministic & “reasonable” languages)

(143)

Example: cbv linear β-law

Proof methodology:

2. prove that the cbv linear β-template is robust and safe … relative to arithmetic and cbv linear β-reduction

3. apply the Partial Characterisation Theorem

Proposition (cbv linear β-law)

The cbv linear β-template induces an observational equivalence, if arithmetic and cbv linear β-reduction are the only computations allowed.

(144)

Partiality

The cbv linear β-template is not robust relative to `stat` (measurement of space usage).

What can we say about the cbv linear β-law, in the presence of `stat`?

Partial Characterisation Theorem

Robust and safe templates induce observational equivalences.

(for deterministic & “reasonable” languages)

(145)

Partiality

The cbv linear β-template is not robust relative to `stat` (measurement of space usage).

What can we say about the cbv linear β-law, in the presence of `stat`?

The counterexample of robustness would provide a counterexample of the law, in the presence of conditional statements (e.g. `if`).

The template can be extended so it is robust relative to `stat`, if a language allows no computation to distinguish numbers.

Partial Characterisation Theorem

Robust and safe templates induce observational equivalences.

(for deterministic & “reasonable” languages)

(146)

Partiality

If a template is safe but fails to be robust, either:

(1) The intended observational equivalence fails too.

• Counterexamples of robustness would suggest how the observational equivalence could be violated.

(2) The intended observational equivalence actually holds.

• There may be a bigger, robust, template.

• Counterexamples of robustness would suggest how the template could be extended.

Partial Characterisation Theorem

Robust and safe templates induce observational equivalences.

(for deterministic & “reasonable” languages)

(147)

Overview

1. Motivation: robustness of observational equivalence

2. Hypernet semantics

3. Locality & step-wise reasoning

4. Example: cbv linear β-law

(148)

Conclusion

● a (general) framework for analysing and proving robustness of observational equivalence

● current key limitation: determinism

• hypernet semantics: a graphical abstract machine

• local & step-wise reasoning to prove observational equivalence, with the concept of robustness

(149)

Future directions

● dealing with nondeterminism

● overcoming unsoundness of *-simulation

● Sand’s improvement theory

● incorporating cost reduction in observational equivalence

● introducing quantitative restrictions on *-simulation

● (semi-)automating robustness & safety check

● exploiting techniques of critical pair analysis
