
(1)

Koko Muroya (室屋 晃子)

(RIMS, Kyoto University)

Modelling program execution by strategic rewriting of hierarchical graphs, and its applications

Ueda Lab seminar, Waseda University, 30 January 2020

(2)

Koko Muroya

(RIMS, Kyoto University)

Modelling program execution with token-guided

(hierarchical) graph rewriting

Ueda Lab seminar, Waseda University, 30 January 2020

(3)

Muroya (RIMS, Kyoto U.)

Overview: graphical models of program execution

applications:

● cost analysis

● language designs for programming with data-flow networks

● reasoning about observational equivalence

● visualising program execution

[Diagram: graph rewriting + token passing → token-guided graph rewriting]

(4)

Muroya (RIMS, Kyoto U.)

Overview: graphical models of program execution

applications:

● cost analysis

● language designs for programming with data-flow networks

● reasoning about observational equivalence

● visualising program execution

[Diagram: graph rewriting + token passing → token-guided graph rewriting]

(5)

Muroya (RIMS, Kyoto U.)

Graph-rewriting model

● dates back to [Wadsworth 1971]

● useful to achieve time-efficiency (by flexible sharing)

● e.g. call-by-need evaluation without extra machinery

(6)

Muroya (RIMS, Kyoto U.)

Graph-rewriting model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: a sequence of graph-rewriting steps reducing the graph of (λx. x + x) (1 + 2) to 6; the argument 1 + 2 is shared, evaluated once to 3, and 3 + 3 then gives 6]
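Outside the slides, a minimal OCaml sketch of the same point: the sharing that the graph gives for free corresponds to call-by-need, which can be imitated with lazy so that the shared argument 1 + 2 is computed only once even though it is used twice. The additions counter is only there to make this observable.

  (* A minimal call-by-need sketch: the shared argument is evaluated at most
     once, mirroring the shared sub-graph for (1 + 2) above. *)
  let additions = ref 0
  let add a b = incr additions; a + b

  let () =
    (* (fun x -> x + x) (1 + 2), with the argument shared as a lazy thunk *)
    let x = lazy (add 1 2) in
    let result = add (Lazy.force x) (Lazy.force x) in
    Printf.printf "result = %d, additions = %d\n" result !additions
    (* prints "result = 6, additions = 2": one addition for 1 + 2, one for 3 + 3 *)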

(7)

Muroya (RIMS, Kyoto U.)

Graph-rewriting model

● dates back to [Wadsworth 1971]

● useful to achieve time-efficiency (by flexible sharing)

● e.g. call-by-need evaluation without extra machinery

Question

How to specify a strategy (i.e. a particular way of rewriting)?

(8)

Muroya (RIMS, Kyoto U.)

Overview: graphical models of program execution

applications:

● cost analysis

● language designs for programming with data-flow networks

● reasoning about observational equivalence

● visualising program execution

[Diagram: graph rewriting + token passing → token-guided graph rewriting]

(9)

Muroya (RIMS, Kyoto U.)

Token-passing model

based on Geometry of Interaction [Girard ’89], pioneered by [Danos & Regnier ’99] [Mackie ’95]

ingredients:

● the token, passed around on a fixed graph

● hierarchy of the graph, managing re-evaluation

said to be space-efficient (due to fixed graphs)

… but not really time-efficient (due to re-evaluation)

Question

How to achieve time-efficiency?

[Figure: the fixed graph of (λx. x + x) (1 + 2)]

(10)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? *]

(11)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: B,? *]

(12)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? * *]

(13)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? * <*,*>]

(14)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? * L<*,*>]

(15)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: A,? L<*,*>]

(16)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? L<*,*>]

(17)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? <L<*,*>,*>]

(18)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 1 <L<*,*>,*>]

(19)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? <L<*,*>,1>]

(20)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 2 <L<*,*>,1>]

(21)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 3 L<*,*>]

(22)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: A,3 L<*,*>]

(23)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 3 * L<*,*>]

(24)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 3 * <*,*>]

(25)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? * <*,3>]

(26)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? * L<*,3>]

(27)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: A,? L<*,3>]

(28)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? L<*,3>]

(29)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? <L<*,3>,*>]

(30)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 1 <L<*,3>,*>]

(31)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: ? <L<*,3>,1>]

(32)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 2 <L<*,3>,1>]

(33)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 3 L<*,3>]

(34)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: A,3 L<*,3>]

(35)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 3 * L<*,3>]

(36)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 3 * <*,3>]

(37)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 6 * *]

(38)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: B,6 *]

(39)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the fixed graph of the program; token data: 6 *]

(40)

Muroya (RIMS, Kyoto U.)

Token-passing model

based on Geometry of Interaction [Girard ’89], pioneered by [Danos & Regnier ’99] [Mackie ’95]

ingredients:

● the token, passed around on a fixed graph

● hierarchy of the graph, managing re-evaluation

said to be space-efficient (due to fixed graphs)

… but not really time-efficient (due to re-evaluation)

Question

How to achieve time-efficiency?

modelling call-by-name evaluation by default

[Figure: the fixed graph of (λx. x + x) (1 + 2)]
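For contrast with the call-by-need sketch earlier, here is a minimal OCaml rendering (again not from the slides) of the call-by-name behaviour that plain token passing models by default: the argument is a thunk that is re-run at every use, which is exactly the re-evaluation cost mentioned above.

  (* A minimal call-by-name sketch: the argument thunk is re-run at every use,
     mirroring the token re-traversing the sub-graph for (1 + 2). *)
  let additions = ref 0
  let add a b = incr additions; a + b

  let () =
    (* (fun x -> x + x) (1 + 2), with the argument passed as a thunk *)
    let x () = add 1 2 in
    let result = add (x ()) (x ()) in
    Printf.printf "result = %d, additions = %d\n" result !additions
    (* prints "result = 6, additions = 3": 1 + 2 is recomputed at each use *)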

(41)

Muroya (RIMS, Kyoto U.)

Models of program execution

Questions

● a trade-off between time-efficiency and space-efficiency?

● a unified model to analyse the trade-off?

[Diagram: graph rewriting (✔ time-efficiency) vs. token passing (✔ space-efficiency)]

(42)

Muroya (RIMS, Kyoto U.)

applications:

● cost analysis

● language designs for programming with data-flow networks

● reasoning about observational equivalence

● visualising program execution

Overview: graphical models of program execution

[Diagram: graph rewriting + token passing → token-guided graph rewriting]

(43)

Muroya (RIMS, Kyoto U.)

Token-guided graph-rewriting model

program: (λx. x + x) (1 + 2)   result: 6

(44)

Muroya (RIMS, Kyoto U.)

Token-guided graph-rewriting model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the graph of the program; token data: ? *]

(45)

Muroya (RIMS, Kyoto U.)

Token-guided graph-rewriting model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the graph of the program; token data: B,? *]

(46)

Muroya (RIMS, Kyoto U.)

Token-guided graph-rewriting model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the graph of the program; token data: ? * *]

(47)

Muroya (RIMS, Kyoto U.)

Token-guided graph-rewriting model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the graph of the program; token data: ? * *]

the token has detected a redex…

> pass   > rewrite

(48)

Muroya (RIMS, Kyoto U.)

Token-passing model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the graph of the program; token data: ? * <*,*>]

the token has detected a redex…

✔ pass   > rewrite   (the token-passing model keeps passing)

(49)

Muroya (RIMS, Kyoto U.)

Token-guided graph-rewriting model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the graph of the program; token data: ? * *]

the token has detected a redex…

> pass   > rewrite

(50)

Muroya (RIMS, Kyoto U.)

Token-guided graph-rewriting model

program: (λx. x + x) (1 + 2)   result: 6

[Figure: the rewritten graph; token data: ? *]

the token has detected a redex…

> pass   ✔ rewrite   (the token-guided model triggers the rewrite)

(51)

Muroya (RIMS, Kyoto U.)

Token-guided graph-rewriting model

● a combination of graph rewriting and token passing

● graph rewriting, guided and controlled by the token

● redexes always detected by the token

● rewrites can only be triggered by the token

freedom of choice

[Figure: from the same graph and token position, the execution may either keep passing the token or trigger the rewrite]
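The "freedom of choice" can be pictured with a toy evaluator (a sketch under heavy simplification: it works on expression trees rather than on the hierarchical graphs of the model, and all names are made up). The "token" is the recursive traversal; whenever it detects a redex, a mode predicate decides whether to rewrite the tree on the spot, graph-rewriting style, or only to carry the value in the token, token-passing style.

  (* A toy token-guided evaluator on expression trees (not the actual model).
     When the traversal (the "token") finds an addition of two known numbers,
     the [trigger] predicate decides whether to rewrite that node or merely
     carry the result onward. *)
  type exp = Num of int | Add of exp * exp

  let rewrites = ref 0

  (* Returns the value carried by the token and the possibly rewritten tree. *)
  let rec step trigger e =
    match e with
    | Num n -> (n, e)
    | Add (l, r) ->
        let vl, l' = step trigger l in
        let vr, r' = step trigger r in
        let v = vl + vr in
        if trigger !rewrites then (incr rewrites; (v, Num v))  (* rewrite *)
        else (v, Add (l', r'))                                 (* pass    *)

  let () =
    let e = Add (Add (Num 1, Num 2), Add (Num 1, Num 2)) in
    let maximum _ = true        (* "maximum" mode: rewrite whenever possible *)
    and minimum _ = false in    (* "minimum" mode: never rewrite             *)
    (* an "up-to" mode would be e.g. (fun n -> n < 100)                      *)
    let v1, _ = step maximum e in
    let v2, _ = step minimum e in
    Printf.printf "maximum mode: %d, minimum mode: %d\n" v1 v2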

(52)

Muroya (RIMS, Kyoto U.)

Modes of token-guided graph-rewriting model

[Diagram: a spectrum of modes between graph rewriting and token passing]

● “maximum” token-guided graph rewriting (rewrites triggered by the token whenever possible): models call-by-need evaluation by default, and also call-by-value evaluation by changing the routing of the token

● “minimum” token-guided graph rewriting (rewrites never triggered by the token): models call-by-name evaluation by default

(53)

Muroya (RIMS, Kyoto U.)

Modes of token-guided graph-rewriting model

[Diagram: a spectrum of modes between graph rewriting and token passing]

● “maximum” token-guided graph rewriting (rewrites triggered by the token whenever possible): models call-by-need evaluation by default, and also call-by-value evaluation by changing the routing of the token

● “minimum” token-guided graph rewriting (rewrites never triggered by the token): models call-by-name evaluation by default

demo: https://koko-m.github.io/GoI-Visualiser/

for the (pure, untyped) lambda-calculus

(54)

Muroya (RIMS, Kyoto U.)

Overview: graphical models of program execution

applications:

● cost analysis

● language designs for programming with data-flow networks

● reasoning about observational equivalence

● visualising program execution

[Diagram: graph rewriting + token passing → token-guided graph rewriting]

(55)

Muroya (RIMS, Kyoto U.)

Application 1: cost analysis

Goal (also original motivation)

analysis of a trade-off between time-efficiency and space-efficiency

[Diagram: graph rewriting (✔ time-efficiency) vs. token passing (✔ space-efficiency)]

(56)

Muroya (RIMS, Kyoto U.)

Application 1: cost analysis

[Diagram: the spectrum of modes between graph rewriting and token passing, from the “maximum” mode (rewrites triggered by the token whenever possible) to the “minimum” mode (rewrites never triggered by the token)]

[— & Ghica, LMCS ’19]

proof of time-efficiency of the “maximum” mode, for call-by-need and call-by-value evaluation

(57)

Muroya (RIMS, Kyoto U.)

Application 1: cost analysis

[Diagram: the spectrum of modes between graph rewriting and token passing, from the “maximum” mode (rewrites triggered by the token whenever possible) to the “minimum” mode (rewrites never triggered by the token)]

[ongoing work]

analysis of various modes, and hence the time-space trade-off

“maximum” mode & “minimum” mode,

“up-to” mode (e.g. allowing up to 100 rewrites),

“no-increase” mode (i.e. forbidding growth of the graph), etc.
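In the spirit of the toy evaluator sketched earlier, and purely as a hypothetical rendering of this ongoing work (the record fields are invented for illustration), the various modes can be read as predicates over an execution state that decide whether the token triggers the rewrite it has detected:

  (* Hypothetical mode predicates over an execution state; the fields are
     invented for illustration only. *)
  type state = { rewrites : int;      (* rewrites performed so far            *)
                 size : int;          (* current size of the graph            *)
                 size_after : int }   (* size the graph would have afterwards *)

  let maximum     _s = true                      (* rewrite whenever possible  *)
  let minimum     _s = false                     (* never rewrite              *)
  let up_to n      s = s.rewrites < n            (* e.g. up_to 100             *)
  let no_increase  s = s.size_after <= s.size    (* forbid growth of the graph *)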

(58)

Muroya (RIMS, Kyoto U.)

Overview: models of program execution

applications:

● cost analysis

● language designs for programming with data-flow networks

● reasoning about observational equivalence

● visualising program execution

[Diagram: graph rewriting + token passing → token-guided graph rewriting]

(59)

Muroya (RIMS, Kyoto U.)

Application 2: programming with data-flow networks

Goal programming language designs for:

construction of a dataflow network

evaluation of a dataflow network

update of a dataflow network

(60)

Muroya (RIMS, Kyoto U.)

Application 2: programming with data-flow networks

Goal programming language designs for:

construction of a dataflow network

evaluation of a dataflow network

update of a dataflow network

[— & Cheung & Ghica, LICS ’18] [Cheung & Darvariu & Ghica & — & Rowe, FLOPS ’18]

Idealised TensorFlow

(61)

Muroya (RIMS, Kyoto U.)

Application 2: programming with data-flow networks

Goal programming language designs for:

construction of a dataflow network

evaluation of a dataflow network

update of a dataflow network

construction of a parametrised model (e.g. f(x) = a * x + b)

as a network with parameter nodes

[— & Cheung & Ghica, LICS ’18] [Cheung & Darvariu & Ghica & — & Rowe, FLOPS ’18]

Idealised TensorFlow

[Thesis excerpt, Chapter 5 “Discussion”, p. 166:]

[…]tion with parametrised data-flow networks. Its semantics, a variation of the DGoIM, accordingly has extra nodes that represent parameters, and an extra rewriting rule of graph abstraction. These extra features altogether model the behaviour of the parameter nodes in a TensorFlow network, in a functional way. The rest of this section gives an informal description using a simplified style of the DGoIM graphs, deliberately ignoring their box structure, and making the token implicit in rewriting rules.

The starting point is to get rid of one of the three classes of nodes in the TensorFlow network, namely input nodes. They can be replaced with the nodes for lambda-abstraction and function application à la DGoIM. The linear regression model f(x) = ax + b can be represented as a lambda-abstraction with two parameter nodes:

[Figure: the linear regression model as a λ-abstraction network with parameter nodes a and b]

The subtle manipulation required by prediction, which was to replace the input node with a value node x0, can be simply modelled by graphical beta reduction:

[Figure: graphical beta reduction plugs the value node x0 into the network, yielding a network that computes a·x0 + b]

Observation of the resulting network, which involves parameter nodes, can be achieved solely by token passing. The output data a·x0 + b can be obtained by letting the token travel through the network from the bottom, as indicated by a thick gray arrow:

(62)

Muroya (RIMS, Kyoto U.)

Application 2: programming with data-flow networks

Goal programming language designs for:

construction of a dataflow network

evaluation of a dataflow network

update of a dataflow network

prediction with a parametrised model, step 1 (graph rewriting): function application to input data

[— & Cheung & Ghica, LICS ’18] [Cheung & Darvariu & Ghica & — & Rowe, FLOPS ’18]

Idealised TensorFlow

[Thesis excerpt, Chapter 5 “Discussion”, p. 166: the same page as on the previous slide]

(63)

Muroya (RIMS, Kyoto U.)

Application 2: programming with data-flow networks

Goal programming language designs for:

construction of a dataflow network

evaluation of a dataflow network

update of a dataflow network

prediction with a parametrised model, step 2 (token passing): the token travels over the resulting network

[— & Cheung & Ghica, LICS ’18] [Cheung & Darvariu & Ghica & — & Rowe, FLOPS ’18]

Idealised TensorFlow

[Thesis excerpt, §5.2 “Case study: data-flow networks”, p. 167:]

[Figure: the token travels through the network from the bottom; its data goes from ‘?’ to the output ‘a·x0 + b’]

Values in gray (‘?’ and ‘a·x0 + b’) represent token data, enriched to record values; recall that the token uses its data to determine routing and control rewriting in the DGoIM. The token can itself read back a value from the network, if it can record a value of a node and perform an operation denoted by a node as it travels through the network.

The main manipulation of parametrised data-flow networks is to update parameter nodes. This can be modelled as a combination of graphical beta reduction and graph abstraction, a new graph-rewriting rule. Graph abstraction “abstracts away” all the parameters of a network, and turns the parametrised network into an ordinary network that represents a function on vectors. As a side product, it creates a value node that represents a vector whose elements are the current values of the parameters. The rewriting rule is formalised using two additional nodes: the node ‘A’ dedicated to trigger the rewrite, and a node ‘P’ that denotes projections of a vector. When applied to the linear regression model, the rule looks as below:

[Figure: graph abstraction applied to the linear regression model; the parameters a and b are abstracted away, leaving a network with the projection node ‘P’ together with a value node for the vector (a, b)]

Graph abstraction is not a local rewriting rule, because it extracts all parameters in a network, which are not necessarily neighbours of the triggering node ‘A’. Parameters are extracted altogether as a function argument, so that parameter update can be completed with graphical beta reduction. This deviates from the in-place update of TensorFlow. Once a new parameter vector (a0, b0) is computed, pre- […]

(64)

Muroya (RIMS, Kyoto U.)

Application 2: programming with data-flow networks

Goal programming language designs for:

construction of a dataflow network

evaluation of a dataflow network

update of a dataflow network

functional update of parameters, step 1 (graph rewriting): the novel “graph abstraction” rule turns a parametrised model into an ordinary function

[— & Cheung & Ghica, LICS ’18] [Cheung & Darvariu & Ghica & — & Rowe, FLOPS ’18]

Idealised TensorFlow

[Thesis excerpt, §5.2 “Case study: data-flow networks”, p. 167: the same page as on the previous slide]

(65)

Muroya (RIMS, Kyoto U.)

Application 2: programming with data-flow networks

Goal programming language designs for:

construction of a dataflow network

evaluation of a dataflow network

update of a dataflow network

functional update of parameters, step 2 (graph rewriting): function application to the new parameter values

[— & Cheung & Ghica, LICS ’18] [Cheung & Darvariu & Ghica & — & Rowe, FLOPS ’18]

Idealised TensorFlow

[Thesis excerpt, Chapter 5 “Discussion”, p. 168, continuing the sentence cut off above:]

[…]diction with the updated model f(x) = a0·x + b0 using input data x0 can be started with two steps of graphical beta reduction:

[Figure: two steps of graphical beta reduction apply the abstracted function to the new parameter vector (a0, b0) and then to the input x0]

This will be followed by projections of the new parameter vector into each element.

The calculus ITF is proposed as an extension of the simply-typed lambda-calculus, in which the computation described above can be expressed with two extra language features: parameters and graph abstraction. The TensorFlow code in Sec. 5.2.1 corresponds to the following program in ITF, using the OCaml-like convention:

 1  ;; construct the model as a function with two parameters
 2  let a = {1} in
 3  let b = {0} in
 4  let y = fun x -> a * x + b in
 5  ;; turn the model into a function and a parameter vector
 6  let (model, p) = abs y in
 7  ;; update the parameter vector with ‘train’ defined elsewhere
 8  let q = train model p in
 9  ;; predict output with the updated model
10  let y0 = model q 10 in
11  y0

The parameters, indicated by { }, enable users to construct a model without explicitly declaring which parameters are involved in each model. This convenient style of construction is taken from TensorFlow. What used to be in-place update of parameters in TensorFlow is now decomposed through graph abstraction, which can be accessed by programmers using the operation ‘abs’ (line 6).

As a final remark, the idea of local reasoning, which is extensively investigated in Chap. 4, was initially tried out with ITF and its DGoIM-style model. Although […]
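Only as a rough intuition (this is plain OCaml, not ITF, and train here is a stand-in, not the real training code), the effect of graph abstraction is to split a parametrised model into an ordinary function over a parameter vector plus the current vector; "updating the parameters" then amounts to computing a new vector and applying the unchanged function to it:

  (* Plain-OCaml analogue of the ITF program above (not ITF itself): graph
     abstraction is mimicked by writing the model directly as a function of
     its parameter vector. *)
  let model (a, b) x = a * x + b          (* the abstracted function            *)
  let p = (1, 0)                          (* current parameters, as in {1}, {0} *)

  (* A stand-in for ‘train’: all that matters is that it returns a new vector. *)
  let train _model (a, b) = (a + 1, b + 1)

  let () =
    let q = train model p in              (* functional update of the parameters *)
    let y0 = model q 10 in                (* prediction with the updated model   *)
    Printf.printf "y0 = %d\n" y0          (* here: 2 * 10 + 1 = 21               *)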

(66)

Muroya (RIMS, Kyoto U.)

Application 2: programming with data-flow networks

Goal programming language designs for:

construction of a dataflow network

evaluation of a dataflow network

update of a dataflow network

[— & Cheung & Ghica, LICS ’18] [Cheung & Darvariu & Ghica & — & Rowe, FLOPS ’18]

Idealised TensorFlow

extension of the simply-typed lambda-calculus with:

parameters, “graph abstraction”, “opaque” vector types

type soundness & some observational equivalences

visualiser of token-guided graph rewriting

https://cwtsteven.github.io/GoI-TF-Visualiser/CBV-with-CBN-embedding/index.html

OCaml PPX implementation https://github.com/DecML/decml-ppx

(67)

Muroya (RIMS, Kyoto U.)

Application 2: programming with data-flow networks

Goal programming language designs for:

construction of a dataflow network

evaluation of a dataflow network

update of a dataflow network

[Cheung & Ghica & —, unpublished manuscript (arXiv:1910.09579)]

Transparent Synchronous Dataflow

extension of the simply-typed lambda-calculus with:

spreadsheet-like “cells” (allowing circular dependency),

“step” command (updating cells step-by-step & concurrently)

type soundness & some efficiency guarantee

visualiser of token-guided graph rewriting https://cwtsteven.github.io/TSD-visual/

OCaml PPX implementation https://github.com/cwtsteven/TSD

(explained in https://danghica.blogspot.com/2019/11/making-ocaml-more-like-excel.html )

for a presentation, see (esp. from 34:11): https://www.youtube.com/watch?v=sgmpVedCsNM&t=102s
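As a very loose illustration only (plain OCaml, neither TSD nor its actual API), the "cells plus step" idea: mutually dependent cells are unproblematic because a step computes every new value from the previous values of all cells, simultaneously.

  (* Plain-OCaml analogue of synchronous cells (not TSD's API): each step
     reads only the previous values, so x and y may depend on each other. *)
  type cells = { x : int; y : int }

  let step s = { x = s.y + 1;    (* x depends on y *)
                 y = s.x * 2 }   (* y depends on x *)

  let () =
    let s0 = { x = 0; y = 1 } in
    let s1 = step s0 in
    let s2 = step s1 in
    Printf.printf "step 1: x=%d y=%d; step 2: x=%d y=%d\n" s1.x s1.y s2.x s2.y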

(68)

Muroya (RIMS, Kyoto U.)

Overview: graphical models of program execution

applications:

● cost analysis

● language designs for programming with data-flow networks

● reasoning about observational equivalence

● visualising program execution

[Diagram: graph rewriting + token passing → token-guided graph rewriting]

(69)

Muroya (RIMS, Kyoto U.)

Application 3: reasoning about observational equivalence

Question(s)

Do two program fragments behave the same?

or, is it safe to replace a program fragment with another?

if YES:

● justification of refactoring, compiler optimisation

● verification of programs

[Figure: two chains of candidate replacements, each step marked with “?”]

let x = 100 in let y = 50 in y + y   ≃?   let y = 50 in y + y   ≃?   50 + 50

let x = 100 in let y = 50 in y + y   ≃?   let x = 100 in 50 + 50   ≃?   50 + 50

(70)

Muroya (RIMS, Kyoto U.)

Application 3: reasoning about observational equivalence

Question(s)

Do two program fragments behave the same?

(71)

Muroya (RIMS, Kyoto U.)

Application 3: reasoning about observational equivalence

Question(s)

Do two program fragments behave the same?

What program fragments behave the same?

the beta-law:   (λx. M) N ≃ M[x := N]

a parametricity law:   𝚕𝚎𝚝 a = 𝚛𝚎𝚏 1 𝚒𝚗 λx. (a := 2; !a)  ≃  λx. 2
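A quick sanity check in OCaml (an illustration, not a proof of the equivalence): in ordinary contexts both sides of the parametricity law return 2 on every call, because the private reference on the left is written and read back immediately and is visible nowhere else.

  (* Both functions return 2 for every argument; the reference [a] is private
     to the left-hand side and cannot be observed by ordinary contexts. *)
  let lhs = let a = ref 1 in fun _x -> (a := 2; !a)
  let rhs = fun _x -> 2

  let () =
    assert (lhs 0 = rhs 0);
    assert (lhs 42 = rhs 42);
    print_endline "both sides return 2"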

(72)

Muroya (RIMS, Kyoto U.)

Application 3: reasoning about observational equivalence

Question(s)

Do two program fragments behave the same?

When do program fragments behave the same?

Does the beta-law always hold?

the beta-law:   (λx. M) N ≃ M[x := N]

(73)

Muroya (RIMS, Kyoto U.)

Application 3: reasoning about observational equivalence

Question(s)

Do two program fragments behave the same?

When do program fragments behave the same?

Does the beta-law always hold?

No, it is violated by program contexts that can measure memory usage (e.g. with OCaml’s Gc module)…

the beta-law:   (λx. M) N ≃ M[x := N]

a violating instance:   (λx. 0) 100  vs.  0
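To make the kind of distinguishing context concrete (this is an illustration of the mechanism, not of the exact beta-law instance above, and the measured numbers are implementation-dependent), a context can consult OCaml's Gc module and thereby observe how much a fragment allocates:

  (* A context that observes allocation via the Gc module; two fragments that
     return the same value can still allocate differently.  The exact word
     counts depend on the compiler and its optimisations. *)
  let allocation_of f =
    let before = Gc.minor_words () in
    ignore (f ());
    Gc.minor_words () -. before

  let () =
    let with_ref () = let r = ref 0 in !r     (* allocates a reference cell *)
    and without  () = 0 in                    (* allocates nothing          *)
    Printf.printf "with ref: %.0f words, without: %.0f words\n"
      (allocation_of with_ref) (allocation_of without)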

(74)

Muroya (RIMS, Kyoto U.)

Application 3: reasoning about observational equivalence

Question(s)

Do two program fragments behave the same?

What fragments, in which contexts?

… in the presence of (arbitrary) language features

● pure vs. effectful (e.g. 50 + 50 vs. ref 1)

● encoded vs. native (e.g. State vs. ref)

● extrinsics (e.g. Gc.stat)

● foreign language calls

(75)

Muroya (RIMS, Kyoto U.)

Application 3: reasoning about observational equivalence

Question(s)

Do two sub-graphs behave the same?

What sub-graphs, in which contexts?

… in token-guided graph rewriting for (arbitrary) language features

proof of (robustness of) observational equivalence by exploiting locality of graph representation/syntax

[Ghica & — & Waugh Ambridge, unpublished manuscript (arXiv:1907.01257)]

Local reasoning for robust observational equivalence

(76)

Muroya (RIMS, Kyoto U. & U. B’ham.)

Application 3: reasoning about observational equivalence

Locality of graph syntax

“Does 𝚗𝚎𝚠 a 1 𝚒𝚗 λx. (a := 2; !a) behave the same as λx. 2 ?”

with linear syntax:

[Figure: the two fragments, 𝚗𝚎𝚠 a 1 𝚒𝚗 λx. (a := 2; !a) and λx. 2, each placed in a surrounding context (⋯)]

(77)

Muroya (RIMS, Kyoto U. & U. B’ham.)

Application 3: reasoning about observational equivalence

Locality of graph syntax

“Does 𝚗𝚎𝚠 a 1 𝚒𝚗 λx. (a := 2; !a) behave the same as λx. 2 ?”

with linear syntax: comparison between sub-terms

[Figure: the sub-terms λx. (a := 2; !a) and λx. 2 compared inside surrounding contexts (⋯)]

(78)

Muroya (RIMS, Kyoto U. & U. B’ham.)

Application 3: reasoning about observational equivalence

Locality of graph syntax

“Does 𝚗𝚎𝚠 a 1 𝚒𝚗 λx. (a := 2; !a) behave the same as λx. 2 ?”

with linear syntax: comparison between sub-terms

with graph syntax: comparison between sub-graphs

[Figure: the same comparison drawn on graphs, with the compared sub-graphs sitting inside a common surrounding context (⋯)]

(79)

Muroya (RIMS, Kyoto U.)

Overview: graphical models of program execution

applications:

● cost analysis

● language designs for programming with data-flow networks

● reasoning about observational equivalence

● visualising program execution

[Diagram: graph rewriting + token passing → token-guided graph rewriting]

(80)

Muroya (RIMS, Kyoto U.)

Application 4: visualising program execution

● OCaml Visual Debugger

https://fyp.jackhughesweb.com/ by Jack Hughes

● comparison between programs

● mutable state: encoded vs native

https://www.youtube.com/watch?v=ysZdqocIu7E

● sorting algorithms: insertion vs bubble

https://www.youtube.com/watch?v=bZMSwo0zLio

● sorting algorithms: merge vs insertion

https://www.youtube.com/watch?v=U1NI-mWeNe0&t=213s

(81)

Muroya (RIMS, Kyoto U.)

Overview: graphical models of program execution

applications:

● cost analysis

● language designs for programming with data-flow networks

● reasoning about observational equivalence

● visualising program execution

[Diagram: graph rewriting + token passing → token-guided graph rewriting]

(82)

Muroya (RIMS, Kyoto U.)

Overview: graphical models of program execution

biggest, persistent challenge:

mathematical formalisation

● graph theory?

● category theory? (DPO rewriting, string diagrams, …)

● rewriting theory? (term-graph rewriting, …)

[Diagram: graph rewriting + token passing → token-guided graph rewriting]
