
English IB 1A5 (=E1R86), 1L1 (=E1R05), English IIB E2R40, 2011 L12



(1)

2012-01-19

English IB 1A5 (=E1R86), 1L1 (=E1R05), English IIB E2R40, 2011 L12

These slides are available at the following URL:

http://clsl.hi.h.kyoto-u.ac.jp/~kkuroda/lectures/11B-KU/KU-2011B-L12-slides.pdf

黒田 (part-time lecturer), substituting for 出口雅也 (part-time lecturer)

(2)

Announcements 1/3

Makeup class on 1/26, covering the class cancelled on 11/9

2/2 is the final day = bonus exam (counts as L14)

You may change the time slot you attend at your own discretion

Periods 1 and 2: West Building, Joint Lecture Room 12 / Period 3: West Building, Joint Lecture Room 03

(3)

Announcements 2/3

1/19

1A5, 1L1: TED

2R: The Feynman Lectures on Physics

1/26

1A5, 1L1: TED

2R: The Feynman Lectures on Physics

2/2

Bonus exam

(4)

Announcements 3/3

Grading

Your grade is the percentage of points you earn on each session's training tasks (+α)

Note that this means attendance is evaluated indirectly

The bonus exam (2012/2/2) is not a final exam

Unlike the first semester, the bonus exam will not be used as a remedial measure

(5)

What is the bonus exam?

You attempt the same tasks as in the regular sessions

A miss on the first (regular) attempt becomes a hit if you get it right this time

A hit on the first (regular) attempt is never changed

In other words, your score can only go up (see the sketch below)

Purpose

To reward the effort you put into review

This semester there is no rescue for students with insufficient attendance
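Editor's sketch, not part of the original slides: assuming each task item is simply marked right (1) or wrong (0), the bonus-exam rule above amounts to keeping the better result per item, so the total can only rise.

```python
# Illustrative sketch of the bonus-exam rule described above: a hit from the
# first attempt is kept, and a bonus-exam hit repairs a first-attempt miss,
# so the total score never decreases. The 1/0 item marks are an assumption.

def merge_scores(first_attempt, bonus_attempt):
    """Per-item best of the regular session and the bonus exam."""
    return [max(a, b) for a, b in zip(first_attempt, bonus_attempt)]

first = [1, 0, 0, 1, 1]  # hypothetical marks from the regular session
bonus = [0, 1, 0, 1, 0]  # hypothetical marks from the bonus exam, same items
final = merge_scores(first, bonus)

print(final)                     # [1, 1, 0, 1, 1]
print(sum(final) >= sum(first))  # True: the score improves monotonically
```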

(6)

Bonus exam tasks

Three sessions' worth: L4, L5, L11

L4: Cynthia Breazeal: The Rise of Personal Robots, Part 2

L5: Eben Bayer: Are Mushrooms the New Plastic?

L11: Ben Goldacre: Battling Bad Science.

(7)

Students for whom the 2/2 exam is mandatory (red) and those who had better be careful (orange)

1A5

山本 心平, 藤本 俊平, 古元 , 本田 貴大, 夏目 知明, 町田 奈緒子, 脇田 健史 (7)

都築 雅美, 村上 奈央 (2)

1L1

板野 真衣, 白石 華子, 加藤 , 田代 大知, 祐太, 窪田 すみ, 平賀 拓史, 松元 大周 (8 )

斉藤 浩一郎, 齋藤 , 川崎 理子, 佐藤 彩香, 宮本 貴史, 真悠子 (6)

2R

出口 利樹, 鈴木 敢士, 城戸 , 草間 (4)

曽我 康太, 壱岐島 淳矢, 小林 正和, 小野原 龍一 (4)

(8)

Course materials

The listening materials are available from the following web page:

http://clsl.hi.h.kyoto-u.ac.jp/~kkuroda/lectures/KU-11B.html

Please use them for preparation and review outside class hours

They should be especially useful when preparing for the bonus exam

For the speed-reading part I cannot offer exactly the same material, but I will work something out

(9)

Today's plan

First half (60 min.)

Report on the L11 results and explanation of the correct answers

Second half (30 min.)

1A5, 1L1: listening practice with TED

Tim Harford: Trial, Error, and the God Complex (18 min.), first half

2R: listening practice with The Feynman Lectures on Physics

Volume 1, Chapter 4: Conservation of Energy, first half

(10)

Books by Tim Harford

The Undercover Economist: Exposing Why the Rich Are Rich, the Poor Are Poor— and Why You Can Never Buy a Decent Used Car! (Oxford University Press)

Japanese edition: 『まっとうな経済学』(ランダムハウス講談社)

The Logic of Life: The Rational Economics of an Irrational World (Random House)

Japanese edition: 『人は意外と合理的』(武田ランダムハウスジャパン)

(11)

Results of the L11 speed-reading training

(12)

Student ratings of the task

       Q1: amount                      Q2: difficulty
1A5    2.30   0.73   3.00   1.00       2.05   0.51   3.00   1.00
1L1    2.31   0.70   3.00   1.00       2.50   0.73   4.00   1.00
2R     2.40   0.58   3.00   1.00       2.16   0.69   4.00   1.00

(The four figures per question appear to be mean, standard deviation, maximum, and minimum, matching the statistics reported on the following slides.)

(13)

Trend in mean scores across sessions

(14)

Score distribution for L11: 1A5, 1L1, 2R combined

Participants: 62

Mean: 65.35

Standard deviation: 10.77

Maximum: 84.85; minimum: 39.39

Number of score groups = 2

(15)

Score distribution for L11: 1A5

Students: 21

Mean: 64.07 [21.14/n]

Standard deviation: 10.23 [8.77/n]

Maximum: 83.33 [27.50/n]

Minimum: 43.94 [14.50/n]

n = 33 (see the conversion sketch below)

Number of score groups = 2
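Editor's note, not part of the original slides: the bracketed [x/n] figures on these distribution slides appear to be raw scores out of n = 33 items, with the headline figures the corresponding percentages. A minimal sketch of that assumed conversion, checked against the 1A5 numbers above:

```python
# Sketch of the assumed relation between the bracketed raw scores [x/n] and the
# percentage figures on the distribution slides, taking n = 33 items.

N_ITEMS = 33  # assumption: number of scorable items in the L11 task

def to_percent(raw: float, n: int = N_ITEMS) -> float:
    """Convert a raw score out of n items to a percentage."""
    return 100.0 * raw / n

print(round(to_percent(21.14), 2))  # 64.06 -- reported mean 64.07 (rounding)
print(round(to_percent(27.50), 2))  # 83.33 -- reported maximum
print(round(to_percent(14.50), 2))  # 43.94 -- reported minimum
```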

(16)

Score distribution for L11: 1L1

Students: 16

Mean: 72.82 [24.03/n]

Standard deviation: 8.74 [2.88/n]

Maximum: 84.85 [28.00/n]

Minimum: 48.48 [16.00/n]

n = 33

Number of score groups = 3?

(17)

Score distribution for L11: 2R

Students: 25

Mean: 61.64 [20.34/n]

Standard deviation: 10.35 [3.41/n]

Maximum: 75.76 [25.00/n]

Minimum: 39.39 [13.00/n]

n = 33

Number of score groups = 2?

(18)

Trend in mean correct-answer rates across sessions

(19)

Correct-answer rate distribution for L11: 1A5, 2R, 1L1 combined

Participants: 62

Mean: 0.74

Standard deviation: 0.07

Maximum: 0.89; minimum: 0.54

Number of rate groups = 1

(20)

Correct-answer rate distribution for L11: 1A5

Participants: 21

Mean: 0.77; standard deviation: 0.06

Maximum: 0.86; minimum: 0.65

Number of rate groups = 2

(21)

Correct-answer rate distribution for L11: 1L1

Participants: 16

Mean: 0.76; standard deviation: 0.05

Maximum: 0.84; minimum: 0.64

Number of rate groups = 2?

(22)

Correct-answer rate distribution for L11: 2R

Participants: 25

Mean: 0.70; standard deviation: 0.09

Maximum: 0.89; minimum: 0.54

Number of rate groups = 2

(23)

Correct answers for L11

(24)

Correct answers for the L11 task

1. Epidemiology

2. cancer

3. unpicking

4. evidence-based⇒ evidence, every

5. authority

6. guru⇒ gour, girl, rule

7. remembers

8. laboratory⇒board, boat

9. measurable⇒ major, mager

10. snapshot⇒ snapshow, shap, shapshock, snack

11. skin wrinkles⇒ wrinkles, reasons

12. fish oil

13. randomize⇒ land, round, run, learn

14. flaw⇒ flow, flour, flu, slow, throw, snow, through, fruit

15. medicine

16. beats⇒ bit(s), big(s)

17. know

18. fascinating⇒ fa(s)cinating, fascinacing

19. drug⇒ dose, drag

20. useless

21. against

22. comparing⇒ compare, competing, comparison

23. better

24. pyramid⇒ periment, period

25. dot⇒ day, done, door, do, data, toy, drug(s), doy(s), top

26. bias

27. positive

28. what

29. reviews⇒ use, abuse, refuse, confuse, reduce, fews

30. lacking⇒ lucking, racking

31. that

32. drug⇒ dose, distractive, data

33. peer in⇒ pearing, paying, caring, payment, pearling, paring

(25)

1/31

So I’m a uh doctor, but I kind of slipped sideways into research, and now I’m an epidemiologist. And nobody really knows what epidemiology is. [1. Epidemiology] is the science of how we know in the real world if

something is good for you or bad for you. And it’s best understood uh through example as the science of those crazy, wacky newspaper headlines. And these are just some of the examples.

(26)

2/31

These are from the Daily Mail. Every country in the world has a newspaper like this. It has this kind of bizarre, ongoing

philosophical project of dividing all the inanimate objects in the world into the ones that either cause or prevent [2. cancer]. So here are some of the things they said cause cancer recently:

divorce, Wi-Fi, toiletries and coffee. Here are some of the things they say prevents cancer: crusts, red pepper, licorice and coffee. So already you can see there are contradictions. Coffee both causes and prevents cancer. And as you start to read on, you can see that maybe there’s some kind of political valence behind some of this.

So for women, housework prevents breast cancer, but for men, shopping could make you impotent.

(27)

3/31

So we know that we need to start [3. unpicking] the science behind this. And what I hope to show is that unpicking dodgy claims, unpicking the evidence behind dodgy claims, isn’t uh a kind of nasty carping activity; it’s socially useful, but it’s also a kind of an— an extremely valuable explanatory tool.

Because real science is all about critically appraising the

evidence for somebody else’s position. That’s what happens in academic journals. That’s what happens uh at academic

conferences. The Q&A session after a postdoc presents data is often uh a blood bath. And nobody minds that. We actively welcome it. It’s like a consenting intellectual S&M activity.

(28)

4/31

So what I’m gonna show you is all of the main things, all of the main features of my discipline: [4. evidence-based] medicine.

And I will talk you through all of these and demonstrate how they work, exclusively using examples of people getting stuff wrong.

So we’ll start with the absolute weakest form of evidence known to man, and that is [5. authority]. In science, we don’t care how many letters you have after your name. In science, we want to know what your reasons are for believing something. How do you know that something is good for us or bad for us? But we’re also unimpressed by authority, because it’s so easy to contrive.

(29)

5/31

This is somebody called Dr. Gillian McKeith, Ph.D, or, to give her full medical title, Gillian McKeith. (Laughter)

Again, every country has somebody like this. She is our TV diet [6. guru]. She has massive kind of five series of prime- time television, giving out very lavish and exotic health

advice. She, ah, it turns out, has uh a non-accredited

correspondence course Ph.D. from somewhere in America.

She also boasts that she’s a certified professional member of the American Association of Nutritional Consultants,

which sounds very glamorous and exciting.

(30)

6/31

You get a certificate and everything. This one belongs to my dead cat Hetti. She was a horrible cat. You just go to the

website, fill out the form, give them $60, and it arrives in the post.

Now that’s not the only reason that we think this person is an idiot. She also goes on ah, ah, ah and says things like, you

should eat lots of dark green leaves, because they contain lots of chlorophyll, and that will really oxygenate your blood. And anybody who’s done school biology [7. remembers] that

chlorophyll and chloroplasts only make oxygen in sunlight, uh and it’s quite dark in your bowels after you’ve eaten spinach.

(31)

7/31

Next, we need proper science, proper evidence. So, uh “Red wine can help prevent breast cancer.” This is a headline from

the Daily Telegraph in the U.K. “A glass of red wine a day could help prevent breast cancer.” So you go and find this paper, and what you find is it is a real piece of science. It is a description of the changes in one enzyme when you drip a chemical extracted from some red grape skin onto some cancer cells in a dish on a bench in a [8. laboratory] somewhere. And that’s a really useful thing to describe in a scientific paper, but on the question of

your own personal risk of getting breast cancer if you drink red wine, it tells you absolutely bugger all. Okay?

(32)

8/31

Actually, it turns out that your risk of breast cancer

actually increases slightly with every amount of alcohol that you drink.

So, all we want is studies in real human people. And

here’s another example. This is from uh Britain’s leading diet and nutritionist in the Daily Mirror, which is our

second biggest selling newspaper. “An Australian study in 2001 found that olive oil in combination with fruits,

vegetables and pulses offers [9. measurable] protection against skin wrinkling.”

(33)

9/31

And then they give you advice: “If you eat olive oil and vegetables, you’ll have fewer skin wrinkles.” And they very helpfully tell you how to go and find the paper. So you go and find the paper, and what you find is an

observational study, right? Obviously nobody has been able to go back to 1930, get all the people born in one

maternity unit, and half of them eat lots of fruit and veg and olive oil, and then half of them eat McDonald’s,

and then we see how many wrinkles you’ve got later. You have to take a [10. snapshot] of how people are now.

(34)

10/31

And what you find is, of course, people who eat veg and

olive oil have fewer skin wrinkles. But that’s because people who eat fruit and veg and olive oil, they’re freaks, okay?

They’re not normal, they’re like you; they come to events like this, right? They are posh, they’re wealthy, they’re less likely to have outdoor jobs, they’re less likely to do manual labor, they have better social support, they’re less likely to smoke— so for a whole host of fascinating, interlocking social, political and cultural reasons, they are less likely to have [11. skin wrinkles]. That doesn’t mean that it’s the vegetables or the olive oil. (Laughter)

(35)

11/31

So ideally what you want to do is a trial. And everybody thinks they’re very familiar with the idea of a trial. Trials are very old. The first trial was in the Bible— Daniel 1:12.

It’s very straightforward— you take a bunch of people, you split them in half, you treat one group one way, you treat the other group the other way, and a little while later, you follow them up and see what happened to each of them.

So I’m gonna tell you about one trial, which is probably the most well-reported trial in the U.K. news media over the past decade. And this is the trial of fish oil pills.

(36)

12/31

And the claim was [12. fish oil] pills improve school

performance and behavior in mainstream children. And they said, “We’ve done a trial. All the previous trials

were positive, and we know this one’s gonna be too.”

That should always ring alarm bells. Because if you

already know the answer to your trial, you shouldn’t be doing one. Either you’ve rigged it by design, or ah you’ve got enough data so there’s no need to [13. randomize]

people anymore.

(37)

13/31

So, this is what they were gonna do in their trial. They

were taking 3,000 children, they were gonna give them all these huge fish oil pills, six of them a day, and then a year later, they were gonna measure their school exam

performance and compare their school exam

performance against what they predicted their exam

performance would have been if they hadn’t had the pills.

Now can anybody spot a [14. flaw] in this design? And no professors of clinical trial methodology are allowed to

answer this question.

(38)

14/31

So there’s no control; Okay, there’s no control group, but that sounds really techie, right? That sounds really— now, that’s a technical term. The kids got the pills, and then their

performance improved. What else could it possibly be if it wasn’t the pills?

They got older, okay? We all develop over time. And of course, also there’s the placebo effect. The placebo effect is one of the most fascinating things in the whole of [15. medicine]. It’s not just about taking a pill, and your performance and your pain getting better. It’s about our beliefs and expectations. It’s about the cultural meaning of a treatment.

(39)

15/31

And this has been demonstrated in a whole raft of

fascinating studies comparing one kind of uh placebo

against another. So we know, for example, that two sugar pills a day are a more effective treatment for getting rid of gastric ulcers than one sugar pill. Two sugar pills a day [16. beats] one sugar pill a day. And that’s an

outrageous and ridiculous finding, but it’s true. We know from three different studies on three different types of

pain that a saltwater injection is a more effective treatment for pain than taking a sugar pill,

(40)

16/31

taking a dummy pill that has no medicine in it, not

because the injection or the pills do anything physically to the body, but because an injection feels like a much more dramatic intervention.

So we [17. know] that our beliefs and expectations can be manipulated, which is why we do trials where we

control ag-- against a placebo, where one half of the people get the real treatment and the other half get placebo.

(41)

17/31

But that’s not enough. What I’ve just shown you are examples of the very simple and straightforward ways that journalists and food supplement pill peddlers and naturopaths can distort evidence for their own purposes.

What I find really [18. fascinating] is that the

pharmaceutical industry uses exactly the same kinds of tricks and devices, but slightly more sophisticated

versions of them, in order to distort the evidence that they give to doctors and patients, and which we use to make vitally important decisions.

(42)

18/31

So, firstly, trials against placebo: everybody thinks they know that a trial should be a comparison of your new [19. drug] against

placebo. But actually in a lot of situations that’s wrong. Because often we already have a very good treatment that is currently available, so we don’t want to know that your alternative new

treatment is better than nothing. We want to know that it’s better than the best currently available treatment that we have.

And yet, repeatedly, you consistently see people doing trials still against placebo. And you can get license to bring your drug to

market with only data showing that it’s better than nothing, which is [20. useless] for a doctor like me trying to make a decision.

(43)

19/31

But that’s not the only way you can rig your data. You can also rig your data by making the thing you compare your new drug against really rubbish. You can give the competing drug in too low a dose, so that people aren’t properly treated. You can give the competing drug in too high a dose, so that people get side effects. And this is

exactly what happened with antipsychotic medication for schizophrenia.

(44)

20/31

20 years ago, a new generation of antipsychotic drugs were brought in and the promise was that they would have fewer side effects. So people set about doing trials of these new drugs [21. against] the old drugs, but they gave the old drugs in ridiculously high doses— 20

milligrams a day of haloperidol.

And it’s a foregone conclusion, if you give a drug at that high a dose, that it will have more side effects and that your new drug will look better.

(45)

21/31

10 years ago, history repeated itself, interestingly, when risperidone, which was the first of the new-generation

antipsychotic drugs, came off copyright, so anybody

could make copies. Everybody wanted to show that their drug was better than risperidone, so you see a bunch of trials [22. comparing] new antipsychotic drugs against risperidone at eight milligrams a day. Again, not an

insane dose, not an illegal dose, but very much at the

high end of normal. And so you’re bound to make your new drug look better.

(46)

22/31

And so it’s no surprise that overall, industry-funded trials are four times more likely to give a positive result than independently sponsored trials.

But and it’s a big but(t)— (Laughter) it turns out, when you look at the methods used by industry-funded trials, that they’re actually [23. better] than independently sponsored trials. And yet, they always manage to get the result that

they want. So how does this work? How can we explain this strange phenomenon?

(wtf? means "what the fuck?"; cf. "what the hell?")

(47)

23/31

Well it turns out that what happens is the negative data goes missing in action; it’s withheld from doctors and patients. And this is the most important aspect of the whole story. It’s at the top of the [24. pyramid] of

evidence. We need to have all of the data on a particular treatment to know whether or not it really is effective.

And there are two different ways that you can spot

whether some data has gone missing in action. You can use statistics, or you can use stories. I personally prefer statistics, so that’s what I’m gonna do first.

(48)

24/31

This is something called a funnel plot. And a funnel plot is a very clever way of spotting if small negative trials have disappeared, have gone missing in action. So this is a graph of all of the trials that have been done on a particular treatment. And as you go up towards the top of the graph, what you see is each [25. dot] is a trial. And as you go up, those are the bigger trials, so they’ve got less error in them. So they’re less likely to be randomly false

positives, randomly false negatives. So they all cluster together. The big trials are closer to the true answer. Then as you go further

down at the bottom, what you can see is, over on this side, the spurious false negatives, and over on this side, the spurious false positives.
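Editor's illustration, not part of the talk or the slides: the funnel plot described here is easy to simulate. The sketch below generates many trials of a treatment with no true effect and plots each trial's estimated effect against its size; big trials cluster near the true answer at the top, small trials scatter at the bottom, and publication bias would show up as a missing bottom corner.

```python
# Simulated funnel plot (editor's illustration): many trials of a treatment
# whose true effect is zero. Each dot is a trial; larger trials have smaller
# sampling error, so they cluster near zero at the top of the plot.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sizes = rng.integers(20, 2000, size=200)          # patients per trial
effects = rng.normal(0.0, 1.0 / np.sqrt(sizes))   # estimated effect per trial

plt.scatter(effects, sizes, s=12)
plt.axvline(0.0, linestyle="--")                  # the true (null) effect
plt.xlabel("estimated treatment effect")
plt.ylabel("trial size (larger = less error)")
plt.title("Funnel plot with no publication bias: a symmetric funnel")
plt.show()
```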

(49)

25/31

If there is publication bias, if small negative trials have gone missing in action, you can see it on one of these

graphs. So you can see here that the small negative trials that should be on the bottom left have disappeared. This is a graph demonstrating the presence of publication

bias in studies of publication [26. bias]. And I think

that’s the funniest epidemiology joke that you will ever hear.

(50)

26/31

That’s how you can prove it statistically, but what about

stories? Well they’re heinous, they really are. This is a drug called reboxetine. This is a drug that I myself have prescribed to patients. And I’m a very nerdy doctor. I hope I try to go out of my way to try and read and understand all the

literature. I read the trials on this. They were all [27.

positive]. They were all well-conducted. I found no flaw.

Unfortunately, it turned out, that many of these trials were withheld. In fact, 76 percent of all of the trials that were done on this drug were withheld from doctors and patients.

(51)

27/31

Now, if you think about it, if I tossed a coin a hundred times, and I’m allowed to withhold from you the answers half the times, then I can convince you that I have a coin with two heads, okay?

If we remove half of the data, we can never know [28.

what] the true effect size of these medicines is. And this is not an isolated story. Around half of all of the trial data on antidepressants has been withheld, but it goes way beyond that.
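Editor's illustration, not part of the talk: the coin-toss analogy can be checked in a few lines. Toss a fair coin 100 times, withhold half of the results (hiding tails first), and the reported record looks like a two-headed coin.

```python
# Sketch of the coin-toss analogy: withholding half of the outcomes (the
# "negative data") makes a fair coin look like a two-headed one.
import random

random.seed(1)
tosses = [random.choice("HT") for _ in range(100)]   # a fair coin

heads = [t for t in tosses if t == "H"]
tails = [t for t in tosses if t == "T"]
reported = (heads + tails)[:50]                      # the half we choose to publish

print(tosses.count("H") / len(tosses))      # roughly 0.5: what actually happened
print(reported.count("H") / len(reported))  # at or near 1.0: what the record shows
```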

(52)

28/31

The Nordic Cochrane Group were trying to get a hold of the data on that to bring it all together. The Cochrane

Groups are an international nonprofit collaboration that produce systematic [29. reviews] of all of the data that has ever been shown. And they need to have access to all of the trial data.

But the companies withheld that data from them, and so did the European Medicines Agency for three years. This is a problem that is currently [30. lacking] a solution.

(53)

29/31

And to show how big it goes, this is a drug called Tamiflu,

which governments around the world have spent billions and billions of dollars on. And they spend that money on the

promise that this is a drug which will reduce the rate of complications with flu.

We already have the data showing that it reduces the duration of your flu by a few hours. But I don’t really care about that.

Governments don’t care about [31. that]. I’m very sorry if you have the flu, I know it’s horrible, but we’re not going to spend billions of dollars trying to reduce the duration of your flu symptoms by half a day.

(54)

30/31

We prescribe these drugs, we stockpile them for

emergencies on the understanding that they will reduce the number of complications, which means pneumonia and

which means death. The infectious diseases Cochrane

Group, which are based in Italy, has been trying to get the full data in a usable form out of the drug companies so that they can make a full decision about whether this [32. drug]

is effective or not, and they’ve not been able to get that

information. This is undoubtedly the single biggest ethical problem facing medicine today. We cannot make decisions in the absence of all of the information.

(55)

31/31

So it’s a little bit difficult from there to spin in some kind of positive conclusion. But I would say this: I think that sunlight is the best disinfectant. All of these things are happening in plain sight, and they’re all protected by a force field of, of, of tediousness. And I think, with all of the problems in science, one of the best things that we can do is to lift up the lid, finger around in the

mechanics and [33. peer in].

Thank you very much.

(56)

Listening training with TED

(57)

Materials

1A5, 1L1

Tim Harford: Trial, Error, and the God Complex

A well-known economist, and a British English speaker who talks slowly

A critique of the God Complex (the conviction that one is all-knowing and all-powerful)

The God Complex is one of the factors behind the formation of the "nuclear power myth"

2R

Richard P. Feynman: The Feynman Lectures on Physics, Volume 1: Chapter 4: Conservation of Energy.

(58)
