Expert Rule Versus Majority Rule Under Partial Information, II

DANIEL BEREND†  berend@cs.bgu.ac.il
Departments of Mathematics and of Computer Science, Ben-Gurion University, Beer-Sheva, 84105, Israel

LUBA SAPIR*  lsapir@cs.bgu.ac.il
Department of Mathematics, Ben-Gurion University, Beer-Sheva, 84105, Israel
Abstract. The main purpose of this paper is clarifying the connection between some characteristics of a deciding body and the probability of its making correct decisions. In our model a group of decision makers is required to select one of two alternatives. We assume the probabilities of the decision makers being correct are independent random variables distributed according to the same given distribution rule. This distribution belongs to a general family, containing the uniform distribution as a particular case. We investigate the behavior of the probability of the expert rule being optimal, as well as that of the majority rule, both as functions of the distribution parameter and the group size. The main result is that for any value of the distribution parameter the expert rule is far more likely to be optimal than the majority rule, especially as the deciding body becomes larger.

Keywords: Decision rule, Expert rule, Majority rule, Optimality, Probability
1. Introduction
In a wide variety of areas (science, business, family and others) decisions are to be made on a group basis. Even some nominally individual decisions are influenced by external advice and information. Thus group decision making is at least as important as individual decision making.

In this paper we focus on the binary choice model, which goes back at least as far as Condorcet [10]. In this model, a group of decision makers is required to select one of two alternatives, of which exactly one is regarded as correct. We assume the decision makers to be independent. Unlike the classical social choice model, the individuals in the group share a common goal: to identify the correct alternative. A decision rule is a rule for translating the individual opinions of the members into a group decision. If the size of the group is $n$, the number of all possible decision rules is $2^{2^n}$. A decision rule is optimal if it maximizes the probability that the group will make the correct decision for all possible combinations of opinions. If the probability of each member to make the right choice is known and the alternatives are symmetric, then the optimal decision rule is a weighted majority rule. (See Nitzan and Paroush [16], [17], [19].)

† Requests for reprints should be sent to Daniel Berend, Departments of Mathematics and of Computer Science, Ben-Gurion University, Beer-Sheva, 84105, Israel.
* Research partially supported by an Eshkol Fellowship from the Israeli Science Ministry.
There are several directions in the study of the dichotomous choice model. Condorcet's Jury Theorem is studied in various setups. Some deal with the case where the correctness probabilities of the decision makers are not necessarily equal: Grofman, Owen and Feld [13], Miller [15], Young [22], Paroush [20], Berend and Paroush [2]. Others relate to the generalization whereby there may be some dependence between the group members: Boland [8], Berg [4], [5] and Ladha [14]. Special attention has been drawn to special decision rules arising from indirect voting systems (see, for example, Boland, Proschan and Tong [9], Berg [6], Berg and Paroush [7]). In some of the models, the group has freedom in choosing the decision rule. The identification of the optimal rule is of primary importance here (see, for example, Nitzan and Paroush [16], [17], [19], Grofman et al. [13], Gradstein and Nitzan [12]).

Our goal is identifying the optimal decision rule under partial information on the decision skills. Specifically, we assume the correctness probabilities of the group members to be independent random variables, distributed according to some given distribution rule. Moreover, while the values these variables take are unknown, we assume that the ranking of the members in terms of their individual correctness probabilities is (at least partly) known. Thus, one can follow rules based on this ranking. The extremes are the expert rule, following the advice of the most qualified individual while ignoring all the rest, and the majority rule, always taking the majority advice, even when advocated by mostly less qualified group members (strictly speaking, the latter rule is defined only for odd $n$). Clearly, there are numerous other decision rules in-between these two extremes.
Denote by $P_e(n)$ the probability of the expert rule being optimal and by $P_m(n)$ the probability of the majority rule being optimal. Nitzan and Paroush [19] obtained an explicit expression for $P_e(n)$ for the case of log-normal distribution of the individual correctness probabilities. Nitzan and Paroush [18], [19] considered the case where these probabilities are uniformly distributed on $[\frac12,1]$. Using the Monte Carlo method they estimated $P_e(n)$ for small values of $n$, from 3 to 9. They also identified all rules which could be optimal for $n=5$, and again by Monte Carlo estimated the probability of each of them being optimal. The majority rule comes last, with $P_m(5)=0.022$, while the expert rule is the second best, with $P_e(5)=0.199$, and the leading one is the balanced expert rule of order 4, given by the weights (2,1,1,1,0), with probability 0.229 of being optimal. This line of research was continued by Berend and Harmse [1], still for the uniform distribution on $[\frac12,1]$. They obtained an explicit formula for $P_e(n)$ and an upper bound for $P_m(n)$. The combination of these results implies that the latter probability decays to 0 much more quickly than the former.
This direction was followed by Sapir [21], dealing with the situation of logarithmic expertise levels distributed exponentially. In this case, the probabilities $P_e(n)$ and $P_m(n)$ (as well as that of the so-called balanced expert rule of order $n$) were calculated explicitly. A comparison of these probabilities shows that, again, the expert rule has a much better chance of being optimal than the majority rule.
This paper continues in the direction of Berend and Harmse [1]. The correctness probabilities are drawn from a family of distributions, containing the uniform distribution as a particular case. Our goal is to compare the asymptotic behavior of the probability of the expert rule being optimal with that of the majority rule. It turns out again that for sufficiently large $n$ the probability of the expert rule being optimal exceeds by far that of the majority rule. Thus the main question addressed in this paper is not what the probability of each decision rule is to provide the right decision. These probabilities may be assumed to be quite high, and in fact converge to 1 as the number of experts increases, as long as a "reasonable" decision rule is employed. These probabilities are what could be called "the average case". Here we deal rather with "the worst case". Namely, each decision rule has some borderline cases. One should hesitate to use the majority rule if, say, in a committee comprising 11 members, the 6 members known to be least qualified happen to favor one view while all 5 more qualified members hold the opposite view. Similarly, employing the expert rule would seem strange if the top expert is opposed by all the others. To claim that, in a specific case, the majority rule (expert rule, respectively) is optimal, is tantamount to asserting that we should indeed favor the opinion of the 6 against the 5 in the first example (of the top expert in the second example, respectively). Consequently, by comparing the optimality probabilities, as done in this paper, we cannot conclude that one rule is better than another. Rather, the comparison provides us with a view of the performance of the rules in question in some extreme cases, and hints to what extent we should rather modify them in those cases.

Section 2 is devoted to a more accurate description of our model. In Section 3 we present the main results, and in Section 4 their proofs.

The authors express their gratitude to J. Rayner and to the referees for their very helpful comments on the first draft of the paper.
2. The Model
The group consists of $n$ members, faced by two alternatives, of which exactly one is correct. The alternatives are assumed to be symmetric. Namely, they are a priori equally likely to be correct, and the benefit (or loss) associated with a success (correct choice) or a failure (incorrect choice) is independent of the particular alternative correctly (or incorrectly) chosen. We assume that the members are independent in their choices. Denote by $p_i$ the probability of the $i$th expert making the right choice. The vector $\tilde p=(p_1,p_2,\dots,p_n)$ is called the vector of abilities or skills.
Designating the alternatives (in an arbitrary way) as first and second, we also define the random variables
$$X_i=\begin{cases}\phantom{-}1,&\text{the $i$th expert selects the first alternative},\\-1,&\text{the $i$th expert selects the second alternative}.\end{cases}$$
A decision profile $\tilde x=(x_1,x_2,\dots,x_n)$ is the $n$-tuple obtained from a specific instance of these variables. A decision rule is a rule for translating the individual opinions into a group decision. More formally, using the terminology of Nitzan and Paroush [19], a decision rule $\varphi$ is a function from the set of all possible decision profiles to the set $\{-1,1\}$. If $\varphi(\tilde x)=1$ then the first alternative is selected, while if $\varphi(\tilde x)=-1$ then the second alternative is selected. The number of all possible decision rules is $2^{2^n}$. A decision rule is neutral if $\varphi(-x_1,-x_2,\dots,-x_n)=-\varphi(x_1,x_2,\dots,x_n)$ for every decision profile $\tilde x=(x_1,x_2,\dots,x_n)$. We assume our decision rules to be neutral.
A decision rule is optimal if it maximizes the probability of the group to make the correct decision for all possible combinations of opinions. If the members indexed by some subset $A\subseteq\{1,2,\dots,n\}$ of the group recommend the first alternative, while those indexed by $B=\{1,\dots,n\}\setminus A$ recommend the second, then the first alternative should be chosen if and only if
$$\prod_{i\in A}\frac{p_i}{1-p_i}>\prod_{i\in B}\frac{p_i}{1-p_i}\qquad(2.1)$$
or, equivalently,
$$\sum_{i\in A}\ln\frac{p_i}{1-p_i}>\sum_{i\in B}\ln\frac{p_i}{1-p_i}\qquad(2.2)$$
(see Nitzan and Paroush [16], [17], [19]). The optimal decision rule for a group of $n$ experts turns out to be a weighted majority rule. Here a decision rule is a weighted majority rule if it satisfies $\varphi(\tilde x)=\operatorname{sign}\left(\sum_{i=1}^n w_ix_i\right)$ for every decision profile $\tilde x$, where $\tilde w=(w_1,w_2,\dots,w_n)$ has non-negative entries and satisfies $\sum_{i=1}^n w_ix_i\ne0$ for any profile $\tilde x$. Such a vector $\tilde w$ is a system of weights. Note that a weighted majority rule may be represented by many systems of weights.
In view of (2.1) and (2.2) it is natural to define the expertise of an individual, whose probability of being correct is $p$, as $\frac{p}{1-p}$, and his logarithmic expertise as $\ln\frac{p}{1-p}$. It will be convenient to consider the functions $F$ and $f$ defined by:
$$F(p)=\frac{p}{1-p},\qquad f(p)=\ln\frac{p}{1-p}=\ln F(p).$$
With these notations, (2.2) may be restated to say that the optimal decision rule is a weighted majority rule, with weights $w_i=f(p_i)$. In other words, $\varphi(\tilde x)=\operatorname{sign}\left(\sum_{i=1}^n f(p_i)x_i\right)$ (see Nitzan and Paroush [19], Theorem 2.3.1).

In the sequel, we will employ multi-dimensional versions of the functions $f$ and $F$, denoted in the same way. Thus, the combined expertise of a set of $l$ experts with correctness probabilities $p_1,\dots,p_l$ is $F(p_1,\dots,p_l)=\prod_{j=1}^lF(p_j)$, and the combined logarithmic expertise of the same experts is $f(p_1,\dots,p_l)=\sum_{j=1}^lf(p_j)$.
3. Main Results
The results in this section attempt to generalize those of the case of uniform distribution, discussed by Berend and Harmse [1]. We are concerned with the probability $P_e(n)$ of the expert rule being optimal and the probability $P_m(n)$ of the majority rule being optimal. One verifies easily that $P_e(n)$ is the probability that, in the case the top expert disagrees with all other experts, the top expert is more likely to be correct than the rest. Strictly speaking, $P_m(n)$ is defined only for odd $n=2s+1$. It is equal to the probability that the bottom $s+1$ experts, when opposed by the top $s$ experts, are more likely to be correct than the top ones. Our primary goal is to compare the probability that the expert rule is optimal with the probability that the majority rule is optimal. The point of this comparison is that it substantiates the conclusion obtained by Berend and Harmse [1] in the uniform case, by showing that the expert rule is far more likely to be optimal than the majority rule for a whole family of distributions, which contains the uniform distribution as a very particular case. Here we assume that the probability $p_i$ of the $i$th expert to make a correct choice is distributed according to the (same) density function
$$\psi(x)=\begin{cases}2\alpha(2x-1)^{\alpha-1},&\frac12\le x\le1,\\0,&\text{otherwise},\end{cases}\qquad(3.1)$$
where $\alpha$ is any positive parameter. If $\alpha=1$ we get the uniform distribution $U[\frac12,1]$ as a special instance of (3.1).
Why should one consider this family of distributions? The uniform distribution on the interval $[\frac12,1]$ looks like a natural guess when one has no information on the distribution, yet assumes that people perform better by considering the possibilities than by tossing a coin. Yet, the results obtained in the case of the uniform distribution may seem too narrow, and invite the question as to how applicable they are if the distribution is somewhat perturbed. The family selected here is wide enough to cover both cases where the experts are mostly correct (large $\alpha$) and where they mostly perform only marginally better than coin tossing ($\alpha\to0$). Let us also mention in passing that, if $p_i$ is distributed as in (3.1), and we denote $q_i=1-p_i$, then $\frac{1}{p_i-q_i}$ is distributed Pareto with parameter $\alpha$. Thus the results of the current paper prove the robustness of those obtained earlier only for the uniform distribution.
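For intuition, the density (3.1) is straightforward to sample from, since its distribution function on $[\frac12,1]$ is $(2x-1)^{\alpha}$. The following sketch (ours, with hypothetical names, not part of the paper) also illustrates the Pareto property just mentioned:

```python
import random

def sample_p(alpha, rng=random):
    """Draw p with density 2*alpha*(2x-1)**(alpha-1) on [1/2, 1].

    The CDF is (2p-1)**alpha, so inverting it gives
    p = (1 + u**(1/alpha)) / 2 with u ~ U(0, 1).
    """
    return (1 + rng.random() ** (1.0 / alpha)) / 2

random.seed(0)
ps = [sample_p(2.0) for _ in range(100000)]

# Pareto property: 1/(p - q) = 1/(2p - 1) satisfies
# P(1/(2p-1) > t) = t**(-alpha); e.g. for alpha = 2 and t = 2
# the probability is 2**(-2) = 0.25.  (2p-1 < 1/2 is the same event.)
frac = sum(1 for p in ps if 2 * p - 1 < 0.5) / len(ps)
```

For $\alpha=1$ the sampler reduces to the uniform distribution on $[\frac12,1]$.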
It will be convenient to introduce the following constants:
$$I_k=2\alpha\int_0^1\frac{(1-t)^{\alpha-1}t^k}{(1+t)^{\alpha+1}}\,dt,\qquad k=0,1,2,\dots\,.$$
It is easy to see that $1=I_0>I_1>I_2>\dots$.
The following theorem describes the asymptotic behavior of $P_e(n)$.

Theorem 1  The probability of the expert rule being optimal decays exponentially as the number of experts grows. More precisely:
1) If $0<\alpha\le\frac12$, then
$$2\alpha nI_1^{n-1}-2\alpha^2nI_2^{n-1}\le P_e(n)\le2\alpha nI_1^{n-1}-2\alpha^2nI_2^{n-1}+\alpha nI_3^{n-1}.\qquad(3.2)$$
2) If $\alpha\ge\frac12$, then
$$2\alpha nI_1^{n-1}-2\alpha^2nI_2^{n-1}\le P_e(n)\le2\alpha nI_1^{n-1}.\qquad(3.3)$$

Remark 1. Clearly, for large $n$ we may omit the last two terms on the right hand side of (3.2) and obtain the (less accurate but simpler) bound of (3.3). However, this is not the case for small $n$.
Our next result relates to $P_m(n)$. It provides an upper bound for it.

Theorem 2  For odd-size $n=2s+1$, the probability of the majority rule being optimal is bounded above as follows:
1) If $0<\alpha\le1$, then
$$P_m(n)\le\frac{\alpha^s}{(s!)^2}\,.$$
2) If $\alpha>1$, then
$$P_m(n)\le\frac{n}{(s!)^2}\left(\frac{\alpha}{2}\right)^s.$$
In Theorem 2 we chose the simplest form of upper bounds for $P_m(n)$. Actually, our calculations enable us to obtain more accurate results, given by Theorem 3 below. To present them we use the following notations:
$$A_{s,\alpha}=\alpha\int_0^1\left[(1-x^2)\,x^{2\alpha-1}\ln\frac{1+x}{1-x}\right]^sx^{\alpha-1}\,dx,$$
$$B_{s,\alpha}=\alpha\int_0^1\left[(1-x^2)\,x^{\alpha-1}\ln^2\frac{1+x}{1-x}\right]^sx^{\alpha-1}\,dx.\qquad(3.4)$$
Theorem 3  For odd-size $n=2s+1$, the probability of the majority rule being optimal is bounded above as follows:
1) If $0<\alpha\le1$, then
$$P_m(n)\le\frac{n}{(s!)^2}\left(\frac{\alpha}{2}\right)^sA_{s,\alpha}\le\frac{n}{(s!)^2}\left(\frac{\alpha}{\alpha+1}\left(\frac{3\alpha}{2\alpha+2}\right)^{\alpha}\right)^s.$$
2) If $\alpha>1$, then
$$P_m(n)\le\frac{n}{(s!)^2}\left(\frac{\alpha^2}{2(\alpha+1)}\left(1-\frac{2}{\alpha+1}\right)^{\frac{\alpha-1}{2}}\right)^sB_{s,\alpha}\le\frac{n}{(s!)^2}\left(\frac{\ln(4\alpha+7.14)}{e}\right)^{n-1}.$$
Remark 2. For $\alpha=1$, Theorem 3 yields $P_m(n)\le\frac{n}{(s!)^2}\left(\frac38\right)^s$, which is somewhat worse than the upper bound $P_m(n)\le\frac{n}{(s!)^2}\left(\frac13\right)^s$ obtained by Berend and Harmse [1]. Our method enables us to obtain tighter bounds for $0<\alpha\le1$ by simply taking more terms of the relevant Taylor series in the proof. For example, taking one more term we get for $0<\alpha\le1$:
$$P_m(n)\le\frac{n}{(s!)^2}\left(\frac{\alpha}{\alpha+1}\cdot\frac{11\alpha+5+\sqrt{55(\alpha+1)^2-30}}{8(\alpha+1)-\frac{3}{\alpha+1}+\sqrt{55(\alpha+1)^2-30}}\left(\frac{3\alpha}{\alpha+1+\frac15\sqrt{55(\alpha+1)^2-30}}\right)^{\alpha}\right)^s.\qquad(3.5)$$
For $\alpha=1$ this already implies $P_m(n)\le\frac{n}{(s!)^2}\left(\frac13\right)^s$. The derivation of (3.5) will be explained after the proof of Theorem 2.
Combining the above theorems, we can compare the asymptotic behavior of $P_e(n)$ and $P_m(n)$. We see that, for any value of $\alpha$, $P_e(n)$ decays to 0 exponentially as $n$ increases, while $P_m(n)$ decays to 0 super-exponentially fast.

Theorem 1 also shows the influence of $\alpha$ on the probability of the expert rule being optimal. In fact, routine estimates show that
$$\frac{1}{2(\alpha+1)}\le I_1\le\frac{1}{\alpha+1}\,.$$
This shows that, as a function of $\alpha$, $P_e(n)$ decreases as $\frac{1}{\alpha^{n-2}}$ for large $\alpha$.

Table 1 provides the values of $P_e(n)$ for a few values of $\alpha$ and $n$. The calculations were done by the Monte Carlo method using $10^4$ iterations.
Table 1. The probability of the expert rule being optimal.

  n\α   0.05   0.25   0.5    1      1.5    2      3      10     100
  3     0.99   0.94   0.85   0.68   0.57   0.48   0.36   0.14   0.02
  5     0.98   0.74   0.48   0.20   0.10   0.05   0.02   0.00   0.00
  7     0.95   0.54   0.23   0.05   0.01   0.00   0.00   0.00   0.00
  15    0.79   0.10   0.01   0.00   0.00   0.00   0.00   0.00   0.00
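The Monte Carlo computation behind Table 1 can be reproduced along the following lines (a sketch of ours; the paper does not list its code). The expert rule is optimal exactly when the top expert's logarithmic expertise exceeds the combined logarithmic expertise of the remaining $n-1$ experts:

```python
import math
import random

def estimate_Pe(n, alpha, trials=10000, rng=random):
    """Monte Carlo estimate of P_e(n) under the density (3.1)."""
    hits = 0
    for _ in range(trials):
        # inverse-CDF sampling of the n skills, sorted increasingly
        ps = sorted((1 + rng.random() ** (1.0 / alpha)) / 2 for _ in range(n))
        f = [math.log(p / (1 - p)) for p in ps]   # logarithmic expertises
        if f[-1] > sum(f[:-1]):                   # best expert beats the rest
            hits += 1
    return hits / trials

# estimate_Pe(5, 1.0) should come out near the table value 0.20.
```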
Numerical calculations tend to show that the influence of $\alpha$ on $P_m(n)$ is opposite to its influence on $P_e(n)$. Thus Table 2 indicates that, as a function of $\alpha$, $P_m(n)$ is increasing. (Note that this also agrees with what one might guess in view of Theorem 2.) The data was again computed by the Monte Carlo method.
Figure 1 represents graphically the data of Tables 1 and 2 in the case $n=5$.

Table 2. The probability of the majority rule being optimal.

  n\α   0.05   0.25   0.5    1      1.5    2      3      10     100
  3     0.01   0.06   0.15   0.32   0.43   0.52   0.64   0.86   0.99
  5     0.00   0.00   0.00   0.02   0.05   0.07   0.13   0.40   0.85
  7     0.00   0.00   0.00   0.00   0.00   0.00   0.01   0.08   0.49
  15    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00
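The data of Table 2 can be reproduced in the same Monte Carlo fashion (again a sketch of ours): for odd $n=2s+1$, the majority rule is optimal exactly when the combined logarithmic expertise of the bottom $s+1$ experts exceeds that of the top $s$:

```python
import math
import random

def estimate_Pm(n, alpha, trials=10000, rng=random):
    """Monte Carlo estimate of P_m(n), for odd n = 2s + 1, under (3.1)."""
    s = (n - 1) // 2
    hits = 0
    for _ in range(trials):
        ps = sorted((1 + rng.random() ** (1.0 / alpha)) / 2 for _ in range(n))
        f = [math.log(p / (1 - p)) for p in ps]
        if sum(f[:s + 1]) >= sum(f[s + 1:]):   # bottom s+1 outweigh top s
            hits += 1
    return hits / trials

# estimate_Pm(5, 1.0) should come out near the table value 0.02,
# and the estimates grow with alpha, as Table 2 indicates.
```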
Figure 1. $P_e(5)$ and $P_m(5)$ as functions of $\alpha$ (the two curves cross at $\alpha^*\approx1.815$).
For integer $\alpha=m$, $\psi(x)$ is a polynomial on $[\frac12,1]$. In this case, one can obtain an exact expression for $P_e(n)$ in terms of an infinite Leibniz-type series. An expansion of this type may be obtained for $\alpha=\frac12$ as well. We refer to Berend and Sapir [3] for details.
4. Proofs
For the proof of Theorem 1 we need the following lemma.

Lemma 1  The following inequalities hold:
1) $(1-x)^{\alpha}\ge1-\alpha x$, for $0\le x\le1$, $\alpha\ge1$.
2) $\left(\dfrac{1-x}{1+x}\right)^{\alpha}\ge1-2\alpha x+2\alpha^2x^2-\alpha x^3$, for $0\le x\le1$, $0<\alpha\le\dfrac12$.
3) $\left(\dfrac{1-x}{1+x}\right)^{\alpha}\le1-2\alpha x+2\alpha^2x^2$, for $0\le x\le1$, $\alpha>0$.

The proof is technical, and we omit the details.
Proof of Theorem 1: Let $p,q_1,\dots,q_{n-1}$ be independent random variables with the same density $\psi(x)$. Denote $\tilde q=(q_1,\dots,q_{n-1})$, $F(\tilde q)=\prod_{i=1}^{n-1}F(q_i)$ and $d\tilde q=dq_1\dots dq_{n-1}$. Note that $F^{-1}(t)=\frac{t}{1+t}$. Then:
$$\begin{aligned}P_e(n)&=n\,\mathrm{Prob}\left(F(p)>\prod_{i=1}^{n-1}F(q_i)\right)\\&=n(2\alpha)^n\underbrace{\int\cdots\int}_{[\frac12,1]^{n-1}}\prod_{i=1}^{n-1}(2q_i-1)^{\alpha-1}\int_{F^{-1}(F(\tilde q))}^1(2p-1)^{\alpha-1}\,dp\,d\tilde q\\&=n(2\alpha)^{n-1}\underbrace{\int\cdots\int}_{[\frac12,1]^{n-1}}\prod_{i=1}^{n-1}(2q_i-1)^{\alpha-1}\left[1-\left(\frac{2F(\tilde q)}{1+F(\tilde q)}-1\right)^{\alpha}\right]d\tilde q\\&=n-n(2\alpha)^{n-1}\underbrace{\int\cdots\int}_{[\frac12,1]^{n-1}}\prod_{i=1}^{n-1}(2q_i-1)^{\alpha-1}\left(\frac{2F(\tilde q)}{1+F(\tilde q)}-1\right)^{\alpha}d\tilde q.\end{aligned}$$
Substituting $t_i=\frac{1-q_i}{q_i}$ and using the inequality $\frac{1}{1+\prod_{i=1}^{n-1}t_i}\ge1-\prod_{i=1}^{n-1}t_i$, we obtain:
$$\begin{aligned}P_e(n)&=n-n(2\alpha)^{n-1}\underbrace{\int\cdots\int}_{[0,1]^{n-1}}\prod_{i=1}^{n-1}\frac{(1-t_i)^{\alpha-1}}{(1+t_i)^{\alpha+1}}\left(\frac{1-\prod_{i=1}^{n-1}t_i}{1+\prod_{i=1}^{n-1}t_i}\right)^{\alpha}d\tilde t\\&\le n-n(2\alpha)^{n-1}\underbrace{\int\cdots\int}_{[0,1]^{n-1}}\prod_{i=1}^{n-1}\frac{(1-t_i)^{\alpha-1}}{(1+t_i)^{\alpha+1}}\left(1-\prod_{i=1}^{n-1}t_i\right)^{2\alpha}d\tilde t.\end{aligned}\qquad(4.1)$$
If $\alpha\ge\frac12$, then by Lemma 1.1, with $x=\prod_{i=1}^{n-1}t_i$ and $\alpha$ replaced by $2\alpha$, this implies
$$P_e(n)\le n\left[1-I_0^{n-1}+2\alpha I_1^{n-1}\right].$$
Now $I_0=1$, and therefore
$$P_e(n)\le2\alpha nI_1^{n-1}.\qquad(4.2)$$
Denote $x=\prod_{i=1}^{n-1}t_i$. Let $\alpha\le\frac12$. By Lemma 1.2 and the equality part of (4.1) we obtain the upper bound:
$$P_e(n)\le n\left[1-I_0^{n-1}+2\alpha I_1^{n-1}-2\alpha^2I_2^{n-1}+\alpha I_3^{n-1}\right]=2\alpha nI_1^{n-1}-2\alpha^2nI_2^{n-1}+\alpha nI_3^{n-1}.\qquad(4.3)$$
The equality part of (4.1) and Lemma 1.3 provide a lower bound for $P_e(n)$:
$$P_e(n)\ge n\left[1-I_0^{n-1}+2\alpha I_1^{n-1}-2\alpha^2I_2^{n-1}\right]=2\alpha nI_1^{n-1}-2\alpha^2nI_2^{n-1}.\qquad(4.4)$$
Combining (4.2), (4.3) and (4.4) we obtain the theorem.
The proof of Theorem 2 is a variation of part of the proof of Theorem 3. It will therefore be convenient to prove the latter first.

Proof of Theorem 3: The proof will be carried out in three stages. In the first stage we transform our situation to one with a uniform distribution. In the second stage we obtain the left upper bounds for $P_m(n)$ in the theorem, which contain the integrals $A_{s,\alpha}$ and $B_{s,\alpha}$. We will need to distinguish between two cases: $\alpha\le1$ and $\alpha>1$. The proof in each of these cases is somewhat different; the two coincide for $\alpha=1$. The third stage is technical, dealing with estimations of $A_{s,\alpha}$ and $B_{s,\alpha}$. Here we produce the final upper bounds for $P_m(n)$.
Stage 1: Suppose the correctness probability of the middle (i.e., $(s+1)$st) expert is $a$ for some $a\in[\frac12,1]$. Denote the conditional correctness probabilities of the $s$ top experts by $P_1,P_2,\dots,P_s$, and those of the $s$ bottom experts by $Q_1,Q_2,\dots,Q_s$. All variables are independent, with the first $s$ taking values in $[a,1]$, and the latter $s$ in $[\frac12,a]$. With these notations, the desired probability is the probability that the inequality
$$f(Q_1,Q_2,\dots,Q_s,a)\ge f(P_1,P_2,\dots,P_s)\qquad(4.5)$$
holds. Denote $t_i=\frac{(2p_i-1)^{\alpha}+1}{2}$, $i=1,2,\dots,2s+1$. It is easy to see that the random variables $t_i$ are i.i.d., uniformly distributed in $[\frac12,1]$. Putting
$$\mu=\frac{(2a-1)^{\alpha}+1}{2}\,,$$
and
$$T_j=\frac{(2P_j-1)^{\alpha}+1}{2}\,,\qquad V_j=\frac{(2Q_j-1)^{\alpha}+1}{2}\,,\qquad j=1,2,\dots,s,$$
we observe that $T_j\sim U[\mu,1]$ and $V_j\sim U[\frac12,\mu]$ for each $j$, and all these variables are independent. Let
$$g(t)=\ln\frac{1+(2t-1)^{1/\alpha}}{1-(2t-1)^{1/\alpha}}$$
be the logarithmic expertise, expressed as a function of $t$ (which corresponds to the variable $t_i$). As with the logarithmic expertise function $f$, we have a multi-dimensional version of $g$, also denoted by $g$, defined by
$$g(t_1,t_2,\dots,t_k)=\sum_{j=1}^kg(t_j).$$
Then (4.5) takes place if and only if:
$$g(V_1,V_2,\dots,V_s,\mu)\ge g(T_1,T_2,\dots,T_s).\qquad(4.6)$$
We shall use the first two derivatives of $g(t)$:
$$g'(t)=\frac{4}{\alpha}\cdot\frac{(2t-1)^{\frac1\alpha-1}}{1-(2t-1)^{2/\alpha}}\,,\qquad(4.7)$$
$$g''(t)=\frac{8}{\alpha^2}\cdot\frac{1-\alpha+(1+\alpha)(2t-1)^{2/\alpha}}{\left(1-(2t-1)^{2/\alpha}\right)^2}\,(2t-1)^{\frac1\alpha-2}.\qquad(4.8)$$
Clearly, $g'(t)\ge0$ for $\frac12\le t\le1$. With respect to $g''$, if $0<\alpha\le1$, then $g''(t)\ge0$ throughout the interval, while for $\alpha>1$ the function $g''$ changes its sign in the interval. Hence we have to distinguish between the cases $0<\alpha\le1$ and $\alpha>1$.
Stage 2:
Case 2.1: $0<\alpha\le1$.
Since the one-variable form of $g$ is convex on $[\frac12,1]$ and $g(\frac12)=0$, we obtain the upper bound:
$$g(V_j)\le\frac{V_j-\frac12}{\mu-\frac12}\,g(\mu)=g(\mu)-\frac{\mu-V_j}{\mu-\frac12}\,g(\mu).\qquad(4.9)$$
The convexity of $g$ also implies the lower bound:
$$g(T_j)\ge g(\mu)+(T_j-\mu)g'(\mu)=g(\mu)+\frac{T_j-\mu}{1-\mu}\,g'(\mu)(1-\mu).\qquad(4.10)$$
Thus the desired event (4.6) cannot hold unless
$$g'(\mu)(1-\mu)\sum_{j=1}^sX_j+g(\mu)\sum_{j=1}^sY_j\le g(\mu),\qquad(4.11)$$
where $X_j=\frac{T_j-\mu}{1-\mu}$ and $Y_j=\frac{\mu-V_j}{\mu-\frac12}$ are independent, uniformly distributed in $[0,1]$. As in the proof of Theorem 4 in Berend and Harmse [1], this implies that the probability of the event in (4.11) does not exceed
$$\frac{g(\mu)^{n-1}}{\left(g(\mu)\,g'(\mu)(1-\mu)\right)^s(n-1)!}\,.$$
Now the correctness probability of the $(s+1)$st expert is itself a random variable; the corresponding value of $\mu$ has the density function
$$\rho(\mu)=\begin{cases}2^n\binom{n}{s,\,s,\,1}\left(\mu-\frac12\right)^s(1-\mu)^s,&\frac12\le\mu\le1,\\0,&\text{otherwise},\end{cases}\qquad(4.12)$$
(cf. Golberg [11], Sec. 11.5). Hence
$$P_m(n)=\int_{\frac12}^1P\left(g(V_1,V_2,\dots,V_s,\mu)\ge g(T_1,T_2,\dots,T_s)\right)\rho(\mu)\,d\mu\le\frac{1}{(n-1)!}\int_{\frac12}^1\left(\frac{g(\mu)}{g'(\mu)(1-\mu)}\right)^s\rho(\mu)\,d\mu.$$
Substituting (4.7) and (4.12) in the last inequality, we obtain:
$$P_m(n)\le\left(\frac{\alpha}{2}\right)^s\frac{2n}{(s!)^2}\int_{\frac12}^1\left[1-(2\mu-1)^{2/\alpha}\right]^s(2\mu-1)^{s\left(2-\frac1\alpha\right)}\left[\ln\frac{1+(2\mu-1)^{1/\alpha}}{1-(2\mu-1)^{1/\alpha}}\right]^sd\mu.$$
The change of variable $x=(2\mu-1)^{1/\alpha}$ yields the first inequality asserted in this case:
$$P_m(n)\le\left(\frac{\alpha}{2}\right)^{s+1}\frac{2n}{(s!)^2}\int_0^1\left[(1-x^2)\,x^{2\alpha-1}\ln\frac{1+x}{1-x}\right]^sx^{\alpha-1}\,dx=\frac{n}{(s!)^2}\left(\frac{\alpha}{2}\right)^sA_{s,\alpha}.\qquad(4.13)$$
Case 2.2: $\alpha>1$.
By (4.8) we see that $g''(t)$ changes signs in the interval $[\frac12,1]$. In fact, $g''$ vanishes at the point
$$t_{\alpha}=\frac12+\frac12\left(\frac{\alpha-1}{\alpha+1}\right)^{\alpha/2},$$
so that $g''(t)<0$ for $t\in[\frac12,t_{\alpha})$ and $g''(t)>0$ for $t\in(t_{\alpha},1]$. Thus in this case $g(t)$ is concave in $[\frac12,t_{\alpha})$ and convex in $(t_{\alpha},1]$. We proceed similarly to Case 2.1. When estimating the integrand in
$$\int_{\frac12}^1P\left(g(V_1,V_2,\dots,V_s,\mu)\ge g(T_1,T_2,\dots,T_s)\right)\rho(\mu)\,d\mu$$
as a function of $\mu$, we shall proceed differently for $\mu>t_{\alpha}$ and for $\mu<t_{\alpha}$. Namely, write:
$$P_m(n)=\int_{\frac12}^1P\left(g(V_1,V_2,\dots,V_s,\mu)\ge g(T_1,T_2,\dots,T_s)\right)\rho(\mu)\,d\mu=\int_{\frac12}^{t_{\alpha}}+\int_{t_{\alpha}}^1.\qquad(4.14)$$
We first consider the integral over the region $[t_{\alpha},1]$. Since $g(t)$ is convex on $[\mu,1]$, the lower bound (4.10) on $g(T_j)$ is still valid. Bounding the $g(V_j)$'s from above is less obvious. We want to bound $g(t)$ on the interval $[\frac12,\mu]$ by a linear function. To this end, we use the unique point $t_0\in[\frac12,t_{\alpha}]$ such that the tangent to the graph of $g$ at the point $t_0$ passes through $(\mu,g(\mu))$. This tangent provides a linear function as required, and in particular:
$$g(V_j)\le g(\mu)-g'(t_0)(\mu-V_j)=g(\mu)-g'(t_0)\left(\mu-\tfrac12\right)\frac{\mu-V_j}{\mu-\frac12}\,.\qquad(4.15)$$
Figure 2. Linear bounds for $g$ when $\mu>t_{\alpha}$.
Thus (4.10) and (4.15) imply that (4.6) cannot hold unless
$$g'(\mu)(1-\mu)\sum_{j=1}^sX_j+g'(t_0)\left(\mu-\tfrac12\right)\sum_{j=1}^sY_j\le g(\mu),\qquad(4.16)$$
where $X_j=\frac{T_j-\mu}{1-\mu}$ and $Y_j=\frac{\mu-V_j}{\mu-\frac12}$. Denote the probability of the event (4.16) by $\mathrm{Prob}_1(n)$. As in Case 2.1,
$$\mathrm{Prob}_1(n)\le\frac{g(\mu)^{n-1}}{(n-1)!\left(g'(t_0)(\mu-\frac12)\,g'(\mu)(1-\mu)\right)^s}\,.$$
Since $g$ is concave in $[\frac12,t_{\alpha}]$, we have $g'(t_{\alpha})\le g'(t_0)$. Hence:
$$\mathrm{Prob}_1(n)\le\frac{g(\mu)^{n-1}}{(n-1)!\left(g'(t_{\alpha})(\mu-\frac12)\,g'(\mu)(1-\mu)\right)^s}\,.\qquad(4.17)$$
Now consider the first integral on the right hand side of (4.14). The function $g(t)$ is concave for $t\in[\frac12,\mu]$, so that the tangent to the graph of $g$ at the point $\mu$ lies above the graph, and instead of (4.9) we may use the bound
$$g(V_j)\le g(\mu)-g'(\mu)\left(\mu-\tfrac12\right)\frac{\mu-V_j}{\mu-\frac12}\,.$$
We want to bound $g$ on $(\mu,1]$ from below by a linear function. Let $t_0\in(t_{\alpha},1)$ be the unique point such that the tangent to the graph of $g$ at the point $t_0$ passes through $(\mu,g(\mu))$. This tangent lies below the graph of $g$ throughout $[\mu,1]$, that is:
$$g(T_j)\ge g(\mu)+g'(t_0)(T_j-\mu)=g(\mu)+g'(t_0)(1-\mu)\frac{T_j-\mu}{1-\mu}\,.\qquad(4.18)$$
The following figure explains these linear bounds pictorially.
Figure 3. Linear bounds for $g$ when $\mu<t_{\alpha}$.
Thus the bounds on $g(V_j)$ and $g(T_j)$ above imply that (4.6) cannot hold unless
$$g'(t_0)(1-\mu)\sum_{j=1}^sX_j+g'(\mu)\left(\mu-\tfrac12\right)\sum_{j=1}^sY_j\le g(\mu),\qquad(4.19)$$
where $X_j$ and $Y_j$ are as before. Denote the probability of the event in (4.19) by $\mathrm{Prob}_2(n)$. Again, similarly to Case 2.1 we obtain
$$\mathrm{Prob}_2(n)\le\frac{g(\mu)^{n-1}}{(n-1)!\left(g'(t_0)(1-\mu)\,g'(\mu)(\mu-\frac12)\right)^s}\,.$$
Since $g$ is convex in $[t_{\alpha},1]$, we have $g'(t_{\alpha})\le g'(t_0)$. Hence:
$$\mathrm{Prob}_2(n)\le\frac{g(\mu)^{n-1}}{(n-1)!\left(g'(t_{\alpha})(\mu-\frac12)\,g'(\mu)(1-\mu)\right)^s}\,.\qquad(4.20)$$
Fortunately, the right hand sides of (4.17) and of (4.20) are the same. Substituting (4.17) and (4.20) in (4.14) we obtain:
$$P_m(n)\le\int_{\frac12}^1\frac{g(\mu)^{2s}}{(n-1)!\left(g'(t_{\alpha})(\mu-\frac12)\,g'(\mu)(1-\mu)\right)^s}\,\rho(\mu)\,d\mu.$$
By (4.12):
$$P_m(n)\le2^n\binom{n}{s,\,s,\,1}\frac{1}{(n-1)!\left(g'(t_{\alpha})\right)^s}\int_{\frac12}^1g(\mu)^{2s}\left(g'(\mu)\right)^{-s}d\mu.$$
Replacing $g(\mu)$ and $g'(\mu)$ by their explicit values and substituting $x=(2\mu-1)^{1/\alpha}$, we get:
$$P_m(n)\le\frac{n}{(s!)^2}\left(\frac{\alpha}{g'(t_{\alpha})}\right)^s\alpha\int_0^1\left[(1-x^2)\,x^{\alpha-1}\ln^2\frac{1+x}{1-x}\right]^sx^{\alpha-1}\,dx=\frac{n}{(s!)^2}\left(\frac{\alpha}{g'(t_{\alpha})}\right)^sB_{s,\alpha}.$$
Plugging in the value of $g'(t_{\alpha})$ we arrive at the first inequality for part 2.
Stage 3: For the second inequality in part 1, we have to bound $A_{s,\alpha}$ from above. Denote:
$$h(x)=(1-x^2)\,x^{2\alpha-1}\ln\frac{1+x}{1-x}\,,\qquad x\in[0,1].\qquad(4.21)$$
We claim that
$$h(x)\le h_1(x)=2x^{2\alpha}\left(1-\frac23x^2\right),\qquad x\in[0,1].\qquad(4.22)$$
Indeed, using Taylor's expansion of $\ln\frac{1+x}{1-x}$ we obtain:
$$h(x)=(1-x^2)\,x^{2\alpha-1}\cdot2x\sum_{k=0}^{\infty}\frac{x^{2k}}{2k+1}=2x^{2\alpha}\left(1-2\sum_{k=1}^{\infty}\frac{x^{2k}}{4k^2-1}\right).\qquad(4.23)$$
Taking only the first term in the sum on the right hand side of (4.23), we obtain (4.22). Since $h_1(x)$ attains its maximum at the point $x_*=\sqrt{\frac{3\alpha}{2\alpha+2}}$, we have
$$h(x)\le h_1(x_*)=\frac{2}{\alpha+1}\left(\frac{3\alpha}{2(\alpha+1)}\right)^{\alpha},\qquad x\in[0,1].$$
Substituting in the expression (3.4) for $A_{s,\alpha}$ and combining with (4.13), we obtain the second inequality of part 1.
To complete the proof of part 2, we need to estimate $B_{s,\alpha}$. Denote:
$$\ell(x)=\ln\frac{1+x}{1-x}\,,\qquad h_2(x)=(1-x^2)\,x^{\alpha-1},\qquad h_3(x)=h_2(x)\,\ell^2(x).$$
The better part of the remaining work is designed to find a good upper bound on $h_3(x)$ for $\alpha>1$ (see (4.29) infra). Clearly, $h_3(0)=h_3(1^-)=0$ and $h_3(x)>0$ for $x\in(0,1)$. Let $x_*\in(0,1)$ be a maximum point of $h_3$. Putting $\eta(x)=4x+\left[\alpha-1-(\alpha+1)x^2\right]\ell(x)$, we find that
$$h_3'(x)=x^{\alpha-2}\,\ell(x)\,\eta(x)... \qquad(4.24)$$
wait
Proof of Theorem 2: Denote $h_4(x)=\frac{1-x^2}{x}\ln\frac{1+x}{1-x}$ for $x\in[0,1]$. With $h(x)$ as in (4.21) we have $h_4(x)=h(x)/x^{2\alpha}$, and therefore, by (4.23):
$$h_4(x)\le2,\qquad x\in[0,1].$$
Substituting this bound in the expression for $A_{s,\alpha}$, we arrive at:
$$A_{s,\alpha}\le2^s\alpha\int_0^1x^{\alpha n-1}\,dx=2^s\,\frac1n\,.$$
Combining this with (4.13), we complete the result in the first part of the theorem:
$$P_m(n)\le\frac{\alpha^s}{(s!)^2}\,.$$
To complete the second part, denote:
$$h_5(\alpha)=\frac{\alpha}{\alpha+1}\left(1-\frac{2}{\alpha+1}\right)^{\frac{\alpha-1}{2}},\qquad h_6(\alpha)=\frac{\alpha}{\alpha+1}\,e^{-\frac{\alpha-1}{\alpha+1}},$$
so that $\frac{\alpha}{g'(t_{\alpha})}=\frac{\alpha}{2}\,h_5(\alpha)$. Clearly $h_5(\alpha)\le h_6(\alpha)$. It is easy to see that $h_6(\alpha)$ is decreasing for $\alpha\ge1$. A routine calculation reveals that the function $h_7(x)$, defined by
$$h_7(x)=(1-x^2)\ln^2\frac{1+x}{1-x}\,,\qquad x\in[0,1),$$
is bounded above by 1.76. Since $h_3(x)=x^{\alpha-1}h_7(x)$, this gives $B_{s,\alpha}\le\frac{1.76^s}{s\left(1-\frac1\alpha\right)+1}$, which produces:
$$P_m(n)\le\frac{n}{(s!)^2}\cdot\frac{1}{s\left(1-\frac1\alpha\right)+1}\left(0.88\,\alpha\,h_5(\alpha)\right)^s.$$
Now for $\alpha>1$
$$h_5(\alpha)\le h_6(\alpha)\le\lim_{\alpha\to1^+}h_6(\alpha)=h_6(1)=\frac12\,,$$
and combining the last two inequalities we produce the required result:
$$P_m(n)\le\frac{n}{(s!)^2}\left(0.88\,\alpha\,h_5(\alpha)\right)^s\le\frac{n}{(s!)^2}\left(0.88\cdot\frac{\alpha}{2}\right)^s\le\frac{n}{(s!)^2}\left(\frac{\alpha}{2}\right)^s.$$
Proof of Remark 2: The proof follows the same lines as the third stage of the proof of Theorem 3. The only difference is that we take the first two terms in the sum on the right hand side of (4.23). Thus, instead of (4.22) we obtain
$$h(x)\le H(x)=2x^{2\alpha}\left(1-\frac{2x^2}{3}-\frac{2x^4}{15}\right).$$
The maximum point of $H(x)$ turns out to be
$$x_*=\sqrt{\frac{3\alpha}{(\alpha+1)\left(1+\sqrt{1+\frac{6\alpha(\alpha+2)}{5(\alpha+1)^2}}\right)}}\,.$$
Calculating $H(x_*)$ and proceeding as in the proof, we obtain the required result.
References

1. D. Berend and J. Harmse, Expert rule versus majority rule under partial information. Theory and Decision, 35:179–197, 1993.
2. D. Berend and J. Paroush, When is Condorcet's Jury Theorem valid? Social Choice and Welfare, 15:481–488, 1998.
3. D. Berend and L. Sapir, Optimality of the expert rule under partial information. Preprint, 2001.
4. S. Berg, Condorcet's Jury Theorem, dependence among jurors. Social Choice and Welfare, 10:87–95, 1993.
5. S. Berg, Condorcet's Jury Theorem revisited. European Journal of Political Economy, 9:437–446, 1993.
6. S. Berg, Indirect voting systems: Banzhaf numbers, majority functions and collective competence. European Journal of Political Economy, 13:557–573, 1997.
7. S. Berg and J. Paroush, Collective decision making in hierarchies. Mathematical Social Sciences, 35:233–244, 1998.
8. P. Boland, Majority systems and the Condorcet Jury Theorem. The Statistician, 38:181–189, 1989.
9. P. Boland, F. Proschan, and Y. Tong, Modelling dependence in simple and indirect majority systems. Journal of Applied Probability, 26:81–88, 1989.
10. N. C. de Condorcet, Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Paris, 1785.
11. M. A. Golberg, An Introduction to Probability Theory with Statistical Applications.
12. M. Gradstein and S. Nitzan, Performance evaluation of some special classes of weighted majority rules. Mathematical Social Sciences, 12:31–46, 1986.
13. B. Grofman, G. Owen, and S. Feld, Thirteen theorems in search of the truth. Theory and Decision, 15:261–278, 1983.
14. K. Ladha, Information pooling through majority rule voting: Condorcet's Jury Theorem with correlated votes. Journal of Economic Behavior and Organization, 26:353–372, 1995.
15. N. Miller, Information, electorates, and democracy: some extensions and interpretations of the Condorcet Jury Theorem. In Information Pooling and Group Decision Making, ed. B. Grofman and G. Owen. Greenwich, CT: JAI Press, 1986.
16. S. Nitzan and J. Paroush, Optimal decision rules in uncertain dichotomous choice situations. International Economic Review, 23:289–297, 1982.
17. S. Nitzan and J. Paroush, A general theorem and eight corollaries in search of a correct decision. Theory and Decision, 17:211–220, 1984a.
18. S. Nitzan and J. Paroush, Partial information on decisional competences and the desirability of the expert rule in uncertain dichotomous choice situations. Theory and Decision, 17:275–286, 1984b.
19. S. Nitzan and J. Paroush, Collective Decision Making. Cambridge University Press, Cambridge, 1985.
20. J. Paroush, Stay away from fair coins: a correct Condorcet's Jury Theorem. Social Choice and Welfare, 15:15–20, 1998.
21. L. Sapir, The optimality of the expert and majority rules under exponentially distributed competence. Theory and Decision, 45:19–35, 1998.
22. H. Young, Condorcet's theory of voting. American Political Science Review, 82:1231–1244, 1989.