




British Broadcasting Corporation
Research Department, Engineering Division

RESEARCH DEPARTMENT



COMBINED EFFECT OF SEVERAL INTERFERING SIGNALS 



Research Department Report No. 1970/11 
UDC 621.391.827



This Report may not be reproduced in any 
form without the written permission of the 
British Broadcasting Corporation. 

It uses SI units in accordance with B.S. 
document PD 5686. 



Work covered by this report was undertaken by the BBC Research Department 

for the BBC and the ITA 






J.W. Head, M.A., F.Inst.P., C.Eng., M.I.E.E., F.I.M.A.



Head of Research Department 



(RA-58) 



Research Department Report No. 1970/11 
COMBINED EFFECT OF SEVERAL INTERFERING SIGNALS 

Section   Title

          SUMMARY

1.        INTRODUCTION

2.        QUANTIZED CONVOLUTION METHOD FOR FINDING THE DISTRIBUTION OF THE SUM OF
          TWO SIGNALS WITH ARBITRARY DISTRIBUTIONS

3.        DISTRIBUTIONS FOR WHICH THE SUMMATION PROCESS CAN BE SIMPLIFIED

4.        QUANTIZATION IN TERMS OF FIXED PROBABILITY INTERVALS

5.        CONCLUSIONS

6.        REFERENCE

          APPENDIX A

          APPENDIX B

(RA-58)

March 1970



Research Department Report No. 1970/11 
UDC 621.391.827 



COMBINED EFFECT OF SEVERAL INTERFERING SIGNALS 



SUMMARY 

When a site is considered for a new radio or television transmitter the best available assessment of interference expected from existing transmitters operating on or near the same frequency is required. This involves calculation of the combined effect of two or more interfering signals simultaneously present. The information available about each interfering signal is its distribution in time, that is, the percentages of time for which the signal can be expected to exceed certain levels. The resultant of interfering signals is best expressed in the same way, i.e., as the distribution of an equivalent single interfering signal. A general method of finding the resultant of two signal distributions, suitably quantized, is discussed and possible simplifications are considered for distributions approximating to certain standard types.

Correlation has usually been either neglected (so that signals simultaneously present are combined by convolution of their power distributions) or assumed to be complete (so that combination is achieved by power addition of the signals). A tentative method of interpolating between these extremes to allow for mutual correlation is discussed in Appendix A; very little relevant experimental information is available.

When an interfering signal has a sufficiently wide range of variation it is usually accurate enough to regard it at any instant as being either interfering or negligible. This is the basis of the 'probability multiplication' procedure hitherto used in Research Department for multiple-interference calculations.



1. INTRODUCTION 

The strength of the signal from any particular potentially interfering transmitter may vary rapidly and between wide limits, so that any assessment of the likelihood of interference can only be done statistically. The observable information can be reduced to the form that the signal in question can be expected to be within certain specified limits (or above a certain level) for a certain percentage of the time. We are here concerned with determining the effective resultant of two or more such signals present simultaneously. The data obtained for the variation of the resultant of two interfering signals must be obtained in the same form as the data for the variation of the component signals, so that the process can be repeated as often as is necessary.

The process at present used in Research Department for estimating multiple interference probabilities is theoretically correct if it is assumed that signals vary so widely that, at a given moment, any one signal can be regarded as either interfering even if no other signal were present, or negligible. Many new low-power u.h.f. stations will have to be installed shortly, however, and interference from a nearby station, which only varies in strength by a few decibels, must also be taken into account in future. Any useful method of finding the resultant of interfering signals must take both possibilities into account.



We shall here assume that data are available of the probabilities that any particular unwanted signal shall be above a specific level I0, 20 or more dB below I0, or in the intervals 0-1, 1-2, 2-3, 3-4.5, 4.5-6, 6-8, 8-10, 10-12, 12-14, 14-16, 16-18 or 18-20 dB below I0. We are not concerned with the manner in which such data are obtained. Direct observations would be possible, but would require a great deal of effort. A more practicable alternative would be to observe directly the probabilities that the signal lie within a much smaller number of wide intervals, and interpolate between observed probabilities by assuming that the signal then obeyed some standard law, for instance, that the signal power or its logarithm was normally* distributed. Special, simplified techniques are available, and are discussed, if the signal itself, or its logarithm, is approximately normally distributed over a wide range of levels.

The question of possible correlation of unwanted signals is largely ignored in what follows. The combined effect of two interfering signals distributed normally or 'log-normally' can be calculated for any assumed correlation coefficient, but the correlation coefficient relevant when this resultant is compared with a third such interfering signal is very difficult to assess, unless all the three signals are regarded as completely uncorrelated or completely correlated.

* Throughout this report 'normally distributed' or 'normal distribution' refer to a Gaussian distribution.



In the latter case they are subject to the law of power addition. Here therefore it is assumed that any pairs or sets of signals likely to be highly correlated (because they are associated with stations along paths subject to the same weather conditions, for example) can be combined by power addition, and that the resultant thus obtained is uncorrelated with any other interfering signal or with the resultant of any other group of highly-correlated signals. The effect of correlation is discussed in more detail in Appendix A, particularly for signals which do not vary greatly and can be adequately represented by the equivalent best-fitting normal distribution, or by the 'conservative normal equivalent' distribution discussed below.

In the absence of adequate information on the actual degree of correlation between interfering signals simultaneously present, all that can really be said is that the actual distribution of the resultant of several interfering signals simultaneously present is between the distribution calculated (by convolution) on the assumption that no two of the constituent signals are correlated, and that calculated (by power addition) on the assumption that every pair of constituent signals is completely correlated. If one signal of significant magnitude varies much more widely than the remainder, the resultants obtained on these two extreme assumptions will not differ greatly, and the actual degree of correlation will not be very important. The difference between these two resultants is greatest when the constituent signals have identical distributions.

When two or more interfering signals are present simultaneously, their relative phases are equally likely to have any particular value at any particular instant, and therefore the power of the resultant can be taken as the sum of the powers of the constituent signals. It will therefore be assumed here that any signal distribution is expressed in terms of power before the calculation begins. If the distribution of field strength is 'log-normal' (that is, the ratio of the field strength at any instant to an arbitrary field strength such as 1 μV/m is expressed in decibels by a number which is normally distributed), then the corresponding distribution, expressed in terms of power, is also 'log-normal'.

Now the distribution of the sum of two signals each having a known distribution of arbitrary form can always be obtained approximately by the process of 'quantized convolution' described in Section 2; this process can be repeated indefinitely and is suitably adapted to allow for the fact that the constituent signals are never negative. The sum of two signals can be obtained more simply if both of them can be satisfactorily replaced by the best-fitting equivalent normal distribution. For signals which do not vary greatly, this normal distribution is a good fit, but unfortunately it underestimates the probability that a 'log-normal' distribution reaches high levels. In Section 3 a 'conservative normal equivalent' distribution is derived; reasons are explained for regarding this as a satisfactory approximate substitute for a 'log-normal' distribution having a standard deviation less than about 3 dB. The sum of any number of uncorrelated variables with normal distributions has a normal distribution, with a mean value equal to the sum of the individual means and a standard deviation equal to the square root of the sum of the squares of the individual standard deviations.

The process of quantized convolution discussed in Section 2 appears to be the only method available for assessing the resultant of pairs of signals both members of which cannot be satisfactorily approximated in one of the ways just mentioned. For very widely varying signals, however, such as 'log-normal' distributions having standard deviations greater than about 6 dB, each signal can, as an approximation, be regarded at any instant as either interfering in its own right (i.e. in the absence of any other signals) or negligible. This is the basis of the existing 'probability multiplication' procedure used in Reference 1.
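The following minimal sketch illustrates one reading of this 'probability multiplication' idea (the detailed rules of Reference 1 are not reproduced in this report, and the probabilities used here are invented for the example): if at any instant each unwanted signal is independently either interfering in its own right or negligible, the probability of escaping interference altogether is the product of the individual probabilities of escaping each signal.

    import math

    # Hypothetical probabilities that each signal, taken alone, causes
    # interference at a given instant (illustrative values only).
    p_individual = [0.02, 0.05, 0.01]

    # The chance of escaping every signal is the product of the individual
    # escape probabilities (independence assumed); interference occurs otherwise.
    p_interfered = 1.0 - math.prod(1.0 - p for p in p_individual)   # about 0.078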



2. QUANTIZED CONVOLUTION METHOD FOR FINDING THE DISTRIBUTION
   OF THE SUM OF TWO SIGNALS WITH ARBITRARY DISTRIBUTIONS

Suppose that, for each of two signals, we are given the probabilities a_λ and b_λ that the signal (necessarily positive) lies within the limits λI0/10 and (λ + 1)I0/10 for λ = 0, 1 ... 9, and also the probability that it exceeds I0. We wish to determine the approximate probability distribution of the resultant signal S1 + S2, on the understanding that if a signal value exceeds I0, this is all that matters - whether it is 1.01 I0 or very much greater is irrelevant. We assume that the quantization is sufficiently fine, so that any signal value between λI0/10 and (λ + 1)I0/10 can be treated as (λ + 1/2)I0/10. Then the signal S1 can be represented geometrically by the histogram of Fig. 1 or algebraically by the expression

D(S1) = X^1/2 (a0 + a1X + a2X^2 + a3X^3 + ... + a9X^9 + A10X^10)        (1)



Fig. 1 - Geometrical representation of distribution of signal S1


where X" represents a displacement ra/o/10 to the 
right along the X-axis.* The sum of al! the prob- 
abilities ao, ai, etc. is unity, since Si certainly has 
some positive value; the symbol A±o in equation (1) 
denotes aio+aii +ai2 4-... since any value of Si above 
/o is for our present purpose equivalent to any other. 
This well-known 'generating function' representation 
of a quantized probability distribution has the Im- 
portant advantage that if the second signal S2 is rep- 
resented by 

D(S2) = X''M6o+6iX + 62X^+ ... +bsX^+ BioX^°) (2) 

(where Bio is the sum of all coefficients &10. bn, 
bi2 etc. which are associated with the second signal 
Sq), then the corresponding probability distribution 
for the sum Si + Sq (neglecting correlation) Is repre- 
sented by the algebraic product of the right hand sides 
of equations (1) and (2), namely 



D(Si + S2) = CiX + C^X^ + CsX\ ... 



+ C9X^ + CioX^°+DiiX 



11 



where 



i-i 



Ci=V «r?>i-i-r (^■=1,2... 21) 



(3) 



(4) 



r = o 



i>ii =C/ii + C12+ •— + ^21 

From equation (3) we conclude that the probability 
that (S1+S2) lies between (i-V3)/o/10 and {i + V7)Io/'\0 
is C,- for i = 1, 2, 3 ... 10, and that the probability of 
(Si + S2) exceeding /o is Dn + VsCio; we assume that 
signal levels associated with the term CioX in 
equation (3) are equally likely to be above or below 
/o. 

Now equation (3) has a form such that we could repeat this process to find the resultant distribution of (S1 + S2) and a third signal S3 expressed in a form analogous to (1), and so on. By using a sufficiently fine quantization interval, we could determine (S1 + S2) as accurately as we wished, but refining the quantization increases the computational labour (or machine time) required to calculate the distribution of S1 + S2.

The basic reason why this process works is that

X^m X^n = X^(m+n)        (5)

and that the number of different relevant values of (m + n) which occur in the algebraic product [equation (3)] is no greater than for each factor [equations (1) and (2)].
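As an illustration of the computation just described, the following is a minimal sketch in Python (the report itself specifies no programme, and the function and variable names are invented here). Each distribution is supplied as the eleven probabilities a0, a1, ... a9 and A10 of equation (1).

    def convolve_deciles(a, b):
        """Quantized convolution of equations (1) to (4).

        a and b each hold eleven probabilities: a[0]..a[9] for the intervals
        0 to I0/10, ..., 9I0/10 to I0, and a[10] for the lumped probability
        (A10) of being at or above I0.  Returns (C, p_exceed), where C[i-1]
        is C_i of equation (3) for i = 1..21, and p_exceed = D11 + C10/2 is
        the probability that S1 + S2 is at or above I0 (the i = 10 bin
        straddles I0, so half of C10 is counted, as in the text)."""
        C = [0.0] * 21
        for r, ar in enumerate(a):
            for s, bs in enumerate(b):
                C[r + s] += ar * bs          # X^r X^s = X^(r+s), equation (5)
        p_exceed = sum(C[10:]) + 0.5 * C[9]  # D11 + C10/2
        return C, p_exceed

    # Example with two identical, made-up distributions.
    a = [0.05, 0.05, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
    C, p_exceed = convolve_deciles(a, a)

Refining the quantization simply amounts to supplying longer lists, at the cost of more multiplications, as noted above.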

For actual signal distributions, the linear quantization assumed above is too coarse (for a given number of quantization intervals) both near the interference level and at low signal levels, but if we have an arbitrary set of quantized levels in an attempt to remedy this, the number of different possible values of (m + n) may increase violently, so that the process of multiplication cannot be satisfactorily repeated. Here we shall assume that there are 14 quantized levels, as specified in Table 1 below.



TABLE 1
Quantization Ranges

  Range               Quantized value          Probability
  (dB below I0)       representing range       symbol

  At or above I0             I0                   A13
  0-1                      0.89 I0                A12
  1-2                      0.71 I0                A11
  2-3                      0.56 I0                A10
  3-4.5                    0.42 I0                A9
  4.5-6                    0.30 I0                A8
  6-8                      0.20 I0                A7
  8-10                     0.13 I0                A6
  10-12                    0.08 I0                A5
  12-14                    0.05 I0                A4
  14-16                    0.03 I0                A3
  16-18                    0.02 I0                A2
  18-20                    0.01 I0                A1
  More than 20                0                   A0



In a notation analogous to that used for equation (1), the signal S1 can therefore be represented by the expression

D(S1) = A13X^100 + A12X^89 + A11X^71 + A10X^56 + A9X^42 + A8X^30 + A7X^20 + A6X^13 + A5X^8 + A4X^5 + A3X^3 + A2X^2 + A1X + A0        (6)

where X^n denotes a displacement of 0.01nI0. If a second signal S2 is represented by a similar expression, with the same powers of X but different coefficients, say B_i, then (S1 + S2) is strictly speaking correspondingly represented by the algebraic product of (6) with the similar expression for S2.

Now this algebraic product involves all the powers of X which can be obtained by adding two of the powers listed in equation (6), such as 56 + 30 = 86, 20 + 13 = 33, 71 + 8 = 79, and so on. The approximation we shall make is to replace any of the numbers 86, 33, 79 etc. by the nearest index occurring in equation (6); when we are in doubt, the higher of the two possible values for this index will be chosen. With this approximation, we can represent S1 + S2 in the same form, namely

D(S1 + S2) = C13X^100 + C12X^89 + C11X^71 + C10X^56 + C9X^42 + C8X^30 + C7X^20 + C6X^13 + C5X^8 + C4X^5 + C3X^3 + C2X^2 + C1X + C0        (7)

where the formulae for the coefficients C_i of the resultant S1 + S2 in terms of the corresponding coefficients associated with S1 and S2 themselves are given and briefly explained in Appendix B.

The process of quantized convolution described above can be applied to a pair of signals having arbitrary distributions: the only limitation is due to the quantization. This quantization can be refined at the expense of greater labour of calculation (or greater machine time) which may not be justified since field strengths are notoriously difficult to measure to a high degree of accuracy. But there are certain distributions for which the convolution process is greatly simplified. These cases are considered in Section 3.
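A short sketch of this 14-level convolution is given below (an illustration under naming of my own choosing, not the Research Department programme). It applies the rule just described: add the two indices of equation (6) and replace the sum by the nearest index, taking the higher one when the two candidates are equally near. The table of Appendix B resolves a few borderline sums more conservatively than strict nearest rounding, so one or two cells may differ slightly from this sketch.

    import bisect

    # Powers of X in equation (6), i.e. the quantized levels in units of 0.01 I0.
    INDEX = [0, 1, 2, 3, 5, 8, 13, 20, 30, 42, 56, 71, 89, 100]

    def nearest_level(total):
        """Replace a summed index by the nearest entry of INDEX, choosing the
        higher level when in doubt; sums of 100 or more count as 'at or above I0'."""
        if total <= 0:
            return 0
        if total >= INDEX[-1]:
            return len(INDEX) - 1
        hi = bisect.bisect_left(INDEX, total)
        lo = hi - 1
        return hi if INDEX[hi] - total <= total - INDEX[lo] else lo

    def combine(A, B):
        """Coefficients C0..C13 of equation (7) for the resultant of two signals
        whose distributions are given by the coefficients A0..A13 and B0..B13."""
        C = [0.0] * len(INDEX)
        for i, a in enumerate(A):
            for j, b in enumerate(B):
                C[nearest_level(INDEX[i] + INDEX[j])] += a * b
        return C

Because the result is expressed with the same fourteen coefficients, the returned list can be fed straight back in as the first argument when a third signal is to be added, and so on.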



3. DISTRIBUTIONS FOR WHICH THE SUMMATION 
PROCESS CAN BE SIMPLIFIED 

For statistical investigations in general, the first simplifying approximation worth considering is usually to replace the given distribution by the best-fitting equivalent normal distribution. There are well-known techniques for doing this. There is often good theoretical justification for expecting that this equivalent normal distribution will represent well the given distribution.

Hitherto it has usually been assumed that field strength in general tends to be 'log-normally' distributed, with some reservations that the probability of very large signals may be significantly overestimated by this assumption. If the given distribution can be regarded as varying violently, in a manner comparable with a 'log-normal' distribution having a standard deviation of 6 dB or more, the obvious simplifying approximation is to regard the signal in question as either negligible, or interfering even if all other signals simultaneously present are ignored. If the given distribution can be regarded as comparable to a 'log-normal' distribution having standard deviation between say 3 and 6 dB, there appears to be no short-cut to the process of quantized convolution discussed in Section 2 when the signal having this distribution is present at the same time as other signals. If the given distribution is comparable to a 'log-normal' distribution having standard deviation less than about 3 dB, the given distribution could be replaced by its best-fitting normal equivalent, but this procedure has the disadvantage that the probability of high signal strengths near the level of interference which we wish to avoid tends to be underestimated. A preferable alternative therefore appears to be to replace the given distribution by the 'conservative normal equivalent' distribution described next.

The 'conservative normal equivalent' of a 'log-normal' distribution of sufficiently low standard deviation is defined to be the normal distribution which is such that the interference level I0 and an arbitrary lower level I1 occur with the same probability in both distributions. Since an actual signal is essentially positive, the 'conservative normal equivalent' concept is not useful unless the occurrence of a negative signal in this distribution is extremely improbable. If in both distributions I0 is m standard deviations above the mean and I1 is n standard deviations below it,

(m + n)σ = 10 log10(I0/I1)
(m + n)σ_c = I0 - I1        (8)

where σ is the 'log-normal' standard deviation, σ_c is the 'conservative normal equivalent' standard deviation, and the coefficient of log10(I0/I1) is 10 and not 20 since the signals are assumed expressed in terms of power and not field strength as already mentioned. We shall assume that zero must be 3σ_c below the mean of the 'conservative normal equivalent' distribution for negative values to be sufficiently improbable, and we shall consider in detail only the case when m = n = 2. The maximum permissible value of σ_c is then 0.2 I0 and I1 = 0.2 I0, while σ = 1.747 dB. The 'conservative normal equivalent' estimate of the probability of any signal level between I0 and I1 is greater than the estimate associated with the 'log-normal' distribution. The mean of the 'conservative normal equivalent' distribution is 0.6 I0 in this case, 2.22 dB below I0 and 1.28 dB above the mean of the 'log-normal' distribution. If the 'log-normal' standard deviation σ is less than 1.747 dB, the 'conservative normal equivalent' standard deviation σ_c is given by

4σ_c/I0 = 1 - 10^(-0.4σ)        (9)

If σ > 1.747 dB, σ_c must remain at its maximum permissible value of 0.2 I0 or negative 'conservative normal equivalent' values become excessively probable, and the discrepancy between the two distributions rapidly increases. We have therefore somewhat arbitrarily concluded that 3 dB is about the highest value of σ for which the idea of a 'conservative normal equivalent' distribution is useful. When this idea is useful, however, any number of signals replaceable by 'conservative normal equivalent' distributions can be combined by adding the means of these distributions and taking the resultant standard deviation as the square root of the sum of the squares of the individual 'conservative normal equivalent' standard deviations.
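The following sketch (illustrative names; the report itself gives no programme for this) evaluates the 'conservative normal equivalent' for the case m = n = 2 treated above, and combines several such equivalents in the way just described. It is only intended for 'log-normal' standard deviations up to about 3 dB, as stated in the text.

    import math

    def conservative_normal_equivalent(sigma_db, i0=1.0):
        """Mean and standard deviation (in power units) of the conservative
        normal equivalent.  Equation (9): 4*sigma_c/I0 = 1 - 10**(-0.4*sigma),
        with sigma_c capped at 0.2*I0 (the value reached at sigma = 1.747 dB);
        the mean lies 2*sigma_c below I0 (I0 is taken as i0 = 1 by default)."""
        sigma_c = min(0.25 * i0 * (1.0 - 10.0 ** (-0.4 * sigma_db)), 0.2 * i0)
        return i0 - 2.0 * sigma_c, sigma_c

    def combine_equivalents(equivalents):
        """Combine uncorrelated normal equivalents: add the means and take the
        square root of the sum of the squares of the standard deviations."""
        mean = sum(m for m, _ in equivalents)
        sigma = math.sqrt(sum(s * s for _, s in equivalents))
        return mean, sigma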



4. QUANTIZATION IN TERMS OF FIXED PROBABILITY INTERVALS

It is worth considering whether quantization should be carried out in terms of fixing the probability interval instead of fixing the range of signal strength to be regarded as a quantum. Thus instead of estimating the probability that a signal be 0-1 dB, 1-2 dB, 2-3 dB, 3-4.5 dB, etc. below I0, we say that the signal is between I0 and a1 dB below I0 for 1% of the time [and taken as (1/2)a1 dB below I0 for that time], between a1 dB below I0 and a2 dB below I0 for 2% of the time [and taken as (1/2)(a1 + a2) dB below I0 for that time], and so on. This possibility has not been discussed here, because it means that when the determination of the resultant of two signals is carried out as indicated in Section 2 and Appendix B, the powers of X involved are different for each pair of signals to be combined, whereas the coefficients A1, A2 etc. and B1, B2 etc. are fixed. This has the disadvantage that Table 1 and equations (B2) to (B15) would have to be formulated afresh (at least within the computer) for each pair of signals summed, and it is not clear what compensating advantage can be derived from the constancy of the coefficients Ai, Bi etc.



5. CONCLUSIONS 

A procedure for finding the resultant of two arbitrary interfering signals has been devised which is repeatable and takes account of the fact that the constituent signals are essentially positive and not necessarily suitable for quantization in equal linear steps. The quantization intervals can easily be adjusted if necessary.

This procedure can be greatly simplified if some signals can be satisfactorily replaced by suitable 'conservative normal equivalents' which always overestimate the probability of occurrence of signals near the interference level. Widely-varying signals can usually be treated as either interfering in the absence of all other signals, or negligible.

The convolution procedure discussed is strictly applicable only to uncorrelated signals. Completely correlated signals should instead be combined by the law of power addition. Insufficient experimental information is at present available to decide how in practice to interpolate satisfactorily between these two extremes, but it is tentatively suggested that interfering signals should be divided into highly-correlated groups combined by power addition, and that otherwise correlation should be ignored.



6. REFERENCE 

1. Calculation of the field strength required for a 
television service, in the presence of co-channel 
interfering signals: Effect of multiple interfering 
sources. BBC Research Department Report No. 
RA-12/2, Serial No. 1968/43. 



APPENDIX A 



The Effect of Correlation 



In the main text, we have assumed that two distributions simultaneously present should normally be combined by the 'quantized convolution' process discussed in Section 2. Such convolution in fact neglects any correlation between the two distributions. At the other extreme, we can assume complete correlation between the two distributions, and combine them by the law of power addition.

Determination of the correlation coefficient between any pair of unwanted signals is not easy. Cases do occur of high correlation coefficients (of the order of 0.9) between the hourly means of signals from stations which are either near the receiver or have similar paths to the receiver. But the short-term correlation coefficient (which in effect measures the relative behaviour of instantaneous deviations from these hourly means) is usually small. Strictly speaking, it is simultaneous powers of unwanted signals which must be added to obtain the corresponding instantaneous level of multiple interference, and it is the probability of a particular instantaneous value of each unwanted signal which has to be estimated from the statistics of its distribution.

For signals which are (or are nearly) normally distributed, the effect of correlation is easily appreciated in general terms. If x1 is normally distributed about mean M1 with standard deviation σ1 and x2 is normally distributed about mean M2 with standard deviation σ2, and the correlation coefficient between the signals is ρ12, then (x1 + x2) is normally distributed about mean (M1 + M2) with standard deviation Σ12 given by

Σ12^2 = σ1^2 + σ2^2 + 2ρ12σ1σ2        (A1)

Hence the general effect of a positive correlation is to increase the standard deviation of the sum (x1 + x2) to Σ12 instead of (σ1^2 + σ2^2)^1/2, the value obtained in the absence of correlation. If ρ12 = +1, Σ12 becomes (σ1 + σ2). If σ1 = σ2 (the worst case) Σ12 (ρ12 = 1) is 41% above its value for ρ12 = 0. Hence by neglecting (a positive) correlation in this case we shall make no error in determining the mean of the sum, but we shall underestimate the standard deviation by as much as 41% if σ1 and σ2 are nearly equal, and we shall correspondingly underestimate the probability that the sum shall exceed a given level higher than the mean. Again, if we assume that x1 and x2 are completely correlated, and determine the distribution of the sum by power addition, we shall overestimate Σ12 relative to its correct value given by (A1), and we shall correspondingly overestimate the probability that the sum will exceed a given level higher than the mean.
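The 41% figure can be checked in a few lines (the values are arbitrary; the symbols follow the text):

    import math

    def sigma_of_sum(sigma1, sigma2, rho12):
        """Standard deviation of (x1 + x2) from equation (A1)."""
        return math.sqrt(sigma1 ** 2 + sigma2 ** 2 + 2.0 * rho12 * sigma1 * sigma2)

    # Worst case sigma1 = sigma2: complete correlation raises the standard
    # deviation of the sum by a factor sqrt(2), i.e. by about 41%.
    ratio = sigma_of_sum(1.0, 1.0, 1.0) / sigma_of_sum(1.0, 1.0, 0.0)   # 1.414...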

More generally, if we have several quantities of which x_i and x_j are a typical pair, normally distributed about mean zero with standard deviations σ_i, σ_j and such that the correlation coefficient between the two distributions is ρ_ij, then the distribution of (x1 + x2 + ... + xn) is normal with mean zero and standard deviation Σ where

Σ^2 = (σ1^2 + σ2^2 + σ3^2 + ... + σn^2) + 2ρ12σ1σ2 + 2ρ13σ1σ3 + ... + 2ρ(n-1)nσ(n-1)σn        (A2)

If all the ρ_ij are zero, Σ^2 = Σ0^2 = σ1^2 + σ2^2 + ... + σn^2; if all the ρ_ij are +1, Σ = Σ1 = σ1 + σ2 + ... + σn.

Here again, the effect of correlation appears to be simply to alter the resultant standard deviation Σ. If Σ0 is the value of Σ obtained when correlation is neglected, and the corresponding value of Σ obtained when full account is taken of correlation is (1 + λ)Σ0, then the probability calculated as that for the sum to be between α and β on the assumption of zero correlation is really the probability for the sum to be between (1 + λ)α and (1 + λ)β. The difficulty is to estimate λ correctly. All we really know is that if non-negative correlation exists between each pair of unwanted signals, λ is between the value zero appropriate to uncorrelated signals, and the maximum value appropriate when each pair of signals is completely correlated; this maximum value is λmax where

1 + λmax = (σ1 + σ2 + σ3 + ... + σn)/(σ1^2 + σ2^2 + ... + σn^2)^1/2        (A3)



This suggests that two resultant distributions D0 and D1 should always be worked out on the respective assumptions of zero and unity correlation between any pair of constituent signals. The actual distribution is then bracketed between D0 and D1. Any normal or nearly normal distributions can be combined for D0 by adding means and taking the resultant standard deviation as the square root of the sum of the squares of the constituent standard deviations; this resultant is then combined by quantized convolution with the remaining non-normal constituent signals. Distribution D1 is calculated as if there were total correlation between each pair of constituent signals. Any normal or nearly normal distributions can be combined by adding means and standard deviations, and the resultant of these is then combined by power addition with the remaining non-normal constituent signals.
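For constituent signals which are all (at least nearly) normal, the two bracketing standard deviations and the factor (1 + λmax) of equation (A3) follow directly, as in this sketch (names are illustrative; non-normal constituents would still require the convolution and power-addition steps described above):

    import math

    def resultant_sigma(sigmas, rho):
        """Equation (A2): rho[i][j] is the correlation coefficient between
        signals i and j; the means are simply added and are not shown here."""
        total = sum(s * s for s in sigmas)
        for i in range(len(sigmas)):
            for j in range(i + 1, len(sigmas)):
                total += 2.0 * rho[i][j] * sigmas[i] * sigmas[j]
        return math.sqrt(total)

    def bracketing_sigmas(sigmas):
        """Standard deviations of the bracketing distributions D0 (all rho = 0)
        and D1 (all rho = +1), and the factor (1 + lambda_max) of equation (A3)."""
        sigma_0 = math.sqrt(sum(s * s for s in sigmas))
        sigma_1 = sum(sigmas)
        return sigma_0, sigma_1, sigma_1 / sigma_0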

One method of reducing the uncertainty because of partial correlation might be to group together signals which are likely to be highly correlated (for example, because the signal paths are subject to similar weather conditions) and to treat signals within any such group, say G_r, as completely correlated. Then the sum of the signals within group G_r can be regarded as a distribution of type D1, and the resultant R_r obtained in an analogous manner. The sum of the various resultants R_r and the interfering signals which did not belong to any of the groups G_r, on the other hand, can be regarded as an uncorrelated distribution of type D0.



APPENDIX B 



Formulae for Coefficients in a Quantized Convolution Product 



We have to consider here the algebraic product of an expression

S1 = A13X^100 + A12X^89 + A11X^71 + A10X^56 + A9X^42 + A8X^30 + A7X^20 + A6X^13 + A5X^8 + A4X^5 + A3X^3 + A2X^2 + A1X + A0        (B1)

with a similar expression S2 in which the same powers of X occur but different coefficients B0, B1, B2, ... B13 occur. In order that the process of algebraic multiplication may be repeatable, it is necessary to express the sum, S1 + S2, of the two signals in the same form with different coefficients C0, C1, C2, ... C13, as in equation (7) of the main text.



Table 1 shows how the powers of X arising when the algebraic product is formed can be rounded off and replaced by the nearest power occurring in equation (B1). This nearest value is estimated conservatively when there is any doubt. The rows and columns of Table 1 are numbered according to the suffixes of the coefficients of equation (B1). The second row specifies the appropriate quantized values of signal S2 while the second column specifies the appropriate quantized values of signal S1. The tabular entry is the corresponding quantized value appropriate to the signal (S1 + S2). The formulae for the coefficients C0, C1, C2 ... C13 associated with (S1 + S2) in equation (7) of the main text are given below. They are mainly derived by repeated use of Table 1. The only point of difficulty is the initial term (A13 + B13 - A13B13) in the expression for C13.



TABLE 1
Quantized Values for Powers of X in Signal Products

The tabular entry is the quantized value of (S1 + S2) when S1 has the quantized value at the left of the same row and S2 the quantized value at the head of the same column. All quantized values are multiples of I0.

                                Signal S2
Column No.               13    12    11    10     9     8     7     6     5     4     3     2     1     0
dB below I0             >=I0   0-1   1-2   2-3  3-4.5 4.5-6  6-8  8-10 10-12 12-14 14-16 16-18 18-20  >20
Quantized value          I0   0.89  0.71  0.56  0.42  0.30  0.20  0.13  0.08  0.05  0.03  0.02  0.01    0

Row  Signal S1
No.  dB below I0  Value
13   >=I0         I0     I0    I0    I0    I0    I0    I0    I0    I0    I0    I0    I0    I0    I0    I0
12   0-1          0.89   I0    I0    I0    I0    I0    I0    I0    I0    I0   0.89  0.89  0.89  0.89  0.89
11   1-2          0.71   I0    I0    I0    I0    I0    I0   0.89  0.89  0.89  0.71  0.71  0.71  0.71  0.71
10   2-3          0.56   I0    I0    I0    I0    I0   0.89  0.71  0.71  0.71  0.56  0.56  0.56  0.56  0.56
 9   3-4.5        0.42   I0    I0    I0    I0   0.89  0.71  0.71  0.56  0.56  0.42  0.42  0.42  0.42  0.42
 8   4.5-6        0.30   I0    I0    I0   0.89  0.71  0.56  0.56  0.42  0.42  0.30  0.30  0.30  0.30  0.30
 7   6-8          0.20   I0    I0   0.89  0.71  0.71  0.56  0.42  0.30  0.30  0.30  0.20  0.20  0.20  0.20
 6   8-10         0.13   I0    I0   0.89  0.71  0.56  0.42  0.30  0.30  0.20  0.20  0.20  0.13  0.13  0.13
 5   10-12        0.08   I0    I0   0.89  0.71  0.56  0.42  0.30  0.20  0.20  0.13  0.13  0.13  0.08  0.08
 4   12-14        0.05   I0   0.89  0.71  0.56  0.42  0.30  0.30  0.20  0.13  0.13  0.08  0.08  0.05  0.05
 3   14-16        0.03   I0   0.89  0.71  0.56  0.42  0.30  0.20  0.20  0.13  0.08  0.05  0.05  0.05  0.03
 2   16-18        0.02   I0   0.89  0.71  0.56  0.42  0.30  0.20  0.13  0.13  0.08  0.05  0.05  0.03  0.02
 1   18-20        0.01   I0   0.89  0.71  0.56  0.42  0.30  0.20  0.13  0.08  0.05  0.05  0.03  0.02  0.01
 0   over 20      0      I0   0.89  0.71  0.56  0.42  0.30  0.20  0.13  0.08  0.05  0.03  0.02  0.01   0






If the first signal S1 is above I0, then S1 + S2 will necessarily be above I0, so that the probability that S1 + S2 is above I0 is simply A13. Similarly, the probability that S1 + S2 is above I0 because S2 is above I0 is simply B13. But this has counted twice the case when S1 and S2 are both above I0, for which the probability is A13B13.

The full results are:



C13 = (A13 + B13 - A13B13) + (A5 + A6 + ... + A12)B12 + (A8 + A9 + A10 + A11 + A12)B11
      + (A9 + A10 + A11 + A12)B10 + (A10 + A11 + A12)B9 + (A11 + A12)B8 + A12(B5 + B6 + B7)        (B2)

C12 = (A0 + A1 + ... + A4)B12 + (A5 + A6 + A7)B11 + A8B10 + A9B9 + A10B8 + A11(B5 + B6 + B7)
      + A12(B0 + B1 + ... + B4)        (B3)

C11 = (A0 + A1 + ... + A4)B11 + (A5 + A6 + A7)B10 + (A7 + A8)B9 + A9(B7 + B8) + A10(B5 + B6 + B7)
      + A11(B0 + B1 + ... + B4)        (B4)

C10 = (A0 + A1 + ... + A4)B10 + (A5 + A6)B9 + (A7 + A8)B8 + A8B7 + A9(B5 + B6) + A10(B0 + B1 + ... + B4)        (B5)

C9 = (A0 + A1 + ... + A4)B9 + (A5 + A6)B8 + A7B7 + A8(B5 + B6) + A9(B0 + B1 + ... + B4)        (B6)

C8 = (A0 + A1 + ... + A4)B8 + (A4 + A5 + A6)B7 + A6B6 + A7(B4 + B5 + B6) + A8(B0 + B1 + ... + B4)        (B7)

C7 = (A0 + A1 + A2 + A3)B7 + (A3 + A4 + A5)B6 + A5B5 + A6(B3 + B4 + B5) + A7(B0 + B1 + B2 + B3)        (B8)

C6 = (A0 + A1 + A2)B6 + (A2 + A3 + A4)B5 + A4B4 + A5(B2 + B3 + B4) + A6(B0 + B1 + B2)        (B9)

C5 = (A0 + A1)B5 + (A2 + A3)B4 + A4(B2 + B3) + A5(B0 + B1)        (B10)

C4 = (A0 + A1)B4 + (A1 + A2 + A3)B3 + A2B2 + A3(B1 + B2) + A4(B0 + B1)        (B11)

C3 = A0B3 + A1B2 + A2B1 + A3B0        (B12)

C2 = A0B2 + A1B1 + A2B0        (B13)

C1 = A0B1 + A1B0        (B14)

C0 = A0B0        (B15)



Now although the appearance of some of equations (B2) to (B15) is somewhat formidable, it is in principle very little more difficult to write a computer programme to give the C's when the A's and B's are given (or have previously been found) than to write a programme for the ordinary algebraic-multiplication type of convolution considered in relation to equation (3) of the main text. Equations (B2) to (B15) as they stand appear to be a satisfactory compromise between a quantization which is too coarse and one which is so fine that excessive machine time is required for evaluating the C's when the A's and B's are known. But it would not be at all difficult to adjust Table 1 and equations analogous to (B2), (B3) etc. to cope with any different quantization which might appear to be preferable.

JMP/JUC



Printed by BBC Research Department, Kingswood Warren, Tadworth, Surrey