[Nettime-bold] Might a laboratory experiment now being planned destroy planet Earth
Might a laboratory experiment destroy planet Earth?

F. Calogero
Dipartimento di Fisica, Università di Roma "La Sapienza"
Istituto Nazionale di Fisica Nucleare, Sezione di Roma
Synopsis

Recently some concerns have been raised about the possibility that a high-energy ion-ion colliding-beam experiment which has just begun at Brookhaven National Laboratory in the United States, and a similar one that is planned to begin some years hence at CERN in Geneva, might have cataclysmic consequences, hypothetically amounting to the disappearance of planet Earth. The probability that this happen is of course tiny. In the first part of this paper a popularised review is presented of the motivations for such concerns and of the extent to which they have been investigated, and in the context of such a popularised treatment the appeasing conclusions of these investigations are scrutinised. In the second part, in the light of this example, a terse analysis is provided of some scientific, ethical, political and sociological issues raised by the problematique associated with human endeavours which might entail a tiny probability of an utterly catastrophic outcome -- with special emphasis on the related responsibilities of the scientific/technological community.

"First Commandment for experimental physicists: Thou shalt put error bars on all your observations. First Commandment for theoretical physicists: Thou shalt get the sign right."
(Private communication by Professor Sebastian Pease)

0.
Recently concerns have been raised about the possibility that a major catastrophe -- possibly amounting to the complete destruction of planet Earth -- result from an experiment which has just begun (summer 2000; for up-to-date information see http://www.rhichome.bnl.gov/AP/Status) at the Brookhaven National Laboratory (BNL) in the United States, and/or from a similar experiment under preparation at the European Centre for Nuclear Research (CERN) in Geneva, scheduled to begin there some years hence. The purpose and scope of this paper are to explain what these concerns are about, to report on various estimates [1-4] -- all of them appeasing -- of the risk, and to proffer some general scientific-political-ethical-sociological considerations related to issues of this kind; considerations whose interest should, however, be qualified by this author's avowed lack of specific scientific-political-ethical-sociological expertise.

1.
More detailed and professional analyses of the experiments in question and of their hypothetical danger are given in [3] and [2] (see also [4]). The presentation given below is largely in the nature of a popularised summary of these papers; it also points out certain aspects which to this author appear as possible shortcomings of these treatments. Some information on the publication history of these papers [1-4] is also provided below, as it might have some (minor) relevance to the more philosophical considerations reported in the second half of this paper. This author profited from many discussions by voice and by e-mail with several colleagues, who will, however, remain unnamed to avoid any hint that they agree with what is written herein, for which the author feels he should take exclusive personal responsibility. Explicit thanks are, however, due to Adrian Kent for having called my attention (via Pugwash) to this problematique.

2.
The experiments in question realise collisions of heavy ions, produced by two beams of such particles colliding head-on against each other. The particles of the two beams are accelerated by huge (and very expensive -- many hundreds of millions of dollars) apparatuses ("colliders"). In the case of the Relativistic Heavy Ion Collider (RHIC) experiment at BNL, the particles in the two beams are gold ions (indeed, one can think of gold nuclei, since the ionisation is almost complete), which get accelerated to a (planned) energy of 20 TeV (1 TeV = 10^3 GeV = 10^6 MeV = 10^12 eV), amounting to an approximate energy of 100 GeV per nucleon (each gold nucleus is composed of 79 protons and 118 neutrons, altogether 197 nucleons). The energy in the centre-of-mass system when two particles collide adds up to 40 TeV. In the CERN experiment (A Large Ion Collider Experiment -- ALICE), the energies per ion are expected to be 30 times larger, and the mass of each ion is also larger, but only marginally so: lead rather than gold, mass 207 (82 protons and 125 neutrons) rather than 197. The total energies in the centre-of-mass system (which in this case of colliding beams with equal energies coincides with the laboratory system; the situation is of course quite different in the case of one beam hitting a fixed target) are the largest ever realised in high-energy physics experiments; although of course one should actually talk, in this context, of energy density (the energies in macroscopic events, such as the collision of two billiard balls, or two cars, are much larger). It should also be noted that experiments with higher centre-of-mass energies per nucleon have been realised in machines which accelerate protons rather than ions.
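As a back-of-the-envelope check of the figures just quoted, here is a minimal Python sketch reproducing the arithmetic (all input numbers are those given in the text above; the script asserts nothing new):

    # Check of the collision energies quoted in the text.
    GOLD_NUCLEONS = 197          # 79 protons + 118 neutrons
    E_PER_NUCLEON_GEV = 100.0    # planned RHIC energy per nucleon, per beam

    beam_tev = GOLD_NUCLEONS * E_PER_NUCLEON_GEV / 1000.0
    print(f"energy per gold beam:  ~{beam_tev:.1f} TeV")      # ~19.7, i.e. ~20 TeV
    print(f"centre-of-mass energy: ~{2 * beam_tev:.1f} TeV")  # ~39.4, i.e. ~40 TeV
    # ALICE: energies per ion expected ~30 times larger (per the text).
    print(f"ALICE, energy per ion: ~{30 * beam_tev:.0f} TeV")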
In any case from some points of view these high-energy experiments
explore a region of microphysics never before attained by man-made experiments
-- although cosmic ray events with much higher centre-of-mass energies occur all
the time and many of them have been experimentally observed. It is therefore
justified to wonder whether the exploration of such (relatively) new
phenomenology might give rise to surprises. Indeed it is precisely the hope to
find something interesting that motivates these experiments and justifies their
huge costs. Specifically, it is expected/hoped that in these experiments,
immediately after the head-on collision, the nuclear matter that constituted the two colliding nuclei becomes effectively a "plasma" made up of quarks and gluons --
namely of the objects which elementary particle physicists consider nowadays as
the ultimate constituents of matter. Such a plasma would to some extent mimic
the situation that existed in the early Universe (if the origin of our Universe
is correctly described by Big Bang scenarios); information on its structure is
obviously of great scientific interest.

3.
But could the "surprises" entail dangerous
consequences?
While the experimental exploration of any new phenomenology always entails an unknown element, it is possible for elementary particle physicists to envisage what the potentially dangerous developments are, and to analyse the likelihood that they emerge. Indeed just such a task was recently assigned (perhaps a bit late in the game!) by the Director of BNL to four American physicists, who produced a Report [1], and who later issued another (significantly modified) version of it [3]; and a somewhat analogous analysis has been performed in parallel by three physicists associated with CERN [2]. These findings have also been commented upon by two other, quite distinguished, physicists, in a note published by Nature [4]. All these experts agree that the probability of any catastrophic outcome of the RHIC experiment is so small as to justify proceeding with it without delay. Their arguments also apply, although in some respects with less cogency (and in others with more), to the ALICE experiment at CERN. A committee to analyse the matter has also been recently appointed by the Director General of CERN, Luciano Maiani.
It is however desirable that a larger community of experts than those specifically tasked to do so involve themselves in this problematique, and look critically at these analyses; and it is also desirable that, to the maximum extent possible, a larger community of responsible scientists, and of citizens, become informed. Moreover this problematique also entails considerations of a political and ethical character, which are to some extent independent of the technical details, but can only profit from a better understanding of them. It is for these reasons that this attempt is made to popularise (and in that context, to some extent, also to criticise) the findings of the experts who have looked at these issues [1-4], undoubtedly with greater scientific competence than I can muster, yet occasionally giving me the impression of being biased towards allaying fears "beyond reasonable doubt"; a posture quite understandable under the circumstances -- since undue fears might block experiments which promote scientific progress -- but which nevertheless constitutes, almost by definition, an odd stand for scientists (doubt being the seed of science; although, admittedly, only reasonable doubt!).

4.
Three hypothetical dangers which might emerge from these experiments have
been considered: (i) formation of a "black hole" or some other "gravitational
singularity" in which surrounding matter might then "fall"; (ii) transition to
another hypothetical "vacuum state", different, and lower in energy, than the
vacuum state of our world (which would therefore be metastable); (iii) formation
of a stable aggregate of "strange" matter, which might initiate a transition of
all surrounding matter to this new kind of matter, with the result of completely
destroying planet Earth (such a phenomenon would entail a great liberation of
energy; hence, if it were to unfold quickly, it would result in a Supernova-like
explosion).
The first ("gravitational") concern can be allayed by simple, hence quite
reliable, order-of-magnitude calculations, that definitely exclude any such
possibility. The second concern can also be
eliminated by estimates of the (very large) number of cosmic ray collisions, at
higher centre-of-mass energy than those envisaged in these experiments, which
have occurred in the past history of our Universe, and which should therefore
already have triggered a transition to another vacuum if such a phenomenon were
possible.
The third concern requires a more detailed analysis.

5.
The corresponding, dangerous scenario goes as follows. (i) Suppose that a sufficiently large aggregate of strange hadronic matter exist, and that it be stable (at zero pressure) -- or metastable, but with a sufficiently long lifetime for the subsequent developments to unfold. (Strange hadronic matter is nuclear matter formed not only by nucleons -- protons and neutrons -- but also by "strange" baryons, which are well known to exist, although none of them is stable; or, equivalently, in terms of "more elementary" constituents, by the quarks that make up nucleons, but also by those other quarks which are called strange -- quarks that are also known to exist, as constituents of those elementary particles likewise called strange, perhaps because all of them are unstable and are therefore not normally present in our environment.) Following current usage, we call such an aggregate of strange matter a strangelet. (ii) Suppose that strangelets are negatively charged. (iii) Suppose that a (negatively charged) strangelet is produced in the lab, in a collision experiment with heavy ions, and stops there without previously breaking up and thereby disassembling. Then, via the long-range Coulomb (electric) force, it would attract (or, equivalently, be attracted by) a nearby atomic nucleus, fuse with it, and become a larger strangelet, whose charge might be initially positive due to the contribution of the positive charge of the nucleus (all nuclei have positive charge, and charge is conserved), but which would subsequently become negative again via emission of positrons -- positively charged electrons (a standard phenomenon, known as beta decay) -- if the normal (ground) state of strange nuclear matter were negatively charged. The new (larger) strangelet would then fuse with another nucleus, and so on and on. In this manner all surrounding matter might get transformed into strange matter. If the entire planet Earth were to be so transformed, the final outcome would likely be a sphere of enormous density, with roughly the mass of the Earth and a radius of only, say, one hundred metres. An enormous amount of nuclear energy would be liberated in the process which, if it were a fast one, would result in an explosion of astronomical proportions. This prospect clearly represents a very great, albeit hypothetical, danger. But how
hypothetical? Two approaches are
possible to assess this. One approach relies on current theoretical
understanding of nuclear and subnuclear physics, and tries to estimate the
probability that the scenario mentioned above unfolds. A second approach
relies on the observation that collisions analogous to those which are going to be realised in the experiments under discussion have already occurred in
Nature (cosmic rays), without causing observable calamities; from this fact
upper bounds can be inferred on the probability that these experiments
have a catastrophic outcome. The main
disadvantage of the first approach is that it relies on a theoretical
framework which is still imperfectly known -- and it moreover depends on
computations nobody knows how to reliably perform. The second approach
must also necessarily rely, to some extent, on our theoretical understanding
of nuclear and subnuclear physics (and also astrophysics, see below), but of
course much less so. In the context of both approaches prudence requires that,
as a rule, uncertainties be generally replaced by "worst case assumptions",
although this should be done "within reason". The analyses should in any case be
conducted with a critical spirit -- not with the purpose of proving a conveniently
appeasing conclusion -- indeed a prudent methodology (not really followed so
far, to the best of my knowledge) should engage two groups of competent experts,
a blue team trying to make an "objective" assessment, and a red team
(acting as "devil's advocates") specifically tasked to make a genuine effort
at proving that the experiments are indeed dangerous -- an effort that, if
successful, might then be challenged by the blue team who might
perhaps point out that such a conclusion could only be achieved by making too
many too far-fetched worst-case assumptions -- and also perhaps by introducing
too encompassing a definition of what "dangerous" means. A debate might ensue,
which, if conducted in a genuine scientific spirit, would be quite enlightening
for those who eventually have the responsibility to decide. But we shall return
below to a discussion of these methodological issues. For the moment we
limit our presentation of the scientific aspects of the issue to a superficial
if occasionally critical outline of the arguments and conclusions of the
analyses performed so far [1-4] as we understand them; analyses which seem to us
to have been in the nature of blue team treatments -- conducted of course
in good faith by competent experts, but occasionally tainted by an excessive
awareness of the public relations relevance of the exercise, perhaps at the
expense of candour if not objectivity.

6.
Let us begin by reporting on a risk assessment performed in the framework
of the first point of view, namely based on the current theoretical
understanding of the likelihood that the dangerous scenario described above
unfold. We have seen that, in order for this to happen, three ingredients are
necessary: (i) strangelets should exist (namely, be stable -- or at least
long-lived -- at zero pressure); (ii) they should be negatively charged; (iii)
there should be a nonnegligible probability that they be produced in the
experiments in question. Let us consider each of these three items.
The possibility that strangelets might be stable (or metastable
but long-lived -- without external pressure) is quite conceivable on the basis of
our present knowledge of nuclear and subnuclear physics, although nobody is
really able to make a firm prediction in that respect, and perhaps the present
body of evidence and understanding of nuclear and subnuclear physics might be
interpreted as rather suggesting otherwise. Under these circumstances, it cannot
in particular be excluded with any certainty that stable strangelets
might exist, and moreover might exist only at masses larger than, say, 300
nucleonic masses, so that they might in principle be produced in gold-gold
or lead-lead collisions (which put together a total mass of approximately 400 nucleonic units), but not, for instance, in the collision of two nuclei of iron (iron is a rather common element, both as part of celestial bodies and of cosmic rays, but its nucleus only has mass 56, being composed of 26 protons and 30 neutrons). Indeed it is likely that
strangelets, if they exist, are only stable (or long-lived) at relatively
large masses. And they might be stable for arbitrarily large
mass.
The second element of the dangerous scenario outlined above requires
strangelets to be negatively charged (if they were positively charged,
they would be repelled by ordinary nuclei due to the long-range Coulomb force,
which is sufficient to keep them sufficiently apart to exclude the initiation of
any nuclear reaction -- just the same mechanism that prevents ordinary nuclei
from initiating nuclear reactions among themselves, even when these processes
are energetically favoured). This appears, on the basis of our present knowledge
of nuclear and subnuclear forces, quite unlikely: the expectation -- to the
extent that such calculations can be performed with any degree of reliability --
is that, if strangelets exist and are stable, their charge will be
positive, albeit perhaps small (namely, if Z is their charge in standard
units, and A their mass in nucleonic units, then a likely guess is that
0<Z/A<<1 -- while for ordinary nuclei Z/A ~ 1/2, or a little less). Of
course, the very fact that the charge would be small hints at the possibility
that it might end up being negative; but this appears most unlikely on the basis
of rather elementary notions about the strangelet's make-up in terms of
quarks and known properties of different quarks (in particular,
the values of their masses). However our present picture of a strangelet
of mass, say, 300-400 is as a bound assembly of 900-1200
quarks, and current theoretical understanding of the detailed internal
structure of such an object is rather imperfect; the possibility that it might
be seriously flawed, due to insufficient insight on collective multi-body
phenomena, cannot be excluded.
The third element of the catastrophic scenario requires that
strangelets be produced in heavy-ion collisions such as those realised in
the experiments in question. This looks most unlikely. The reason is that, in
such a high energy collision, an
environment gets created in which all the elementary constituents -- be they
quarks or subassemblies of few quarks such as nucleons or strange
baryons -- have a lot of kinetic energy; the most natural outcome is therefore
that many fragments fly out; it is very difficult for a very large object such
as a strangelet (which, as we saw above, would be formed by a very large
number -- hundreds! -- of constituents) to get assembled and to come out
unbroken. On the other hand it is not easy to extract a quantitative estimate from such a qualitative analysis -- which is most likely to be basically correct, although again subject to the same caution mentioned above concerning a possible lack of theoretical insight on collective multi-body effects: there indeed is a theoretical model ("evaporation") which tends to predict somewhat larger probabilities of producing large agglomerates in collisions such as those under consideration, than other, generally considered more reliable, models do. However, graduating from qualitative statements to some kind of quantitative estimate seems to me a desirable development, since a large number of collisions will be realised in the experiments in question (approximately 20 billion per year at RHIC, which is expected to run for 10 years), and one would like to be quite certain that not a single dangerous strangelet gets produced, if indeed just one would be sufficient to initiate a catastrophic process.
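To make concrete why a quantitative per-collision estimate matters here, consider the following minimal sketch (the collision count is the one quoted above; the "tolerable" chance of even a single production event is an arbitrary illustrative choice of mine, not a figure from the papers under review):

    # With N collisions and per-collision probability p of producing a
    # dangerous strangelet, the expected number produced is N * p; for
    # small N * p this is also roughly the chance of producing at least one.
    N = 20e9 * 10      # ~20 billion collisions/year for ~10 years at RHIC
    target = 1e-6      # hypothetical tolerable chance of even one event

    print(f"collisions over the run: N = {N:.0e}")
    print(f"required per-collision bound: p < {target / N:.0e}")   # 5e-18

A qualitative argument alone can hardly certify a number as small as 5 x 10^-18 per collision; hence the desirability, argued above, of graduating to quantitative estimates.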
Let us now pause a moment to ponder on the nature of these arguments, and
on the significance in this context of terms and notions such as likelihood
and probability.
When we talk about the likelihood that stable strangelets exist,
and that they possibly be negatively charged, the notion of probability we
invoke is associated with our imperfect knowledge of the laws of nature. This
notion is rather different from that associated with probabilistic evaluations
caused by our inability to predict exactly -- perhaps because of insufficient
knowledge of the initial conditions -- the outcome of a physical phenomenon
whose dynamics we do understand, which is instead the context in which we talk
more usually, in ordinary life, about probabilities, for instance when we state that the probability of getting, say, a two by throwing a die is one sixth (in this case the relevant "classical dynamics", while well known, actually entails a "sensitive dependence" on the initial conditions). It is
indeed clear that the question whether negatively charged strangelets are
or are not stable can in principle be settled; either one or the other
alternative is true, and we eventually might be able to find out for sure
(especially if the answer is positive), and thereafter no room would be left for
probabilistic assessments. The probability of producing a dangerous
strangelet -- if such a stable or metastable object does exist --
by running the RHIC or ALICE experiment for some time belongs instead to the
second notion of probability: even if we had a much deeper knowledge of nuclear
and subnuclear physics than we now have, we could never hope to go beyond a
probabilistic assessment as regards the risk of producing a dangerous
strangelet in such circumstances. This is a fundamental consequence of
the quantum character of the laws of microphysics, as we understand them.
However, we could -- if we knew enough -- be able to estimate accurately that
probability, and thereby possibly to conclude it is small enough to exclude any
reasonable concern.
On the other hand, at the current stage of knowledge, it would be
desirable to provide some kind of probabilistic assessment for all the
components of the catastrophic scenario -- in particular, for the three points
mentioned above -- in order to come up with an estimate of the risk -- unless
one can convincingly argue that this risk is certainly so tiny that any attempt
to quantify it is useless. In the context of such an exercise, the question
will arise whether these probabilities -- of which the second and third are
presumably quite small, presumably the latter more so than the former -- are
independent, and should therefore be multiplied to get a final
assessment. I have heard arguments that suggest this to be the case. I am not
convinced. For instance, an important element which might, as it were
simultaneously, affect all these evaluations would be some
(possibly unexpected and perhaps a priori quite "improbable") feature of
that very very-many-body problem which would have to be mastered in order to
establish the properties of (heavy) strangelets. And it would obviously
be fallacious to multiply the probabilities based on theoretical considerations with those based on empirical evidence, as we explain below, after we have tersely reviewed this second line of argumentation.

7.
Let us then survey what can be learned by taking the second point of
view. High-energy collisions of heavy nuclei occur naturally, when cosmic rays impinge on heavenly bodies or on one another in the cosmos. Yet no
catastrophic event has been so far attributed to such collisions. Does this
provide sufficient assurance that no disaster will occur in the experiments
under consideration? The short answer is, unfortunately, rather inconclusive
-- indeed negative if one believes that the evaluation must be
prudently made on the basis of "worst-case analyses". But before providing some
details, let us inject two remarks.
Firstly, we would like to emphasise that, if one tries to set an upper bound on the probability of a disaster occurring by arguments such as those just mentioned, and finds out that the upper bound thus obtained is not sufficiently small to conclude that the risk is small enough to be acceptable, this by no means entails that the risk is indeed sizeable: it only indicates that that particular argument is not useful to provide confidence. In this respect the difference between what we dubbed above first approach and second approach must be emphasised: in the first case, one is trying to assess the actual likelihood that a dangerous outcome emerge; in the second, one is trying to find an upper bound to the probability that a catastrophe occur. Let us repeat the obvious: in the first case,
if one gets from the analysis a probability that is not quite small, then
concern is indeed appropriate; in the second, if it turns out that the
computed probability is too large to provide assurance, this merely indicates
that we do not have an argument that provides confidence, but it would be quite
wrong to interpret such a finding as an indication that the probability of
disaster has been shown to be sizeable, because the nature of the argument
clearly prevents any such conclusion.
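The generic logic of such null-observation bounds can be made explicit; the following sketch states the standard "rule of three" from elementary statistics (my own illustrative addition -- the papers under review do not frame their bounds this way, and the trial counts below are arbitrary):

    # If zero catastrophes are observed in N comparable, independent natural
    # "trials", the per-trial probability p is bounded, at confidence level c,
    # by (1 - p)^N >= 1 - c, i.e. roughly p < 3/N at 95% confidence.
    def upper_bound(n_trials: float, confidence: float = 0.95) -> float:
        return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

    for n in (1e6, 1e9, 1e12):
        print(f"N = {n:.0e}:  p < {upper_bound(n):.0e}")
    # Note: a bound that is "not small enough" says nothing about the actual
    # risk being large -- exactly the point made in the paragraph above.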
Secondly, we would like to note an advantage of the second approach: it tends to be applicable to a larger variety of catastrophic hypotheses, rather than only to a particular kind of scenario. For instance -- if it did work -- it
might also serve to exclude the risk that in high-energy ion-ion collisions such
as those envisaged in the RHIC and ALICE experiments a different configuration
of ordinary nuclear matter -- not strange: made up of ordinary
nucleons -- be created which, if more tightly bound than the standard
nuclear matter that constitutes standard (heavy) nuclei, might serve as "centre
of condensation" for a transition of the nuclear matter of standard nuclei to
this new configuration -- a transition which, if it were to involve a
macroscopic chunk of matter, would be accompanied by a large release of energy
and would therefore also entail catastrophic consequences. (For instance such a
hypothetical, albeit implausible, more tightly bound configuration of ordinary nuclear
matter could be caused by a prevalence of the spin-orbit component of the
nuclear force -- which can always be adjusted to be attractive -- over the
central and tensor components, which provide instead most of the binding energy
in standard nuclei. Such an anomalous "spin-orbit-bound" [5] configuration of
nuclear matter would be characterised by large values of the relative angular
momentum for every nucleon pair -- something that can indeed be in principle
achieved -- and would therefore be very different from the configuration of the
nuclear matter of standard (heavy) nuclei, and this might explain the
metastability of such standard nuclear matter -- a metastability which might
have a longer lifetime than that of the Universe, but might hypothetically be
broken by a sufficiently energetic collision of sufficiently heavy nuclei --
giving thereby rise to a hypothetically catastrophic scenario analogous to that
described above for strange nuclear matter -- although it is not clear in this
case what the mechanism might be to cause the process to continue, so as to
eventually involve macroscopic quantities of matter).

8.
Let us then proceed to the second approach and report tersely two arguments which have been made (but also largely unmade) by BJSW [1], DDH [2] and JBSW [3] to provide, from empirical considerations based on cosmic ray phenomenology, confidence about the safety of the RHIC experiment. These arguments apply also to ALICE, but are less conclusive in that context. Of course, in order to be reliable, these arguments must refer to cosmic ray events analogous (in terms of the energies, and masses, involved) to those being envisaged in these experiments -- via direct evidence based on analogous cosmic ray events which have actually been measured, or via reliable extrapolations of such data (high energy cosmic ray data for heavy nuclei such as gold or lead are scarce).
BJSW [1] point out that collisions analogous to those planned at RHIC occur when cosmic rays hit the Moon. (The use of the Moon, rather than the Earth itself, for this argument is required because the majority of the cosmic rays that hit the Earth interact with its atmosphere before reaching the ground, and the atmosphere contains few heavy elements; hence in the case of the Earth the analogy with the collisions among heavy ions, or heavy nuclei, is missing.) Such collisions have occurred for a long time (the Moon is a few billion years old), without producing the catastrophic disappearance of our satellite. From this evidence they inferred [1] a very small upper bound on the probability that a catastrophe occur in the RHIC experiment, which would be quite appeasing, were it not for their failure -- as pointed out by DDH [2] -- to take due account of an important difference between the impact of cosmic rays on the nuclei in the lunar soil and the collisions of heavy ions in the planned experiments. In the first case, a strangelet hypothetically produced in the collision would move at high speed relative to the lunar matter and would therefore have a high chance of breaking up before coming to rest (to initiate the catastrophic scenario); in the second case the hypothetical strangelet, produced in a head-on collision of two ions, would already be almost at rest in the lab. Taking due account of this difference (and using in the related computations some rather extreme -- but perhaps not excessively so -- worst-case hypotheses), DDH [2] have shown that the safety margin provided by the persistence of the Moon essentially evaporates.
DDH [2] -- who also seemed bent on providing reassurance if at all possible, but insisted on trying to do so by the second approach (perhaps being motivated by a lack of confidence in theoretical considerations alone; for instance they state that "our understanding of the interactions between quarks is insufficient to decide with confidence whether or not strangelets are stable forms of matter" [2]) -- then tried to review the relevant evidence based on cosmic ray phenomenology, while keeping due account, in the process, of the need to restrict attention to collisions in which a hypothetically formed dangerous strangelet would not be likely to break up before getting into equilibrium with the matter surrounding it.
Strangelets produced in cosmic space would eventually be swept
into star matter (DDH [2] provide arguments that this would indeed happen, if
the strangelets were negatively charged), and they would then cause stars
to blow up as supernovae, if the catastrophic scenario indeed prevails. But only
about 5 supernovae per millennium are observed (and there are other well
understood scenarios to produce at least some of them). In this manner DDH [2]
obtain, as an upper bound to the probability of producing a dangerous
strangelet in one year of running the RHIC experiment, the estimate
1/500,000,000 (one over five hundred million, namely two
billionth). This argument also produces a bound for the ALICE experiment,
which is however much larger.
DDH [2] state that this bound implies that "it is safe to run RHIC for
500 million years". A (substantially equivalent -- in operational terms! --
but) more correct -- albeit, perhaps, less appeasing -- language would state
that this bound indicates that the time scale over which a catastrophe might
emerge from the RHIC experiment is (at least) of the order of magnitude of
hundred million years.
But this bound is only applicable if the catastrophic phenomenon is fast
enough to yield a supernova event. If the process is slow, so that no visible
supernova explosion emerges, a different approach is needed. DDH [2] then argue as follows. Firstly they observe that if the process is excessively slow, then one need not worry: in particular if it were to take more than ten billion years to destroy the Earth, no concern seems appropriate, since anyway we expect that ten billion years hence planet Earth will be engulfed by a much enlarged Sun, which by that time will have become a red giant star. Hence, they focus on the intermediate range of a hypothetical scenario that is not so fast as to yield a supernova explosion, yet fast enough to cause reasonable concern in terms of the Earth getting destroyed before its natural death, as predicted by current astrophysical expectations, occurs. In this context, they
look at the increased luminosity of stars that would be caused if some of them
were destroyed, even relatively slowly, by strangelets, and they get
again an upper bound of the same order of magnitude, or perhaps -- if worst-case
assumptions are made to model the destruction of a star caused by the
strangelet mechanism -- a bound one hundred times larger -- which
would indicate that the time scale over which a catastrophe might emerge from
the RHIC experiment is still quite large, of the order of (at least) a million years.
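The arithmetic behind the two time scales just quoted is elementary; a minimal sketch, using only the bounds reported above:

    # An upper bound p on the per-year catastrophe probability translates into
    # a characteristic waiting time of AT LEAST 1/p years -- a statement about
    # a time scale, not a guarantee of safety over that period.
    p_fast = 1 / 500_000_000    # DDH bound, fast (supernova-like) scenario
    p_slow = 100 * p_fast       # worst-case bound, slow-destruction scenario

    print(f"fast scenario: time scale >= {1 / p_fast:.0e} years")  # ~5e8
    print(f"slow scenario: time scale >= {1 / p_slow:.0e} years")  # ~5e6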
But this part of their analysis seems to me somewhat unconvincing -- in
particular, their modelling of the dynamics of star destruction via the
strangelet-caused mechanism. This point is, however, moot, since the DDH
[2] bound is invalidated [3] if one takes account of the possibility that
strangelets be metastable -- with a lifetime short enough for them not to
be "eaten" by stars once they are formed in interstellar space, yet long enough
to cause a catastrophe when they are produced at rest in the
lab.
It seems in conclusion that the empirical evidence from cosmic rays
yields no appeasing upper bound on the probability of producing a dangerous
strangelet in the experiments in question, at least if one insists that
any such bound, to be entirely reliable, should be obtained by treating
uncertainties via worst-case hypotheses. Indeed JBSW [3] (rather in contrast to
BJSW [1]) state:
"By making sufficiently unlikely assumptions about the properties of
strangelets, it is possible to render both of these empirical bounds irrelevant
to RHIC. The authors of Ref. [2] [namely, DDH] construct just such a
model in order to discard the lunar limits: They assume that strangelets are
produced only in gold-gold collisions [this is imprecisely stated -- the
assumption is that strangelets be stable only at masses large enough, of the
order or larger than the mass of two gold nuclei], only at or above RHIC
energies, and only at rest in the centre of mass [this is also imprecisely
stated]. We are sceptical of all these assumptions. If they are accepted,
however, lunar persistence provides no useful limits. Others [presumably a citation of Ref. [5] is missing here, which reads: "We thank W. Wagner and A. Kent for correspondence on the subject of strangelet metastability"; indeed, this Reference is quoted nowhere in JBSW [3]!], in turn, have pointed out that the astrophysical limits of Ref. [2] can be avoided if the dangerous strangelet is metastable and decays by baryon emission with a lifetime longer than [...] sec. In this case strangelets
produced in the interstellar medium decay away before they can trigger the death
of stars, but a negatively charged strangelet produced at RHIC could live long
enough to cause catastrophic results. Under these conditions the DDH bound
evaporates." 9.
There is one other point that does not seem to have been taken quite into
consideration in these analyses: namely the possibility that the catastrophic
scenario, rather than ending up in the destruction of the entire planet Earth,
yield "only" a local calamity. This might, for instance, possibly be the case if
(i) there were a valley of stability or metastability for strangelets
of masses, say, from 300 (in units of nucleon mass) to some finite
mass B -- analogous to the
situation for standard nuclei, except for the fact that, in the dangerous
strangelet case, there would be a lower mass limit (here arbitrarily
guessed at 300) and at least some negatively charged specimens would also
be included among the stable and metastable strangelets; and if moreover
(ii) heavier strangelets had a sufficiently large probability to
fission into such stable or metastable strangelets. If the various
lifetimes and cross sections for the various decays and reactions were properly
adjusted, a chain reaction might be initiated by the production of a negatively
charged strangelet in the lab and it might result in a nuclear explosion,
which might however stop before reaching astronomical proportions. Such
fine-tuning of parameters might look contrived, hence unlikely; but it would not
be the first time that Nature surprises us: who could have a priori
guessed that standard nuclear physics was so finely tuned, not only to allow
the creation of controlled nuclear reactions, but even to organise naturally
such an experiment on our planet, over geological times, in the Uranium-rich
mines of Gabon? Moreover, what about the observation (anthropic principle?) according to which, of the infinitely many other possibilities, quite a number must be excluded since, if they had prevailed, we would not be here to argue about them?

10.

But let us
abandon such far-fetched speculations, to try and summarise this part of our
discussion. To this end it is perhaps both expedient and instructive to quote GW [4]: "If strangelets exist (which is conceivable), and if they form
reasonably stable lumps (which is unlikely), and if they are negatively charged
(although the theory strongly favours positive charges), and if tiny strangelets
can be created at RHIC (which is exceedingly unlikely), then there just might be
a problem. A new-born strangelet could engulf atomic nuclei, growing
relentlessly and ultimately consuming the Earth. The word 'unlikely', however
many times it is repeated, just isn't enough to assuage our fears of this total
disaster."
GW [4] then go on to report that, by relying on what we called above the
second approach, sufficiently small upper limits can be put on the risk
probability. They report the lunar argument of BJSW [1], without mentioning the
criticism of it by DDH [2], and quote the BJSW conclusion ("cosmic ray
collisions provide ample reassurance that we are safe from a
strangelet-initiated catastrophe at RHIC" [1]), and likewise they quote
uncritically DDH [2] ("beyond reasonable doubt, heavy-ion experiments at RHIC
will not endanger our planet"); and they appeasingly conclude that "even
though the risks were always minimal, it is reassuring to know that someone has
bothered to calculate them." Unfortunately this conclusion, to the extent it
relies on the second approach, seems to me to be by now somewhat
unjustified -- as we have tried to explain above. This is, to some degree,
reflected in the modified flair of JBSW [3] relative to BJSW [1]: indeed the
sentence from BJSW [1] quoted by GW [3] -- which was the final sentence of this
Report [1] commissioned by the Director of BNL, and was indeed introduced by a
rather peremptory "we demonstrate that" -- is no more to be found in JBSW
[3], and it is replaced there by the following final paragraph of the
introductory section (which indeed follows the one we quoted above, at the end
of § 8): "We wish to
stress once again that we do not consider these empirical analyses central to
the argument for safety at RHIC. The arguments which are invoked to destroy the
empirical bounds from cosmic rays, if valid, would not make dangerous strangelet
production at RHIC more likely. Even if the bounds from lunar and astrophysical
arguments are set aside, we believe that basic physics considerations rule out
the possibility of dangerous strangelet production at
RHIC." 11. In
conclusion the main arguments to allay fears of a catastrophic outcome of the
experiments RHIC at BNL and ALICE at CERN is (i) the unplausibility, on
theoretical grounds, that stable or (sufficiently long-lived) metastable
strangelets with negative charge (i. e.., "dangerous strangelets")
exist, and (ii) the hunch that, even if they do exist, the probability that even
a single one of them be created in these experiments is exceedingly small (but
how small is small enough? -- more on this below).
People might feel that (iii) the empirical arguments based on cosmic-ray phenomenology, even if not totally convincing, provide additional confidence. Perhaps so. Yet I have also read that each of the 3 arguments, (i), (ii) and (iii) respectively, can be interpreted as providing a (tiny) upper limit -- call these limits p1, p2 and p3 respectively -- to the probability that a catastrophe occur, and that these 3 small numbers should be multiplied to obtain a final estimate, p = p1 x p2 x p3, of the upper limit to the probability of a catastrophe, since these 3 estimates are based on independent arguments. I think this is unconvincing as regards the first two probabilities, p1 and p2, because the arguments that lead to them are not quite independent, as pointed out above; and it is, in my opinion, definitely incorrect as regards the third probability, p3, which is based on empirical considerations rather than theoretical analyses.
(Indeed, imagine you are tasked to estimate the probability of drawing a black ball from a box. The theoretical information you have is that the box contains two balls, one black and one white; you also know empirically, from a number of previous draws, that the black ball came out about half the time. So your estimate based on theory is that the black ball has probability 1/2 of being drawn; your estimate based on empirical data also suggests it has probability 1/2 of being drawn; do you then conclude the probability is 1/4? I would not have introduced this trivial argument parenthetically, were it not for the fact that an eminent colleague indeed suggested the probabilities p1, p2 and p3 mentioned above should be multiplied, and he has not yet recognised -- to the best of my knowledge -- that he was wrong on that count; so, in deference to his scientific eminence, I must continue to doubt whether my trivial example is really applicable. Let the reader judge.)
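The parenthetical example is easily checked by simulation; a minimal sketch:

    # Two-ball box: a theoretical estimate (1/2) and an empirical estimate
    # (also ~1/2) are two assessments of the SAME probability; they
    # corroborate each other and must not be multiplied.
    import random

    random.seed(1)
    draws = [random.choice("BW") for _ in range(100_000)]
    empirical = draws.count("B") / len(draws)
    theoretical = 0.5

    print(f"theoretical estimate:   {theoretical}")
    print(f"empirical estimate:     {empirical:.3f}")
    print(f"wrong 'combined' value: {theoretical * empirical:.3f}")  # ~0.25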
Let us moreover recall the fundamental difference between the probability, call it P_exist, that a "dangerous strangelet" exist, and the probability, call it P_prod, that such a "dangerous strangelet" be produced in a particular experiment. In particular P_exist (once it is well defined by providing a precise definition of "dangerous strangelet", mainly by specifying its minimal lifetime) is a probability that originates from our ignorance of the theoretical framework and our inability to perform reliable computations; it would become exactly one or nil if we had a complete understanding of the matter, namely if we could ascertain with certainty whether dangerous strangelets do or do not exist (for instance, we know for sure that no stable nucleus exists which is composed only of neutrons -- although we also know that neutron stars, kept together by the long-range gravitational force, can exist and indeed most probably do exist). Acquiring such knowledge is of course entirely possible; and of course knowing for sure that dangerous strangelets (with mass less than the relevant limit) do not exist (P_exist = 0) would be the most convincing way to allay any concern about the ion-collider experiments at BNL and CERN. The value of P_prod -- a quantity which of course only makes sense if dangerous strangelets do exist, and only for experiments in which they can in principle be produced (in particular, the total mass of the two projectiles should exceed the mass of the strangelet) -- is instead never exactly zero. Indeed, by definition, it is a positive number, although it could nevertheless be sufficiently small to allay any reasonable concern -- indeed this is what most experts seem to think, even if they do not seem (so far) able to come up with a quantitative estimate (for this reason I used above the term "hunch" -- although the justification for my doing so might be viewed as moot, inasmuch as it is merely based on the difficulty of coming up with a reliable quantitative estimate, see below).

12.

My personal
assessment of the situation concerning the ion-ion experiments at BNL and CERN is not one of serious concern, because I have confidence in the assessment of the experts who unanimously state there is no danger. I am, however, a bit disturbed by what I perceive as the lack of a very sustained effort to get more quantitative estimates of the various probabilities involved in this problematique than I have seen reported so far. I wonder in this context about the ratio of the funds that have been devoted to such an endeavour, relative to those that have gone and are going into the experimental set-up. If that ratio is less than, say, a few percent, I would -- in my admitted naiveté -- feel puzzled and disturbed (however, as a theoretical and mathematical physicist, I might be biased in this assessment).
In this context, I hope (and expect) the Committee tasked by the
Director-General of CERN to look into the matter will do a thorough job, and I
also hope their findings will be adequately scrutinised by the expert community
after they are published. The usefulness of such open discussions is of course
obvious, and it has indeed been demonstrated by the progress in understanding
these matters that has resulted from the fruitful reciprocal criticism between the two expert groups BJSW and DDH [1-3].

13.

But I am
also somewhat disturbed by what I perceive as a lack of candour in discussing
these matters by many -- including several friends and colleagues with whom I
had private discussions and exchanges of messages -- although I do understand
their motivations for doing so. Many, indeed most, of them seemed to me to be more concerned with the public relations impact of what they, or others, said and wrote, than with making sure the facts were presented with complete scientific objectivity.

This is, of course, a subjective assessment, for which I must take personal responsibility. It has, in any case, motivated me to also try and outline below some scientific-political-ethical-sociological considerations related to this kind of issue, which are perhaps of more general validity than their bearing on this particular case, although this is a significant example which I keep in mind throughout these reflections.

14.

First of
all it is clear to me that risk evaluations can be reliably done only by
experts: in this particular case by experts on nuclear and subnuclear
physics, as well as, to the extent relevant, on astrophysics etc.; and also by
experts on the evaluation of the risk of mishaps which are both "extremely
unlikely and extremely catastrophic". And it is of course appropriate that, to
the maximal extent possible, those who are tasked with making such evaluations
not be affected by any "conflict of interest". If a potentially dangerous experiment is being planned by a group in a lab, it is of course desirable that risk evaluations be performed by scientists who have no interests vested in the performance of that experiment. This is, of course, not always easy -- since the most knowledgeable experts are often to be found just among those who are also most keen to see the experimental results in question.
It is also obvious that an adequate investment in risk evaluation should
be made before the funds already spent in the preparation of an experiment
render its cancellation exceedingly wasteful and therefore hard to decide, even
if there were good reasons to do so.

15.

Such risk
evaluation exercises eventually yield a probabilistic estimate, which is arrived at after detailed investigations of catastrophic scenarios, often entailing lots of detailed computations, in which context choices must often be made between "most plausible" and "worst-case" hypotheses. Prudence of course suggests that the latter be preferred, but judgements may vary. It is therefore quite possible
that different groups of experts end up with final evaluations which do not
quite tally. Just for this reason it is generally desirable that more than one
team be engaged in such analyses -- as already mentioned above, a desirable
technique is to task a blue team to perform as objective an
analysis as possible, and a red team to act as "devil's advocate", namely
to try deliberately to prove that the experiment in question is indeed dangerous
-- of course always using sound scientific arguments. At the end the two teams
should compare their findings, and especially the analyses which led to those
results. This approach is also desirable to minimise the possibility that a risk
scenario be altogether ignored. I understand such a procedure is (more or less)
generally followed whenever a potentially dangerous enterprise is undertaken,
for instance the construction of a nuclear power plant. In such a case generally
the blue team is organised by those who plan the plant, and the red
team by those who authorise its construction.
Once such an analysis has been performed, it should preferably end up
with quantitative conclusions, in the guise of probabilistic estimates --
possibly yielding a range of values. Then the nontrivial question arises of what
the acceptable value for the probability of a disaster happening should
be -- namely a value small enough that the risk be considered worth taking. This
of course depends quite significantly on the magnitude of the catastrophe that
might occur if things went wrong, and on the advantages that accrue by
proceeding with the project. Especially when the gains are purely scientific, there is clearly the danger that an excessively prudential approach end up impeding scientific progress.
Anyway I suggest some kind of benchmark to assess "acceptable" probabilities of extreme catastrophes might be set by the probability of some such impending "natural" catastrophe. I understand for instance that the probability that an asteroid with diameter over 10 km hit our planet -- an event which would most likely put an end to the human presence on Earth -- is estimated to be of the order of one over one hundred million per year [6]. It is probably correct to argue that any man-made risk of total catastrophe should be smaller than any natural risk -- but it seems to me reduction by an extra small factor -- one tenth, one hundredth, one thousandth, perhaps one over ten thousand -- should be sufficient. Hence I would probably advise in favour of authorising a worthwhile undertaking -- be it a scientific experiment or some other worthwhile human enterprise -- if I were reliably guaranteed it entails a risk of ultimate catastrophe per year less than one over one trillion (probability per year less than 10^-12).
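The arithmetic behind this suggested threshold, spelled out (both input figures are the ones given just above):

    # Natural benchmark times an extra safety factor.
    p_asteroid = 1e-8       # extinction-class asteroid impact, per year [6]
    safety_factor = 1e-4    # "one over ten thousand", the largest factor mooted

    print(f"suggested acceptable risk: {p_asteroid * safety_factor:.0e} per year")
    # => 1e-12 per year, i.e. one over one trillion.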
Of course what I have in mind here are major undertakings, which involve large investments of funds, technology and expert manpower -- and which therefore, as it were automatically, entail careful and responsible behaviour. The situation is quite different if consideration is extended to actions which can be performed on a much smaller scale -- as is for instance the case for certain experiments in molecular biology and genetic engineering. Indeed simple arithmetic shows that the human experiment on Earth is unlikely to last for many more centuries if all humans on this planet were to exercise a hypothetical individual right to engage in an activity that entails a probability per year of 10^-12 to cause a global catastrophe!
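The "simple arithmetic" alluded to here can be spelled out as follows; the world population figure is my assumed round number for circa 2000, not a datum from the text:

    # If every person independently runs a 1e-12/year chance of triggering a
    # global catastrophe, the aggregate rate is population * 1e-12 per year.
    population = 6e9            # assumed round figure, circa 2000
    p_individual = 1e-12        # the per-year figure suggested above

    p_global = population * p_individual     # 6e-3 per year
    print(f"aggregate catastrophe rate: {p_global:.0e} per year")
    print(f"expected survival time: ~{1 / p_global:.0f} years")   # ~170 years
    # Well under two centuries -- hence "unlikely to last for many more
    # centuries" if such an individual right were universally exercised.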
Considerations of this kind -- paradoxical as they may appear -- underline the obvious need to pay special attention to all those scientific/technological developments that entail the risk of major catastrophes being produced by small-scale endeavours -- involving few individuals.
On the other hand, even in the context of major enterprises such as those discussed above, I would not be surprised to find many who strongly disagree with the figure suggested above -- 10^-12 per year -- possibly because they think that the upper bound to the probability of ultimate catastrophe per year should be set to a much smaller value, quite likely because they would argue it should be "zero". Those who make the latter argument would probably not be impressed if I pointed out to them (and rigorously proved, if not convincingly to them, certainly to the satisfaction of every knowledgeable physicist), not only that there indeed is a nonzero probability that our planet Earth explode within the next second, but moreover that every act of their life -- including each time they breathe -- has a nonzero probability of causing the end of the world, in whichever way they like to define this event as well as the term "to cause"! (The point is of course that zero is a very special number -- indeed, it was recognised as a number only a few centuries ago -- and it is qualitatively different from any small number, however small that may be: and of course the nonzero probabilities we are talking about here are indeed much smaller than those we indicated before -- perhaps [...] might be a representative wild guess!)
The discussion in the previous paragraph seemed to lead us astray. In
fact, it raised two important, and related, issues: (i) Who should, in the end,
make decisions? (ii) How to cope with the fact that the majority of people are
simply unable to comprehend a probabilistic argument?

16.

Granted that
(fortunately) it is not up to me to take decisions, who then should decide? Let
us firstly -- for the sake of making the argument concrete -- focus on the RHIC
experiment: should a controversy about its safety emerge and should the
laboratory governance get overruled, who should/could finally decide whether it
should proceed? Since the experiment takes place in the USA and it is funded by
American taxpayers' money, clearly the decision-makers in this instance are the
relevant US institutions: the President, Congress, the judiciary. But to the extent that the experiment puts at risk the entire planet Earth, is it fair that the (non-American) majority of world citizens have no say whatsoever in the matter? In any case, even US citizens may have some say in this matter only in a very indirect way, namely to the extent that those who decide are their representatives -- within that context of representative democracy which still seems the best system of governance humankind has been able to devise so far. In any case, what I think is really important is that American citizens, as well as the rest of us who live on this planet, have a reasonable assurance that, within the decision-making system that eventually decides, there exists an adequate capability, and will, to make a competent and objective evaluation of the risk.
As for the CERN experiment, the situation is somewhat less clear-cut. CERN is a European institution located on the border between France and Switzerland. I understand the final decision, on such matters as authorising a potentially dangerous experiment at CERN -- if it ever had to be taken at a political level beyond the CERN governance -- would be taken by French rather than Swiss authorities. Somebody told me this was arranged so that such decisions could not be subject to a popular referendum -- a practice common in Switzerland, and nonexistent in France. I do not know whether this gossip is true; I was informed that the legal department at CERN has cogent arguments to sustain the validity of this choice (arguments I would certainly not dare to challenge, nor indeed even to scrutinise: not my cup of tea). In any case I am in no position to judge whether or not such a decision was wise. As it is, if a controversy were to emerge over whether to go ahead with the ALICE experiment at CERN, and if the CERN leadership -- who are to begin with the natural decision-makers on this matter -- were eventually overruled (a development I would not a priori like), then the final decision would rest with the French decision-making system (be it the executive, legislative, or judicial power, as the case may be): the decision-making system, to be sure, of a democratic and highly civilised country, which however does not have a spotless record on such matters.

(Suffice it to recall
the following episode. Years ago -- perhaps in the aftermath of the catastrophe
associated with the use of AIDS-infected blood for transfusions -- an
"independent" National Committee was created in France by President Mitterand to
assess potential "great risks"; and it was granted the power, somewhat unusual
in France, not only to select which topics to focus upon, but also to publish
its findings, without having to ask for a governmental permission to do so. That
Committee did use such powers, for instance in the context of its assessment of
the potential risks associated with the fact that a new TGV (fast train) track
was planned to pass close to a nuclear power station -- the TGV path is a very
delicate political issue, with important electoral implications. Perhaps for
this reason -- in any case, without any explanation being proffered -- that
Committee was soon after, very suddenly, abolished -- via a one-line rider in
the context of the omnibus budget bill).
This is not to say I would prefer that Switzerland rather than France be the
ultimate decision-making authority over whether or not to carry out an
experiment at CERN: indeed my clear preference is that such decisions be taken
by CERN's own decision-making system. Yet I cannot help asking myself how the
citizenry in Switzerland views this matter.
17. Here we come again to a crucial point: the role of citizens, who in a
democracy
should ultimately be the determining element in decision-making. But to play
this role responsibly, citizens should possess some understanding of the basic
facts; yet this is sometimes -- indeed often, when decisions concern
scientific issues -- next to impossible for the "man in the street", or for
the "housewife in the kitchen" (to mention some politically incorrect, yet
realistically quite relevant, stereotypes). A relevant example in this context
is precisely the point mentioned above (end of § 15), namely the difficulty of
understanding probabilistic arguments, in particular those based on very
small, yet finite, probabilities; arguments which are essential for forming an
informed opinion about any assessment of the risk of major catastrophes that
have a very low probability of occurring.
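(To make concrete the sort of probabilistic argument at issue, consider the
standard expected-value reckoning; the numbers that follow are purely
illustrative assumptions of mine, not figures drawn from the risk assessments
[1-4]. If an experiment entailed a probability p of destroying planet Earth,
the expected number of fatalities would be

    E[fatalities] = p × N,

where N is the world population, roughly 6 × 10^9 at present. Even a
seemingly negligible p = 10^(-9) then yields E = 10^(-9) × 6 × 10^9 = 6
expected deaths: on this accounting, running the experiment would be
comparable to a policy certain to kill six people. Whether expected values are
the right moral yardstick for such extreme cases is itself debatable; the
point is merely that a "tiny" probability need not be a negligible one once
multiplied by a catastrophic stake.)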
It seems to me that in this context the scientific community has a great
responsibility. There is of course a strong temptation to take a "leave the
matter to us" attitude. But clearly this is neither quite right nor
politically viable, nor, indeed, generally acceptable. Some of my elementary
particle colleagues, whose tendency towards such a stand I found somewhat
disturbing when I broached with them the potential danger of high-energy
ion-ion collision experiments (especially when it tended to take the
subliminal form of simply joking the matter away), were themselves quite
unwilling to accept such an attitude in the matter, say, of molecular biology
experiments involving gene manipulation: there, some of them kept insisting,
we know the dangers are real; hence oversight over what scientists (i.e.,
biologists -- not fellow physicists!) are up to is needed.
18. There is also a strong temptation for many in the scientific community to
view these matters primarily in terms of public relations. This may be
subliminal or deliberate; and it may be good or bad (in my judgement).
A deliberate effort to inform the public -- particularly those closer at
hand, who might influence local political decision-making -- about the
scientific activities being undertaken is, in my opinion, a legitimate part of
the public relations game, especially at major laboratories which require
large public funding; and in this context even a certain amount of hype (of
the results) and minimisation (of the dangers, if any) is acceptable,
especially if the task of propagating these notions is performed by public
relations professionals rather than by the scientists themselves -- who should
instead, I submit, keep a low profile: indeed be modest-minded, or at least
act modestly -- preferably the former, but at least the latter -- so as to
respect a scientific etiquette which is part of a more civilised way of living
(a context, however, where the danger that bad behaviours eventually prevail
over good ones looms large, especially in those scientific environments, such
as contemporary "high-energy" experimental physics, where the main traits
required of top scientists are managerial skills and aggressiveness rather
than thoughtful scholarship).
What must by all means be avoided, however, is any attempt at obfuscation in
the context of scientific analyses. This seems so obvious as not to require
any mention. But in fact it is not obvious at all. And it seems to me that a
scrutiny of the papers [1-4] on which this discussion has largely focussed
provides as good an illustration as any of the potential pitfalls that may
emerge when scientific writings are somewhat affected [1-3] -- or even
primarily motivated [4] -- by the concern "not to alarm the public"; not to
mention editorial policies, such as that underlying the decision by Nature
to refuse to publish DDH [2] on the grounds that it was of no interest to
their readers, which smack of deliberate attempts to manage public opinion
rather than to inform it.
The possibility that, on certain topics, the public may indeed "become
alarmed" by reading a scientific paper (or quotations from it) does exist,
since it is -- rather, it should be -- in the nature of such writings to avoid
peremptory statements; while, when treating extreme potential dangers, only
peremptory statements (excluding any possibility that such risks might indeed
materialise) are adequate to allay the fears of the public. And the concern
that an "alarmed public" might lead to "irrational" decision-making -- which
might interfere with sound scientific progress, and perhaps end up damaging
the public interest (which, we trust, is indeed vested in the promotion of
sound scientific progress) -- is in my opinion justified. But any attempt by
the scientific community to remedy this situation by resorting to a lack of
candour and transparency is in my opinion unwise and dangerous, for two
equally important reasons.
19. If the
scientific debate gets muted or distorted -- or altogether suppressed --
because the imperative "not to alarm the public" takes precedence over the
objective candour and the open confrontation of points of view which are main
characteristics of the scientific method, then the danger of eventually making
some silly mistakes increases significantly. This has been illustrated time
and again in the context of military research, when such debates (for instance
on the effects of radiation on military personnel and civilian populations)
were altogether suppressed -- especially, but not only, in totalitarian
societies -- by imposing military secrecy. But even in the context of open
scientific research, one should not forget that grossly mistaken assessments
have sometimes been made by well-meaning, most competent scientists (a
classical example being the famous pronouncement by Lord Rutherford --
foremost nuclear physicist of his time -- that any prospect of exploiting
nuclear energy was "moonshine"). The only cure for this risk is the scientific
method of completely honest, completely candid, give-and-take open debate
among competent practitioners. As we already noted above (end of § 12), this
has indeed been demonstrated once more by the scientific exchanges that
resulted in the JBSW [3] revision of BJSW [1].
20. Another, no
less important, drawback of any deviation by the scientific community -- under
the banner of combating alarmism -- from the practice of completely
unrestrained, open, candid, unobfuscated transparency in all its utterances,
is the (justified!) lack of confidence in the "integrity of scientists" that
is eventually likely to result among the general public from any hint that
such behaviour is prevailing. Such a lack of confidence is particularly
deplorable precisely in the context we are discussing here. Indeed I would
like to conclude this paper by reaffirming my strong belief that it is most
desirable that any decision on the kind of matters we have been discussing
herein be, if not always taken -- which would be politically impossible, and
indeed ethically dubious in a democratic context -- certainly always primarily
influenced by the scientific community, rather than by demagogues or
charlatans or, at best, incompetent generalists. But this will become
impossible -- at least in a democratic context (which I assume few would wish
to renounce: particularly those of us who have had some chance to experience
the alternative!) -- if such a (justified!) lack of confidence by the general
public in the scientific community eventually prevails.

References

[1] W.
Busza, R. L. Jaffe, J. Sandweiss and F. Wilczek: Review of Speculative
"Disaster Scenarios" at RHIC, hep-ph/9910333, 13 October 1999, referred to
herein as BJSW. This is the text of a Report commissioned by Dr. John
Marburger, Director of BNL.

[2] Arnon Dar, A. De Rujula and Ulrich Heinz: Will relativistic heavy-ion
colliders destroy our planet?, Phys. Lett. B 470, 142-148 (1999), referred to
herein as DDH. This paper appeared on the web as hep-ph/9910471, 25 October
1999; it was refused by Nature on the grounds that the topic was not of
sufficient general interest, although Nature asked to be informed about its
eventual publication elsewhere, indicating they might be interested in
publishing a comment on the issue (which they eventually did [4]); it was
submitted to Phys. Lett. B on November 2, 1999, accepted by the editor R.
Gatto on November 3, 1999, and the issue in which it appeared is dated 16
December 1999; the comment [4] appeared in the issue of Nature dated 9
December 1999.

[3] R. L. Jaffe, W. Busza, J. Sandweiss and F. Wilczek: Review of Speculative
"Disaster Scenarios" at RHIC, hep-ph/9910333 v2, 19 May 2000, referred to
herein as JBSW (this is a significantly modified version of [1]).

[4] Sheldon L. Glashow and Richard Wilson: Taking serious risks seriously, in
the section News and Views (Nuclear physics), Nature 402, 596-597 (1999),
referred to herein as GW.

[5] F. Calogero and F. Palumbo: Spin-Orbit-Bound Nuclei, Phys. Rev. C 7,
2219-2228 (1973).

[6] C. R. Chapman and D. Morrison: Impacts on the Earth by asteroids and
comets: assessing the hazard, Nature 367, 33-40 (1994).

______________________________________________________________________

Francesco Calogero is professor of theoretical
physics at the Department of Physics of the University of Rome I "La Sapienza".
His main current research activity deals with the mathematical physics of
integrable dynamical systems and nonlinear partial differential equations. He is
now completing a book of Lecture Notes (to be published by Springer) with the
tentative title "Classical many-body problems in one-, two- and
three-dimensional space amenable to exact treatments (solvable and/or integrable
and/or linearizable…)". He served from 1989 to 1997 as Secretary General of the
Pugwash Conferences on Science and World Affairs, and in that capacity accepted
in Oslo the 1995 Nobel Peace Prize awarded jointly to Joseph Rotblat and to
Pugwash. He now serves as Chairman of the Pugwash Council (1997-2002). He served
as member of the Governing Board of the Stockholm International Peace Research
Institute (SIPRI) from 1982 to 1992. The ideas and opinions proffered in this paper are
strictly personal and should not be construed as expressing the views of any
institution or organization.