THE CASE AGAINST SMOKING BANS

(THE ABRIDGED-- BELIEVE IT OR NOT-- VERSION)

© L. Stewart, NYC CLASH, 2003



 
 
TABLE OF CONTENTS


INTRODUCTION

The War on Smokers
The Anti-Smoking Movement


 
PROHIBITIONIST TACTICS

"Quick and Dirty," 
says Elizabeth Whelan

"Attack the Messenger," 
says Stanton Glantz

Toss the Jargon
(everyone says it)


 
THE THEORETICAL (EPIDEMIOLOGICAL) SCIENCE

The EPA Report: 
Lung Cancer and Secondhand Smoke


 
"HOW TO READ A STUDY" 
(Understanding the Jargon)

 
Heart Disease
and 
Secondhand Smoke 
(53,000 "Deaths")

 
THE REAL WORLD SCIENCE

THE AIR 
ACCORDING TO OSHA

Cigarette constituents 
in the air (OSHA standards)

Table 1: 
constituents, charted

Anti-Smokers sue OSHA...
and say "Never mind."


 
RESTAURANT WORKERS AND RESTAURANT AIR

What else is in 
restaurant air.

"Cooking the Books," a restaurant study

Bartenders' "exposure"

Cotinine as a measure 
(of what?)


 
VENTILATION

For it: The facts

Against it: The Prohibitionists


 
CONCLUSION

 
APPENDIX

PROHIBITION'S POLITICS
AND 
PERSONALITIES IN NYC

 
 

HOW TO READ A STUDY
Copyright: L. Stewart, 1994, 2002

 
TABLE OF CONTENTS
>What is Epidemiology?

>Hypotheses

>Relative Risk

>Statistical Significance

>Gaming Significance

>How relatively risky...?

>Meta-analysis

>Multifactorial disease

>Confounding

>Bias

>Classification/
Misclassification

>Data Torture

>Adjustment

>Cherry-picking

>Dose-Response Trend

>Doses v Poisons

>Arsenic: A Case in Point

>Conclusion

>Bibliography

Since this war is being waged on the battleground of Jargon, it's important to understand what the jargon really means--and doesn't-- in order to understand what the studies really mean-- and don't.  So-- let's begin:
 
 

EPIDEMIOLOGY 101
OR HOW TO READ AND 
UNDERSTAND A STUDY










WHAT IS EPIDEMIOLOGY?

Epidemiology (the science used to talk about secondhand smoke) is a science of statistics-- a statistical approach to the study of epidemics. As such, all it does is count things up. It then relates one set of numbers to another:

In a group of 10 people, 5 own a cat. Of the 5 cat owners, 4 have a cold. Of the 5 who don't own a cat, 2 have a cold.

It can give you a formula: "People who own cats have twice the rate of colds." What it can't do is offer a causal connection: "Cats cause colds."

HYPOTHESIS

A theory.  Like "Cats cause colds." 

RELATIVE RISK

The results of an epidemiological study are reported in terms of a ballpark, estimated relative risk (RR). A relative risk is a comparative measure-- the observed risk of disease in people who are exposed to (whatever's considered risky) vs the observed risk of disease in some similar people who aren't.

Inject the hypothesis that cats are risky business and the data can take a twist: According to my findings, as shown above, I can now make the statement that: "People who own cats (as opposed to those who don't) run a 2 to 1 relative risk of catching colds." Or make it hairier than that. The risk-- catch this-- is 100% higher!! (Meaning 2 is 100% higher than 1.)

If I have an agenda-- like ridding the world of cats-- then I'd probably use the latter. But note: I still haven't shown causation, a fact you may have forgotten since I've scared you 100%.

HOW ARE RELATIVE RISKS MEASURED?

A Relative Risk of 1.0 Means No Increased Risk.

1.0 is the baseline: the rate at which unexposed [catless] people are expected to come down with colds. Using 1.0 as the benchmark, a reported relative risk of, say, 1.25 may indicate statistically (as opposed to actually) that the cold-risk from owning kittens (which are smaller than whole cats) is .25-- or 25%-- higher.
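To make the arithmetic concrete, here's a minimal sketch (in Python-- not part of the original science, just a calculator) of the relative-risk computation, using the cat/cold numbers from above:

```python
# A minimal sketch of relative-risk arithmetic, using the hypothetical
# cat/cold numbers from the example above.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk of disease among the exposed divided by risk among the unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# 5 cat owners, 4 of them with colds; 5 non-owners, 2 of them with colds.
rr = relative_risk(4, 5, 2, 5)
print(f"Relative risk: {rr:.2f}")                        # 2.00 -- "2 to 1"
print(f"Dressed up:    {(rr - 1) * 100:.0f}% higher!")   # 100% higher!
# And the kitten benchmark: an RR of 1.25 sits (1.25 - 1.0) = 25% above baseline.
```

Same number, two costumes: a ratio of 2, or a "100% increase." Neither, of course, says a word about causation.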

STATISTICAL SIGNIFICANCE

If I'm going to play by the rules, my reported relative risk will have to achieve something called statistical significance-- a technical way of saying that my cat/cold relationship isn't just random chance. And to prove it, I'll have to publish my Confidence Level.  (How confident I am that it's not random chance.) 

The generally agreed upon confidence level is 95%-- the level that's touted in the basic textbooks, employed by most reputable statisticians and, historically, the standard used by the EPA (till they found it inconvenient in their study of secondhand smoke.)  At 95% there's only (but still) a 5% chance that my findings are total garbage-- no more significant than the random luck of the draw.

My level of confidence will also be expressed with a similarly published Confidence Interval (CI). Keep your eye on the interval. Keep your eye on the lower (left) number of the interval. That number will tell you-- quickly, at a glance-- if the results of my study are statistically significant or whether to roll your eyes.

For example: An RR of 1.25 (@95%, with a confidence interval of 0.90 to 1.6) means simply that it's 95% probable that the "actual" RR lies anywhere in the range between 0.90 and 1.6. This obviously indicates that it's entirely possible that the relative risk is nil (1.0) or even protective (0.90). Only when the lower number of the interval exceeds 1.0 can the finding be said to have statistical significance-- and even then, only at the given confidence level.

The other thing to watch for in a confidence interval is its width. A study that purported to link breast cancer to smoking (a link that's never been made) had an interval 29.7 wide (CI = 1.6 - 31.3). (1) Even the EPA has discredited intervals with a gap as small as 5.1, stating that such intervals serve as the red flag of, among other things, "statistical uncertainty." (2)
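For the curious, here's a sketch of where such an interval comes from-- the standard textbook normal approximation on the log scale. The 2x2 counts here are invented for illustration:

```python
import math

def rr_with_ci(a, n1, c, n0, z=1.96):
    """Relative risk plus a confidence interval, computed (as the
    textbooks do) on the log scale. a/n1 = cases/total among the
    exposed; c/n0 = among the unexposed. z = 1.96 for 95% confidence."""
    rr = (a / n1) / (c / n0)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)   # standard error of ln(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 40 colds among 200 kitten owners, 32 among 200 others.
rr, lo, hi = rr_with_ci(40, 200, 32, 200)
print(f"RR = {rr:.2f}, 95% CI = {lo:.2f} - {hi:.2f}")   # RR = 1.25, CI roughly 0.82 - 1.91
print("Statistically significant?", lo > 1.0)           # False -- roll your eyes
```

The quick read is exactly the one described above: glance at the lower bound. Here it's below 1.0, so the 1.25 means nothing.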

Then too, there's also this:

When the upper end of the interval is below 1.0, you can interpret it two ways: that the thing folks are exposed to is not the cause of disease... or that the thing folks are exposed to is protecting  them from the disease. 

This is done routinely when the "thing" is, for instance, a vitamin or a drug. On the other hand, when the "thing" is environmental tobacco smoke (ETS) the finding is magically given neither interpretation. 

For example, the largest-ever study of ETS, conducted by the World Health Organization, (3) showed that childhood exposure to secondhand smoke was so much a non-factor in later-life lung cancer that, had it been a vitamin, it would have been called protective (95% CI = 0.64 - 0.96). Similar results have also been obtained from numerous other studies. But do you hear about them? No.

CAN SIGNIFICANCE BE GAMED?

You betcha. For example: 

Of the 31 then-extant studies on secondhand smoke that were used as the fodder for the EPA report, (4) fully "27 were NOT reported as statistically significant at the conventional (95%) level." And as the US Congressional Research Service went on to note:

"Confidence intervals can also be used to make results which are not statistically significant at one level, significant at another level. For example, a confidence interval reported at the 95% level to be (0.90 - 1.6) may be reported at a 90% level to be (1.05 - 1.8.)" (5)

And in fact, that's exactly what the EPA did to achieve its in-the-end not-so-impressive numbers. It lowered the confidence level to 90%-- doubling the odds that its findings were random chance, and simultaneously violating its own traditional standards. And, in fact, the standards of epidemiology.

It was partially, but far from exclusively, on these grounds that when the EPA was sued, a federal judge called its findings outright "fraud" and overturned the report that called ETS a danger. (6)
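The Congressional Research Service's point is easy to reproduce: hold the data absolutely fixed and shrink only the confidence level. The RR and standard error below are invented, but the maneuver is the one described above:

```python
import math

def ci_for_rr(rr, se_log, z):
    """Confidence interval for an RR, given its log-scale standard error."""
    return math.exp(math.log(rr) - z * se_log), math.exp(math.log(rr) + z * se_log)

rr, se_log = 1.19, 0.10     # hypothetical RR and standard error, held fixed
for label, z in [("95%", 1.96), ("90%", 1.645)]:
    lo, hi = ci_for_rr(rr, se_log, z)
    verdict = "significant" if lo > 1.0 else "NOT significant"
    print(f"{label} confidence: CI = {lo:.2f} - {hi:.2f}  ->  {verdict}")
# Same data, same 1.19. At 95% the interval dips below 1.0; at 90% it doesn't.
```

No new evidence, no new subjects, no new smoke-- just a friendlier yardstick.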

STRENGTH OF ASSOCIATION

People who own kittens have a 1.25 RR of catching colds.

Okay; so how strong is this relative association between kitten-owning and colds?

In general, of course, the higher the RR, the more likely the risk is "true." In general, too, the reported RR for lung cancer among non-smoking women who've been married to smoking men (for 20 to 40 years) is less than the risk of catching a cold from owning a kitten (1.19 @ 90% confidence, per the EPA). So how scared should a woman be?

Again, in the canon of conventional science, we're told that any RR of less than 2 (2.0-- itself well above 1.19) is very difficult to interpret, very iffy, very weak, and very likely to be due to Bias, Confounding or the whimsy of pure chance. And further, that any RR of under 3 (3.0) is just a blip of statistical static.

Says who? 

The National Cancer Institute: 

"RRs of less than 2 [2.0] are usually difficult to interpret. Such increases may be due to chance, statistical bias, or effects of confounding factors that are sometimes not evident." (7)

The International Agency for Research on Cancer: 

"RRs of less than 2.0 may readily reflect some unperceived bias or confounding factor."(7)

The editor of the New England Journal of Medicine, Marcia Angell:

"As a general rule of thumb, we are looking for a relative risk of 3 [3.0] or more before we accept a paper for publication." (7)

The director of Drug Evaluation for the FDA, Robert Temple:

"My basic rule is that if the relative risk isn't at least 3 or 4 [3.0 or 4.0] forget it."(7)

Still doubtful?  Pick up any library text at random.

Then, too, consider this:

The federal EPA has refused to go "causal" on electromagnetic fields (as a probable cause of cancer) "largely because," they said, "the relative risks have seldom exceeded 3." And an EPA scientist, discussing this on Frontline, said anything less than 3 was just "statistical static." (8)

Dr. Eugenia Calle, director of analytic epidemiology for the American Cancer Society, rejected a 1.5 as hardly more than a waste of time. "Not so fast," she said briskly to a guy from the Wall Street Journal, "a 1.5 is not strong enough to call [something] a risk." (9)

When calcium channel blockers were linked (1.6) to an additional risk of heart attack, the findings were simply cast as being "miniscule" and "weak." And Dr. Nanette Wenger, a noted cardiologist at Emory University, made the sweeping generalization that "case control studies [in themselves] are obviously weak" and quite prone to "hidden biases." (10)

When a study conducted at Harvard showed that women who took selenium had a higher (1.8, in fact) risk of developing cancer, we were simply told, "the data don't support a protective effect. (!)" (11)

When a study of 122,000 nurses showed that those who took hormones had a higher risk of breast cancer (1.3 for the under-60s; 1.7 for 60+) we were told, nonchalantly, to "put the risk in perspective" (and continue taking our pills.) (12)

When a recent study found that the relative risk for connective tissue disease was 1.24 among women who'd had implants we were told that the risk was "modest," even possibly "not real" because "bias is hard to dismiss when RRs are this tiny." (13)

To repeat, per the EPA (and all subsequent meta-analyses), the relative risk of getting cancer from secondhand smoke was 1.19, at 90% confidence, for women married to smokers for 20 to 40 years. But when the same or greater numbers are arrived at (indeed, at 95% confidence) they're dismissed as "tiny," quite possibly inaccurate, and no cause for concluding "cause." Hmm. What's up? Could there be a double standard?

HOW RELATIVELY RISKY IS A RELATIVE RISK?

Let's accept, for the moment, the EPA's standards for our kittenish risk of colds. In the math game we're playing, we are 90% confident (1.02-1.31) that there's some kind of possibly sinister connection that leads to an RR of 1.25.

How sinister, really, is a 1.25?

Well... put it this way: If I buy one lottery ticket, the odds of my winning are 11 million to 1. If I buy two lottery tickets, the relative chance of my winning is-- 100% higher! But the odds haven't changed. They're still 11 million to 1 for each of my two tickets.

Or let's put it this way:  People who are right now crossing the street have an astronomically higher risk of being hit-and-run over by a yellow Ferrari than people who are right now soaking in the tub.  But in the actual scheme of things, how many people get hit by yellow Ferraris?  (Should we ban yellow Ferraris on the grounds of relative risk?)

The point to be made is that a relative risk is a mathematical game, having nothing to do with reality and nothing to do with The Odds. And, relatively speaking, the relative importance of +25% is like splurging on a whole extra quarter of a ticket, or crossing an additional quarter of a street.
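The same point, as three lines of arithmetic (a sketch, using the lottery odds from above):

```python
# Relative vs. absolute: the lottery ticket arithmetic from above.
one_ticket  = 1 / 11_000_000     # chance of winning with one ticket
two_tickets = 2 / 11_000_000     # chance of winning with two

print(f"Relative increase: {(two_tickets / one_ticket - 1) * 100:.0f}%")  # 100%
print(f"Absolute increase: {two_tickets - one_ticket:.9f}")               # ~0.000000091
# A screaming relative headline; a vanishing absolute change in the odds.
```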

And if the risk is really 0 (as +25% may be) then doubling the risk (2 x 0) is still 0. So before you subject your kitten to summary death at the nearest pound, let's examine:

META-ANALYSIS

When the findings of single studies are either extremely "weak" (ie, under "2" or "3") or statistically insignificant, or inconsistent with other studies, scientists often like to combine the results of these studies to see if they can tease a result out of the mess. 

The process, as described by one leading epidemiologist, is to "run them through computer models of bewildering complexity, which then produce results of implausible precision." (14)

This dubious procedure is called meta-analysis. As with all other purely statistical methods, the results of meta-analysis are only remotely valid if certain, very strict criteria are fulfilled.

If studies with negative results (ie, studies with one or both confidence bounds below 1.0) are omitted, which is frequently the case, a false positive is obviously much more likely.

For instance, in one often-cited meta-analysis (Law et al) purporting to show a weak (1.23) association between ETS and heart disease, the author admits to having excluded the results of the two single largest studies ever done (with a total of 2.2 million subjects) whose reported results were nil (1.0)-- or, to be more specific, (95% CI: 0.97-1.04) and (95% CI: 0.90-1.07). And even then, he only managed to eke out a .23. (14)
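What exclusion does to a pooled number can be sketched with the standard fixed-effect (inverse-variance) recipe. The study figures below are invented-- this shows the mechanism, not Law et al's actual data:

```python
import math

def pooled_rr(studies):
    """Fixed-effect (inverse-variance) pooling on the log-RR scale.
    studies: list of (rr, ci_low, ci_high), all at 95% confidence."""
    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out the SE
        weight = 1 / se**2                               # big study, big weight
        num += weight * math.log(rr)
        den += weight
    return math.exp(num / den)

small_positive = [(1.35, 0.90, 2.02), (1.25, 0.85, 1.84), (1.40, 0.88, 2.23)]
large_null     = [(1.00, 0.97, 1.04), (0.98, 0.90, 1.07)]  # tight CIs = huge weight

print(f"Small studies only:        pooled RR = {pooled_rr(small_positive):.2f}")
print(f"Plus the big null studies: pooled RR = {pooled_rr(small_positive + large_null):.2f}")
# The big null studies dominate the weights and drag the pool back toward 1.0.
```

Leave the giants out, and the survivors' average starts to look like a finding.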

Similarly, the EPA in its own meta-analysis, which served as the basis of its landmark Report, omitted the then-largest study ever done (Brownson et al, NCI) which would have rendered impossible its conclusion that ETS was a human carcinogen.

The other common problems are the quality (good or poor) of the underlying studies, and their comparability.  Are they relatively similar, or all over the lot? Were they conducted in different countries, using different methodologies, controlling for different Confounders, using different numbers of people, or different kinds of people or... you get the idea.

EPA, OSHA, WHO (2002)* and most (not all) researchers have used meta-analysis when studying ETS-- and used this kind of analysis on widely divergent studies.

As to the accuracy of the results: 

"Meta-analysis" (to quote the same epidemiologist) "ignores what is an absolute limit to epidemiologic inference. In the non-experimental domain, epidemiologic methods can only yield valid documentation of large relative risks.  Relative risks of low magnitude (say less than 2) are virtually beyond the resolving power of the epidemiologic microscope.  If many studies based on different methods are nevertheless congruent in producing markedly elevated RRs, we can set our misgivings aside. If however, many studies produced only modest increases, those increases may well be due to the same biases in all the studies. It (thus) does not follow that meta-analysis can resolve this dilemma." (Ibid)

* WHO's 1998 study-- an original 10-year study in 7 European cities-- showed no statistically significant association between ETS and lung cancer, much to their chagrin. So in 2002 they tried to bury it by doing a (what?) meta-analysis and blaring its low RRs in the form of a press release. The analysis itself has been neither peer-reviewed nor published and is (at this date) unavailable for review.

CAN WE GO HOME NOW?

Soon. First we need to discuss both biases and confounding. And before we discuss those, we need to quickly discuss disease.

MULTIFACTORIAL DISEASE

If you're looking for the cause or the vector of smallpox, you're relatively lucky. Smallpox is caused by (and only caused by) the smallpox virus. But what about colds?

There are well over 300 distinct bacteria and viruses that cause the common cold. Some remain alive for up to 72 hours on a pencil or a phone, so it's possible to catch them from a long-gone hand. (Quick, which did it: the virus on the pencil or the germ on the phone, or the cumulative combination, or none of the above?) 

Add to that, this: Exposure itself is just a fraction of the game; it's how you handle that exposure. Were you stressed, exhausted, or chilled when you were exposed? Was your system busy fighting a virus from other sources, or a throbbing pain in  the gum? And finally (and you won't be able to answer this question) are you genetically predisposed to respiratory infections?

While it's technically feasible to postulate a host of mathematical hypotheses ("Why, Exactly, Did Johnny Catch Cold?") the print-out alone would be the size of a phonebook and wouldn't, in any case, offer you the Truth. 

(Here, kitty, kitty....)

And if it's that way with colds, take some time to consider the complex and even more perplexing etiology of heart disease and cancer.

CONFOUNDING

I once had a friend who, observing her boyfriend whistling on the street, decided he was whistling because he was ready to break up with her. This is known as "confounding": observing an effect that could stem from a lot of causes but assigning it, quite arbitrarily, to one.

This is one of the perils of attempting to finger cats for a multifactorial cold. The best you could do with your cat would be to add her to a very long list of possible risks, and add a question mark at the end.

Efforts are sometimes made to Control For Confounders. This too is a game of math. An overweight, out-of-work smoker with high cholesterol, high blood pressure, marital problems and itchy feet presents a problem, to say the least. How can you hope to isolate which of the many factors will "cause" his eventual death? 

Again, the computer is dispatched to "figure it out" by sheer mathematical means, as though the mysteries of life and death would succumb to computer algebra. However, if the researchers didn't ask about age, or forgot to ask about weight, or  whether his father had died of a stroke, these confounders won't be "controlled for," not even by dubious means.
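What "controlling for" a confounder looks like, at its simplest, is stratifying and comparing. Here's a sketch, with numbers invented so that each stratum shows nothing while the naive total looks alarming (the confounder here is kids in the house-- kids bring home colds, and, in these made-up data, households with kids also own more cats):

```python
# Hypothetical data: within each stratum cat ownership does nothing;
# pooled naively across strata, it "does."

def risk(cases, total):
    return cases / total

strata = {
    # stratum: (owner_cases, owner_total, nonowner_cases, nonowner_total)
    "kids at home": (100, 200, 25, 50),
    "no kids":      (10, 50, 40, 200),
}

oc = ot = nc = nt = 0
for name, (a, n1, c, n0) in strata.items():
    print(f"{name:12s}: RR = {risk(a, n1) / risk(c, n0):.2f}")   # 1.00 and 1.00
    oc, ot, nc, nt = oc + a, ot + n1, nc + c, nt + n0

print(f"{'crude':12s}: RR = {risk(oc, ot) / risk(nc, nt):.2f}")  # ~1.69 (!)
# The crude 1.69 is pure confounding: kids, not cats, drive the colds.
```

And the catch, as noted above: the computer can only stratify on what the questionnaire asked. Forget to ask about kids, and the 1.69 goes to press.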

BIAS

Slant. Any scientific experiment can be biased-- wittingly or unwittingly. Start with the sample. The kind of people, for example, who'll offer to answer intimate questions about sex are, therefore, de facto, more likely to be interested in, or anxious about, sex. The sample will be weighted. Or, looked at another way, the deck can be stacked.

I'm conducting a study on the incidence of frostbite in people without mittens. Only where do I want to do it? Alaska in January? Or Naples in July? 

I'm conducting a phone survey on--anything at all. If I make my calls on a weekday afternoon,  I've automatically biased my sample--towards people who don't work.

Are people who own cats, per se, rather crucially different from those who don't? Are the non-pet owners more likely to be germophobes and thus avoid other situations that lead to colds? If so, cats are irrelevant and my sample is biased.

Mathematical bias has been shown, time and again, to stem from the choice of the wrong computer program, or a mid-stream switch to another kind of program or mathematical system. (For more, see Mathematical Adjustment.)

Intellectual, or label it, Emotional bias is the general proclivity to find what you seek. To confirm your hypothesis and pat yourself on the back. To blame whatever you hate, excuse whatever you like, and to hell with the ambiguity. The mind can't help it. Our biases affect almost everything we see. They select what we see. We pay heightened attention to the things we agree with and ignore what we don't, give credence to whatever indulges our conceptions and dismiss whatever doesn't.

And so do the researchers.

And so do their subjects.

It's well known that polls can be steered or even stacked depending on the subtle, exact wording of the questions. So too with the questionnaires that are given to studies' subjects. 

Also, the studies' subjects rarely live under heavy rocks. If they're subjected to propaganda on the dangers of owning cats, their perceptions will be distorted. (That's the purpose of propaganda.) They'll remember a fictive cat or turn an actual short-haired kitten into a dangerous snarling lion. 

And in almost every study that's attempted to study "studies",  it's been shown that the person questioned tries to please on the questionnaire-- to give the "correct" answer--meaning the answer the questioner "wants" or else the answer that makes it seem that the questioned is good as gold, along the current standards of gold.  (Ex-smokers say that they "never;" current smokers claim to be "ex.")

CLASSIFICATION/ MISCLASSIFICATION

Epidemiological studies begin with a questionnaire.

Before I can do a study, I have to know who I'm studying. So:  Do you own a cat? Do you own mittens? How frequently do you have intercourse?  Do you think you're an alcoholic? These are heavily freighted questions. The benighted respondent is sitting in his living room facing a questionnaire.  Or a faceless Voice on the Phone. Or a White-Coated Professional with a beady look in her eye. Does he lie or tell the truth? Does he even know the truth?  Welcome to GIGO-- Garbage In, Garbage Out-- the invisible worm that eats holes in the heart of a study and disappears up its own kazoo.

You lied about owning mittens. I've classified you as a non-mitten-owner and your frostbite, or lack of it, is about to be graven in stone. It will take on the luster of Science. It will fall into General Knowledge. The long arm of the law may--or may not--  wind up wearing mittens-- and all because of a lie.

And then there's the corollary: Nothing In, Nothing Out.  If I don't ask a relevant question, I won't get a relevant answer.  A study conducted in 1727, "Why Do Sailors Get Scurvy?" would probably not have asked them, "How long since you've eaten a lime?" Though it well may have asked them, "How long have you smoked tobacco?" or "How long have you owned a cat?"

And then, finally, it's not only people who get misclassified; diseases can too. (It's called misdiagnosis.) And so can the Cause of Death.

A 1994 study published in the journal Epidemiology showed that of self-reported histories of heart attack, 40% were wrong!

Two studies which sought medical confirmation of deaths officially attributed to lung cancer found error rates of 12-13%.

All of which may explain why Dr. Wenger (op cit) observed that "case-control studies in themselves are obviously weak and quite prone to hidden bias."
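How much can misfiled people matter? A sketch of the textbook case, with invented numbers: random (nondifferential) errors in who gets labeled "exposed" drag a real RR toward 1.0-- while errors that all run one way (the "ex-smoker" who still smokes) can push a finding in either direction:

```python
# Hypothetical cohort with a true RR of 2.0, then 20% of each group misfiled.

exp_cases, exp_total = 200, 1000    # true risk among exposed: 20%
une_cases, une_total = 100, 1000    # true risk among unexposed: 10% -> RR 2.0
err = 0.20                          # fraction misclassified, same in both groups

# Shuffle 20% of each group (cases and non-cases alike) into the other column.
e_cases = (1 - err) * exp_cases + err * une_cases    # 180
e_total = (1 - err) * exp_total + err * une_total    # 1000
u_cases = (1 - err) * une_cases + err * exp_cases    # 120
u_total = (1 - err) * une_total + err * exp_total    # 1000

observed = (e_cases / e_total) / (u_cases / u_total)
print(f"True RR: 2.00; observed RR: {observed:.2f}")  # 1.50
# Garbage in, shrunken garbage out -- and when the errors aren't random,
# inflated garbage out is just as easy.
```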

DATA TORTURING

It's been said that if you torture the data long enough, it will confess to anything. One method of torture can be meta-analysis. Here are a few others:

MATHEMATICAL "ADJUSTMENT"

In one study (Helsing) that purported to show a link between heart disease and ETS, the only "confounders" considered were: age, marital status, housing quality and years of schooling (!)

In other words, it failed to adjust for cholesterol, hypertension, diet, exercise, family history, etc.  Even here the mathematical "adjustment process" yielded questionable results in which "among women, adjustment changed the direction of the risk estimate-- which inexplicably went from a statistically significant negative risk of 0.68 to a statistically positive value of 1.24!" (15)

Or as the Gershwin song goes: Nice work if you can get it, and you can get it if you try.

CHERRY PICKING

We've already discussed how a meta-analysis can game its findings by simply excluding whole studies it doesn't like (not in terms of their quality, but in terms of their results.)  It's also possible to cherry-pick within an existing study.

And, in fact, that's exactly what  Michael Siegel did in his study of lung cancer in restaurant workers.  Siegel based his study (a meta-analysis) on 6 previous studies (actually, not "studies," but merely ballpark statistical data) picking only the data he "liked."  Thus, from one data-bank, he tossed into his hopper only the subset of "currently employed males" in the ambiguous category of "other food service workers" who statistically showed a slight increase in risk, while ignoring the results among female workers in the same category who showed a slightly decreased risk.

Then too, if I have an agenda, I can skew even my own original study by putting the cherries on top of the whipped cream.  I can highlight, in both my abstract and my conclusion, the results of a single sub-set-- hiding others down in a table that more than likely will never get read.  I can even go further and refuse to publish my tables, or refuse-- upon subpoena-- to reveal what my tables showed, or even to discuss how I got my data. 

You want to believe that it's never been done?  Well, alas, it's been done often. By the sued (and reprimanded) Australian EPA and here at home by a number of scientists whose studies are frequently cited and all of whom are idols of the anti-smoking brigade.

NO DOSE-RESPONSE TREND

The first and most fundamental tenet of toxicology is: THE DOSE MAKES THE POISON.  Thus, the degree of poisoning should correlate with the dose of the alleged poison.

"Dose" can be measured in several different ways. 

Duration is one. Or to put that metaphorically, the longer you go without mittens, the more bitten your hands will be.

Amount is another. 33 full-strength aspirin can kill you. Two will cure your headache.  One probably won't, but may help to prevent heart disease. 

Different amounts will have totally different effects. From none to benign to lethal.

Dilution is something else. Acetic acid may kill you. But diluted into vinegar (which is acetic acid) it won't do a bit of harm.

Therefore, an epidemiological study ideally should show a clear dose-response trend.  So say the experts:

"In most epidemiology textbooks...dose-response (sometimes called 'biologic gradient') is listed as one of several criteria for inferring causation. The implication is that 'dose-response' means more than just 'association.'" (16)

"The presence of a dose-response relation is evidence that the reported effect is genuine, not the result of arbitrary classification." (17)

And yet, many epidemiological studies, with pitifully low RRs, show no dose-response trend either.  Or show a counter-logical trend. 

When you do the breakdown on the Hirayama study purporting to show a link between lung cancer and ETS among the nonsmoking wives (for 20-40 years) of smoking husbands, the trend is something like this: It's more dangerous to live with an ex- or light smoker than a 2-pack-a-day man.

In fact, according to another analyst, "there is no clear pattern of dose/response in the majority of the studies tracking ETS and lung cancer where the quantity of exposure is measured." And among the only 16.6% of the papers used in the EPA report that included the RRs necessary to conduct a trend analysis, "there was no correlation between dose increase and risk increase." (18)
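A trend check, at its crudest, is just a slope. A sketch, with invented RRs shaped (like the Hirayama breakdown above) so that the heaviest dose carries the lowest excess risk:

```python
from statistics import linear_regression

# Hypothetical exposure categories (husband's cigarettes/day) and reported
# RRs-- invented, but shaped like the counter-logical pattern described above.
dose = [0, 10, 20, 40]
rr   = [1.00, 1.35, 1.20, 1.05]

slope, intercept = linear_regression(dose, rr)
print(f"Slope per cigarette/day: {slope:+.4f}")   # slightly negative
# A steady positive slope would support a dose-response trend.
# A flat or negative one -- risk falling as dose rises -- does not.
```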

MORE ON DOSES

The dose makes the poison. But, quick, how poisonous is a dose of, say, 50 ng/gm? Is it more or less worrisome than 50 ug/gm? And what's the LD (lethal dose) of the stuff anyway? Let's start with a simple chart.

Gram (g)   =  1 thousandth of a kilogram, or 0.0352 (about 3.5/100ths) of an ounce

Milligram  (mg)  =  1 thousandth of a gram (a grain of salt is 64.79 mg)

Microgram  (ug)  =  1 millionth of a gram (salt grain is 64,790 ug)

Nanogram  (ng)  =   1 billionth of a gram (salt grain is 64,790,000 ng)
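Since the prefixes are where eyes glaze over, a two-line sanity check on the 50 ng/gm vs 50 ug/gm question posed above:

```python
NG_PER_UG = 1_000         # nanograms per microgram
NG_PER_MG = 1_000_000     # nanograms per milligram

print(f"50 ug/gm = {50 * NG_PER_UG / 50:.0f}x the concentration of 50 ng/gm")  # 1000x
print(f"One 64.79 mg grain of salt = {64.79 * NG_PER_MG:,.0f} ng")             # 64,790,000 ng
```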

DOSES V POISONS

Much has been made recently of the "toxic chemicals" in secondhand smoke. One of the favorite red flags has been the hot-word Arsenic!!

Let's now look at Arsenic!! in the context of real life.

According to the National Research Council, arsenic occurs naturally in our water, soil, air and food. Its greatest concentration is found naturally in our soil-- the soil in which our fruits and vegetables (and yes, our tobacco plants) are grown and on which our cattle graze-- where it's found at an average of 5 parts per million (5 ppm, or 5 ug per gram). Lake water has an average of 65 ppb (or 65 ng per gram of water); plants and animals-- corn and cows-- an average of 300 ppb; and shrimp an average of 30 ppb. One third of the arsenic we get from our food is in the most dangerous (inorganic) form, for a total estimated average of 53 ug a day in a normal well-balanced diet. Says NRC.

The Agency for Toxic Substances and Disease Registry estimates 100 ug/day from food, and the World Health Organization tops it with an estimate of 126-273 ug/day.

Which may show you, if nothing else, that...nobody knows anything.

The amount of arsenic that's currently allowed in "safe" drinking water is 50 ng/gm. At that rate, according to the calcs of a friendly physicist, an 8 ounce glass of water would contain about 12,000 ng of arsenic. Following the health instructions of 8 glasses a day, one could technically consume 96,000 ng (96 ug) of arsenic a day without ill effects.
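The friendly physicist's arithmetic, spelled out (one added assumption: an 8-ounce glass of water weighs about 237 grams):

```python
GRAMS_PER_FL_OZ = 29.57        # grams of water per fluid ounce (approx.)

limit_ng_per_g = 50            # "safe" drinking-water arsenic, ng per gram
glass_g = 8 * GRAMS_PER_FL_OZ  # ~237 g of water in an 8-ounce glass

per_glass_ng = limit_ng_per_g * glass_g     # ~11,800 ng -- call it 12,000
per_day_ug = per_glass_ng * 8 / 1_000       # 8 glasses a day, in micrograms

print(f"Per glass: {per_glass_ng:,.0f} ng")  # ~11,828
print(f"Per day:   {per_day_ug:.0f} ug")     # ~95 -- call it 96
```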

And what about the air?

According to the World Health Organization, urban American air contains 400-600 ppb (ng/g) of which the average human being, going about his life, will inhale an average of 20 - 30 ng per day. (Joggers and other heavy-breathers inhale more.) (19)

Airborne arsenic is also one of the substances for which OSHA has set Permissible (safe) Exposure Levels (PELs). In the case of arsenic, exposure to 10 ug per cubic meter of air (10 ug/m3) for the duration of an 8-hour shift is considered safe.

And OSHA plays it safe. The actual level of airborne exposure that's been said to arouse symptoms is "about 100ug" according to the Agency for Toxic Substances and Disease Registry.

Let's return to those cigarettes.

According to the same WHO report, a pack-a-day smoker will inhale from 0.8 to 2.4 ug of arsenic per 24 hour day from the cigarettes he smokes (or, divided into "work shifts," from roughly 0.27 to 0.8 ug per 8 hours). The lower figure is for American cigarettes.

If we accept the conclusions of the National Research Council, the US DHHS and/or the EPA (20)-- that nonsmokers exposed to smoke inhale from 0.1% to 1% of what smokers themselves inhale-- then we can begin to calculate the amounts of arsenic inhaled by a nonsmoker. And further, to compare those amounts to OSHA regs.

Let's make that nonsmoker a nonsmoking BARTENDER.

Assume that in real life there are 25 seats at his bar and that 10 of those seats are occupied by smokers, each smoking an average of 2 cigarettes per hour-- 160 cigarettes among them in the course of his 8 hour shift.

The OSHA standard for airborne arsenic is 10 ug per cubic meter of air-- or, to put that another way, per 40-inch cube. Since most taverns (this side of Oz) are more roomy than squat cubes, in actual life the amount of arsenic would be diluted by the volume of air, and further diluted by ventilation.

But let's leave it the way it is.

Let's simply put our bartender into a 40 inch cube--and so he doesn't get crushed to death--we'll have him share it with 1 smoker, smoking 160 smokes in the course of his 8 hours.  With zero ventilation.  If the bartender inhales even 1% of the arsenic that the smoker himself inhales, his total exposure in 8 hours would be 0.064 ug (assuming a city smoker smokes American cigarettes)-- far far far within what OSHA determines "safe." And about what he'd be exposed to spending an  hour in Bryant Park.

(Smoker, per 20 US cigarettes: 0.8 ug. Per 160 cigarettes: 8 x 0.8 = 6.4 ug; 6.4 ug x 1% = 0.064 ug)

And remember, this is happening in a totally sealed cube.
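The cube arithmetic, checked, and set against the OSHA ceiling. One added assumption, not in the text above: a worker breathes roughly 10 cubic meters of air over an 8-hour shift (a common reference figure), so a worker standing all day in air at the PEL itself would inhale on the order of 100 ug:

```python
ARSENIC_PER_PACK_UG = 0.8    # smoker's intake per 20 US cigarettes (WHO figure above)
CIGS_IN_CUBE = 160           # one shift's worth, per the scenario above
BYSTANDER_FRACTION = 0.01    # nonsmoker inhales 1% of the smoker's dose (upper bound)

smoker_ug = ARSENIC_PER_PACK_UG * (CIGS_IN_CUBE / 20)    # 6.4 ug over 8 hours
bartender_ug = smoker_ug * BYSTANDER_FRACTION            # 0.064 ug

PEL_UG_PER_M3 = 10           # OSHA's permissible level for airborne arsenic
AIR_PER_SHIFT_M3 = 10        # assumed breathing volume over 8 hours
at_pel_ug = PEL_UG_PER_M3 * AIR_PER_SHIFT_M3             # ~100 ug

print(f"Bartender, sealed cube, 8 hours: {bartender_ug:.3f} ug")
print(f"Inhaled at the OSHA ceiling:     ~{at_pel_ug} ug")
print(f"Margin: about {at_pel_ug / bartender_ug:,.0f}x below the ceiling")
```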

And the same kinds of calculations can be made for every "poison" and "toxin" in all the ads.  Which is why OSHA has stated that it's well-nigh impossible to find any actual workplace where its PELs for secondhand smoke or any constituent thereof would be met, let alone exceeded.

The point we're trying to make is that while Arsenic!! is a "poison" and even a "carcinogen" it's neither at these doses.  And further, people's normal exposure from other sources is greater by great amounts.  Nor is this a question of "two wrongs don't make a right."  Scientifically speaking, it's more along the lines of "two rights don't make a wrong."

If nothing else, there should be four "take-away" points:

STATISTICAL SIGNIFICANCE AT THE 95% LEVEL IS, AND ALWAYS HAS BEEN THE STANDARD OF THE SCIENCE.

RELATIVE RISKS OF LESS THAN 3.0 OR 2.0 ARE HIGHLY SUSPECT AND/OR MEANINGLESS.

CORRELATION IS NOT CAUSATION

THE DOSE MAKES THE POISON
 
 


BIBLIOGRAPHY

(1)   "Health Effects of Exposure to Environmental Tobacco Smoke," California EPA, 1997, citing a study by Hirayama and Sandler

(2)  "Respiratory Health Effects of Passive Smoking," US EPA, Dec. 1992

(3)   "Multicenter Case-Control Study of Exposure to Environmental Tobacco Smoke and Lung Cancer in Europe,"  IARC (WHO) Jnl of NCI, Oct 7, 1998

(4)   EPA, op cit

(5)   "Choices in Risk Assessment," US Dept of Energy, 1994, also citing the US Congressional Research Service Report of 1994 (qv)

(6)   "Judge Voids Study Linking Cancer to Secondhand Smoke," NY Times 7/20/98

(7)  National Cancer Inst (NCI) Release, 10/26/94; International Agency for Research on Cancer (IARC) "Statistical Methods in Cancer Research, V. 1, 1980;  "No Convincing Evidence of Carcinogenicity", Littlewood & Fennell, 1999, Comments to National Toxicology Program, 2/8/99, quoting Marcia Angell and Robert Temple.

(8)  "Evaluation of the Potential Carcinogenicity of Electromagnetic Fields," EPA, Review Draft, October 1990.  Also "Frontline," PBS, 6/13/95

(9)  The "something" in question was induced abortion--which is more politically popular than smoking a cigarette.  The risk was for breast cancer. Wall Street Journal,  1/3/95

(10)   "Heart Attacks May Have Tie to Drug: Scientists  Describe Risk as 'Miniscule.'" New York Times, 3/12/95

(11)  "Selenium Study Finds No Anti-Cancer Role," New York Times, 4/18/95

(12)  "New Clues in Balancing Risk of Hormones After Menopause," New York Times 3/12/95

(13)   "Study Reports Small Risk, If Any, From Breast Implants," NY Times, 2/28/96

(14)   "Meta-analysis, Shmeta-analysis," Shapiro, S, Am J Epidem, 1994: 140

(15)   "ETS Exposure, Lung Cancer and Heart Disease: The Epidemiological Evidence," Sears, Steichen: OSHA

(16)    "Tests for Trend and Dose Response: Misinterpretations and Alternatives," Am J Epidem. 1992, V. 136, No 1, p 96-104

(17)    "Data Torturing," New Eng J Md, 10/14/93, pp 1196-1199

(18)   "Environmental Tobacco Smoke," Littlewood & Fennel for National Toxicology Program, 2/8/99

(19)    WHO, Ch 6.1, www.who.dk.document/aif/6_1_arsenic.pdf.  Also Agency for Toxic Substances and Disease Registry (tk), National Research Council (tk) and National Statistical Assessment Service (tk); OSHA standards at www.waterindustry.org/arsenic3.htm

(20)   "Environmental Tobacco Smoke: Measuring Exposures and Assessing Health Effects," National Academy of Sciences (NAS) National Research Council (NRC), Washington DC, National Academy Press, 1986