“Part Three. Critique of OHS in Canada” in “The Political Economy of Workplace Injury in Canada”
— THREE —
CRITIQUE OF OHS IN CANADA
On 27 July 2009, 37-year-old Darryl Binder died at Athabasca University after falling five meters and being impaled by rebar. Binder was a construction foreman at Athabasca’s new research building and left behind five children.1 Alberta’s occupational health and safety (OHS) regulations require workers to use fall protection if they could fall more than three meters or face an increased risk of injury because of the surface onto which they could fall. Despite this requirement, Binder was not adequately protected and he died.
Binder’s death is, unfortunately, not unusual. Despite OHS laws, over half a million Canadians are significantly injured at work each year. And more than 1,000 workers — likely many more — die as a result of work-related injury and disease. These very crude indicators are a good place to start examining the reality of occupational health and safety in Canada. These numbers (and their deficiencies) reveal interesting things about what workplace injuries and hazards are recognized in Canada.
From this discussion, we turn our attention to specific injury prevention activities. This includes examining the effectiveness of internal and external safety systems. Of particular concern is the degree to which workers can and do exercise their rights under health and safety legislation. We will also examine the degree to which government standards and external enforcement work. Finally, we look at the effectiveness of state–employer partnership programs and the perception that they create regarding workplace safety.
RECOGNIZING INJURY AND HAZARDS
How many injuries?
To determine whether OHS efforts are effective, it is useful to consider the rate and severity of injuries and fatalities. We might then look at changes over time. Or we might compare industries in different provinces or other countries.2 Such comparisons can be tricky — it is difficult to know whether any two things are comparable, no matter how similar they appear on the surface. Yet even crude indicators of injury and death are a useful place to start.
Unfortunately, we simply don’t know how many workers are injured and killed each year. In the Introduction, we saw that 630,000 Canadian adults reported being injured on the job in 2003 severely enough to limit their activity. This number is a significant underestimate of workplace injuries. It ignores injuries that did not limit activity, such as minor cuts, burns, bruises, and strains. It also ignores injuries to minors, repetitive strain injuries, multiple injuries, and unreported injuries.3 Despite these problems, 630,000 injuries is likely closer to the truth than almost any other number available.
This difficulty exists because most “injury statistics” are actually the number of compensation claims accepted by workers’ compensation boards (WCBs).4 Injuries not reported to or rejected by a WCB are not counted. Frequently, injuries that did not cause time off work are also excluded. Injuries to the 10 to 20 percent of workers not enrolled in workers’ compensation are also missing. On top of this, there appears to be significant under-reporting of potentially compensable workplace injuries.5 Consequently, the injury numbers commonly used are low. For example, in 2003, the Association of Workers’ Compensation Boards of Canada (AWCBC) reported 348,715 accepted time-loss injury/disease claims. That same year, the AWCBC reported 963 work-related fatalities.6
This suggests three things. First, injury statistics commonly do not report the true level of workplace injury and death. Second, this under-reporting distorts our perception about the frequency of injury: there are many more injuries than we “see.” Third, injury rates are a social construction. That is to say, how we define injury affects the level of injury we see. WCB claims data is the easiest way to “see” injuries. But, in accepting these statistics, we accept the definitions and biases built into them.
Who gets hurt affects injury recognition
A workplace injury is damage a worker sustains at work. Many injuries are easy to see, such as cuts, bruises, broken bones, and burns. Depending on how a worker gets hurt, the relationship between work and the injury may also be obvious. In these cases — such as that of Darryl Binder — it is easy to agree there is an injury and that it was caused by work. But this is not always so. Consider injuries caused by repetitive motions.
Repetitive strain injuries (RSIs) became an important public policy issue in the 1980s. But they began appearing in factory and office workers during the early nineteenth century. Often referred to as writers’ cramp or telegraphists’ cramp, these RSIs received uneven treatment over time — often depending on who was experiencing them. For example, research by Dr. George Phalen contributed to chronic hand disorders not being recognized as work-related for nearly 40 years. His assertion was that many women experienced these disorders but, since women did no “manual” work, such injuries could not be work-related.7
The Phalen example illustrates that who gets injured plays an important role in the recognition of hazards and injuries.8 The physical demands of women’s work in the home and on the job are often downplayed. And much of the “science” of workplace health and safety is based upon studies of healthy men. Exposure limits to chemical and biological agents have historically been set with little consideration that there may be meaningful physiological differences between men and women, as well as between healthy and unhealthy individuals.
The type of injury and its cost also affect recognition
We limit workers’ exposure to chemical and biological agents because these agents cause occupational diseases, as well as injuries (such as burns). But recognition of the exposure–disease relationships has not come easily or quickly. Occupational diseases typically have long latency periods that make it difficult to see the relationships between work and the injury. The presence of other potential causes or contributory factors can make it even harder to determine definitively that work contributed to the disease in other than a minimal way.9 While governments and employers often cite the complex causation of such diseases as a reason for delay or refusals, there is evidence that concerns about financial liability drive these decisions.
Fluorspar miners in St. Lawrence, Newfoundland, for example, struggled for 30 years to gain adequate health and safety protections, as well as compensation for work-related silicosis and lung cancer. Government concern about the economic impact of accepting such diseases played a part in delaying and limiting compensation. Only mounting public pressure and workplace disruption triggered change.10 Uranium miners at Elliott Lake, Ontario, faced similar barriers to gaining recognition of the link between their work and silicosis and cancer and used similar tactics to gain redress.11
Employers may impede injury recognition
Employers withholding information about an occupational hazard also contributes to delays in recognizing such hazards. The danger of asbestos exposure was well known in the industry12 but Bendix Automotive, for example, withheld this information from brake plant workers in Windsor, Ontario. When political pressure finally resulted in both health and safety enforcement and compensation claims, Bendix closed its plant in 1980.13
Bendix is not an isolated case of an employer withholding information and then abandoning its workers. In the United States, the Johns-Manville Company engaged in a decades-long pattern of denying asbestos was a hazard and repressing information about its effects in order to maximize its profitability.14 During this time, hundreds of its workers fell ill and died from asbestos-related diseases. The 1980 shutdown of the Johns-Manville plant in Scarborough, Ontario, allowed the employer to externalize the costs of compensating asbestos-related injuries by removing itself from the workers’ compensation rate-group to which those injuries would be charged.15
The social construction of injury and hazards
These examples indicate that injuries, like accidents, are a social construction. That is to say, factors other than “science” contribute to what we consider legitimate work-related injuries. Workers, employers, and governments have much at stake when defining an injury. They are also differently able to cause or impede the recognition of workplace injuries.
Eric Tucker notes that constructing what is considered “normal” in the workplace is a fundamentally political act, because the expectations that are generated arise out of divergent and competing interests.16 Thus, the recognition of and response to injuries depends on positions taken by the state and other players, such as corporations and trade unions. Naturally, those who are more powerful are better situated to have their versions of reality become the dominant ones. And in our society, capital is very much more powerful and better organized than workers are.
Health and safety hazards are also social constructions. They grow (in part) out of what we consider legitimate injuries. Over time, substances and activities that we once did not consider hazardous have been redefined. Asbestos, coal dust, and silica all come to mind. More recently, we’ve come to see activities such as sitting in one position for hours or making repetitive motions as hazardous. And new hazardous substances, such as photocopier and printer toner, have been identified.
Employer tactics in contesting injury recognition
As with injuries, employers, workers, and the state often contest what work and substances are considered hazardous. Angela Nugent traces the plight of young women who died from radium poisoning after painting luminescent paint on watch faces. These workers experienced severe anemia, lesions of the gums, and necrosis of the jaw before dying.17 The United States Radium Corporation responded by engaging researchers to determine if radium was indeed the cause of these deaths. This research indicated that it was.
The employer’s response exemplifies how employers can evade responsibility for such injuries. The employer requested additional research, criticized the methods, prohibited publication of the research, misrepresented the findings to government, hired a more compliant researcher to create evidence that there was no risk, blamed the workers for their exposure, and then argued that the deaths of the young women were an acceptable price to pay for glow-in-the-dark watch faces.18 Eventually the employer was forced to address these hazardous conditions. It chose to alter how watches were made to provide more protection to workers, rather than eliminate the hazard from the workplace.
Radium poisoning demonstrates that some hazards are easier to find agreement about than others are. An unguarded saw blade poses an obvious hazard. The way a subcontracting system pressures workers to work unsafely is more difficult to see. For example, the effect of the subcontracting system used in Australia’s long-haul trucking system on workplace safety has been largely ignored because of the dispersed and incremental nature of fatalities — fatalities were not viewed through the lens of work.19 This subcontracting system exacerbated normal demands on drivers, compelling them to work excessive hours, speed, and use drug stimulants.
Ignoring the context in which the accidents occurred meant the hazard was constructed as driver error. Regulation then became focused on drivers and not on the system of work. This construction benefited employers because it ignored how employers caused accidents through the organization of work. It also benefitted the state by obscuring the role played by government deregulation of the trucking industry.
Perpetuating the careless worker myth
Focusing attention on worker behaviour is a recurring issue in the prevention of workplace injuries. As we saw in Chapter 2, the careless worker narrative has been a powerful tool for employers over time. Despite being largely discredited, it continued to inform state efforts to prevent workplace injuries. For example, Alberta ran an OHS awareness campaign in 2005 that showed workers engaging in unsafe work practices or suffering the consequences of those practices.20 The common visual component of the campaign was the word “stupid.” Its location in each poster (e.g., on a crate being moved unsafely, on a wall from which an unsecured ladder had fallen) was designed to focus attention on the contribution of worker behaviour to workplace accidents. In this way, the campaign implicitly blamed workers for their injuries.
Alberta launched a similar campaign in 2008, with graphic videos showing how workers’ behaviour causes accidents.21 In one vignette, a worker in high heels is injured when she falls from a ladder while trying to reach unstable stock on a high shelf. The video ignores the role of the employer in creating jobs that require workers to engage in risky behaviour to complete routine tasks. The employer required the worker to wear high heels to look fashionable. The employer gave the worker a rickety ladder. The employer placed stock high up and stacked it unsafely. That the worker was injured was entirely predictable and preventable — by the employer. The safety tips provided for workers at the end of this particular video all suggest alterations to how the job is designed — things entirely out of the control of workers. Missing is any indication that employees have a statutory right (and obligation) to refuse such obviously unsafe work.
Identifying occupational cancer
Occupational cancer provides an interesting example of how injuries and hazards are socially constructed in ways that endanger workers. Occupational cancer is important because of the long-term growth in the proportion of work-related fatalities attributed to occupational diseases.22 Discussion and research about occupational cancer typically focuses more on treatment than prevention.23 When preventing occupational cancer is discussed, it typically focuses on determining “safe” levels and types of exposure to carcinogens, rather than discussing whether exposure ought to occur at all. These discussions also tend to emphasize the importance of lifestyle factors (e.g., diet, exercise) in preventing cancer — in doing so, making workers implicitly responsible for cancer prevention.24 Framing the discussion this way ignores that involuntary and even unknowing exposure to carcinogens (for which, incidentally, there are no definitively safe levels of exposure) frequently occurs as a result of job-design decisions made by employers.
There is significant debate over the percentage of cancers that are occupationally linked. Although estimates range from 4 percent to 40 percent, and obviously vary by type of cancer and country, there is widespread acceptance of numbers between 8 percent and 10 percent.25 Occupationally linked cancers are not evenly distributed through the workforce, more commonly affecting blue-collar workers and having gendered elements.26 Yet these numbers are not reflected in government statistics, which are derived from workers’ compensation claims.
For example, in 2005, approximately 13,100 Albertans were diagnosed with cancer and 5,500 Albertans died from cancer.27 The Alberta Cancer Board estimates that 8 percent of all cancers are occupationally caused.28 This suggests just over 1,000 of Alberta’s 2005 cancers were occupational cancers and about 440 deaths were occupationally related. Yet, in 2005, the WCB accepted only 29 claims for cancer and reported just 38 cancer-related fatalities. This example is consistent with the pattern over the previous 10 years.29 The vast majority of cancer cases accepted by the WCB are lung cancers and mesothelioma, with benefits going mostly to firefighters, coal miners, and workers exposed to asbestos. In this way, the prevalence of occupational cancer is hidden.
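The gap between these estimates and the WCB figures can be checked with simple arithmetic. The sketch below uses only the figures cited above (13,100 diagnoses, 5,500 deaths, the Alberta Cancer Board’s 8 percent estimate, and the 29 accepted claims); the variable names are illustrative only.

```python
# Figures cited above for Alberta, 2005.
diagnoses = 13_100          # cancer diagnoses
deaths = 5_500              # cancer deaths
occupational_share = 0.08   # Alberta Cancer Board estimate

# Estimated occupationally caused cancers and deaths.
est_occupational_cancers = diagnoses * occupational_share  # just over 1,000
est_occupational_deaths = deaths * occupational_share      # about 440

# Claims actually accepted by the WCB that year.
wcb_accepted_claims = 29

print(f"Estimated occupational cancers: {est_occupational_cancers:.0f}")
print(f"Estimated occupational cancer deaths: {est_occupational_deaths:.0f}")
print(f"Share of estimated cases accepted by the WCB: "
      f"{wcb_accepted_claims / est_occupational_cancers:.1%}")
```

On these figures, the WCB recognized fewer than 3 percent of the estimated occupational cancers.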
Preventing occupational cancer
Not surprisingly, discussion of occupational cancer has been largely absent from Alberta’s occupational health and safety system, which — like workers’ compensation — is designed to address more traditional accidents and injuries. The word carcinogen appears only three times in the 539-page Occupational Health and Safety Code. One mention requires the labelling of asbestos storage units. The other two instances require documentation of workplace and non-workplace exposures to asbestos, silica, or coal dust during a health assessment of such a worker.
While it may seem facile to criticize legislation based on the (dis)appearance of a single word, this forms part of a broader pattern of ignoring cancer in the workplace. The Occupational Health and Safety Code does require employers to keep exposures to chemicals, biological hazards, and harmful substances (many of which are carcinogens) as low as reasonably practicable and below certain threshold levels.30 Yet, relying on such exposure thresholds for carcinogens simply sets an “acceptable” level of occupational cancer, rather than preventing it.
When workers get cancer from legal and/or illegal workplace exposures to carcinogens, the state then frames these as (at least partially) the consequence of worker behaviour. Cancer in the Workplace, a 2005 publication by the Alberta Cancer Foundation and Work Safe Alberta, notes how lifestyle and genetic factors are influential factors in the development of cancer.31 The document acknowledges that job design is a pivotal factor in exposing workers to carcinogens, but then fails to follow that logic through in its recommendations.
Instead, workers are provided with tips, some of which are useful in limiting occupational exposures (e.g., wear personal protective devices, wash your hands) and some of which are not (e.g., eat lots of vegetables, get some exercise). While workers are encouraged to limit their exposure to hazardous substances, this is something over which they have little control or even knowledge. Employers, who do have control and knowledge, are merely advised to “ensure that the products being used in the workplace are the least hazardous possible for the intended use” and that engineering controls and other equipment can be used to reduce exposures.32
Constructing cancer as a non-issue
In this example, we see occupational cancer and carcinogens being (de)constructed as a low-priority issue. The medical community pays little attention to the occupational origins of cancer. When causation is discussed, the multi-factorial nature of cancer is (legitimately) raised as a barrier to identifying causes, which then seems to preclude effective prevention. This assertion is vexing in several ways. First, unequivocal evidence regarding causation in individual cases is likely impossible to find. Yet, this does not preclude taking action to eliminate exposure to known carcinogens immediately. Second, many non-occupational exposures to carcinogens that confound causation are the result of workers using or ingesting the products produced by employers, such as cigarettes, alcohol, pesticides, and herbicides on food, and fumes and other residue from industrial plants or manufactured products.
When an occupational link becomes established in the public’s mind, compensation is provided.33 Yet, overall, there is little regulation of carcinogens and the method of regulation legitimizes questionable levels of exposure. The state prefers advising workers to wear protective gear and eat well to requiring the redesign of work processes. In these ways, Alberta has constructed a (non)response to occupational cancer. This approach is common and advances the economic interests of employers. Many carcinogens are fundamental to industrial processes and eliminating them entails significant additional costs (resulting in higher prices and/or reduced profitability) and significant liability. Acknowledging that corporations knowingly expose workers to carcinogens and that the government allows this to continue is also a significant threat to both social stability and the legitimacy of the state. Thus, this topic receives little attention.
Conceptual models of injury
Embedded in legislation, policy, and practice are beliefs about the cause and nature of injuries. Three basic assumptions about work-related injuries underlie efforts to prevent and compensate workplace injuries:
- The mechanism of injury will be discernable, or at least mostly distinguishable from other events or disease processes,
- The injury will manifest itself at the time of or reasonably soon after the injury occurs, such that the injury can be causally related to a workplace event, and
- The course and treatment of the injury will be broadly similar from one person to the next.
This biomedical model asserts that illness has a biological source (or pathology). Further, the degree of illness is proportional to the degree of biological malfunction. Objective medical knowledge (e.g., test results, observations, functional evaluations) is more valued than patient self-reports.34 This model plays a significant and useful role in both occupational health and safety and workers’ compensation.
Limits to the biomedical model
Yet, this model also has some serious drawbacks. Determining whether a worker concern can be medically substantiated takes time and money. Employer resistance can impede such a determination. Until a determination is made, workers continue to be exposed to the hazard. Further, worker-identified injuries or illnesses that cannot be validated by such tests are not given much credence, and thus prevention and compensation may be denied. This, however, does not eliminate any hazard that exists or injury that occurs — it simply transfers these costs to workers.
This model also runs afoul of recent research that suggests (1) injuries are often multi-factorial, and (2) work exerts significant effects on health and a broad range of diseases have work-related components.35 Attempting to classify injuries as work-related and non-work-related (thereby ignoring the interactive effect between occupational and broader environmental factors) is likely an impossible task.36 Yet this biomedical approach remains commonplace.37 Consequently, instead of triggering a broader effort to reduce unsafe work practices and the use of toxic substances in society, discussions define these hazards out of existence — uncertainty is used to preclude action.
Workers may take a different approach to constructing illness and injury. Many rely upon their own observations of how the workplace affects them and their co-workers to draw conclusions about health and safety.38 The reliance of workers on this form of knowledge may have implications for the regulation of occupational health and safety.39 Where injuriousness is contested, medical knowledge is typically given precedence over worker knowledge. Knowing this, workers may decline to engage with OHS systems, expecting them to fail, or may engage with them in ways they believe can succeed, even if the outcome is less than optimal.40
REGULATING WORKPLACE HAZARDS
Approaches to regulation
Chapter 2 suggested that the state has intervened in injury prevention because workplace injuries threaten production and social reproduction. Governments can intervene in the operation of society in several ways. It is useful to think about these policy instruments as falling into four different categories of increasing invasiveness.41 States can use hortatory instruments to signal priorities and propel actions by appealing to values via symbols. Campaigns like “Bring ’em back alive” attempt to convince motorists to drive safely in order to protect their children.42
States can also use capacity-building instruments, such as investing in intellectual, material, or human resources to enable activity. For example, governments can offer educational campaigns, such as Alberta’s online ergonomic training programs, to improve the knowledge of workers and managers.43 Alternatively, the state could provide training materials, safety equipment, or trainers to build the capacity of workers and managers to act safely.
Incentive-based instruments use inducements, sanctions, charges, or force to encourage action. This could include financial incentives to employers for reducing workplace injuries, such as Alberta’s Partners in Injury Reduction program (discussed below) or ticketing workers and employers for unsafe acts or circumstances, such as Ontario’s 2005 initiative.44 Prosecutions under health and safety legislation as well as the 2004 amendment to the federal Criminal Code are other examples.
Finally, states can use authority-based policy instruments that grant permission, or prohibit or require action. They can also change the distribution of authority or power within a system. So a state could sanction the creation of a no-fault system of injury compensation that displaces tort law. It could also compel employers to participate in that system. It could impose duties upon employers that are greater than their common law duties. It could set standards regarding chemical exposures.
Limits on regulation
Governments often use several policy instruments to achieve a goal, such as making workplaces safer. The exact choice of instrument(s) can be constrained by political pressure, such as employers wanting to minimize regulation. The state may also be limited by popular conceptions about the appropriate role of the state and the effectiveness of particular forms of regulation. Constraints are often categorized as political and practical. This division is false and obscures how “practical” constraints reflect earlier political decisions.
For example, many people believe that the state must rely upon employers and workers to make workplaces safe because there aren’t enough inspectors. In this way, the internal responsibility system is cast as a reaction to a practical problem: inspectors can’t be everywhere. Is that really true? The number of inspectors that the state chooses to hire is a political decision. This decision reflects how many inspections the government believes is desirable. More broadly, it also reflects the degree of state interference in the operation of workplaces that the government thinks is required (or is politically palatable). Viewed this way, the seeming practical problem of not enough inspectors is actually a political decision.
It is also useful to be mindful that the ability of the state to regulate is compromised by the multiple goals that regulatory agencies must often pursue. In addition to adjudicating disputes and enforcing compliance, OHS agencies may undertake research, provide policy advice, distribute funding or compensation, or collect premiums. Such agencies may need to trade off how and how aggressively they enforce rules in order to achieve other organizational objectives.45
The internal responsibility system
Beginning in the 1970s, Canadian governments began emphasizing the internal responsibility system (IRS), with workers having the right to know, participate, and refuse. The IRS was adopted when the influence of labour was near its peak and the standard employment relationship was widespread.46 In practice, those workers most able to benefit from the IRS have been unionized workers and non-unionized workers in workplaces where employers are prepared to cooperate.47 With Canadian unionization rates hovering at around 30 percent, the IRS clearly does not equally benefit all workers.
In order to participate in decision-making about safety and exercise the right to refuse, workers need to be aware of the hazards they are facing. It is unclear whether workers are aware of hazards and whether the hazards workers identify are accepted as such. Workers may also be reluctant to ask for information. For example, only one in five Ontario workers with a health and safety concern asked for information about it — with the majority of workers asking their supervisors.48
Workers may also not know their rights. For example, a 1988 Ontario study found that 44 percent of workers knew nothing about their rights.49 Workers who did know about their rights were those already most advantaged in the workforce: highly educated and/or unionized men.50 More recently, a 2007 study found that only 21 percent of new Canadian workers received health and safety training in their first year with a new employer.51 This suggests that most employers do not take safety issues seriously enough to train workers.
Knowledge is power?
A popular refrain among health and safety activists during the 1970s was “Knowledge is not power. Power is power.”52 While this slogan is incisive, there is good reason to believe that knowledge is power — for employers. Specifically, when there is a large difference in what workers and employers know about health and safety, the rights to know and participate may actually increase employer power in the workplace.53
This seemingly counterintuitive outcome occurs in two ways. First, employers can influence which hazards workers pay attention to by what knowledge they choose to share. Employers may be more likely to acknowledge or provide information about hazards that are easy to address rather than hazards that require more involved remediation.
The ability of employers to influence what hazards are recognized is heightened by having a designated group that is “responsible” for workplace health and safety issues, such as a joint health and safety committee (JHSC). This arrangement channels health and safety concerns into a single venue that the employer can dominate. Having an official place to discuss health and safety also delegitimizes discussion that occurs elsewhere, such as in a union hall or on the shop floor.
Second, employers can influence how workers think about hazards by how the employer frames an issue or the solution. For example, workplace air quality may be an issue. An employer may frame this as a worker “concern,” thereby subtly contesting whether there is indeed a hazard. The employer can then quite reasonably suggest evidence needs to be collected (or provided by the workers) to substantiate the “concern.” This delays, and possibly derails, action. In the meantime, workers continue to be exposed to the hazard.
Should a hazard be identified, the employer can shape the solution(s) considered by using its managerial power. For example, it can require the use of personal protective equipment (PPE) such as respirators (or other low-cost solutions) in lieu of altering the production process to eliminate the hazard. Without some way for workers to compel a specific remedy (e.g., collective bargaining, direct action on the shop floor), workers must accept the employer’s solution.
Joint health and safety committees
The JHSC is a central feature of the IRS. Committees typically comprise an equal number of employer and worker representatives. When consensus on health and safety issues can be found, the JHSC can make non-binding recommendations to the employer. Data from the UK, U.S., and Canada suggest that such committees are associated with a reduction in workplace injuries.54 The effectiveness of the committees appears, however, to be mediated by union representation, involvement of workers, management attitudes, and the degree of external regulation.55 The most frequent criticism of JHSCs is that they lack the authority to compel employers to act on safety issues.56 In short, the potentially positive effects of JHSCs only occur if employers accept the work of the committees.57
This is not to say that workers are entirely helpless. Recent research at the University of Windsor found that how worker safety representatives behave has an important effect on what can be achieved. When worker representatives gathered their own research on hazards, emphasized worker knowledge, and mobilized workers around safety issues, significant improvements in working conditions were more likely to occur.58 Workers with a more politically active orientation tended to challenge the way employers shape and limit discussion, recognizing that remedy often requires action beyond simply identifying concerns to the employer.59
By contrast, the Windsor study also suggests that workers with a technical-legal orientation typically focused their attention on basic housekeeping and maintenance issues — concerns that are neither disruptive nor costly to address.60 This finding builds upon the observation by Vivienne Walters that worker representatives on JHSCs are often drawn into technical, collaborative discussions shaped by employer notions of what risks are reasonable, and what costs are affordable.61
This research suggests two things. On the one hand, employers can use JHSCs to limit the impact of worker participation rights by controlling information flow, shaping discussion about safety, and ignoring recommendations they do not agree with. On the other hand, workers prepared to engage in more overtly political action can use JHSCs as a platform from which to exert pressure on employers. In short, employers continue to enjoy a structural advantage. But the effectiveness of participation rights for workers is determined, in part, by how those rights are exercised.
Despite (or perhaps because of) these shortcomings, JHSCs remain an important part of most provincial OHS systems. This is not the case everywhere, though. In Alberta, for example, statutory JHSCs are formed only at the order of the Minister. There were about 321,000 significant occupational injuries in Alberta in 2007 — an average year.62 Despite this, the Minister did not order any committees formed. In fact, there are no Minister-ordered JHSCs in Alberta. The committees that do operate exist where workers are unionized (about 20 percent of the workforce) and have bargained them into place, or where the employer allows a committee to function. Voluntary committees are not, however, subject to the provisions of Alberta’s Occupational Health and Safety Act. Without specific collective-agreement language, employers set the rules for such committees.63
The right to refuse
The right to refuse unsafe work is the most powerful right workers have under the IRS. It is one of the few instances where workers can legally disobey their employer. A refusal can compel the employer to pay attention to safety concerns. Yet, despite staggering numbers of injuries and deaths each year, workers do not refuse unsafe work very often.64 To understand why workers don’t refuse requires us to consider the nature of this right in practice.
A refusal is a reactive right. It operates only after the employer has made many decisions about the organization of work — some of which have made the work unsafe. And the scope of the right is simply to work or not. The right to refuse confers no ability on workers to influence what hazards exist in the workplace — only to absent themselves from those hazards they know about and believe unsafe. By contrast, the employer has significant latitude to (re)organize work in ways that make it minimally acceptable to — although perhaps not entirely safe for — the worker. In this way, employers have significantly more discretion and flexibility around work refusals than workers do.
Formal work refusals are not the only kind of work refusal. An Ontario study found that only 1 percent of workers exercised their legislative right to refuse unsafe work, although 40 percent informally refused.65 An informal work refusal may be confrontational (i.e., a refusal without triggering the formal legislative process) or non-confrontational (e.g., altering the work process).66 Such behaviour may pressure the employer to alter unsafe work. Or, if the employer ignores or is unaware of a worker’s resistance, an informal refusal may result in somewhat less unsafe work.
Workers and employers may also differ in their sense of what is “safe” and “unsafe” in the workplace. The right to refuse may also be affected by broader dynamics of industrial relations. Employers, for example, often raise the spectre that workers might use their statutory right to exert pressure on the employer during collective bargaining — although there is no evidence of this happening in Canada.67
Employer responses to refusals
One outcome of a refusal is that the employer may simply assign the task to another worker, perhaps without telling the second worker about the hazard that has been identified. This may trigger cynicism about the efficacy of the right to refuse, potentially reducing workers’ willingness to exercise this right in the future. Employers may also haggle with workers — applying pressure such as “you’re holding up the line” or “we have to make this deadline.” Indeed, fellow workers may also apply such pressure.68
Pressuring workers is effective because refusing unsafe work entails significant risk for workers. Workers who refuse may be disciplined for insubordination. Knowing that they may face discipline for exercising their right, workers may be reluctant to do so.69 When such discipline is appealed to an administrative tribunal or arbitrator, the burden of proving the work is unsafe — whatever the actual rules about the burden of proof are — may well fall to the worker. A worker with a good work record whose refusal is measured and appears reasonable tends to fare best when he or she appeals discipline.70
The specific rules around refusals may also affect the ability of workers to exercise their rights. If the law says the refusal is only legitimate if the hazard is abnormal, employers do not have to remedy long-standing or industry-wide OHS concerns. If the law says danger must be imminent, workers must wait until matters escalate and the risk to their safety is grave.
Refusal as a weak right
The unwillingness of workers to refuse unsafe work highlights that employers need not exert their power to get their way. That is to say, the powerful rarely have to prove their strength — simply the expectation that employers may exercise their power may be sufficient to gain compliance.71 Consequently, the right to refuse is paradoxical. On the one hand, it provides workers with a rare opportunity to override employers’ common law right to manage. Yet, on the other hand, workers face disincentives and barriers to exercising this right. And even if they do exercise the right, employers do not necessarily have to remedy the problem. In this way, the right to refuse is a weak right.
Yet it is difficult to see this weakness on the surface. It is (superficially) true that “workers have the right to refuse unsafe work” in Canada. This creates the appearance that workers can protect themselves. This, in turn, undermines the political power workers can derive from legitimate concerns about being injured or killed on the job. This appearance also protects employers from state interference: workers can (allegedly) protect themselves. And this appearance protects the state from political backlash when workers are injured because it makes workers appear responsible for their own injuries and deaths — why didn’t the workers just exercise their right to refuse?
Effectiveness of the internal system
The internal system creates weak rights for workers. While workers have more protection than they otherwise would, much of that protection is notional. To exercise these rights — to make them real and meaningful — workers must take the chance of defying their employer and facing dismissal.72 The workers best able to exercise these rights are workers who are already advantaged in the workplace: educated and/or unionized men. In this way, the internal system is consistent with many of the legislative compromises between the interests of labour and capital brokered by the state during the twentieth century. Women, the ill-educated, the non-unionized, and those who have precarious employment (see Chapter 4) receive less protection than educated and/or unionized men do.
To the degree that workers can exercise their rights — perhaps with the cooperation of a sympathetic employer — there are significant limits on these rights. Health and safety rights tend to focus on quantifiable and obvious hazards to health. There is little scope to address qualitative issues in the work environment that may affect health and safety, such as pace, repetitiveness, and deskilling.73 Workers can suggest changes to such factors through JHSCs, but employers are under no obligation to consider them. In short, “management” decisions about when, where, and how to produce things — the decisions that create risks and hazards — are out of workers’ reach under the internal system.74 Further, some commentators suggest that workers are slowly being squeezed out of an active role in the IRS as the state and employers increasingly adopt partnership models.75
Exposure levels and threshold limit values
Internal systems operate in conjunction with the external responsibility system. Governments set standards and obligations, conduct inspections and investigations, and then enforce their laws via orders, fines, and prosecutions. Among the standards set by the state are exposure limits to some of the chemical and biological agents found in the workplace. Workers clearly benefit from knowing to which chemical and biological agents they are being exposed. The limits used, however, raise many concerns.
There are more than 70,000 chemical substances in use in North America. Another 800 substances are introduced each year. There is no toxicity data available for 80 percent of these substances.76 And the federal Workplace Hazardous Materials Information System (WHMIS) places no obligation on manufacturers or employers to determine the hazardous properties of products before introducing them into the workplace. Consequently, workers are often the first humans to experience prolonged and significant exposure to these substances.
The results of using workers as guinea pigs can be disastrous. As we saw in Chapter 2, workers were the first to experience lead poisoning as a result of adding tetraethyl lead to gasoline. Employers and the government wilfully disregarded the warnings these poisonings provided. As a result, not only were workers injured, but a hazardous product also became widely used. The United States now faces an estimated four to five million metric tons of lead dust (8.8 to 11 billion pounds) deposited in soil from car emissions. This constitutes a significant hazard to children playing outdoors.77
Are exposure levels safe?
Exposure limits are, in theory, the level at which nearly all workers may be exposed without adverse effect. There is, however, no scientific basis for this claim.78 These limits are also largely based on data derived from research on healthy men. Consequently, there is little consideration of the effects of age and gender.79 Also excluded are the effects of pre-existing ill health and of exposures originating outside the workplace.80 In this way, exposure limits are likely to overestimate what constitutes a safe exposure.
A concerning trend is that these “safe” levels of exposure go down over time, often dramatically. The exposure level for benzene, for example, dropped from 100 parts per million (ppm) to 10 ppm between 1945 and 1988 and exposure limits on vinyl chloride dropped from 500 ppm to 5 ppm.81 This phenomenon is not just a part of the distant past. Alberta reduced its exposure levels of chrysotile asbestos from 2 fibres per cubic centimetre (f/cc) in 1982 to 0.5f/cc in 1988 to 0.1f/cc in 2004.82 These changes reflect that, in 90 percent of cases where threshold limit values (TLVs) have been set, there is insufficient data on the long-term effects of exposure from either animal or human studies.83
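The scale of these reductions is easy to see when expressed as fold-reductions. A minimal sketch, using only the limits cited above (the dates and units come from the text; the calculation itself is just division):

```python
# Fold-reductions in "safe" exposure limits cited in the text.
# Each tuple: (substance and unit, earlier limit, later limit).
limits = [
    ("benzene (ppm)", 100, 10),              # 1945 -> 1988
    ("vinyl chloride (ppm)", 500, 5),        # over the same era
    ("chrysotile asbestos (f/cc)", 2, 0.1),  # Alberta, 1982 -> 2004
]

for substance, old, new in limits:
    fold = old / new
    print(f"{substance}: {old} -> {new}, a {fold:.0f}-fold reduction")
```

In other words, exposures once deemed "safe" turned out to be 10, 100, and 20 times too high, respectively — always in the same direction, which is the pattern the next section examines.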
Barry Castleman and Grace Ziem have also exposed the corporate influence on the setting of TLVs in the United States. Nearly one sixth of all TLVs have been set based on unpublished corporate data, which raises concerns about the reliability of the results. Further, the committees that set these standards have included significant numbers of industry representatives and consultants — many of whose relationships to industry were hidden. This raises significant concerns about conflict of interest. Finally, TLVs have only been set for about 700 chemical substances — a fraction of the over 70,000 substances found in modern workplaces.
Why do exposure levels always go down?
The trend towards lower TLVs seems to indicate the system “works”: regulators revise TLVs in response to emerging scientific discoveries. This conclusion is incorrect and misleading. The constant downward trend actually demonstrates a systemic underestimation of risk to workers by regulators. It is true that additional research should alter what is considered a “safe” level of exposure. The laws of probability suggest that initial exposure levels will sometimes be too high and sometimes too low.84 Yet it is rare for TLVs to be set too low — they are almost universally set too high. Why is this?
To be fair, regulators operate in some degree of uncertainty due to a lack of credible research on the effects of chemical substances. This is particularly true when employers hide evidence that substances negatively affect workers, sometimes by producing studies of questionable validity.85 There is also little research on the synergistic effects of chemicals where there is a multi-agent exposure. For example, the chance of developing lung cancer following asbestos exposure increases dramatically if the worker also smokes cigarettes. Yet these factors alone cannot explain the consistent underestimation of the hazards posed by chemicals.
Regulators also operate in a political environment, where workers, employers, and the state all seek to advance their interests. It follows that regulators setting standards must ask what actions will be politically palatable. In this way, setting exposure limits is not a scientific process, but rather a political one. Among the findings of researchers is that most exposure limits have been set at levels industries were already achieving.86 That is to say, “safe” appears to be defined in practice as “convenient for employers” rather than “posing no hazard to workers.” Incorporating such standards into government regulations results in the incorrect belief that such exposures are safe.
This discussion expands our understanding of how hazards are socially defined concepts. By labelling levels of exposure as “safe” (even when they aren’t), the state is able to define hazards out of existence. This benefits employers because many of these substances are integral to industrial processes and/or are the least expensive substance available to do the job. The effect of such hazardous substances on workers is ignored. After all, how can a “safe” substance cause harm to a worker?
Inspections and inspectors
As we saw in Chapter 2, Canadian workplace inspectors have historically favoured achieving compliance by means of persuasion, rather than sanction. Inspections are the main way inspectors identify workplace hazards and pressure employers to remediate them. It is difficult to find data on the number of inspectors. What information can be found suggests that the ratio of inspectors to workers and employers is low. For example, in 1983/84, Ontario had 360 inspectors and 20 occupational hygienists and a workforce of three million.87 In 2008, Alberta had 84 health and safety inspectors and 144,000 employers.88
Inspection data is also difficult to come by. In British Columbia, the number of inspections decreased by 40 percent between 1995 and 2005.89 By contrast, the total number of field visits in Ontario increased from 59,345 in 1996/97 to 101,275 in 2007/08, with static or declining field visits from 1996 to 2004, followed by a near doubling thereafter.90 Some commentators have suggested, however, that this increase in inspections masks a reduction in the quality of the inspections.91 It is difficult to use this data to draw any conclusions. Nationally, information is fragmentary and conflicting. Quantitative measures also exclude important qualitative details, such as the depth and rigour of the inspection.
These weaknesses do not entirely preclude analysis, however. Consider the case of Alberta. In 2005, the Government of Alberta inspected 5,237 worksites. These inspections are part of the province’s plan to ensure workplaces are fair, safe, and healthy.92 While 5,237 inspections may seem like a lot, there are more than 140,000 employers in Alberta (many with multiple worksites). Assuming no worksite received multiple visits, this data indicates that fewer than one in every 26 worksites received a visit. Or, put another way, it would take more than 26 years for every worksite to receive a single visit.
That same year, more than 33,305 Alberta workers were injured so badly that they could not report to work the next day and at least 143 died from work-related injuries and disease.93 Even if inspections focused exclusively on worksites with demonstrably hazardous conditions, only around one-sixth of these worksites would have received an inspection. It is difficult to believe that this level of inspection can lead to fair, safe, and healthy workplaces.
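The coverage claims above rest on simple arithmetic. A rough sketch using the figures cited (and, as the text does, assuming no worksite received multiple visits):

```python
# Rough coverage arithmetic for Alberta's 2005 inspection effort,
# using only the figures cited in the text.
inspections = 5_237
employers = 140_000        # worksites exceed this, since many employers have several
lost_time_injuries = 33_305

# Fewer than one worksite in ~27 saw an inspector that year...
worksites_per_inspection = employers / inspections   # ~26.7

# ...so visiting every worksite once would take roughly that many years
# at this pace of inspection.
years_for_full_coverage = worksites_per_inspection

# Even if every inspection targeted a site where a serious injury occurred,
# only about one-sixth of such sites could have been visited.
share_of_injury_sites = inspections / lost_time_injuries  # ~0.16

print(f"1 in {worksites_per_inspection:.0f} worksites inspected")
print(f"{share_of_injury_sites:.0%} of injury sites coverable")
```

Note that both estimates are generous to the inspection regime: repeat visits to the same site, which certainly occur, would stretch full coverage out even further.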
Bias in inspections
Setting aside the level and quality of the inspection, there is also reason to believe that inspections target traditional industries and work patterns. A 2007 CBC investigation found that inspectors were up to 10 times as likely to visit a traditionally inspected workplace (e.g., construction, manufacturing, mining, and forestry) as one not subject to traditional inspections (e.g., education, health care, office environment).94
Nurses, for example, are inspected at roughly one-twentieth the rate of forestry workers. While nursing is not normally considered an “unsafe” occupation, in 2005 there were 73,000 assaults on nurses in Canadian hospitals and care facilities, affecting approximately one-fifth of all nurses. While focusing on traditionally inspected workplaces might be explained in terms of the potential dangers in each sector, the number of workers’ compensation claims from traditionally and non-traditionally inspected workplaces is approximately equal.95
This same investigation found that most government inspections occurred during normal working hours. In Ontario, BC, Newfoundland, Nova Scotia, and New Brunswick, less than 1 percent of inspections occurred on weekends.96 This ignores the increasing number of workers who work on weekends and evenings. And evidence suggests that their likelihood of having an accident increases during those times.97
The effect of orders
When an inspector identifies a hazard, the most common consequence is that the inspector directs the employer to remedy it. The idea is that (assuming the employer complies) this addresses the situation and maintains a good working relationship between the inspector and the employer.98 Such direction can be verbal or can take the form of a written order. There is no data available on the incidence of such directions or on the circumstances under which verbal directions become written orders.
This approach of trading forbearance for compliance gets mixed reviews. Some suggest that by overlooking minor violations, not enforcing regulations that have a poor cost-benefit ratio, and/or delaying enforcement in return for an employer’s promise to comply or mostly comply, inspectors are acting in a reasonable manner.99 This approach makes inspection work much more cordial. It also reflects that inspectors generally have not had the power to issue on-the-spot fines, and that bureaucrats face political pressure that limits their access to prosecution. In this way, cajoling employers may be a reasonable (to their minds) trade-off. It may also reflect inspectors’ orientation to capitalist relations, which emphasize the right of the employer to direct work.
Yet this approach has its downsides. For example, this reluctance to deviate from cajoling undermines the effectiveness of workers seeking to address hazards via the internal responsibility system.100 It also reduces the cost of non-compliance for employers. Employers begin to expect one or more opportunities to remedy deficiencies. This can’t help but reduce their attention to safety because there is, in effect, no real penalty for operating a hazardous workplace. In effect, persuasion sends the message that non-compliance is only a problem when it results in injuries. That is to say, the state is legitimizing unsafe work practices so long as nothing bad happens.
Prosecution and fines
When employers don’t comply with orders or legislation, governments may pursue prosecution. Prosecution has been relatively uncommon in Canada. The time and effort involved are significant. It also requires governments to get past their general reluctance to recognize or label anything done in the course of business as criminal.101 When prosecuted, employers may employ the due diligence defence.102
Again, we’re faced with fragmentary evidence about the number of prosecutions and level of fines. In British Columbia, the real-dollar value of fines declined from $2.3 million in 1995 to $1.4 million in 2005.103 Ontario provides a contrast with a doubling of prosecutions and fines between 2004/05 and 2007/08.104 Fine levels and prosecutions are, however, crude measures. The likelihood of being prosecuted and the relative level of fines is a more nuanced indicator of the effectiveness of enforcement.
In 2008, Alberta reported 22 successful prosecutions under the Occupational Health and Safety Code for violations going as far back as 2004. During this time, Alberta recorded approximately 700 occupational fatalities.105 The largest fine was $419,250 for a 2004 violation. That sounds impressive. But compared to the company’s annual revenues of $47 million in 2007, such a fine is akin to a person with an annual income of $50,000 getting a $440 ticket — about the same fine you’d get for doing 80 kilometres per hour in a construction zone. The upshot is that Alberta employers face little chance of prosecution and a relatively small fine, even when they horribly injure or kill a worker.
In a comprehensive review of the international literature, Canadian researchers Emile Tompa, Scott Trevithick, and Chris McLeod found limited evidence that health and safety inspections resulted in fewer or less severe injuries.106 There was also only mixed evidence that the prospect of being penalized for health and safety violations led to fewer or less severe injuries. The researchers suggest several possible explanations, including the fact that the penalties may not be significant enough to motivate compliance. It may also be that organizations do not always act rationally.
This conclusion is hardly surprising for workers. Inspections and the potential penalties have been demonstrably ineffective for decades, as evidenced by the ongoing high level of injury and death in the workplace. More interesting is the researchers’ finding of strong evidence that actually being penalized led to a reduction in injuries. This suggests that enforcement of regulations can positively affect workplace safety.107 Calls for enforcement (versus simple cajoling) were made as far back as 1898108 and continue into modern times.109
Partnerships and the mantra of “safety pays”
Recently, governments and workers’ compensation boards have begun creating partnerships with employers to improve workplace safety and reduce injuries. These partnerships are meant to encourage employers to undertake activities that will reduce workplace injuries. This encouragement often comes in the form of a financial incentive, such as a workers’ compensation premium rebate. The government and employers may also benefit from the appearance that they are trying to reduce the number of workplace injuries. The Government of Alberta uses WCB claims data to allocate rewards in its partnership program and, indeed, to measure the success of its entire occupational health and safety program.
These programs are based on (and reinforce) the widespread belief that “safety pays.”110 This mantra asserts that organizations can increase profitability by reducing workplace injuries.111 The most-cited evidence for this perspective is a 1993 five-workplace study carried out by the British Health and Safety Executive (HSE).112 The apparent and hidden costs of injuries were found to be as high as 37 percent of a transportation firm’s annual profits, 8.5 percent of a construction project’s tender cost for a second firm, and 5 percent of operational costs for a hospital.113 Among the conclusions of the study is that, for every dollar of insurable costs triggered by an accident, employers faced between $8 and $36 of uninsured costs. The study’s conclusion is, therefore, that it pays to improve safety.
This study has been criticized for several reasons.114 First, the incidents that were selected for analysis were those deemed “economic to prevent” by a joint employer-state panel.115 That is to say, the study’s conclusion is more accurately stated as, “It pays to prevent accidents that are economical to prevent.” This sort of circular reasoning makes the HSE results largely meaningless. Second, the study does not look at why injuries are occurring. There is no assessment of whether the injuries were caused by organizations responding to financial incentives to organize work unsafely. This omission makes it impossible to tell if the costs of the injuries were greater than the costs saved by allowing hazardous conditions to persist. That is to say, even in the cases where the accident was deemed economical to prevent, we don’t know if that is true or not!
Finally, the notion that safety pays obscures the real message of “safety pays”: improve safety only if it pays. In short, the “safety pays” narrative is simply sloganeering that obscures employers’ traditional cost-benefit approach to health and safety issues. As we know, historically this approach has led to the injury or death of hundreds of thousands of workers. Further, by suggesting safety is profitable, the “safety pays” narrative downplays the need for state regulation. Why would the state check to see if employers had acted in what is (allegedly) the employers’ own best interest?
Similar studies have been done in other countries. For example, total injury costs in Australia were estimated at $20 billion in 1995.116 As Andrew Hopkins points out, eliminating injuries does not make Australia $20 billion better off, because injuries also generate economic activity, such as treating the injured, replacing damaged equipment, and hiring new workers to replace those injured or killed.117 Further, these benefits are not evenly distributed among all stakeholders — 70 percent of the benefits accrue to workers and the state. This creates very little incentive for employers to reduce injury costs — particularly since organizing work in an injurious manner may ultimately be the most profitable choice for employers. Hopkins goes on to note that employers may not be affected by (and may even benefit from) large-scale accidents. The death of 3000 and the injury of 300,000 people following a 1984 gas leak in Bhopal, India, resulted in large short-term costs to Union Carbide; however, restructuring led to record earnings per share in 1988.118 Similar trends can be seen in other organizations.119
Interestingly, the Health and Safety Executive has changed its tune about whether safety pays. A 2003 report suggests that there is conflict between safety and other management priorities. And that safety is actually traded off against other priorities.120 Further, the study confirms that employers are motivated to achieve health and safety standards by regulatory requirements and that “government regulations are necessary in order to protect employees against excessive levels of workplace risk.”121
Creating evidence of safe workplaces
A significant issue with the safety pays narrative is that it is not likely to result in fewer accidents or injuries. This may undermine the legitimacy of particular governments and, more broadly, the capitalist social formation. Fortunately (for governments), data derived from OHS and workers’ compensation programs can provide “evidence” that things are safer. Unfortunately (for workers), the measures used obscure the actual injury rate and erroneously suggest that workplaces are increasingly safe.122
Between 2002 and 2008, Alberta used the lost-time claim (LTC) rate as its main indicator of the level of occupational health and safety (i.e., whether workplaces were “safe and healthy”). The LTC rate is the number of times (per 100 person-years worked) that a worker sustained a compensable, work-related injury that made the worker unable to work beyond the date of injury as reported to the Alberta WCB. This measure is normally expressed as a number of claims (e.g., 2.9 claims per 100 person-years worked) and the results are listed in Table 3.1.
Table 3.1 Lost-time claims per 100 person-years worked, 1991–2008.123
Year | Lost-time claims |
---|---|
1991 | 4.1 |
1992 | 3.7 |
1993 | 3.5 |
1994 | 3.5 |
1995 | 3.4 |
1996 | 3.4 |
1997 | 3.4 |
1998 | 3.26 |
1999 | 3.21 |
2000 | 3.43 |
2001 | 3.13 |
2002 | 2.93 |
2003 | 2.78 |
2004 | 2.54 |
2005 | 2.41 |
2006 | 2.35 |
2007 | 2.12 |
2008 | 1.88 |
Table 3.1 makes it look like injury rates are falling. But there are a number of deficiencies with this measure.124 For example, workplace “injury” is limited to injuries registered with the WCB that cause the worker to be unable to work beyond the date the injury occurred. This excludes the approximately 17 percent of the workforce (roughly 325,000 workers) not covered by workers’ compensation. It ignores injuries not requiring time off beyond the date of injury, as well as injuries serious enough that workers cannot do their jobs but for which the employer provides modified work. It also excludes unreported injuries. Further, it excludes LTCs that were filed but rejected. This percentage increased from 2.3 percent of time-loss claims in 1996 to 7.8 percent in 2008.125 Controlling for rejection rates reduces the degree of the LTC reduction over time.
Although the LTC rate has declined over time, the number of actual LTC injuries has remained relatively stable with 37,500 injuries in 2003 and 38,500 injuries in 2007.126 Alberta’s growing pool of workers masks this stability because the lost-time claim rate (i.e., the percentage of workers who experience lost-time claim injuries) is reported as a ratio. That said, stabilizing the number of LTCs during a period of workforce expansion might be a significant achievement. But some additional consideration is necessary.
Employers can reduce the number of LTCs by reducing the rate at which injuries occur or the severity of the injuries. Employers can also simply increase the rate at which they provide modified work, thereby causing the number or rate of lost-time claims to decrease.127 This, in turn, can yield reductions in an employer’s WCB premiums under both the Partnership in Injury Reduction program and the WCB’s experience-rating system. It also creates the appearance an employer is “accident free,” which can be an important perception to create when bidding on contracts (particularly in the construction industry) and attempting to hire workers. Since 2007, the government has attempted to discern whether employers are gaming the measure by also measuring the disabling injury rate.
Disabling injury rate and severity
A disabling injury “is a work-related injury serious enough to result in time lost from work beyond the day of injury, a modification of work duties, medical treatment beyond first aid, or an occupational disease.”128 In effect, this measure includes both lost-time injuries and instances where the employer provided modified work (and thereby avoided a lost-time claim). This measure does a better job of representing the actual rate of workplace injury, although it still excludes injuries that don’t require time off beyond the first day, injuries that are not reported, and injuries to workers outside the ambit of the workers’ compensation system.
The disabling injury rate is contrasted with the lost-time claim rate in Table 3.2. This table shows that, while the rate of lost-time claims has gone down over time, the overall rate of workplace injury (the disabling injury rate) has remained relatively stable.
Table 3.2 Disabling injury rate and lost-time claim rate, 1998–2008.129
Note: Rounding differences in data drawn from different publicly available sources result in slight discrepancies in the 2002–2004 disabling injury rates.
Year | Lost-time claim rate | Disabling injury rate
---|---|---
1998 | 3.26 | unavailable |
1999 | 3.21 | unavailable |
2000 | 3.43 | unavailable |
2001 | 3.13 | unavailable |
2002 | 2.93 | 3.8 |
2003 | 2.78 | 3.7 |
2004 | 2.54 | 3.9 |
2005 | 2.41 | 4.02 |
2006 | 2.35 | 4.14 |
2007 | 2.12 | 3.88 |
2008 | 1.88 | 3.50 |
Again, some additional consideration is warranted. While the disabling injury rate has remained stable, it may be that the seriousness of acute injuries has declined. This would explain why fewer workers require time off from work. Declining severity might also be indicated by the drop in the average duration of lost-time claims, from 50.9 days in 2003 to 40 days in 2008.130 But it may also be that employers are simply gaming their lost-time claims (i.e., offering employees modified work in lieu of time off) rather than actually reducing the incidence of serious injuries. The duration measure would also be affected by such gaming (motivated by the WCB’s experience-rating mechanism, discussed in Chapter 6) and thus does not, in itself, allow us to determine whether the seriousness of injuries has declined. Only a study of the seriousness of individual WCB claims would do so. Studies of seriousness almost all focus on lost-time claim rates, which do not control for the gaming behaviour of concern.
Another way to consider injury rates and severity is to examine work-related fatalities (the most serious kind of occupational injury). The number of workplace fatalities accepted by the WCB has increased over time, from 91 in 1996 to 165 in 2008.131 That said, there is significant annual fluctuation in this number, in part due to its small size.132 There has been consistent change in the type of fatality accepted over time, with the proportion of fatalities caused by motor vehicle accidents and workplace incidents declining and the proportion caused by occupational disease increasing.133
Measures as conceptual technologies
Returning to the point made at the beginning of this chapter, a key weakness of accident statistics is that they are really workers’ compensation claim statistics.134 Injuries not reported to and accepted by a WCB are not counted. Even setting aside accidents where the employer is not enrolled in workers’ compensation, there is still a significant potential for underreporting of potentially compensable workplace injuries.135 Underreporting has the potential to significantly distort conclusions drawn from claims data about overall occupational health and safety.
In this case, the lost-time claim (LTC) and disabling injury (DI) rates used by Alberta may significantly underestimate accident rates and numbers. Further, when experience rating and other systems create incentives for employers to reduce the number and duration of claims through claims management, real accident rates and numbers will likely diverge further from those derived from compensation data.136 In short, incentive programs designed to reduce injuries may in fact make injury data even less accurate over time.
In considering whether these measures are useful, it can be helpful to think of them as conceptual technologies. That is to say, the measures shape what issues we think about and how we think about those issues by embedding normative assumptions into the structure of the indicators.137 For example, the act of measurement delineates what activity or outcome is valued and, by operationalizing it in measurable terms, shapes how that activity or outcome is conceptualized. In this case, measuring the LTC rate indicates that reducing lost-time claims (not necessarily reducing injuries) is the desired behaviour.
The recent development of the disabling injury rate partially addresses the issue of employers gaming the LTC rate by using modified work to avoid lost-time claims. But the disabling injury rate is not used to allocate incentives, either to employers in the Partnership in Injury Reduction (PIR) program or to bureaucrats through the government’s accountability system. And even if it were, it would still create an incentive for employers to “hide” accidents by failing to report them, by disputing claims, or by managing injuries such that they do not fall within the definition of a disabling injury. Neither the LTC rate nor the DI rate addresses the potentially significant problem of simple underreporting.
By providing an easily communicated and apparently definitive measure of injury rates, the government creates the appearance that the number of injuries is decreasing. The actual number of time-loss injuries is surprisingly stable over time, but this is hidden because injuries are expressed as a rate.138 Further, attention is focused on time-loss injuries. These injuries are important because they are normally severe injuries, but the 34,000 time-loss injuries in 2007 are also a minority of injuries. Alberta had approximately 321,000 injuries in 2007 — an overall injury rate 10 times what one “sees” when one looks at time-loss injuries. Creating a false impression of workplace safety raises difficult questions, such as why the government does not measure actual changes (ideally improvements) in workplace safety.
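The "10 times" claim is straightforward to verify from the figures in the text:

```python
# Comparing all workplace injuries in Alberta (2007) to the subset
# captured as time-loss injuries. Both figures are from the text.
total_injuries = 321_000  # approximate total injuries, 2007
time_loss = 34_000        # time-loss injuries, 2007

ratio = total_injuries / time_loss
print(f"Overall injuries are ~{ratio:.1f}x the time-loss count")  # ~9.4x
```

The ratio is roughly 9.4, so time-loss injuries capture only about a tenth of the injuries workers actually experience.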
Why use inadequate measures?
It is unclear why the government continues to evaluate its programming and reward employers on the basis of a significantly deficient measure. Questions have been raised about the lost-time claim rate in particular, so ignorance of the problem is not a particularly compelling explanation.139 Examining how this measure (and the Partnerships in Injury Reduction program) advances the interests of some stakeholders and not others is insightful.
As noted in Chapter 2, the prevention and compensation of workplace injuries are issues that have galvanized workers to demand action. The resulting social programs (e.g., workers’ compensation, occupational health and safety codes and bureaucracies, joint health and safety committees, criminal code amendments) have the potential to significantly impede the profitability of businesses. The state is thus placed in the awkward position of needing to take action on workplace safety to maintain its political legitimacy but not wanting to impede the capital accumulation process.
Measuring the LTC rate and the use of soft regulatory techniques (based on incentives) allows government to appear to be addressing the interests of workers (and perhaps even partially doing so) without requiring (costly) changes in employer operations. The LTC rate suggests that workplaces are getting safer, although the government has been careful never to quite make this assertion. Instead, the LTC rate is provided and individuals are left to infer what they will from it. Given that few Albertans have the knowledge or inclination necessary to analyze what this means, the impression conveyed is that workplaces are safer.
It is also important to be mindful that those who are regulated can sometimes capture regulators. Perhaps employers, in ways that are not readily apparent, are influencing government regulators. This is obviously not the “partnership” that the government wants to convey, but there is historical precedent for industry calling the tune. For example, the introduction of fluorescent light bulbs by Sylvania in the 1940s resulted in a spate of workers dying from sarcoidosis, a disease now known to be caused by exposure to beryllium. Publicity about these deaths resulted in the United Electrical Workers Union requesting that the government further investigate the hazards associated with making fluorescent bulbs. The government consulted with Sylvania’s executives (who did not desire any further bad publicity about their product), who suggested Sylvania would not object to a statement by the government reassuring workers that there was no undue risk to them. The government promptly complied.140
CONCLUSION
The discussion presented above raises serious questions about the degree to which current efforts to prevent workplace injuries are effective. While both the internal and external responsibility systems provide workers with better protections than those they would enjoy under common law, there are significant reasons to be concerned with their operation. Chapter 4 explores why the state would implement ineffective prevention methods. Before considering the explanation presented in Chapter 4, it is useful to consider how our answers to three questions affect the way we choose to frame workplace safety.
1. What hazards do we see in the workplace?
2. How and to what degree do we think these hazards should be addressed?
3. How much state oversight do we think is required to ensure standards are met?
Answering the first question is tricky. What hazards we see in the workplace depends upon how we construct concepts such as accidents, injuries, and hazards. There is broad agreement about many hazards that cause traditional injuries such as cuts, bruises, breaks, and burns — although this does not mean there is agreement about how and to what degree to address these hazards. There is less agreement about hazards that cause many occupational diseases and non-traditional injuries (e.g., RSIs, psychological injuries).
Our answer to the first question shapes how we answer the second. As we saw in Chapter 2, prevention efforts have focused more on hazards that cause traditional injuries. This emphasis reflects the interplay of worker and employer interests. Employers have, historically, sought to limit restrictions on their right to organize work as they see fit. It is more difficult to resist addressing hazards that cause traditional injuries than it is to resist hazards that cause occupational diseases and non-traditional injuries. This latter type of injury frequently entails a long latency period and murky causality that makes it easier to question whether the injury is real and whether it is work-related. Consequently, we focus much of our attention on obvious safety hazards. It is easier to see, understand the implications of, gain agreement upon, and remedy the hazard posed by an unshored trench than it is to remedy the hazard posed by a chemical agent.
In determining how and to what degree hazards are remedied, employers retain significant discretion. Employers can approach hazard reduction in several ways. They can eliminate the hazard or, somewhat less effectively, implement engineering controls that contain the substance and thereby limit worker exposure (e.g., venting fumes before they reach workers).141 Less effective still are human resource strategies (e.g., training and job rotation) and, finally, the use of personal protective equipment (PPE).142 It is important to be mindful that the cost of each approach is typically related to its effectiveness: higher cost options result in better protection for workers. Employers can, of course, also do nothing and hope to transfer the costs of any resulting injuries to workers or to the state.
Our answer to the third question — determining the degree of state oversight required — is shaped by our sense of whether corporate behaviour ought to be regulated in the same way that we regulate the behaviour of individuals. The two extremes of this approach are embodied in the compliance and punishment schools. Compliance advocates view employers who injure workers as engaging in otherwise socially productive activities and as able to act responsibly. Consequently, persuasion and small fines are the most appropriate means of addressing “non-compliance” that results in injuries. By contrast, advocates of punishment suggest that aggressive policing and prosecution are required because, whatever socially productive activities are occurring, they do not warrant governments sanctioning the injury and death of workers via special treatment.143
Much of this debate turns on whether one believes that injuries are the result of amoral calculations designed to maximize profitability or are unintentional and unpredictable by-products of production. It is entirely possible for employers to make mistakes when determining how safe work is. And employers must often make production decisions in conditions of uncertainty. It is also possible for employers simply to act irrationally or without much thought to safety. Yet it is difficult to ignore the pressure on employers to organize production in the most profitable manner. And it is irresponsible to ignore the evidence that employers have responded to this pressure over and over by intentionally transferring production costs to workers via injury and death.
Canadian governments clearly approach regulation from a compliance perspective. Education, persuasion, and the occasional prosecution are the primary methods by which the state ensures standards are met. By taking action only when workers are seriously injured or killed (and sometimes not even then), the state appears to be adopting the suggestion of compliance theorists that aggressive policing of minor infractions is counter-productive and not cost effective.
This ignores the fact that minor infractions can have significant consequences. A missing machine guard can result in amputation or death. A slippery floor can result in a bruise, concussion, or fracture. Minor infractions such as these are often hard to see. They can sometimes be identified by the occurrence of near misses. But a near miss is rarely reported and thus does not usually result in any change in the work process. This, in turn, creates a culture where safety is not particularly important. Workers learn about the hazards and try their best to avoid them. In doing so, responsibility for preventing accidents is shifted to workers. This undermines the point of the occupational health and safety movement: hazards ought not to be a part of a worker’s daily job, and employers (and, failing that, the state) are responsible for ensuring they are remedied.