Artificial Intelligence (AI)
On this page
- What is AI?
- What are the current and potential uses of AI in the workplace?
- What legislation governs the use of AI?
- What are some potential health and safety benefits of using AI?
- What are some potential health and safety impacts of AI?
- What are some ways to address health and safety impacts?
- What are key considerations when using AI systems in the workplace?
- Why are prompts important when using AI to create content?
What is AI?
Artificial intelligence (AI) is an umbrella term that describes computing machines or software that can detect specific or unspecific patterns, make predictions and decisions, generate outputs or content, and optimize processes. AI systems or tools analyze large amounts of data to perceive, reason, learn, and act autonomously or semi-autonomously to achieve specific goals.
For example, AI is used when:
- Voice-controlled virtual assistants understand spoken commands
- Streaming platforms recommend shows we might like
- Self-driving cars recognize traffic patterns and road signs
- Spam filters block unwanted emails
- Map applications provide directions and suggestions
What are the current and potential uses of AI in the workplace?
AI is being used in many types of workplace tools. It helps automate simple and complex tasks and supports decision-making and work management.
Examples include robotics, wearable technology, equipment inspections, and predictive maintenance. AI may also be used in tools like chatbots, smart personal protective equipment, and training simulations. Some organizations use AI for hiring and work management, including tracking performance and assigning tasks.
At work, AI-enabled technology could:
- Optimize task and resource allocation
- Assist with administrative tasks
- Automate repetitive tasks
- Predict workload and production needs
- Predict maintenance needs
- Track work performance
- Monitor and adjust the environment (temperature, humidity, noise levels, air quality, etc.)
- Assign tasks and resources
- Inspect worksites (including with drones)
AI may be used to perform physical and cognitive tasks, including driving, supporting legal work, or assisting in medical decisions. Because of this, AI is being introduced in many fields: manufacturing, farming, healthcare, hospitality, transport, mining, oil and gas, customer service, and others.
What legislation governs the use of AI?
As of August 2025, no specific legislation covers AI use and development in Canada. However, the regulatory landscape is evolving: bills have been introduced, but none are yet in force.
In September 2023, the federal government announced the “Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.” This temporary, voluntary code gives organizations common standards for demonstrating responsibility when developing and using generative AI systems until formal regulation is in effect.
In 2022, the Government of Canada introduced the Digital Charter Implementation Act (Bill C-27), an Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA). AIDA would establish a framework for regulating the design, development, and deployment of AI systems.
Note that existing laws, including human rights, privacy, and workplace health and safety legislation, may apply to certain uses of AI, even if they do not mention "artificial intelligence" directly.
Examples of workplace health and safety legislation that could apply to AI use in the workplace include requirements related to:
- General duty to take reasonable measures to protect workers
- Hazard identification, risk assessments, and hazard controls
- Robotics, automation, and safeguarding
- Control of hazardous energy
- Ergonomics
- Psychological health and safety
What are some potential health and safety benefits of using AI?
AI can potentially improve workplace health and safety in several practical ways.
- Automation of hazardous tasks: AI-powered robotics can perform high-risk jobs, reducing worker exposure to dangerous environments.
- Hazard detection and incident prediction: Real-time monitoring systems can identify unsafe conditions or behaviours. AI can also analyze historical safety data to forecast future incidents (a minimal sketch of this idea follows this list).
- Monitoring worker health: Wearables and sensors can track fatigue, stress, vital signs, and environmental factors like air quality and temperature, allowing for early intervention.
- Mental health detection and support: AI-driven tools, such as chatbots, can assess communication patterns to flag potential mental health issues and offer personalized support or resources.
- Training simulations: AI supports immersive, scenario-based training environments that improve safety skills without real-world risk.
- Program drafting and communication: Generative AI can assist in writing safety policies and translating complex health and safety requirements into accessible language.
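To illustrate the incident prediction point above, here is a minimal sketch of a model trained on historical safety records. All data, feature names, and thresholds are fictional assumptions for illustration only; a real system would need far richer data, careful validation, and review by qualified professionals.

```python
# Minimal sketch: estimating incident risk from historical safety data.
# The data and features below are fictional, for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [hours_since_last_break, noise_level_db, tasks_per_hour]
historical_conditions = [
    [1.0, 70, 10],
    [4.5, 92, 25],
    [2.0, 75, 12],
    [5.0, 95, 30],
    [0.5, 68, 8],
    [3.5, 88, 22],
]
# 1 = an incident or near miss was recorded under these conditions
incident_occurred = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(historical_conditions, incident_occurred)

# Score today's conditions; a high probability should trigger a human
# review, not an automatic decision -- qualified people stay in the loop.
today = [[4.0, 90, 24]]
risk = model.predict_proba(today)[0][1]
print(f"Estimated incident risk: {risk:.0%}")
```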
Any AI-enabled technology used for health and safety needs to be appropriate for the organization and the type of work being done. The introduction of AI to improve health and safety needs to be overseen by qualified individuals who can verify that the generated outputs will have the desired results. Comprehensive risk assessments should be done before implementation and on a continuous basis to ensure any new hazards and worker impacts are identified, assessed, and controlled. See the OSH Answers document on “Introducing New Technology at the Workplace” for more information.
What are some potential health and safety impacts of AI?
The potential risks and challenges of AI in the workplace include:
Inaccuracy of AI-generated content: AI outputs can be inaccurate, outdated, or out of context. Qualified professionals should review AI-generated content before it is used to make decisions. Blindly trusting AI-generated information can create many health and safety issues, especially given Canada's complex legislative framework and the uniqueness of each workplace. Health and safety standards, guidance, and good practices from other resources may also be missed or incorrectly characterized because of limitations in the data available to the AI system.
AI malfunctions (hallucinations): AI systems can produce plausible-sounding but nonsensical or incorrect information, known as hallucinations, much as humans sometimes see figures in the clouds or faces on the moon. These errors stem from issues like overfitting, biased or inaccurate training data, and high model complexity.
Security and privacy risks: Entering sensitive or personal information into AI systems could be risky unless the system’s privacy and security measures are clearly understood and verified. Collecting confidential or sensitive data for AI-enabled systems can also raise privacy concerns among workers and lead to feelings of distrust.
Bias and discrimination: AI systems may reflect or amplify biases in the data they are trained on. For example, a recruitment tool trained on limited demographic data may screen out qualified candidates from groups underrepresented in that data, based on irrelevant characteristics such as gender.
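To make this concern concrete, here is a minimal sketch of one common check: comparing a screening tool's selection rates across groups (sometimes called the "four-fifths" rule). The groups, counts, and the 80% threshold are illustrative assumptions; appropriate fairness measures and legal requirements vary by jurisdiction.

```python
# Minimal sketch: checking a screening tool's selection rates across groups.
# The groups, counts, and 80% threshold below are illustrative only.
applicants = {"group_a": 200, "group_b": 50}   # candidates screened (fictional)
selected   = {"group_a": 80,  "group_b": 10}   # candidates passed by the tool

rates = {group: selected[group] / applicants[group] for group in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} [{flag}]")
```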
Psychological impacts and work intensification: AI management tools that assign tasks, monitor performance, and set schedules could direct workers to increase the pace of work. This pace may lead to reduced breaks, stress, physical strain, increased risk of incidents, and burnout. Where performance results are visible to peers, AI worker management can create an environment of unhealthy competition and increased isolation. Workers may also feel greater pressure to keep up with productive AI-enabled systems or machinery.
If organizations rely too heavily on AI for performance management, workers may experience mental health impacts related to losing control over how they perform their tasks. As AI enables the automation of more tasks, workers could also face job insecurity. Monitoring technologies can create feelings of being constantly watched, further affecting mental health.
Cognitive overload and underload: With increased automation, some jobs may become more about providing oversight rather than performing the tasks. These operators will likely be required to supervise several work processes, increasing mental demands. Others may face repetitive, simplified tasks that could reduce engagement. Both situations can increase the risk of errors and reduce situational awareness.
Ergonomic hazards: Process automation could introduce a new work pace or make work more repetitive and less diversified, increasing the potential for musculoskeletal injuries and other ergonomic issues.
Skill degradation: Over-reliance on AI to detect hazards or manage workflows can reduce workers’ ability to identify issues independently. This over-reliance may lead to deskilling and greater health and safety risks during system failures.
Lack of transparency: If workers do not understand how an AI system makes decisions, they may find it challenging to:
- work with it,
- respond to problems, or
- report concerns.
A lack of transparency about how AI reaches conclusions makes the reliability and safety of AI-enabled systems difficult to evaluate.
Ethical concerns and decision-making: AI systems may prioritize efficiency over human well-being. When safety and productivity goals conflict, clear ethical frameworks are needed to ensure decisions that protect everyone at work.
Research gaps: AI's long-term health and safety impacts in the workplace are not fully known. Ongoing evaluation is necessary to ensure AI systems support worker health, safety, and well-being.
What are some ways to address health and safety impacts?
AI’s impact on workplace health and safety is not yet fully understood, so good practices and specific guidance on minimizing impacts on workers may not be available yet. Organizations should therefore continuously assess these technologies and make changes when needed to protect everyone at work. It is important to find and address hazards before, during, and after introducing AI, and to test systems regularly to spot problems early.
Leverage health and safety fundamentals
Identify hazards: Look at how the AI system could create new hazards or make existing ones worse.
Assess the risks: Decide how likely those hazards are to cause harm and how severe the harm could be.
Put controls in place: Take steps to eliminate the hazards or reduce the risks, considering the hierarchy of controls. Controls could include prohibiting or limiting AI use for certain tasks, changes to the work process, training, and other measures.
Keep evaluating: Evaluate the control measures in place, such as by measuring process and outcome metrics, to ensure they are working as intended and are keeping everyone healthy and safe.
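As a worked example of the assessment step, here is a minimal sketch that scores hazards with a simple likelihood times severity matrix. The scales, example hazards, and thresholds are illustrative assumptions; use the method defined in your own health and safety program and jurisdiction.

```python
# Minimal sketch: a simple likelihood x severity risk score for hazards
# identified around an AI system. Scales, hazards, and thresholds are
# illustrative; follow your own health and safety program's method.

# Likelihood and severity on 1-5 scales (1 = lowest, 5 = highest).
hazards = [
    ("Worker distraction from frequent monitoring alerts", 3, 2),
    ("Over-reliance on automated hazard detection",        4, 4),
    ("Repetitive strain from a faster work pace",          3, 3),
]

def risk_level(score):
    """Map a 1-25 score to an illustrative action band."""
    if score >= 15:
        return "high - control before proceeding"
    if score >= 8:
        return "medium - plan controls"
    return "low - monitor"

# Print hazards from highest to lowest risk score.
for name, likelihood, severity in sorted(hazards, key=lambda h: -(h[1] * h[2])):
    score = likelihood * severity
    print(f"{score:>2}  {risk_level(score):<32}  {name}")
```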
Think about worker impacts
Involve workers early: Ask for input before choosing or deploying AI tools. Bring workers into the design process and harness their skills and experience to shape how AI is used.
Use worker-centred design: Similar to ergonomics, match the job to the worker. Design digital work processes so the technology assists workers, rather than fitting workers around the technology.
Be transparent: Explain what data you’re collecting, how it’s being used, and why.
Provide training: Help workers understand how the AI technology functions and how it affects their jobs.
Have clear goals: Focus on making work better, not just faster. Make sure work remains engaging and interesting to prevent deskilling and cognitive underload.
Get ahead of potential bias
Before implementing an AI tool, ask:
- Where did its training data come from?
- What’s in the training data?
- Who selected and prepared the training data, and how?
Your organization may need guidance from health and safety professionals, legal counsel, privacy and security experts, and other stakeholders.
Handle data carefully
Before allowing AI systems to collect data, ask:
- Is the data collection fair and necessary?
- Is worker privacy protected?
- How secure is the data?
- Who has access to the data? Empower workers by ensuring they can access the same data the system uses.
Put clear data policies in place. Be honest about what you’re collecting and why.
Watch for cybersecurity risks
AI systems can be targets for cyber attacks or misused to launch them. Get help from cybersecurity professionals to identify and fix weaknesses in your systems.
What are key considerations when using AI systems in the workplace?
The process for identifying hazards, assessing risk, and controlling hazards related to AI is similar to how other health and safety hazards are managed. Therefore, it is crucial to have a comprehensive health and safety policy and programs to promote and protect worker health and safety. The program should outline the process for identifying hazards, assessing risk, controlling hazards, and evaluating control measures.
If your organization plans to use AI tools, or already does, leadership should develop an AI policy with objectives that align with the organization’s strategic direction. This policy should show a clear commitment to protecting worker health and safety and include a plan for continuous risk assessments and improvements. This policy can also set out which AI tools are approved, how they can be used, how use will be monitored and reviewed, and how personal information, sensitive data, and intellectual property will be protected. Resources such as ISO/IEC Standard 42001:2023 - Artificial Intelligence Management System and the Government of Canada resources on the responsible use of artificial intelligence may be useful when developing this policy.
Involving workers in risk assessments can help build trust and increase technology acceptance. For example, holding drop-in sessions where workers can learn about the technology, give feedback, and ask questions can foster curiosity and reduce fear. Since AI is a tool, workers must be trained to use it appropriately and safely. Because AI is evolving quickly, this training must be ongoing so workers stay current on its capabilities and how to work safely. For example, when using generative AI tools like ChatGPT, outputs depend on how questions are asked. Workers need to understand how to create prompts that generate accurate and useful information, while also verifying that the sources are credible and the content is correct.
Why are prompts important when using AI to create content?
When using generative AI models, such as large language models, specific instructions, called prompts, are required to create text, images, code, and other content. The outputs produced by generative AI depend on the quality of the instructions provided in the prompt. The process of developing effective instructions to guide AI systems toward desired content is known as prompt engineering. The details provided in the prompt determine the quality, accuracy, and relevance of the AI-generated content. Being detailed means using specific, clear prompts and explaining complex technical terms and concepts in plain language. It is good practice to always ask for references so the content can be verified against the source of the information. The content should be reviewed carefully by qualified individuals to ensure the information is correct, appropriate, and aligned with good practices and legislation. It is also important not to enter personal information, sensitive data, or intellectual property unless there is complete confidence in the security and privacy of the information, and it is allowed by your organization's policies.
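As an illustration, here is a minimal sketch of assembling a structured prompt in Python. The structure, wording, and example task are assumptions for illustration only; adapt them to the tool being used, and have qualified people review whatever the AI produces.

```python
# Minimal sketch: building a specific, detailed prompt for a generative AI
# tool. The structure and example values are illustrative only.

def build_prompt(role, task, context, output_format):
    """Combine the elements of a clear prompt into one instruction."""
    return (
        f"Act as {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Format: {output_format}\n"
        "Use plain language and explain any technical terms. "
        "Cite the sources you rely on so the content can be verified."
    )

prompt = build_prompt(
    role="a health and safety advisor",
    task="draft a short toolbox talk on ladder safety",
    context="a small residential construction crew in Canada",
    output_format="five bullet points followed by two discussion questions",
)
print(prompt)
```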
- Fact sheet first published: 2025-12-31
- Fact sheet last revised: 2025-12-31