
The European Union is imposing strict rules on artificial intelligence activities that pose high or unacceptable risks, including those deployed in the workplace.
The AI Act, which is being implemented in stages through to Aug. 2, 2026, applies to all member states without the need for local legislation to be adopted, though some states may choose to do so, according to a report by WTW.
The act classifies, defines and regulates AI activity based on four levels of risk: unacceptable, high, limited and minimal. Applications of AI that pose an unacceptable level of risk are prohibited. These include using AI systems for ‘social scoring,’ which evaluates or categorizes people based on social behaviour or personality characteristics in a way that results in detrimental or unfavourable treatment, and for biometric categorization that infers protected personal attributes such as race, union membership and sexual orientation. The act’s ban on prohibited AI applications took effect on Feb. 2, 2025.
Read: Impacts of AI, upskilling, flexible work among employers’ HR priorities for 2025: expert
High-risk AI systems will be subject to substantial regulation, with the bulk of the obligations falling on system developers. Deployers of high-risk systems, including employers, face lesser obligations, such as ensuring human oversight and using the system properly. Additional implementation guidelines for high-risk systems are to be released by Feb. 2, 2026, the report noted, adding that the act’s requirements for high-risk systems generally take effect on Aug. 2, 2026.
For high-risk AI systems used in the workplace, employers must inform workers’ representatives and the affected workers before putting the system into service.
The act defines employment-related high-risk AI systems as those used for: recruiting or selecting individuals, in particular by placing targeted job advertisements; analyzing and filtering job applications and evaluating candidates; making decisions affecting the terms of work relationships or the promotion or termination of work-related contractual relationships; allocating tasks based on individual behaviour or personal traits or characteristics; and monitoring and evaluating the performance and behaviour of persons in such relationships.
Limited-risk AI systems are subject to lighter transparency obligations (for example, firms must ensure that end users are aware they’re interacting with AI) while minimal-risk AI activity — the bulk of AI currently in use — is left largely unregulated, the report said.
Read: How AI can help employers with reskilling, career development during uncertain times