Canada Unveils AI Code of Conduct, Balancing Safety and Innovation


As the rapid advance of artificial intelligence (AI) raises alarm in many quarters, the federal government unveiled a voluntary code of conduct on Wednesday, designed to mitigate the risks of the rapid proliferation and evolution of generative AI.

Innovation Minister François-Philippe Champagne announced the safeguards at Montreal’s All In artificial intelligence conference, saying they would “build safety and trust as the technology spreads”. The code has already been signed by executives of a dozen Canadian companies, including BlackBerry, OpenText, and Telus.



The code outlines precautionary measures businesses can adopt when deploying advanced generative AI, the technology underlying chatbots such as OpenAI’s ChatGPT, which can produce outputs ranging from academic essays to psychoanalytical insights. The suggested measures include screening datasets for potential biases and assessing the risk that a given system could be misused. They are built around six core principles, among them fairness, transparency, and human oversight.

Amid both excitement and unease over the seemingly unfettered pace of AI development, the federal government sought to balance optimism with caution. “We have witnessed technology advancing at what I would say is lightning speed,” Champagne said, adding that public fear should be channelled into opportunity.

A broader framework for regulating machine learning was set out in a bill the federal government tabled in June, which deliberately leaves the specifics to be clarified later. Ottawa has confirmed that Bill C-27 will not come into force before 2025.

Internationally renowned AI expert Yoshua Bengio welcomed the voluntary measures as a feasible first step, but expressed concern over the slow pace of progress and lingering public anxiety about the technology. “Fear might be a starting point, but we need to act…There is basically zero AI regulation that’s actually in effect right now,” the Université de Montréal professor said. The Turing Award laureate also urged governments to track key components of the AI industry, such as graphics processing units, the hardware that powers complex AI models.

To ensure the safe development and deployment of deep learning systems, Bengio advocated establishing national and international oversight bodies, backed by increased government funding. These bodies would be responsible for countering security threats from states such as Russia and China, which are seen as potential perpetrators of AI-enabled cyberattacks.

In May, Bengio called for urgent regulation of specific threats, such as AI-driven bots “counterfeiting humans”. Echoing a sentiment widely shared across society, he also emphasized the need for transparency in decisions about how AI technology is controlled and applied.

The Liberals’ Artificial Intelligence and Data Act, despite criticism for its ambiguity, lays out a plan for responsible AI development designed to adapt to the technology’s constant evolution. The legislation, part of a larger bill on consumer privacy and data protection, takes a hard stance against “reckless and malicious” AI use, and designates an AI commissioner and the industry minister to oversee compliance and impose financial penalties. However, it leaves the specifics of adherence to human rights laws, and the definition of terms such as “high-impact AI systems”, to be developed later.

Although the Act will initially focus on education and voluntary business compliance, it firmly signals Canada’s commitment to pragmatic, careful progress in artificial intelligence.