The chequered path to AI regulation

The nuance of a computer using language, learning and generating concepts through a contextualised understanding of large swathes of data is where the complexity lies. Picture: Gerd Altmann/Pixabay.

Published Feb 27, 2024

By Tayyibah Suliman

In a world where our lives are interwoven with an intangible reality, the complexities of how we access, create and digest digital content are constantly evolving.

One wonders whether Alan Turing (regularly credited as the father of artificial intelligence) anticipated that his initial research on machine intelligence in the 1940s and 1950s would be the inception of the synthetic reality that we contend with some 70 years later.

The term “artificial intelligence”, coined in 1956, describes a complex field of study that explores how computer systems can replicate rationality, reasoning, natural language and, in essence, human intelligence.

The advent of so-called deep learning, which has attracted substantial funding in the past decade, has catapulted the development of AI tools through the exploration of neural networks.

The implications of this enhanced, constantly evolving computer-generated capability to artificially replicate rational thinking and human intelligence have left law and policymakers in an ethical and legal quandary.

We have transitioned from more traditional forms of AI, such as Apple’s Siri and Amazon’s Alexa, where the intelligence was restricted to responding to command prompts, to “generative AI”: large language models that use natural language processing to generate content (the likes of Gemini and ChatGPT).

This nuance of a computer using language, learning and generating concepts through a contextualised understanding of large swathes of data is where the complexity lies.

Consider this: a person writes an account of how to commit fraud successfully and publishes it on the internet. Albeit challenging, that person may be identified via an IP address and held responsible under the law.

An AI system, by contrast, can draw on 100 million records of how to commit the same fraud, each time contextualising the content, eliminating the possibility of being caught and avoiding any criminal accountability. Who can we hold legally accountable for empowering and enabling criminality? The aggregation of the content appears to be conducted independently of human intervention.

The reliance is solely on a non-human entity, which cannot be held criminally liable as an accessory because it lacks intent, cannot be found to be complicit and does not contemplate the implications of the crime. Yet it is a powerful and extraordinary accomplice to any individual or criminal enterprise.

This is but one example of the challenges we face from a legal perspective. Others include the transparency of the source data used by the models and whether probabilistic systems amplify biases and discrimination. The lack of accuracy, potential infringement of intellectual property rights and violations of data protection laws are also problematic.

While South Africa has no AI-specific laws, some elements of the existing legal framework will impact it, including the protection of copyright and the prevention of the unlawful processing of personal information.

Law and policymakers are grappling with how to regulate:

  • Transparency.
  • Bias and discrimination.
  • Data accuracy.
  • Intellectual property rights.
  • Data privacy.

The EU is in the final stages of adopting the Artificial Intelligence Act, which takes a risk-based approach to classifying AI tools and prohibits systems intended to be manipulative, deceitful or discriminatory. It also mandates transparency and continuous adversarial testing.

The legislation will probably become the hallmark for other forms of AI regulation across the world, but it is not clear whether it will achieve the desired outcomes: technology is a challenging sector to regulate, given its rapid developments and ever-changing landscape.

Tayyibah Suliman is a director in the corporate and commercial practice and head of the technology and communications sector at Cliffe Dekker Hofmeyr.

Tayyibah Suliman. Image: Supplied.

BUSINESS REPORT