Business Reporter – Technology – The arguments for (and against) AI regulation

ISMS.online’s Luke Dash asks how, and indeed whether, governments should regulate AI

The debate over whether AI is friend or foe continues to ebb and flow.

I’m not talking about cinematic depictions of robots taking over the world. In concrete terms, fear has largely focused on the prospect of AI causing significant disruptions to the labor market that could threaten and displace people’s livelihoods.

More recently, however, AI concerns beyond potential job losses have begun to gain momentum. With generative AI now entering the mainstream, questions have also emerged about how algorithms are developed, trained and managed.

Notably, several ethical concerns have been raised regarding fairness, privacy and accountability that require careful consideration:

  • Bias and fairness: First, there is a risk that AI systems trained on historical data will inherit and amplify existing biases, leading to unfair outcomes. Amazon’s AI-powered recruiting tool, which was abandoned due to gender bias, is a prime example, but the fear is that the impact could extend to areas such as criminal justice and lending.
  • Privacy: AI’s reliance on large datasets also poses challenges from a privacy perspective. From unauthorised data collection and the inference of sensitive details to re-identification risks in anonymised data, AI can raise a number of concerns around personal and sensitive information.
  • Copyright: Copyright is also an important concern given that AI models are often trained using large datasets. If these algorithms are asked to generate new content, they could inadvertently incorporate copyrighted material, leading to possible legal liabilities for companies.
  • Legal responsibilities: Similarly, it can be difficult to know who should be liable in the event that an AI system leads to or causes harm, with such systems creating a gray area in terms of legal responsibilities.

Different approaches: EU versus US

Naturally, these concerns are fueling significant public debate about balancing the benefits and risks of AI, with governments now stepping in to try to find ways to better manage potential challenges.

Several regulatory frameworks have already begun to emerge, with the US and EU taking markedly different approaches.

In the EU we have the EU AI Act, currently the most comprehensive and advanced piece of AI legislation. Its focus is largely on protecting individual rights and fairness, taking an approach that aims to establish key safeguards while making AI applications safer and more trustworthy.

The United States, in contrast, appears to be taking a more flexible and decentralised approach to AI regulation. While proposed frontier AI legislation aims to establish consistent standards for safety, security and transparency across the United States, it also leaves room for adaptation at the state level where necessary.

A prime example is California’s proposed SB 1047, which would require large AI companies to rigorously test their systems before public release, make their safety protocols publicly available, and give the state attorney general the power to sue developers for any significant harm caused by their systems.

There is no easy answer as to which of these approaches is better. However, it is already clear that their motivations and goals differ.

One of the main advantages of the EU approach is that its law provides a unified framework, offering clear guidelines for companies operating in member countries and setting high standards for system security and consumer protection.

A focus on rights and fairness can help build trust in AI systems across Europe. However, the counterargument is that the strict requirements of these regulations could potentially deter companies from pursuing AI development in the region, with compliance being too complicated or burdensome.

In the eyes of some, this impact is already being felt: Apple and Meta have declined to sign the EU’s AI Pact, and the former announced in June 2024 that it would delay the release of three new AI features in Europe, citing “regulatory uncertainties”.

Here, the US may have the upper hand as a market taking a more flexible approach to AI legislation, leaving room for adaptation at the state level. Even so, this approach attracts its own criticism.

First, by prioritising innovation, the US market leaves privacy concerns around AI systems comparatively unaddressed. Second, the prospect of a patchwork of disparate and conflicting state and federal standards can add considerable complexity for businesses operating in multiple states.

ISO 42001: a coherent path forward

It is clear that the main challenge for governments in regulating AI is striking the right balance between prioritising public safety and addressing growing ethical concerns around AI, without hindering continued technological progress or making compliance difficult for companies.

This will not be an easy task for policymakers, and it is likely that we will continue to see adaptations and iterations of key frameworks evolve regularly. However, I believe there is an opportunity for businesses and regulators to leverage ready-made and recognized international standards.

Enter ISO 42001, a standard that provides key guidelines for establishing, implementing, maintaining and continuously improving an artificial intelligence management system (AIMS).

The framework is fundamentally based on the principle that responsible AI does not have to be a barrier to innovation or success. Instead, it holds that by making ethical considerations a priority in AI development, companies can actively address growing AI concerns, build greater trust with consumers and proactively mitigate risks.

For businesses, it offers several key benefits. As an established, globally recognised standard for AI risk management that emphasises security, transparency and accountability, it gives businesses a basis for aligning with a range of regulations, whether international frameworks or state-level rules in the United States.

For regulators, ISO 42001 can also be beneficial. By aligning with the framework’s fundamental principles, they can facilitate compliance while potentially reducing the complexity of meeting the rules of different states or countries, making it easier for businesses to expand into new territories.

Of course, regulators will likely continue to develop their own frameworks in ways that balance the unique needs of their companies, businesses, and local economies. However, adopting consistent and recognized standards such as ISO 42001 as central guiding frameworks can be an effective way to help businesses navigate this complex compliance landscape in a safe and competitive manner.


Luke Dash is CEO of ISMS.online

Main image courtesy of iStockPhoto.com and Sarah5