SHOULD ‘AI’ BE REGULATED?

CONTRIBUTED BY: MR. NIRVAN BISWAS - CHIEF TECHNOLOGY & DIGITAL PLATFORMS OFFICER, NBHC

This question is being hotly debated in the corridors of every government building and corporate office, directly or indirectly, as well as at industry forums, expert gatherings, and beyond. To answer it bluntly: yes, I do agree that AI should be regulated, whether it is generative AI, predictive AI, or any other form. AI has the potential to transform many aspects of society, from healthcare and education to transportation and finance, but it also poses significant risks, such as bias, privacy violations, and job displacement.

Effective governance of AI requires collaboration between governments, industry, academia, and civil society, and should be grounded in principles of transparency, accountability, fairness, and human rights. Governments should establish regulatory frameworks that balance the need to promote innovation against the need to protect individuals and society.

Additionally, governments can play a key role in promoting AI research and development, and in fostering collaboration between industry, academia, and civil society, so that the benefits of AI are shared widely and equitably.

But here I would like to break this question into two parts. First: should AI be governed or regulated? The answer is a clear yes. Second: which part of AI should be governed? This is where the thought process gets interesting. AI work has two components. The first is the research or preparation stage: modelling the use case, which means training models on data sets. The second is applying the resulting model to build a product. My point is that there should be no boundary conditions on training, because AI needs robust models trained on large data sets to arrive at any meaningful conclusions. But the moment you think of productising a model is the moment you need regulations to govern it. Here the government needs to set up regulators who treat the product much like a clinical trial in pharma, and then decide whether or not to approve it for production.
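The two-stage split described above, unrestricted experimentation followed by a regulatory gate at productisation, can be sketched in code. This is a purely illustrative sketch: the names (`train`, `regulator_review`, `deploy`) and the `Model` type are hypothetical, not any real framework or regulatory process.

```python
from dataclasses import dataclass

@dataclass
class Model:
    """A trained AI model awaiting productisation (illustrative only)."""
    name: str
    approved: bool = False

def train(name: str) -> Model:
    # Research stage: no regulatory gate, experimentation is unrestricted.
    return Model(name)

def regulator_review(model: Model, passes_trial: bool) -> Model:
    # Productisation stage: like a clinical trial in pharma,
    # a regulator decides whether the product may go to market.
    model.approved = passes_trial
    return model

def deploy(model: Model) -> str:
    # The gate: deployment is refused unless the regulator has cleared it.
    if not model.approved:
        raise PermissionError(f"{model.name}: not cleared by the regulator")
    return f"{model.name}: deployed"

m = train("crop-yield-predictor")           # free to experiment
m = regulator_review(m, passes_trial=True)  # gate applies at productisation
print(deploy(m))                            # prints "crop-yield-predictor: deployed"
```

The design point is simply that the gate sits between research and deployment, not inside the training loop, which mirrors the clinical-trial analogy: trials are unrestricted within ethical bounds, but market approval is a distinct, regulated step.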

What is important is that we continuously innovate, coming up with varied algorithms and data sets, with no boundary conditions and no regulations at this stage, so that corporations have ample material from which to build products. It is at that point, when a product is to be built, that we need regulations telling corporations whether or not they may proceed.

Governments need to build frameworks that regulators can govern with. This is indeed required so that no bad state actor can misuse AI to cause harm to society in any manner.

We all know that sustainable development rests on three pillars: economic development, social equity, and environmental protection. All three could be endangered if we do not bring regulation into force. So do bring it, but bring it at the right juncture of the AI value chain.