AI Regulation Is on the Way

For most of the last decade, public anxiety about digital technology has centered on the possible misuse of personal data. Some argue that restricting how data can be used will harm the economic performance of Europe and the United States relative to less restrictive countries. Others contend that heavier regulation has put smaller European enterprises at a significant disadvantage against larger US competitors such as Google and Amazon. As businesses rapidly incorporate artificial intelligence into their goods, services, processes, and decision-making, attention is shifting to how the software itself uses data.

The EU is once again setting the standard (in its 2020 white paper "On Artificial Intelligence – A European Approach to Excellence and Trust" and its 2021 proposal for an AI legislative framework), believing that regulation is critical to developing AI technologies that consumers can trust. We provide a framework, based in part on concepts used in strategic risk management, to guide executives through these obligations.

Apple’s credit card algorithm has been accused of gender discrimination, and a recent study found that risk-prediction algorithms used in health care exhibit considerable racial bias. Every error has the potential to harm millions of individuals, exposing firms to class-action lawsuits. It might be possible to program some notion of fairness into the software, requiring that all outcomes meet specific criteria. One impediment, however, is that there is no universally accepted definition of fairness.
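To make the difficulty concrete, the following minimal Python sketch (with invented loan-decision data) compares two common statistical criteria, demographic parity and equal opportunity, and shows that the same set of decisions can satisfy one while violating the other.

```python
# Toy illustration: two common fairness criteria can disagree on the
# same set of loan decisions. All data below is invented.

def rate(flags):
    """Fraction of True values in a list."""
    return sum(flags) / len(flags)

# Each record: (group, approved, would_have_repaid) -- hypothetical data.
decisions = [
    ("A", True,  True), ("A", True,  True),  ("A", False, True), ("A", False, False),
    ("B", True,  True), ("B", True,  False), ("B", False, True), ("B", False, False),
]

for group in ("A", "B"):
    rows = [d for d in decisions if d[0] == group]
    # Demographic parity compares raw approval rates across groups.
    approval_rate = rate([approved for _, approved, _ in rows])
    # Equal opportunity compares approval rates among applicants who would repay.
    repayer_rate = rate([approved for _, approved, repaid in rows if repaid])
    print(f"group {group}: approval rate {approval_rate:.2f}, "
          f"approval rate among repayers {repayer_rate:.2f}")
```

In this toy data, both groups are approved at the same rate, yet applicants who would have repaid are approved more often in one group than in the other, so which criterion a regulator adopts changes whether the algorithm counts as fair.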


Individual accountability is being eroded as AI becomes more prevalent. Organizations that rely on human decision-makers must still account for unconscious prejudice. Amazon ultimately decided not to employ AI as a recruiting tool, but rather to use it to identify problems in its existing recruiting practices.

Companies must disclose the exact nature and breadth of the decisions to which they are applying AI. In many situations, even those with major ramifications, this is a rather simple task. However, when decisions are perceived as subjective or the relevant variables fluctuate, human judgment is trusted more. An algorithm might not be equitable across all geographies and markets, and regulations aimed at reducing local or small-group biases are likely to reduce AI’s ability to provide scale advantages.

Discrimination between regions or subpopulations may be hidden by average statistics. Customizing products and services for specific markets also adds production and monitoring expenses. All of these variables add to the complexity and overhead of the organization, and companies may even exit particular markets if the costs become too high.
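Here is a short, hypothetical Python sketch of that masking effect: the regional decision counts and error counts below are invented, but they show how an acceptable-looking overall error rate can conceal a much higher rate in one region.

```python
# Hypothetical illustration: an overall error rate can look acceptable
# while one region bears most of the errors. All figures are invented.

regions = {
    # region: (number of decisions, number of erroneous decisions)
    "north": (9000, 270),   # 3% error rate
    "south": (1000, 120),   # 12% error rate
}

total = sum(n for n, _ in regions.values())
errors = sum(e for _, e in regions.values())
print(f"overall error rate: {errors / total:.1%}")   # ~3.9% -- looks fine

for name, (n, e) in regions.items():
    print(f"{name}: {e / n:.1%}")                    # 3.0% vs 12.0%
```

Because the smaller region contributes few decisions to the aggregate, its much worse error rate barely moves the headline number, which is exactly the kind of disparity per-region monitoring is meant to surface.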

When people make mistakes, there is frequently an investigation and an assignment of blame, which may result in legal consequences for the decision-maker. In its white paper and AI regulation proposal, the EU identified explainability as a vital factor in increasing trust in AI. But what does it mean to get an explanation for automated decisions, for which we often have an insufficient understanding of cause and effect?

Most individuals lack the advanced mathematics or computer science skills required to comprehend an algorithm’s formula, and the most powerful algorithms are, by definition, opaque. In machine learning, defects or biases in the data, rather than the algorithm itself, may be the root cause of a problem. To employ AI, firms will need to be able to explain how an algorithm defines commonalities between customers. Local explanations can take the form of statements that address the question, “What are the key customer characteristics that, had they been different, would have changed the output or decision?” Tight AI explainability standards may stifle companies’ ability to innovate: tailored payment terms in B2B marketplaces, insurance underwriting, and self-driving cars are just a few examples.
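As a concrete, hypothetical example of such a local explanation, the sketch below uses an invented scoring rule as a stand-in for an opaque model, perturbs one applicant’s features one at a time, and reports which changes would have flipped the decision; the features, weights, and threshold are all assumptions for illustration.

```python
# Hypothetical counterfactual-style local explanation: perturb each feature
# of one applicant and report which single change would flip the decision.
# The scoring rule, weights, and threshold are invented stand-ins.

def approve(applicant):
    """Toy scoring rule standing in for an opaque model."""
    score = (0.5 * applicant["income"] / 100_000
             + 0.5 * applicant["credit_years"] / 20
             - 0.2 * applicant["defaults"])
    return score >= 0.4

applicant = {"income": 55_000, "credit_years": 6, "defaults": 1}
baseline = approve(applicant)
print("decision:", "approve" if baseline else "decline")

# Candidate single-feature changes to test.
candidates = {
    "income": [70_000, 90_000],
    "credit_years": [10, 15],
    "defaults": [0],
}

for feature, values in candidates.items():
    for value in values:
        changed = {**applicant, feature: value}
        if approve(changed) != baseline:
            print(f"changing {feature} to {value} would have flipped the decision")
            break
```

Statements of this form are easier for customers and regulators to act on than a dump of model weights, which is why counterfactual phrasing is often proposed for local explanations.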

Companies that create AI algorithms with superior explanatory capabilities will be better positioned to gain customer and regulatory trust. If Citibank could create explainable AI for small-business loans as powerful as Ant’s, it would undoubtedly dominate the EU and US markets. The capacity to demonstrate the fairness and openness of decision-making in services is a potential differentiator for technology enterprises. When the risk and impact of an unfair or poor outcome are significant, people are less receptive to expanding AI. Certain products, such as medical devices, may endanger their users if they are modified without oversight.

Some regulators permit only “locked” algorithms to be used in such products. People may be more willing to accept AI that is intelligently supplemented by human decision-making. A recruiting manager may reach two different conclusions about the same job applicant if the quality of the competing applicants changes. The feedback that people provide to the algorithms contributes significantly to the value of the collaboration. With its Dynabench platform, Facebook has adopted an intriguing approach to monitoring and accelerating AI learning.

Companies must play an active part in developing the algorithm rulebook. As analytics are applied to decisions such as loan approvals or criminal recidivism assessments, concerns about hidden biases grow. The intrinsic opacity of the complicated programming that underpins machine learning is alarming. Unless companies address these issues early on, they risk undermining faith in AI-enabled products.

Information from Harvard Business Review
