Image source: investing.com

Addressing Bias in Artificial Intelligence: The Role of Blockchain in Ensuring Fairness

April 18, 2023 | World | Blockchain | Emerging Markets

Decentralized solutions, according to experts, can help secure the impartiality and integrity of the data fed to AI systems, but clear limitations remain.

Artificial intelligence (AI)-based projects are quickly becoming part of the current technological paradigm, assisting decision-making across a range of industries, from banking to healthcare. Despite substantial progress, AI systems still have drawbacks. Data bias, the presence of systematic flaws in a data set that produce skewed results when training machine learning models, is one of the most important problems facing AI today.

The quality of incoming data is crucial, since AI systems rely heavily on data and any biased information can introduce prejudice into the system. This can further entrench social injustice and discrimination, so it is crucial to guarantee the objectivity and integrity of data.

For instance, a recent study investigates the potential for AI-generated photos to misrepresent and homogenize the cultural context of facial expressions, particularly when they are produced from data sets heavily skewed toward American sources. It gives numerous examples of soldiers or fighters from various historical eras, all smiling in a distinctly American manner.

Such pervasive bias also risks erasing important cultural histories and meanings, diminishing the richness of human experience and the diversity and nuance of expression around the world. Diverse and representative data sets must be incorporated into AI training processes to reduce this bias.

Biased data in AI systems stems from a number of sources. First, the sampling procedure itself may be flawed, producing samples that are not representative of the intended population and leading to the under- or overrepresentation of particular groups. Second, historical prejudices can contaminate training data, reinforcing discriminatory attitudes already present in society; AI systems trained on biased historical data may, for instance, continue to reproduce racial or gender prejudice.
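To make the sampling point concrete, here is a minimal sketch (not from the article; the group labels and reference shares are hypothetical) that compares how often each group appears in a training sample with its expected share of the target population, flagging under- or overrepresentation:

```python
from collections import Counter

def representation_gap(sample_groups, population_shares):
    """Compare group shares in a training sample against reference
    population shares. Negative gap => group is underrepresented."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - expected
    return gaps

# Hypothetical example: group B appears far less often than expected.
sample = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
reference = {"A": 0.6, "B": 0.3, "C": 0.1}
print(representation_gap(sample, reference))
# roughly {'A': +0.20, 'B': -0.15, 'C': -0.05}
```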

Finally, because data labelers may hold unconscious preconceptions, human biases can be introduced unintentionally during the labeling process. The selection of features or variables used in AI models can also lead to unfair treatment, since some attributes may be more closely correlated with particular groups than others. Researchers and practitioners must be aware of these potential sources of bias and actively work to eliminate them.

Is unbiased AI made possible by blockchain?
Blockchain technology is not a panacea for eradicating bias, but it can help make some aspects of AI systems more impartial. AI systems such as machine learning models develop their tendencies from the data they are trained on; if biases are present in the training data, the system is likely to pick them up and reproduce them in its outputs.

Despite this, blockchain technology can address AI bias in a variety of novel ways. For instance, it can help ensure data transparency and provenance. Decentralized methods make the information gathering and aggregation process transparent by tracking the source of the data used to train AI systems, which can help stakeholders locate and address potential sources of bias.
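As a rough illustration of the provenance idea (an assumption for illustration only, not any particular blockchain's API), the toy sketch below records a hash of each training data batch together with its source in an append-only, hash-linked log, so that later tampering with either the data or its recorded chain of custody becomes detectable:

```python
import hashlib
import json
import time

def record_provenance(ledger, dataset_bytes, source, collector):
    """Append a provenance entry for a training data batch to a toy,
    append-only ledger. Each entry commits to the data's hash and to
    the previous entry, so tampering is detectable on audit."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "data_hash": hashlib.sha256(dataset_bytes).hexdigest(),
        "source": source,          # where the data came from
        "collector": collector,    # who gathered / aggregated it
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

ledger = []
record_provenance(ledger, b"faces_batch_001 ...", "survey_us", "lab_a")
record_provenance(ledger, b"faces_batch_002 ...", "survey_kr", "lab_b")
# Auditors can recompute the hashes to verify that neither the data
# nor the recorded chain of custody has been altered.
```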

Similarly, blockchains can help multiple parties share data securely and efficiently, enabling the creation of more varied and representative data sets.

Additionally, by decentralizing the training process, blockchain can allow various parties to contribute their own knowledge and experience, lessening the impact of any single biased viewpoint.
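One hedged sketch of what such decentralized contributions might look like is simple federated averaging, shown below with made-up party weights; real systems would add authentication, incentives, and on-chain coordination, none of which are claimed here:

```python
import numpy as np

def federated_average(party_weights):
    """Average model parameters contributed by several parties.
    Each party trains on its own (possibly differently biased) data,
    so no single party's skew dominates the shared model."""
    return [np.mean(layer, axis=0) for layer in zip(*party_weights)]

# Hypothetical: three parties each hold a tiny two-layer model.
party_a = [np.array([0.1, 0.2]), np.array([0.5])]
party_b = [np.array([0.3, 0.0]), np.array([0.7])]
party_c = [np.array([0.2, 0.4]), np.array([0.6])]
global_model = federated_average([party_a, party_b, party_c])
print(global_model)  # approximately [array([0.2, 0.2]), array([0.6])]
```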

Data collection, model training, and evaluation are three phases of AI development that must be carefully considered in order to maintain objectivity. Furthermore, it is critical to continuously review and update AI systems to address any potential biases that may develop over time.
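As one example of what the evaluation phase can include, the small sketch below computes a demographic parity gap, the largest difference in positive-prediction rates between groups; the loan-approval framing and group labels are hypothetical, and this is only one of many possible bias checks to re-run as systems are reviewed over time:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two
    groups; a simple check to re-run at every evaluation or re-audit."""
    rates = {}
    for pred, group in zip(predictions, groups):
        totals = rates.setdefault(group, [0, 0])  # [positives, count]
        totals[0] += pred
        totals[1] += 1
    shares = {g: pos / n for g, (pos, n) in rates.items()}
    return max(shares.values()) - min(shares.values()), shares

# Hypothetical loan-approval outputs for two groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, per_group = demographic_parity_gap(preds, groups)
print(gap, per_group)  # 0.5 {'A': 0.75, 'B': 0.25}
```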

Cointelegraph contacted Ben Goertzel, founder and CEO of SingularityNET, a project fusing artificial intelligence and blockchain, to learn more about whether blockchain technology can make AI systems totally impartial.

According to him, the idea of “complete objectivity” is not particularly useful when discussing the analysis of finite data sets by finite intelligence systems.

“What blockchain and Web3 systems can provide is transparency so that people can plainly see what bias an AI system has, not total impartiality or an absence of prejudice. Additionally, it offers open configurability, which enables a user community to modify an AI model to represent the biases they want while also being honest about doing so,” according to the expert.

He added that “bias” is not a bad word in AI research; it simply describes the direction in which an AI system looks for particular patterns in data. Goertzel did, however, acknowledge that people should be wary of opaque biases imposed by centralized institutions on users who are influenced and steered by them without being aware of it. He stated:

“The majority of widely used AI algorithms, including ChatGPT, have insufficient transparency and disclosure of their inherent biases. Decentralized participatory networks and open models—not just open-source but also open-weight matrices that are trained, adaptable models with open content—are thus some of what are required to appropriately manage the AI-bias issue.”

Echoing the difficulty of defining neutrality, Dan Peterson, chief operating officer of Tenet, an AI-focused blockchain network, told Cointelegraph that some AI metrics cannot be unbiased, since there is no measurable boundary at which a data set loses neutrality. In his view, the question ultimately comes down to where the engineer draws the line, and that line can differ from person to person.

“Historically, it has been challenging to get over the idea that anything can truly be neutral. Although it may be difficult to determine the exact truth in any data set fed into generative AI systems, what we can do is make use of the tools made more easily accessible to us by the use of blockchain and Web3 technologies,” he stated.

Looking ahead to a world powered by AI
The scalability of blockchain technology remains a major challenge. As users and transactions grow, blockchain-based solutions might not be able to handle the enormous volumes of data produced and processed by AI systems. Adopting and integrating blockchain-based solutions into existing AI systems also presents formidable difficulties.

First, there is a shortage of expertise spanning both blockchain and AI, which could hinder the development and deployment of solutions that successfully combine the two paradigms. Second, it may be difficult, at least initially, to persuade stakeholders of the advantages of blockchain platforms, particularly when it comes to ensuring unbiased data pipelines for AI.

Despite these difficulties, blockchain technology has enormous potential for leveling the playing field in the rapidly evolving AI landscape. The decentralization, transparency, and immutability of blockchain can be used to reduce bias in data collection, management, and labeling, ultimately resulting in more equitable AI systems. It will be interesting to see how things develop from here.
