A new draft outlines best practices for companies working with artificial intelligence.
President Biden is taking new steps to ensure that the rapid development of artificial intelligence technology is effectively managed. The Biden administration recently released a blueprint for an “AI Bill of Rights,” consisting of five recommendations intended to ensure that artificial intelligence systems are safe, fair, optional, and, above all, ethical.

Unlike the Bill of Rights itself, this document is not legally binding. Rather, the plan exists to formalize best practices from the major players in the A.I. and machine learning space. Those practices include ensuring that A.I. is not biased by bad data, giving notice when automation is in use, and offering human alternatives to automated services, according to Venkat Rangapuram, CEO of data solutions provider Pactera Edge.

Here are the five “rights” outlined in the White House blueprint, and how businesses can apply them when developing and deploying automated systems.

1. Ensuring the safety and effectiveness of automated systems.

According to the blueprint, user safety should always be paramount in the design of A.I. systems. The administration argues that automated systems should be developed with public input, allowing consultation with a diverse group of people capable of identifying potential risks and problems, and that systems should undergo rigorous pre-deployment testing and ongoing monitoring to demonstrate that they are safe.

One example of a harmful automated system mentioned in the document is Amazon’s use of A.I.-powered cameras in its delivery vans to evaluate drivers’ safety habits. The system improperly penalized drivers when other vehicles cut them off or when other events beyond their control occurred on the road. As a result, some drivers became ineligible for bonuses.
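As a rough illustration of the pre-deployment screening and live monitoring the blueprint calls for, here is a minimal Python sketch. The `predict` stand-in, the accuracy bar, and the drift tolerance are all hypothetical choices for illustration, not details from the blueprint.

```python
# Hypothetical sketch: gate deployment on held-out accuracy, then
# watch live decision rates for drift from the tested baseline.
from statistics import mean

def predict(features):
    # Stand-in for a real automated system's decision function.
    return sum(features) > 1.0

def pre_deployment_check(test_cases, min_accuracy=0.95):
    """Block deployment unless accuracy on a held-out suite clears the bar."""
    accuracy = mean(predict(f) == label for f, label in test_cases)
    if accuracy < min_accuracy:
        raise RuntimeError(f"Deployment blocked: accuracy {accuracy:.0%}")
    return accuracy

class DeploymentMonitor:
    """Compare live positive-decision rates against the tested baseline."""
    def __init__(self, baseline_positive_rate, tolerance=0.10):
        self.baseline = baseline_positive_rate
        self.tolerance = tolerance
        self.decisions = []

    def record(self, decision):
        self.decisions.append(bool(decision))

    def drifted(self):
        if len(self.decisions) < 100:
            return False  # too few live decisions to judge
        return abs(mean(self.decisions) - self.baseline) > self.tolerance

suite = [((0.9, 0.4), True), ((0.1, 0.2), False)]
print(pre_deployment_check(suite, min_accuracy=0.9))  # 1.0
```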

2. Protecting users from algorithmic discrimination.
The second right addresses the tendency of automated systems to produce inequitable outcomes by relying on data that does not account for existing systemic biases in American society, such as facial recognition programs that misidentify people of color more often than white people, or automated hiring tools that reject applications from women.

To combat this, the blueprint points to the Algorithmic Bias Safeguards for the Workforce, a best-practices document developed by a consortium of industry leaders including IBM, Meta, and Deloitte. The document lays out steps for educating workers about algorithmic bias, along with guidance for implementing workplace safeguards.
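One concrete safeguard of this kind is a routine selection-rate audit, sketched below in Python using the common four-fifths rule. The threshold and the sample data are illustrative assumptions, not taken from the blueprint or the consortium’s document.

```python
# Hypothetical sketch: flag demographic groups whose selection rate
# falls below four-fifths of the best-performing group's rate.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Apply the four-fifths rule against the highest group rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate < threshold * best]

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
print(selection_rates(decisions))         # A: ~0.67, B: ~0.33
print(disparate_impact_flags(decisions))  # ['B']
```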

3. Protecting users from abusive data practices.
Under the third right, everyone should have agency over how their data is used. The proposal suggests that designers and developers of automated systems should seek users’ permission and respect their decisions regarding the collection, use, access, transfer, and deletion of personal data. The draft also says that any request for consent should be brief and written in plain language.

Rangapuram says that designing automated systems that continually learn without overstepping users’ privacy is a difficult balance to strike, but adds that allowing individual users to determine their own level of comfort and privacy is a good first step.
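To make that first step concrete, here is a minimal, hypothetical sketch of per-user consent tracking for the operations the draft names. The `ConsentLedger` class and its in-memory storage are illustrative assumptions.

```python
# Hypothetical sketch: record exactly what each user agreed to, and
# default to "no consent" for anything they never answered.
OPERATIONS = {"collect", "use", "access", "transfer", "delete"}

class ConsentLedger:
    def __init__(self):
        self._grants = {}  # user_id -> set of permitted operations

    def set_consent(self, user_id, operations):
        unknown = set(operations) - OPERATIONS
        if unknown:
            raise ValueError(f"Unknown operations: {unknown}")
        self._grants[user_id] = set(operations)

    def allows(self, user_id, operation):
        return operation in self._grants.get(user_id, set())

ledger = ConsentLedger()
ledger.set_consent("user-42", {"collect", "use"})
print(ledger.allows("user-42", "use"))       # True
print(ledger.allows("user-42", "transfer"))  # False: never granted
```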

4. Providing users with notice and explanations.
Consumers should always know when an automated system is being used, and they should be given enough information to understand how and why it contributes to the outcomes that affect them, according to the fourth right.

Overall, Rangapuram says that the public’s distrust of corporate data collection can hamper the development of new technology, so explaining how and why data is used has never been more important. By keeping people informed about their data, businesses can build trust with their users, which can make those users more willing to share their information.
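As a rough sketch of what notice and explanation can look like in practice, the hypothetical code below attaches both to every automated decision. The `Decision` dataclass and the toy scoring rule are assumptions for illustration only.

```python
# Hypothetical sketch: every automated decision carries notice that
# automation was used and a plain-language reason for the outcome.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    automated: bool   # the notice: an automated system was used
    explanation: str  # plain-language reason for the outcome

def score_application(income, debt):
    approved = income > 2 * debt  # toy rule standing in for a real model
    return Decision(
        outcome="approved" if approved else "denied",
        automated=True,
        explanation=(
            f"An automated system compared income ({income}) "
            f"to twice the reported debt ({2 * debt})."
        ),
    )

print(score_application(income=50_000, debt=30_000))
```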

5. Offering human alternatives and fallback options.
According to the blueprint, people should be able to opt out of automated systems in favor of a human alternative where appropriate. At the same time, automated systems should have human-staffed fallback plans in case of technical problems. For example, the blueprint highlights customer service systems that use chatbots to respond to common customer complaints but redirect users to human agents for more complex problems.
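Here is a minimal, hypothetical sketch of that pattern: the routing logic below answers scripted questions automatically but hands anything else, or any explicit request, to a person. The canned answers and escalation rules are assumptions, not taken from any real system.

```python
# Hypothetical sketch: a chatbot that honors opt-outs and escalates
# anything it cannot match to a human agent.
CANNED_ANSWERS = {
    "reset password": "Use the 'Forgot password' link on the sign-in page.",
    "store hours": "We are open 9am-5pm, Monday through Friday.",
}

def handle_message(message):
    text = message.lower()
    # Always honor an explicit request to opt out of automation.
    if "human" in text or "agent" in text:
        return ("human", "Connecting you to a human agent.")
    for topic, answer in CANNED_ANSWERS.items():
        if topic in text:
            return ("bot", answer)
    # Fallback: anything the bot cannot match goes to a person.
    return ("human", "This looks complex; routing you to a human agent.")

print(handle_message("How do I reset password?"))
print(handle_message("I was double-billed and need a human."))
```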