Against a backdrop of growing concern surrounding biased data and rights to privacy and informed consent, the White House has released the “Blueprint for an AI Bill of Rights,” which lays out five principles and associated practices to protect the American public against potential harm. “Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public,” reads the White House statement introducing the blueprint. The statement cites the use of algorithms in hiring and credit decisions that exacerbate unwanted inequities and discrimination, as well as unchecked social media data collection, as just some of the concerns facing the country, while also recognizing the significant benefits brought about through the use of automated systems in agriculture, emergency preparedness and health care.
“This important progress must not come at the price of civil rights or democratic values,” the statement reads. “The President has spoken forcefully about the urgent challenges posed to democracy today and has regularly called on people of conscience to act to preserve civil rights—including the right to privacy, which he has called ‘the basis for so many more rights that we have come to take for granted that are ingrained in the fabric of this country.’”
The blueprint includes five principles that lay out individuals’ rights and signal potential regulatory frameworks to guide the design, use and deployment of automated systems to protect those rights. They include, in part:
“Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards … Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.”
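The blueprint is technology-neutral and does not prescribe how ongoing monitoring should be implemented. As a purely illustrative sketch, a deployed system might track its error rate over a rolling window of recent decisions and flag operators when it drifts past an agreed threshold; the window size and threshold below are assumptions made for the example, not values drawn from the blueprint.

```python
from collections import deque

class ErrorRateMonitor:
    """Track outcomes over a rolling window and flag drift past a threshold.

    The window size and threshold are illustrative assumptions, not values
    taken from the blueprint.
    """
    def __init__(self, window: int = 500, threshold: float = 0.05):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def record(self, is_error: bool) -> None:
        self.window.append(1 if is_error else 0)

    def error_rate(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 0.0

    def needs_review(self) -> bool:
        # Only alert once enough observations have accumulated.
        return len(self.window) == self.window.maxlen and self.error_rate() > self.threshold

monitor = ErrorRateMonitor(window=100, threshold=0.05)
for outcome_is_error in [False] * 92 + [True] * 8:   # 8% errors in the last 100 decisions
    monitor.record(outcome_is_error)
if monitor.needs_review():
    print(f"Error rate {monitor.error_rate():.1%} exceeds threshold; escalate to operators.")
```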
“… Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight …”
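The blueprint leaves the mechanics of “disparity testing” to practitioners. One minimal sketch of a pre-deployment check is to compare selection rates across demographic groups and flag large gaps; the 0.8 cutoff below echoes the familiar four-fifths rule of thumb and is an assumption for the example, not a requirement of the blueprint.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Compute the fraction of positive outcomes per demographic group.

    decisions: list of 0/1 outcomes (e.g., 1 = hired or approved)
    groups: list of group labels aligned with decisions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest selection rate to the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

# Illustrative data and threshold; the blueprint does not mandate either.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
ratio = disparate_impact_ratio(rates)
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: selection rates differ substantially across groups.")
```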
“… Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed …”
“Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up-to-date and people impacted by the system should be notified of significant use case or key functionality changes … Reporting that includes summary information about these automated systems in plain language and assessments of the clarity and quality of the notice and explanations should be made public whenever possible.”
“You should be able to opt out from automated systems in favor of a human alternative, where appropriate … You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, produces an error, or you would like to appeal or contest its impacts on you. Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public. Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions.”
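Again, the blueprint does not specify an implementation; as one illustrative sketch, a human-fallback policy could be expressed as routing logic that sends a decision to human review when the person opts out, when the decision is adverse in a sensitive domain, or when the system’s own confidence is low. The confidence floor and routing rules below are assumptions made for the example.

```python
from dataclasses import dataclass

# Domains the blueprint calls out as sensitive; treated here as requiring
# human consideration for adverse or high-risk decisions.
SENSITIVE_DOMAINS = {"criminal_justice", "employment", "education", "health"}

@dataclass
class Decision:
    outcome: str            # e.g., "approve" / "deny"
    confidence: float       # model confidence in [0, 1] (illustrative)
    domain: str             # application domain
    user_opted_out: bool    # person chose a human alternative
    adverse: bool           # whether the outcome is adverse to the person

def route(decision: Decision, confidence_floor: float = 0.9) -> str:
    """Return 'human_review' or 'automated' for a given decision.

    The confidence_floor and routing rules are illustrative assumptions,
    not requirements drawn from the blueprint.
    """
    if decision.user_opted_out:
        return "human_review"    # opt out in favor of a human alternative
    if decision.domain in SENSITIVE_DOMAINS and decision.adverse:
        return "human_review"    # adverse decision in a sensitive domain
    if decision.confidence < confidence_floor:
        return "human_review"    # low-confidence fallback and escalation
    return "automated"

print(route(Decision("deny", 0.95, "employment", False, True)))   # human_review
print(route(Decision("approve", 0.97, "retail", False, False)))   # automated
```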
Read the Blueprint for an AI Bill of Rights