
True innovation means technology works for all of us, especially communities historically pushed to the margins. People of color, women, LGBTQ+ people, people with disabilities, people with limited English proficiency, older individuals, religious minorities, and immigrants all deserve technology that works for them. This Framework serves as a proactive vision for how emerging technology products, tools, and services, with a focus on AI, can be rights inclusive, safe, and equitable for all people. Rights-based design and thoughtful creation go hand-in-hand, building a future that’s better for all.

Companies must integrate civil rights and related principles around equity, fairness, and efficiency into core business practices. To advance that goal, we have created a Framework of Foundational Values for managing business decisions and specific Lifecycle Pillars aligned with the AI development pipeline to ensure these values are implemented in practice.

Who we are

The Center for Civil Rights and Technology is a joint project of The Leadership Conference on Civil and Human Rights and The Leadership Conference Education Fund. Launched in September 2023, the Center serves as a hub for advocacy, education, and research at the intersection of civil rights and technology policy. Our experts and partners dive into the most pressing policy issues in three key areas: AI and privacy, industry accountability, and broadband access. We are advocates and experts who understand that a fair and just society does not differentiate between technological innovation and civil rights.


Why this Framework and why now

Technology is shaping nearly every aspect of modern life, from work, to education, to health care and other essential services. In the past few years, many emerging technological products, services, and tools have become powered by AI. While this technological progress can benefit people, many AI tools also carry tremendous risks. AI-powered systems can work incorrectly due to technical errors, malicious or negligent design, or misuse. Problematic AI-powered systems or uses can result in individuals paying more for the products they buy, failing to be considered for a job, or unfairly paying more for health insurance. They can deny someone access to public benefits or even falsely accuse an individual of a crime. But these real-world harms are not inevitable.

It is the responsibility of companies and individuals investing in, creating, and using AI and emerging technologies to ensure that the systems they develop and deploy respect people’s civil rights. People need assurances that the technology used by companies that make decisions impacting them actually works and works fairly. Now is the time to move beyond principles and identify concrete measures to ensure that the technologies being used have appropriate guardrails and center people from the start.


How we did it

We sought a diverse range of input from across the AI ecosystem. We began this process in earnest through deep consultation and collaboration with stakeholders in the fall of 2024. In October 2024, we convened a small group of representatives from industry, including leading developers and deployers of AI, along with civil society, to discuss and gather feedback on our initial outline. We also held several small group and individual feedback sessions with members of the civil rights community, the Center’s Advisory Council, and individual companies, all of which we used to inform our work.

The genesis of this framework started more than a decade ago, with principles and proposed algorithmic and AI safeguards. In 2014, The Leadership Conference, together with leading civil rights and technology organizations, released “Civil Rights Principles for the Era of Big Data” and an updated version in 2020.


How to use it

This Innovation Framework turns established rights and principles, agreed upon by civil society and industry alike, into a guide for everyday practice for investors, developers, and deployers of AI. It can be used by C-suite leaders, product teams, and engineers to prioritize effectiveness, fairness, and safety throughout the lifecycle phases of an AI product. It also highlights issues that companies using AI systems should consider before acquiring or deploying that technology. By prioritizing fairness and safety, companies help themselves by building trust with consumers and developing quality products that outlive fleeting trends, leading to sustainable innovation.


"Rights-based design and thoughtful creation go hand-in-hand, building a future that’s better for all."

Foundational Values

With a focus on fairness and trustworthiness, these four Foundational Values can help guide corporate decision-makers in managing business decisions in the development and use of AI.

CIVIL AND HUMAN RIGHTS BY DESIGN

Every AI system and tool must respect and uphold core tenets of civil and human rights, both in how they are designed and how they are deployed – putting people first. Civil and Human Rights by Design means embedding these rights into every stage of the development process: ensuring that principles like nondiscrimination, privacy, fairness, and accessibility are not afterthoughts, but foundational design requirements embedded into systems.

HUMANS ARE INTEGRAL TO AI

AI development and deployment should consider the lived experiences of people and communities, including the impacts of AI on these people. AI should complement and uplift human work, not replace or undermine it. It is also vital to keep a “human in the loop” whenever AI is used in decision making processes. While AI may provide insights, a human should be responsible for making determinations that could impact individuals.
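
As a rough illustration of what keeping a human in the loop can look like in practice, the sketch below (in Python, with hypothetical names such as Recommendation and route_for_review) routes every AI recommendation that could affect an individual to a human reviewer, who remains responsible for the final determination. It is a minimal sketch under those assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI system's advisory output (illustrative fields)."""
    subject_id: str
    score: float        # model confidence, shown to the reviewer for context only
    rationale: str      # plain-language summary shown to the human reviewer

@dataclass
class FinalDetermination:
    """The decision of record, made and owned by a person."""
    subject_id: str
    decision: str
    decided_by: str     # a named human remains responsible for the outcome

def route_for_review(rec: Recommendation, review_queue: list[Recommendation]) -> str:
    # The AI may sort, summarize, or flag, but determinations that could impact
    # an individual are always queued for a person rather than auto-finalized.
    review_queue.append(rec)
    return "pending_human_review"

# Usage: the model provides insight; a human makes and owns the determination.
queue: list[Recommendation] = []
status = route_for_review(Recommendation("A-1042", 0.91, "Meets listed criteria"), queue)
final = FinalDetermination("A-1042", "approved", decided_by="benefits_officer_17")
print(status, final.decided_by)
```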

AI IS A TOOL, NOT THE SOLUTION

Societal issues are complex and deeply entangled with one another. AI alone cannot resolve these interlocking issues, but it can serve as one tool among many others to meet challenges and play a role in benefiting everyone. People who are building and deploying AI should recognize this complexity and ensure their AI tools do no harm while, when practicable, improving social, economic, and political equity. This includes working in tandem with broader justice efforts such as policy advocacy, economic community investment, and structural changes that address root causes.

INNOVATION MUST BE SUSTAINABLE

New, technology-driven solutions to social problems must be environmentally, socially, and economically sustainable to provide long-term benefits. It is imperative that the goal of building quickly to gain market share, competitive leverage, and customers does not come at the expense of our communities or of building efficient technologies. While AI holds the potential to transform industries and drive economic growth, it must be managed to ensure equitable benefits. For AI to be sustainable, its benefits must be accessible to all.

Lifecycle Pillars

Values alone are not enough if they are not put into practice. These Lifecycle Pillars are intended to ensure the Foundational Values are implemented in practice by C-suite leaders, product teams, and engineers at companies throughout the four lifecycle phases: 1) Envisioning, 2) Design, 3) Training and Development, and 4) Deployment and Production.

ENVISIONING

1. Identify Appropriate Use Cases

AI design should be intentional, built with purpose, and implemented with care, recognizing that not all problems can or should be solved with technology. AI systems should be fit for purpose. Where AI is not a good technical approach, or where the application of the technology violates civil rights or harms historically marginalized communities, the AI system should not be deployed.

Case Study: Using Inaccurate AI to Transcribe Medical Information
Medical AI startups have used generative AI transcription tools to transcribe millions of doctor-patient visits, spanning thousands of clinicians and health systems globally. However, given generative AI’s tendency to “hallucinate” and produce factually inaccurate information, this practice raises serious risks of miscommunication or misdiagnosis. OpenAI’s Whisper, for example, has produced errors in up to 80 percent of transcriptions, with nearly 40 percent of inaccuracies deemed harmful or concerning. While AI companies advise against the use of generative AI technologies in “high-risk domains,” they do not actively monitor how models are used and deployed, which makes it critical for developers to identify appropriate use cases in order to avoid harm. Some AI use cases are simply inappropriate for “high-risk domains” like health care or law enforcement, where an error can lead to physical harm or death.
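
Before integrating a transcription model into a high-stakes workflow, a developer can at minimum benchmark it against verified reference transcripts. The Python sketch below is a minimal, hypothetical example of such a check: the evaluation pairs and the 10 percent acceptance threshold are invented for illustration, and any real clinical threshold would need to be set with domain experts.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate via edit distance between reference and hypothesis tokens."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming table of edit distances (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical evaluation pairs: (clinician-verified reference, model transcript).
eval_pairs = [
    ("patient reports chest pain for two days", "patient reports chest pain for two days"),
    ("no known drug allergies", "known drug allergies"),  # a dropped negation is a harmful error
]
rates = [word_error_rate(ref, hyp) for ref, hyp in eval_pairs]
mean_wer = sum(rates) / len(rates)
print(f"mean WER: {mean_wer:.2%}")
if mean_wer > 0.10:  # illustrative acceptance threshold, set with clinical stakeholders
    print("Not fit for this use case without human verification of every transcript.")
```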

2. Center Historically Marginalized Users

The internal AI design processes should center historically marginalized users from the beginning of the envisioning process, breaking with the more common methodology of focusing on “ideal” or “normal” users (too often defined as people who share the experiences of those involved in the design process). Building a system that addresses the needs and considerations of these communities with respect to privacy, fairness, and accessibility will advance the creation of more efficient and innovative systems that serve all customers better, with higher accuracy and potentially lower costs.

Case Study: Māori Natural Language Processing (New Zealand)
Building Natural Language Processing (NLP) systems for non-English languages can be difficult because of the lack of a training corpus (i.e., the body of data used to train AI) with large volumes of language samples and vetted translations. The aim is most often to expand the number of languages offered by the NLP system, not necessarily to faithfully reflect the nuances of the language’s use among its speakers. Te Hiku Media, a nonprofit established and overseen by indigenous Māori community leaders, aims to preserve te reo, the Māori language, and actively undo violent policies of assimilation that led to the rapid decline in the number of te reo speakers. Te Hiku established new data access and use protocols grounded in indigenous data sovereignty that prioritize Māori values and principles on how the data are used by others. With these data, they build digital tools for language exposure and acquisition not only in close collaboration with living te reo speakers in their communities and with the oversight and consent of local tribes, but also in a way that does not endanger Māori sovereignty.

Resource: Design from the Margins methodology
Companies have used the De|Center’s Design from the Margins methodology (DFM) to design and implement technology and product changes that prioritize the most impacted and vulnerable users. The organization offers practical, step-by-step guidance that is applicable across the stack and to projects of all sizes, based on considerations of human rights, justice, and community-based research. Grounded in the knowledge that designing for those most marginalized benefits everyone, DFM allows technologists to identify and mitigate serious harms, designing interventions that benefit all users. Their research, work, and DFM-based changes to technologies have led to features and tools that now affect billions of people but are based on the needs of highly marginalized communities.

DESIGN

3. Co-Design with Communities

Companies designing AI systems should be proactive in seeking out impacted and historically marginalized communities to build meaningful products that both avoid harm and benefit these communities, whose experiences provide priceless input for improved design. It is important to ensure those who provide input and engage with the company can understand conceptually how the AI system works, what it’s intended to do, and how it affects them. Companies should also compensate those consulted for their time and effort. Finally, impacted and historically marginalized communities may include company workers themselves, and these populations should also be consulted.

Case Study: Examining LLM Data for Bias with the Disability Community
A 2023 Google Research study examining biases in Large Language Model (LLM) training data underscores the need to involve impacted and historically marginalized communities in identifying and addressing AI bias. Researchers worked with people with disabilities to analyze an LLM’s outputs, revealing that while responses were rarely overtly offensive, they often reinforced subtle yet harmful stereotypes. Google has used this (and other) research to support their Disability Innovation program, which informs the creation of Google technology that is accessible to people with disabilities. This study highlights the importance of community engagement in uncovering nuanced biases that standard detection methods or external reviewers might miss, creating a better and more trusted product.

Resource: Partnership on AI’s Guidance for Inclusive AI
PAI’s newly released Guidance for Inclusive AI provides commercial sector AI developers and deployers with a framework for ethically engaging with the public, particularly socially marginalized communities. Building on the expertise of civil society advocates, industry practitioners, and academic researchers, as well as on insights from different disciplines and domains, the guidance establishes a different standard for public engagement in the AI sector. It focuses on meaningful public engagement with an emphasis on broadening developers’ understanding of the historical and social contexts in which their technology will operate to ensure AI products are designed for the full spectrum of people who will interact with them in their daily lives.

TRAINING & DEVELOPMENT

4. Set Norms and Standards Around Build Processes

Shared development norms can reduce bias, provide transparency, and create a culture of accountability. Regularized best practices are emerging across the entire AI lifecycle, including hiring qualified and representative build teams, conducting common team trainings, running cyclical research and assessment phases, publishing development documentation and build processes, and providing access for third party audits and assessments. These practices provide scaffolding that increases transparency across the process, thus reducing individual biases and creating a path to improvement over time. They also provide demonstrable yardsticks by which deployers can assess one AI product versus another.

Case Study: Anthropic’s Constitutional AI Approach to Values-Aligned Generative AI
Constitutional AI (CAI) is an approach to developing generative AI systems that involves building certain behavioral constraints and values directly into the training process, rather than trying to add them after the fact. The “constitution” is meant to define how AI models should handle sensitive topics, respect user privacy, and avoid illegal activities. CAI pushes teams to be intentional and systematic about value alignment, rather than treating it as an optional add-on. This creates a framework for consistent decision making across the development process and ensures a safe and trusted product.

Resource: Norms and Standards for Build Processes
Establishing norms and standards around AI build processes helps prevent harm, provide needed safeguards, ensure compliance with existing civil rights laws, and build consumer trust. While the industry values individualism and diverse approaches, standards mitigate risks such as personal bias, inconsistent treatment, and compliance gaps. Customers and consumers will be more willing to try a new product if they know it meets an established standard for safety, efficacy, and fairness.

AI Standards:
NIST Risk Management Framework
Berkman Klein Center for Internet and Society at Harvard University’s Principled Artificial Intelligence
Advancing Accountability in AI (OECD)
ISO/IEC 42001:2023 – AI Management System Standard
UK Algorithmic Transparency Recording Standard

Documentation Standards:
Data Cards
Datasheets for Datasets
Data Statements
Nutrition Labels for Datasets
Model Cards
System Cards
AI Factsheets

Fairness Toolkits:
IBM AI Fairness 360
Google Responsible AI for Developers
ABOUT ML Documentation, Partnership on AI
Data Enrichment Sourcing Guidelines, Partnership on AI
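
As one concrete illustration of the documentation standards listed above (for example, Model Cards), a team can keep a structured record of a model’s intended use, training data, evaluation groups, and accountable contact alongside the model itself. The Python sketch below is a minimal, hypothetical schema, not an official Model Card format; every field and value is illustrative.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """A lightweight model card; the fields are illustrative, not an official schema."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data_summary: str
    evaluation_groups: list[str]      # groups the model was evaluated across
    known_limitations: list[str]
    responsible_contact: str          # the accountable point of contact
    fairness_metrics: dict = field(default_factory=dict)

card = ModelCard(
    model_name="resume-screen-assist",                       # hypothetical system
    version="0.3.1",
    intended_use="Rank resumes for recruiter review; not for automated rejection.",
    out_of_scope_uses=["final hiring decisions", "credit or housing decisions"],
    training_data_summary="Hypothetical set of 50k resumes collected with documented consent.",
    evaluation_groups=["gender", "race/ethnicity", "disability status", "age band"],
    known_limitations=["Lower accuracy on non-U.S. resume formats"],
    responsible_contact="ml-accountability@example.com",      # illustrative contact
    fairness_metrics={"selection_rate_ratio_min": 0.87},
)

# Published alongside the model so deployers can compare products on a common yardstick.
print(json.dumps(asdict(card), indent=2))
```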

5. Create and Use Representative Data

AI should be built on data that are representative and that mitigate foreseeable biases. If the underlying training datasets include social or historical biases, are under-representative of the communities protected by civil rights laws with whom they will be used, or contain factual inaccuracies, the AI itself is likely to perpetuate those problems. There is no such thing as a “perfect” dataset for all use cases, so the data used must be mapped and aligned to the expected use case.
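
As a minimal sketch of what mapping data to the expected use case can involve, the Python example below compares group proportions in a hypothetical training dataset against a reference population for the deployment context and flags under-represented groups. The group names, counts, reference shares, and 20 percent tolerance are all invented for illustration.

```python
# Hypothetical group counts in the training data vs. the population the
# system will actually serve (e.g., from census data or customer records).
training_counts = {"group_a": 72_000, "group_b": 18_000, "group_c": 10_000}
reference_share = {"group_a": 0.55, "group_b": 0.25, "group_c": 0.20}

total = sum(training_counts.values())
for group, count in training_counts.items():
    observed = count / total
    expected = reference_share[group]
    # Flag groups under-represented relative to the deployment population;
    # the 20 percent relative tolerance is an illustrative choice, not a standard.
    if observed < 0.8 * expected:
        print(f"{group}: {observed:.1%} of training data vs {expected:.1%} expected -> under-represented")
```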

Resource: Unrepresentative Data Causing Algorithmic Bias in Health Care
A health care algorithm used to manage care for 200 million Americans systematically underestimated Black patients’ medical needs because it used health care costs as a proxy for health status — overlooking systemic disparities in access to care. As a result, Black patients had to be sicker than white patients to receive the same referrals for critical care programs. Researchers found that adjusting the model to use more representative health indicators reduced bias by 84 percent, highlighting how flawed data assumptions can reinforce systemic inequities. This case underscores the need for AI systems to be built on representative data and rigorously tested to prevent amplifying existing biases.

6. Protect Sensitive Data

Companies should minimize data capture to what is reasonably necessary for a specific purpose, limit data sharing with third parties, and regularly delete stored data that is no longer needed. Data minimization does not mean a company has to minimize absolutely the amount of data it uses; rather, it means that data collection and use must be proportional and limited to the need. Companies should also only use private personal data to train AI systems if they have the opt-in consent of the individual and provide meaningful user controls. Such personalization and transparency offer dual benefits: enhancing user security and agency while fostering trust and cultivating sustained product user bases.
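
A minimal sketch of data minimization in practice, assuming a hypothetical record schema: only fields needed for the stated purpose are stored, only opted-in records within a retention window are eligible for training, and everything else is dropped. The field names and 180-day window below are illustrative choices, not recommendations.

```python
from datetime import datetime, timedelta, timezone

# Fields reasonably necessary for the stated purpose (illustrative allowlist).
ALLOWED_FIELDS = {"user_id", "interaction_text", "timestamp", "opt_in_training"}
RETENTION = timedelta(days=180)  # illustrative retention period

def minimize(record: dict) -> dict:
    """Drop everything outside the purpose-specific allowlist at collection time."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def eligible_for_training(record: dict, now: datetime) -> bool:
    """Only opted-in, in-retention records may be used to train models."""
    fresh = now - record["timestamp"] <= RETENTION
    return bool(record.get("opt_in_training")) and fresh

now = datetime.now(timezone.utc)
raw = {
    "user_id": "u-881",
    "interaction_text": "requested plan details",
    "timestamp": now - timedelta(days=10),
    "opt_in_training": False,
    "precise_location": (40.7, -74.0),   # not needed for the purpose: never stored
}
stored = minimize(raw)
print(eligible_for_training(stored, now))  # False: no opt-in consent
```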

Best Practice: Privacy by Design
Privacy by design is a proactive product development approach that embeds privacy protections into technology, business practices, and systems from the outset, rather than treating privacy as an afterthought. By integrating privacy safeguards throughout the data lifecycle, organizations can enhance user trust, comply with evolving regulations like General Data Protection Regulation (GDPR), and mitigate risks associated with data collection and sharing. Real-world implementations of privacy by design include end-to-end encrypted messaging apps like Signal, search engines that block trackers like DuckDuckGo, systems that provide robust user privacy controls and protective defaults like Apple’s iOS, and the Tor browser and network, which ensure browsing anonymity through relays and multi-layered encryption. These examples highlight how privacy by design not only safeguards personal data but also fosters transparency, accountability, and innovation, making it an essential best practice in responsible data management. They also demonstrate how privacy by design can provide a competitive advantage in the marketplace by appealing to consumers looking for more trustworthy platforms.

7. Assess for Bias and Discriminatory Impacts

AI should not automate discrimination or lead to unequal treatment. Before AI systems are deployed, and then as they continue to be operated, maintained, and updated, they must be tested, assessed, and adjusted for unjustified differential impacts, including but not limited to those identified during the co-design phase. If harm mitigation is not possible, organizations should move to decommission the system. Replication of historic inequities or automated segregation is not innovative.
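
As a minimal sketch of a pre-deployment bias check, the Python example below compares favorable-outcome rates across groups on a hypothetical test set and flags large gaps, using the four-fifths ratio as a common screening heuristic rather than a legal standard. The groups and outcomes are invented for illustration, and a real assessment should use metrics chosen with domain and legal experts.

```python
from collections import defaultdict

# Hypothetical test-set outcomes: (group label, model decision, where 1 = favorable).
outcomes = [("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
            ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0)]

totals: dict = defaultdict(int)
favorable: dict = defaultdict(int)
for group, decision in outcomes:
    totals[group] += 1
    favorable[group] += decision

rates = {group: favorable[group] / totals[group] for group in totals}
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    # The four-fifths ratio is a screening heuristic, not a legal determination;
    # flagged gaps call for investigation, mitigation, or decommissioning.
    status = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: favorable rate {rate:.0%}, ratio to highest group {ratio:.2f} [{status}]")
```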

Case Study: Airbnb’s Monitoring Tool “Project Lighthouse” Reduces Bias Over Time
Project Lighthouse, developed by Airbnb in 2020, is an assessment tool designed to monitor racial bias on the platform, specifically focusing on “booking success rates.” Initial racial disparities showed that Black guests booked at a 91.4 percent success rate compared to 94.1 percent for white guests. An updated report in 2023 shows that success rates for all groups rose above 94 percent, though disparities continue to persist. Airbnb continues to invest in monitoring and addressing these disparities through product policies and feature updates. Project Lighthouse demonstrates the importance of ongoing bias assessment to identify and address discrimination during the development and post-deployment stages of a product to ensure fairness for all consumers. Programs like this increase consumer trust in Airbnb’s products and services, benefiting the company’s bottom line.

DEPLOYMENT & PRODUCTION

8. Close the Feedback Loop

Including impacted and historically marginalized communities in the product design is necessary but not sufficient. Developers must also then return to those original designs to confirm they have built in alignment with the original intent, document their actions, and communicate with those communities to close the feedback loop on what was actually built. This should enable conversation about whether the product successfully addressed their concerns or needs to be further modified.

Best Practice: Community Consultation Practices at All Stages of Development
Technology companies should involve impacted and historically marginalized communities throughout the entire product lifecycle, from design to deployment, ensuring ongoing feedback. While there are consultation programs at some technology companies to engage with external stakeholders early in the design process, these efforts do not represent a complete implementation of this principle, because they very rarely, if ever, encompass closing the feedback loop through re-consultation during the development and post-development stages. This is particularly relevant for AI tools built for historically marginalized populations, such as educational tools for children or AI monitoring tools for older individuals, which may be deployed and run in production without any feedback from those communities on whether they address the actual need or have the positive impact initially intended. This gap could be addressed through closing the feedback loop beyond deployment by involving those communities in an assessment of actual value.

9. Integrate Clear Mechanisms for Accountability

For every AI system, there should be clear mechanisms for accountability. Both developers and deployers bear responsibilities as systems are designed, created, and used. Accountability starts with a known responsible and accountable entity (i.e., an organization or other point of contact) that can provide transparency into the inputs and outputs of the system and can update, pause, or decommission the AI. Further mechanisms can include infrastructure to allow advocates, regulators, or auditors to test AI systems and recourse for individuals who have been harmed by the systems. There must be internal accountability structures and processes, including incorporating civil rights by design into a company’s employee training and assessment or auditing processes. It is also vital to have clear lines of responsibility for ensuring those mechanisms are implemented.
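
One internal accountability mechanism a team might adopt is a decision record logged for every consequential output, naming the accountable owner and preserving what auditors or individuals seeking recourse would need. The Python sketch below is a hypothetical illustration; the fields, names, and contact address are invented.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class DecisionRecord:
    """One auditable record per consequential AI-assisted decision (illustrative fields)."""
    system_name: str
    model_version: str
    accountable_owner: str       # the named entity who can update, pause, or decommission
    input_fingerprint: str       # hash rather than raw input, to limit sensitive data retention
    output_summary: str
    human_reviewer: str | None   # who made the final determination, if any
    timestamp: str
    recourse_contact: str        # where an affected individual can contest the decision

def make_record(system: str, version: str, owner: str, raw_input: dict,
                output: str, reviewer: str | None) -> DecisionRecord:
    fingerprint = hashlib.sha256(json.dumps(raw_input, sort_keys=True).encode()).hexdigest()[:16]
    return DecisionRecord(system, version, owner, fingerprint, output, reviewer,
                          datetime.now(timezone.utc).isoformat(), "appeals@example.com")

record = make_record("benefits-triage", "1.4.0", "Director of Responsible AI",
                     {"application_id": "B-2290"}, "routed to caseworker", "caseworker-17")
print(record)
```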

Case Study: Responsible Practices for Synthetic Media
Clear AI accountability mechanisms — both community-driven and legally mandated — are still emerging. A strong example of a voluntary mechanism is the Partnership on AI’s Responsible Practices for Synthetic Media, a framework for the ethical development, creation, and sharing of AI-generated audiovisual content. This initiative brings together industry leaders, researchers, and civil society groups to establish standards that balance innovation with ethical responsibility, ensuring synthetic media benefits society rather than erodes trust. As AI accountability evolves, there is a significant opportunity for industry to lead the way in identifying, building, and communicating new mechanisms for safe and transparent AI.

10. Monitor and Improve Consistently

AI systems should be built to be reliable and stable over time. However, they must also be flexible and responsive to the inevitable shifts of a changing world (such as security issues, user expectations, system bugs, and changes in data input). Developers and deployers should rigorously monitor their AI systems for emerging issues or shifts and update them frequently and regularly to provide better service over time. It is critical that companies evaluate whether an AI system is working properly, without harm or bias. That means documenting and assessing outcomes as well as looking for harmful impacts and inaccurate outputs. Automated monitoring with pre-determined thresholds can also help to rapidly identify and respond to threats or harms, which can reduce the harm footprint to consumers as well as costs to the company.
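
As a minimal sketch of automated monitoring against a pre-determined threshold, the Python example below scans hypothetical weekly favorable-outcome rates by group from production logs and raises an alert when the gap between groups exceeds a set limit. The metric, weeks, rates, and threshold are invented for illustration.

```python
# Hypothetical weekly favorable-outcome rates by group from production logs.
weekly_rates = {
    "2025-W10": {"group_a": 0.94, "group_b": 0.93},
    "2025-W11": {"group_a": 0.94, "group_b": 0.88},   # an emerging gap
}

MAX_GAP = 0.03  # pre-determined alert threshold (illustrative)

def check_week(week: str, rates: dict) -> None:
    gap = max(rates.values()) - min(rates.values())
    if gap > MAX_GAP:
        # In production this would page the responsible team and open an incident,
        # triggering investigation, mitigation, or a pause of the affected feature.
        print(f"ALERT {week}: outcome-rate gap {gap:.3f} exceeds threshold {MAX_GAP}")
    else:
        print(f"{week}: within threshold (gap {gap:.3f})")

for week, rates in weekly_rates.items():
    check_week(week, rates)
```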

Case Study: IDs in Apple Wallet
When Apple launched its “IDs in Wallet” feature, which allows iPhone users to add a digital copy of their state-issued identification card to the Wallet app as an alternative way to verify identity and age, it built in mechanisms to monitor for post-deployment bias in the identity verification process. To do so, Apple introduced a voluntary, privacy-preserving data collection mechanism (differentially private federated statistics) that collects age, sex, race/ethnicity (if available), and apparent skin tone from the identification cards of users who opt in. These data are used to determine whether patterns of bias exist between users who successfully complete the verification process to activate their digital ID and those who do not. User data are anonymized and aggregated to minimize the risks of sensitive data collection to users.
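
The Apple case study relies on differentially private federated statistics. The sketch below is a heavily simplified illustration of the general idea, not Apple’s actual mechanism: it adds Laplace noise to aggregate counts so that no single opted-in individual materially changes the released statistic. The counts and privacy budget are hypothetical.

```python
import numpy as np

def private_count(true_count: int, epsilon: float, rng: np.random.Generator) -> float:
    """Release a count with Laplace noise calibrated to sensitivity 1 and budget epsilon."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(seed=7)
# Hypothetical aggregates from opted-in users: verification failures by group.
true_counts = {"group_a": 130, "group_b": 210}
epsilon = 1.0  # illustrative privacy budget; smaller epsilon means stronger privacy, noisier counts
released = {group: round(private_count(count, epsilon, rng), 1)
            for group, count in true_counts.items()}
print(released)  # noisy counts mask any single individual's contribution
```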


Conclusion

This Framework is the first step in establishing what companies investing in, envisioning, designing, developing, and using AI systems must consider to create inclusive technology. Creating assessment tools that incorporate the values and pillars discussed throughout this Framework is imperative to hold entities in the AI lifecycle accountable and create more explicit and transparent metrics. Moreover, there are opportunities to use this Framework as a foundation for assessing the inclusivity of AI technology stemming from specific sectors, including health care, banking, and housing. To ensure technology, including AI, truly benefits people and society, companies must be proactive and take action to ensure positive outcomes and a better future.

InnovationFramework.org is a website of The Center for Civil Rights and Technology, which is a joint project of The Leadership Conference on Civil and Human Rights and The Leadership Conference Education Fund.

©2025 The Leadership Conference on Civil and Human Rights/The Leadership Conference Education Fund. All rights reserved.