Algorithmic accountability is about holding tech companies responsible for the effects that the algorithms they create have on people and society. Drew Parker, an FHWA technical analyst with more than seven years of experience, discusses this important subject.
Parker holds a Bachelor’s in Computer Science from UC Berkeley and a Master’s in Information Technology from Georgia Tech. His adeptness at data analysis and system optimization comes from experience in both the finance and healthcare industries.
This diverse background provides him with a unique perspective on the implications of algorithmic bias and underscores the need for transparency. Through his blog, Parker shares insights on technical trends and analytics, guiding readers through the complexities of the tech landscape.
This post examines how we can inject fairness and ethical considerations into the design and deployment of algorithms.
What Is Algorithmic Accountability?
Algorithmic accountability is an important mechanism for ensuring responsibility in systems powered by algorithms. It focuses on the wider societal impact of these algorithms and encourages fairness and transparency in algorithmic decision-making. Algorithms are extending their reach and influence across all sectors.
As their impact has increased, so has the demand for accountability because of the harms they can cause. Algorithmic accountability asks who is obligated to justify the design, use, and outcome of machine learning systems, and who assumes responsibility for negative consequences.
Defining Responsibility in Code
At its core, algorithmic accountability places the onus on the entities developing algorithms and making decisions through them. This requires much more than vague promises to do better. It requires specific policies and procedures that govern the responsible use of algorithms.
This is where developers and organizations come in: they must build accountable algorithms that take ethical implications into account from the start. In healthcare, for example, algorithms suggest treatment plans, and those algorithms need to be developed responsibly to guard against biases that produce inequitable care.
Why It Matters in America Today
Algorithms are being used now more than ever in important decisions that impact people all over the US. In practice though, algorithmic bias can and does result in discrimination and inequitable outcomes. In fact, many algorithms, such as those used in loan applications, have been proven to discriminate against certain demographic groups.
These algorithms further exacerbate already ingrained societal inequalities, putting civil rights and justice at stake. In response, the Algorithmic Accountability Act of 2022 was introduced in Congress.
Beyond Buzzwords: Real-World Impact
Algorithmic accountability helps remedy real-world harms from bad algorithms, making finance, healthcare, and education fairer and more transparent. Within the criminal justice system, for example, accountable algorithms can promote fairer, less biased sentencing.
By increasing transparency and creating trust in these systems, algorithmic accountability paves the way for responsible AI adoption. Mechanisms such as interpretability, transparency, and auditability help make machine learning systems fairer, more reliable, and trustworthy.
The Rising Stakes of Algorithms
Algorithms are part of our daily lives more than ever before, affecting critical decisions in areas like health care, finance, criminal justice, and more. Drew Parker, an FHWA technical analyst based in Washington, D.C., is not blind to the risks they pose, such as bias and unfairness. The public’s call for oversight and accountability has skyrocketed.
From privacy to bias, folks everywhere want to make sure these systems are used responsibly and ethically.
Algorithms Shaping Daily US Life
Algorithms have become a major influence in almost every facet of life in the US today. In healthcare, for example, they help diagnose diseases and personalize treatment plans. In finance, they are used to set credit scores and approve loans.
In criminal justice, they’ve been used to predict the likelihood of recidivism. These algorithmic decisions shape people’s opportunities and outcomes, often reinforcing broader societal biases and discrimination. For example, a proprietary algorithm that US courts use to predict whether someone will reoffend was found to be biased against Black defendants.
Parker argues for more transparency and accountability in these powerful systems.
Potential Harms: Bias and Unfairness
Algorithmic bias happens when algorithms lead to unfairly discriminatory outcomes, usually because they’re trained on biased data or poorly designed. This can manifest in various contexts, such as facial recognition software that misidentifies people of color or loan applications that unfairly deny credit to certain demographic groups.
These harms show how such systems can deepen existing societal inequalities. Parker stresses that to achieve fairness and justice, we must address knotty issues of algorithmic bias. He argues that because commercial algorithms aren’t transparent, we can’t even identify these biases, let alone correct them.
The Call for Oversight Grows Louder
Public awareness of the potential harms of algorithms is growing. The public outcry has led to increased calls for limiting the use of these systems through government regulation. Advocacy groups and independent researchers are leading the charge for increased algorithmic accountability.
It will take a multi-stakeholder approach that includes policymakers, technologists, and the public to address these challenges. We’ve seen documented instances of individuals being wrongly denied parole, and of faulty bail decisions, as a result of algorithmic decision-making.
Parker is convinced that knowing why an algorithm makes a certain prediction is key to knowing how to act on that prediction.
Core Elements of True Accountability
Algorithmic accountability considers the systemic effects AI systems have on people and society. It creates an environment where these systems are transparent, predictable, and equitable.
Drew Parker, with his extensive background in data analysis and system optimization, emphasizes that true accountability needs a complete framework. This new framework should empower people with enforceable rights while creating enforceable responsibilities for those who develop or deploy AI technologies.
He argues that transparency, explainability, auditability, and redress are essential. These elements should be a priority for organizations as they begin to govern AI.
Transparency: Seeing Inside the Box
Transparency means knowing exactly how these algorithms operate and whom their decisions affect. Transparency not only builds public trust, it creates space for third-party oversight.
Being transparent is difficult when dealing with complex algorithms or proprietary formulas. Open-source code and explainable AI (XAI) techniques are just two methods to encourage transparency.
Some researchers caution that a narrow focus on fairness and accountability can discourage people from questioning whether a system is just at all, even when it works exactly as designed.
Explainability: Understanding the “Why”
Explainability is the idea that you should be able to understand the “why” behind an algorithm’s decision. Not all AI systems require the same level of explainability.
Techniques such as feature importance analysis, surrogate decision trees, and LIME methods can be used to help explain algorithms. Explainability is crucial for detecting and correcting bias.
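To make this concrete, here is a minimal sketch of two of the techniques named above, assuming a scikit-learn style classifier; the names `model`, `X_test`, `y_test`, and `feature_names` are placeholders rather than parts of any specific system.

```python
# Illustrative only: permutation feature importance plus a shallow surrogate tree.
# Assumes a trained scikit-learn classifier `model`, held-out data (X_test, y_test),
# and a list `feature_names`; all of these are hypothetical placeholders.
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

# 1. Feature importance: how much does shuffling each feature degrade performance?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")

# 2. Surrogate model: fit a small, readable tree to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_test, model.predict(X_test))
print(export_text(surrogate, feature_names=list(feature_names)))
```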
Auditability: Verifying Performance Claims
Auditability helps ensure that algorithms are actually doing what they’re intended to do. Computational and fairness audits are two common ways to examine algorithmic systems.
Auditing large-scale, complex systems is incredibly difficult, yet audits remain the only source of independent third-party evaluations of how algorithms are performing.
Governance and accountability audits should require extensive records on the design, development, and management of these systems.
Redress: Fixing Algorithmic Wrongs
Redress mechanisms address algorithmic harms, including reparations and appeal processes. This could be in the form of compensation, correction of errors, or changes to policy.
Redressing algorithmic harms is challenging, but necessary. Without meaningful and understandable redress available to everyone, accountability is impossible.
Regulatory agencies should regularly monitor and audit AI systems, with fines for non-compliance. To achieve true accountability, we need a multidisciplinary approach that considers social and economic impacts.
Algorithmic Impact Assessments Explained
Drew Parker understands that algorithmic impact assessment is essential for ensuring responsible AI development and deployment. These assessments help identify and mitigate potential biases and harms in algorithms, particularly those used in critical decision-making systems. Inspired by environmental impact assessments, they evaluate an algorithm’s potential effects on individuals, communities, and society, supporting algorithmic fairness.
Algorithmic impact assessments involve several key steps, including identifying stakeholders, assessing risks, gathering input data, planning harm reduction, documenting findings, and setting up continuous checks. Some argue that these assessments should be mandatory, especially for high-stakes decision-making systems. They can help pinpoint issues related to fairness, transparency, and AI accountability.
For example, the Government of Canada has developed an Algorithmic Impact Assessment tool to evaluate the potential impacts of algorithms on individuals and society. These assessments can evaluate the potential impacts of algorithms on human rights, including non-discrimination and privacy. Transparency and participation are crucial, involving multiple stakeholders and making results publicly available.
1. What Are AI Impact Evaluations?
AI impact evaluations are systematic, standardized processes designed to measure and address the impact of AI systems before they are deployed. In the realm of algorithmic accountability, these evaluations help to anticipate the real-world impacts an AI system will have. Their aim is to predict the harms and benefits that could result from deploying AI, ensuring ethical AI practices are followed.
A key component of an AI impact evaluation process involves defining the scope of the evaluation, determining the appropriate methodology, gathering relevant data, analyzing the data to identify potential impacts, and documenting the findings. For instance, an AI system used in loan applications might be evaluated to determine if it leads to algorithmic harm by unfairly discriminating against certain demographic groups.
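As a rough illustration of the loan-application example, the sketch below compares approval rates across two hypothetical demographic groups and applies the common “four-fifths” screening heuristic; the data and threshold are illustrative, not a prescribed methodology.

```python
# Illustrative check an AI impact evaluation might run on a loan-approval model:
# compare approval rates across groups and flag a large gap. Toy data only.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 1])            # 1 = approved
groups    = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
ratio = min(rates.values()) / max(rates.values())

print("Approval rates by group:", rates)
if ratio < 0.8:  # the "four-fifths rule", a common screening heuristic, not a legal test
    print(f"Potential disparate impact: selection ratio = {ratio:.2f}")
```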
AI impact evaluations are crucial for promoting responsible AI development and deployment, focusing on fairness, transparency, and accountability.
2. Identifying Who and What Is Affected
Identifying who and what will be affected by the algorithmic system is essential. Stakeholders can be specific individuals, interest groups, or entire communities. The impacts to consider go beyond the environmental to include economic, social, psychological, and physical effects.
For instance, an AI-based hiring tool might impact job applicants, existing staff, and the applicant pool diversity of the business as a whole. These methods for identifying potential impacts might involve stakeholder consultations, data analysis, and expert reviews.
It’s essential to look at positive impacts as well as negative ones. An AI system may increase efficiency, but the assessment still needs to acknowledge any job displacement it causes. By identifying who and what is affected, organizations can better understand the potential consequences of their AI systems. Equipped with this understanding, they can act preemptively to avoid harmful effects.
3. Assessing Risks: Bias, Fairness, Safety
Assessing risks related to bias, fairness, and safety is a core part of algorithmic impact assessments. Bias can creep into algorithmic systems through biased data, flawed algorithms, or biased human input. These biases can lead to unfair or discriminatory outcomes.
For example, facial recognition systems have been shown to be less accurate for people of color, leading to potential misidentification and unfair treatment. Methods for assessing these risks include bias audits, fairness metrics, and adversarial testing.
Bias audits involve analyzing the data and algorithms to identify potential sources of bias. Fairness metrics quantify the fairness of the system’s outcomes. Adversarial testing involves trying to trick the system into making mistakes. Mitigating these risks is essential for ensuring responsible AI deployment and promoting fairness and equity.
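A bias audit of the kind described above often boils down to comparing error rates across groups. The sketch below contrasts false positive rates, one ingredient of an “equalized odds” check; all arrays are toy data.

```python
# Illustrative fairness metric: false positive rate by group (toy data only).
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def false_positive_rate(truth, pred):
    negatives = truth == 0
    return (pred[negatives] == 1).mean() if negatives.any() else float("nan")

for g in np.unique(group):
    mask = group == g
    print(f"Group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.2f}")
# A large gap between groups is a signal to investigate, not proof of discrimination.
```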
4. Gathering Diverse Stakeholder Input
Gathering input from diverse stakeholders is vital for a comprehensive AI impact assessment. This input can provide valuable insights into the potential impacts of an algorithmic system that might otherwise be missed.
Methods for collecting diverse stakeholder input can range from surveys to interviews to focus groups to public forums. For example, if you’re building an AI-powered healthcare tool, get input from patients, doctors, and nurses. Their insights will be invaluable in helping craft an effective solution.
This feedback may be used to surface possible usability issues, ethical concerns, and even unintended consequences. This step does multiple things. It aids in surfacing any potential blind spots, making sure that all points of view are represented, and establishing trust and acceptance of the AI system.
Gathering input from a diverse range of stakeholders is crucial to ensuring that the system in development meets their needs and values.
5. Planning How to Reduce Harm
Developing mitigation strategies to reduce potential harms identified in the AI impact assessment is crucial. These strategies can involve a range of interventions, from data preprocessing to algorithm modification to human oversight.
If researchers find that an AI system has a discriminatory impact on a protected class, they are empowered to respond. They can use data preprocessing techniques to try to rebalance the data and remove the bias. Algorithm remediation involves changing the algorithm to make it more equitable.
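One concrete form of that preprocessing is reweighing: giving each (group, label) combination a sample weight so that group membership and outcome look statistically independent during training. The sketch below is a simplified, illustrative version with made-up data.

```python
# Simplified "reweighing" preprocessing sketch (illustrative data only).
import numpy as np

group = np.array(["A", "A", "A", "B", "B", "A", "B", "A"])
label = np.array([ 1,   0,   1,   0,   0,   1,   1,   0 ])

weights = np.ones(len(label), dtype=float)
for g in np.unique(group):
    for y in np.unique(label):
        mask = (group == g) & (label == y)
        if mask.any():
            # Weight = expected frequency under independence / observed frequency.
            expected = (group == g).mean() * (label == y).mean()
            weights[mask] = expected / mask.mean()

print(weights.round(2))
# Most scikit-learn estimators accept these via model.fit(X, label, sample_weight=weights).
```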
Ensuring human accountability means keeping a human in the loop to determine whether the algorithm’s decision is not only fair but appropriate. It’s important to prioritize the most serious harms and mitigate them first.
Evaluating the potential impacts of various alternatives to choose the best path forward is essential. Continuing to monitor the effectiveness of mitigation strategies and making adjustments as needed should be part of the process.
6. Documenting Findings Clearly
Documenting the findings of the AI impact assessment in a clear and concise manner is essential for accountability and transparency. The impact assessment report should include a description of the algorithmic system, the assessment methodology, the identified impacts, the mitigation strategies, and the monitoring and evaluation plan.
The report should be written in plain language and should be accessible to a wide audience. The benefits of documenting findings clearly include facilitating communication, enabling accountability, and promoting trust.
Making the impact assessment report publicly available is important for ensuring transparency and building public confidence in AI systems.
7. Setting Up Continuous Checks
Ongoing review and testing of algorithmic systems should be the norm. This helps ensure they function as designed and do not create unintended consequences. Methods for monitoring and evaluating algorithmic systems include performance metrics, bias audits, and user feedback.
For example, performance metrics would allow agencies to monitor the accuracy and efficiency of the system. Predictive bias audits and other tools could be employed to catch any new biases that arise. User feedback from these communities can shed light on how to improve the system’s usability and effectiveness.
Continuous checks pay off over time. They prevent dangerous biases and other issues from becoming widespread, maintain the system’s fairness and accuracy, and create room for adjustment to new needs and realities. Adjusting the monitoring and evaluation process in tandem with the algorithmic system’s development and deployment is critical as well.
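As a sketch of what such continuous checks might look like in code, the function below recomputes an accuracy figure and a group approval-rate gap for each new batch of decisions and raises alerts when either drifts past a threshold; the thresholds and data are placeholders, not recommended values.

```python
# Illustrative continuous check: flag metric drift on each new batch of decisions.
import numpy as np

def monitor_batch(y_true, y_pred, group, acc_floor=0.85, gap_ceiling=0.10):
    alerts = []
    accuracy = (y_true == y_pred).mean()
    if accuracy < acc_floor:
        alerts.append(f"accuracy dropped to {accuracy:.2f}")
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    gap = max(rates) - min(rates)
    if gap > gap_ceiling:
        alerts.append(f"approval-rate gap widened to {gap:.2f}")
    return alerts

print(monitor_batch(np.array([1, 0, 1, 1, 0, 0]),
                    np.array([1, 0, 0, 1, 1, 0]),
                    np.array(["A", "A", "A", "B", "B", "B"])))
```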
How to Conduct an Algorithmic Impact Assessment
An algorithmic impact assessment is key to ensuring algorithmic accountability and identifying potential harms of biased algorithms. It also guides the development of mitigation strategies for responsible artificial intelligence.
Assemble Your Assessment Team
Assembling the right team is essential: it should be diverse, interdisciplinary, and well informed. Include a range of roles, such as data scientists, ethicists, and legal experts.
Additionally, include people who represent impacted communities. Aim for 8-12 members with a variety of backgrounds. The panel can’t be statistically representative of the public, but it should be demographically diverse and represent a variety of perspectives.
The independence and objectivity of the team are vital. The algorithm’s developers should be required to formally present their system to the team in plain English.
Select an Appropriate Framework (e.g., NIST)
Choose a framework like the NIST AI Risk Management Framework to ensure algorithmic auditing is comprehensive. Consider the assessment’s scope and the algorithm’s type for responsible artificial intelligence.
Execute Key Assessment Activities
Collect data, identify risks, and consult stakeholders. A 3-hour workshop can be an effective format. Identify potential impacts in one group or two breakout groups.
Use appropriate tools and document all results. Be thorough and systematic.
Analyze Results and Plan Actions
Analyze the assessment results and plan actions. Weigh the potential harms and benefits of the system and create mitigation strategies.
Communicate Findings Transparently
Communicate findings in a transparent and accessible way. Use reports, presentations, and websites. Tailor the communication to your audience.
A notetaker should write up the team’s answers, which typically takes 1–2 hours. The reflexive exercise, an initial identification of impacts, should be submitted ahead of time. The lead should then organize a 2–3 hour meeting with the relevant team members.
Navigating Accountability Challenges
Accomplishing algorithmic accountability in practice faces daunting challenges, among them bad actors who should know the rules but don’t, because they have never faced consequences for their actions.
In practice, public accountability requires difficult trade-offs between competing goals, claims, and outside expectations. A sound and substantive process goes beyond information gathering to justification and the consideration of repercussions.
Partnerships and integrative thinking will be crucial for tackling these challenges, as will continued R&D.
Dealing with Complex “Black Boxes”
Understanding and auditing complex algorithmic systems, known as “black boxes,” is a major challenge. Algorithmic biases and malfunctions are more difficult to diagnose given their complex “black box” operation.
Tackling opacity in these algorithms will take both technical and non-technical solutions. For instance, methods such as LIME (Local Interpretable Model-agnostic Explanations) can explain each individual prediction a model makes.
Continued research in explainable AI (XAI) will be needed. Without transparency, independent auditors are reduced to guesswork and cannot verify how models actually behave.
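For readers who want to see what a LIME explanation looks like, here is a minimal sketch assuming the open-source `lime` package and a classifier exposing `predict_proba`; `X_train`, `X_test`, `feature_names`, and `model` are placeholders.

```python
# Illustrative LIME explanation of a single prediction (placeholder names throughout).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    training_data=np.asarray(X_train),
    feature_names=feature_names,
    class_names=["denied", "approved"],
    mode="classification",
)

# Which features pushed this one decision toward approval or denial?
explanation = explainer.explain_instance(
    np.asarray(X_test)[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```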
Balancing Transparency and Trade Secrets
There is an inherent tension between transparency and the protection of trade secrets. Strategies for balancing these interests include differential privacy, adding noise to data to protect individual privacy, and federated learning, training models across decentralized devices.
Creating uniform policies and procedures for disclosure is key to striking the balance between safeguarding trade secrets and fostering transparency. Ongoing discussion between all affected parties will be required to meet this challenge.
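The “adding noise” idea behind differential privacy can be shown in a few lines: release an aggregate statistic with Laplace noise scaled to the query’s sensitivity and a privacy budget. The epsilon value below is purely illustrative, not a recommended setting.

```python
# Illustrative Laplace mechanism for a differentially private count (toy example).
import numpy as np

def dp_count(values, epsilon=1.0, sensitivity=1.0):
    """Return a noisy count of truthy values; smaller epsilon means more privacy."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return int(np.sum(values)) + noise

approved = [1, 0, 1, 1, 0, 1, 1, 0]        # hypothetical decisions
print("noisy approved count:", round(dp_count(approved), 2))
```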
Getting Organizational Support
Establishing organizational support for initiatives on algorithmic accountability is crucial. Gaining organizational support from senior management, employees, and other stakeholders starts with communicating the value of these initiatives.
The upside to having that organizational support is more resources and better implementation. For instance, showing how accountability measures result in less risk of costly litigation can help win support.
Keeping Up with US Regulations
The landscape of US regulations related to algorithmic accountability is rapidly changing. Organizations need to pay attention to regulations in the pipeline, including the Algorithmic Accountability Act.
With so much change, staying abreast of new regulations and guidance is critical. Compliance programs will be needed to verify whether algorithmic systems meet regulatory standards.
For example, a company might reinvest in updating its compliance program to reflect the latest NIST AI Risk Management Framework recommendations.
Transparency and Audits: Building Trust
Transparency and audits are key to rebuilding trust in algorithmic systems. These are important transparency measures that can go a long way toward ensuring that algorithms are fair, accurate, and accountable. Greater transparency helps users of these systems know how and why decisions are being made, and independent audits help confirm the integrity of these systems.
While the overall concepts are simple, putting them into practice can be difficult, and a multifaceted approach may be needed to tackle these nuances.
Making Algorithmic Decisions Clearer
Users need to be able to figure out why an algorithm made the decision it did. Algorithmic explainability is greatly enhanced when a system can explain each individual decision it makes.
For instance, if someone is denied a loan, the notice could tell them exactly which factors drove the algorithm’s judgment. Providing clear, transparent information about these processes allows users to better understand how algorithms impact their daily lives; a minimal sketch of such a notice follows the list below.
- Providing clear explanations for automated decisions.
- Using simple language to describe complex algorithmic processes.
- Providing new, easy-to-use tools that allow users to input different scenarios and see how different inputs change outcomes.
- Publishing easy-to-understand summaries of algorithmic logic.
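As a minimal sketch of the kind of notice described above, the snippet below turns hypothetical feature contributions into a plain-language list of reasons for a denial; the feature names, values, and reason texts are all invented for illustration.

```python
# Illustrative plain-language notice built from hypothetical feature contributions.
contributions = {                      # signed effect of each feature on this decision
    "debt_to_income_ratio": -0.42,
    "months_since_delinquency": -0.31,
    "income": 0.12,
    "credit_history_length": -0.05,
}
reason_text = {
    "debt_to_income_ratio": "Debt is high relative to income",
    "months_since_delinquency": "A recent delinquency appears on the credit file",
    "credit_history_length": "Credit history is relatively short",
}

# Report the two factors that pushed the decision furthest toward denial.
top_reasons = sorted((k for k, v in contributions.items() if v < 0),
                     key=lambda k: contributions[k])[:2]
print("Your application was declined. Principal reasons:")
for key in top_reasons:
    print(" -", reason_text.get(key, key))
```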
Why Independent Audits Are Crucial
Independent audits are essential to ensuring algorithmic systems are fair and accurate. Having them performed externally increases their credibility and reduces bias.
Choosing the right auditor, with the right expertise and resources, is mission-critical for performing in-depth evaluations. Second-party audits, in which contractors perform the audit, are increasingly the norm, especially contractors with expertise in particular types of audits or industries.
Proving Accountability Through Openness
Through openness and transparency, we can prove accountability in algorithmic systems. Strategies for building trust through transparency involve making code and data public.
It is important to balance transparency with privacy and security issues. Regulations like the EU AI Act require transparency in AI systems, such as disclosing key characteristics of models, including training data and performance metrics.
- Openly sharing model architecture and training data details.
- Publishing performance metrics and validation results (a minimal model-card sketch follows this list).
- Allowing external reviews of code and algorithms.
- Ensuring access to datasets used for training and testing (with privacy protections).
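One lightweight way to publish the details listed above is a model-card style summary. The sketch below expresses such a card as plain data so it can be rendered or posted alongside the model; every field value is illustrative.

```python
# Illustrative model-card style transparency summary (all values are made up).
import json

model_card = {
    "model": "loan-approval-classifier",
    "version": "2024-06",
    "intended_use": "Screen consumer loan applications for manual review",
    "training_data": "Anonymized applications, 2019-2023 (details summarized publicly)",
    "performance": {"accuracy": 0.91, "auc": 0.88},
    "fairness_checks": {"approval_rate_gap": 0.04, "four_fifths_ratio": 0.93},
    "limitations": ["Not validated for small-business loans"],
    "contact": "responsible-ai@example.org",
}

print(json.dumps(model_card, indent=2))
```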
The Evolving Accountability Landscape
The evolving accountability landscape goes beyond logistical concerns; it calls for critical analysis grounded in theory. This forward-looking approach is gaining importance within the academic community.
The growing use of algorithms makes the need for accountability even more pressing. Accountability includes transparency, responsibility, fairness, and ethics. It’s also crucial to recognize that what algorithmic accountability means and looks like varies greatly across sectors.
The need for accountability in government use of algorithms is growing fast. This is especially vital for complex challenges such as managing energy grids and natural assets more sustainably.
Accountability for Self-Learning Systems
Accountability for self-learning systems is inherently challenging. How do we hold accountable and rein in the actions of self-learning systems? What methods are available to us?
It is crucial, then, to develop strong safeguards that would minimize the risk of unintended consequences. Continued research and development will be key to solving these challenges.
Consider the case of an AI that continues to learn while making recommendations on loan applications. A system that once worked toward equity can become one that discriminates against certain demographics because of the patterns it learns from the data.
Emerging US Regulatory Frameworks
Emerging regulatory frameworks in the US aim to promote algorithmic accountability. Several legislative proposals endorse elements of a structured approach to regulating the tech industry, including the Algorithmic Accountability Act.
These frameworks may include requirements for impact assessments and audits. A recent report from a consortium of UK-based regulators identified “governance audits” as a tool that requires companies to provide detailed documentation on operational structures for design, development, management, and internal mechanisms oversight for algorithmic systems.
These provisions could significantly impact organizations using algorithmic systems.
Global Standards and Collaboration
Global standards and collaboration are critical in advancing algorithmic accountability. Several important international initiatives are currently in progress to promote this collaborative spirit.
Global standards have the added advantage of improving interoperability and lessening the regulatory burden. If the US and EU go their separate ways on standards for algorithmic accountability in healthcare, for example, cross-border cooperation will become harder.
This mismatch may set back advancement in health research. Continued dialogue and collaboration among all stakeholders will be essential to create meaningful global standards.
Conclusion
Drew Parker’s experience illustrates the need for algorithmic accountability in our technology-reliant society. He has witnessed the harm of bad algorithms in industries from finance to healthcare.
Real accountability goes beyond announcements: it means establishing firm guidelines, conducting ongoing audits, and ensuring informed public dialogue. Algorithmic impact assessments support organizations in proactively identifying risks and ensuring that systems are equitable. Addressing issues such as bias and lack of transparency is essential to establishing public trust.
Just as technology evolves, our approaches to algorithmic accountability need to evolve as well. Drew Parker’s blog reflects his willingness to go above and beyond to teach others about these complicated topics.
Interested in algorithmic accountability? Learn more with resources from the AI Now Institute and the Partnership on AI. Join the push for fairness and transparency in the tech you interact with every day.